Comment by simonw 19 hours ago

Clarification added later: One of my key interests at the moment is finding ways to run untrusted code from users (or generated by LLMs) in a robust sandbox from a Python application. MicroQuickJS looked like a very strong contender on that front, so I fired up Claude Code to try that out and build some prototypes.

I had Claude Code for web figure out how to run this in a bunch of different ways this morning - I have working prototypes of calling it as a Python FFI library (via ctypes), as a Python compiled module and compiled to WebAssembly and called from Deno and Node.js and Pyodide and Wasmtime https://github.com/simonw/research/blob/main/mquickjs-sandbo...
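
To give a feel for the ctypes route, it boils down to something like this sketch - the library filename and the mjs_eval_to_string symbol are placeholders rather than the real MicroQuickJS C API; the working prototypes in the repo linked above use the actual headers:

    import ctypes

    # Placeholder library name and symbol - the real MicroQuickJS C API differs;
    # see the linked repo for the working ctypes prototype.
    lib = ctypes.CDLL("./libmquickjs.so")
    lib.mjs_eval_to_string.argtypes = [ctypes.c_char_p]
    lib.mjs_eval_to_string.restype = ctypes.c_char_p

    print(lib.mjs_eval_to_string(b"[1, 2, 3].map(n => n * n).join()").decode())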

PR and prompt I used here: https://github.com/simonw/research/pull/50 - using this pattern: https://simonwillison.net/2025/Nov/6/async-code-research/

simonw 18 hours ago

Down to -4. Is this generic LLM-dislike, or a reaction to perceived over-self-promotion, or something else?

No matter how much you hate LLM stuff I think it's useful to know that there's a working proof of concept of this library compiled to WASM and working as a Python library.

I didn't plan to share this on HN but then MicroQuickJS showed up on the homepage so I figured people might find it useful.

(If I hadn't disclosed I'd used Claude for this I imagine I wouldn't have had any down-votes here.)

  • claar 18 hours ago

    I think many subscribe to this philosophy: https://distantprovince.by/posts/its-rude-to-show-ai-output-...

    Your github research/ links are an interesting case of this. On one hand, late AI adopters may appreciate your example prompts and outputs. But it feels like trivially reproducible noise to expert LLM users, especially if they are unaware of your reputation for substantive work.

    The HN AI pushback then drowns out your true message in favor of squashing perceived AI fluff.

    • simonw 17 hours ago

      Yeah, I agree that it's rude to show AI output to people... in most cases (and 100% if you don't disclose it.)

      My simonw/research GitHub repo is deliberately separate from everything else I do because it's entirely AI-generated. I wrote about that here: https://simonwillison.net/2025/Nov/6/async-code-research/#th...

      This particular case is a very solid use-case for that approach though. There are a ton of important questions to answer: can it run in WebAssembly? How does it differ from regular JavaScript? Is it safe to use as a sandbox against attacks like the regex thing?

      Those questions can be answered by having Claude Code crunch along, produce and execute a couple of dozen files of code and report back on the results.

      I think the knee-jerk reaction pushing back against this is understandable. I'd encourage people not to miss out on the substance.

      • rpdillon 16 hours ago

        Counterpoint to the sibling comment: posting your own site is fine. Your contributions are substantial, and your site is a well-organized repository of your work. Not everything fits (or belongs) in a comment.

        I'd chalk up the -4 to generic LLM hate, but I find examples of where LLMs do well to be useful, so I appreciated your post. It displays curiosity, and is especially defensible given that your site has no ads, loads blazingly fast, is filled with HN-relevant content, and doesn't even attempt to sell anything.

      • lossolo 17 hours ago

        And again you're linking to your site. Maybe try pasting the few relevant sentences instead of constantly pushing your content in almost every comment. That's what people find annoying. Maybe link to other people's stuff more, or just write what you think here on HN.

        If someone wants to read your blog, they will; they know it exists, and some people even submit your new articles here. There's no need to do what you're doing. Every day you're irritating more people with this behavior, and eventually the substance won't matter to them anymore, so you're acting against your own interests.

        Unless you want people to develop the same kind of ad blindness mechanism, where they automatically skip anything that looks like self promotion. Some people will just see a comment by simonw and do the same.

        A lot of people have told you this in many threads, but it seems you still don’t get it.

      • gaigalas 16 hours ago

        > can it run in WebAssembly?

        You can safely assume so. Bellard is the creator of jslinux. The news here would be if it _didn't_.

        > What's the difference to regular JavaScript?

        It's in the project's README!

        > Is it safe to use as a sandbox against attacks like the regex thing?

        This is not a sandbox design. It's a resource-constrained design like cesanta/mjs.

        ---

        If you vibe coded a microcontroller emulation demo, perhaps there would be less pushback.

  • garganzol 16 hours ago

    Thank you for sharing.

    A lot of HN people got cut by AI in one way or another, so they seem to have personal beefs with AI. I am talking about not only job shortages but also general humbling of the bloated egos.

    • foobarchu 15 hours ago

      > I am talking about not only job shortages but also general humbling of the bloated egos.

      I'm gonna give you the benefit of the doubt here. Most of us do not dislike genAI because we were fired or "humbled". Most of us dislike it because of a) the terrible environmental impacts, b) the terrible economic impacts, and c) the general non-production-readiness of results once you get past common, well-solved problems.

      Your stated understanding comes off a little bit like "they just don't like it because they're jealous".

    • wartywhoa23 4 hours ago

      I'm constantly encountering this "bloated ego" argument every time the narrative is being steered to prevent monetary losses for AI companies.

      Especially so when it concerns AI theft of human music and visual art.

      "Those pompous artists, who do they think they are? We'll rob them of their egos".

      The problem is that these ego-accusations don't quite come from egoless entities.

  • colesantiago 18 hours ago

    It is because you keep over-promoting AI almost every day of the week in the HN comments.

    In this particular case AI has nothing to do with Fabrice Bellard.

    We can have something different on HN like what Fabrice Bellard is up to.

    You can continue AI posting as normal in the coming days.

    • simonw 18 hours ago

      Forget about the AI bit. Do you think it's interesting that MicroQuickJS can be used from Python via FFI or as a compiled module, and can also be compiled to WebAssembly and called from Node.js and Deno and from Pyodide running in a browser?

      ... and that it provides a useful sandbox in that you can robustly limit both the memory and time allowed, including limiting expensive regular expression evaluation?
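
      As a rough illustration of the host-side half of that (independent of MicroQuickJS itself - the mquickjs binary name and its take-a-file interface here are assumptions, not the real CLI), you can cap memory and wall-clock time around any interpreter subprocess:

          import resource
          import subprocess

          def run_sandboxed_js(path, mem_bytes=64 * 1024 * 1024, timeout_s=2):
              # Cap the child's address space before it execs; the wall-clock
              # timeout also bounds pathological regexes. POSIX only - the
              # interpreter's own heap/time limits are a second, finer layer.
              def limit():
                  resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))
              return subprocess.run(
                  ["mquickjs", path],  # assumed binary name and interface
                  preexec_fn=limit, capture_output=True,
                  timeout=timeout_s, text=True,
              )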

      I included the AI bit because it would have been dishonest not to disclose how I used AI to figure this all out.

      • alabhyajindal 18 hours ago

        It's interesting, but I don't think it belongs as a comment under this post. I can use LLMs to create something tangential for each project posted on HN, and so can everyone else. If we all started doing this, the comment sections would quickly become useless and off-topic.

      • eichin 18 hours ago

        Usually I watch your stuff very closely (and positively) because you're pushing the edges of how LLMs can be useful for code (and are a lot more honest/forthright than most enthusiasts about it Going Horribly Wrong and how much work you need to do to keep on top of it). This one... looks like a crossbar of random things that don't seem like things anyone would actually want to do? Mentioning the sandboxing bit in the first post would have helped a lot, or anything that said why those particular modes are interesting.

      • Imustaskforhelp 17 hours ago

        Simon, although I find it interesting, and I respect you in this field, I still feel the reason people call out AI usage or downvote in this case is that, in my honest opinion, it would also be more interesting to see people actually write the code, and more so maintain it, and build a whole community/github project around microquickjs wasm itself.

        I read this post of yours https://simonwillison.net/2025/Dec/18/code-proven-to-work/ and, although there is a point to be made that what you are doing isn't a job, and I myself create prototypes of code using AI, long term (in my opinion) what really matters is the maintenance and the claim (like your article says in a way: that I can pinpoint a person who's responsible for the code working).

        If I found a bug right now, I wouldn't blame it on you but on the AI, and I have varying amounts of trust in it.

        My opinion on the matter is that AI is a good fit for prototyping, but long term it definitely isn't, and I am sure you share a similar viewpoint.

        I think AI is so polarizing that any nuance stops existing. Read my recent comment (warning: it's long) (https://news.ycombinator.com/item?id=46359684)

        Perhaps you could write a blog post about the nuance of AI? I imagine a lot of people share a similar AI policy, where it's okay to tinker with it. I am part of the new generation and, truth be told, I don't think there is much long-term incentive to avoid AI unless someone realizes why not to use it, because using AI just feels so lucrative, especially for the youngsters.

        I am 17 years old and I am going to go into a decent college (with, might I add, immense competition to begin with) because I have passion for such topics, only to get dissuaded because the benchmark of solving assignments etc. is now met by AI, and the signal from universities themselves is shrinking. It hasn't shrunk to the point that they are irrelevant; rather, you still need a university to try to get a job, yet companies have frozen hiring, which some attribute to LLMs.

        If you ask me, long term it feels like more people might associate themselves with hobbyist computing, even using AI (to be honest, sort of like pewdiepie), without being in the industry.

        I am not sure what the future holds for me (or for any of us, as a matter of fact), but I guess the point I am trying to make is that there is nuance to the discussion from both sides.

        Have a nice day!

  • yeasku 5 hours ago

    Because it adds nothing to the conversation.

  • TheTaytay 7 hours ago

    I was hoping you experimented with this! I am right there with you, hoping for an easier wasm sandbox for LLMs.

    (Keep posting please. Downvotes due to mentioning LLMs will be perceived as a quaint historic artifact in the not so distant future…)

    • wartywhoa23 4 hours ago

      On the contrary, it's quite possible that LLMs themselves will be perceived as a quaint historic artefact and join the ranks of mechanical turks, zeppelins, segways, google glasses and blockchains.

  • alex_suzuki 15 hours ago

    I think the people interacting with this post are just more likely to appreciate the raw craftsmanship and talent of an individual like Bellard, and coincidentally might be more critical of the machinery that in their perception devalues it. I count myself among them, but didn’t downvote, as I generally think your content is of high quality.

  • petercooper 18 hours ago

    Your tireless experimenting (and especially documenting) is valuable and I love to see all of it. The avant garde nature of your recent work will draw the occasional flurry of disdain from more jaded types, but I doubt many HN regulars would think you had anything but good intentions! Guess I am basically just saying.. keep it up.

  • SeanAnderson 18 hours ago

    I didn't downvote you. You're one of "the AI guys" to me on HN. The content of your post is fine, too, but even if it were sketchy, I'd've given you the benefit of the doubt.

  • halfmatthalfcat 18 hours ago

    I downvoted because I'm tired of people regurgitating how they've done this or that with whatever LLM of the week on seemingly every technical post.

    If you care that much, write a blog post and post that; we don't need low-effort LLM show-and-tell all day, every day.

    • simonw 16 hours ago
      • halfmatthalfcat 16 hours ago

        No, I mean post it as an HN post, and if anybody cares to see it, they'll upvote that and comment there. That, instead of piggybacking on other posts to get visibility.

      • lioeters 12 hours ago

        I love it. I find the note interesting and educational, and it adds to the discussion in context. Guess you're bound to get a few haters when you share your work in public, but I for one appreciate all your posts, comments, articles, and open-source projects.

sublimefire 2 hours ago

Look at how others use quickjs and restrict its runtime for sensitive workloads [1]; it should be similar.

But there are other ways, e.g. run the logic isolated within gvisor/firecracker/kata.

[1] github.com/microsoft/CCF under src/js/core

MobiusHorizons 17 hours ago

What is the purpose of compiling this to WebAssembly? What WebAssembly runtimes are there where there isn't already an easily accessible (and substantially faster) JS execution environment? I know wasmtime exists and is not tied to a JS engine like basically every other WebAssembly implementation, but wasmtime users aren't typically restricted from taking dependencies like v8 or jsc either. Usually WebAssembly is used to provide sandboxing - something a JS execution environment is already designed to provide - and only when the code that requires sandboxing is native code, not JavaScript. It sounds like a good way to waste a lot of performance for some additional sandboxing, but I can't imagine why you would ever design a system that way if you could choose a different (already available and higher-performance) sandbox.

  • simonw 16 hours ago

    I want to build features - both client- and server-side - where users can provide JavaScript code that I then execute safely.

    Just having a WebAssembly engine available isn't enough for this - something has to take that user-provided string of JavaScript and execute it within a safe sandbox.

    Generally that means you need a JavaScript interpreter that has itself been compiled to WebAssembly. I've experimented with QuickJS itself for that in the past - demo here: https://tools.simonwillison.net/quickjs - but MicroQuickJS may be interesting as a smaller alternative.
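
    For the server-side flavour, the wasmtime-py pattern looks roughly like the sketch below - the mquickjs.wasm module name and its "interpreter plus JS file" command line are assumptions, and the memory cap is arbitrary:

        from wasmtime import Engine, Store, Module, Linker, WasiConfig

        engine = Engine()
        store = Store(engine)
        store.set_limits(memory_size=32 * 1024 * 1024)  # cap guest linear memory

        wasi = WasiConfig()
        wasi.argv = ["mquickjs", "script.js"]  # assumed CLI: interpreter + JS file
        wasi.preopen_dir(".", ".")             # let the guest read script.js
        wasi.inherit_stdout()
        store.set_wasi(wasi)

        linker = Linker(engine)
        linker.define_wasi()
        module = Module.from_file(engine, "mquickjs.wasm")  # assumed filename
        instance = linker.instantiate(store, module)
        instance.exports(store)["_start"](store)  # wasmtime's fuel mechanism can also bound CPU time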

    If there's a better option than that I'd love to hear about it!

    • MobiusHorizons 15 hours ago

      This is generally the purpose of JavaScript execution environments like v8 or jsc (or quickjs, although I understand not trusting that as a sandbox to the same degree). They are specifically intended for executing untrusted scripts (e.g. web browsers). WebAssembly's sandboxing comes from JS sandboxing, since it was originally a feature of the same programs for the same reasons. Wrapping one sandbox in another is what I'm surprised by.

      • simonw 15 hours ago

        Running v8 itself as a sandbox is non-trivial, at least embedded in a Python or Node.js or similar application.

        The web is littered with libraries that half do that and then have a note in the README that says "do not rely on this as a secure sandbox".

  • kettlecorn 10 hours ago

    As I noted in another comment, Figma has used QuickJS to run JS inside Wasm ever since a security vulnerability was discovered in their previous implementation.

    In a browser environment it's much easier to sandbox Wasm successfully than to sandbox JS.

    • MobiusHorizons 7 hours ago

      That’s very interesting! Have they documented the reasoning for that approach? I would have expected iframes to be both a simpler and faster sandboxing mechanism, especially in compute-bound cases. Maybe the communication overhead is too high in their workload?

      EDIT: found this from your other comment: https://www.figma.com/blog/an-update-on-plugin-security/ - they do not address any alternatives considered.