Show HN: Write Go code in JavaScript files
(npmjs.com) | 158 points by yar-kravtsov 7 days ago
I built a Vite plugin that lets you write Go code directly in .js files using a "use golang" directive. It compiles to WebAssembly automatically.
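For context, usage looks roughly like this (a hypothetical sketch; the plugin's actual syntax and export mechanics may differ):

```
// math.js — intercepted by the Vite plugin at build time
"use golang";

func Add(a, b int) int {
    return a + b
}
```

The plugin compiles the Go body to WASM and generates the JS glue to call it.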
That would also avoid a problem with this syntax: it's not a valid Go file (it doesn't start with `package ...`, and I don't think a bare top-level string is valid), which will make lots of editors pretty unhappy.
There’s no reason to unilaterally dismiss others' use cases; this debate is as old as ReactJS (mixing JS and HTML).
Modern tools often make this tradeoff, like Astro, and none of the tools' authors are claiming you need to use them.
Yes, the pattern can be abused, but dogmatic rules against mixing languages may also entail downsides.
I stand firm that there's no reason to write Go in a .js file other than ragebaiting, especially with that "use" directive that everyone on Twitter is clearly hating on at the moment (due to Vercel, etc.)
To be clear I'm fine with importing .go from JS, it's the "go in file.js" thing I don't like.
I was playing around with WASM and WebGL a few years ago to see if they could be used to improve JS performance on certain computationally heavy tasks. If I recall correctly, the answer was generally no, because of the overhead involved in crossing the JS -> WASM -> JS boundary.
Additionally, JIT optimisation means that even for very computationally heavy tasks, unless they're one-offs or have a significant amount of computational variance, JavaScript is surprisingly performant.
So unless you need to compute something for several seconds as a one-off, there will typically be very little (if any) gain from trying to squeeze out extra performance this way.
However this is all off the top of my head and from my own experimentation several years back. Someone please correct me if I'm wrong.
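To illustrate the JIT point: a hot, monomorphic, allocation-free numeric loop is exactly what JavaScript engines optimize well, which is why round-tripping through WASM often fails to pay for its boundary costs. A minimal sketch (timings vary wildly by engine and workload):

```javascript
// Sum of squares over a large typed array: the kind of monomorphic,
// allocation-free loop that JIT compilers turn into tight machine code.
function sumOfSquares(arr) {
  let total = 0;
  for (let i = 0; i < arr.length; i++) {
    total += arr[i] * arr[i];
  }
  return total;
}

const data = new Float64Array(1_000_000).map((_, i) => i % 10);

// Warm up so the JIT compiles the hot path, then time a second run.
sumOfSquares(data);
const t0 = performance.now();
const result = sumOfSquares(data);
console.log(result, `${(performance.now() - t0).toFixed(2)}ms`);
```

The first call is typically much slower than subsequent ones, which is why one-off computations see the least benefit from JIT optimization.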
> Scientific computing where you already have Go code
This is a really cool project, I must admit. I'm also hoping for something similar for Julia, since it has one of the strongest focuses on scientific computing; a Julia version of this would be really cool.
Coming back to my main point: what if the scientific computing project is too complicated and depends on features that aren't available in tinygo? From what I remember, tinygo and Go aren't 1:1 compatible.
How much impact could that have, though? I'm basically asking about the state of tinygo, and whether it can handle the scientific computing use case as accurately as you describe. Still a great project nonetheless. Kudos.
Hah. Back in the day I wrote a plugin to convert Lua files into a module that ran via one of the JS Lua VMs. Good fun.
Reminds me of this toy I made some years ago: https://www.npmjs.com/package/polyglot-tag
Looks interesting, and a good use case for introducing folks to extending web apps with WASM functionality.
Used a similar technique with tinygo WASM builds (without Vite, of course) on a toy project where the WASM-based functionality acted as a fallback if the API wasn't available or the user was offline. Found it an interesting pattern.
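The fallback pattern described above can be sketched like this (hypothetical names; `apiTransform` and `wasmTransform` stand in for whatever the real API call and WASM export are):

```javascript
// Prefer the remote API; fall back to a local WASM build when the API
// is unreachable or the user is known to be offline.
async function transform(input, { apiTransform, wasmTransform }) {
  if (typeof navigator !== "undefined" && navigator.onLine === false) {
    return wasmTransform(input); // known offline: skip the network entirely
  }
  try {
    return await apiTransform(input);
  } catch {
    return wasmTransform(input); // network error, timeout, 5xx, etc.
  }
}
```

The WASM build only has to match the API closely enough for degraded operation, which keeps the bundle-size cost of shipping it reasonable.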
Better performance? For JavaScript code that calls into native platform APIs provided by the browser, it's already been shown that performance is an order of magnitude better than calling into WASM and doing all the shenanigans to move bytes to and from WASM memory.
I don't think any of the use cases suggested really make sense though. For a compute-intense task like audio or video processing, or for scientific computing where there's likely to be a requirement to fetch a ton of data, the browser is the wrong place to do that work. Build a frontend and make an API that runs on a server somewhere.
As for cryptography, trusting that the WASM build of your preferred library hasn't introduced any problems demonstrates a level of risk tolerance that far exceeds what most people working in cryptography would accept. Besides, browsers have quite good cryptographic APIs built in. :)
> For a compute-intense task
The browser often runs on an immensely powerful computer. It's a waste to use that power as nothing more than a dumb terminal. As a matter of fact, my laptop is 6 years old by now and considerably faster than the VPS our backend runs on.
I let the browser do things such as data summarizing/charting and image convolution (in JavaScript!). I'm also considering harnessing it for video pre-processing.
> For a compute-intense task like audio or video processing, or for scientific computing where there's likely to be a requirement to fetch a ton of data, the browser is the wrong place to do that work.
... I mean... elaborate?
Every time I've heard somebody say this, it's always some form of being stuck in the 90s/00s, with the notion that browsers showing GIFs is the ceiling and that real work can only happen on the server.
Idk how common this is now, but a few years ago (~2017) people would show projects like Figma that drew a few hundred things on screen, and people would be amazed. Which is crazy, because things like WebGL, WASM, WebRTC, and WebAudio are insanely powerful APIs that give pretty low-level access. A somewhat related thing is people who keep clamoring for DOM access in WASM because, again, they have this idea that web = webpage/DOM, but that's a segway into a whole other thing.
great points, agreed
also "segway" is a scooter, "segue" is a narrative transition
I would rather instantiate the WASM module myself and have a build step to compile the .go file. That way both JS and Go tooling would keep working.
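A sketch of the manual approach. With Go you'd compile via `GOOS=js GOARCH=wasm go build -o main.wasm` (or tinygo) and load the `wasm_exec.js` glue from the Go distribution first; the snippet below instantiates a tiny hand-assembled module instead of a real Go build, purely so the example is self-contained:

```javascript
// Minimal WASM module exporting add(i32, i32) -> i32, hand-assembled
// here only to avoid needing a Go toolchain for the demo.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32,i32)->i32
  0x03, 0x02, 0x01, 0x00,                               // one func of type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section header
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0/1, i32.add, end
]);

const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes));
console.log(instance.exports.add(2, 3)); // 5
```

In a browser you'd typically use `WebAssembly.instantiateStreaming(fetch("/main.wasm"), importObject)` instead of the synchronous constructors.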
Just be careful with this backend-code-in-frontend stuff. If it's needed for some computationally expensive logic that is logically client side, then fine. But be wary of letting the client dictate business rules and having open-for-anything APIs (GraphQL is particularly prone to this).
I've seen teams do this in the wild more than once.
REST is the solution to this, but it's been reduced to JSON RPC over HTTP nowadays.
If we're talking about an HTML server (a REST API), then I agree. But if the choice is between JSON REST and JSON RPC, I'd take JSON RPC any day, to be honest with you.
A REST API needs to be descriptive enough, and have a wide enough contract with the client, that the response can modify the client's behaviour to deal with any multitude of situations going on with the server. This works great when the response is HTML and the client is a browser, since the HTML dictates where and how to interact with the server (e.g. a link is a GET request to XYZ, followed by a page load). For JSON REST to meet that bar you need JSON+HATEOAS, and having worked on a project that tried that, let me tell you there is HATE aplenty to be found in trying to make it work.
So if we abandon the strict notion of what REST is, then what does JSON REST mean? In my experience, it's been a lot of arguing over what paths, methods, and resources to use, which at best is a waste of time (because no one is going to see the choice; it's just whatever your JS lib calls and your backend returns), and at worst puts bad constraints on how the backend is modeled by forcing you to express it in terms of Resources for your REST API to work effectively.
In my opinion, it's much better to use an RPC API that simply describes API "functions". These APIs can operate over any number of actual DB resources (and sometimes none) and, importantly, leave you the time and freedom to model your backend in terms of business rules rather than "RESTful" norms.
What I meant was using a backend-oriented language for frontend-oriented work. My shorthand was unclear; apologies.
Like it. Especially the how-to-use-it and when-to-use-it guidance.
Fair take! Though, this was literally built as a joke in response to @ibuildthecloud's tweet. Sometimes the dumbest ideas are the most fun to prototype.
Beautiful. Minor feedback: rather than having a "use golang" directive, just allow imports of .go files. This is more idiomatic for JS bundlers.
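A hedged sketch of what the .go-import approach could look like as a Vite plugin, using Vite's `transform` hook; `compileToWasm` is a hypothetical helper (e.g. shelling out to tinygo and returning the binary base64-encoded):

```javascript
// Rough shape of a Vite plugin that turns `import mod from "./lib.go"`
// into a module exposing the compiled WASM's exports.
// `compileToWasm(source, id)` is a stand-in for invoking the toolchain.
function goImportPlugin(compileToWasm) {
  return {
    name: "go-import",
    async transform(code, id) {
      if (!id.endsWith(".go")) return null; // leave other modules alone
      const wasmBase64 = await compileToWasm(code, id);
      // Emit an ES module that decodes and instantiates the WASM inline.
      return `
        const bytes = Uint8Array.from(atob("${wasmBase64}"), c => c.charCodeAt(0));
        const { instance } = await WebAssembly.instantiate(bytes);
        export default instance.exports;
      `;
    },
  };
}
```

A real version would also need the `wasm_exec.js` runtime glue for modules built with the standard Go toolchain, but returning `null` for non-.go ids is how Vite plugins opt out of files they don't handle.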