Comment by pjmlp 6 months ago

31 replies

To be fair, neither are WebGL and WebGPU, versus their native API counterparts; the best you can get are shadertoy demos and product visualisation on ecommerce sites.

Due to tooling, sandboxing, and not having any control over which GPU gets selected, or over why the browser blackboxes it and switches to software rendering.

torginus 6 months ago

That's kinda untrue - there are games (not necessarily high-end ones, but quite sophisticated ones nonetheless).

The biggest issue is the API limitations that keep you from doing things the modern way, forcing you back onto mid-2000s techniques.

Here's a game that uses an Electron-powered engine built on JS and WebGL:

https://store.steampowered.com/app/1210800/Rum__Gun/

  • pjmlp 6 months ago

    I would be impressed if it actually was available on a browser, without native help.

    And that is Flash 3D quality; we would expect something better 15 years later.

    • torginus 6 months ago

      It does run in the browser, it's just that on Steam, people expect an executable.

      And sorry for the lack of quality - this project was built by just one guy who did everything custom (art, engine, editor, writing, music etc.) - it's super impressive imo. I'm sure if you replaced the models and textures with better-looking ones (at no increase in technical complexity), it would look better.

      • gpderetta 6 months ago

        Having looked at just the trailer on steam, it does look impressive, especially for a one man effort.

  • fidotron 6 months ago

    Honestly I've come to view the "it's the API's fault" as cope.

    There are times when it is legitimately true, but it's far easier to say that your would-have-been-amazing efforts were blocked because you needed some obscure feature of a prototype GPU than it is to accept you were never going to get close in the first place, for completely different reasons.

    • whizzter 6 months ago

      No, it's a valid complaint. Even before hardware raytracing, a huge amount of code was moving to compute shaders; most global illumination techniques from the last 10-15 years are easier to implement if you can write to random indices (often you can refactor to gather reads instead, but it's quite cumbersome and will almost certainly cost performance).

      Even WebGL 2 is only the equivalent of GLES 3.1 (and that's maybe a low desktop GL 4.1 equivalent). I think my "software" raytracing for desktop GL was only feasible with GL 4.3 or GL 4.4, if I remember correctly (and even those are kinda ancient).

      • flohofwoe 6 months ago

        WebGL2 is GLES 3.0, not 3.1 (that would be big because 3.1 has storage buffers and compute shaders).

        • whizzter 6 months ago

          Thanks for correcting. I only remembered that 3.2 was the latest, so I went one down, since I remembered compute wasn't part of it - but it seems it was two steppings down. :)

      • fidotron 6 months ago

        > No, it's a valid complaint.

        For what?

        This is exactly what I am talking about: successful 3D products don't need raytracing or GI; the bulk of the audience doesn't care, as shown by the popularity of the Switch. Sure, those things might be nice, but people act like they are roadblocks.

        • whizzter 6 months ago

          Yes and no; Nintendo properties have a certain pull that isn't available to everyone.

          But also, I'm pretty sure that even the Switch 1 has far more capable graphics than WebGL 2. Doom Eternal was ported to it, and in a frame teardown someone did, they mentioned that parts of it use compute.

          Yes, you can do fairly cool stuff for a majority of people, but old APIs also mean that you will spend far more time getting something halfway decent (older, worse APIs just take more time to do things with than modern ones).

      • pjmlp 6 months ago

        That is PlayStation 3 and Xbox 360 level graphics, yet we hardly see it in any browser, other than some demos.

samiv 6 months ago

Not to mention

  - the incredible overhead of each and every API call
  - the nerfed timers that jitter on purpose
  - the limitation of a single rendering context, and that you *must* use the JS main thread for all those rendering calls (so no background async for you..)
  • dakom 6 months ago

    > overhead of each API call

    Yeah, that's an issue, especially with WebGL, but you can get pretty far by reducing calls with a cache: things like "don't set the uniform / attribute if you don't need to". I hear WebGPU has a better API for this, and eventually this should approach native performance. Though I also wonder: is this really a bottleneck for real-world projects? I love geeking out about this, but I suspect the real-world blocker is more like "the user doesn't want to wait 5 minutes to download AAA textures".
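    As a sketch of that caching idea (names like `UniformCache` are made up for illustration, not a real library), a thin wrapper can drop redundant state changes before they ever reach the GL object:

```javascript
// Sketch of a redundant-state cache in front of WebGL-style calls.
// `gl` is anything with a uniform1f(location, value) method; the
// UniformCache name is made up for illustration.
class UniformCache {
  constructor(gl) {
    this.gl = gl;
    this.last = new Map(); // location -> last value sent to the GPU
  }

  // Only forward the call when the value actually changed.
  uniform1f(location, value) {
    if (this.last.get(location) === value) return false; // call skipped
    this.last.set(location, value);
    this.gl.uniform1f(location, value);
    return true; // call issued
  }
}
```

    With a real WebGL context as `gl`, the same pattern extends to `useProgram`, `bindTexture`, etc.; the win is that skipped calls never cross into the browser's GL layers at all.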

    > Nerfed timers

    Yeah, also an issue. Fwiw Mainloop.js gives a nice API for having a fixed timestep and getting an interpolation value in your draw handler to smooth things out. Not perfect, but easy and state-of-the-art afaict. Here's a simple demo (notice how `lerp` is called in the draw handler): https://github.com/dakom/mainloop-test
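    The underlying accumulator pattern that libraries like Mainloop.js wrap looks roughly like this (a hedged sketch; the names here are made up, not Mainloop.js's actual API):

```javascript
// Sketch of the fixed-timestep accumulator pattern: simulate in fixed
// steps, render with an interpolation factor for the leftover time.
const STEP_MS = 1000 / 60; // fixed simulation step

function advance(state, frameTimeMs) {
  state.accumulator += frameTimeMs;
  let steps = 0;
  while (state.accumulator >= STEP_MS) {
    state.accumulator -= STEP_MS;
    steps += 1; // run one fixed update() per step
  }
  // Leftover fraction of a step: the draw handler uses it to lerp
  // between the previous and current simulation states.
  const alpha = state.accumulator / STEP_MS;
  return { steps, alpha };
}
```

    A 25 ms frame yields one fixed step plus `alpha = 0.5`, so the draw handler renders halfway between the last two physics states instead of visibly stuttering.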

    Re: multithreading, I don't think that's a showstopper... more like, the techniques you'd use for native aren't going to work out of the box on the web and need more custom planning. I see this as more of a problem for speeding up systems _within_ systems, i.e. faster physics by parallelizing grids or whatever. But having a physics WASM module run in a worker thread that shares data with the main thread is totally doable; it just needs elbow grease to make it work (it will be nice when multithreading _just works_ with a SharedArrayBuffer, and easily).
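    A minimal sketch of that shared-data idea, assuming a cross-origin-isolated page so `SharedArrayBuffer` is available (all names are illustrative; the worker/main split is shown as two plain functions):

```javascript
// Sketch: a worker-side "physics" writer and a main-thread reader sharing
// one SharedArrayBuffer, with an Atomics-coordinated dirty flag so the
// reader knows when a fresh frame has been published.
const FLAG_DIRTY = 1;

const sab = new SharedArrayBuffer(8 + 2 * Float32Array.BYTES_PER_ELEMENT);
const flag = new Int32Array(sab, 0, 1);        // slot 0: dirty flag
const positions = new Float32Array(sab, 8, 2); // x, y

// "Physics side" (would run in the worker): write data, then publish.
function physicsStep(x, y) {
  positions[0] = x;
  positions[1] = y;
  Atomics.store(flag, 0, FLAG_DIRTY);
}

// "Render side" (main thread): consume only when a new frame is flagged.
function readPositions() {
  if (Atomics.compareExchange(flag, 0, FLAG_DIRTY, 0) !== FLAG_DIRTY) return null;
  return [positions[0], positions[1]];
}
```

    In a real page you would post `sab` to the worker once and have both sides build their typed-array views over it; the buffer itself is never copied.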

    • samiv 6 months ago

      Multithreading, yes, works the way you mention, but I meant multiple rendering contexts.

      In standard OpenGL, the de-facto way to do parallel GPU resource uploads while rendering is to have multiple rendering contexts in a "share group", which allows them to share some resources such as textures. So you can run rendering in one thread that uses one context, and do resource uploads in another thread that uses a different context.

      A sibling comment mentioned something called OffscreenCanvas, which hints that it might let a web app achieve the same.

  • flohofwoe 6 months ago

    > - the incredible overhead of each and every API call

    The calling overhead between WASM and JS is pretty much negligible since at least 2018:

    https://hacks.mozilla.org/2018/10/calls-between-javascript-a...

    > - the nerfed timers that jitter on purpose

    At least Chrome and Firefox have "high-enough" resolution timers in cross-origin-isolated contexts:

    https://developer.chrome.com/blog/cross-origin-isolated-hr-t...

    ...also, if you just need a non-jittery frame time, computing the average over multiple frames actually gives you a frame duration that's stable and exact (e.g. 16.667 or 8.333 milliseconds despite the low-resolution inputs).
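    That averaging is a one-liner over a sliding window (a sketch; the function name is made up):

```javascript
// Sketch: recovering a stable frame duration from a low-resolution,
// jittery timestamp source by averaging over a sliding window of frames.
function makeFrameTimer(windowSize = 60) {
  const samples = [];
  return function record(durationMs) {
    samples.push(durationMs);
    if (samples.length > windowSize) samples.shift();
    return samples.reduce((a, b) => a + b, 0) / samples.length;
  };
}
```

    Feeding it millisecond-quantized samples from a 60 Hz loop (17, 17, 16, 17, 17, 16, ...) converges on the true 16.667 ms frame duration.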

    Also, surprise: there are no non-jittery time sources on native platforms either (for measuring frame duration at least) - you also need to run a noise-removal filter over the measured frame durations in native games. Even the 'exact' presentation timestamps from DXGI or MTLDrawable have very significant (up to a millisecond) jitter.

    > - the limitation of a single rendering context and that you must use the JS main thread to all those rendering calls (so no background async for you..)

    OffscreenCanvas allows rendering to be performed in a worker thread: https://web.dev/articles/offscreen-canvas

    • samiv 6 months ago

      I didn't mean just WASM -> JS, but the WebGL API call overhead, which includes marshalling the call from the WASM runtime across multiple layers and processes inside the browser.

      Win32 performance counter has native resolution < 1us

      OffscreenCanvas is something I haven't actually come across before. Looks interesting, but I already expect that the API is either brain-damaged or intentionally nerfed for security reasons (or both). Anyway, I'll look into it, so thanks for that!

      • flohofwoe 6 months ago

        > Win32 performance counter has native resolution < 1us

        Yes, but that's hardly useful for things like measuring frame duration when the OS scheduler runs your per-frame code a millisecond late or early, or generally preempts your thread in the middle of your timing code (i.e. measuring durations precisely is a non-trivial problem on native platforms, even with high-precision time sources).

chilmers 6 months ago

Figma uses WebGL for rendering and they seem to be doing ok.

  • locallost 6 months ago

    Although I will say that the difference between my old Intel MacBook and the M2 Pro is night and day.

  • pjmlp 6 months ago

    Yeah, at a level like GDI+, CoreGraphics, XWindows hardware surfaces,....

    This isn't really what real-time graphics is all about in modern times.

    This is,

    https://youtu.be/AV279wThmVU?si=Ou04h5z0Mju7kiJ0

    The demo is from 2018, 7 years ago!

    • chilmers 6 months ago

      The claim was that with WebGL "the best you can get are shadertoy demos, and product visualisation on ecommerce sites". Figma is neither, regardless of how it's making use of WebGL under the hood. Not sure what relevance an Unreal Engine demo has, as you seem to think I was making a claim about real-time graphics that I wasn't.

      • fidotron 6 months ago

        I had this argument with pjmlp not too long ago, and it goes in circles.

        Basically, they define anything less than pushing the extreme limits of rendering technology as worthless, while simultaneously not actually understanding what that is beyond the marketing hype. The fact that most users would not be able to run that seven-year-old demo on their systems today, even natively, is beside the point, of course.

        WebGL particularly absolutely has problems, but the revealing thing is how few people state what they really are, such as the API being synchronous or the inability to use inverted z-buffers. Instead it's a lot of noise about ray tracing etc.

        WASM per call overhead is a whole other problem too, GC or not.

        • davexunit 6 months ago

          Thanks for bringing a reasonable perspective to this discussion.

      • pjmlp 6 months ago

        Figma falls under the graphics requirements of ecommerce sites, I have my doubts that they even make use of WebGL 2.0 features.

    It's only a way to hopefully get a hardware-accelerated canvas, if the browser doesn't consider the GPU blacklisted for 3D acceleration.

        That isn't real time graphics in the sense of games programming.

davexunit 6 months ago

Yes, I know there's more overhead on the web than native, but that is missing the point of my post. I'm talking about issues with Wasm GC relative to other ways of rendering with Wasm. I've played many browser games with good performance, btw.

  • pjmlp 6 months ago

    I would be interested in any that beat the iPhone's 2011 OpenGL ES 3.0 demo, Infinity Blade.

    In real-time graphics rendering quality, that is.