Comment by Negitivefrags 3 days ago
I really hope that this doesn't come to pass. It's all in on the two worst trends in graphics right now. Hardware Raytracing and AI based upscaling.
AI upscaling is equivalent to lowering the bitrate of compressed video.
Given Netflix's popularity, most people obviously don't value image quality as much as other factors.
And it’s even true for myself. For gaming, given the choice of 30fps at a higher bitrate, or 60fps at a lower one, I’ll take the 60fps.
But I want high bitrate and high fps. I am certainly not going to celebrate the reduction in image quality.
> AI upscaling is equivalent to lowering the bitrate of compressed video.
When I was a kid, people had dozens of CDs with movies, while pretty much nobody had DVDs. DVD was simply too expensive, while Xvid could compress an entire movie onto a CD while keeping good quality. Of course the original DVD release would've been better, but we were too poor, and watching ten movies at 80% quality was better than watching one movie at 100% quality.
DLSS effectively quadruples FPS with minimal subjective quality impact. Of course a natively rendered image would've been better, but most people are simply too poor to buy a gaming rig that plays the newest games at 4K 120 FPS on maximum settings. You can keep arguing as much as you want that the natively rendered image is better, but unless you send me money to buy a new PC, I'll keep using DLSS.
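The arithmetic behind that "quadruple" figure is mostly pixel counts. A toy sketch (the half-per-axis internal resolution is the commonly cited behaviour of DLSS "performance" mode, not taken from any spec):

    #include <cstdio>

    int main() {
        // DLSS "performance" mode is commonly described as rendering
        // internally at half the output resolution on each axis, so
        // the GPU shades roughly a quarter of the pixels per frame.
        const long out_w = 3840, out_h = 2160;          // 4K output
        const long in_w = out_w / 2, in_h = out_h / 2;  // 1080p internal
        printf("%ld vs %ld shaded pixels (%.1fx fewer)\n",
               in_w * in_h, out_w * out_h,
               (double)(out_w * out_h) / (double)(in_w * in_h));
    }

Shading cost isn't the whole frame time, so the real-world gain is usually less than the full 4x.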
People have different sensitivities. For me personally, the reduction in image quality is very noticeable.
I am playing on a 55” TV at computer monitor distance, so the difference between a true 4K image and an upscaled one is very significant.
From what I gather, the contentious part is the overhead of hallucinating these pixels, on cards that also cost a lot more than the previous generation for otherwise minimal gains outside of DLSS.
Some [0] are seeing a 20 to 30% drop in actual frames when activating DLSS, and that means correspondingly higher latency as well.
There are still games where it should be a decent tradeoff (racing or flight simulators? Infinite Nikki?), but it's definitely not a no-brainer.
Not sure I'd put much faith in the word of someone who uses Asmongold clips and seems to look for drama where there is none.
Asking gaming youtube channels, especially those focusing on FPS and performance, to be clinical and drama free is a tall order.
There are a lot of theoretical arguments I could give you about how almost all cases where hardware BVH can be used, there are better and smarter algorithms to be using instead. Being proud of your hardware BVH implementation is kind of like being proud of your ultra-optimised hardware bubblesort implementation.
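For anyone unfamiliar with what "hardware BVH" actually accelerates, here is a minimal software sketch of the traversal loop in question (the node layout is illustrative, not any vendor's actual format). The data-dependent branching and pointer chasing here is both what plain SIMD hardware dislikes and the target of the bubblesort jab:

    #include <algorithm>
    #include <utility>
    #include <vector>

    struct AABB { float lo[3], hi[3]; };
    struct Node { AABB box; int left, right, triCount; };  // illustrative

    // Classic slab test: the per-node check that RT cores do in silicon.
    bool hitAABB(const AABB& b, const float o[3], const float inv[3]) {
        float t0 = 0.0f, t1 = 1e30f;
        for (int a = 0; a < 3; ++a) {
            float tNear = (b.lo[a] - o[a]) * inv[a];
            float tFar  = (b.hi[a] - o[a]) * inv[a];
            if (tNear > tFar) std::swap(tNear, tFar);
            t0 = std::max(t0, tNear);
            t1 = std::min(t1, tFar);
            if (t0 > t1) return false;
        }
        return true;
    }

    // Depth-first traversal with an explicit stack: branchy and
    // memory-incoherent, the "rabbit hole exploration" GPUs hate.
    int traverse(const std::vector<Node>& nodes,
                 const float o[3], const float inv[3]) {
        int stack[64], sp = 0, visited = 0;
        stack[sp++] = 0;  // root
        while (sp > 0) {
            const Node& n = nodes[stack[--sp]];
            if (!hitAABB(n.box, o, inv)) continue;
            ++visited;
            if (n.triCount > 0) continue;  // leaf: test triangles here
            stack[sp++] = n.left;          // interior: push both children
            stack[sp++] = n.right;
        }
        return visited;
    }

    int main() {
        std::vector<Node> nodes = {{{{0, 0, 0}, {1, 1, 1}}, -1, -1, 1}};
        float o[3] = {0.5f, 0.5f, -1.0f};     // ray origin
        float inv[3] = {1e30f, 1e30f, 1.0f};  // 1/direction for a +z ray
        return traverse(nodes, o, inv) == 1 ? 0 : 1;
    }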
But how about a practical argument instead. Enabling raytracing in games tends to suck. The graphical improvements on offer are simply not worth the performance cost.
A common argument is that we don't have fast enough hardware yet, or that developers haven't been able to use raytracing to its fullest yet, but it's been a pretty long damn time since this hardware became mainstream.
I think the most damning evidence of this is the just-released Battlefield 6. This is a franchise that previously had raytracing as a top-level feature. This new release doesn't support it, and doesn't intend to.
And in a world where basically every AAA release is panned for performance problems, BF6 has articles like this: https://www.pcgamer.com/hardware/battlefield-6-this-is-what-...
> But how about a practical argument instead. Enabling raytracing in games tends to suck. The graphical improvements on offer are simply not worth the performance cost.
Pretty much this - even in games that have good ray tracing, I can't tell when it's off or on (except for the FPS hit) - I cared so little I bought a card not known to be good at it (7900XTX) because the two games I play the most don't support it anyway.
They oversold the technology/benefits and I wasn't buying it.
I think one of the challenges is that game designers have gotten so good at working within the non-RT constraints (and pushing those constraints back) that it's a tall order for RT's improvements to pay back the performance cost (and the new rendering quirks). There's also the fact that the vast majority of companies don't want to cut off potential customers based on whether their hardware can do RT at all, or how fast. The other big question is whether they're trying to recreate a similar environment with RT, or taking advantage of what's only possible with the new technique, such as fully dynamic lighting, and whether that matters for the game they want to make.
To me, the appeal is that game environments can now be way more dynamic, because we're no longer limited by prebaked lighting. The Finals does this, but doesn't require ray tracing, and it's pretty easy to tell when ray tracing is enabled: https://youtu.be/MxkRJ_7sg8Y
But that's a game design change, and that takes longer.
> Enabling raytracing in games tends to suck.
Because enabling raytracing means the game has to support non-raytracing too, which limits how much the game's design can take advantage of raytracing being realtime.
The only exception to this I've seen is The Finals: https://youtu.be/MxkRJ_7sg8Y . Made by ex-Battlefield devs, the dynamic environment they shipped two years ago is on a whole other level, even compared to Battlefield 6.
There's also Metro: Exodus, which the developers have re-made to only support RT lighting. DigitalFoundry made a nice video on it: https://www.youtube.com/watch?v=NbpZCSf4_Yk
> But how about a practical argument instead.
With raytracing, lighting a scene goes from taking hours or days to just designating which objects emit light.
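As a toy sketch of that workflow difference (made-up types, not any engine's actual API), "placing a light" under a path tracer is a one-field material edit that takes effect on the next frame, with no rebake step:

    #include <cstdio>

    // Hypothetical material type for illustration only.
    struct Material {
        float albedo[3];
        float emission[3];  // non-zero emission makes the object a light
    };

    int main() {
        Material neonSign = {{0.1f, 0.1f, 0.1f}, {0.0f, 0.0f, 0.0f}};
        // The artist's entire "light placement" step:
        neonSign.emission[0] = 10.0f;  // the sign now lights the scene
        printf("emission: %.0f (no bake required)\n", neonSign.emission[0]);
    }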
Naive q: could games detect when the user is "looking around" at breathtaking scenery and raytrace those moments? Offer a "take picture" button and let the user specify how long to raytrace? Then for heavy action and motion, ditch the raytracing. Even better, as the user passes through "scenic" areas, automatically take pictures in the background. Heck, this could be an upsell, kind of like the real-life pictures you get on the roller coaster... #donthate
(sorry if obvious / already done)
Not exactly the same but adaptive rendering based on viewer attention reminded me of this: https://en.wikipedia.org/wiki/Foveated_rendering
Even without RT I think it'd be beneficial to tune graphics settings depending on context; in an action or combat scene there are likely aspects the player isn't paying attention to. I think the challenge is that it's more developer work, whether that's implementing some automatic detection or setting it manually scene by scene during development (which studios probably already do where they can set up specific arenas). I'd guess an additional task is making sure there's no glaring difference between tuning levels, and setting a baseline you can't go beneath.
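A sketch of what such automatic detection might boil down to (a hypothetical heuristic, not from any shipping engine):

    #include <algorithm>
    #include <cstdio>

    // Hypothetical: drop the internal render scale when the camera is
    // moving fast or combat is active, where detail is least noticed,
    // with a quality floor so the shift isn't jarring.
    float chooseRenderScale(float cameraSpeed, bool inCombat) {
        float scale = 1.0f;                      // 1.0 = native resolution
        if (inCombat) scale -= 0.25f;
        if (cameraSpeed > 5.0f) scale -= 0.15f;  // units/s, a tuning value
        return std::max(scale, 0.6f);            // never below 60%
    }

    int main() {
        printf("idle:   %.2f\n", chooseRenderScale(0.0f, false));
        printf("combat: %.2f\n", chooseRenderScale(8.0f, true));
    }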
It will never be fast enough to work in real time without compromising some aspect of the player's experience.
Ray tracing is solving the light transport problem in the hardest way possible. Each additional bounce adds exponentially more computational complexity. The control flows are also very branchy when you start getting into the wild indirect lighting scenarios. GPUs prefer straight SIMD flows, not wild, hierarchical rabbit hole exploration. Disney still uses CPU based render farms. There's no way you are reasonably emulating that experience in <16ms.
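The per-bounce blowup is easy to put numbers on. A toy cost model (assuming a fixed number of secondary rays per hit; real path tracers dodge the explosion by tracing one stochastic sample per bounce and averaging, which trades it for noise):

    #include <cstdio>

    // If every hit spawns `branch` secondary rays, total rays per pixel
    // grow as roughly branch^depth.
    long raysTraced(int depth, int branch) {
        if (depth == 0) return 1;  // just the primary ray
        return 1 + branch * raysTraced(depth - 1, branch);
    }

    int main() {
        for (int d = 1; d <= 6; ++d)
            printf("depth %d: %ld rays per pixel (branch=4)\n",
                   d, raysTraced(d, 4));
    }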
The closest thing we have to functional ray tracing for gaming is light mapping. This is effectively just ray tracing done ahead of time, but the advantage is you can bake for hours to get insanely accurate light maps and then push 200+ fps on moderate hardware. It's almost like you are cheating the universe when this is done well.
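The "cheating the universe" part, sketched (toy types; real lightmaps store RGB and use filtered UV lookups): all of the expense happens at build time, and the runtime cost is a single texture fetch no matter how many bounces were baked in.

    #include <cstdio>
    #include <vector>

    struct Lightmap {
        int w, h;
        std::vector<float> texels;  // one float per texel for brevity

        float sample(float u, float v) const {  // nearest-neighbour lookup
            int x = (int)(u * (w - 1)), y = (int)(v * (h - 1));
            return texels[y * w + x];
        }
    };

    // Stand-in for hours of offline path tracing per texel.
    float bakedRadiance(int x, int y) { return (x + y) % 2 ? 1.0f : 0.25f; }

    int main() {
        Lightmap lm{64, 64, std::vector<float>(64 * 64)};
        for (int y = 0; y < lm.h; ++y)          // done once, at build time
            for (int x = 0; x < lm.w; ++x)
                lm.texels[y * lm.w + x] = bakedRadiance(x, y);

        printf("runtime cost is one lookup: %.2f\n", lm.sample(0.3f, 0.7f));
    }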
The human brain has a built-in TAA solution that excels as frame latencies drop into single-digit milliseconds.
The problem is the demand for dynamic content in AAA games. Large exterior and interior worlds with dynamic lights, day and night cycles, glass and translucent objects, mirrors, water, fog and smoke. Everything should be interactable and destructible. And everything should be easy for artists to set up.
I would say the closest we can get is workarounds like radiance cascades. But anything other than raytracing is just an ugly workaround that falls apart in dynamic scenarios. And don't forget that baking times, and storing those baked results, are a huge negative, leading to massive game sizes.
Funnily enough raytracing is also just an approximation to the real world, but at least artists and devs can expect it to work everywhere without hacks (in theory).
Manually placed lights and baking not only takes time away from iteration but also takes a lot of disk space for the shadow maps. RT makes development faster for the artists, I think DF even mentioned that doing Doom Eternal without RT would take so much disk space it wouldn’t be possible to ship it.
edit: not Doom Eternal, it's Doom: The Dark Ages, the latest one.
The quoted number was in the range of 70-100 GB if I recall correctly, which is not that significant for modern game sizes. I'm sure a lot of people would opt to use it as an option, as a trade-off for a 2-3x higher framerate. I don't think anyone realistically complains about video game lighting looking too "gamey" in the middle of an intense combat sequence. Why optimize a Doom game, of all things, for standing still and side-by-side comparisons? I'm guessing Nvidia paid good money for making RT tech mandatory. And as for the shortened development cycle: perhaps it's cynical, but I find it difficult to sympathize when the resulting product is still sold for €80.
You still have to manually place lights. Where do you think the rays come from (or rather, go to).
It's fast enough today. Metro Exodus, an RT-only game runs just fine at around 60 fps for me on a 3060 Ti. Looks gorgeous.
Light mapping is a cute trick and the reason why Mirror's Edge still looks so good after all these years, but it requires doing away with dynamic lighting, which is a non-starter for most games.
I want my true-to-life dynamic lighting in games thank you very much.
> it requires doing away with dynamic lighting
Most modern engines support (and encourage) use of a mixed lighting mode. You can have the best of both worlds. One directional RT light probably isn't going to ruin the pudding if the rest of the lights are baked.
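A minimal sketch of what that mixed mode boils down to per pixel (illustrative math, not a specific engine's shading model): the expensive multi-bounce indirect term stays precomputed in the lightmap, and only the one dynamic light is evaluated live.

    #include <algorithm>
    #include <cstdio>

    // Baked indirect lighting plus one realtime directional light.
    float shade(float bakedIndirect, float nDotL, float lightIntensity) {
        return bakedIndirect + lightIntensity * std::max(nDotL, 0.0f);
    }

    int main() {
        printf("facing light: %.2f\n", shade(0.2f, 0.8f, 1.0f));
        printf("facing away:  %.2f\n", shade(0.2f, -0.3f, 1.0f));
    }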
How is Metro Exodus Enhanced Edition (that is purely raytraced) compromised compared to regular version that uses traditional lighting?
Much higher resource demands, which then require tricks like upscaling to compensate. Also, you get uneven competition between GPU vendors, because in practice it is not hardware raytracing but Nvidia raytracing.
On a more subjective note, you get less interesting art styles because studio somehow have to cram raytracing as a value proposition in there.
Not OP, but a lot of the current kvetching about hardware based ray tracing is that it’s basically an nvidia-exclusive party trick, similar to DLSS and physx. AMD has this inferiority complex where nvidia must not be allowed to innovate with a hardware+software solution, it must be pure hardware so AMD can compete on their terms.
1. People somehow think that just because today's hardware can't handle RT all that well, it never will be able to. A laughable position, of course.
2. People turn on RT in games not designed with it in mind and therefore observe only minor graphical improvements for vastly reduced performance. Simple chicken-and-egg problem, hardware improvements will fix it.
The gimmicks aren't the product, and the customers of frontier technologies aren't the consumers. The gamers and redditors and smartphone fanatics, the fleets of people who dutifully buy, are the QA teams.
In accelerated compute, the largest areas of interest for advancement are 1) simulation and modeling and 2) learning and inference.
That's why this doesn't make sense to a lot of people. Sony and AMD aren't trying to extend current trends, they're leveraging their portfolios to make the advancements that will shape future markets 20-40 years out. It's really quite bold.
So far, AI upscaling/interpolation has just been used to ship horribly optimized games with a somewhat acceptable framerate.
And they're achieving "acceptable" frame rates and resolutions by sacrificing image quality in ways that aren't as easily quantified, so those downsides can be swept under the rug. Nobody's graphics benchmark emits metrics for how much ghosting is caused by the temporal antialiasing, or how much blurring the RT denoiser causes (or how much noise makes it past the denoiser). But they make for great static screenshots.
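The ghosting mechanism at least is easy to demonstrate. A toy version of the temporal blend at the heart of TAA (the alpha value is illustrative): stale colour decays geometrically rather than disappearing, which shows up as smearing that no frame-rate counter will ever report.

    #include <cstdio>

    // Blend the current frame with reprojected history. A small alpha
    // smooths noise, but it also means old colour lingers for many
    // frames -- that lingering is the ghost trail.
    float taaResolve(float history, float current, float alpha) {
        return alpha * current + (1.0f - alpha) * history;
    }

    int main() {
        float pixel = 1.0f;  // a bright object was here last frame
        for (int frame = 1; frame <= 8; ++frame) {
            // the object has moved away; the scene behind it is black
            pixel = taaResolve(pixel, 0.0f, 0.1f);
            printf("frame %d: residual %.3f\n", frame, pixel);
        }
    }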
I disagree. From what I've read, if the game can leverage RT, the artists save a considerable amount of time when iterating on level designs. Before RT they had to place lights manually, and any change to the level involved a lot of rework. This also saves storage, since there's no need to bake shadow maps.
So what stops developers from iterating on a raytraced version of the game during development, and then executing a shadow precalculation step once the game is ready to ship? Make it an optional download, like the high-resolution texture packs. As it stands, they are offloading the processing power and energy requirements onto consumer PCs, and doing so in a very inefficient manner.
The amount of drama about AI based upscaling seems disproportionate. I know framing it in terms of AI and hallucinated pixels makes it sound unnatural, but graphics rendering works with so many hacks and approximations.
Even without modern deep-learning based "AI", it's not like the pixels you see with traditional rendering pipelines were all artisanal and curated.