> If developers can deliver playable experiences on the Switch, they likely can do so with a HD 530 too.
The Switch’s screen (and output resolution) is very small, which is why the Switch GPU can produce decent-looking games on it. Intel GPUs driving larger screens need several times the performance to achieve comparable visual quality.
First-party Switch games look decent even when docked on a TV (the output is still lower resolution). They don't compare to PS5 or XSX in graphical fidelity, but they are still beautiful and playable.
This is due to a stylistic choice that Nintendo makes - they don’t care about photorealism at all as a design goal.
Almost every other game is measured by the yardstick of how “realistic” its artwork looks in play, which means those games are forever craving more resources and graphics compute as they become available.
Nintendo uses bright colours, cartoon-like designs and sparse environments that require an order of magnitude less compute per frame but still look great. You can see this on a Switch if you play a game of Mario Kart 8 or Splatoon 3 and then immediately launch a multi-platform game like Fortnite or Apex Legends that isn’t designed for limited resources. The difference is night and day.
As it happens, both of the latest Zelda games are IMHO the most beautiful open worlds created on any platform - likely because not focusing on photorealism allows for more freedom with the artwork itself.
This has long been one of World of Warcraft's strengths as well. Even with incremental improvements, that engine is still almost twenty years old. Their great art direction with distinctly stylised art has let the game age way better than it otherwise would have.
StarCraft II (2010) also still looks great, and scales well from low-end to high-end systems. CPU utilization in team games (with lots of units) is the bigger issue, but that's a hard problem in RTS games.
Since you mention Zelda, I have to add that Zelda at this point plays better (higher res and frame rate) on emulators than on the platform it was made for because of its hardware limitations.
It has a higher frame rate and resolution, but it doesn’t necessarily play “better”, depending on how you view the importance of the native controllers etc.
That said, it speaks more to the point of designing for something other than reality simulation. If other games focused less on looking “real” they’d likely see vastly better performance on normal hardware.
> It has a higher frame rate and resolution, but it doesn’t necessarily play “better”, depending on how you view the importance of the native controllers etc.
I literally use a Switch controller over Bluetooth with my Switch emulator.
The Switch hardware is visibly inadequate even for Nintendo's own first-party games built in the last few years, despite their compensatory art style. 4K@120fps isn't needed, and I'm not a gamer who nitpicks about frame rates, but you can watch many of their games frequently dip from something acceptably smooth above 30 fps into 10-20 fps slideshows. Inconsistency is worse than a low frame rate, especially when you're required to time button presses, which is something Nintendo loves to build into its game designs.
>it doesn’t necessarily play “better”, depending on how you view the importance of the native controllers etc.
You can have the same graphics, same controllers, just better performance, if you so desire.
You can play with Joycons, or when playing with other controllers (like an Xbox controller), you can have the UI changed to match. I used a 360 controller and left the UI stock.
And you can just turn off Switch-specific quality drops that are used to improve performance on the Switch hardware (at least one of which actually drops performance in emulation, see "LOD Improvement" mod), and/or use better quality emulated versions for an experience that is effectively a theoretical "Switch Pro".
I think in this specific instance, better isn't really that subjective.
I frankly don't understand why photorealism is a goal for almost _any_ game project.
Look outside games for a moment, at the "Spider-verse" movies. Universally lauded for their splashy, comic-book style animation. Imagine those movies if the CGI was photorealistic… or if they were just straight-up filmed!
Fantasy and fantastical stories benefit from artistic flair. Even a game like Call of Duty, IMO, would benefit from some cel-shading and a more rotoscoped look. (I'm probably the only person who was disappointed to see that the next GTA game looks more like reality than GTA box art, which always should have been their stylistic lighthouse.)
I prefer the Spider-Verse movies over MCU Spider-Man, but both Nolan's Batman and the early MCU showed that photorealism can work when the tone, plot, etc. match and use it effectively. Similarly, the Spider-Verse movies, the various Marvel cartoons of the 90s (IMHO animated teenage Iron Man is more interesting than MCU Iron Man), and even ATLA (unlike the movie...) show that you can tell interesting, deep stories without live action.
Lately I've noticed increasing visual fidelity is actively hurting my gaming experience. There's just too much stuff to look at and filtering things that are worth paying attention to from things that aren't is taking more and more effort. Take a game like Starfield where every desk has a hundred pieces of random garbage on it and may or may not have something really valuable, so you either spend half your play time staring at shelves or you miss out on that thing you needed.
Design counts for so much more than triangles. A Series X game character has twenty times the polygon count of an Xbox 360 character yet somehow Johnny Silverhand looks like plastic-skinned garbage compared to Marcus Fenix.
> latest Zelda games are IMHO the most beautiful open worlds created
I’ve only played BotW (and I enjoyed it a lot), but tbh I can’t really remember any distinct environments except maybe the volcanic region, Kakariko village and the Korok forest. IMO the minimalism of the game world simply doesn’t give the designers enough “tools” to create impressive environments, at least compared to games like Horizon Zero Dawn or Red Dead Redemption 2, or even Skyrim.
There’s also one more advantage: Nintendo games, in general, age much better over time. It turns out stylization doesn’t look dated nearly as quickly…
If you took a random segment of the population and let them try the top games on a GameCube versus a PS2 in 2023, I think the GameCube would win handily.
> likely because not focusing on photorealism allows for more freedom with the artwork itself.
How does constraining color and design schemes allow for more artistic freedom? There are far fewer art-design options for game developers on a hardware-constrained platform like the Switch than on PC or the mainstream consoles. You can make minimalist and cartoonish games on those as well, if you wish.
There have been a fair few non-photo-realistic games on PC and console but they’re not system sellers, and rarely move the needle.
For every Hades there’s pages on pages of indie games in Steam and the PS/Xbox marketplaces that hardly anyone has played or ever will again - many of which were well received by critics but just didn’t sell.
It’s a test bed for the latest Unreal Engine, but it still performs very well on a lot of different hardware.
The Switch port likely struggles because it needs to be compatible with the other versions for cross-platform play, so it has to support the same resources (map layout, assets, etc.) that higher-end machines do. There’s a limit to how much you can optimise something when you can’t fundamentally alter it and still keep your other customers happy.
Way back in 2018 they were able to optimize the game enough to bring the PS4 build up from a locked 30 fps to a locked 60. Few games have ever pulled off a feat like that, so I don’t think “horribly optimized” is fair.
Regarding this: Breath of the Wild runs at 60 FPS (ish) in handheld mode, but drops to 30 FPS when docked. This is regardless of resolution; you can set the resolution to 480p in docked mode and it'll still run at 30 FPS. It made the game significantly less playable for me; the large panning camera moves on a big screen look absolutely terrible at 30 FPS.
BotW targets 30 FPS in both docked and handheld mode and regularly drops frames in both - perhaps it’s just more noticeable on the big screen. There’s also no support on either the internal panel or HDMI-out for VRR (not that it would be helpful at sub-30 FPS anyway).
Oh wow, I could've sworn it ran at 60 FPS in hand-held mode. But all sources I can find online agree with you that it always targets 30 FPS.
I wanna do my own testing just to be completely sure, but assuming literally everyone else is correct, it really does speak to how much more noticeable low FPS is on larger screens.
That, and a PC is an incredibly inefficient gaming machine. Developers squeeze more out of consoles than PCs. An RTX 4090 has 5x the TFLOPS of an Xbox Series X, but it's maybe twice as fast in a cross-platform Windows/Xbox game.
Part of this is having to design for an amorphous target, whereas a console is a very known quantity for tuning resource usage (far beyond what you can put in a settings menu).
The other side of this is that the consoles have technology well ahead of PCs that people are unaware of, because for whatever reason they wrongly decided long ago that GPUs were all that matter.
As one example: the PS5 (Kraken) and XS consoles (Velocity) have dedicated decompression and storage-access hardware that allows access to the NVMe drive without chewing up CPU cycles. This is massive for resource loading and not something that currently exists in the PC world. Even where some amalgam of it might exist soon via DirectStorage, it won’t be ubiquitous, so it can’t be designed into the core of PC versions of games, because not everyone will have it.
This is a pretty critical difference when you see it in a game like Ratchet & Clank: Rift Apart or Spider-Man 2. There’s no equivalent that would allow those games to be ported to PC without significant compromises. Even texture streaming is dramatically faster than on the highest-end PC.
I can dig out the tests on this if anyone cares to see them, but reading from a gen 4 NVMe drive at 7 GB/s takes something north of 80% CPU usage on a 5800X, if memory serves.
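To put a rough, assumption-heavy number on why the dedicated hardware matters: if a software decompressor sustains something on the order of 1 GB/s per core, feeding a game compressed data at PS5-like throughput keeps most of an 8-core chip busy. The per-core rate below is my own illustrative assumption, not a figure from anyone's tests:

    /* Back-of-envelope sketch; the 1 GB/s-per-core decompression rate is an
       assumed figure for a fast LZ-family codec, purely for illustration. */
    #include <stdio.h>

    int main(void) {
        double stream_gb_s   = 7.0; /* gen 4 NVMe read rate cited above     */
        double per_core_gb_s = 1.0; /* assumed software decompression speed */
        int    cpu_cores     = 8;   /* 5800X-class part                     */

        double cores_needed = stream_gb_s / per_core_gb_s;
        printf("cores busy just on decompression: %.1f of %d (%.0f%% of the CPU)\n",
               cores_needed, cpu_cores, 100.0 * cores_needed / cpu_cores);
        return 0;
    }

Even if the real per-core number is a few times better, you're still burning cores that a console with a dedicated decompression block simply doesn't have to.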
They really don't have a happy path. Incorporating an API is not at all the same as being able to base the product around that feature.
Forspoken can’t base its design around instant load times, because most PCs don’t have DirectStorage over ultra-fast NVMe drives. Even those that do don’t have coprocessors to read from the drives, meaning they’re eating up massive amounts of compute to read that fast.
Not even a little bit the same as the situation on the PS5.
An API is designed for a developer to build a product on, with the API determining the interaction with the "base product". Game developers don't have to think about the "base product".
Yes, the PS5 has a special chip -- because its CPU is too anemic to decompress the data that fast.
Forspoken is a game that's not built with rapid loads in mind - it's designed to work on a range of hardware that will, in the worst case, include spinning hard disks (or, more realistically, basic SATA SSDs). That it can load assets relatively quickly on a high-end machine with DirectStorage completely misses the point - the pipeline just isn't there to load assets at a consistently fast pace like it is with the PS5 architecture (and the XS architecture).
In the example above from Rift Apart (which, to be fair, is not using DirectStorage to the best of my knowledge), the frame rate dives hard on any asset-loading task. DirectStorage is more efficient at this, but it still chews cycles to move the data in the absence of dedicated hardware - there's just no way around that logically, and that's the point. The developer is constrained to designing the game in a way that will work consistently across hardware.
Faster loading screens are better than slow loading screens, but they're not better than no loading screens at all. Likewise, having games introduce elevators to nowhere or tunnels of nothingness to allow assets to be loaded is not a problem that you can work around on current PC titles - it fundamentally limits how the game is designed, not just how well it performs.
The PS5 CPU is far from anaemic, and buying a really expensive CPU for a PC with double the core count to get less real-terms performance doesn't seem like a great trade-off.
I don’t think the problem is necessarily PCs being inefficient. IMO it’s more likely that it’s simply much harder to optimize for the random hardware combinations people run, and that there is a lot of pressure on devs to optimize console gameplay, likely at the expense of optimizing PC gameplay. Perhaps devs also see less of a need to, since most PCs are better equipped than consoles anyhow; that, and deadline pressure.
Something to keep in mind is that modern consoles do deliver many games in 60fps (and some in 120fps), while many PC gamers are targeting even higher frame rates that have diminishing returns and possibly aren’t even compatible with living room TVs that rarely exceed 60Hz and never (to my knowledge) exceed 120Hz.
When it comes to the typical 1080p resolution target, it’s really easy to hit 60FPS with very mediocre hardware in most current AAA games without sacrificing much if any visual fidelity. Owning a PC might give you the ability to bump the resolution or turn on inefficient enhancements like ray tracing, but someone sitting on a couch playing on a 60Hz $350 Costco TV isn’t going to experience a difference.
As an example, with the Cyberpunk Phantom Liberty benchmark, you’ll hit 60FPS average at 1080p Ultra settings (ray tracing off) with previous-generation mid-range hardware like the RTX 3060 or RX 6600. And of course this is without doing a single bit of console-specific optimization.
Shooter and esports examples like Call of Duty and Overwatch 2 are designed to be frame rate optimized and easily deliver a 120fps experience on modern consoles.
What consoles can do beautifully is give you a solid 75-100% of the “graphical fidelity to the casual observer” while being far weaker than a more expensive PC.
The console market isn’t really like the early 2000s where a crunchy 30fps was a more common expectation.
"5x the power on PC should give you 3x the performance" isn't exactly broscience, and while it's not sourced neither was the claim they're responding to.
People hook up their Switch to their 4K TVs all the time. Whatever upscaling tricks the Switch can use to get acceptable graphics there, an Intel GPU can use too.
Perhaps Intel's 2015 iGPU (the one tested here) isn't good enough to render games in comparable quality at 1080p, but after eight years Intel has made some advancements too.
Integrated graphics should be more than enough these days, but very few people bother to optimise for them the way they do for the Switch. It makes sense, in a way; you can't tell Switch owners their mid-tier GPU is insufficient, but in the PC world you're expected to buy a dedicated GPU if you want to play anything more complex than Candy Crush.
Perhaps with Intel entering the dedicated GPU market, we'll see more games optimised for Intel, so integrated GPUs can run games better.
I think it has something to do with how easy Switch games are to render. The Steam Deck, which uses an AMD GPU, has to contend with rendering a much wider array of games, and Valve doesn’t have as much control over those games as Nintendo does.
The Switch also has unified memory, which is a tremendous boost to performance for certain types of gaming workloads. The same doesn't apply to PC even if you're using a desktop iGPU, because the memory model is different.
IGPs have had unified memory for more than a decade. Consoles use variants of those.
Whether toolkits on PCs can transparently make use of unified memory when running on IGPs, though, I couldn't tell you, and I wouldn't expect game devs to cater much to that.
Unified memory is not about bandwidth or latency. On a desktop PC, you have CPU-side and GPU-side buffers with different addresses that may or may not map to the same physical memory, and the APIs require you to perform mapping/unmapping operations or explicit uploads and downloads.
On a target with unified memory, it's all one address space; the data is just where you put it. There are no split CPU and GPU memory pools. (Some consoles have different categories of memory, usually based on traits like write combining, but it's all one unified address space.) Even now with Vulkan, the experience of programming for a unified-memory console is very different.
A desktop PC driver can, ideally, optimize out many of your upload/download/map/unmap operations. With unified memory, they simply don't exist.
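A minimal sketch of what that difference looks like in code, assuming a Vulkan-style discrete-GPU path (staging buffer plus an explicit copy) versus a unified-memory target where the CPU writes directly into the buffer the GPU reads. The setup and synchronization are omitted, and the function names are my own; this isn't any particular console's API:

    /* Discrete GPU (PC): write into a host-visible staging buffer, then record
       a GPU-side copy into the device-local buffer the shaders actually read. */
    #include <vulkan/vulkan.h>
    #include <string.h>

    void upload_discrete(VkDevice dev, VkDeviceMemory staging_mem,
                         VkCommandBuffer cmd, VkBuffer staging,
                         VkBuffer device_local, const void *src, VkDeviceSize size)
    {
        void *dst = NULL;
        vkMapMemory(dev, staging_mem, 0, size, 0, &dst);
        memcpy(dst, src, size);
        vkUnmapMemory(dev, staging_mem);

        VkBufferCopy region = { .srcOffset = 0, .dstOffset = 0, .size = size };
        vkCmdCopyBuffer(cmd, staging, device_local, 1, &region);
        /* ...then submit and wait; only after that is the data usable by the GPU. */
    }

    /* Unified memory (console-style): the buffer the CPU writes is the buffer
       the GPU reads, so there is no staging allocation and no copy pass at all.
       Cache/coherency rules still apply, but it's one address space. */
    void upload_unified(void *shared_buffer, const void *src, size_t size)
    {
        memcpy(shared_buffer, src, size);
    }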
Modern GPUs allow mapping memory through PCIe, so you can treat it all as one pool. Resizable BAR effectively allows the same in the other direction, where all of the GPU's VRAM can be mapped into the CPU's address space.
The issue is that accessing an address can suddenly have orders of magnitude lower bandwidth and higher latency if it's resident on the far side of the bus, so for performance reasons you still really need to treat the two differently.
And it has to be noted that Apple Silicon also has a unified memory architecture. It should allow for some interesting gaming performance if any developer fully optimizes for it.
Is there any reason that, say, Intel integrated GPU hardware shouldn't be able to do the same? Even outside of gaming, the rise of composited desktops seems like motivation to push for low-copy, low-latency graphics systems.
That's not entirely true. I have 1080p screens and the downgrade is still noticeable. Even Ori and the Blind Forest (2D + 3D) looks way worse than on my PC.
Keep in mind that these scalers need additional buffers and video memory to operate properly, which can in turn lower your framerate and increase the game's resource usage. So they're not a free lunch, and some games simply can't generate the buffers necessary for these algorithms to work.
If you hang on to 10 frames of RGB plus motion vectors at 1080p, and have four 4K buffers where you're messing with output, that's 6MB * 2 * 10 + 25MB * 4 = 220MB of video memory. That's barely anything. And that's an extra-high estimate of the resolutions that would be involved in a Switch game.
But we're talking about a computer with several extra gigs lying around running the same game. And specifically that number is for 4K output; anything less and you're under 100MB.
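For anyone who wants to sanity-check that arithmetic, here's a throwaway snippet using the same assumptions (3 bytes per pixel, 10 history frames of colour plus motion vectors at 1080p, four working buffers at 4K):

    #include <stdio.h>

    int main(void) {
        const double MB = 1024.0 * 1024.0;
        double frame_1080p = 1920 * 1080 * 3 / MB; /* ~5.9 MB per RGB frame  */
        double frame_4k    = 3840 * 2160 * 3 / MB; /* ~23.7 MB per RGB frame */

        /* 10 history frames x (colour + motion vectors) + 4 working buffers at 4K */
        double total = frame_1080p * 2 * 10 + frame_4k * 4;
        printf("upscaler history + working buffers: ~%.0f MB\n", total); /* ~214 MB */
        return 0;
    }

Same ballpark as the ~220MB figure above, which used round numbers.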
Very interesting to compare the custom hardware approaches provided by AMD and Nvidia.
While I'm unaware of the exact reasons Nintendo went with Nvidia, they ended up with some legacy hand-me-downs instead of proper custom hardware to suit their needs.
Regardless, it is a good reminder of how the current hardware landscape allows developers to be lax about supporting limited specs by default. It is refreshing to see some nudges in the PC gaming space thanks to the Steam Deck and its similar hardware approach.