Your long rant contains some valuable insights, but also stuff like this that is nonsensical. Games target different FPS on different platforms all the time. Some console releases even offer a choice of high framerate or high resolution. Physics can very much be simplified from one platform to another; very few games these days feature heavily “physics-based” gameplay, so the devs can choose how elaborate they want their cloth or foliage simulations to be based on the target platform or Low/Med/Ultra settings.
I meant at the margins. Sure, you can offer 30FPS 4K vs 60FPS HD, but you can't offer a 25FPS baseline, not for most games. And if the game utilizes physics as a core mechanic, you can't just degrade that. You can't have your spaceship in orbit just fall out of orbit because you can't afford to run the algorithm. You have specific constraints based on your design decisions. There's a nice example of this in Factorio, which I've mentioned elsewhere, that explains why it's so heavily single-core constrained: they have determinism as a design requirement, and maintaining thread coherency in a deterministic simulation is very hard and expensive to do, to the extent that you can pretty quickly consume all of your gains in overhead management.
Note, the issue here isn't whether the hardware or programming language can or can't multithread; the problem is that you've set a constraint that is extremely unsuited to threading and multiple cores, so you get rapidly diminishing returns from adding threads/cores. Change that constraint like other games do and you can thread to your heart's content, relying on stochastic mechanisms to cover the coherency gaps.
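To make that concrete, here's a toy C++ sketch (mine, not anything from Factorio's actual code) of the tension. Entities draw from a shared pool; the serial loop applies withdrawals in a fixed order and is deterministic by construction, while the threaded version is race-free but scheduling-dependent, so restoring determinism means adding ordering machinery whose overhead eats the gains:

```cpp
// Toy illustration of why a hard determinism requirement fights threading.
#include <algorithm>
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

struct Entity { int want; int got = 0; };

// Deterministic: fixed iteration order, same result on every run and machine.
void update_serial(std::vector<Entity>& es, int& pool) {
    for (auto& e : es) {
        int take = std::min(e.want, pool);
        e.got = take;
        pool -= take;
    }
}

// Race-free but nondeterministic: which entity wins the last units of the
// pool depends on thread scheduling. Forcing a fixed order back in requires
// locks, per-tick phases, or buffered writes, and that coordination overhead
// is where the threading gains go. (One thread per entity is only for the
// sake of the example.)
void update_parallel(std::vector<Entity>& es, std::atomic<int>& pool) {
    std::vector<std::thread> workers;
    for (auto& e : es) {
        workers.emplace_back([&e, &pool] {
            int current = pool.load();
            int take;
            do {
                take = std::min(e.want, current);
            } while (take > 0 &&
                     !pool.compare_exchange_weak(current, current - take));
            e.got = take;
        });
    }
    for (auto& w : workers) w.join();
}

int main() {
    std::vector<Entity> a(8), b(8);
    for (int i = 0; i < 8; ++i) a[i].want = b[i].want = 3;
    int pool_serial = 10;
    std::atomic<int> pool_parallel{10};
    update_serial(a, pool_serial);
    update_parallel(b, pool_parallel);
    for (int i = 0; i < 8; ++i)
        std::printf("entity %d: serial=%d parallel=%d\n", i, a[i].got, b[i].got);
}
```

Run the parallel version a few times and the per-entity results can shuffle; the serial version never does.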
But I think you also highlight why AAA games are what they are: because they allow for a lot of smearing. There aren't a lot of hard constraints on a game like Skyrim, so you can adapt it to pretty much any environment by scaling LOD, frame rates, resolution, and the like. That's kind of an indictment of the industry, by the way, because it means entire other categories of games fall by the wayside not so much because they are harder to implement, but because they are less flexible when the marketing folks walk into the room and say 'hey, let's port this to iPhone'. And that's really what drives the industry, not making interesting or novel games.
In a similar vein, developers and/or their engines can easily include vectorized SIMD instructions, either “manually” through intrinsics or automatically via the compiler. Platforms that support it will just end up with faster load times or lower CPU frame times; it's not going to break anything.
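For what it's worth, here's the shape of what I mean as a minimal sketch: a portable scalar loop (which compilers will often auto-vectorize anyway at -O2/-O3) plus a hand-written SSE path that's only compiled in where the target supports it. Either way the result is identical; the SIMD path is purely a speed-up.

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

#if defined(__SSE__)
#include <xmmintrin.h>
#endif

// Portable baseline; compilers can frequently auto-vectorize this.
void scale_scalar(float* data, std::size_t n, float factor) {
    for (std::size_t i = 0; i < n; ++i) data[i] *= factor;
}

#if defined(__SSE__)
// Explicit SSE intrinsics: process four floats per iteration.
void scale_sse(float* data, std::size_t n, float factor) {
    const __m128 f = _mm_set1_ps(factor);
    std::size_t i = 0;
    for (; i + 4 <= n; i += 4) {
        __m128 v = _mm_loadu_ps(data + i);
        _mm_storeu_ps(data + i, _mm_mul_ps(v, f));
    }
    for (; i < n; ++i) data[i] *= factor;  // scalar tail
}
#endif

void scale(float* data, std::size_t n, float factor) {
#if defined(__SSE__)
    scale_sse(data, n, factor);    // faster where available
#else
    scale_scalar(data, n, factor); // same result everywhere else
#endif
}

int main() {
    std::vector<float> v(10, 1.5f);
    scale(v.data(), v.size(), 2.0f);
    std::printf("%f\n", v[0]);  // 3.0 regardless of which path ran
}
```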
The bigger thing is really: what is the developer incentive to do so? Gamers seem happy to drop $100 on some Platinum Edition pre-order of a game that has terrible performance problems, so the developer incentive is not really there.
Most games are bottlenecked by GPU these days at the resolutions and framerates people care about.
Right, but it's not like the GPU doesn't have similar constraints on RAM, how quickly you can stream assets to it, and so on. Someday someone will develop a game that requires raytracing as a gameplay element. I dunno, you can only see enemies in reflections or something. Understand, I'm not a game developer by profession, but I've been a gamer for half a century, I did write games in the ancient times, and I continued to write software throughout my career. I've seen all this stuff come and go, and the long and short of it is that the industry is extremely lowest-common-denominator oriented now; my friend group includes a load of Blizzard guys who have confirmed this. It's about a six-year development cycle for a AAA game, so quite often your launch platform doesn't even exist yet, apart from a compute target on paper. It's not like the 90s, when you could knock out a game in 18 months, often with prototype hardware in hand, and it was much easier to target specific hardware/compute features.
SIMD is not a magic easy button for game engines like Unreal and Unity that are heavily structured around “game objects” or “actors” and a single-threaded main update loop. It’s great, and I’m sure it’s being used for contiguous blocks of data: loading assets and decompression, some procedural generation tasks, etc.
It’s things like all the virtual function calls (in the case of Unreal) and garbage collection (in the case of Unity) that wreck game thread performance, as well as lazy developers slapping things they shouldn’t onto tick/update events. Going to an ECS, data-oriented style of programming can greatly improve this, but there’s only a subset of developers who like working that way, and it gets second-class treatment in the engines.
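Roughly the difference, sketched in C++ (illustrative only, not Unreal or Unity code): a per-actor virtual tick the compiler can't see through, versus one system walking a flat array of components that it can happily prefetch and vectorize.

```cpp
#include <cstddef>
#include <memory>
#include <vector>

// --- Object-oriented style: one virtual call per object per frame. ---
struct Actor {
    virtual ~Actor() = default;
    virtual void tick(float dt) = 0;  // indirect call, hard to inline/vectorize
};

struct MovingActor : Actor {
    float x = 0, vx = 1;
    void tick(float dt) override { x += vx * dt; }
};

// --- Data-oriented (ECS-ish) style: components in flat arrays, one loop. ---
struct Movement { float x = 0, vx = 1; };

void movement_system(std::vector<Movement>& ms, float dt) {
    // Contiguous, branch-free loop: cache friendly and easy for the compiler
    // to auto-vectorize, unlike the virtual dispatch above.
    for (auto& m : ms) m.x += m.vx * dt;
}

int main() {
    const std::size_t n = 100000;

    std::vector<std::unique_ptr<Actor>> actors;
    for (std::size_t i = 0; i < n; ++i)
        actors.push_back(std::make_unique<MovingActor>());
    for (auto& a : actors) a->tick(0.016f);  // per-object dispatch

    std::vector<Movement> movements(n);
    movement_system(movements, 0.016f);      // one tight loop
}
```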
Except those virtual function calls are how you make the economics of the industry work. You call it lazy, but without that you don't get portability. And that kind of abstraction is EVERYWHERE. We moved up from hand-written assembly to C, to C + Lua, from direct code to game libraries that carry a lot of overhead, and so on and so on. You're always trading potential performance for portability, for time to market, etc., because games have gotten so large in terms of code, assets, platform reach, and so on. You can bypass all that, of course, and you get wonderful games like Factorio, which is WAY better in terms of revenue per employee and things like that, and is likely much better optimized, but isn't going to turn $500M in sales.
One of the things I don't really like about the games industry is that the really big AAA games are visually very impressive and generally kind of sh*t games, and they aren't moving forward the underlying mechanics of games: storytelling, etc. They're mainly just bigger and shinier. And that's largely because there is this economic driver that pulls everything along with it, including the game engines, etc.
Where we tend to see developers focus CPU optimization effort in the industry (perhaps ironically) is on the part gamers never see: dedicated servers and the backend. There are real dollars associated with the savings there in terms of hosting costs.
Yep. But that's not just because there are real dollars; it's because they have absolute control over that hardware. A friend of mine was the lead for Battlenet when WOW first launched. I didn't see him for an entire year. But he had staff waiting at the HP factory (I think it was) to pull blade servers off the assembly line and drive them directly over to the datacenter to install them. They didn't need to write to a lowest common denominator like the WOW developers did; they could write to utilize every ounce of compute that specific bit of hardware had, because they specced it, and they were buying it in the hundreds or thousands and didn't need to port. They had control. The game developers didn't.