
Flaws on Oxide

Enigmoid

Platinum Member
Today Oxide (Star Swarm) appeared on Steam. I decided to benchmark it. I found nothing remotely impressive about it.

While Star Swarm is built to showcase Mantle, I can't compare DX and Mantle performance on my Nvidia GPU. Star Swarm utilizes a number of technologies which are designed to suck CPU power while making no visible difference.

Test Setup

Lenovo Y580

i7-3630QM (2.4 GHz base, 3.2 GHz boost)
8 GB RAM
Nvidia GTX 660M @ 1085/1250
Plextor M5M 256 GB
Nvidia driver 331.65

Observe. The absolute worst way to render a number of dots.

[screenshot]

(Extreme RTS Setting)

There is nothing great about this frame. Oxide has rendered the scene in the absolute worst way possible. You could simply use sprites for this scene and you would never know the difference.

On Low

[screenshot]


The dots are now a lot less distinct.

I am using RTS mode for comparison as I was GPU bottlenecked in attract mode.

For comparison, here is Attract Mode (close up).

Low

[screenshot]

[screenshot]


Extreme

[screenshot]

[screenshot]


(There is a horrendous amount of blur)

Then of course there is CPU and GPU usage.

Attract mode completely used up all GPU resources on Extreme.
However, in RTS mode, GPU utilization hovered around 50%, clearly a CPU bottleneck. Yet CPU usage was only ~25%. It appears that Oxide is only capable of using two threads (at least in RTS mode). Oxide also completely disables Turbo on my laptop.
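For reference, ~25% overall usage lines up with two fully busy threads on this chip; a quick sanity check, assuming the i7-3630QM's 4 cores / 8 logical processors:

```python
# The i7-3630QM exposes 8 logical processors (4 cores + Hyper-Threading).
# Overall CPU utilization when only two threads are fully busy:
logical_cpus = 8
busy_threads = 2
utilization_pct = busy_threads / logical_cpus * 100
print(utilization_pct)  # 25.0
```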

[screenshot]


(MSI Afterburner and Windows Task Manager report the same usage.)

Looks like Oxide uses two threads and forces the CPU to run at 2.4-2.5 GHz. The spike in the middle is a Prime95 run, which held 3.2 GHz steady. The application sitting open on the desktop (benchmark not running) used a stupid amount of CPU itself, so much so that CPU utilization doesn't really change when Star Swarm is launched.

RTS Extreme - avg FPS around 10
RTS Low - avg FPS around 22-28
Attract Extreme - avg FPS around 14-18
Attract Low - avg FPS around 50

If my CPU were running at its full boost clock and all four cores were being used, I would expect roughly 3x the FPS.
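That estimate follows from naive linear scaling; a sketch using the clocks from my setup (real engines rarely scale this cleanly, so treat it as an upper bound):

```python
# Naive model: FPS scales with (cores used) x (clock speed).
def scaling_factor(cores_now, ghz_now, cores_full, ghz_full):
    return (cores_full / cores_now) * (ghz_full / ghz_now)

# Observed: ~2 threads pinned at the 2.4 GHz base clock (Turbo disabled).
# Potential: 4 cores at the 3.2 GHz boost clock.
factor = scaling_factor(2, 2.4, 4, 3.2)
print(round(factor, 2))  # 2.67, i.e. roughly 3x
```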

Of course there are cases when fps dropped as low as 3-4 on extreme settings.

All I can say is that Oxide is as I said previously: not impressive. No LOD (why would you even think about rendering those ships as full geometry when all you see is a dot) and poor CPU management.
 
Not surprising that a Mantle demo does not use all DX11 features. What is surprising is how hard they're trying to sell Mantle by crippling DX11.

It's no wonder that they claim a 300% performance improvement over DX11 with Mantle. The <25% CPU utilization shows that they're doing immediate-context DX11 without threaded scene traversal. They put EVERYTHING on that one thread. Even BF4's DX11 path on AMD drivers uses token+replay to multithread without command lists.
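For what it's worth, "token+replay" just means worker threads record API commands as inert tokens and a single submission thread plays them back in order. A toy Python sketch of the idea (this is NOT D3D11 code; every name here is made up for illustration):

```python
from concurrent.futures import ThreadPoolExecutor

def record_chunk(objects):
    # Recording is cheap and thread-safe: no driver calls, just tokens.
    tokens = []
    for obj in objects:
        tokens.append(("set_state", obj))
        tokens.append(("draw", obj))
    return tokens

def replay(token_lists, device):
    # Only the submission thread touches the "device", in order.
    for tokens in token_lists:
        for cmd, arg in tokens:
            device.append((cmd, arg))  # stand-in for the real API call

scene = list(range(8))
chunks = [scene[i::4] for i in range(4)]  # split scene across 4 workers
with ThreadPoolExecutor(max_workers=4) as pool:
    token_lists = list(pool.map(record_chunk, chunks))

device = []  # stand-in for the immediate context
replay(token_lists, device)
print(len(device))  # 16 commands: 2 per object
```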

The level of laziness on the DX11 side is laughable. I just hope it doesn't carry over into real-world games, with developers not spending the effort to properly implement the DX11 render path.
 
You might want to read up on, or watch, the interviews with the Oxide devs about what their goals were with their engine, how it's engineered, and what is actually going on in what you are looking at.

The short version is that they were going for a render engine exactly like what Hollywood uses in movies, only in real time. Everything on the screen is an independent entity with full physics and AI where applicable.

http://forums.anandtech.com/showthread.php?t=2361756

https://www.youtube.com/watch?v=VC8RWntPRvI
 
The short version is that they were going for a render engine exactly like what Hollywood uses in movies, only in real time. Everything on the screen is an independent entity with full physics and AI where applicable.

Can a sprite not have physics and AI? Is Hollywood motion blur that ugly? What's the point of trying to apply non-realtime techniques to realtime game engines?

More importantly, why is the DX11 engine reminiscent of something released in 2009?
 
I would suggest not running in benchmark mode; that way you can get more info, and you can move to different units and move the camera however you want.

Also, the assets folder holds the scenario files and the settings file. You can adjust them from there.
 
Can a sprite not have physics and AI? Is Hollywood motion blur that ugly? What's the point of trying to apply non-realtime techniques to realtime game engines?
Absolutely, Hollywood motion blur can be that ugly when you use an HDR source and the blur length covers a full frame with way too few samples. 😉
Motion blur would be better with some additional blurring etc. (need to download the demo and check if one can force FXAA on top of it.)
They clearly have a lot of work ahead of them.

Does the demo allow tweaking of the sampling parameters?
Would love to see the artifacts they get when undersampling at high resolutions. (A shaded sample, let's say, once per 25 pixels.)

Their shading pipeline should be similar to this (but I doubt they use stochastic rendering, just an accumulation buffer):
http://fileadmin.cs.lth.se/graphics/research/papers/2014/atss/
 
Not surprising that a Mantle demo does not use all DX11 features. What is surprising is how hard they're trying to sell Mantle by crippling DX11.

It's no wonder that they claim a 300% performance improvement over DX11 with Mantle. The <25% CPU utilization shows that they're doing immediate-context DX11 without threaded scene traversal. They put EVERYTHING on that one thread. Even BF4's DX11 path on AMD drivers uses token+replay to multithread without command lists.

The level of laziness on the DX11 side is laughable. I just hope it doesn't carry over into real-world games, with developers not spending the effort to properly implement the DX11 render path.

Not surprised. It seems all the tricks in the book will be used to try to shovel Mantle out the door. The GPU benefit was 0, and now they do all they can to sell it on the CPU side.
 
Not surprised. It seems all the tricks in the book will be used to try to shovel Mantle out the door. The GPU benefit was 0, and now they do all they can to sell it on the CPU side.

I agree the CPU improvements seem awesome. This is a good example of the status quo.

I suppose lots of nubs don't understand how PR uses the best-case scenario.
 
There is nothing great about this frame. Oxide has gone about and rendered the scene in the absolute worst way possible. You could simply use sprites for this scene and you would never know the difference.

I seem to remember people saying pretty much the same thing about PhysX eye candy, but of course many who defend every PhysX implementation will be the ones who jump all over Mantle in order to do damage control for their favorite company.
 
It would suit some of you to go "Uhm, that looks promising, but let's wait and see, I have a feeling...", rather than going on a full-fledged FUD campaign.
 
I agree the CPU improvements seem awesome. This is a good example of the status quo.

I suppose lots of nubs don't understand how PR uses the best-case scenario.

CPU improvement is only awesome when they have already crippled DX. AMD still lacks a proper multithreaded DX driver as well.

And when they made it even worse in the demo on purpose, it's just sad.
 
CPU improvement is only awesome when they have already crippled DX. AMD still lacks a proper multithreaded DX driver as well. And when they made it even worse in the demo on purpose, it's just sad.

Yes, the CPU improvement is awesome, and also good for top-end rigs with CrossFire.

True, most games are poorly written in DX, so this is a realistic demo!!
 
CPU improvement is only awesome when they have already crippled DX. AMD still lacks a proper multithreaded DX driver as well.

And when they made it even worse in the demo on purpose, it's just sad.

And yet you keep ignoring things like this:


Johan Andersson @repi 20h @firefreak111 we support it but it is fundamentally broken in DX. check my slide #34 from 3 years ago:
http://www.slideshare.net/DICEStudio/directx-11-rendering-in-battlefield-3
https://twitter.com/repi


But you know, an honest representation of the situation isn't something you're actually after, right 🙄.


Who cares if AMD's DX11 driver isn't multithreaded when it doesn't actually buy you anything...

So NV supports DCLs, and so does Frostbite, and yet Mantle still shows massive performance improvements above and beyond it...
 
BS. You are draw-call limited. As simple as that.

Oh sure, I have no doubt. But it's essentially a single-threaded game, as shivansps showed.

The problem is the engine design. Hollywood-like rendering is fine and all, but the fact remains that even on Ultra the units are butt ugly up close, like running Maya with the default renderer instead of mental ray plus tweaking: much more blurry and ugly than anything in most games. The elimination of LOD is just like the tessellated concrete in Crysis 2, except multiplied by 100. The whole point of a video game engine is to get the most IQ with the least amount of processing, something which this demo is definitely not doing.
 
No it isn't, and in fact tessellation would be a great way to remove LOD. If all those units ended up in close proximity, which is completely possible, then what would you be saying?

Also, it's a demo. Why are you judging ART in a technical demo 🙄


some people are trying really hard and reaching really far...........
 
Higher resolutions demand larger textures, different LOD settings and resource management. I think that's the point of this benchmark.

In a year or two, that spaceship filling 16 pixels (4x4) on a 1080p display will take 64 pixels (8x8) on a 4K display once that becomes the new standard. That means an LOD threshold tuned so a tree in the distance looks fine at 1080p may look like crap on a 4K monitor. LOD is usually set for both geometry and textures. The only fair metric in this benchmark is no LOD settings at all.
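The pixel counts above can be sanity-checked with simple math, under the assumption that an object's projected size scales linearly with vertical resolution:

```python
# An object s pixels tall at 1080p is s * (H / 1080) pixels tall at
# vertical resolution H; its on-screen area grows with the square.
def projected_area(side_at_1080p, target_height):
    side = side_at_1080p * target_height / 1080
    return side * side

print(projected_area(4, 1080))  # 16.0 px at 1080p (4x4)
print(projected_area(4, 2160))  # 64.0 px at 4K (8x8)
```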

As for threaded performance, barely a handful of games use DX multithreaded rendering, and it's "fundamentally broken". This is an example of a regular DX game versus a cherry-picked Mantle best case.

That said, I think the same about this benchmark as about others like TessMark: pointless, as the scenario can't be met in an actual game.
 
Can a sprite not have physics and AI? Is Hollywood motion blur that ugly? What's the point of trying to apply non-realtime techniques to realtime game engines?

More importantly, why is the DX11 engine reminiscent of something released from 2009?

Rendering sprites in real time is a terrible idea. The only reason they are used in games today is exactly what the demo is trying to show you: games are draw-call limited.

Pixel art is lame, and hopefully Mantle will remove the need for it completely.
 
Turning off motion blur on my computer dropped the batch count by a factor of 3 to 6 and increased the FPS by a lot. I tried turning it on and off with the demo paused. One time I was around 23k batches with motion blur off and 120k with motion blur on. This is while paused, so the numbers didn't change as I flicked between motion blur on and off.
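That ratio would be consistent with accumulation-style motion blur re-drawing every batch once per blur sample. A rough back-of-the-envelope from the numbers above (the implied pass count is my inference, not anything the demo reports):

```python
# If accumulation-buffer motion blur re-renders the scene N times per
# frame, batch count scales by roughly N while blur is enabled.
batches_off = 23_000   # observed with motion blur off
batches_on = 120_000   # observed with motion blur on
implied_passes = batches_on / batches_off
print(round(implied_passes, 1))  # ~5.2 re-renders per frame
```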

With high batch count my CPU usage for this demo was around 24%, and GPU usage around 20%. With low batch count my CPU usage was around 70%, and GPU usage 99%.
 
Seems a lot of the complaints could be fixed with more LOD distances and better ship models and textures. That has nothing to do with the engine, just with what the artists created for the demo.
 