The original Titan was like 40%+ faster than the 7970 GHz. The same difference as between the 290X and Titan X.
Yeah, but thanks to Nvidia's revolutionary drivers, the Titan is now barely faster than a 280X.
Interesting points:
Nvidia wants to protect their work, which is understandable -- I addressed that in one of my points, which is "do no harm" -- and the performance hit on AMD is due to tessellation; AMD can still optimize with binary.
Yeeaah, as a 780 SLI owner, the way Kepler owners have been treated with recent drivers has ensured that I'm going with AMD next time around.
Generally, AMD's architectures have been more forward-looking than Nvidia's. Nvidia optimizes for the games that are out now and coming in the near future, and their proprietary effects work around the strengths of their latest architecture.
Kepler sucked at compute. GameWorks' effects are all compute-based now, leveraging Maxwell's strength there. AMD is pretty good at compute too, so they're not as badly impacted as Kepler.
When Pascal comes out, I'm sure Nvidia will create effects optimized for that architecture, and Maxwell will fall short in some new way. The Witcher 3 and other games make use of DirectCompute.
Take a look at this and check the DirectCompute results:
http://www.anandtech.com/show/8526/nvidia-geforce-gtx-980-review/20
Does the rough performance of the Nvidia cards look familiar? The little Kepler results are all well behind the early GCN cards. Only big Kepler is able to beat the 7970.
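To make "compute based" concrete: a GameWorks-style effect is basically a kernel swept over a grid or particle set every frame, which is exactly the kind of workload those DirectCompute benchmarks measure. A toy sketch (illustrative CUDA, not actual GameWorks code; all the names here are made up):

    // Toy water surface: one thread per grid cell, run once per frame.
    // Real effects (e.g. WaveWorks) are far more involved, but the workload
    // shape -- wide, shader-heavy compute with no fixed-function help -- is
    // the same thing that exposed little Kepler in those benchmarks.
    __global__ void waveStep(float* height, int w, int h, float t)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= w || y >= h) return;           // guard partial blocks
        float u = x * 0.05f, v = y * 0.05f;     // cell coordinates
        height[y * w + x] = 0.5f  * sinf(u + t) // cheap sum-of-sines "ocean"
                          + 0.25f * sinf(1.7f * v + 0.8f * t);
    }

    // host side, once per frame:
    //   waveStep<<<dim3((w + 15) / 16, (h + 15) / 16), dim3(16, 16)>>>(d_height, w, h, time);

Maxwell chews through that kind of thing; little Kepler doesn't.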
Nvidia just doesn't build forward-looking cards. They have enough influence on games to dictate when new features become prevalent, and they heavily optimize their drivers around the deficiencies of their cards.
I wouldn't be surprised if GCN eventually edges out Maxwell as well, despite losing in current games. It's been a trend between ATI and Nvidia since the start of the programmable-shader era. The Radeon 8500 was slower than the GeForce 3/4 but did better in later games (and DirectX 8.1 made a very noticeable visual difference; the games that used it looked closer to DX9 than DX8). The Radeon 9700 Pro started out faster than the GeForce FX and completely destroyed it once FP24/FP32 shaders were used. The Radeon X1800/X1900 series held up better than the GeForce 7 series, and now GCN is beating Kepler.
The 780 has a 384-bit memory bus and gets beaten by the 128-bit 960 at 1440p...
:thumbsup:
Good post.
At least you are trying to think about things on a deeper level. That seems ultra rare these days. Like, totally nonexistent.
I commend you, sir. A million times, as it is great to see that there are still people out there that I feel I can have a meaningful discussion with. I am telling you, I have been catching up on the forums this morning and almost lost all faith... wondering why I even bother. But then I read your post.
I am sure it will be completely ignored by most... wait, I am just being negative now, but as I stated already, I have kind of lost all my faith.
Anyway, I just want to add that I think it is a little more complicated than that. Obviously, though, you are right for the most part. With the consoles going GCN, and AMD's designs being very strong in DirectCompute (as well as OpenCL), Kepler is at a huge disadvantage. But it is not only compute; it is actually bigger than that. Others have stated (but been ignored) that fundamentally Maxwell and GCN are much closer to each other than either is to Kepler. Looking at the breakdown block by block, they are.
But why try to be logical about any of this? It is so much easier to invent things and mudsling, you know, go nuts. It's so much fun.
But back to my point. We could say that AMD's architecture was more forward-thinking; AMD intended GCN to last for many years. The industry seemed to go that route, and Nvidia did seem to adjust their Maxwell architecture to end up much more GCN-like.
That is how it ended up. But I think this goes back to Fermi, which is where Kepler evolved from. Fermi was very forward-thinking for its time; in the days of the 4890 and 5870, Fermi was a revolution. But obviously it was far from perfect. AMD's take, their shot, was GCN. Obviously, there was a lot more to look at by the time GCN showed up. We were moving in a new direction, and it wasn't into the dark; the future was visible, so to speak.
Fermi evolves into Kepler and AMD launches GCN. Then Nvidia launches a more GCN-like architecture called Maxwell.
There are other factors that may come into play, like CUDA, and whether there was any benefit specifically tied to the layout of the Kepler/Fermi SMs. It really doesn't matter though, because ultimately we have Maxwell and GCN much more alike than Maxwell and Kepler. Without these changes AMD should/could have had a huge advantage because of the GCN in the consoles.
We can say that Nvidia didn't look far enough out, that AMD was more forward-thinking. I could say that as architectures evolved, a better path emerged. AMD would have been in a unique position and could objectively decide on the route they wanted to take. Nvidia followed through with the Fermi-to-Kepler transition and ultimately adjusted with their Maxwell architecture.
I just feel like CUDA could have had a role in this, because we see no Maxwell DP CUDA monster. It just may have nothing to do with it, but I do find it interesting.
Anyway, just to end this: Nvidia's DirectCompute performance was so poor with Kepler, there is no way we would have seen GameWorks using DirectCompute like we see today. It would have all been CUDA-driven, like the water in Just Cause 2...
I mean, there is so much more we could be talking about. You know, on a PC tech forum. In my opinion, all of our discussions get reduced to worthless and childish rubbish. It is unfortunate. Terribly so.
Obviously there are deeper things happening, and true underlying reasons on a technical level, that we could be discussing! I think it would be so much more worthwhile. And hopefully people will take the time and have deeper discussions than we typically have.
So what does the 960 have over it, then? Fewer cores, less bandwidth, fewer ROPs, less VRAM.
If NV doesn't fix Kepler performance in something that has the potential to be GOTY, I won't be buying NV cards for a while. Do they think it's going to make me upgrade sooner if my GPUs stop performing? Sure, but I'll buy team red.
It's not that NV is nerfing Kepler -- it's just that it seems like they aren't even trying. But that's fine as long as Maxwell tops the charts; everything else is irrelevant. Now I expect the same to happen with Maxwell, so why would I upgrade to that?
But I guess that's what happens after posting large profits, seeing stock prices surge, and taking most of the market share. There's just less incentive for them to try.
First, I have no clue what you mean when you say "optimize with binary", as that makes no sense from a developer's point of view. Windows drivers are written in C or assembly.
And you're ignoring the elephant in the room, which is Kepler cards performing worse than AMD cards.
Memory bus width doesn't matter if it's not the bottleneck.
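Back-of-the-envelope from the public specs (plain C-style arithmetic; this ignores Maxwell's delta color compression, which stretches the 960's effective number further):

    // theoretical bandwidth (GB/s) = bus width (bits) / 8 * data rate (GT/s)
    float bw_gtx780 = 384.0f / 8.0f * 6.0f;  // 6 Gbps GDDR5 -> 288 GB/s
    float bw_gtx960 = 128.0f / 8.0f * 7.0f;  // 7 Gbps GDDR5 -> 112 GB/s

The 960 has well under half the raw bandwidth of a 780, so when it wins, it isn't the memory bus doing it; the workload just isn't bandwidth-bound there.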
I'm all for learning and objective discussions, but the GTX 960 only has 1024 CUDA cores and a 128-bit bus -- and it defeating a GTX 780 is very odd.
This is how I am feeling. I figured I would have my card longer, like the 470 I had. Since I was happy with the 470, I spent more to get the 780 Ti. Now I feel like I have been screwed over and wish I had gone with the AMD 290 instead (which I was looking at, but because of the crazy Bitcoin stuff, I did not go that way).
This. More than that, if there are serious flaws in the Kepler architecture, why are they most pronounced in games with Nvidia code in them? It would lead toward the conclusion that either Nvidia actively wants Kepler performance to suffer, or wants GCN performance to suffer relative to Maxwell and doesn't care what that means for Kepler. The best case I can think of is that Nvidia is merely totally apathetic about how their code performs on anything they're not actively selling.
That's how it goes when there's a new architecture.
I'd bet that Nvidia stopped optimizing for its Tesla-architecture GPUs when Fermi was released. Kepler was an evolved Fermi, so optimizations could trickle down.
Nvidia didn't want games to support DX10.1 because the 200 series didn't have it, but once Fermi arrived and brought Nvidia DX11 support, Nvidia obviously didn't care about the 200 series' disadvantage anymore.
Likewise, AMD probably stopped optimizing for the HD 5000/6000 when GCN was released. The same year, AMD even moved the HD 2000-4000 series to legacy support.
Not the case. I was using quadfire 5970s a year and a half after GCN was released and the performance was great; it was only because the VRAM on one of the cards was failing that I stopped using them.
