The Radeon 8500 could actually run Battlefield 2 at a playable framerate, while the GeForce 3 Ti 500 (faster than the 8500) and the GeForce 4 Ti 4600 (way faster) could not, due to their lack of PS1.4 support.
Ha! At what settings was it playable? Are we talking 35-40 fps at 1024x768? Let's assess how reasonable your argument is here.
BF2 came out June 21, 2005. The Radeon 8500 64MB came out October 17, 2001. Believe me, I played games during that time on a ViewSonic 19" 1600x1200 CRT, and as a previous owner of an 8500 64MB, by the point you cite it was a pile of garbage. Do you know how far GPU hardware had come by that point? On June 22, 2005, NV released the 7800GTX 256MB.
Voodoo Power Rankings have:
Radeon 8500 (DX8.1) -- 2.1 VP
GeForce 6600GT (DX9.0c) -- 6.3 VP (3X faster than the 8500)
GeForce 7800GTX 256MB (DX9.0c) -- 15.4 VP (7.33X faster than the Radeon 8500)
By June 2005, I bet one could buy a 6600GT for $80-100 max.
http://forums.anandtech.com/showthread.php?t=2298406
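A quick sanity check on those multipliers, nothing fancy, just the VP scores quoted above divided out (the scores themselves come from the linked thread):

```cpp
// Sanity-checking the "X times faster" claims from the VP scores above.
#include <cstdio>

int main() {
    const double radeon8500 = 2.1;   // VP, DX8.1
    const double gf6600gt   = 6.3;   // VP, DX9.0c
    const double gf7800gtx  = 15.4;  // VP, DX9.0c

    printf("6600GT  vs 8500: %.2fx\n", gf6600gt  / radeon8500);  // ~3.00x
    printf("7800GTX vs 8500: %.2fx\n", gf7800gtx / radeon8500);  // ~7.33x
    return 0;
}
```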
Your argument doesn't make any sense, since playing games on an 8500 at that time would have required insane compromises.
As for DX9, the 9700 was the first gen of DX9 and it could run DX9 games like Half-Life 2 with full settings. The trouble is that DX9 was quickly replaced by newer versions, especially DX9.0c, which the GeForce 6 series from Nvidia supported but the X8x0 series from ATI did not. That means the X850XT, one or two years after launch, simply couldn't play some new games, while a 6 series card could. The 6800's SM3.0 performance was not very good, but it allowed newer games to be played with reduced settings, while the X850 would display an error message and not launch the game. It also allowed SM3.0 features to be enabled on early DX9.0c games (with a big cost to performance), like HDR in Counter-Strike: Source.
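For context, this is roughly the startup gate being described: a minimal sketch using the standard D3D9 caps query. The game names in the comments are illustrative; any title with a shader-model minimum checks caps this way.

```cpp
// Minimal sketch of a DX9-era shader-model gate (MSVC, links d3d9.lib).
#include <d3d9.h>
#include <cstdio>
#pragma comment(lib, "d3d9.lib")

int main() {
    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
    if (!d3d) return 1;

    D3DCAPS9 caps = {};
    d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps);

    // PS1.4 minimum (the BF2 case): GeForce 3/4 Ti report PS1.1/1.3
    // and fail here; the Radeon 8500 reports PS1.4 and passes.
    if (caps.PixelShaderVersion < D3DPS_VERSION(1, 4)) {
        printf("Error: pixel shader 1.4 or better required.\n");
        d3d->Release();
        return 1;
    }

    // Optional SM3.0 paths (e.g. HDR): the X850 reports PS2.x and takes
    // the fallback, while GeForce 6 cards enable the extra features.
    const bool sm3 = caps.PixelShaderVersion >= D3DPS_VERSION(3, 0);
    printf("SM3.0 features %s\n", sm3 ? "enabled" : "disabled");

    d3d->Release();
    return 0;
}
```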
Exact same story as above. I am not going to go digging up reviews and games. By the time SM3.0 came into play, the entire GeForce 6 stack had become outdated. How do I know? I had a 6600GT and I upgraded to a Radeon HD4890.
But let's go with your story:
GeForce 6800 Ultra 256MB (DX9.0c) -- 10.0 VP
Radeon HD 4890 2GB (DX10.1) -- 88 VP (8.8X faster than the GeForce 6800U)
Again, your argument makes no sense. Neither the 9700Pro/9800Pro nor the 6800Ultra/X850XT were good enough for modern DX9 games. I had a 1600x1200 monitor, which meant there was no way I could have bought a card and used it for 4-5 years as you want to imply. Most of us upgraded way more frequently in the past.
The 5 years from 2010 to 2015 had a lot of stability in the OS and API, and even in the Nvidia architecture overall, while in the past we were used to a lot more change. A 480 or even a 460 can play current games a lot better than a 2000-era card could in 2005, or a 2005-era card could in 2010. Five-year-old cards are more relevant now than they used to be.
I've just finished Witcher 3 with a Radeon 5800
You mean Radeon 8500, not 5800? Look, if you like gaming at 800x600 or 1024x768 at 30 fps with everything on LOW, that's your choice, but don't try claiming that a GTX460, a 480, or especially a Radeon 8500 is going to provide a good TW3 experience.
Fact of the matter is, Fermi and Kepler cards are themselves getting outdated faster than GCN 1.0 (770 < 280X/7970GHz, 680 < 7970/7950/280X, 780/OG Titan < 290, 780Ti < 290X), fast enough that DX12 tiers aren't going to matter as much.
Also, look up the other functionality of Fermi and Kepler cards with regard to DX12: they are behind GCN on the DX12 feature set.
If you are going to argue that Maxwell will perform better in DX12 games than GCN 1.1/1.2, that will depend on the game, I bet. I would be more worried about the GTX970's 3.5GB of VRAM vs. the 290/290X's 4GB by the time DX12 games roll around. As I said, where NV will have an inherent advantage is GameWorks titles and UE4, which isn't going to be because of DX12_1.
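Side note on the VRAM point: the advertised pool is all an application can see up front. A minimal sketch of the standard DXGI query; notably, a GTX970 reports the full 4GB here, since the slow 0.5GB segment is invisible to this call and only shows up under load.

```cpp
// Minimal sketch: reading each adapter's reported VRAM via DXGI.
#include <dxgi.h>
#include <cstdio>
#pragma comment(lib, "dxgi.lib")

int main() {
    IDXGIFactory* factory = nullptr;
    if (FAILED(CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory)))
        return 1;

    IDXGIAdapter* adapter = nullptr;
    for (UINT i = 0; factory->EnumAdapters(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i) {
        DXGI_ADAPTER_DESC desc;
        adapter->GetDesc(&desc);
        // Note: this is the advertised pool; a GTX970 reports 4GB here.
        printf("Adapter %u: %u MB dedicated VRAM\n",
               i, (unsigned)(desc.DedicatedVideoMemory / (1024 * 1024)));
        adapter->Release();
    }
    factory->Release();
    return 0;
}
```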
It means GCN 1.0 doesn't support the full DX12 feature list and won't ever. You need GCN 1.1, 1.2, Maxwell v2, or a Skylake IGP for that.
Dude, aren't you tired of constant AMD bashing? I guess Fermi, Kepler and Maxwell won't support some DX12 features either, since none of them support Resource Binding Tier 3 of DX12.
zlatan, a game developer, already confirmed that GCN supports Tier 3.
In this video per MS, Tier 1 = DX 12_1. Also, MS gives an explanation of the tiers:
http://channel9.msdn.com/Events/GDC/GDC-2015/Advanced-DirectX12-Graphics-and-Performance
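Rather than arguing over slide decks, anyone on Windows 10 can query their own card. A minimal sketch using the public D3D12 API: CheckFeatureSupport reports the resource binding tier and the highest feature level, 12_1 being the "DX12_1" in question.

```cpp
// Minimal sketch: querying binding tier and max feature level in D3D12
// (Windows 10 SDK, links d3d12.lib).
#include <d3d12.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

int main() {
    ID3D12Device* device = nullptr;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 __uuidof(ID3D12Device), (void**)&device)))
        return 1;

    // Resource binding tier (1/2/3), the thing being argued about here.
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS, &opts, sizeof(opts));
    printf("Resource binding tier: %d\n", (int)opts.ResourceBindingTier);

    // Highest supported feature level (0xc100 == 12_1).
    const D3D_FEATURE_LEVEL levels[] = {
        D3D_FEATURE_LEVEL_12_1, D3D_FEATURE_LEVEL_12_0,
        D3D_FEATURE_LEVEL_11_1, D3D_FEATURE_LEVEL_11_0,
    };
    D3D12_FEATURE_DATA_FEATURE_LEVELS fl = {};
    fl.NumFeatureLevels = (UINT)(sizeof(levels) / sizeof(levels[0]));
    fl.pFeatureLevelsRequested = levels;
    device->CheckFeatureSupport(D3D12_FEATURE_FEATURE_LEVELS, &fl, sizeof(fl));
    printf("Max feature level: 0x%x\n", (unsigned)fl.MaxSupportedFeatureLevel);

    device->Release();
    return 0;
}
```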
Also, you continue to discuss semantics, not reality. Right now the entire GCN 1.0 stack outperforms the entire Kepler stack it was meant to compete with, at every level. In fact, Kepler performs so poorly that the OG Titan and 780 are hardly faster than the 280X. It's possible NV might shove specific DX12 features not available on GCN into its GameWorks/UE4-engine partnered titles to purposely cripple GCN's performance, but that's expected given how they operate nowadays.
As has already been mentioned, by the time DX12 games arrive, chances are they will on average have far more advanced graphics than today's DX11 games. No one is going to be able to play those games well on a 7970 or GTX680 to start with. The 2GB of VRAM limitation of most Kepler cards will pretty much kill them off faster anyway. By December 2015, the HD7970 will turn 4 years old. Fermi is basically a write-off anyway, since a $150 R9 280 at stock outperforms a GTX580 by 50%. Once Pascal/14nm GPUs launch next year, everything today will move one tier down.