Originally posted by: Azn
Originally posted by: schneiderguy
Originally posted by: Azn
We know that quad cores have more cache than a dual core. This could be the real reason why they perform 5-10% faster than dual cores.
The e6750 and q6700 you were comparing have the same amount of cache per core.
No. The Q6700 shares 8MB of cache between its cores, while the E6750 has 4MB.
You're dodging the issue. The point is that you don't need to see zero change in the scores to show you're CPU limited; the performance just has to be disproportional to the workload or hardware change.
Originally posted by: Azn
Not necessarily CPU bound. It could just be overhead of the game.
So I'll ask you for the fourth time why you're asking Taltamir to underclock his CPU when according to your claim CPU clock speeds won't make much difference?
Originally posted by: Azn
When Anandtech did their article the CPU clock speeds weren't generating real performance gains. The only things that mattered were architecture and cache.
You'd be wrong then. In fact it's one of the few engines out there where a slower quad can beat a faster dual.
Originally posted by: Azn
I don't think the UT3 engine is optimized for quad cores at all...
You have absolutely no evidence to suggest this is happening in Mass Effect. The external limit (60 FPS game cap) appears to not affect Taltamir so the most obvious answer then becomes a CPU limitation.
Originally posted by: Azn
It's not magical. It's just the way a game was programmed. I've seen this happen in some games over the years. It doesn't matter how much gpu power you pump into the game, and raw CPU mhz doesn't change a thing either.
See Azn, this is exactly why people get annoyed. Even when your arguments are proven wrong you respond with irrelevant rhetoric to occlude the issue and weasel out of the situation.
Originally posted by: Azn
So it does. Good info. The difference is there but not a huge one like going from single core to a dual core.
Phenom isn't so bad for UT3 considering a 2.6GHz Phenom is neck and neck with a Q6600 @ 2.4GHz.
Originally posted by: schneiderguy
Originally posted by: Azn
Originally posted by: schneiderguy
Originally posted by: Azn
We know that quad cores have more cache than a dual core. This could be the real reason why they perform 5-10% faster than dual cores.
The e6750 and q6700 you were comparing have the same amount of cache per core.
No. The Q6700 shares 8MB of cache between its cores, while the E6750 has 4MB.
No it doesn't. The two pairs of cores each have access to 4MB of cache. The one pair of cores on the E6750 has access to the same amount.
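For anyone who wants to check the topology themselves, Windows reports it through GetLogicalProcessorInformation. A minimal sketch (assuming Windows XP SP3 or later, error handling omitted) that prints each L2 cache and the mask of logical CPUs sharing it:

```cpp
#include <windows.h>
#include <cstdio>
#include <vector>

int main() {
    DWORD len = 0;
    // The first call intentionally fails and reports the buffer size needed.
    GetLogicalProcessorInformation(nullptr, &len);
    std::vector<SYSTEM_LOGICAL_PROCESSOR_INFORMATION> info(
        len / sizeof(SYSTEM_LOGICAL_PROCESSOR_INFORMATION));
    if (!GetLogicalProcessorInformation(info.data(), &len)) return 1;

    for (const SYSTEM_LOGICAL_PROCESSOR_INFORMATION& e : info) {
        if (e.Relationship == RelationCache && e.Cache.Level == 2) {
            // On a Q6700 you should see two 4096KB entries, each shared by
            // one pair of cores; on an E6750 a single 4096KB entry for both.
            printf("L2: %luKB shared by CPU mask 0x%llx\n",
                   (unsigned long)(e.Cache.Size / 1024),
                   (unsigned long long)e.ProcessorMask);
        }
    }
    return 0;
}
```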
Originally posted by: BFG10K
You're dodging the issue. The point is that you don't need to see zero change in the scores to show you're CPU limited; the performance just has to be disproportional to the workload or hardware change.
So I'll ask you for the fourth time why you're asking Taltamir to underclock his CPU when according to your claim CPU clock speeds won't make much difference? Do you understand how your comments contradict each other?
You'd be wrong then. In fact it's one of the few engines out there where a slower quad can beat a faster dual.
You have absolutely no evidence to suggest this is happening in Mass Effect. The external limit (60 FPS game cap) appears to not affect Taltamir so the most obvious answer then becomes a CPU limitation. Furthermore, asking him to underclock his CPU while claiming MHz doesn't impact the UT3 engine is laughable. It's like you're trying to set up a false test so you can say "I told you so".
Originally posted by: Azn
Taltamir, some games do have an overhead and are not necessarily CPU bound like you think. Frame rates don't increase much when you lower or raise the resolution as long as you have enough GPU power to run the game. Only when the GPU runs out of steam at a given resolution does it start to drop frames. You did gain when you lowered the resolution; not a huge gain, but it improved nonetheless. Mass Effect could be one of those games.
Originally posted by: BFG10K
See Azn, this is exactly why people get annoyed. Even when your arguments are proven wrong you respond with irrelevant rhetoric to occlude the issue and weasel out of the situation.
Originally posted by: Azn
So it does. Good info. The difference is there but not a huge one like going from single core to a dual core.
Phenom isn't so bad for UT3 considering a 2.6GHz Phenom is neck and neck with a Q6600 @ 2.4GHz.
Originally posted by: Azn
Originally posted by: schneiderguy
Originally posted by: Azn
Originally posted by: schneiderguy
Originally posted by: Azn
We know that quad cores have more cache than a dual core. This could be the real reason why they perform 5-10% faster than dual cores.
The e6750 and q6700 you were comparing have the same amount of cache per core.
No. The Q6700 shares 8MB of cache between its cores, while the E6750 has 4MB.
No it doesn't. The two pairs of cores each have access to 4MB of cache. The one pair of cores on the E6750 has access to the same amount.
Not quite. I've seen where a quad core is actually faster than a dual core even though it's not quad optimized. That only leaves cache differences.
Originally posted by: chizow
If you look at the 3GHz results it's very clear that 4870 CF is not scaling well compared to a single 4870 and results in the same performance as 4850 CF even at 2560. Now look at the 4GHz results and you'll see the story is entirely different, with a single 4870 @ 4GHz outperforming 4870 CF @ 3GHz at 1920 and 4870 CF @ 4GHz distancing itself from 4850 CF.
Originally posted by: taltamir
So you have SEEN quad cores be faster; you just attribute it to them having more cache, even though they DO NOT, because the cache is not shared (just like in a multi-GPU video card).
Originally posted by: RussianSensation
Originally posted by: chizow
If you look at the 3GHz results it's very clear that 4870 CF is not scaling well compared to a single 4870 and results in the same performance as 4850 CF even at 2560. Now look at the 4GHz results and you'll see the story is entirely different, with a single 4870 @ 4GHz outperforming 4870 CF @ 3GHz at 1920 and 4870 CF @ 4GHz distancing itself from 4850 CF.
True enough, but in a game that isn't graphically intensive to begin with. I can play UT3 at 1920x1080 with every option in the menu on HIGH on an 8800GTS 320mb! So it would be similar to me showing a benchmark of Far Cry 1 and telling you it's being 'bottlenecked' at 100fps...
Not all games will be equally gpu limited. Obviously games like HL2 are very cpu limited with today's graphics cards. But to begin with you wouldn't consider buying 4850/4870s in CF to play those types of games. So the point is rather irrelevant. You buy CF setups for Oblivion, GRID, Crysis and so on, where you'll get far greater benefit from adding a 2nd card than overclocking a C2Q from 3.0ghz to 4.0ghz.
Originally posted by: taltamir
Originally posted by: Azn
Originally posted by: schneiderguy
Originally posted by: Azn
Originally posted by: schneiderguy
Originally posted by: Azn
We know that quad cores have more cache than a dual core. This could be the real reason why they perform 5-10% faster than dual cores.
The e6750 and q6700 you were comparing have the same amount of cache per core.
No. The Q6700 shares 8MB of cache between its cores, while the E6750 has 4MB.
No it doesn't. The two pairs of cores each have access to 4MB of cache. The one pair of cores on the E6750 has access to the same amount.
Not quite. I've seen where a quad core is actually faster than a dual core even though it's not quad optimized. That only leaves cache differences.
So you have SEEN quad cores be faster; you just attribute it to them having more cache, even though they DO NOT, because the cache is not shared (just like in a multi-GPU video card).
Originally posted by: chizow
Originally posted by: RussianSensation
Originally posted by: chizow
If you look at the 3GHz results it's very clear that 4870 CF is not scaling well compared to a single 4870 and results in the same performance as 4850 CF even at 2560. Now look at the 4GHz results and you'll see the story is entirely different, with a single 4870 @ 4GHz outperforming 4870 CF @ 3GHz at 1920 and 4870 CF @ 4GHz distancing itself from 4850 CF.
True enough, but in a game that isn't graphically intensive to begin with. I can play UT3 at 1920x1080 with every option in the menu on HIGH on an 8800GTS 320mb! So it would be similar to me showing a benchmark of Far Cry 1 and telling you it's being 'bottlenecked' at 100fps...
Not all games will be equally gpu limited. Obviously games like HL2 are very cpu limited with today's graphics cards. But to begin with you wouldn't consider buying 4850/4870s in CF to play those types of games. So the point is rather irrelevant. You buy CF setups for Oblivion, GRID, Crysis and so on, where you'll get far greater benefit from adding a 2nd card than overclocking a C2Q from 3.0ghz to 4.0ghz.
If you were getting 100FPS then you wouldn't need to upgrade. But since you're not, the only situation you'd fully benefit from a 4870CF with a slower CPU is in heavily GPU bound situations, which only occurs at 2560 even in the games you pointed out. At that point you'd need to ask yourself if 4870CF or 4870X2 is worth it over 4850CF, or even 9800GTX SLI or 9800GX2 since you'll see they all perform about the same in CPU bottlenecked situations. You can see this in AT's benches even in the "intensive" games you mentioned. Minimal difference at 1680 and 1920 due to CPU bottlenecking.
Even some of the most intensive games out there like Crysis and Age of Conan will not scale beyond a certain FPS due to CPU limitations. Crysis is easy to see, as you'll never see a benchmark with 100FPS even at very low resolutions. Age of Conan is even more obvious, as 2x, 4x and 8xAA are free at 1920x1200 running ~52FPS on a 3.66GHz C2Q. So for people complaining they can't get higher than, say, 45FPS in AoC, there's a reason: it's either heavily CPU limited or there's an external frame cap. Nothing will raise this frame cap except a faster CPU (or removal of the limit, if it's something the devs built into the engine).
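A rough way to sanity-check a suspected CPU limit is the resolution test described here: drop the pixel count and see whether the frame rate moves. A minimal sketch, assuming you already have average FPS measured at a low and a high resolution; the 15% threshold is an arbitrary assumption, not a number from any benchmark suite:

```cpp
#include <cstdio>

// Given average FPS at a low and a high resolution, guess whether the
// scene is CPU-bound. If a large drop in pixel count barely moves the
// frame rate, the GPU has headroom, so the ceiling is the CPU (or an
// engine frame cap).
bool looks_cpu_bound(double fps_low_res, double fps_high_res) {
    return fps_low_res / fps_high_res < 1.15;
}

int main() {
    // Numbers in the spirit of the AoC example above: ~52FPS at 1920x1200
    // that hardly changes when the GPU load is reduced.
    printf("%s\n", looks_cpu_bound(54.0, 52.0)
                       ? "CPU-limited (or frame-capped)"
                       : "GPU-limited");
    return 0;
}
```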
Originally posted by: Azn
At least I acknowledge when I'm mistaken but you? :laugh:
Originally posted by: chizow
If you were getting 100FPS then you wouldn't need to upgrade. But since you're not, the only situation you'd fully benefit from a 4870CF with a slower CPU is in heavily GPU bound situations, which only occurs at 2560 even in the games you pointed out. At that point you'd need to ask yourself if 4870CF or 4870X2 is worth it over 4850CF, or even 9800GTX SLI or 9800GX2 since you'll see they all perform about the same in CPU bottlenecked situations.
Originally posted by: chizow
Originally posted by: Azn
At least I acknowledge when I'm mistaken but you? :laugh:
LMAO. I'm sure BFG will have something to add to the rest of what you wrote, but I'm not going to reply to any more of your nonsense until you do as you say, put on your Smart Guy X-Ray glasses and show us what it's like to experience the "life of person who can see a bit more than others", and admit to being wrong with the following:
1) Posting some link to an AT article in this thread that you didn't and still don't understand which has nothing to do with your claim that ME isn't CPU intensive.
2) Claiming the 3870 is a faster card than the 9600GT in Age of Conan based on "Highest Playable" even when the "Apples to Apples" comparison showed the 9600GT was faster at a higher resolution and higher in-game settings.
3) Claiming the 9800GTX was much faster than the 8800GTX even when linked benchmarks showed them within a few FPS both ways. Ironically now you're claiming the 8800GTX and 4850 are at the same performance level, which by association means you're saying the 9800GTX is faster than the 4850. I'm sure there's quite a few here that would disagree with you.
I could go on but we can start with that. In the "gangsta life I live" you would've been asked to leave the conference room long ago, or at the very least muzzled if you consistently showed similar incompetence.
That's not true at all though, especially when that 3.0GHz isn't enough to push a 4870CF's performance higher than the 4850CF set-up. Honestly what you're arguing doesn't make sense; if it comes down to semantics over "cpu limited" vs. "cpu bottlenecked" then it doesn't make sense to continue arguing.
Originally posted by: RussianSensation
But this happens every single generation! Wait 6-8 months and there will be another Far Cry, another Oblivion and another Crysis, and your 4850CF setup will be utilized to its fullest and a C2Q 3.0ghz will be more than sufficient to feed it. If you don't benefit from those cards now, yes you are cpu limited, but not Bottlenecked!
Comparing the GTX 280 to a single 4850 makes no sense, especially if you're getting into GPU bound situations like 1920+4xAA. The GTX is the faster card without a doubt; how much faster depends on the game. If a 4850 is 60% of a GTX 280 and you CF'd them then of course you'd expect to see an increase over the GTX 280 with good scaling. That just means the GTX 280 isn't hitting whatever frame limits at that resolution and CPU speed. If you want to compare Apples to Apples though, you need look no further than the 4870CF vs 4850CF. When you see at 1680 and 1920 with no AA that there is no benefit of 4870CF over 4850CF, isn't it obvious you're CPU bottlenecked?
Originally posted by: RussianSensation
Just look at the benches I posted above where GTX 280 doubles 4850's frames. The game is clearly GPU limited. Of course at some point you are always either gpu or cpu limited. It's always been said that CF/SLI setups are only useful at the highest resolutions and AA possible. There is nothing new here. Yet 4850CF still outperforms GTX 280 in a lot of games despite this "CPU bottleneck" at resolutions lower than 2560x1600 with a "slow" 3.0ghz Quad.
It doesn't ease at some magical interval; it scales. I'm sure you would see an incremental increase as you increased your CPU clockspeed. We're only seeing the advantage of faster CPUs now that there are faster GPU solutions out there, although there were some convincing results before with SLI configs and Crysis (check Derek's comments in his Crysis Tri-SLI review). I'm quite certain there is not a CPU available, short of an LN2-cooled one, that's fully able to take advantage of things like X2 CrossFireX or GTX 280 Tri-SLI.
Originally posted by: RussianSensation
Also consider the benches you linked of the 3.0ghz vs. 4.0ghz Quad. We don't know at which point the limitation eases - is it 3.4ghz, 3.6, 3.8ghz, or do you truly need 4.0ghz? Regardless, who is complaining about getting 100 vs. 120 frames in 4870CF? If you don't need it, save your $.
But in this case it has been 2-3 generations and we're still dealing with the same once-fast Core 2. I can guarantee you graphics cards have scaled more in speed in the last 2 years than the Core 2; if 8800GTX SLI was the fastest thing 2 years ago, there are already single-GPU cards nearly its speed that can also be CF/SLI'd. And I'm quite certain you won't get 60FPS with any single card in Crysis, probably not even at very low resolutions and settings.
Originally posted by: RussianSensation
People always say the cpu is bottlenecking, and time and time again the cpu continues to outlast 2-3 generations of graphics cards. It's funny, first people complain they are getting 15fps in Crysis at 1920x1200 on a $500 8800GTX, then they complain the game is stuck at 60FPS at 1920x1200 on a $150 graphics card...
Originally posted by: Azn
It's not a multi-GPU card, first off, and it doesn't work that way.
Here's an example of how it might work with a quad core in a game that is only dual-core optimized.
Let's say cores 1 and 2 share one 4MB cache and cores 3 and 4 share the other 4MB. You can run one thread on core 1 to utilize the full first 4MB and another on core 3 to utilize the other 4MB, which is a total of 8MB of cache for a dual-core-only game.
A dual core has only 4MB of cache to share between its 2 cores, so it will always be limited to 4MB.
Originally posted by: chizow
That's not true at all though, especially when that 3.0GHz isn't enough to push a 4870CF's performance higher than the 4850CF set-up. Honestly what you're arguing doesn't make sense; if it comes down to semantics over "cpu limited" vs. "cpu bottlenecked" then it doesn't make sense to continue arguing.
Comparing the GTX 280 to a single 4850 makes no sense, especially if you're getting into GPU bound situations like 1920+4xAA.
If you want to compare Apples to Apples though, you need look no further than the 4870CF vs 4850CF. When you see at 1680 and 1920 with no AA that there is no benefit of 4870CF over 4850CF, isn't it obvious you're CPU bottlenecked?
It doesn't ease at some magical interval; it scales. I'm sure you would see an incremental increase as you increased your CPU clockspeed.
But in this case it has been 2-3 generations and we're still dealing with the same once-fast Core 2. I can guarantee you graphics cards have scaled more in speed in the last 2 years than the Core 2; if 8800GTX SLI was the fastest thing 2 years ago, there are already single-GPU cards nearly its speed that can also be CF/SLI'd.
Like I said the first time, no one is saying don't upgrade the GPU first; just don't be surprised if you don't see as big of a difference as you expected if you have a slower CPU, or if you don't see any difference compared to a "slower" solution.
Originally posted by: RussianSensation
Originally posted by: chizow
That's not true at all though, especially when that 3.0GHz isn't enough to push a 4870CF's performance higher than the 4850CF set-up. Honestly what you're arguing doesn't make sense; if it comes down to semantics over "cpu limited" vs. "cpu bottlenecked" then it doesn't make sense to continue arguing.
Just like the X2 5200+ wasn't fast enough to push an 8800GTX in HL2 compared to a Q6600 at 3.0ghz... You could have easily said back then the X2 5200+ is the bottleneck. Would you tell someone they shouldn't waste their $ on getting a 4870 with a 5200+ because it'll bottleneck it over an 8800GTX?
No I'm not, I'm talking about what are supposed to be the fastest solutions performing the same as slower solutions with slower CPUs, but performing as they should with faster CPUs. There's obviously a huge gap between single and dual GPU solutions; the only cards that can really be compared single-GPU to multi-GPU are the GTX 280 and in some cases the 4870 and GTX 260.
Originally posted by: RussianSensation
That's the exact same thing you are talking about now. You are describing being "cpu limited" (i.e. if you add a slightly faster cpu, you get slightly more performance).
And I haven't said anything to contradict this. As I said earlier, Mass Effect is a bit of both; you'll certainly benefit more from a faster GPU, but that doesn't mean you aren't CPU bottlenecked.
Originally posted by: RussianSensation
Mass Effect is GPU bottlenecked with a cpu limitation, it's plain and simple. You can get a greater increase in performance from upgrading to a new gpu, except if you are already running something like a 4850CF setup. So in this case you can technically call it a "CPU bottleneck" for that one specific game.
lol c'mon, not only is that clearly a GPU limited resolution and IQ setting, but those results are most likely due to lack of frame buffer.
Originally posted by: RussianSensation
However, there is no way I can conclude that, since we do not know how well the drivers have been programmed to scale with 4870CF vs. 4850CF in this game -- yet you seem to be 100% certain we are looking at a cpu bottleneck. How about this?
GRID 2560x1600 4AA
4850 CF = 29.5
4870 CF = 30.4
Again, both cards have 1GB of frame buffer at a resolution that is historically GPU/frame buffer/bandwidth limited. We won't know for sure until we see tests with faster CPUs though.
Originally posted by: RussianSensation
If one were to just look at these 2 numbers in isolation with your logic, you'll quickly say "Oh, there is barely any performance improvement because the cpu is the bottleneck". Yet once we introduce GTX 280 SLI or 4870 X2, they completely outperform the CF setups:
GTX 280 SLI = 68.6
4870 X2 = 84.2
So please don't make statements like C2Q 3.0ghz is the "bottleneck" in a game as if it's 100% FACT.
But I'm not trying to prove a game is GPU limited... there's plenty more than 1 example, there are entire reviews showing 4850 and 4870CF performing nearly identically in many games at resolutions up to 1920.
Originally posted by: RussianSensation
Of course it makes sense. If you want to prove a game is GPU limited, you compare GPUs, not cpus... You are just pointing to 1 example where the game might be cpu limited after you throw a 4850CF setup on it... which only affects 5% of all gamers.
Which is why I don't put too much emphasis on results from different vendors and solutions with vastly different "expected" levels of performance. This is similar to the example you keep referring to, a single card vs. multi-GPU with a slower CPU. Yes you will see more performance with the 2nd GPU, but will you see as much as with a faster CPU also? Probably not.
Originally posted by: RussianSensation
Moreso, neither you nor I can say with certainty how much the 4870CF setup is being bottlenecked by a cpu (and at what speed precisely) vs. a driver issue. Look at any 4850 CF vs. 4870 CF benches and you'll see very little improvement in some, yet GTX 280 SLI outperforms them both! That's GPU limited, my friend. Just because CF doesn't scale doesn't mean the game is cpu limited.
But you can when you increase the CPU clocks and that results in the expected performance gains (as linked in my original reply).
Originally posted by: RussianSensation
No, it's not obvious. Please see my links above. You cannot just attribute 100% of the cause to the cpu.
If you're going to link to a 4 year old article, especially one from HOCP, you're really going to have to refine your argument and point out what you're referring to. I'm not going to reset my entire CPU/GPU frame of reference circa 2004 and go at it blind.
Originally posted by: RussianSensation
There have been plenty of cases in the past where an XP 2500+ was not sufficient, but once you scaled your cpu beyond an XP 3000+, the benefits in a game started to subside. So you cannot just assume the framerates will scale linearly from 3.0ghz to 4.0ghz to "prove" that you "need" a 4.0ghz C2Q...
Look Here
E6700 @ 2.66GHz - 146fps
E6700 @ 2.93GHz - 147fps (barely any!)
E6700 @ 3.30GHz - 153fps
E6700 @ 3.60GHz - 151fps
E6700 @ 4.0GHz - 150fps
E6700 @ 4.2GHz - 153fps
Now imagine I had linked you only the 2.0ghz and 4.2ghz benches: you wouldn't be able to conclude that 3.30ghz is sufficient to achieve the same frames as a 4.2ghz system, would you? So your argument that 4.0ghz is required might be correct, but it might not be, since we do not have a proper scaling graph (not to mention potential driver issues with CF).
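The point about needing the intermediate data points can be made concrete: with only the endpoints you cannot locate the knee, but with the full series you can scan for the clock where the FPS-per-GHz gain flattens out. A minimal sketch using the E6700 numbers quoted above; the 3 FPS-per-GHz cutoff is an arbitrary assumption chosen to match an eyeball reading of that series:

```cpp
#include <cstdio>

struct Point { double ghz, fps; };

// Return the clock speed at which extra MHz stops paying off, i.e. the
// first point after which the FPS gain per GHz falls below the cutoff.
double knee_ghz(const Point* p, int n, double min_fps_per_ghz) {
    for (int i = 1; i < n; ++i) {
        double slope = (p[i].fps - p[i - 1].fps) / (p[i].ghz - p[i - 1].ghz);
        if (slope < min_fps_per_ghz) return p[i - 1].ghz;  // gains flattened
    }
    return p[n - 1].ghz;  // still scaling at the top of the series
}

int main() {
    // The E6700 series quoted above.
    Point pts[] = {{2.66, 146}, {2.93, 147}, {3.30, 153},
                   {3.60, 151}, {4.00, 150}, {4.20, 153}};
    printf("Scaling flattens around %.2fGHz\n", knee_ghz(pts, 6, 3.0));
    return 0;
}
```

With only the 2.66GHz and 4.2GHz rows there is nothing for the scan to work with, which is exactly the objection being raised.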
Well I don't think game demands, with the exception of Crysis, are scaling as fast as GPU advancements. I think it's pretty obvious when the G80/G92 made 1920x1200 a viable resolution (look at G80 reviews, yes they were still using 1280). This next generation, along with the commoditization of affordable CF/SLI, has brought this a step further, where many games are showing CPU bottlenecking even at 1920 while offering excellent performance even at 2560 (and with AA on the 4870X2). I think we're going to continue seeing this until 1) we get faster CPUs, 2) devs start coding games to make better use of multiple cores, or 3) some of the effects like physics slowing down the CPU are accelerated on the GPU.
Originally posted by: RussianSensation
Yes, but games have gotten more complex as well. So far we haven't said anything that isn't known: for CF/SLI you want the fastest cpu you can afford; for everything else you are almost always GPU limited. Even then, you can expose the flaw in this argument, since SLI GTX 280s STILL outperform 4870s in CF despite the same 3.0ghz cpu, which means you are MORE gpu bottlenecked...
Originally posted by: schneiderguy
Originally posted by: Azn
It's not a multi-GPU card, first off, and it doesn't work that way.
Here's an example of how it might work with a quad core in a game that is only dual-core optimized.
Let's say cores 1 and 2 share one 4MB cache and cores 3 and 4 share the other 4MB. You can run one thread on core 1 to utilize the full first 4MB and another on core 3 to utilize the other 4MB, which is a total of 8MB of cache for a dual-core-only game.
A dual core has only 4MB of cache to share between its 2 cores, so it will always be limited to 4MB.
First of all, developers don't have control over which cores the threads are going to run on; Windows decides that, so there's no guarantee that the threads will each have access to 4MB of cache.
Second, even if Windows decides to put the two threads on cores 1 & 3 (so each has access to 4MB of cache), what happens when the threads need to exchange data? If they were able to share cache they could just take a look in there, but instead they have to go out to system memory to get the data that the other thread is working on, which is slower than just looking in the cache.
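For what it's worth, Windows does expose explicit pinning through SetThreadAffinityMask (though games rarely use it), so here is roughly what the scheme described above would look like if a developer opted in. This is a sketch only: the core numbering (cores 0/1 on one L2, cores 2/3 on the other) is an assumption and should be verified with the topology listing sketched earlier in the thread.

```cpp
#include <windows.h>

DWORD WINAPI worker(LPVOID) {
    // Game thread work would go here.
    return 0;
}

int main() {
    // Create both threads suspended so the affinity takes effect before
    // they run a single instruction.
    HANDLE a = CreateThread(nullptr, 0, worker, nullptr, CREATE_SUSPENDED, nullptr);
    HANDLE b = CreateThread(nullptr, 0, worker, nullptr, CREATE_SUSPENDED, nullptr);
    SetThreadAffinityMask(a, 1 << 0);  // core 0: assumed to sit on the first L2
    SetThreadAffinityMask(b, 1 << 2);  // core 2: assumed to sit on the second L2
    ResumeThread(a);
    ResumeThread(b);
    WaitForSingleObject(a, INFINITE);
    WaitForSingleObject(b, INFINITE);
    CloseHandle(a);
    CloseHandle(b);
    // The caveat above still applies: anything the two threads share now
    // round-trips over the FSB/memory instead of a common cache.
    return 0;
}
```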
Originally posted by: RussianSensation
Just like the X2 5200+ wasn't fast enough to push an 8800GTX in HL2 compared to a Q6600 at 3.0ghz... You could have easily said back then the X2 5200+ is the bottleneck. Would you tell someone they shouldn't waste their $ on getting a 4870 with a 5200+ because it'll bottleneck it over an 8800GTX? That's the exact same thing you are talking about now. You are describing being "cpu limited" (i.e. if you add a slightly faster cpu, you get slightly more performance). Mass Effect is GPU bottlenecked with a cpu limitation, it's plain and simple. You can get a greater increase in performance from upgrading to a new gpu, except if you are already running something like a 4850CF setup. So in this case you can technically call it a "CPU bottleneck" for that one specific game. However, there is no way I can conclude that, since we do not know how well the drivers have been programmed to scale with 4870CF vs. 4850CF in this game -- yet you seem to be 100% certain we are looking at a cpu bottleneck. How about this?
GRID 2560x1600 4AA
4850 CF = 29.5
4870 CF = 30.4
If one were to just look at these 2 numbers in isolation with your logic, you'll quickly say "Oh, there is barely any performance improvement because the cpu is the bottleneck". Yet once we introduce GTX 280 SLI or 4870 X2, they completely outperform the CF setups:
GTX 280 SLI = 68.6
4870 X2 = 84.2
So please don't make statements like C2Q 3.0ghz is the "bottleneck" in a game as if it's 100% FACT.
Of course it makes sense. If you want to prove a game is GPU limited, you compare GPUs, not cpus... You are just pointing to 1 example where the game might be cpu limited after you throw a 4850CF setup on it... which only affects 5% of all gamers. Moreso, neither you nor I can say with certainty how much the 4870CF setup is being bottlenecked by a cpu (and at what speed precisely) vs. a driver issue. Look at any 4850 CF vs. 4870 CF benches and you'll see very little improvement in some, yet GTX 280 SLI outperforms them both! That's GPU limited, my friend. Just because CF doesn't scale doesn't mean the game is cpu limited.
Originally posted by: Azn
Originally posted by: chizow
Originally posted by: Azn
At least I acknowledge when I'm mistaken but you? :laugh:
LMAO. I'm sure BFG will have something to add to the rest of what you wrote, but I'm not going to reply to any more of your nonsense until you do as you say, put on your Smart Guy X-Ray glasses and show us what it's like to experience the "life of person who can see a bit more than others", and admit to being wrong with the following:
1) Posting some link to an AT article in this thread that you didn't and still don't understand which has nothing to do with your claim that ME isn't CPU intensive.
2) Claiming the 3870 is a faster card than the 9600GT in Age of Conan based on "Highest Playable" even when the "Apples to Apples" comparison showed the 9600GT was faster at a higher resolution and higher in-game settings.
3) Claiming the 9800GTX was much faster than the 8800GTX even when linked benchmarks showed them within a few FPS both ways. Ironically now you're claiming the 8800GTX and 4850 are at the same performance level, which by association means you're saying the 9800GTX is faster than the 4850. I'm sure there's quite a few here that would disagree with you.
I could go on but we can start with that. In the "gangsta life I live" you would've been asked to leave the conference room long ago, or at the very least muzzled if you consistently showed similar incompetence.
LMAO. I can't wait until BFG weasels out of that one. :laugh:
Hypocrite much? You say you aren't going to reply, yet here you are replying to my post.
1. Already answered in this thread. Reading comprehension FTW!
2. If it's giving better graphical detail than the 9600GT and running faster, sure, why not.
3. You have some major reading comprehension problems, I see. Point to where I said the 9800GTX is "much faster" than the 8800GTX. I think it was you who said the 9800GTX is slower than the G80, because you had an 8800GTX with a major inferiority complex towards G92 at the time. I only mentioned the 9800GTX is faster at lower resolutions without AA and the G80 is faster with AA at some ridiculous higher resolutions. :brokenheart:
Your gangster life? Tough guy you. :laugh: You sell drugs and shoot guns for chump change?
Why don't you explain to me why the 260GTX wasn't able to crush the 4870, since you even had a thread claiming ROPs are the biggest factor when it comes to performance.