Nvidia GPUs soon a fading memory?



Scali

Banned
Dec 3, 2004
2,495
0
0
Yeah but a 5570 using DDR3 gets 28 GB/s, doesn't it?

But a 5570 has its own dedicated memory controller and video memory.
A CPU memory controller is very different: it's optimized for low latency and random access. A GPU is very deeply pipelined, so latency doesn't matter, and its access pattern is almost completely linear.
You've seen the benchmarks... even the latest and greatest Phenom II doesn't have a lot of bandwidth.
This is always going to be a problem...
On the Xbox/PlayStation they turned it around: the architecture is optimized for video memory, and CPU memory performance is sacrificed. But that's not a realistic option for a regular PC.
 

Edgy

Senior member
Sep 21, 2000
366
20
81
The need for more bandwidth could be resolved with an additional GPU-dedicated HyperTransport link incorporated into the die, which is not improbable: current 4/8-way Opterons have three HyperTransport links versus only one on desktop parts.
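(As a rough sanity check on what a single HyperTransport link actually carries - a Python sketch, assuming a standard 16-bit HT 3.0 link at a 2.6 GHz base clock; wider or faster links scale accordingly:)

# Rough HyperTransport 3.0 link bandwidth. Assumed: 16-bit wide,
# 2.6 GHz base clock, double data rate - a common Opteron configuration.
clock_hz = 2.6e9                   # HT 3.0 base clock
transfers_per_sec = clock_hz * 2   # DDR signaling
width_bytes = 2                    # 16-bit link, per direction
one_way = transfers_per_sec * width_bytes / 1e9
print(f"{one_way:.1f} GB/s per direction, {one_way * 2:.1f} GB/s aggregate")
# -> 10.4 GB/s per direction, 20.8 GB/s aggregate per link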

I am not so sure that memory bandwidth would be the limitation.

What I'm looking at are discrete graphics cards being sold today that are probably as large in surface area as a uATX motherboard. Forget the GPU; all that PCB real estate and all those components on a graphics card - where would they be integrated? The motherboard? That would be trying to fit almost two full PCBs' worth of components and ICs into one.

I'm not sure, because I don't know much about PCB and IC design for motherboards or graphics cards, but I really can't fathom such a thing happening easily at all.
 

rahvin

Elite Member
Oct 10, 1999
8,475
1
0
If I can add a small point: nVidia and their parrots keep talking about how big this HPC market is, and even though they made less than 1% of their earnings on HPC last year, nVidia claims it's going to sustain them and eventually be bigger than the discrete graphics market. Personally, I find it all a bunch of hogwash.

The only area of the HPC segment where these graphics chips and OpenCL are of any value is large matrix multiplication scenarios. This is a VERY small market: a little image processing, maybe some nuclear simulation, some climate modeling, and a few other specialties, amounting to maybe 10-15 supercomputers a year. There is a reason supercomputers cost tens of millions, and it's not because they are that expensive to produce; it's because so few are made that all the R&D goes into the cost. There are some additional areas, such as finite element analysis in structural engineering and materials science, where these cards might be useful, but this market will never be even 10% of the size of the enthusiast graphics market. Yet nVidia is betting the whole farm on HPC.
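(For scale, the workload class in question is essentially one big dense matrix multiply - a minimal NumPy sketch; the 8192x8192 size is purely illustrative:)

# Minimal sketch of the HPC workload class being discussed: a large
# dense matrix multiply (the size here is illustrative only).
import numpy as np

n = 8192
a = np.random.rand(n, n).astype(np.float32)
b = np.random.rand(n, n).astype(np.float32)
c = a @ b  # roughly 2*n**3 floating-point operations
print(f"~{2 * n**3 / 1e12:.1f} TFLOPs of work per multiply")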

The question is, do they actually think everyone is going to pay $2k for a card to do large matrix mathematics that is only needed in highly specialized professions? Or do they think they are going to sell $10 billion worth of cards to the military for nuclear simulation? Frankly, either nVidia is smoking some hefty crack, or they truly believe discrete graphics is a dead market in 5-10 years and are trying to position themselves in a market that will still exist.

With continued process shrinks and better FPU and matrix operations on the CPU, discrete graphics are going away. We are nearly at the point where a general CPU with some advanced FPUs (derived from graphics chips) will be capable of real-time ray tracing. Once we reach real-time ray tracing at HD or better resolution, the discrete graphics chip market is dead. Intel is 100% correct about that, and it's the reason AMD and nVidia saw Larrabee as such a threat: x86 FPUs with the power of discrete graphics chips would destroy the discrete graphics market. Both Intel and AMD moving in this direction at the same time is confirmation that they both see it as inevitable. Personally, with ATI in house, I believe AMD will get there first by integrating the ATI FPU units directly into the CPU, but Intel won't be far behind, even though their graphics technology is slower. They've already taken the first steps in expanding the x86 instruction set to make it possible.

I don't see much of a future for nVidia unless they can get the government to force Intel and AMD to let nVidia buy VIA and gain access to the third x86 license. Either that, or get Microsoft and all the software makers to port everything to ARM. I would love to see nVidia buy VIA; I think they could probably become a third x86 competitor and bring processor prices down again. Even then I wouldn't put high odds on success: part of being a successful CPU producer is keeping up on process technology, and even AMD wasn't able to match Intel and ended up selling their fabs. I like nVidia products, but discrete graphics are dead before 2020, and no other market they are chasing is anywhere near the size and profitability of the enthusiast graphics card market. I just can't see them surviving intact another decade.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
The need for more bandwidth could be resolved with an additional GPU-dedicated HyperTransport link incorporated into the die, which is not improbable: current 4/8-way Opterons have three HyperTransport links versus only one on desktop parts.

I am not so sure that memory bandwidth would be the limitation.

I'm quite sure it is.
They'd also have to add an extra memory controller (and you'd need four sticks of DDR3 instead of two... not very cost-effective). And if they want REAL performance (the 5570 is just a joke with its 28 GB/s; try a 5870 with 153.6 GB/s, or a GTX 480, which delivers a whopping 177.4 GB/s), they're going to need to add GDDR memory on the motherboard. There is currently no GDDR memory available in modules, and it would probably be very expensive (a large part of the cost of a high-end video card is the memory)... Not to mention the technical problems of such high-bandwidth communication through a socket interface: regular motherboards already suffer from that, and sometimes don't work properly with four DIMMs or high-speed memory and need to be downclocked.
It's just not going to happen. There's no advantage to having GDDR memory on your motherboard... might as well just put the GPU, with its dedicated memory controller and GDDR memory, on a PCIe card.
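(For anyone checking those numbers: theoretical peak is just the effective transfer rate times the bus width - a quick sketch using the standard specs of each part:)

# Theoretical peak memory bandwidth = effective transfer rate x bus width.
def peak_gb_s(transfers_per_sec, bus_bits):
    return transfers_per_sec * (bus_bits / 8) / 1e9

print(peak_gb_s(1.6e9, 128))    # dual-channel DDR3-1600 -> 25.6 GB/s
print(peak_gb_s(4.8e9, 256))    # HD 5870, GDDR5 at 4.8 GT/s -> 153.6 GB/s
print(peak_gb_s(3.696e9, 384))  # GTX 480, GDDR5 at 3.696 GT/s -> 177.4 GB/s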

And obviously there would be the problems of die size, power consumption and all that.
It's just not going to happen. IGPs would already be a LOT faster today if they weren't held back by these issues. Moving them to the CPU package doesn't solve anything.
 

dzoner

Banned
Feb 21, 2010
114
0
0
Uhhh, yes they are.
Q6600 and above schooled Phenom and Phenom II for a long time ...

http://www.anandtech.com/show/2702/1
"Compared to the Core 2 Quad Q9400, the Phenom II X4 940 is clearly the better pick. While it's not faster across the board, more often than not the 940 is equal to or faster than the Q9400. If Intel can drop the price of the Core 2 Quad Q9550 to the same price as the Phenom II X4 940 then the recommendation goes back to Intel.

??? First you say Core 2 Quad 'schooled' Phenom II and you 'prove' it with an AnandTech article that gives the clear price/performance win to the Phenom II.

How do you manage to make an argument like this?
 

Scali

Banned
Dec 3, 2004
2,495
0
0
??? First you say Core 2 Quad 'schooled' Phenom II and you 'prove' it with an AnandTech article that gives the clear price/performance win to the Phenom II.

Like I said, AMD gets the price/performance win because they sell their CPUs cheaper. The CPUs themselves can't compete with the Core 2 Quad on performance (the Phenom II 940 was their best offering, and the Q9400 one of the slowest and cheapest Core 2 Quads), despite both being on the same process. Core 2 Quad wins on IPC and power consumption, and is cheaper to make (two dies, fewer transistors in total).
So no, on the same process, Phenom II was not better than Core 2 Quad.
It only had attractive price/performance because AMD worked with much lower profit margins than Intel... which is why Intel turned in great quarterly results while AMD was bleeding cash and was eventually forced to spin off its manufacturing business.
I don't expect you to understand (you didn't in the previous post either, so why would you now), but it is the simple truth.
 

dzoner

Banned
Feb 21, 2010
114
0
0
A) AMD's own strategy for Fusion takes 3 steps (as they were revealed a couple of years back) - integration, optimization, then exploitation. Please read this: http://www.anandtech.com/show/2229/3

The initial stage of the Fusion launch is integration - basically combining the CPU & GPU into one processor package.

It really matters little at the integration stage whether they are on the same die or not (much like 2 CPUs in one package vs 1 die - remember that debate).

The 'merge of CPU & GPU on one chip' you're talking about will actually happen at what AMD calls the optimization step. This is where they'll add the x86 instruction extensions to provide direct access to the GPU like they do for the CPU currently (plus any additional architectural improvements, etc.).

My point is that the initial Fusion'd BD launch will be integration only - that's sometime next year - and it's just moving the IGP into one processor package with the CPU cores; it is nothing earth-shattering.

B) Fully "optimized" Fusion products (BD or its successor) will come much later, as they would more than likely require fairly significant instruction set and architectural changes to the CPU design. Seeing as AMD wants to mimic Intel's tick-tock - if that holds true, the most likely guess at a time-frame for fully optimized Fusion would be 2 years after the BD launch.

A) The roadmap was substantially modified after Dirk took over. Llano clearly falls well into the 'optimization' step of the roadmap in that article, not the 'integration' step, which was putting two chips on one package, à la Westmere. AMD is just skipping that step altogether. Keep in mind the 'integration' step was to have taken place in the 2008-2009 timeframe; Llano is coming in 2011. It is unknown what the software ecosystem releasing with Llano will look like, but I would point out AMD has had four years to work on it from the time of that article. It's going to have some breadth, depth and polish.

B) My current guess on the timeline is:

Fusion '1' - 2011-2012, 32nm, STARS + Evergreen, basic fusion of CPU~GPU, functional software ecosystem.

Fusion '2' - 2012-2013, 32nm, BD + NI elements, increased fusion of CPU~GPU, functional software ecosystem.

Fusion '3' - 2013-2014, 22nm, die shrink of Fusion '2' with some hardware modification and an optimised software ecosystem.

Fusion '4' - 2014-2015, 22nm, fully fused CPU~GPU > new architectures, functional software ecosystem. Discrete CPU and GPU product lines diminishing as Fusion becomes predominant in all product segments.

Fusion '5' - 2015+ (whenever 16nm becomes available), 16nm die shrink of optimised Fusion '4', optimised software ecosystem. Discrete CPU and GPU architecture chips phased out as AMD moves to a fully Fusion-populated product line.

Give or take.
 

Edgy

Senior member
Sep 21, 2000
366
20
81
A) The roadmap was substantially modified after Dirk took over. Llano clearly falls well into the 'optimization' step of the roadmap in that article, not the 'integration' step, which was putting two chips on one package, à la Westmere. AMD is just skipping that step altogether. Keep in mind the 'integration' step was to have taken place in the 2008-2009 timeframe; Llano is coming in 2011. It is unknown what the software ecosystem releasing with Llano will look like, but I would point out AMD has had four years to work on it from the time of that article. It's going to have some breadth, depth and polish.

B) My current psoomah guess on the timeline is:

Fusion '1' - 2011-2012, 32nm, STARS + Evergreen, basic fusion of CPU~GPU, functional software ecosystem.

Fusion '2' - 2012-2013, 32nm, BD + NI elements, increased fusion of CPU~GPU, functional software ecosystem.

Fusion '3' - 2013-2014, 22nm, die shrink of Fusion '2' with some hardware modification and an optimised software ecosystem.

Fusion '4' - 2014-2015, 22nm, fully fused CPU~GPU > new architectures, functional software ecosystem. Discrete CPU and GPU product lines diminishing as Fusion becomes predominant in all product segments.

Fusion '5' - 2015+ (whenever 16nm becomes available), 16nm die shrink of optimised Fusion '4', optimised software ecosystem. Discrete CPU and GPU architecture chips phased out as AMD moves to a fully Fusion-populated product line.

Give or take.

http://www.xbitlabs.com/news/cpu/di...tion_of_AMD_Fusion_Chips_Due_in_2015_AMD.html
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,700
406
126
But a 5570 has its own dedicated memory controller and video memory.
A CPU memory controller is very different: it's optimized for low latency and random access. A GPU is very deeply pipelined, so latency doesn't matter, and its access pattern is almost completely linear.
You've seen the benchmarks... even the latest and greatest Phenom II doesn't have a lot of bandwidth.
This is always going to be a problem...
On the Xbox/PlayStation they turned it around: the architecture is optimized for video memory, and CPU memory performance is sacrificed. But that's not a realistic option for a regular PC.

Well, we will have to wait for their implementation.

I'm not seeing the benefit of having 400 or 480 SP if the GPU portion will just perform like a 5450, which only has 80 SP, due to the bandwidth limitation.

Do you?

The Fusion white paper (http://sites.amd.com/us/Documents/48423B_fusion_whitepaper_WEB.pdf) suggests improvements in the memory controller/sharing area.

Honestly, I'm not sure those improvements will be included in Llano, but if it is only a test run, including a 400 or 480 SP part that ends up with 80 SP performance seems illogical.

Is this a CPU discussion in the video card forum? Seriously?

Anyway, you guys might find this interesting http://www.xbitlabs.com/news/cpu/di...tion_of_AMD_Fusion_Chips_Due_in_2015_AMD.html

Well, in the future you might see an APU and OC forum. ;)
 

Edgy

Senior member
Sep 21, 2000
366
20
81
damn... Skurge beat me to it...

Anyways - Llano = Integration stage.

AMD had its hands full trying to come up with a competitive BD architecture on the CPU side and Evergreen & NI/SI on the GPU side - there's no way they're done with the optimization stage of Fusion.
 

Skurge

Diamond Member
Aug 17, 2009
5,195
1
71
damn... Skurge beat me to it...

Anyways - Llano = Integration stage.

AMD had its hands full trying to come up with a competitive BD architecture on the CPU side and Evergreen & NI/SI on the GPU side - there's no way they're done with the optimization stage of Fusion.

HAH!! I'm finally 1st to something :D
 

Scali

Banned
Dec 3, 2004
2,495
0
0
Well, we will have to wait for their implementation.

I'm not seeing the benefit of having 400 or 480 SP if the GPU portion will just perform like a 5450, which only has 80 SP, due to the bandwidth limitation.

Do you?

I don't know where the '400 or 480 SP' figure comes from in the first place.
It's not in that whitepaper of yours, and I haven't seen it mentioned anywhere else either.
Given the obvious bandwidth limits, no I don't think AMD is going to put 400 or 480 SP in Llano.
 

dzoner

Banned
Feb 21, 2010
114
0
0
Is this a CPU discussion in the video card forum? Seriously?

Anyway, you guys might find this interesting http://www.xbitlabs.com/news/cpu/di...tion_of_AMD_Fusion_Chips_Due_in_2015_AMD.html

"The first iteration of Fusion will include a CPU and GPU, but by 2015 the model could change. In the second iteration [in] 2015, you are not going to be able to tell the difference. It's all going away," said Leslie Sobon, vice president of marketing at AMD, reports IDG News agency."

Interesting, but the vice president of marketing??

This has little correspondence to how the Fusion line will actually EVOLVE from 2011 to 2015. There is obviously not going to be a one-step transition from STARS~Evergreen to a fully integrated, heterogeneous, 'you are not going to be able to tell the difference, it's all going away' Fusion 2.

That's just bonkers. Who could possibly take that literally, at face value?

The Fusion engineers must have cringed when they read that.
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,700
406
126
I don't know where the '400 or 480 SP' figure comes from in the first place.
It's not in that whitepaper of yours, and I haven't seen it mentioned anywhere else either.
Given the obvious bandwidth limits, no I don't think AMD is going to put 400 or 480 SP in Llano.

Several sources. Also, some die shots point to that.

Probably one of the most recent:
http://www.tomshardware.com/reviews/athlon-ii-x2-260-athlon-ii-x3-445,2629-3.html

(Update: AMD let us know that quad-core Zosma-based Phenom II X4 960T engineering samples have been manufactured, but the company does not expect that this processor will be released for general availability. It remains to be seen if the 960T makes its way to OEMs by special request.) AMD's roadmaps suggest that next year will see an eight-core CPU based on the new Bulldozer architecture, but the potential game-changer is Fusion. This is AMD's upcoming combination of CPU and graphics processor (code-named Llano), supposedly sampling now, to be released in 2011. Information from AMD suggests that Llano's integrated graphics core may perform on par with the discrete Radeon HD 5570. This is very, very powerful for an integrated part and may truly bring 1680x1050 gaming to the masses.

http://www.xbitlabs.com/news/cpu/di...no_Die_4_x86_Cores_480_Stream_Processors.html

Based on the die shot displayed by Rick Bergman, senior vice president and general manager of AMD's products group, the first Fusion processor from AMD will feature 4 x86 cores that resemble those of the Propus processor (AMD Athlon II X4), as well as 6 SIMD engines (with 80 stream processors per engine) that resemble those of the Evergreen graphics chip (ATI Radeon HD 5800), and a PC3-12800 (DDR3 1600MHz) memory controller, possibly with some tweaks to better serve the x86 and graphics engines.

The processor lacks a unified L3 in order to reduce manufacturing cost, but will have 2MB of L2 cache (512KB per core), which contradicts previously available information that the chip has 4MB of L3.

AMD's Llano will feature around 1 billion transistors, which is logical since AMD's Propus processor has around 300 million transistors, whereas 480 stream processors and additional special-purpose logic account for around 600 million transistors. The chip will be made using a 32nm silicon-on-insulator fabrication process.

http://phx.corporate-ir.net/External.File?item=UGFyZW50SUQ9MjAzMjR8Q2hpbGRJRD0tMXxUeXBlPTM=&t=1

Some discussion about the pic.

http://www.xtremesystems.org/forums/showthread.php?t=238693

EDIT: There is some talk that Llano will have 1MB L2 cache per core and not 512KB.
 

dzoner

Banned
Feb 21, 2010
114
0
0
I don't know where the '400 or 480 SP' figure comes from in the first place.
It's not in that whitepaper of yours, and I haven't seen it mentioned anywhere else either.
Given the obvious bandwidth limits, no I don't think AMD is going to put 400 or 480 SP in Llano.

http://www.xbitlabs.com/news/cpu/di...no_Die_4_x86_Cores_480_Stream_Processors.html

"AMD’s Llano will feature around 1 billion of transistors, which is logical since AMD’s Propus processor has around 300 million of transistors, whereas 480 stream processors and additional special purpose logic includes around 600 million of transistors. The chip will be made using 32nm silicon-on-insulator fabrication process"

The size of the GPU has been variously estimated around the web at 400 to 480 SPs, based on AMD's reported overall transistor count, what AMD said the four-core CPU transistor count would be, and a very low-resolution (and cropped) shot of what was said to be a Llano chip.

But I haven't seen an AMD verification of the size of the graphics core.

If that 400-480 SP core is accurate, Llano is going to compete with the entire sub-$100 discrete (and OEM) graphics market.
 

dzoner

Banned
Feb 21, 2010
114
0
0
damn... Skurge beat me to it...

Anyways - Llano = Integration stage.

AMD had its hands full trying to come up with a competitive BD architecture on the CPU side and Evergreen & NI/SI on the GPU side - there's no way they're done with the optimization stage of Fusion.

By the definitions of the original roadmap, later substantially changed, Llano CLEARLY falls into the 'optimization' stage of THAT roadmap, with the 'integration' stage being skipped altogether in actual practice.

What's with the 'there's no way they're done with the optimization stage of Fusion'?? That doesn't make any sense.

OBVIOUSLY Llano is the initial step of the 'optimization' stage, with a BD + NI optimization step to come.

THAT will be followed by the 'exploitation' stage: a fully heterogeneous, fused APU.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
But I haven't seen an AMD verification of the size of the graphics core.

My point exactly.

If that 400-480 SP core is accurate, Llano is going to compete with the entire sub-$100 discrete (and OEM) graphics market.

No, because it would still be bandwidth-limited. There is a reason why current IGPs only have 40 SP, and that is that adding more of them makes no sense, as the bandwidth simply isn't available.
The leap from 40 to 400 or 480 just doesn't make sense. There isn't a magic tenfold increase in memory bandwidth, so why would there be a tenfold increase in SP count?
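To put rough numbers on it (a back-of-the-envelope sketch, assuming both designs sit on the same dual-channel DDR3-1600 pool):

# Back-of-the-envelope: a shared dual-channel DDR3-1600 pool (25.6 GB/s
# theoretical peak) spread across 40 SPs vs. the rumored 400-480.
shared_bw_gb_s = 25.6
for sps in (40, 400, 480):
    print(f"{sps:3d} SPs -> {shared_bw_gb_s / sps:.3f} GB/s per SP")
# 40 SPs get ~0.64 GB/s each; 480 SPs would get ~0.053 GB/s each -
# a twelvefold drop, before the CPU even takes its share.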

Another potential problem is that a 400-480 SP GPU takes up about 40W by itself (again, more or less a tenfold increase over current IGPs)... is there enough room in the thermal envelope of the CPU for that? Especially for notebooks it sounds completely unreasonable.

You can cling to that figure all you want, but anyone with a bit of common sense will realize there is no way it is going to work.
 

GaiaHunter

Diamond Member
Jul 13, 2008
3,700
406
126
No, because it would still be bandwidth-limited. There is a reason why current IGPs only have 40 SP, and that is that adding more of them makes no sense, as the bandwidth simply isn't available.
The leap from 40 to 400 or 480 just doesn't make sense. There isn't a magic tenfold increase in memory bandwidth, so why would there be a tenfold increase in SP count?

Another potential problem is that a 400-480 SP GPU takes up about 40W by itself (again, more or less a tenfold increase over current IGPs)... is there enough room in the thermal envelope of the CPU for that? Especially for notebooks it sounds completely unreasonable.

You can cling to that figure all you want, but anyone with a bit of common sense will realize there is no way it is going to work.

Well, the picture of the die shot is real and seems to be confirmed.

And looking at the pictures, it seems clear where the GPU is and what its relative size is. Additionally, AMD has officially stated the number of transistors used in Llano - 1 billion - and we know the CPU side is around 320 million. So we have 680 million transistors whose use we don't know.
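(Checking that arithmetic against a known part - Redwood, the 400 SP chip in the HD 5570, has a published count of 627 million transistors:)

# Llano transistor budget, using the figures quoted in this thread.
total_m = 1000   # AMD's stated Llano total, in millions
cpu_m = 320      # Propus-class quad-core side, per this thread
remainder_m = total_m - cpu_m
redwood_m = 627  # Redwood (HD 5570-class, 400 SP), published count
print(f"{remainder_m}M transistors left over; Redwood is {redwood_m}M")
# -> 680M unaccounted for, which is right in Redwood territory.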

So what we have is a few options:

1) The GPU is in fact no more than an IGP, and will have little or no impact on the market. That doesn't explain the 680 million transistors (unless AMD lied about the transistor count).

2) The GPU is in fact 400-480 SP, but it is bandwidth-limited and the performance will be abysmal, so AMD is being idiotic.

3) The GPU is in fact 400-480 SP, and AMD found a way to circumvent the bandwidth limit, giving far superior performance to any IGP to date. That has the potential to reshape the market.

Additionally, you don't need a tenfold increase in bandwidth - a 5570 has 28 GB/s, and we have already seen that a Thuban gets about 13.6 GB/s while dual-channel 1600MHz DDR3 offers up to 25.6 GB/s.

Considering the power envelopes (although this doesn't include any GPU details):

http://www.xbitlabs.com/news/cpu/di...uliarities_of_32nm_Llano_Microprocessors.html

At the International Solid-State Circuits Conference (ISSCC), Advanced Micro Devices disclosed details of its x86 microprocessors produced using a 32nm silicon-on-insulator process technology with high-k metal gate (HKMG) technology. Apparently, AMD's first Fusion chip, code-named Llano, will be able to dynamically scale the clock speeds of its x86 cores in order to boost performance or trim power consumption.

As reported, the AMD Llano accelerated processing unit (APU) will have four x86 cores based on the current micro-architecture, each of which will have a 9.69mm² die size (without L2 cache), a little more than 35 million transistors (without L2 cache), 2.5W - 25W power consumption, 0.8V - 1.3V operating voltage, and target clock speeds of over 3.0GHz. The cores will dynamically scale their clock speeds and voltages within the designated thermal design power in order to boost performance when a program does not require all four processing engines, or to trim power consumption when there is no demand for resources.

In order to further reduce power consumption and enable all the aforementioned characteristics, AMD had to implement a number of innovations into the chip on process technology and design levels:

* Core power gating: thanks to the new “power gating-to-ground” approach enabled by the SOI manufacturing process, AMD can completely disconnect cores from the power grid. According to AMD, the use of an NFET power-gating transistor reduces power leakage versus previous power-gating solutions by 10 times. Besides, ground-gating can also use the much more conductive chip package for gate supply redistribution rather than a special thick metal layer on the die.
* Digital APM module: each of AMD’s x86 cores features its own digital power meter, which allows the actual load of each core to be measured very precisely and accurate information to be delivered to the chip’s power manager, which tunes each core’s clock speed, voltage and other characteristics in accordance with the actual load. As a result, AMD’s Llano processors will be able to overclock select cores within the CPU and disconnect the others to deliver higher performance without increasing power consumption.
* Power-aware clock grid design: the new power grid design reduces clock switching power by two times, clock grid metal capacitance by 80%, and the number of final clock buffers by 50%.

Interestingly, AMD has not disclosed any details regarding the operation of the built-in ATI Radeon HD 5000-class graphics core or the memory controller.

AMD’s and Globalfoundries’ 32nm SOI process will use high-k metal gates, 11 copper metal layers with low-k dielectric, and silicon-germanium-based strained silicon to improve performance, as well as second-generation immersion lithography.
 

Scali

Banned
Dec 3, 2004
2,495
0
0
2) The GPU is in fact 400-480 SP, but it is bandwidth-limited and the performance will be abysmal, so AMD is being idiotic.

I think it's this one.

Additionally, you don't need a tenfold increase in bandwidth - a 5570 has 28 GB/s, and we have already seen that a Thuban gets about 13.6 GB/s while dual-channel 1600MHz DDR3 offers up to 25.6 GB/s.

Actually, you need a lot more than that, since the memory is SHARED between the CPU and GPU.
This means two things:
1) There will be bus contention issues.
2) There will be a lot of non-localized memory accesses, as the CPU and GPU work in different memory areas.

These two factors mean that efficiency quickly drops well below the maximum theoretical bandwidth (IGPs rarely performed anywhere near the theoretical maximum bandwidth either).
Since AMD is going to use the existing Stars architecture, and therefore its memory controller... where exactly is there room for any 'magic' to increase memory performance? It just doesn't add up.
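Purely as an illustration of how quickly the effective number shrinks (the CPU share and efficiency factor below are placeholder assumptions, not measured numbers):

# Illustrative only: both knobs below are assumptions, not measurements.
theoretical_gb_s = 25.6   # dual-channel DDR3-1600, theoretical peak
cpu_share_gb_s = 8.0      # assumed: bandwidth the x86 cores consume
efficiency = 0.7          # assumed: contention/page-miss penalty
gpu_effective = (theoretical_gb_s - cpu_share_gb_s) * efficiency
print(f"~{gpu_effective:.1f} GB/s effectively left for the GPU")
# -> ~12.3 GB/s: less than half the theoretical peak.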

All in all I get a deja-vu feeling about all this. It sounds like as big a pipe dream as Barcelona was... "40% faster than Clovertown"... yeah, right... in reality it was pretty much the other way around.
Don't get your hopes up.
 

NIGELG

Senior member
Nov 4, 2009
852
31
91
I certainly don't want Nvidia to be a fading memory. I want ATI and Nvidia to keep slugging it out as they always have, because I like to see competition...
 