Cheesetogo
Diamond Member
- Jan 26, 2005
Is it going to be possible to buy an Xbox 360 and take the graphics card out to use in a PC? Or is it going to be using some weird connection?
Nvidia will unveil its next-generation flagship chip, the SLI-supporting G70 graphics chip, at the Computex Taipei 2005 show from May 31 to June 4, according to motherboard makers in Taiwan.
Market sources indicated that the Nvidia G70 should deliver twice the performance of the current flagship series, the GeForce 6800. The chips will be built using a 0.11-micron process at Taiwan Semiconductor Manufacturing Company (TSMC), as opposed to ATI's R520 chip, which will be manufactured using a 90nm process, the sources suggested.
Since the G70 is currently under non-disclosure agreement (NDA) with Nvidia, motherboard makers declined to provide more details about the product. Nvidia also had no comment on the news as it is its policy not to comment on products not yet released.
The makers did say the G70 may begin volume shipments in the latter half of the third quarter at the earliest and will retail for US$549.
Originally posted by: Cheesetogo
Is it going to be possible to buy an Xbox 360 and take the graphics card out to use in a PC? Or is it going to be using some weird connection?
Originally posted by: Pete
Yeah, the current rumor is 24 pipes, 110nm, and at least 450MHz. Not bad, if it debuts at the same price points as the current 6800GT and U (with an ultra-luxury 512MB option for those so inclined).
Originally posted by: Bateluer
Originally posted by: apoppin
When the Xbox originally launched, the GPU was a GF3/4 hybrid . . . faster than anything for PC . . . .
I feel the need to correct this. The Xbox GPU is a straight GF3 GPU, with a higher clock frequency than the PC flagship GF3 Ti500. While technically a faster GPU because of its higher clock speed, the Xbox GF3 was crippled by its pairing with DDR400 memory; memory that was slower than the memory on the Ti500 models. Feature set-wise, they were identical GPUs, they just had different clock speeds.
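The memory gap above can be put in rough numbers. Peak bandwidth is bus width (in bytes) times the effective data rate; a quick sketch using the commonly cited figures (Xbox: 128-bit unified DDR at 400MHz effective, shared with the CPU; Ti500: 128-bit DDR at 500MHz effective, dedicated), with the function name being mine:

```python
def bandwidth_gb_s(bus_bits: int, effective_mhz: int) -> float:
    """Peak memory bandwidth in GB/s: bytes per transfer times transfers per second."""
    return (bus_bits / 8) * effective_mhz / 1000

# Xbox: unified memory, so the CPU eats into this figure too
xbox = bandwidth_gb_s(128, 400)    # 6.4 GB/s, shared
# GF3 Ti500: dedicated video memory
ti500 = bandwidth_gb_s(128, 500)   # 8.0 GB/s, all for the GPU
print(xbox, ti500)
```

So even before the CPU takes its share, the Xbox GPU starts from a lower peak than the Ti500, which is why the higher core clock didn't translate into a clean win.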
Originally posted by: gobucks
my money is on a 24-pipe design, and very modest clockspeed increases, 450MHz-500MHz sounds plausible. It's possible they will physically be 32-pipe designs, but even if so, it's sounding like they will only use 24 of them in the first generation of parts. For memory, I think they have GDDR3 up to 1400MHz right now, so that's a safe bet for at least some of the parts. Maybe they'll try some 1.8-2.0GHz GDDR4 for the Ultra flavor and 1.4GHz GDDR3 for the GT, and I imagine both will come with 512MB buffers, maybe a 256MB version for the GT or non-ultra if they have one again. And hopefully they'll be WGF 1.0 compliant so they'll run Aero Glass with all its cool features.
I'm just curious about the mainstream market. With prices on 6800GTs and X800XLs dipping below $250, they're gonna need a card that can keep up with these cards for the $200 segment. I'm guessing we'll see a 12-pipe 7600GT @~600MHz or a 16-pipe card @~300-350MHz, and either really fast GDDR4 on a 128-bit interface or a 256-bit interface with slower GDDR3. And I'm sure they'll all be at least 256MB cards.
Originally posted by: otispunkmeyer
Originally posted by: Bateluer
Originally posted by: apoppin
When the Xbox originally launched, the GPU was a GF3/4 hybrid . . . faster than anything for PC . . . .
I feel the need to correct this. The Xbox GPU is a straight GF3 GPU, with a higher clock frequency than the PC flagship GF3 Ti500. While technically a faster GPU because of its higher clock speed, the Xbox GF3 was crippled by its pairing with DDR400 memory; memory that was slower than the memory on the Ti500 models. Feature set-wise, they were identical GPUs, they just had different clock speeds.
I thought it had an extra vertex shader on it?
Originally posted by: gobucks
I'm just curious about the mainstream market. With prices on 6800GTs and X800XLs dipping below $250, they're gonna need a card that can keep up with these cards for the $200 segment. I'm guessing we'll see a 12-pipe 7600GT @~600MHz or a 16-pipe card @~300-350MHz, and either really fast GDDR4 on a 128-bit interface or a 256-bit interface with slower GDDR3. And I'm sure they'll all be at least 256MB cards.
Originally posted by: otispunkmeyer
it shares 512MB system RAM . . . GDDR3 at 700MHz (1400MHz effective), and it has some kind of cache-type DRAM on a wide bus serving up 256GB/s of bandwidth.
you do make sense mate
Originally posted by: Cheesetogo
Is it going to be possible to buy an Xbox 360 and take the graphics card out to use in a PC? Or is it going to be using some weird connection?
No, it'll likely be soldered directly to the PCB.
Originally posted by: Pete
The 6800U packs about 225M transistors. You're probably looking at 400M+ for 32 pipes. I think we're going to see a native 24-pipe GPU, with partially defective ones sold as 16-pipe cards clocked slightly higher than the current 6800GT or U. 400M transistors just sounds too huge, especially for 110nm.
Edit: a 550MHz core would put a 24-pipe G70 at twice the performance of a 16-pipe, 400MHz 6800U--provided bandwidth wasn't a limitation.
Originally posted by: Bar81
Besides, the ATI chip in the Xbox 360 isn't the same one as in the Revolution, which isn't the same one as Fudo. That makes it kind of impossible to use them interchangeably.
Originally posted by: Pete
It really depends on the G70's RAM and clock speeds. Like I said, memory aside, a 550MHz, 24-pipe G70 would be twice as fast as a 400MHz, 16-pipe 6800U--on paper.
Paper and silicon are two different things, though. Without correspondingly fast RAM (meaning 1GHz GDDR3, which ain't gonna happen), I imagine we won't see double the performance in older, simpler games, but we should see it in newer, shader-heavy ones.
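The on-paper doubling claimed above is just fillrate arithmetic: theoretical pixel fillrate scales as pipes times core clock. A quick sketch using the rumored G70 specs from this thread (function and variable names are mine):

```python
def fillrate_mpix(pipes: int, core_mhz: int) -> int:
    """Theoretical pixel fillrate in Mpixels/s, assuming one pixel per pipe per clock."""
    return pipes * core_mhz

g70_rumored = fillrate_mpix(24, 550)   # 13200 Mpix/s (rumored)
nv40_ultra = fillrate_mpix(16, 400)    # 6400 Mpix/s (6800 Ultra)
print(g70_rumored / nv40_ultra)        # 2.0625, i.e. about double on paper
```

As noted, that ratio only holds if memory bandwidth scales to match; otherwise the wider chip just stalls waiting on RAM in fillrate-bound games.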
Edit: And when IHVs mention a ____ing of previous gen performance, they typically mean the flagship card (and typically mention a doubling of performance).
