
ATI SLI

They will soon be moving on to nanotechnology to do these things.

AMD and Intel are probably researching and developing those things at the mo.
 
Originally posted by: Drayvn
They will soon be moving on to nanotechnology to do these things.

AMD and Intel are probably researching and developing those things at the mo.

Which kind of nanotechnology do you mean? They are already manufacturing chips at a scale measured in nanometers. Maybe you mean something else?
 
Originally posted by: rgreen83
Originally posted by: Drayvn
They will soon be moving on to nanotechnology to do these things.

AMD and Intel are probably researching and developing those things at the mo.

Which kind of nanotechnology do you mean? They are already manufacturing chips at a scale measured in nanometers. Maybe you mean something else?

Yes, technically we are. The new Winchesters/Prescotts all have 90nm transistors, and we're soon moving down to 65nm production. The new video cards are now made on a 110nm process. I think I read somewhere they made a 3nm gate out of carbon nanotubes and gold.
 
Originally posted by: rgreen83
Originally posted by: malak
Originally posted by: rgreen83
Originally posted by: malak
Originally posted by: Kensai
They have to design something comparable to SLI or else they'd be left in the dust. It depends on what you do with your system, what kind of card you have, your budget and your estimated future budget.

Why? SLI is a fad that won't last. The future is multi-core design, which is already starting to happen. ATI is only doing it for ATI fanboys that want SLI. It isn't going to matter in 2 years. ATI is pursuing other things right now which is keeping them ahead of the game.

What do you mean by multi-core? Graphics are infinitely parallel and already multi-threaded by design, hence the increasing numbers of pixel and vertex processing engines. It would be cheaper from a manufacturing standpoint to build one 32-pipe core than two 16-pipe cores, and it would probably have better performance too. And it won't matter in two years? Hell, 3dfx went from top to gone in two years.

I'm sorry, but you know little about video card design. Please exit the thread.

And yet you provide no argument? Only the equivalent of "i am right because i say i am"?
More transistors = more chance of problems meaning a core can't be used.
Making huge cores means that there's more risk per core, and more waste, because you have to discard the whole core.
If you make 2 smaller cores, then each one is less likely to fail, and if there is a problem with one, there is less waste.

Dual cores are expensive to manufacture. Yields (the number of working chips on one wafer) drop off sharply as die size grows. Larger dual-core chips will always have lower yields than smaller single-core chips on the same process technology.
OK?
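The yield argument above can be sketched with the simple Poisson defect model, where the chance a die works is e^(-D·A) for defect density D and die area A. The numbers below (defect density, die areas) are made-up illustrative values, not real fab data:

```python
import math

def die_yield(area_cm2: float, defects_per_cm2: float) -> float:
    """Poisson model: probability a die of the given area has zero defects."""
    return math.exp(-defects_per_cm2 * area_cm2)

D = 0.5      # illustrative defect density (defects per cm^2)
big = 3.0    # hypothetical area of one 32-pipe die, in cm^2
small = 1.5  # hypothetical area of one 16-pipe die, in cm^2

y_big = die_yield(big, D)                         # the one large die must be perfect
y_both_small = die_yield(small, D) ** 2           # both small dies must be perfect
y_any_small = 1 - (1 - die_yield(small, D)) ** 2  # at least one small die works

print(f"one 32-pipe die works:      {y_big:.3f}")
print(f"both 16-pipe dies work:     {y_both_small:.3f}")
print(f"at least one 16-pipe works: {y_any_small:.3f}")
```

Under this simple model the chance that *both* halves work is exactly the chance that one big die works, so the real advantage of smaller dies is the salvage case: when a defect does land, you lose half the silicon instead of all of it.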
 
@ Lonyo

How is that different/better than how defective quads can be disabled now?

one defective quad out of sixteen pipe core = 12 pipe 6800 vanilla

one defective core out of two 8 pipe cores = 8 pipe 6600gt

6800 > 6600
 
Why? SLI is a fad that won't last. The future is multi-core design, which is already starting to happen.

For very simplistic chips such as CPUs you are correct, but certainly not for something with the complexity of GPUs. Consumer-level chips can't reasonably exceed 300 mm²; yields beyond that point plummet, making them nonviable for use in commodity-based applications.

Processor technology has been using a relatively low transistor count and then relying on tweaks and process advances to move forward by increasing frequency- THAT is currently proving to be a dead end path.

GPU makers, OTOH, have been utilizing advances in process technologies to significantly increase the amount of transistors on a given core. They do this by adding functional units- this is precisely what CPU makers are going to be doing by moving to multicore chips. GPU makers are already there, they are already pushing the limits of the fab technology available to them and the only major boost they are going to see outside of the fairly timely wait for advances in process technology is by adding multiple chips.

For the ultimate example of the multi-core design in terms of CPUs, look to Cell. The chip is designed from the ground up to be multi-core, its instruction set is built for it, and everything about the chip is built with the intention of slapping as many cores as they can fit on a single die. One of the areas IBM is going to be using Cell in is its thousand-processor "SuperComputer"-class machines. They are doing this by adding thousands of Cell processors, each with as many cores as they can fit.
 
Originally posted by: rgreen83
@ Lonyo

How is that different/better than how defective quads can be disabled now?

one defective quad out of sixteen pipe core = 12 pipe 6800 vanilla

one defective core out of two 8 pipe cores = 8 pipe 6600gt

6800 > 6600

The defect isn't always in one of the quads. Plus, when that happens, they still have to sell the whole thing for less, which means less money for them.

If you use 2 physically separate cores, which is what they will probably do, then you just use the defective chip for a low-end card, if possible, and grab another core to slap on the card.
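The two salvage strategies being debated above can be sketched as binning rules. The pipe counts and product names follow the thread's 6800/6600 examples; the rules themselves are an illustrative simplification, not NVIDIA's actual binning policy:

```python
def bin_monolithic(defective_quads: int) -> str:
    """One 16-pipe die built as four 4-pipe quads; bad quads are fused off."""
    good_pipes = 16 - 4 * defective_quads
    if good_pipes == 16:
        return "6800 GT/Ultra (16 pipes)"
    if good_pipes == 12:
        return "6800 vanilla (12 pipes)"
    return "scrap"  # defect not confined to disableable quads, or too many bad

def bin_dual_die(die_a_ok: bool, die_b_ok: bool) -> str:
    """Two separate 8-pipe dies: a bad die is swapped out, not sold down."""
    if die_a_ok and die_b_ok:
        return "dual-die card (16 pipes)"
    if die_a_ok or die_b_ok:
        return "single-die card (8 pipes)"  # e.g. a 6600GT-class part
    return "scrap"
```

This captures the trade-off in the posts above: the monolithic die salvages a defect as a higher-end 12-pipe part, while separate dies waste less silicon per defect but only yield an 8-pipe part from a single good die.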
 
What do you mean by multi-core? Graphics are infinitely parallel and already multi-threaded by design, hence the increasing numbers of pixel and vertex processing engines. It would be cheaper from a manufacturing standpoint to build one 32-pipe core than two 16-pipe cores, and it would probably have better performance too. And it won't matter in two years? Hell, 3dfx went from top to gone in two years.
GPUs still only handle one thread at a time. Not everything is parallel yet, just certain parts. Dual core is very complicated to program for.

WOW...wasn't expecting this many replies....
This makes me think maybe I can get a cheaper NON-SLI motherboard and a 6800GT....hmmmm.....

Thank you for your replies.
Unless you have a lot of money, there's really no point in getting SLI.

 
Originally posted by: CheesePoofs
They will keep shrinking the manufacturing process, so it shouldn't ever get too big, unless something happens where they can't shrink the process further.

Shrinking the manufacturing process isn't as simple as it sounds. Intel are already having a helluva time with their 65nm current leaks.
 
Originally posted by: PrayForDeath
Originally posted by: CheesePoofs
They will keep shrinking the manufacturing process, so it shouldn't ever get too big, unless something happens where they can't shrink the process further.

Shrinking the manufacturing process isn't as simple as it sounds. Intel are already having a helluva time with their 65nm current leaks.

I think you mean their current 90nm chips have leakage problems...

The 65nm chips will be SOI + SS, which will help diminish the problems you're seeing now.
 
Originally posted by: VIAN
What do you mean by multi-core? Graphics are infinitely parallel and already multi-threaded by design, hence the increasing numbers of pixel and vertex processing engines. It would be cheaper from a manufacturing standpoint to build one 32-pipe core than two 16-pipe cores, and it would probably have better performance too. And it won't matter in two years? Hell, 3dfx went from top to gone in two years.
GPUs still only handle one thread at a time. Not everything is parallel yet, just certain parts. Dual core is very complicated to program for.

WOW...wasn't expecting this many replies....
This makes me think maybe I can get a cheaper NON-SLI motherboard and a 6800GT....hmmmm.....

Thank you for your replies.
Unless you have a lot of money, there's really no point in getting SLI.


I think SLI technology is actually working towards a successful dual-core implementation. SLI distributes work to the two GPUs; because the threads are not really parallel (only the pixel work is), SLI is kind of a necessary evil for the time being. Although it does have pretty bad overhead and reduces the gains made by adding more pipes, over time the method will improve and multiple cores will be feasible.

Earlier in the thread someone said that multiple cores are pointless, but they obviously didn't think from a cost-analysis point of view. If you can use two 16-pipe cores instead of one 32, then you can also have a lesser card that uses a single core and a better card in the future that uses three instead of two. This saves R&D time and makes it easier to mix and match for price points.

Whoever said SLI is pointless and dual core is the way to go (I think it was malak) probably failed to realize that this is likely the direction things are going and that SLI is a step in that direction. I doubt that ATi is going SLI just for fanboys. They want profit.
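The "distributes work to the two GPUs" idea above is essentially split-frame rendering: each GPU takes a band of scanlines. A minimal sketch, with the load split as a fixed illustrative parameter (a real driver rebalances it per frame based on which GPU finishes first):

```python
def split_frame(height: int, gpu0_share: float = 0.5):
    """Split-frame rendering: GPU 0 takes the top band of scanlines,
    GPU 1 takes the bottom band. gpu0_share is an illustrative fixed
    split; actual drivers adjust it dynamically to balance load."""
    boundary = int(height * gpu0_share)
    return (0, boundary), (boundary, height)

top, bottom = split_frame(1080, gpu0_share=0.6)
# GPU 0 renders scanlines [0, 648), GPU 1 renders [648, 1080)
```

The overhead mentioned above shows up here: geometry that spans the boundary has to be processed by both GPUs, and the results still have to be composited into one frame, so doubling the GPUs never doubles the throughput.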
 