Ati's R500 . . . *Update* X-b0xNext info*


jiffylube1024

Diamond Member
Feb 17, 2002
7,430
0
71
Originally posted by: chsh1ca
Since the features are supposedly identical, for comparison between the R300 and R420 (R300 / R420):
Core Clock: 325 / 520
Mem Clock: 620 / 1120
Process: 150nm / 130nm

The end result of these differences? Take a look: http://www.anandtech.com/video/showdoc.aspx?i=2044&p=11
The "same core" has scaled very well. By shrinking the process and upping the core/mem clock (as well as other modifications) ATI has managed to stay on top/neck in neck for 2+ years.
Avoiding comparisons between GPUs and CPUs, things like SM3.0 seem to me to be like SSE3, though they tend to be adopted slightly faster. Eventually it will be used in a lot of places, but it takes some time for developers to make use of the technology. Far Cry is a good example of people expecting too much from a patch. It'll probably be a good year or so until we see a properly implemented SM3.0 game, and when we do it will no doubt shine on NVidia cards. In that time, I expect ATI to release a newer card with support for it.

Ackmed, just because R500 is intended to be all new doesn't mean it will be a fantastic success like R300 was. The FX line was "all new" and it was troubled from the start by delays and lacking performance. The original Radeons were all new as well, and they weren't exactly competitive with their contemporaries.

Don't forget the doubled number of pipelines, which is another huge reason for the speed boost.
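To put rough numbers on that (theoretical peaks only; I'm assuming the published 8 pipes / 16 pipes and 256-bit memory buses for the 9700 Pro and X800 XT, which aren't in the spec list above):

```python
# Theoretical peaks from the specs quoted above, assuming the published
# 8 pipes (9700 Pro / R300) vs 16 pipes (X800 XT / R420) and a 256-bit bus
# on both. Real-world scaling is lower, but the ratios are illustrative.
def fillrate_mpix(pipes, core_mhz):
    return pipes * core_mhz                           # Mpixels/s

def bandwidth_gbs(effective_mem_mhz, bus_bits=256):
    return effective_mem_mhz * bus_bits / 8 / 1000    # GB/s

r300_fill, r420_fill = fillrate_mpix(8, 325), fillrate_mpix(16, 520)
r300_bw, r420_bw = bandwidth_gbs(620), bandwidth_gbs(1120)
print(r420_fill / r300_fill)   # ~3.2x the pixel fillrate
print(r420_bw / r300_bw)       # ~1.8x the memory bandwidth
```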


Originally posted by: Ackmed

Tell that to NV then; they said it themselves, as have many reviewers.

edit, link for you. http://www.elitebastards.com/page.php?pageid=4929&head=1&comments=1

"ATI X800 XT
An Act of Desperation!

-Built on last year's architecture and software
(R300 Shader Model 2.0 Architecture)"

So again, what I said was true. I said it's "pretty much two year old tech".

That is some serious FUD from Nvidia, the kings of the "spring refresh." I seem to remember the GeForce 2 GTS and the GeForce 4 series, which were just the previous generation's technology with higher clock speeds; kind of like what ATI has done here, except ATI has done more work because they had to accommodate new memory and more pipelines; Nvidia just made a smaller core and thus got higher clock speeds.
 

Regs

Lifer
Aug 9, 2002
16,666
21
81
You are also forgetting the implementation of sub-pipelines and processes, branch prediction in the GPU itself, internal registers and FPUs, transistor buffering for less crosstalk, etc. It's not the same technology. It's based on older technology, like everything else is. That is the key word: based.
 

chsh1ca

Golden Member
Feb 17, 2003
1,179
0
0
My bad about the pipelines. ;)
My post was meant somewhat sarcastically; if you look at some of my other posts, I've said it isn't "just the same core", but that even if it were, its performance is still sufficient to keep them at or near the top in most games.
 

Jeff7181

Lifer
Aug 21, 2002
18,368
11
81
Originally posted by: BFG10K
People are trying to beat Moore's law - that's the problem.
I don't think that has anything to do with it. The problem is that throwing more transistors and higher frequencies at the problem is nearing its limit, IMO. Also, while a smaller process can help, it also causes leakage and makes cooling harder because the surface area is smaller.

It's about time we dumped electricity and started using lasers instead.

Prescott cooks, but the A64 series isn't that bad.
That's because the 3800 isn't running at 3.6 GHz. If it was it would be just as bad, if not worse.

Edit: grammar.

You'll see diamond semi-conductors before you see lasers.

The 3800+ isn't running at 3.6 GHz because it doesn't need to. Its design is more efficient in terms of the amount of work done per clock cycle. If the 3800+ ran at 3.6 GHz it would be doing a crapload more work than a Prescott at 3.6 GHz... and THAT is why it would create more heat, not just because of the speed in MHz.

The same seems to apply to GPUs right now... the NV40 is more efficient than the R420... hence its lower clock speed and similar performance.
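A toy way to look at the work-per-clock argument (the IPC numbers below are invented for illustration; only the 2.4 GHz A64 3800+ and 3.6 GHz Prescott clocks are real):

```python
# Toy model: performance ~ IPC * clock. The IPC values here are invented
# purely for illustration; only the clock speeds are real.
def perf(ipc, ghz):
    return ipc * ghz

a64_3800   = perf(1.5, 2.4)   # assumed IPC of 1.5 -> 3.6 "work units"
prescott   = perf(1.0, 3.6)   # assumed IPC of 1.0 -> 3.6 "work units"
a64_at_3p6 = perf(1.5, 3.6)   # the same wide core pushed to 3.6 GHz -> 5.4
print(f"{a64_3800:.1f} {prescott:.1f} {a64_at_3p6:.1f}")
# Similar performance at very different clocks; at equal clocks the wider
# core would be doing ~50% more work per second, hence the extra heat.
```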
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
ATI R500: Shader Model 3.0 and Beyond? (X-bit Labs)
ATI Technologies has been consistently downplaying the importance of Shader Model 3.0 introduced by NVIDIA with the GeForce 6-series of processors, but Microsoft's Xbox 2 has all chances to force ATI Technologies to implement the pixel and vertex shaders 3.0 into its graphics chips.

[keep readin' . . . lots of tidbits on Xb0x II]
edit couldn't RESIST: :p
Specifications of the graphics engine the Xbox 2 console is reported to have impress much: the chip seems to have 10 times higher geometry and 4 times higher pixel performance compared to the RADEON X800 XT. In case the same applies to the desktop R500, then next year we will see processors outperforming today's chips in graphics-intensive applications by a factor of 3, at least...

OUCH! Gonna have to UPgrade NEXT year. :roll:

that $700 XT-PE buyer is gonna have BUYER's REMORSE. :D
 

Regs

Lifer
Aug 9, 2002
16,666
21
81
I just can't believe they're going to make a console with that much power. It's hard to believe. I really hope HL2 and D3 put the PC back on the map, or else the PC industry is going to be looking very grim with that bad boy out there.


In case the same applies to the desktop R500, then next year we will see processors outperforming today's chips in graphics-intensive applications by a factor of 3, at least...

That's insane! It has come to the point where, instead of imagining how powerful graphics cards will be, we imagine what level of detail we will see in games. Welcome to Millennium 2000. Unreal 4 isn't looking too sluggish now.
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Originally posted by: Regs
I just can't believe they're going to make a console with that much power. It's hard to believe. I really hope HL2 and D3 put the PC back on the map, or else the PC industry is going to be looking very grim with that bad boy out there.


In case the same applies to the desktop R500, then next year we will see processors outperforming today's chips in graphics-intensive applications by a factor of 3, at least...

That's insane! It has come to the point where, instead of imagining how powerful graphics cards will be, we imagine what level of detail we will see in games. Welcome to Millennium 2000. Unreal 4 isn't looking too sluggish now.
My WHOLE POINT is that I was FOOLed too.

We were LED to believe THESE (6800u/XTpe) were the GPUs to "look forward to". . . .now we find they are "in between" and NV50/r500 should have THREE TIMES (3x) the performance of TODAY's FASTEST chips . . . THEN we will have r600/DX10 (with longhorn) in '07 . . . :p

:roll:
 

Insomniak

Banned
Sep 11, 2003
4,836
0
0
Originally posted by: BFG10K

That's because the 3800 isn't running at 3.6 GHz. If it was it would be just as bad, if not worse.

Edit: grammar.



Well no shink, sherlock. I was saying 2 years ago that Intel's brute force method of ramping clock speeds was getting silly, and that AMD had been on the right track since the Thunderbird line, when they started to switch over to more efficient clocks instead of just more of them...Now AMD has a significant jump over Intel primarily because they chose efficiency over raw speed.

We need to focus on more efficient processors, I think, not faster ones. GPU manufacturers do that, which is why they've advanced so much faster than CPUs over the past few years...
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Originally posted by: apoppin

We were LED to believe THESE (6800u/XTpe) were the GPUs to "look forward to". . . .now we find they are "in between" and NV50/r500 should have THREE TIMES (3x) the performance of TODAY's FASTEST chips . . . THEN we will have r600/DX10 (with longhorn) in '07 . . . :p

:roll:

And so on... and so on... and so on. GPUs always get faster; with this logic of waiting for the next best GPU, you'll always be stuck with your old one.

The first rule any avid reader should remember is not to trust any marketing hype (if the marketing hype comes true, good; but if it doesn't, you are setting yourself and the product up for disappointment).

I don't need to bring up NVIDIA's or ATI's claims of 8x the shader processing power, etc. We have all seen that this never translates into real-world numbers, even when the theoretical throughput is 8x faster.

History and past technological introductions show that CPU or GPU speed doubles approximately every 15-18 months. Suggesting that by this time next year new GPUs will have 3x the performance of today's top-end cards is ludicrous. It is very doubtful that technology could suddenly accelerate that much in such a short period of time, when prior history has never shown any such occurrence. Furthermore, ATI's (and the general graphics market's) push for a longer product cycle of 18-24 months seems hypocritical if they will release 3x faster cards in less than 12 months.

Most importantly, considering that the fastest CPU on the market (let's say the FX-53) is still not sufficient at lower resolutions to match the power of the current top-end cards, it leads one to question the effectiveness of "3x more powerful GPUs" in ~1 year's time. CPU speed will probably be at around 5100-5300+ at maximum by September of next year. If processor speed doesn't even double while GPU speed triples, then the effectiveness of the newer cards will be reduced even more than we see now.

Finally, such a substantial leap in performance would set the expectation of a similar or greater increase in the future. After all, consistency of progress is very important in the technology market. If the consumer becomes trained to expect such exponential leaps, a company could set itself up for not living up to expectations; that would be detrimental to the stock value and put more pressure on future growth. There is a reason new car models are not introduced every 1-2 years but every 5 years or so, and why exterior/interior designs take slow steps of progress: the market is simply not ready for it (even if the technology and ideas are available today). Similarly, the rest of the computer world must evolve just as fast for the GPU to prove effective as a whole.

Also, think about it: if you presume that R500/NV50 come out next year and R600/NV60 with Longhorn in 2007, then what happens in 2006? A bunch of refreshes with faster clock rates for 18 months?
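A quick back-of-the-envelope check of that doubling rate (my own numbers, just to illustrate the argument):

```python
# If GPU performance doubles every 15-18 months, how much faster should a
# part released 12 months from now be? (Illustrative only.)
for doubling_months in (15, 18):
    factor = 2 ** (12 / doubling_months)
    print(f"doubling every {doubling_months} months -> ~{factor:.2f}x in 12 months")
# doubling every 15 months -> ~1.74x in 12 months
# doubling every 18 months -> ~1.59x in 12 months
# Either way, well short of the claimed 3x.
```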
 

Shad0hawK

Banned
May 26, 2003
1,456
0
0
Originally posted by: Ackmed
Probably because the X800 cards are pretty much 2 year old tech. People love to bring that up all the time. Their two year old tech is keeping up with NV's brand new tech.

Yup, in applications that do not take advantage of the new tech (Far Cry does not use that much SM3, if you will remember). So what is your point?
 

Matthias99

Diamond Member
Oct 7, 2003
8,808
0
0
Originally posted by: Insomniak
Originally posted by: BFG10K

That's because the 3800 isn't running at 3.6 GHz. If it was it would be just as bad, if not worse.

Edit: grammar.



Well no shink, sherlock. I was saying 2 years ago that Intel's brute force method of ramping clock speeds was getting silly, and that AMD had been on the right track since the Thunderbird line, when they started to switch over to more efficient clocks instead of just more of them...Now AMD has a significant jump over Intel primarily because they chose efficiency over raw speed.

We need to focus on more efficient processors, I think, not faster ones. GPU manufacturers do that, which is why they've advanced so much faster than CPUs over the past few years...

It also helps a lot that 3D rendering tasks are nearly infinitely parallelizable. In theory, you could have a pipeline per pixel, allowing you to render a full frame in just a few GPU clock cycles. GPUs in the last few years have gone from 1 or 2 to 16 parallel pipelines, whereas CPUs still only have 1 (although that 1 has gotten a lot faster). HT sort of gives you a second CPU on modern Intel processors, but it's not true parallelism, since only one thread at a time can be using the ALU, FPU, memory, etc. (only certain pipeline stages are shared).
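A toy sketch of why that parallelism works so cleanly, assuming a made-up shade() function rather than any real graphics API: each pixel's result depends only on its own coordinates, so the work splits across however many "pipelines" you have with no coordination between them.

```python
# Toy example (hypothetical shade() function, not any real graphics API):
# each pixel depends only on its own coordinates, so N workers ("pipelines")
# can each grab a different pixel independently.
from concurrent.futures import ThreadPoolExecutor

WIDTH, HEIGHT = 8, 8

def shade(x, y):
    # Stand-in for a pixel shader; the result uses only this pixel's inputs.
    return (x * 31 + y * 17) % 256

with ThreadPoolExecutor(max_workers=16) as pool:      # "16 pipelines"
    frame = list(pool.map(lambda p: shade(*p),
                          ((x, y) for y in range(HEIGHT) for x in range(WIDTH))))
print(len(frame))   # 64 independently computed pixels
```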

While graphics processing has gotten more sophisticated (including the move from fixed-function processing towards programmable shaders), I'd still say the majority of the improvement in GPUs over the last 3-4 years has been of the brute force variety -- higher clocks, more pipes, more memory bandwidth. I mean, the X800-series from ATI is essentially a 9600XT times four, with higher clocks and GDDR3. The fixed-function (ie, DX8 and lower) part of the 6800 is similar to what was found in the GeForceFX and GeForce4 cards, but with more pipelines.
 

lopri

Elite Member
Jul 27, 2002
13,314
690
126
R420 (or R300) IS 2-year-old tech. And that's exactly why they can't get any sizable yields of the X800XT. Sure, R300 was a great technology at the time (I bought one), but come on, what they do is add more pipes and raise the clock speed, then label them as "9800Pro" and "X800XT". As you may well know, you can only raise the clock speed so much. ATI needs to bring a new core ASAP. Would you buy another ATI card next year if they come up with the same core, with 24 pipes, running at 800MHz?? I know I certainly wouldn't.

Oh, and don't say "There is no 6800Ultra available, either". I'm not into that childish fanboyism show-off. We're talking about the technology - R300/R420 - not the competition. Not to mention nearly every 6800GT owner has achieved clock speeds comparable to Ultra specs.
 

Insomniak

Banned
Sep 11, 2003
4,836
0
0
Originally posted by: Matthias99
It also helps a lot that 3D rendering tasks are nearly infinitely parallelizable. In theory, you could have a pipeline per pixel, allowing you to render a full frame in just a few GPU clock cycles. GPUs in the last few years have gone from 1 or 2 to 16 parallel pipelines, whereas CPUs still only have 1 (although that 1 has gotten a lot faster). HT sort of gives you a second CPU on modern Intel processors, but it's not true parallelism, since only one thread at a time can be using the ALU, FPU, memory, etc. (only certain pipeline stages are shared).


I disagree about calling more pipes brute force - brute force in my opinion is simply ramping clocks. I'd call additional pipes part of the IPC equation - more work per clock.

I don't see why it's such a problem to move to multi-core processors. I mean, think about all the dead space we have in our boxes now...imagine a CPU with 4 cores....it's not like space is an issue. Heat and voltage may be a slight tripwire, but nothing proper cooling and a good PSU couldn't handle...
 

Matthias99

Diamond Member
Oct 7, 2003
8,808
0
0
Originally posted by: Insomniak
Originally posted by: Matthias99
It also helps a lot that 3D rendering tasks are nearly infinitely parallelizable. In theory, you could have a pipeline per pixel, allowing you to render a full frame in just a few GPU clock cycles. GPUs in the last few years have gone from 1 or 2 to 16 parallel pipelines, whereas CPUs still only have 1 (although that 1 has gotten a lot faster). HT sort of gives you a second CPU on modern Intel processors, but it's not true parallelism, since only one thread at a time can be using the ALU, FPU, memory, etc. (only certain pipeline stages are shared).


I disagree about calling more pipes brute force - brute force in my opinion is simply ramping clocks. I'd call additional pipes part of the IPC equation - more work per clock.

That's not the standard definition of IPC, and it's not generally what I'd call making something more "efficient" (which is what you called it above). Throwing more hardware at the problem (whether by ramping speeds or providing multiple cores/pipelines) is, in my mind, a "brute force" solution. Of course, I'm a software engineer by trade. :p

I don't see why it's such a problem to move to multi-core processors. I mean, think about all the dead space we have in our boxes now...imagine a CPU with 4 cores....it's not like space is an issue. Heat and voltage may be a slight tripwire, but nothing proper cooling and a good PSU couldn't handle...

The issues are more in handling communications and synchronization between the CPU cores, and in getting the chipsets and operating systems to play nice with them. And space on the CPU die is *certainly* an issue, albeit a more easily managed one (especially with the move to 90nm manufacturing).
 

Matthias99

Diamond Member
Oct 7, 2003
8,808
0
0
Originally posted by: lopri
R420 (or R300) IS 2-year-old tech. And that's exactly why they can't get any sizable yields of the X800XT.

I'm not sure that your conclusion follows from your premise. Certainly I would think an established design would be easier to scale up and get good yields with than a brand-new one.

Sure, R300 was a great technology at the time (I bought one), but come on, what they do is add more pipes and raise the clock speed, then label them as "9800Pro" and "X800XT".

They added a few new features, but essentially, yes. The X800XT is pretty much four times a 9600XT, with higher clocks. Note that it is based on the R360, not the R300 or R350 cores.

As you may well know, you can only raise the clock speed so much.

Well, obviously, yes.

ATI needs to bring a new core ASAP. Would you buy another ATI card next year if they come up with the same core, with 24 pipes, running at 800MHz?? I know I certainly wouldn't.

Depends on what was out to compete with it, and how important SM3.0 becomes. Keep in mind that the card you described (24 pipes, 800MHz) would have roughly 2.3 times the fillrate and pixel shader power of an X800XT.
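The arithmetic, using peak theoretical fillrate = pipelines x core clock (the 24-pipe/800MHz part is lopri's hypothetical; the X800 XT numbers are its published specs):

```python
# Peak theoretical pixel fillrate = pipelines * core clock (MHz -> Mpixels/s).
x800xt       = 16 * 520    # 8320 Mpixels/s (published X800 XT specs)
hypothetical = 24 * 800    # 19200 Mpixels/s (lopri's hypothetical card)
print(hypothetical / x800xt)   # ~2.3x
```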

Oh, and don't say "There is no 6800Ultra available, either". I'm not into that childish fanboyism show-off. We're talking about the technology - R300/R420 - not the competition. Not to mention nearly every 6800GT owner has achieved clock speeds comparable to Ultra specs.

Huh? If you're going to complain about bad yields and no new features on the R420, you should at least acknowledge that NVIDIA seems to be having supply problems as well -- and that their new features are so far not doing much of anything. And just about every X800Pro will clock at X800XT speeds or higher...
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Well we know FOR sure you do NOT want an x800pro for Doom III when the 6800GT will BLOW it AWAY!

id Software's Official DOOM3 Benchmarks
Looking at the cream of the crop in video cards, it is painfully obvious that ATI is going to have to make some changes in their product line to stay competitive, at least with DOOM 3 gamers. There is no way for a $500 X800XT-PE to compete with a $400 6800GT when the GT is simply going to outperform the more expensive card by a good margin. I am sure ATI is trying their best to figure out their next move and it will certainly be interesting to see if their driver teams pull a rabbit out of their hat or not.

All that considered, for those of you that are in the high end video card market, the GeForce 6800GT looks to very much be the sweet spot when it comes to playing DOOM 3 with all the eye candy turned on at high resolutions.

and no "need" to wait for r500 . . . :p

ToDAY's cards will do FINE with the Doom III and HL2 engines. ;)
 

SunnyD

Belgian Waffler
Jan 2, 2001
32,675
146
106
www.neftastic.com
Originally posted by: apoppin
Specifications of the graphics engine the Xbox 2 console is reported to have impress much: the chip seems to have 10 times higher geometry and 4 times higher pixel performance compared to the RADEON X800 XT. In case the same applies to the desktop R500, then next year we will see processors outperforming today's chips in graphics-intensive applications by a factor of 3, at least...

OUCH! Gonna have to UPgrade NEXT year. :roll:

that $700 XT-PE buyer is gonna have BUYER's REMORSE.

Hate to say it, but when the original XBox was announced with NVIDIA's chip, basically they said the same thing about the at-the-time-current generation of video cards. Have we remorsed over that? No.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Originally posted by: lopri
ATI needs to bring a new core ASAP. Would you buy another ATI card next year if they come up with the same core, with 24 pipes, running at 800MHz?? I know I certainly wouldn't.

So why do you, or the majority of people out there, keep buying new cars? The basic design of the internal combustion engine has remained virtually unchanged for the last 100 years. Not to mention the fact that it is only about 30% efficient, with the rest of the energy being converted (wasted) into heat, CO2, noise, etc. Most car platforms are reused for a long time. The Mustang's platform wasn't redesigned for 25 years, until the new one in 2005; does that make it a bad car?

Furthermore, according to you, we should all stop buying CPUs for the rest of our lives because they are still made on the x86 architecture.

Newer doesn't always mean better. In this case, "old tech" is still able to keep up with "new tech" easily. If ATI brought "old tech" with PS3.0 technology, and was just as fast as Nvidia, why wouldn't you consider it? Ask yourself, do you care about the outcome (gaming experience) or the internal process inside the gpu that makes it all work?

X800xt pe is just as fast as 6800ultra....nuff said. Your logic only makes sense if the "old tech" was lagging behind the new (ie XP vs. P4, so A64 had to come out)
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Originally posted by: SunnyD
Originally posted by: apoppin
Specifications of the graphics engine the Xbox 2 console is reported to have impress much: the chip seems to have 10 times higher geometry and 4 times higher pixel performance compared to the RADEON X800 XT. In case the same applies to the desktop R500, then next year we will see processors outperforming today's chips in graphics-intensive applications by a factor of 3, at least...

OUCH! Gonna have to UPgrade NEXT year. :roll:

that $700 XT-PE buyer is gonna have BUYER's REMORSE.

Hate to say it, but when the original XBox was announced with NVIDIA's chip, basically they said the same thing about the at-the-time-current generation of video cards. Have we remorsed over that? No.
did u forget . . . when the Xbox came out the graphics were excellent; it was a real alternative to top PC gaming systems . . . looks like THIS x-B0xNext is trying to be MORE ambitious - to better PC gaming (of course PC gaming will have SLI by then) . . . ;)

for its day, the Xbox's nVidia GPU was a pretty advanced GF3/GF4 hybrid that can evidently run the latest game - Doom III - with 64MB RAM and a 733MHz CPU :p

:roll:
 

chsh1ca

Golden Member
Feb 17, 2003
1,179
0
0
Originally posted by: RussianSensation
So why do you, or the majority of people out there, keep buying new cars? The basic design of the internal combustion engine has remained virtually unchanged for the last 100 years. Not to mention the fact that it is only about 30% efficient, with the rest of the energy being converted (wasted) into heat, CO2, noise, etc. Most car platforms are reused for a long time. The Mustang's platform wasn't redesigned for 25 years, until the new one in 2005; does that make it a bad car?
Cars are not GPUs, and in fact, the basic design of the car HAS changed in 100 years, in addition to the obvious enhancements. Consider the increasingly common hybrid engines you see nowadays. A car is more than just an engine; to say it hasn't changed in 100 years is somewhat ignorant of the facts.

Furthermore, according to you, we should all stop buying CPUs for the rest of our lives because they are still made on the x86 architecture.
The person you are responding to never actually said any such thing. CPUs HAVE been updated since the original 8086 spec was drawn up, and even the most recent AMD and Intel processors have continued to expand upon x86 with things like x86-64 and SSE3.

Newer doesn't always mean better. In this case, "old tech" is still able to keep up with "new tech" easily. If ATI brought "old tech" with PS3.0 technology, and was just as fast as Nvidia, why wouldn't you consider it? Ask yourself, do you care about the outcome (gaming experience) or the internal process inside the gpu that makes it all work?
Very good point; unfortunately, some people here do care about the internal process more than the gaming experience. If that weren't true, you wouldn't have half these threads wherein people argue the merits of one card over another (or why trylinear or brilinear are bad).
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,003
126
Too bad we only read about it working in sci-fi books.
IIRC there has been a reasonable amount of research put into optical CPUs. Lasers are inherently better than electricity because they don't leak and because they can cross paths with zero interference.

You'll see diamond semi-conductors before you see lasers.
Electricity is the main problem, not conductors.

If the 3800+ ran at 3.6 GHz it would be doing a crapload more work than a Prescott at 3.6 GHz
Yes but due to its design it probably can't. I agree about the efficiency thing but there's only so much you can do these days. We really need to completely overhaul how chips are designed.

Now AMD has a significant jump over Intel primarily because they chose efficiency over raw speed.
AMD has to raise clock speeds just like Intel, nVidia or ATi. The problem of electricity and its impact is universal to all designs built around it.

I'd call additional pipes as part of the IPC equation - more work per clock.
Yes, but more pipes increase the transistor count, which increases heat and power requirements. Compare a 16-pipe card to a 1-pipe card: it's certainly more efficient per clock, but it consumes a gargantuan amount of power, so it's not really solving the fundamental problem.
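A rough illustration using the standard dynamic-power approximation (power roughly proportional to switched capacitance x voltage squared x frequency); the capacitance and voltage figures below are made up purely to show the scaling:

```python
# Toy dynamic-power model: P ~ switched_capacitance * V^2 * f.
# Capacitance and voltage values are made up; only the scaling matters.
def dyn_power(cap_per_pipe, pipes, volts, mhz):
    return cap_per_pipe * pipes * volts**2 * mhz   # arbitrary units

base = dyn_power(1.0, 1, 1.5, 400)     # 1-pipe card at 400 MHz
wide = dyn_power(1.0, 16, 1.5, 400)    # 16 pipes at the same clock
fast = dyn_power(1.0, 1, 1.5, 6400)    # 1 pipe pushed to 16x the clock (if it could)
print(wide / base, fast / base)        # 16.0 16.0 -- the extra work isn't free either way
# (and in practice the high-clock case also needs more voltage, which is squared)
```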

Again, if something drastic doesn't happen soon we could all be looking at mandatory water cooling.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,587
10,225
126
Originally posted by: BFG10K
Too bad we only read about it working in sci-fi books.
IIRC there has been a reasonable amount of research put into optical CPUs. Lasers are inherently better than electricity because they don't leak and because they can cross paths with zero interference.

I believe that's only when they are perpendicular to each other. Light is both a wave and a particle, and waves interfere with each other. In fact, I think that's probably how they implement "semi-conductors" with light, if you think about it. Waves either strengthening each other, or cancelling each other out, etc.

Originally posted by: BFG10K
You'll see diamond semi-conductors before you see lasers.
Electricity is the main problem, not conductors.

If the leakage current at small transistor feature-sizes could be significantly reduced, then silicon-based devices could continue to scale down. It would be interesting to use atomic scale CVD or something similar, to build semi-conductors on a crystalline-carbon (aka diamond) substrate base, instead of crystalline silicon. That's at the limit of my knowledge on the subject, I'd have to read up on that some more.

I, too, question when exactly we will see optical computers. I would almost be more interested in a biological "artificial brain" type of computer. You feed it organic materials, it could be self-healing, and if it ever becomes obsolete, you dump it into the environment and let it biodegrade. At least the disposal part would be much more advantageous than what we have now, although I expect that most traditional users might get a bit grossed out by their new CPU being a "brain in a jar". Plus, we would have to invent the necessary interface technologies to utilize such a development, and those same technologies might prove to be even more disruptive, because they would probably be used to attempt to interface with a human brain first. I don't see that happening for some time either.

Originally posted by: BFG10K
Again, if something drastic doesn't happen soon we could all be looking at mandatory water cooling.

I'm really almost surprised that we haven't seen such things as stock, on higher-end OEM systems. (Well, I guess AlienWare is going to do something like that with their really high-end systems.) The benefit isn't just better cooling though, but also a wonderful reduction of noise. I can't say that I've tried water-cooling myself, but if it could be made 100% "safe", then I might. So far, I don't really mind the white-noise that my system fans put out, it helps me sleep.
 

masp

Junior Member
Jul 23, 2004
1
0
0
Why does this all matter? In my experience, a game running at 80fps looks almost exactly the same as a game running at 100fps - I and others cannot really tell the difference. Hell, I'll take 60fps with all the visual goodies enabled. These cards are all beasts relative to the software that runs on them. Unless you're one of those people who has to have new technology the moment it comes out, just for the sake of having it, there's no reason to constantly upgrade hardware. Trying to keep up with technology is pointless unless your bank account is a bottomless pit of money.

I'm getting rid of my geforce fx 5800 now in exchange for the x800 pro mostly because of halflife 2. I don't care if it's an "in-between card," the x800 pro is good enough for what I need it to do - and it probably will be good enough for a couple more years until new games come out that force me to get another new system so that I can play my games without having to worry about choppy graphics. It's always fun to read about how quickly our technology advances and to learn about new architecture, etc. But in trying to own today's technology, tomorrow you'll find yourself saying, "I should have waited longer." My approach to this may be too pragmatic, but I've had my share of buyer's guilt - it's a terrible feeling, one that I'm fed up with.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,003
126
Light is both a wave and a particle, and waves interfere with each other.
AFAIK I don't think it should matter in the grand scheme of things because all we need to know is if there's a signal or not. Shine two torches across each other and both will still hit their respective opposing walls. Just pick wavelengths to ensure the signal isn't cancelled and it should be OK.

If the leakage current at small transistor feature-sizes could be significantly reduced, then silicon-based devices could continue to scale down. It would be interesting to use atomic scale CVD or something similar, to build semi-conductors on a crystalline-carbon (aka diamond) substrate base, instead of crystalline silicon.
Now I understand the context of your diamond comment but I'd like to add that reduced leakage doesn't help the heat/power issue.