
When exactly does the GeForce 6800 NDA embargo lift?

Originally posted by: Alkali
lol Guru, grab an iced tea or something, you need it.... 🙂

I'm going to try to use non-technical terms here for our friend...

Let's assume both GPUs have 8x2 pipelines:
GPU1 may be 92% efficient at running code through its pipelines
GPU2 may be 75% efficient at running code through its pipelines

If GPU1 is running at 500MHz, and GPU2 is running at 600MHz:
GPU1 may be 92% efficient --> equivalent to 460MHz
GPU2 may be 75% efficient --> equivalent to 450MHz

So, even though GPU2 is a whole 100MHz faster, it's possible its efficiency is not enough to beat the 'slower' GPU.


I tried to keep this simple, but in actual fact there are lots more variables at play.

I hope this helps explain it in some way...
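The effective-clock arithmetic above can be sketched in a few lines of Python. The efficiency figures are just the illustrative numbers from the post, not measured values for any real card:

```python
def effective_clock(clock_mhz, efficiency):
    """Very rough model: usable throughput = raw clock x pipeline efficiency."""
    return clock_mhz * efficiency

# Illustrative numbers from the post above, not real measurements
gpu1 = effective_clock(500, 0.92)  # ~460 "effective MHz"
gpu2 = effective_clock(600, 0.75)  # 450 "effective MHz"

# The nominally slower GPU comes out ahead
print(gpu1, gpu2, gpu1 > gpu2)
```

The point of the model is only that raw MHz and per-clock efficiency multiply; neither number alone decides the winner.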
 
I'm not specifying cards, I'm speaking in general. You guys (Russian or whatever his name is, and you (AIW)) started comparing 4200s to 5800s. Lemme break it down: you're saying a card with great clock speeds will lose to a card that has a lower clock speed but better architecture or something like FPU, and I agree with that statement... but my statement is saying that if two cards have the same architecture, shaders and such, and the only difference is clock speed, the one with the greater clock speed will win!!!
 
Originally posted by: James3shin
I'm not specifying cards, I'm speaking in general. You guys (Russian or whatever his name is, and you (AIW)) started comparing 4200s to 5800s. Lemme break it down: you're saying a card with great clock speeds will lose to a card that has a lower clock speed but better architecture or something like FPU, and I agree with that statement... but my statement is saying that if two cards have the same architecture, shaders and such, and the only difference is clock speed, the one with the greater clock speed will win!!!

I never said any such thing. I simply said that someone making a comparison between a 4200 and 5800 had some erroneous information.
The statement you just made is a 'duh sh!t' statement and not what you originally said. You said that because ATI's and NVIDIA's offerings both have 16 pipes, the higher clock speed will win. Now that you're confronted with information about clock efficiency, your story changes DRAMATICALLY to a comparison which has no relevance to the two GPUs being discussed.
Please go to a hospital. Your scalp should be stitched back on.
 
Alkali, what is the max refresh on your monitor @ 1024? Sorry, off topic, but I have a 17" Mitsubishi DiamondPoint that has a max refresh of 85Hz @ 1024... I was wondering if yours is in the hundreds at 1024.
 
Originally posted by: James3shin
Plain and simple: if pipes, shaders and co. are equal, and the only difference is clock speed, then the one with the higher clock speed will win. I don't see what else there is to say, seriously...

That is what I originally said; I don't know if I can be any broader there... where in the hell did I mention ATI or NV cards?
 
Originally posted by: James3shin
Plain and simple: if pipes, shaders and co. are equal, and the only difference is clock speed, then the one with the higher clock speed will win. I don't see what else there is to say, seriously...

That is what I originally said; I don't know if I can be any broader there... where in the hell did I mention ATI or NV cards?

No... what's BELOW is what you ACTUALLY said, and it states that NVIDIA needs to up their clock frequency to 'compete' with ATI's higher clock rate just because they have the same amount of pipes...
Every post you've made since has reinforced this misinformed opinion, until this page, where you've completely reversed yourself.
🙄


Is the X800 Pro confirmed to have 12 pipes? I thought it would have 16 as well? Just curious. And I'm pretty sure NV will release some 6800XT Ultra or something to fend off the X800 XT. The X800 XT is coming, what, 3 months after the X800 Pro? In that time I think NV will up their clocks to compete with the X800 XT. Just my 2 cents.
 
I didn't even know ATi had a 16-pipe card coming!!! I freaking asked the question in this thread and you answered it.
 
Plain and simple: if pipes, shaders and co. are equal, and the only difference is clock speed, then the one with the higher clock speed will win. I don't see what else there is to say, seriously...

Even if that is what you originally said, it's still COMPLETELY WRONG, and not relevant even if it were true (which it's not).
The only time that what you said could EVER be relevant is if you're comparing a Radeon 9700 to a 9700 Pro.
Completely irrelevant.
 
OK, I'm gonna take a stab at what you're thinking: you think all my statements/examples are in the context of the X800 and 6800, correct? When I was clearly being general in those statements/examples. Did I ever mention a 6800 or X800? Nor did you. The statement you quoted, regarding my question on whether the X800 was 12 pipes or 16, was clearly directed at the X8 and 68.
 
lol, cool it, you two. Each card is getting a totally revamped architecture. We don't have anything valid that says one is faster than the other.

James, you might want to read this article here.
 
Originally posted by: agnitrate
Originally posted by: JBT
nvnews says the NDA will lift at 9AM tomorrow morning.

WTF?! That's not an April 13th launch then, if nobody can talk about it :|

-silver

Launched at an Nvidia LAN party which ends at midnight today.
Also, anandtech is not under NDA.
 
Originally posted by: James3shin
alkali what is max refresh on your monitor @ 1024...sorry off topic but I have a 17 Mitsubishi Diamondpoint that has a max refresh of 85 @ 1024...i was wondering if your in the hundreds at 1024.

160Hz at 1024x768
 
Launched at an Nvidia LAN party which ends at midnight today.
Also, anandtech is not under NDA.

It's already 'launched' and shown somewhere at some party... so what's up?
What's the big deal in giving us the info and reviews already?

And btw, this Finnish site with the alleged benchmarks is down or something... I want REAL reviews 🙂


 
Originally posted by: AIWGuru
Originally posted by: RussianSensation
Comparison with previous generations might be irrelevant, but based on history, logically speaking, the chances of a video card winning while having lower GPU speeds, all other things being equal, are very unlikely. Can you name 1 card that has done that?

The Radeon 9700 Pro (325MHz) crushes the 5800 Ultra (500MHz), especially in PS performance.
🙄

YES, but read what I said carefully: all other things being equal! The 5800 Ultra has a 128-bit memory bus vs. the 256-bit bus of the 9700 Pro, so it's already 2x behind to begin with. The extra 175MHz on the GPU makes up for part of it, but can't make up for half the memory bandwidth.

Please read what I say carefully.

I said specifically: if any 2 cards have equal memory bandwidth/interface (i.e. 128-bit or 256-bit, but equal), an equal number of pipelines, and so on, the card with faster memory and GPU speeds will win 99% of the time, regardless of clock efficiency.
And I asked anyone here to bring me 1 example where that is not the case. The 9700 Pro vs. 5800 Ultra is not that example, because all things besides clock speeds are not equal.
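To make the bandwidth point concrete: peak memory bandwidth is bus width times effective memory clock. A sketch using commonly cited specs for these two cards (treat the exact clock figures as assumptions); note the bus is indeed half as wide on the FX, though its faster memory clock claws part of that back:

```python
def peak_bandwidth_gb_s(bus_bits, effective_mem_clock_mhz):
    """Peak theoretical memory bandwidth in GB/s: bytes per transfer x transfers per second."""
    return bus_bits / 8 * effective_mem_clock_mhz * 1e6 / 1e9

# Commonly cited specs (assumed here):
# 5800 Ultra: 128-bit bus @ 1000MHz effective DDR
# 9700 Pro:   256-bit bus @ 620MHz effective DDR
fx5800u = peak_bandwidth_gb_s(128, 1000)   # 16.0 GB/s
r9700pro = peak_bandwidth_gb_s(256, 620)   # ~19.8 GB/s
print(fx5800u, r9700pro)
```

So the two cards differ in bandwidth as well as clock speed, which is exactly why this pair can't serve as the "all other things equal" example.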

Now, based on this, I think the X800 XT has an advantage, because, all things equal, it does have faster GPU speed.

I also understand the concept of different architectures you guys are trying to explain. IN THEORY YOU ARE RIGHT. See, this pipeline stuff works for CPUs, but I have yet to see a real-world example where a GPU's shorter pipeline or architecture makes such a dramatic difference. I am not saying it's impossible for a slower-clocked GPU to be faster in theory, but in the real world, unlike the CPU competition between Intel and AMD, the graphics industry is much closer in design; that is why it's more competitive, because every ounce of speed counts. Also, their architectures are not so different that you'd expect one to perform 1.5 times more work per clock cycle, which is what it would take for a card to perform just as fast while clocked a lot lower, or vice versa. At least history hasn't shown 1 example of this that I can think of.

Besides, NVIDIA's architecture would have to be 33% more efficient than ATI's if the X800 XT is clocked at 600MHz and the 6800 Ultra is clocked at 450MHz, which is a very large number, don't you think? But I'm gonna leave this topic and let's all just wait for reviews. On April 26th, Guru, I'll tell you if this time I was way off. Cheers.
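The 33% figure falls straight out of the clock ratio: for the slower chip to break even against a 600MHz part at 450MHz, it needs 600/450 ≈ 1.33x the per-clock throughput. A quick sanity check (the clock speeds are the ones assumed in the post, not confirmed specs):

```python
x800xt_clock = 600   # MHz, as assumed in the post
gf6800u_clock = 450  # MHz, as assumed in the post

# Per-clock efficiency advantage the slower chip needs just to break even
required_advantage = x800xt_clock / gf6800u_clock - 1
print(f"{required_advantage:.0%}")  # prints "33%"
```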
 
I only read the first line of your post since I'm going to bed, but the first line is wrong. The GPU outperformed on operations which are not reliant on memory bandwidth: shader operations, low-res AA, etc.
 