X1900 benchmarks

Zbox

Senior member
Aug 29, 2003
881
0
76
ugly graphs, but numbers look believable... definitely interesting :D
 

Lord Banshee

Golden Member
Sep 8, 2004
1,495
0
0
i do not believe it at all

and that would be one hell of a comeback in Age of Empires 3 for ATI if it were real lol

*well the first one looks fake* the others seem ok, a 20-30% increase over the 7800GTX 512MB, about what the Inquirer said it would be if it is true
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Looks fake to me. Why would a single x1900 score a lot lower than the 512gtx in Lost Coast, but then totally wipe the floor with dual 512gtx's when two x1900's are Crossfire'd?
 

beggerking

Golden Member
Jan 15, 2006
1,703
0
0
I'm expecting the x1900 to beat the 7800GTX 512mb, but the numbers just look impossible...
I mean, if the x1900 really has 48 pipelines then I would believe it, but since it's only 16 pipelines, I seriously doubt the numbers..

 

JAG87

Diamond Member
Jan 3, 2006
3,921
3
76
This is so out of proportion; if it were true, nVidia might as well close down
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
You know what this reminds me of? Those sell sheets Ati published when the r520 was launched, showing a bunch of graphs with the red bars usually way longer than the green bars, but no hard numbers or details on the actual benchmarks.
 

beggerking

Golden Member
Jan 15, 2006
1,703
0
0
The last time this came true was about 10 years ago with Nvidia's GTs (not sure what it was named..). I think it was about 3 times faster than anything at the time, but that was it..
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
On second thought, look at the scale of the graphs. Despite the misleading bars, what it really shows is the r580 being about 20% faster than the 512gtx, more or less depending on the game. Also, we all know for a fact Nv is gonna lose badly when comparing SLI 16xAA vs. 14x SuperAA, so it may not be as fake as I originally thought. But I still have my doubts...
 

Steelski

Senior member
Feb 16, 2005
700
0
0
It does not look fake to me.
I especially like the FEAR benchmark, in which the X1900 runs FEAR as fast as 2x 512gtx's; if true then it will be one heck of a card. Although I don't see how it would tackle a 32-piped GTX at around 700mhz in most benchmarks, maybe there will be an OC version when that card comes out.
These look suspiciously like the graphs that were on the box of the X1800 last time.
 

5150Joker

Diamond Member
Feb 6, 2002
5,549
0
71
www.techinferno.com
25% faster than a 512 GTX is fine for me - I just want to try out an ATi card again and get a chance to play with angle independent AF. Course if G71 slaughters it my R580 will quickly go on sale.. ;) Speaking of which, how are people so sure a 700 mhz G71 will be available in quantity? Most thought the same of the 512 GTX and when I made a thread casting doubt about its availability, price and performance, I was bashed for it. I'm willing to bet a 700 mhz G71 will be in short supply or not even exist at all.
 

Steelski

Senior member
Feb 16, 2005
700
0
0
Originally posted by: 5150Joker
25% faster than a 512 GTX is fine for me - I just want to try out an ATi card again and get a chance to play with angle independent AF. Course if G71 slaughters it my R580 will quickly go on sale.. ;) Speaking of which, how are people so sure a 700 mhz G71 will be available in quantity? Most thought the same of the 512 GTX and when I made a thread casting doubt about its availability, price and performance, I was bashed for it. I'm willing to bet a 700 mhz G71 will be in short supply or not even exist at all.

nobody is sure!
Except Nvidia and insiders.
700 does seem a bit far fetched when you consider that it's their first 90nm manufacturing process and it's with 32 pipes. ATI were going on about their high speed cores, and rumours were around of 750mhz to 850mhz chips. In the end it was a 16-piped core with 625mhz.
The GTX512 was one giant marketing campaign that really backfired on Nvidia.
I am also thinking that the part we are all expecting, 700mhz and 1800 DDR, is a little too much. I personally think that the difference between the G71 and G70 is more likely to be like the difference between a 7800 GT and GTX, both at stock speeds, at least percentage wise (comparing the 512 GTX and a G71). Am I making sense?
 

RobertR1

Golden Member
Oct 22, 2004
1,113
1
81
Originally posted by: 5150Joker
lol fake

Nah. I'm sure they're real by Ati's Marketing standards, similar to the R520 charts.

*IF* and a big IF they are even close, what really impresses me is the Crossfire performance. I mean, WOW!
 

Matthias99

Diamond Member
Oct 7, 2003
8,808
0
0
Originally posted by: beggerking
I'm expecting the x1900 to beat the 7800GTX 512mb, but the numbers just look impossible...
I mean, if the x1900 really has 48 pipelines then I would believe it, but since it's only 16 pipelines, I seriously doubt the numbers..

Consider that the X1800XT has "only" 16 pipelines and is between the 7800GTX and 7800GTX512. The X1900XT has higher clocks and triple the shader processing capability.

That first graph is wack (maybe something wrong with their SLI setup? Or a bug with SLIAA?), but the rest look more or less believable. The X1900 should be significantly faster in games with heavy shader loads (like AOE3 or FEAR), but may not show much improvement on games that don't.
 

beggerking

Golden Member
Jan 15, 2006
1,703
0
0
Originally posted by: Matthias99
Originally posted by: beggerking
I'm expecting the x1900 to beat the 7800GTX 512mb, but the numbers just look impossible...
I mean, if the x1900 really has 48 pipelines then I would believe it, but since it's only 16 pipelines, I seriously doubt the numbers..

Consider that the X1800XT has "only" 16 pipelines and is between the 7800GTX and 7800GTX512. The X1900XT has higher clocks and triple the shader processing capability.

That first graph is wack (maybe something wrong with their SLI setup? Or a bug with SLIAA?), but the rest look more or less believable. The X1900 should be significantly faster in games with heavy shader loads (like AOE3 or FEAR), but may not show much improvement on games that don't.

Are you sure that triple the shader processors per pipe = 3x the shader processing capability...?
 

videoclone

Golden Member
Jun 5, 2003
1,465
0
0
^_^ ... It's fast, but my Kawasaki ZXR250 would be a little bit faster than the X1900XTX at 215kmph over 1k of freeway.
 

tuteja1986

Diamond Member
Jun 1, 2005
3,676
0
0
I only believe the FEAR benchmark, as I have heard a lot of people say that in 7800gtx SLI vs x1900xt Crossfire, the x1900xt Crossfire wins by 50%
 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
Originally posted by: Steelski
Originally posted by: 5150Joker
25% faster than a 512 GTX is fine for me - I just want to try out an ATi card again and get a chance to play with angle independent AF. Course if G71 slaughters it my R580 will quickly go on sale.. ;) Speaking of which, how are people so sure a 700 mhz G71 will be available in quantity? Most thought the same of the 512 GTX and when I made a thread casting doubt about its availability, price and performance, I was bashed for it. I'm willing to bet a 700 mhz G71 will be in short supply or not even exist at all.

nobody is sure!
Except Nvidia and insiders.
700 does seem a bit far fetched when you consider that it's their first 90nm manufacturing process and it's with 32 pipes. ATI were going on about their high speed cores, and rumours were around of 750mhz to 850mhz chips. In the end it was a 16-piped core with 625mhz.
The GTX512 was one giant marketing campaign that really backfired on Nvidia.
I am also thinking that the part we are all expecting, 700mhz and 1800 DDR, is a little too much. I personally think that the difference between the G71 and G70 is more likely to be like the difference between a 7800 GT and GTX, both at stock speeds, at least percentage wise (comparing the 512 GTX and a G71). Am I making sense?

This would not be Nvidia's first 90nm part. There are the 7800GTX GO/7600 GO parts for laptops. And it's not so much a problem if it were Nvidia's first 90nm part. It would matter, however, if it were TSMC's first 90nm part.

Now, let's review and see if 700MHz is really as far fetched as you believe.

Nvidia G70 can, and does reach 550MHz (7800GTX512) on a 110nm process. Some have reached higher clocks on air.
Nvidia G71 will have two more quads, and two more vertex processors. On a 90nm LOW-K process. I explained this once before in another thread.

Do you remember when ATI went from the 9800pro to the X800XT? OK, consider these things. The 9800pro is an 8-pipe card on a non-low-k process.
ATI shrunk it and added 8 more pipes (a 100% increase) and opted for low-k. Even with the 100% increase in pipes, ATI was able to increase the clockspeed from 350 (9800pro) to 500 (X800XT) without issue. The X800XTPE was a Phantom edition at 520. So, a 150MHz increase with 100% more pipes was doable and done. Now take G70 to G71: 110nm non-low-k, to 90nm low-k, with Nvidia increasing the pipes by a mere 25%. 700MHz doesn't seem that far fetched now, does it?
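Keysplayr's scaling argument is really just pipes × clock arithmetic. A quick sketch in Python, using the pipe counts and clocks quoted in this thread (the G71 figures are rumors, not confirmed specs):

```python
# Back-of-the-envelope fill-rate math for the scaling argument above.
# Pipe counts and clocks are the retail/rumored figures quoted in this
# thread, not official specs.

def fill_rate_gpix(pipes, mhz):
    """Theoretical pixel fill rate in Gpixels/s: pipes * clock."""
    return pipes * mhz / 1000.0

cards = [
    ("9800pro (8 pipes @ 350MHz)",      8, 350),
    ("X800XT (16 pipes @ 500MHz)",      16, 500),
    ("7800GTX512 (24 pipes @ 550MHz)",  24, 550),
    ("rumored G71 (32 pipes @ 700MHz)", 32, 700),
]

for name, pipes, mhz in cards:
    print(f"{name}: {fill_rate_gpix(pipes, mhz):.1f} Gpix/s")
```

By this theoretical measure the rumored G71 would sit well above the 7800GTX512, though real performance obviously also hinges on memory bandwidth and drivers.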
 

Sc4freak

Guest
Oct 22, 2004
953
0
0
Originally posted by: beggerking
Originally posted by: Matthias99
Originally posted by: beggerking
I'm expecting the x1900 to beat the 7800GTX 512mb, but the numbers just look impossible...
I mean, if the x1900 really has 48 pipelines then I would believe it, but since it's only 16 pipelines, I seriously doubt the numbers..

Consider that the X1800XT has "only" 16 pipelines and is between the 7800GTX and 7800GTX512. The X1900XT has higher clocks and triple the shader processing capability.

That first graph is wack (maybe something wrong with their SLI setup? Or a bug with SLIAA?), but the rest look more or less believable. The X1900 should be significantly faster in games with heavy shader loads (like AOE3 or FEAR), but may not show much improvement on games that don't.

Are you sure that triple the shader processors per pipe = 3x the shader processing capability...?

If you compare a 16-1-1-1 to a 16-1-3-1 configuration with the same clocks, then yes, the 16-1-3-1 will have triple the shader processing capability.

Whether or not games make use of it is another story...
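The 16-1-1-1 vs 16-1-3-1 comparison above is simple per-pipe multiplication. A tiny sketch, assuming equal clocks and the rumored R580 configuration from this thread:

```python
# The "triple the shader processing capability" claim is just per-pipe
# ALU multiplication, assuming equal clocks. Configurations are the
# rumored ones discussed in this thread, not confirmed specs.

def shader_ops_per_clock(pipes, shader_alus_per_pipe):
    """Theoretical pixel shader ops issued per clock."""
    return pipes * shader_alus_per_pipe

r520 = shader_ops_per_clock(16, 1)  # X1800XT: 16-1-1-1
r580 = shader_ops_per_clock(16, 3)  # X1900XT: 16-1-3-1 (rumored)

print(r580 / r520)  # 3x on paper; real games decide how much is usable
```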
 

Keysplayr

Elite Member
Jan 16, 2003
21,211
50
91
Here is an interesting quote from "defaultuser" forum member at DT.
-----------------------------------------------------------------
"When you consider the 7800 GTX has the exact same number of ROPs, fewer fragment pipes, and a much lower clock speed, this performance borders on pathetic. If adding such a huge number of fragment pipes is so important, it should boost the card's performance much higher than this.

But everyone who has been paying attention knew this wouldn't be all that spectacular. More than one fragment pipe per ROP can be excellent (witness the 6600 GT with 2:1, and the 6800 GS and 7800 GTX with 1.5:1), but ATI's approach is just wasteful.

When you consider how much this design mimics four x1600 cores, you start to understand why it disappoints. The x1600 never gets to use all its fragment pipelines (hell, the 6600 rarely gets to use all its fragment pipelines effectively, and it has 4 fewer!), and the limitation of only four texture units means multitexture effects are held back.

Thus, we have 3 times as many fragment pipes, but with the same number of ROPs and texture units, the design is hindered on multitexture and will probably show little AA performance improvement. At least Nvidia has the smarts to keep the number of extra fragment pipes reasonable, and to provide more texture units than ROPs for good multitexture performance. Even G71 is expected to follow this mantra."
---------------------------------------------------------------
Basically, this guy is saying: what is the use of 48 shader processors if the core is never called upon to utilize even 1/3rd of them? Now, we don't exactly know how many shaders are called upon for any given game at any given time, so I don't totally agree with this guy's argument. No two games are exactly alike.

However, I do believe that the R580 will be no faster than a current R520 in games that do not call upon more than 16 shader operations per pass. Because technically that is the limit of the R520 now.

Anyway, I'm just discussing my thoughts here guys so feel free to slap me on my wrists if you catch me spreading FUD.. :D
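The fragment-pipe:ROP ratios in the quoted post can be tabulated quickly. A sketch using the counts mentioned in this thread (the R580 figures, 48 shader processors against 16 ROPs, are the thread's rumors):

```python
# Fragment-pipe : ROP ratios from the quoted post. The R580 counts
# (48 shader processors, 16 ROPs) are this thread's rumors, not
# confirmed specs.

ratios = {
    "6600 GT":        (8, 4),    # 2:1
    "6800 GS":        (12, 8),   # 1.5:1
    "7800 GTX":       (24, 16),  # 1.5:1
    "R580 (rumored)": (48, 16),  # 3:1 -- the "wasteful" case argued above
}

for name, (frag_pipes, rops) in ratios.items():
    print(f"{name}: {frag_pipes / rops:g}:1")
```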
 

akugami

Diamond Member
Feb 14, 2005
6,210
2,551
136
Originally posted by: keysplayr2003
Basically, this guy is saying: what is the use of 48 shader processors if the core is never called upon to utilize even 1/3rd of them? Now, we don't exactly know how many shaders are called upon for any given game at any given time, so I don't totally agree with this guy's argument. No two games are exactly alike.

However, I do believe that the R580 will be no faster than a current R520 in games that do not call upon more than 16 shader operations per pass. Because technically that is the limit of the R520 now.

Anyway, I'm just discussing my thoughts here guys so feel free to slap me on my wrists if you catch me spreading FUD.. :D

Well, what I take from it is this: it all depends on the game. If a game is programmed with a lot of multi-texturing then yeah, the nVidia card will do better. If a game is programmed needing a lot of shader power, like FEAR (hopefully used more efficiently), then the ATI card will do better.

So in the end... it's as you say, no two games are alike. It all depends on how a game is written.