ATI price drops - via Xbit


Acanthus

Lifer
Aug 28, 2001
19,915
2
76
ostif.org
Originally posted by: 5150Joker
Originally posted by: Acanthus
Originally posted by: 5150Joker
Originally posted by: Acanthus
Originally posted by: 5150Joker
Originally posted by: Acanthus


You're beating branching to death. Evidence?

Next time do your own research:

R520's batch size, being only 4x4 pixels, should make dynamic branching very efficient, at least in relation to NVIDIA's G70, which is described as having batch sizes of 64x16 (1024) pixels. R520's Pixel Shader architecture also has a specific Branch Execution Unit, which means that ALU cycles aren't burned just calculating the branch alone for each pixel.

Source: http://www.beyond3d.com/reviews/ati/r520/index.php?p=04

I'm not the one spouting off in a thread about it. Don't bitch about needing to back up your claims.

Do you know what branching is?


Dynamic branching is clearly spelled out in the article, or do you need someone to translate it for you as well? Are you mentally disabled? You don't have to be a programmer to understand the article.

I know exactly what branching is and how it works. The problem here is that the performance hit from doing it the way nVidia does is not large enough to matter. There is no evidence that this hinders performance at all.

And you start with the personal attacks, surprise surprise.

Quick, pull up all of those awesome effects in games that occupy 16 pixels of the screen.


If you know what it is, then why ask for a link? Or were you just trying to troll as usual?

Actually, I'm calling out the ATi troll who takes whatever part of an article he thinks benefits his argument and beats it to death in every single video thread he comes across.

Since you seem to think I'm trolling, we can go into detail.

You have effect X and effect Y.

ATi's batch for these effects is 16 pixels (4x4).
nVidia's batch for these effects is 1024 pixels (64x16).

Now, let's go with Beyond3D's worst-case scenario and say that the particular effects we are putting together are relatively small: X can be 10x10 and Y can be 10x10.

But we can make it even worse for nVidia and have X overlap 4 Ys, so the calculation must be repeated 4 times.

So we have to calculate effect X + effect Y = output 4 times individually on nV hardware.

On ATi, without going into insane logistics and just taking the best-case scenario, let's assume they overlap perfectly.

ATi would have to make 9 combination calculations for the exact same effect in an ideal situation.

So while each batch is smaller, more calculations have to be made. It's a benefit and a drawback at the same time.

Potentially, ATi's design lets more effects be combined in total within the same total footprint, but nVidia's design makes fewer total calculations.
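
As a rough illustration of the arithmetic above, here is a minimal Python sketch, assuming batches are axis-aligned tiles and the effect region starts on a batch boundary. The 10x10 region and the 4x4 / 64x16 batch sizes come from the posts above; the function name and the toy model itself are just for illustration, not either vendor's actual scheduler.

```python
import math

def batches_touched(region_w, region_h, batch_w, batch_h):
    """How many batches a rectangular effect region overlaps when it is
    aligned to the batch grid (toy model, not either vendor's scheduler)."""
    return math.ceil(region_w / batch_w) * math.ceil(region_h / batch_h)

# A 10x10-pixel effect region:
print(batches_touched(10, 10, 4, 4))    # 9 batches of 16 pixels   -> 144 pixels scheduled
print(batches_touched(10, 10, 64, 16))  # 1 batch of 1024 pixels   -> 1024 pixels scheduled
```

Under these assumptions, the 9 small batches line up with the "9 combination calculations" figure above, while the single large batch schedules far more pixels than the effect actually covers.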
 

Acanthus

Lifer
Aug 28, 2001
19,915
2
76
ostif.org
Originally posted by: moonboy403
Sheesh... this is worse than my linear algebra class taught by a guy from JPL!

I could draw some diagrams, but it'd really be a waste of time.

More or less, it's just two different approaches to the same problem. One requires more calculations, the other requires larger batches. Neither is a bad approach.
 

moonboy403

Golden Member
Aug 18, 2004
1,828
0
76
Man... now I wanna grab a pair of X1800 XTs for CrossFire,

but then I'd have to buy a new board too =(

Hmm... guess I'll just step up to the lowly 7900 GTX (hopefully it won't be lowly).
 

FalllenAngell

Banned
Mar 3, 2006
132
0
0
Originally posted by: Cookie Monster
Not sure if ATi can afford this price cut, but hey, at least prices are coming down :D

Dude, who cares?! They cut prices; we middle-class people win.

I sort of wonder why they would cut their own throats, though. If their hardware is better, why bother cutting prices?
 

Munky

Diamond Member
Feb 5, 2005
9,372
0
76
Originally posted by: Topweasel
Originally posted by: Extelleron
I think ATi CLEARLY has the upper hand right now, and looking ahead it is just going to get better. Right now, most games don't make extensive use of shaders the way future games will, and the X1900 XTX will SHINE there, yet it STILL beats the 7800 GTX 512 in current games. In future games, where performance will probably be best represented by the shader-heavy F.E.A.R., ATi is just going to get a BIGGER lead. ATi has the performance advantage NOW, and it will only increase as new games come out. While ATi looks ahead to what next-gen games will require, nVidia moves to a smaller process and releases an "overclocked 7800 GTX 512", aka the 7900 GTX.

Just because the XTX beats the 512 in most things doesn't mean that, with the speed boost of the 7900 GTX, it won't beat the XTX. While I agree that the XTX will probably pay off better in the future if things go shader-intensive, do we know for sure that's the case? I mean, Unreal is being developed on an nVidia solution, isn't it? (I really am not sure here, so I could be wrong.) If that's the case, then people should head over to Epic's Unreal site; a huge number of studios and promising games are being produced on the Unreal engine.

As it stands, there are four engines worth speaking about: Source, Doom 3, U3, and the Monolith engine (I don't think the FarCry engine has any real adoption). The Doom and U3 engines are nVidia-based; Source is ATI-based but usually more CPU-limited, which leaves just the Monolith engine, and history has shown it gets relatively little use outside Monolith themselves.

I really don't think the world is going to go shader-crazy, and if/when that leap does happen it won't be for 2-3 years, because current engines are not going to be redone for a while. By that point, for people looking to be on top now, those XTXs and XTs will be worthless.

Actually, games are getting increasingly shader-intensive, even according to Carmack and Sweeney.
http://www.beyond3d.com/#news27836

The fact that a game was developed on nV hardware, even if it's sponsored by TWIMTBP, does not mean it will run better on nV hardware, especially if the hardware is not well-suited to the application. One need only look at titles such as TR:AOD, FarCry, and F.E.A.R. to see that nV sponsorship does not guarantee faster or even equal performance on nV hardware.
 

vaccarjm

Banned
Jul 9, 2004
185
0
0
Originally posted by: munky
Originally posted by: vaccarjm
Originally posted by: 5150Joker
Originally posted by: KeithTalent
ATI's Radeon X1900 XTX to Cost $549

After the price drop, the Radeon X1900 XTX, the current flagship of ATI, will cost $549; the Radeon X1900 XT will drop to $479, while the Radeon X1800 XT products with 512MB or 256MB should cost around $299 (the 512MB version will cost more than that). Additionally, ATI is expected to introduce a Radeon X1800 XL 512MB for $299 and drop the price of the Radeon X1800 XL 256MB.

I was actually expecting a PE edition of some sort, rather than just price drops. Has anyone heard anything about a PE edition card?


Of course it will be enough. ATi video cards have better feature support and are more "future-proof". If they cost the same as the 7900 cards, then nVidia has nothing on them.


Oh, and let me guess, 5150... nVidia didn't see this coming, right? One of the biggest video companies in history didn't anticipate that ATI would cut prices as well in response to their 7900 series? Of course nVidia knows this, and I'm sure they have something planned. They don't spend zillions in cash without knowing what to expect from ATI.

What do you do for a living? Live under a rock?

This isn't so much about "knowing" as it is about "doing." People knew about the R580 for months before its launch, and what the specs were going to be. nV knew it too, but did they do anything about it? If you still believe nV was just sitting on a pile of G71s, waiting for ATi to release the R580, then I have a bridge to sell you. I remember Jen-Hsun talking about moving all their GPUs to 90nm in the second half of 2005, and look how long it took them to release the G71... it's March 2006, and I'm still waiting!

And then if you get into the whole "I knew that they knew that I knew..." thing, it becomes pure speculation. Suffice it to say that price drops are a good thing for the consumer, but nV would not drop prices if they knew they could get away with higher ones, and neither would ATi.


It's funny how you hardcore video card fanatics talk out of your a$$ all day long about who is the best and which card pwns which card. It almost sounds like you all majored in consumer graphics at UCNV or Cal State ATI. Regardless of how the 7900 GTX benches, nVidia cards will sell, and sell a lot. They will profit and move on to the next card. 5150Joker will still be making his generic salary, and he will move on. Life will move on.

I just buy whatever is the fastest out there, regardless of who makes it. I leave all the senseless drama talk to you guys.
 

crazydingo

Golden Member
May 15, 2005
1,134
0
0
Originally posted by: FalllenAngell
Originally posted by: Cookie Monster
Not sure if ATi can afford this price cut, but hey, at least prices are coming down :D

Dude, who cares?! They cut prices; we middle-class people win.

I sort of wonder why they would cut their own throats, though. If their hardware is better, why bother cutting prices?
From ATI's recent conference call, yields are excellent. :) They wouldn't cut prices if they were not in such a good position.
 

FalllenAngell

Banned
Mar 3, 2006
132
0
0
Originally posted by: crazydingo
Originally posted by: FalllenAngell
Originally posted by: Cookie Monster
Not sure if ATi can afford this price cut, but hey, at least prices are coming down :D

Dude, who cares?! They cut prices; we middle-class people win.

I sort of wonder why they would cut their own throats, though. If their hardware is better, why bother cutting prices?
From ATI's recent conference call, yields are excellent. :) They wouldn't cut prices if they were not in such a good position.

I wasn't talking about the number of chips they get per wafer; I was wondering why they would announce price cuts before the competing products come out.

A lot of people on these message boards seem to think they'll still have the performance leader, so why cut prices if the performance justifies them?

It doesn't make any sense to me, but I'm all for price cuts.
 

vaccarjm

Banned
Jul 9, 2004
185
0
0
Originally posted by: Acanthus
Originally posted by: Ackmed
Originally posted by: vaccarjm
I leave all the senseless drama talk to you guys.

http://forums.anandtech.com/messageview...atid=31&threadid=1816035&enterthread=y

Um, that's pretty "senseless" right there. Not to mention, just plain ol' idiotic.

owned :laugh:

Wasn't I, though... I mean, it was straight wtfpwnd!!!

Kinda like what happens to you every morning when you wake up and realize you're still you.

 

crazydingo

Golden Member
May 15, 2005
1,134
0
0
Originally posted by: FalllenAngell
Originally posted by: crazydingo
Originally posted by: FalllenAngell
Originally posted by: Cookie Monster
Not sure if ATi can afford this price cut, but hey, at least prices are coming down :D

Dude, who cares?! They cut prices; we middle-class people win.

I sort of wonder why they would cut their own throats, though. If their hardware is better, why bother cutting prices?
From ATI's recent conference call, yields are excellent. :) They wouldn't cut prices if they were not in such a good position.

I wasn't talking about the number of chips they get per wafer; I was wondering why they would announce price cuts before the competing products come out.

A lot of people on these message boards seem to think they'll still have the performance leader, so why cut prices if the performance justifies them?

It doesn't make any sense to me, but I'm all for price cuts.
My post was not directed at you. ;)

As for announcing price cuts, they didn't. As of now this is just a rumor. And I agree with what you said: why lower prices when their cards are competitive and have more features? Unless ATI wants to hit nVidia while they're down. :D
 

Acanthus

Lifer
Aug 28, 2001
19,915
2
76
ostif.org
Originally posted by: vaccarjm
Originally posted by: Acanthus
Originally posted by: Ackmed
Originally posted by: vaccarjm
I leave all the senseless drama talk to you guys.

http://forums.anandtech.com/messageview...atid=31&threadid=1816035&enterthread=y

Um, that's pretty "senseless" right there. Not to mention, just plain ol' idiotic.

owned :laugh:

Wasn't I, though... I mean, it was straight wtfpwnd!!!

Kinda like what happens to you every morning when you wake up and realize you're still you.

welcome back turtle!
 

5150Joker

Diamond Member
Feb 6, 2002
5,549
0
71
www.techinferno.com
Originally posted by: Acanthus
snip

Here are some synthetic tests done by Xbit Labs with dynamic branching:

Xbitmark


Notice the huge performance difference between the two architectures under heavy dynamic branching. Next is a ShaderMark 2.1 test done by B3D for the X1800/X1900/7800; look at the difference between tests 20 and 21:

X1800:
shader 20 (static branching): 82
shader 21 (dynamic branching): 151

7800 GTX:
shader 20 (static branching): 99
shader 21 (dynamic branching): 95

X1900:
shader 20 (static branching): 155
shader 21 (dynamic branching): 178


Yeah, it's a theoretical test, but the point all along was that the R520/R580 handle dynamic branching more efficiently than the G70/G71 do.
 

Acanthus

Lifer
Aug 28, 2001
19,915
2
76
ostif.org
Originally posted by: 5150Joker
Originally posted by: Acanthus
snip

Here are some synthetic tests done by Xbit Labs with dynamic branching:

Xbitmark


Notice the huge performance difference between the two architectures under heavy dynamic branching. Next is a ShaderMark 2.1 test done by B3D for the X1800/X1900/7800; look at the difference between tests 20 and 21:

X1800:
shader 20 (static branching): 82
shader 21 (dynamic branching): 151

7800 GTX:
shader 20 (static branching): 99
shader 21 (dynamic branching): 95

X1900:
shader 20 (static branching): 155
shader 21 (dynamic branching): 178


Yeah, it's a theoretical test, but the point all along was that the R520/R580 handle dynamic branching more efficiently than the G70/G71 do.

It would vary wildly with the situation. "Heavy dynamic branching" should mean dozens of effects being combined over large parts of the screen. This is not common in games today, but sometime in the future it will be more important.

My best guess for ShaderMark is that its "heavy dynamic branching" uses a very large number of effects, well beyond the typical load in a game we see today (or in the near future). That large number of effects is precisely what would cripple nVidia; the larger batches use resources less efficiently.

However, if you had a large number of smaller calculations, nVidia would have the upper hand. (This really isn't the case in current games either; don't think I'm talking up nVidia.)

How each architecture fares depends entirely on the scene; you can't paint it with a broad brush and just say "GeForces suck at branching."

Edit: spelling and clarification.
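
To make the "it depends on the scene" point concrete, here is a toy Python model, again my own illustration rather than anything from Xbit or B3D. It assumes a batch that contains pixels on both sides of a branch has to execute both paths, and that the branch region is a rectangle aligned to the batch grid; the function name is hypothetical.

```python
import math

def divergent_batch_fraction(region_w, region_h, batch_w, batch_h):
    """Of the batches a rectangular branch region touches (region aligned to
    the batch grid), what fraction straddle its edge and so run both paths?"""
    touched = math.ceil(region_w / batch_w) * math.ceil(region_h / batch_h)
    fully_inside = (region_w // batch_w) * (region_h // batch_h)
    return (touched - fully_inside) / touched

# Small, scattered branch regions: every big batch they touch diverges.
print(divergent_batch_fraction(16, 16, 4, 4))    # 0.0
print(divergent_batch_fraction(16, 16, 64, 16))  # 1.0

# One large, coherent region: neither batch size diverges (edges aside).
print(divergent_batch_fraction(512, 512, 4, 4))    # 0.0
print(divergent_batch_fraction(512, 512, 64, 16))  # 0.0
```

Under these assumptions, the large batches only hurt when the branching regions are small or ragged relative to the batch size, which is exactly the scenario-dependence being argued here.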
 

5150Joker

Diamond Member
Feb 6, 2002
5,549
0
71
www.techinferno.com
Originally posted by: Acanthus
Originally posted by: 5150Joker
Originally posted by: Acanthus
snip

Here are some synthetic tests done by Xbit Labs with dynamic branching:

Xbitmark


Notice the huge performance difference between the two architectures under heavy dynamic branching. Next is a ShaderMark 2.1 test done by B3D for the X1800/X1900/7800; look at the difference between tests 20 and 21:

X1800:
shader 20 (static branching): 82
shader 21 (dynamic branching): 151

7800 GTX:
shader 20 (static branching): 99
shader 21 (dynamic branching): 95

X1900:
shader 20 (static branching): 155
shader 21 (dynamic branching): 178


Yeah, it's a theoretical test, but the point all along was that the R520/R580 handle dynamic branching more efficiently than the G70/G71 do.

It would vary wildly with the situation. "Heavy dynamic branching" should mean dozens of effects being combined over large parts of the screen. This is not common in games today, but sometime in the future it will be more important.

My best guess for ShaderMark is that its "heavy dynamic branching" uses a very large number of effects, well beyond the typical load in a game we see today (or in the near future). That large number of effects is precisely what would cripple nVidia; the larger batches use resources less efficiently.

However, if you had a large number of smaller calculations, nVidia would have the upper hand. (This really isn't the case in current games either; don't think I'm talking up nVidia.)

How each architecture fares depends entirely on the scene; you can't paint it with a broad brush and just say "GeForces suck at branching."

Edit: spelling and clarification.



I never said the GeForce "sucked at branching," so why did you quote it as if I did? You just restated what I said all along: that the R580 is theoretically more future-proof because of its ability to handle dynamic branching better. Glad you agree with me.
 

Acanthus

Lifer
Aug 28, 2001
19,915
2
76
ostif.org
Originally posted by: 5150Joker
Originally posted by: Acanthus
Originally posted by: 5150Joker
Originally posted by: Acanthus
snip

Here are some synthetic tests done by Xbit Labs with dynamic branching:

Xbitmark


Notice the huge performance difference between the two architectures under heavy dynamic branching. Next is a ShaderMark 2.1 test done by B3D for the X1800/X1900/7800; look at the difference between tests 20 and 21:

X1800:
shader 20 (static branching): 82
shader 21 (dynamic branching): 151

7800 GTX:
shader 20 (static branching): 99
shader 21 (dynamic branching): 95

X1900:
shader 20 (static branching): 155
shader 21 (dynamic branching): 178


Yeah, it's a theoretical test, but the point all along was that the R520/R580 handle dynamic branching more efficiently than the G70/G71 do.

It would vary wildly with the situation. "Heavy dynamic branching" should mean dozens of effects being combined over large parts of the screen. This is not common in games today, but sometime in the future it will be more important.

My best guess for ShaderMark is that its "heavy dynamic branching" uses a very large number of effects, well beyond the typical load in a game we see today (or in the near future). That large number of effects is precisely what would cripple nVidia; the larger batches use resources less efficiently.

However, if you had a large number of smaller calculations, nVidia would have the upper hand. (This really isn't the case in current games either; don't think I'm talking up nVidia.)

How each architecture fares depends entirely on the scene; you can't paint it with a broad brush and just say "GeForces suck at branching."

Edit: spelling and clarification.



I never said the GeForce "sucked at branching," so why did you quote it as if I did? You just restated what I said all along: that the R580 is theoretically more future-proof because of its ability to handle dynamic branching better. Glad you agree with me.

It only handles a certain kind better, is what I'm saying. In a different scenario, ATi could prove less efficient.
 

DrZoidberg

Member
Jul 10, 2005
171
0
0
I do hope the X1800 XT 256MB will be $300 MSRP, and less at street prices. Yay for competition. If ATI or nVidia were a monopoly, we consumers would suffer.

 

FalllenAngell

Banned
Mar 3, 2006
132
0
0
Originally posted by: crazydingo
As for announcing price cuts, they didn't. As of now this is just a rumor. And I agree with what you said: why lower prices when their cards are competitive and have more features? Unless ATI wants to hit nVidia while they're down. :D

Is nVidia "down"? Didn't they just break some sort of sales record or something like that?

http://www.channelregister.co.uk/2005/02/18/nvidia_results_q4_fy2005/