
AMD beliefs: DirectX 11 Radeons pleasantly fast

Originally posted by: Keysplayr
Essentially, I didn't say AMD needs two GPUs. I mentioned they need 1600 shaders.
And if AMD stays the course with its current arch, I predict they will need somewhere north of 3,000 shaders to be competitive with GT300 (if the specs hold true). Whether they use 2, 3 or even 4 cores on a single card to get this will be interesting to see.

And as far as transistor density:
GT200 @ 55nm = 470mm2 at 1.408 billion transistors
1,408,000,000/470 = @2,995,744 transistors per sq. mm

For fun, deduct the 20% compute transistors and we have 376mm2 for GT200:
1,126,400,000/376 = @2,995,744 transistors per sq. mm

R700 @ 55nm = 260mm2 at 965 million transistors
965,000,000/260 = @3,711,538 transistors per sq. mm

Anyway, in the case of the 4850X2 vs. GTX285, the AMD part needs just shy of 2 billion transistors (965 mil x 2) to compete with a 1.4 billion NV part. Or 520mm2 vs. 470mm2.

I know this is all very pointless, which is what I wanted to convey. It's just that this die size argument is just dumb. Different ways of doing things, as you say. I totally agree.
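For anyone who wants to redo the arithmetic above, a few lines of Python reproduce the quoted densities (figures copied straight from the post, with GT200 at 1.408 billion transistors):

```python
# Sanity check of the transistor-density figures quoted above.
chips = {
    # name: (transistor count, die area in mm^2)
    "GT200 @ 55nm": (1_408_000_000, 470),
    "GT200 minus 20% compute": (1_126_400_000, 376),
    "R700 (per die) @ 55nm": (965_000_000, 260),
}

for name, (xtors, area) in chips.items():
    # Density = transistors divided by die area.
    print(f"{name}: {xtors / area:,.0f} transistors per sq. mm")
```

Both GT200 rows come out to the same ~2,995,745/mm2 (removing 20% of the transistors and 20% of the area leaves the ratio unchanged), while R700 lands at ~3,711,538/mm2, matching the numbers in the post.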

But you seem to forget that the HD 4890 comes dangerously close to the GTX 285, just as the GTX 275 does, and that HD 4890s are overclockable across the board. While it's true that the GTX 285 also overclocks, it's much more expensive. Multi-GPU solutions will always be less profitable and more expensive, but unless you're a stockholder, who cares? ATi has a much smaller chip that is very scalable and much more efficient: it draws almost the same power as the much bigger GTX 285 while offering 90% of its performance for much less money. From a much smaller chip, that's a design win.
 
Originally posted by: Keysplayr
A few things to note from the posts in this thread:

Architecture scalability: How is ATI's architecture any more or less scalable than their competitors?

ATI will get crushed if they don't change their arch: While I don't think they will get crushed, and will probably just double or triple the shaders and implement DX11 support, they still end up with an arch that nobody wants to code for. Even if technically ATI's architecture is superior (subjective) on paper, it won't be realised in real world apps and games. But then again we don't know how NV's MIMD arch will perform either. Total wait and see.

NV more focused on GPGPU than on gaming: It would appear that this statement has no teeth. NV has been extremely focused on the CUDA architecture since G80, and has been leading ATI in gaming performance ever since. Doesn't appear they have lost sight of the gaming aspect of their GPUs. Big die, small die, transistor budgets really shouldn't matter. I think that "Idontcare" did a rough calculation of what the die size should be for the number of rumored transistors of the GT300 core based on 40nm. Correct me if I'm wrong IDC, but I think you said somewhere around 220-250mm2.

ATI isn't interested in competing in the high end: This is a pleasant yet nonsensical spin on the real statement, "We can't best them, so we'll say we never intended to. Yeah, we'll go with that. And also fellow board members, we have opted to adopt Havok as our Physics method of choice. This will give us about two years to actually create a GPU that can actually be programmed for efficiently and actually be able to run GPU Physics. It saves us the embarrassment of the public actually finding out that we can't run a whole lot more than just games well on our current architecture. Meeting adjourned. Sushi anyone?"

By the way, anyone planning on getting insulted by these comments needs to understand that they are directed at AMD/ATI. Getting personally insulted over it would be kind of silly. Don't let it happen to you. 🙂
Still selling doom and gloom for ATI! Wait, the Nvidia defense force: you, Wreckage... now who else is in this force? Haven't been chatting or reading the forum, but I may start again. Need to start doing some defending.
 
Originally posted by: ShawnD1
Originally posted by: Creig
Giving the best video card purchasing advice to someone goes far beyond looking at the 2560x1600 benchmark column and simply picking the fastest one.

I think it's perfectly reasonable to look at the highest resolution alone. As was already stated, you can make a 9800 get the same frame rate as a GTX 260 if you use a low resolution with no AA or AF. Would it be fair to tell someone that they're the same speed? Of course not. When you boost the resolution to some crazy amount nobody uses and run the game at 4x AA with 8x AF (common benchmark quality settings), that's where you start to see the card's limitations. Then you look at it and say "aha, so the 260 is faster than the 9800!"

Also try to keep in mind that resolution and texture filtering are almost interchangeable. You can't just say "I'm running the game at 1024x768" and think everything is going to be ok. I would be pissed off if I bought a card and expected to use AA to cancel out the shittiness of low resolution, then found that my video card doesn't even have enough bandwidth to do such a thing. In fact, that happened before: Radeon 9600XT trying to play Doom 3; I couldn't use 2x AA because the game's textures are so damn big. If I had looked at high resolution benchmarks, I probably would have caught that. I didn't, so I screwed myself.

Not all cards scale identically between resolutions. Card A, faster than card B at 1680x1050 and 1920x1200, may actually be slower than card B at 2560x1600. So if a person looking to buy a card to use only at 1920x1200 purchases the card that is faster at 2560x1600, they may actually be getting a worse deal.

As an example, the 4890 is generally faster than the GTX275 at 1680x1050 and 1920x1200 but only ties it at 2560x1600.

http://www.anandtech.com/video/showdoc.aspx?i=3539&p=16
http://www.anandtech.com/video/showdoc.aspx?i=3539&p=17
http://www.anandtech.com/video/showdoc.aspx?i=3539&p=19
http://www.anandtech.com/video/showdoc.aspx?i=3539&p=20

The 4890, basically a tweaked and overclocked 4870, does improve performance over the 4870 1GB and puts up good competition for the GTX 275. On a pure performance level the 4890 and GTX 275 trade blows at different resolutions. The 4890 tends to look better at lower resolutions while the GTX 275 is more competitive at high resolutions. At 1680 x 1050 and 1920 x 1200 the 4890 is nearly undefeated. At 2560 x 1600, it seems to be pretty much a wash between the two cards.

Therefore, it's very important to take the intended resolution into account when using benchmarks to decide what card to purchase.
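The point above can be made concrete with a toy example (all frame rates below are hypothetical, not taken from any review): the "best" card flips depending on which resolution you actually play at.

```python
# Hypothetical frame rates illustrating why intended resolution matters.
fps = {
    "Card A": {"1680x1050": 72, "1920x1200": 60, "2560x1600": 31},
    "Card B": {"1680x1050": 65, "1920x1200": 55, "2560x1600": 36},
}

def best_card_at(resolution):
    # Pick the card with the highest frame rate at the resolution
    # the buyer will actually game at.
    return max(fps, key=lambda card: fps[card][resolution])

print(best_card_at("1920x1200"))  # Card A leads at the buyer's resolution
print(best_card_at("2560x1600"))  # Card B leads at a resolution the buyer never uses
```

Here a buyer who only looks at the 2560x1600 column would pick Card B, even though Card A is faster at the 1920x1200 they actually play at.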
 
Originally posted by: tuteja1986
Originally posted by: Keysplayr
A few things to note from the posts in this thread:

Architecture scalability: How is ATI's architecture any more or less scalable than their competitors?

ATI will get crushed if they don't change their arch: While I don't think they will get crushed, and will probably just double or triple the shaders and implement DX11 support, they still end up with an arch that nobody wants to code for. Even if technically ATI's architecture is superior (subjective) on paper, it won't be realised in real world apps and games. But then again we don't know how NV's MIMD arch will perform either. Total wait and see.

NV more focused on GPGPU than on gaming: It would appear that this statement has no teeth. NV has been extremely focused on the CUDA architecture since G80, and has been leading ATI in gaming performance ever since. Doesn't appear they have lost sight of the gaming aspect of their GPUs. Big die, small die, transistor budgets really shouldn't matter. I think that "Idontcare" did a rough calculation of what the die size should be for the number of rumored transistors of the GT300 core based on 40nm. Correct me if I'm wrong IDC, but I think you said somewhere around 220-250mm2.

ATI isn't interested in competing in the high end: This is a pleasant yet nonsensical spin on the real statement, "We can't best them, so we'll say we never intended to. Yeah, we'll go with that. And also fellow board members, we have opted to adopt Havok as our Physics method of choice. This will give us about two years to actually create a GPU that can actually be programmed for efficiently and actually be able to run GPU Physics. It saves us the embarrassment of the public actually finding out that we can't run a whole lot more than just games well on our current architecture. Meeting adjourned. Sushi anyone?"

By the way, anyone planning on getting insulted by these comments needs to understand that they are directed at AMD/ATI. Getting personally insulted over it would be kind of silly. Don't let it happen to you. 🙂
Still selling doom and gloom for ATI! Wait, the Nvidia defense force: you, Wreckage... now who else is in this force? Haven't been chatting or reading the forum, but I may start again. Need to start doing some defending.

Way to zero in on members and not the subject matter. Participate in the topic, or please don't say anything at all. Best for everyone. If your mind is set on "defend" without really considering the data here, that's a problem.
 
Originally posted by: Creig
Originally posted by: ShawnD1
Originally posted by: Creig
Giving the best video card purchasing advice to someone goes far beyond looking at the 2560x1600 benchmark column and simply picking the fastest one.

I think it's perfectly reasonable to look at the highest resolution alone. As was already stated, you can make a 9800 get the same frame rate as a GTX 260 if you use a low resolution with no AA or AF. Would it be fair to tell someone that they're the same speed? Of course not. When you boost the resolution to some crazy amount nobody uses and run the game at 4x AA with 8x AF (common benchmark quality settings), that's where you start to see the card's limitations. Then you look at it and say "aha, so the 260 is faster than the 9800!"

Also try to keep in mind that resolution and texture filtering are almost interchangeable. You can't just say "I'm running the game at 1024x768" and think everything is going to be ok. I would be pissed off if I bought a card and expected to use AA to cancel out the shittiness of low resolution, then found that my video card doesn't even have enough bandwidth to do such a thing. In fact, that happened before: Radeon 9600XT trying to play Doom 3; I couldn't use 2x AA because the game's textures are so damn big. If I had looked at high resolution benchmarks, I probably would have caught that. I didn't, so I screwed myself.

Not all cards scale identically between resolutions. Card A, faster than card B at 1680x1050 and 1920x1200, may actually be slower than card B at 2560x1600. So if a person looking to buy a card to use only at 1920x1200 purchases the card that is faster at 2560x1600, they may actually be getting a worse deal.

As an example, the 4890 is generally faster than the GTX275 at 1680x1050 and 1920x1200 but only ties it at 2560x1600.

http://www.anandtech.com/video/showdoc.aspx?i=3539&p=16
http://www.anandtech.com/video/showdoc.aspx?i=3539&p=17
http://www.anandtech.com/video/showdoc.aspx?i=3539&p=19
http://www.anandtech.com/video/showdoc.aspx?i=3539&p=20

The 4890, basically a tweaked and overclocked 4870, does improve performance over the 4870 1GB and puts up good competition for the GTX 275. On a pure performance level the 4890 and GTX 275 trade blows at different resolutions. The 4890 tends to look better at lower resolutions while the GTX 275 is more competitive at high resolutions. At 1680 x 1050 and 1920 x 1200 the 4890 is nearly undefeated. At 2560 x 1600, it seems to be pretty much a wash between the two cards.

Therefore, it's very important to take the intended resolution into account when using benchmarks to decide what card to purchase.

And even more important are minimum framerates at a particular res you'd like to play at.
 
At lower resolutions you want to look at average and minimum frame rates.
http://www.bjorn3d.com/read.php?cID=1539&pageID=6664

Here you can see the GTX275 dominating the 4890. The gap gets much wider looking at the 280 and 285.

Also you still want a powerful card at a lower resolution to run things like physics and ambient occlusion (unless of course your card does not support such things).

Not to mention if you run folding.
 
Originally posted by: Wreckage
At lower resolutions you want to look at average and minimum frame rates.
http://www.bjorn3d.com/read.php?cID=1539&pageID=6664

Here you can see the GTX275 dominating the 4890. The gap gets much wider looking at the 280 and 285.

And here we see a 4890 generally neck and neck with a GTX275 at minimum, maximum and overall framerates. So the GTX275 and HD4890 are close competitors. No dominating involved. :roll:


Originally posted by: Wreckage
Also you still want a powerful card at a lower resolution to run things like physics and ambient occlusion (unless of course your card does not support such things).

Not to mention if you run folding.

Or if you don't care about those things or don't want to pay extra for them. The GTX275 is more expensive than a 4890, sometimes by a significant amount.

Since you're playing the "physics and ambient occlusion" cards for the GTX275, I'll raise you a "DX10.1 and significant overclocking" as points for the HD4890.
 
Originally posted by: ShawnD1
IIRC, AMD never had a clear performance advantage after the Pentium 4B. AMD products were competitive because they were cheap. While the Pentium 4 was sometimes faster, it was always ridiculously expensive. I remember buying an Athlon 1700+ because it was at least a hundred dollars cheaper than the Intel equivalent. This would be similar to something like the Phenom II 955 against the Intel i7 920 as competing DDR3 platforms. The i7 is hands down a better processor, but I would probably buy the AMD platform for $100 less.

I believe this is correct. I recall buying my AMD Athlon 2500+ CPU and just clocking it to a 3200+ in the BIOS. It was stupid easy to overclock. It wiped the floor with the Pentium 4 and was extremely reasonably priced. When the Northwood and Prescott started coming out, iirc, they were very fast chips but suffered some limitations: power hungry beasts, and expensive as hell. Anyone remember the Northwood sudden death syndrome? lol

I also recall things going the same way in the GPU area as Ddguy has articulated it. Nvidia spanked ATi with the release of the 8800GT... isn't that, like, still the same architecture they have in their cards today? Wasn't there some big scandal over it?



 
Originally posted by: Wreckage
At lower resolutions you want to look at average and minimum frame rates.
http://www.bjorn3d.com/read.php?cID=1539&pageID=6664

Here you can see the GTX275 dominating the 4890. The gap gets much wider looking at the 280 and 285.

Also you still want a powerful card at a lower resolution to run things like physics and ambient occlusion (unless of course your card does not support such things).

Not to mention if you run folding.

I see nothing but trading a few frames and places in the subsequent pages.

Way to cherry pick there sport.

 
Originally posted by: Creig

And here we see a 4890 generally neck and neck with a GTX275 at minimum, maximum and overall framerates. So the GTX275 and HD4890 are close competitors. No dominating involved. :roll:
That site used little or no AA and no AF. I guess if you want to cripple your game. :roll:


Or if you don't care about those things or don't want to pay extra for them. The GTX275 is more expensive than a 4890, sometimes by a significant amount.

Since you're playing the "physics and ambient occlusion" cards for the GTX275, I'll raise you a "DX10.1 and significant overclocking" as points for the HD4890.
Sales and overclocking are random at best. Not to mention most ATI vendors don't cover overclocking in their warranty. More games support physics and ambient occlusion than DX10.1, and 10.1 does not really add anything visually to a game, nor does it always provide a game-changing performance boost.
 
Originally posted by: Wreckage
That site used little or no AA and no AF. I guess if you want to cripple your game. :roll:

And I suppose people are interested in 4890s and 275s for running 1280x1024 as some of the benches in your link showed? There are other sites that show the 4890 and 275 virtually neck and neck with AA/AF turned on.


Originally posted by: Wreckage
Sales and overclocking are random at best.
4890 sales have been ongoing virtually from the time it launched. It has gotten to the point where you can obtain a 4890 OC for $125.

MSI Radeon HD 4890 OC - $125.54 after cash back, rebates, eBm

I haven't seen a GTX275 anywhere near that price. EVER.


Originally posted by: Wreckage
Not to mention most ATI vendors don't support overclocking in their warranty.

Most? No. Some? Yes. Not all Nvidia vendors support overclocking either. So for those who have a warranty that allows it (or those who don't care about voiding their warranty), overclocking can be a very real consideration when purchasing a video card.


Originally posted by: Wreckage
More games support physics and ambient occlusion than DX10.1 and 10.1 does not really add any thing visually to a game nor does it always provide a game changing performance boost.

Completely wrong. DX10.1 can both add to a game visually and can increase game performance:

http://www.pcgameshardware.com...th-Lead-Designer/News/

Stormrise first DX10 only game - Interview with Lead Designer

PCGH: You announced that the PC Version of Stormrise will support DX10 and even DX10.1. Is that still the case or did you cancel the support for DX10/10.1? If you cancel the DX10/DX10.1 support what were your reasons to do so?

Artem Kulakov: From day one Stormrise has been designed as a new type of RTS for next generation consoles and PCs. Stormrise has been designed for DirectX 10 and Vista only right from the start. Integrating DX10.1 was an opportunity to increase performance and improve visual quality even further.


PCGH: If there is DX10/10.1 support how do you leverage the API? How can you utilize the advanced feature Set of DX10/DX10.1? How does the API simplify the rendering process?

Artem Kulakov: DX10 has offered a lot of advantages over DX9. First of all, DirectX 10 allowed us to simplify the rendering engine. It matches capabilities of next generation consoles better than DX9, which is important for us considering that Stormrise is a multi-platform title. We had fewer driver-specific compatibility issues with Stormrise compared to our previous games released with DX9.


PCGH: What were the technical advantages of DX10.1's extended feature set? In what way does DX10.1 in particular optimize or simplify the rendering process of Stormrise?

Artem Kulakov: We are currently working with AMD's engineers to implement the following DX10.1 features:
- DX10.1 allows you to read back from a Multi-Sampled Anti-Aliased (MSAA) depth buffer, which means it is no longer necessary to render depth out separately. This means that the current Multiple Render Target (MRT) setup can be bypassed for DX10.1-capable HW, therefore yielding a performance gain.
- DX10.1 introduced fixed sample patterns for MSAA modes, and also allows the Pixel Shader (PS) to output the MSAA Coverage Mask. This enables us to gain full MSAA Alpha Tested geometry, leading to higher visual quality.
- DX10.1 adds a new instruction called Gather, which can gather 4 texture samples at once, at a much lower cost than issuing 4 separate Sample instructions. Consequently we are able to optimize our shadow map technique, and even shoot for higher quality.
- The Gather instruction will also allow us to optimize our Screen Space Ambient Occlusion (SSAO) algorithm, again producing a higher image quality.
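The Gather instruction described above can be sketched as a toy model in plain Python (this is an illustration of the data-access pattern, not shader code): one call returns the 2x2 quad of texels that a DX10 shader would need four separate Sample instructions to fetch.

```python
# Toy model of the DX10.1 Gather pattern: one fetch returns the
# four texels of a 2x2 quad, where DX10 needs four Sample calls.
texture = [
    [0.1, 0.2, 0.3],
    [0.4, 0.5, 0.6],
    [0.7, 0.8, 0.9],
]

def gather(tex, x, y):
    # Return the 2x2 neighborhood with its top-left texel at (x, y),
    # the way a shadow-map filter or SSAO kernel would consume it.
    return (tex[y][x],     tex[y][x + 1],
            tex[y + 1][x], tex[y + 1][x + 1])

samples = gather(texture, 0, 0)
pcf = sum(samples) / 4  # e.g. averaging the 4 taps of a shadow filter
```

This is why the interview cites shadow mapping and SSAO as the beneficiaries: both algorithms read many neighboring texels per pixel, so collapsing four fetches into one cuts the sampling cost substantially.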
 
Originally posted by: Wreckage
At lower resolutions you want to look at average and minimum frame rates.
http://www.bjorn3d.com/read.php?cID=1539&pageID=6664

Here you can see the GTX275 dominating the 4890. The gap gets much wider looking at the 280 and 285.

Also you still want a powerful card at a lower resolution to run things like physics and ambient occlusion (unless of course your card does not support such things).

Not to mention if you run folding.

Way to dodge the previous results with such doubtful benchmarks. They show the GTX 285 as much faster than the GTX 280, which is far from true; heck, those results even state that a GTX 280 in SLI is slower than a single GTX 285. Another lie, especially in a game that is so optimized for nVidia hardware. Ambient Occlusion is a nice feature, and it runs faster on ATi hardware thanks to DX10.1 support: http://www.guru3d.com/article/...n-hd-4890-pcs-review/9

nVidia's implementation, by contrast, may create artifacts in many games due to its questionable approach, but it will eventually catch up, since it is a very nice feature.

http://www.tomshardware.com/fo...ient-occlusion-drivers

Very few games run it properly

http://www.nzone.com/object/nz...entocclusion_home.html

PhysX is just a gimmick and a niche technology that won't be missed, but things will get interesting when OpenCL and DirectX 11 take off with something much more immersive and less exclusive.

Originally posted by: Wreckage
Sales and overclocking are random at best. Not to mention most ATI vendors don't support overclocking in their warranty. More games support physics and ambient occlusion than DX10.1 and 10.1 does not really add any thing visually to a game nor does it always provide a game changing performance boost.

They come in overclocked versions too. And tell that to STALKER: Clear Sky, which benefits greatly from improved shadows thanks to DX10.1, the same as Far Cry 2 and H.A.W.X, along with a very minimal performance impact when anti-aliasing is used compared to the regular DX10.0 path.
 
Originally posted by: Creig


And I suppose people are interested in 4890s and 275s for running 1280x1024 as some of the benches in your link showed? There are other sites that show the 4890 and 275 virtually neck and neck with AA/AF turned on.
LOL!!!!! You guys are hilarious. First you complain I used too high a resolution, now you complain it's too low. BTW I was focusing on the 1680x1050.

Originally posted by: lavaheadache
Wreckage, I'm curious as to why you have such low resolution monitors? I'd think a guy who cares as deeply as you do about the 2560-res performance of all these cards would actually be gaming there.

Originally posted by: SlowSpyder

Fair enough... but for 99% of us, who do not use a 30" display the single GPU 48x0 cards compete just fine with the single GPU GTX2xx cards.

:laugh:

 
Originally posted by: Wreckage
Originally posted by: Creig


And I suppose people are interested in 4890s and 275s for running 1280x1024 as some of the benches in your link showed? There are other sites that show the 4890 and 275 virtually neck and neck with AA/AF turned on.
LOL!!!!! You guys are hilarious. First you complain I used too high a resolution, now you complain it's too low. BTW I was focusing on the 1680x1050.

It's all about using the proper video card for a specified resolution. If you choose to purchase card A to run at too low of a resolution, you just wasted money. If you choose to purchase card B to run at too high of a resolution, you just wasted money.

If you're still confused as to the difference, let me know and I can point you to a few websites that can clear up the distinction for you. :thumbsup:
 
Originally posted by: Creig
Not all cards scale identically between resolutions. Card A, faster than card B at 1680x1050 and 1920x1200, may actually be slower than card B at 2560x1600. So if a person looking to buy a card to use only at 1920x1200 purchases the card that is faster at 2560x1600, they may actually be getting a worse deal.
Fair enough. Everyone has a different standard for measuring things. I use an approach similar to yours when I'm looking at CPUs; I just look at the gaming benchmarks because that's the one task that I really care about.

Or if you don't care about those things or don't want to pay extra for them. The GTX275 is more expensive than a 4890, sometimes by a significant amount.
Oh come on, let's not get silly. Radeon 4890 is $195 on newegg. GTX 275 is $205 and many of them come with COD 5. So basically it's $10 apart.
Ironically you can still buy a Radeon 4850 for $190 and GTX 260 for $285. When comparing the prices I usually just look at the cheapest one and say that's the price of the card. They might cost a little more if you have a certain brand preference.


Also, doesn't it seem weird that in a debate about performance, Anandtech is not referenced? In Anand's tests, 4890 and 275 seem fairly equal. Fallout 3 and GRID are the only ones that consistently work better on the 4890. Neither is "crushed" by the other.
 
Originally posted by: ShawnD1
Also, doesn't it seem weird that in a debate about performance, Anandtech is not referenced? In Anand's tests, 4890 and 275 seem fairly equal. Fallout 3 and GRID are the only ones that consistently work better on the 4890. Neither is "crushed" by the other.

:sun::sun::sun:

and the darkness cleared.
 
Originally posted by: ShawnD1
Originally posted by: Creig
Not all cards scale identically between resolutions. Card A, faster than card B at 1680x1050 and 1920x1200, may actually be slower than card B at 2560x1600. So if a person looking to buy a card to use only at 1920x1200 purchases the card that is faster at 2560x1600, they may actually be getting a worse deal.
Fair enough. Everyone has a different standard for measuring things. I use an approach similar to yours when I'm looking at CPUs; I just look at the gaming benchmarks because that's the one task that I really care about.

Or if you don't care about those things or don't want to pay extra for them. The GTX275 is more expensive than a 4890, sometimes by a significant amount.
Oh come on, let's not get silly. Radeon 4890 is $195 on newegg. GTX 275 is $205 and many of them come with COD 5. So basically it's $10 apart.
Ironically you can still buy a Radeon 4850 for $190 and GTX 260 for $285. When comparing the prices I usually just look at the cheapest one and say that's the price of the card. They might cost a little more if you have a certain brand preference.


Also, doesn't it seem weird that in a debate about performance, Anandtech is not referenced? In Anand's tests, 4890 and 275 seem fairly equal. Fallout 3 and GRID are the only ones that consistently work better on the 4890. Neither is "crushed" by the other.



You can find great deals for the 4890... $125 here. Newegg has it for $169.99 AR. That's the difference, in my opinion: the 4890 and GTX275/280 are all very, very close, but the 4890 can be had for a much better deal, it seems.
 
Originally posted by: Schmide
Originally posted by: ShawnD1
Also, doesn't it seem weird that in a debate about performance, Anandtech is not referenced? In Anand's tests, 4890 and 275 seem fairly equal. Fallout 3 and GRID are the only ones that consistently work better on the 4890. Neither is "crushed" by the other.

:sun::sun::sun:

and the darkness cleared.

This is pretty much what it is.
The AMD club wants the 4890 to have a tag on it that says "Dangerously close to GTX285".
The Nvidia club wants the 4890 to be beaten by the GTX275 not even considering the 285.

That's pretty much it.
I've said it before, these cards, the 4890 and 275 are too close to really call a winner in gaming performance.
 
Originally posted by: Keysplayr
The AMD club wants the 4890 to have a tag on it that says "Dangerously close to GTX285".
The Nvidia club wants the 4890 to be beaten by the GTX275 not even considering the 285.
Which by itself is silly because people keep trying to compare different prices. Have we not heard of the scientific method? When testing things, try to keep as many variables constant as possible and just change 1 thing at a time. Price would be one of those fixed variables. The reason Anandtech has a lot of articles like 4890 vs 275 or AMD's Athlon II against Intel's E6300 is because they're in the same price league.

Of course someone could compare the 4890 to the 285 but that's a bullshit test because they're different prices and different performance. If the 285 is faster in every test but costs $50 more, how does that help me pick one? I'm still left wondering which one is a better value. I pay more and I get more? Any retard could tell you that correlation.
 
Originally posted by: ShawnD1
If the 285 is faster in every test but costs $50 more, how does that help me pick one? I'm still left wondering which one is a better value. I pay more and I get more? Any retard could tell you that correlation.

If the difference was only $50 then it would be a clear win for the 285 but just looking at Newegg, the difference is $100+ and sometimes $150 if you count some of the deals on 4890s that have been happening recently. That makes it a bit more fuzzy especially if the 4890 does come close to the 285 in some games.

If price plays a large part in someone's purchase (I'm in this boat since I don't think computer hardware is a wise investment), you better believe they'd choose 90% of the performance for 60-70% of the price.
 
Originally posted by: ShawnD1
Originally posted by: Keysplayr
The AMD club wants the 4890 to have a tag on it that says "Dangerously close to GTX285".
The Nvidia club wants the 4890 to be beaten by the GTX275 not even considering the 285.
Which by itself is silly because people keep trying to compare different prices. Have we not heard of the scientific method? When testing things, try to keep as many variables constant as possible and just change 1 thing at a time. Price would be one of those fixed variables. The reason Anandtech has a lot of articles like 4890 vs 275 or AMD's Athlon II against Intel's E6300 is because they're in the same price league.

Of course someone could compare the 4890 to the 285 but that's a bullshit test because they're different prices and different performance. If the 285 is faster in every test but costs $50 more, how does that help me pick one? I'm still left wondering which one is a better value. I pay more and I get more? Any retard could tell you that correlation.


Well, you can't really bring up science and controlled variables and then throw in something as ambiguous and relative as "value" for consideration.

Someone may "value" 10.1 or physx and be willing to pay $20 more for it, some would think that is crazy.

Some may think that getting an extra couple FPS @ 19XX 12XX is worth the price to jump up card classes, some may not.

Some people think it is worth the extra money for 30mhz in core speed on the factory BIOS, others think that is retarded knowing you can use simple programs to achieve the same thing.


All you can work on is hard numbers. Unless you can create a universally accepted formula to measure "value," it is hard for reviewers to put that in a review.
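For what it's worth, the "hard numbers" half of this is easy to formalize as frames per second per dollar; the subjective half is exactly the weighting of extras like PhysX or DX10.1 that no formula captures. A quick sketch, with hypothetical prices and average frame rates (not taken from any review):

```python
# A crude value metric: average fps per dollar. The prices and
# frame rates below are hypothetical placeholders; the point is
# the metric, not the numbers.
cards = {
    "HD 4890": {"price": 195, "avg_fps": 58},
    "GTX 275": {"price": 205, "avg_fps": 60},
}

for name, c in cards.items():
    value = c["avg_fps"] / c["price"]  # fps per dollar spent
    print(f"{name}: {value:.3f} fps/$")
```

Even this tiny example shows the problem: the two cards land within a couple of percent of each other on fps-per-dollar, so any "winner" comes down entirely to how you weight the extras.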



 
Originally posted by: ShawnD1
If the 285 is faster in every test but costs $50 more, how does that help me pick one? I'm still left wondering which one is a better value. I pay more and I get more? Any retard could tell you that correlation.

Of course any retard could tell you that, but reviews give us specific numbers on how much better or worse a particular card is in relation to other cards. And with that information you can better judge whether or not it's worth going for.
 
Another factor that is often overlooked is power usage. Although all the above cards can be considered in the same range, it is not always so. It would be more of an issue if nVidia hadn't gone through its 55nm shrink. One irony is that, regardless of price in the graphics market, power usage generally follows the performance curve.

Side note: some of the cards I have in my possession: 4870 (x3), GTX 260 (x1), 9600GSO (x3), 9600GT (x1), 4550 (x1), etc. Why? Because I can't resist a bargain.
 
Originally posted by: Schmide
Another factor that is often overlooked is power usage. Although all the above cards can be considered in the same range, it is not always so. It would be more of an issue if nVidia hadn't gone through its 55nm shrink. One irony is that, regardless of price in the graphics market, power usage generally follows the performance curve.

Side note: some of the cards I have in my possession: 4870 (x3), GTX 260 (x1), 9600GSO (x3), 9600GT (x1), 4550 (x1), etc. Why? Because I can't resist a bargain.

Except the GT200 cleaned up as far as power usage goes, thanks to its 2D downclocking.
 
Originally posted by: OCguy
Except the GT200 cleaned up as far as power usage goes, thanks to its 2D downclocking.

That's very easy to do with software if it is a priority for you. Anyone running something like Rivatuner or Tray Tools could do this.
 