Rumor Section: About the new GPUs


evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
I agree with you; maybe if Crossfire and SLI use a shared memory pool in the future, they may scale better.
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Originally posted by: OCguy
Originally posted by: evolucion8
Originally posted by: Scali
They finally got it right with the 4870X2, but that doesn't automatically mean that AMD will get it right again for this generation.

They did!! Welcome to 2009!! The HD 4870X2 is the fastest single-PCB card on the planet; no single nVidia GPU can beat it. The only card that barely outperforms it is the sandwich GX2, which uses 2 PCBs!! You may say that ATi needs 2 GPUs to compete with 1 GPU from nVidia, but I say that nVidia needs a 1.4B-transistor chip which is twice as big to be competitive with the 959M-transistor ATi chip. How can that be?


Fastest "single pcb" :roll:


And Phenom was the fastest quad for a while as well by your standards.

If you don't have something nice or informative to say to prove me wrong, don't say anything at all, thank you.
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: evolucion8
Originally posted by: OCguy
Fastest "single pcb" :roll:


And Phenom was the fastest quad for a while as well by your standards.

If you don't have something nice or informative to say to prove me wrong, don't say anything at all, thank you.

Well, he just means that the "fastest single pcb" criterion is a rather arbitrary one, just like "native quadcore". It seems to have little meaning to the end-user.
 

Creig

Diamond Member
Oct 9, 1999
5,170
13
81
Originally posted by: Idontcare
You don't get invited for an exclusive backstory on RV770 by beating up the chip designer in your reviews. A little back-scratching goes a long ways in the world of advertising and marketing.

That said, I much enjoyed reading about the RV770 backstory, so if it took a little "let's be somewhat selective in our evaluation procedures" wink-wink nod-nod to grease the skids for an eventual article like that then I have no issue with how this industry operates. It all comes with the territory I suppose.

Wow, IDC. So what you're saying is that AnandTech purposely favored AMD in previous reviews in order to be rewarded with this interview? Because that's how I read your statement.

Why is it that you think that AnandTech deliberately compromised their standards to favor AMD in order to get an interview? I seem to recall Anand and Derek making not-too-favorable comments in the past about AMD and ATi hardware and drivers.

Isn't there just the slightest possibility that AMD was rightfully proud of their overwhelming success with the RV770? And that they wanted to share the story behind the decisions that led to its success with one of the most often visited tech related websites on the net?
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: Creig
Originally posted by: Idontcare
You don't get invited for an exclusive backstory on RV770 by beating up the chip designer in your reviews. A little back-scratching goes a long ways in the world of advertising and marketing.

That said, I much enjoyed reading about the RV770 backstory, so if it took a little "let's be somewhat selective in our evaluation procedures" wink-wink nod-nod to grease the skids for an eventual article like that then I have no issue with how this industry operates. It all comes with the territory I suppose.

Wow, IDC. So what you're saying is that AnandTech purposely favored AMD in previous reviews in order to be rewarded with this interview? Because that's how I read your statement.

Why is it that you think that AnandTech deliberately compromised their standards to favor AMD in order to get an interview? I seem to recall Anand and Derek making not-too-favorable comments in the past about AMD and ATi hardware and drivers.

Isn't there just the slightest possibility that AMD was rightfully proud of their overwhelming success with the RV770? And that they wanted to share the story behind the decisions that led to its success with one of the most often visited tech related websites on the net?

I'm just connecting dots, I didn't create the dots.

You can disagree with the picture it paints if you like/prefer an alternative way of viewing.

Did AT's review favor highlighting ATI's strong points and avoid shining light on the weaker points? I'm not judging that; you decide. Did AT score an exclusive on a juicy behind-the-scenes article on the RV770? Why was it an exclusive? Why just AT, when there are plenty of other non-profit review sites out there? Why not bring four or five of them together to show your pride to more than just one target audience?

I'm not claiming cause-and-effect here. But it is all marketing, is it not?

AT is a for-profit review site, AMD is a for-profit publicly held company.

What business do these two companies have getting together, ever, if it isn't to maximize their shareholder value and their profits?

I do not find the idea itself to be unscrupulous or incorrigible; this is not a holy/noble cause versus shill issue. AT is as it must be, and I find value in it. But I ain't going to cast aside what type of business they are operating, or refuse to acknowledge that marketing has but one and only one purpose.

It's not for me to tell you how to connect the dots, you deal with them however makes yourself comfortable. I have seen too much of how this industry works from the other side of it, I can't just unlearn and forget those years and years. When I see the dots I connect them and move on. Bottom line is no matter how you connect them it doesn't change a damn thing, the dots still exist, so do with them what you like.

edit: just to further attempt to make sure my comments aren't being taken out of unintended context - let me reiterate that I am in no way implying sinister or unsavory ethical conduct or behavior by the AT writing staff...I am saying that this business contains a lot of gray area, and you can't expect to develop a friendly relationship in good working order by being a hard-nose about it, a la HardOCP style. Of course these guys value their credibility and take pride in being impartial and doling out the tough love where needed.
 

OCGuy

Lifer
Jul 12, 2000
27,224
37
91
Originally posted by: Scali
Originally posted by: evolucion8
Originally posted by: OCguy
Fastest "single pcb" :roll:


And Phenom was the fastest quad for a while as well by your standards.

If you don't have something nice or informative to say to prove me wrong, don't say anything at all, thank you.

Well, he just means that the "fastest single pcb" criterion is a rather arbitrary one, just like "native quadcore". It seems to have little meaning to the end-user.

Exactly.
 

nitromullet

Diamond Member
Jan 7, 2004
9,031
36
91
Originally posted by: OCguy
Originally posted by: Scali
Originally posted by: evolucion8
Originally posted by: OCguy
Fastest "single pcb" :roll:


And Phenom was the fastest quad for a while as well by your standards.

If you don't have something nice or informative to say to prove me wrong, don't say anything at all, thank you.

Well, he just means that the "fastest single pcb" criterion is a rather arbitrary one, just like "native quadcore". It seems to have little meaning to the end-user.

Exactly.

I think an even broader issue here is the generally subjective way some want to look at history, and the arbitrary criteria they create, which make this discussion difficult.

In one post, we're supposed to disregard the 2900XT because it was a 'flop'. Next, the fact that ATI's dual-GPU card is on a single PCB somehow makes it the greatest dual-GPU card since sliced bread, compared to NVIDIA's card that just happens to use two PCBs but generally performs better. How are we supposed to carry on a debate if one side continues to dictate what can be discussed and the criteria used to measure success?
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: nitromullet
How are we supposed to carry on a debate if one side continues to dictate what can be discussed and the criteria used to measure success?

By making it clear and upfront what the metrics of success are defined to be for a given conversation.

Yes, this can then degrade into a debate over the merits of any given metric of success (what is better: price/performance or performance/watt... TCO or initial cost... etc.), but the truth is that if such a debate occurs, then you have surely wasted your time on a discussion in which neither party agrees on the point of the conversation.

Find middle ground, seek mutual agreement on the metrics of success by which the products are to be evaluated, and proceed from there.
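
To see why the choice of metric matters, here's a toy comparison in Python (every figure below is invented for illustration, not taken from any review):

```python
# Two hypothetical cards; every figure here is invented for illustration.
cards = {
    "cardA": {"fps": 100, "price_usd": 300, "watts": 250},
    "cardB": {"fps": 80,  "price_usd": 200, "watts": 150},
}

for name, c in cards.items():
    print(name,
          f"raw: {c['fps']} fps |",
          f"value: {c['fps'] / c['price_usd']:.3f} fps/$ |",
          f"efficiency: {c['fps'] / c['watts']:.3f} fps/W")

# cardA "wins" on raw performance (100 vs 80 fps), but cardB "wins" on both
# fps/$ (0.400 vs 0.333) and fps/W (0.533 vs 0.400): the metric picks the winner.
```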
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: nitromullet
In one post, we're supposed to disregard the 2900XT because it was a 'flop'.

Well, actually, that makes discussions VERY easy.
I mean, if we just disregard all the flops that companies have had, then all their products were great :)
 

SickBeast

Lifer
Jul 21, 2000
14,377
19
81
Originally posted by: bryanW1995
Originally posted by: SickBeast
I'll bet that the CPU division at AMD helped 'ATi' to re-vamp the 4890 GPU. What Xbit is describing reminds me of the Thunderbird Athlons. AMD is very good at that stuff.

too bad they haven't spent more time on k9/k10/etc...

What do you think the Phenom 2 is?

AMD has publicly admitted that they messed up the Phenom 1's original design and had to rush it out as a result.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: SickBeast
Originally posted by: bryanW1995
Originally posted by: SickBeast
I'll bet that the CPU division at AMD helped 'ATi' to re-vamp the 4890 GPU. What Xbit is describing reminds me of the Thunderbird Athlons. AMD is very good at that stuff.

too bad they haven't spent more time on k9/k10/etc...

What do you think the Phenom 2 is?

AMD has publicly admitted that they messed up the Phenom 1's original design and had to rush it out as a result.

Speaking of a backstory I would love for Anand to get the inside scoop on...
 

SlowSpyder

Lifer
Jan 12, 2005
17,305
1,002
126
Originally posted by: Idontcare
Originally posted by: SickBeast
Originally posted by: bryanW1995
Originally posted by: SickBeast
I'll bet that the CPU division at AMD helped 'ATi' to re-vamp the 4890 GPU. What Xbit is describing reminds me of the Thunderbird Athlons. AMD is very good at that stuff.

too bad they haven't spent more time on k9/k10/etc...

What do you think the Phenom 2 is?

AMD has publicly admitted that they messed up the Phenom 1's original design and had to rush it out as a result.

Speaking of a backstory I would love for Anand to get the inside scoop on...

Something tells me they'd rather just talk about their successful products. :) Though it would be nice to hear where things went wrong with PhI.
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Yeah, like the NV3X post-mortem on what went wrong; it was a very informative story. The same should be done with the HD 2900 series and the Pentium 4; it would attract a lot of readers, starting with myself :p
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: evolucion8
Yeah, like the NV3X post-mortem on what went wrong; it was a very informative story. The same should be done with the HD 2900 series and the Pentium 4; it would attract a lot of readers, starting with myself :p

Isn't it mostly very simple though?

Phenom: It's all about a combination of two things: native quadcore design and moving to 65 nm. Intel has this magic rule that one should never try to do a new CPU design and a new manufacturing node at the same time. AMD didn't have much choice because they had to get a new CPU out ASAP in order to remain significant on the CPU market. So they took the gamble and went for a large native quadcore chip on a relatively new 65 nm process. The 65 nm process didn't turn out as well as AMD had hoped, and the big native quadcore chip had problems with yields and scaling to high clockspeeds. To top it all off, AMD took some shortcuts during CPU validation, which meant that the dreaded TLB bug was found by an OEM when the CPUs were already shipped. So AMD had to release a performance-killing patch and work on a new stepping to fix the bug in hardware.

NV3X: nVidia took the gamble that the new float shaders wouldn't be used that much in the first generation of DX9 games, and came up with a design that still used the same integer pipelines as their earlier DX8 cards (but updated with ps1.4), and added a single float unit.
This turned out to be a poor choice, especially because ATi's Radeon 9700 beat them to it and DID have full float pipelines, so there was little performance penalty for using full float shaders. So nVidia had to wait this round out until they finished their full float design, the 6-series.

Pentium 4: Intel figured that they could make the most out of their manufacturing advantage over AMD by going for very high clockspeeds. As we all know, performance is a combination of IPC and clockspeed... Until then, new CPUs had tried to increase both IPC and clockspeed at the same time. Intel figured it was also possible to trade off a bit of IPC for more clockspeed, resulting in yet more performance.
The initial 180 nm Willamette wasn't that much of a success, but after Intel shrunk the design to 130 nm and added extra cache in the Northwood, the design really came into its own. The Pentium 4 skyrocketed clockspeeds from about 1.5 GHz to over 3 GHz in just a matter of months.
But, when they wanted to shrink it again, to 90 nm, disaster struck. There wasn't a whole lot known about transistor leakage with such small feature sizes. Instead of linear leakage, the problem seemed to have more of an exponential nature. The result was that the 90 nm process couldn't really give power savings or higher clockspeeds, because it was just leaking away. This meant that the Pentium 4 hit a brick wall, rather than scaling to clockspeeds past 5 GHz, as Intel had originally envisioned.
It also triggered more research into new materials and other ways to control leakage and improve performance (e.g. different metal oxides, now hafnium, strained silicon, etc.). As a result, Intel's 65 nm process was a great success, even for the Pentium 4/D. Power consumption dropped greatly from the 90 nm variants, and they actually became good overclockers again.
And the same 65 nm process was also used for the hugely successful Core2 series.

The HD2900 is the only one where I don't really know why it went wrong. It was a large, almost 'over-engineered' chip... But compared to what nVidia did, it didn't seem that out of the ordinary. And since both companies have their chips made at TSMC, the problems weren't just manufacturing-related either. I think they just somehow slightly overstepped the boundaries, designing a chip that was a bit too large and had to run at too high a clockspeed to get into its 'comfort zone', so the TSMC manufacturing process just couldn't really deliver what ATi had intended. nVidia seemed to stay nicely within the limits with the G80.

I guess in all cases it's just a gamble that didn't work out. You have to gamble a bit, because it takes years to design a processor, and you can't really predict what the manufacturing technology will do by the time your processor design goes into production, or what kind of demands the software will have on your processor. But, that's what keeps the industry exciting :)
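
As a footnote to the "performance is a combination of IPC and clockspeed" rule of thumb above, here's a minimal sketch (with invented numbers, not measurements) of the trade-off Intel was betting on:

```python
# Toy model of the IPC-vs-clockspeed trade-off; all numbers are invented.

def perf(ipc, clock_ghz):
    """Rough single-threaded throughput: (instructions/cycle) * (cycles/second)."""
    return ipc * clock_ghz

high_ipc = perf(ipc=1.0, clock_ghz=2.0)   # shorter-pipeline, higher-IPC design
long_pipe = perf(ipc=0.7, clock_ghz=3.0)  # P4-style: less IPC, more clock

print(long_pipe / high_ipc)        # 1.05 -> a modest win at 3 GHz
print(perf(0.7, 5.0) / high_ipc)   # 1.75 -> the payoff, had 5 GHz ever arrived
```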
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,003
126
Originally posted by: evolucion8

Yeah, but the same thing could be said of processors, which reached a point where they couldn't get higher performance no matter how many optimizations were made to increase the IPC.
I'm not sure why you think that, given it didn't happen. Compare a 3 GHz C2D to a 3 GHz dual-core P4 and it'll probably be about 50% faster clock-for-clock.

If processors had really hit a wall with the P4, the only way forward would be to keep adding more and more P4 cores while everything else stayed the same. Clearly that never happened.

I can't see any 32-core Pentium 4, can you? But I can certainly see the Core 2 Duo in my system, which would demolish a 32-core P4 in 99% of situations.

The fact is, even if the core remains completely identical (which it doesn't), a die-shrink almost certainly guarantees higher clock speeds.

The C2D was an amazing processor: it dominated single-threaded performance thanks to its high IPC and low thermals. Even today, the number of applications that show a benefit from two cores is a drop in the bucket compared to the total software out there, much less those that benefit from four cores.

Multi-core is mainly about selling more cores through marketing; pushing the concept of more cores automatically implying better performance. In fact, replace the term "multi-core" with "MHz" and it'll be just like the P4 days. That, and the manufacturing process is refined enough that it's not a significant burden to add those extra cores.

On top of this, we've barely scratched the surface of organic CPUs, quantum computers, and optical CPUs.

The same issue will eventually happen to GPUs, which will reach a point where they become so big and power-hungry that they are too expensive to manufacture at reasonable yields.
If a single GPU hits a wall, then so does multi-GPU, given you'll reach the limit of how many GPUs you can pack onto a single-slot solution. At that point, the only way forward is adding more and more GPUs in extra PCIe slots using a rack system of sorts.

This is the part people don't seem to get when they talk about single GPUs hitting a "wall". If a single GPU hits a wall, then so does multi-GPU.

Again, that hasn't happened. The fact that something like the 4870X2 came after the 3870X2 is exactly because single GPUs haven't hit a wall.

In multi-GPU/CPU environments there will always be issues, but it is just a matter of time before they are ironed out; the multi-CPU environment is moving fast, and the same should happen in the GPU environment.
I'm not sure if you're familiar with the nature of programming drivers for multi-GPU setups, but the gist of it is that they require constant coddling and application-specific workarounds, because the solution will never be as robust as a single GPU. They're never going to be ironed out unless the application base stops growing.

This is much like programming for multiple cores, where simply spinning up multiple threads is absolutely no guarantee of any performance gain.

People have been working on a general solution to this for decades, but there's absolutely no silver bullet in sight anywhere.
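
The "no guarantee of any performance gain" point is essentially Amdahl's law. A minimal sketch, assuming a purely illustrative 20% serial fraction, shows the ceiling that applies whether the parallel units are cores or GPUs:

```python
# Amdahl's law: upper bound on speedup from n parallel units (cores or GPUs).
# The 20% serial fraction below is an assumed, illustrative figure.

def amdahl_speedup(n_units, serial_fraction=0.2):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_units)

for n in (1, 2, 4, 8, 32):
    print(f"{n:2d} units -> {amdahl_speedup(n):.2f}x")

# 1 -> 1.00x, 2 -> 1.67x, 4 -> 2.50x, 8 -> 3.33x, 32 -> 4.44x
# The hard ceiling is 1/0.2 = 5x, no matter how many units you add.
```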
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Originally posted by: BFG10K
I'm not sure why you think that, given it didn't happen. Compare a 3 GHz C2D to a 3 GHz dual-core P4 and it'll probably be about 50% faster clock-for-clock.

But I'm not talking about the P4; I'm talking about the C2D/Athlon 64 generation. Is the Phenom 2 much faster on a per-clock basis than the Athlon 64? Is the Core i7 much faster than the C2Q on a per-clock basis? NOO. They reached a point where it is much more expensive and very hard to increase the parallelism inside a CPU (IPC), so the best way forward is going multi-core, and that's what Intel is currently doing with its Core i7 architecture. Why didn't Intel make the Core i7 a quad-core/4-thread CPU?

If a single GPU hits a wall, then so does multi-GPU, given you'll reach the limit of how many GPUs you can pack onto a single-slot solution. At that point, the only way forward is adding more and more GPUs in extra PCIe slots using a rack system of sorts.

But that's too far-fetched. The same could be said of CPUs. Unlike a CPU, a GPU, which has lots of mini-CPUs inside, can be enhanced each generation by simply adding more of them, plus some other tweaks. That's something that can't be done with CPUs. Progress can't stop, but adding a 2nd GPU is the easiest way to increase performance.


This is much like programming for multiple cores, where simply spinning up multiple threads is absolutely no guarantee of any performance gain.

People have been working on a general solution to this for decades, but there's absolutely no silver bullet in sight anywhere.

But GPU graphics work is highly parallel; while there is no silver bullet yet, the possibility is there. CPU workloads aren't, and yet the benefits are there; but the developers are definitely far behind in exploiting the new technology.

Originally posted by: Scali
Isn't it mostly very simple though?

Phenom: It's all about a combination of two things: native quadcore design and moving to 65 nm. Intel has this magic rule that one should never try to do a new CPU design and a new manufacturing node at the same time. AMD didn't have much choice because they had to get a new CPU out ASAP in order to remain significant on the CPU market. So they took the gamble and went for a large native quadcore chip on a relatively new 65 nm process. The 65 nm process didn't turn out as well as AMD had hoped, and the big native quadcore chip had problems with yields and scaling to high clockspeeds. To top it all off, AMD took some shortcuts during CPU validation, which meant that the dreaded TLB bug was found by an OEM when the CPUs were already shipped. So AMD had to release a performance-killing patch and work on a new stepping to fix the bug in hardware.

I mean at the architecture level. AMD had been working for a while with the 65 nm process, so the problem is not the manufacturing process but something at the architecture level, like the TLB bug, which killed the CPU's performance. The Phenom 2 is actually faster on a per-clock basis compared to the Phenom 1, so something was wrong with the original Phenom.

NV3X: nVidia took the gamble that the new float shaders wouldn't be used that much in the first generation of DX9 games, and came up with a design that still used the same integer pipelines as their earlier DX8 cards (but updated with ps1.4), and added a single float unit.
This turned out to be a poor choice, especially because ATi's Radeon 9700 beat them to it and DID have full float pipelines, so there was little performance penalty for using full float shaders. So nVidia had to wait this round out until they finished their full float design, the 6-series.

You also forgot that the NV3X didn't have enough registers: when native DX9 code is running, the NV3X GPU simply runs out of registers and starts juggling and shuffling data around to make space, wasting performance. The NV35 came with floating-point pipelines, which the NV30 didn't have, but the register problem was still there. It also had few texture units and a weird vertex shader layout which wasn't optimal.

Pentium 4: Intel figured that they could make the most out of their manufacturing advantage over AMD by going for very high clockspeeds. As we all know, performance is a combination of IPC and clockspeed... Until then, new CPUs had tried to increase both IPC and clockspeed at the same time. Intel figured it was also possible to trade off a bit of IPC for more clockspeed, resulting in yet more performance.
The initial 180 nm Willamette wasn't that much of a success, but after Intel shrunk the design to 130 nm and added extra cache in the Northwood, the design really came into its own. The Pentium 4 skyrocketed clockspeeds from about 1.5 GHz to over 3 GHz in just a matter of months.
But, when they wanted to shrink it again, to 90 nm, disaster struck. There wasn't a whole lot known about transistor leakage with such small feature sizes. Instead of linear leakage, the problem seemed to have more of an exponential nature. The result was that the 90 nm process couldn't really give power savings or higher clockspeeds, because it was just leaking away. This meant that the Pentium 4 hit a brick wall, rather than scaling to clockspeeds past 5 GHz, as Intel had originally envisioned.
It also triggered more research into new materials and other ways to control leakage and improve performance (e.g. different metal oxides, now hafnium, strained silicon, etc.). As a result, Intel's 65 nm process was a great success, even for the Pentium 4/D. Power consumption dropped greatly from the 90 nm variants, and they actually became good overclockers again.
And the same 65 nm process was also used for the hugely successful Core2 series.

The Pentium 4 was never meant to be a multi-core CPU, with its small cache subsystem and its cache layout, which is placed after the decoding stage. I think Intel stated in 2001 that the Pentium 4 was meant to reach 10 GHz lolll

The HD2900 is the only one where I don't really know why it went wrong. It was a large, almost 'over-engineered' chip... But compared to what nVidia did, it didn't seem that out of the ordinary. And since both companies have their chips made at TSMC, the problems weren't just manufacturing-related either. I think they just somehow slightly overstepped the boundaries, designing a chip that was a bit too large and had to run at too high a clockspeed to get into its 'comfort zone', so the TSMC manufacturing process just couldn't really deliver what ATi had intended. nVidia seemed to stay nicely within the limits with the G80.

It was over-engineered, had broken ROPs, and had large TMUs, most of which would sit idle (they were trimmed and optimized in the HD 4x00 series). It was also clocked higher to try to remain competitive.
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: evolucion8
I mean at the architecture level. AMD had been working for a while with the 65 nm process, so the problem is not the manufacturing process but something at the architecture level, like the TLB bug, which killed the CPU's performance. The Phenom 2 is actually faster on a per-clock basis compared to the Phenom 1, so something was wrong with the original Phenom.

How long do you think it takes to design a CPU? Yes, 65 nm Athlons were around slightly before Barcelona was finished, but by then it was way too late in the design process to tune Barcelona to the limits of the 65 nm process.
Phenom 2 is faster per clock for a very simple reason: larger caches.

Originally posted by: evolucion8
You also forgot that the NV3X didn't have enough registers: when native DX9 code is running, the NV3X GPU simply runs out of registers and starts juggling and shuffling data around to make space, wasting performance. The NV35 came with floating-point pipelines, which the NV30 didn't have, but the register problem was still there. It also had few texture units and a weird vertex shader layout which wasn't optimal.

It's all a result of the design not being a full float design.

Originally posted by: evolucion8
The Pentium 4 was never meant to be a multi-core CPU, with its small cache subsystem and its cache layout, which is placed after the decoding stage. I think Intel stated in 2001 that the Pentium 4 was meant to reach 10 GHz lolll

The Athlon64 was never meant to be a multi-core CPU either, or the Pentium M for that matter.
Core i7 actually has a post-decode cache as well; the difference is that it's only used for loops.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: Idontcare
Originally posted by: Creig
Originally posted by: Idontcare
You don't get invited for an exclusive backstory on RV770 by beating up the chip designer in your reviews. A little back-scratching goes a long ways in the world of advertising and marketing.

That said, I much enjoyed reading about the RV770 backstory, so if it took a little "let's be somewhat selective in our evaluation procedures" wink-wink nod-nod to grease the skids for an eventual article like that then I have no issue with how this industry operates. It all comes with the territory I suppose.

Wow, IDC. So what you're saying is that AnandTech purposely favored AMD in previous reviews in order to be rewarded with this interview? Because that's how I read your statement.

Why is it that you think that AnandTech deliberately compromised their standards to favor AMD in order to get an interview? I seem to recall Anand and Derek making not-too-favorable comments in the past about AMD and ATi hardware and drivers.

Isn't there just the slightest possibility that AMD was rightfully proud of their overwhelming success with the RV770? And that they wanted to share the story behind the decisions that led to its success with one of the most often visited tech related websites on the net?

I'm just connecting dots, I didn't create the dots.

You can disagree with the picture it paints if you like/prefer an alternative way of viewing.

Did AT's review favor highlighting ATI's strong points and avoid shining light on the weaker points? I'm not judging that; you decide. Did AT score an exclusive on a juicy behind-the-scenes article on the RV770? Why was it an exclusive? Why just AT, when there are plenty of other non-profit review sites out there? Why not bring four or five of them together to show your pride to more than just one target audience?

I'm not claiming cause-and-effect here. But it is all marketing, is it not?

AT is a for-profit review site, AMD is a for-profit publicly held company.

What business do these two companies have getting together, ever, if it isn't to maximize their shareholder value and their profits?

I do not find the idea itself to be unscrupulous or incorrigible; this is not a holy/noble cause versus shill issue. AT is as it must be, and I find value in it. But I ain't going to cast aside what type of business they are operating, or refuse to acknowledge that marketing has but one and only one purpose.

It's not for me to tell you how to connect the dots, you deal with them however makes yourself comfortable. I have seen too much of how this industry works from the other side of it, I can't just unlearn and forget those years and years. When I see the dots I connect them and move on. Bottom line is no matter how you connect them it doesn't change a damn thing, the dots still exist, so do with them what you like.

edit: just to further attempt to make sure my comments aren't being taken out of unintended context - let me reiterate that I am in no way implying sinister or unsavory ethical conduct or behavior by the AT writing staff...I am saying that this business contains a lot of gray area, and you can't expect to develop a friendly relationship in good working order by being a hard-nose about it, a la HardOCP style. Of course these guys value their credibility and take pride in being impartial and doling out the tough love where needed.

After some constructive and enjoyable off-line discussion, I'd like to reverse my opinion on this topic and state for the record that I do NOT think there was any manner or degree of back-scratching involved in any of the AT articles.

My thoughts were neither well collected nor well presented on the subject, and even I now have a hard time understanding where I thought I was going with the dialogue.

I was certainly not intending to imply any questionable business ethics were involved...regardless, FWIW, if I could delete the post above from the forum I would do so as it no longer represents my opinion on the matter.

(thanks Creig for keeping it real while I fumbled around with some poorly conceived interpretations of past events)
 

evolucion8

Platinum Member
Jun 17, 2005
2,867
3
81
Originally posted by: Scali
What caused this sudden change of heart, Idontcare?

That sounds quite contradictory, asking a question and then saying I don't care hehe :p
 

Elfear

Diamond Member
May 30, 2004
7,165
824
126
Originally posted by: Idontcare

After some constructive and enjoyable off-line discussion, I'd like to reverse my opinion on this topic and state for the record that I do NOT think there was any manner or degree of back-scratching involved in any of the AT articles.

My thoughts were neither well collected nor well presented on the subject, and even I now have a hard time understanding where I thought I was going with the dialogue.

I was certainly not intending to imply any questionable business ethics were involved...regardless, FWIW, if I could delete the post above from the forum I would do so as it no longer represents my opinion on the matter.

(thanks Creig for keeping it real while I fumbled around with some poorly conceived interpretations of past events)

Kudos to you Idontcare. Takes a real man (or woman) to admit an error, especially on a forum where people can be rather predatory.
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: Elfear
Originally posted by: Idontcare

After some constructive and enjoyable off-line discussion, I'd like to reverse my opinion on this topic and state for the record that I do NOT think there was any manner or degree of back-scratching involved in any of the AT articles.

My thoughts were neither well collected nor well presented on the subject, and even I now have a hard time understanding where I thought I was going with the dialogue.

I was certainly not intending to imply any questionable business ethics were involved...regardless, FWIW, if I could delete the post above from the forum I would do so as it no longer represents my opinion on the matter.

(thanks Creig for keeping it real while I fumbled around with some poorly conceived interpretations of past events)

Kudos to you Idontcare. Takes a real man (or woman) to admit an error, especially on a forum where people can be rather predatory.

What exactly was the 'error' though?
I mean, Idontcare, do you mean that you felt you were wrong to post such allegations without any proof, or have you seen evidence that proves your earlier suspicions wrong?
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: Scali
What exactly was the 'error' though?
I mean, Idontcare, do you mean that you felt you were wrong to post such allegations without any proof, or have you seen evidence that proves your earlier suspicions wrong?

I realized in hindsight I was making the grave error of having a theory first and then seeking out data that supported it. Failure of the scientific method on my part.

Yeah, I was connecting dots all right, but what it really turned out to be was me just collecting the noise (and discarding the rest of the dataset) to draw a picture I wanted to see (call me a cynic, I guess; age does that) while all the volumes of counter-points were escaping my conscious assessment.

At any rate I had some helpful and polite PMs that inquired about my line of thinking in a Socratic manner of teaching, resulting (as intended, I suspect) in me having an epiphany regarding the root of the error in my logic.

Long story short, not all smoke leads to a fire, and sometimes what at first looks like smoke can turn out to just be a fog bank that clears once the sun comes up and burns it off.

In summary... the error was my connecting so few dots in a needlessly cynical fashion, without accounting for the myriad other articles that have graced AT's website, which in toto overwhelmingly support the conclusion that its reviews can hardly be construed as showing favoritism or playing only to the strengths of the hardware being reviewed.
 

Scali

Banned
Dec 3, 2004
2,495
1
0
Originally posted by: Idontcare
In summary... the error was my connecting so few dots in a needlessly cynical fashion, without accounting for the myriad other articles that have graced AT's website, which in toto overwhelmingly support the conclusion that its reviews can hardly be construed as showing favoritism or playing only to the strengths of the hardware being reviewed.

True... I can't quite shake the feeling that the 3870X2 review was a bit of an outlier though.
Then again, as you said earlier, there's not necessarily something wrong with that.
It often happens that a vendor only lets you review their products if you agree to a few basic rules. nVidia did it recently, I think with the GTX275 introduction. I believe the rule was something like: you may benchmark 6 games, and we specify 5 out of the 6.
This led to all sites having a virtually identical benchmark suite, and obviously nVidia had cherry-picked games that ran well on their architecture.
When vendors give you such "offers you can't refuse", what is a website going to do? Either you take the offer, or you don't get the hardware, so you don't get a review, and you don't get hits. And as you say, it's about the hits in the end; they bring in the advertising money.

I suppose that because of deals like these in the past, we get a bit jaded.
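
To make the cherry-picking effect concrete, here's a toy calculation (every ratio invented for illustration) of how a vendor-specified 5-of-6 benchmark suite can skew the average:

```python
# Toy benchmark cherry-picking; every ratio below is invented.
# Values are fps ratios of the vendor's card vs. the rival's (>1 = vendor wins).
from statistics import geometric_mean

suite = {"game1": 1.15, "game2": 1.10, "game3": 1.20,
         "game4": 1.05, "game5": 1.10, "game6": 0.70}
vendor_picks = ["game1", "game2", "game3", "game4", "game5"]  # rival wins game6

print(f"vendor's 5 picks:  {geometric_mean([suite[g] for g in vendor_picks]):.3f}")  # ~1.119
print(f"full 6-game suite: {geometric_mean(list(suite.values())):.3f}")              # ~1.035
# The same card looks ~12% faster on the curated suite vs. ~3.5% on the full one.
```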