evolucion8
I agree with you; maybe if Crossfire and SLI use a shared memory pool in the future, they may scale better.
Originally posted by: OCguy
Originally posted by: evolucion8
Originally posted by: Scali
They finally got it right with the 4870X2, but that doesn't automatically mean that AMD will get it right again for this generation.
They did!! Welcome to 2009!! The HD 4870X2 is the fastest single-PCB card on the planet; no single nVidia GPU can beat it. The only card that barely outperforms it is the sandwich GX2, which uses 2 PCBs!! You may say that ATi needs 2 GPUs to compete with 1 GPU from nVidia, but I say that nVidia needs a 1.4B-transistor chip which is twice as big to be competitive with the 959M-transistor ATi chip. How can that be?
Fastest "single pcb" :roll:
And Phenom was the fastest quad for a while as well by your standards.
Originally posted by: evolucion8
Originally posted by: OCguy
Fastest "single pcb" :roll:
And Phenom was the fastest quad for a while as well by your standards.
If you don't have something nice or informative to say to prove me wrong, don't say anything at all, thank you.
Originally posted by: Idontcare
You don't get invited for an exclusive backstory on RV770 by beating up the chip designer in your reviews. A little back-scratching goes a long ways in the world of advertising and marketing.
That said, I much enjoyed reading about the RV770 backstory, so if it took a little "let's be somewhat selective in our evaluation procedures" wink-wink nod-nod to grease the skids for an eventual article like that then I have no issue with how this industry operates. It all comes with the territory I suppose.
Originally posted by: Creig
Originally posted by: Idontcare
You don't get invited for an exclusive backstory on RV770 by beating up the chip designer in your reviews. A little back-scratching goes a long ways in the world of advertising and marketing.
That said, I much enjoyed reading about the RV770 backstory, so if it took a little "let's be somewhat selective in our evaluation procedures" wink-wink nod-nod to grease the skids for an eventual article like that then I have no issue with how this industry operates. It all comes with the territory I suppose.
Wow, IDC. So what you're saying is that AnandTech purposely favored AMD in previous reviews in order to be rewarded with this interview? Because that's how I read your statement.
Why is it that you think that AnandTech deliberately compromised their standards to favor AMD in order to get an interview? I seem to recall Anand and Derek making not-too-favorable comments in the past about AMD and ATi hardware and drivers.
Isn't there just the slightest possibility that AMD was rightfully proud of their overwhelming success with the RV770? And that they wanted to share the story behind the decisions that led to its success with one of the most often visited tech related websites on the net?
Originally posted by: Scali
Originally posted by: evolucion8
Originally posted by: OCguy
Fastest "single pcb" :roll:
And Phenom was the fastest quad for a while as well by your standards.
If you don't have something nice or informative to say to prove me wrong, don't say anything at all, thank you.
Well, he just means that the "fastest single pcb" criterion is a rather arbitrary one, just like "native quadcore". It seems to have little meaning to the end-user.
Originally posted by: OCguy
Originally posted by: Scali
Originally posted by: evolucion8
Originally posted by: OCguy
Fastest "single pcb" :roll:
And Phenom was the fastest quad for a while as well by your standards.
If you don't have something nice or informative to say to prove me wrong, don't say anything at all, thank you.
Well, he just means that the "fastest single pcb" criterion is a rather arbitrary one, just like "native quadcore". It seems to have little meaning to the end-user.
Exactly.
Originally posted by: nitromullet
How are we supposed to carry on a debate if one side continues to dictate what can be discussed and the criteria used to measure success?
Originally posted by: nitromullet
In one post, we're supposed to disregard the 2900XT because it was a 'flop'.
Originally posted by: bryanW1995
Originally posted by: SickBeast
I'll bet that the CPU division at AMD helped 'ATi' to re-vamp the 4890 GPU. What Xbit is describing reminds me of the Thunderbird Athlons. AMD is very good at that stuff.
too bad they haven't spent more time on k9/k10/etc...
Originally posted by: SickBeast
Originally posted by: bryanW1995
Originally posted by: SickBeast
I'll bet that the CPU division at AMD helped 'ATi' to re-vamp the 4890 GPU. What Xbit is describing reminds me of the Thunderbird Athlons. AMD is very good at that stuff.
too bad they haven't spent more time on k9/k10/etc...
What do you think the Phenom 2 is?
AMD has publicly admitted that they messed up the Phenom 1's original design and had to rush it out as a result.
Originally posted by: Idontcare
Originally posted by: SickBeast
Originally posted by: bryanW1995
Originally posted by: SickBeast
I'll bet that the CPU division at AMD helped 'ATi' to re-vamp the 4890 GPU. What Xbit is describing reminds me of the Thunderbird Athlons. AMD is very good at that stuff.
too bad they haven't spent more time on k9/k10/etc...
What do you think the Phenom 2 is?
AMD has publicly admitted that they messed up the Phenom 1's original design and had to rush it out as a result.
Speaking of a backstory I would love for Anand to get the inside scoop on...
Originally posted by: evolucion8
Yeah, like the NV3X post-mortem and what went wrong; it was a very informative story. The same should be done with the HD 2900 series and the Pentium 4. It would attract a lot of readers, starting with myself!
Originally posted by: evolucion8
Yeah, but the same thing could be said to the processors, which reached a point that it couldn't get higher performance no matter how many optimizations were made to increase the IPC.
I'm not sure why you think that given it didn't happen. Compare a 3 GHz C2D to a 3 GHz dual-core P4 and it'll probably be about 50% faster clock-for-clock.
Originally posted by: evolucion8
The same issue will happen eventually to the GPUs which will reach a point that they would become so big and power hungry that will be too expensive to manufacture and reach reasonable yields.
If a single GPU hits a wall, then so does multi-GPU given you'll reach the limit of how many GPUs you can pack onto a single slot solution. At that point, the only way forward is adding more and more GPUs in extra PCIe slots using a rack system of sorts.
Originally posted by: evolucion8
In multi GPU/CPU environments always there will be issues, but is just a matter of time to iron them out, multi CPU environment is moving fast, the same should happen to the GPU environment.
I'm not sure if you're familiar with the nature of programming drivers for multi-GPU setups, but the gist of it is that they require constant coddling and application-specific workarounds because the solution will never be as robust as a single GPU. They're never going to be ironed out unless the application base stops growing.
Originally posted by: BFG10K
I'm not sure why you think that given it didn't happen. Compare a 3 GHz C2D to a 3 GHz dual-core P4 and it'll probably be about 50% faster clock-for-clock.
If a single GPU hits a wall, then so does multi-GPU given you'll reach the limit of how many GPUs you can pack onto a single slot solution. At that point, the only way forward is adding more and more GPUs in extra PCIe slots using a rack system of sorts.
This is much like programming for multiple cores where simply spinning multiple threads is absolutely no guarantee of any performance gain.
People have been working on a general solution for this for decades, but there's absolutely no silver bullet in sight anywhere.
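BFG10K's point that neither stacking GPUs nor spinning up extra threads guarantees any gain is essentially Amdahl's law. A minimal sketch (the 90% parallel fraction below is an illustrative assumption, not a measurement of any real workload):

```python
# Amdahl's law: the speedup from n parallel units when only a
# fraction p of the work can actually run in parallel.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Even with 90% of the work parallelizable, 8 units give well
# under 8x, and the speedup can never exceed 1/(1-p) = 10x.
for n in (1, 2, 4, 8):
    print(n, round(amdahl_speedup(0.9, n), 2))
```

This is why multi-GPU scaling flattens out: the serial fraction dominates long before you run out of hardware to add.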
Originally posted by: Scali
Isn't it mostly very simple though?
Phenom: It's all about a combination of two things: native quadcore design and moving to 65 nm. Intel has this magic rule that one should never try to do a new CPU design and a new manufacturing node at the same time. AMD didn't have much choice because they had to get a new CPU out ASAP in order to remain significant on the CPU market. So they took the gamble and went for a large native quadcore chip on a relatively new 65 nm process. The 65 nm process didn't turn out as well as AMD had hoped, and the big native quadcore chip had problems with yields and scaling to high clockspeeds. To top it all off, AMD took some shortcuts during CPU validation, which meant that the dreaded TLB bug was found by an OEM when the CPUs were already shipped. So AMD had to release a performance-killing patch and work on a new stepping to fix the bug in hardware.
NV3X: nVidia took the gamble that the new float shaders wouldn't be used that much in the first generation of DX9 games, and came up with a design that still used the same integer pipelines as their earlier DX8 cards (but updated with ps1.4), and added a single float unit.
This turned out to be a poor choice, probably especially because ATi's Radeon 9700 beat them to it, and DID have full float pipelines, so little performance penalty for using full float shaders. So nVidia had to wait this round out until they finished their full float design, the 6-series.
Pentium 4: Intel figured that they could make the most out of their manufacturing advantage over AMD by going for very high clockspeeds. As we all know, performance is a combination of IPC and clockspeed... Up to now, new CPUs tried to increase both IPC and clockspeed at the same time. Intel figured it was also possible to trade off a bit of IPC for more clockspeed, resulting in yet more performance.
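The IPC-versus-clockspeed trade-off Scali describes can be put in one line: performance is roughly IPC times clock. A toy comparison, with hypothetical illustrative numbers rather than benchmarks of any real chip:

```python
# Relative performance as IPC x clockspeed. The NetBurst bet was
# that sacrificing some IPC for a much higher clock is a net win.
def perf(ipc, clock_ghz):
    return ipc * clock_ghz

classic = perf(ipc=1.2, clock_ghz=1.4)    # higher IPC, lower clock
netburst = perf(ipc=0.9, clock_ghz=2.4)   # lower IPC, much higher clock
print(classic, netburst)  # the lower-IPC design still wins here
```

The bet only pays off for as long as the clockspeed keeps scaling, which is exactly what broke down at 90 nm.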
The initial 180 nm Willamette wasn't that much of a success, but after Intel shrunk the design to 130 nm and added extra cache in the Northwood, the design really came into its own. The Pentium 4 skyrocketed clockspeeds from about 1.5 GHz to over 3 GHz in the space of about two years.
But, when they wanted to shrink it again, to 90 nm, disaster struck. There wasn't a whole lot known about transistor leakage with such small feature sizes. Instead of linear leakage, the problem seemed to have more of an exponential nature. The result was that the 90 nm process couldn't really give power savings or higher clockspeeds, because it was just leaking away. This meant that the Pentium 4 hit a brick wall, rather than scaling to clockspeeds past 5 GHz, as Intel had originally envisioned.
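The exponential rather than linear behavior described here matches the standard first-order subthreshold leakage model, where leakage current grows exponentially as the threshold voltage drops. A sketch with typical textbook constants (illustrative only, not process data):

```python
import math

VT = 0.026  # thermal voltage kT/q at room temperature, in volts
n = 1.5     # subthreshold slope factor, typically between 1 and 2

def leakage_ratio(vth_old, vth_new):
    """Relative increase in subthreshold leakage when Vth is lowered."""
    return math.exp((vth_old - vth_new) / (n * VT))

# Lowering Vth by just 100 mV multiplies leakage by roughly 13x,
# which is why shrinking nodes without new materials hit a wall.
print(round(leakage_ratio(0.40, 0.30), 1))
```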
It also triggered more research into new materials and other ways to control leakage and improve performance (e.g. different metal oxides, now hafnium, strained silicon, etc). As a result, Intel's 65 nm process was a great success, even for the Pentium 4/D. Power consumption dropped greatly from the 90 nm variants, and they actually became good overclockers again.
And the same 65 nm process was also used for the hugely successful Core2 series.
The HD2900 is the only one where I don't really know why it went wrong. It was a large, almost 'over-engineered' chip... But compared to what nVidia did, it didn't seem that out of the ordinary. And since both companies have their chips made at TSMC, it wasn't just a manufacturing-related problem either. I think they just somehow slightly overstepped the boundaries: they designed a chip that was a bit too large and had to run at too high a clockspeed to get into its 'comfort zone', and the TSMC manufacturing process just couldn't deliver what ATi had intended. nVidia seemed to stay nicely within the limits with the G80.
Originally posted by: evolucion8
I mean at the architecture level. AMD had been working with the 65nm process for a while, so the problem is not the manufacturing process but something at the architecture level, like for example the TLB bug which killed the CPU's performance. The Phenom 2 is actually faster on a per-clock basis than the Phenom 1, so something was wrong with the original Phenom.
Originally posted by: evolucion8
You also forgot that the NV3X didn't have enough registers; when native DX9 code is running, the NV3X GPU simply runs out of registers and starts juggling and scrambling data to make space, wasting performance. The NV35 came with floating-point pipelines, which the NV30 didn't have, but the register problem was still there. It also had few texture units and a weird vertex shader layout which wasn't optimal.
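The register-pressure problem described above can be thought of as a spill cost: once a shader needs more live temporaries than the hardware provides, the extras spill and every access gets slower. A toy model (all numbers hypothetical, not actual NV3X specifics):

```python
# Toy model of shader register pressure: temporaries that don't fit
# in hardware registers spill, adding a per-instruction penalty.
def shader_cost(instructions, live_temps, hw_registers,
                base_cycles=1, spill_penalty=4):
    spills = max(0, live_temps - hw_registers)
    return instructions * (base_cycles + spills * spill_penalty)

fits = shader_cost(100, live_temps=2, hw_registers=4)      # no spilling
spilling = shader_cost(100, live_temps=6, hw_registers=4)  # 2 temps spill
print(fits, spilling)  # the spilling shader is many times slower
```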
Originally posted by: evolucion8
The Pentium 4 was never meant to be a multi-core CPU, with its little cache subsystem and its trace cache layout, which is placed after the decoding stage. I think Intel stated in 2001 that the Pentium 4 was meant to reach 10 GHz lol
Originally posted by: Idontcare
Originally posted by: Creig
Originally posted by: Idontcare
You don't get invited for an exclusive backstory on RV770 by beating up the chip designer in your reviews. A little back-scratching goes a long ways in the world of advertising and marketing.
That said, I much enjoyed reading about the RV770 backstory, so if it took a little "let's be somewhat selective in our evaluation procedures" wink-wink nod-nod to grease the skids for an eventual article like that then I have no issue with how this industry operates. It all comes with the territory I suppose.
Wow, IDC. So what you're saying is that AnandTech purposely favored AMD in previous reviews in order to be rewarded with this interview? Because that's how I read your statement.
Why is it that you think that AnandTech deliberately compromised their standards to favor AMD in order to get an interview? I seem to recall Anand and Derek making not-too-favorable comments in the past about AMD and ATi hardware and drivers.
Isn't there just the slightest possibility that AMD was rightfully proud of their overwhelming success with the RV770? And that they wanted to share the story behind the decisions that led to its success with one of the most often visited tech related websites on the net?
I'm just connecting dots, I didn't create the dots.
You can disagree with the picture it paints if you like/prefer an alternative way of viewing.
Did AT's review favor highlighting ATI's strong points and avoid shining light on the weaker points? I'm not judging that; you decide. Did AT score an exclusive on a juicy behind-the-scenes article on the RV770? Why was it an exclusive? Why just AT? There are plenty of other non-profit review sites out there, aren't there? Why not bring four or five of them together to show your pride to more than just one target audience?
I'm not claiming cause-and-effect here. But it is all marketing, is it not?
AT is a for-profit review site, AMD is a for-profit publicly held company.
What business do these two companies have getting together, ever, if it isn't to maximize their shareholder value and their profits?
I do not find the idea itself to be unscrupulous or incorrigible, this is not a holy/noble cause versus shill issue, AT is as it must be and I find value in it. But I ain't going to cast aside what type of business they are operating and refuse to acknowledge that marketing has but one and only one purpose.
It's not for me to tell you how to connect the dots, you deal with them however makes yourself comfortable. I have seen too much of how this industry works from the other side of it, I can't just unlearn and forget those years and years. When I see the dots I connect them and move on. Bottom line is no matter how you connect them it doesn't change a damn thing, the dots still exist, so do with them what you like.
edit: just to further attempt to make sure my comments aren't being taken out of unintended context - let me reiterate that I am in no way implying sinister or unsavory ethical conduct or behavior by the AT writing staff... I am saying that this business contains a lot of gray area, and you can't expect to develop a friendly working relationship by being hard-nosed about it, ala HardOCP style. Of course these guys value their credibility and take pride in being impartial and doling out the tough love where needed.
Originally posted by: Scali
What caused this sudden change of heart, Idontcare?
Originally posted by: Idontcare
After some constructive and enjoyable off-line discussion on the topic I'd like to reverse my opinion on this topic and state for the record I do NOT think there was any manner or degree of back scratching involved in any of the AT articles.
My thoughts weren't well collected nor well presented on the subject and even I now have a hard time understanding where I thought I was going with the dialogue.
I was certainly not intending to imply any questionable business ethics were involved...regardless, FWIW, if I could delete the post above from the forum I would do so as it no longer represents my opinion on the matter.
(thanks Creig for keeping it real while I fumbled around with some poorly conceived interpretations of past events)
Originally posted by: Elfear
Originally posted by: Idontcare
After some constructive and enjoyable off-line discussion on the topic I'd like to reverse my opinion on this topic and state for the record I do NOT think there was any manner or degree of back scratching involved in any of the AT articles.
My thoughts weren't well collected nor well presented on the subject and even I now have a hard time understanding where I thought I was going with the dialogue.
I was certainly not intending to imply any questionable business ethics were involved...regardless, FWIW, if I could delete the post above from the forum I would do so as it no longer represents my opinion on the matter.
(thanks Creig for keeping it real while I fumbled around with some poorly conceived interpretations of past events)
Kudos to you Idontcare. Takes a real man (or woman) to admit an error, especially on a forum where people can be rather predatory.
Originally posted by: Scali
What exactly was the 'error' though?
I mean, Idontcare, do you mean that you felt you were wrong to post such allegations without any proof, or have you seen evidence that proves your earlier suspicions wrong?
Originally posted by: Idontcare
In summary... the error was my connecting so few dots in a needlessly cynical fashion, without accounting for the myriad of other articles that have graced AT's website, which in toto overwhelmingly support the conclusion that AT's reviews can hardly be construed as showing favoritism or as playing only to the strengths of the hardware being reviewed.
