
AMD confirms new architecture + launch timeframe for HD6xxx

literally seeing their market share disappear in less than a couple of years...

Hyperbole. Only after the Fermi debacle did NV finally lose the majority of DX11 sales for a while. Given the fire-sale prices on the GTS 450 and GTX 460, it seems that NV is perfectly fine with fighting for market share by lowering prices. Keep in mind NV makes most of its profits not in gaming cards but in things like professional graphics cards, where it owns more than 80% of the market and has margins AMD could only dream of.

NV bankrolls what?

Does TWIMTBP mean anything to you? NV bankrolls stuff like assisting in creating CUDA programming classes, too.

"plausible PhysX"...?

Imagine if NV only had 25% of gaming card market share. Would PhysX make any sense? No. If NV had only 25% of the market it would probably give up and join AMD in going OpenCL. Worse, maybe AMD would be the one going proprietary.

I wrote it here a year ago and people didn't believe me: NV is doomed if they cannot convince enough companies to switch to CUDA. They lost their entire chipset business, their usual fake speccing did not work in the mobile business so Tegra is a disaster so far, and as ever-stronger x86-based integrated CPU-GPU chips arrive, NV's income will fall even lower. They need CUDA to survive, but I'm highly skeptical about its potential market, regardless of all the BS they try to sell to their investors.

Certainly NV took some painful blows like Intel shutting them out of chipsets, the Fermi setback at 40nm, Intel and AMD both going the Fusion route (CPU + GPU, single die) and thus wiping out the low-end discrete card market, doing only so-so in retaining console GPU contracts (the future could be worse; I heard it may be a clean sweep in favor of AMD for the next console generation, ouch), and possibly getting more pricing pressure from Intel should it ever decide to make a sustained push into discrete GPUs again (Larrabee 2).

Ultimately NV may end up a much smaller company, but don't count NV out so long as it owns the professional graphics and can hang even with AMD in discrete gaming graphics. NV also leads in GPGPU (CUDA is easier to program for and already exists). We'll see how NV does in the mobile space, though I agree that so far Tegra-series chips have underwhelmed, financially speaking.


They outshipped Nvidia last quarter, although Nvidia still has its Quadro stronghold.

Yes, but NV didn't take kindly to that and is putting pricing pressure on AMD with GTS 450 and GTX 460 price cuts. AMD probably isn't interested in fighting back TOO hard, not when it's still bleeding red ink on the CPU side.
 
By the way, Charlie & co. at S|A seem to be strongly hinting at a 256-bit bus for Barts XT: http://www.semiaccurate.com/2010/09/20/northern-islands-barts/

The takeaway points as I interpret them:

1. Juniper is a well-balanced GPU that wasn't particularly memory bandwidth limited by a 128-bit bus (this has been shown true at other sites over the last year, too).

2. However, Barts XT is a faster GPU than Juniper and thus may have become overly memory-limited with a 128-bit bus, even if one were to use 20% faster GDDR5 VRAM.

3. The cost of going to a 256-bit bus is not that high on the GPU die, and overall the cost may be only ~$10, with most of that going to costlier PCBs.

Putting the pieces together, I'm interpreting this hint to mean that Barts XT will be well over 20% faster than Juniper, else AMD would rather just stick with the 128-bit bus and use faster GDDR5 chips (assuming that faster chips are cheaper than using die space + costlier PCBs). That probably wasn't news to anyone who has been keeping track of rumors, but it's nice to see this latest tidbit be consistent with existing Barts XT rumors.
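
To put rough numbers on that reasoning (my own back-of-the-envelope sketch, not from the S|A article; the 4.8 Gbps GDDR5 speed is roughly what the HD 5770 ships with, if memory serves):

#include <stdio.h>

/* Toy GDDR5 bandwidth calculator:
   GB/s = (bus width in bytes) * effective transfer rate in GT/s. */
static double bandwidth_gb_s(int bus_bits, double rate_gt_s) {
    return (bus_bits / 8.0) * rate_gt_s;
}

int main(void) {
    printf("128-bit @ 4.8 GT/s  (Juniper-class) : %6.1f GB/s\n", bandwidth_gb_s(128, 4.8));   /* ~76.8  */
    printf("128-bit @ 5.76 GT/s (20%% faster)    : %6.1f GB/s\n", bandwidth_gb_s(128, 5.76));  /* ~92    */
    printf("256-bit @ 4.8 GT/s  (wider bus)     : %6.1f GB/s\n", bandwidth_gb_s(256, 4.8));   /* ~153.6 */
    return 0;
}

In other words, faster memory on a 128-bit bus buys you maybe ~20% more bandwidth, while a 256-bit bus roughly doubles it, which is only worth paying for if the GPU itself is a lot faster than Juniper.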

Given the above, and given that a high-ranking AMD official stated that HD6xxx will have a new architecture (see the first post in this thread for more info), I certainly hope nobody still believes that garbage about Barts being a mere rebadge of Juniper. 🙂
 
Yes, but NV didn't take kindly to that and is putting pricing pressure on AMD with GTS 450 and GTX 460 price cuts. AMD probably isn't interested in fighting back TOO hard, not when it's still bleeding red ink on the CPU side.

Not to mention the fact that they are already on the threshold of launching their next-gen cards. It's definitely possible that Nvidia will regain the marketshare crown next quarter (they'll definitely gain some desktop share, but mobile is harder to predict), but if it does happen it will most likely be a single-quarter anomaly.

AMD definitely needs to spruce up their Firepro drivers (it is rumored they'll be putting more emphasis here), but even if they do, it will take them a while to overcome Nvidia's momentum in the pro space. Probably a *long* while.

CUDA is an area where Nvidia is ahead, and it may have reached some kind of critical mass in universities, but I can't see it becoming a mainstream standard. When Fusion and Sandy Bridge hit, people won't be thinking about CUDA. They'll be thinking of how to extract that extra performance already in the processor, and they'll want one language that works for both AMD and Intel processors with integrated GPUs. That'll mean OpenCL and/or DirectCompute. CUDA is probably faster, due to tight integration with a specific architecture, but in the end that will be irrelevant. The best CUDA can hope to be is the new Fortran in universities.
 
I agree with you about DX11, but remember that there was a long period of time prior to Fermi in which NV had ATI on the ropes. There is a lot of pro-NV inertia, and one bad product launch won't be enough to derail that. The vast majority of Steam users are on NV cards, for instance. There is a large NV ecosystem (PhysX and CUDA). F@H was written with NV GPUs in mind. So were some graphics programs. That large NV ecosystem will not vanish overnight.

As for pro cards, Firepro hardware alone won't be enough, given CUDA penetration and NV software support. Basically AMD can't tie NV and hope to gain marketshare; AMD would need to outright beat NV in hardware--and in software and support, too.

It's hard to say what will end up happening in GPGPU/HPC/supercomputing/etc. with CUDA. CUDA's big advantage is in the ubiquity of C-fluent programmers--something I wouldn't casually dismiss. But I can also see your scenario happening, too.

Not to mention the fact that they are already on the threshold of launching their next-gen cards. It's definitely possible that Nvidia will regain the marketshare crown next quarter (they'll definitely gain some desktop share, but mobile is harder to predict), but if it does happen it will most likely be a single-quarter anomaly.

AMD definitely needs to spruce up their Firepro drivers (it is rumored they'll be putting more emphasis here), but even if they do, it will take them a while to overcome Nvidia's momentum in the pro space. Probably a *long* while.

CUDA is an area where Nvidia is ahead, and it may have reached some kind of critical mass in universities, but I can't see it becoming a mainstream standard. When Fusion and Sandy Bridge hit, people won't be thinking about CUDA. They'll be thinking of how to extract that extra performance already in the processor, and they'll want one language that works for both AMD and Intel processors with integrated GPUs. That'll mean OpenCL and/or DirectCompute. CUDA is probably faster, due to tight integration with a specific architecture, but in the end that will be irrelevant. The best CUDA can hope to be is the new Fortran in universities.
 
I agree with you about DX11, but remember that there was a long period of time prior to Fermi in which NV had ATI on the ropes. There is a lot of pro-NV inertia, and one bad product launch won't be enough to derail that. The vast majority of Steam users are on NV cards, for instance.

Weren't we talking about marketshare on a quarter-to-quarter basis, and not on an overall market penetration basis?

Firepro hardware alone won't be enough, though, given CUDA penetration and NV software support. Basically AMD can't tie NV and hope to gain marketshare; AMD would need to outright beat NV. In software, too.

By pro, I mean pro 3D rendering. In other words, I'm talking about Quadro, not Tesla. GPGPU is minuscule in comparison to the pro-rendering market. Quadro is NV's cash cow, and while NV is creating a new market for themselves with Tesla, I doubt they've sold enough Tesla cards to cover the R&D expenses and the manufacturing expenses of having all that GPGPU functionality in their non-GPGPU cards. Tesla has the *potential* to be another cash cow like Quadro, and that's why NV is pursuing it. But yeah, for the vast majority of the pro market (which is rendering) CUDA is meaningless and AMD needs to improve their drivers.

It's hard to say what will end up happening in GPGPU/HPC/supercomputing/etc. with CUDA. CUDA's big advantage is in the ubiquity of C-fluent programmers--something I wouldn't casually dismiss. But I can also see your scenario happening, too.

I think OpenCL has that advantage as well. It is very CUDA-like, but is further removed from the hardware and completely vendor-agnostic. The fact that soon all (or at least the vast majority) of CPUs will have GPU hardware all but guarantees that hardware-specific languages like CUDA will die in the mainstream market. In scientific/HPC computing, however, a language tailored to run fast on specific hardware makes a lot of sense.
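
To make the "very CUDA-like" point concrete, here's a toy vector-add written both ways (purely illustrative; the kernel names and launch parameters are mine, not from any real codebase):

// CUDA: vendor-specific, built offline with nvcc, runs on NVIDIA GPUs only.
__global__ void vec_add_cuda(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per element
    if (i < n) c[i] = a[i] + b[i];
}
// Host launch: vec_add_cuda<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);

// OpenCL: nearly identical kernel source, but it is carried as a string and
// compiled at runtime by whatever vendor's driver is installed (AMD, NVIDIA, ...).
static const char* vec_add_cl_src =
    "__kernel void vec_add(__global const float* a,\n"
    "                      __global const float* b,\n"
    "                      __global float* c, int n) {\n"
    "    int i = get_global_id(0);\n"
    "    if (i < n) c[i] = a[i] + b[i];\n"
    "}\n";
// Host side goes through clCreateProgramWithSource / clBuildProgram /
// clSetKernelArg / clEnqueueNDRangeKernel instead of the <<<>>> launch syntax.

The kernel bodies are practically line-for-line the same; the real difference is that the OpenCL version compiles at runtime against whichever driver is present, so the same source runs on AMD, NVIDIA, or anything else with a conformant driver.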
 
Weren't we talking about marketshare on a quarter-to-quarter basis, and not on an overall market penetration basis?

I think we were talking past each other a bit, sorry. I was thinking more in terms of overall NV ecosystem; the installed base. The bathtub water is still much more green than red, despite the fact that lately the water flowing into the tub has been equally red and green. And the tub drain isn't as fast as one might think. (Translation: Pre-DX11 cards are slowly either dying, getting thrown out, or being replaced by DX11 cards, but the rate isn't so fast as to put a large dent in NV's ecosystem yet. Most gaming-grade cards are green, apparently. Look at Steam Hardware Surveys for instance.) F@H is still optimized for NV GPUs. So are some graphics programs. NV has had YEARS to build up its ecosystem, even before G80. There is enough pro-NV inertia out there for NV to easily survive one bad year in gaming cards. However, now NV is staring at the possibility of losing two years in a row.

If I were NV, I'd sacrifice profit in discrete non-pro graphics cards in order to maintain market share and thus maintain my ecosystem, so that people continue to write programs with my GPUs in mind, continue to use CUDA, continue to use PhysX, and continue to consider my GPUs relevant. Then I'd hope that my next GPU were a smash hit and allowed me to raise prices and regain my profit margin. This is in fact what NV appears to be doing with its recent price cuts on Fermi derivatives like the GTX 460.

By pro, I mean pro 3D rendering. In other words, I'm talking about Quadro, not Tesla. GPGPU is minuscule in comparison to the pro-rendering market. Quadro is NV's cash cow, and while NV is creating a new market for themselves with Tesla, I doubt they've sold enough Tesla cards to cover the R&D expenses and the manufacturing expenses of having all that GPGPU functionality in their non-GPGPU cards. Tesla has the *potential* to be another cash cow like Quadro, and that's why NV is pursuing it. But yeah, for the vast majority of the pro market (which is rendering) CUDA is meaningless and AMD needs to improve their drivers.

I mostly agree with this, but I thought some high-end graphics programs have GPU acceleration that is compatible with or optimized for NV GPUs but not ATI GPUs? I could be mistaken, as it's been a while since I looked into it. Even if this weren't the case, though, NV's dev relationships and software and support give it a great defensive position. Avatar wasn't rendered on ATI hardware, to nobody's surprise, I'm sure. 😉

I think OpenCL has that advantage as well. It is very CUDA-like, but is further removed from the hardware and completely vendor-agnostic. The fact that soon all (or at least the vast majority) of CPUs will have GPU hardware all but guarantees that hardware-specific languages like CUDA will die in the mainstream market. In scientific/HPC computing, however, a language tailored to run fast on specific hardware makes a lot of sense.

Like I said, we'll see. I think your scenario is plausible, even probable, but not inevitable.
 
If AMD releases a high-end card for professional users based on the new NI architecture (which they will), then NVDA's lead in this area will disappear as quickly as AMD's lead did in the enthusiast CPU market after Core 2 was released.
Pro developers aren't motivated by fanboyism or brand loyalty, which is why NVDA has the lead in this area. Their product has been better... soon it won't be.
 
If AMD releases a high-end card for professional users based on the new NI architecture (which they will), then NVDA's lead in this area will disappear as quickly as AMD's lead did in the enthusiast CPU market after Core 2 was released.
Pro developers aren't motivated by fanboyism or brand loyalty, which is why NVDA has the lead in this area. Their product has been better... soon it won't be.

You're a bit confused. AMD's disadvantage isn't due to architecture (well, that might be a part of it too... or not, it's hard to tell), but to bad drivers. In many situations the new Cypress-based Firepro cards aren't all that much faster than their RV770 predecessors despite being basically doubled versions of them. In order to catch up to NV, AMD needs to improve their drivers, and stay competitive with NV long enough to entice people to switch.

As far as pro developers not being motivated by "fanboyism or brand loyalty", sorry, you are dead wrong. Most pro developers would call it brand trustworthiness or security, though. If a company buys Nvidia products that work and do not give them trouble, then switching to an AMD product, even if it is faster and/or cheaper, is taking a risk. Just look at how long it took for AMD's CPU division to break into the business sector.

And yeah, AMD having sucky drivers outside of games does not help their cause. Some of the issues are downright stupid and incompetent. Good luck finding an Unreal Engine developer (and there are a ton of developers using UE3) who uses an AMD card -- there is a bug that produces hardcore graphical corruption in UnrealEd whenever you change a light source or anything affected by a light source (translation: everything in a level), and you need to rebuild your lighting to make it go away. This affects anyone using an ATI card and has been there since the UE3 engine launched.
 
I agree with blastingcap that nVidia is, I believe, the stronger GPU company compared to AMD. But I also believe AMD knows this, and that's why they have their foot on the gas pedal, executing and pushing out cards like there's no tomorrow.

I would also be scared of Fermi II; nVidia will want to come out fighting with this one.
 
You're a bit confused. AMD's disadvantage isn't due to architecture (well, that might be a part of it too... or not, it's hard to tell), but to bad drivers. In many situations the new Cypress-based Firepro cards aren't all that much faster than their RV770 predecessors despite being basically doubled versions of them. In order to catch up to NV, AMD needs to improve their drivers, and stay competitive with NV long enough to entice people to switch.

As far as pro developers not being motivated by "fanboyism or brand loyalty", sorry, you are dead wrong. Most pro developers would call it brand trustworthiness or security, though. If a company buys Nvidia products that work and do not give them trouble, then switching to an AMD product, even if it is faster and/or cheaper, is taking a risk. Just look at how long it took for AMD's CPU division to break into the business sector.

And yeah, AMD having sucky drivers outside of games does not help their cause. Some of the issues are downright stupid and incompetent. Good luck finding an Unreal Engine developer (and there are a ton of developers using UE3) who uses an AMD card -- there is a bug that produces hardcore graphical corruption in UnrealEd whenever you change a light source or anything affected by a light source (translation: everything in a level), and you need to rebuild your lighting to make it go away. This affects anyone using an ATI card and has been there since the UE3 engine launched.

If he's confused, you are even worse, because it has nothing to do with "bad drivers" - it's simply that there are few or no tools. 🙂
AMD simply does not assign legions of engineers to professional card support, they don't push OpenCL like NV does CUDA etc.
It's economics, simple as 1-2-3: they first needed to reach profitability, and the easiest way to do that is on the commodity graphics card market, not the professional one.
OTOH, if this next 68xx series is successful, I'm sure as hell they'll start ramping up their professional efforts, especially OpenCL support.
 
Exciting stuff. My decision to purchase the 6870 will depend on how well CrossFire scales with these new cards (assuming it's a decent jump up from the 480). Word on the street is that it's much improved; if it isn't any better, I'll just pick up an extra 480. I guess we'll find out soon enough.

Well, don't bet on CrossFire scaling well in games during this new card's first few months; they typically have to work out the kinks, especially with a new architecture on hand.
 
AMD simply does not assign legions of engineers to professional card support, they don't push OpenCL like NV does CUDA etc.

It's economics, simple as 1-2-3: they first needed to reach profitability, and the easiest way to do that is on the commodity graphics card market, not the professional one.
OTOH, if this next 68xx series is successful, I'm sure as hell they'll start ramping up their professional efforts, especially OpenCL support.

I'm thinking the first priority of a GPU manufacturer would be to make GPUs. It makes more sense to budget your R&D dollars into making better products and incorporating future industry-standard features into them. I'd think that it's up to software developers to harness the power of the GPU to their benefit! Just seems like everybody these days wants to point the finger at the other guy and look for as many free rides as possible.

On another note: if you're gonna design your GPUs around proprietary features, then you kinda have the obligation/responsibility to provide the tools and R&D to make your proprietary features work as advertised, don't ya!
 
I'm confused?
😵

I think what he means by this is that the other part of the equation is as follows.

AMD needs to do all the legwork for the lazy developers/software companies that don't seem to have the time or money to take advantage of the hardware. Kinda like how Nvidia adds PhysX features to games.
 
If AMD releases a high-end card for professional users based on the new NI architecture (which they will), then NVDA's lead in this area will disappear as quickly as AMD's lead did in the enthusiast CPU market after Core 2 was released.
Pro developers aren't motivated by fanboyism or brand loyalty, which is why NVDA has the lead in this area. Their product has been better... soon it won't be.

Nope. Pros are extremely hesitant to change brands. Generally speaking, they won't change product lines even if it means not keeping up in raw performance. The risk of some sort of failure often far outweighs potential gains.

Also, if the "pro" - designers, CAD/CAM users, etc. - has a good relationship with his vendor, he will not switch.
 
I think what he means by this is that the other part of the equation is as follows.

AMD needs to do all the legwork for the lazy developers/software companies that don't seem to have the time or money to take advantage of the hardware. Kinda like how Nvidia adds PhysX features to games.

I agree that they need to begin to do this (they have a history of not doing it though, even on the CPU front). You can build the best component in the world, but if no one is using it to its full potential, who is going to buy it?

All the consumer wants is for his 3D model to be rendered quickly and correctly. If it isn't doing that while using your product, the consumer won't care that the hardware is capable but the software isn't written correctly to take advantage of it; he just won't buy it, since it doesn't do what he wants it to do. If you are building these products and see that major applications aren't written to take advantage of your product, you'd better start working with the software companies to fix this, or your product won't sell well.
 
If he's confused, you are even worse, because it has nothing to do with "bad drivers" - it's simply that there are few or no tools. 🙂
AMD simply does not assign legions of engineers to professional card support, they don't push OpenCL like NV does CUDA etc.
It's economics, simple as 1-2-3: they first needed to reach profitability, and the easiest way to do that is on the commodity graphics card market, not the professional one.
OTOH, if this next 68xx series is successful, I'm sure as hell they'll start ramping up their professional efforts, especially OpenCL support.
Exactly. I think AMD's leadership sees the benefit and profitability in expanding their endeavors, but I also think they're wise enough not to spread themselves too thinly. Hopefully with each successful series they're putting some of the profit into hiring more teams to develop these technologies.

I'm interested to see what the architecture redesign of the 6xxx series does for performance gains in different arenas. For instance, I'm interested to see if it allows better CF scaling in the high end.
 
Why is it that every video thread these days ultimately devolves into NVIDIA's market share or business strategy?

It's interesting to some extent to hear the ramblings of speculative, wannabe pundits, but it doesn't really have all that much to do with "AMD confirms new architecture + launch timeframe for HD6xxx". Plus, it has already been discussed ad nauseam. Perhaps we need a sub-forum, "arm-chair CEO", for all of you people who could run all these companies better than their actual management.
 
Exactly. I think AMD's leadership sees the benefit and profitability in expanding their endeavors, but I also think they're wise enough not to spread themselves too thinly. Hopefully with each successful series they're putting some of the profit into hiring more teams to develop these technologies.

I'd think that it would be wiser to just sit back and let some company such as Apple take the hardware to new levels and kinda make it their own... After all, Apple did kick Nvidia to the curb 🙂

Seems like if anybody has the R&D money to spend, Apple would be the most likely candidate. I haven't checked lately, but doesn't Apple still have a pretty good foothold in the professional graphics market? If Apple could harness the power of OpenCL and DirectCompute, I'm sure Windows would follow.
 
Why is it that every video thread these days ultimately devolves into NVIDIA's market share or business strategy?

It's interesting to some extent to hear the ramblings of speculative, wannabe pundits, but it doesn't really have all that much to do with "AMD confirms new architecture + launch timeframe for HD6xxx". Plus, it has already been discussed ad nauseam. Perhaps we need a sub-forum, "arm-chair CEO", for all of you people who could run all these companies better than their actual management.

Because it's just one big never-ending football game and a bunch of painted faces in the bleachers. In much the same way that a sports fan is concerned about next year's draft, player trades, and how much the players are earning, to some this is just as exciting as actually watching the game. It does bore me to tears though. But to each his own.
So, has anyone come across any good sources for the specs of the new 6 series? I know AMD has an ongoing leak-detection campaign currently underway, so it's very hard to tell what's real and what isn't.
 
Man... no offense but you have no idea about this: FYI chipsets were one-third of their revenues. ONE-THIRD.
Mobile market? Apple booted NV for good, Optimus sales are negligible, Tegra is a disaster so far - as you said, the only bright spot is their Quadro line, but it's obviously not enough to survive on when they have to keep pouring billions upon billions into R&D, each year requiring more and more.

The Tesla market is next to nothing; even NV admits it.

There is so much wrong with your post from every angle, it's impossible to begin to correct it. Please do some research before you provide incorrect information about a company's financials or product lines you obviously know nothing about. Honestly, your post is an insult to people who follow the industry.

Regardless of your intentions, this is flame-bait material and is a call-out. Flame-baiting is just as egregious an offense as flaming in response to flame-bait.

T2K deserves a better response than this, and if you are going to make your opinion a public one then the community deserves a better response than this.

In the future, if you feel absolutely compelled to respond to a post to which you just can't bite your tongue and say nothing then you need to craft your response such that it clearly is refuting/negating the contents of the post in question and not the poster of the post in question.

Moderator Idontcare
 
There is so much wrong with your post from every angle, it's impossible to begin to correct it. Please do some research before you provide incorrect information about a company's financials or product lines you obviously know nothing about. Honestly, your post is an insult to people who follow the industry.

Please, RS - either point out anything or just remain silent; your response is way too childish, even for the VGA forum.

Regardless of your intentions, this is a flame post in response to flame-bait material and is a call-out.

Publicly responding in an inflammatory way to an existing flame-bait post is just as egregious an offense as the flame-bait post itself.

The proper venue for responding to flame-bait posts is to either take it private and go to pm's or report the post (the red triangle) and leave the matter of flame-baits to the moderators.

If you are going to make your opinion a public one then the community deserves a better response than what you have posted here.

In the future, if you feel absolutely compelled to respond to a post to which you just can't bite your tongue and say nothing, then you need to craft your response such that it clearly is refuting/negating the contents of the post in question and not the poster of the post with which you take issue.

Moderator Idontcare
 
Because it's just one big never-ending football game and a bunch of painted faces in the bleachers. In much the same way that a sports fan is concerned about next year's draft, player trades, and how much the players are earning, to some this is just as exciting as actually watching the game. It does bore me to tears though. But to each his own.

Exactly. Beyond technical interests I have other interests, including economics, both macro and micro. Some might find it boring - I find arguing about drivers completely ridiculous, but as you said, to each his own. 🙂

So, has anyone come across any good sources for the specs of the new 6 series? I know AMD has an ongoing leak-detection campaign currently underway, so it's very hard to tell what's real and what isn't.
From what I've heard, the Reds have been in full-blown "dezinformatsiya" (disinformation) mode for weeks now... 😎
 
Getting this thread back on track:

Thermalright inadvertently mentioned the upcoming 68xx and 67xx series cards in their latest YouTube upload. It's a boring video about GPU cooling:

http://www.youtube.com/watch?v=n46_N939di8

The interesting part is the statement you see when you click on the More Info arrow, specifically the card list at the end: "This secret premium VGA cooler "SHAMAN" will be released before end of September, it's aimed for supreme cooling and at the same time Quiet performance, Shaman cooler will be bundled with our latest TY-140 silent fan, so no need to buy another fan, it is universal too, it is compatible with most cards from ATI and nVIDIA including the latest GTX460,465,470,480 ATI 4850,4870,5850,5870,6800,6750 series"

Given Digitimes's article talking about a new AMD GPU launch in October, Charlie's forecast of Oct 25 plus or minus a couple of days, the AMD employee's talk of an HD6xxx rollout of multiple GPUs by Christmas, and now this slip by Thermalright, I am feeling just that much more certain that we'll see the launch of the first HD6xxx series card by the end of October. 🙂
 