Do Kepler and Maxwell have an AIB future?


psoomah

Senior member
May 13, 2010
416
0
0
The target is 1080p 3D; that's effectively 1920x1080 with 4x AA at 120 fps.

http://www.tomshardware.com/reviews/geforce-gtx-480-3-way-sli-crossfire,2622-11.html

Tri-SLI GTX 480s, the fastest graphics setup money can buy, don't hit half that performance level in a game that will be considered rather dated by the time the next-gen consoles hit. CrossFire 5870s can't hit four frames per second at that setting, less than 1/30th of the performance target for the next-gen consoles. You honestly think APUs are going to be 30 times faster than a 5970 when the next-gen consoles launch? Again, this is a title that will be considered outdated by the time they do come out.

Metro 2033 is meaningless here. Console game development doesn't push the graphics envelope beyond current hardware capability.

For next-gen consoles the DISPLAY target is 1080p 120Hz, and the developers will tailor the games TO the hardware.
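
A quick back-of-envelope sketch of the gap being argued here, using only the figures quoted above (the roughly 4 fps CrossFire 5870 result and the 120 fps 1080p 3D target; the numbers come from the post, not new measurements):

```python
# Rough arithmetic behind the "30 times faster" claim above.
# Assumptions: ~4 fps for CrossFire HD 5870s at 1920x1080 with 4x AA (quoted figure),
# and a 120 fps target for 1080p 3D (60 fps per eye).
measured_fps = 4.0
target_fps = 120.0

speedup_needed = target_fps / measured_fps
print(f"Required speedup over CrossFire 5870s: {speedup_needed:.0f}x")  # ~30x
```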
 

psoomah

Senior member
May 13, 2010
416
0
0
Paging Trip Hawkins!

As a proud 3DO owner I lived the dream - only to see it dashed.

It's a great idea but I just don't see it happening. Plus, the idea of a common platform is vaguely socialist isn't it?

Sounds kinda un-american to me..;)

Ain't dem Sonny and Nintender ferrinurs?

Sounds kinder UN-amerricun to me. :colbert:
 

psoomah

Senior member
May 13, 2010
416
0
0
My point is that games will scale down; you don't have to have tessellation and other settings cranked up to max. The statement "Fusion APU will not be able to handle them" is a blanket statement, not a fact. What Fusion WILL bring to the table is the ability to play current titles at respectable settings and frame rates given the power envelope and price point, something we have not seen before.

What Fusion will also bring to the table on a yearly basis is being able to play at ever more respectable settings and ever better frame rates. Won't be long before it's good enough for 94.67% of us.

A built in upgrade incentive.
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
AMD said at the last CES their express intention was to update their fusion chips annually incorporating the previous generation's graphics architecture.

So ... I guess you're saying AMD doesn't know anything about hardware?

Think about it. AMD is entering an APU arms race with Intel. Why leave ammunition laying on the table?

Are they also planning to increase the memory bandwidth with each iteration? Or are we going to have a 5970-level GPU with R300-level memory bandwidth?
 

psoomah

Senior member
May 13, 2010
416
0
0
Are they also planning to increase the memory bandwidth with each iteration? Or are we going to have a 5970-level GPU with R300-level memory bandwidth?

I, quite reasonably I think, am assuming AMD has design/engineering solutions that will enable them to accomplish what they say they intend to do.
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
I, quite reasonably I think, am assuming AMD has design/engineering solutions that will enable them to accomplish what they say they intend to do.

What is that, squeeze 256GB/sec worth of memory bandwidth performance out of a 25GB/sec pipe? It will be interesting to see how they accomplish this feat.
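
For what it's worth, the two figures in this exchange follow from the usual peak-bandwidth arithmetic (bus width times effective transfer rate, summed over channels). A minimal sketch, assuming the published HD 5970 memory specs and a dual-channel DDR3-1600 desktop platform:

```python
# Peak memory bandwidth = channels * (bus width in bytes) * effective transfer rate.
# Sketch only; the HD 5970 and DDR3-1600 figures below are standard published specs.

def peak_bandwidth_gb_s(channels: int, bus_width_bits: int, transfer_rate_gt_s: float) -> float:
    """Theoretical peak bandwidth in GB/s."""
    return channels * (bus_width_bits / 8) * transfer_rate_gt_s

# HD 5970: two GPUs, each with a 256-bit GDDR5 interface at 4.0 GT/s effective.
hd5970 = peak_bandwidth_gb_s(channels=2, bus_width_bits=256, transfer_rate_gt_s=4.0)

# APU platform: dual-channel DDR3-1600 (64 bits per channel, 1.6 GT/s).
ddr3_1600 = peak_bandwidth_gb_s(channels=2, bus_width_bits=64, transfer_rate_gt_s=1.6)

print(f"HD 5970 aggregate:      {hd5970:.0f} GB/s")          # ~256 GB/s
print(f"Dual-channel DDR3-1600: {ddr3_1600:.1f} GB/s")        # ~25.6 GB/s
print(f"Gap:                    ~{hd5970 / ddr3_1600:.0f}x")  # ~10x
```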
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
I, quite reasonably I think, am assuming AMD has design/engineering solutions that will enable them to accomplish what they say they intend to do.

So you are just assuming that all the inherent problems with tacking the GPU onto the CPU and sharing the memory controller, die space, heat dissipation, and everything else will just POOF go away, and the APU will begin steamrolling competent discrete GPUs? That is a bold assumption! (no pun intended)
 

psoomah

Senior member
May 13, 2010
416
0
0
What is that, squeeze 256GB/sec worth of memory bandwidth performance out of a 25GB/sec pipe? It will be interesting to see how they accomplish this feat.

I'm not that architecture savvy. I have no clue how they are going to do it. I just assume that if they say they are going to do it, they have some way of doing it.

All the more so with Dirk Meyer's AMD.
 

psoomah

Senior member
May 13, 2010
416
0
0
So you are just assuming that all the inherent problems with tacking the GPU onto the CPU and sharing the memory controller, die space, heat dissipation, and everything else will just POOF go away, and the APU will begin steamrolling competent discrete GPUs? That is a bold assumption! (no pun intended)

I would consider it bolder to assume AMD DOESN'T have solutions to enable them to accomplish what they say they intend to do.
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
I'm not that architecture savvy. I have no clue how they are going to do it. I just assume that if they say they are going to do it, they have some way of doing it.

All the more so with Dirk Meyer's AMD.

You don't have to be architecture savvy to realize that increasing memory bandwidth performance 10-fold in the next 3 years would be a miracle of god.
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
54
91
I would consider it bolder to assume AMD DOESN'T have solutions to enable them to accomplish what they say they intend to do.

I don't know man. Intel said they could pull off Larrabee as well.
Point? It isn't always a given that when somebody "says" they can do something, it gets done. This applies to every company in the world. And every company has past examples of not being able to deliver as promised or touted. Hey, it happens. Just try not to take everything at face value.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
I would consider it bolder to assume AMD DOESN'T have solutions to enable them to accomplish what they say they intend to do.

I don't know man. Intel said they could pull off Larrabee as well.

This exactly. If I cared enough, I'd take the time to look through your (psoomah's) post history in various forums to read the doom and gloom you were spelling for Nvidia 2 years ago when Intel was hyping up Larrabee to be the up and coming new GPU solution.
 

psoomah

Senior member
May 13, 2010
416
0
0
I don't know man. Intel said they could pull off Larrabee as well.
Point? It isn't always a given that when somebody "says" they can do something, it gets done. This applies to every company in the world. And every company has past examples of not being able to deliver as promised or touted. Hey, it happens. Just try not to take everything at face value.

Understood, but with what I've seen lately from AMD, my natural tendency is to give them the benefit of the doubt. The opposite would hold true for Nvidia.
 

psoomah

Senior member
May 13, 2010
416
0
0
This exactly. If I cared enough, I'd take the time to look through your (psoomah's) post history in various forums to read the doom and gloom you were spelling for Nvidia 2 years ago when Intel was hyping up Larrabee to be the up and coming new GPU solution.

I wasn't spelling doom and gloom for Nvidia two years ago. I extrapolate based on the data to hand, and the data to hand two years ago didn't warrant a doom and gloom assessment of Nvidia.

I never commented on Larrabee vs Nvidia back in the day.

...

The data to hand DOES warrant a doom and gloom assessment for Nvidia today.
 

busydude

Diamond Member
Feb 5, 2010
8,793
5
76
Understood, but with what I've seen lately from AMD, my natural tendency is to give them the benefit of the doubt.

After the original Phenom/Barcelona fiasco, many people may think otherwise.

As others have stated, and as has been actively discussed in the past, I too have my doubts about APUs performing on par with comparable discrete parts.
 

Keysplayr

Elite Member
Jan 16, 2003
21,219
54
91
Understood, but with what I've seen lately from AMD, my natural tendency is to give them the benefit of the doubt. The opposite would hold true for Nvidia.

Well, what I have seen lately from AMD are CPUs, graphics cards, Eyefinity, and little else. What have you seen?
 

jvroig

Platinum Member
Nov 4, 2009
2,394
1
81
Can't wait to bump all of these threads when we see Fusion benchmarks.
I share the same sentiment, but not the course of action (necro=bad), although I guess I know what you actually mean.

It would be interesting to finally see whether Fusion/SB products actually do away with the low-end segment, instead of just the IGP segment. It is this accomplishment (obsoleting the low end) that would determine whether producing discrete cards remains economically viable for the card makers, even if those cards end up serving only the midrange and high-end segments (there is no question of the technological need, since there will always be gamers and enthusiasts who need add-on cards).

If AMD and Intel's Fusion-type products only end up replacing mobo IGPs (performance still below the level of low-end discrete cards), all the ruckus in this thread is for nothing, since the assumption behind the panic/doom/gloom is that Fusion products will obsolete the highest-volume low-end discrete segment.
 

dzoner

Banned
Feb 21, 2010
114
0
0
If AMD and Intel's Fusion-type products only end up replacing mobo IGPs (performance still below the level of low-end discrete cards), all the ruckus in this thread is for nothing, since the assumption behind the panic/doom/gloom is that Fusion products will obsolete the highest-volume low-end discrete segment.

I remember AMD making a statement at CES that they intended to upgrade the graphics on their Fusion chips annually with the previous year's graphics architecture.

They've also made it very clear APUs = their future.

They have some solution to the bandwidth problem figured out, although that solution may not be implemented in full until the 2nd gen fusion chips.

Llano appears to be a 'bridging' solution.
 

Genx87

Lifer
Apr 8, 2002
41,091
513
126
I remember AMD making a statement at CES that they intended to upgrade the graphics on their Fusion chips annually with the previous year's graphics architecture.

They've also made it very clear APUs = their future.

They have some solution to the bandwidth problem figured out, although that solution may not be implemented in full until the 2nd gen fusion chips.

Llano appears to be a 'bridging' solution.

And what is the solution to a 5970-level Fusion part choking on 25GB/sec of bandwidth?
 

jvroig

Platinum Member
Nov 4, 2009
2,394
1
81
I remember AMD making a statement at CES that they intended to upgrade the graphics on their Fusion chips annually with the previous year's graphics architecture.

They've also made it very clear APUs = their future.

They have some solution to the bandwidth problem figured out, although that solution may not be implemented in full until the 2nd gen fusion chips.

Llano appears to be a 'bridging' solution.
I appreciate your enthusiasm, but let's put that in context:

-"they intended to upgrade the graphics on their fusion chips annually with the previous years graphics architecture"
--> This only indicates the architecture upon which the graphics portion of the APU will be based. It says nothing about performance (IGP-level, low-end discrete, midrange, high-end). For example, the HD 4200/4300 was based on the previous-gen architecture, just as the 4870 was. But that doesn't mean the 4200 performs like the 4870, or that the performance is comparable at all.

-"They've also made it very clear APU's = their future. "
--> Also not a hint about performance. When a company says "X is their future", what that means is that it will be a profit driver (such as an emerging market) in the long term, hence worth investing resources in. So this statement from them has zero to do with performance, and nothing to do with discrete cards getting phased out.



What's more interesting is your claim that they have a solution to the bandwidth conundrum. So far they (AMD) have made no public statement about it, hence this claim of yours is interesting but currently has no basis in fact.
 

dzoner

Banned
Feb 21, 2010
114
0
0
Bandwidth fairy?? :confused:

Whatever. Just because a few of you can't think your way past this bandwidth 'immovable wall' doesn't mean AMD can't. AMD seems pretty certain they've got a solution to 'bandwidthgate'. Good enough for me.

http://www.anandtech.com/show/2871/3

Anand:

Cheesy Marketing Names for Cool Tech, AMD Velocity Ensures New Designs Every 12 Months

AMD’s first APUs drop in 2011, but what happens in 2012? Intel is committed to new microprocessor architectures every 2 years as a part of its tick-tock strategy. AMD’s GPU-inspired equivalent is called Velocity.

About every year we get a new GPU architecture, whether it’s a strict doubling of execution resources or something more significant, it happens like clockwork assuming TSMC isn’t fabbing the chips. AMD Velocity just states that, in turn, every year we’ll get a brand new chip that integrates this new GPU architecture. The CPU side may or may not change, but with yearly design cycles we could see regular improvements on that end as well.
Velocity also means that even if it’s difficult getting more performance out of a CPU architecture, AMD can always rely on a beefed up GPU core to give users a reason to upgrade.

......................................

Not much ambiguity there.
 

jvroig

Platinum Member
Nov 4, 2009
2,394
1
81
And what is the solution to a 5970-level Fusion part choking on 25GB/sec of bandwidth?
They don't actually have to make an APU that rivals the high-end.

All they need to do is manage to solve the bandwidth problems for what is equivalent to the entire low-end segment. (That would still take some doing, but more doable than shooting for high-end)
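
To put rough numbers on that, here is a small follow-on to the bandwidth sketch earlier in the thread. The low-end card below is hypothetical (a generic 128-bit DDR3 design, not a specific SKU), but it illustrates why matching the low end is a far smaller bandwidth problem than matching a 5970-class part:

```python
# Same peak-bandwidth arithmetic as before; the "low-end card" here is a
# hypothetical 128-bit DDR3 design used purely for illustration.

def peak_bandwidth_gb_s(channels: int, bus_width_bits: int, transfer_rate_gt_s: float) -> float:
    return channels * (bus_width_bits / 8) * transfer_rate_gt_s

apu_platform = peak_bandwidth_gb_s(2, 64, 1.6)    # dual-channel DDR3-1600, ~25.6 GB/s
low_end_card = peak_bandwidth_gb_s(1, 128, 1.8)   # assumed 128-bit DDR3 at 1.8 GT/s, ~28.8 GB/s

print(f"APU platform:     {apu_platform:.1f} GB/s")
print(f"Low-end discrete: {low_end_card:.1f} GB/s")
```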

If they manage to do that and, as a result, take away what I assume to be the highest-volume segment of discrete cards (the low end; the further assumption being that this segment also contributes a significant slice of the add-on card revenue pie), the higher-end parts may die without ever being conquered in performance by APUs, simply because without that volume it may become economically infeasible to continue the discrete card business: the revenue from the remaining segments would not be enough to sustain it.

Of course, this has always been a concern, hence I am sure NVIDIA and ATi made sure to keep IGP performance as low as it was. The only thing different now is that Intel is in the picture, seemingly committed to improving their graphics (their SB offering exceeded every expectation, despite their previous one being the same old subpar Intel "decelerator"), so AMD may not have the luxury of artificially limiting IGP performance (now within Fusion) to keep from cannibalizing their highest-volume segment of discrete cards.

Cliffs: Fusion doesn't need to be as fast as high-end cards to make those high-end cards disappear or become more expensive (alternative to disappearing; necessary to make the whole endeavor economically feasible for the involved companies). Whether AMD and Intel will play that game, though, remains to be seen, but there seems to be no downside for Intel (no discrete products to cannibalize), so if Intel decides to go ahead, AMD will have no choice.