
Why can't Intel compete with AMD in GPU Sector?


Yuriman

Diamond Member
Jun 25, 2004
5,530
141
106
Intel may be developing their iGPUs because it's the cheapest way to compete against themselves. In the CPU sector, Intel competes more against itself than against AMD. People need a reason to upgrade, and the new chip being 10% faster and much cheaper isn't a lot of incentive to replace an entire platform. Each new generation of Intel chip has a GPU with more capability, and considering a vast majority of PCs don't have (or can't have) discrete GPUs, it makes good sense.

The low-hanging fruit is gone in single-threaded performance. Adding more cores is nearly useless to the average consumer, and probably counterproductive in mobile. A better iGPU is likely the most appreciated upgrade.
 

Blitzvogel

Platinum Member
Oct 17, 2010
2,012
23
81
Intel is increasing the graphics die percentage basically to kill the low- and medium-end dGPUs, knowing that will severely hurt AMD and nVidia's dGPU profits. Whether they can make it work is the question.

I don't think it will work as well as you think. Intel's lineup of CPUs with varying amounts of CPU and graphics cores is both confusing and in some cases way too limited even for basic use. It's a great way to kill the inherent flexibility and cost effectiveness of upgradeable CPUs and graphics.

There are scenarios for APUs.

There are scenarios for 2 cores + large die graphics.

There are scenarios for 8 cores + minimal graphics.

There are numerous scenarios for everything in between.
 

jpiniero

Lifer
Oct 1, 2010
16,843
7,288
136
It's not supposed to make sense to the consumer. It's supposed to destroy AMD and nVidia's main source of GPU revenue.
 

III-V

Senior member
Oct 12, 2014
678
1
41
So...

He says that he expects Intel to deliver groundbreaking IGP performance. Skylake & Carrizo will not bring any major improvements.

So is what I posted distorting everything he said, or is it just me interpreting his post?
Alright, well perhaps you aren't distorting it. But if you were well informed, you wouldn't be questioning it. Without a memory bandwidth upgrade, IGPs will not move far from their current position. Although AMD might be able to squeeze out a little bit more performance if they incorporate the compression used in Tonga, they simply don't have much room for growth.

Skylake will be featuring eDRAM on a fair number of SKUs, and where eDRAM isn't present, DDR4 is available. Meanwhile, Carrizo will not have DDR4, and will only have HBM for some top SKUs (if it ends up having HBM at all, but I guess recent rumors may have brought some credibility to that).
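To put rough numbers on that bandwidth point, here's a quick back-of-envelope sketch. The configurations and figures are illustrative assumptions (standard 64-bit DDR channels, commonly cited eDRAM/HBM figures), not anything pulled from the roadmaps themselves:

Code:
# Back-of-envelope peak bandwidth for standard 64-bit DDR channels:
# GB/s = MT/s * 8 bytes per transfer * number of channels / 1000
def ddr_bandwidth_gbs(mt_per_s, channels=2):
    return mt_per_s * 8 * channels / 1000

print(f"DDR3-1600 dual channel: ~{ddr_bandwidth_gbs(1600):.1f} GB/s")  # ~25.6 GB/s
print(f"DDR4-2400 dual channel: ~{ddr_bandwidth_gbs(2400):.1f} GB/s")  # ~38.4 GB/s

# For comparison, commonly cited on-package figures (approximate):
#   Crystal Well eDRAM (Iris Pro): ~50 GB/s per direction
#   First-generation HBM:          ~128 GB/s per 1024-bit stack

Even a full DDR3-to-DDR4 jump only buys roughly 50% more bandwidth, which is why the eDRAM/HBM question matters so much here.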
So did Nvidia also waste their time when they brought Gsync to the market, as it will become irrelevant with AMD bringing Adaptive Sync to the VESA standard?
It's hard to say. Who cares anyway? Consumers are the ones who won out, regardless of whether or not Nvidia made profit off of it. It's a win-win for everyone, and I'm not sure why the AMD fan crowd seems to bring GSync up all the time as the butt of their jokes -- they should be thanking Nvidia. We're getting both GSync and FreeSync because of them.

Also, bringing up GSync couldn't be less relevant.
I agree that their drivers seem fine for the most part, but is it really worth the effort for Intel to try to break into the discrete graphics market and gain a foothold when they already have the most promising GPGPU solution and will likely dominate laptops with their IGPs?

Another problem Intel would have to face is lower profit margins for this endeavor to work ...
No, I don't think it really makes sense for Intel to pursue at this point in time.
It's not supposed to make sense to the consumer. It's supposed to destroy AMD and nVidia's main source of GPU revenue.
You say this as if Intel is doing it just to spite their competition. That's not it at all. They profit from replacing OEM wins for dGPUs with their IGPs.
 

Morbus

Senior member
Apr 10, 2009
998
0
0
considering a vast majority of PCs don't have (or can't have) discrete GPUs, it makes good sense.

I'm sympathetic to that proposed reality, but I'm not too certain the people who DON'T use discrete graphics cards care about stuff like performance, instruction sets, or any of the usual things that would drive someone to upgrade a CPU.

What I'm saying is targeting your upgrade-centric market at non-techies is probably a big mistake. And I don't think Intel is doing it. What I think is they don't create a discrete GPU because there's just no need for a new brand on the market. We already have enough headaches with two, let alone three.
 

rtsurfer

Senior member
Oct 14, 2013
733
15
76
Alright, well perhaps you aren't distorting it. But if you were well informed, you wouldn't be questioning it. Without a memory bandwidth upgrade, IGPs will not move far from their current position. Although AMD might be able to squeeze out a little bit more performance if they incorporate the compression used in Tonga, they simply don't have much room for growth.

Skylake will be featuring eDRAM on a fair number of SKUs, and where eDRAM isn't present, DDR4 is available. Meanwhile, Carrizo will not have DDR4, and will only have HBM for some top SKUs (if it ends up having HBM at all, but I guess recent rumors may have brought some credibility to that).

Did you know that Carrizo is rumored to have both eDRAM & HBM?
http://forums.anandtech.com/showthread.php?t=2390965

Even if it only has it on some high-end parts, let's not forget how affordable Iris Pro is. Until we actually have a Skylake SKU that is not super-expensive, it can be just as high-end as an eDRAM/HBM Carrizo.

And your argument is no better than the earlier guy I quoted: Intel has a seamless sea of graphics performance to explore in the future, while AMD, which actually has a whole sub-division that has been making dGPUs for decades, has nowhere to increase its performance.

Were you informed of all this, good sir?

It's hard to say. Who cares anyway? Consumers are the ones who won out, regardless of whether or not Nvidia made profit off of it. It's a win-win for everyone, and I'm not sure why the AMD fan crowd seems to bring GSync up all the time as the butt of their jokes -- they should be thanking Nvidia. We're getting both GSync and FreeSync because of them.

Also, bringing up GSync couldn't be less relevant.

And what evidence do you have that we are not getting the promise of a low-level experience in DX12 because AMD started it with Mantle?

But why would you acknowledge that? It isn't Nvidia, am I right?

Bringing up Gsync was relevant because we have a similar situation:

A company (Nvidia & AMD) comes out with a new technology (Gsync & Mantle) that benefits gamers. This technology is limited to their own products for now. Their competition (AMD & Microsoft [I know Microsoft isn't a competitor, but I can't find the right word for it]) helps bring the benefits of the technology to the masses, instead of it being limited to a subset of users.

Yet only one needs to be thanked, while the other gets this:

What is Mantle's worth, exactly? It will become irrelevant with DX12. It is a waste of time and money for AMD.


Do the parallels make sense to you now?
See the relevance?
 

III-V

Senior member
Oct 12, 2014
678
1
41
I'm not going to waste my time on someone that doesn't know what eDRAM is. Sorry.
 

2is

Diamond Member
Apr 8, 2012
4,281
131
106
Honestly, it's too small of a market for Intel to care about. Combine that with seasoned competition from nVidia and AMD, and the pie is simply too small for them to bother.

Now, integrated solutions are another matter, and Intel has been making significant jumps in performance with every new CPU generation. In fact, their generation-to-generation iGPU performance gains are far greater than the CPU performance difference between a brand-new chip and a generation-old one.
 

rtsurfer

Senior member
Oct 14, 2013
733
15
76
I'm not going to waste my time on someone that doesn't know what eDRAM is. Sorry.

I think eDRAM is memory that is on-die with the processor, like an L1, L2, or L3 cache.

Like Iris Pro & Xbox One APU made by AMD.

I think I mixed it with HBM, I am a bit confused about what HBM is.
Need to do more reading.

Although I don't expect much of a person who only sees more MHz as more performance & runs away from a thread when he is proven wrong.

http://forums.anandtech.com/showthread.php?t=2408184


Cough.. Cough..
 

III-V

Senior member
Oct 12, 2014
678
1
41
In context...

HBM is eDRAM by Intel's definition.

http://i.imgur.com/RCXEkOR.jpg

Package Embedded DRAM => eDRAM
eDRAM totally doesn't mean it is on-die cache. No, it means in-package cache!
Exactly... HBM is a type of embedded DRAM.

I am just not going to participate in a technical discussion against someone that has not done the required research to know what they are talking about.
person who only sees more MHz as more performance & runs away from a thread when he is proven wrong.
I simply do not have the time to post on these forums very often. I was hardly proven wrong, anyway. Even if I am wrong, it is a semantic victory -- the major thing to take home is that Bulldozer was hundreds of millions of dollars down the drain, and is hardly any, if at all, better than their preexisting uarch. That money would have been much better spent retrofitting K10 for mobile.
 

rtsurfer

Senior member
Oct 14, 2013
733
15
76
In context...

HBM is eDRAM by Intel's definition.

http://i.imgur.com/RCXEkOR.jpg

Package Embedded DRAM => eDRAM
eDRAM totally doesn't mean it is on-die cache. No, it means on-package cache!
Jeez.

I guess, "The More You Know".
Need to do more reading.

My point still stands, as AMD does have a rumored provision to deal with their memory bottleneck in Carrizo. I may be a bit technically wrong.
 

rtsurfer

Senior member
Oct 14, 2013
733
15
76
Exactly... HBM is a type of embedded DRAM.

I am just not going to participate in a technical discussion against someone that has not done the required research to know what they are talking about.

I simply do not have the time to post on these forums very often. I was hardly proven wrong, anyway. Even if I am wrong, it is a semantic victory -- the major thing to take home is that Bulldozer was hundreds of millions of dollars down the drain, and is hardly any, if at all, better than their preexisting SKUs. That money would have been much better spent retrofitting K10 for mobile.

Were we even discussing Bulldozer here?
I thought this thread was about APUs. :whistle:

Also, only you can be semantically wrong (it wasn't semantics, but whatever), while someone else hasn't done their research.
 

III-V

Senior member
Oct 12, 2014
678
1
41
Were we even discussing Bulldozer here?
I thought this thread was about APUs. :whistle:

Also, only you can be semantically wrong (it wasn't semantics, but whatever), while someone else hasn't done their research.
Oh, that thread. Well, I was actually right, I just haven't polished up my findings yet. On average, Nvidia's 680 is capable of 10% higher overclocks than AMD's Tahiti. Got to double-check some things before I publish, though. I need to account for what speeds the cards' memory is rated for, as well as brand.

There's not much point in arguing with many people around here. The integrity of many of the members here is unfortunately rather poor, and people actively seek out facts that support their beliefs, while dismissing those that do not.
 

rtsurfer

Senior member
Oct 14, 2013
733
15
76
Oh, that thread. Well, I was actually right, I just haven't polished up my findings yet. On average, Nvidia's 680 is capable of 10% higher overclocks than AMD's Tahiti. Got to double-check some things before I publish, though.

I never disputed your claim that Nvidia clocks higher; I asked you to state a benefit of those higher clocks.

Bulldozer clocks higher than Haswell too, but we all know it's a pile of crap.
 

III-V

Senior member
Oct 12, 2014
678
1
41
I never disputed your claim that Nvidia clocks higher; I asked you to state a benefit of those higher clocks.

Bulldozer clocks higher than Haswell too, but we all know it's a pile of crap.
Higher bandwidth primarily benefits ROP throughput.
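To illustrate why the ROPs are the part that soaks up extra memory clock, here's a rough sketch; the ROP count and clock are illustrative, roughly GTX 680-class assumptions rather than anything measured:

Code:
# Rough color-write bandwidth demand of the ROPs:
# GB/s = ROPs * core clock (Hz) * bytes written per pixel / 1e9
def color_write_demand_gbs(rops, core_clock_mhz, bytes_per_pixel=4):
    return rops * core_clock_mhz * 1e6 * bytes_per_pixel / 1e9

print(f"~{color_write_demand_gbs(32, 1006):.0f} GB/s")  # ~129 GB/s just for plain 32-bit color writes

# Alpha blending is a read-modify-write, roughly doubling the demand,
# so faster memory feeds the ROPs almost directly.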
 

rtsurfer

Senior member
Oct 14, 2013
733
15
76
Higher bandwidth primarily benefits ROP throughput.

Since this is off-topic for this thread, this is the last post I will make about it (Don't want an infraction from the Mods).

I suggest you start reading from Post #13 of the thread.
http://forums.anandtech.com/showthread.php?t=2408184

Basically, AMD already gets higher bandwidth than Nvidia because of the wider bus, so what is the point in clocking their memory higher?

Same problem (memory bandwidth), different solutions (higher memory speed vs. a wider bus).
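For anyone following along, the arithmetic behind that is simple; the data rates and bus widths below are the commonly listed launch specs for the two cards, so treat them as approximate:

Code:
# Peak GDDR5 bandwidth: effective data rate (GT/s) * bus width (bits) / 8 = GB/s
def gddr5_bandwidth_gbs(effective_gtps, bus_width_bits):
    return effective_gtps * bus_width_bits / 8

print(f"GTX 680 (256-bit @ ~6.0 GT/s): ~{gddr5_bandwidth_gbs(6.0, 256):.0f} GB/s")  # ~192 GB/s
print(f"HD 7970 (384-bit @ ~5.5 GT/s): ~{gddr5_bandwidth_gbs(5.5, 384):.0f} GB/s")  # ~264 GB/s

# The wider bus wins on total bandwidth even at a lower memory clock,
# which is why chasing higher memory clocks buys AMD little here.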

Let's take it to the other thread if you want to talk more.
 

consifu

Junior Member
Nov 16, 2014
1
0
0
I think Intel is better than AMD in almost every aspect, so there is no need for Intel to integrate an AMD GPU in its own chip.
 

meloz

Senior member
Jul 8, 2008
320
0
76
Intel's iGPUs are at least two generations behind AMD's.

Only in more powerful (25 watts and above) parts. In lower TDP chips Intel is competitive with AMD. With Skylake they will hopefully match AMD in desktop class iGPUs as well.
 

Shivansps

Diamond Member
Sep 11, 2013
3,918
1,570
136
Only in more powerful (25 watts and above) parts. In lower TDP chips Intel is competitive with AMD. With Skylake they will hopefully match AMD in desktop class iGPUs as well.

That's only because AMD screwed up big time with the single-channel (SC) memory controller on Bobcat/Kabini/Beema.

For example, a year-old Bay Trail part is still considerably behind a 3+ year-old E-350 if we talk about IGP power alone; it's only because the E-350 is horrible on the CPU side and lacks dual-channel memory that it performs worse in most cases.

Something similar happens with the 5350 vs. Haswell GT1: the 5350's IGP should be better than a Haswell GT1's, but it's not, and that's only because of single-channel memory.
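To put a number on that single-channel penalty, here's a minimal sketch, assuming DDR3-1600 for illustration (roughly what these SoCs pair with):

Code:
# Single- vs dual-channel DDR3-1600 peak bandwidth (8 bytes per 64-bit channel):
def ddr3_bandwidth_gbs(mt_per_s, channels):
    return mt_per_s * 8 * channels / 1000

print(f"Single channel: ~{ddr3_bandwidth_gbs(1600, 1):.1f} GB/s")  # ~12.8 GB/s
print(f"Dual channel:   ~{ddr3_bandwidth_gbs(1600, 2):.1f} GB/s")  # ~25.6 GB/s

# The IGP has to share that single channel with the CPU cores,
# so halving peak bandwidth hits graphics hardest.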
 

USER8000

Golden Member
Jun 23, 2012
1,542
780
136
Future AMD APUs probably will have the texture compression technology we saw in Tonga too.
 

Thala

Golden Member
Nov 12, 2014
1,355
653
136
Maybe I am getting old, but traditionally eDRAM refers to a technology where DRAM (e.g. capacitor based) is embedded into a logic process. Package integration does not matter and would not qualify DRAM as eDRAM.
 

jpiniero

Lifer
Oct 1, 2010
16,843
7,288
136
Broadwell-K's IGP should be faster than anything AMD has at the time of its release, unless it gets delayed again. It will of course be in a different price range.