News: Intel GPUs - Intel launches A580


FaaR

Golden Member
Dec 28, 2007
1,056
412
136
Second line contradicts the first.
Errrr... No it doesn't! lol Second line logically follows from the first. What drugs are you on anyway? :p

Vertical integration is only about bringing multiple stages of a device pipeline under your own control. It doesn't refer to performance or value.
Dude, that's exactly what I said.

And in any case, Apple is still under the whims of its third party foundries because Apple is a fabless company.
They're such a large customer for TSMC (especially now that they're also building Mac CPUs there) that they're undoubtedly given special access to the internal development of new processes and whatnot. Thus they know going forward what to expect, and so far every iPhone release has gone off without any hitch worth talking about, on a regular yearly basis going back to the original A4, whether the SoC was on an existing process or a new, groundbreaking one.

So I'd say they seem to have their procedures pretty much down pat by now... :)

Apple are still "slaves" to the whims of third party vendors who produce software for macOS and Windows right now.
That's an asinine and pointless statement. Everybody who builds anything relies on outside sources for things like materials, tools, components or energy, and likely all of the above. Even if all you make is hand-carved wood figures, did you chop down the tree you're carving yourself? With what, an axe you also made yourself, from iron you smelted yourself, beaten into shape against a rock with your own fists...? lol No.

The Mac Pro is an ultra low volume seller, and it may be one of the first cuts for Apple.
They just introduced a new one not even a year ago as a show of commitment to their pro users, along with their newest monster monitor costing five grand, so why would they cut it immediately? If they were going to cut it, they would have cut the Mac Pro line with the end of the trashcan. Of course they already knew back then that they were transitioning to ARM; it'd be a waste of hardware and software engineering resources to spend years designing a new Mac Pro with all kinds of bells and whistles attached if they would only offer it for a short, limited time.

I know a few people who just bailed on ordering 14-30K GBP systems because of Apple's plans and just how they may be treated by software they rely on to pay their bills.
Worthless anecdotes. You surely do not personally know any statistically representative number of Mac Pro users. Besides, Apple has stated that Intel software support will continue for the foreseeable future - i.e. they'll most likely obsolete hardware as they normally do at their usual rate, and when the last Intel systems are obsolete they'll stop future x86 development (except for security updates, one should hope. For some time, anyway.)

I don't want to sound ageist and assume you're young
Hah. I bought my first computer in 1987, for my own money, at age 15. Do I qualify? :p

things were a lot different before Intel steamrolled AMD for a decade. If new AMD keeps providing incredible gains in performance YoY, there's absolutely no reason to sit on the same hardware for years at a time unless you can't afford it.
AMD's resurgence has been amazing to see, for sure. As a hardware enthusiast it's been the greatest boon for me in countless years now. To go from four cores at the high end of consumer processors (where things had previously stagnated for over a decade) to 16 in just over three years' time is basically unprecedented in history.

So yeah, as great as this has been, things were not like that ALL the time back then! :) Still, Apple knows this of course and if their competitors deliver regular major upgrades they would want to do that as well, to capture the customers who desire to buy new, faster hardware. That's the beauty of capitalism and competition, when it works. Corporations typically don't sit back and release no new products when they have competitors who do just that; they sit back and do nothing when they're on top and nothing else can touch them. Like Intel did! ;)

If you're a professional editor or studio, you offload render work. You also work off of another source. I'd provide an example of what we employ at work but it would give away who I work for. There's only two companies in the world that manifest that kind of in-house power.
Look, I assumed we were talking about private individuals who buy and pay for their own hardware. If you're a bleeding edge world class corporation with international renown, obviously the rules are a little different for you then. ;)
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
Guys, it's off-topic and it has been going on for a while. Please stop.

When things take care of themselves there's no need for laws or rules.
 

A///

Diamond Member
Feb 24, 2017
4,352
3,154
136
Errrr... No it doesn't! lol Second line logically follows from the first. What drugs are you on anyway? :p

Yes, it does. Reported for an ad hominem attack.

Dude, that's exactly what I said.
Not really.
They're such a large customer for TSMC (especially now that they're also building Mac CPUs there) that they're undoubtedly given special access to the internal development of new processes and whatnot. Thus they know going forward what to expect, and so far every iPhone release has gone off without any hitch worth talking about, on a regular yearly basis going back to the original A4, whether the SoC was on an existing process or a new, groundbreaking one.

So I'd say they seem to have their procedures pretty much down pat by now... :)

Especially now? Lol In 2018, Apple sold nearly 220M phones alone. Their largest competitor, Samsung, sold almost 300M, but Samsung has maybe a dozen or more phones on sale in all its global operating areas, while Apple actively produces maybe 1-3: the new current gen, the last gen, and scraps for people seeking an older phone because of the price, if that. On top of that, Apple sold nearly 50M iPads in 2019 alone. Your statement is really weird here, as if Apple is only now about to get crème de la crème service from TSMC because they decided to make their Apple Silicon processors with TSMC, ignoring the fact that Apple is already one of TSMC's largest customers and a cash cow for them.


That's an asinine and pointless statement. Everybody who builds anything relies on outside sources for things like materials, tools, components or energy, and likely all of the above. Even if all you make is hand-carved wood figures, did you chop down the tree you're carving yourself? With what, an axe you also made yourself, from iron you smelted yourself, beaten into shape against a rock with your own fists...? lol No.

Your point for vertical integration was based on ignoring the decline and whims of a third party: Intel. In this case, my example, Apple is still tethered to third parties who may very well disappoint Apple in the long run if they can't deliver a good enough product. Going with another company is easier said than done, and it's a huge logistical undertaking.

They just introduced a new one not even a year ago as a show of commitment to their pro users, along with their newest monster monitor costing five grand, so why would they cut it immediately? If they were going to cut it, they would have cut the Mac Pro line with the end of the trashcan. Of course they already knew back then that they were transitioning to ARM; it'd be a waste of hardware and software engineering resources to spend years designing a new Mac Pro with all kinds of bells and whistles attached if they would only offer it for a short, limited time.

The Mac Pro went largely unchanged, and largely ignored, from 2013 on. I wouldn't call that a commitment, especially given the over-the-top case and the garish pricing of a cheap Xeon. Their monitor merely emulates a pro-grade Sony grading monitor and doesn't really come close. A professional who makes a large amount of money, or a studio, will still pick up the tab for a $20-30K monitor.

Except there's been no real confirmation of which Apple models will get ARM and which won't. You're reaching here and basing your opinion on Apple's comments.

Worthless anecdotes. You surely do not personally know any statistically representative number of Mac Pro users. Besides, Apple has stated that Intel software support will continue for the foreseeable future - i.e. they'll most likely obsolete hardware as they normally do at their usual rate, and when the last Intel systems are obsolete they'll stop future x86 development (except for security updates, one should hope. For some time, anyway.)

Apple claims a two-year transition period for their product stack and has said they will release a couple of new Intel models. They never made an exact claim of how long they'd keep that support going. It's not in Apple's playbook to provide long-term support. Five years seems to be a guideline, but five years from now, will your now-shiny Pro still be a viable computer? If Apple ceases development of Rosetta 2 in three years and vendors like Adobe drop x86 support for Mac, what will those people do then?
Hah. I bought my first computer in 1987, for my own money, at age 15. Do I qualify? :p

Nice, what model?
AMD's resurgence has been amazing to see, for sure. As a hardware enthusiast it's been the greatest boon for me in countless years now. To go from four cores at the high end of consumer processors (where things had previously stagnated for over a decade) to 16 in just over three years' time is basically unprecedented in history.

So yeah, as great as this has been, things were not like that ALL the time back then! :) Still, Apple knows this of course and if their competitors deliver regular major upgrades they would want to do that as well, to capture the customers who desire to buy new, faster hardware. That's the beauty of capitalism and competition, when it works. Corporations typically don't sit back and release no new products when they have competitors who do just that; they sit back and do nothing when they're on top and nothing else can touch them. Like Intel did!

My only concern for Apple right now is how long they can squeeze performance from their ARM license, and what their path forward is. RISC-V has made some solid strides in the past couple of years, but it's still a "hmm" from me. AMD's resurgence has surprised me, even though I had zero faith in them continuing such improvements after the original Zen. AMD has a game-changing formula and I hope they keep improving on it for years to come. My next desktop will be an AMD and it's going to speed up my workload considerably.
Look, I assumed we were talking about private individuals who buy and pay for their own hardware. If you're a bleeding edge world class corporation with international renown, obviously the rules are a little different for you then. ;)

Private individuals operating as sole proprietors are still considered small businesses, at least here in the US. I convinced a few people wanting to get a Mac Pro setup in the 8-20K range to opt for the Lenovo instead. I remember a few years ago, around Christmas 2016, when HP released their dual-processor 28-core Xeon setups. They destroyed the very best of the then-current trash can and even the new one. Still, around 20K for two processors is a lot. It makes me wonder if we'll see dual TR Pro processors in the future, if that's even possible.
 
  • Like
Reactions: Tlh97

A///

Diamond Member
Feb 24, 2017
4,352
3,154
136
Guys, it's off-topic and it has been going on for a while. Please stop.

When things take care of themselves there's no need for laws or rules.
Yeah, just wanted to get a word in. @FaaR, if you want to continue this, hit me up via PM. I think we're angering the residents. :p


Sorry to anyone else who's as annoyed or bored.
 
  • Like
Reactions: Tlh97

blckgrffn

Diamond Member
May 1, 2003
9,121
3,049
136
www.teamjuchems.com
Came in here hoping to see discussion about Xe, found... something.

Anyway, it looks like Intel is going to be able to leverage chiplet GPUs in the datacenter to scale, similarly to how chiplets were the "special sauce" of Zen? And beat AMD to the punch on that? That's an interesting role reversal to me.

If they are as good at math as is claimed, I would think they would score some design wins.

I guess I am just a little floored that Intel looks like they might be mounting a competitive product launch here. I thought for sure their GPUs would flop like they did before, but maybe they are getting back on the "only the paranoid survive" plan and actually executing?
 

thigobr

Senior member
Sep 4, 2016
231
165
116
Very curious to see next year what kind of performance they can get out of those tiles in real game workloads... On paper, and in their recent peak-FLOPS demo, it looks good, but it's hard to predict how well that will translate to real-world gaming.
 

NTMBK

Lifer
Nov 14, 2011
10,232
5,007
136
Very curious to see next year what kind of performance they can get out of those tiles in real game workloads... On paper, and in their recent peak-FLOPS demo, it looks good, but it's hard to predict how well that will translate to real-world gaming.

As far as I'm aware, the HPG gaming GPU won't be built from tiles - there's been no mention of fancy packaging like EMIB or Foveros. I'd expect it to be a conventional monolithic chip.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
As far as I'm aware, the HPG gaming GPU won't be built from tiles - there's been no mention of fancy packaging like EMIB or Foveros. I'd expect it to be a conventional monolithic chip.

I wonder how they are going to fix the problem of game scaling if this is the future. If it's like the ST-to-MT transition in CPUs, it'll have a much harder time. Remember SLI and Crossfire? And you're not talking about 1 to 2 cards but 4, 8, and so on!
 

blckgrffn

Diamond Member
May 1, 2003
9,121
3,049
136
www.teamjuchems.com
I wonder how they are going to fix the problem of game scaling if this is the future. If it's like the ST-to-MT transition in CPUs, it'll have a much harder time. Remember SLI and Crossfire? And you're not talking about 1 to 2 cards but 4, 8, and so on!

Won't it be like Zen chiplets? I guess in my head that was how I was always reading it - some static resources in the "main" chip, like access to memory and PCIe, then different types of cores on chiplets, scaled as needed. An RTX card, say, would have tensor cores and the more traditional cores on different chiplets, using different lithography and clock speeds as needed to hit the performance goals for that piece of silicon.

Workload more tensor-core sensitive? Add more of those chiplets to the card, like a Zen CPU adds another chiplet. ¯\_(ツ)_/¯

Makes sense in my head, I guess. The special sauce remains in the interconnects and how all that work gets spread out.

But that seems more scalable than the old SLI/Crossfire days. Again, in my head :p
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
Won't it be like Zen chiplets?

Way different. An off-die memory controller has been done before on PCs - we actually had a separate chip for that (the northbridge).

Modern GPU resources communicate with each other at many hundreds of GB per second, which is many times what's required for a desktop CPU's memory controller. The higher latency on AMD parts is also due to the fact that the controller is not on the same die. Everything that has to go off-die increases power usage and decreases performance.

I don't think it'll be as peachy as it looks, and that's likely the reason Intel's HPG is only one tile.

If it's like a better-scaling SLI/Crossfire, then it's down to game support and it'll be messy. At least on CPUs, even if there were no applications with multi-threading support, multitasking itself gave us immediate benefits. What do you do on a GPU? Launch 10 games?
 
Last edited:

blckgrffn

Diamond Member
May 1, 2003
9,121
3,049
136
www.teamjuchems.com
Way different. An off-die memory controller has been done before on PCs - we actually had a separate chip for that (the northbridge).

Modern GPU resources communicate with each other at many hundreds of GB per second, which is many times what's required for a desktop CPU's memory controller. The higher latency on AMD parts is also due to the fact that the controller is not on the same package. Everything that has to go off-die increases power usage and decreases performance.

I don't think it'll be as peachy as it looks, and that's likely the reason Intel's HPG is only one tile.

If it's like a better-scaling SLI/Crossfire, then it's down to game support and it'll be messy. At least on CPUs, even if there were no applications with multi-threading support, multitasking itself gave us immediate benefits. What do you do on a GPU? Launch 10 games?

Well...

That's probably the only way to clear my Steam backlog? ;)

Clearly you blew up my thoughts on how it would be largely transparent to the application that there were multiple dies in use, because it was all being abstracted by some centralized manager. Too simple of a model, I guess. I was under the impression that GPUs were already massively parallel beasts, so if the interconnect was performant enough (or the workload partitioned enough that communication overhead was minimized), multiple dies could be used in such a fashion - that making them behave as one die was a problem that could be solved.

It makes a *lot* more sense that this would work in HPC land, where the applications are much more aware of their target architecture and in some cases written specifically for it.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
That's probably the only way to clear my Steam backlog? ;)

True!

Clearly you blew up my thoughts on how it would be largely transparent to the application that there were multiple dies in use, because it was all being abstracted by some centralized manager.

They are only doing this because process transitions are becoming that much harder and more expensive.

No matter how fast or low-power an external interconnect is, on-die will be even faster and lower power. It's just physics. If games don't change at all, they'll assume everything is on-die and equally fast, which won't be true.
 

NTMBK

Lifer
Nov 14, 2011
10,232
5,007
136
Way different. An off-die memory controller has been done before on PCs - we actually had a separate chip for that (the northbridge).

Modern GPU resources communicate with each other at many hundreds of GB per second, which is many times what's required for a desktop CPU's memory controller. The higher latency on AMD parts is also due to the fact that the controller is not on the same die. Everything that has to go off-die increases power usage and decreases performance.

I don't think it'll be as peachy as it looks, and that's likely the reason Intel's HPG is only one tile.

If it's like a better-scaling SLI/Crossfire, then it's down to game support and it'll be messy. At least on CPUs, even if there were no applications with multi-threading support, multitasking itself gave us immediate benefits. What do you do on a GPU? Launch 10 games?

The Zen interconnect is going over an organic package. If you use EMIB to connect the dies, you can get much lower latency and higher bandwidth.
 

Tup3x

Senior member
Dec 31, 2016
957
940
136
Consumer cards will go chiplet style when the chiplets can be presented as one massive GPU. I don't think that will happen in the near future, but once they solve the packaging and latency issues, things get interesting.
 
  • Like
Reactions: Tlh97 and A///

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
The Zen interconnect is going over an organic package. If you use EMIB to connect the dies, you can get much lower latency and higher bandwidth.

EMIB is better, but still far from making it work like a monolithic die. Latencies are a harder problem to fix. More complex programming seems to be an unavoidable route.
 
  • Like
Reactions: Tlh97 and A///

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
Intel Xe-HPG up to 960 EUs: https://videocardz.com/newz/intel-xe-hpg-graphics-cards-rumored-to-offer-up-to-960-eus

That puts it above the RTX 2080 Ti now, as it'll only need about 870MHz to rival it in FLOPS. It's hard to compare with Ampere, since Ampere changes the ratio of compute to everything else dramatically. Purely FLOPS-wise, at 1.8GHz it'd be almost on par with the RTX 3080.

The 384EU DG2 is said to be 180-190mm2, so if we scale it up linearly, a 960EU part would end up somewhere around 500mm2.
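For anyone who wants to check the arithmetic, here's a quick sketch. It assumes each Xe EU has 8 FP32 lanes doing one FMA (2 FLOPs) per clock, as in Xe-LP, and takes ~13.4 and ~29.8 TFLOPS as the 2080 Ti and 3080 reference figures:

```python
# Back-of-the-envelope Xe-HPG math, assuming 8 FP32 lanes per EU
# and 2 FLOPs (one FMA) per lane per clock, as in Xe-LP.
def tflops(eus, ghz, lanes=8, flops_per_lane=2):
    return eus * lanes * flops_per_lane * ghz / 1000

print(tflops(960, 0.87))  # ~13.4 TFLOPS, matching the RTX 2080 Ti
print(tflops(960, 1.80))  # ~27.6 TFLOPS, near the RTX 3080's ~29.8

# Linear die-area scaling from the rumored ~185mm2, 384EU DG2:
print(185 * 960 / 384)    # ~462mm2, i.e. in the ~500mm2 ballpark
```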

The only issue is timing. Q2 2021 is 9 months away.
 
Last edited:

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
That and Raja....Wouldn't be the 1st nor the last that he's been or will be involved in over hyping....Guess we'll see in ?? months.

He's not the only one. There's quite a bit of overhype around Ampere too. It seems to be the state of things.

Spec-wise it seems fine. It's just late, that's all. If it were to arrive in, say, October this year, it would go straight up against Ampere.
 
Last edited:
  • Like
Reactions: lightmanek

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
According to Raichu, DG1 comes with LPDDR4x-4266 memory and a size of 4 GB.

This makes no sense to me. What the hell? Maybe it shares the memory with the system or something. It smells of Nvidia's TurboCache scheme from many years ago.

If they used GDDR6, they could get away with using fewer chips, which would save PCB space.

And why would a dGPU have such low bandwidth? We know the Xe LP GPU in Tigerlake can benefit from higher bandwidth, and that's only at 1.35GHz, while DG1 clocks 20% higher.
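For scale, the peak-bandwidth arithmetic looks like this; note the 128-bit LPDDR4x bus width is my assumption (it matches Tigerlake's setup), not part of the rumor:

```python
# Peak bandwidth (GB/s) = rate (MT/s, i.e. Mbps per pin) * bus bits / 8 / 1000
def gb_per_s(rate, bus_bits):
    return rate * bus_bits / 8 / 1000

print(gb_per_s(4266, 128))   # LPDDR4x-4266 on 128-bit: ~68 GB/s
print(gb_per_s(14000, 96))   # GDDR6 at 14Gbps on 96-bit: ~168 GB/s
```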

There's the question of the memory controller too. So it has an LPDDR4x PHY?
 

mikk

Diamond Member
May 15, 2012
4,132
2,127
136
And why would a dGPU have such low bandwidth? We know the Xe LP GPU in Tigerlake can benefit from higher bandwidth, and that's only at 1.35GHz, while DG1 clocks 20% higher.

But on Tigerlake that bandwidth is shared with the CPU cores; assuming this is dedicated LPDDR4x, DG1 has more of it to itself. There are some benefits: LPDDR4x should be much more energy efficient, and it should be cheaper. However, this is a rumor and not confirmed.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
I'm going to lean toward DG1 sharing system memory, if that rumor is accurate. It'd be an easy way for a dGPU to save on costs.

Of course, rumors have been quite inaccurate and exaggerated about lots of things. Weren't there rumors that it'd use GDDR5/6 with a 96-bit interface as well? With 96 EUs @ ~1.5GHz, that kind of bandwidth is perfect.
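To put rough numbers on "perfect", here's a sketch using the same 8-lane FMA assumption as above, plus an assumed 14Gbps speed for the GDDR6:

```python
# 96 EUs * 8 lanes * 2 FLOPs * 1.5 GHz
tflops = 96 * 8 * 2 * 1.5 / 1000       # ~2.3 TFLOPS
# Rumored 96-bit GDDR6 interface, assuming 14Gbps chips
bandwidth = 14000 * 96 / 8 / 1000      # ~168 GB/s
print(tflops * 1000 / bandwidth)       # ~14 FLOPs per byte - more bandwidth
                                       # per FLOP than most gaming cards get
```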