News Intel GPUs - Intel launches A580


AtenRa

Lifer
Feb 2, 2009
14,000
3,357
136
I would say that Navi is probably not going to be a great compute card. We'll see.

I believe NAVI will be much faster in compute than Polaris. Both AMD and NV have been going for compute for the last 3-4 years, and with ray tracing added lately, we're going to get even more compute in GPUs in the coming years.
 

maddie

Diamond Member
Jul 18, 2010
4,722
4,627
136
I believe NAVI will be much faster in compute than Polaris. Both AMD and NV have been going for compute for the last 3-4 years, and with ray tracing added lately, we're going to get even more compute in GPUs in the coming years.
Yes and no. Several have said that designs will specialize for the separate uses going forward. Of course, there will still be many shared units in both lines.

As an example, the HBCC using HBM2 as a cache is wasted on gaming, now and in the future, but extremely useful for huge datasets in a compute role.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
Yes and no. Several have said that designs will specialize for the separate uses going forward. Of course, there will still be many shared units in both lines.

Intel's Gen 11 architecture ditches FP64 altogether. It's part of the reason the EU size at iso-process is 25% smaller. If you want to run FP64 code, it has to be emulated. So with future Intel products you might see one line that's very focused on gaming and another that's very focused on compute.

Since process gains are becoming smaller and less frequent, expect more specialization to extract maximum gains.
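
For context on what "emulated" FP64 can look like: one common technique on FP32-only hardware is "double-single" (float-float) arithmetic, where a value is carried as an unevaluated sum of two floats. The CUDA sketch below is a minimal illustration of that general idea only; it is not Intel's actual emulation path (which isn't public), and the kernel and variable names are just for illustration.

```cuda
// Minimal "double-single" (df64) sketch: one ~FP64 value stored as hi + lo,
// using the classic Dekker/Knuth two-sum trick. Illustrative only.
#include <cstdio>
#include <cuda_runtime.h>

struct df64 { float hi, lo; };              // value ~= hi + lo

__device__ df64 df64_from_float(float x) { return { x, 0.0f }; }

__device__ df64 df64_add(df64 a, df64 b) {
    // Knuth two-sum captures the FP32 rounding error of hi+hi exactly.
    float s = a.hi + b.hi;
    float v = s - a.hi;
    float e = (a.hi - (s - v)) + (b.hi - v);
    e += a.lo + b.lo;                        // fold in the low-order parts
    float hi = s + e;                        // renormalize so lo stays tiny
    float lo = e - (hi - s);
    return { hi, lo };
}

__global__ void accumulate(const float* x, int n, float* out_hi, float* out_lo) {
    // Single-thread accumulation, just to show the extra precision.
    df64 acc = df64_from_float(0.0f);
    for (int i = 0; i < n; ++i)
        acc = df64_add(acc, df64_from_float(x[i]));
    *out_hi = acc.hi;
    *out_lo = acc.lo;
}

int main() {
    const int n = 1 << 20;
    float *x, *hi, *lo;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&hi, sizeof(float));
    cudaMallocManaged(&lo, sizeof(float));
    for (int i = 0; i < n; ++i) x[i] = 1.0f / 3.0f;   // values that don't sum exactly in FP32
    accumulate<<<1, 1>>>(x, n, hi, lo);
    cudaDeviceSynchronize();
    printf("emulated sum: %.10f (hi=%.7f, lo=%.7e)\n", (double)*hi + (double)*lo, *hi, *lo);
    cudaFree(x); cudaFree(hi); cudaFree(lo);
    return 0;
}
```

The takeaway is simply that an emulated FP64 add or multiply costs several FP32 operations, so dropping native FP64 saves die area but isn't free for compute workloads.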
 

Dayman1225

Golden Member
Aug 14, 2017
1,152
973
146
Intel's Gen 11 architecture ditches FP64 altogether. It's part of the reason the EU size at iso-process is 25% smaller. If you want to run FP64 code, it has to be emulated. So with future Intel products you might see one line that's very focused on gaming and another that's very focused on compute.

Since process gains are becoming smaller and less frequent, expect more specialization to extract maximum gains.
[attached slide: image0.jpg]

That'll almost certainly be the case based on this slide. Look at the text above: "Investing in a brand new Architecture (XE) and 2 new Micro Architectures", one DC optimised and one Client optimised.
 

beginner99

Diamond Member
Jun 2, 2009
5,208
1,580
136
All they have to do is fit enough units into a small enough TDP and they will be competitive.
Even the cost factor is pretty irrelevant, because Intel could just sell at a loss to gain market share.

AFAIK current Intel iGPU tech is pretty space- and power-inefficient. It either uses much more space than NV or AMD, or has to be clocked too high to keep up. But as the slide above shows, the dGPU is supposedly a new, different uArch, and dropping FP64 fully makes sense for Intel because they have enough volume for such a design: their iGPUs. Maybe with this new Xe uArch the GPU will be attached by EMIB and would essentially be the same chip as on the dGPU. That would mean Intel would have more than enough volume to justify an FP64-less dGPU.
And deep learning doesn't need FP64 anyway, so such a product could also target that market.
 

ozzy702

Golden Member
Nov 1, 2011
1,151
530
136

SpaceBeer

Senior member
Apr 2, 2016
307
100
116
Jim Keller? :D

  • Alexander Lyashevsky, a very talented engineer and a corporate fellow at AMD, was hired as the Senior Director of Machine Learning Algorithms and Software Architecture at Intel.
  • Jason Gunderson, another top CPU engineer and Sr. Director of Program Management at AMD, was brought on as Senior Director and Chief of Staff, Silicon Engineering at Intel.
  • Radhakrishna Giduthuri, a top AI engineer at AMD who had spent 7 years in their software architecture and AI department, was snagged by Intel as a Deep Learning Architect in the AI Products Group.
  • Joseph Facca is one of the key talents that was not taken directly from AMD but was approached from a different route. Joseph had previously been with AMD and led board design teams for ATI, but had left in September of 2016. He had been working at a much smaller company as an R&D expert, and Intel brought him on board as an 'industry leader' to "create, design and deliver industry-leading discrete graphics products."

https://wccftech.com/intel-amd-talent-wars-heat/
 

TheELF

Diamond Member
Dec 22, 2012
3,967
720
126
Eh, even if it's an April Fools' joke, this is pretty much the most logical thing for Intel to do.
For GPUs you can just stick more and more compute units into your cards to deal with higher resolutions. I mean, Radeon has been doing this for years, or not? Isn't that why Radeon is so much better value for mining and so on, because they have many more units compared to the competition?
 

Hitman928

Diamond Member
Apr 15, 2012
5,177
7,628
136
Eh, even if it's an April Fools' joke, this is pretty much the most logical thing for Intel to do.
For GPUs you can just stick more and more compute units into your cards to deal with higher resolutions. I mean, Radeon has been doing this for years, or not? Isn't that why Radeon is so much better value for mining and so on, because they have many more units compared to the competition?

There's a huge difference between increasing the compute units in your design and getting multiple discrete GPUs to act seamlessly as one unit.
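
To make that contrast concrete, here is a minimal sketch of the easy case: an embarrassingly parallel compute job split evenly across however many GPUs are present. All names are illustrative, and CUDA is used only as a convenient, widely known API; the same split-and-stitch pattern applies to any vendor's compute stack.

```cuda
// Hedged sketch: split a trivially parallel job across every visible GPU.
// Each device works on its own slice with no mid-kernel communication.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

__global__ void scale(float* x, int n, float k) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= k;
}

int main() {
    int ndev = 0;
    cudaGetDeviceCount(&ndev);
    if (ndev < 1) { printf("no CUDA device\n"); return 0; }

    const int n = 1 << 22;
    std::vector<float> host(n, 1.0f);
    int per = n / ndev;                        // naive even split, ignores any remainder

    std::vector<float*> dptr(ndev);
    for (int d = 0; d < ndev; ++d) {
        cudaSetDevice(d);                      // every call below targets device d
        cudaMalloc(&dptr[d], per * sizeof(float));
        cudaMemcpyAsync(dptr[d], host.data() + d * per, per * sizeof(float),
                        cudaMemcpyHostToDevice);
        scale<<<(per + 255) / 256, 256>>>(dptr[d], per, 2.0f);
        cudaMemcpyAsync(host.data() + d * per, dptr[d], per * sizeof(float),
                        cudaMemcpyDeviceToHost);
    }
    // The "act as one unit" problem shows up here: the host has to explicitly
    // synchronize each device and stitch the partial results back together.
    for (int d = 0; d < ndev; ++d) {
        cudaSetDevice(d);
        cudaDeviceSynchronize();
        cudaFree(dptr[d]);
    }
    printf("host[0]=%.1f host[n-1]=%.1f\n", host[0], host[n - 1]);
    return 0;
}
```

The hard part being pointed at above is everything this sketch avoids: shared state within a frame, load balancing, and presenting a combined result with no visible seams or added latency.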
 

jpiniero

Lifer
Oct 1, 2010
14,509
5,159
136
There's a huge difference between increasing the compute units in your design and getting multiple discrete GPUs to act seamlessly as one unit.

Maybe that's where the 150W card rumor is coming from. The compute cards will have multiple dies on the package but the gaming cards will only have one.
 

Ajay

Lifer
Jan 8, 2001
15,332
7,792
136
Maybe that's where the 150W card rumor is coming from. The compute cards will have multiple dies on the package but the gaming cards will only have one.
That could work, although it would leave the gaming cards compute heavy and hence on the slow side.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,785
136
Maybe that's where the 150W card rumor is coming from. The compute cards will have multiple dies on the package but the gaming cards will only have one.

Or, we are taking the April Fools joke too seriously.

Faked slides: GPU performance increase becomes exponential!
Faked slides: Shows graph with linear relationship.

How about:
-4D XPoint
-June 31st
-8TB/s bandwidth
-DX 14.2
 

TheELF

Diamond Member
Dec 22, 2012
3,967
720
126
There's a huge difference between increasing the compute units in your design and getting multiple discrete GPUs to act seamlessly as one unit.
Other than assigning addresses to all the units, what else is there to consider? They will all be on the same board, or even the same chip, so you won't have all the problems that come from trying to sync up two cards in different slots/buses.
 

tajoh111

Senior member
Mar 28, 2005
298
312
136
That's the thing: NV, AMD, and now Intel already know that. There's a limit to the number of cards they can sell to people chasing top performance. So the trick is to push prices ever higher, and maybe drag everyone else along for the ride wherever possible.

When I first got into PC gaming heavily, my first card that I bought for a custom system was a GeForce 2 Ti 200 (I had an ATI card in my pre-built P5-100 machine). It was a decent midrange card that performed below GeForce3 solutions, but outperformed some previous GeForce2 products, and definitely sat above the entry-level category. I don't remember what I paid for it, but it had to be $150 or less. Here's an old roundup of GeForce 2/3 products from AT:

https://www.anandtech.com/show/873/24

The GeForce 2 Ti 200 products ranged in price from $115-$125 depending on which one you got.

Can you get a midrange card today for $115? No. Midrange is slowly vanishing anyway, but if you want to think of a 1660Ti as midrange, then you can see how prices have moved.

Only if you're willing to make compromises on framerates and/or image quality. Take the used card market out of the picture and look at what's available today: can you guarantee 1% or 0.1% percentile lows of 60 fps @ 1440p in current titles with something like an RX 590?

https://www.anandtech.com/show/13570/the-amd-radeon-rx-590-review

For many of those games, the answer is "no". Even RX Vega 64 struggles in a few areas. My overclocked Vega FE had issues staying above 60 fps all the time in a few games that didn't like it, like Fallout 4 (though that may have been more my CPU). Compare that to the Quake III Arena numbers from that old 2002 AT roundup: every card tested was way over 60 fps! Resolutions were lower back then, but still. If you had a monitor that supported 1280x1024 or 1600x1200, you could run at that higher resolution and probably still not drop below 60 fps very often.

So yes, I respect that you (and many others) have set price limitations on what you will pay. It's smart not to cross those lines. We are still all paying more for less. Intel knows that, and will price accordingly.

The prices you listed were not launch prices of these cards; those prices came after some time on the market. Below are the launch prices, and they were not all that great upon closer inspection.

https://www.tomshardware.com/reviews/nvidia-launches-titanium-series,371-10.html

Look at the specs and build of the cards and you will realize that much of the reason prices were so low back then is that these cards were budget cards compared to the high end of today. The card you are referring to is incredibly low-end by today's standards in terms of design and bill of materials.

https://www.techpowerup.com/gpu-specs/geforce2-ti.c794

Yes, it was 150 dollars, but the die size was only 81 mm², and the PCB was tiny and basic, requiring a single-slot cooler and no additional power connector. These cards were basic, cheap to make, and have nothing on the midrange of today.

Even the high-end cards look like the low-end graphics cards of today.

https://www.techpowerup.com/gpu-specs/geforce3-ti500.c741

A 128-bit bus, a 128 mm² die, and a $350 MSRP, with a power consumption of 50-75 watts.

Simply looking at the appearance of these cards, we can see they resemble the low end of today. If cards had not been allowed to flesh out and increase in price to reflect the market's desire for faster cards, we would likely still have tiny GPUs. You can see this in the CPU market, which has largely stagnated in comparison: although prices didn't grow, CPU companies cut corners by making smaller chips, which resulted in much smaller performance increases.

If you don't believe me, look at how long, and at what kind of pricing, Intel sold 4-core chips. People praised Intel for not increasing the price of their x700K-series chips, but if you look at the size of the chips, you realize that if this market had moved like the GPU one, these CPUs would have been relegated to budget processors. E.g. Lynnfield, the i7-870, was a 293 mm² die. By the time Skylake came out, the chips had shrunk to only a 114 mm² die, with only about 50 mm² of that dedicated to the CPU cores.

Both of these were 4-core chips, and although the cost was mostly the same, the chips we were getting were tiny. The same story shows up in the performance increase over the years.

https://forums.overclockers.co.uk/threads/lynnfield-i7-skylake-i5-performance-compared.18736047/

Between Lynnfield and Skylake, the improvement in gaming performance is embarrassing.

https://www.techspot.com/article/1039-ten-years-intel-cpu-compared/page3.html

We can extrapolate from the above (because Skylake was not much of an improvement over the 4790K) that we might be seeing roughly double the performance going from the i7-870 to the i7-6700K in general workloads, but in games it was more like 30%.

High-end cards today have increased in speed much faster than the CPU market because the chips grew in size with advancing nodes, and the extra cost was absorbed and accepted by consumers.

Compare this to GPUs over the same period of time, and we see a performance difference of 8x to 10x comparing a GTX 980 Ti vs a GTX 285.

The big reason is that these chips did not shrink in size with advancing nodes, and power consumption was allowed to go up, which let the size and complexity of the chips grow, e.g. from 240 cores to 2816 cores, not the static 4 cores of the CPU market for the longest time.

Ultimately, what I am trying to say is that GPUs were allowed to evolve and mature because the chips were allowed to grow beyond their humble beginnings, which was only possible because the cost was passed on to consumers and consumers accepted it.

The CPU market is starting to show the same dynamic, which is why we are seeing $3000 processors from Intel and $2200 processors from AMD in the consumer CPU market, finally with a substantial increase in core count after being stuck at 4 to 8 cores for so long.

If high-end chips were around 300 dollars and everything scaled downward to support that pricing structure, we would not have the big, complex chips we have today. Fond, nostalgic memories of the past really do not look that great upon inspection.

 
  • Like
Reactions: VirtualLarry

DrMrLordX

Lifer
Apr 27, 2000
21,582
10,785
136
The prices you listed were not launch prices of these cards; those prices came after some time on the market. Below are the launch prices, and they were not all that great upon closer inspection.

Right, $150 is close to what I paid for my GF2 Ti 200 if I recall correctly. But remember that Polaris is also quite long in the tooth, and I compared a $125 GF2 Ti 200 to an RX 590 of today (which itself has been out for a while).

Look at the specs and build of the cards and you will realize that much of the reason prices were so low back then is that these cards were budget cards compared to the high end of today. The card you are referring to is incredibly low-end by today's standards in terms of design and bill of materials.

Price/performance for games of the day was still stellar compared to what we have today. Somehow they did this with designs that are (by modern standards) incredibly low-end. Honestly, I don't care if it chews up 400 W or 75 W; if it pushes frames at acceptable quality levels, then I'm sold.

Yes yes I know, low-hanging fruit has already been picked. Don't tell me that all the reductions in price/performance over the last 15+ years have come from increases in design costs/node design costs. That ain't true.
 
  • Like
Reactions: Tlh97

coercitiv

Diamond Member
Jan 24, 2014
6,151
11,683
136
Right, $150 is close to what I paid for my GF2 Ti 200 if I recall correctly. But remember that Polaris is also quite long in the tooth, and I compared a $125 GF2 Ti 200 to an RX 590 of today (which itself has been out for a while).
So you're comparing game requirements from the early 2000s to game requirements from 2019 and somehow reach the conclusion that the ~80 mm² chip from 2001 was a better deal than the ~230 mm² chip from 2019, because the latter is not "midrange" anymore. Don't you see a problem here?