[Digitimes] AMD updates product roadmap for 2014 and 2015


Insert_Nickname

Diamond Member
May 6, 2012
4,971
1,696
136
Isn't that how VIA turned out when it moved down?

VIA/Cyrix was never much of a CPU manufacturer, especially after Socket 7/Super7 was phased out. VIA did make most of the chipsets for the original Athlon (KT133A anyone...? ;))/AthlonXP, along with Nvidia's nForce2. What really killed VIA in the PC space was the IMC on the Athlon64, Intel's refusal to licence their chipsets for anything newer than the P4, and a deliberate shift to focus on low-power embedded systems. Actually, VIA still has something of a presence in the current market, but not where you might think.

You're right about Kabini, but I haven't seen that many systems using it yet. Could be a while before it really takes off. I'm not so sure how Silvermont will perform in the market, so I'll reserve judgement on that for the time being.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
832 cores with 128-bit DDR3 sounds like a bad choice; the Xbox One (768?) has 256-bit DDR3 + embedded RAM for a reason, and the 7770 (640) has 128-bit GDDR5 for the same reason...


Looking at the 7730 DDR3 results, even for a 384-GCN-core GPU, 128-bit DDR3 is a big limitation; or, even better, look at the 6670 DDR3 vs GDDR5.

Nobody said it will not be memory limited, but Kaveri will also have a newer memory controller, hUMA and the GCN graphics architecture. Combine all that with 832 Radeon cores and you have a beast. The compute capability should be off the scale.

http://www.anandtech.com/show/5541/amd-radeon-hd-7750-radeon-hd-7770-ghz-edition-review
Also, have a look at the HD7770 (128-bit memory) with 640 GCN Radeon cores against the HD6850 (256-bit memory) with 960 VLIW5 cores and the HD5770 (128-bit memory) with 800 VLIW5 cores.

The HD7770 is consistently faster than the HD5770, with a huge lead in newer games (tessellation, compute shaders etc). It is also very close to the HD6850, which has a 256-bit memory controller.
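For anyone who wants to sanity-check that comparison, the peak numbers fall straight out of bus width times effective memory clock. A quick sketch in Python; the memory clocks are the approximate reference specs and should be treated as assumptions:

```python
# Peak memory bandwidth in GB/s: (bus width in bits / 8) * effective rate in GT/s
def bandwidth_gbps(bus_bits, rate_gts):
    return bus_bits / 8 * rate_gts

# Approximate reference memory clocks (effective GT/s) -- assumptions from period reviews
cards = {
    "HD 7770 (128-bit GDDR5 @ 4.5)": (128, 4.5),   # ~72 GB/s
    "HD 5770 (128-bit GDDR5 @ 4.8)": (128, 4.8),   # ~76.8 GB/s
    "HD 6850 (256-bit GDDR5 @ 4.0)": (256, 4.0),   # ~128 GB/s
}

for name, (bus, rate) in cards.items():
    print(f"{name}: {bandwidth_gbps(bus, rate):.1f} GB/s")
```

So the 7770 does its work with roughly half the raw bandwidth of the 6850, which is the point being argued.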
 

Ajay

Lifer
Jan 8, 2001
16,094
8,115
136
Remember, platform cost and size would increase.

Look back at your own post #284 above. So, to me, it seems like this was a possibility, though not one AMD chose to make.

Carrizo will be 45W and 65W SKUs, which puts a limit on options as well. Also, I doubt they'll use eDRAM; rather, DDR4 will be the only solution. AMD is still far away from Intel in terms of GPU integration with caches. Not to mention it will have to deal with Skylake and Skymont products.

AMD have been on the path of no return to "VIAville" for quite some time.

65W on 20nm isn't horrible in and of itself. It totally depends on what EX can bring to the table over SR. I've read about considerations for eDRAM from AMD and GF in the past. I think they could pull it off, probably not as well as Intel, but it would still likely be the best solution for improving graphics performance without going past dual-channel DRAM.

I agree that Skylake and Skymont will practically kill off AMD in the laptop space, save for the "cheapies" because Intel just won't go there.

AMD stuck themselves on the road to "VIAville" with incompetence, starting with the delay to 65nm, then paying waaay too much for ATi, and finally spinning off their fabs and agreeing to the GF WSA (all architected by Hector 'Ruins All' Ruiz and the BOD).

Could AMD turn around? It would take, basically, the undoing of those three things to make it happen; for example, GloFo getting 14nm ahead of schedule, a large cash infusion from an 'angel' investor, and sufficient shipments to make the WSA effectively void. Sadly, the odds of all those things happening are dismal, and AMD's current margins just won't support a self-sustained turnaround - hence your conclusion (which you aired, what, last year?) is still pretty much on target, IMHO. Getting both major console wins will likely slow down AMD's decline, but it won't save the company from becoming a shadow of its former glory.
 

Shivansps

Diamond Member
Sep 11, 2013
3,918
1,570
136
No, the platform doesn't allow it. It would also be completely incompatible with their HSA dreams.

Someone really needs to smash some reality into AMD: they can't impose HSA with that very limited market share, much less speed up adoption by devs.
They can't do whatever they want just because they sold HD7850 cores as IGPs for consoles.

Gaming performance on APUs is the only thing that keeps them alive; if they sacrifice it for HSA, it's game over for them. The major superiority in the IGP sector they enjoyed before Sandy Bridge is over, and it seems they haven't realised it yet.

VIA/Cyrix was never much of a CPU manufacturer, especially after Socket 7/Super7 was phased out. VIA did make most of the chipsets for the original Athlon (KT133A anyone...? ;))/AthlonXP, along with Nvidia's nForce2. What really killed VIA in the PC space was the IMC on the Athlon64, Intel's refusal to licence their chipsets for anything newer than the P4, and a deliberate shift to focus on low-power embedded systems. Actually, VIA still has something of a presence in the current market, but not where you might think.

You're right about Kabini, but I haven't seen that many systems using it yet. Could be a while before it really takes off. I'm not so sure how Silvermont will perform in the market, so I'll reserve judgement on that for the time being.

The VIA Nano 2 could have competed with Brazos; the problem was VIA started with a single core on a 65nm process, and by the time the 40nm dual-core was available it was too late. Combine that with crappy drivers and a crappy IGP and you've got a recipe for failure.
VIA should have looked to Nvidia chipsets after the Intel C2D/Atom+Nvidia partnership was over.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
Look back at your own post #284 above. So, to me, it seems like this was a possibility, though not one AMD chose to make.

65W on 20nm isn't horrible in and of itself. It totally depends on what EX can bring to the table over SR. I've read about considerations for eDRAM from AMD and GF in the past. I think they could pull it off, probably not as well as Intel, but it would still likely be the best solution for improving graphics performance without going past dual-channel DRAM.

eDRAM is the only way. But again, AMD doesn't have the know-how to do it correctly, so to speak. Just look at the Xbox One: no hUMA.
While this shouldn't be a problem given the right priorities, it becomes one with AMD, as always, because AMD puts a hopeless project like HSA ahead of actual usefulness for the consumer.

No more FX line, and the Opterons are on their last breath as well.

I agree that Skylake and Skymont will practically kill off AMD in the laptop space, save for the "cheapies" because Intel just won't go there.

That train left the station for good with Silvermont and soon Airmont. There is nothing left for AMD in terms of segments where they can offer something better.

AMD stuck themselves on the road to "VIAville" with incompetence, starting with the delay to 65nm, then paying waaay too much for ATi, and finally spinning off their fabs and agreeing to the GF WSA (all architected by Hector 'Ruins All' Ruiz and the BOD).

Could AMD turn around? It would take, basically, the undoing of those three things to make it happen; for example, GloFo getting 14nm ahead of schedule, a large cash infusion from an 'angel' investor, and sufficient shipments to make the WSA effectively void. Sadly, the odds of all those things happening are dismal, and AMD's current margins just won't support a self-sustained turnaround - hence your conclusion (which you aired, what, last year?) is still pretty much on target, IMHO. Getting both major console wins will likely slow down AMD's decline, but it won't save the company from becoming a shadow of its former glory.

It will never change unless a miracle happens. And that miracle essentially requires an asteroid to hit Oregon. And nobody will throw money into the black hole called AMD. GloFo will not pull a miracle out either; it's so far behind TSMC and Intel it's the joke of the industry. The WSA is locked until 2024. In short, it will not change.

AMD is a company still in a state of illusion and arrogance, like it's been ever since 2003/2004. And Rory has not been able to change that sick culture.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
Someone really needs to smash some reality into AMD: they can't impose HSA with that very limited market share, much less speed up adoption by devs.
They can't do whatever they want just because they sold HD7850 cores as IGPs for consoles.

None of the consoles support HSA; only the PS4 supports hUMA. MS and Sony had more sense than AMD in this matter. They both knew these were useless features. The hUMA support on the PS4 is more an accident than a request. HSA suffers the exact same fate as 3DNow!, SSE4a and SSE5.

Gaming performance on APUs is the only thing that keeps them alive; if they sacrifice it for HSA, it's game over for them. The major superiority in the IGP sector they enjoyed before Sandy Bridge is over, and it seems they haven't realised it yet.

They already sacrificed it for HSA. AMD sits on the biggest memory bandwidth bottleneck there is. They have a solution for it, as we have seen with the Xbox One, but that would require them to sacrifice HSA and hUMA. Once again, the incompetence and arrogance at AMD strike in favour of something that will never catch on.

The VIA Nano 2 could have competed with Brazos; the problem was VIA started with a single core on a 65nm process, and by the time the 40nm dual-core was available it was too late. Combine that with crappy drivers and a crappy IGP and you've got a recipe for failure.
VIA should have looked to Nvidia chipsets after the Intel C2D/Atom+Nvidia partnership was over.

Exactly. And AMD sits in the same position now. Delayed and inferior process nodes, insufficient R&D budgets. The negative feedback spiral at work. And it's only gonna get worse.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
Someone is getting afraid of the second coming (Kaveri) :whistle:

Yes, just as afraid as of Bulldozer and Phenom. Both were the second coming too. Amazing products that just... well... flopped, because they were crap and hyped beyond imagination by fanboys, shills, advocates and AMD PR.
 

SPBHM

Diamond Member
Sep 12, 2012
5,076
440
126
Nobody said it will not be memory limited, but Kaveri will also have a newer memory controller, hUMA and the GCN graphics architecture. Combine all that with 832 Radeon cores and you have a beast. The compute capability should be off the scale.

http://www.anandtech.com/show/5541/amd-radeon-hd-7750-radeon-hd-7770-ghz-edition-review
Also, have a look at the HD7770 (128-bit memory) with 640 GCN Radeon cores against the HD6850 (256-bit memory) with 960 VLIW5 cores and the HD5770 (128-bit memory) with 800 VLIW5 cores.

The HD7770 is consistently faster than the HD5770, with a huge lead in newer games (tessellation, compute shaders etc). It is also very close to the HD6850, which has a 256-bit memory controller.

But not a gaming beast, for sure.

I don't get it; are you really using 128-bit 4500MHz GDDR5 as an example of memory bandwidth limitation? Cut that in half and a bit more while adding a lot more GPU power and it's easy to see it makes no sense. The 6850 had a lot of memory bandwidth but a much weaker GPU; it wouldn't suffer as much as the 7770 if you cut the memory bandwidth in half.

Even the 6670, which is around the same performance as the IGP from Trinity/Richland, suffers a lot with DDR3:

[Image: Average-Perf.png - average performance chart]


Basically, Richland with GDDR5 would already be receiving a big boost in performance... so you basically want a 2.5x (or more?) better GPU with the same memory bandwidth (which is already too low for Richland/Trinity)?
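To put rough numbers on that bottleneck, here's a minimal sketch; the memory speeds are assumed typical configurations for these parts, not measured values:

```python
def bandwidth_gbps(bus_bits, rate_gts):
    # Peak bandwidth = bus width / 8 * effective transfer rate
    return bus_bits / 8 * rate_gts

# Assumed typical configurations
print(f"HD 6670 GDDR5 (128-bit @ 4.0 GT/s): {bandwidth_gbps(128, 4.0):.1f} GB/s")
print(f"HD 6670 DDR3  (128-bit @ 1.8 GT/s): {bandwidth_gbps(128, 1.8):.1f} GB/s")
# A Trinity/Richland IGP shares a dual-channel DDR3 bus (128-bit total) with the CPU
print(f"DDR3-1866 dual channel: {bandwidth_gbps(128, 1.866):.1f} GB/s (shared with CPU)")
```

A 6670-class GPU more than doubles its bandwidth going from DDR3 to GDDR5, while the APU's IGP has to split the smaller DDR3 figure with the CPU cores.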
 

Ajay

Lifer
Jan 8, 2001
16,094
8,115
136
eDRAM is the only way. But again, AMD doesn't have the know-how to do it correctly, so to speak. Just look at the Xbox One: no hUMA.
While this shouldn't be a problem given the right priorities, it becomes one with AMD, as always, because AMD puts a hopeless project like HSA ahead of actual usefulness for the consumer.

eDRAM isn't the only way, just the best way to keep performance up and platform costs down for a desktop/laptop CPU. But if eDRAM breaks HSA/hUMA, then HSA is flawed and you are correct. Sadly, there seems to be too much momentum within AMD pushing HSA forward. Clearly, a good engineering team could overcome the compatibility problem (Intel is slowly moving toward unified memory as well), but I don't know if AMD has the will or resources to make that change. I hope they do; Carrizo would be a much, much better product with eDRAM. Unfortunately, the tail is wagging the dog here.

Remember, Intel only jumped on eDRAM at 22nm after being out of the DRAM market for a long time - though they had the resources to commit to such a project and were able to equal IBM performance-wise (who've been using eDRAM for three generations, IIRC) on their first shot.
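As a back-of-the-envelope illustration of why eDRAM helps, here's a toy blended-bandwidth model; the Crystalwell-class figure and the hit rates are loose assumptions for the sketch, not measurements:

```python
def effective_bandwidth(hit_rate, cache_bw, dram_bw):
    """Blended bandwidth when a fraction of GPU traffic is served by an on-package cache."""
    return hit_rate * cache_bw + (1 - hit_rate) * dram_bw

dram = 34.1    # GB/s, dual-channel DDR3-2133 (assumption)
edram = 50.0   # GB/s per direction, a Crystalwell-class ballpark (assumption)

for hit in (0.0, 0.5, 0.8):
    print(f"hit rate {hit:.0%}: ~{effective_bandwidth(hit, edram, dram):.1f} GB/s effective")
```

Even a modest hit rate lifts the IGP well past what the DIMMs alone can deliver, which is the whole appeal.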
 

Shivansps

Diamond Member
Sep 11, 2013
3,918
1,570
136
I think the best way is using a GDDR5 sideport of at least 128-bit... and forgetting about hUMA and HSA...

The CPU uses the better-timings DDR3 and the IGP uses the sideport GDDR5, and there would be no fighting each other for bandwidth.
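A minimal sketch of the arithmetic behind the sideport idea, assuming dual-channel DDR3-2133 for the CPU and a 128-bit 4.5 GT/s GDDR5 sideport for the IGP (both figures are assumptions):

```python
def bandwidth_gbps(bus_bits, rate_gts):
    return bus_bits / 8 * rate_gts

cpu_pool = bandwidth_gbps(128, 2.133)  # dual-channel DDR3-2133, CPU traffic only
igp_pool = bandwidth_gbps(128, 4.5)    # 128-bit GDDR5 sideport, IGP traffic only

print(f"CPU pool: {cpu_pool:.1f} GB/s + IGP pool: {igp_pool:.1f} GB/s, no contention")
```

Two private pools instead of one shared one, at the cost of extra pins, board space, and a split address space (which is what breaks hUMA).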
 

Ajay

Lifer
Jan 8, 2001
16,094
8,115
136
Someone is getting afraid of the second coming (Kaveri) :whistle:

If Kaveri were coming on a high-performance 14nm process I would agree with you. No such luck, however. Kaveri just gives AMD a bit of breathing space on the desktop, since Broadwell will be targeting mobile markets. Carrizo needs to be a home run in 2015 for AMD to stay in the game and not lose a lot more market share. As ShintaiDK points out, a max of 65W and the likelihood of no eDRAM will severely handicap that APU. And if Carrizo is on 28nm instead of 20nm, then it really is game over, because the performance delta will just be too small to matter.

Right now on desktops, if I were AMD, I would be doing the opposite of Intel and shooting for higher performance instead of low power, because they can't win playing by Intel's rules. It's more lucrative to rule a small kingdom than to be a slave in a larger one.
 

Ajay

Lifer
Jan 8, 2001
16,094
8,115
136
I think the best way is using a GDDR5 sideport of at least 128-bit... and forgetting about hUMA and HSA...

The CPU uses the better-timings DDR3 and the IGP uses the sideport GDDR5, and there would be no fighting each other for bandwidth.

That would have been a reasonable option, IMO, but it doesn't seem like anyone can convince AMD of that. It's their 'innovation' (HSA, etc.) and they just won't modify it or change course no matter what it's costing them in the short term. Long term it would be a good idea if they weren't bleeding money right now.
 

jpiniero

Lifer
Oct 1, 2010
17,138
7,524
136
Right now on desktops, if I were AMD, I would be doing the opposite of Intel and shooting for higher performance instead of low power, because they can't win playing by Intel's rules.

Except no OEM wants 100+W CPUs.
 

Enigmoid

Platinum Member
Sep 27, 2012
2,907
31
91
Except no OEM wants 100+W CPUs.

And the problem with that is that APUs pretty much require a large TDP budget, as you are joining a CPU and a GPU together on the same chip. Look at Iris Pro: at 1100-1150 MHz it uses around 30W of power for the IGP according to HWMonitor (see the notebookcheck review), and it would like more. The 7750 uses around 40-50W. On a 65W chip, when you are a company not known for great efficiency, reducing the TDP budget looks like a bad thing.
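The squeeze is simple subtraction. A sketch using the power figures quoted above (the uncore number is an assumption):

```python
chip_tdp = 65   # W, the quoted SKU ceiling
igp_draw = 40   # W, low end of the 40-50W quoted for a 7750-class GPU
uncore   = 10   # W, memory controller, IO, etc. -- a rough assumption

cpu_budget = chip_tdp - igp_draw - uncore
print(f"Left for the CPU cores: {cpu_budget} W")  # ~15 W for the entire CPU side
```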
 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
Look back at your own post #284 above. So, to me, it seems like this was a possibility, though not one AMD chose to make.



65W on 20nm isn't horrible in and of itself. It totally depends on what EX can bring to the table over SR. I've read about considerations for eDRAM from AMD and GF in the past. I think they could pull it off, probably not as well as Intel, but it would still likely be the best solution for improving graphics performance without going past dual-channel DRAM.

I agree that Skylake and Skymont will practically kill off AMD in the laptop space, save for the "cheapies" because Intel just won't go there.

AMD stuck themselves on the road to "VIAville" with incompetence, starting with the delay to 65nm, then paying waaay too much for ATi, and finally spinning off their fabs and agreeing to the GF WSA (all architected by Hector 'Ruins All' Ruiz and the BOD).

Could AMD turn around? It would take, basically, the undoing of those three things to make it happen; for example, GloFo getting 14nm ahead of schedule, a large cash infusion from an 'angel' investor, and sufficient shipments to make the WSA effectively void. Sadly, the odds of all those things happening are dismal, and AMD's current margins just won't support a self-sustained turnaround - hence your conclusion (which you aired, what, last year?) is still pretty much on target, IMHO. Getting both major console wins will likely slow down AMD's decline, but it won't save the company from becoming a shadow of its former glory.

Which is a real shame, given the bright period of 1999-2006: AMD released the first CPU that substantially outperformed the P6 core in performance per clock, was first to use copper interconnects, first to 1GHz, first to a point-to-point bus and an integrated memory controller, first to 64-bit x86, and first to native dual core, all the while outperforming Intel and IBM (back when POWER was a desktop player), only to get destroyed by Intel leapfrogging it in both architecture design and fabrication.

65W doesn't sound bad though. GlobalFoundries/AMD's processes used to be optimized for high-wattage designs; the optimal position on the curve for their processes is likely substantially lower than it used to be.
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
But not a gaming beast, for sure.

I don't get it; are you really using 128-bit 4500MHz GDDR5 as an example of memory bandwidth limitation? Cut that in half and a bit more while adding a lot more GPU power and it's easy to see it makes no sense. The 6850 had a lot of memory bandwidth but a much weaker GPU; it wouldn't suffer as much as the 7770 if you cut the memory bandwidth in half.

Even the 6670, which is around the same performance as the IGP from Trinity/Richland, suffers a lot with DDR3:

[Image: Average-Perf.png - average performance chart]


Basically, Richland with GDDR5 would already be receiving a big boost in performance... so you basically want a 2.5x (or more?) better GPU with the same memory bandwidth (which is already too low for Richland/Trinity)?

That graph is very interesting; it clearly shows the GT640, which happens to be a DDR-3 part, being almost as fast as the HD7730 GDDR-5.
I'll tell you a "secret": it has 33% more texture units and double the color ROPs of the HD7730.

When a game needs more pixel shader processing, the higher texture unit count will make it faster even with lower memory bandwidth. That means the HD7730 is pixel-shader limited in those games first, and memory-bandwidth starved second.

Also, it is clear that changing to a better architecture alone (GCN vs VLIW5/4) can give you another 22% more performance with the same memory bandwidth (HD7730 DDR-3 vs HD6670 DDR-3). Now add that Kaveri will have more Radeon cores, more texture units, a newer memory controller and hUMA compared to Trinity, and we are looking at a nice performance gain. Also, compute performance for tessellation and other DX-11 features will be way higher than Trinity's.

It would gain more with GDDR-5 for sure, and it will be memory bandwidth limited, but memory bandwidth is not everything, especially when your graphics pipeline is stalling.

Edit: HD7730 review link below.
http://www.tomshardware.com/reviews/radeon-hd-7730-cape-verde-review,3575.html
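Those unit counts translate directly into fillrate. A quick sketch using the commonly listed GK107/GT 640 and HD7730 configurations (clocks and unit counts are assumptions from spec sheets):

```python
def fillrates(rops, tmus, clock_ghz):
    # Pixel fillrate (GPixel/s) and texture fillrate (GTexel/s)
    return rops * clock_ghz, tmus * clock_ghz

# Assumed spec-sheet configurations: (name, ROPs, TMUs, core clock in GHz)
for name, rops, tmus, clock in [("GT 640 (GK107)", 16, 32, 0.90),
                                ("HD 7730",         8, 24, 0.80)]:
    pix, tex = fillrates(rops, tmus, clock)
    print(f"{name}: {pix:.1f} GPixel/s, {tex:.1f} GTexel/s")
```

On those assumed numbers the GT 640 has more than double the pixel fillrate, which is the "secret" being pointed at.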
 

SPBHM

Diamond Member
Sep 12, 2012
5,076
440
126
That graph is very interesting; it clearly shows the GT640, which happens to be a DDR-3 part, being almost as fast as the HD7730 GDDR-5.
I'll tell you a "secret": it has 33% more texture units and double the color ROPs of the HD7730.

When a game needs more pixel shader processing, the higher texture unit count will make it faster even with lower memory bandwidth. That means the HD7730 is pixel-shader limited in those games first, and memory-bandwidth starved second.

Also, it is clear that changing to a better architecture alone (GCN vs VLIW5/4) can give you another 22% more performance with the same memory bandwidth (HD7730 DDR-3 vs HD6670 DDR-3). Now add that Kaveri will have more Radeon cores, more texture units, a newer memory controller and hUMA compared to Trinity, and we are looking at a nice performance gain. Also, compute performance for tessellation and other DX-11 features will be way higher than Trinity's.

It would gain more with GDDR-5 for sure, and it will be memory bandwidth limited, but memory bandwidth is not everything, especially when your graphics pipeline is stalling.

Edit: HD7730 review link below.
http://www.tomshardware.com/reviews/radeon-hd-7730-cape-verde-review,3575.html

The same GT 640 with GDDR5 is as fast as the 7750 (same memory as the 7730), so because of the slower memory it's using, the DDR3 GK107 ends up slower than a 384-core GCN part instead of competing with the 512-core GPU; that's pretty significant.

Obviously there is potential to go faster with DDR3, but how much? In some situations, not much.

Will Kaveri be faster than Trinity? Yes, but the gains are severely limited by memory bandwidth; you will potentially be wasting more and more performance the higher you go.

(Tom's Hardware's listed specs also show the 7730 with a 200MHz higher effective clock, but it's problematic: it lists both at 900MHz, which is relevant for your 22%; and scaling the GPU up even more on the same bandwidth would probably mean smaller gains in efficiency.)

The cards in the test are certainly not always limited by memory bandwidth; these are all basic GPUs (but the potential is there... both the 6670 and 7730 are showing it). But you are talking about more than 2x a 7730 with similar memory bandwidth... again, why did MS see the need to not only have 2x the memory bandwidth of Kaveri (if it's limited to dual-channel DDR3-2133) but to also add 100 (or 200?) GB/s+ of ESRAM for a 768-GCN-core GPU to build a well-balanced system?
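Putting numbers on that comparison, assuming Kaveri is limited to dual-channel DDR3-2133 and using the published Xbox One figures:

```python
def bandwidth_gbps(bus_bits, rate_gts):
    return bus_bits / 8 * rate_gts

kaveri      = bandwidth_gbps(128, 2.133)  # dual-channel DDR3-2133 (assumption)
xbone_dram  = bandwidth_gbps(256, 2.133)  # Xbox One: 256-bit DDR3-2133
xbone_esram = 102                         # GB/s, the published minimum ESRAM figure

print(f"Kaveri DDR3: {kaveri:.1f} GB/s")
print(f"Xbox One:    {xbone_dram:.1f} GB/s DRAM + {xbone_esram}+ GB/s ESRAM")
```

So a console with a similar-sized GPU gets roughly double the DRAM bandwidth plus a fast scratchpad on top, which is the point of the question.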
 

Dresdenboy

Golden Member
Jul 28, 2003
1,730
554
136
citavia.blog.de
None of the consoles support HSA; only the PS4 supports hUMA. MS and Sony had more sense than AMD in this matter. They both knew these were useless features. The hUMA support on the PS4 is more an accident than a request. HSA suffers the exact same fate as 3DNow!, SSE4a and SSE5.



They already sacrificed it for HSA. AMD sits on the biggest memory bandwidth bottleneck there is. They have a solution for it, as we have seen with the Xbox One, but that would require them to sacrifice HSA and hUMA. Once again, the incompetence and arrogance at AMD strike in favour of something that will never catch on.
HSA is not meant for a closed, never-changing environment, where you don't need JIT compilers since you can compile directly, which results in faster startup times and more efficient execution from the start.

And DRAM bandwidth pressure can be mitigated by having caches on the CPU and GPU side and direct cacheline transfers between the two. Most performance is gained by working on cached data.
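A toy model of that point: every access served from an on-chip cache is DRAM traffic avoided, so even moderate hit rates cut pressure on the shared bus substantially (the demand figure is an illustrative assumption):

```python
def dram_traffic(demand_gbps, hit_rate):
    # DRAM traffic remaining once a fraction of accesses hit on-chip caches
    return demand_gbps * (1 - hit_rate)

demand = 60.0  # GB/s the GPU would like to consume -- illustrative assumption
for hit in (0.5, 0.75, 0.9):
    print(f"{hit:.0%} cache hits: only {dram_traffic(demand, hit):.1f} GB/s reaches DRAM")
```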
 

AtenRa

Lifer
Feb 2, 2009
14,003
3,362
136
The same GT 640 with GDDR5 is as fast as the 7750 (same memory as the 7730), so because of the slower memory it's using, the DDR3 GK107 ends up slower than a 384-core GCN part instead of competing with the 512-core GPU; that's pretty significant.

Obviously there is potential to go faster with DDR3, but how much? In some situations, not much.

Will Kaveri be faster than Trinity? Yes, but the gains are severely limited by memory bandwidth; you will potentially be wasting more and more performance the higher you go.

(Tom's Hardware's listed specs also show the 7730 with a 200MHz higher effective clock, but it's problematic: it lists both at 900MHz, which is relevant for your 22%; and scaling the GPU up even more on the same bandwidth would probably mean smaller gains in efficiency.)

The cards in the test are certainly not always limited by memory bandwidth; these are all basic GPUs (but the potential is there... both the 6670 and 7730 are showing it). But you are talking about more than 2x a 7730 with similar memory bandwidth... again, why did MS see the need to not only have 2x the memory bandwidth of Kaveri (if it's limited to dual-channel DDR3-2133) but to also add 100 (or 200?) GB/s+ of ESRAM for a 768-GCN-core GPU to build a well-balanced system?

What I'm saying is that those low-end GPUs have more bottlenecks than just memory bandwidth.

The HD7750 is 33% faster than the HD7730 GDDR-5 because it has more pixel shaders and higher compute performance.

GDDR-5 gives the HD7730 an additional 25% more performance over DDR-3. So the HD7730 is more pixel-shader limited than memory-bandwidth limited.

The HD7730 is pixel-shader limited, compute limited and memory limited all at once.

If Kaveri has 832 Radeon cores, it will eliminate the pixel shader bottleneck and the compute bottleneck, and it will only be memory bandwidth limited. But with a new and better memory controller, and other things like a higher ROP count or higher ROP throughput, performance could be as high as the HD7750 GDDR-5 in a lot of newer games (not in those that are memory starved).

Also, GPGPU performance will be more than 2x faster than Trinity's, and that is the part AMD is focusing on with FUSION and HSA; games are not the first priority.
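For what it's worth, the two percentages line up with the spec sheets; a quick check (shader counts taken from the reviews linked above):

```python
# GCN stream processor counts from the reviews linked above
hd7750_sp, hd7730_sp = 512, 384
print(f"HD7750 vs HD7730 shaders: +{hd7750_sp / hd7730_sp - 1:.0%}")  # +33%

# vs. the ~25% the 7730 gains from GDDR-5 over DDR-3, per the argument above
print("Shader deficit (~33%) > memory gain (~25%)")
```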
 

24601

Golden Member
Jun 10, 2007
1,683
40
86
I don't see why they can't just use quad channel DDR3 like the Xbox One.

It seems like the absolute easiest solution.

They also need to fix their absolutely archaic and horrendously performing integrated memory controller (compared to Intel's)

The only reason Intel still uses dual-channel DDR3 on their laptops and desktops is that they have an integrated memory controller that performs about 2x as well as AMD's, making quad channel of no real-world benefit for them in desktop applications.
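The arithmetic behind the quad-channel suggestion, assuming DDR3-2133 and the standard 64 bits per channel:

```python
def channels_bandwidth(n_channels, rate_gts, channel_bits=64):
    # Each DDR3 channel is 64 bits wide
    return n_channels * channel_bits / 8 * rate_gts

for n in (2, 4):
    print(f"{n}-channel DDR3-2133: {channels_bandwidth(n, 2.133):.1f} GB/s")
```

Doubling the channel count doubles peak bandwidth, at the cost of more pins, traces and DIMM slots; hence the objections that follow.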
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
I don't see why they can't just use quad channel DDR3 like the Xbox One.

It seems like the absolute easiest solution.

They also need to fix their absolutely archaic and horrendously performing integrated memory controller (compared to Intel's)

The only reason Intel still uses dual-channel DDR3 on their laptops and desktops is that they have an integrated memory controller that performs about 2x as well as AMD's, making quad channel of no real-world benefit for them in desktop applications.

Cost and implementation.

Quad channel is also more or less impossible on laptops and SFF due to size.

The real solution would rather be to get rid of parallel memory buses. But the cartel made sure that would not happen anytime soon.
 

24601

Golden Member
Jun 10, 2007
1,683
40
86
Cost and implementation.

Quad channel is also more or less impossible on laptops and SFF due to size.

The real solution would rather be to get rid of parallel memory buses.

It has barely any space penalty if soldered to the motherboard.

Only the size of the DIMMs is the issue.

Having small laptops use soldered-on RAM and the large ones use 4x SO-DIMMs would seem like a reasonable enough solution.

SFF systems could also just use SO-DIMMs instead of full-size DIMMs.
 

ShintaiDK

Lifer
Apr 22, 2012
20,378
146
106
It has barely any space penalty if soldered to the motherboard.

Only the size of the DIMMs is the issue.

Having small laptops use soldered-on RAM and the large ones use 4x SO-DIMMs would seem like a reasonable enough solution.

Soldering has its own logistics issues. And soldering could easily take up even more space on the board.