Question Apple's new AFTERBURNER card. What is it? (Also, "MPX" dual GPU solution)

dsc106

Senior member
May 31, 2012
320
10
81
The new Mac Pro's Afterburner card pushes something like 6.3 billion pixels per second, and is designed for video editors working with 4K/8K ProRes and ProRes RAW.

What is this hardware, how is it different from a GPU/CPU, and is there (slash WILL there be) a PC equivalent? Basically it seems like a GPU-style acceleration card designed for 2D video streams, not 3D graphics, and it would be nice to put one into a PC video editing rig for a decent price.

Next, they have this new MPX development - basically a PCIe x16 slot with an extra connection point to pump more throughput and power into a card, allowing two GPUs in a single case. It looks like "SLI" with better power, bandwidth, and connectivity - a better SLI experience without the headache. I don't see anything like this on PC, but it could take GPU compute to the next level. In addition, it appears you can then effectively SLI two of these dual-GPU cards together. This all seems to be made possible by their custom optional extension to the PCIe x16 slot. Any idea how/if this kind of tech could come to PCs in the future (or soon)?

Other than these two innovations, it looks like typical Apple - cool design surrounding stock hardware, priced astronomically for the privilege of running macOS.
 

badb0y

Diamond Member
Feb 22, 2010
4,015
30
91
They said Infinity Fabric is being used on the dual-GPU card. Has that been done before?
 

Krteq

Senior member
May 22, 2015
991
671
136
I think it's an FPGA.
Nope, it's dual Vega 20

[image: Vega 20 GPU]


There are two of these monsters in the new Mac Pro. Infinity Fabric is used to connect the GPUs on the PCB, and xGMI (the external IF implementation) is used to connect the cards.
 
Reactions: gradoman and badb0y

Jonica

Junior Member
Jun 4, 2019
1
0
6
Nope, it's dual Vega 20
...
There are two of these monsters in the new Mac Pro. Infinity Fabric is used to connect the GPUs on the PCB, and xGMI (the external IF implementation) is used to connect the cards.

I don't think it's the GPU. At least according to their website, it sounds like an FPGA:
"Introducing Apple Afterburner, a Game-Changing Accelerator Card
The new Mac Pro debuts Afterburner, featuring a programmable ASIC capable of decoding up to 6.3 billion pixels per second."
 

dsc106

Senior member
May 31, 2012
320
10
81
Yeah, for clarity, there are two separate things here:

1.) MPX, which is a dual GPU connected to a single PCIe slot via Infinity Fabric. This also sounds possible due to Apple's addition to the PCIe slot, where you plug an extra display port right into the motherboard (or something like that). Is this something we could see brought to PC? What is Infinity Fabric?

2.) Afterburner, which some here are speculating is an FPGA? What is that, practically speaking (I can see the definition when I google it, but it's all a bit confusing as to what it actually is)? How is it different from a CPU/GPU, and might we get a PC version of a card like this without the Apple tax / macOS exclusivity? I'd love something like this in Adobe Premiere on Windows.
 
Mar 11, 2004
23,070
5,546
146
I got chastised on Ars for calling it an ASIC (when Apple themselves call it a programmable ASIC...), partly because I criticized Apple for other things. I think that chip is the biggest news on this new Mac Pro though, as it could be a game changer for video editing (which is what the new Mac Pro seems pretty explicitly targeted at). The rest is fairly meh in my opinion (it would've been exciting if it'd been AMD, where they'd have been able to offer more cores, Infinity Fabric between CPU and GPU, PCIe 4.0, more memory channels, and probably more PCIe lanes).

As for Infinity Fabric, well, it's on every Vega 20, but there hasn't been a dual-GPU card using it prior to now. I think that's an area Infinity Fabric is well suited for.

That special PCIe connector is interesting too, but it seems like an engineering fix for a problem that didn't need to exist (i.e. PCs just have power connectors on the cards, but I guess Apple can't be having that). I personally would rather have seen them make it so the GPUs could slot into sockets like CPUs (and thus use the better heatsink solutions that have been developed for CPUs). Just make their own socket that can support, say, ~300W chips (be it CPU, GPU, or whatever else), and then use the same heatsink mechanism on all of them. Make a main motherboard block that has a socket, and then have a separate block for memory that could sit between multiple socket blocks (so they could share memory easily). Or maybe integrate the memory and some storage, so the system memory serves as a cache for NAND to speed things up. Or maybe integrate NAND on the video cards and use HBM2 as the cache.
 
Mar 11, 2004
23,070
5,546
146
Yeah, for clarity, there are two separate things here:

1.) MPX, which is a dual GPU connected to a single PCIe slot via Infinity Fabric. This also sounds possible due to Apple's addition to the PCIe slot, where you plug an extra display port right into the motherboard (or something like that). Is this something we could see brought to PC? What is Infinity Fabric?

2.) Afterburner, which some here are speculating is an FPGA? What is that, practically speaking (I can see the definition when I google it, but it's all a bit confusing as to what it actually is)? How is it different from a CPU/GPU, and might we get a PC version of a card like this without the Apple tax / macOS exclusivity? I'd love something like this in Adobe Premiere on Windows.

Not sure what you mean about a DisplayPort into the motherboard, although I don't know that it matters. The PCIe connector Apple developed seems mainly there to provide power for two ~250W GPUs (a standard PCIe x16 slot only supplies around 75W, so Apple's special connector provides the extra ~475W needed to power the two Vega 20 chips), because Apple can't use standard 6/8-pin power connectors from the power supply for some reason. It might do more than that, but power seems to be the main impetus behind its inclusion (PCIe 3.0 isn't usually much of a bottleneck for GPU-to-CPU tasks, and Infinity Fabric handles GPU-to-GPU communication).
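
Using the rough figures above (the 75 W slot figure, the ~475 W figure for the extra connector, and ~250 W per Vega 20 are all ballpark numbers from this discussion, not official specs), the budget works out roughly like this:

```python
# Ballpark power budget for the dual-GPU MPX module.
# All figures are rough numbers from the discussion above, not official specs.

slot_power      = 75    # W, roughly what a standard PCIe x16 slot supplies
extra_connector = 475   # W, the figure quoted for Apple's auxiliary connector
per_gpu         = 250   # W, rough board power for one Vega 20 class GPU

available = slot_power + extra_connector
needed    = 2 * per_gpu

print(f"Available: ~{available} W, needed for two GPUs: ~{needed} W, "
      f"headroom: ~{available - needed} W")
```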

Infinity Fabric is AMD's own interconnect protocol. Initially it was used between the modules/chiplets of CPUs in Zen designs, but it can be used for more than that. They integrated it into their newest GPU (Vega 20), but thus far it has only been used for GPU-to-GPU communication (here on the same card; their other pro cards do it through a Crossfire-style bridge connector). I believe it can also run over the PCIe hardware (so it can be used between CPU and GPU). There are several competing protocols for this kind of thing, though (Nvidia has its own, and there are 2-3 others that I believe AMD has said it supports; they're mainly used in the datacenter/server/HPC market).

It seems to be an FPGA that Apple programmed for handling a lot of video feeds. An FPGA is basically a chip full of generic logic gates whose behavior you configure with software - like a generic chip whose logic you can program to perform specific tasks. And you can update it over time via software/firmware (versus needing to physically rework the transistors like you would when developing a new CPU/GPU). The catch is that while it's programmed for a given task it can probably do that task well, but it can't do other tasks well simultaneously (the way a CPU can). FPGAs have become popular because developing custom processors costs a lot of money and general-purpose parts might not be feasible (or would cost too much, among other reasons), so an FPGA gives you a chip to design your processor on, and you can update it via software instead of producing another chip to get that capability. The base FPGA chips are also improving (partly why they've become popular), which is why they can be programmed for specific tasks and outperform much larger, more expensive, heavily engineered chips at those tasks. For instance, they can program one to outperform modern GPUs at handling a lot of video feeds: GPUs have some of that capability built in, but not 12 8K feeds' worth or whatever Apple is touting, because few markets need that and adding it would drive up the cost of GPUs further, when the vendors can just sell more graphics cards to the people who do need it (where those people are then paying for a lot of other things they don't necessarily need, like compute and 3D graphics rendering capability).
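
For a rough sense of what 6.3 billion pixels per second buys you, here's a quick back-of-the-envelope sketch in Python (the frame sizes and frame rates are illustrative assumptions, not Apple's published test conditions):

```python
# Back-of-the-envelope: how many video streams fit in a 6.3 Gpixel/s decode budget.
# Frame sizes and frame rates are illustrative assumptions, not Apple's test setup.

BUDGET = 6.3e9  # pixels per second

streams = {
    "4K DCI @ 30 fps": 4096 * 2160 * 30,
    "4K DCI @ 60 fps": 4096 * 2160 * 60,
    "8K @ 30 fps":     8192 * 4320 * 30,
    "8K @ 60 fps":     8192 * 4320 * 60,
}

for name, px_per_sec in streams.items():
    print(f"{name}: {px_per_sec / 1e9:.2f} Gpx/s per stream, "
          f"~{BUDGET / px_per_sec:.1f} streams in the budget")
```

So the pixel budget alone works out to a handful of 8K streams or a couple dozen 4K streams at 30 fps; the real limit also depends on codec, bit depth, and the rest of the pipeline.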

(Oh, and a quick aside, since you'll see ASIC mentioned as well. An ASIC is basically the opposite of an FPGA: it's a chip designed for one purpose, where the actual transistors are laid out for that task, so you get a smaller, higher-performing chip, but it's generally much less flexible on the software side. CPUs and now GPUs have become general-purpose processors that can do many different types of tasks via software; they're somewhat specialized so they often perform well, but because they handle such a wide variety of tasks there's room to get better performance and/or efficiency by focusing a chip on one task - and FPGA and ASIC are kind of opposing methods of going about that.)

Possibly, but I'm not sure, since GPUs tend to handle that (and were better at it than most chips). Developing better GPUs is getting harder, though, and the vendors haven't been putting much focus on that specific need (few markets need it, and those markets could just buy more graphics cards and CPU power), so areas like this have been stagnating somewhat. There was some company touting a similar thing for image processing on mobile.

In some ways we used to have similar things. Back before 3D graphics processors took off (when 3D was done in software), there were 2D/video-focused graphics cards (Matrox was a brand I recall being touted for its 2D quality, and I think their cards could also often drive more monitors). And some cards used to have TV tuners and video-in processing capabilities (which is more in line with what this is: taking a bunch of native high-res video streams and playing them back in real time without bogging down the GPU and/or CPU).

So basically we're seeing what used to be done by GPUs get split up. Image/video processing will probably be its own chip. Then AI. Then rendering aspects (i.e. geometry processing). We'll probably see others (ray tracing would be one, I'd think). GPUs are being used for all of that right now, but specialized chips can probably do it better. Plus, instead of single huge GPU chips (which cost a lot of money to design, engineer, and produce) we'll get multiple smaller, more focused chips. GPUs were the choice for much of this because they were advancing rapidly and have a lot of base components that made them feasible for such a variety of markets (it's all just math when it comes down to it). GPUs also evolved specifically to cater to those markets (they expanded their math capabilities for them). That's actually been a point of contention for gamers who just want maximum pixel-processing capability: since that has become less the focus of GPU development, gamers are paying for larger graphics chips that haven't advanced in pixel processing as much as they would have if the entire chip were devoted to it. And now GPUs are being developed with lots of other markets and uses in mind, which has kind of exacerbated things.
 
Last edited:

dsc106

Senior member
May 31, 2012
320
10
81
Very interesting, thank you all.

I wonder when we'll actually start seeing more cards/processors designed for specific tasks in the way described.

So this is admittedly a niche market for an Afterburner-type FPGA/ASIC... (1) Do you think we might see someone else (Blackmagic Design, etc.) with the capability to design and manufacture such a card for sale? (2) Any chance one could buy the Afterburner card and get it supported on a Windows PC? (I mean, it is PCIe after all, and Adobe Premiere / DaVinci Resolve are being coded to support it on Mac.)

I didn't realize Infinity Fabric was just an AMD thing. So basically, the MPX card Apple is touting is completely feasible on a PC, since we can just feed more power to the card straight from the PSU? Apple's custom connector addition is for power only, not more data throughput?
 

Ajay

Lifer
Jan 8, 2001
15,429
7,847
136
because Apple can't use standard 6/8-pin power connectors from the power supply for some reason
I think Linus mentioned that there are some PCIe power connectors in the new Mac Pro - but that's a butt-ugly solution for a high-end Apple product ;)
 
Mar 11, 2004
23,070
5,546
146
Very interesting, thank you all.

I wonder when we'll actually start seeing more cards/processors designed for specific tasks in the way described.

So this is admittedly a niche market for an Afterburner-type FPGA/ASIC... (1) Do you think we might see someone else (Blackmagic Design, etc.) with the capability to design and manufacture such a card for sale? (2) Any chance one could buy the Afterburner card and get it supported on a Windows PC? (I mean, it is PCIe after all, and Adobe Premiere / DaVinci Resolve are being coded to support it on Mac.)

I didn't realize Infinity Fabric was just an AMD thing. So basically, the MPX card Apple is touting is completely feasible on a PC, since we can just feed more power to the card straight from the PSU? Apple's custom connector addition is for power only, not more data throughput?

We already do, just not in the consumer space. I'm not sure we'll see any big ones there; for consumers I think they'll do it at the chip level. They already do in mobile processors, and they're moving toward something similar elsewhere (but instead of integrating everything on the same die, they're building multiple different chips and fitting them all on the same package - like how AMD puts multiple CPU dies on a single CPU package - which even then isn't really new, as both AMD and Intel did something similar in the early days of going multi-core; now they're looking at putting GPU dies alongside CPU dies, and AMD has split the I/O into its own separate die).

But in data centers there are already dedicated AI processors (see Google's Tensor Processing Unit, or TPU). And there's a bunch of other stuff that's been made for those markets (I wouldn't be surprised if big broadcast companies and movie studios have some specialized video hardware).

In a lot of ways this stuff isn't new at all; it's a callback to earlier computing, where there were lots of specialized chips - you'd have a main processor, but then math co-processors, dedicated I/O chips, video chips, audio chips, and a bunch of others. The computing industry kind of fluctuates between specialized and general-purpose depending on production capabilities. Transistor production was advancing so quickly that it was possible to just keep improving general-purpose chips like CPUs (and later GPUs) to the point where dedicated chips had a hard time keeping pace. Software was where most of the real development was happening, so constantly faster CPUs made software run faster, and dedicated chips need software too - so it cut out the development of specialized processing chips by letting everyone focus on the software side and ride the improvements in CPUs. And instead of a bunch of companies making their own processing chips, they were all buying from, say, Intel, who then had the money to pour back into advancing their CPUs further.

It's pretty fun to look at old '70s and '80s computers - so many chips. As a quick aside, one of the reasons I loved the show Halt and Catch Fire is that the first season delves into that nitty-gritty (without becoming too technical and boring; they kind of boil things down to "oh, what if we put chips on both sides of the mainboard instead of just one?", which was a real development made in order to build more compact computers).

Er, sorry for the long-winded responses!

Yeah, it's a niche market (although I do think even the general consumer market could use some advancement there). AMD actually talked about stuff like this about a decade ago - they talked about building a "holodeck", where one of the steps to getting there was just having lots and lots of displays; they mentioned ridiculous numbers like 16 or maybe even 32. That was behind their Eyefinity push for multiple displays, especially for gaming, although that kind of fell off with consumers and most only ever did 3x1 (there are day traders who run lots of displays, but they just use multiple video cards). Stuff like that did already exist (ever see the wall of TVs showing a single video feed in movies?), but a single-card/chip solution is fairly new.

Apple's chip isn't really outputting to displays, though; it's handling video feeds (there are similarities, but it's not quite the same thing - they showed someone with a video editing timeline containing something like 8 video feeds that they could sync, splice, merge, and generally edit, all on one display; it could assuredly drive multiple displays, but I think that would be handled by the video card, which they seem to have beefed up on the output side with several Thunderbolt ports). I'm not sure if the Afterburner chip can do any actual processing beyond that (like decode/encode - I imagine it'd at least need some decoding capability, but they were talking about pushing raw, unencoded video streams through it, which is actually harder since those are much larger, hence why normal systems bog down so badly handling multiple feeds of that; so it might not, or the streams might need to be decoded by the CPU or GPU before the Afterburner chip takes over handling the feed).
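
To put numbers on why raw, unencoded feeds are so punishing, here's a rough sketch of the uncompressed data rate of a single stream (the resolution, frame rate, and bit depth are illustrative assumptions; ProRes RAW is actually lightly compressed, so real figures would be lower):

```python
# Rough uncompressed data rates for single video streams.
# Resolutions, frame rate, and bit depth are illustrative assumptions.

def raw_gb_per_sec(width, height, fps, bits_per_pixel):
    """Approximate uncompressed data rate in GB/s for one stream."""
    return width * height * fps * bits_per_pixel / 8 / 1e9

for label, (w, h) in {"4K DCI": (4096, 2160), "8K": (8192, 4320)}.items():
    rate = raw_gb_per_sec(w, h, fps=30, bits_per_pixel=36)  # 12-bit RGB
    print(f"{label} @ 30 fps, 12-bit RGB: ~{rate:.1f} GB/s uncompressed")
```

A handful of streams at those rates quickly saturates storage and PCIe bandwidth, which is exactly the kind of grunt work a dedicated card can take off the CPU and GPU.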

It's definitely possible that someone could make such a card for Windows. It might actually already exist in the niche markets that would need it (companies aren't going to advertise these to general consumers, so unless you worked in those fields you likely wouldn't even know such products exist); it's not unheard of. While it's not the same thing, look up the Video Toaster for an example of a product built for professional video editing (it was developed by Dana Carvey's brother - well, he was one of several who developed it).

I doubt you'd be able to get this Afterburner card to work on Windows. It might possibly be hacked to work (after all, people build "Hackintoshes", getting macOS running on non-Apple systems), but even if you got it working it almost certainly wouldn't work anywhere near as well as it does in Apple's system, so it'd be pointless anyway.

Yeah, Infinity Fabric is just an AMD protocol (they said they'd license it to anyone, but Intel and Nvidia probably aren't going to, especially since there are other industry protocols for systems that mix different brands of hardware). It will mostly let AMD's own CPUs and GPUs communicate better.

It's feasible. We already had cards like that (for 7 or so years we got high-end dual-GPU graphics cards, often targeted at PC gamers back when Crossfire and SLI were more popular than they are now). Cooling, power (supposedly dual-GPU cards were limited to 300W, but people would overclock them and push them up to 500W pretty easily), and cost (they were often $1000+, though single-GPU cards are in that price range now) were big issues though. And then multi-GPU support (Crossfire and SLI) in games got worse, which pushed consumers away further. A lot of the data center tasks GPUs are used for still scale very well, though (almost in line with how many GPUs you have, so with 2 you get 2x the performance and with 4 you get nearly 4x), but they're starting to bump into issues moving all that data - hence protocols like Infinity Fabric and others to improve that.
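
As a toy illustration of that "scales almost linearly until data movement bites" behavior (the 3% per-extra-GPU communication overhead here is a made-up illustrative number, not a measurement):

```python
# Toy multi-GPU scaling model: near-linear speedup eroded by communication overhead.
# The 3% per-extra-GPU overhead is a made-up illustrative figure.

def speedup(gpus, comm_overhead=0.03):
    return gpus / (1 + comm_overhead * (gpus - 1))

for n in (1, 2, 4, 8):
    print(f"{n} GPUs -> ~{speedup(n):.2f}x")
```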

I don't know if Apple's special PCIe connector does more than power (I would guess that even if it doesn't with this dual-GPU card, it has the flexibility to carry data as well). Without knowing the spec itself (which Apple might not release to the general public), I can't say. They didn't say it was just for power, but I have no idea what data capability it might have.

I think Linus mentioned that there are some PCIe power connectors in the new Mac Pro - but that's a butt-ugly solution for a high-end Apple product ;)

Haha, that figures. I totally understand if they wanted to "Apple-ify" a nicer power cable setup, but this just seems weird to me. I guess it's not that weird, but it seems like there were better solutions. Frankly, I don't entirely get putting the two GPUs that close together on one card. I get that they didn't want the noise of blowers and whatnot from using stock Vega 20 cards, but that's why I said to make sockets that can support a CPU or GPU. Put them on a board along the bottom of the case, and make it so you could have 1-4 of them.

I feel like Apple could've innovated much more interesting stuff, and I don't know that it would've taken any more engineering to accomplish. They really could've made a splash if they'd developed a fiber-optic interconnect enabling low latency and speeds of something like 100 GB/s now (with the ability to expand it in the future). That's where things are heading, and Apple could've been ahead of the curve. Plus it would enable external connectivity that makes Thunderbolt look silly (Thunderbolt was originally supposed to be fiber optic).
 

Ajay

Lifer
Jan 8, 2001
15,429
7,847
136
They really could've made a splash if they'd developed a fiber-optic interconnect enabling low latency and speeds of something like 100 GB/s now (with the ability to expand it in the future).
Just add an x16 PCIe AIB; they are good for 100 Gb/s. Anyway, I think Apple did a good job creating a truly professional workstation with a lot of useful features and some new tech. The price is high, but this is the pro market, not prosumer.
 
Mar 11, 2004
23,070
5,546
146
Just add an x16 PCIe AIB; they are good for 100 Gb/s. Anyway, I think Apple did a good job creating a truly professional workstation with a lot of useful features and some new tech. The price is high, but this is the pro market, not prosumer.

I meant that as a starting point. And this would be for more than just the nearest internal connections - they could have 100 GB/s connections to anywhere in the case, or possibly even externally (likely at lower rates than internally, but probably still much higher than Thunderbolt 3 or 10Gb Ethernet).
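
For scale (and to keep bits and bytes straight: 100 GB/s is 800 Gbit/s), here's a rough comparison against existing links; the per-lane PCIe rates are the nominal post-encoding figures, and everything is approximate:

```python
# Rough bandwidth comparison against the 100 GB/s figure discussed above.
# Per-lane PCIe rates are nominal effective figures (after 128b/130b encoding).

def x16_gb_per_sec(gbits_per_lane):
    return gbits_per_lane * 16 / 8  # 16 lanes, 8 bits per byte

links = {
    "PCIe 3.0 x16":  x16_gb_per_sec(7.88),   # ~8 GT/s per lane
    "PCIe 4.0 x16":  x16_gb_per_sec(15.75),  # ~16 GT/s per lane
    "Thunderbolt 3": 40 / 8,                 # 40 Gbit/s total
    "10Gb Ethernet": 10 / 8,
}

target_gb_s = 100  # GB/s, the figure discussed above

for name, gb_s in links.items():
    print(f"{name}: ~{gb_s:.2f} GB/s ({gb_s / target_gb_s:.0%} of 100 GB/s)")
```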

Plus, since they have a rackmount version coming, they seem to expect the new Mac Pro to possibly be used for server/cloud processing, and fiber optic would let them daisy-chain systems.

Exactly, the pro market would be willing to pay for something like that, and it seems like they'll expect it in the fairly near future; Apple could be ahead of the curve here. I think it's pretty meh otherwise - Apple blew too much engineering on a silly-looking case and internal solutions to problems that already have solutions. The Afterburner card is interesting, but it seems like they're going to need to overhaul the system fairly soon (if they stick with Intel, to reap the benefits of Intel's new processors). That's why I think going with AMD and getting a special version of EPYC (basically Threadripper but with all the memory channels and maybe extra PCIe links) would've been much more interesting (it would also justify the price and would be very well suited to the intended market, where EPYC's core/thread counts would thrash Intel's stuff). They'd likely have been able to upgrade the system via just the CPU, and they could make use of Infinity Fabric links for more than just the on-card GPU connection.
 
Last edited:

Ajay

Lifer
Jan 8, 2001
15,429
7,847
136
Plus, since they have a rackmount version coming, they seem to expect the new Mac Pro to possibly be used for server/cloud processing, and fiber optic would let them daisy-chain systems.

The rackmount is probably for render farms and bulk encode/decode servers. Fiber-optic cable is used mainly for long haul, and sometimes for internal trunk lines in larger organizations. Copper cables have been able to scale much better than expected due to improvements in signal engineering.

Exactly, the pro market would be willing to pay for something like that, and it seems like they'll expect it in the fairly near future; Apple could be ahead of the curve here. I think it's pretty meh otherwise - Apple blew too much engineering on a silly-looking case and internal solutions to problems that already have solutions. The Afterburner card is interesting, but it seems like they're going to need to overhaul the system fairly soon (if they stick with Intel, to reap the benefits of Intel's new processors). That's why I think going with AMD and getting a special version of EPYC (basically Threadripper but with all the memory channels and maybe extra PCIe links) would've been much more interesting (it would also justify the price and would be very well suited to the intended market, where EPYC's core/thread counts would thrash Intel's stuff). They'd likely have been able to upgrade the system via just the CPU, and they could make use of Infinity Fabric links for more than just the on-card GPU connection.

Well, Apple probably started designing the Mac Pro 2-3 years ago, so EPYC wasn't really a viable option. Apple also has a strong relationship with Intel. I suspect that a lot of the apps that will run on these systems need high single-thread and AVX performance. As far as spending a lot of engineering resources on design - that's Apple's MO, plus they have the money to spend. Lastly, I don't think Apple will be overhauling this Mac Pro any time soon (except for processor upgrades, as they seem to be making a serious commitment this time around).
 
Reactions: ozzy702

joesiv

Member
Mar 21, 2019
75
24
41
I also wonder if Thunderbolt is another reason Apple is inclined to stick with Intel for CPUs. They've put a lot of eggs into that basket. I believe Intel has opened up Thunderbolt somewhat recently, but I don't know the exact details of what Intel has allowed for third parties. I do know that Intel is definitely where you get the best support for the latest and greatest Thunderbolt technology.

I also wonder if one day we might see Apple leverage AMD's semi-custom services and design their own CPU. They could label it Apple and have some very interesting combinations of AMD IP, chiplet or monolithic, for high-end solutions like these Mac Pros.
 

senseamp

Lifer
Feb 5, 2006
35,787
6,195
126
An FPGA to make up for the fact that their chosen GPUs can't handle 8K 60fps in real time, unlike the PC competition. Of course, their users will be the ones paying for this patch.
 

Ajay

Lifer
Jan 8, 2001
15,429
7,847
136
An FPGA to make up for the fact that their chosen GPUs can't handle 8K 60fps in real time, unlike the PC competition. Of course, their users will be the ones paying for this patch.
Yeah, a Quadro would have been able to handle that, but Apple and Nvidia have no love for each other. I think Afterburner can handle more simultaneous streams, if I read correctly. Given Apple's semiconductor design prowess, we could see more from them in the future.