[techradar] This startup wants to kill the CPU and GPU in one go


NostaSeronx

Diamond Member
Sep 18, 2011
3,811
1,290
136
Do you consider 7-way as not wide?
It is only two x86 macro-ops wide.
One Zen macro-op is one ALU or FPU operation + two loads and one store + one branch (fused), in comparison.

Denver is smaller than Bulldozer; it is even smaller than Jaguar. It is at best on par with Bobcat. Don't even get me started on how Silvermont is better than Denver. Cortex-A57 is larger than Denver. This is all about architecture width, by the way. Nvidia's Carmel is smaller than Samsung's M3/M4, so the next core isn't much better off. It is only a tiny bit larger than the Cortex-A76.
 

Hitman928

Diamond Member
Apr 15, 2012
6,696
12,373
136
Now that I am home, I finally have time to think about it and read up on it. I think I understand all the factors now.

When the process gets smaller, the resistance of the "wires" increases, but since high-speed signals also propagate through them, the skin effect starts to play a role as well, as do capacitive effects.
Mainly the relative permittivity (dielectric constant) and to some extent the permeability are the issues. At least that is what I understand of it.

Not so sure about permeability, but the rest of it, yes.
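For what it's worth, permeability does at least show up in the skin effect itself: skin depth falls as 1/sqrt(f·μ), so the high harmonics of a fast edge ride in an ever-thinner shell of the conductor, raising its effective resistance. A quick back-of-the-envelope sketch in Python, assuming textbook copper values:

```python
import math

def skin_depth(freq_hz, resistivity=1.68e-8, mu_r=1.0):
    """Skin depth delta = sqrt(rho / (pi * f * mu)), in meters.
    Defaults assume copper (rho ~ 1.68e-8 ohm*m, non-magnetic)."""
    mu = mu_r * 4e-7 * math.pi  # absolute permeability of the conductor
    return math.sqrt(resistivity / (math.pi * freq_hz * mu))

# delta shrinks as 1/sqrt(f): the harmonics of a fast edge crowd into
# an ever-thinner shell, raising the effective wire resistance.
for f in (100e6, 1e9, 10e9):
    print(f"{f/1e9:5.1f} GHz -> skin depth ~ {skin_depth(f)*1e6:.2f} um")
# ~6.52 um at 0.1 GHz, ~2.06 um at 1 GHz, ~0.65 um at 10 GHz
```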

A long time ago at work we did some experiments with a time-domain reflectometer and the principle behind it. That was all about the velocity factor, which depends on the material surrounding the conductor. It was also all about matching impedance to prevent reflections.
Even in chip layouts, must the impedance be matched to prevent reflections?

Not sure what you mean by the bolded, but yes, the velocity factor (signal propagation speed) depends on the effective relative permittivity of the material surrounding the microstrip line, as well as on the ratio of the width of the microstrip line to the height (or thickness) of the dielectric material.
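To put a rough number on that velocity factor, here's a minimal sketch using the standard Hammerstad closed-form approximation for microstrip; the FR-4-style eps_r = 4.4 and w/h = 2 are just illustrative assumptions, not values from any real process:

```python
import math

C0 = 299_792_458.0  # speed of light in vacuum, m/s

def microstrip_eps_eff(eps_r, w_over_h):
    """Hammerstad approximation for the effective relative permittivity
    of a microstrip line with width/height ratio w/h >= 1."""
    return (eps_r + 1) / 2 + (eps_r - 1) / 2 / math.sqrt(1 + 12 / w_over_h)

eps_eff = microstrip_eps_eff(eps_r=4.4, w_over_h=2.0)
vf = 1 / math.sqrt(eps_eff)  # velocity factor, fraction of c
print(f"eps_eff ~ {eps_eff:.2f}, velocity factor ~ {vf:.2f}")
print(f"propagation speed ~ {vf * C0:.3g} m/s")  # ~1.6e8 m/s, about half of c
```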

For reflections to occur, the wire must be long enough that the propagation time of the signal is much longer than the rise time of the signal, and the impedance must not be matched.

This is true; the wire needs to be sufficiently long that a VSWR (voltage standing wave ratio) can form. You are also correct that the rise (and fall) time of the signal is what is important, as a pulse signal is represented by a very broad range of frequencies in the frequency domain. I found these slides online that explain it so I don't have to run it through Matlab myself: http://eeweb.poly.edu/~yao/EE3414/signal_freq.pdf . From those slides, you can see a single pulse signal in the frequency domain is actually a sinc() function:
[Image: frequency-domain magnitude of a single pulse, showing the sinc() envelope]

To accurately transmit a periodic pulsed signal, you need the sum of many sine functions out to high harmonics, which means the signal you actually need to transmit in order to achieve a square wave in time contains frequencies much, much higher than your switching frequency.

[Image: square wave in time built up from an increasing number of sine harmonics]
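To make the harmonics point concrete, here's a small sketch (assuming NumPy) that builds a square wave from its odd sine harmonics; the edges only become sharp once you include components far above the switching frequency:

```python
import numpy as np

f0 = 1.0                         # switching (fundamental) frequency, normalized
t = np.linspace(0.0, 2.0, 1000)  # two periods

def square_from_harmonics(t, f0, n_terms):
    """Fourier-series square wave: (4/pi) * sum_k sin(2*pi*(2k-1)*f0*t)/(2k-1)."""
    k = np.arange(1, 2 * n_terms, 2)  # odd harmonics: 1, 3, 5, ...
    terms = np.sin(2 * np.pi * np.outer(t, k) * f0) / k
    return (4 / np.pi) * terms.sum(axis=1)

for n in (1, 5, 50):
    wave = square_from_harmonics(t, f0, n)
    print(f"{n:3d} terms -> frequencies up to {(2*n - 1) * f0:g} x f0, "
          f"overshoot {wave.max() - 1:+.3f}")
# More terms give sharper edges, but the highest frequency needed grows
# far beyond f0 (and the Gibbs overshoot near the edges never vanishes).
```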



An advantage of smaller process technology would be that the propagation time of the signal is shorter, but at the same time the relative permittivity increases, and thus the propagation speed goes down as well. There goes the advantage out the window again.

For the bolded, do you mean the propagation distance of the signal is shorter? Also, I don't work with FinFET processes, so I am unfamiliar with the characteristics of their dielectric material. Do you know for sure the relative permittivity is increased from prior nodes?
Because the relative permittivity increases, the Coulomb force on the electrons increases. That would make it harder to pass on the "charge" signal from electron to electron.
And there is something that is confusing me.
I always understood that electrons do not really travel that fast through a material, mainly because of scattering and other atomic forces.
Thus when a signal with a very short rise time is applied, the passing of the charge (the signal wavefront) travels at the velocity factor or signal propagation speed, but the electrons themselves move at a relatively slow pace through the material. The electrons are, however, very good at passing on information.

Not really sure where the Coulomb force comes into play in the context of the rest of the post. I think you need to remember that FETs work off of transconductance. They take a voltage signal at the input and put out a current at the output. The potential on the gate forms a channel within the FET, which begins conduction between the source and drain. The dielectric material between the gate and the channel, as well as the dielectric material of the channel itself, will be different from the material surrounding the metal interconnects. The channels are specially designed for this process to occur, while FinFETs help greatly in controlling leakage as feature sizes decrease and cutoff frequency increases.

With that said, in order for one stage to drive the next, you have to drive the input capacitance of the next gate with a sufficient current from the first gate. The rate at which the electrons from the first gate can drive the voltage on the input capacitance of the next gate is what forms the signal.
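As a toy model of that stage-to-stage handoff: treat the driving gate as an effective resistance charging the next gate's input capacitance, so the RC product sets how fast the edge can move. The component values below are purely illustrative, not from any real process:

```python
import math

# Toy model: driver = effective resistance R_drv, load = next gate's C_in.
# V(t) = Vdd * (1 - exp(-t / (R*C))); the 50% crossing sits at ln(2)*R*C.
R_drv = 10e3    # effective driver resistance in ohms (illustrative)
C_in = 0.5e-15  # next gate's input capacitance in farads (illustrative)

tau = R_drv * C_in
t_50 = math.log(2) * tau
print(f"tau = {tau*1e12:.2f} ps, 50% crossing at {t_50*1e12:.2f} ps")
# Lowering C_in (smaller gates) or R_drv (stronger drive) speeds up the edge.
```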

I recently read about ballistic conduction, where an electron can travel through a material without scattering: without encountering resistance, yet without the material being a superconductor.
It is just that at small sizes the electron encounters less interaction from surrounding atoms. At least I think that is the case, since everything influences everything when interacting.

Ballistic conduction happens, as you said, when scattering is eliminated. The electrons will still interact with the walls of the conductor, but they shouldn't be interfering with each other within the conductor.

*Hopefully I made sense as well and didn't make any silly mistakes. I'm running on very little sleep lately due to a newborn in the house.
 
May 11, 2008
22,551
1,471
126
Not so sure about permeability, but the rest of it, yes.


For the bolded, do you mean the propagation distance of the signal is shorter? Also, I don't work with FinFET processes, so I am unfamiliar with the characteristics of their dielectric material. Do you know for sure the relative permittivity is increased from prior nodes?
I assumed that since the circuit gets smaller, the signal pathways are shorter as well. But if everything gets smaller and more densely packed, I wondered whether that would make things worse.
But now that I think about it, that is where all the rare metals and diffusions are needed: high-k, low-k, FinFETs. To be honest, I find it interesting because I like to visualize in my mind what happens and understand it, but I am not working in this kind of high-tech industry. I do work at a design firm, but we just use the chips, we do not make them. :)
The closest I come to chip design is through using OrCAD and Allegro from Cadence, who also provide chip design tools.

Not really sure where the Coulomb force comes into play in the context of the rest of the post. I think you need to remember that FETs work off of transconductance. They take a voltage signal at the input and put out a current at the output. The potential on the gate forms a channel within the FET, which begins conduction between the source and drain. The dielectric material between the gate and the channel, as well as the dielectric material of the channel itself, will be different from the material surrounding the metal interconnects. The channels are specially designed for this process to occur, while FinFETs help greatly in controlling leakage as feature sizes decrease and cutoff frequency increases.

With that said, in order for one stage to drive the next, you have to drive the input capacitance of the next gate with a sufficient current from the first gate. The rate at which the electrons from the first gate can drive the voltage on the input capacitance of the next gate is what forms the signal.

Ballistic conduction happens, as you said, when scattering is eliminated. The electrons will still interact with the walls of the conductor, but they shouldn't be interfering with each other within the conductor.

*Hopefully I made sense as well and didn't make any silly mistakes. I'm running on very little sleep lately due to a newborn in the house.
You made a lot of sense to me.
The Coulomb force was something I was thinking about, but I have no readily available knowledge of how it all works at the quantum level. I'll have to dig into my literature first.
I am trying to understand how it all works, just for fun.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
Note these numbers down and people should be able to figure it out themselves.

How much does a CPU cost, even in the tens of thousands? And how much time do you need to install those?

How much do programmers cost, especially in their specialized domains, with their monthly salaries? And how much time do you need to rewrite your application?

I am convinced that, especially on the server, any future innovation will have to be built on top of x86, until x86 stagnates and something else can topple it.
With good Linux and GCC C/C++ support, it won't matter 99% of the time. As in, x86 could be replaced with a fast ARM chip today, if one existed (right now, mobile ARM CPUs are getting close to AMD's Bobcat and Intel's Silvermont, and less battery-constrained versions are still struggling to approach a Core 2's performance... maybe some upcoming server ones will actually be excellent, but I'm guessing it will take several more years of catching up to compete with 5-10 year-old x86 server CPUs, outside of workloads designed for some specific SoC). It would take a few years to build up inertia, of course, as adventurous companies take the risk and help with ironing out the kinks, but x86 doesn't have a foothold because it is x86, like it used to. It has one because x86, and products based on it, are very good in many different ways, even with recent security exploits in mind. Hardly anyone programming business software touches the lowest levels, except maybe to tweak the occasional compiler screw-up with vectorization, and that's not going to be common.
 

wahdangun

Golden Member
Feb 3, 2011
1,007
148
106
A bit of a reminder from memory lane:

The Secret of Denver: Binary Translation & Code Optimization



The good news is that, in the case of Prodigy, workloads would likely be easier to profile and optimizations could be extracted for each case. The bad news is that bigger fish have already tried replacing complex hardware with even more complex software, and failed. IIRC, in the case of the SandForce controller the performance numbers were at least there; in the case of Denver, not so much.

Because IO tasks are much more predictable.


Itanium's main problem was non-deterministic memory latency, at least as far as I understand it.

Still, even if Intel fixed it, it would still not be viable, and only a handful could benefit from it.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
OK, so the big question: how do they intend to change data structures to better utilize memory bus time, and high latency relative to ALUs, when branches need to be determined by the value of data that is not yet known? There's a lot that can be done on lots of data with map, filter, and sort implementations going highly parallel, but special compilers going against OoOE for simplicity's sake try to determine an unknown future, and invariably shift that burden from the calculation hardware to memory. Most potentially viable improvements researchers have simulated ought to work well for improving OoOE, as well as in-order, rather than obsoleting OoOE. While the idea of denser, weaker processors had some promise, and may still in some niches, it ended up failing on a large scale because the much higher latencies of weaker processors made the bigger, badder servers worth it. Trying to replace hardware OoOE with software speculation could mean an increase in instruction count of multiple orders of magnitude in some cases, chewing up off-chip RAM time, and on-chip die space and power. That, or an abject failure outside of a couple of niches, if they expect the compiler to actually predict memory accesses well, rather than implementing parallel branching as a transparent part of the ISA.

Doing it in software, whether beforehand or in the chip (i.e., Crusoe, Denver), could reduce power consumption, and explicit branching in the ISA might be able to remove some of the overheads of software-in-the-chip processors, but we've already seen that the market at large won't accept that minor improvement in overall throughput/density when it means much higher latencies, even on highly parallel tasks. I have a feeling their marketing idea of "data center workloads" is actually an extremely small niche.

Having said that, I could see a different type of processor, like this, performing well for machine learning and AI, if geared towards that style of work from the start. Systems just for certain niches, or specialized coprocessors, may well have a bright future.

As an aside, their note about the brain project does seem reasonable, regardless of how anything else works out. IBM was able to simulate small animal brains running what amounted to G5 chips, at one per neuron, IIRC, some years ago.
 

Nothingness

Diamond Member
Jul 3, 2013
3,307
2,379
136
With good Linux and GCC C/C++ support, it won't matter 99% of the time. As in, x86 could be replaced with a fast ARM chip today, if one existed (right now, mobile ARM CPUs are getting close to AMD's Bobcat and Intel's Silvermont,
Close to Silvermont? The most recent high-performance ARM chips are already much faster than Silvermont. Goldmont+ is more competitive against ARM mobile chips, but Silvermont is certainly far behind.

and less battery-constrained versions are still struggling to approach a Core 2's performance... maybe some upcoming server ones will actually be excellent, but I'm guessing it will take several more years of catching up to compete with 5-10 year-old x86 server CPUs, outside of workloads designed for some specific SoC)
ThunderX2 seems competitive on many workloads against recent Intel chips: https://www.servethehome.com/cavium-thunderx2-review-benchmarks-real-arm-server-option/
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
Close to Silvermont? The most recent high-performance ARM chips are already much faster than Silvermont. Goldmont+ is more competitive against ARM mobile chips, but Silvermont is certainly far behind.
Last I saw, they were barely getting there. Apple's could manage it for a short time, and some with active cooling do OK, but the Cortexes generally seem too power-constrained and too limited by design-by-committee. Loads/stores always look good on simple benchmarks that don't stress difficult-to-predict memory issues.

https://www.anandtech.com/show/12195/hisilicon-kirin-970-power-performance-overview/3
https://www.spec.org/cpu2006/results/res2014q3/cpu2006-20140617-29941.html

Not a perfect comparison, but the best I could quickly find, and it should be pretty fair. They're in the same ballpark, worse here, better there, with a couple of stand-out SoCs from Qualcomm, but most not looking too amazing. Goldmont+ is a bit more trouble to compare, as you also have to account for clock speed a bit more, since most models are pretty low in that spec, relatively speaking. I may have been hyperbolic, though :).

ThunderX2 seems competitive on many workloads against recent Intel chips: https://www.servethehome.com/cavium-thunderx2-review-benchmarks-real-arm-server-option/
That is one to watch, and it does very well with stream-like work. But it's still quite behind overall, yet still consumes a fair bit of power:
https://www.anandtech.com/show/12694/assessing-cavium-thunderx2-arm-server-reality/7
https://www.anandtech.com/show/12694/assessing-cavium-thunderx2-arm-server-reality/8
It might have some niches to go into, and will definitely be worth watching as future generations come out. If they can find a way to knock the power usage way down, it would be very impressive as a general-purpose server chip. Not enough to swap over from Intel, especially with AMD's new competition, but a step in the right direction, and a good choice for people actively wanting not to remain beholden to old standards.
 

Nothingness

Diamond Member
Jul 3, 2013
3,307
2,379
136
Last I saw, they were barely getting there. Apple's could manage it for a short time, and some with active cooling do OK, but the Cortexes generally seem too power-constrained and too limited by design-by-committee. Loads/stores always look good on simple benchmarks that don't stress difficult-to-predict memory issues.

https://www.anandtech.com/show/12195/hisilicon-kirin-970-power-performance-overview/3
https://www.spec.org/cpu2006/results/res2014q3/cpu2006-20140617-29941.html

Not a perfect comparison, but the best I could quickly find, and it should be pretty fair. They're in the same ballpark, worse here, better there, with a couple of stand-out SoCs from Qualcomm, but most not looking too amazing.
You're comparing SPEC rate vs. SPEC non-rate :) This is the one you want:
https://www.spec.org/cpu2006/results/res2014q3/cpu2006-20140617-29942.html

Also, take these much better results with a more recent ARM CPU:
https://www.anandtech.com/show/12520/the-galaxy-s9-review/4

Now, if you consider that Intel is known to overtune icc for SPEC, and that we're comparing a phone against a (micro)server chip, I'd still say ARM is better. But on the other hand, Silvermont is already quite old.

That is one to watch, and it does very well with stream-like work. But it's still quite behind overall, yet still consumes a fair bit of power:
https://www.anandtech.com/show/12694/assessing-cavium-thunderx2-arm-server-reality/7
https://www.anandtech.com/show/12694/assessing-cavium-thunderx2-arm-server-reality/8
It might have some niches to go into, and will definitely be worth watching as future generations come out. If they can find a way to knock the power usage way down, it would be very impressive as a general-purpose server chip. Not enough to swap over from Intel, especially with AMD's new competition, but a step in the right direction, and a good choice for people actively wanting not to remain beholden to old standards.
Power consumption has been improved: https://www.servethehome.com/updated-cavium-thunderx2-power-consumption-results/

Yeah, still not up to Intel and AMD's level, but definitely much closer than I expected.
 

ksec

Senior member
Mar 5, 2010
420
117
116
With good Linux and GCC C/C++ support, it won't matter 99% of the time. As in, x86 could be replaced with a fast ARM chip today, if one existed (right now, mobile ARM CPUs are getting close to AMD's Bobcat and Intel's Silvermont, and less battery-constrained versions are still struggling to approach a Core 2's performance... maybe some upcoming server ones will actually be excellent, but I'm guessing it will take several more years of catching up to compete with 5-10 year-old x86 server CPUs, outside of workloads designed for some specific SoC). It would take a few years to build up inertia, of course, as adventurous companies take the risk and help with ironing out the kinks, but x86 doesn't have a foothold because it is x86, like it used to. It has one because x86, and products based on it, are very good in many different ways, even with recent security exploits in mind. Hardly anyone programming business software touches the lowest levels, except maybe to tweak the occasional compiler screw-up with vectorization, and that's not going to be common.

You should read ServeTheHome; it is a great site like AnandTech, focusing more on the server side.

What you described is basically what ARM has been saying for years, and you know who is toppling Intel in server market share? AMD.

Most people just describe software as if Apple could recompile their OS and their software would instantly become ARM-based, or as if server software could just be recompiled and become ARM-based.
 

ericlp

Diamond Member
Dec 24, 2000
6,137
225
106
Most people just describe software as if Apple could recompile their OS and their software would instantly become ARM-based.

Remember when Apple took the leap of faith... back in the 6502 days of 8-bit systems, they jumped to a 32-bit chip. That basically meant saying goodbye to years' worth of coding for the 6502 CPU and restarting anew with no software base.

I think if a computer company really wanted to test the waters, Apple would be a pretty good candidate for making the hardware for open-source ARM-based processors. If they were bold enough to just let it all hang out without making much money, it would be an instant seller; I'd switch over myself. Google may be the one to crack it open, though; time will tell.

x86, with all its licensing and rules, is just a bad idea that keeps getting worse if you ask me. The levee will one day break, as open source is just too ripe for it to remain a geek OS for much longer.
 

scannall

Golden Member
Jan 1, 2012
1,960
1,678
136
You should read ServeTheHome; it is a great site like AnandTech, focusing more on the server side.
Most people just describe software as if Apple could recompile their OS and their software would instantly become ARM-based, or as if server software could just be recompiled and become ARM-based.

iOS is OS X with a touch interface. Converting to ARM would be fairly trivial for them, and they have done architecture changes before. Quite well, I might add.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
You're comparing SPEC rate vs. SPEC non-rate :) This is the one you want:
https://www.spec.org/cpu2006/results/res2014q3/cpu2006-20140617-29942.html

Also, take these much better results with a more recent ARM CPU:
https://www.anandtech.com/show/12520/the-galaxy-s9-review/4

Now, if you consider that Intel is known to overtune icc for SPEC, and that we're comparing a phone against a (micro)server chip, I'd still say ARM is better. But on the other hand, Silvermont is already quite old.
OK, you got me on that, with the Atoms. I had not seen the newest scores being that good until now. Still a ways to go, but yes, definitely beating the Atoms, finally.

I had seen that, but they only match Epyc and Xeon on power while having very low per-thread performance, with at best a small improvement under heavy many-user loads and often none (and again, no advantage when not near 100% load). They need power to be much lower, enough that someone might think, "hey, these will save us in summer electricity costs."

I would think that doing something like Sun did would be a good idea, to give them more of a market: add dedicated hardware. An SSL engine embedded into the processor, with Linux support, or something else along those lines, could do it. Something needs to make it stand out, especially with Epyc being there to take some market share away, thanks to Meltdown.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
x86, with all its licensing and rules, is just a bad idea that keeps getting worse if you ask me. The levee will one day break, as open source is just too ripe for it to remain a geek OS for much longer.
Linux is normal now in server land, and both Microsoft and Oracle have lately been helping that along for some traditional hold-outs (SQL Server licensing changes, and Oracle taking over MySQL, have done wonders for FOSS DBMS development and adoption). Windows software holds people on x86 for clients, legacy software, and closed third-party systems, but that's about it (yes, that's a big chunk of the market, but it's a shrinking chunk). Nobody in their right mind, for the last 5-10 years, would start a greenfield server program on Windows, or at least not Windows-only. In the last 3-5 years, you'd have to be crazy to do that. With support for distros like Debian and Ubuntu, you're golden. x86 has a good history, thus far, and is a safe choice. ARM (or Power, for Google, but I don't see Power getting big again) systems need something to make them stand out for people to start switching, and it won't be overnight, but if your software runs on a common Linux distro, you likely have a pretty easy migration path. In the case of scripting languages using other software for the grunt work, you'll generally have no work at all, beyond system configuration.
 

Hitman928

Diamond Member
Apr 15, 2012
6,696
12,373
136
This answer to a question at the end made me laugh.

Q: How do you dodge the patents in place in this? A: We building a new beast. Our tech is significantly different. This is America.

Edit: Also, what happened to all the talk about shorter wires enabling high-frequency design? This is only running at 4 GHz on 7 nm.

Data wires are small to reduce power

Doesn't ring true to me. These are digital circuits. I doubt AMD and Intel were dumb enough to use lines long enough for power loss through transmission to become a problem.
 

Hitman928

Diamond Member
Apr 15, 2012
6,696
12,373
136
Sorry for chaining replies; I'm just reacting as I review the presentation.

Tape out in 2019

What is this tape-out? Is it a production tape-out, meaning six months later they'll be packaged and ready to sell? Or is it a first-spin tape-out? I have a feeling they've relied (almost) entirely on simulated performance up to this point and are in for a rude awakening when actual silicon is tested.
 

french toast

Senior member
Feb 22, 2017
988
825
136
I am amazed at this. What ISA does this new chip use? Is there any chance something like this actually hits production in volume?
Very interested.
 

Hitman928

Diamond Member
Apr 15, 2012
6,696
12,373
136
I am amazed at this. What ISA does this new chip use? Is there any chance something like this actually hits production in volume?
Very interested.

It seems to be a VLIW architecture with some tweaks, so their own custom ISA. They claim they can emulate x86 with a 40% performance penalty but still be faster, clock for clock and core for core, than a 2.5 GHz Xeon. I thought Intel didn't allow emulating x86, which is why Denver had to recompile on the fly?

I have extreme doubts this ever goes to volume production, but we'll see. They mention including HBM3 controllers, which is also interesting because HBM3 is still in development with no announced release date. Best guesses put it in 2020 at the earliest.

Edit: Looked it up; Intel has been very aggressive (at least in public threats) toward any company that tries to do x86 emulation without their permission, even Microsoft. I don't expect them not to go after Tachyum as well if they try to support x86 through emulation.

It's also interesting that they say Linux is getting ported in 2019. I haven't seen anything about Tachyum support pop up upstream in the kernel (granted, I don't follow this stuff too closely). If they actually want to have Linux kernel support next year, it should be popping up as a candidate for kernel support real soon.
 

french toast

Senior member
Feb 22, 2017
988
825
136
It seems to be a VLIW architecture with some tweaks, so their own custom ISA. They claim they can emulate x86 with a 40% performance penalty but still be faster, clock for clock and core for core, than a 2.5 GHz Xeon. I thought Intel didn't allow emulating x86, which is why Denver had to recompile on the fly?

I have extreme doubts this ever goes to volume production, but we'll see. They mention including HBM3 controllers, which is also interesting because HBM3 is still in development with no announced release date. Best guesses put it in 2020 at the earliest.
I thought it was faster than a 2.5 GHz Xeon with a 4 GHz Prodigy... at emulation; faster per clock at integer and floating-point math running SPEC CPU2006 GCC.

Very interesting, and very suspicious at the same time; "hardware built around the compiler instead of after" seems like an Itanium engineer's wet dream...
 

Hitman928

Diamond Member
Apr 15, 2012
6,696
12,373
136
I thought it was faster than a 2.5 GHz Xeon with a 4 GHz Prodigy... at emulation; faster per clock at integer and floating-point math running SPEC CPU2006 GCC.

Very interesting, and very suspicious at the same time; "hardware built around the compiler instead of after" seems like an Itanium engineer's wet dream...

That's not how I read it; here's the Q&A:

Q: You mention emulation for x86. What kind of penalty in performance? A: About 40% performance loss. Significant because our customer didn't want us to invest in that, as 90% of their software will be natively compiled by next year. It's a temporary deployment. Binary 4.0 GHz emulated still outperforms 2.5 GHz Xeon

I read it as them using x86 emulation as a stopgap until applications are ported to their ISA. When running x86 code through emulation, they see a 40% performance drop, but they still outperform a 2.5 GHz Xeon (I'm assuming core for core, since no model is mentioned) in those applications. Meaning, when an application is ported and running native code, they'll outperform a 2.5 GHz Xeon by 40%. If you do the calculation, they're basically claiming the same (or a tiny bit higher) performance per core per clock as a Xeon processor, but that their CPU will also clock higher.
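Spelling that arithmetic out (a sketch of my reading; note the claim is ambiguous about whether the 40% comes off the native number or gets added onto the emulated one):

```python
prodigy_clk = 4.0  # GHz, Tachyum's stated clock
xeon_clk = 2.5     # GHz, the Xeon they compare against (model unspecified)
penalty = 0.40     # claimed emulation performance loss

# Emulated Prodigy @ 4.0 GHz ~ Xeon @ 2.5 GHz => per-clock ratio:
ipc_emulated = xeon_clk / prodigy_clk        # 0.625 of Xeon per clock
ipc_native_a = ipc_emulated / (1 - penalty)  # if emulated = 0.6 * native
ipc_native_b = ipc_emulated * (1 + penalty)  # if native = 1.4 * emulated
print(f"native per-clock vs Xeon: {ipc_native_a:.2f} or {ipc_native_b:.2f}")
# ~1.04 or ~0.88: i.e. roughly Xeon-level per-clock performance, with the
# headline win coming from the 4.0 vs 2.5 GHz clock advantage.
```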
 

french toast

Senior member
Feb 22, 2017
988
825
136
That's not how I read it; here's the Q&A:



I read it as them using x86 emulation as a stopgap until applications are ported to their ISA. When running x86 code through emulation, they see a 40% performance drop, but they still outperform a 2.5 GHz Xeon (I'm assuming core for core, since no model is mentioned) in those applications. Meaning, when an application is ported and running native code, they'll outperform a 2.5 GHz Xeon by 40%. If you do the calculation, they're basically claiming the same (or a tiny bit higher) performance per core per clock as a Xeon processor, but that their CPU will also clock higher.
Yes, we are saying the same thing in different words.
Emulation, temporary or not, takes a 40% performance hit, but that still means their CPU at 4 GHz outperforms a Xeon at 2.5 GHz.
I will be interested to see if their claim about 90% of the x86 apps being compiled for their ISA holds up... look at ARM's troubles with this.
Doubtful.

Edit: Correction. 90% of their customers' x86 software, not all x86 software in general, which would be an impossible undertaking.
 

Nothingness

Diamond Member
Jul 3, 2013
3,307
2,379
136
Yes, we are saying the same thing in different words.
Emulation, temporary or not, takes a 40% performance hit, but that still means their CPU at 4 GHz outperforms a Xeon at 2.5 GHz.
I will be interested to see if their claim about 90% of the x86 apps being compiled for their ISA holds up... look at ARM's troubles with this.
Doubtful.

Edit: Correction. 90% of their customers' x86 software, not all x86 software in general, which would be an impossible undertaking.
That's a server chip, so source is often available. I think you misunderstood what they are saying (or I misunderstood what you wrote): what they claim is that 90% of their customers will have recompiled their software by next year and so won't need emulation.

Q: You mention emulation for x86. What kind of penalty in performance? A: About 40% performance loss. Significant because our customer didn't want us to invest in that, as 90% of their software will be natively compiled by next year.
 