Xbit labs: Nvidia Project Denver is on track, does not interfere with ARM's own 64-bit

cbn

Lifer
Mar 27, 2009
12,968
221
106
http://www.xbitlabs.com/news/cpu/di...Not_Interfere_with_ARM_s_Own_64_Bit_Tech.html

Nvidia: Project Denver Is On-Track, Does Not Interfere with ARM's Own 64-Bit Tech.

Nvidia Remains Excited About Project Denver CPU Technology Despite ARMv8
[11/10/2011 11:08 PM]
by Anton Shilov

Nvidia Corp. will continue to develop its own microprocessor architecture based on the ARM instruction set despite the fact that the new breed of ARMv8 chips will support 64-bit capability and will aim at servers and the high-performance computing segment. The company remains excited about its project Denver and believes that it is on track to release it sometime in 2013.

"We are busily working on Denver, it is on track. Our expectation is that we will talk about it more, hopefully, towards the end of next year [calendar January, 2013]. [...] As we get closer to the release of Denver, we will reveal more and more of the strategies that are related to Denver. I think you are going to be as blown away and as excited as I am," said Jen-Hsun Huang, chief executive officer of Nvidia, during the latest quarterly conference call with financial analysts.

Known under the internal code-name "Project Denver", the Nvidia initiative includes an Nvidia CPU running the ARM instruction set, which will be fully integrated on the same chip as the Nvidia GPU. The hybrid processors, which will have both 64-bit ARM general-purpose cores and Nvidia's custom compute cores known as stream processors, will be aimed at various market segments, most importantly at high-performance computing and data center servers. Nvidia's chief executive does not rule out the chips entering very energy-efficient, high-throughput servers. Since technologies developed within project Denver are universal, they will eventually span the whole lineup of Nvidia products, from Tegra to GeForce to Quadro to Tesla. Obviously, Denver derivatives may power next-generation game consoles as well.

"Our focus [with Project Denver] is to supplement, add to ARM's capabilities by extending the ARM's architecture to segments in a marketplace that they are not themselves focused on. There are some segments in the marketplace where single-threaded performance is still very important and 64-bit is vital. So, we dedicated ourselves to create a new [micro-]architecture that extends the ARM instruction set, which is inherently very energy-efficient already, and extend it to high-performance segments that we need for our company to grow [on] our market," stressed Mr. Huang.

ARM recently announced its own ARMv8 architecture (and instruction set) that supports 64-bit capability and an array of features specifically designed for servers. Although Nvidia's Denver and ARM's eventual Cortex-A cores are aimed at the same market segments, they are unlikely to interfere, as the first actual Denver products are projected to emerge in calendar 2013, whereas the first ARMv8 prototypes are only expected in 2014.

"Denver architecture is [...] designed to bring a new level of performance and efficiency to the ARM instruction set beyond where they currently are. [...] In some areas that are very important to us, they are rather non-markets for them today," stressed Mr. Huang.

Sounds like the custom ARM CPU from Denver could be included in the future Tegra line-up as well.

I just wonder how far they took the concept of "Custom core"?
 

podspi

Golden Member
Jan 11, 2011
1,982
102
106
More importantly, is their 64-bit ISA going to be an Nvidia exclusive, or is it going to be compatible with ARM's ISA?
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
Regarding custom core, I found this article rather interesting.

Two custom ARM cores vs. Four vanilla ARM cores: Which approach is more efficient?

Raj Talluri, Qualcomm: "Nvidia brings a different marketing philosophy. They take a different attitude about how they market their products. Just because you make one device with four cores...[but] the rest of the system has to be big too," he said.

Talluri continued. "What's the big deal? I can get two or I can get four cores from ARM. When we do a core, we design it from the ground up. That means we do a lot of custom transistors. For example, when we run a processor at 1.4GHz or 1.5GHz...we don't push the voltage to get higher [speeds]. Because our design can run that fast at nominal voltage. The power consumption just explodes if you push the voltage up."
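For context on that last point: what Talluri is alluding to is the standard first-order CMOS dynamic-power relation (my gloss, not from the article):

```latex
P_{\text{dyn}} \approx \alpha \, C \, V_{dd}^{2} \, f
```

At a fixed clock, pushing the supply voltage up 20% alone costs about 1.2² ≈ 1.44x the dynamic power, and since higher clocks generally demand higher voltage as well, power grows roughly with the cube of the push. Hence "explodes".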

Qualcomm has, for many years, had an ARM architectural license, which allows the company to custom design its ARM processors--what Talluri is referring to when he speaks about designing a chip from the "ground up." Nvidia only recently (earlier this year) got an architectural license from ARM.

Jen-Hsun Huang, Nvidia: "I feel that I answered this question once for dual-core. Using multiple processors is the best way to conserve energy when you need performance. And our strategy, our approach is efficient performance. We want performance but not at the expense of running transistors super hot. Parallel processing is really the most energy efficient way to get performance. And we'll use as many cores as the technology can afford. And the applications can use," he said.

Huang continued. "There's all kinds of applications that benefit from multi-core and quad-core. One is multi-tasking. [For example] if I'm buying an application, updating a whole bunch of applications, while I'm reading a book and connected to Wi-Fi and I'm streaming music. That's a lot of stuff going on. That's even a lot of stuff going on for a desktop PC. So, there's no question that performance lags a bit when that happens, and when quad-core hits it's just going to crank right through all of that."
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
More importantly, is their 64-bit ISA going to be an Nvidia exclusive, or is it going to be compatible with ARM's ISA?
Exclusive. They started long before ARM got their 64-bit ISA together, so I would expect future versions to come to support ARM's ISA, but the first will be JHH's own take.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
SemiAccurate on the Denver CPU core:

http://semiaccurate.com/2011/08/05/what-is-project-denver-based-on/

What is Project Denver based on?
Part 2: A look at the T50 core

Aug 5, 2011

by Charlie Demerjian

There are a lot of misconceptions floating around Nvidia’s (NASDAQ:NVDA) ‘Project Denver’ aka Tegra 5. Publicly, it is going to do everything for everyone, but privately, the situation is a little more complex, and a lot more interesting.

When Nvidia announced Project Denver at the last CES, they made it sound like an all new initiative that was going to take over the world. Denver is not new, but it is very innovative and interesting, at least its CPU cores are. Those cores are Tegra 5/T50/(likely)Logan, and will be in a number of products. The rest of the SoC is ambitious but technically not all that different from the other SoCs out there.

The project itself has been going on for several years, I first wrote up the beginnings of the chip in late 2006 when Nvidia bought the remains of Stexar. (Sorry, no links due to this and this.) That was the birth of the Nvidia x86 program, something that has gone through more changes than a David Bowie retrospective mainly due to management. Denver has been going through what seems like a PoR (Plan of Record) change every six months, pity the people who are working on it, they must have the patience of a saint, or a whole boatload of saints.

We first wrote up that the chip was slated to be Tegra 5 last August, and Denver is just one of the variants in that line. T50 was going to be a full 64-bit x86 CPU, not an ARM-cored chip, but Nvidia lacked the patent licenses to make hardware that was x86 compatible. Nvidia was trying to ‘negotiate’ a license in the background. Sources close to the negotiating table indicate that Jen-Hsun’s mouth weighed heavily against that happening.

Publicly, Nvidia’s stance was that there was no need for any license because the company was not making x86 hardware. Technically, this is true: T50 is a software/firmware based ‘code morphing’ CPU like Transmeta. The ISA that users see is a software layer, not hardware; the underlying ISA can be just about anything that Nvidia’s engineers feel works out best. T50 is not x86 under all the covers, nor is it ARM; it is something else totally that users will never be privy to.

The idea was that this emulation of x86 in software would be more than enough to dodge any x86 patents that would stop the chip from coming to market. SemiAccurate has it on very good authority that this cunning plan would not have succeeded, and based on what the sources showed us, the chip never would have gotten to market. Since Nvidia’s public bluster was matched by equally fervent negotiations for a license in the background, we would have to conclude that they were aware of what their chances were in court.

On the day that Nvidia settled with Intel over the chipset/patent agreement, all hopes of T50 being an x86 part died. If you read the settlement, section 1.8 specifically states: "'Intel Architecture Emulator' shall mean software, firmware, or hardware that, through emulation, simulation or any other process, allows a computer or other device that does not contain an Intel Compatible Processor, or a processor that is not an Intel Compatible Processor, to execute binary code that is capable of being executed on an Intel Compatible Processor." Section 1.12 has some similarly curious language, as do other places sprinkled around the document; that isn't by chance.

So, where does a core go from here? That one is easy: it becomes an ARM core, or, if you believe Nvidia PR, it was ARM all along. T50 was never ARM hardware-based; we had originally heard it was an A15 or the follow-on part that emulated x86, but that information turned out to be wrong. T50 is its own unique ISA, and emulates the exposed ISA in embedded software. Think of it as an on-chip x86 or ARM compiler to the low-level instructions.

So, between last fall and CES, out went x86, and in came ARM, specifically the ARM-64 core that is the follow-up to the A15 chip. This caused a number of headaches for the already beleaguered engineers; as far as PoR changes go, this one was a whopper. Luckily, with one major exception, the hardware changes needed to carry this out were minimal. The same can’t be said for the software layer; going from x86 to ARM is not trivial.

Those familiar with the ARM-64 ISA will realize that it cleans up a lot of the cruft that is the ARM instruction set, but not all of it. Things like predication still remain, but are thankfully minimized, and other rough spots are cleaned up a bit more. Legacy ARM warts can be minimized in the T50 software layer, likely better than in a pure hardware implementation, but it isn’t a trivial job. Remember when we said to pity the poor engineers working on this?

The T50 core is wide, very wide, 8 pipes wide in fact. Once you have picked your jaw up off the floor, let me just start by saying that the width is not equivalent to an 8-wide ARM core; these are 8 ‘Transmeta’-style software instructions, not ARM instructions. In the end, T50 should be about the performance equivalent of a 4-wide ARM core, a sensible target, with a lot lower power use.
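To make the 'code morphing' idea concrete, here is a minimal hypothetical sketch in C. Every name in it is invented, and the toy 2:1 expansion ratio is only implied by the article's own 8-wide/4-wide numbers; the real translator is on-chip firmware targeting an undisclosed native ISA, so this shows the shape of the technique, not Nvidia's implementation:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical native micro-op; the actual internal ISA is undisclosed. */
typedef struct { uint8_t op, dst, src1, src2; } native_op;

/* An 8-slot bundle, per the "8 pipes wide" claim: eight native slots
   per cycle, which is not the same as eight ARM instructions per cycle. */
typedef struct { native_op slot[8]; } bundle;

/* Toy expansion: pretend every guest (x86/ARM) instruction becomes two
   native ops (address generation + execute). Real decoding is far richer. */
static size_t translate_one(uint32_t guest_insn, native_op *out)
{
    out[0] = (native_op){ .op = 1, .src1 = (uint8_t)(guest_insn & 0xFF) };
    out[1] = (native_op){ .op = 2 };
    return 2;
}

/* Translate a guest basic block once so the result can be cached and
   reused; that amortization is what made Transmeta-style morphing viable. */
size_t morph_block(const uint32_t *guest, size_t n, native_op *cache)
{
    size_t emitted = 0;
    for (size_t i = 0; i < n; i++)
        emitted += translate_one(guest[i], cache + emitted);
    return emitted;
}
```

If each guest instruction expands to roughly two native ops, an 8-wide native machine sustains about four guest instructions per cycle, which is where the '4-wide ARM equivalent' estimate comes from.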

T50 looks to be a remarkable piece of work; it could end up as one of the most ambitious and innovative CPU cores currently in the pipeline. If it comes out. Our skepticism is not based on the technical parts of the core or project, technical problems can be solved by diligent engineering work, and we have no doubt that T50/Denver’s problems are solvable. Our concern is that the engineers may not be allowed to fix the problems.

Looking at Nvidia’s track record over the past few years, it is a casebook of failure after failure. When questioned, Nvidia management points the finger everywhere but where it should be pointed, at themselves. The company has released almost no chips in the past two or three years that were on time, on spec, or both, once again testing the sainthood of their engineers. Ironically, this is not an engineering problem, knowing many Nvidia engineers personally, it is pretty obvious that they are dedicated, hardworking, and quite competent. Nvidia’s problem is what the engineers are told to do.

Engineers are a very logical bunch, quite good at doing what they are told in innovative ways, and solving problems logically. If those engineers are given impossible tasks, or their PoR changes every few months, it makes it very hard to meet unwavering deadlines. If the deadlines change, that reflects badly on management. If the engineers don’t meet deadlines, it isn’t management’s fault, right? Nvidia has a management problem, a big one, and until that changes, we hold little hope for T50 being successful in spite of the engineers’ best efforts. S|A
 

SickBeast

Lifer
Jul 21, 2000
14,377
19
81
I just can't see this product running modern games. It's interesting for phones and whatnot, but as a gamer I can't stand the controls on those types of setups.

We're a few years away from ARM ramping up the performance to the point that they can compete with AMD and Intel in terms of outright performance. It's a fun battle to watch. Intel and AMD are pretty much stuck at this point; they're getting maybe a 15% performance improvement every year. ARM seems to be able to ramp things up much quicker; their performance seems to be more than doubling every year.
 

aphelion02

Senior member
Dec 26, 2010
699
0
76
The problem for Nvidia is that given the timeframe they will be competing against Haswell. The GPU portion really has to be amazing otherwise Intel will mop the floor with them.
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
I just can't see this product running modern games. It's interesting for phones and whatnot, but as a gamer I can't stand the controls on those types of setups.

It will be too big for phones/tablets.

Remember, Project Denver is meant for HPC.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
I just can't see this product running modern games. It's interesting for phones and whatnot, but as a gamer I can't stand the controls on those types of setups.

We're a few years away from ARM ramping up the performance to the point that they can compete with AMD and Intel in terms of outright performance. It's a fun battle to watch. Intel and AMD are pretty much stuck at this point; they're getting maybe a 15% performance improvement every year. ARM seems to be able to ramp things up much quicker; their performance seems to be more than doubling every year.
I would much more expect it to bridge the CPU/GPU gap. CPUs inside GPGPUs, that don't have to wait on PCI-e to move data? Aw, yeah (awkward tackiness intended). It wouldn't surprise me if compatible lesser versions become their future mobile chips, too.

Intel is beefing up their vector instructions and trusting in HT, for now, with possible weak x86 cores in the future. They are going for straight-up ISA extensions. Personally, I hope that, like with the P4 and IA64, the industry doesn't follow them like lap dogs, because they are complicating x86 even more in the process (Jesus Hume Christ, Intel, make a good vector ISA extension, with a full set of useful instructions, that can be expanded in processing width without having to deal with another version of an ISA extension!).
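To illustrate the complaint with real intrinsics (my example, not anything from the post): the same loop must be rewritten for each vector width, because the width is baked into the x86 type and intrinsic names. Remainder handling is omitted for brevity:

```c
#include <immintrin.h>

/* 128-bit SSE version: 4 floats per step. */
void add_sse(float *d, const float *a, const float *b, int n)
{
    for (int i = 0; i + 4 <= n; i += 4)
        _mm_storeu_ps(d + i,
                      _mm_add_ps(_mm_loadu_ps(a + i), _mm_loadu_ps(b + i)));
}

/* 256-bit AVX version: the same operation, but new types, new intrinsic
   names, and a separate code path to build and maintain. */
void add_avx(float *d, const float *a, const float *b, int n)
{
    for (int i = 0; i + 8 <= n; i += 8)
        _mm256_storeu_ps(d + i,
                         _mm256_add_ps(_mm256_loadu_ps(a + i),
                                       _mm256_loadu_ps(b + i)));
}
```

What Cerb is wishing for is an extension where the vector width is a hardware parameter the code can query, so one binary scales across implementations instead of needing a new path per generation.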

AMD is hoping for the GPU to do highly-threaded vectorizable work, while the CPU does pointer-chasing (but they'll support whatever Intel comes out with on CPUs, too). Rock, meet hard place.

NVidia has no CPU currently, and a Fusion-like processor using ARM's cores would be a PITA, if even possible (note: I mean the endgame, where the CPU and GPU are no more distinct than int and float cores). A nice GPU that shares coherent memory with an SM or ten would give them the ability to use something other than SIMD-alike cores, where useful, for highly parallel computation; yet also use the same hardware to do a decent job of running traditional CPU tasks. I don't know that they are doing a Transmeta-like CPU, but if they are, good for them. IMO, the best thing they can do, since they don't want to give up their HW specs, is to implement everything in firmware, and expose how to use the firmware, so that they can ditch binary drivers and everything.
 

SickBeast

Lifer
Jul 21, 2000
14,377
19
81
It will be too big for phones/tablets.

Remember, Project Denver is meant for HPC.

I just don't see it being a mainstream product then. Perhaps it will have use in servers and specialized applications. I can't see people losing their x86 compatibility for the sake of better performance-per-watt and potentially lower price.

Now, if Apple uses it, then beware of the green behemoth dragon. D:

:sneaky:
 

cbn

Lifer
Mar 27, 2009
12,968
221
106
More information from SemiAccurate on the rest of the Denver SoC:

http://semiaccurate.com/2011/08/05/project-denver-is-more-than-a-t50-core/

Project Denver is more than a T50 core
Part 3: The rest of the SoC that is Denver... for now

Aug 5, 2011

by Charlie Demerjian

In part two of this series, we looked at the core of Project Denver, aka T50. In part three, it is time to take a look at the bits surrounding that core, basically what differentiates Denver from a T50 CPU.

To switch gears from T50 to Denver, we scale out from the core to the SoC. Last January at CES, Nvidia was talking about Project Denver to anyone that would listen. Superlatives were tossed about with gay abandon, but details were non-existent. Press that should have known better touted it as the best thing since the last best thing Nvidia announced, and were all quite sure that it would take over the world, even if they couldn’t tell you what it was.

Officially, Denver is a CPU + GPU, SoC, or other buzzword that is aimed at the HPC space. Take a Tesla-type, HPC-oriented GPU, slap a few CPUs on it, and you have a surefire winner in the HPC world. Who needs an x86 CPU when you have an ARM core and lots of GPUs, right? Take that, Intel! Putting aside questions about the sustainability of the GPU-based HPC market, it does sound like a cunning plan.

People who talked to us about Denver at CES may have noticed that we giggled every time it was mentioned. The reason for this is that we were well aware of the current state of the project. SemiAccurate moles were telling us that at the time, Project Denver basically didn’t work. Luckily, that didn’t stop the pomp and circumstance.

As late as the last Analyst Day, Nvidia wasn’t being shy about telling any analyst who couldn’t run away fast enough that Denver was set to come out in late 2012, maybe 2013 if things don’t work out well. The problem was that there was absolutely zero chance of that happening, and Nvidia knew it.

Denver is a SoC based on T50 cores and Maxwell GPU cores. When the project started out in 2006, it was an x86 core with Fermi GPU shaders, and that slipped to x86 plus Kepler shaders later on. We told you about the PoR change from hell, aka the great x86-to-ARM-64 migration, in part 2 of this story, and since then there has been another PoR change to twist the knife. The GPU side of the house was moved from Kepler to Maxwell, a part not due until 2013. If Nvidia’s Analyst Day spiel is to be believed, Denver, at the moment, will come out a year before its GPU cores are done. The skeptic in us tells us that this may be a bit of a stretch.

Late 2012 is not a possibility, nor is Q1/2013, but since there have been no official specs on code name Denver released externally, there is the possibility that the name may be re-purposed to suit paper schedules.

Another area of concern for Denver is the interconnect. Imagine a SoC with many CPU cores, many GPU core clusters, and many memory controllers. Now slap a crossbar in the middle, a la Fermi, and you have what people working on the project call ‘a mess’. A mess that doesn’t work. Really.

One of the major problems with Fermi was the interconnect, and it was never really fixed. The large chips, GF100/110, had 8 clusters of shaders. The interconnect failed. The ‘fixed’ parts, GF104/114 and smaller, had two clusters or fewer, meaning the crossbar was so simple that it was almost non-existent. Luckily, it wasn’t Nvidia’s fault; it was TSMC that caused all the problems.
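A rough first-order way to see why eight clusters hurt so much more than two (my gloss, not SemiAccurate's): a full crossbar's crosspoint count, and with it the global wiring and switching power, grows with the square of the port count, treating sources and destinations as both scaling with cluster count:

```latex
\text{crosspoints} \propto N_{\text{src}} \times N_{\text{dst}} = N^{2},
\qquad \frac{8^{2}}{2^{2}} = 16
```

So the GF100/110 crossbar had on the order of sixteen times the crosspoints of the GF104/114 one, which is why the smaller parts' interconnect could be 'almost non-existent'.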

Runaway power use was the result of the interconnect scheme, and was never fixed, only lessened a bit via external circuitry. Fingers firmly pointed externally, there is no management problem in Santa Clara, just ask anyone in said management. Meanwhile, companies that let their engineers do their job have moved on to saner interconnect schemes. Those seem to work. That said, Nvidia’s whole philosophy doesn’t bode well for a low power SoC.

There are many Denver variants planned, from phones to supercomputers. What falls under the Denver moniker is more a PR problem than an engineering one; see the mercurial ‘Ion’ for a good idea of how things will play out here. What Denver was in January is not in question: at the time, engineers were making a SoC with T50 cores attached to Maxwell shaders via a massive crossbar. What Denver will be after the next PoR change is anyone’s guess. What Nvidia calls Denver when something is released may be totally decoupled from any engineering project that previously had the code name, or may not be.

In the end, Project Denver is a really cool core. There hasn’t been anything that far out in left field since Transmeta; think of it as a moonshot, or maybe moonunit V .5. Nvidia picked up the right team to do the job, and do it well, there is no question about their technical capabilities. Unfortunately, it is very easy to put smart people under poor management, and that is why we fear that Project Denver, as it stands now, may never see the light of day. Good luck guys, I admire your perseverance. S|A
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
I just don't see it being a mainstream product then. Perhaps it will have use in servers and specialized applications. I can't see people losing their ARM compatibility for the sake of better performance-per-watt and potentially lower price.
FTFY :)

ION has already passed. x86 should be dead inside of NVidia. Long live profiling JIT and AOT compilers, HALs, and ISAs they can license without going to court, like ARM.
 

SickBeast

Lifer
Jul 21, 2000
14,377
19
81
There was no need to fix anything.

Don't hold your breath for ARM. It's going to be at least 2-3 years before they can even attempt to do something interesting on the PC.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
There was no need to fix anything.
Without changing that to ARM, it makes absolutely no sense. You were speaking as though losing compatibility was some kind of future problem, when, as it stands, they've already lost x86 compatibility (ION 2 was their last x86 product), and are actually doing just fine without it.

Their future is not tied to x86, and they have made several moves to cement themselves and their hardware as having value independent of other hardware they are connected to. I personally did not expect them to be so successful, so quickly, at turning a battle with Intel's lawyers into a thriving mobile business to get some high-volume parts out to support them, as their x86 chipsets once did.

Don't hold your breath for ARM.
Who's holding breath? Tegra kind of sucked, but Tegra 2 does not. Tegra 3 appears to have enough solid backing by other players that I doubt it's going to suck, either. ARM is here, it works, and it sells.

It's going to be at least 2-3 years before they can even attempt to do something interesting on the PC.
The PC is a hard market to get into, and ARM doesn't care too much about it. They will need to emulate some aspects of it, as their lack of a cohesive hardware and software platform is now holding them back, but I bet they'll mostly leave the PC to x86, after the ARM netbook push failed so miserably.
 

SickBeast

Lifer
Jul 21, 2000
14,377
19
81
Cerb, with all due respect, the future is tied to x86 due to the massive software library available for it.

If ARM starts trying to emulate x86, their performance is going to tank even worse than it already is. We saw Transmeta try this already and it failed. Itanium failed as well. ARM seems like a more legitimate contender to take a stab at it, but at the end of the day my prediction FWIW is that they have a long uphill battle ahead of them save for the smartphone and tablet markets, at least for the next 2-3 years.
 

ocre

Golden Member
Dec 26, 2008
1,594
7
81
Cerb, with all due respect, the future is tied to x86 due to the massive software library available for it.

If ARM starts trying to emulate x86, their performance is going to tank even worse than it already is. We saw Transmeta try this already and it failed. Itanium failed as well. ARM seems like a more legitimate contender to take a stab at it, but at the end of the day my prediction FWIW is that they have a long uphill battle ahead of them save for the smartphone and tablet markets, at least for the next 2-3 years.

I think you're missing the point entirely. ARM doesn't care much about the PC space, as Cerb has stated. The x86 market is quite small when compared to what ARM occupies today. On top of that, x86 is barely growing at all. Compared to ARM, x86 is moving backwards! ARM is growing at a rate x86 never has. Year to year, ARM is multiplying its user base, which it does completely with no ties to x86. This huge user base is very attractive and will become even more so as it doubles each year. Software and programs are not hard to find; ARM has an abundance of them.

Why would ARM care too much about putting x86 out? It makes no difference; it's a shrinking market. x86 is much more worried about ARM than ARM is about x86. Intel knows that the future of consumer computing is much bigger in the markets ARM has pioneered. If the x86 market wants to grow, it has to go after ARM's markets; it's a much bigger playground. Times have changed, the ball is in motion. Most everyone is seeing things backwards. ARM doesn't need the x86 market; it's x86 which needs to invade ARM's.
 

SickBeast

Lifer
Jul 21, 2000
14,377
19
81
The computing experience that ARM is providing is rudimentary compared to that on an x86 PC.

It's great that ARM can make tons of money selling to emerging markets, but at the end of the day, x86 based PCs are still the most powerful mainstream computing devices out there, and that's not going to change any time soon.

Are people seriously spending more money on tablets and phones than they are on actual computers? I find it hard to believe. There are a lot of people in the world who need a PC to get work done.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Publicly, Nvidia’s stance was that there was no need for any license because the company was not making x86 hardware. Technically, this is true: T50 is a software/firmware based ‘code morphing’ CPU like Transmeta. The ISA that users see is a software layer, not hardware; the underlying ISA can be just about anything that Nvidia’s engineers feel works out best. T50 is not x86 under all the covers, nor is it ARM; it is something else totally that users will never be privy to.

In the Intel/NV settlement, NV agreed not to use x86 in any form, including a software layer. Intel, AMD, and NV all have the Transmeta license. NV agreed not to use it for some reason in the settlement with Intel.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
From the AT article:

NVIDIA also does not get an x86 license. x86 is among an umbrella group of what’s being called “Intel proprietary products” which NVIDIA is not getting access to. Intel’s flash memory holdings and other chipset holdings are also a part of this. Interestingly the agreement also classifies an “Intel Architecture Emulator” as being a proprietary product. At first glance this would seem to disallow NVIDIA from making an x86 emulator for any of their products, be it their GPU holdings or the newly announced Project Denver ARM CPU. Being officially prohibited from emulating x86 could be a huge deal for Denver down the road depending on where NVIDIA goes with it.
 

ocre

Golden Member
Dec 26, 2008
1,594
7
81
The computing experience that ARM is providing is rudimentary compared to that on an x86 PC.

It's great that ARM can make tons of money selling to emerging markets, but at the end of the day, x86 based PCs are still the most powerful mainstream computing devices out there, and that's not going to change any time soon.

Are people seriously spending more money on tablets and phones than they are on actual computers? I find it hard to believe. There are a lot of people in the world who need a PC to get work done.

This is the current state of things. x86 is more powerful. ARM devices are very nice and trendy. PCs are bulky and big. But they are powerful, they really are. But ARM devices are selling like hotcakes, outselling x86 CPUs many times over.

As far as spending more money? It depends on the person. But you must consider: these phones are very expensive and are refreshed on a yearly basis. The contracts may hide it, but they cost a pretty penny. ARM is in many, many devices, including cars, GPS units, tablets, phones, TVs, and even music players. A lot of these devices are refreshed in months. It wouldn't take much to spend more money on ARM devices than you do on x86. People buy computers that last 4 to 5 years. In that time they go through tons of devices. PCs aren't very expensive anymore. A nice phone will cost just as much and will be out of style before the year is up. Then consider that the PC is usually used by the household, while every member gets an ARM device, or many ARM devices, for themselves. It wouldn't take much at all for the average household to spend more on ARM devices than x86. It is most likely absolutely true in the USA.
 

podspi

Golden Member
Jan 11, 2011
1,982
102
106
Are people seriously spending more money on tablets and phones than they are on actual computers? I find it hard to believe. There are a lot of people in the world who need a PC to get work done.

You, and the average reader of this forum? Probably not. From watching the people I know, perhaps. My parents' max budget for a laptop they're looking to buy soonish is $350. But they have no problem spending $400 on a Transformer, two Kindle Fires, and two Droid X's.

I also know two people whose laptops are on their way out, one with an iBook (I think that's what they're called) with a G4, and the other a C2D @ ~1.8GHz. Both are considering the Transformer 2 or iPad 2 with no x86 computer in the mix.

My brother does most of his computing (except for school, he's still in Uni) on his iPod touch.


Anecdotal? Yes. But I can 'feel' the trend of computing changing. I can't claim to like it (I've always been a tinkerer), but it is what it is.

In a lot of ways it is a change for the better. Slower CPUs + dedicated silicon is much more energy efficient. But ick.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
Cerb, with all due respect, the future is tied to x86 due to the massive software library available for it.
The future of the shrinking PC market? Outside of Microsoft Windows, most software is written either hardware-independent (runs on anything that has a compiler/RE), or hardware-agnostic (made to be tailored to specific HW families as needed--OSes, some drivers, Java, many C and C++ libs, etc.).

Ken Thompson made this language called B.
Dennis Ritchie made this other language called C, based on B, largely for the purpose of not writing more assembly.
Unix got re-written in C.
Later, Unix got ported to other hardware.
Most OSes today are multiplatform, written in C, with bits of assembly only where absolutely required.
The overwhelming majority of user-land software is also written that way.
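A minimal sketch of that last pattern (my own example; the popcount choice is arbitrary): the portable C path compiles for any target, and target-specific assembly appears only behind a feature guard.

```c
#include <stdint.h>

/* Portable path: plain C, compiles for x86, ARM, MIPS, anything. */
uint32_t popcount_c(uint32_t x)
{
    uint32_t n = 0;
    while (x) { x &= x - 1; n++; }  /* clears the lowest set bit each pass */
    return n;
}

uint32_t popcount32(uint32_t x)
{
#if defined(__x86_64__) && defined(__POPCNT__)
    uint32_t r;
    __asm__("popcnt %1, %0" : "=r"(r) : "r"(x));  /* x86-only fast path */
    return r;
#else
    return popcount_c(x);  /* everyone else gets the C version */
#endif
}
```

The same source builds unchanged for ARM, which is why the 'massive x86 software library' argument carries less weight outside Windows.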

Within Microsoft Windows, x86 does matter, and will continue to. Windows moving to ARM won't mean the ability to jump ship from x86 on your desktop or notebook. It will mean the death of Windows CE, which everyone should celebrate.

It's not that x86 is going away, but ARM is only going to grow, because Intel wants x86 everywhere, and they want high margins, and they want tight control over their ISA's ecosystem. ARM is OK with low margins and limited control. Idle power is a major ARM strength, but it's not their only one.

If ARM starts trying to emulate x86, their performance is going to tank even worse than it already is. We saw Transmeta try this already and it failed.
It didn't fail in performance, however. They had Atom performance and performance per Watt, with most NB functionality in the CPU, a year before the Atom (read: before the recession hit), on a larger process. Firmware updates for ISA extension support also worked very well. They were ahead of their time, and made some poor business decisions, but their technology was sound.

ARM seems like a more legitimate contender to take a stab at it, but at the end of the day my prediction FWIW is that they have a long uphill battle ahead of them save for the smartphone and tablet markets, at least for the next 2-3 years.
...and everywhere else.
PC peripherals? Check.
Cars? Check.
Car accessories? Check.
Audio/video appliances? Check.
Household appliances? Check.
Broadcasting gear? Check.
...and the list goes on. The entrenched PC market, and HPC, are about the only places ARM isn't.

The computing experience that ARM is providing is rudimentary compared to that on an x86 PC.

It's great that ARM can make tons of money selling to emerging markets, but at the end of the day, x86 based PCs are still the most powerful mainstream computing devices out there, and that's not going to change any time soon.
That was the big iron argument, too. It's not a matter that it isn't true, but fewer and fewer people need the most powerful thing they can get, these days.

To me, the computing experience on ARM is a good 90% limited by asshole hardware companies not wanting to make good binary drivers (gotta give NV credit where it's due), nor open hardware. ARM needs to work on making a solid platform, too (each SoC looks a bit too different to any supporting OS right now, and they have too many ISA options for their CPUs), but they are working on that.
 

zlejedi

Senior member
Mar 23, 2009
303
0
0
I wonder, would such a thing allow us to run both x86 and ARM code on the same machine using Windows 8 in the future?