
AMD says "no thanks" to smartphone business

I just don't get the hype around tablets. Yeah, they have a gee-whiz factor to them, but will they really be around for the long haul? I think that remains to be seen.

It already seems like "netbooks" are kind of a fad.

Some of them are morphing into "ultraportables" (AMD Fusion netbooks), though, and that's a good sign. Still, I like having a DVD drive in the device. (Or Blu-ray, once prices come down enough.)
 
It already seems like "netbooks" are kind of a fad.
While the economy isn't recovering in any real terms, it's stable enough that people aren't penny-pinching so much. Also, the initial netbooks had Intel and MS restrictions that prevented different classes of them from coming out, at least at any reasonable prices (i.e., the next bit of your post).
Some of them are morphing into "ultraportables" (AMD Fusion netbooks), though, and that's a good sign. Still, I like having a DVD drive in the device. (Or Blu-ray, once prices come down enough.)
 
IMO the future is cloudbooks, i.e. Google Chromebooks. I'm just waiting for Apple to bring one out and declare it the next big thing...
 
A wonderful article from realworldtech.com by David Kanter.

http://www.realworldtech.com/page.cfm?ArticleID=RWT050911220752&p=2

Why Apple Won’t ARM the MacBook
By: David Kanter | 05-09-2011
Technical Problems

Studying the history of Apple’s hardware choices and their approach to switching platforms helps to understand why an x86 to ARM migration is exceptionally unlikely. From a technical perspective, the performance and compatibility barriers are huge. Most of these technical problems are equally applicable if Apple were to design their own ARM microprocessor, or if they were to work with a partner.

The most obvious problem with such a switch is performance, pure and simple. While the MacBook Air has survived with mediocre performance, it is still using a fairly fast microprocessor. The MacBook Pro is intended for performance-hungry professional applications. The current MBP uses one of the fastest mobile microprocessors around, a quad-core Sandy Bridge that runs up to 2.3GHz and is specifically meant to crunch through software like iMovie, Premiere or Photoshop. The graphics card is similarly intended for performance, with an AMD Radeon HD 6750M for high-end models. Performance was one of the key motivators for the PowerPC to x86 switch. The reality is that for Apple, performance matters.

High performance is simply something that ARM cannot deliver in the next couple of years. Intel’s Sandy Bridge and AMD’s Bulldozer microarchitectures set the bar for client systems today. Both are 64-bit, four wide, out-of-order cores, with multiple load/store units, 256-bit AVX vectors, with slightly different styles of multi-threading. For high-end client systems, 4 of these cores share 8-16MB L3 caches and a 128-bit wide DDR3 memory interface. They are manufactured on leading edge 32nm process technology and can reach 3-4GHz.

ARM is a RISC instruction set and simpler than x86, but it is still fairly complex with several implementation challenges. There are a number of different instruction modes with slightly different lengths (e.g. classic ARM, Thumb, VFP, Neon) which make decoding non-trivial. All instructions are predicated and some set flags (negative, zero, carry, overflow), which complicates register renaming and out-of-order execution. ARM also has an implicit barrel shifter on almost every ALU operation, which requires very expensive hardware. In fact, several of these features were removed from Thumb (e.g. implicit barrel shifter, predication).
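To make those two features concrete, here is a minimal sketch (an illustration, not an example from the article): on classic 32-bit ARM, the body of the C function below typically compiles to a single predicated instruction with an inline barrel shift.

/* On classic 32-bit ARM, the conditional add below can become:
 *
 *     CMP   r0, #0
 *     ADDGT r1, r1, r0, LSL #2   @ executes only if x > 0 (predication),
 *                                @ with the x << 2 shift folded in "for free"
 *
 * A single instruction both consumes the condition flags and embeds a
 * shift, which is exactly what complicates register renaming and wide
 * out-of-order execution. */
int accumulate(int x, int acc)
{
    if (x > 0)
        acc += x << 2;   /* shift folds into the ALU operation on ARM */
    return acc;
}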

Designing high performance microprocessors is not easy, nor is it for the faint of heart. The leaders in this field - AMD, IBM and Intel - have design teams that have worked together for decades and learned much from their hard-earned experience. There are no ARM or Apple design teams with resources that are comparable to any of these three. Apple acquired PA Semi and Intrinsity and Nvidia acquired Stexar and many ex-Transmeta folks, but that does not put them in the same league in terms of resources or experience. It is quite easy to design a microprocessor core that looks good on paper, but in reality falls short – AMD’s Barcelona, Intel’s Pentium 4 and IBM’s POWER6 are just a few examples. Moreover, a modern processor relies on more than the core pipeline and is nearly a complete system-on-a-chip (SOC). The GPU, cache hierarchy, memory controllers and system interfaces are equally critical to performance. Designing these components and integrating them all together is incredibly difficult. Failed projects like the original Itanium or Larrabee demonstrate the challenges, even for an incumbent like Intel.

There are no ARM microarchitectures that are comparable to Intel’s Sandy Bridge or AMD’s Bulldozer. Currently shipping ARM designs are at roughly 1GHz – where x86 was in 2000. ARM has designs on their roadmap that get much closer to x86; the A15 is a 3-wide, out-of-order design that should run at up to 2.5GHz. Presumably, their next core will come closer still. However, ARM’s ecosystem has relatively little experience with high performance system architecture and dealing with more complex caches, graphics and memory controllers. They will certainly learn as they go along, but it is not an overnight transformation and more of a gradual process. In short, it is quite possible that ARM and partners will catch up with x86 over the coming decade; but not in the next 2-3 years.

The current generation of ARM microprocessors is more power- and area-efficient than x86, in part due to lower performance. There is no reason to believe that these efficiency advantages will scale to high performance designs. The other components of a complete high performance SOC (e.g. GPU, caches, memory controllers) are mostly unrelated to the instruction set. The performance, area and power efficiency for these parts of the SOC will be similar for ARM, MIPS, PowerPC or x86. This will substantially reduce, but not totally eliminate, ARM’s instruction set advantage over x86.

The last critical point about performance and efficiency is that migrating to ARM would not just require matching, but exceeding x86 in performance. Every Apple transition has gone smoothly thanks to excellent backwards compatibility through emulation or binary translation. Emulating x86 on ARM is eminently feasible, but there is a performance tax. An ARM microprocessor would need to run faster and more efficiently than current and future x86 designs to avoid losing performance and power efficiency for generic software. Moreover, some of the x86 extensions such as AVX, SSE4.2 and AES-NI are exceptionally difficult to emulate and would bring performance to a crawl.
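As a concrete illustration of that last point (a sketch under my own assumptions, not the article's example): with AES-NI, one x86 instruction performs an entire AES round, while an emulator has to expand it into all four round steps in ordinary code.

/* Why AES-NI is painful to emulate. Compile on x86 with: gcc -maes */
#include <wmmintrin.h>   /* AES-NI intrinsics */

/* Native x86: one instruction per AES encryption round. */
__m128i aes_round(__m128i state, __m128i round_key)
{
    return _mm_aesenc_si128(state, round_key);
}

/* An emulator on ARM must instead expand each round into its four steps
 * (SubBytes, ShiftRows, MixColumns, AddRoundKey): dozens of table lookups,
 * shifts and XORs per 16-byte block, easily an order of magnitude more
 * instructions than the native version. */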

Last, Intel and Apple recently announced Thunderbolt and Lightpeak for future generation electrical and optical I/O. Apple probably intends to eventually consolidate and replace multiple I/O interfaces (e.g. USB, Firewire, DisplayPort) with a single Thunderbolt port. Yet, Intel owns some of the core intellectual property for Thunderbolt and Lightpeak. Intel has little motivation to license the patents to ARM, Apple or other companies designing ARM cores for PCs. Moreover, it seems unlikely that Apple would have consented to such an arrangement if they were planning to abandon x86 in the near future.
 
IMO the future is cloudbooks, i.e. Google Chromebooks. I'm just waiting for Apple to bring one out and declare it the next big thing...


All this cloud stuff... I get that it's gonna be a paradime shift, that everything is gonna be run over the net, loaded and run on the server, and your little "laptop" just has to receive the data and wolla... It doesn't matter how much power your laptop has, because it's a server doing all the CPU/GPU-heavy tasks, and you're just downloading data over the internet.

*yawn*

Wake me up when:
1) we have servers powerful enough to actually do it (millions of users)
2) we have the bandwidth from ISPs for that stuff
3) ISPs have the capacity for it
4) the average Joe has a 1-gigabit internet connection

Until then... cloud stuff = fluffy dreams of the future.



Just think of a PC game.
Say World of Warcraft.

Now imagine you had 15,000,000 users playing it through a cloud system.

How much CPU power does it take to serve 15,000,000 users playing WoW?
How much GPU power does it take to serve 15,000,000 users playing WoW?

Can you imagine Blizzard having a supercomputer doing that?
How BIG would Blizzard's electricity bill be?

Big enough to offset their investment in running WoW through a cloud-based system?
How much is that supercomputer gonna cost them? And maintenance?
How many times over will Blizzard's data bandwidth needs multiply? A million times? More?
Who's gonna pay that bill? The users on the other end?

My theory?

CPUs will get more and more powerful, use less and less power, and STILL be in your devices (phones, tablets, laptops, desktops).
Same with hard drives: smaller, cheaper, lower power, and still in your devices as well.

The cloud becoming reality? 25 years from now? 50 years? It ain't happening overnight, that's for sure.
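For what it's worth, here is a back-of-the-envelope version of the WoW questions above. Every input is a round number assumed purely for illustration; nothing here is Blizzard data:

/* Rough cloud-WoW estimate; all inputs are illustrative assumptions. */
#include <stdio.h>

int main(void)
{
    double users       = 15e6;   /* concurrent players (assumed) */
    double watts_user  = 100.0;  /* server CPU+GPU per rendered session (assumed) */
    double mbit_user   = 5.0;    /* video stream bitrate per player (assumed) */
    double usd_per_kwh = 0.10;   /* electricity price (assumed) */

    printf("Power draw: %.0f MW\n", users * watts_user / 1e6);      /* ~1500 MW */
    printf("Egress:     %.0f Tbit/s\n", users * mbit_user / 1e6);   /* ~75 Tbit/s */
    printf("Power cost: $%.0f/hour\n",
           users * watts_user / 1000.0 * usd_per_kwh);              /* ~$150,000/hour */
    return 0;
}

Even assuming aggressive sharing of hardware between players, the orders of magnitude support the skepticism.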
 
Until then... cloud stuff = fluffy dreams of the future.

Cloud stuff = back to the future. Thin clients, terminal servers, etc.

It is inevitable though: ubiquitous computing has been displaced by ubiquitous wireless access (3G/4G/5G), and this makes the inconvenience of non-localized data and computing power much less of a nuisance for 80% of the TAM (total addressable market).

Personally I hate it, because it is going to reduce the TAM for powerful desktops (just as powerful x86 desktops reduced the TAM for even more powerful RISC-based big-iron workstations from DEC, MIPS, SGI and so on), which means our higher-end dedicated non-cloud computers of the future are going to be more expensive than they would otherwise have been, because the market volumes for those products will be all the more reduced.

It can easily be shown by examples, such as yours, that certain corner conditions of the marketspace will simply never be reasonably addressable by cloud-based computing resources. This was true of the arguments 15 years ago regarding why x86-based servers were never going to truly threaten the big-iron markets.

As was the case then, the examples were true, and those corner conditions are still unsupported by x86 (POWER7, Itanium and SPARC still exist), but that did nothing to prevent the gradual, persistent erosion of the big-iron TAM.

We can readily identify the easy 20% of the marketspace that won't be well served by a cloud-style computing model, but that alone does nothing to prevent the other 80% of the TAM from going towards a cloud-style computing model over the course of the next 10-15 years.

Look at Netflix. In 10 years I'd be surprised if anything less than 80% of their content is streamed. Blu-ray won't be dead, but the TAM will be much smaller than today, as was the TAM of LaserDisc after DVDs came along.
 
Look at Netflix. In 10 years I'd be surprised if anything less than 80% of their content is streamed. Blu-ray won't be dead, but the TAM will be much smaller than today, as was the TAM of LaserDisc after DVDs came along.
That's different though: how much CPU power does it take to stream someone a movie? Not much. All it really requires is bandwidth.

It's a different thing with PC games... I can't imagine how demanding it would be on servers to have, say, 15,000,000 WoW users playing and then having it "streamed" to them.

And games are only going to get more demanding and more complex with time, CPU/GPU-wise.

I don't see the console/PC being cloud-based for gamers... ever (unless it's like Farmville stuff that runs in a browser, but that's like 2D sprites only).

The future PC/console will still have a hard drive; you'll still buy "games" you install on it and play from there, with your own CPU/GPU handling things (my point of view = the cloud isn't suited for everything).
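The asymmetry here can be put in rough numbers (again, purely illustrative assumptions): a movie is encoded once and the same bytes are served to every viewer, so the per-viewer server work is mostly I/O, while a cloud game has to render and encode a unique stream per player.

/* Movie streaming vs. cloud gaming: where the server work goes.
   All figures are illustrative assumptions, not measurements. */
#include <stdio.h>

int main(void)
{
    double users          = 1e6;    /* concurrent viewers/players (assumed) */
    double movie_io_watts = 0.5;    /* per viewer: disk + network I/O (assumed) */
    double game_gpu_watts = 100.0;  /* per player: full render + encode (assumed) */

    printf("Movie service: ~%.1f MW\n", users * movie_io_watts / 1e6);  /* ~0.5 MW */
    printf("Game service:  ~%.1f MW\n", users * game_gpu_watts / 1e6);  /* ~100 MW */
    return 0;
}

The bandwidth per user is similar in both cases; the compute per user is not, which is the point being made above.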
 
All this cloud stuff... I get that it's gonna be a paradime shift, that everything is gonna be run over the net, loaded and run on the server, and your little "laptop" just has to receive the data and wolla... It doesn't matter how much power your laptop has, because it's a server doing all the CPU/GPU-heavy tasks, and you're just downloading data over the internet.

paradigm not paradime.
voila not wolla.

As for cloud being fluffy dreams... the notion that we can transfer everything to the cloud is silly fluffy dreams (I like that term, I am stealing it!). Many things are moving to the cloud, but not everything. OnLive and its ilk are trying to do so with gaming, and it's doing poorly. I should also mention that distributed computing like seti@home has trounced the performance of any supercomputer / dedicated server farm. Distributed computing is like the anti-cloud, its polar opposite, and more and more high-demand projects are moving to it.

So for the future:
Highest performance demand = distributed computing, the anti-cloud.
Normal performance demand (video games) = local computing, no cloud shift.
Low performance demand (email, Facebook) = cloud, already happened.

There isn't much that is suitable for cloud computing that hasn't already been moved there, and every year I see more things moving to the distributed-computing, anti-cloud scheme.

BOINC (Berkeley Open Infrastructure for Network Computing) is an open source infrastructure for distributed computing made by the creators of seti@home at UC Berkeley (who migrated seti@home to BOINC)...
It currently has 82 separate projects, a continuously growing number.
http://boincstats.com/page/project_ranking.php
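For anyone unfamiliar with the model, here is a minimal single-process sketch of the work-unit idea BOINC is built around. It is a generic illustration of distributed computing, not the BOINC API:

/* Generic distributed-computing sketch: a project server splits a big job
 * into independent "work units"; volunteer clients each crunch one and
 * return a result for the server to aggregate. Simulated in one process. */
#include <stdio.h>

#define UNITS 8              /* work units the "server" hands out */
#define N     8000000UL      /* total iterations across all units */

/* What one volunteer client does with its work unit [lo, hi). */
static double crunch(unsigned long lo, unsigned long hi)
{
    double sum = 0.0;
    for (unsigned long i = lo; i < hi; i++)
        sum += 1.0 / (double)(i + 1);   /* stand-in for real science code */
    return sum;
}

int main(void)
{
    unsigned long step = N / UNITS;
    double total = 0.0;
    for (int u = 0; u < UNITS; u++) {   /* each iteration = one client */
        double result = crunch(u * step, (u + 1UL) * step);
        printf("work unit %d returned %f\n", u, result);
        total += result;                /* the server aggregates results */
    }
    printf("aggregate result: %f\n", total);
    return 0;
}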
 
paradigm not paradime.
voila not wolla.

As for cloud being fluffy dreams... the notion that we can transfer everything to the cloud is silly fluffy dreams (I like that term, I am stealing it!). Many things are moving to the cloud, but not everything. OnLive and its ilk are trying to do so with gaming, and it's doing poorly. I should also mention that distributed computing like seti@home has trounced the performance of any supercomputer / dedicated server farm. Distributed computing is like the anti-cloud, its polar opposite, and more and more high-demand projects are moving to it.


It's amazing that there are people as bright as you around: great with grammar, excellent analysts, etc.

As for OnLive doing poorly, maybe my metric is a bit different, but having some of the largest computer companies in the world signed up as investors, surviving several years out of beta, and having >$50M in startup capital, well, that's not doing poorly. The ******* US government is outsourcing a significant share of its server needs to Amazon. The Hartford teamed up with IBM to outsource a huge portion of its desktop & server needs to a local/hybrid cloud. Clearly failing.


Profanity is not allowed in the technical forums.

Idontcare
Super Mod
 
OnLive attracted capital investment and corporate partners.
But ping is an issue, as well as image quality. Users are NOT getting the promised "max settings"; in fact, the image quality and settings are equivalent to those of a mid-to-low-end system. Most gamers can afford something much better locally, don't have to pay a subscription fee for the hardware, and their games don't poof out of existence if OnLive fails and closes its doors (unless they made the mistake of buying a game that requires an active internet connection to a DRM server even for offline play).

As for outsourcing server hosting... people have been outsourcing their servers for many, many years. And is outsourcing your server even really "cloud"?
But there are indeed things that ARE moving into the cloud. As I said before, low-performance applications find it ideal.

There is actually one other type that is appearing in the cloud: high-performance distributed applications too small and too monetized to draw volunteers for distributed computing, or to have the sprawling infrastructure to set up their own in-house distributed computing network (you can do that with BOINC). That would be companies that offer password cracking (of your legally owned files whose password you lost, of course) via a cluster of servers rented from Amazon. AFAIK only one such project exists, although more might appear, and some might actually succeed, stay, and grow. There has been talk about doing something like that for video editing without building a render farm. Likewise, there is a BOINC project for distributed video editing as well.
 
Wake me up when:
1) we have servers powerful enough to actually do it (millions of users)
2) we have the bandwidth from ISPs for that stuff
3) ISPs have the capacity for it
4) the average Joe has a 1-gigabit internet connection
1) Never heard of load balancing?
2) Yeah, that's true for the vast majority of developed countries.
3) Same.
4) 20 Mbit/s is pretty much standard today and 100 Mbit/s isn't much out of the ordinary. And 100 Mbit/s is more than enough for all the usual stuff the vast majority of people would use their PCs for.

Apart from gaming (low latency is always a problem and makes it expensive if you want to do it right), pretty much anything 98% of all users would do at home could easily be outsourced to the "cloud". People have been doing that for computationally intense stuff for decades; after all, basically every scientist has SSH access to one cluster or another... same concept. It just needs a whole lot of polish for the average user.
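To make the latency parenthetical concrete, here is a rough click-to-photon budget (all numbers are assumptions picked for illustration):

/* Hypothetical cloud-gaming latency budget; illustrative numbers only. */
#include <stdio.h>

int main(void)
{
    int input_to_server_ms = 20;  /* one-way network latency (assumed) */
    int server_render_ms   = 16;  /* one frame at 60 fps */
    int encode_ms          = 5;   /* video encode on the server (assumed) */
    int frame_to_client_ms = 20;  /* one-way network latency (assumed) */
    int decode_display_ms  = 10;  /* client decode + display (assumed) */

    int total = input_to_server_ms + server_render_ms + encode_ms
              + frame_to_client_ms + decode_display_ms;

    printf("Cloud click-to-photon: ~%d ms\n", total);   /* ~71 ms */
    printf("Local PC equivalent:   ~%d ms\n",
           server_render_ms + decode_display_ms);       /* ~26 ms */
    return 0;
}

Email or a document in the browser tolerates that extra 40-50 ms without anyone noticing; a shooter does not, which is why gaming is the stubborn exception.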

But funny, who'd have thought in the 90s that thin clients would be the newest trend 20 years later?
 
But funny, who'd have thought in the 90s that thin clients would be the newest trend 20 years later?
Everybody? Things always get smaller and faster and use less power as tech progresses.

Still... I'd rather have my programs run on my own PC and my files on my own PC, and the cloud isn't ever gonna work for gaming (so I'll still need my own PC).

I'm sure there are a lot of people who feel that way.

On the other hand, I'm sure there are a lot of people who just want a tablet and, say, Office run over a cloud system in a browser (where they pay a small monthly fee to be able to use Office).

Will everything eventually end up that way?

I kinda hope not. I like having a PC that's my own, and not just an interface that you hold while everything else is done on some server far, far away (that you don't own). Downtime? Out of your hands. Monthly fees to "use" your data/programs? Etc., etc.

There are gonna be a lot of trade-offs for each way of doing things.
It's "trendy", as you said, and will probably make a place for itself in the market space, but I doubt it'll ever take over everything.
 
The big thing about the cloud taking off is when Joe Bloggs ditches his non-cloud Windows machine and switches to a cloud-based system to do his web, office, video and gaming - when your average person doesn't have any old-fashioned Windows machines left.

That time is close. Even gaming has a cloud solution, although I don't see that really replacing consoles yet. That's really only hard-core gamers though - most people just play Flash games anyway, which work fine.

There are big advantages for your average person - the machine no longer needs maintenance in the same way - no sorting out updates for all your software, no backups, no virus checkers, no upgrading drivers/software/etc - it's all done for you. Doesn't even matter if the whole machine dies - you just log into another one and everything is still there and working.

If you are a geek it's a big step back; if you are a non-techy it's a big step forward.
 
The big thing about the cloud taking off is when Joe Bloggs ditches his non-cloud Windows machine and switches to a cloud-based system to do his web, office, video and gaming - when your average person doesn't have any old-fashioned Windows machines left.

That time is close.
You can already buy a Chromebook:
http://en.wikipedia.org/wiki/Chromebook
Or use a tablet or smartphone.
Or do you mean that the time is close until those become the majority?

If you are a geek it's a big step back; if you are a non-techy it's a big step forward.
I don't see why it's a big step back. I am a proud geek, and I see the two working side by side, complementing each other. Enjoying the best of both worlds.
 