OBLAMA2009
Diamond Member
why do people care about this? is there going to be a desktop/consumer chip too?
If they wanted to compete, I don't think they'd have much of a choice here other than opting to go with ultra low power.
Compete? Against who? VIA? Because they won't be able to compete with the other ARM players. Samsung is developing custom ARM cores, and so do Qualcomm, Nvidia, and Calxeda. How do you expect AMD to compete with vanilla ARM cores?
why do people care about this? is there going to be a desktop/consumer chip too?
http://techreport.com/news/23768/tsmc-16-nm-finfet-process-coming-next-year
The chasm that exists in x86 in nodes isn't as evident in the low power segment.
TSMC aims to have chip design kits for its 16-nm process available in January with the first foundation IP blocks such as standard cells and SRAM blocks ready a month later. It will start limited so-called risk production of the 16-nm process in November 2013. Production chip tape outs will follow about four or five quarters later.
Originally, TSMC planned to introduce finFETs at 14nm by late 2014. Now, the company has no plans to brand its finFETs at 14nm, but rather it will introduce the technology at 16nm. TSMC's finFET risk production is slated for the end of 2013 or early 2014, with production scheduled for the second half of 2015, Chang said.
Very, very few people do, even within the market they are trying to target.
I wasn't aware Samsung makes server parts. Or Qualcomm for that matter. And I'm also pretty sure nVidia only sticks its nose into servers with GPUs...
What was your point again?
Samsung has been hiring some former AMD server guys, Qualcomm's CEO explicitly listed servers as one of the main routes the company is looking at for its future, and Nvidia's Project Denver is all about servers.
Oops, sorry, of course you knew those facts. Maybe if I draw a picture you'll understand better.
How soon after the tape-out would volume production products be available at retailers?
With zero interconnect.
http://techreport.com/news/23768/tsmc-16-nm-finfet-process-coming-next-year
The chasm that exists in x86 in nodes isn't as evident in the low power segment. From tapeout-to-products rolling off the fab, TSMC took less than a year with 28nm. Assuming the same time frame post tapeout, we'd be looking at 2014 16nm FinFET which is when Intel would be on 14nm Broadwell.
The above is the most likely reason AMD is opting for ARM. TSMC, GloFo, and the Fishkill alliance won't be rushing any sort of high-power node to viability when they can make much more money in the mobile market. As a result, we'd see AMD stuck on 28nm while Intel would be skipping happily along to 14nm. If they wanted to compete, I don't think they'd have much of a choice here other than opting to go with ultra low power.
At least it fits Rory's mantra of outsourcing, this time the entire core development.
Isn't that what CUDA/OpenCL/GPGPU addresses? If you want gigathreads then you don't need a sea of ARM cores to get that.
It seems AMD learned absolutely nothing since Hector. They still pursue the server segment, a segment in which they have already been completely destroyed, with something around 4% market share in x86 systems today. And they pursue it solely for the status of it.
Can they still implement their own IP should it benefit the processor? Or are they just directly using the stock CPU design from ARM?
Right now only the ARMv8 ISA has been released, so it's not really possible to answer those questions until we know more about the first ARMv8 cores, which from what I hear are coming up this week when ARM introduces Atlas.
Question:
What advantage will a 64-bit v8 core have over a Haswell-EP or Broadwell-EP if they lose on perf per watt?
Why would anyone choose a stacked, dense environment with this?
(Other than really cheap mass webhosters.)
This is what I fail to see: for time/latency workloads that are cluster-balanceable, it's already there.
But that's like the enthusiast market in the server world, the 1%.
Will v8 cores clock at 3GHz? 4GHz? Or am I missing something?
I cannot but agree. MIPS beat them all to the punch by years in some ways, but not by making anything COTS. Even if there end up being enough markets (no one niche will be enough for more than any one company, I don't think), AMD would only have a significant advantage over Calxeda and Baserock, who only support rather slow interconnects. For fighting thermal density with parallel workloads, I'd be willing to bet that the forthcoming Atoms will smash the upcoming ARM SoCs, and that AMD would have been better off making a RAS-heavy Jaguar and trying to stuff it into every niche they could find (and the same chip with those features turned off could be an OK mobile and AIO CPU).
Yes, but it is far easier to develop a custom interconnect than to develop a custom ARM core. Even AMD could afford to buy an interconnect maker, and AMD's financial position is subpar.
AMD's SeaMicro will make a difference in highly complex or specific designs. For your everyday servers the chip itself will matter a lot more than the interconnect.
So they are switching to cheap, vanilla ARM servers, most likely inferior to their competitors', which means that whatever gains there are come from manufacturing the boxes themselves. I doubt that this business will be able to muster enough margin to sustain the company in the long run.
32-bit isn't viable for anything more fancy than a Sheevaplug. Yes, it works, but 64-bit makes the world, as the CPU, OS, RAM-heavy services, and file-heavy services see it, orders of magnitude simpler once you get into using GBs of RAM and 100s of MBs of data sets. Wider registers are a nice bonus for small copies and DB work, too. It's the difference between admins getting pissed off and disabling the OOM killer, then recompiling the kernel, then getting pissed off all over again when they still run out of memory with enough available...to occasionally having to tweak a VM tunable.
Samsung is working on ARM-based server parts for release in 2014. At 32 bits ARM is inadequate for a large server application, but with the v8 instruction set it becomes viable.
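The 32-bit vs 64-bit point above can be made concrete with a little address-space arithmetic. A rough sketch; the 3 GiB per-process figure and the 800 MB data set are illustrative assumptions, not measurements:

```python
# Illustrative arithmetic: why 32-bit gets cramped for RAM-heavy services.
GiB = 1024 ** 3

addressable_32 = 2 ** 32   # 4 GiB of total virtual address space
addressable_64 = 2 ** 48   # typical usable VA space on current 64-bit chips

print(addressable_32 // GiB)   # 4
print(addressable_64 // GiB)   # 262144

# On a common 32-bit Linux split the kernel reserves ~1 GiB, leaving
# roughly 3 GiB for a single process. mmap()ing a few large data sets
# plus heap and stacks runs out fast, which is what drives the
# OOM-killer fights described above.
usable_32 = 3 * GiB
dataset = 800 * 1024 ** 2      # one "100s of MBs" data set from the post
print(usable_32 // dataset)    # only 3 such mappings fit
```

With 64-bit addressing the same services can simply map everything and let the VM sort it out, which is the "orders of magnitude simpler" claim in practice.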
Pretty much. I could see OLAP going GPU, but (Apache/Nginx)+(MySQL/PostgreSQL)+(PHP/Python/Ruby), or anything like that, will benefit much more from big caches and fast memory access than from more processors. Not merely that, but they benefit greatly from caching in more abstract ways than the memory cache: sharing chunks of files through the OS, sharing their own data structures in memory (Python explicitly will need to use a 3rd-party object DB; PHP can use APC with a few config changes), and implicitly sharing most of their VM contents (depending on server config, anyway; upstream defaults are not good for web apps). In addition, MySQL and PostgreSQL both scale very well to many cores, provided there are also many transactions.
For now, I don't think anybody is running LAMP on GPUs. That may change, but my understanding is that for an individual thread, a GPU is very slow, to the point where, if you have any latency restrictions at all, a GPU thread is not likely to meet your performance requirements. I've heard that for a number of server workloads, the current generation of low-power processor cores (from all major vendors) just don't have quite enough performance to get a foot in the door. Websites care very much about how long a page takes to load, and if you can't e.g. serve the Anandtech forums in under 100ms (random made-up number), it doesn't matter how cheap/low-power/etc. your chip is.
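The 100ms page-budget argument works out as simple back-of-the-envelope math. Every number here is a made-up assumption, in the same spirit as the post's random 100ms figure:

```python
# Hypothetical latency budget for serving one forum page in ~100 ms.
budget_ms = 100.0
network_rtt_ms = 30.0   # assumed client <-> server round trip
db_queries = 8          # assumed queries per page, run sequentially

server_budget_ms = budget_ms - network_rtt_ms
per_query_ms = server_budget_ms / db_queries
print(per_query_ms)     # 8.75 ms allowed per query

# If a weak single thread doubles each query from 5 ms to 10 ms, the
# page blows its budget even though aggregate throughput across many
# small cores might still be higher.
slow_core_page_ms = network_rtt_ms + db_queries * 10.0
print(slow_core_page_ms)                 # 110.0
print(slow_core_page_ms > budget_ms)     # True: budget missed
```

This is why per-thread speed, not just cores-per-watt, decides whether a low-power part gets a foot in the door for latency-bound web serving.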
All you're missing is that it's new, and there's obviously some demand. But, with no useful parts out, nobody knows exactly how it will shape up. So, anybody who can wants to be ready over the next few years, in case it ends up being more than a few users totaling 1%. If they are good enough for enough users, and/or people find unexpected ways to exploit them, and it explodes, you don't want to be the guy who said it would be a waste of time to pursue, do you?
Samsung hasn't announced the development of custom ARM cores. Calxeda announced that they'll release a 64-bit ARM solution in 2014, but there's no word that they'll have developed the core themselves (much like AMD; I'm sure ARM will announce availability of their own 64-bit core).