Handel Jones: cost per transistor flat from 28 to 7nm


Roland00Address

Platinum Member
Dec 17, 2008
2,196
260
126
sm625 said:
It's crazy that a SoC that is not much smaller than a $350 Intel CPU only costs that much. But this sort of makes my point: who actually expects Apple SoC costs to rise every generation? Because that is what has to happen if cost per transistor stops falling, since we know each new SoC is going to have significantly more transistors. They always do.

There are fundamental reasons in history that made that situation possible. Those reasons arose both in the recent past (measured in years) and several decades ago (from the 1980s through the 2000s).

Those fundamentals have since changed, and the market is now beginning the slow process of recalibrating the price-per-performance of many things in technology, not just CPU prices.

-----

Take, for instance, the Apple Newton project: over the long term it is probably the most successful device in Apple's history (development started in 1987, the first product shipped in 1993, the last in 1998). Yet the individual Newton devices were complete and utter failures.

Why was it the most important and successful device in Apple's history?

Because without the Apple Newton we would never have had Advanced RISC Machines, a company formed as a joint venture by three other companies:

1) Apple, who wanted a product that largely did not exist at the time.

2) Acorn Computers (the British Apple that, unlike Apple, largely failed), who had a unique collection of CPU design technology, patents, and ideas. [Very brief Acorn Computers timeline here](http://www.telegraph.co.uk/finance/...43162/History-of-ARM-from-Acorn-to-Apple.html); Wikipedia has far more history.

3) VLSI Technology, who specialized in making integrated circuits, including custom ICs for specific companies.

These three companies formed the spin-off/collaboration in 1990, named Advanced RISC Machines Ltd. Apple's 43% share of Advanced RISC Machines Ltd cost it $2.5 million.

Eight years later, in 1998, this private spin-off did an IPO and became a public company, renaming itself from Advanced RISC Machines Ltd to ARM Holdings.

Most people now call the company by the same name as its main product: ARM.

-----

In other words, because the small, low-power chips needed for the first PDA did not exist at the time, Apple had to help create them, and that laid the foundation for the future cellphone and all this "mobile technology" where computers and sensors live in your pocket instead of on a desk.

Put another way, accidents of history laid the groundwork for a company that fundamentally changed the world about a decade after its IPO.

-----

Oh, one last bit: this all happened during the years when Apple and Jobs had nothing to do with each other.

In 1985, CEO John Sculley ousted Jobs.

Jobs re-entered Apple in December 1996 when Apple bought NeXT, and became Apple's CEO seven months later, in July 1997.

At that time, July 1997, Apple was in financial ruin. In an interview many years later, Steve Jobs claimed that when he became CEO Apple was only three months away from bankruptcy. Besides the money problems, Apple was losing the support of its developers, and many people wondered why you would invest your personal finances in hardware and software for an OS that might not be around much longer. (Remember, we were still in the very expensive $3k PC phase.)

In August 1997, only a month after Jobs became CEO, Microsoft invested $150 million in Apple, announced that some of its products, such as Microsoft Office, would come to the Mac, and entered a five-year partnership with Apple covering things like patents. It is often argued that this $150 million investment is what kept the lights on. Understand that Microsoft did not do this out of generosity: the two companies had spent several years fighting in court over things like patents, and in 1997 they decided to stop fighting and came to an arrangement.

During Jobs's new tenure as CEO, he began to sell the ARM Holdings shares that Apple still owned; from the late '90s to the early 2000s, Apple sold its shares for a total of $792 million (remember, Jobs came back as CEO in mid-1997, and the ARM IPO was in April 1998).

Selling those ARM Holdings shares was partly responsible for keeping the lights on at Apple, and Apple has made billions since, so do not shed a tear of hindsight over Apple selling its ARM stake during ARM's first few years as a public company.

-----

So this whole little saga about how Apple is the reason ARM exists today, how the Newton was not a lost cause because Apple made up the Newton's losses by selling ARM, and how that technology was later key to things like the iPod and iPhone: none of it was predictable in the late 1980s. We only see the connections now, with the hindsight of history, comparing today's facts on the ground with those of several decades ago.

All of the decisions back then made sense, just as the decisions made sense that allowed Intel to build such a lead in IPC, fab technology, market share, profit, and so on for several decades of the computing revolution. It is only recently that the fundamentals of the business have changed so much that you now have Apple's custom ARM chips competing with the Core M line on performance in small devices.

And while Apple does not sell its custom ARM chip as an individual part, competing ARM chips are now getting close enough in performance to compete with Core M. Products such as:

the Cortex-A72 and the soon-to-ship Cortex-A73,

Samsung's Mongoose, its own custom ARM implementation, and

Qualcomm's Kryo (seen in the Snapdragon 820), its own custom ARM implementation.

Now bring back your original quote again

Roland00Address said:
Thus, to sum it all up, the SoC/CPU cost for Apple is anywhere from $22 to $45, plus an additional $15 for the modem.

sm625 said:
It's crazy that a SoC that is not much smaller than a $350 Intel CPU only costs that much. But this sort of makes my point: who actually expects Apple SoC costs to rise every generation? Because that is what has to happen if cost per transistor stops falling, since we know each new SoC is going to have significantly more transistors. They always do.

Intel's ability to demand that you pay that amount of money for the pleasure of using its CPUs is due to accidents of history, which at the time were not really accidents; but if you entered today's world with no knowledge of the past, it would seem ludicrous.

It will probably look even more ludicrous in the next few years if Chromebooks take off even more, thanks to the introduction of Android apps on Chromebooks and the rise of computing done not on the device but on cloud servers and the like for the compute-intensive things.
 
Last edited:

superstition

Platinum Member
Feb 2, 2008
2,219
221
101
Roland00Address said:
There are fundamental reasons in history that made that situation possible. [...]
Apple, though, made a bunch of boneheaded decisions both before Jobs' ouster and after it.

One of them was not capitalizing on the Lisa. They had a system with protected memory, multitasking, an office suite, a mouse-driven GUI, and all sorts of polish and functionality (an RPN calculator with tape, a screensaver, a saved desktop, etc.). But instead of doing much with it, they dumbed it down dramatically (adding a few improvements) to replace it with the Mac, enabling Microsoft to feast on the market with later iterations of Windows. The Lisa itself was marred by some boneheaded design choices, like an unnecessarily low CPU clock and the overly ambitious Twiggy drives, but it was largely an outstanding system.

As you pointed out, though, market conditions often dictate decisions, and Japanese machinations in DRAM control and pricing made the Lisa uneconomical to produce: RAM prices rose drastically after Japanese firms first flooded the market and forced American players out, then raised prices. The RAM-starved, dumbed-down Mac OS must have seemed like a viable alternative, even though it was a drastic step backward and cost Apple big time as people became increasingly less tolerant of its primitive underpinnings. Apple had a superior GUI compared with pre-95 Windows (arguably better than any version of Windows), but a rickety and simplistic underlying OS, since the Mac wasn't designed around having a hard disk or an MMU as minimum requirements.

More boneheaded than the Lisa and Mac history, though, was clearly the Apple III. It's hard to imagine how anyone with a reasonable IQ thought an 8-bit computer should replace the Apple II line while being barely compatible with it.

One boneheaded thing from the post-Jobs period was the stagnation of desktop innovation, marked by the extreme lifespan of the Mac Plus and culminating in the Performa line. The Mac, which should have had 2-bit graphics from the start rather than 1-bit, and a less tiny screen, was barely updated beyond the Plus' specs, yet that machine was sold and supported for a very long time. That was nice for people who bought one, but not for Apple's competitiveness. Apple led the high end of the market with the Mac II line, but did so without much innovation, not even being willing to put in a graphics co-processor (something the Lisa should have had to speed up its very slow scrolling). The vacuous Performa line underscored Apple management's lack of creativity: every small parts difference merited a different model number, some machines were identical to machines with different names and were called Performa only because of the stores they were sold in, and some were PPC machines with leftover 68K parts bolted on, which ruined their performance and reliability.

Microsoft, by contrast, was highly effective at gaming everyone, as IBM learned with OS/2. (Also, one reason for giving Apple that bit of cash was that MS Office on the Mac was a cash cow for Microsoft at the time, and MS was trying to kill off Netscape.) Products like Microsoft Bob exposed Microsoft's lack of high-quality creativity, and clunky GUIs like Windows 98's browser windows exposed its crassness, but its business acumen was unmatched.
 
Last edited:

know of fence

Senior member
May 28, 2009
555
2
71
Thanks for catching me up on the various S. Jobs biographies out there. Interesting read.

If you multiply the numbers in the OP's chart by the 7 billion transistors of Nvidia's infamous offerings, it comes to almost exactly $100 in cost for the huge Nvidia GPU dies. And these dense low-power processes are likely quite a bit cheaper than their CPU counterparts.

The EE Times article does exclude yield-related cost variance, so that needs to be taken into account as well. Did Intel create a cheaper process with their 14nm? They struggled with yields and promised continued scaling down to 10nm. I assumed they had finally achieved parity, before realizing that I'm looking at prediction slides from Fall 2014.

[Image: Intel cost-scaling prediction slide from Fall 2014]
 

NTMBK

Lifer
Nov 14, 2011
10,438
5,787
136
know of fence said:
If you multiply the numbers in the OP's chart by the 7 billion transistors of Nvidia's infamous offerings, it comes to almost exactly $100 in cost for the huge Nvidia GPU dies. And these dense low-power processes are likely quite a bit cheaper than their CPU counterparts.

But transistor cost does not scale linearly... bigger dies = worse yields, so effective cost goes up.
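A quick sketch of why, using a simple Poisson yield model: yield falls off exponentially with die area, so cost per good die grows faster than the die itself. The wafer cost, wafer size, and defect density below are illustrative assumptions, not real foundry numbers:

```python
import math

def dies_per_wafer(die_area_cm2, wafer_diameter_cm=30.0):
    """Rough dies per wafer, with a standard edge-loss correction term."""
    r = wafer_diameter_cm / 2.0
    return (math.pi * r ** 2 / die_area_cm2
            - math.pi * wafer_diameter_cm / math.sqrt(2.0 * die_area_cm2))

def poisson_yield(die_area_cm2, defects_per_cm2=0.1):
    """Fraction of dies with zero defects under a simple Poisson model."""
    return math.exp(-defects_per_cm2 * die_area_cm2)

def cost_per_good_die(die_area_cm2, wafer_cost_usd=5000.0):
    good_dies = dies_per_wafer(die_area_cm2) * poisson_yield(die_area_cm2)
    return wafer_cost_usd / good_dies

for area in (1.0, 2.0, 4.0, 6.0):  # die area in cm^2
    print(f"{area:.0f} cm^2 die -> ${cost_per_good_die(area):7.2f} per good die")
```

In this toy model a 6x larger die costs roughly 12x as much per good die, which is the non-linearity NTMBK is pointing at.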
 

alcoholbob

Diamond Member
May 24, 2005
6,386
463
126
And that's why the price is going up. Profits typically increase when there's a node shrink, but since those profits aren't there, Nvidia is simply hiking the price to create the profit it would otherwise expect. So even if costs are flat, it explains why an 8-billion-transistor 28nm card was $650 while a 7.2-billion-transistor 16nm card is $700. The extra $50 is the extra profit Nvidia typically expects from a node shrink; if it doesn't materialize, they pass it on to the consumer. Either the fab or the consumer has to produce that profit for Nvidia.
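Taking those card numbers at face value (they are the poster's figures, not verified specs), the per-transistor price did rise, which is the point:

```python
# Poster's figures: (price in USD, transistor count)
cards = {
    "28nm card": (650, 8.0e9),
    "16nm card": (700, 7.2e9),
}
for name, (price, transistors) in cards.items():
    print(f"{name}: ${price / (transistors / 1e9):.2f} per billion transistors")
# 28nm: $81.25 per billion; 16nm: $97.22 per billion -- roughly 20% more
```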
 

KTE

Senior member
May 26, 2016
478
130
76
That's just his pitch for SOI... I've read this info before somewhere else, a couple of weeks back (a presentation? a conference?). ARM did presentations on this previously as well, and not so long ago Bich-Yen Nguyen and Christophe Maleville (Soitec) gave an in-depth view.

The biggest cost difference the OP's article discusses comes down to 'no EUV', if I can put it that simply. EUV (on masks) isn't working for high-volume production yet, and it's needed to make these nodes economical; that's why 16/14nm are going to be long-lived nodes. GF has been using EUV for non-transistor layers, though. The IMEC Technology Forum discussed EUV and the options going forward well: https://www.semiwiki.com/forum/cont...-forum-itf-secrets-semiconductor-scaling.html

I would also read this GF in-depth tour and presentation here:
Gary Patton of IBM Microelectronics said:
The IBM 14nm technology will stay in East Fishkill (Fab 10 now). 14LPE acquired from Samsung is run in Malta plus 14LPP is built off of the 14LPE base and will also run in Malta. 14LPP is the same design rules as 14LPE and offers a 10% to 14% performance improvement over 14LPE. When 28nm was brought up in Malta there was no base to work off of, now the 14nm ramp is leveraging the 28nm experience. Both 14LPE (E for early) and 14LPP (P for performance) are ramping in Malta and have “world class yields”. 10nm and 7nm development is all being done in Malta.

I asked Gary for his view of FDSOI versus FinFETs. He said he didn’t see it as “versus”. FDSOI body bias is a great capability but FinFETs are better for high-end smart phones and performance and FDSOI is better for low power. Throughout the interview Gary was very poised and confident. He was very interesting to talk to and I would have been happy to have had more time for the discussions.
But FD-SOI is definitely becoming very 'fashionable'. Tons of research and presentations on this can be seen here:

VLSI '14: http://www.advancedsubstratenews.co...reakthroughs-in-14nm-fd-soi-10nm-soi-finfets/
IEDM '15: http://www.advancedsubstratenews.co...at-iedm-14-part-1-of-2-in-asns-iedm-coverage/
https://www.semiwiki.com/forum/content/5283-iedm-2015-blogs-–-part-1-overview.html

Interesting for me would be FD-SOI combined with FinFETs, as IBM has given in-depth talks and presentations on SOI at 14nm and going forward.
 

sm625

Diamond Member
May 6, 2011
8,172
137
106
Roland00Address said:
There are fundamental reasons in history that made that situation possible. Those reasons arose both in the recent past (measured in years) and several decades ago (from the 1980s through the 2000s).

Those fundamentals have since changed, and the market is now beginning the slow process of recalibrating the price-per-performance of many things in technology, not just CPU prices.

Perhaps. But consider this extension of a well-known Moore's Law chart:

[Image: Moore's Law transistor-count chart extrapolated to 2022]


If the trend holds, we're looking at our first 100-billion-transistor sub-$500 CPU by 2023. Even if Moore's Law bends, it is still going to happen by 2025, 2026 at the latest. History has shown that Apple smartphone SoCs run only about 3 years behind this trend, so we're looking at 2026 to 2029 for the first 100-billion-transistor Apple iPhone SoC! There is no way Apple will keep paying the same price per transistor it pays today; such an SoC would cost them well over $1000.
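The extrapolation behind those dates is easy to reproduce. A sketch; the 2016 baseline of 8 billion transistors and the doubling periods are my assumptions, chosen to mimic the chart rather than taken from it:

```python
import math

def crossover_year(target=100e9, start_year=2016, start_count=8e9,
                   doubling_years=2.0):
    """Year when transistor counts cross `target` under steady doubling."""
    doublings_needed = math.log2(target / start_count)
    return start_year + doublings_needed * doubling_years

print(f"{crossover_year():.1f}")                    # ~2023.3 with 2-year doubling
print(f"{crossover_year(doubling_years=3.0):.1f}")  # ~2026.9 if the law "bends"
```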
 

pm

Elite Member Mobile Devices
Jan 25, 2000
7,419
22
81
The reason for the cost increases, more than anything else in my opinion, is that we are drawing ~8nm-sized features using 193nm lasers. I haven't seen anyone else saying this, and to me, focusing on how many atoms make up a transistor is looking in the wrong place.

The problem is not that we are running out of atoms; the problem is that we are trying to draw pictures on the head of a pin with a fat, dull pencil. So we did immersion, then we started double patterning (doing every step twice for the bottom mask layers). If something now takes two steps where it used to take one, you only hold cost parity even if you can double the number of transistors. (Yes, this is a gross simplification, but it illustrates the point.)
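That gross simplification can be written down directly. A toy model with invented numbers: if each multi-patterning generation doubles density but also doubles the critical litho passes, and wafer cost scales with passes, cost per transistor goes nowhere:

```python
# Toy model: assume wafer cost scales with the number of critical litho passes.
# All numbers are invented for illustration, not real process data.
generations = [
    # (label, relative transistor density, relative critical litho passes)
    ("single patterning", 1.0, 1.0),
    ("double patterning", 2.0, 2.0),
    ("quad patterning",   4.0, 4.0),
]
for label, density, passes in generations:
    relative_cost_per_transistor = passes / density
    print(f"{label}: relative cost/transistor = {relative_cost_per_transistor:.2f}")
# Every generation prints 1.00: the density gain is eaten by the extra steps.
```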

And every generation is likely to get worse until we can figure out how to get a shorter-wavelength lithographic method to work in a mass-production environment.

And quantum computing is not the answer. Quantum computing can help with some very specialized tasks, but you aren't going to get higher frame rates in Doom with a bigger quantum computer.

My thought on all of this has long been that there was always going to be an end to Moore's Law. In fact, at an 18-month cadence per doubling, you'd have more transistors on a silicon die than there are atoms in the die by 2053. So everyone always knew this couldn't go on forever, or even for very long, including Gordon Moore himself. But Moore's Law has always been fundamentally about manufacturing cost, not performance, as you can see in his paper (click here) under "Costs and Curves" on the second page. And the problem with costs at present is not that silicon is running out of atoms to play with, but that we are running out of ways to trick lithography into drawing ever-smaller lines with a massive-wavelength light source. Once we solve that problem, we can go back to worrying about running out of atoms between a transistor's source and drain. But regardless of either issue, light sources or atoms, Moore's Law will end in our lifetimes, because "doubling every 18 months", or even "doubling every two years", can't last another hundred years or we'd have more transistors on a silicon chip than there are atoms in the universe.
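For what it's worth, a figure in that ballpark can be reproduced with the right inputs. A sketch; the atom budget (~3e17 atoms, roughly a few hundred nanometres of active silicon over a 1 cm^2 die) and the 2016 baseline of 10 billion transistors are my assumptions, and the crossover year moves with them:

```python
import math

def exhaustion_year(atom_budget=3e17, start_year=2016,
                    start_transistors=1e10, doubling_years=1.5):
    """Year when steady doubling needs more transistors than available atoms."""
    doublings = math.log2(atom_budget / start_transistors)
    return start_year + doublings * doubling_years

# ~3e17 atoms: an assumed thin active device layer, not the whole bulk die.
print(f"{exhaustion_year():.0f}")  # ~2053 under these assumptions
```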
 
Last edited:

superstition

Platinum Member
Feb 2, 2008
2,219
221
101
know of fence said:
Thanks for catching me up on the various S. Jobs biographies out there.
Perhaps. I've not read any.

pm said:
My thought on all of this has long been that there was always going to be an end to Moore's Law.
It shouldn't have been called a Law. I've never heard of a Law that can just be temporary. I suppose you could state it in a way that its definition refers only to the period it applies... but then it sounds more like it should be called a Guideline or something.
 
Last edited:

pm

Elite Member Mobile Devices
Jan 25, 2000
7,419
22
81
superstition said:
It shouldn't have been called a Law. I've never heard of a Law that can just be temporary. I suppose you could state it in a way that its definition refers only to the period it applies... but then it sounds more like it should be called a Guideline or something.

To be fair, Dr. Moore was always uncomfortable with people who called it "Moore's Law". It was a media name, not something he ever said. He just observed a trendline in a graph in a paper and used it to make a prediction.
 

know of fence

Senior member
May 28, 2009
555
2
71
pm said:
My thought on all of this has long been that there was always going to be an end to Moore's Law. In fact, at an 18-month cadence per doubling, you'd have more transistors on a silicon die than there are atoms in the die by 2053. So everyone always knew this couldn't go on forever, or even for very long, including Gordon Moore himself. But Moore's Law has always been fundamentally about manufacturing cost, not performance, as you can see in his paper (click here) under "Costs and Curves" on the second page. And the problem with costs at present is not that silicon is running out of atoms to play with, but that we are running out of ways to trick lithography into drawing ever-smaller lines with a massive-wavelength light source. Once we solve that problem, we can go back to worrying about running out of atoms between a transistor's source and drain. But regardless of either issue, light sources or atoms, Moore's Law will end in our lifetimes, because "doubling every 18 months", or even "doubling every two years", can't last another hundred years or we'd have more transistors on a silicon chip than there are atoms in the universe.

I agree. Should cost remain flat, then Moore's Law has effectively ended, or at least stalled before its last EUV-powered hurrah! Also, there aren't that many atoms left: if a FinFET fin is 9nm thick and a Si atom is 0.22nm across, what's that, about 40 atoms wide? Reliability, durability, and longevity suffer.
In fact, having to replace your CPU every 4 years because it burned out could become the new way for electronics companies to stay in business.
LEDs retain just 70% of their brightness after 25,000 hours; similarly, OLED screens burn out in a few years. The large majority of people replace phones because they are bent, shattered, stolen, scuffed, drowned, or have a dead battery, rather than to upgrade the hardware.
The industry needs to go cheap, specialized, and recyclable instead of bigger, faster, and more, which is probably what IoT is really about.

[Image: Synopsys slide on FinFET fin width]
 
Last edited:

Roland00Address

Platinum Member
Dec 17, 2008
2,196
260
126
sm625 said:
If the trend holds, we're looking at our first 100-billion-transistor sub-$500 CPU by 2023. Even if Moore's Law bends, it is still going to happen by 2025, 2026 at the latest. History has shown that Apple smartphone SoCs run only about 3 years behind this trend, so we're looking at 2026 to 2029 for the first 100-billion-transistor Apple iPhone SoC! There is no way Apple will keep paying the same price per transistor it pays today; such an SoC would cost them well over $1000.

As a rule of thumb, I never predict what will be normal, what will be trending, and what will be cutting-edge in technology more than 5 years out, let alone the 10+ years that you did.

Take YouTube, for example: it was founded in February 2005 and acquired by Google in November 2006. Yet today streaming or digitally downloaded video is so ubiquitous that we can't mentally time-travel back to those years without inserting artificial memories and emotions into the situation. We start blending in memories of other dates, mixing in stuff from 2001 or from 2008 or 2009. We also do not have an unbiased view of the past; instead we intensify both the best memories and the worst.

If you had told me in 2005 that YouTube would become so popular, with people now videotaping their entire lives (or photographing them with Instagram), and with video-game playthroughs and commentary as the new thing for the super-young tweens, I would have said no, video will be big, but not like that.

So why do you assume it is going to be a phone in 2023? For all you know it could be something like Google Glass, or a watch, perhaps a watch that projects a small LED overlay screen onto your arm that you can type on. We do not know.

Let's get back to the nuts and bolts of transistors and stop talking about the final devices that may appear in the future.
 

sm625

Diamond Member
May 6, 2011
8,172
137
106
Roland00Address said:
If you had told me in 2005 that YouTube would become so popular, with people now videotaping their entire lives (or photographing them with Instagram), and with video-game playthroughs and commentary as the new thing for the super-young tweens, I would have said no, video will be big, but not like that.

So why do you assume it is going to be a phone in 2023? For all you know it could be something like Google Glass, or a watch, perhaps a watch that projects a small LED overlay screen onto your arm that you can type on. We do not know.

Well, I think I've been ahead of most people on this stuff. I've been a Netflix subscriber for over 15 years; it was shortly after I bought my PS2 that I got my first Netflix DVD in the mail. I knew YouTube would be big, and I had been a regular user of Google Video long before YouTube came around. By 2025 I do expect something other than smartphones to really take off, but I still believe smartphones will be around in 2025. I do not know exactly what devices we will be using, but I am 100% sure that we will all be using at least one device with 50+ billion transistors by 2025. Moore's Law might bend, but it's not as if progress will stop completely. In fact, I believe that if you reword Moore's Law as "transistors per person doubles every 18 months", we might expect it to continue completely unbroken for at least the next 10 years. For 2025, that might mean 40 billion transistors in a phone, and 40 billion more in other as-yet-unknown devices for some people and desktop PCs for others, all averaging out to around 100 billion per person. And the cost will be roughly the same as today, which is very roughly $500 a year on average. I am sure we will hit those numbers at right around today's cost, in today's dollars at least. The only way I see transistor costs stagnating is if the values of the major currencies all implode, which is a possibility.
 
Apr 30, 2015
131
10
81
ARM are saying that the A73 will be available in 28nm, 16/14 and 10nm, the latter being for higher performance applications, and presumably more expensive, at first.
Trumpf are building a factory for EUV lamps, due to be finished in 2017.
EUV may be retro-fitted to 10nm lines, presumably depending on cost.
There are other production techniques, including printing transistors; this is lower density but potentially very cheap, and it follows its own scale of Moore's law. Everything could have processing power, including a bottle of soda: 'drink me!'
Computing will be so ubiquitous that it fades into the background.