Discussion Apple Silicon SoC thread


Eug

Lifer
Mar 11, 2000
24,114
1,760
126
M1
5 nm
Unified memory architecture - LPDDR4X
16 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 12 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache
(Apple claims the 4 high-efficiency cores alone perform like a dual-core Intel MacBook Air)

8-core iGPU (but there is a 7-core variant, likely with one inactive core)
128 execution units
Up to 24,576 concurrent threads
2.6 Teraflops
82 Gigatexels/s
41 Gigapixels/s
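As a rough sanity check on those figures - a sketch assuming the commonly reported 8 FP32 ALUs per execution unit, with a fused multiply-add counted as two FLOPs:

```python
# Back-of-the-envelope check of the advertised M1 GPU numbers.
EUS = 128              # execution units, from the spec above
ALUS_PER_EU = 8        # assumption: 8 FP32 ALUs per EU, as commonly reported
FLOPS_PER_ALU = 2      # a fused multiply-add counts as two FLOPs
TFLOPS = 2.6           # advertised throughput

alus = EUS * ALUS_PER_EU                                 # 1024 FP32 lanes
clock_ghz = TFLOPS * 1e12 / (alus * FLOPS_PER_ALU) / 1e9
print(f"implied GPU clock: {clock_ghz:.2f} GHz")         # ~1.27 GHz
print(f"threads per EU:    {24576 // EUS}")              # 192 concurrent threads/EU
```

The numbers hang together: 1024 FP32 lanes at roughly 1.27 GHz gives the advertised 2.6 Teraflops.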

16-core neural engine
Secure Enclave
USB 4

Products:
$999 ($899 edu) 13" MacBook Air (fanless) - 18 hour video playback battery life
$699 Mac mini (with fan)
$1299 ($1199 edu) 13" MacBook Pro (with fan) - 20 hour video playback battery life

Memory options 8 GB and 16 GB. No 32 GB option (unless you go Intel).

It should be noted that the M1 chip in these three Macs is the same (aside from GPU core count). Basically, Apple is taking the same approach with these chips as it does with the iPhones and iPads: just one SKU (excluding the X variants), which is the same across all iDevices (aside from the occasional slight clock speed difference).
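If you want to verify the core split on your own machine, the counts are exposed through sysctl on Apple Silicon Macs. A minimal sketch, assuming macOS 12+ where the hw.perflevel* keys exist:

```python
# Minimal sketch: read the P-/E-core split on an Apple Silicon Mac.
# Assumes macOS 12+, where perflevel0 = performance cluster and
# perflevel1 = efficiency cluster.
import subprocess

def sysctl(key: str) -> str:
    return subprocess.run(["sysctl", "-n", key],
                          capture_output=True, text=True,
                          check=True).stdout.strip()

print("P-cores:", sysctl("hw.perflevel0.logicalcpu"))   # 4 on M1
print("E-cores:", sysctl("hw.perflevel1.logicalcpu"))   # 4 on M1
print("chip:   ", sysctl("machdep.cpu.brand_string"))   # e.g. "Apple M1"
```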

EDIT:


M1 Pro 8-core CPU (6+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 14-core GPU
M1 Pro 10-core CPU (8+2), 16-core GPU
M1 Max 10-core CPU (8+2), 24-core GPU
M1 Max 10-core CPU (8+2), 32-core GPU

M1 Pro and M1 Max discussion here:


M1 Ultra discussion here:


M2 discussion here:


Second Generation 5 nm
Unified memory architecture - LPDDR5, up to 24 GB and 100 GB/s
20 billion transistors

8-core CPU

4 high-performance cores
192 KB instruction cache
128 KB data cache
Shared 16 MB L2 cache

4 high-efficiency cores
128 KB instruction cache
64 KB data cache
Shared 4 MB L2 cache

10-core iGPU (but there is an 8-core variant)
3.6 Teraflops

16-core neural engine
Secure Enclave
USB 4

Hardware acceleration for 8K H.264, HEVC, ProRes, and ProRes RAW
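The same sanity check as for the M1 applies to the 3.6 Teraflops figure, assuming the M1-style layout (16 EUs per core, 8 FP32 ALUs per EU) scaled to 10 cores:

```python
# Same back-of-the-envelope check for the M2 GPU figure.
alus = 10 * 16 * 8      # assumption: 10 cores x 16 EUs x 8 ALUs = 1280 lanes
clock_ghz = 3.6e12 / (alus * 2) / 1e9   # FMA = 2 FLOPs
print(f"implied M2 GPU clock: {clock_ghz:.2f} GHz")     # ~1.41 GHz
```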

M3 Family discussion here:


M4 Family discussion here:

 
Last edited:

name99

Senior member
Sep 11, 2010
652
545
136
Cool, anyone can be anything on the internet.

But I know one thing: You've never led a company like OpenAI or Anthropic.


Which they do. The more users they have, the more the cost spreads per training run.

Again, they're highly profitable on inference. Read this again: they're highly profitable on inference. One more time. They're highly profitable on inference.

The reason they're losing money is that they believe there are ample opportunities in ever-better models. Should they reach a point where the training-run cost is no longer justified, they will scale back on training, but still make a ton of money on inference.

Your statement that OpenAI/Anthropic don't have a business model is honestly one of the dumber things I've read.
He's not saying they don't have a plan, he's saying they don't have a "proven" business model...

Of course they have many ideas for how to monetize all this (as do all the various people who have stakes in what they're doing). But nothing that's well-understood enough to draw confident graphs from and be sure, from experience, that it will work at level X as opposed to level X+1 or level X-1.

That's precisely why no-one is exactly confident in who the winners and losers will be. Does Google continue to win as the ad-based default 1st choice for answering questions? Does OpenAI win because enough people establish emotional relationships (aka "stored long term history") with their, and only their, models? Do MS (and Oracle) win because ultimately the most value that can be provided for people willing to pay is to enterprise? Does Apple win because, long term, it turns out that VLM models (sensing the world via Apple glasses or even just AirPods and Apple Watch) are even closer to what people want than pure language models?

You have no idea! Neither does anyone else!
Which is what he is saying.
 

Doug S

Diamond Member
Feb 8, 2020
3,567
6,303
136
Again, they're highly profitable on inference. Read this again: they're highly profitable on inference. One more time. They're highly profitable on inference.

I know this isn't what you were talking about as far as OpenAI, but something I found quite interesting were the discussions WRT Oracle's deal with OpenAI to invest $300 billion in Oracle cloud starting in 2027. OpenAI has yearly REVENUE of $10 billion. It doesn't matter how profitable that revenue is; even if it is pure profit, that's far, far short of what's required just to meet that one spending commitment.

Doing so requires either revenue growth of greater than an order of magnitude within the next couple of years and/or massive external investment (i.e. substantial dilution for the current shareholders). Hence why Oracle declined more than 10% from the massive boost that announcement initially gave it - smart investors are wondering how much of that $300 billion Oracle will ever see.
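To put numbers on "greater than an order of magnitude", here's the rough arithmetic; the contract term and the margin are illustrative assumptions, not disclosed figures:

```python
# Rough arithmetic on the Oracle commitment vs. OpenAI's revenue.
commitment = 300e9      # reported Oracle cloud commitment, USD
years = 5               # assumption: spread over ~5 years from 2027
revenue_now = 10e9      # reported annual revenue, USD
margin = 0.30           # assumption: share of revenue available for compute

annual_spend = commitment / years                 # $60B per year
required_revenue = annual_spend / margin          # revenue needed to cover it
print(f"annual commitment: ${annual_spend/1e9:.0f}B")
print(f"revenue needed at {margin:.0%} margin: ${required_revenue/1e9:.0f}B "
      f"({required_revenue/revenue_now:.0f}x today)")   # $200B, 20x
```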

One of the things people like about AI, even knowing that it often hallucinates or is just plain wrong, is that it isn't "gamed" like search is by SEOs and ensh-ttified by advertising. In order for OpenAI and others to justify their spending they need to increase that revenue, and as with everything else on the web, that's mostly going to be done via advertising. Sure, they will offer subscriptions, but most of us won't pay for them, and subscriptions are no guarantee you won't see ads - avoiding those will cost extra over and above the subscription, just like with Amazon, Disney, Netflix and everyone else in the streaming world.

They can't move too fast on this, though, or people will abandon AI before it becomes ingrained in the way they use the web. If it gets to the point where AI search adds no value over standard search - because its output is driven more by what advertisers are paying for than by what users are looking for, or you have to pay a lot to avoid that - the mass market will reject it.
 
  • Like
Reactions: Mopetar

dr1337

Senior member
May 25, 2020
523
806
136
Apple being forced to burn billions just to pacify Wall Street, when it's not going to lead to more sales, is dumb. But that's the reality.
Apple has been putting neural accelerators into their chips since 2017; I don't think they're being forced by Wall Street to do anything. If anything, they're one of the biggest leaders in the charge toward an AI future, by making that hardware actually accessible to millions of devices/developers/customers.
You have no idea! Neither does anyone else!
Which is what he is saying.
Nah, they also said GPT's latest iterations are "massive disappointments", meanwhile OpenAI isn't losing customers to Google or Facebook. Throwing objectivity out of the window is indeed how you become ignorant.

Johnson's entire point is that he believes distributed AI isn't market-viable because it's too vague and because companies are currently spending more than they're making on R&D. All the while ignoring the fact that OpenAI made $10 billion last year.

Seriously, go back and re-read his post. Then ask yourself: if OpenAI has no ROI, why do Microsoft, Apple, and others keep investing in it? Are these other billion-dollar companies really falling for a scam, or are luddites against AI trying to make it something that it isn't?


Here's the thing: Sam Altman was convinced he had AGI in the labs almost 3 years ago now. Yes, the AI companies do overhype themselves all the time in marketing. But saying there are macro-economic effects being overlooked because of this is baseless. There really is a large customer base for software like AI, and so long as models are too big to run on consumer hardware, companies that host AI will always have a guaranteed revenue stream.

It's literally just SaaS. Even if all models truly hit a wall and cannot become any smarter, there is still money in having the best service for people to use...
 

johnsonwax

Senior member
Jun 27, 2024
373
564
96
Seriously, go back and re-read his post. Then ask yourself, if OpenAI has no ROI, why do microsoft, apple, and others keep investing in it? Are these other billion dollar companies really falling for a scam, or are luddites against AI trying to make it something that it isn't?
Theranos had the following individuals on its board during the run-up to a $9B valuation, overseeing a company that was making claims every microbiologist with an advanced degree would tell you were impossible - and many did say that.

George Shultz
James Mattis
Richard Kovacevich (CEO of Wells Fargo)
David Boies
Riley Bechtel (Bechtel Group)
William Foege: Former director of the CDC.
Fabrizio Bonanni: Former executive VP at Amgen.
Sam Nunn
Bill Frist: Former U.S. Senator and a heart-transplant surgeon.

Did all of those VC funds, former cabinet members and high-ranking executives really fall for a scam? Yeah, they did. Every damn one of them. It happens all the time, particularly in tech, because to most investors technology is indistinguishable from magic, and they forget that at the end of the day you still need to find a market, and you still need to pay for your upfront costs.

AI is a special case, because it's technology that is likely to change the economic system in which it operates. It will unquestionably change how we handle and value IP. If it delivers even marginally on its worker replacement claims (see Anthropic CEO claiming they will be able to replace 90% of programmers - that's their business model, btw) then what larger impact will they have on the economy, and how will that apply backpressure to the valuation of the company? The bet isn't that OpenAI will be the next Apple; the bet is that OpenAI will be Weyland-Yutani, an entity that marks such a radical shift in how the economy works that it begs the question of what that investment would be allowed to be worth. I don't think it's some throwaway observation that the leaders of these companies speak not in terms of market disruption but of social disruption. In the end, that's what's being invested in - having some leverage in what the new economic/social order looks like. If you don't invest in it, and it works, you're f'd. If it doesn't work, who cares - it was idle money anyway, and your existing cashflow and business model still function. We have a term of art for that: insurance.
 

Doug S

Diamond Member
Feb 8, 2020
3,567
6,303
136
the bet is that OpenAI will be Weyland-Yutani


OK who has xenomorphs for 2035?

 

mikegg

Golden Member
Jan 30, 2010
1,975
577
136
Theranos had the following individuals on their board, who oversaw the company on the runup to a $9B valuation, overseeing a company that was making claims that every advanced degree microbiologist would tell you were impossible - and many did say that.

George Shultz
James Mattis
Richard Kovacevich (CEO of Wells Fargo)
David Boies
Riley Bechtel (Bechtel Group)
William Foege: Former director of the CDC.
Fabrizio Bonanni: Former executive VP at Amgen.
Sam Nunn
Bill Frist: Former U.S. Senator and a heart-transplant surgeon.

Did all of those VC funds, former cabinet members and high ranking executives really fall for a scam? Yeah, they did. Every damn one of them. Happens all the time, particularly in tech because to most investors technology is indistinguishable from magic, and they forget that at the end of the day, you still need to find a market, and you still need to pay for your upfront costs.
Dude, you just named one fraud case in Theranos but ignored the countless VC-funded companies you use on a daily basis. Google? Meta? Apple? Nvidia? AMD? Intel? The tens of thousands of companies that were VC-funded? Maybe most public companies have received VC funding?

Are you seriously doing this?

AI is a special case, because it's technology that is likely to change the economic system in which it operates. It will unquestionably change how we handle and value IP. If it delivers even marginally on its worker replacement claims (see Anthropic CEO claiming they will be able to replace 90% of programmers - that's their business model, btw)
Is this a joke? He said that by June or September, 90% of the code might be written by AI - not that 90% of programmers would be replaced. That's a massive difference.

And guess what? 90% of the code I produce is now generated by GPT5 or Claude 4.0.
 
Last edited:
  • Haha
Reactions: MuddySeal

mikegg

Golden Member
Jan 30, 2010
1,975
577
136
He's not saying they don't have a plan, he's saying they don't have a "proven" business model...

Of course they have many ideas for how to monetize all this (as do all the various people who have stakes in what they're doing). But nothing that's well-known enough to drawn confident graphs and be sure, from experience, will work at level X as opposed to level X+1 or level X-1.

That's precisely why no-one is exactly confident in who the winners and losers will be. Does Google continue to win as the ad-based default 1st choice for answering questions? Does OpenAI win because enough people establish emotional relationships (aka "stored long term history") with their, and only their, models? Do MS (and Oracle) win because ultimately the most value that can be provided for people willing to pay is to enterprise? Does Apple win because, long term, it turns out that VLM models (sensing the world via Apple glasses or even just AirPods and Apple Watch) are even closer to what people want than pure language models?

You have no idea! Neither does anyone else!
Which is what he is saying.
Let me break it down.

@johnsonwax's arguments:
  • OpenAI and Anthropic are losing massive amounts of money right now
  • Even if they're profitable on inference, they are still not profitable overall
  • Training is what makes them unprofitable, and the returns from training are "disappointing", so OpenAI and Anthropic don't have a business model
Let's see why his arguments are just plain wrong:
  • If all AI labs around the world stopped training new models right now - this instant, September 16, 2025 - OpenAI would be making 5-6x on the base $20/month ChatGPT subscription. Anthropic would be making 11-20x on Claude Code.
  • This would PROVE that even at the current intelligence level, people are still willing to pay for the value of OpenAI and Anthropic's products.
  • $0 on training. Become highly profitable on inference.
But wait. Because competitors won't all stop training, OpenAI, Anthropic, xAI, Google, Deepseek, etc. all have to continue to train. So therefore, no business model and @johnsonwax is correct, right?
  • Ask yourself why each AI lab will continue to train and no one wants a truce.
  • It's because each one of them is fighting to be the last ones standing.
  • At some point, training costs will be so great that most AI labs will bail out of the race, leaving 1-3 winners to dominate
  • Not all AI labs can keep training forever and have infinite resources to do so
  • Once this happens, investments in training runs will slow and the focus will be on producing profits via inference/products/contracts/etc.
So the notion that we don't have a business model here is clearly wrong. We clearly do. What we don't know is who is going to end up as the top dog(s) and when.
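To see what the "5-6x" claim means as arithmetic, here is a sketch with purely illustrative per-subscriber numbers - neither company discloses its serving costs:

```python
# Illustrative unit economics behind the "5-6x on inference" claim.
# Both numbers are assumptions for the sake of the arithmetic.
subscription = 20.00     # base ChatGPT plan, USD/month
serving_cost = 3.50      # assumed inference cost per subscriber per month

print(f"revenue multiple on inference: {subscription / serving_cost:.1f}x")  # ~5.7x
# Training is a huge *fixed* cost on top of this; the post's point is that
# if training stopped, this per-subscriber margin is what would remain.
```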
 
Last edited:
  • Haha
Reactions: MuddySeal

mikegg

Golden Member
Jan 30, 2010
1,975
577
136
That's just speedrunning getting fired.
Honestly this kind of koolaid drinking is admirable, I wouldn't ever be able to sniff my own farts so good.
What people think AI generating code is like:
  1. Make me an app that competes with Instagram
  2. Publish the app to the App Store blindly
What AI generating code is actually like:
  1. Write a very detailed spec
  2. Ask the AI to build the app or make changes in an existing app
  3. Ask the AI to make changes or fix issues
  4. Ask AI to check for potential bugs
  5. Ask AI to check for potential performance issues
  6. Review the code written by AI. Spend more time reviewing if it's business critical.
  7. Make sure changes pass all unit and integration tests, and update the tests with the new changes
  8. Use the growing list of AI agent software tools to test the app, such as UI changes
  9. Push to test server and have QA/other humans test the functionality
  10. Push to live
In this case, AI still writes 90% of the code. AI is also used to speed up things for writing, testing, documenting code. Humans are not replaced. It just makes humans more productive.

Will AI ever get good enough to just write all the code and replace humans? No. Because you still need a human to tell it what is needed.
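Sketched as code, the loop in the list above looks something like this. The model calls are hypothetical stubs (ask_model and apply_patch stand in for whatever provider and tooling you use); only the pytest gate is concrete:

```python
# A minimal sketch of the gated loop described above: generate, test,
# and only then hand off for human review (steps 2-9 of the list).
import subprocess

def ask_model(prompt: str) -> str:
    """Hypothetical LLM call; swap in your provider's client here."""
    raise NotImplementedError

def apply_patch(patch: str) -> None:
    """Hypothetical helper that writes the model's patch into the repo."""
    raise NotImplementedError

def tests_pass() -> bool:
    # Step 7: changes must pass the existing unit/integration suite.
    return subprocess.run(["pytest", "-q"]).returncode == 0

def iterate(spec: str, max_rounds: int = 3) -> bool:
    patch = ask_model(f"Implement this spec as a patch:\n{spec}")
    for _ in range(max_rounds):
        apply_patch(patch)
        if tests_pass():
            return True                     # ready for human review (step 9)
        patch = ask_model("Tests failed; revise the patch.")
    return False                            # escalate to a human
```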
 
Last edited:
  • Haha
Reactions: MuddySeal

adroc_thurston

Diamond Member
Jul 2, 2023
7,062
9,804
106
What people think AI generating code is like:
  1. Make me an app that competes with Instagram
  2. Publish the app to the App Store blindly
What AI generating code is actually like:
  1. Write a very detailed spec
  2. Ask the AI to build the app or make changes in an existing app
  3. Ask AI to check for potential bugs
  4. Ask AI to check for potential performance issues
  5. Review the code written by AI. Spend more time reviewing if it's business critical.
  6. Make sure changes pass all unit and integration tests, and update the tests with the new changes
  7. Use the growing list of AI agent software tools to test the app, such as UI changes
  8. Push to test server and have QA/other humans test the functionality
  9. Push to live
In this case, AI still writes 90% of the code. AI is also used to speed up things for writing, testing, documenting code. Humans are not replaced. It just makes humans more productive.

Will AI ever get good enough to just write all the code and replace humans? No. Because you still need a human to tell it what is needed.
You're vomiting out unmaintainable mysterymeat while also spending more time on allat than just writing the code.

Again, I admire the koolaid drinking but this is either bait or you're trying to fail a future performance review. Can be both, even.
 

mikegg

Golden Member
Jan 30, 2010
1,975
577
136
You're vomiting out unmaintainable mysterymeat while also spending more time on allat than just writing the code.

Again, I admire the koolaid drinking but this is either bait or you're trying to fail a future performance review. Can be both, even.
OpenAI and Anthropic must be really good at scamming developers if they are growing 4-5x or more per year.

The number of uninformed opinions on the state of AI on the Anandtech forums is high.

I have a theory: people are afraid of how fast AI is moving. They're afraid of AI making their own jobs redundant. They think it might give people like Sam Altman too much power. They think AI is making GPUs and CPUs more expensive for their hobbies (usually gaming). They don't work at OpenAI, Anthropic, Nvidia, or an AI company so they don't reap the financial benefits. They read a few negative articles on AI and decide to become anti-AI, calling it hype, calling it a bubble, saying it's not "real" AI, saying it's generated garbage. They want to bury their heads in the sand and hope this all goes away soon.
 
  • Haha
Reactions: MuddySeal

adroc_thurston

Diamond Member
Jul 2, 2023
7,062
9,804
106
OpenAI and Anthropic must be really good at scamming developers if they are growing 3-4x per year.
Growth is a meaningless word in a world of subsidized demand.
Only that dumb money runs out eventually, and then everyone and everything goes *poof*.
I have a theory: people are afraid of how fast AI is moving
It's moving so fast LLMs have plateaued completely.
 

mikegg

Golden Member
Jan 30, 2010
1,975
577
136
Growth is a meaningless word in a world of subsidized demand.
Only that dumb money runs out eventually, and then everyone and everything goes *poof*.
Cool. Except for the fact that OpenAI and Anthropic make a profit on each subscriber acquired.

It's moving so fast LLMs have plateaued completely.

You're proving my theory correct.

The number of uninformed opinions on the state of AI on the Anandtech forums is high.

I have a theory: people are afraid of how fast AI is moving. They're afraid of AI making their own jobs redundant. They think it might give people like Sam Altman too much power. They think AI is making GPUs and CPUs more expensive for their hobbies (usually gaming). They don't work at OpenAI, Anthropic, Nvidia, or an AI company so they don't reap the financial benefits. They read a few negative articles on AI and decide to become anti-AI, calling it hype, calling it a bubble, saying it's not "real" AI, saying it's generated garbage. They want to bury their heads in the sand and hope this all goes away soon.
 
  • Haha
Reactions: MuddySeal

Trovaricon

Member
Feb 28, 2015
32
55
91
AI is a special case, because it's technology that is likely to change the economic system in which it operates. It will unquestionably change how we handle and value IP. If it delivers even marginally on its worker replacement claims (see Anthropic CEO claiming they will be able to replace 90% of programmers - that's their business model, btw) then what larger impact will they have on the economy, and how will that apply backpressure to the valuation of the company?
90% of the "software engineers" (okay, maybe 75%) could be kicked out even today, no AI needed... They used to be forced to at least utilize their conman/social skills before, when people had to be present in the office. There are other social forces at play preventing companies from liquidating the workforce - it would have a domino effect (e.g. it means their manager - and his manager - are redundant too...)

Even Copilot in Visual Studio is more productive / has better understanding than most of the "staff engineers" I work with.
 
  • Like
Reactions: Tlh97 and DZero

DZero

Golden Member
Jun 20, 2024
1,623
629
96
90% of the "software engineers" (okay maybe 75%) can be kicked out even today, no AI needed... They used to be forced to at least utilize their conman / social skills before, when people had to be present in the office. There are other social forces at play preventing companies from liquidating the workforce - it will have a domino effect (e.g. it means their manager - and his manager are redundant too...)

Even Copilot in Visual Studio is more productive / has better understanding than most of the "staff engineers" I work with.
In the end, automation will end the era of the normal worker.
 

Pontius Dilate

Senior member
Mar 28, 2008
251
448
136
I'm realizing that my own vocabulary on this may be incorrect. I can plug my phone into a USB port on the car, and that replaces the usual center-console touchscreen display with something from the phone. I understand this to be CarPlay.

I don't quite understand why all of this couldn't also be done wirelessly over Bluetooth, which the car and phone both have and already use to communicate - the car knows the phone is there when I get in it.

I could understand if there are some limitations with that kind of connection, but I just want to listen to music on a short trip. It will handle calls that way so clearly playing music shouldn't be too much to ask.
Wireless CarPlay uses Bluetooth only to negotiate the connection between the phone and the head unit; all the rest of the streaming happens over Wi-Fi. For cars that don't have wireless CarPlay you need USB to stream the data, because Bluetooth has insufficient bandwidth.
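Ballpark numbers make the bandwidth point concrete; all three bitrates below are rough assumptions, not measured figures:

```python
# Why Bluetooth can't carry CarPlay's display stream (ballpark figures).
bt_mbps = 2.0           # practical Bluetooth Classic throughput
video_mbps = 15.0       # assumed H.264 bitrate for the head-unit UI stream
wifi_mbps = 200.0       # typical 802.11ac link to a head unit

print(f"Bluetooth covers {bt_mbps / video_mbps:.0%} of the video stream")   # ~13%
print(f"Wi-Fi headroom:  {wifi_mbps / video_mbps:.1f}x the video stream")   # ~13x
# Call/music audio is ~0.3 Mbps, which is why Bluetooth handles those fine.
```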
 

name99

Senior member
Sep 11, 2010
652
545
136
I know this isn't what you were talking about as far as OpenAI, but something I found quite interesting were the discussions WRT Oracle's deal with OpenAI to invest $300 billion in Oracle cloud starting in 2027. OpenAI has yearly REVENUE of $10 billion. It doesn't matter how profitable that revenue is; even if it is pure profit, that's far, far short of what's required just to meet that one spending commitment.

Doing so requires either revenue growth of greater than an order of magnitude within the next couple of years and/or massive external investment (i.e. substantial dilution for the current shareholders). Hence why Oracle declined more than 10% from the massive boost that announcement initially gave it - smart investors are wondering how much of that $300 billion Oracle will ever see.

One of the things people like about AI, even knowing that it often hallucinates or is just plain wrong, is that it isn't "gamed" like search is by SEOs and ensh-ttified by advertising. In order for OpenAI and others to justify their spending they need to increase that revenue, and as with everything else on the web, that's mostly going to be done via advertising. Sure, they will offer subscriptions, but most of us won't pay for them, and subscriptions are no guarantee you won't see ads - avoiding those will cost extra over and above the subscription, just like with Amazon, Disney, Netflix and everyone else in the streaming world.

They can't move too fast on this, though, or people will abandon AI before it becomes ingrained in the way they use the web. If it gets to the point where AI search adds no value over standard search - because its output is driven more by what advertisers are paying for than by what users are looking for, or you have to pay a lot to avoid that - the mass market will reject it.
Your background model seems to be that most of the money in AI must come from consumers, and in turn this must come from ads.
An alternative model is that both are false.

Oracle, for example, is an enterprise company with zero interest in consumer. If they see money here, they see it in things like drug discovery, or more rapid simulations of hardware. There is a LOT of money in enterprise, and they're willing to spend it on functionality that works (eg Oracle and IBM).

Likewise, people are willing to pay a subscription for many things. If there's no alternative, they will pay a monthly cell phone bill. If the alternatives are inferior (broadcast and Pluto vs Netflix), many will pay for Netflix. Your assumption seems to be that people currently paying for ChatGPT will drift toward the free version. I could see it going the other way: the free version is kept good (ie ad-free) but throws up barriers (eg no more than two reasoning requests a week) that enough people hit often enough that they see the value in paying.

I suspect that most of the people in this discussion
- have above-average smarts
- have above-average tech experience
- are older than the median American/world human
All of which means that you're probably very, very far from the pool of customers using ChatGPT and what they want. Simple question - is your phone your primary or sole computing device? If that's not true, I don't think you should be very confident in your intuitions about what "most people" want from computing...

Let me offer two datapoints.
1. All the Siri hoopla and nonsense. Siri was never meant to be a replacement for Google. There are many legit things to complain about within Siri, but not this one. AND YET this is the primary Siri complaint -- because people treat it as Google and cannot distinguish between Siri, Google, "the internet", and so on
2. I already know one person (and I am not especially social, and don't really know anyone below the age of 40) who uses ChatGPT as a replacement for Google, asking it everything, regardless of the "type" of question. Obviously he has a subscription. And just as obviously to me, he sees that subscription as providing genuine value.

What I see in so much of this discussion (across the whole net, not just here) is
1. extreme ignorance of behaviors and usages outside one's immediate experience
2. an obsession with pointing out that something is not perfect as though that's some sort of proof of anything. Cars aren't perfect. Planes aren't perfect. Neither wiki nor google nor windows nor linux are perfect. And yet we use them all because they're a lot better than nothing.
 

name99

Senior member
Sep 11, 2010
652
545
136
OpenAI and Anthropic must be really good at scamming developers if they are growing 4-5x or more per year.

The number of uninformed opinions on the state of AI on the Anandtech forums is high.

I have a theory: people are afraid of how fast AI is moving. They're afraid of AI making their own jobs redundant. They think it might give people like Sam Altman too much power. They think AI is making GPUs and CPUs more expensive for their hobbies (usually gaming). They don't work at OpenAI, Anthropic, Nvidia, or an AI company so they don't reap the financial benefits. They read a few negative articles on AI and decide to become anti-AI, calling it hype, calling it a bubble, saying it's not "real" AI, saying it's generated garbage. They want to bury their heads in the sand and hope this all goes away soon.
I think it's even simpler than that.

A year or so after 9/11, The Onion wrote a headline something like "Here's why what just happened proves why I'm right about everything". I still think this describes 99% of commentary.

If AI commentary isn't coming from one of a very few people who are BOTH very smart and deeply within the field, then it's not commentary about AI; it's "Here's why what just happened proves why I'm right about everything". It's a way to yoke your pet theory of the world - whether that's "humans are innately evil" or "globalization hasn't gone far enough" or "we need more babies" or "me me me, look at me, I'm the center of the universe, MEEEEE" - to whatever people are talking about.

So I don't think it's as shallow as "people are afraid of how fast AI is moving"; it's even MORE SHALLOW than that!
It's insert AI buzzwords into a pre-existing mental template...
 

Doug S

Diamond Member
Feb 8, 2020
3,567
6,303
136
Your background model seems to be that most of the money in AI must come from consumers, and in turn this must come from ads.
An alternative model is that both are false.

All the money ultimately comes from consumers. Whether it is washed through a corporation on the way doesn't matter, it ultimately has to be used in a product that consumers (or the government that consumers fund via taxes) buy. Yes you're right that businesses don't "pay" in ads, so what would incentivize them to be the primary/significant funding source for AI? Reducing costs. How would AI reduce costs? By reducing headcount.

Now you can do some simple math based on the ~ $11 trillion in wages paid in the US (yes there's the rest of the world but let's just look at the US) and figure out how much you think AI needs to slice off to make its nut from selling service to businesses. Whatever percentage of that $11 trillion you take, you can probably triple it to get the percentage of people who will lose their jobs (or not be hired if it happens slowly enough they cut via attrition) because the people AI can replace will be those at the lower rungs of the wage scale.

People make "productivity" arguments about AI all the time, but don't consider what productivity is in economic terms. It is increasing revenue/profit/GDP (depending on where you're measuring it) per unit of human labor. That can come from my employer taking away the shovel I was using to dig ditches and giving me a backhoe, or from a smart guy somewhere coming up with a new way of doing things so my employer no longer needs ditches - and therefore no longer needs me and I lose my job. AI is the equivalent of a backhoe for digging ditches more quickly, it is not and will not be the equivalent of the smart guy coming up with a new way of doing things that avoids the need for ditches. So one can argue productivity all they want, but in terms of what AI can deliver it means business needing fewer employees to do the same thing.

So back to the $11 trillion: if you're only taking 1% of it (about $110 billion a year), that's not enough to even fund the Oracle deal, let alone everything else OpenAI and whoever else is ultimately a "winner" is spending money on - but it's already 3% higher unemployment. In order to feed itself from the business world, AI would have to give us double-digit unemployment. That's going to reduce consumer demand for the very products and services that AI is making more "productive".
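Making that arithmetic explicit - the 1% revenue share and the 3x headcount multiplier are the post's own assumptions:

```python
# Doug's wage math, spelled out.
us_wages = 11e12            # ~ total annual US wages, USD
ai_take = 0.01              # suppose AI captures 1% of the wage bill as revenue
headcount_multiplier = 3    # cuts skew to lower-wage jobs, so jobs % ~ 3x wage %

revenue = us_wages * ai_take
print(f"AI revenue: ${revenue/1e9:.0f}B/yr")                               # $110B/yr
print(f"implied unemployment bump: {ai_take * headcount_multiplier:.0%}")  # 3%
```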

Here's where you hear the argument about how, every time there have been advances, new types of jobs have appeared. They have, and they probably will again, but they don't appear right away. The industrial revolution was a very slow-moving process. Computerization happened more quickly, and the internet more quickly than that. AI is moving far faster than similar changes have in the past, but the processes that drive the creation of new types of jobs don't. If new jobs aren't appearing at the same rate they're being destroyed, you don't just end up with a spike in unemployment and then a recovery, but with structural unemployment that lasts much longer. And that has societal costs.
 
  • Like
Reactions: Viknet and Tlh97

gdansk

Diamond Member
Feb 8, 2011
4,567
7,679
136
That relentless push to save pennies. Supply chain guru at it again.
Of course, there are reasonable limits.