igor_kavinski
Lifer
- Jul 27, 2020
> meaning global Mac desktop sales are probably around 2% of global PC sales.
Hmm... I wonder how they could improve that number?
*hint* Windows on ARM and first party Linux driver support *hint*
> I wonder how they could improve that number?
They don't care.
> Hmm... I wonder how they could improve that number?
Apple's been improving that number by undermining desktop sales. Last I saw, desktop PCs had fallen below 30% of all PC sales, to ~75 million units globally - compare that to Apple's 52 million iPads, and I'd bet anything that Apple's profits off those 52 million iPads exceed the aggregate profits off those ~75 million desktops by quite a wide margin. So to start, why on earth would Apple want to grab a growing share of a declining market? Never try to catch a falling knife. The upside to the desktop segment is that gaming PC sales seem to be holding, and they're high-ASP. Enterprise PC sales seem to be shrinking slowly as their lifecycles stretch out, and casual consumer PC sales are collapsing to mobile, helped in part by the enshittification of the web.
> Hmm... I wonder how they could improve that number?
> *hint* Windows on ARM and first party Linux driver support *hint*
We need a Mac Mini with an ATi All In Wonder, they'll sell tens of units! Or they could keep selling the same number of desktop Macs while the global PC desktop market continues to shrink, which is probably their best-case outcome. If that shrinks by 33%, their 2% is now 3%.
Why should Apple care about dying markets? Might as well suggest adding CableCARD support to Apple TV to let them compete for that dying market.
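To spell out the share arithmetic above: if Mac desktop units hold steady while the overall desktop market shrinks by a third, a 2% share becomes roughly 3%. A minimal sketch in C (the 100-unit baseline and the 33% figure are illustrative; only the ratios matter):

```c
/* Back-of-the-envelope check of the "2% becomes 3%" point above.
   The 100-unit baseline is made up for illustration. */
#include <stdio.h>

int main(void) {
    double market       = 100.0;  /* baseline desktop PC market        */
    double mac_desktops = 2.0;    /* Macs hold a 2% share of it        */

    double shrunk_market = market * (1.0 - 0.33);  /* market down 33%  */

    printf("share before: %.1f%%\n", 100.0 * mac_desktops / market);         /* 2.0%  */
    printf("share after:  %.1f%%\n", 100.0 * mac_desktops / shrunk_market);  /* ~3.0% */
    return 0;
}
```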
> So I'll repeat a little story. Back in the mid-aughts, tech analysts were pessimistic about Apple's story, because they could look at the size of total consumer spending on consumer electronics and couldn't square that with the projections of iPod sales. They had more faith in the consumer electronics spending number than the iPod number. And then every year Apple not only hit but exceeded their iPod numbers and blew up that consumer electronics spending number. The analysts couldn't understand it until the apparel market analysts chimed in - their market was collapsing. Teens weren't asking for clothes, which had been the primary way that young people expressed identity to their peers, and now were asking for iPods. We had segmented these markets around the nature of the good, rather than around the job to be done. To teens, they did the same job - what jeans you wore used to say something about you, but what earbuds you wore now did. Apple siphoned revenue out of the apparel market to such a degree that retailers went under. That was in the blind spot of the tech analysts and the enthusiasts like us.
Thanks! So it's the dang teens shaping markets. Need to raise better teens, folks! Buy them old used spare parts to assemble as kids and make this market grow!
> ARM is clearly dominant in mobile and that's where the consumer space is headed. x86 still totally dominates server and datacenter markets, but with Qualcomm launching 24-core server SoCs with similar I/O capabilities I think we will see a shift to ARM in that space too.
> If Intel sorts out its 14nm issues and somehow makes a comeback at 10nm (which is looking less and less likely every year), there is a possibility that they will retain their dominant position in the server space. The problem is most news is pointing to TSMC launching 10nm in 2H 2016/1H 2017, while Intel will still be launching Kaby Lake and giving the same core counts and same performance as they did before, plus 2.5-5% maybe. With the huge jumps in performance we see in the A9X and Exynos 7420/8890, there is reason to believe that Qualcomm or another ARM vendor could continue to improve performance at that cadence, which mathematically makes it impossible for Intel to keep up. Intel isn't a company that can pivot quickly and start making something different, so they will likely just continue paying off vendors and using x86 lock-in to suck revenue out of their server and datacenter customers. The only people who still buy x86 for consumer crap today are people who either need extreme performance to play games or people who are just too cheap to buy Apple. That doesn't look good for the future.
I like this comment. Back in those years we thought 14nm was the problematic Intel node. Heh.
> Oh they (Intel) can. Especially if companies like Apple join their foundry wagon. Then ARM (besides Apple) gets stuck on much higher nodes, because TSMC/Samsung can't afford lower.
> It's all a cash and revenue game.
Italics is my note. Hilarious how backwards this opinion ended up being, in hindsight.
> It's interesting that nobody called out the biggest ARM problem for consumer laptops and desktops (that I saw), and that's the craptastic long-term driver support from ARM manufacturers. I can still run a GPU/x86 CPU from that era and basically all the accompanying hardware with pretty decent support in Linux.
LTS also sucks on Windows with ARM CPUs, and don't even get me started on Qualcomm's drivers on Windows. ARM on non-Apple platforms is useless.
> It's interesting that nobody called out the biggest ARM problem for consumer laptops and desktops (that I saw), and that's the craptastic long-term driver support from ARM manufacturers. I can still run a GPU/x86 CPU from that era and basically all the accompanying hardware with pretty decent support in Linux.
I think that pitfall of ARM was actually pretty commonly quoted all the time, it just didn't appear in this thread?
Yet all the ARM stuff is basically e-waste at this point. You'd be stuck on a specially patched 3.x version of the kernel.
It wouldn't even take much to fix this problem, just upstreaming drivers. The one exception from the period is the Raspberry Pi.
> Thanks! So it's the dang teens shaping markets. Need to raise better teens, folks! Buy them old used spare parts to assemble as kids and make this market grow!
I think it's telling that I relay a parable about how Apple looks outside of existing markets for growth and you take it as a reason to complain about teenagers.
> I think that pitfall of ARM was actually pretty commonly quoted all the time, it just didn't appear in this thread?
I agree in terms of the Windows ABI being a lot easier to deal with.
Upstreaming doesn't protect you from lack of maintenance leading to eventual breakage of your old drivers, though. Drivers and device support get tossed out of the kernel constantly for this reason. The ceaseless version churn of the Linux ecosystem (with the related but also orthogonal fragmentation issue) adds to the problem too.
IMHO the Windows model, where drivers have a stable API/ABI and can be installed independently of kernel version and build, ends up being superior. Well, not just somewhat better, but the one I would want to see everywhere. This compatibility model helps with application software too, obviously (native Linux game ports, anyone?).
> you take it as a reason to complain about teenagers.
It's true. The new generation thinks the old one is dumb but they haven't done anything particularly amazing so far. Non-upgradable Apple hardware destined for landfills is nothing to be proud of and they voted with their money to make Apple think they did the right thing.
> It's true. The new generation thinks the old one is dumb but they haven't done anything particularly amazing so far. Non-upgradable Apple hardware destined for landfills is nothing to be proud of and they voted with their money to make Apple think they did the right thing.
Every generation thinks the old one is dumb, and every generation thinks the next one is lazy and entitled. My grandfather assembled his own TV and thinks you're a fool for buying one preassembled. I had a coworker who bitched the day he could no longer buy a printer he could debug with a serial cable. I'm an old enough coder to be in the 'if you want it done right, write it in assembler' cohort. There's nothing new here.
And you're still running grievances rather than taking the lesson. The hardware companies are going to expand the market because that's where the real money is, and they're going to leave you behind. That's what they always do. It's not the customers' fault they do that - that's how markets work.
IIRC parts of Wolfenstein 3D and Doom were written in assembly because the hardware just wasn't powerful enough if everything was written in C. It certainly helped that John Carmack is a wizard.
Compilers were far less capable then. If he had access to a compiler as good as today's, the spots where he could gain benefit from assembly would be few and far between, and I think he'd be the first to admit that.
The only good case for hand coding today basically comes down to stuff like SIMD, since even modern compilers have problems extracting and maximizing the benefit of instruction-level parallelism. Almost no one is hand coding sequences of ordinary instructions, because it is almost impossible to consistently beat the compiler on any sequence longer than what fits in a 24x80 window. It isn't worth tweaking even a tight inner loop to save one or two cycles - save a cycle in an inner loop run a billion times and you've saved less than a quarter second at modern CPU clock rates lol
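To put the quarter-second claim above into numbers: a billion iterations times one saved cycle, at an assumed ~4 GHz clock, comes out to 0.25 seconds. A minimal sketch:

```c
/* Rough check of the "quarter second" figure above.
   The ~4 GHz clock is an assumed round number for a modern desktop CPU. */
#include <stdio.h>

int main(void) {
    double iterations   = 1e9;  /* inner loop executed a billion times */
    double cycles_saved = 1.0;  /* one cycle shaved off per iteration  */
    double clock_hz     = 4e9;  /* ~4 GHz clock                        */

    printf("time saved: %.3f s\n", iterations * cycles_saved / clock_hz);  /* 0.250 s */
    return 0;
}
```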
Instead of wasting time writing stuff in assembly (other than the aforementioned SIMD) you'll get far more bang for the buck in algorithm redesign. Not necessarily a complete refactor/rewrite, but hints that are derived from a human's high level understanding of what is going on. For example, using extensions like __builtin_prefetch if you know the data access patterns can result in an order of magnitude improvement in some cases without needing to touch or even understand assembly code. Smaller stuff such as marking branches as likely or unlikely, that sort of thing.
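For what it's worth, here is a rough sketch of what those hints look like with the GCC/Clang builtins mentioned above. The function, the array, and the eight-strides-ahead prefetch distance are made up for illustration, not tuned advice; only __builtin_prefetch and __builtin_expect themselves are real compiler extensions.

```c
/* Sketch: compiler hints instead of hand-written assembly (GCC/Clang only). */
#include <stddef.h>
#include <stdio.h>

/* The usual likely/unlikely idiom built on __builtin_expect. */
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

static long sum_every_nth(const long *data, size_t n, size_t stride) {
    long total = 0;
    for (size_t i = 0; i < n; i += stride) {
        /* We know the access pattern, so request the element we'll want a few
           iterations from now: args are (address, read=0/write=1, locality 0-3). */
        if (likely(i + 8 * stride < n))
            __builtin_prefetch(&data[i + 8 * stride], 0, 1);
        total += data[i];
    }
    return total;
}

int main(void) {
    static long data[1 << 16];
    for (size_t i = 0; i < sizeof data / sizeof data[0]; i++)
        data[i] = (long)i;
    printf("%ld\n", sum_every_nth(data, sizeof data / sizeof data[0], 16));
    return 0;
}
```

As with anything in this area, it only pays off if profiling shows the loop is actually memory-bound; the hints are free to ignore for the compiler and the hardware.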
We still teach asm programming for embedded development and silicon design. There are lots of places where you have pretty rudimentary compilers, and of course if you're designing your own silicon, bootstrapping your own compiler, or working in an environment with very limited RAM or really important hardware security, knowing exactly what the code is doing becomes important. Writing something in C can usually get you there, but it still has a lot of memory overhead that hand-coded asm won't have, and if you only have 1K of RAM to work with, that can be important. It also tends to get used a lot in real-time stuff because you can literally count cycles in the code - you don't have to guess what the compiler will do. Also, if you are reverse-engineering something, you need to know asm.
It's not like we don't still have a lot of environments with limitations similar to the '90s. (I did my last proper asm project (68K) in that time frame.) Last I checked, assembly is still about as commonly used as Rust and PHP, just not on PCs. I think RTKit (the realtime OS inside iPhones that handles the radios, etc.) is mostly programmed in asm and C. Generally you don't find it taught much in computer science programs, but it's pretty key in computer engineering and some EE programs. We start with a high-level language like Python to teach CS concepts, then go to asm and C to get down to what's happening at the metal layer, and then teach digital logic and VHDL so you can start designing the metal. There'll be some hardware/software codesign course or two in there. There'll be a realtime course, a parallel and distributed course getting into HPC concepts, maybe a GPU/NPU design course now, etc. Assembly will show up throughout all of that, because that's how you see how the hardware works.
> I don't think assembly is taught much outside of specific cases. Hell, looking at some CS courses at some state universities, they don't even bother with C/C++. They start with Java. The higher-tier state schools still taught C/C++, and one of my courses included assembly. Had I gone the engineering route I'm sure I would've gotten more in depth with it.
They start with Java because ACM provides a Java model curriculum and the CS AP course is Java-based. Back in the mid-aughts everyone was convinced Java was going to take over the world, and a LOT of intro curricula shifted to Java and only Java. It's a terrible language to use for an intro course, though. But if you're taking transfer students, odds are the students learned Java in the community college, so it's held on in the same way that imperial units have.
I helped design a CS curriculum, and one of the hills we were willing to die on was not starting with Java. We started with Python because everyone should learn Python (seriously) and the language gets out of your way, plus we could teach procedural, OO, and functional programming with it, and students walked out with a pretty useful scripting/rapid-development language. From there we went to asm/C and did embedded system programming. Then more C/C++. So by the end of their first year they'd seen four languages, and by the second year everything was anchored on subject matter and would either lean on those languages or expect students to pick up a new one as needed. We found it pretty hard to teach compiler design without students knowing assembly.
CS programs kind of come in three flavors: the old-school algorithm/theory-anchored ones, where none of the courses centered on learning a language and you'd start out learning Lisp or something (they're dying out, but they'll get a resurgence as AI-centric programs with a focus on the math involved); the PC-centric ones that assumed everyone would go out and work for Oracle or Microsoft (these always started with Java); and the more engineering-heavy ones that figured you could learn the PC stuff on your own but not the HPC stuff, compiler design, embedded, etc. What department the program originated from usually tells you which kind of program it is. If it hangs off of engineering (I was in engineering when we built that program), you'll get the last one. If it hangs off the math department, which a lot of early CS programs did, you get the first one. If it comes more broadly out of LA&S, you'll get the middle one. From the engineering side, we figured the PC-centric jobs were the most fragile and fungible, and also the easiest to self-learn. The jobs you didn't see were the ones you needed to train students for - the realtime failsafe systems that keep airliners from falling out of the sky - and we did send more students to Boeing and Airbus than to Microsoft and Oracle. The engineering programs tended to include a decent amount of software engineering as well. The PC-centric ones might, perhaps, encourage you to use version control.
> However, if you are looking at strictly desktops, desktop Macs represent only around 15% of total Mac sales, meaning global Mac desktop sales are probably around 2% of global PC sales.
I'm surprised it's 15%. I would have guessed 5%.
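One step the quote leaves implicit: for desktops at 15% of Mac sales to land at roughly 2% of global PC sales, Macs overall would have to be somewhere around 13% of PC sales. That 13% is derived purely from the two quoted figures, not a number from the thread:

```c
/* The step left implicit in the quote above: combining "desktops are ~15% of
   Mac sales" with "~2% of global PC sales" implies an overall Mac share of
   roughly 13% of PC sales. Derived from the quote, not measured data. */
#include <stdio.h>

int main(void) {
    double desktop_share_of_macs = 0.15;  /* quoted: 15% of Mac sales are desktops */
    double desktop_share_of_pcs  = 0.02;  /* quoted: ~2% of global PC sales        */

    double implied_mac_share = desktop_share_of_pcs / desktop_share_of_macs;
    printf("implied Mac share of PC sales: %.1f%%\n", 100.0 * implied_mac_share);  /* ~13.3% */
    return 0;
}
```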
