
Question: 64C/128T is the most we will see on Desktop

Jimzz

Diamond Member
Oct 23, 2012
4,331
125
106
One day, yes, but software has not caught up with 16 cores in many cases, let alone 64.

But 8 cores being common now will push software to work better in the future.
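The scaling wall described here can be sketched with Amdahl's law: if even a small fraction of a program is serial, piling on cores stops helping. A minimal illustration (the 5% serial fraction is an invented example, not a benchmark):

```python
def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    """Ideal speedup for a workload with a fixed serial fraction."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# Even with only 5% serial code, 64 cores give nowhere near 64x:
for n in (8, 16, 64, 128):
    print(n, round(amdahl_speedup(0.05, n), 1))
# 8 -> 5.9, 16 -> 9.1, 64 -> 15.4, 128 -> 17.4
```

Doubling from 64 to 128 cores buys only ~13% here, which is why badly threaded software makes big core counts look pointless.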
 

Markfw

CPU Moderator, VC&G Moderator, Elite Member
Super Moderator
May 16, 2002
20,340
8,044
136
Everyone thinks of desktop/HEDT as a one-application-at-a-time thing that uses a few threads. I do DC (distributed computing), and there's Cinebench and many others that can all use as many cores as we can feed them, even in Windows, although Linux is more efficient.

Kind of like this. One socket....
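Workloads like DC and Cinebench are embarrassingly parallel: chop the work into independent chunks and feed one to every core. A minimal sketch using Python's standard library (the `crunch` work function is made up for illustration):

```python
from multiprocessing import Pool, cpu_count

def crunch(chunk):
    # Stand-in for one independent work unit (a render tile, a DC unit, ...).
    return sum(i * i for i in chunk)

if __name__ == "__main__":
    # Split the job into 8 independent chunks of 10,000 numbers each.
    chunks = [range(k * 10_000, (k + 1) * 10_000) for k in range(8)]
    with Pool(cpu_count()) as pool:          # one worker per core
        results = pool.map(crunch, chunks)   # chunks run in parallel
    print(len(results), sum(results))
```

With no shared state between chunks, adding cores just means finishing more chunks per second, which is why these workloads scale to whatever you can feed them.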

 

moinmoin

Golden Member
Jun 1, 2017
1,664
1,610
106
For Desktop UHEDT, the 64C/128T TR will be the CPU with the most cores available; it seems like beyond 64T, all apps, even Linux apps, are not scaling very well


I was hoping to see Milan TR4 go beyond 64C on the desktop, but I think it will only be for servers
You are looking at the status quo, one which is still mainly shaped by an era where CPUs barely exceeded 4 cores. Manufacturers like AMD have to plan up to 5 years ahead; there is no way the number of cores is going to stagnate this long again.

(The other discussion would be what exactly constitutes "desktop". I expect the range of different applications to become much more diversified, not only with respect to the number of cores but also additional co-processors, as is already happening on mobile.)
 

Hulk

Platinum Member
Oct 9, 1999
2,624
55
91
Markfw said: "Everyone thinks of desktop/HEDT as a one at a time application thing, that uses a few threads. …"
This is spot on. Besides pushing developers to utilize more cores/threads, these cores give users the flexibility to run multiple compute-intensive applications simultaneously.

AMD has done a really amazing thing by offering affordable CPUs from 4 cores all the way to 64. Intel was "feeding" us like two additional cores every 4 or 5 years (if we were good). Then AMD comes along and boom! Need 12 cores? Or 16? Or 24? Or 32? We've got you covered. And the price/core ratio scales pretty honestly.

Or imagine buying an 8-core system now, then upgrading to 16 cores at some point, and then 32?
I've been an Intel guy for a long time, but I have to give credit where credit is due. AMD has gotten its stuff together and seems to be building on its recent successes. I even feel like the fact that we aren't hearing much about Zen 3 is a good thing. No need to prop it up with teasers. Zen 2 is already at the top of the market, and Zen 3 will prove itself when released.
 

Mopetar

Diamond Member
Jan 31, 2011
4,696
879
126
So what you’re saying is that 64C ought to be good enough for everyone?

That kind of reminds me of some other quote about computer hardware. I wonder how that aged.
 

Markfw

CPU Moderator, VC&G Moderator, Elite Member
Super Moderator
May 16, 2002
20,340
8,044
136
So, earlier I responded to the OP. But now that I have read the linked thread, it agrees with me. A lot of software can use 64 cores and 128 threads, and Linux is better than Windows.

Why is the OP saying that 64 cores is the most we will get? Nothing like that is in the linked benchmarks.
 

Arkaign

Lifer
Oct 27, 2006
20,518
974
126
I sort of agree with the OP in a roundabout way.

If you are in a place with a gigabit to multi-gigabit fibre LAN, and have regular tasks that need a lot of cores, but not necessarily 24/7, then on-demand hosted scaling subscription-type services make more sense. E.g., you may only be able to afford a 32- or 64-core workstation, but be able to afford a subscription to a datacenter that could spin up tens of thousands of cores for your task; once it's completed, you go back down to a basic hosted level.

Say: mirror your applications/etc. in a cloud server VM, which is a co-located lease. When 'idle', it could revert to a low-resource mode. When a big crunching job comes along, if your dataset is mirrored between, say, a Synology and the cloud host, you could have it call up enormous resources to rapidly finish the job, then drop back to near zero.

It obviously depends on your needs. For someone doing ongoing DC like Folding/SETI/etc., it would be prohibitively expensive to lease huge compute for months or years on end. But on the flip side, if you're a small-to-midsize outfit that does, say, 8K epic video editing and effects, then time is extremely important AND you have uneven compute demands: idle a lot of the time, then boom, you have a task that you need done ASAP. Even the best theoretical 1024-core (4x256-core EPYC) 4-socket single-box custom WS can't hold a candle to calling up 8,000 or 12,000 EPYC cores for the job.

Scalability is the fastest-developing sector of tech in recent times, thanks to projects like Tianhe-2 and Oak Ridge's Frontier supercomputer, not to mention Nvidia, Intel, and AMD doing enormous work on the supply and engineering sides of it. It also sidesteps some of the diminishing returns in single-package CPU performance growth, as emerging needs from big data and AI demand ever-expanding performance to meet their goals. Faster interconnects and more efficient job distribution are other areas to watch for rapid advancement.
 

RetroZombie

Senior member
Nov 5, 2019
464
382
96
If developers didn't previously have access to 64-core CPUs, how would they make applications that use them?

Just a software example: the POV-Ray developers didn't have machines with more than 64 threads, so the program didn't scale past that. The new version to be released will:

AMD Threadripper 3990X Review: Intel’s 18-cores, Crushed by AMD’s 64-cores
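The POV-Ray situation is a classic hardcoded-limit problem: a worker count clamped to a constant matching the biggest machine the developers could test on, so extra cores sit idle. A hypothetical before/after sketch (not POV-Ray's actual code; the constant and function names are invented):

```python
import os

MAX_RENDER_THREADS = 64  # the kind of constant that quietly caps scaling

def old_worker_count() -> int:
    # Pre-fix behaviour: clamp to the largest machine the devs had.
    return min(os.cpu_count() or 1, MAX_RENDER_THREADS)

def new_worker_count() -> int:
    # Post-fix behaviour: trust whatever the hardware reports.
    return os.cpu_count() or 1

print(old_worker_count(), new_worker_count())
```

On anything up to 64 threads the two behave identically, which is exactly why the cap went unnoticed until 128-thread hardware arrived.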
 

JasonLD

Senior member
Aug 22, 2017
254
206
86
I would certainly rather have 64 cores that are 100% more powerful than 128 cores, but since it has become so hard to improve single-core performance, just throwing in more cores whenever transistor density improves is going to be the "easy and lazy" approach for the next 4-5 years.
Though I expect some major paradigm shift in processor design before we see stuff like a 1024-core processor.
 

kschendel

Member
Aug 1, 2018
66
10
51
I couldn't help but recall Thinking Machines while reading this thread. Granted, the CM-1 wasn't a desktop, but in many ways this is yet another old idea made new again.
 

IEC

Elite Member
Super Moderator
Jun 10, 2004
13,890
3,420
136
I look forward to 128-core CPUs in 2024.

It'll probably take that long to set aside enough hobby money to buy one, given how expensive they will be, but it will be worth it.
 

Arkaign

Lifer
Oct 27, 2006
20,518
974
126
At some point (as much as I personally hate the idea because of a lifetime of being a PC nut) the economy of scale will just not support adding ever more cores into single PCs in the mainstream. Even now things are kind of sluggish for uptake beyond 6/8 Cores, and dual cores still flourish in the mass market units, be they 2C/2T or 2C/4T.

As we've moved so much to the cloud, especially in business environments, I see a lot less 'big iron' out there than a decade ago and further back. I would visit a midsize client (company A, let's call it), and they had:

Exchange Server (Rackmount Poweredge 2 socket Xeon, 8C/8T, 16GB DDR2 ECC, Raid SAS 4x74GB Array)
Barracuda Firewall
Cisco VPN Router (3640 or similar)
SBS 2008 Server / Domain Controller (Another Rackmount Poweredge, this one 2U, 4 Socket, 16C/16T, 16GB DDR2 ECC, Raid SAS 4x36GB OS + Raid SATA 4x1TB data store)
Deprecated Server 2003 microtower server, a Celeron (?), running some ancient mag-card security system air-gapped. I imaged it and tested a clone on similar basic hardware, which worked fine, so I set backups of the login codes and activated/disabled card records to go to an oddball rackmount 16GB solid-state storage thing they had left over from the 90s. It was kind of like a NAS: it had serial and RJ45 interfaces and seemed to run a proprietary OS with an HTTP intranet storage front end. You could open it in a web browser by setting a static 10.0.0.x address and browsing to it, then use the web interface to store data or create SMB shares, which I then used to keep the daily backups of the card data (quarterly backups were all done manually to discs co-located in fireproof/waterproof storage).
Few dozen Dell Precision workstations, Core 2 Quad and Xeon Quad Conroe gens, for the engineers and project devs.
Few dozen mixed HP/Dell/etc basic workstations such as Optiplexes/etc for general staff and management.
Few dozen laptops, mainly Dell Latitudes and some XPS and MacBook renegades for some of the folks, that were business travel units and work from home on the VPN units.
Some kind of multichannel T1 bundle approximating 60ish land lines.
Etc.

Now, that same business has NO on site servers whatsoever. Email, Web, Data Storage, their financial and tracking software, their digital phone system, it's all outsourced to big tech outfits. An 8-bay and a 12-bay Synology endlessly mirroring their data is the only remnant in the old server closet.

Looking around the office, it's now a situation where most people run only a laptop, about 2/3rds PC, usually a Surface Pro or Surface Book and dock, and 1/3rd MacBook. A handful of renegades run iMacs or Dell and Lenovo workstations, which account for the only serious real multicore boxes in the business of ~120+ employees and contractors. 90% of the boxes and laptops are 4C or less (lot of Intel 'U' series 2C/4T fluff, ugh), though at least I don't think very many spinners are anywhere to be found.

And ... Nobody is asking for more power :/ I sometimes talk about Ryzen and how much better say a 9700K is over even the recent 7700K, but I get no traction from it, no excitement. Everything is already fast enough for them outside of the not uncommon situation where one of their provider sources goes down, be it the leased SharePoint, their Egnyte cloud storage, or some damned thing or another. And every year or two Office 365 hosted exchange by Microsoft goes tits up during an upgrade or hack or whatever, and everyone loses their minds lol. To be fair that's probably about the same reliability of the old 2008 Exchange box they had, only with much higher mailbox size limits now.

Home users, it's one of two extremes : power ignorant and don't give a crap (perfect explanation of thin/light weaksauce notebooks and stubby desktops with joke PSUs, running awful specs), and the gamers and hobbyists, for whom the performance chase is very alluring.

That last group, including some of us AT faithful, owe a bit of a debt to the rise of the YouTube gamer personality. Your Markipliers, your JackSepticEyes, etc. It seems strange if you aren't familiar, but they have collectively hundreds of millions of subscribers on there, and basically every kid 6-18 watches YouTube, and what do these new rock stars play on? PC! Thus, over the past five years I have had countless calls and requests to help beleaguered parents build them a 'Gaming' PC, a 'Minecraft' PC, a 'Five Nights' PC, a 'Fortnite' PC, and so on. Whether or not we care or can tolerate them, these internet heroes have been instrumental in bringing PC Gaming back to maybe its finest hour. Before YouTube, most kids didn't give half a shit about PC gaming. It was Xbox, PlayStation, and basically nothing else. Ports of multiplatform games circa 2003-2012 were often poor quality when they even bothered bringing them to PC at all. PS360 have gigantic stacks, the bulk of the library, of games never brought to PC. Now, it's almost foregone that anything major outside of PS and Nintendo exclusives will come to PC, where the YouTube personalities will make the chosen titles ever more famous.

These streamers made a mega millionaire out of indie developer Scott Cawthon. These streamers made Minecraft and Fortnite global brands to rival nearly anything else in entertainment.

Idk, I'm just happy they are doing what they've been doing for PC. Those same gamers are the biggest driving force supporting bigger better (enthusiast consumer class) CPUs, GPUs, Variable refresh displays, and yes, unfortunately : senseless glass cases and LED seizure lighting lol. It will keep that economy of scale going at least a bit further, when most people, or at least sensationalist writers, had the PC well in its grave a thousand times by now. 'Replaced by Tablets', 'Replaced by Smartphones', etc. Nope, still alive! They can pry PC building and upgrading from my cold dead hands haha.
 

eek2121

Senior member
Aug 2, 2005
452
342
136
I feel the need to point out that multiple cores aren't just for single workloads. Windows, for example, has hundreds or even THOUSANDS of threads active at any given time. All things being equal, more cores will always be at least somewhat faster, even if it isn't noticeable. An example:

Google Chrome spawns a new process for every tab. 6 tabs open? 6 processes. On a single-core system your CPU utilization might read as zero, or as high as 20%, depending on what the website is doing. Now add in a file download and Spotify playing music in the background. Now let's say you go play a game. As long as the scheduler is doing its job, a multi-core system will always be able to maintain a higher framerate than a single-core system, even if the game is completely single-threaded!

Even if you don't do any multitasking, Windows still does its own stuff in the background, so your single-core system will still be subject to more frame-rate stutters than a multi-core system.

Is there a limit to how high we should go with core count? I don't believe so, though it definitely shouldn't be the priority at this point. Do you stand to benefit from more cores? Yes. The difference may be negligible, but it IS there. As long as you have a recent Intel or AMD 8-core/16-thread system, you are fine. If you have less, you are fine as well, though your system won't age as well. The exception is that I have noticed older quad-core systems struggling with specific tasks, such as recent games.
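The stutter argument above can be sketched with a toy scheduler model: a fair scheduler spreads background work over all cores, so the game's core absorbs only its share of the interference. The numbers are invented purely for illustration:

```python
def frame_time(game_ms: float, background_ms: float, cores: int) -> float:
    """Toy model: a fair scheduler spreads background work over all
    cores, so the game's core absorbs background_ms / cores of it."""
    return game_ms + background_ms / cores

# A 16.7 ms (60 fps) frame with 10 ms of background work per frame:
print(frame_time(16.7, 10.0, cores=1))  # single core: 26.7 ms frames
print(frame_time(16.7, 10.0, cores=8))  # 8 cores: 17.95 ms frames
```

Even with a completely single-threaded game, the single-core machine drops well below 60 fps while the 8-core machine barely notices, which is the point being made.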
 

amrnuke

Senior member
Apr 24, 2019
842
1,025
96
Arkaign said: "At some point the economy of scale will just not support adding ever more cores into single PCs in the mainstream. …"
As an avid armchair economist, I'd note that the economies of scale cut both ways with CPUs. Sure, ever more cores push the diminishing-returns envelope. Ever more clock speed delivers diminishing returns as well. Same with IPC. At some point the cost of all of this makes little sense... until the technology catches up and makes it cheap enough that it does make sense. That's where AMD is now. 64C/128T HEDT chips didn't make sense before: they were too costly. But here we are, and we have applications that can utilize all of it.

The great equalizers are market demand and time. I guarantee there will be >64C/>128T HEDT processors in the future, probably within 5 years. As long as you're still taking perceptible time to render images, and minutes to compile, there will be an advantage to ever-increasing CPU speed, be it from IPC increases, clock speed increases, or parallel computing via more cores/threads. If you have someone making $30/hr sitting around waiting on rendering or compiling, it's just a matter of time before it becomes cheaper to just buy a faster CPU.
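The wage argument is easy to put in numbers. A hypothetical back-of-the-envelope break-even calculation (every figure here is invented for illustration):

```python
def breakeven_days(cpu_cost: float, hourly_wage: float,
                   hours_saved_per_day: float) -> float:
    """Working days until the time saved pays for a faster CPU."""
    return cpu_cost / (hourly_wage * hours_saved_per_day)

# A $3,000 CPU upgrade, a $30/hr engineer, 1.5 idle hours saved per day:
print(round(breakeven_days(3000, 30, 1.5), 1))  # ~66.7 working days
```

If the upgrade pays for itself in about a quarter, the purchase decision makes itself, which is exactly the "matter of time" dynamic described above.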
 

Arkaign

Lifer
Oct 27, 2006
20,518
974
126
amrnuke said: "The great equalizers are market demand and time. I guarantee there will be >64C/>128T HEDT processors in the future. …"
That's all true in its own way, and it's my preferred paradigm. But for the $30/hr guy, if he's truly sitting around waiting constantly as jobs come in, it makes a lot more sense to lease on-demand big compute, perhaps into the thousands of CPUs. That's essentially what MS is after with Azure: moving diffuse compute elements into a more streamlined data services platform.

And of course on a more specific level, the folks at TurboRender are doing Premiere Pro heavy lifting hosting for nominal fees.

In the example where you have one or more $30/hr guys waiting on work, that path will always have a higher potential than a box on the desk. It's kind of early days at present, but it's highly serialized data that isn't overly latency sensitive, so as compute process threading gets more advanced, the big cloud services providers will only grow in relevance.

It could be a situation where :

8C/16T 3700X : 6.5hrs
64C/128T TR4 : 50 minutes
Cloud Render : 25 seconds

The architecture of big data services is utterly fascinating, because of the logistics of load balancing and trying to optimize for minimal waste. Following the example above, a cloud rack farm with core multi-terabit bandwidth and East/Central/West coast datacenters might have tens of thousands or hundreds of thousands of blades and other Rackmount metal. Obviously if demand was too intense at a given time, dedicating enormous numbers of cores would be less possible. On the other hand, if you spin each job up extremely aggressively, you clear the path for the next job coming through.
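The balancing problem sketched above, deciding where each incoming job lands so no rack sits idle while another drowns, is often approximated with a greedy "least loaded wins" rule. A toy version (job sizes and rack counts invented for illustration):

```python
import heapq

def assign_jobs(job_core_hours, racks):
    """Greedy load balancing: each job goes to the least-loaded rack."""
    heap = [(0.0, r) for r in range(racks)]  # (current load, rack id)
    heapq.heapify(heap)
    placement = []
    for job in job_core_hours:
        load, rack = heapq.heappop(heap)     # least-loaded rack so far
        placement.append(rack)
        heapq.heappush(heap, (load + job, rack))
    return placement

# Five jobs of varying size spread across three racks:
print(assign_jobs([8, 3, 3, 5, 1], racks=3))  # [0, 1, 2, 1, 2]
```

Real schedulers add preemption, priorities, and data locality on top, but the tension is the same one described above: spin each job up aggressively and you clear the path for the next one.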

I mean, I can't say it enough : the carcasses are piling up.

Local email servers, Exchange et al are nearly extinct
Local web servers are basically DOA
Local domain controllers are a dying breed
Local data storage is now more commonly smart NAS if any whatsoever, invariably tied to cloud storage mirror

Loads of specialty professional software has gone cloud-side. I worked extensively with Time Matters and Timeslips, supporting midsize law firms, but that is also on its last gasps. Ditto the client and scheduling software for healthcare, HVAC, etc. The local install is just a memory.

CMS apps and Salesforce, now basically entirely cloud side.

It's just endless. Thankfully the root of the PC : a capable platform, CPU, ram, OS, and I/O, is still going strong. The stupid idea of giving everyone tablets didn't pan out, mainly I believe because they are inefficient for intense data interfacing vs a mouse, keyboard, and as many monitors as you need or want. But instead of running all local apps and off of local network resources, people are running the bulk of typical tasks via cloud apps, cloud storage, etc.

Idk. I don't think PC is a dead end by any means. It just feels like the pressure to increase performance is becoming more rare than ever. A $5k over the top PC circa Athlon 64 FX60 days or Core 2 Extreme type, it STILL wasn't all that fast even way back then. I/O was terribly constrained, and loading apps and running daily multitasking was something that made probably most computer users period wish they had a faster, better PC to use.

But now? A $599 PC with an NVMe SSD and a lower-mid-range CPU with basic or integrated video is enough to make nearly every regular user's experience almost instant. Word, Excel, Outlook, Acrobat, a web browser or two, perhaps Pandora or Apple Music streaming some subtle tunes while they toil away. Most people that I deal with no longer have any issues with performance, even on shockingly old stuff, as long as it's configured well and running an SSD. 95% of office workers or non-gamer home users can do almost any common task virtually instantly. Remember splash screens? Haha, those old $5k workstations of 2005-2006 would take many seconds to load Photoshop, or even a nasty mid-aughts Flash-infested website. Now, on systems a tenth the cost or less, you barely get to see a splash screen. By the time you get to Ryzen 3k and Coffee Lake stuff, if you have a PCIe NVMe SSD and adequate RAM (8GB is still totally fine with an SSD for basically everyone besides true power users), opening things feels nearly telepathic, like clicking on things just 'reveals' what was already open, when in fact it's opening with 30-50x the storage bandwidth of the spinner era.

Idk man. I guess we'll see. I just don't see a mass market support for pushing the core envelope all that much more. 99% of PC users in 1995 could use a faster PC. Probably 85% by 2005 also would have noticed a faster PC. Today? I run into people whose Phenom or Sandy Bridge rig needs replacing, and I give them a kick-ass 2019/2020 rig, and they can't even tell me the difference. Things just change.

I actually think gaming will be the final cloud domino to fall. The current infrastructure is a little too erratic and even minor latency feels like garbage to experience, and I can't see any paradigm shifts to get us ~1-2ms global latency any time soon. But basically everything else people do on PCs can be a 'service' :(

If Windows itself ever goes full streaming + thin client as the default, I'll probably just puke on my shoes, give up, and move into a Kaczynski cabin in Montana and never look back.

I don't want any of what I'm describing to be the defacto reality, but at the same time I can't see how we won't get pushed off that cliff.
 

CluelessOne

Member
Jun 19, 2015
51
29
91
Arkaign said: "I sort of agree with the OP in a roundabout way. If you are in a place with gigabit to multi-gigabit fibre LAN... on-demand hosted scaling subscription type services make more sense. …"
The return of the VT100?
 

Mopetar

Diamond Member
Jan 31, 2011
4,696
879
126
Arkaign said: "At some point the economy of scale will just not support adding ever more cores into single PCs in the mainstream. ... They can pry PC building and upgrading from my cold dead hands haha."
First, there's always a chicken-and-egg problem with computers, because some applications can't exist until there's powerful enough hardware to run them, and because those applications don't exist, there's lower demand for the vastly more powerful hardware. Build it and they will come.

The other thing is that just because we hit a point where consumers don't need more power doesn't mean we don't have a need for more cores. Many people had 8-core mobile phones years ago, well before some of us even had an 8-core desktop CPU. If there's no need for additional powerful cores, expect to start seeing the same big.LITTLE approach, where we get efficiency cores.

And for the business and scientific communities there's no such thing as enough cores. In time there will be new consumer applications that can easily use 16 cores, and that will become the new baseline; if nothing else, web pages will continue to bloat, and it will take that many cores just to deal with the horrible JavaScript the site and its 50 ad trackers try running.
 

Arkaign

Lifer
Oct 27, 2006
20,518
974
126
If the Windows scheduler can improve further, then a big/little combo makes the most economic and logical sense: ferry the nuisance threads to Atom/Jaguar-style power-efficient cores with their own dedicated cache. I know that previously things like the split FPU in Bulldozer were suboptimal because of how Windows distributed work across cores.
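The "ferry the nuisance threads" idea is essentially what a heterogeneous scheduler does: route light background tasks to efficiency cores and keep the big cores free for demanding work. A toy routing rule (the threshold and task costs are invented for illustration):

```python
LITTLE_CORE_BUDGET_MS = 2.0  # hypothetical cutoff for 'nuisance' work

def route(task_cost_ms: float) -> str:
    """Send cheap background work to little cores, heavy work to big ones."""
    return "little" if task_cost_ms <= LITTLE_CORE_BUDGET_MS else "big"

tasks = {"telemetry": 0.3, "indexing": 1.5, "game_logic": 9.0, "render": 14.0}
print({name: route(cost) for name, cost in tasks.items()})
# telemetry/indexing -> little, game_logic/render -> big
```

Real schedulers use richer signals (utilization history, thermal headroom, QoS hints), but the payoff is the same: the heavyweight threads never queue behind housekeeping.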

More PC power in the box seems driven by relatively few things now: games, VR, and media creation. The last one is tailor-made for datacenter cloud services though; I doubt in ten years there will be more than a token number of professionals running local rendering, because it can all be done on an immensely larger scale on a hosted cluster, without the need for maintenance/etc. I even predict the very strong possibility that Adobe etc. will partner with AWS, MS Azure, or similar to bundle their pro software with cloud production rendering and data colocation (sync to NAS or desktop attached storage). As noted above, the bigger your example of power needs, the more this scales AWAY from desktop, because the advantage of being able to call up thousands, tens of thousands, or potentially even hundreds of thousands of cores on demand, from wherever you have your interface and login, is undeniably superior. As the level of value in the sector increases, so does the pull towards having it hosted with a monumentally larger power structure to work with.

I mean, take a raytraced animated motion picture project. Scenes that might take many hours or even days to render on a pretty respectable local network could easily be completed in minutes or even seconds with the corresponding power on demand. Ditto protein folding, particle physics calculations, emergent AI development, and so on.
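The "days become minutes" math only holds while the job stays close to embarrassingly parallel; Amdahl's law puts a hard ceiling on it. A minimal sketch, where the 99% parallel fraction is an assumed figure rather than a measurement of any real render pipeline:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Upper bound on speedup for a job whose parallel_fraction
    (0.0-1.0) of work scales perfectly across `cores`; the rest
    is serial and never gets faster."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

# Even a 99%-parallel render job tops out near 100x, no matter
# how many cloud cores you throw at it.
for n in (64, 1_000, 100_000):
    print(f"{n:>7} cores -> {amdahl_speedup(0.99, n):.1f}x")
```

Raytracing frames are close to perfectly parallel across a farm (each frame is independent), which is why the cloud pitch works so well for it; workloads with a bigger serial fraction hit the ceiling much sooner.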

That leaves the relatively mundane minor local load, and gaming, as the chief holdouts for any forward momentum on local performance increases. Obviously the IC makers will want to keep selling new product, but I think they will focus on integration, efficiency, and of course profits. Zen CPUs, especially the tiny 7nm chiplets, make for a solid point of reference. For mobile this will be fantastic. Along with further display tech and battery advancements, I could see sub-1W CPUs and multi-day battery life for business class laptops.

Web load is pretty marginal now compared to the usual major site a decade ago. The emergence of HTML5 brought a ludicrous increase in efficiency, and the death first of Shockwave, now Flash, and soon Java applets, is a great sign of things to come. The new paradigm, anchored partially on compatibility and server-side security angles, will be virtualized web content. It will simply feed you a display of the site via stream, and look and feel like a highly responsive local copy, but be nothing more intense than an image that is updated with user interactions and demand. No local code to exploit to be of any use, all encrypted and running on remote systems. No constant hassle of having to optimize for various browsers and platforms, just flow the entire site virtually.

Argh, to be supporting the logical death spiral that will face local compute is agonizing. Gaming will continue to be the bulwark until we achieve a truly ultra-low-latency, end-to-end comprehensive network infrastructure. Bandwidth is not nearly the issue that latency is. I hate when people, typically clueless execs, think that because streaming fixed, non-interactive 2D content like 4K video is a proven commodity that works and scales across most of the current internet infrastructure, game streaming will be anywhere near as good as local hardware. However, when that domino falls, that's the Rubicon. A circa 2028-2030 120-teraflop PCIe 5.0 GPU won't hold a candle to calling up petaflop performance.
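The latency-over-bandwidth point can be made with back-of-the-envelope numbers; the RTT, encode/decode, and render figures below are illustrative assumptions, not measurements of any real streaming service:

```python
def frame_budget_ms(fps):
    """Time available per frame at a given refresh rate."""
    return 1000.0 / fps

def streamed_input_latency_ms(rtt_ms, encode_ms, decode_ms, render_ms):
    """Input-to-photon latency for a streamed frame: the input travels
    to the server, the frame is rendered and encoded there, then the
    result travels back and is decoded locally."""
    return rtt_ms + encode_ms + decode_ms + render_ms

budget = frame_budget_ms(120)  # ~8.3 ms per frame at 120 Hz
streamed = streamed_input_latency_ms(rtt_ms=30, encode_ms=5,
                                     decode_ms=5, render_ms=8)
print(f"frame budget: {budget:.1f} ms, streamed latency: {streamed:.1f} ms")
# The streamed path blows the per-frame budget several times over,
# and no amount of extra bandwidth shrinks the RTT term.
```

Which is exactly why pre-buffered 4K video streams fine today while interactive game streaming doesn't: video tolerates seconds of buffering, input does not.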

RIP me I guess. I will find out what CRT, radio, and typewriter service professionals felt after those became basically extinct. I have no interest in going to work in some dungeon-like maze of cloud service racks, and for every 5000 IT employees today, we might see 1% of those slots remain as the remote stuff becomes disposable dumb interface equipment that neither needs service nor is even designed to be maintained or modified. Glued together, disposable, ubiquitous.

I'm gonna keep on vibing like it's 1999, Goldfinger Athlons, early GeForce and Radeons, Castle Wolfenstein, pre-9/11 infinite warfare state, when the biggest scandals were generally presidential blowjobs or drugs.
 

moinmoin

Golden Member
Jun 1, 2017
1,664
1,610
106
I doubt media creation will ever fully go into the cloud, for the simple reason that it moves the I/O bottleneck to the user's internet bandwidth. Where that isn't a dealbreaker outright, ISPs in many places still hilariously limit upload bandwidth. If media creation involves 4K+ video or something similarly ubiquitous but large in size, being able to work with it locally may still turn out to be a real time saver.
 

scannall

Golden Member
Jan 1, 2012
1,676
1,046
136
I doubt media creation will ever fully go into the cloud, for the simple reason that it moves the I/O bottleneck to the user's internet bandwidth. Where that isn't a dealbreaker outright, ISPs in many places still hilariously limit upload bandwidth. If media creation involves 4K+ video or something similarly ubiquitous but large in size, being able to work with it locally may still turn out to be a real time saver.
The last 'media shop' I worked at before retiring had around 350 artists. Hammering just our local servers caused slowdowns. That many artists having to do everything on the Cloud?
 
