> No, I included that.

Ok, I don't think your logic is sound. I don't know why you'd buy a chip with the primary purpose of gaming and choose to prioritise the web performance over the gaming. You may be right about the configuration, of course, but it won't be because of the logic in these replies.
> What would stop someone from making a sort of "gaming frame" for iPhone that includes physical controls and on the backside some sort of cooling plate (whether thermoelectric or vapor diffusion w/ fans)

The Motorola Moto Z? The one with the Moto Mods already did that: a magnetic backside with contact points for a larger battery, a JBL speaker, or a gamepad controller with thumb sticks and buttons.
I suppose the hang-up would be getting developer support for what would be a low-volume product (or products, if more than one came out and they worked differently). If Apple cared about hardcore gaming they could make the thing themselves, and that would probably guarantee developer support.
> Ok, I don't think your logic is sound. I don't know why you'd buy a chip with the primary purpose of gaming and choose to prioritise the web performance over the gaming. You may be right about the configuration, of course, but it won't be because of the logic in these replies.

It has gaming too; I don't understand the problem. Just make sure the game is on the right CCD when fine-tuning the settings, and done. Best of both worlds until they solve the clock rate (they won't until they move all L3 off-chip).
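For anyone who wants to try that pinning by hand rather than trusting the scheduler, here is a minimal Linux sketch. The core numbering and the `./my_game` binary are assumptions; check your own chip's topology first (on Windows, Process Lasso or `start /affinity` serves the same purpose).

```python
import os
import subprocess

# Assumed core layout: cores 0-7 = vcache CCD, cores 8-15 = frequency CCD.
# Verify the real mapping on your own chip, e.g. via
# /sys/devices/system/cpu/cpu*/cache/index3/shared_cpu_list.
VCACHE_CORES = set(range(0, 8))

# Launch the game, then restrict it to the vcache CCD. Threads the game
# spawns afterwards inherit this affinity mask.
game = subprocess.Popen(["./my_game"])  # placeholder binary name
os.sched_setaffinity(game.pid, VCACHE_CORES)
```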
> I agree AMD is staying the course with one vcache ccd. However, having a faster non vcache ccd helping in web apps vs having gaming run smoothly, which is the selling point of the vcache models in the first place, seems like a much better tradeoff than keeping the existing structure of a faster non vcache ccd.

In this Phoronix test the 9800X3D is ~5% faster than the 9700X@105W and ~12% faster than the 9700X@65W (they mention an average of 9%), so the 3D vcache also helps in non-gaming tasks, which is why it could make sense to also release a dual vcache 9950X3D. They could, for all intents and purposes, release the two regular ones and a 9970X3D with dual vcache in a premium segment.
Also, it may not be important in the end, but you're making an assumption that they're not saving the best-binned CCDs for the higher-end 16-core vcache models. You're basing that assumption on what can be done with single-CCD models right now; you don't know what they can hit frequency-wise with vcache yet.
> why it could make sense to also release a dual vcache 9950X3D.

This is very counterproductive to squeezing the maximum amount of margin out of every available Si piece.
> This is very counterproductive to squeezing the maximum amount of margin out of every available Si piece.

Why?
> This is very counterproductive to squeezing the maximum amount of margin out of every available Si piece.

AMD should just do premium-priced limited runs with much higher margins for the few people who would buy one at any price. Same with all kinds of OC-enabled Threadripper counterparts to Epyc chips. And sell them all only through their web store, to cut out scalpers and avoid mixing them with the regular client products.
> If the product is built so that the CCD with vcache is also the faster of the two CCDs, doesn't that make the scheduling much easier?

I'd say it makes it *easier*, but doesn't make it *easy*.
> Because the vcache ccd with the fastest cores will be allocated first?

First-come-first-served is not an appropriate approach in many cases. It may be a good-enough approach in several specific scenarios.
> The CCD with v-cache is slower

It remains to be seen what the product will look like.
> Today's operating systems just don't have information about memory access patterns of the various tasks to make this sort of decision all on their own.

They don't bother; that's the actual problem. An OS has the best insight into how an application uses memory/storage, and over several runs it could cache the most frequently accessed data long-term and immediately load it into RAM, so applications run at maximum efficiency without waiting to warm up. Users could opt to store caching profiles for their most important applications, and all it would cost is storage space.
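As a toy sketch of that idea (not how any shipping OS implements it; Windows' SuperFetch and the old Linux `preload` daemon are the closest real analogues), one could count file accesses across runs, persist the counts, and hint the kernel to pull the hottest files into the page cache before the next launch. The profile path and helper names are invented:

```python
import json
import os

PROFILE = "app_profile.json"  # hypothetical per-application profile

def record_access(counts, path):
    """Bump the access counter for a file the application just opened."""
    counts[path] = counts.get(path, 0) + 1

def preload_hottest(counts, top_n=10):
    """Ask the kernel to read the most frequently used files into RAM."""
    for path in sorted(counts, key=counts.get, reverse=True)[:top_n]:
        fd = os.open(path, os.O_RDONLY)
        try:
            # POSIX_FADV_WILLNEED starts asynchronous readahead on Linux.
            os.posix_fadvise(fd, 0, os.path.getsize(path),
                             os.POSIX_FADV_WILLNEED)
        finally:
            os.close(fd)

counts = json.load(open(PROFILE)) if os.path.exists(PROFILE) else {}
preload_hottest(counts)  # warm the page cache before the application starts
```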
> In this Phoronix test the 9800X3D is ~5% faster than the 9700X@105W and ~12% faster than the 9700X@65W

A little caveat: the 9800X3D comes with a 120W TDP, not 105W. So that gain is partly due to the higher power budget and partly due to the vcache benefits.
> An OS has the best insight into how an application uses memory/storage, and over several runs it could cache the most frequently accessed data long-term and immediately load it into RAM, so applications run at maximum efficiency without waiting to warm up. Users could opt to store caching profiles for their most important applications, and all it would cost is storage space.

I disagree. Monitoring these things — that is, main memory accesses of different concurrent workloads, and how they interact with each other, with processor cache capacity, and with the processor's caching policy — takes nontrivial effort. And predicting how the sum of running applications will behave is even harder, and would be highly speculative. (Remember what Niels Bohr said about prediction.)
> one would no longer have to give hints which threads prefer cache and which threads prefer clock. But one would still have to give hints which threads are to be blessed to get to run on the good CCD and which threads shall take the back seat. Though in many cases such information already is provided, via scheduling priorities, and scheduling classes (interactive work versus batch jobs).

Re: scheduling classes — on the other hand, it *may* sometimes be more optimal to run a high-priority latency-sensitive interactive task on the low-cache CCD and a simultaneous low-priority non-interactive task on the high-cache CCD. That's if memory-bandwidth-demanding accesses of the latter task drag down the memory access latency of the former task¹, but the access patterns are such that the interactive task is fine with 32 MB of L3 while the batch task happens to be greatly helped by 96 MB.
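To make that tradeoff concrete, here is a hedged sketch of the decision rule being described. The attribute name and thresholds are invented; no real OS exposes a task's working-set size this cleanly.

```python
def pick_ccd(task):
    """Toy placement rule for one 96 MB L3 vcache CCD plus one 32 MB L3 CCD.

    `task.working_set_mb` is a hypothetical estimate of the task's hot data.
    """
    PLAIN_L3_MB = 32

    if task.working_set_mb <= PLAIN_L3_MB:
        # Fits in the small cache either way: take the higher clocks and
        # leave the big cache free for whoever actually needs it.
        return "frequency_ccd"
    # Working set overflows 32 MB but may stay resident in 96 MB: even a
    # low-priority batch job can be the better tenant for the vcache CCD.
    return "vcache_ccd"
```

Note that priority never appears in the rule: per the argument above, cache footprint, not importance, decides which die a task should land on.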
> I disagree. Monitoring these things — that is, main memory accesses of different concurrent workloads, and how they interact with each other, with processor cache capacity, and with the processor's caching policy — takes nontrivial effort. And predicting how the sum of running applications will behave is even harder, and would be highly speculative. (Remember what Niels Bohr said about prediction.)

We will never know if no one tries. Monitoring process-related statistics might very well be CPU-intensive, in which case the OS can let the user turn on a "training" mode where the OS learns as much as it can about the user's typical usage over several days. Once enough data has been collected, training mode can be turned off and the CPU overhead of the monitoring goes away.
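A minimal sketch of that train-then-stop pattern; the window, interval, and sampling source are all made up for illustration:

```python
import time

TRAINING_DAYS = 3        # user-chosen training window
SAMPLE_INTERVAL_S = 60   # coarse sampling keeps the monitoring overhead tiny

def sample_usage():
    """Placeholder: collect per-process memory/IO stats, e.g. from /proc."""
    return {}

def run_training_mode():
    deadline = time.time() + TRAINING_DAYS * 86400
    samples = []
    while time.time() < deadline:
        samples.append(sample_usage())
        time.sleep(SAMPLE_INTERVAL_S)
    # Training is over: persist the profile and stop sampling, so the
    # monitoring overhead disappears from normal use.
    return samples
```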
> AMD should just do premium-priced limited runs with much higher margins for the few people who would buy one at any price. Same with all kinds of OC-enabled Threadripper counterparts to Epyc chips. And sell them all only through their web store, to cut out scalpers and avoid mixing them with the regular client products.

Retail Threadrippers are already sufficiently premium compared to Epyc (considering the volume discounts), yet the Threadrippers are ridiculously handicapped.
They won't ever do that anyway though.
Probably reading too much into it, but Gamers Nexus just uploaded their weekly news video and ended with a brief mention of the 9950X3D/9900X3D release-date rumor article from Wccftech, conspicuously not mentioning the later leak about a single vcache CCD, and he even specifically says they've heard that "AMD is planning a major rework of its implementation of dual CCD parts".
The preceding Intel news item in the video was from yesterday, so I don't think it's a matter of them filming before the newer rumor dropped, and it doesn't seem like something he'd miss.
> ... the training mode ...

Seeing this... has anybody here read David Huang's recent post about broken CPPC preferred cores? Even 5-year-old "easy stuff" like that is apparently hard to implement correctly and test in the real world. So I'd be very skeptical about just about anything scheduler-related.
> I don't see AMD offering dual X3D versions. To me it just sounds like something someone pulled out of their bum and has been repeated so much it is being regarded as plausible. I don't think they could hit the right price with a dual X3D version. It would fix scheduling issues though.

If there is a market for the 14900KS, then I also think there is a market for a lineup with three pricing tiers: 9900X3D, 9950X3D, and 9970X3D (16-core dual vcache).
> There still would be scheduling issues with threads migrating to the other CCD. Sure a vcache too but there would still be need to access the other CCD.

Yes, but in a game like ACC, where the 7800X3D > 7950X3D > 7950X >= 7700X, there is no regression going from one CCD to two (7700X -> 7950X). But having vcache on only one CCD (7800X3D -> 7950X3D) seems to give you only some of the difference between the 7700X and the 7800X3D, and the 1% lows especially are not improved as much.
> Probably reading too much into it, but Gamers Nexus just uploaded their weekly news video and ended with a brief mention of the 9950X3D/9900X3D release-date rumor article from Wccftech, conspicuously not mentioning the later leak about a single vcache CCD, and he even specifically says they've heard that "AMD is planning a major rework of its implementation of dual CCD parts".
> The preceding Intel news item in the video was from yesterday, so I don't think it's a matter of them filming before the newer rumor dropped, and it doesn't seem like something he'd miss.

Yeah, this could mean anything, though. At this point we would have at least credible rumors from some leaker if it were dual vcache or two CCDs unified over a single large vcache. It ain't happening. It probably means the previous Game Bar/scheduling setup is handled differently, or some other nothingburger.
> For the kind of person who is just using their PC as an appliance, pretty much using the browser for everything and not getting any third-party applications (and that's a huge chunk of the overall PC market), 256 GB is more than enough.

I've never had a storage issue on a Mac. Even my 32-64 GB RAM Mac systems that I used for work had the lowest storage tier, and at most I've probably used around 60% of it. A big part of that is likely because I don't game or do much beyond software development and content creation/editing, but I've actually never come close to 100 GB, let alone 256. Apple was definitely too slow on RAM, as I've had issues with Docker and Photoshop blowing things up in that regard, but not storage.
They aren't marketing the Mini to the kind of people who read and post in tech forums. We aren't the customer they've designed it for. Your complaints read like someone posting in a forum for auto enthusiasts complaining that a Ford Focus doesn't have enough horsepower, and upgrading to the Focus RS where you get real horsepower costs too much and at that price there are better alternatives.
> This makes no sense. Even a Comet Lake CPU can chew through hundreds of millions of triangle size and position calculations per second, and you think it has an issue with 2D charts with a few hundred data points?

Simple 3D math is easy, but...
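For a rough sense of scale, a back-of-the-envelope benchmark sketch (results vary by machine and the workload shape is invented; even modest desktop CPUs typically land in the tens to hundreds of millions of point transforms per second here, despite Python overhead):

```python
import time
import numpy as np

# A chart-sized workload: 500 2D points, retransformed 100,000 times.
pts = np.random.rand(500, 2).astype(np.float32)
rot = np.array([[0.8, -0.6], [0.6, 0.8]], dtype=np.float32)  # 2D rotation

t0 = time.perf_counter()
for _ in range(100_000):
    out = pts @ rot
dt = time.perf_counter() - t0
print(f"{100_000 * len(pts) / dt / 1e6:.0f} million point transforms/s")
```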
What kind of evidence can you provide to back up such a claim?
> Web browser...

...exactly. I am really surprised we don't have a PCIe expansion card that focuses on JS/CSS/HTML acceleration at this point.
Tradingview site.
> well pls don't tell me they are still sticking with that xbox gamebar solution...

So I am a huge gamer. Currently I spend hours per day gaming; when I was working, gaming was second on the list, and since I'm now not working, it is first. I've NEVER had an issue with the Game Bar. It has always properly detected games (I am forced to use it for social features). That being said, my CPU (7950X) does not depend on it for scheduling, so YMMV.
IMO, that's a broader issue with AMD's product planning. They are very good at penny-pinching, and mostly it's a very smart choice. However, sometimes you just need that no-compromise halo SKU for mindshare, even when it's low volume.
It would be the best-in-everything, no-compromise CPU. If it's expensive to produce, they could just rebrand it as "9990X3D BE" or something and easily sell it for $999 (at least to the early adopters). Nvidia has unfortunately proven time and again that there is a market for such products, as computing is still a really cheap hobby compared to many others.
Since this product cycle is quite long, I still have a fool's hope they change their mind and release a follow-up SKU. They have plenty of time to do the validation and productization, even if it arrives in late 2025, as there really is no competition. It might not have made that much sense when Arrow Lake was supposed to be good, but now we know that AMD has two years of "smooth sailing" ahead. There are always consumers willing to pay a premium for "the best".
I know a few developers who will skip an upgrade because of not having that option (besides myself), and that sentiment is echoed both here and on Twitter.
> Even AAA games on iPhones suck. Handheld gaming needs physical controls and active cooling.

Controls, yes. However, I've never had a performance or cooling issue on an iPhone (ignoring Apple's planned obsolescence, that is), and honestly, not all phone users are gamers and gamers aren't the majority, so I wouldn't expect any change there. Active cooling is NOT required on a gaming device of any kind; OEMs/ODMs include it for performance. Even PS5-level games don't need an actively cooled system; it is just cheaper to build one that way, since large heatsinks cost more than a fan. Economics suck.
The Switch is proof of this.
Yep, but this hypothetical SoC is for PCs, not handhelds, IMO.
Excellent point.
Excellent Answer!
If AMD wanted to get into the mobile market, they would absolutely need to have their own modem built into the SOC to be competitive.
> It isn't faster, it's 300-400MHz slower. Peak 1T performance would suffer.

Past generations were, but based on current information, I'd bet a lot of money that is no longer the case. 200 MHz? Maybe. However, it is increasingly looking like there won't be a boost-frequency reduction at all, and only a slight base-clock reduction, if any. Every indication I've seen shows at least 5.5 GHz single-core, and multi-core not much lower. The last rumor I read claimed an ES had a 5.5/4.5 GHz boost/base for a dual-CCD part. The cache part of the equation was NOT specified, so take from that what you will.
> I don't see AMD offering dual X3D versions. To me it just sounds like something someone pulled out of their bum and has been repeated so much it is being regarded as plausible. I don't think they could hit the right price with a dual X3D version. It would fix scheduling issues though.

I thought that as well; however, even after the 9800X3D release, AMD has indicated that dual-CCD parts will be different/special. I honestly have no idea how this will shake out. I'm hoping for a dual V-Cache release, and ironically enough, not for gaming reasons, though if you read above, gaming is actually a huge part of what I'm doing on PC right now. Those rumors have also not died out since other information was released suggesting otherwise, so who knows.
> STX C2C Latency Fixed

Any word on whether it affects idle power consumption?