It may have had more to do with the cost of keeping people talented enough to do those deep dives properly. When your writers are getting hired away by big tech firms or leaving to start their own sites, there isn't much you can do to keep them. If they don't have anyone capable of...
The problem with AI isn't that it makes things up, whether arrogantly or not. It is that, as the consumer of that information, you have no cues to help you judge whether to accept the answer.
I recently encountered a Linux system (DD-WRT router) that had fully deprecated...
It is the same cache per core only if you use all cores.
In the world most of us occupy, our CPUs are typically loading only a few cores at a time, so you get more cache per core in those circumstances. And even if you're the outlier who often runs all cores at 100%, you aren't any worse off...
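The arithmetic behind that is simple. A toy sketch of effective cache per busy core on a shared last-level cache (the 36 MB / 12-core figures here are made up purely for illustration):

```python
# Toy illustration: cache available per busy core on a shared LLC.
# The 36 MB / 12-core numbers are hypothetical, chosen only for arithmetic.

SHARED_L3_MB = 36
TOTAL_CORES = 12

def cache_per_active_core(active_cores: int) -> float:
    """MB of shared cache per busy core when only `active_cores` are loaded."""
    return SHARED_L3_MB / active_cores

print(cache_per_active_core(TOTAL_CORES))  # all cores busy: 3.0 MB each
print(cache_per_active_core(2))            # lightly loaded: 18.0 MB each
```

Same total cache either way; what changes is how many cores are contending for it.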
Yep, I've been telling friends and family since late February that we'll start seeing signs this summer, and that by fall everyone will "feel" the downturn. It doesn't matter if all the tariffs are "paused" again, or 200 imagined deals are announced that just happen to be identical to the deals that...
That's basically what it has worked out to be for iPhones, so it wouldn't be a shock if Macs ended up similarly.
Honestly, though, using a Mac/PC with an obsolete OS is less of an issue than using a phone with one. With a phone you're basically always under potential remote attack via...
I'm talking about using fp-specific benchmarks designed to test floating-point performance, not general-purpose benchmarks that happen to include some fp.
The way JavaScript uses fp doesn't make those fp benchmarks: it uses floating point for ALL numbers (which IMHO was the stupidest design decision in any language in the...
There's a third problem he's also overlooking: granularity. NAND writes happen in pages that are far, far larger than cache lines. So even if you had magic NAND with unlimited write cycles, no erase required before write, and latency comparable to DRAM, you'd STILL have to re-architect the entire...
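To put rough numbers on the granularity mismatch, here's a sketch. A 64-byte cache line is typical; 16 KiB is a common NAND page size, though the exact page size varies by part, so treat that figure as an assumption:

```python
# Granularity mismatch between CPU cache lines and NAND pages.
# 64 bytes is the usual cache-line size; 16 KiB is a common (but
# part-dependent, hence assumed) NAND page size.

CACHE_LINE_BYTES = 64
NAND_PAGE_BYTES = 16 * 1024

# Writing back a single dirty cache line would still force programming
# an entire page, so the write amplification for fine-grained stores is:
amplification = NAND_PAGE_BYTES // CACHE_LINE_BYTES
print(amplification)  # 256 cache lines' worth of NAND per 64-byte writeback
```

That 256x (for these assumed sizes) is on top of the endurance and erase-before-write problems, which is why "NAND as RAM" needs more than better NAND.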
Perhaps so, but that's primarily hand-crafted assembly, and in the case of stuff like memcpy() it is far from generally useful. That is, the code has to check whether calling the SIMD path is worth doing, which it isn't for short copies. Ditto for the kernel's use of SIMD.
Which is fine -...
"[A19 Pro] Estimated to be 4000+"
That doesn't sound like someone with inside information; it sounds like someone making a guess, pretty much in the same ballpark as the guess everyone here would have made.
It doesn't even make any sense. Why pay for more expensive DRAM to get at best the same speed? If you cut the bus width by 25%, the memory needs to be clocked 33% faster just to make up for it.
Base M4 uses LPDDR5X-7500, so LPDDR6-10666 (the supposed launch speed) would provide a small bandwidth...
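For concreteness, a sketch of that bandwidth arithmetic. The 96-bit LPDDR6 configuration (four 24-bit channels versus eight 16-bit LPDDR5X channels on a 128-bit base M4 bus) is my assumption for illustrating the "25% narrower" case:

```python
# Peak DRAM bandwidth = (bus width in bits / 8) * transfer rate (MT/s).
# The 128-bit LPDDR5X and 96-bit LPDDR6 bus widths are assumptions
# used only to illustrate the narrower-but-faster tradeoff.

def bandwidth_gbs(bus_bits: int, mt_per_s: int) -> float:
    """Peak bandwidth in GB/s for a given bus width and transfer rate."""
    return bus_bits / 8 * mt_per_s / 1000

m4_lpddr5x = bandwidth_gbs(128, 7500)   # 120.0 GB/s
lpddr6_96bit = bandwidth_gbs(96, 10666) # ~127.99 GB/s

print(m4_lpddr5x, lpddr6_96bit)
print(lpddr6_96bit / m4_lpddr5x)  # ~1.067: only ~7% more despite faster DRAM
```

So under these assumed widths, the much faster (and pricier) LPDDR6 buys only a modest bandwidth bump over the existing LPDDR5X setup.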
I'd like to see a single-core "native int" result that doesn't leverage any SIMD (other than incidental use by system libraries, i.e. if a memcpy() call uses it; the benchmarks compiled for GB6 would have SSE/AVX/NEON/SVE/SME disabled in the compiler flags) and doesn't include any fp, just...
If you make a hybrid controller able to handle either LPDDR5X or LPDDR6, it delivers 24 bits of LPDDR6 or 16 bits of LPDDR5X; that's where that difference comes from.