F5, F5, F5. Newegg marketplace!!!
Comparing retail pricing to scalper pricing weakens the argument.
There's no conspiracy in stating that back then Maxon was full-on Intel. Not that they should be blamed for that, given what AMD had at the time.
Quit posting 10 year old videos to back up your conspiracy theories. I got more recent videos of AMD partnering with Maxon to deliver better performance to their customers. Care to see them?
Sure, but the EUR is also stronger than the USD, so the two effects should normally cancel each other out, meaning the USD price normally ends up being the EUR price as well. Lots of 'should' and 'normally' in there.
Yes. In Europe prices include VAT.
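As a rough back-of-the-envelope check (every number below is an assumption, roughly 1.20 USD per EUR and 20% VAT, not official figures), the two effects do largely cancel:

```python
# Back-of-the-envelope: why a USD MSRP often reappears as roughly the same
# number in EUR. All figures here are assumed round numbers, not official data.
usd_msrp = 589.00        # hypothetical USD list price (US prices exclude sales tax)
usd_per_eur = 1.20       # assumed exchange rate
vat = 0.20               # assumed EU VAT rate

eur_pre_vat = usd_msrp / usd_per_eur    # convert to EUR, still tax-free
eur_sticker = eur_pre_vat * (1 + vat)   # EU shelf prices include VAT

print(f"{eur_pre_vat:.2f} EUR pre-VAT -> {eur_sticker:.2f} EUR on the shelf")
# With these round numbers the /1.20 and the *1.20 cancel exactly, so the EUR
# sticker lands on the same figure as the USD MSRP.
```

In practice the exchange rate and the local VAT rate drift apart, which is where all the 'should' and 'normally' comes in.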
If this pricing holds then I would say Intel thinks they are on top again and are pricing accordingly. Pricing seems a little "cheeky" to me but we shall see what the market will bear.
Quit posting 10 year old videos to back up your conspiracy theories. I got more recent videos of AMD partnering with Maxon to deliver better performance to their customers. Care to see them?
There's no conspiracy in stating that back then Maxon was full-on Intel. Not that they should be blamed for that, given what AMD had at the time.
I don’t see the point of Apple ever spending xtors and floor plan on SMT when they already are spending both on small cores. Intel's line of cores has been SMT for a long time; and those cores are also in Xeons. Hence the different approach, at least for now.
While I doubt Apple will pursue SMT, just because the M1 doesn't have it enabled doesn't necessarily mean it isn't in there. That's not easy to get right given all the security issues that have been found with it over the past few years, so even if they wanted to do it they might need an iteration or two before they would announce/enable it.
Today Intel claims that Cinebench is useless, or at least that it is not a useful real-world benchmark.
If we know that Cinebench doesn't care about system memory performance and doesn't care about a big L3 cache, then hm, what has changed from R20 to R23?
Using the Cinebench R20 data below, I isolated the Golden Cove and Gracemont MT and "Single Core" scores. I used simultaneous equations with the 12700K and 12900K data, then used other data to calculate the numbers below. Take it with a grain of salt, of course.
UPDATE - I calculated IPC increase incorrectly. New chart is correct.
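For anyone who wants to reproduce the method (not the chart's actual numbers): with the 12700K at 8P+4E and the 12900K at 8P+8E, the two MT scores give two equations in two unknowns. A minimal sketch with placeholder scores:

```python
# Sketch of the simultaneous-equation approach described above.
# The MT scores are placeholders, NOT the measured values behind the chart.
# 12700K: 8 Golden Cove (P) cores + 4 Gracemont (E) cores
# 12900K: 8 Golden Cove (P) cores + 8 Gracemont (E) cores
score_12700k = 9000.0    # placeholder Cinebench R20 MT score
score_12900k = 10500.0   # placeholder Cinebench R20 MT score

# score_12700k = 8*P + 4*E
# score_12900k = 8*P + 8*E
# Subtracting the first from the second: 4*E = score_12900k - score_12700k
e_per_core = (score_12900k - score_12700k) / 4
p_per_core = (score_12700k - 4 * e_per_core) / 8

print(f"Gracemont MT contribution per core:   {e_per_core:.0f}")
print(f"Golden Cove MT contribution per core: {p_per_core:.0f}")
# Caveat: clock differences between the two SKUs and SMT on the P cores are
# folded into these per-core numbers, hence the grain of salt.
```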
Is this the part where I have to remind everyone that Cinebench R20 and R23 both use Embree?
It is used starting with CB R15.
Not necessarily. For a long time, Maxon allowed divergence of the Cinebench and Cinema4D code bases. R20 was the first Cinebench to actually use Embree so far as I can tell (unlike Cinema4D, which has supported it since R15).
If you read the press releases for Cinebench R20, you'll note everyone making a big deal about the inclusion of Embree support.
One of the things that I’ve learned in my 11-year ‘vacation’ [at VMware and EMC] is delivering silicon that isn’t supported by software is a bug. We have to deliver the software capabilities, and then we have to empower it, accelerate it, make it more secure with hardware underneath it. And to me, this is the big bit flip that I need to drive at Intel.
Greg Lavender, Intel CTO:
We have these efficiency cores and performance cores. Don’t just leave it up to the operating system to decide what to do. Let the OEMs, ODMs and the channel partners have access to APIs that can do whatever they want to do, let’s say, in telco or at the edge.
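To make the "don't just leave it up to the operating system" idea a bit more concrete: software can already steer work onto specific cores with plain OS affinity calls. A minimal sketch, assuming a Linux machine where logical CPUs 0-15 are the P-core hardware threads and 16-23 are the E cores (the numbering and the workload functions are made up; whatever APIs Intel exposes to partners would presumably be richer than this):

```python
import os

# Steer work onto chosen cores instead of leaving placement to the scheduler.
# ASSUMPTION: a Linux box where logical CPUs 0-15 are the P-core hardware
# threads and 16-23 are the E cores; adjust the sets to the real topology.
P_CORES = set(range(0, 16))
E_CORES = set(range(16, 24))

def run_on(cpus, fn, *args):
    """Pin the current process to the given CPUs, run fn, then restore affinity."""
    previous = os.sched_getaffinity(0)   # remember the current CPU set
    os.sched_setaffinity(0, cpus)        # restrict to the chosen cores
    try:
        return fn(*args)
    finally:
        os.sched_setaffinity(0, previous)

# Hypothetical usage: latency-critical work on P cores, background work on E cores.
# run_on(P_CORES, handle_request)
# run_on(E_CORES, reindex_database)
```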
Why? Apple’s designs are very wide with a high level of ILP, and SMT is a way to sustain high ILP. Why spend more xtors on something they have already achieved? They will see a lower SMT uplift than Intel and AMD - one cannot get blood from a stone.
Eventually SMT will be the easiest way to add performance. Perhaps Apple has been experimenting with it. AMD showed they were able to do better than Intel. Perhaps Apple wants to do better still.
To me it looks more and more like Gelsinger is the real deal; he has made Intel aware that it needs to execute consistently on all fronts: arch, node, software.
The penalty for a stall in a high-speed, narrow, deep pipeline is, IIRC, worse than for a wide, shorter pipeline (all else being equal, which isn't exactly the case). It comes down to where Apple should spend more xtors to get the best performance. Perhaps, going forward, if Apple lengthens the pipelines of its large cores for higher-end MXX chips, it would make more sense to add some level of SMT as Intel & AMD do. This is my two cents. Apple, of course, knows how they want to spend xtors and die space much better than I do.
@Ajay TBH, as I see it, it is the other way around. A very wide architecture, just like a very deep architecture, is difficult to feed. I know that Apple did a lot to maintain high ILP. But for me it is simply hard to believe that there is no significant SMT potential in M1, because there is only so much you can do if the instructions aren't in your desired mixture and your pipeline is already stalling. Ofc I am only an armchair expert, so history will tell.
I think, among other things, he is talking about embedding Intel engineers with large software makers. This is what Nvidia has done with AAA gaming companies and big AI software vendors. Well, lately, probably a lot of video conferences. Nvidia will actually write code in some instances for its top-level partners (ergo - "the way it's meant to be played").
To me he's just saying, "No more Itanium, okay?" And yes, I know that Itanium is dead, but Gelsinger doesn't want to oversee the introduction of another uarch family with capabilities that will go broadly unsupported by developers. Either Intel will have to stick to the existing software paradigm or they'll have to provide the tools to developers all up and down the chain to utilize new hardware capabilities.
We have to deliver the software capabilities, and then we have to (...) make it more secure with hardware underneath it.
I sure hope he keeps his word and that this leads to a paradigm shift at Intel, ensuring security first at the chip design level. Currently it's rather that the software has to make the hardware secure than the other way around.
Intel's FOSS support is pretty outstanding already.
Intel will probably provide direct support to some high-profile FOSS projects as well. In fact they've been doing that for years in some capacity.