It's quite meaningless, isn't it? The Cell can in theory provide 230 GFLOPS, yet it's much slower in practice. My Haswell is also around 200 GFLOPS, yet it's not twice as fast as a 3570K.
Going by flops it should be, no? I did point out it was a very simplistic data point for showing change, improvement and so on.
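For what a flops figure is actually worth: theoretical peak is just cores × clock × FLOPs per cycle, and none of the chips mentioned above sustain their peak on real code. A minimal sketch with illustrative inputs (the per-cycle figures are assumptions for illustration, not specs for the parts named above):

```python
def peak_gflops(cores: int, clock_ghz: float, flops_per_cycle: int) -> float:
    """Theoretical peak = cores * clock * FLOPs issued per cycle per core."""
    return cores * clock_ghz * flops_per_cycle

# Illustrative inputs only: FLOPs/cycle depends on SIMD width and on
# single vs double precision, which is one reason quoted figures rarely
# line up, and sustained throughput on real code is lower still.
print(peak_gflops(4, 3.4, 16))  # quad core, narrower SIMD issue -> 217.6
print(peak_gflops(4, 3.5, 32))  # same core count with FMA       -> 448.0
```

On paper the second chip is roughly twice the first; in practice it isn't, which is exactly the point being made above.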
Why would you need two cores for the OS and background tasks? I'm sure you could settle for one, if a dedicated core is even needed at all.
For all the processes running in the background: faster switching, services, etc. Some of these run on dedicated processor units as well.
But I find amusing the notion that AMD was the only company able to provide a solution for the console OEMs. IBM and Nvidia already had expertise in the area, nothing prevented Intel from entering, and those companies could devote far more resources to the venture than AMD. AMD's main advantage is that nobody else with the resources to develop the console chips was willing to go as low on price (mid-teens margins) or was in as much need of cash (taking upfront payments from the OEMs, further eroding the margins). AMD simply offered the best "bang for the buck" for the console OEMs, and given that the consoles were fundamentally cost-constrained this time around, that is *the* selling point, regardless of any technical limitations one can see in the solution.
AMD was the only company that met all of the criteria, and in the end the best pick for PC gamers too. The only reason they didn't go ARM was that it had no 64-bit support at the time. Going ARM also has a strong business case for this market, arguably stronger than x86. But at that moment, x86 was the better choice.
Do you think you can make a game on the level of the planned SC's complexity run on six weak cores? You just can't: a minimum of two are going to be used for physics and another two for API rendering, which leaves you with only two for game logic, AI and audio. Not good; time to use the chainsaw. And I'm not even sure two are going to be enough for rendering.
Is SC StarCraft...? If so, my answer is yes. There was a time when you would dedicate a whole core to a single system; that shouldn't be done anymore. Now you parallelize as much as you can and go from there.
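A toy sketch of the shape of that idea, assuming a generic task-based design rather than any particular engine (real engines use native thread pools; this just shows the structure): instead of pinning physics or audio to fixed cores, you break each frame's work into jobs and let a pool of workers spread them across whatever cores exist.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-frame jobs; in a real engine these would be
# fine-grained tasks (per-chunk physics, per-batch culling, etc.).
def physics_step(chunk):     return f"physics {chunk}"
def build_draw_calls(batch): return f"render {batch}"
def mix_audio(bus):          return f"audio {bus}"

def run_frame(pool: ThreadPoolExecutor):
    # All jobs go into one shared pool; the scheduler, not the
    # programmer, decides which core runs what.
    futures  = [pool.submit(physics_step, c) for c in range(4)]
    futures += [pool.submit(build_draw_calls, b) for b in range(4)]
    futures.append(pool.submit(mix_audio, "master"))
    return [f.result() for f in futures]

# Worker count scales with the hardware instead of hard-coding
# "two cores for physics, two for rendering".
with ThreadPoolExecutor(max_workers=6) as pool:
    print(run_frame(pool))
```

The payoff is that the same code saturates six weak cores or four fast ones without the "chainsaw" allocation argued above.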
Just the fact that a tablet CPU can probably get about 50 to 60% of the CPU power in a "gaming device" is stunning. And the ones with the Z3770 and Z3775 will get even closer.
So? Nobody will ever push those things to their maximum capacity the way developers will a console.
I've seen it repeated many times that the current generation of consoles brings nothing new or innovative to the table, usually alongside citations of earlier consoles that did contain genuine breakthroughs in graphics. That's true, but it has more to do with timing than anything else.
Agreed, there is nothing new: these are evolutions of old things, or old things finally arriving in consoles. One could say Microsoft was the one bringing the most innovation, and it failed because of people.
What if the CPU performance targets of the console chips had been higher than AMD could provide with the cat cores? What technology would they have deployed for the consoles?
More cores. It's a valid argument: supercomputers are built from many cores, not a few super-fast ones. "More cores" is a perfectly acceptable proposition.
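The usual caveat to "more cores" is Amdahl's law: the serial fraction of the workload caps the speedup no matter how many cores you add. A quick sketch (the 10% serial fraction is an assumed figure for illustration, not a measured one):

```python
def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    """Amdahl's law: speedup = 1 / (s + (1 - s) / n)."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# Assume 10% of the frame is inherently serial (illustrative only).
for n in (4, 8, 16):
    print(n, round(amdahl_speedup(0.10, n), 2))  # 3.08, 4.71, 6.4
# Eight weak cores only pay off if the engine keeps the serial part small,
# which is why the task-based approach above matters so much.
```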
But change this set of constraints by moving them closer to the bleeding edge (and consequently paying more for the console chips), and Intel and Nvidia might have provided a better value proposition.
Bleeding edge what, for what? As a single-company solution, Intel has nothing good for a console; even the i7-4770R is totally outperformed. And NVIDIA only has Tegra, so the Tegra K1 with 64-bit Denver @ 2.5GHz, which at less than 365 GFLOPS per SoC(!) means you would need five of them to match the Sony/MS consoles.
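The back-of-envelope math behind that "five of them" figure, taking the PS4's commonly quoted ~1.84 TFLOPS GPU peak as the target (both numbers are theoretical figures from the discussion, not measurements):

```python
console_gflops = 1840   # PS4 GPU, commonly quoted theoretical peak
tegra_k1_gflops = 365   # figure cited above for one Tegra K1 SoC

ratio = console_gflops / tegra_k1_gflops
print(round(ratio, 2))  # -> ~5.04: roughly five SoCs on paper, and
                        # that ignores the overhead of splitting work
                        # across chips, which only makes it worse
```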
Where is the value proposition? And if going with an ARM solution, why even go with Nvidia? Generic ARM IP would have been best for many, many reasons. And if going with a custom ARM solution, why Nvidia? Qualcomm has millions of its solutions out there already; that would have been a wiser choice. And then there's Img on all those Apple devices.
I'm a little lost on this comment, as I didn't bring up Tegra, but it's also possible that a scaled-up Tegra could have sufficed.
It would have, but 64-bit support was a requirement.
Apparently I was wrong. It is in the new Nintendo console:
http://wccftech.com/nintendo-confirm...e-powered-amd/
There is no mention of it coming from any particular company.
Why is the shared pool a mandatory item? And why would Nvidia or Intel have to *match* AMD's APU levels of performance? Why not go beyond them, which you can with a PC-like solution, with the trade-off of higher costs? You are trying to turn a business restriction (the consoles must reach a certain performance level X within a budget Y) into a technical restriction (nobody but AMD could reach a given level of performance, whatever that level is).
It has been said many times that shared memory is the future; everyone is on that boat now in some form or another. It was a requirement.
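A toy illustration of the difference a shared pool makes, as a minimal sketch with no real GPU involved: with split memory pools, every hand-off between the CPU side and the GPU side pays for a copy, while a shared pool just passes a reference to the same memory.

```python
import time

frame = bytearray(256 * 1024 * 1024)  # a 256 MiB "frame's worth" of data

# Split pools: the "GPU" side works on its own copy, so every hand-off
# pays the transfer cost (standing in here for a bus copy).
t0 = time.perf_counter()
gpu_copy = bytes(frame)
split_cost = time.perf_counter() - t0

# Shared pool: both sides reference the same memory; hand-off is free.
t0 = time.perf_counter()
shared_view = memoryview(frame)
shared_cost = time.perf_counter() - t0

print(f"copy hand-off:   {split_cost * 1000:.1f} ms")
print(f"shared hand-off: {shared_cost * 1000:.3f} ms")
```

The absolute numbers are meaningless; the point is that the copy scales with the data size and the shared reference does not.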
If I ask you to build me the best thing at price X with feature set Z and you come back with AMD hardware, well, that's it. But if you show me Y at a slightly higher price, then I will tell you: "Don't waste my time and money on something I did not ask for."
I have given examples of why the console hardware was the best choice; people have only complained.
Is there better hardware out there? Yes.
Could a better solution at a higher price be made? Yes.
Who picked AMD? Microsoft and Sony.
Why? They got what they wanted.