soccerballtux
Lifer
- Dec 30, 2004
- 12,553
- 2
- 76
I think it's fine; developers are lazy and can kiss my shiny FX-8310. I am a C programmer. Everything should be written in C.
> Instead of an 8 core, I'd have rather had a faster Quadcore (or a Big/Little combination with a Quadcore x86+ARM Processor for OS or something like that).

Where would you fit all that? Your preference would take as much die area as their custom APU all over again just for the CPU, meaning either the CPU or GPU would need to be far away from the memory controller (and that's if it were even doable), or they'd have a big server-CPU-sized chip that would be horribly expensive until shrunk. A single Steamroller module would take around as much room as the doubled-up Jaguar set, if not more, and use more power for the actual performance offered (I agree with Tuna-Fish that if MS or Sony had wanted a fat synth core, they'd have bent over backwards to make it happen).
> ARM is the future of consoles, no one wanted to buy a $700 PS3 and it almost bankrupted Sony. If the APU is $100 like suggested, I'm sure Sony feels it still costs too much.

Lots of people wanted to buy those PS3s, and did, though often at far lower than $700. But Sony mismanaged the development, had people in technical positions behind it that were like the Itanium guys, and what didn't sell was the games, in relative terms, until the platform matured. I knew more people that bought PS3s because the other early BD players were steaming piles than people who were gamers. Sony did not count on that, and so lost a lot of money on the console's early sales. Were they not the juggernaut that they are, it could have been a Sega- or Atari-style death-knell release, due to their hubris.
> Developers will eventually figure it out. This will force them to figure out how to properly multithread.

For the most part, they already know, and knew years ago. The problem is that some games are more complicated than your entire Windows OS. Tweaking and rewriting that much code has taken ages, and some of it was fruitless without DX10+ improvements, and other OS improvements, on the Windows side (which the Xboxes basically are). Now, using engines that scale out fairly well, they still have to be able to break problems up, and do so without lots of copying (memory may be cheap, but bandwidth is not, and an Erlang-like or Haskell-like way of managing shared data will chew up on-CPU and/or RAM bandwidth in a heartbeat), and without creating a traditional mire of locks (a bad habit largely born from SMP database processing, and from memory being expensive). That is not a trivial task for any programmer, even the genius-level ones. It takes time and money, both of which are always in short supply.
> 2GB is not enough for high-res textures today.

Agreed. I use more VRAM than that with games that came out on the prior consoles (I've used up to 2.6GB in Skyrim, and I'm still not done tweaking my mod choices for my new install; and 1GB was already a limitation for me by the end of 2010, with FO3, using not-so-high-res textures).
Intel i5 with 2.5+ teraflops GPU.
Something quite subtle that people may be forgetting is that the XBone/PS4 APU is NOT a true 8-core chip; rather, it is two quad-core Jaguar clusters on die, connected by a bus. While L2 access on the same module is a minimum of 26 cycles, it increases to a whopping ~190 cycles when accessing cache on the other module (RAM latency is ~220 cycles).
Anything?
Or do you expect us to take your word for it?
An Intel i3 + custom Iris Pro variant should have been the choice. And before you mention price: Intel is flipping Atoms for nothing; I'd bet they would be flexible on price, given how much work they have put into their iGPUs.
Iris Pro is far from an Atom. They are subsidizing Atoms in some markets because they desperately want to get into mobile. I definitely think they would not make the same concessions with a custom Iris Pro, especially to get into the console market.
> 2GB is more than enough for 1080p gaming. The bloated memory demand of newer games is somewhat due to current-gen console unified memory.

Previous-gen AAA games were consciously designed to reduce their RAM/VRAM usage because of how little memory was available on previous-gen consoles and how weak their GPUs were.
The die sizes are nowhere near 550mm2. They are 348mm2 and 363mm2.
> Something quite subtle that people may be forgetting is that the XBone/PS4 APU is NOT a true 8-core chip; rather, it is two quad-core Jaguar clusters on die, connected by a bus. While L2 access on the same module is a minimum of 26 cycles, it increases to a whopping ~190 cycles when accessing cache on the other module (RAM latency is ~220 cycles).

This is a terrible barrier to effectively multithreading game code; for performance purposes, code needed by both the two 'odd duck' cores and the four game cores must pretty much be present in both caches for prompt execution. Thus it seems quite likely that two of the six console cores are going to be used for relatively remote tasks that do not depend much on the main game code.
> Just to summarize, from what I can tell so far nobody here has been able to present an alternative hardware spec that would be better than the AMD APU used in the PS4/XBONE, assuming the TDP and price of those units should be the same as currently. Is that correct?

Not just BOM price, but also R&D; but overall the answer is: yes.
Like you said, the core on the PS4 was nowhere near 550mm2. Personally, I was disappointed that Sony went such a conservative route this time. They could easily have ordered a slightly bigger chip with a 7870-equivalent GPU, making the chip no bigger than 400mm2.
The only real option was ARM, but beyond phones I'm not sure what they have in the high-performance segment; most of their stuff is sub-5W. They needed 15-20W core(s) with an 80W GPU component in the SoC, and I'm not sure, even with the A57, whether they could have come up with similar or better performance given the extra TDP.
Thing is, the FX series is 32nm and GCN is 28nm.
2 GB is nowhere near enough. If you want to employ certain techniques and get rid of the heavy compromises that were made in the days of previous-gen multiplatform AAA games, then you need much more than that.
> 2GB is more than enough for 1080p gaming. The bloated memory demand of newer games is somewhat due to current-gen console unified memory.
this x1000
2gb would be plenty of dedicated VRAM in a console aiming @ 1080p
The problem is you think you know more than Sony and Microsoft combined.
Instead of an 8 core, I'd have rather had a faster Quadcore (or a Big/Little combination with a Quadcore x86+ARM Processor for OS or something like that).
Both using GDDR5 ram.
Stronger GPUs (like the Xbox 360 had: equivalent to the best GPUs available at the time of that console's launch).
Better Kinect support (seriously, this wasn't a bad feature of the Xbox One; it was poorly handled. In Madden, I should be able to make play calls with the Kinect. In NBA 2K, I should be able to call for a pick; in army games, I should be able to tell the AI to flank, etc. The Kinect had a TON of potential and M$ just screwed it up. It should have been integrated into a TON of titles from the development phase, but instead it was tossed in half-baked and then FORCED onto people).
Better SmartGlass/Smartphone/Tablet support.
This was promised with Halo 4 and was TERRIBLE. You were promised more advanced stats on your phone at the end of matches, and it never worked well, along with the other features they promised. I was very upset; it rarely worked as intended.
In the Xbox's case, I wish they had used the Sony touchpad on their controller, simply because when it made its way over to PC, that controller would have been a game changer. Even now the DS4 is a great controller for PC, but it's still not as reliable as the Xbox controller.
