PS4: new kits shipping now, AMD A10 used as base

Discussion in 'CPUs and Overclocking' started by inf64, Nov 4, 2012.

  1. N4g4rok

    N4g4rok Senior member

    Joined:
    Sep 21, 2011
    Messages:
    285
    Likes Received:
    0
    I don't think this is correct. I've cracked a few of them open, and they've got a PowerPC CPU on the left, with an ATi Xenos chip next to it. They were two distinct chips sharing a bus. That was it.
     
  2. Blitzvogel

    Blitzvogel Golden Member

    Joined:
    Oct 17, 2010
    Messages:
    1,918
    Likes Received:
    0
    The revised systems have the CPU and GPU on a single die. I can't find any hard data on whether or not the eDRAM was integrated as well, but I would assume it was, looking at Xbox 360 Slim mobo shots.
     
  3. jpiniero

    jpiniero Diamond Member

    Joined:
    Oct 1, 2010
    Messages:
    3,223
    Likes Received:
    4
    Wikipedia says that Xenos has 48 shaders. Whether it's really comparable to the R600 shaders I don't know.

    I haven't seen any recent rumors on the 720's GPU, but it was more similar to the 7670 (480 shaders) than to Tahiti. Anything close to Tahiti isn't going to happen.

    I'd say the PS4's final system is more likely:
    2 SR modules, 2.3-2.5 GHz
    500-640 shaders
    4-8 GB RAM, maybe GDDR5.

    With the 720 being similar, except with some sort of multicore PowerPC CPU, with one of the cores reserved for Kinect.
     
  4. Haserath

    Haserath Senior member

    Joined:
    Sep 12, 2010
    Messages:
    789
    Likes Received:
    0
    http://www.techradar.com/us/news/gaming/xbox-360s-new-combined-cpu-gpu-explained-711942
    http://www.anandtech.com/show/3774/welcome-to-valhalla-inside-the-new-250gb-xbox-360-slim
     
    #29 Haserath, Nov 4, 2012
    Last edited: Nov 4, 2012
  5. Fox5

    Fox5 Diamond Member

    Joined:
    Jan 31, 2005
    Messages:
    5,957
    Likes Received:
    0
    48 VLIW5 shaders = 240 (according to how ati/amd counts them now)
    It sounds pretty similar to the HD2900 series GPUs, which AMD basically salvaged by cutting down the die size (ditching the internal ring bus), pumping the shader count way up, and improving drivers. On a console, individual games could be optimized for the GPU, but older games wouldn't benefit from driver advances. Still, I'd imagine the architecture fares better on a console than it did on a PC before ATI was able to optimize the JIT (although Wikipedia claims R500/Xenos is closer to R520/X1800 than it is to R600).
     
  6. Haserath

    Haserath Senior member

    Joined:
    Sep 12, 2010
    Messages:
    789
    Likes Received:
    0
    Considering AMD counts GCN cores a bit differently than VLIW cores, a 7770-style GPU at 1 GHz or slightly less would outperform the Xbox 360 GPU by at least 6x.

    GCN counts the vector processors (and not the one scalar unit in the CU).
    VLIW5 counts the 4 vector units + 1 scalar unit.

    Basically it would be 640 vs. 184 units. Clocks on the 360 are at 500 MHz as well.
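    A rough back-of-the-envelope of that comparison, using the 640-shader 7770-class figure and the 240 VLIW5-style Xenos count from a couple of posts up, so treat all the numbers as assumptions:

        #include <iostream>

        int main() {
            // Hypothetical 7770-class GPU: 640 GCN stream processors at ~1000 MHz.
            const double gcn_alus   = 640,  gcn_mhz   = 1000;
            // Xenos, counted the VLIW5 way (48 vec4+scalar units = 240) at 500 MHz.
            const double xenos_alus = 240,  xenos_mhz = 500;

            // Raw ALU-throughput ratio. This ignores the per-ALU efficiency gap
            // between GCN and vec4+scalar, which is why the real-world gap would
            // land above this number.
            double ratio = (gcn_alus * gcn_mhz) / (xenos_alus * xenos_mhz);
            std::cout << "raw throughput ratio: ~" << ratio << "x\n";  // ~5.3x
            return 0;
        }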

    I wouldn't mind seeing that kind of GPU in consoles as long as it kept them reliable and affordable.

    The CPU side should at least come with a 3 GHz processor. No point in leaving an anemic CPU in there for the next 6+ years.

    Power use can't really go over 150W, though. The Xbox 360s that RRoD'd pulled a bit over that.

    What sucks is that these consoles might start on 40nm instead of 28nm...
     
  7. Aikouka

    Aikouka Lifer

    Joined:
    Nov 27, 2001
    Messages:
    26,973
    Likes Received:
    1
    Did he misspeak? The HDMI spec doesn't support 1080p @ 60Hz with full-resolution 3D content (i.e. Frame Packing).
     
  8. Schmide

    Schmide Diamond Member

    Joined:
    Mar 7, 2002
    Messages:
    5,015
    Likes Received:
    5
    HDMI 1.4b supports it. I don't think it would be much of a stretch to move from ATI's current 1.4a to 1.4b, considering they claim 4096x2160 resolution per display.

     
  9. Fox5

    Fox5 Diamond Member

    Joined:
    Jan 31, 2005
    Messages:
    5,957
    Likes Received:
    0
    Except for the Wii U, why would they start at 40nm? Volume? Both Nvidia and AMD video cards have shifted to 28nm, and AMD APUs will be there soon.
     
  10. Aikouka

    Aikouka Lifer

    Joined:
    Nov 27, 2001
    Messages:
    26,973
    Likes Received:
    1
    HDMI 1.4a supports 4K. None of that means that it supports 1080p60 using Frame Packing.

    http://www.hdmi.org/manufacturer/hdmi_1_4/

    However, since 1.4b supports 1080p120 (as listed here: http://en.wikipedia.org/wiki/HDMI#Version_1.4), it can support 1080p60 with 3D. Although, consider this... the PS3 only supports HDMI 1.3 in hardware, yet it actually supports HDMI 1.4a through software.
     
  11. Schmide

    Schmide Diamond Member

    Joined:
    Mar 7, 2002
    Messages:
    5,015
    Likes Received:
    5
    You do understand that frame packing is just 2 frames next to (or above/below) each other? The only thing I think 1.4b does is make this auto-detectable.

    Edit: 3D 1080p60 is just 3840x1080p60 or 1920x2160p60.

    Edit: That being said, no 7770-class card is going to game at these 3D resolutions. I think they're talking about 3D graphics at 1080p.
     
    #36 Schmide, Nov 5, 2012
    Last edited: Nov 5, 2012
  12. zebrax2

    zebrax2 Senior member

    Joined:
    Nov 18, 2007
    Messages:
    874
    Likes Received:
    0
    The reasons I ask are: 1) mainly I'm just curious, and 2) I'm trying to see if they could work around the x86 license limitations by customizing their current processors.
     
  13. NTMBK

    NTMBK Diamond Member

    Joined:
    Nov 14, 2011
    Messages:
    6,941
    Likes Received:
    14
    I've seen this idea proposed in several places, and I still maintain that it's ridiculous. The difficulties developers had with the PS3's messy memory architecture should have scared them away from that kind of model.

    The main benefit of an AMD APU should be obvious: a shared memory address space. GCN can work in the x86 address space. You have a single memory bank, something high bandwidth (high-end DDR3, DDR4, GDDR5, maybe even XDR2). Both the CPU and GPU sides of the APU work on the same objects in memory; there is no split, no "this half goes to the CPU, this half to the GPU", and there is no copying back and forth between graphics and main memory. That removes a major bandwidth bottleneck, removes a developer headache, and makes GPGPU tasks more practical. If the first pass of your algorithm depends heavily on single-threaded performance, use the CPU cores; if the second pass needs parallel throughput, use the GPU. Normally the practicality of this sort of thing is limited by PCIe bandwidth and latency, but not in a shared-memory APU model.
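    To make the programming-model difference concrete, here's a toy sketch; gpuAlloc/gpuCopyToDevice/gpuLaunchKernel are made-up placeholders standing in for a driver API, not anything real:

        #include <cstdlib>
        #include <cstring>
        #include <vector>

        // Made-up placeholder "driver" calls, stubbed out so the sketch compiles.
        void* gpuAlloc(std::size_t bytes)                             { return std::malloc(bytes); }
        void  gpuCopyToDevice(void* d, const void* s, std::size_t n)  { std::memcpy(d, s, n); }
        void  gpuLaunchKernel(float* data, std::size_t n)             { for (std::size_t i = 0; i < n; ++i) data[i] *= 2.0f; }

        // Discrete-card model: stage the data into dedicated VRAM over PCIe,
        // run the kernel there, then copy the results back for the CPU.
        void processDiscrete(std::vector<float>& particles) {
            const std::size_t bytes = particles.size() * sizeof(float);
            void* vram = gpuAlloc(bytes);
            gpuCopyToDevice(vram, particles.data(), bytes);
            gpuLaunchKernel(static_cast<float*>(vram), particles.size());
            std::memcpy(particles.data(), vram, bytes);   // copy back
            std::free(vram);
        }

        // Shared-address-space APU model: CPU and GPU see the same allocation,
        // so the kernel just gets handed the CPU-side pointer. No staging copies.
        void processShared(std::vector<float>& particles) {
            gpuLaunchKernel(particles.data(), particles.size());
        }

        int main() {
            std::vector<float> particles(1024, 1.0f);
            processDiscrete(particles);
            processShared(particles);
            return 0;
        }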

    A games console is the ideal platform to make the most of AMD's claimed gains from using APUs. Adding a Crossfire'd card on top would make development a pain in the arse, again.
     
  14. Haserath

    Haserath Senior member

    Joined:
    Sep 12, 2010
    Messages:
    789
    Likes Received:
    0
    My mistake, I was thinking of the Wii U launch this year instead of late 2013. Very scatterbrained sometimes...

    They should hopefully have the ability to release on 28nm.
     
  15. Cerb

    Cerb Elite Member

    Joined:
    Aug 26, 2000
    Messages:
    17,409
    Likes Received:
    0
    No, but then it won't be any kind of CPU. The x86 memory model is baked into the hardware (8-bit to 64-bit support, and a fully hardware MMU), along with direct execution of many x86 instructions, optimizations for x86's specific handling of a C-like data stack, and also lots of goodies keeping up with exceptions and flags.

    Whether x86 or not, given the advantages of shared low-latency RAM access, I would hope that, if they can't fit it on the die, they'll fit it on the package, so that it won't be stuck with the added latency and/or reduced bandwidth of an external bus.
     
  16. sm625

    sm625 Diamond Member

    Joined:
    May 6, 2011
    Messages:
    7,702
    Likes Received:
    6
    An HD 7700-series GPU has 1.5 billion transistors. The A10 Trinity die is 1.3 billion transistors, with no more than 700 million of that being the GPU. So that tells us they could easily double the GPU size of an A10 and still be under 2 billion transistors. That is a very reasonable size, and the price wouldn't be too bad either. Should be less than $150. Two years into the console's life that chip would be cut down to $60. It all sounds very reasonable to me.

    Actually, going from 500 million transistors (Xbox 360) to only 2 billion transistors seems kind of weak. Shouldn't we be looking at a ballpark of 3-4 billion? That's an APU the size of a 7970...
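    Spelling out that budget with the same round numbers (rough estimates, nothing official):

        // Round figures from above; treat them as rough estimates.
        constexpr long long kHd7700Gpu  = 1'500'000'000;  // HD 7700-series GPU
        constexpr long long kA10Die     = 1'300'000'000;  // whole Trinity A10 die
        constexpr long long kA10GpuPart =   700'000'000;  // upper bound for its GPU share

        // Doubling the GPU adds at most another GPU's worth of transistors.
        constexpr long long kDoubledGpuApu = kA10Die + kA10GpuPart;  // 2.0 billion
        static_assert(kDoubledGpuApu <= 2'000'000'000, "still within the ~2B budget");

        int main() { return 0; }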
     
    #41 sm625, Nov 5, 2012
    Last edited: Nov 5, 2012
  17. Puppies04

    Puppies04 Diamond Member

    Joined:
    Apr 25, 2011
    Messages:
    5,897
    Likes Received:
    1
    Also, don't consoles run at 30 fps? I know there are gains to be made with a closed platform, but 60 fps @ 1080p in 3D seems way too optimistic considering what the current-gen consoles can achieve.
     
  18. Tuna-Fish

    Tuna-Fish Senior member

    Joined:
    Mar 4, 2011
    Messages:
    770
    Likes Received:
    0
    The software shipped in the box is an early version of what the final system will have, and is likely only made to work on that single system. Also, even if the final performance is not anywhere near the current devkit, several details of it may be very similar to the final platform. Things like how the memory bus is shared between GPU and CPU (Can you just pass pointers? Can they even be virtual pointers or do you have to translate them?) actually affect early engine development more than the raw performance numbers.

    I know from a source I consider very reliable that the earliest xbox next devkit was indeed Intel + nvidia. The later ones aren't. As with last gen, the earliest dev kits are really just the platforms the OS/API devs happened to work on to build their stuff. As the release date grows nearer, the devkits become more similar to the final hardware.

    This would be correct. Dev kits almost always have more RAM than the final machines. The final dev kits (i.e., not this one yet) this time around might be exceptions to this rule, as all the sensible memory tech available is point-to-point, so when they move to the final silicon there is no simple, cheap way to ship more memory in the system.

    If a tree falls in the forest...

    Well, it still needs some kind of decoder. What would you put in its place?

    It would obviously not be directly compatible with any x86 software, but some x86 features, such as strict memory ordering, would still be visible.
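    One place that shows up, as a sketch in plain C++ (nothing platform-specific assumed):

        #include <atomic>
        #include <thread>

        // On x86, the release store and acquire load below compile down to plain
        // moves, because the hardware already refuses to reorder stores with
        // stores or loads with loads. On a weakly ordered CPU the same code needs
        // extra barrier instructions, which is why code written with x86 habits
        // (or with the orderings left out) can break when it moves off x86.
        int payload = 0;
        std::atomic<bool> ready{false};

        void producer() {
            payload = 42;
            ready.store(true, std::memory_order_release);
        }

        void consumer() {
            while (!ready.load(std::memory_order_acquire)) { /* spin */ }
            // Guaranteed to observe payload == 42 here.
        }

        int main() {
            std::thread a(producer), b(consumer);
            a.join();
            b.join();
            return 0;
        }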

    Right now, the speculation is divided between DDR4 and GDDR5. DDR3 will be on its way out when these are released, and would be expensive to supply for the latter half of the lifetime of the consoles.

    XDR2 does not even exist outside of paper. Also, as I understand it, this gen Sony got hurt more than a little by using boutique RAM that had almost no other users. In chip manufacturing, getting volumes up is the number one rule for keeping costs down.

    They were both very active participants in the DDR4 spec, which I expect means that's what they'll use. GDDR5 would give more bandwidth, but would be considerably more expensive. (Especially in the long term, as PCs can be expected to use DDR4 for a very long time, whereas GDDR5 will probably be replaced with something else in GPUs within a few years.)

    No, Xenos is vec4 + scalar, which is not the same as VLIW5. It would be correct to say that R600 was a further development based on Xenos.

    Actually, by much more than that. The vec4 is only useful if there are four independent, identical operations you can do on the same fragment/vertex. There are for vertex processing and other geometry, but generally not for pixel processing. A vec4+scalar unit is only as good as 2 or 3 scalar operations for most pixel processing, and pixel processing is (way) more than half of all shader processing.
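    A toy way to see the shape of the problem (plain C++ standing in for shader code; the functions are just illustrative):

        #include <array>

        using vec4 = std::array<float, 4>;

        // Vertex-style work: a 4x4 matrix times a 4-component position. Each step
        // is the same operation on four independent components, so a vec4 ALU can
        // issue with all four lanes doing useful work.
        vec4 transform(const std::array<vec4, 4>& m, const vec4& v) {
            vec4 out{};
            for (int row = 0; row < 4; ++row)
                for (int col = 0; col < 4; ++col)
                    out[row] += m[row][col] * v[col];
            return out;
        }

        // Pixel-style work: a chain of dependent scalar operations (a light
        // falloff here). Each step needs the previous result, so a vec4 unit
        // mostly issues with one useful lane, roughly the "2 or 3 scalar
        // operations" worth of throughput mentioned above.
        float attenuation(float dist, float radius) {
            float x = dist / radius;
            float x2 = x * x;
            float falloff = 1.0f - x2;
            return falloff > 0.0f ? falloff * falloff : 0.0f;
        }

        int main() {
            const std::array<vec4, 4> identity{{{1,0,0,0}, {0,1,0,0}, {0,0,1,0}, {0,0,0,1}}};
            const vec4 p = transform(identity, {1.0f, 2.0f, 3.0f, 1.0f});
            return attenuation(p[0], 10.0f) > 0.0f ? 0 : 1;
        }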
     
  19. Kenmitch

    Kenmitch Diamond Member

    Joined:
    Oct 10, 1999
    Messages:
    6,561
    Likes Received:
    8
    AMD in the console doesn't equate to AMD in your desktop. Not sure why a lot of members are thinking you'll get subpar performance if AMD is inside your console.

    Console = the ability to fully optimize the OS and game(s) to a fixed set of hardware.

    The bar is set by the console manufacturer(s) and not by the suppliers of the hardware. It's a prove-it-or-lose-it type of deal when it comes to the underlying hardware in question. I would think it's already proven that the underlying AMD hardware can meet or exceed the requirements set by the console manufacturer(s).

    It's not up to AMD or Intel or Nvidia to dictate the performance of your console; they only supply the power that was asked for in the first place... Consoles are like Apple products in a way, as you only get what they think you need/want.
     
  20. Haserath

    Haserath Senior member

    Joined:
    Sep 12, 2010
    Messages:
    789
    Likes Received:
    0
    I need to stop pulling stuff out of my arse (even though speculation is entertaining), but do you have a source for that or any experience in the field?
     
  21. Aikouka

    Aikouka Lifer

    Joined:
    Nov 27, 2001
    Messages:
    26,973
    Likes Received:
    1
    I know a modest amount about 3D because I spent a while researching it. I had an older Denon 1910 AVR, which doesn't have 3D support. I wanted to do 3D with my TV, and I was bewildered as to why the 1910, which seemed quite capable, couldn't actually handle it. There are really two reasons why it can't handle it...

    1) Not having the proper spec (minimum of 1.4a) means it's incapable of saying, "Hey! I can do 3D!"

    2) The extra frame being packaged offsets the audio, which the receiver would be unable to process.

    As long as I wanted to keep the Denon in the loop between my 3D-capable device (PS3) and my TV, I could not use 3D. The only way to make it work would be to route the video around the AVR and use an optical cable to feed the audio into the AVR. Also consider that using an optical cable would have sacrificed lossless audio. In other words... a giant mess! :(

    Neither of those is correct, because there's actually a 20 pixel spacer between the two frames. Although, if I remember correctly, they're stacked vertically, and it's also a 48Hz refresh rate, because HDMI 1.4a only supports 3D at 1080p at 24Hz (i.e. an actual 48Hz per combined frame).

    That's another reason why I was curious about it. Current-generation games don't render 3D content at 1080p because (1) they can't handle it and (2) HDMI 1.4a restricts you to 24Hz at 1080p. I've tried using the latter... it's not pleasant. :p
     
  22. Tuna-Fish

    Tuna-Fish Senior member

    Joined:
    Mar 4, 2011
    Messages:
    770
    Likes Received:
    0
    My source is a few years trying to optimize shaders for the damn things. :)
     
  23. happysmiles

    happysmiles Senior member

    Joined:
    May 1, 2012
    Messages:
    344
    Likes Received:
    0
    I have a ridiculously strong feeling that Kaveri, with its fully shared memory bandwidth, will be in the PS4's custom chip.

    Why lift heavy with one when you can do the lifting with two?

    Regardless, the difference between the 720 and PS4 should be smaller, which is easier for developers and nice for us consumers! I'd imagine that true 1080p at 30 fps with all the eye candy enabled, while leaving headroom for the future, is their current goal.

    4K would be the next step (I'm guessing this next gen will last 8 years, so it wouldn't surprise me).
     
  24. Ventanni

    Ventanni Golden Member

    Joined:
    Jul 25, 2011
    Messages:
    1,050
    Likes Received:
    1
    If I'm not mistaken, couldn't AMD also drop support for some instruction sets (like MMX, SSE, etc.) from the CPU that Sony deems unnecessary? Realistically, if Sony is going to do an APU here, then it's going to have to be manufactured very cheaply en masse, and large dies will cut into that. But couldn't there be savings found by culling some transistors from the CPU as well?
     
  25. NTMBK

    NTMBK Diamond Member

    Joined:
    Nov 14, 2011
    Messages:
    6,941
    Likes Received:
    14
    a) Removing the vector units would be a MASSIVE redesign of the core.

    b) Why would you want to remove the vector units? Game code is an ideal use case for vector units.