
Since the Xbox runs a stripped-down Win8, can it be hacked?

No, this is what started all of this. I stated:

"Sharing memory is to SAVE MONEY and have a simpler system to manufacture. At the expense of performance." This is a direct quote in the context of shared vs. dedicated memory setups.

You responded with:

Rakehellion said:
110% false. Unified memory removes the step of copying memory between CPU and GPU space, an enormous performance advantage.

Pointing out one particular aspect which *MIGHT* theoretically enhance performance (most likely GPGPU, non-gaming possibilities, and even those look thin in current hUMA analysis) does not negate the fact that:

Shared/Unified memory is inferior to dedicated memory for a gaming setup, all other things being equal.

If you can offer up a single shred of proof that a unified memory space would be faster than dedicated memory for both the CPU and GPU for the purposes of a gaming setup, I'll be quite interested to see that.
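For what it's worth, the copy step Rakehellion is pointing at can be sketched in a few lines. This is a toy model in Python, not real GPU code: a second allocation stands in for dedicated VRAM, and the timings only illustrate that handing over a reference is essentially free while a bulk copy is not.

```python
import time

FRAME_BYTES = 100_000_000  # ~100 MB of "frame data"
data = bytearray(FRAME_BYTES)

# Discrete-memory model: CPU-side data must be copied into a second
# allocation standing in for the GPU's dedicated VRAM (a PCIe upload).
t0 = time.perf_counter()
vram = bytes(data)              # full copy of the buffer
copy_time = time.perf_counter() - t0

# Unified-memory model: CPU and GPU address the same allocation,
# so the "upload" is just passing a reference -- no bytes move.
t0 = time.perf_counter()
shared = data
share_time = time.perf_counter() - t0

print(f"copy: {copy_time * 1e3:.1f} ms, share: {share_time * 1e6:.2f} us")
```

Whether eliminating that copy outweighs the bandwidth a dedicated pool gives up is exactly the point being argued here.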

No, I don't write 3D game engines; personally, I'd consider that a massive waste of time unless I were good enough to get hired for a major studio project. Even then I'd want to pull my hair out. I did C++, Assembly, and a ton of other incredibly boring crap before deciding that it's simply not for me. 1 million vertices? I don't really care. I'm not John Carmack, and I'm not going to reinvent what's already out there; there are guys out there who know more than 1,000 of us put together, unless you actually ARE employed at a senior design level at a company on the bleeding edge.
 
No, I don't write 3d game engines

So you know absolutely nothing about resource management, data processing or how these are shared between the GPU and CPU. That explains some of your responses.

I would consider that a massive waste of time personally...

Blah blah blah, lame justification for knowing jack shit about anything.

If you can offer up a single shred of proof

As I said before, you can open up a compiler and do some of these things yourself if you actually cared. It's completely free.

Or did you want a link to some tech blog?
 
No, as a matter of fact, there's nothing I can do here at home that would approximate the development of a AAA title, and neither can you. You can be insulting and condescending all you want, it doesn't take away from the fact that you're wrong in the claim that:

Unified memory is better than dedicated memory

Wrong, wrong, wrong.

It's there because it's cheap, and because it's good enough for the price point. That's it.

And yes, a link from a single credible source that says that shared memory is better than dedicated memory would be a start. Otherwise you're just another anonymous guy on a forum who gets off on insulting people without backing a single thing up.
 
So let me ask something here. Let's say, for the sake of argument, that we get to the point where memory has the bandwidth of GDDR5 but the latency of DDR3, and we have CPUs on par with a hex-core i7 while the iGPU is on the level of a GTX 770 or similar. Would it be worthless to put a pool of 16 or 24GB of that memory on a board, to be used by both the CPU and GPU as needed?

I'm thinking in terms of removing the bottleneck from the slowish CPU and relatively lower-end GPU parts. Put in semi-high-end parts and reduce the latency of the memory to DDR3 levels but keep the bandwidth at GDDR5 levels. Does that change anything?
 
No, as a matter of fact, there's nothing I can do here at home that would approximate the development of a AAA title, and neither can you. You can be insulting and condescending all you want, it doesn't take away from the fact that you're wrong in the claim that:

Unified memory is better than dedicated memory

Wrong, wrong, wrong.

It's there because it's cheap, and because it's good enough for the price point. That's it.

And yes, a link from a single credible source that says that shared memory is better than dedicated memory would be a start. Otherwise you're just another anonymous guy on a forum who gets off on insulting people without backing a single thing up.

Why do you think shared memory is worse? Should we have dedicated memory for the FPU? Should each core on our CPU have dedicated memory?

The downside is that the memory is slower. On PCs, it's much slower. The PS4, though, has a pretty hefty GDDR5 setup. Sure, they could give the CPU and the GPU each their own memory pool and have faster total bandwidth, but they're also not maxed out on the speed of the GDDR5 or the size of the bus they gave the APU. At least on the PS4, there's no noticeable loss in memory bandwidth from using shared memory.

The plus side of shared memory is that the CPU can more quickly modify data used by the GPU (and vice versa now as well). In the days before GPU compute, this would have been a huge deal, and older consoles took huge advantage of it; even the PS3 made heavy use of this. It's less important now that GPUs are essentially full CPUs, but I'm sure game developers will get good use out of it this gen.
And generally, eliminating the external bus for sharing data (the PCI Express bus here) is a big win over PCs. Once AMD gets their GPUs sharing the same cache hierarchy as their CPUs (maybe they already do?), it'll be possible to use the GPU relatively seamlessly as a coprocessor.
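To put numbers on "pretty hefty": peak bandwidth is just bus width times transfer rate. A quick sketch, assuming the commonly quoted configurations (256-bit GDDR5-5500 for the PS4, 256-bit DDR3-2133 for the XB1's main memory), shows the gap:

```python
def peak_bandwidth_gbps(bus_bits: int, mega_transfers_per_s: float) -> float:
    """Peak bandwidth in GB/s: (bus width in bytes) x (transfers per second)."""
    return bus_bits / 8 * mega_transfers_per_s / 1000

ps4_gddr5 = peak_bandwidth_gbps(256, 5500)  # 256-bit GDDR5-5500
xb1_ddr3 = peak_bandwidth_gbps(256, 2133)   # 256-bit DDR3-2133
print(f"PS4: {ps4_gddr5:.1f} GB/s, XB1 DDR3: {xb1_ddr3:.1f} GB/s")
```

Roughly 176 GB/s versus 68 GB/s, which matches the published figures; the XB1 adds a small eSRAM pool on top of the DDR3 to compensate.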
 
there's nothing I can do here at home that would approximate the development of a AAA title

So very specific and well-published techniques on an identical compiler and identical GPU somehow perform differently when funded by a multi-million-dollar budget? Money can defy the laws of physics! It's like how whiskey tastes better when someone else is buying!

So let me ask something here. Let's say, for the sake of argument, that we get to the point where memory has the bandwidth of GDDR5 but the latency of DDR3, and we have CPUs on par with a hex-core i7 while the iGPU is on the level of a GTX 770 or similar. Would it be worthless to put a pool of 16 or 24GB of that memory on a board, to be used by both the CPU and GPU as needed?

The thing is, gaming GPUs are always add-in cards and shared memory only works with integrated GPUs. Except in the case of the PS4 and Xbone.

AMD is working on bringing the technology to PCs, but it looks like a marketing stunt because no one wants to game on an integrated GPU. Also, AMD and Nvidia are working on some kind of technology where you can directly read the GPU's memory space, but we'll see how that develops.
 
So let me ask something here. Let's say, for the sake of argument, that we get to the point where memory has the bandwidth of GDDR5 but the latency of DDR3, and we have CPUs on par with a hex-core i7 while the iGPU is on the level of a GTX 770 or similar. Would it be worthless to put a pool of 16 or 24GB of that memory on a board, to be used by both the CPU and GPU as needed?

I'm thinking in terms of removing the bottleneck from the slowish CPU and relatively lower-end GPU parts. Put in semi-high-end parts and reduce the latency of the memory to DDR3 levels but keep the bandwidth at GDDR5 levels. Does that change anything?

Very good!

Let's compare that hypothetically. One thing we need is balance, of course, as historically games respond much more to increases in GPU grunt than to primary system RAM speed.

BUT, I think you could achieve something truly special with a multiple bus system.

Give the CPU 8GB of its own memory, attached to a 256-bit bus and running at very low latency. On the backside of the CPU, have another 256-bit bus connecting to an 8GB pool of unified memory, which is also connected to the GPU. Then, for the GPU exclusively, have 8GB of extremely high-bandwidth memory attached to a 512-bit bus. Yes, the mainboard would be a nightmare to manufacture, but you could achieve all of the positives with none of the drawbacks.

Why do you think shared memory is worse? Should we have dedicated memory for the FPU? Should each core on our CPU have dedicated memory?

The downside is that the memory is slower. On PCs, it's much slower. The PS4, though, has a pretty hefty GDDR5 setup. Sure, they could give the CPU and the GPU each their own memory pool and have faster total bandwidth, but they're also not maxed out on the speed of the GDDR5 or the size of the bus they gave the APU. At least on the PS4, there's no noticeable loss in memory bandwidth from using shared memory.

The plus side of shared memory is that the CPU can more quickly modify data used by the GPU (and vice versa now as well). In the days before GPU compute, this would have been a huge deal, and older consoles took huge advantage of it; even the PS3 made heavy use of this. It's less important now that GPUs are essentially full CPUs, but I'm sure game developers will get good use out of it this gen.
And generally, eliminating the external bus for sharing data (the PCI Express bus here) is a big win over PCs. Once AMD gets their GPUs sharing the same cache hierarchy as their CPUs (maybe they already do?), it'll be possible to use the GPU relatively seamlessly as a coprocessor.

Yes, the PS4 is something I haven't totally thought about; it's mostly the XB1 and its DDR3 that have me feeling it's somewhat constrained. The problem with shared memory is that it's never as fast as an equivalent setup that doesn't have to share. Effective latency seems to take a hit, as neither the CPU nor the GPU can have totally uninterrupted use of the full memory bandwidth; it must be divided up.
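That division of bandwidth can be made concrete with a toy contention model (an assumption for illustration, not a measurement of real memory-controller behavior): when the CPU and GPU together demand more than the shared bus provides, each client gets scaled back proportionally.

```python
def effective_bandwidth(demands_gbps, bus_gbps):
    """Scale each client's demand down proportionally when the shared
    bus is oversubscribed; under-subscribed demands pass through as-is."""
    total = sum(demands_gbps)
    if total <= bus_gbps:
        return list(demands_gbps)
    scale = bus_gbps / total
    return [d * scale for d in demands_gbps]

# A 176 GB/s shared bus: a CPU wanting 30 GB/s and a GPU wanting 170 GB/s
# can't both be satisfied, so both take a proportional haircut.
print(effective_bandwidth([30, 170], 176))
```

A real controller arbitrates in bursts and pays turnaround penalties, so the true picture is messier, but this captures why neither client ever sees the full bus under load.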

From every bit of research I've seen, and most of it seems focused on AMD's hUMA, there are some very likely benefits to GPGPU, but not even legitimate hypothetical benefits to gaming as of yet, simply due to how game engines load the GPU for processing and use memory. With the new consoles using x86/GCN, I'd wager they're utilizing the hardware pretty much like a PC does at this point.

Now, I could be wrong, and there may come a time when game code truly uses the positives of having the GPU and CPU share a single pool of memory to its fullest potential, but I can't shake the impression that it will be awfully difficult to 100% equal what they could have done with a dedicated memory setup. Particularly in the case of the XB1's complete reliance on DDR3.
 
Yeah, anything can be hacked; how long it takes depends on the security of the system and how much of the code needs to be edited.
 
Every other console to date has been hacked, so we'll get something eventually. But the chances of getting a full version of Windows with working graphics drivers are next to nil. The Xbox One has different hardware from a PC and uses a different executable format. It's about as likely as getting Windows on the PS4.

An Xbox One emulator is pretty unlikely too. There still aren't emulators for the PS3 and 360.
Don't let the "Windows" name fool you. The Xbox One is running a highly customized OS that uses DirectX. It's nothing like a gaming computer.

http://en.wikipedia.org/wiki/List_of_video_game_emulators#Xbox_360

http://en.wikipedia.org/wiki/RPCS3
 
Those are non-functional, and looking at the pace of development on them, it's unlikely they'll be functional any time soon. There still isn't a truly workable original-Xbox emulator to this day, beyond some really wobbly ones that handle a single game or homebrew stuff, if that tells us anything.
 
Give the CPU 8GB of its own memory, attached to a 256-bit bus and running at very low latency. On the backside of the CPU, have another 256-bit bus connecting to an 8GB pool of unified memory, which is also connected to the GPU. Then, for the GPU exclusively, have 8GB of extremely high-bandwidth memory attached to a 512-bit bus. Yes, the mainboard would be a nightmare to manufacture, but you could achieve all of the positives with none of the drawbacks.

That's all of the drawbacks with none of the positives, with some extra drawbacks that have never been encountered before.

Negatives:
-You have to program for three memory banks instead of just two (like on the PC) or one (like on the Xbox One).
-You still have to copy data back and forth between the memory banks, which is the PC's downside and exactly the problem this wacky system was trying to solve, and you still have a slow memory bus, which is your purported drawback of the Xbox One.
-When memory banks 2 and 3 aren't needed, you've got an expensive piece of hardware essentially collecting dust.


You want all of the positives with none of the drawbacks? Just use high bandwidth memory all across the system.
 