How the PlayStation 4 is better than a PC


Tuna-Fish

Golden Member
Mar 4, 2011
I'm surprised we haven't seen a petition to bring back DDR1 to modern computers. I mean, it has MUCH lower latency than DDR3, so it must be better, right?

Except that's not true at all, and you don't seem to have the slightest clue what latency means. Hint: CAS latency has almost nothing to do with what programmers mean when they talk about latency.

It's interesting that about a month ago, the PS4 sucked because it was using a weak (compared to a modern gaming PC) APU. Then you have a couple of developers hyping the product and using fancy terms like draw calls and "code to metal", and suddenly the console advocates are experts who miraculously managed to transform a weak APU into an i7/Titan killer.

It's not an i7/Titan killer. There are a lot of things that an i7/Titan will be able to do that it won't. But there will also be a lot of things that the PS4 will be able to do that a Titan can't. I'm a dev, although I don't work in games at the moment. Draw call overhead is a real thing. It's very telling that a lot of what AMD and Nvidia are doing right now is about reducing that overhead. hUMA and Maxwell's unified virtual memory are aimed directly at this.
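For anyone wondering why draw call overhead gets devs worked up, here's a back-of-the-envelope model. The per-call cost below is a number I'm assuming purely for illustration (real costs vary wildly with API, driver, and how much state you change between calls), but it shows how fast submission cost eats a 60 fps frame budget:

Code:
#include <cstdio>

int main() {
    // Assumed ballpark figures, for illustration only -- not measurements.
    const double frame_budget_ms  = 1000.0 / 60.0;   // 16.7 ms per frame at 60 fps
    const double cost_per_call_ms = 0.02;             // ~20 us of CPU/driver work per draw call
    const int scene_sizes[] = { 500, 2000, 10000 };   // draw calls per frame

    for (int calls : scene_sizes) {
        double cpu_ms = calls * cost_per_call_ms;
        printf("%6d draw calls -> %6.1f ms of CPU submission time (%.0f%% of the frame budget)\n",
               calls, cpu_ms, 100.0 * cpu_ms / frame_budget_ms);
    }
    return 0;
}

Under those assumptions, even 2,000 calls per frame blows the budget on submission alone, which is exactly why batching, instancing, and thinner console APIs matter so much.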
 

sontin

Diamond Member
Sep 12, 2011
What can the PS4 do that a Titan cannot? Titan has up to 5.35 TFLOPS of compute performance, the GPU of the PS4 only 1.8 TFLOPS. That is nearly 3x more.
 

Tuna-Fish

Golden Member
Mar 4, 2011
What can the PS4 do that a Titan cannot? Titan has up to 5.35 TFLOPS of compute performance, the GPU of the PS4 only 1.8 TFLOPS. That is nearly 3x more.

It can pass data between the GPU and the CPU by passing pointers to common buffers. The game can issue compute work by writing pre-prepared compute tasks directly into the GPU's command buffers.

This means that if you want to do a little bit of processing on the GPU, then a little bit on the CPU, and then again a little more on the GPU, it can do these transitions much, much faster than a Titan can. A lot of tasks, including physics, contain different phases that fit different kinds of compute better. On an i7/Titan combo, you basically have to fit these tasks entirely on the CPU or entirely on the GPU, because switching between them is too slow, meaning at least part of the task runs on sub-optimal hardware. The PS4 can mix and match as it sees fit.

Not everything is about raw compute power.
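To make that concrete, here's roughly the pattern I mean. This is not PS4 code; it's a plain CUDA zero-copy sketch on a PC, which only mimics the "one buffer, one pointer, both processors" model (the kernel, buffer, and sizes are made up for illustration):

Code:
#include <cstdio>
#include <cuda_runtime.h>

// GPU phase: embarrassingly parallel work, a good fit for the GPU.
__global__ void gpu_phase(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

// CPU phase: a serial dependency chain, a bad fit for the GPU.
void cpu_phase(float *data, int n) {
    for (int i = 1; i < n; ++i)
        data[i] += data[i - 1];
}

int main() {
    const int n = 1 << 20;

    cudaSetDeviceFlags(cudaDeviceMapHost);                    // allow zero-copy mapping

    float *data = nullptr, *gpu_view = nullptr;
    cudaHostAlloc((void **)&data, n * sizeof(float), cudaHostAllocMapped);
    cudaHostGetDevicePointer((void **)&gpu_view, data, 0);    // same buffer, GPU-visible pointer

    for (int i = 0; i < n; ++i) data[i] = 1.0f;

    gpu_phase<<<(n + 255) / 256, 256>>>(gpu_view, n);         // GPU phase
    cudaDeviceSynchronize();                                  // hand off -- no cudaMemcpy anywhere
    cpu_phase(data, n);                                       // CPU phase on the same memory
    gpu_phase<<<(n + 255) / 256, 256>>>(gpu_view, n);         // and back to the GPU
    cudaDeviceSynchronize();

    printf("data[n-1] = %f\n", data[n - 1]);
    cudaFreeHost(data);
    return 0;
}

On a discrete card every access through gpu_view still crawls across PCIe and every synchronize is a driver round trip, so you'd never ship this; the point of the PS4 design is that the same programming model comes without those penalties.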
 

Spjut

Senior member
Apr 9, 2011
It's not an i7/Titan killer. There are a lot of things that an i7/Titan will be able to do that it won't. But there will also be a lot of things that the PS4 will be able to do that a Titan can't. I'm a dev, although I don't work in games at the moment. Draw call overhead is a real thing. It's very telling that a lot of what AMD and Nvidia are doing right now is about reducing that overhead. hUMA and Maxwell's unified virtual memory are aimed directly at this.

I guess a new API would be needed, but what about Maxwell's integrated ARM CPU? Would it be possible for that to be of any benefit in games?
 

2is

Diamond Member
Apr 8, 2012
Except that's not true at all, and you don't seem to have the slightest clue what latency means. Hint: CAS latency has almost nothing to do with what programmers mean when they talk about latency.

I'm drawing an analogy; it was in the second part of that paragraph, in the following sentence, which you decided to truncate... in other words, ignoring the big picture, not unlike the PS4 debate.



It's not an i7/Titan killer. There are a lot of things that an i7/Titan will be able to do that it won't. But there will also be a lot of things that the PS4 will be able to do that a Titan can't. I'm a dev, although I don't work in games at the moment. Draw call overhead is a real thing. It's very telling that a lot of what AMD and Nvidia are doing right now is about reducing that overhead. hUMA and Maxwell's unified virtual memory are aimed directly at this.

Again, looking at the entire scope, I think it would certainly be premature and bordering on foolish to even imply it will give a Titan a run for its money.

I do have to hand it to the hype men (developers): they have psychological trickery down to a science. They won't come out and say that this weak APU is going to go head to head with an i7/Titan, because they know it would be complete BS. What they will do is say things like:

We can 'code to metal' here and we can't there
We can write to a shared memory pool here, we can't there
DirectX sucks

They'll repeat all that over and over again, throw it out there for everyone to see, and let your imagination run wild. Now, while all of the above may be true, they also know that few people have a clue what that even means and fewer still know the real-world implications.
 

sontin

Diamond Member
Sep 12, 2011
It can pass data between the GPU and the CPU by passing pointers to common buffers. The game can issue compute work by writing pre-prepared compute tasks directly into the GPU's command buffers.

This means that if you want to do a little bit of processing on the GPU, then a little bit on the CPU, and then again a little more on the GPU, it can do these transitions much, much faster than a Titan can. A lot of tasks, including physics, contain different phases that fit different kinds of compute better. On an i7/Titan combo, you basically have to fit these tasks entirely on the CPU or entirely on the GPU, because switching between them is too slow, meaning at least part of the task runs on sub-optimal hardware. The PS4 can mix and match as it sees fit.

Not everything is about raw compute power.

You don't want to send data back to the CPU. It makes no sense for a graphics workload. And when you increase the workload of the GPU, latency is no problem at all.

With 1.8 TFLOPS, developers will have to make sacrifices on graphics or compute tasks.
 

2is

Diamond Member
Apr 8, 2012
Nope. They are on the same chip, sharing the same memory interface. This is completely different.

So instead of "The PS4 is a mid level GPU and a low level CPU on a motherboard"

We can say "The PS4 is a mid level GPU and a low level CPU on the same die with a shared pool of GDDR5 on the motherboard"

A couple of key aspects haven't changed there.
 

2is

Diamond Member
Apr 8, 2012
It can pass data between the GPU and the CPU by passing pointers to common buffers. The game can issue compute work by writing pre-prepared compute tasks directly into the GPU's command buffers.

This means that if you want to do a little bit of processing on the GPU, then a little bit on the CPU, and then again a little more on the GPU, it can do these transitions much, much faster than a Titan can. A lot of tasks, including physics, contain different phases that fit different kinds of compute better. On an i7/Titan combo, you basically have to fit these tasks entirely on the CPU or entirely on the GPU, because switching between them is too slow, meaning at least part of the task runs on sub-optimal hardware. The PS4 can mix and match as it sees fit.

Not everything is about raw compute power.


You know what I read here... I read that code optimized to run on the PS4 runs more efficiently (efficiently != faster) on the PS4 than it would on a PC.

I could make the PS4 sound horribly inefficient if I were a hype man too... The i7/Titan combo can fit these tasks entirely on the CPU or entirely on the GPU, leveraging the 3x compute performance it's capable of and not needing to contend with wasted cycles from switching between the two.

In other words, code optimized to run on PC hardware runs more efficiently on PC hardware.

Like I said, these guys are clever: they take a completely obvious premise, change the way they present it, and throw in a couple of technical terms.

Hook
Line
Sinker
 

raghu78

Diamond Member
Aug 23, 2012
It can pass data between the GPU and the CPU by passing pointers to common buffers. The game can issue compute work by writing pre-prepared compute tasks directly into the GPU's command buffers.

This means that if you want to do a little bit of processing on the GPU, then a little bit on the CPU, and then again a little more on the GPU, it can do these transitions much, much faster than a Titan can. A lot of tasks, including physics, contain different phases that fit different kinds of compute better. On an i7/Titan combo, you basically have to fit these tasks entirely on the CPU or entirely on the GPU, because switching between them is too slow, meaning at least part of the task runs on sub-optimal hardware. The PS4 can mix and match as it sees fit.

Not everything is about raw compute power.


Very well said. In today's PC architecture, physics runs on the CPU, the GPU, or sometimes both, but the GPU physics does not interact with the CPU physics. With an APU you have the ability to split the physics calculations and run them on both the CPU and GPU simultaneously. More importantly, the GPU physics calculations can affect the CPU physics and vice versa. That's not the case currently.


Also, the Titan is a bad comparison with the PS4, as power, performance and price are worlds apart. The PS4 system as a whole would draw 150-170 W, less than a GTX 680's power draw. The PS4, because of its console architecture and close-to-metal programmability, is on par in performance with a desktop HD 7970 in terms of what can be achieved technically. Lastly, the PS4 system will cost around USD 400, so comparisons with the Titan are useless.
 

Tuna-Fish

Golden Member
Mar 4, 2011
You don't want to send data back to the CPU. It makes no sense for a graphics workload.

Not true. For example, look at the framebuffer ray marching done in Killzone Shadow Fall. That's a good example of a graphics task that would be much better done on a lower-latency device with good caches, i.e., a CPU.

The reason modern graphics tasks don't and can't utilize the CPU is that the transitions are simply too expensive. It isn't done because it can't be done, not because devs don't want to do it.

And when you increase the workload of the GPU, latency is no problem at all.
Again, not true. Just look at all the trouble id had porting Rage to the PC.

With 1.8 TFLOPS, developers will have to make sacrifices on graphics or compute tasks.

With any amount of compute power, devs will have to make sacrifices. I think I could easily make use of some 10 TFLOP per frame for full HD; at 60 fps, that's 600 TFLOPS. Alas, we don't have that, so we have to make sacrifices. However, I really do think I'd rather take lower latencies over more compute power in this case.

I'm drawing an analogy; it was in the second part of that paragraph, in the following sentence, which you decided to truncate... in other words, ignoring the big picture, not unlike the PS4 debate.

My problem with your analogy is that it was simply factually incorrect, and it betrays that you don't have a clue what you're talking about. DDR1 does not have lower latency than DDR3, and frankly, most of the real performance gain between them for CPUs comes not from the added bandwidth but from the lower latency.

Again, looking at the entire scope, I think it would certainly be premature and bordering on foolish to even imply it will give a Titan a run for its money.
For some things, it will not just give it a run for its money; it will run circles around it while taunting it.

I do have to hand it to the hype men (developers): they have psychological trickery down to a science.

And why would I do that? I'm not employed by Sony, and I'm certainly not in any way pro-Sony. I really, really hated the PS3; you can search my old posts here for proof if you want. I have no reason to want to trick you.

They won't come out and say that this weak APU is going to go head to head with an i7/Titan, because they know it would be complete BS. What they will do is say things like:

We can 'code to metal' here and we can't there
We can write to a shared memory pool here, we can't there
DirectX sucks

They'll repeat all that over and over again, throw it out there for everyone to see, and let your imagination run wild. Now, while all of the above may be true, they also know that few people have a clue what that even means and fewer still know the real-world implications.
That's true. The problem with your thinking is just that raw compute power is simply not a good way to compare systems. If it were, AMD GPUs would have won everything between the HD 2000 series and Titan, and Cell would have been the best CPU before the i7 quad-cores. Alas, GFLOPS are not the only way to compare things, and latency matters.

I really do believe that given enough time, I could use the PS4 APU to produce better image quality and smoother frames than I could do with a Titan/i7 combo, assuming that the code running on Titan/i7 has to also run on other pc systems and uses common APIs.

However, I think that the next-gen GPUs will claw back much of this advantage, while adding another 2x factor in raw performance. There's a reason I mentioned the unified virtual memory of Maxwell and the hUMA of next-gen AMD parts. As with past gens, the console performance advantage will be very short-lived.
 

2is

Diamond Member
Apr 8, 2012
I wasn't implying you're hyping the product, I was implying you may be part of the population being fooled by the hype.

I'm also fully aware that raw compute power isn't the end game. I just don't think the efficiency factor of the PS4 is enough to overcome the raw compute deficit it faces. We aren't talking about 30% more powerful PCs; we're talking about 300% more powerful PCs.
 

aigomorla

CPU, Cases&Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
If next-gen gaming fails due to casual gaming on buttonless abominations like the iPad, I will give it up altogether.

:thumbsup: :D :sneaky:

I too will give up gaming with a giant facepalm if this is the direction we're going.

You know what I read here... I read that code optimized to run on the PS4 runs more efficiently (efficiently != faster) on the PS4 than it would on a PC.

And that's why PS3 games are developed on PCs?

You remember the many troll events Sony pulled on us, showing something off on a PC platform and then saying... oh, it was made with PC software, but it's entirely ONLY PS3?

Do Final Fantasy XIII and all the other FF games ring a bell?

http://kotaku.com/5028961/of-course-final-fantasy-xiii-is-pc-bound-[updated]

"We contacted Square Enix, and according to the company, "The game is being built using PC-based development tools, but that doesn't mean it's being created for the PC platform. "

So tell me how a program made on a PC is not as efficient when it's ported to a console?
Oh, that's right... WE DON'T GET CRAPPY DRM on consoles... that's the ONLY place consoles win.
When you load a game, IT WILL WORK, guaranteed, instead of the BS nightmares we face with Ubisoft and other publishers who don't know how to code DRM into a game correctly.

You console freaks... all you need to say is "we win because we have no DRM."
End of discussion...!!!

Any PC enthusiast will give up this argument before it begins if you throw in PC DRM.
 

Lepton87

Platinum Member
Jul 28, 2009
I think the PS4 has a 7850, which is mid-range today.
That's not accurate at all; it has more shader cores and the architecture is updated, hence its architecture is called GCN 2.0.
You can definitely squeeze way more performance out of it than even a 7870. Optimizations can go a long way; who knows, maybe developers will be able to produce better graphics with it than my PC can with a Titan inside.
 

Beavermatic

Senior member
Oct 24, 2006
It can pass data between the GPU and the CPU by passing pointers to common buffers. The game can issue compute work by writing pre-prepared compute tasks directly into the GPU's command buffers.

This means that if you want to do a little bit of processing on the GPU, then a little bit on the CPU, and then again a little more on the GPU, it can do these transitions much, much faster than a Titan can. A lot of tasks, including physics, contain different phases that fit different kinds of compute better. On an i7/Titan combo, you basically have to fit these tasks entirely on the CPU or entirely on the GPU, because switching between them is too slow, meaning at least part of the task runs on sub-optimal hardware. The PS4 can mix and match as it sees fit.

Not everything is about raw compute power.



Yeah, we get it. No one has argued it doesn't use APU integration.

The point is... APU integration is nothing new. It's been around for nearly a decade in the home PC market, and similar technologies have existed for a couple of decades now in various things.

And it has its advantages and disadvantages. Now... tell me why I should be excited it's on the PS4, or why I should care. Technically, all the recent console generations have had "some form" of direct CPU/GPU architecture integration, even if not as elaborate.

My point is they are overhyping something that isn't new. It won't make the world of difference you think it will.
 

Olikan

Platinum Member
Sep 23, 2011
That's not accurate at all; it has more shader cores and the architecture is updated, hence its architecture is called GCN 2.0.
You can definitely squeeze way more performance out of it than even a 7870. Optimizations can go a long way; who knows, maybe developers will be able to produce better graphics with it than my PC can with a Titan inside.

doubt it...
optimizations can't help against ROP and TMU performance

...unless voxel-based engines become mainstream :D
 

Stuka87

Diamond Member
Dec 10, 2010
Yeah, we get it. No one has argued it doesn't use APU integration.

The point is... APU integration is nothing new. It's been around for nearly a decade in the home PC market, and similar technologies have existed for a couple of decades now in various things.

And it has its advantages and disadvantages. Now... tell me why I should be excited it's on the PS4, or why I should care. Technically, all the recent console generations have had "some form" of direct CPU/GPU architecture integration, even if not as elaborate.

My point is they are overhyping something that isn't new. It won't make the world of difference you think it will.

The APU in the PS4 is the first one ever to use hUMA (heterogeneous Uniform Memory Access). It is very much a new thing. It will be coming to other APUs from AMD at some point, but it is a real and innovative technology.
 

notty22

Diamond Member
Jan 1, 2010
The APU in the PS4 is the first one ever to use hUMA (heterogeneous Uniform Memory Access). It is very much a new thing. It will be coming to other APUs from AMD at some point, but it is a real and innovative technology.

Do you have a link for that? I know the technology has been announced by AMD, but I haven't heard of an application.

Edit: I googled. It's an unknown.

When will we see it? (And what will it mean?)

When asked, AMD stated that Kaveri (due in the second half of 2013) will be the first chip to use these second-generation HSA features. The G-series embedded parts announced last week, based on Kabini, will not. I’m going to go out on a limb and say I’ll be surprised if this new technology doesn’t show up in the Xbox Durango and PS4, even if the graphics cores in those products are otherwise based on GCN.
http://www.extremetech.com/gaming/1...u-memory-should-appear-in-kaveri-xbox-720-ps4
 

Tuna-Fish

Golden Member
Mar 4, 2011
You know what I read here... I read that code optimized to run on the PS4 runs more efficiently (efficiently != faster) on the PS4 than it would on a PC.

I could make the PS4 sound horribly inefficient if I were a hype man too... The i7/Titan combo can fit these tasks entirely on the CPU or entirely on the GPU, leveraging the 3x compute performance it's capable of and not needing to contend with wasted cycles from switching between the two.

In other words, code optimized to run on PC hardware runs more efficiently on PC hardware.

Except that you're not supposed to burn compute; you're supposed to get the job done. And if some task takes either 100k ops on the CPU, 1,000k ops on the GPU, or (20k ops on the CPU and 200k ops on the GPU), it runs more than twice as fast on the system that can afford to switch. And damn near everything interesting has serial parts that run awfully slowly on the GPU.
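Back-of-the-envelope version of that example, with one assumption the numbers above don't state: that the GPU gets through its ops roughly ten times faster than the CPU does. Under that assumption the mixed version comes out 2.5x faster:

Code:
#include <cstdio>

int main() {
    // Hypothetical task costs from the example above.
    const double cpu_only  = 100e3;   // 100k CPU ops
    const double gpu_only  = 1000e3;  // 1000k GPU ops
    const double mixed_cpu = 20e3;    // 20k CPU ops...
    const double mixed_gpu = 200e3;   // ...plus 200k GPU ops

    // Assumption (mine, not from the post): GPU throughput ~10x CPU for this work,
    // so express GPU ops in CPU-op equivalents by dividing by 10.
    const double gpu_speedup = 10.0;

    double t_cpu   = cpu_only;                            // 100k
    double t_gpu   = gpu_only / gpu_speedup;              // 100k
    double t_mixed = mixed_cpu + mixed_gpu / gpu_speedup; // 20k + 20k = 40k

    printf("CPU only: %.0f   GPU only: %.0f   mixed: %.0f (%.1fx faster than CPU only)\n",
           t_cpu, t_gpu, t_mixed, t_cpu / t_mixed);
    return 0;
}

Of course, that only works out if switching between the two is nearly free, which is the whole argument.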

I wasn't implying you're hyping the product, I was implying you may be part of the population being fooled by the hype.

It's not like I started liking low system latencies because Sony started advertising them. I've known I'd like to have this stuff for a decade, and so have a lot of other programmers. Just go find a random programming talk by Carmack; you'll hear at least one rant about the absurd cost of draw calls on the PC. I know what they are offering, and given my experience programming for such systems, I know that it's worth more than two or three times the compute.

I'm also fully aware that raw compute power isn't the end game. I just don't think the efficiency factor of the PS4 is enough to overcome the raw compute deficit it faces. We aren't talking about 30% more powerful PCs; we're talking about 300% more powerful PCs.

Actually, just 150% more powerful PCs. The Titan is 4.5 TFLOPS, the PS4 is 1.8 TFLOPS: a 2.5x difference. And given my experience in programming such systems, I do think that the overall lower latencies are worth more than the 150% higher raw throughput.

So instead of "The PS4 is a mid level GPU and a low level CPU on a motherboard"

We can say "The PS4 is a mid level GPU and a low level CPU on the same die with a shared pool of GDDR5 on the motherboard"

Entirely correct. And yes, that's better than a high-end CPU and GPU sitting separately on a motherboard.

doubt it...
optimizations can't help against ROP and TMU performance

Yes, they very much do if you use a deferred engine, like almost everyone in the industry does today. Besides, I think the PS4 is healthily overprovisioned on ROPs.

The point is... APU integration is nothing new. It's been around for nearly a decade in the home PC market, and similar technologies have existed for a couple of decades now in various things.

And it has its advantages and disadvantages. Now... tell me why I should be excited it's on the PS4, or why I should care. Technically, all the recent console generations have had "some form" of direct CPU/GPU architecture integration, even if not as elaborate.

My point is they are overhyping something that isn't new.

And you're simply wrong about that. It's pretty new. The Xbox 360 doesn't have proper cache coherence between the GPU and CPU, which means there is still significant latency between them. It's true that there are PC GPUs that have such low latencies, specifically Intel HD Graphics, but no dev can write games that depend on the latencies being so low, because most of the market doesn't use Intel integrated GPUs.
 

Cerb

Elite Member
Aug 26, 2000
Is the zebra black with white stripes, or white with black stripes? :p
Zebra is somebody Mr. Jefferson would not be happy about. :eek:

What can the PS4 do that a Titan cannot? Titan has up to 5.35 TFLOPS of compute performance, the GPU of the PS4 only 1.8 TFLOPS. That is nearly 3x more.
Move data between the CPU and GPU in under 1000 cycles. In some ideal corner cases, maybe under 100. In other cases, by passing no more than the address of that data.

It will be something that near-future AMD and Intel chips, along with the PS4, will be able to do that no video card can do. Sharing virtual memory with video cards is happening, but tight coupling of code simply isn't going to be done, because that requires tight coupling of data, which requires fast access to that data to be worth doing. I.e., it's not that the Titan will be slower, but that some code just won't be written for a setup where such memory swapping goes RAM->PCI-e->VRAM->PCI-e->CPU, instead of simply GPU/CPU<->RAM. The XB360 was close, but it didn't have the GPU features, and it had an extra bus to go over.
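For contrast with the zero-copy sketch earlier in the thread, here's what the same two-phase job looks like when the GPU sits on the other side of PCI-e: every hand-off is an explicit staged copy (the buffer names, kernel, and sizes are made up, and this is deliberately naive):

Code:
#include <cstdio>
#include <cuda_runtime.h>

__global__ void gpu_phase(float *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

void cpu_phase(float *data, int n) {
    for (int i = 1; i < n; ++i) data[i] += data[i - 1];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float *host_buf = new float[n];              // system RAM
    for (int i = 0; i < n; ++i) host_buf[i] = 1.0f;

    float *dev_buf = nullptr;
    cudaMalloc((void **)&dev_buf, bytes);        // VRAM on the card

    // Every CPU<->GPU transition is a trip across the bus:
    cudaMemcpy(dev_buf, host_buf, bytes, cudaMemcpyHostToDevice); // RAM -> PCI-e -> VRAM
    gpu_phase<<<(n + 255) / 256, 256>>>(dev_buf, n);
    cudaMemcpy(host_buf, dev_buf, bytes, cudaMemcpyDeviceToHost); // VRAM -> PCI-e -> RAM
    cpu_phase(host_buf, n);
    cudaMemcpy(dev_buf, host_buf, bytes, cudaMemcpyHostToDevice); // and across again
    gpu_phase<<<(n + 255) / 256, 256>>>(dev_buf, n);
    cudaMemcpy(host_buf, dev_buf, bytes, cudaMemcpyDeviceToHost);

    printf("host_buf[n-1] = %f\n", host_buf[n - 1]);
    cudaFree(dev_buf);
    delete[] host_buf;
    return 0;
}

Each of those cudaMemcpy calls costs bus latency plus driver overhead on top of the transfer itself, which is why nobody structures PC code this way; the work stays on one side instead.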

If latency weren't important, we'd have much faster RAM, we wouldn't worry about CPUs sharing caches, and that pie-in-the-sky "everything in your house will talk to everything else, like a home LAN supercomputer" idea might have worked.

It's not like we'll be left out in the cold. We'll have better Intel IGPUs, new AMD APUs (and maybe software, which is the major bottleneck for current ones), along with Intel and AMD CPUs with AVX2 support, in the coming years. But the new consoles are getting the goodies, in a way that can be made use of, much earlier.

I'm surprised we haven't seen a petition to bring back DDR1 to modern computers. I mean, it has MUCH lower latency than DDR3, so it must be better, right?
Nope. Among other things, DDR3 latency is perfectly fine. We got to <100 ns random access with the K8 and DDR, and it's held pretty steady since, with some minor bumps and dips (luckily, more dips). As a generalization, PC3200 at CAS 3 = PC2-6400 at CAS 6 = PC3-12800 at CAS 12 is awfully close to being dead on (though recent Intel CPUs seem to have faster actual access than AMD ones). The total time spent sending and receiving data also drops as speeds increase, which contributes to lower average latencies by increasing the amount of addressing that can be done over a given span of time.
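That equivalence is easy to check: CAS latency is counted in memory-bus clocks, so the wall-clock figure is cycles divided by the bus clock. A quick sanity check with the module speeds above:

Code:
#include <cstdio>

int main() {
    // Bus clocks in MHz and CAS in bus-clock cycles for the modules named above.
    struct Module { const char *name; double bus_mhz; double cas_cycles; };
    const Module mods[] = {
        { "DDR-400   (PC3200),    CAS 3",  200.0,  3.0 },
        { "DDR2-800  (PC2-6400),  CAS 6",  400.0,  6.0 },
        { "DDR3-1600 (PC3-12800), CAS 12", 800.0, 12.0 },
    };

    for (const Module &m : mods) {
        double ns = m.cas_cycles * 1000.0 / m.bus_mhz;  // cycles / MHz, scaled to nanoseconds
        printf("%-32s -> %.1f ns\n", m.name, ns);
    }
    return 0;
}

All three come out to 15 ns, which is the point: the cycle count climbs with the clock, but the wall-clock CAS time has barely moved.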

It just seems worse because CPUs have gotten so much more powerful. It's not RAM getting slower in wall time; it's RAM getting 10% faster average access times while the CPU gets 50-100% faster at processing in that same span of time.

The point is... APU integration is nothing new. It's been around for nearly a decade in the home PC market.
Two years is very far from "nearly a decade." This integration was arguably first done with Sandy Bridge. Near-future AMD CPUs, and the consoles, should be the first x86 chips with fully usable GPGPUs sharing the virtual address space, physical address space, and actual physical RAM with the CPU, on the same die. Haswell supposedly has some GPGPU improvements too, so don't count Intel out either (they're just busy trying to redefine TDP again, instead of hyping GPGPU :)).
 

2is

Diamond Member
Apr 8, 2012
Nope. Among other things, DDR3 latency is perfectly fine.

Yes, I know they're perfectly fine. That's my point: the CL numbers alone don't tell the whole story. Neither does the shared memory access of the PS4, which is what much of this PS4 > i7/Titan rhetoric is based on.
 

alcoholbob

Diamond Member
May 24, 2005

What in the absolute hell are you talking about?! Sure, the PS4 is gonna have some yet-to-be-determined API that might be all nice and spiffy as far as resources and code go, but the hardware is still x86 (just like a PC). And the API can only make calls as fast as the CPU or GPU can allow. It'll still have to function similarly to OpenGL or DirectX, or within that kind of framework. That APU integration is nothing new, nor revolutionary; it's just based on already existing tech. And APU integration has been around for over a decade now, if you're going by AMD's coined definition. I'm not even sure the guy above praising the PS4 APU even really knows what an APU is, or what components define it, lololol.

I don't know who keeps feeding you console sheep these illusions of technical grandeur or superiority, but whoever's doing it has you buying into the biggest load of bull. Maybe their marketing skills are vastly superior, but their hardware sure isn't.

lol, listen to Beaver, don't listen to devs like John Carmack.
 

Cerb

Elite Member
Aug 26, 2000
Yes, I know they're perfectly fine. That's my point: the CL numbers alone don't tell the whole story. Neither does the shared memory access of the PS4, which is what much of this PS4 > i7/Titan rhetoric is based on.
The PS4 > i7/Titan idea is in your head. Take those goggles off. The PS4 has great potential due to the lack of the sucktitude that the PS3's CPU and RAM exhibited, which spilled over to the XB360. The last console gen kinda sucked and dragged down big-budget titles, and then it took years for a good indie scene to come up on the PC. I'm hopeful for the consoles not dragging PC titles down to their level in multiplatform titles (last gen had all the scalar performance of an Athlon 64, spread across multiple threads), and for games on them directly that aren't hampered so much by the hardware.

Your i7 can run software a PS4 never will. Same with your Titan.

It's possible that these SoCs may do things that the i7+Titan cannot. I.e., the PC dev will need to wait until future x86 CPUs become more common (future AMD and Intel IGPs with support for various languages, along with AVX2, will come to exist). Brute force can only get you so far. Tricks to work around sharing memory (physical RAM, and the page tables, with cache coherency) have been tried many times over the years (the Cell's overlays were the last major failure of the approach, but far from the first). One big flat memory space, with all the HW complexity and all the little stalls that entails, has won over and over and over again, because it (a) allows some programs to be made that otherwise wouldn't be, (b) removes small inefficiencies that can add up in practical software (but don't show up much in simple synthetic benchmarks), and (c) simplifies development by orders of magnitude.

We'll be getting the same stuff, or rough equivalents, but later (well after SDP suffers the same fate as the special P4 TDP did, probably ;)).
 

Beavermatic

Senior member
Oct 24, 2006
lol, listen to Beaver, don't listen to devs like John Carmack.

John Carmack.... he left us. HE LEFT US.

DOOM 3 BFG was supposed to have Oculus VR support upon shipping the dev kits... and he left us high and dry.

Where was he when we needed him the most?!
 