bystander36
Diamond Member
Obviously if a game is made with OpenGL, it will render in OpenGL. The problem is AMD and Nvidia do not have proper OpenGL support on the desktop. That would have to change if all the games start using OpenGL.
Kinda bummed about being frozen out of sports this generation as well. I understand the benefits of a unified memory pool, but does anyone truly believe a decent gaming PC couldn't handle the sports titles?
Consoles should have packed in 16GB of RAM; take away the OS overhead and you'd be left with 5GB or 6GB. If these consoles are going to stagnate for another 7 years, more RAM would have been a better idea.
The problem with PC sports games is that they don't require us to buy a new game each year for an updated roster; we have free mods for that instead of spending $60 a year.
PS4 uses about 1GB for OS, with 7GB left over. XB1 uses about 3GB for OS with 5GB left over. How are you arriving at 5-6GB left-over after OS overhead on a hypothetical console with 16GB of RAM?
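For what it's worth, the arithmetic is easy to sanity-check. Here's a tiny sketch using the OS-reservation figures cited in this thread; the 16GB console is hypothetical, and I'm assuming it would keep roughly an XB1-sized OS reservation:

```python
# Rough RAM-after-OS arithmetic for the consoles discussed above.
# OS reservation figures are the approximate ones cited in this thread;
# the 16GB console and its 3GB OS reservation are assumptions.
consoles = {
    "PS4": {"total_gb": 8, "os_gb": 1},    # ~1 GB reserved
    "XB1": {"total_gb": 8, "os_gb": 3},    # ~3 GB reserved
    "hypothetical 16GB box": {"total_gb": 16, "os_gb": 3},
}

for name, c in consoles.items():
    free = c["total_gb"] - c["os_gb"]
    print(f"{name}: {free} GB left for games")
# -> PS4: 7 GB left for games
# -> XB1: 5 GB left for games
# -> hypothetical 16GB box: 13 GB left for games
```

So even with a heavy XB1-style OS, a 16GB console would leave around 13GB for games, not 5-6GB.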
That's definitely a good thing, since even the worst GDDR3/5 memory is significantly faster in terms of latency/access than DDR3.
You know what is really going to happen? Developers will spend millions to build a console game, and then, because it will be so easy, they will simply port it to the PC. So we will be stuck with console games on PC. They won't bother doing more. I mean, why should they? Look, you get the same experience as the console guy!
The ultimate goal will be to push people to consoles.
Do you have any sources for this? I had a spirited debate with a few guys about this in another thread a few weeks ago, and I told them that DDR3 has lower latency than GDDR5. I told them this not because I had read any solid engineering sources, but because that was the putative view held by many on computer-oriented forums across the net. Looking at the latency times for GDDR5 on a video card, it's MUCH higher than DDR3. But I don't know if that's a function of the memory itself or of the memory access pattern of the GPU, which tends to be more focused on bandwidth. Since GPUs are inherently parallel, they aren't affected by latency as much as a CPU.
I searched for a good source, but I was never able to find any.
Latency was the wrong term; what I meant was peak output. If you look up the JEDEC specifications, the output (Gbit/s) of GDDR5 vs. DDR3 is heavily in favor of GDDR5, which is why GPUs don't use DDR3: DDR3's bandwidth is far lower than GDDR5's. GDDR5 has more than twice the bandwidth of DDR3, depending on clock speed. That 2-3x higher bandwidth does come at a cost, though: it is absurdly expensive. By contrast, DDR3 is dirt cheap.
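To put rough numbers on that, peak theoretical bandwidth is just the transfer rate times the bus width. The clocks and the 64-bit width below are illustrative assumptions chosen for an apples-to-apples comparison, not quoted JEDEC figures:

```python
# Peak theoretical bandwidth = effective transfer rate * bus width.
# The transfer rates and 64-bit bus width are illustrative assumptions,
# picked so the two memory types are compared at equal width.
def peak_bandwidth_gbs(transfer_rate_mts, bus_width_bits):
    # MT/s * bytes-per-transfer -> bytes/s -> GB/s
    return transfer_rate_mts * 1e6 * (bus_width_bits / 8) / 1e9

print(peak_bandwidth_gbs(1600, 64))  # DDR3-1600, one 64-bit channel -> 12.8 GB/s
print(peak_bandwidth_gbs(5500, 64))  # GDDR5 at 5.5 GT/s, same width -> 44.0 GB/s
```

At the same bus width, the GDDR5 part moves more than three times the data per second, which is the whole reason GPUs (and the PS4) pay for it.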
Yeah, but what would be the point of using GDDR5 on a desktop CPU when it can't even utilize that bandwidth? Also, CPUs tend to be more restricted by latency, and if I'm right, DDR3 has much lower latency than GDDR5.
I'm still in favor of hybrid memory types for gaming rigs rather than unified. Unified may work best for consoles or low end computers, but for high end, it's better to have lower latency RAM for the CPU, and higher bandwidth RAM for the GPU.
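The latency question above can at least be half-checked by converting CAS latency from cycles to nanoseconds: CL is quoted in command-clock cycles, and for DDR memory the command clock is half the effective transfer rate. The timings below are common DDR3 examples and are illustrative only; I couldn't find comparable published GDDR5 timings either, which is part of why this is hard to settle:

```python
# Absolute CAS latency (ns) = CL cycles / command clock.
# For DDR memory the command clock is half the effective transfer rate.
# The CL values and speed grades below are illustrative DDR3 examples.
def cas_latency_ns(cl_cycles, effective_mts):
    command_clock_mhz = effective_mts / 2  # e.g. DDR3-1600 -> 800 MHz
    return cl_cycles / command_clock_mhz * 1000.0

print(cas_latency_ns(9, 1600))   # DDR3-1600 CL9  -> 11.25 ns
print(cas_latency_ns(11, 2133))  # DDR3-2133 CL11 -> ~10.3 ns
```

The takeaway is that absolute DDR3 latency sits around 10-13 ns regardless of clock; without a comparable published CL figure for GDDR5, the DDR3-vs-GDDR5 latency comparison stays unresolved.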
The Xbone is using eSRAM, which should give it the advantage in latency compared to the PS4's GDDR5.
The point of unified GDDR5 on a desktop would be to speed up the iGPU immensely.
I don't know if this is a serious post.
I don't want to be that guy, but uh, isn't all of this a moot point since there is no way the AMD CPU is going to be able to take advantage of all the bandwidth available to it?
Like probably a tenth of its bandwidth?
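For the sake of argument, here's that fraction if you take the PS4's quoted ~176 GB/s pool and assume a Jaguar-class CPU can realistically stream on the order of 20 GB/s. The CPU figure is my assumption, not a measured number:

```python
# Hypothetical utilization: what slice of a 176 GB/s unified pool could
# a low-clocked Jaguar CPU actually consume? The 20 GB/s CPU figure is
# an assumed ballpark, not a benchmark result.
total_bandwidth_gbs = 176.0  # PS4's quoted GDDR5 bandwidth
cpu_streaming_gbs = 20.0     # assumed achievable CPU bandwidth

fraction = cpu_streaming_gbs / total_bandwidth_gbs
print(f"CPU could touch about {fraction:.0%} of the pool")
# -> CPU could touch about 11% of the pool
```

Which lands right around the "a tenth of it" guess, if the assumption holds.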
Quote:
Obviously if a game is made with OpenGL, it will render in OpenGL. The problem is AMD and Nvidia do not have proper OpenGL support on the desktop. That would have to change if all the games start using OpenGL.
Quote:
Do you have any sources for this? I had a spirited debate with a few guys about this in another thread a few weeks ago, and I told them that DDR3 has lower latency than GDDR5. I told them this not because I had read any solid engineering sources, but because that was the putative view held by many on computer-oriented forums across the net. Looking at the latency times for GDDR5 on a video card, it's MUCH higher than DDR3. But I don't know if that's a function of the memory itself or of the memory access pattern of the GPU, which tends to be more focused on bandwidth. Since GPUs are inherently parallel, they aren't affected by latency as much as a CPU.
I searched for a good source, but I was never able to find any.
Quote:
I don't want to be that guy, but uh, isn't all of this a moot point since there is no way the AMD CPU is going to be able to take advantage of all the bandwidth available to it?
Like probably a tenth of its bandwidth?
Um wut. One of the most popular desktop games, Minecraft, is OpenGL.
The GPU will certainly be able to use that bandwidth!