Next gen console's effect on PC gaming graphics


TheShiz

Diamond Member
Oct 9, 1999
3,846
0
0
Graphics don't make games great; gameplay makes games great. Good Xbox 360 games still look and play great to me. Gears 3 and Need for Speed: Hot Pursuit come to mind. I can't imagine either being that much better on a next-gen system. Console manufacturers have to come to a happy medium with price/graphics. With each generation, graphics matter less and less; the law of diminishing returns most definitely is coming into play. I have an arcade racing game, Sega Super GT Twin. It is coming up on 16 years old and is still a lot of fun to me. The graphics are Dreamcast-era, but it has force feedback steering and decent pedals and all that. It still holds its own 16 years later.
 

wuliheron

Diamond Member
Feb 8, 2011
3,536
0
0
Graphics don't make games great; gameplay makes games great. Good Xbox 360 games still look and play great to me. Gears 3 and Need for Speed: Hot Pursuit come to mind. I can't imagine either being that much better on a next-gen system. Console manufacturers have to come to a happy medium with price/graphics. With each generation, graphics matter less and less; the law of diminishing returns most definitely is coming into play. I have an arcade racing game, Sega Super GT Twin. It is coming up on 16 years old and is still a lot of fun to me. The graphics are Dreamcast-era, but it has force feedback steering and decent pedals and all that. It still holds its own 16 years later.

Better graphics merely mean that much more fun can be added. For its day, "Pong" was cutting-edge graphics, but the limitations meant the amount of fun that could be had was limited as well. Here's a demo running on a current PS3 that illustrates just how much more immersive video games are about to become thanks to better graphics:

http://www.youtube.com/watch?v=Dou4Gy0p97Y

Plans are already being made to produce the first video games that use the exact same graphics as feature-length films, and people will soon have to ask whether they are watching a movie or playing a game.
 

wuliheron

Diamond Member
Feb 8, 2011
3,536
0
0
5 years bro

Yeah, some of us have been waiting since the first dual-core processors for all this stuff to come to market. It's been a long wait, and the real issues simply aren't technological anymore, but economic. It's all come down to who can provide the supply chain necessary to take the next big leap, and when it is most profitable to do so and crush any competition.
 

Childs

Lifer
Jul 9, 2000
11,313
7
81
http://www.technologyreview.com/communications/39952/

Costs go down overnight all the time, especially in electronics. That's what keeps pushing Moore's Law: the demand never ceases and the costs go down steadily. Even the original Commodore 64 computer went down overnight to a quarter of its original asking price when they found a cheap way to make RAM. We'll just have to wait and see how fast they come down for ultra-resolution TVs.

Out of curiosity, how did you quote me when I did not make that post? But, since you brought it up... I am aware of the content of that article; I've already seen the panels coming up and how they are intended to be used. That doesn't change my position; the article only reinforced it: the drive for these ultra-high-res displays is not for consumer 4K, but phones, tablets, etc. And not to actually run apps or content at the native resolution, because that would be too taxing on the devices. I can talk about it in more detail at a later date. Most likely you'll know what I was getting at when you read about it.

There is no demand for consumer 4K, because there is no content. No one is pushing it because everyone is just settling in on HD. PC users would adopt it because more screen real estate is always good, but it would have to be much cheaper than the current 1600p/1440p displays, as by and large most people don't buy those. Without demand, it's hard to drive down prices. And I'm not saying we won't eventually get there, but as I recall this sub-discussion started with the notion that the next-gen consoles will support 4K and game at that resolution. That still seems unlikely.
 

wuliheron

Diamond Member
Feb 8, 2011
3,536
0
0
you're mistaking good motion capture for graphics

do want to add though that I'd rather have improved physics, AI (THIS IS WHAT I REALLY WANT), and animations than next gen graphics

That's not just motion capture, but a demo running on a current PS3. Motion capture and even photo-realistic systems already exist, but the goal of Quantic Dream is to overcome the uncanny valley using as few resources as possible. At any rate, this demo shows just how immersive next-generation graphics can be using even modestly improved resources. Significant improvements to physics and AI would only add to the immersion and make current video games look like bad Saturday morning cartoons.
 

wuliheron

Diamond Member
Feb 8, 2011
3,536
0
0
Out of curiosity, how did you quote me when I did not make that post?

You'll have to talk to the website moderators since all I did was quote a post apparently made by you.

But, since you brought it up... I am aware of the content of that article; I've already seen the panels coming up and how they are intended to be used. That doesn't change my position; the article only reinforced it: the drive for these ultra-high-res displays is not for consumer 4K, but phones, tablets, etc. And not to actually run apps or content at the native resolution, because that would be too taxing on the devices. I can talk about it in more detail at a later date. Most likely you'll know what I was getting at when you read about it.

There is no demand for consumer 4K, because there is no content. No one is pushing it because everyone is just settling in on HD. PC users would adopt it because more screen real estate is always good, but it would have to be much cheaper than the current 1600p/1440p displays, as by and large most people don't buy those. Without demand, it's hard to drive down prices. And I'm not saying we won't eventually get there, but as I recall this sub-discussion started with the notion that the next-gen consoles will support 4K and game at that resolution. That still seems unlikely.

Utter crap. Nature drives the demand and the market merely responds. According to your logic there is no demand for condom-free sex because marketers have not provided any content for it. People want what they want, want what nature gave them the ability to appreciate, and the market simply follows.
 

Childs

Lifer
Jul 9, 2000
11,313
7
81
You'll have to talk to the website moderators since all I did was quote a post apparently made by you.

And yet the content of the post, which is unedited, has none of that text...

Utter crap. Nature drives the demand and the market merely responds. According to your logic there is no demand for condom-free sex because marketers have not provided any content for it. People want what they want, want what nature gave them the ability to appreciate, and the market simply follows.

The analogy is utter crap, as condom-free sex existed before condoms. And there was a need for condoms: to prevent pregnancy and disease. What's the need for 4K? You need to show people a reason to have it. People who use it now are people who need it, and they pay lots of money to have it. When you provide content or the necessary application, people will start to think they need it. You put a 4K TV next to a 1080p TV, play a Blu-ray, then say: which one do you want, this 4K set at $5K, or this 1080p TV at $1K? Not enough people will pick the 4K. A low adoption rate means movie studios and broadcasters won't look to go to 4K content. And you can't drive down prices when you're not doing the required volume, which then limits the investment to drive down prices even further.

To move the masses towards HD, you had consumer electronics, content providers, and the government pushing people towards it. There is no such push to 4K.
 

wuliheron

Diamond Member
Feb 8, 2011
3,536
0
0
The analogy is utter crap, as condom-free sex existed before condoms. And there was a need for condoms: to prevent pregnancy and disease. What's the need for 4K? You need to show people a reason to have it. People who use it now are people who need it, and they pay lots of money to have it. When you provide content or the necessary application, people will start to think they need it. You put a 4K TV next to a 1080p TV, play a Blu-ray, then say: which one do you want, this 4K set at $5K, or this 1080p TV at $1K? Not enough people will pick the 4K. A low adoption rate means movie studios and broadcasters won't look to go to 4K content. And you can't drive down prices when you're not doing the required volume, which then limits the investment to drive down prices even further.

To move the masses towards HD, you had consumer electronics, content providers, and the government pushing people towards it. There is no such push to 4K.

You've never heard of the iPad 4, I guess. Apple sold 3 million just when the thing first came out. The demand is there, and Hollywood is increasingly filming movies at resolutions as high as 5K because the technology is becoming cheap, even if nobody can play their movies at those resolutions yet. Bottom line, it's just a significantly better picture in every respect, it's becoming cheap, and the entire industry is based on planned obsolescence, with the vast majority of electronic devices being engineered for a five-year lifespan. The only real question remaining is what standards to adopt and how fast it will become the de facto resolution.

Neither of which concerns next-generation consoles, since adding the capability is cheap and easy, and risking being left behind before their ten-year cycle is over is not an option for them.
 

Bman123

Diamond Member
Nov 16, 2008
3,221
1
81
I played CoD on Xbox and PC side by side in a private lobby by myself, and the Xbox was about medium settings compared to the PC. The devs work harder on console games to make them look good. Look at Uncharted: there isn't a person here who can say that game doesn't look awesome.
 

Childs

Lifer
Jul 9, 2000
11,313
7
81
You've never heard of the iPad 4, I guess. Apple sold 3 million just when the thing first came out.

The new iPad doesn't have that screen resolution for movies; it's for the scalable UI so you can have many more pixels. It makes text and images look cleaner by using more pixels while maintaining the same footprint. It's all about DPI. Look at the specs: video playback is limited to 1080p. Movies from iTunes are 1080p, not 2048x1536.

The demand is there, and Hollywood is increasingly filming movies at resolutions as high as 5K because the technology is becoming cheap, even if nobody can play their movies at those resolutions yet.

You have this all mixed up. They film at high resolution and work in high resolution because at those stages the quality matters. Before digital videography, they shot mostly on 35mm or higher, and that doesn't mean that consumers viewed it like that in their homes. You got to see it in theaters in 35mm only. Outside of a theater you watched it on TV, either via cable/broadcast or video, and it wasn't in 35mm. Using digital film lowers their cost for film, and it's also easier to work with in post using computers, but that doesn't mean consumers are going to see it in master form.

Bottom line, it's just a significantly better picture in every respect, it's becoming cheap, and the entire industry is based on planned obsolescence, with the vast majority of electronic devices being engineered for a five-year lifespan. The only real question remaining is what standards to adopt and how fast it will become the de facto resolution.

I am not debating the merits of 4K, it's the timeline. You act like it's going to come out tomorrow, and everything will be in place to support it as a viable product. Lots of things have to happen for it to get any sort of momentum, and it takes time.

Neither of which concerns next-generation consoles, since adding the capability is cheap and easy, and risking being left behind before their ten-year cycle is over is not an option for them.

It was an option for the previous generation in regard to 1080p. Looking at the current rumors, it looks like they are going to try to go even cheaper on the hardware. Even if it's $10 more to enable this functionality, assuming the hardware could do it, that's still extra for something that may not even be used in the lifecycle of the product. They'll most likely target 1080p, then push 4K in the next gen as a reason to get people to buy new consoles. TV makers and content providers will probably be on board by then, and it's a much more appealing sell.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
No, I didn't make a mistake. The rumors are it will use a 6670 and I'm guessing they'll add on the hardware acceleration for partially resident textures first released on the 7970. The only other thing it will require then is extra memory. When you have hardware acceleration the power of the gpu is irrelevant. The thing is hardwired to produce those kinds of resolutions.
No, the GPU is the hardware acceleration, and will need far more processing power and bandwidth to render all the extra pixels, unless it's OK for tomorrow's games to look only slightly better than Quake 3.

http://www.technologyreview.com/communications/39952/

Costs go down overnight all the time, especially in electronics. That's what keeps pushing Moore's Law: the demand never ceases and the costs go down steadily. Even the original Commodore 64 computer went down overnight to a quarter of its original asking price when they found a cheap way to make RAM. We'll just have to wait and see how fast they come down for ultra-resolution TVs.
Costs have lower bounds, and it is rare that it happens overnight. It usually takes years. So much of the technology we use today is already dirt cheap, in a relative sense, with very low margins, and costs have gone up, lately, as much as they have gone down. One set of costs may go down that quickly, but all, within a year or so? Doubtful.

No, I'm talking running physics and AI that require 80-300 simplified processors right on the cpu.
But, those processors aren't very fast at it. I have yet to see a non-marketing test that can show GPU physics being more than 2-3x faster than x86 CPU physics, which is a divide that can be made up for. You have to understand that, until AVX, vector processing on x86 CPUs generally sucked. PPC could leave x86 in the dust for SP FP performance. AVX2 should finally take care of that, and in a way that it can remain a good lowest-common-denominator target for many years.

Things that have traditionally been done on a discrete graphics card will eventually be done on the APU. About 16 cpu cores is the max most home computers can use before it just becomes faster and less energy demanding to use simplified cores.
No, it's not. No amount of weak cores can make up for a strong core. Period. Those weak cores are only good for work that is wasteful on any number of strong cores. They just happen to be insanely good for tasks that can use hundreds or more of dumb ALUs well. There are, however, many tasks that are moderately parallel, that are ideal for neither the strong CPU core nor the dumb core. On the desktop, I doubt we'll even see 16 cores all that soon. All but a small fraction of users can't even use their 8 cores well, and software that could use it is still catching up (catching up well, but still not "there").
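To put some back-of-the-envelope numbers on that (purely illustrative assumptions: 70% of the task parallelizes, and each weak core runs at a quarter of a strong core's speed), the usual Amdahl's-law arithmetic looks like this:

```c
#include <stdio.h>

/* Amdahl's law: speedup = 1 / ((1 - p) + p / n), with p the parallel
 * fraction of the work and n the number of cores. */
static double amdahl(double p, double n)
{
    return 1.0 / ((1.0 - p) + p / n);
}

int main(void)
{
    const double p = 0.7;  /* assumed: 70% of the task parallelizes */

    /* Speedup relative to one strong core. */
    printf("8 strong cores:             %.2fx\n", amdahl(p, 8.0));

    /* 100 weak cores, each assumed to run at 1/4 the speed of a strong
     * core: both the serial and parallel parts run 4x slower, so the
     * whole result scales down by 4. */
    printf("100 weak cores (1/4 speed): %.2fx\n", amdahl(p, 100.0) / 4.0);
    return 0;
}
```

With those made-up numbers, the pile of weak cores ends up slower than a single strong core on the moderately parallel task, which is the point.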

Its the divide and conquer strategy and AMD's newest southern islands architecture does the same thing on a discrete gpu as well allowing 4 different tasks to run simultaneously.
As opposed to the same being done on Fermi?

Nope. The hardware is hardwired to make the decision and the operating system has nothing to do with it.
Aw, changing your argument, I see. The hardware does make a decision. The decision of whether a transaction is successful or not is a far cry from hardware scheduling itself, which x86 does not do (though data transfers can have that done, depending on CPU and chipset). The OS, compiler, and programmer, are still the ones handling thread creation, execution, and destruction, and they do good enough jobs that we may never see a hardware engine for it on x86.

Like I said, HP's new 5nm memristors will hit the market next year.

http://www.theregister.co.uk/2011/10/10/memristor_in_18_months/
At what cost? How long before they are proven in the marketplace? Look, I don't doubt these things can be produced. But they need to be effective and cheap ASAP, which requires that the production scales well, the per-unit costs are low, and the demand skyrockets in a matter of months.

It's not just the mighty Intel, but AMD as well coming out with their own version of hardware accelerated transactional memory.
I know. My issue is that you were arguing that HTM somehow did some hardware thread scheduling, and CPU/GPU scheduling decision-making, not whether HTM is coming out or is a good technology. It is not merely a good technology, but a necessary one, moving forward. However, it does not make the CPU choose to do something on a CPU or GPU, nor will a user-space application schedule execution that the OS does not have knowledge or control over.

It's a convergence of technologies all coming on the market at once, but the research to produce these things goes way back, and the need to bring them to market has grown since the first dual-core processor was produced. For some of this technology the only reason it hasn't come on the market before is because it was simply too expensive and nobody could afford to buy it.
The all at once part is what I mostly doubt. I see it coming out, being expensive, and coming down in price over several years. Mass production alone tends to create problems that simply could not be fathomed before it is tried, consumer demand can be fickle, and today, consumer demand may be a bit stagnant.
 

Tweak155

Lifer
Sep 23, 2003
11,449
264
126
The first thing that needs to happen (IMO) in order to start increasing resolutions is the move to vector graphics. Otherwise it just won't happen. Vector graphics is the more feasible first step, so we're likely to see that first.
 

wuliheron

Diamond Member
Feb 8, 2011
3,536
0
0
No, the GPU is the hardware acceleration, and will need far more processing power and bandwidth to render all the extra pixels, unless it's OK for tomorrow's games to look only slightly better than Quake 3.

If you read the article, this is hardware acceleration that has no relationship to the rest of the graphics processing whatsoever. It's a little added hard wiring that serves only one purpose, the same as a light switch. It also uses the system RAM for virtual memory, so the manufacturers can leave out the extra memory and not raise costs significantly.

Rage does partially resident textures, but all in software that uses either CPU or GPU compute resources. Doom 4 will be the first game ever to use hardware acceleration, and we'll just have to wait and see how effective it is and what amount of system RAM is required for something like 4K resolutions.

Costs have lower bounds, and it is rare that it happens overnight. It usually takes years. So much of the technology we use today is already dirt cheap, in a relative sense, with very low margins, and costs have gone up, lately, as much as they have gone down. One set of costs may go down that quickly, but all, within a year or so? Doubtful.

Again with the generic expressions of pessimism. You really should write poetry or do a comedy show.

But, those processors aren't very fast at it. I have yet to see a non-marketing test that can show GPU physics being more than 2-3x faster than x86 CPU physics, which is a divide that can be made up for. You have to understand that, until AVX, vector processing on x86 CPUs generally sucked. PPC could leave x86 in the dust for SP FP performance. AVX2 should finally take care of that, and in a way that it can remain a good lowest-common-denominator target for many years.

It doesn't matter if GPU physics are only a few times faster. Again, it's the divide-and-conquer strategy: divide up the tasks and do more things simultaneously. In an ideal world we'd all have 60 THz single-core processors to replace all this byzantine computer stuff, but this ain't an ideal world.

As opposed to the same being done on Fermi?

Aw, changing your argument, I see. The hardware does make a decision. The decision of whether a transaction is successful or not is a far cry from hardware scheduling itself, which x86 does not do (though data transfers can have that done, depending on CPU and chipset). The OS, compiler, and programmer, are still the ones handling thread creation, execution, and destruction, and they do good enough jobs that we may never see a hardware engine for it on x86.

Nope. You need to read up on transactional memory. The hardware in this case decides all by itself which cores do what.

At what cost? How long before they are proven in the marketplace? Look, I don't doubt these things can be produced. But they need to be effective and cheap ASAP, which requires that the production scales well, the per-unit costs are low, and the demand skyrockets in a matter of months.

There's that pessimism rearing its ugly head again. Certainly nothing I can say will soothe the savage beast. You need to go straight to the source and argue with HP. Good luck with that.

I know. My issue is that you were arguing that HTM somehow did some hardware thread scheduling, and CPU/GPU scheduling decision-making, not whether HTM is coming out or is a good technology. It is not merely a good technology, but a necessary one, moving forward. However, it does not make the CPU choose to do something on a CPU or GPU, nor will a user-space application schedule execution that the OS does not have knowledge or control over.

It chooses. That's the whole point: the hardware itself chooses if the programmer doesn't. Compilers and whatnot can help, but this way it goes straight to the source of the problem. They tried desperately to find some way to have the first multicore CPUs thread themselves, and theoretically it is possible, but nobody has a clue about how to do it. The math is just too difficult. With transactional memory that's not the case, and they do know how it can be done, because supercomputers have been developing the technology in software for years.

The all at once part is what I mostly doubt. I see it coming out, being expensive, and coming down in price over several years. Mass production alone tends to create problems that simply could not be fathomed before it is tried, consumer demand can be fickle, and today, consumer demand may be a bit stagnant.

Yet again more pessimism rearing its ugly head. We'll just have to wait and see how things turn out, but it still doesn't affect the decision of the console manufacturers to at least include an upgrade path.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
If you read the article, this is hardware acceleration that has no relationship to the rest of the graphics processing whatsoever. It's a little added hard wiring that serves only one purpose, the same as a light switch. It also uses the system RAM for virtual memory, so the manufacturers can leave out the extra memory and not raise costs significantly.
I've only been talking about the rest of graphics processing, though. My whole argument against being able to utilize higher resolutions boils down to this equation:
(3840 * 2160) / (1920 * 1080) = 4

A desktop Radeon HD 6670 is almost entirely limited not by memory bandwidth, but by SP FP processing power, with games today. To keep the same performance and detail levels, it will need 4x the processing power. AMD GPUs already only read the part of the texture they need, so it won't help out any case where there is enough VRAM for all current textures. PRT, and equivalents, should be able to make 1GB VRAM act like 4+GB VRAM, and take care of most of the bottleneck of reading from main system RAM. It could make a scene not renderable on current video cards renderable due to VRAM limitations being broken, and take care of console fuzziness, but won't help the chip itself out much, once it is all in RAM (like in a PC, w/ 1GB+ VRAM, in most games today). But, a scene currently renderable in 1080P at a rate of 40FPS would only be able to run around 10FPS in 4K.
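Just to spell out that arithmetic (nothing here beyond the pixel-count ratio above; the 40 FPS figure is the same assumed starting point, not a benchmark):

```c
#include <stdio.h>

int main(void)
{
    const double px_1080p = 1920.0 * 1080.0;
    const double px_4k    = 3840.0 * 2160.0;
    const double ratio    = px_4k / px_1080p;   /* = 4.0 */

    /* If the GPU is shader-limited at 40 FPS in 1080p and the cost per
     * pixel stays the same, the naive estimate for 4K is: */
    const double fps_1080p = 40.0;
    printf("pixel ratio: %.1fx\n", ratio);
    printf("estimated 4K frame rate: ~%.0f FPS\n", fps_1080p / ratio);
    return 0;
}
```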

Again with the generic expressions of pessimism. You really should write poetry or do a comedy show.
<Zoidberg>Again with the crazy optimism.</Zoidberg> :p

It doesn't matter if GPU physics are only a few times faster. Again, it's the divide-and-conquer strategy: divide up the tasks and do more things simultaneously. In an ideal world we'd all have 60 THz single-core processors to replace all this byzantine computer stuff, but this ain't an ideal world.
Ah, but it does matter that GPU physics is only a few times faster, as it implies that we are close enough to ideal. Being only that much faster means the GPU is not very efficient at doing it. We know that (a) Intel x86 vector extensions have generally sucked compared to others out there (often even to AMD's x86 ones, and certainly to MIPS and PPC extensions and good NEON implementations), (b) Intel has already shown great work on improving vector performance with SB (there's bite to go with the bark), and (c) the AVX2 specs released thus far have everything needed to exceed most RISC-based extensions/coprocessors, if combined with known improvements that will occur through Haswell. I don't doubt that a 4-8x improvement over SSE2-accelerated code today is a more than reasonable expectation, and I fully expect AVX2 to be a lasting lowest-common-denominator spec, like SSE2 has become.

It's not that we need some kind of strategy for the GPUs to do it. It's that GPUs being the things to do physics with turned out to be mostly marketing by NVidia and pre-AMD ATI. Now it will be able to go back where it should be (and has been on the consoles), the CPU; enabling developers to arbitrarily optimize tightly coupled code, instead of having to try to implement highly decoupled, latency-insensitive methods, which are more of a PITA than tuning a few loops here and there.

Prior Intel vector extensions have been heavily grounded in minimal increases to their CPU complexity, to get more out of existing FPUs. This time, they're Doing It Right(tm), to the extent that they reasonably can.
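For a concrete flavor of what that CPU-side vector code looks like, here's a toy AVX sketch (the struct-of-arrays layout, the function name, and the multiple-of-8 length are all assumptions for illustration; with FMA on Haswell the multiply and add would fuse into one instruction):

```c
#include <immintrin.h>  /* AVX intrinsics (Sandy Bridge and later) */

/* Toy physics kernel: integrate particle positions, eight single-precision
 * lanes at a time. Assumes n is a multiple of 8 and a struct-of-arrays
 * layout; both are simplifications for illustration. */
void integrate_positions(float *pos, const float *vel, int n, float dt)
{
    const __m256 vdt = _mm256_set1_ps(dt);
    for (int i = 0; i < n; i += 8) {
        __m256 p = _mm256_loadu_ps(pos + i);
        __m256 v = _mm256_loadu_ps(vel + i);
        /* p = p + v * dt for eight particles per iteration */
        p = _mm256_add_ps(p, _mm256_mul_ps(v, vdt));
        _mm256_storeu_ps(pos + i, p);
    }
}
```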

Now, again, some things, like pathfinding, should be made to scale out just fine, and should not be too latency-sensitive, provided the pathfinding implementation is conceived with a coprocessor in mind (bolting a GPGPU implementation to an existing engine is likely not reasonable). For that sort of thing, coming up with a single good set of code for it, that works well enough on any hardware supporting that language (multiple optimizations are much more acceptable than multiple methods of computation), is much more an issue, and will likely get worked out one way or another, over time.

There's that pessimism rearing its ugly head again. Certainly nothing I can say will soothe the savage beast. You need to go straight to the source and argue with HP. Good luck with that.
No arguing with HP, just waiting. HP has made more than a few bombs, and introduced technology too soon. I'm glad they are finally back to being an R&D company, and I don't for a second consider pushing technology they can produce to be a bad thing, nor doubt that, if memristors are finally ready for prime time, they will be good. If it works well and is cheap enough, it will get at least some use, and progress from there. But, more often than not, it will not be an instant hit.

Nope. You need to read up on transactional memory. The hardware in this case decides all by itself which cores do what.
I have, and it doesn't. If the OS doesn't want a thread to run, it won't happen, or at least may run the risk of being killed. The software's logic being executed makes the important decisions, through its logic code. There is not a hardware engine giving a go-ahead to a thread that the OS hasn't allowed to run right now. On small-scale work, 10-100x slowdowns are not uncommon for STM, which is one of several problems hampering adoption. The hardware gets to decide a commit or abort on its own, but the HW is not making any high-level scheduling decisions. It executes code that makes those decisions, but that will be no different with HTM than it is without it, save for not having to make performance sacrifices to use transactional memory. Dividing the work up is done by the compiler and/or programmer, and scheduling it is done by the compiler, and/or a runtime substrate (IE, JVM), and the OS. The program logic and OS still call for a thread to exist, execute, sleep, or die, just as without HTM. That hasn't changed. What HTM allows for is the CPU hardware to detect a potential conflict (a commit guarantees there was no conflict, but an abort does not guarantee there was one), and abort eagerly, without using potentially more CPU time than your intended program logic.

HTM effectively creates speculative multithreading, but the threading part is done by similar software means to how it has been for many years already, and that is not a bad thing. Software management overhead isn't that big, and it offers flexibility that set-in-stone hardware simply cannot. Software detecting or preventing conflicts (with Intel's HTM, software will still have to adjudicate correctness of false conflicts in some cases), performing many function calls for actual work that measures in the tens of instructions, and making copies in heap-space all the time, can simply ruin performance; and HTM replaces most of that small-scale work.
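For concreteness, the programmer-visible side of Intel's HTM looks roughly like this (a simplified sketch assuming a compiler that exposes the RTM intrinsics; a real lock-elision path would also read the fallback lock inside the transaction so that lock holders conflict with transactional readers):

```c
#include <immintrin.h>   /* _xbegin / _xend (RTM) */
#include <pthread.h>

static pthread_mutex_t fallback_lock = PTHREAD_MUTEX_INITIALIZER;

/* Move an amount between two counters. The hardware only tracks the
 * transaction's read/write sets and aborts on conflict; which threads
 * exist and when they run is still decided by the program and the OS. */
void transfer(long *from, long *to, long amount)
{
    unsigned status = _xbegin();
    if (status == _XBEGIN_STARTED) {
        *from -= amount;
        *to   += amount;
        _xend();            /* commit: no conflicting access was detected */
    } else {
        /* Aborted (conflict, capacity, interrupt, ...): retry under an
         * ordinary lock. Simplified; see the caveat above about reading
         * the lock inside the transaction. */
        pthread_mutex_lock(&fallback_lock);
        *from -= amount;
        *to   += amount;
        pthread_mutex_unlock(&fallback_lock);
    }
}
```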

It chooses. That's the whole point: the hardware itself chooses if the programmer doesn't. Compilers and whatnot can help, but this way it goes straight to the source of the problem. They tried desperately to find some way to have the first multicore CPUs thread themselves, and theoretically it is possible, but nobody has a clue about how to do it. The math is just too difficult. With transactional memory that's not the case, and they do know how it can be done, because supercomputers have been developing the technology in software for years.
No, humans have been developing the technology for years. Large clustered systems have been mostly used for testbeds and simulators, because on any single system, you can only test up to so many threads, and can't sufficiently test high-level NUMA. There's a point where a mathematical proof that it should work is not good enough, and we will need transactional memory to efficiently develop future many-threaded applications (and, with HLE, extend current ones). Compilers and programmers implement STM, which then uses the HTM for the low-level grunt work, removing massive software overhead, thus leaving the software layer to manage the higher-level work, which is the way it ought to be. All the CPU does is execute what it's told, when it's told, and if an existing transaction could cause a conflict, fails in a defined way. There is no deciding to use the GPU or CPU, and there is no HW threading. The math for doing it just in pure hardware, for all workloads, is still too complex, but it is not too complex to handle well in software, for a given workload.

Compile-time and run-time knowledge that are too difficult to narrow down to HW specs are still needed to get near optimal performance, and prevent pathologies (in addition, like the higher-level OS scheduling, updates to fix emergent problems are much easier than in HW). That compile-time and run-time information is enough to keep the HW from needing to do it on its own, just as any other software scheduling has survived innumerable attempts over the years to replace it. Software scheduling is very good, to the point that it's not worth even trying to do it in HW for our CPUs. Let the CPU handle scheduling only at its knowledge level: pages, cache lines, branches, and program counters. HTM is implemented as a last-mile portion of an STM implementation, since pure software STM incurs too much overhead. Doing much more would likely hose up OS thread scheduling, as it would start getting in the way, and screwing with resource usage that used to be known quantities.

The first thing that needs to happen (IMO) in order to start increasing resolutions is the move to vector graphics. Otherwise it just won't happen. Vector graphics is the more feasible first step, so we're likely to see that first.
DX11 tessellation should allow us to approach that without getting rid of our tried and true polygons (assuming by vector you mean being able to describe things that aren't straight lines). How well it can in actuality, we'll just have to see (that is, a game with models made using tessellation as a means of expressing curvature, where we currently still get visible straight sections). That, I am quite hopeful for, though I don't expect to see too much until a DX11 GPU is actually in an Xbox, since high-end PC video cards will be able to brute-force it for the foreseeable future, and the publishers with big budgets seem to like that kind of approach better, on average. Games that don't go for the faux-realistic look might be able to use such a feature to get substantial added detail, and good performance, out of otherwise-average GPUs. For an extreme example, imagine a Rayman game with what appear to be true curved surfaces, and practically no UV maps. I'd love seeing that way more than the next soldier in a jungle, desert, or post-apocalyptic midwest-looking place (unless it's the next Fallout, of course :)).
 

pandemonium

Golden Member
Mar 17, 2011
1,777
76
91
I just want console gaming to die so we're not hindered by crappy low quality ports. Always were crappy, always will be crappy.

Consoles are just trying too hard to be PCs, without actually being PCs, so their respective companies can profit as much as possible.
 

thejunglegod

Golden Member
Feb 12, 2012
1,358
36
91
I just want console gaming to die so we're not hindered by crappy low quality ports. Always were crappy, always will be crappy.

Same here. Look at what happened to LA Noire when it was brought over to the PC, or GTA IV, or FIFA 2012. Those were practically insults. In fact, FIFA 2012 did not even have an in-game graphics settings adjuster. Now that is just silly.

LA Noire was capped at 30 FPS, and the game actually looked worse than it did on the PS3.
 

silvan4now

Member
Oct 4, 2011
128
0
0
I just hate the fact that it is all about graphics these days; where are the good old days of CS, where the feeling is all that matters!
 

gorcorps

aka Brandon
Jul 18, 2004
30,741
456
126
Same here. Look at what happened to LA Noire when it was brought over to the PC, or GTA IV, or FIFA 2012. Those were practically insults. In fact, FIFA 2012 did not even have an in-game graphics settings adjuster. Now that is just silly.

LA Noire was capped at 30 FPS, and the game actually looked worse than it did on the PS3.

That's Rockstar's fault... not the consoles'. If you're going to cry about something, at least point the finger in the right direction.
 

thejunglegod

Golden Member
Feb 12, 2012
1,358
36
91
That's Rockstar's fault... not the consoles'. If you're going to cry about something, at least point the finger in the right direction.

What??? Who's crying here? I believe this is a forum for expressing your views. Do not deviate unnecessarily. I've also put in FIFA for guys like you who have to find fault in everything and anything.

What do you want? The names of all the games that have been ported poorly? Or the list of games that could've looked better if not for consoles? Everyone knows that Oblivion and Skyrim have UIs personalised for consoles. And everyone knows Crysis 2 looked much worse than the original. You know why? Consoles!!!!!

I'm just ferrying a point across. You don't have to be a smart mouth and shout, "Oh, he mentioned LA Noire & GTA IV. He must not know that they were developed by the same company."
 

Tweak155

Lifer
Sep 23, 2003
11,449
264
126
DX11 tessellation should allow us to approach that without getting rid of our tried and true polygons (assuming by vector you mean being able to describe things that aren't straight lines). How well it can in actuality, we'll just have to see (that is, a game with models made using tessellation as a means of expressing curvature, where we currently still get visible straight sections). That, I am quite hopeful for, though I don't expect to see too much until a DX11 GPU is actually in an Xbox, since high-end PC video cards will be able to brute-force it for the foreseeable future, and the publishers with big budgets seem to like that kind of approach better, on average. Games that don't go for the faux-realistic look might be able to use such a feature to get substantial added detail, and good performance, out of otherwise-average GPUs. For an extreme example, imagine a Rayman game with what appear to be true curved surfaces, and practically no UV maps. I'd love seeing that way more than the next soldier in a jungle, desert, or post-apocalyptic midwest-looking place (unless it's the next Fallout, of course :)).

Vector graphics is defining objects using equations rather than pixels. This allows an image to be displayed the same on any resolution while keeping the same proportions. Currently, if we increased the resolution in Windows, let's say, what happens to icons and other objects? They all get smaller. That's why we have things like 1920x1200 images that will look best on 1920x1200 screens. But now let's say the image is displayed using vector graphics (Apple has been working with this for a while). That same image should look just as good on a 1366x768 screen as it does on a 1920x1200, the difference being that the 1920x1200 will look sharper. If a current 1920x1200 image is displayed on a 1366x768 display, details are lost and the image is "scrunched". Vector graphics would retain all data in the image, while it is still possible to not have parts of the image viewable. So if we go back to our Windows example, the desktop would be displayed as percentages of your screen (your start bar would be, let's say, 1/12 of your screen regardless of resolution).

A good example of this (and a site that uses vector graphics) is www.prezi.com. View a demo or create a free account and play with it. You can zoom in seemingly infinitely, create a box that takes up the entire screen, zoom out so it looks tiny, and create another huge box around it. The objects retain their position and size via math rather than by how many pixels they were when you created them.

It's a hard concept for me to explain, you may want to try reading the Wiki: http://en.wikipedia.org/wiki/Vector_graphics

The other advantage is that if the image gets smaller or larger, the amount of data for that picture never changes, since the image is strictly calculated. The graphic scales based on what you are doing. So a 1920x1200 image is the same "size" data-wise as a 3840x2400, but the 3840x2400 will look sharper.
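A tiny sketch of that idea (the struct and the 1/12 start-bar figure are just made up to match the example above): the same description rasterizes to any resolution, and only the pixel counts change.

```c
#include <stdio.h>

/* A rectangle stored as fractions of the screen rather than pixels. */
typedef struct { double x, y, w, h; } NormRect;

static void render(NormRect r, int screen_w, int screen_h)
{
    printf("%dx%d -> rect at (%d,%d), %dx%d px\n",
           screen_w, screen_h,
           (int)(r.x * screen_w), (int)(r.y * screen_h),
           (int)(r.w * screen_w), (int)(r.h * screen_h));
}

int main(void)
{
    /* A "start bar" occupying the bottom 1/12 of the screen. */
    NormRect bar = { 0.0, 1.0 - 1.0 / 12.0, 1.0, 1.0 / 12.0 };
    render(bar, 1366, 768);    /* smaller screen, same proportions  */
    render(bar, 1920, 1200);   /* larger screen, sharper result     */
    render(bar, 3840, 2400);   /* same data, four times the pixels  */
    return 0;
}
```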

EDIT:

This would also remove the need to add "high res texture packs" to any game.
 

gorobei

Diamond Member
Jan 7, 2007
4,010
1,512
136
Vector graphics is defining objects using equations rather than pixels.
It is, but that only applies in 2D. There isn't any directly comparable equivalent in 3D presently, at least not any that directly avoids quantization.
The other advantage is that if the image gets smaller or larger, the amount of data for that picture never changes, since the image is strictly calculated. The graphic scales based on what you are doing. So a 1920x1200 image is the same "size" data-wise as a 3840x2400, but the 3840x2400 will look sharper.
Implicit surfaces can adapt to camera position to avoid faceted/linear silhouettes, but resolution scaling isn't an accurate analogy.
EDIT:

This would also remove the need to add "high res texture packs" to any game.
Texturing is a separate function from model format (polygon, NURBS, B-spline patch, metaball, or point cloud). Any model not using octrees will still require a 2D mapping to store arbitrary colors not localized to a vertex. Even Ptex still stores color on a UV matrix, just with thousands of unjoined edges. As long as it is a 2D format, high-res textures will be high-res. Octrees can circumvent this limitation, but come with their own complications when it comes to workflow.


You don't seem to know enough about 3dCG to be making these arguments.

Subdivision surfaces with adaptive tessellation can achieve the non-faceted/camera-scalable surface representation you are hoping for at reasonable performance costs on current hardware. No one has an engine designed for it yet, but it can be done. Regardless, the underlying base mesh will still be vertex/polygon based.

The main problem delaying implementation is that the pipeline operator stack and other game interaction properties (collision/sound events/etc.) haven't been worked out beyond some limited technology demonstrations.

Until 95% of all gaming hardware (consoles, PCs with discrete graphics, mobile devices) is powerful enough to handle a game using 100% subdivision-surface-with-displacement-mapped in-game assets, no developer will make a complete game that scales to all resolution displays.

The adoption of DX11 will be the metric that signals when the shift to subdiv will occur. But since the next Nintendo system is only licensing a 4770 design and the next Xbox GPU is unclear, it may not happen until the gen after.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
Vector graphics is defining objects using equations rather than pixels. This allows an image to be displayed the same on any resolution while keeping the same proportions. Currently, if we increased the resolution in Windows, let's say, what happens to icons and other objects? They all get smaller. That's why we have things like 1920x1200 images that will look best on 1920x1200 screens. But now let's say the image is displayed using vector graphics (Apple has been working with this for a while). That same image should look just as good on a 1366x768 screen as it does on a 1920x1200, the difference being that the 1920x1200 will look sharper. If a current 1920x1200 image is displayed on a 1366x768 display, details are lost and the image is "scrunched". Vector graphics would retain all data in the image, while it is still possible to not have parts of the image viewable. So if we go back to our Windows example, the desktop would be displayed as percentages of your screen (your start bar would be, let's say, 1/12 of your screen regardless of resolution).
That is a Windows API and inertia problem. OS X and Linux handle it just fine (window managers and toolkits still go by pixels, but the fonts and icons can get bigger, just with the same pixel-size borders, which there are reasons for), and Windows 7 handles it better than any previous version. Apple has no problem telling users they shouldn't be using old software, while Microsoft would die if they told their users that, which is why it's not perfect on Windows.

A good example of this (and a site that uses vector graphics) is www.prezi.com. View a demo or create a free account and play with it. You can zoom in seemingly infinitely, create a box that takes up the entire screen, zoom out so it looks tiny, and create another huge box around it. The objects retain their position and size via math rather than by how many pixels they were when you created them.
We already can do that, and for anything expressible via sets of triangles (in theory, any image to be represented by a grid of pixels), 3D hardware actually does do it. There are a few issues not resolved, though, for normal desktop use. One is that things get fuzzy. You could cel shade everything, but that's not a look that everyone will like. Otherwise, you end up with asymmetric fuzziness, if you don't add a rounding-to-pixels stage, but that could make for wonky zooming results. So, you end up needing to work directly with pixels in the end, anyway. Pictures already scaled down, boxes, and text don't show what window decorations, menu bars, etc. will look like. It's not an impossible thing to fix, but it's not exactly easy, either, since the solution must include old software support, which is one of Windows' weaknesses, when it comes to moving forward.

We are 90% there, today, though. I can't think of a single old person who's upgraded to 7 and not loved the DPI scaling, relative to older Windows. Applications, including web browsers, need to start using the OS' DPI, and translate non-dimensional and pixel sizes to scaled sizes (IE, 10px should be displayed at about 12px at 120 DPI/125% scaling, since 96 DPI would be assumed for the 10px height), but I'm sure that will come. While I disagree with wuliheron about realistic adoption rates and costs, I don't doubt that higher-res displays, including OLED for PCs, are coming, and that they will finally enable 200+ DPI PCs, which will cause MS and Apple to say, "told you so," while all the applications that just do pixels get frustrated into going to using the OS' DPI value to size everything. When that comes, we will have a 3x or higher spread of DPIs available on the market, and applications affected will finally be forced to deal with it.
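A sketch of the translation being described (the helper function is hypothetical; the DPI values are just the common Windows scaling steps):

```c
#include <stdio.h>

/* Hypothetical helper: scale a length given in 96-DPI pixels to device
 * pixels at the display's actual DPI. */
static double scale_px(double px_at_96dpi, double device_dpi)
{
    return px_at_96dpi * device_dpi / 96.0;
}

int main(void)
{
    printf("10px at 120 DPI (125%% scaling): %.1f px\n", scale_px(10, 120));
    printf("10px at 144 DPI (150%% scaling): %.1f px\n", scale_px(10, 144));
    printf("10px at 192 DPI (200%% scaling): %.1f px\n", scale_px(10, 192));
    return 0;
}
```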

The other advantage is that if the image gets smaller or larger, the amount of data for that picture never changes, since the image is strictly calculated. The graphic scales based on what you are doing. So a 1920x1200 image is the same "size" data-wise as a 3840x2400, but the 3840x2400 will look sharper.
Not if it is a straight 90-degree line, as is exceptionally common in GUIs. For that, you will need a pixel-level layer of data and processing to get it right. You can use vector, but it can't be divorced from pixels, except in the case of an arbitrary drawing.

This would also remove the need to add "high res texture packs" to any game.
Not necessarily. Even a vector image must be made with all that extra detail. If it lacks it, it starts looking bad when zoomed in, as well. Solid boxes and text can keep looking good because they are fairly simple. Downscaled images are already low detail, and natively vector drawings look good, because that's what they were made as (IE, a watercolor can be remade as vector somewhat well; an oil painting can't be). If you were to recreate a rusted metal sheet texture in vector, it would take a large amount of space, so it would get chopped down, and look bad when you're up against it. Unfortunately, vector or not, that gets back to drive and RAM speed and space limitations, and for a wealth of textures, probably most, in games, vector drawings will be less space and bandwidth efficient than textures.

Vector is great for solids, gradients, regular patterns, text, non-90-degree-lines, and so on, but imagine all the numeric detail required for a rusted corrugated roof: old paint, good zinc, rusting zinc, rusting steel, that dull matte brown look from dust and pollen, etc., all with R, G, B, A, height, and possibly additional values for reflectivity (if not done on a whole-surface basis, or ignored, as it usually is). In the end, it still would look poor when you got right up next to it. Hardware processing of vector image maps would also be awkward, to say the least, too (trees* v. 2d arrays with 1:1 data and graphical position mapping? I'll take grid arrays).

Something like megatexture, for great detail, with dynamic mipmap generation, and tessellation, while conceptually more complicated at a high level of abstraction, should achieve approximations that look as good most of the time, better in many cases, and should perform better, except for those cases where simple solid shapes, lines, and gradients are all that are being drawn. Rasterization's history has been full of odd hacks to approximate features of more elegant techniques, but damn if they don't work, and keep on working.

For the kinds of images commonly used for textures, we would need to be able to dynamically create semi-realistic patterns, with a map to lay them out on a surface, to do it at an arbitrary resolution, and assuming that it has been mathematically figured out by now (IE, methods that do not go into the uncanny valley with the 'realism' attempt), I would be surprised if the CPU or GPU time involved would not be high enough to favor some other method for the final product (such as megatexture, or another similar software technology, to keep the size no bigger than a few tens of gigabytes).

* making a sparse grid of a vector image, and storing it as a tree for space efficiency, came to mind as a lossy compression technique for an arbitrary-resolution RGBA+UV map, and almost made me want to get into a fetal position and whimper in a dark corner! :p
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
I just hate the fact that it is all about graphics these days; where are the good old days of CS, where the feeling is all that matters!
I've been playing several PC games for the first time from GoG, lately, and having as much single-player fun as any good new game, and have been taking advantage of a midrange video card by throwing AA&AF at everything (doesn't support MSAA, but is 7+ years old? Run NVidia Inspector and turn on SSAA! :)). I also get the urge to go play Nethack and Dwarf Fortress. While it's nearly obsolete now, there are still good games trickling out for my DS (some I may have to import, though, like Inazuma Eleven 2, grrr).

I want better graphics from titles that are trying to be graphically intensive. I think everyone does, especially when they overshoot and it starts looking worse by trying to have too much minor detail for a given level of world detail, or framerates get horrible, or blur AA must be used, etc., etc.. All those little problems due to trying to cover up lacking HW capability, or from needing to use poorer approximations than they'd like, can screw up suspension of disbelief more than just trying to have less stuff going on in the scene. If not trying to be too graphically intensive, just make it look good for its minimum requirements, with no glaring issues, and don't try to sell it as a graphically intense title.

There's nothing wrong with Starcraft 2 at low detail. Devil Survivor 2 doesn't even use what little graphical prowess my DS may be capable of, and I'm already going through it again, trying for a bad ending, after getting the ethically good one, having plenty of fun doing so. And, even with fuzzy console textures, models of low detail compared to what we're used to, and no PC version, BG&E HD is as beautiful a game as its original, which uses color, shape, animation, and pacing/timing, rather than relying on bigger textures, more mid-bass in explosion sounds, and release-day DLCs for addicts.
 