LumbergTech
PC gaming is doing fine....digital distribution..hello..
I bought more games in 2008 than I have in a long time.
Originally posted by: mindcycle
I'm not surprised by this at all, as most of you probably aren't either. I mean all we get anymore are shitty console ports (there are a few decent ones, I know) and DRM up the ass. I remember a few years ago when there were really good PC only games being released. What happened?
Here's a link to the article. http://www.shacknews.com/onearticle.x/56794
Originally posted by: ShawnD1
Originally posted by: ja1484
Originally posted by: ShawnD1
People are praising games released as alpha or beta versions (GTA 4) yet they complain about the declining quality of games. Here's a hint: stop buying shitty games. The developers are getting mixed messages here. When they see buggy games sell a million copies, they start to think that it's ok to sell a game that isn't finished.
This sort of attitude annoys me. Games were "less buggy" back in the day because they were far less complex.
No, they were less buggy because developers at least put some effort into it. A lot of today's games were obviously never tested before they were released. Look at how many people can't even run GTA 4. When I try to run it, it tells me I need to have Vista Service Pack 3. If they even tested this game ONE TIME they would have caught this. Unfortunately they didn't test it, they released an unfinished version of the game, and guys like me are left trying to fix their garbage just to get it working. Saint's Row 2 is even worse. The controls are so fucked that driving is impossible. Just tap the steering and your car is out of control. Did they test that game before releasing it? Obviously not.
It's Action 52 all over again.
Originally posted by: BenSkywalker
One of the problems there is PC enthusiasts greatly underestimate the technical merits of what the consoles are capable of when coding directly to fixed hardware, not to mention how weak current CPUs in PCs are.
The Cell is extremely overrated: it can't process large threads effectively, which makes it only good for quick, small threads. Now look at the i7. It can run 8 simultaneous threads and produces HUGE gains in multithreaded applications, and you can build a complete setup for under $1,000.
Personally, however, I'll take a PlayStation 3 or other game console over any PC for top-flight gaming.
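To make the "quick, small threads" point concrete, here's a rough C++ sketch (my own illustration, not from any console SDK or game): chunk one big workload into many small, self-contained jobs, the style that suits Cell's SPEs and that also lets a multi-core i7 show those big multithreaded gains. The job function and sizes are made up for illustration.

```cpp
// Hypothetical illustration: split one big workload into small, independent
// jobs instead of one monolithic thread. Both Cell's SPEs and a multi-core
// i7 reward this style of work.
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <numeric>
#include <thread>
#include <vector>

// One "small job": process a contiguous slice and write its partial result.
static void job(const float* data, size_t count, float* out) {
    float sum = 0.0f;
    for (size_t i = 0; i < count; ++i)
        sum += data[i] * data[i];        // purely local work, no shared state
    *out = sum;
}

int main() {
    std::vector<float> samples(1 << 20, 0.5f);
    const unsigned workers = std::max(1u, std::thread::hardware_concurrency());

    std::vector<float> partial(workers, 0.0f);
    std::vector<std::thread> pool;
    const size_t chunk = samples.size() / workers;

    // Launch many small, self-contained jobs rather than one huge thread.
    for (unsigned w = 0; w < workers; ++w) {
        size_t begin = w * chunk;
        size_t count = (w == workers - 1) ? samples.size() - begin : chunk;
        pool.emplace_back(job, samples.data() + begin, count, &partial[w]);
    }
    for (auto& t : pool) t.join();

    float total = std::accumulate(partial.begin(), partial.end(), 0.0f);
    std::printf("sum of squares = %f across %u jobs\n", total, workers);
    return 0;
}
```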
The industry has proven fairly recession-proof thus far. Despite the recession, "the industry full-year sales hit $21.33 billion (up 19 percent from 2007)" and "2008 was indeed another record shattering year for the U.S. video game industry." Nintendo being the biggest success, obviously.
Originally posted by: Eeezee
Actually, I'd blame this more on the deep recession than anything else. Every industry has been hit pretty hard, and only a 14% drop is actually pretty good
Originally posted by: JSt0rm01
I bought The Orange Box and L4D last year and that's it. With the economy the way it is, I think a lot of people are just gonna play what they have. I mean I could entertain myself with L4D, CS:S and TF2 for a long time.
Originally posted by: BenSkywalker
The Cell is extremely overrated: it can't process large threads effectively, which makes it only good for quick, small threads.
And what type of code do games normally have? What on the CPU side of the equation is going to be holding them back? Loads of heavily branched integer data, or is it perhaps raw FP performance?
They only need to program for two operating systems: Windows XP and Windows Vista. Actually they don't even need to officially support Vista; just program for XP if you want.

Secondly, you just obliquely proved my point. They CAN'T test things as thoroughly as they used to because of the increased complexity of both the software and the PC environment. There are hojillions of hardware permutations available today, with hojillions of software permutations on top of them. This isn't 1994, when everyone was running a 486/Pentium and Windows 3.1 and five guys at id Software could perfect a sprite-based rendering engine within a reasonable time period. Modern engines have complex 3D rendering, lighting systems (sometimes *several* within the same engine!), physics systems, animation systems, AI, scripting, shaders out the ass, particle systems, dynamic environments, massive draw distances, magical texture/content streaming, sound and dialogue, audio/video/physics/etc. extensions, multi-threaded execution, networking, and more. It's a fuster-cluck at best.
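Just to put a rough number on the "hojillions of permutations" point, here's a back-of-the-envelope sketch; every count in it is invented purely for illustration, not real market data.

```cpp
// Back-of-the-envelope: the PC test matrix is the product of every category
// QA would have to cover. All counts below are invented for illustration.
#include <cstdio>

int main() {
    const long long gpus            = 40;  // distinct GPU models in the wild
    const long long driver_versions = 10;  // driver revisions worth testing per GPU
    const long long cpus            = 30;  // CPU families / speed grades
    const long long os_variants     = 4;   // XP/Vista, 32-bit/64-bit
    const long long sound_and_misc  = 5;   // sound cards, codecs, overlays, etc.

    const long long combos =
        gpus * driver_versions * cpus * os_variants * sound_and_misc;
    std::printf("rough hardware/software permutations: %lld\n", combos);
    // A console developer tests exactly one such permutation.
    return 0;
}
```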
Despite what Sony told you, games are held back by branched integer data. Xbit Labs tested this with the E1200 and the results were conclusive.
If you look at the first two graphs, you'll see that an overclocked Celeron E1200 (3.4GHz) is a top performer when it comes to raw computing power. But in the actual game benchmarks, the E6750 with its much larger cache suddenly sprints ahead to a 30% lead in Quake 4, a 50% lead in Half-Life 2, a 30% lead in Crysis, a 50% lead in UT3, and a 50% lead in World in Conflict.
A processor with limited cache is not suitable for gaming.
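The cache sensitivity in those numbers is easy to see in a toy microbenchmark. Below is a rough sketch (my own, not from the Xbit Labs article): a streaming floating-point loop barely cares how big the cache is, while a dependent integer pointer chase over a working set far larger than any 2008-era L2 spends most of its time waiting on memory, and game code tends to look a lot more like the second loop.

```cpp
// Toy contrast between the two workload styles argued about above:
//  1) streaming FP math (cache-friendly, raw-FLOPS bound)
//  2) dependent pointer chasing over a big array (cache-miss bound, i.e. the
//     "heavily branched integer data" case)
#include <chrono>
#include <cstdio>
#include <numeric>
#include <random>
#include <utility>
#include <vector>

using clk = std::chrono::steady_clock;

int main() {
    const size_t N = 1 << 22;  // 4M elements: ~32 MB of indices, far bigger than a 2008-era L2

    // Workload 1: streaming floating-point.
    std::vector<float> f(N, 1.0001f);
    auto t0 = clk::now();
    float acc = 0.0f;
    for (size_t i = 0; i < N; ++i) acc += f[i] * 0.5f;
    auto t1 = clk::now();

    // Workload 2: dependent integer loads in a cache-hostile order.
    // Sattolo's algorithm builds one N-long cycle, so the chase visits every
    // element exactly once and every load depends on the previous one.
    std::vector<size_t> next(N);
    std::iota(next.begin(), next.end(), 0);
    std::mt19937_64 rng{42};
    for (size_t i = N - 1; i > 0; --i) {
        std::uniform_int_distribution<size_t> pick(0, i - 1);
        std::swap(next[i], next[pick(rng)]);
    }
    size_t p = 0;
    auto t2 = clk::now();
    for (size_t i = 0; i < N; ++i) p = next[p];
    auto t3 = clk::now();

    auto ms = [](clk::time_point a, clk::time_point b) {
        return std::chrono::duration_cast<std::chrono::milliseconds>(b - a).count();
    };
    std::printf("streaming FP : %lld ms (acc=%f)\n", (long long)ms(t0, t1), acc);
    std::printf("pointer chase: %lld ms (p=%zu)\n", (long long)ms(t2, t3), p);
    return 0;
}
```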
Originally posted by: BenSkywalker
I'm willing to wager the Wii games cost a hell of a lot less to develop than any of the top 10 PC games.
You have the numbers backwards. A Celeron E1200 has a stock frequency of an 8x multiplier at 200MHz FSB (1.6GHz total). These guys got it to 3.4GHz by cranking the FSB up to 425MHz. The stock frequency of the E6750 is 8x 333MHz (2.66GHz total).
Originally posted by: BenSkywalker
"Two processors with identical FP capabilities, but one running on a considerably slower FSB with 1/8th the cache, will show performance limitations when poor OoO code creates lots of cache misses" is more like it.
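For anyone who wants to check that correction, the core clock on these chips is just multiplier times FSB; here's a tiny sketch using only the figures quoted above.

```cpp
// Core clock = multiplier x FSB, using the figures quoted above.
#include <cstdio>

int main() {
    struct Chip { const char* name; double multiplier; double fsb_mhz; };
    const Chip chips[] = {
        {"Celeron E1200 (stock)",       8.0, 200.0},  // 1.60 GHz
        {"Celeron E1200 (overclocked)", 8.0, 425.0},  // 3.40 GHz
        {"Core 2 Duo E6750 (stock)",    8.0, 333.0},  // ~2.66 GHz
    };
    for (const Chip& c : chips)
        std::printf("%-28s %.0f x %.0f MHz = %.2f GHz\n",
                    c.name, c.multiplier, c.fsb_mhz,
                    c.multiplier * c.fsb_mhz / 1000.0);
    return 0;
}
```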
Diminishing returns. Having 2 video cards in SLI does not double your frame rate. Having 2 CPU cores instead of 1 does not double your zip file compression rate. Could I say those tasks are not GPU or CPU dependent simply because the scaling isn't linear? Of course not. In that above link, we see that increasing the cache 8x increases the performance in Quake 4 from 78fps to 145fps (86% faster). Similarly, if I added a bunch of RAM to my computer and it suddenly jumped 86% in load time performance, I wouldn't hesitate to say that there was a significant memory bottleneck. If I change from a GeForce 6800 video card to a GTX 280, which is orders of magnitude faster, I don't see orders of magnitude gains in my frame rate. Should I be disappointed? Of course not. Nobody expects linear gains with these kinds of things.

We aren't talking about how the current Intel chips benefit from the amount of cache on die. But the charts do show a few rather interesting things. First off, even using the Intel architecture with all of the other limitations mentioned above (the reduced FSB on the Celery is the biggest one), an 8x increase in cache only offers a fraction of the performance increase that a doubling in clock speed does. Why is that? If the code was truly as branch/cache dependent as you state, why is a straight clock speed increase showing a much higher performance return than an increase in the amount of cache? When even an OoO CPU running highly unoptimized code is STILL showing that raw computational power is the biggest factor in performance scaling, it should give you an idea.
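The diminishing-returns examples above (SLI, a second core) are basically Amdahl's law. Here's a short sketch of the formula; the parallel fractions in it are made-up examples, not measurements from any of the games discussed.

```cpp
// Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the fraction of
// the work that actually scales across n units. The fractions below are
// invented purely to illustrate why 2x hardware never means 2x performance.
#include <cstdio>

static double amdahl(double parallel_fraction, double n_units) {
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_units);
}

int main() {
    std::printf("2 GPUs, 80%% of frame time scales  : %.2fx\n", amdahl(0.80, 2));
    std::printf("2 cores, 60%% of a zip job scales  : %.2fx\n", amdahl(0.60, 2));
    std::printf("8x faster GPU, 50%% GPU-bound game : %.2fx\n", amdahl(0.50, 8));
    return 0;
}
```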
You're right that I worded that badly. The Cell does have a lot of memory bandwidth and it does have a lot of raw processing power. The problem is that this never seems to translate into better gaming performance. Gamespot did a pretty good article about this:

"Not suitable" isn't accurate; "not ideal for general purpose x86 platform based gaming, all else being equal" is more like it, it seems to me. Why would you see such a big difference in performance based on cache amounts? Probably due to cache misses. How could anyone ever work around such a problem? Maybe if they could come up with a way so that there was some direct memory access control over the on-die memory on the CPU, they could avoid cache misses altogether. Kind of like, exactly, how Cell is built!
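To illustrate that "direct memory access control over the on-die memory" idea, here's a rough sketch of the double-buffered streaming pattern an SPE program uses, written as plain portable C++ rather than actual Cell SDK code: fetch the next block into a small local buffer while computing on the current one, so the compute loop never stalls on a cache miss.

```cpp
// Sketch of the SPE-style double-buffered streaming pattern: one small
// "local store" buffer is filled from main memory while the other is being
// processed. Plain C++ stand-in, not real Cell/libspe code.
#include <cstdio>
#include <cstring>
#include <future>
#include <vector>

constexpr size_t BLOCK = 4096;                      // elements per "DMA" block

// Stand-in for a DMA transfer: copy one block from main memory to local store.
static void dma_get(float* local, const float* main_mem, size_t count) {
    std::memcpy(local, main_mem, count * sizeof(float));
}

int main() {
    std::vector<float> main_mem(1 << 20, 2.0f);     // big array in "main memory"
    float local[2][BLOCK];                          // two small on-chip buffers
    double total = 0.0;

    const size_t blocks = main_mem.size() / BLOCK;
    dma_get(local[0], main_mem.data(), BLOCK);      // prime the first buffer

    for (size_t b = 0; b < blocks; ++b) {
        // Kick off the transfer of the NEXT block into the other buffer...
        std::future<void> pending;
        if (b + 1 < blocks)
            pending = std::async(std::launch::async, dma_get,
                                 local[(b + 1) % 2],
                                 main_mem.data() + (b + 1) * BLOCK, BLOCK);

        // ...while we compute on the CURRENT block, which is already on-chip.
        for (size_t i = 0; i < BLOCK; ++i)
            total += local[b % 2][i] * 0.5f;

        if (pending.valid()) pending.get();         // wait for the transfer
    }
    std::printf("total = %f\n", total);
    return 0;
}
```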
Diminishing returns. Having 2 video cards in SLI does not double your frame rate.
Having 2 CPU cores instead of 1 does not double your zip file compression rate.
If I change from a GeForce 6800 video card to a GTX 280, which is orders of magnitude faster, I don't see orders of magnitude gains in my frame rate.
This is very true for GTA 4 as well. On minimum draw distance, the game is actually playable. On max draw distance, the game is ridiculously CPU bottlenecked and the frame rate drops like a rock; even at lowest resolution the game is unplayable with that setting.
At the moment, no video card has enough RAM
World of Warcraft was very easy on the graphics card, and the frame rate was fantastic... until you turn up the draw distance in a major city. Then it turns into a slide show and Task Manager shows the CPU maxing out.
Something is up. The PS3 has a lot more memory bandwidth and a lot higher floating point power than your regular Phenom or C2Q. The PS3 should have much better physics and draw distance in every case, but it doesn't. What is left? I'm thinking it's integer/cache.
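There's a simple geometric reason the draw-distance settings above hammer the CPU: the number of objects the engine has to cull, update, and issue draw calls for grows roughly with the area of the view circle, so doubling the draw distance roughly quadruples the per-frame CPU work. A rough sketch; the density and per-object cost are invented for illustration, not taken from GTA 4 or WoW.

```cpp
// Why cranking draw distance hammers the CPU: visible-object count grows
// roughly with the area of the view circle. Density and per-object cost
// below are invented purely for illustration.
#include <cstdio>
#include <initializer_list>

int main() {
    const double pi = 3.141592653589793;
    const double objects_per_km2   = 5000.0;  // made-up scene density
    const double cpu_us_per_object = 3.0;     // made-up cull/AI/draw-call cost

    for (double radius_km : {0.25, 0.5, 1.0, 2.0}) {
        const double visible = objects_per_km2 * pi * radius_km * radius_km;
        const double cpu_ms  = visible * cpu_us_per_object / 1000.0;
        std::printf("draw distance %.2f km -> ~%6.0f objects, ~%6.1f ms of CPU per frame\n",
                    radius_km, visible, cpu_ms);
    }
    return 0;
}
```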
edit: I also just realized this has nothing to do with PC game sales.
NPD continues to fail at accurately portraying PC numbers. Eventually they are going to be called to task for misrepresenting data.
Originally posted by: CRXican
Originally posted by: BassBomb
Blame piracy
not rly
Originally posted by: BenSkywalker
NPD accurately reports what they track; they have never claimed otherwise. NPD does not track online sales for either consoles or PCs, and it doesn't track a small chain retail store called Wal-Mart. They have never claimed to track these things; they report the sample that they do track, and I have never seen anything to indicate that those numbers are not accurate.
