No nvidia chipsets for Nehalem?


Comdrpopnfresh

Golden Member
Jul 25, 2006
1,202
2
81
2 cents- take them or leave them:
What Intel is doing makes sense: SUPPORTING AMD, that is. If Intel wipes the floor with AMD in the CPU sector and AMD is the only company out there with a complete platform, getting Nvidia off AMD's back would let AMD focus on graphics and possibly care less about processors. Besides investment and the current market, it really doesn't matter what AMD has a foot and a half in, because they don't own their own fabs. At the same time, if Intel had to worry about a stronger AMD processor outlook while bringing graphics or a GPU/CPU-on-a-chip out against TWO strong graphics companies, they'd risk the graphics end of the hybrid chip performing poorly, killing interest in the CPU part of the package, and they'd have pulled focus and fab space away from the CPU division, which would only strengthen AMD anyhow.

Originally posted by: toadeater
Nvidia and AMD are going to beat Intel. Nehalem is going to have trouble becoming popular and influencing the desktop market.

1. Intel needs game physics to be done on the CPU for anyone to care. Nvidia and AMD want it on the GPU, and they hold all the cards so far.
2. Hybrid SLI/Crossfire means you can use a cheap secondary GPU for physics and co-processing.
3. GPU co-processing is going to speed up many tasks tremendously. Apple is adding native support for this to OS X, Windows will get similar eventually.
4. Nehalem won't drastically speed up everyday tasks and games. Consumers are going to ask "WTF?"
5. Nehalem is expensive and proprietary. Consumers are going to say "It doesn't fit in my 775 board."
6. 775 socket FSB-based CPUs still have room left for improvement, and are more than fast enough for most users!

Nehalem is the right thing for Intel to do; they should have gotten rid of the FSB long ago, as AMD did, but Intel may have waited a bit too long to do it. There isn't really anything out there for consumers, gamers, and business users that requires significantly more powerful CPUs. The bottleneck is slow storage and lack of RAM. For games, the GPU will continue to be far more important than the CPU, in part because games will be even more console-centric in the future.

Besides not realizing that "the market" mostly consists of sales to companies rather than individual consumers, you're also misjudging what the average end user is like.
A lot of end users buy their computers at Walmart and the like rather than building them. They don't know what Socket 775 is, even if they have it, and it doesn't matter because they just buy a new computer every so often anyhow. All they know is that they don't want what is old, which is why P4 systems were everywhere for a while, even though the average P4 system wasn't faster than a good P3 for a refresh or two.

"Everyday tasks" haven't been sped up since P4 days... The reduction in clock speeds, and the increase in cores and throughput on recent microprocessors speeds up reactions-time when multiple threads are issued. We all tend to think of this as when people are performing multiple tasks, but launching most programs the average user does- word-processing and internet releases multiple threads to the processor (think MS office, and all the crap that is also happening on a typical spyware infested comp when surfing). How exactly does one speed up a word processing program anyway- make it put on screen what you haven't even typed yet? The average user doesn't need task increases, they just don't want to see hang-ups when they want to do something. No one seems to recall that when dual-cores first came out, no one really gave them good reviews- ANYWHERE. They blew at games, where clock speed mattered, and the reduction in clock speed slowed down average tasks. But people got hooked by adverts and less hangups, and the code-monkeys started catering towards more cores, and now games and most everything can make use of more cores- so everyone is happy.

As for #4, 'Consumers are going to ask "WTF?"': that's really just you thinking "HT on Nehalem, not RHT? WTF?"

I think Intel going FSB-free now is the right time. They secretly know that if you keep making things smaller, eventually you run into negative numbers. So to keep profits up and keep making things faster into the future, they have to make their designs, and what gets spit out on the fab lines, more efficient. They managed to steer the industry (not single-handedly, of course) toward the core wars once they couldn't push the sheer-speed envelope anymore, so they'll have an easier time now because they know what a good architectural basis currently looks like. Conroe was a hell of a lot better than the P4s because of the departure in architecture, but Penryn doesn't really make a noticeable clock-for-clock difference: process changes alone are making less of a difference. By moving the memory controller on-die, they reduce the complexity of the northbridge chips and can shrink them in terms of both die area and process, which saves them money. With no FSB, they also need less cache to hide latency, so there is either more room for logic or less space being taken up and smaller dies (same thing, really).
Intel loves making money. They used to do it by raising performance, first by increasing the FSB and later the clock speed, and more recently by shrinking the process to spit out more chips on less silicon. But all of those things have reached, or are reaching, a limit in terms of gains and savings, so by looking at the whole platform and saving silicon on the things they manufacture and make money on, they get to keep making money.
 

toadeater

Senior member
Jul 16, 2007
488
0
0
Originally posted by: Lonyo
1. I hardly see how AMD and NV hold all the cards in terms of physics. You do know that Intel bought Havok, right? NV and Intel hold the physics cards by virtue of controlling the two big middleware providers. MS may hold some cards if they make physics part of a DirectX spec, but that won't happen for a while.

They hold all the cards because games are console-centric and consoles are GPU-centric: PhysX on the console, and PhysX on the GPU.

Nvidia and AMD can continue to sell GPUs to the gaming market, and AMD and Intel can continue to sell mainstream CPUs, but gamers aren't going to need CPUs as powerful as Nehalem. Not while consoles are still the target platform.

I'll put it more simply: game developers have no plans to significantly increase CPU requirements over what we've already seen this year from games like Crysis. Nehalem is overkill for gaming because its power won't be utilized, and because gamers will buy new GPUs for their 775 systems for physics instead of buying complete new systems.

3. Agreed, assuming it gets implemented in a way which supports both AMD and NV, which would probably require MS intervention for Windows, although Apple are trying to do it (with their own stuff) for OSX.

There are a few GPU computing apps already available, and the benchmarks show a massive improvement for some types of tasks.

http://benchmarkreviews.com/in...1&limit=1&limitstart=4
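To give a feel for why those benchmarks swing so far in the GPU's favor: GPU computing apps are built on data-parallel kernels where thousands of threads each handle one element. Below is a minimal CUDA sketch, made up for illustration and not taken from the linked benchmark, that scales every sample in a buffer on the GPU.

```cuda
// Toy sketch only: the kind of data-parallel loop (scaling every sample
// in a media buffer) that maps well to a GPU via CUDA. Names and sizes
// are made up for illustration.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

// Each thread scales one sample; thousands of threads run concurrently,
// which is where the big speedups for encoding-style work come from.
__global__ void scale_samples(float* data, float gain, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= gain;
}

int main() {
    const int n = 1 << 20;                       // ~1M samples (arbitrary)
    std::vector<float> host(n, 1.0f);

    float* dev = nullptr;
    cudaMalloc(&dev, n * sizeof(float));
    cudaMemcpy(dev, host.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    const int block = 256;
    const int grid = (n + block - 1) / block;
    scale_samples<<<grid, block>>>(dev, 0.5f, n);  // launch one thread per sample

    cudaMemcpy(host.data(), dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev);
    printf("first sample after GPU pass: %f\n", host[0]);
    return 0;
}
```

A CPU runs the same loop on a handful of cores; the GPU spreads it across hundreds of stream processors, which is the gap those benchmarks are measuring.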

4. You're not (IMO) going to speed up everyday tasks with a GPU that you can't also speed up with Nehalem etc. I don't see how you can argue that GPUs are going to be the way of the future because they can speed up specific tasks, and then claim Nehalem isn't good when it can do those same things upon release without needing extra coding.

I never claimed Nehalem isn't good. My point is that more powerful CPUs aren't going to be needed by consumers until something comes along to require them. Core 2 Duo-class CPUs are so fast that they handle any consumer PC task without slowing down; the CPU is no longer the bottleneck in consumer systems.

6. Things will always be fast enough for most users, but that doesn't mean they aren't allowed to get faster.

But what do they NEED for things to get faster? Is it faster CPUs? No. They need faster storage, more RAM, and faster GPUs. CPUs have been underutilized by consumers for a while now, so why should they buy more powerful ones when it won't help? I'm not talking about markets where more cores are always needed, such as servers and workstations; I'm talking about mainstream users and gamers. Give me a reason why they would want a Nehalem in 2009, or even 2010, if they already have a Core 2 Duo.
 

AmberClad

Diamond Member
Jul 23, 2005
4,914
0
0
The CPU is the bottleneck, though, in some RTS and TBS games, depending on the number of AI-controlled units in play. Heroes V and Civ 4 are just a couple of the games where, even with today's "fast" Core 2s, you still have to wait a decent amount of time for the AI turns to complete if you play on larger maps.

I imagine some people will want a fast CPU to speed up media encoding tasks. Not really my sort of thing - I'd be more interested in something that compiles code faster.

As far as PhysX, I'm not sure what the future will hold for it, with AMD casting its lot with Intel last week, by deciding to go with Havok on its CPUs and GPUs.
 

myocardia

Diamond Member
Jun 21, 2003
9,291
30
91
Originally posted by: AmberClad
The CPU is the bottleneck, though, in some RTS and TBS games, depending on the number of AI-controlled units in play. Heroes V and Civ 4 are just a couple of the games where, even with today's "fast" Core 2s, you still have to wait a decent amount of time for the AI turns to complete if you play on larger maps.

Very true, although you forgot what is (I think) the most popular of the genre, Supreme Commander. According to the people who play it on large maps with a lot of AI, even a 4 GHz dual-core isn't enough.
 

AmberClad

Diamond Member
Jul 23, 2005
4,914
0
0
Yep, I've heard that SupCom is pretty taxing, CPU-wise. I haven't gotten it yet though, and I was mainly posting from my own perspective, as far as the games where I feel like I could use a lot more CPU horsepower. Definitely want to DL the SupCom demo at some point to at least try it though.
 

Lorne

Senior member
Feb 5, 2001
873
1
76
Don't forget OS, driver, and software optimization.
There's a lot of third-party hardware in our world, but there are still a lot of programmers who need to catch up to just the latest standards.
 

Kuzi

Senior member
Sep 16, 2007
572
0
0
Nvidia are losing big time on the chipset front if they can't make chipsets for a great CPU like Nehalem.

But I don't think the first iteration of Larrabee will be able to compete with what Nvidia and ATI will have at the high end. Both Nvidia and ATI have years of experience producing very high performance GPUs, very optimized drivers, relations with game developers etc.

Intel will need time to catch up on all fronts when it comes to graphic performance.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: Kuzi
But I don't think the first iteration of Larrabee will be able to compete with what Nvidia and ATI will have at the high end. Both Nvidia and ATI have years of experience producing very high performance GPUs, very optimized drivers, relations with game developers etc.

Intel will need time to catch up on all fronts when it comes to graphic performance.

Are you assuming Intel will make the first iteration available to the market?

I'm more inclined to think Intel management doesn't want another Merced on their hands. If the first iteration of Larrabee is less than startling performance-wise, I'd expect them to keep that iteration for internal learning only and accelerate the cadence for arriving at the second iteration (which would then be the first one the rest of us consumers are exposed to).

In other words, for all we know the first iteration has already occurred circa 2007 and Intel is working on the second iteration.

All I'm saying is that, for the very reasons it's obvious to us that the first iteration ought to be inferior, surely Intel foresaw those reasons as well and factored them into their roadmap, to ensure that whatever they deliver, whenever they deliver it, will be a natural-born media and PR feast, just as Core was over Netburst.

Why on earth would they do anything less when the world's first impression is at stake and the nasty taste Merced left as a first impression is no doubt on many minds?
 

aka1nas

Diamond Member
Aug 30, 2001
4,335
1
0
Originally posted by: Lonyo

2. That's what they've been talking about for 2 years now (rumours of the Forceware 90 drivers supporting physics on a second card were around in May 2006), and NV just announced they were in fact putting physics on the GPU itself (GTX 260/280) instead of using a separate card.

At the time, both ATI and Nvidia were planning on using SM 3.0 (via HavokFX) to handle physics processing. That evidently didn't work out too well. CUDA is a much better solution for this, but it requires an 8000-series or later Nvidia card. It's taken this long for those cards to build up a sufficient install base to make GPU-assisted physics worth implementing. The fact that they are using PhysX is just a bonus, as end users will get the option of using their CPU, a PPU, or a GPU to process physics in PhysX-enabled games.
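To make that concrete, here's a minimal sketch of what "physics on the GPU via CUDA" looks like in practice. This is NOT the PhysX API; the particle struct, kernel, and launch parameters are made up for illustration. It just shows the general shape: one CUDA thread integrates one particle per simulation step.

```cuda
// Toy sketch only -- not the PhysX API. One thread integrates one particle
// per step, which is the basic pattern behind GPU-assisted physics.
#include <cstdio>
#include <vector>
#include <cuda_runtime.h>

struct Particle { float3 pos; float3 vel; };

__global__ void integrate(Particle* p, int n, float dt) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    p[i].vel.y -= 9.8f * dt;         // apply gravity to velocity
    p[i].pos.x += p[i].vel.x * dt;   // advance position by velocity
    p[i].pos.y += p[i].vel.y * dt;
    p[i].pos.z += p[i].vel.z * dt;
}

int main() {
    const int n = 100000;               // arbitrary particle count
    const float dt = 1.0f / 60.0f;      // one 60 fps frame
    Particle init = {{0.f, 10.f, 0.f}, {1.f, 0.f, 0.f}};
    std::vector<Particle> host(n, init);

    Particle* dev = nullptr;
    cudaMalloc(&dev, n * sizeof(Particle));
    cudaMemcpy(dev, host.data(), n * sizeof(Particle), cudaMemcpyHostToDevice);

    // One thread per particle; a real game would launch this every frame.
    const int block = 256;
    integrate<<<(n + block - 1) / block, block>>>(dev, n, dt);

    cudaMemcpy(host.data(), dev, n * sizeof(Particle), cudaMemcpyDeviceToHost);
    cudaFree(dev);
    printf("particle 0 after one step: y=%f\n", host[0].pos.y);
    return 0;
}
```

Roughly the same per-body update has to happen somewhere every frame, whether the engine runs it on the CPU, a PPU, or the GPU; CUDA just lets the GPU chew through all the particles in parallel.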
 

toadeater

Senior member
Jul 16, 2007
488
0
0
Originally posted by: AmberClad
The CPU is the bottleneck, though, in some RTS and TBS games, depending on the number of AI-controlled units in play. Heroes V and Civ 4 are just a couple of the games where, even with today's "fast" Core 2s, you still have to wait a decent amount of time for the AI turns to complete if you play on larger maps.

Have you checked the processor usage during those games? Is it really 100% during a turn? How do they perform on a quad-core?

I agree strategy games benefit from faster CPUs, and they're still a niche in PC gaming that consoles can't even come close to, but the question is how many games will be released that target something as fast (and expensive) as Nehalem? Considering how the game industry works, the only way that's going to happen is if Intel subsidizes development of PC-centric games, because there's no "killer app" for Nehalem (I know Intel bought Project Offset, but so far that doesn't look like a killer app). The games industry is becoming more console-centric than ever.

I hate consoles. Most game devs (outside Japan at least) hate consoles. Consoles are crippled gaming machines that severely limit what's possible in gaming today. I just don't think moving away from 775 systems right now is going to help PC gaming. It's going to be another DX10; further fragmentation of the market. You don't have AMD and Nvidia telling gamers they have to dump their systems in 2009.

Anyway, I'm sure Intel is aware of all this. That's why they bought Havok and Project Offset. But until they can get the games industry to cooperate, they have no chance to sway the market.

I imagine some people will want a fast CPU to speed up media encoding tasks. Not really my sort of thing - I'd be more interested in something quicker to compile code faster.

Media encoding is actually one of the things GPUs have a massive advantage in over CPUs. Look at the link I posted earlier.

As far as PhysX, I'm not sure what the future will hold for it, with AMD casting its lot with Intel last week, by deciding to go with Havok on its CPUs and GPUs.

Yes, Havok works on GPUs and consoles as well. I imagine what Intel is going to do is make CPU support transparent in it, so that you can use a GPU or a CPU... or both, if you like. You can already do that with PhysX too. The thing is, once again, why would game devs target Nehalem for physics rather than GPUs and consoles? How many Nehalems will gamers buy in 2009?
 

aigomorla

CPU, Cases&Cooling Mod PC Gaming Mod Elite Member
Super Moderator
Sep 28, 2005
21,077
3,578
126
toadeater,

Persephone was built with one game in mind. Supreme Commander.

On a map with 8k units and a simulated 71 km x 71 km battlefield, you had BETTER have a fast CPU and four cores, or you're gonna cry.


Oh, and this is where XFire owns SLI, because Supreme Commander has multi-monitor support. Stupid SLI.
 

Stoneburner

Diamond Member
May 29, 2003
3,491
0
76
The politics of Intel, DAAMIT, and Nvidia have been just baffling. I would have expected Nvidia and Intel to both distance themselves from DAAMIT, possibly leading to more collaboration between the two. Instead, DAAMIT is off in its own world and seems to be working with both Intel and Nvidia while those two duke it out. However, if DAAMIT succeeds with its Fusion and scalability concepts, it'll be a larger threat to both Nvidia and Intel.

What the hell is going on? It seems as though Intel no longer considers AMD a threat and Nvidia no longer considers ATI a threat.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: Stoneburner
What the hell is going on? It seems as though Intel no longer considers AMD a threat and Nvidia no longer considers ATI a threat.

Uh... hasn't this been the operating assumption for the past 12-18 months?

I haven't seen AMD or ATi associated with top-end performance in any desktop product lineup since 2006.

Xfire is good, but from everything I've read it isn't threatening Nvidia. AMD has positioned its SKUs based on price/performance... which is as it should be... but a compelling price/performance portfolio doesn't send shivers down the spines of the folks who got to the top for all the right reasons (engineering a superior product).

They know exactly what they did that AMD has not done, and until they see AMD do it (in either CPUs or GPUs), they know they've got nothing to fear.