Comdrpopnfresh
Golden Member
- Jul 25, 2006
- 1,202
- 2
- 81
2 cents- take them or leave them:
What Intel is doing makes sense. SUPPORTING AMD, THAT IS. If Intel wipes the floor with AMD in the CPU sector, and AMD is the only company out there with a complete platform, then getting Nvidia off AMD's back would let AMD focus on graphics and possibly care less about processors. Investment and the current market aside, it doesn't really matter how many things AMD has a foot and a half in, because they don't own their own fabs. At the same time, if Intel had to worry about a stronger AMD processor outlook while bringing graphics, or a GPU/CPU on a chip, out against TWO strong graphics companies, they'd risk poor performance on the graphics end of a hybrid chip, killing interest in the CPU part of the package, and they'd have pulled focus and fab space away from the CPU division, which would only strengthen AMD anyhow.
Originally posted by: toadeater
Nvidia, AMD are going to beat Intel. Nehalem is going to have trouble becoming popular and influencing the desktop market.
1. Intel needs game physics to be done on the CPU for anyone to care. Nvidia and AMD want it on the GPU, they hold all the cards so far.
2. Hybrid SLI/Crossfire means you can use a cheap secondary GPU for physics and co-processing.
3. GPU co-processing is going to speed up many tasks tremendously. Apple is adding native support for this to OS X, Windows will get similar eventually.
4. Nehalem won't drastically speed up everyday tasks and games. Consumers are going to ask "WTF?"
5. Nehalem is expensive and proprietary. Consumers are going to say "It doesn't fit in my 775 board."
6. 775 socket FSB-based CPUs still have room left for improvement, and are more than fast enough for most users!
Nehalem is the right thing for Intel to do, they should have gotten rid of FSB clocking long ago as AMD did, but Intel may have waited a bit too long to do it. There isn't really anything out there for consumers, gamers, and business users that requires significantly more powerful CPUs. The bottleneck is slow storage and lack of RAM. For games, the GPU will continue to be far more important than the CPU, in part because games will be even more console-centric in the future.
Besides not realizing that "the market" consists mainly of sales to companies rather than to individual consumers, you're also confusing what the average end user is like.
A lot of end users buy their computer at Walmart and the like rather than building it. They don't know what Socket 775 is even if they have it, which doesn't matter, because they just buy a whole new computer every so often anyhow. All they know is that they don't want what's old, which is why P4 systems abounded a while back, even though for a refresh or two the average P4 system wasn't faster than a good P3.
"Everyday tasks" haven't been sped up since P4 days... The reduction in clock speeds, and the increase in cores and throughput on recent microprocessors speeds up reactions-time when multiple threads are issued. We all tend to think of this as when people are performing multiple tasks, but launching most programs the average user does- word-processing and internet releases multiple threads to the processor (think MS office, and all the crap that is also happening on a typical spyware infested comp when surfing). How exactly does one speed up a word processing program anyway- make it put on screen what you haven't even typed yet? The average user doesn't need task increases, they just don't want to see hang-ups when they want to do something. No one seems to recall that when dual-cores first came out, no one really gave them good reviews- ANYWHERE. They blew at games, where clock speed mattered, and the reduction in clock speed slowed down average tasks. But people got hooked by adverts and less hangups, and the code-monkeys started catering towards more cores, and now games and most everything can make use of more cores- so everyone is happy.
#4, 'Consumers are going to ask "WTF?"' = you thinking "HT on Nehalem, not RHT? WTF?"
I think Intel going FSB-free now is the right time. They secretly know that if you keep making things smaller, you're going to run into negative numbers. So to keep profits up and keep making things faster, they have to make both their designs and whatever gets spit out on the fab lines more efficient. They managed to steer the industry (not single-handedly, of course) into the core wars when they couldn't push the sheer clock-speed envelope anymore, so the transition will be easier now that they know what a good architectural basis looks like. Conroe was a hell of a lot better than the P4s because of the departure in architecture, but Penryn doesn't make a noticeable clock-for-clock difference: process changes alone are yielding less and less. By moving the memory controller on-die, they reduce the complexity of the northbridge chips and can shrink them in both die area and process, which will save them money. With no FSB they'll also need less cache to cover up memory latency, so there's either more room for logic or less space being used and smaller dies (same thing, really).
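The less-cache-needed argument is just average-memory-access-time arithmetic. A back-of-envelope sketch (all latencies and hit rates below are made-up round numbers, not real Penryn or Nehalem figures; only the direction of the trade-off matters): when an on-die controller cuts the miss penalty, a smaller cache with a lower hit rate can land at roughly the same average latency a bigger cache needed over the FSB.

```python
# AMAT = hit_rate * hit_latency + (1 - hit_rate) * miss_latency
# All numbers below are hypothetical, for illustration only.
def amat(hit_rate, hit_ns, miss_ns):
    return hit_rate * hit_ns + (1 - hit_rate) * miss_ns

fsb_miss_ns = 100   # made up: DRAM reached through the FSB + northbridge
imc_miss_ns = 60    # made up: DRAM reached via an on-die memory controller

big_cache_fsb = amat(0.97, 10, fsb_miss_ns)    # big cache hiding the slow FSB
small_cache_imc = amat(0.95, 10, imc_miss_ns)  # smaller cache, faster misses

print(big_cache_fsb, small_cache_imc)  # the smaller cache comes out slightly ahead
```

So the die area that used to buy hit rate can go to logic instead, or just get cut, which is the "same thing really" point above.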
Intel loves making money. They used to do it by raising performance, first by pushing the FSB, then the clock speed, and more recently the process, to spit out more chips on less Si. But all of those have reached, or are reaching, their limits in terms of gains and savings, so by looking at the whole platform and saving Si on the things they actually manufacture and sell, they get to keep making money.
