Intel scrapping Prescott at 4 GHz


clarkey01

Diamond Member
Feb 4, 2004
3,419
1
0
"What does AMD have beyond a 3GHz A64? I don't see it getting that much higher, so all in all you've been ranting about intel doing a poor job when in reality AMD has no other outs other than the same as intel, and that's with dual+ core CPUs. "


AMD NEVER said it would get its CPUs to hit 10 GHz, unlike Intel (Nehalem at 10.25 GHz by '05), a plan which is now dead along with Tejas.
 

Zebo

Elite Member
Jul 29, 2001
39,398
19
81
When you need a 5 kg HSF to cover up Intel's TDP lies, it's time to change. Not because of competition (AMD will always be a small fry), but because of users who want to think while working and OEMs who don't want to pay $55 for copper.
 

bunnyfubbles

Lifer
Sep 3, 2001
12,248
3
0
That's beside the point. What I was trying to imply here is that it's obvious sheer clock speed is hitting a wall, multi-core computers are going to phase in, and in a matter of decades (maybe less?) we'll have quantum computing. Keep your pants on.
 

Wingznut

Elite Member
Dec 28, 1999
16,968
2
0
Originally posted by: clarkey01
I'll make a bold prediction. They have NO plan beyond adding cache to Prescott.
Please... Both Intel and AMD have plans for years from now. And then they have backup plans. And then they have backup plans for those plans.

Processors take years to design and manufacture. If they had no plan further than next year, there would be something like a four-year window with no product.

 

Zebo

Elite Member
Jul 29, 2001
39,398
19
81
Originally posted by: clarkey01
I think what is most disturbing about this is the almost complete failure to let us know what lies beyond the 2 MB cache chips. Frankly, we see the Extreme chips now, and we have a great idea how the 2 MB Prescotts will perform. In fact we have a pretty good idea what would happen if they moved to 4 MB of cache, but frankly we have no idea what kind of CPU we'd drop in after that.

LGA 775 is "promised" to be compatible with a future chip beyond those 2 MB Prescotts, and I find that promise hard to believe when they honestly don't know, or won't tell, exactly what that chip entails.

Why is this valuable? Why do we care to know now? Because, for me, gaming still favors a quality single core. What has been valuable up till now is that the HT of the P4 gives us good multitasking as well as "decent" gaming. If the future of P4 is to branch out, and this is what this news sounds like, then they are going all out multitasking and they are seemingly giving up on the gaming area entirely.

They will focus on mass volume markets. They have outlined NO functional platforms for the various tasks.

So do we salute AMD for giving us a "what you can get now and what you can plug in later"? Yes we do. Because Intel is failing on yet another front it used to do well at: assuring us that the super-expensive parts we are buying today, motherboards and the rest, will be useful for a chip generation beyond what we have. AND they would usually indicate what those chips were likely to do for the various market segments.

Now we have a thorough black hole. It's like their ability to compete with AMD is collapsing into one huge "don't worry, we'll figure a way out" press release. Heck, they aren't even saying this openly, which is even worse.

I'll make a bold prediction. They have NO plan beyond adding cache to Prescott. They have no idea how to get a Dothan to scale past 2.5 GHz, let alone how to add 64-bit capability to it, and they are seriously considering copying everything AMD has done, right down to the on-chip memory controller, which would throw every bit of compatibility they currently have out the window.

When corporations get worried about drastic things, they get real quiet. And you know what? Intel is VERY, very quiet.



From Xbit labs, and I agree.

Dothan is slower than the A64 clock for clock. Close, but no cigar. Plus it would take away one of Intel's only advantages right now, Hyper-Threading. Plus it costs more to make with its massive cache. Plus, as you say, scaling. I've seen an FX-55 A64 hitting 3.0 GHz on default Vcore and air. Heh.

Intel might have plans within plans, but AMD has better plans. Come dual-core time, watch out.

Course I'm not sure it will have any effect on Intel's bottom line. These processors' performance is not all that different, only the enthusiast really notices, and Intel probably has one of the top 10 best-known brands in the world. You can't really compete with that unless you're twice as good and half the price. Read: impossible unless AMD breaks the laws of physics or comes up with some Nobel-prize-worthy inventions.


Anyone know why Intel hasn't got an integrated memory controller?
 

TStep

Platinum Member
Feb 16, 2003
2,460
10
81
Regardless of the future direction of their CPU roadmap, I just find it hard to believe that Intel, being so close to 4 GHz, won't paper launch a 4 GHz part and produce just a very few hand-selected CPUs for sale. Similar to AMD's first 2700+ or 2800+ XP a few years ago, or the most recent highest-end graphics card fiasco. It really seems like they have too much propaganda money invested in the GHz myth to abandon ship before getting to 4 GHz.

Well, of course, I'm assuming Intel can at least find a few hundred or so CPUs that will do the dance at 4 GHz. Maybe they need to call Lockheed's Skunk Works for a little help. ;)
 

mamisano

Platinum Member
Mar 12, 2000
2,045
0
76
Anyone know why Intel hasn't got an integrated memory controller?

With the new BTX standard, the memory is too far away from the processor, which is one reason we will probably never see an A64 on that platform.
 

iwantanewcomputer

Diamond Member
Apr 4, 2004
5,045
0
0
mamisano, does the distance from the processor to the memory make a difference? I know the traces have to be longer, but why would it make it impossible when you are using an integrated memory controller instead of a separate one?

Anyway, if they could integrate the memory controller and get a performance boost like the K8's, they would scrap BTX in a sec in favor of it.

P.S. Keep in mind that Intel is screwed for at least the next year in the high end, but this isn't where most sales are, and AMD can only supply at most 30% of the market until Fab 36 comes online at 65 nm in 2006.
 

jiffylube1024

Diamond Member
Feb 17, 2002
7,430
0
71
Originally posted by: mamisano
Anyone know why Intel has'nt a integrated mem controller?

With the new BTX standard, the memory is too far away from the processor, a reason we will probably never see an A64 on that platform.

Seriously!? That's whacked!

Wow you're right - I just checked it out, and indeed the CPU is not right next to the RAM at all; the bridges are.
 

Sahakiel

Golden Member
Oct 19, 2001
1,746
0
86
Originally posted by: iwantanewcomputer
mamisano, does the distance from the processor to the memory make a difference? I know the traces have to be longer, but why would it make it impossible when you are using an integrated memory controller instead of a separate one?

Anyway, if they could integrate the memory controller and get a performance boost like the K8's, they would scrap BTX in a sec in favor of it.

P.S. Keep in mind that Intel is screwed for at least the next year in the high end, but this isn't where most sales are, and AMD can only supply at most 30% of the market until Fab 36 comes online at 65 nm in 2006.

Uh... Do you even know WHY integrating a memory controller helps performance? It cuts down on latency, which has been the BIG problem with memory for the past fifty years (Yes, that's fifty as in 50). If you integrate the memory controller and then shove the memory far away, about the only thing you're doing is cutting out the ability to upgrade. We don't live in a world of instantaneous signaling (except at sub-molecular scales).
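To put rough numbers on the latency point, here's a back-of-envelope sketch. The nanosecond figures are illustrative assumptions for an off-die (chipset) versus on-die controller, not measured values:

```python
# Back-of-envelope: what one uncached memory round trip costs in CPU cycles.
# Latency figures below are illustrative assumptions, not measurements.

def stall_cycles(latency_ns: float, clock_ghz: float) -> int:
    """Cycles a core sits idle waiting on a single memory access."""
    return round(latency_ns * clock_ghz)

OFFDIE_NS = 120.0  # assumed: CPU -> northbridge -> DRAM and back
ONDIE_NS = 60.0    # assumed: controller on the CPU die

for clock in (2.4, 3.8):
    print(f"{clock} GHz: {stall_cycles(OFFDIE_NS, clock)} cycles off-die "
          f"vs {stall_cycles(ONDIE_NS, clock)} cycles on-die")
```

Note how the same nanosecond latency burns more cycles the faster the clock, which is why latency rather than bandwidth bites hardest at high frequencies.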
The primary problem with integrating is lack of flexibility. You won't be able to produce one core that can work with evolving technologies and market demands. We've been talking about highly integrated platforms and internet access/word processing in every household. Yet the market seems unwilling to ditch low-cost flexibility. Consumers seem much more inclined to spend $30 here and $20 there every 1-2 years rather than $200 in one go every 4-5 years.

Also, anyone bashing Intel for not meeting frequency targets should take a look at the AMD camp and start listing the numerous instances of revised roadmaps scaling back frequency targets. Apple's G5 is having similar problems, since it doesn't seem they'll hit the 3 GHz promised some time back. 90nm caught the entire semiconductor industry with its collective pants down. By the time it became apparent that the problem lay with the technology, and not that Company A couldn't hack it, everyone had invested years of time and money. A complete about-face would've meant literally billions of dollars down the drain with nothing to show for it.
Now, everyone is talking multi-core and functionality. Moore's Law still marches on, but it looks like it's time to find a new use for transistors other than more cache. However, that may actually be a good thing, seeing as how latencies are killing performance at high clock speeds. Decreasing cache size to meet latencies introduces new problems.

If you think multi-core will kill gaming, then it's time to open your eyes. GPUs (or VPUs) have offloaded work from the CPU for years. That's technically multiprocessing. I don't see anyone complaining.
Beyond graphics, which is embarrassingly parallel, it's at most a two-second exercise to find something else in a game that can easily be run across multiple processors: physics. If that doesn't float your boat, try AI or animation. For games, I'm waiting on physics and animation. When a human running around in the rain looks like a human running around in the rain, then you can say we don't need more processors. Heck, we're not even correctly animating cars driving around, and cars are an easier problem. There are plenty more areas in games that can benefit from parallel execution aside from the three I mentioned.
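The physics split described above can be sketched in a few lines: step the simulation on a worker thread while the main thread "renders" the previous frame's state. Everything here (the function names, the trivial integrator) is made up for illustration; a real engine would be far more involved:

```python
# Toy sketch of overlapping physics with rendering across two threads.
# All names and the trivial point-mass integrator are illustrative only.
from concurrent.futures import ThreadPoolExecutor

def step_physics(positions, velocities, dt):
    # Advance one tick of trivially simple point physics.
    return [p + v * dt for p, v in zip(positions, velocities)]

def render(positions):
    # Stand-in for the draw calls the main thread would issue.
    return f"drew {len(positions)} objects"

positions = [0.0, 1.0, 2.0]
velocities = [1.0, 1.0, 1.0]

with ThreadPoolExecutor(max_workers=1) as physics_core:
    for _ in range(3):  # three frames
        future = physics_core.submit(step_physics, positions, velocities, 1.0 / 60)
        render(positions)            # draw the *previous* state in parallel
        positions = future.result()  # sync up before the next frame

print(positions)
```

The key design point is the one-frame pipeline: rendering always works on the last completed physics state, so the two never touch the same data at once.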
 

sandorski

No Lifer
Oct 10, 1999
70,788
6,347
126
Originally posted by: Sahakiel
Originally posted by: iwantanewcomputer
mamisano, does the distance form the processor to the memory make a difference? i know the traces have to be longer, but why would it make it impossible when you are using an integrated mem controller instead of separate.

anyway if they could integrate the mem controller and get a performance boost like th k8, they would scrap btx in a sec in favor of it.

ps. keep in mind that intel is screwed for at least the next year in high end, but this isn't where most sales are, and amd can only supply at most 30% of the market until fab 36 comes online at 65 nm in 2006

Uh... Do you even know WHY integrating a memory controller helps performance? It cuts down on latency, which has been the BIG problem with memory for the past fifty years (Yes, that's fifty as in 50). If you integrate the memory controller and then shove the memory far away, about the only thing you're doing is cutting out the ability to upgrade. We don't live in a world of instantaneous signaling (except at sub-molecular scales).
The primary problem with integrating is lack of flexibility. You won't be able to produce one core that can work with evolving technologies and market demands. We've been talking about highly integrated platforms and internet access/word processing in every household. Yet the market seems unwilling to ditch low-cost flexibility. Consumers seem much more inclined to spend $30 here and $20 there every 1-2 years rather than $200 in one go every 4-5 years.

Also, anyone bashing Intel for not meeting frequency targets should take a look at the AMD camp and start listing the numerous instances of revised roadmaps scaling back frequency targets. Apple's G5 is having similar problems, since it doesn't seem they'll hit the 3 GHz promised some time back. 90nm caught the entire semiconductor industry with its collective pants down. By the time it became apparent that the problem lay with the technology, and not that Company A couldn't hack it, everyone had invested years of time and money. A complete about-face would've meant literally billions of dollars down the drain with nothing to show for it.
Now, everyone is talking multi-core and functionality. Moore's Law still marches on, but it looks like it's time to find a new use for transistors other than more cache. However, that may actually be a good thing, seeing as how latencies are killing performance at high clock speeds. Decreasing cache size to meet latencies introduces new problems.

If you think multi-core will kill gaming, then it's time to open your eyes. GPUs (or VPUs) have offloaded work from the CPU for years. That's technically multiprocessing. I don't see anyone complaining.
Beyond graphics, which is embarrassingly parallel, it's at most a two-second exercise to find something else in a game that can easily be run across multiple processors: physics. If that doesn't float your boat, try AI or animation. For games, I'm waiting on physics and animation. When a human running around in the rain looks like a human running around in the rain, then you can say we don't need more processors. Heck, we're not even correctly animating cars driving around, and cars are an easier problem. There are plenty more areas in games that can benefit from parallel execution aside from the three I mentioned.

FYI, AMD mumbled something about a 3 GHz wall 2.5-ish years ago. They seemed well aware that the problem was going to occur and basically stated then that they wouldn't be making any desktop CPUs greater than 3 GHz. There was a lot of conjecture for a while that AMD was abandoning the desktop. As we now see, that's not the case; they chose to increase IPC and develop dual-core as a way around the issue.