1-25-05: 64bit < HT ?

Jeff7181

Lifer
Aug 21, 2002
18,368
11
81
While you're not legally obligated to back up every statement you make on these boards (as far as I know :D), you can expect to be challenged on subjects like this. There are lots of intelligent people here, lots who work in the industry and deal with this stuff every single day, and hobbyists like me who always want to know why and do lots of research to find out. So "because I said so" typically doesn't fly around here... just look at Zebo's stickied thread in this forum and see all the work he's done to back up his stance on memory and the Athlon 64.

You'll also get called a noob a lot... it's part of the territory. The word gets thrown around on forums like this as much as "dork" did when I was in grade school. Most of the time, the people who can hold an intelligent discussion with you won't resort to name-calling, especially in their first reply to you.
 

Vee

Senior member
Jun 18, 2004
689
0
0
Originally posted by: Jeff7181
To be fair, people often use incorrect terminology when talking about this. It's been a while since I read up on it, but more informed sources use the correct term, "physically address," rather than "access." Because as we all know, the Xeon is a 32-bit processor and is capable of using more than 4 GB of RAM. I'm guessing it's better (easier, faster, etc.) if the processor can natively physically address more than 4 GB, rather than using software "tricks" to get at it.

People confuse virtual space with memory.
From the software's point of view, it lives in the virtual space, not in physical ram. Every byte of data and code, every access to OS services and resources, has its neat little place in virtual space.
Every virtual address that actually gets used is mapped by the CPU's memory manager and the OS to some physical location, either in ram or in swap on the hd. The bit-widths on the two sides of this mapping don't need to match at all.

16-bit software uses a virtual space composed of a number of 64KB segments. The application itself needs to explicitly manage these segments and keep track of which segment every piece of data belongs to. This is contorted, to say the least.

Every used location inside these 16-bit segments is then mapped to some physical address: 24 bits wide, 32 bits wide, 36 bits wide, it doesn't matter. This is business as usual. There are no penalties or software tricks here. The problems are entirely inside the application's own code, which has to deal with a multi-segmented virtual space.
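
To make that bookkeeping concrete, here's a rough C sketch (purely illustrative; real 16-bit compilers hid some of this behind "far"/"huge" pointers) of what an app has to do just to treat several 64KB segments as one big array:

[code]
/* purely illustrative: the segment bookkeeping a 16-bit app needs
   to address a buffer larger than one 64KB segment */
#include <stdio.h>

#define SEG_SIZE 65536UL                /* one segment = 64KB */

/* split a "flat" byte index into a (segment number, offset) pair */
static void flat_to_seg(unsigned long flat, unsigned *seg, unsigned *off)
{
    *seg = (unsigned)(flat / SEG_SIZE); /* which 64KB segment */
    *off = (unsigned)(flat % SEG_SIZE); /* position inside it */
}

int main(void)
{
    unsigned seg, off;
    flat_to_seg(150000UL, &seg, &off);  /* byte 150000 of a big buffer */
    printf("segment %u, offset %u\n", seg, off); /* segment 2, offset 18928 */
    return 0;
}
[/code]

Every single access to the big buffer has to go through arithmetic like this. A flat 32-bit pointer makes it all disappear.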

32-bit software has 4GB segments. This opens up the possibility of a software model that resides entirely inside one single flat 4GB segment. The software can then be completely ignorant of the concept of segments, and can assume that every address is singular and unique. It has a FLAT, linear virtual space. Address arithmetic is a breeze and performance is increased.
The OS can also now assume that every process has its own 32-bit segment, and can easily separate processes this way.

So basically, what we have now is used data inside a number of 32-bit virtual spaces being mapped to 36-bit physical addresses.

Every used location inside these 32-bit segments is then mapped to a physical location. That works something like this:
The application wants to set up some piece of data and requires an 800KB block for it in its virtual space. It asks the OS to allocate 800KB. The OS then finds two hundred (200) free 4KB memory pages in ram. These can be scattered anywhere and in any order. The OS also finds a previously unused 800KB free block inside the app's virtual space. Then every sequential 4KB piece of this 800KB block is associated with one of the 4KB pages in ram. Finally, the OS hands the application the 32-bit number pointing to the start of the 800KB block inside its own virtual space. The application never concerns itself with physical addresses. It actually cannot; it's cut off and isolated by the OS.
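
A minimal Win32 sketch of that walkthrough (assuming the standard VirtualAlloc/VirtualFree calls; the 800KB figure is just the example above):

[code]
/* sketch: ask the OS for an 800KB block. The pointer we get back is a
   VIRTUAL address inside our own 4GB space; which scattered physical
   4KB pages end up backing it is entirely the OS's business. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SIZE_T size = 800 * 1024;           /* 800KB = 200 pages of 4KB */
    void *block = VirtualAlloc(NULL, size,
                               MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    if (block == NULL) {
        printf("no room left in the virtual space (or ram+swap)\n");
        return 1;
    }
    printf("virtual address: %p\n", block); /* never a physical address */
    VirtualFree(block, 0, MEM_RELEASE);
    return 0;
}
[/code]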

The OS may later want to swap some rarely used 4KB pages out to the hd (to free up ram). It then changes the association for those pages to point into swap instead. The software doesn't know anything about this; the virtual addresses inside its virtual space remain the same. If and when the app accesses data at one of those virtual addresses, the OS goes, "Oops, that address is on a page in swap," finds a free 4KB page in ram, loads the page from swap, and changes the association so it points to the new page in ram.

When the Windows 4GB space no longer contains enough unfragmented blocks of addresses to represent everything needed (application code, application data, .dlls, shared data, OS APIs, OS resources, disc cache, agp aperture...), we run into a barrier. This barrier has nothing directly to do with physical ram or physical addressing. It has to do with the virtual space of our software model.
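
You can actually watch this barrier from inside a process. A quick sketch in plain C (standard library only) that hunts for the largest single contiguous block the virtual space will still give you:

[code]
/* sketch: find (roughly, by halving) the largest contiguous block
   left in this process's virtual space. On 32-bit Windows this comes
   up far short of 4GB no matter how much ram is installed. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t size = (size_t)1 << 31;      /* start at 2GB and halve */
    while (size > 0) {
        void *p = malloc(size);
        if (p != NULL) {
            printf("largest contiguous block: ~%lu MB\n",
                   (unsigned long)(size >> 20));
            free(p);
            return 0;
        }
        size >>= 1;                     /* too big, try half */
    }
    printf("nothing left at all\n");
    return 1;
}
[/code]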

Since we want to keep the single flat, linear space and all the advantages that come with it, we go to a 64-bit software model, featuring a 64-bit, 16-exabyte (16EB) virtual space. This requires a new CPU with an instruction set that has a 64-bit address field.

Current AMD K8 cpus have the hardware to map a total of 1TB of physical addresses (40 bits) from anywhere in the lowest 256TB (48 bits) of the virtual space. But WindowsXP64 will only map 16GB from a 16TB virtual space for a Windows64 app, if I'm correctly informed.

This does seem a bit constricted, but the scheme is expandable. The hardware concept, x86-64, is ultimately expandable to mapping 4PB of physical addresses from the full 16EB virtual space, and any x86-64 software can ultimately run in that environment.
Note that this doesn't necessarily mean a Windows64 (software model) app will be able to use 4PB of a 16EB virtual space. But WindowsXP64 will probably be able to run on a future cpu featuring hardware support for mapping the full 16EB virtual space onto 4PB of physical memory.
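
For the curious, those capacity figures are just powers of two (assuming the commonly quoted widths: 48-bit virtual / 40-bit physical on today's K8, and 64-bit / 52-bit as the architectural limits of x86-64):

[code]
/* where the TB/PB/EB figures come from */
#include <stdio.h>

int main(void)
{
    printf("K8 virtual     : 2^48 bytes = %llu TB\n", 1ULL << (48 - 40)); /* 256 TB */
    printf("K8 physical    : 2^40 bytes = %llu TB\n", 1ULL << (40 - 40)); /*   1 TB */
    printf("x86-64 virtual : 2^64 bytes = %llu EB\n", 1ULL << (64 - 60)); /*  16 EB */
    printf("x86-64 physical: 2^52 bytes = %llu PB\n", 1ULL << (52 - 50)); /*   4 PB */
    return 0;
}
[/code]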

The alternative, a virtual space composed of multiple 32-bit segments, is not at all attractive.
Note that in 64-bit computing we map from a larger virtual space to a smaller physical space, whereas the case is the opposite with 16- and 32-bit computing. 64-bit computing is more flexible and also solves the problem of fragmentation of the virtual space.
 

Zebo

Elite Member
Jul 29, 2001
39,398
19
81
Originally posted by: Vee
There are a couple of things to consider. One of them is EXTREMELY IMPORTANT, of a make-or-break nature.

A Windows32 application cannot work reliably using more than 1½GB of memory for data and code!
By 1.8GB I would expect it to have already terminated with an out-of-memory error.

Forget that silly 4GB figure that is so often quoted around the 32 vs 64 issue. It's wrong, and it's derived from a simplistic belief about how software and cpus handle ram, incomplete enough to be false.
It doesn't matter how many PC rags or web articles you see making the same claim: "32 bits = 4GB".
It's still false! A 32-bit processor and 32-bit software can access more than 4GB of ram. Correct me if I'm wrong, but I think current 32-bit cpus are good for 64GB (36-bit physical addressing via PAE)?

But a particular software format, like Windows32, is NOT able to access that much.
This is the price Windows32 pays for having a flat virtual space. And it's worth it!
To break the 1½GB 32-bit barrier, we have to migrate to another software format.

We could go to a 32-bit software model that can use, say, 64GB. But it would be horrendously crippled in every respect. Basically, we would be back to Windows 3.11 technology.

That's why everybody (Apple, Microsoft, and Linux) is going 64-bit instead.
And in order to be able to use more than 1½GB of memory, a bunch of heavyweight apps and games are going 64-bit asap. Mark very well that this is NOT ram I'm talking about! I'm talking about memory in the app's virtual space! When you run out of virtual space, you do so regardless of whether you have only 1GB of ram or, surprise, surprise, 4GB of ram. It makes absolutely no difference at all!
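
If anyone doubts this, it's trivial to demonstrate. A sketch in plain C: keep grabbing 10MB chunks until the allocator gives up. On 32-bit Windows this dies somewhere around 1½-2GB whether the box has 1GB of ram or 4GB, because it's the process's virtual space that runs out, not ram:

[code]
/* sketch: allocate until the virtual space is exhausted */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const size_t chunk = 10 * 1024 * 1024;  /* 10MB per grab */
    unsigned long total_mb = 0;
    while (malloc(chunk) != NULL)  /* leaking on purpose: we want the ceiling */
        total_mb += 10;
    printf("virtual space exhausted after ~%lu MB\n", total_mb);
    return 0;
}
[/code]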

I would suggest that anyone who plans to still be using their computer a few years from now, even in a secondary role, and even if you insist on Intel and HT, seriously consider the 5x1 or 6x0 series P4s to get 64-bit support.

The Hyper-Threading feature, compared to this, is utterly below the horizon. Complete nonsense.



Then there's the other side of this poll: is Intel's HT feature preferable to the Athlon 64's superior 32-bit performance?

That depends entirely on what you do, and what applications you use.

I've used both as budget engineering workstations. On my applications the A64 is utterly superior. It MAULS the P4. The A64 is such a glorious working comfort that HT is definitely a non-issue.

It may very well be because the software is poorly optimized for the P4. It doesn't matter. That just illustrates another of AMD's big advantages: you don't NEED special P4-optimized software. It's still fast. And that goes for a lot of the software floating around. The margin is often huge, like +40%.

Multitasking doesn't change this picture for me. I'm well aware that it could, for some purposes.
But when I have a large computation running in the background while doing interactive editing work in an app that also demands lots of cpu power, what I do is this: I drop the base priority for the background process in the task manager. Then the foreground app works so smoothly that it's hard to impossible to tell something is running in the background. This works for me and my current apps.

It's far from an ideal solution, though. Apparently, it's not guaranteed to always work for all apps. There are a number of preferable solutions. One would be for software to be smart enough to realize that time-consuming computations should run in a thread with low enough priority not to disturb other processes. (Why are so many Windows programmers so clueless about multithreading?) Another preferable solution would be hyper-threading, or multi-core/dual-core.
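
For the programmers reading: it's practically a one-liner. A rough Win32 sketch (assuming the standard CreateThread/SetThreadPriority calls) of what a well-behaved app could do with its number-crunching thread:

[code]
/* sketch: run the heavy computation in a thread that politely
   yields to everything else on the machine */
#include <windows.h>

DWORD WINAPI crunch(LPVOID arg)
{
    /* drop our own priority before hours of number crunching */
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_LOWEST);
    /* ... long computation goes here ... */
    return 0;
}

int main(void)
{
    HANDLE h = CreateThread(NULL, 0, crunch, NULL, 0, NULL);
    /* the interactive/foreground work continues at normal priority */
    WaitForSingleObject(h, INFINITE);
    CloseHandle(h);
    return 0;
}
[/code]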

But it's not a given that the latter are competitive in terms of performance and cost. My choice between A64 performance and Intel HT is the A64. But that's not why I voted for 64-bit. The reason I did that was entirely the future addressing capabilities of 64-bit software.

I find myself starting to search out your posts nowadays :) Very informative :thumbsup:
 

Jeff7181

Lifer
Aug 21, 2002
18,368
11
81
I think I understand what you're saying, Vee.

So... in a very oversimplified way... 64-bit computing does for memory what FAT32 did for hard drives? (With segments and partitions being analogous in this comparison.) In that with FAT16 you could have a maximum 2 GB partition, but you could have, say, 20 partitions for your 40 GB drive... so you could still use the whole 40 GB, but in a "messy" way.
Then with FAT32, you were able to create a single 40 GB partition.
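
The analogy holds up on the numbers, too. FAT16's 2GB ceiling is just cluster math (assuming the usual 16-bit cluster count and 32KB maximum cluster size):

[code]
FAT16: 2^16 clusters x 32KB/cluster = 65,536 x 32KB = 2GB max per partition
40GB drive / 2GB per partition = 20 partitions, all managed by hand
[/code]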
 

iversonyin

Diamond Member
Aug 12, 2004
3,303
0
76
Originally posted by: housecat
Originally posted by: 1stoff
Originally posted by: housecat
All it really comes down to is this: would you rather have 64-bit or HT on your current processor?

If you have an AXP, choose between one of the two.
If you have an A64, keep 64-bit or take HT instead.
If you have a P4, keep HT or take 64-bit instead.


I'd take an A64, but I'd give up 64-bit any day for HT. I think an A64 with HT would be a monster. On the other hand, a P4 with 64-bit wouldn't be as exciting. It'd just be nice.

Not that there aren't P4s with 64-bit... I'm just saying, given the option to have any processor, and give or take one of the given options.

By your own words a 64-bit OS is not available yet (you seem to conveniently forget Linux), so how can anyone honestly compare the two and then make a choice?

If you are comparing hardware, then it is a fact that the A64 is ahead of the P4 in nearly every benchmark. If you compare 64-bit to HT, then there is no comparison. HT is a feature that "helps" the very poor P4 architecture, which was specifically designed to ramp/scale MHz as dictated by Intel's marketing/PR department. 64-bit is a totally different animal that cannot be compared, but when it is fully adopted, as with the moves from 8-bit to 16-bit and then to 32-bit (can you remember that far back?? I can), it will be awesome.

That part in bold is funny.


1. I see no proof it will be awesome. Show me a shred of proof that it "will" be awesome. I've seen marginal gains, and the gains that are there are due to the extra registers, not to crunching data in 64-bit chunks.

2. There is an OS available, it's just in beta. And sure, there's Linux.
So how can you say it will be "awesome" when the gains today are marginal at best?


I'm being serious, not sarcastic, when I ask: where are the gains going to come from?
Please fill me in... because all of my reading has led me to believe it's overhyped and, obviously, underused.


You need to read other people's posts more carefully; he said when it's "fully adopted, it would be awesome." 64-bit is still in its early/introduction stage. All he said is to give it some time to develop and mature.
 

Vee

Senior member
Jun 18, 2004
689
0
0
Originally posted by: Jeff7181
I think I understand what you're saying, Vee.

So... in a very oversimplified way... 64-bit computing does for memory what FAT32 did for hard drives? (With segments and partitions being analogous in this comparison.) In that with FAT16 you could have a maximum 2 GB partition, but you could have, say, 20 partitions for your 40 GB drive... so you could still use the whole 40 GB, but in a "messy" way.
Then with FAT32, you were able to create a single 40 GB partition.

Well, 32-bit software did that. Gave us a single segment. 64-bit software is our chance to keep that advantage, instead of going back.

 

Jeff7181

Lifer
Aug 21, 2002
18,368
11
81
Originally posted by: Vee
Originally posted by: Jeff7181
I think I understand what you're saying, Vee.

So... in a very oversimplified way... 64-bit computing does for memory what FAT32 did for hard drives? (With segments and partitions being analogous in this comparison.) In that with FAT16 you could have a maximum 2 GB partition, but you could have, say, 20 partitions for your 40 GB drive... so you could still use the whole 40 GB, but in a "messy" way.
Then with FAT32, you were able to create a single 40 GB partition.

Well, 32-bit software did that. Gave us a single segment. 64-bit software is our chance to keep that advantage, instead of going back.

Right... staying ahead of the game...
 

dev0lution

Senior member
Dec 23, 2004
472
0
0
It'll be interesting to see how the first generation of 64-bit and dual core plays out for AMD and Intel. On the one hand, you've got AMD, who's already there and is just waiting for the OS while moving to dual core. But to keep backward compatibility with Socket 939, they're potentially compromising on additional changes that could take better advantage of dual core and 64-bit XP.

On the other hand, Intel's still limiting things with their FSB speeds and lack of an onboard memory controller, and you STILL have to ditch the 915 and 925 chipsets (and likely Socket T) to upgrade. Looks like a play straight out of the MS upgrade-path playbook :-D

Add in a Windows OS still in beta, drivers, application software, etc., and it'll be a while until the dust settles. I think short term, 64-bit on AMD will be the way to go, but long term it's a bit hard to tell. AMD doesn't have the fabrication capacity to match Intel, especially at 90nm and below. For now I'm stuck with HT, so I might as well like it *heh*. Couldn't beat the deal, though I concede AMD's edge.
 

Megatomic

Lifer
Nov 9, 2000
20,127
6
81
Originally posted by: CaiNaM
Easily HT. That's why I am looking forward to dual core. Besides, at this time 64-bit means absolutely nothing unless you have a 64-bit OS, which would then allow access to a larger memory pool.

Until the OS & apps are standard 64-bit, it really only benefits marketing.

HT provides real benefits now, and dual core will provide them better (2 physical cores are better than 1 physical plus 1 logical).
64-bit is the future. This ought to shut the OP up.
 

housecat

Banned
Oct 20, 2004
1,426
0
0
Yeah, gotta quell those guys who learn so much about processors that they end up going against the mindless hordes who say "AMD ROXSORZ!@!!!!@@!"

My only question is: if AMD designed the A64 to be ready for dual core since its creation (yes, they did, AMD fanboy noobs), then why is AMD releasing their dual core chip after Intel, when Intel has to create a "new" processor?

You'd think AMD would be able to release it virtually anytime, and anytime before Intel would be better.
 

Jeff7181

Lifer
Aug 21, 2002
18,368
11
81
Originally posted by: housecat
Yeah, gotta quell those guys who learn so much about processors that they end up going against the mindless hordes who say "AMD ROXSORZ!@!!!!@@!"

My only question is: if AMD designed the A64 to be ready for dual core since its creation (yes, they did, AMD fanboy noobs), then why is AMD releasing their dual core chip after Intel, when Intel has to create a "new" processor?

You'd think AMD would be able to release it virtually anytime, and anytime before Intel would be better.

See, you don't help your position at all by screaming "fanboy noobs" either.

But to respond to your question anyway... I believe it's because AMD doesn't have the resources Intel has... it's been said before that Intel spends more money on advertising than AMD brings in annually. I've never looked up the stats to confirm that, but it doesn't sound unreasonable. AMD JUST made the transition to 90nm... and their dual core CPUs certainly won't be 130nm... so before they start pumping out dual cores, they probably have to work some bugs out of the 90nm process.
Just because the K8 was intended to be dual core capable from the beginning doesn't mean they've been making dual core CPUs for the past 4 or 5 years the K8 has been in development. All it means is that, architecturally, it won't require significant changes to make it a dual core processor. And I'm willing to bet that since it was designed to be SMP capable, the foundation is already there to provide enough I/O bandwidth through the HyperTransport bus... unlike the Pentium 4, which is going to be starved for memory bandwidth again unless motherboard manufacturers move to a 256-bit memory interface.
 

beatle

Diamond Member
Apr 2, 2001
5,661
5
81
Easy, HT (on an AMD chip). By the time I'm using a 64-bit OS with 64-bit apps that take advantage of the extra memory addressing, both the A64 and current day P4s will be extinct. :)
 

housecat

Banned
Oct 20, 2004
1,426
0
0
Originally posted by: beatle
Easy, HT (on an AMD chip). By the time I'm using a 64-bit OS with 64-bit apps that take advantage of the extra memory addressing, both the A64 and current day P4s will be extinct. :)

 

whorush

Member
Oct 16, 2004
132
0
0
HT isn't the end-all be-all. it helps the P4 because its pipeline is something like 31 stages. one branch misprediction and the whole thing is empty. big bubble. so since the whole huge pipeline was empty all the time, they tricked the OS into giving it 2x as much data. if HT is the end-all be-all, why isn't it in the pentium M or the ITANIC? (not sure if i'm imagining this, but i think they may have talked about putting it in the itanic, but anyway.)

anyway, the hammer has a short pipeline, around 12 stages, so it wouldn't benefit from HT. in fact, it could hurt performance.

first, intel's dual core SMITHFIELD is going to suck. 2 chips bunged together on a die, hardly as elegant as AMD's solution. also, just when you thought intel couldn't go higher than 115 watts per die, the smithfield will be 130 max. wasn't dual core supposed to cut wattage? as for why AMD is taking so long, they're really not. intel is coming out with dual core on desktops first and amd is coming out with dual core on servers first. ask me and it makes a lot more sense to put dual core in servers first. they designed it in a long time ago, true, but AMD is also really understaffed and they have a lot less money. it takes a lot of things to launch a brand new chip, lots of money and lots of coordination, so in this way it's understandable they're taking their time, and i bet that's what it is. they demonstrated the chip a long, long time ago, which means they passed the technical hurdles.
 

Markfw

Moderator Emeritus, Elite Member
May 16, 2002
27,278
16,121
136
Originally posted by: housecat
Yeah, gotta quell those guys who learn so much about processors, that they end up goign against the mindless hordes who say "AMD ROXSORZ!@!!!!@@!"

My only question is: if AMD designed the A64 to be ready for dualcore since its creation (yes, they did amd fanboy noobs), then why is AMD releasing their dualcore chip after intel? When intel has to create a "new" processor?

You'd think AMD would be able to release it virtually anytime, and anytime before intel would be better.

Last I heard, AMD was scheduled to release it first!
 

housecat

Banned
Oct 20, 2004
1,426
0
0
Now that would be awesome (AMD really needs it). I know they've been doing "good," but they need a lot more than the "one-hit wonders" of the AXP and A64.

At least in CORPORATE eyes. Not ours.

They gotta get noticed and keep hitting Intel on this stuff.

They hit 1GHz first, were first to "64-bit"... hopefully they'll be first to dual core.


I'm not rooting for them; heck, everyone here has already branded me an Intel fanboy... I'm just saying they need it desperately and I hope they get it.
 

josh1413

Junior Member
Jan 28, 2005
17
0
0
If you put an Athlon 64 FX-55 or an Athlon 64 4000+ against a P4 3.8GHz, or heck, even a P4 3.46GHz EE, it would probably beat both of them in any multitasking benchmark, or come so close it would be irrelevant. When it all boils down, the Athlon 64 processors are the best at gaming. That's not to say the P4s are bad; they're just not as good in most gaming benchmarks, or most other benchmarks.
 

Jeff7181

Lifer
Aug 21, 2002
18,368
11
81
Originally posted by: whorush
HT isn't the end-all be-all. it helps the P4 because its pipeline is something like 31 stages. one branch misprediction and the whole thing is empty. big bubble. so since the whole huge pipeline was empty all the time, they tricked the OS into giving it 2x as much data. if HT is the end-all be-all, why isn't it in the pentium M or the ITANIC? (not sure if i'm imagining this, but i think they may have talked about putting it in the itanic, but anyway.)

anyway, the hammer has a short pipeline, around 12 stages, so it wouldn't benefit from HT. in fact, it could hurt performance.

first, intel's dual core SMITHFIELD is going to suck. 2 chips bunged together on a die, hardly as elegant as AMD's solution. also, just when you thought intel couldn't go higher than 115 watts per die, the smithfield will be 130 max. wasn't dual core supposed to cut wattage? as for why AMD is taking so long, they're really not. intel is coming out with dual core on desktops first and amd is coming out with dual core on servers first. ask me and it makes a lot more sense to put dual core in servers first. they designed it in a long time ago, true, but AMD is also really understaffed and they have a lot less money. it takes a lot of things to launch a brand new chip, lots of money and lots of coordination, so in this way it's understandable they're taking their time, and i bet that's what it is. they demonstrated the chip a long, long time ago, which means they passed the technical hurdles.

Don't forget that with those 130 watts comes almost twice the surface area to dissipate heat through. That'll make a big difference in the operating temperature of the CPU... call it a 75% increase in surface area against roughly a 13% increase in total heat (130W / 115W is about 1.13), so heat per unit area actually drops by about a third (1.13 / 1.75 is roughly 0.65).

By the way, your post would be much easier to read if you capitalized the first letter of the first word of each sentence.

*EDIT* Hehe... ya know what would be funny? If a couple of months after Intel's dual core CPUs came out, AMD came out with a quad core and said, "Yeah, since we designed the K8 to be a multi-core CPU from the beginning, we thought, why bother with just two cores? Let's go all out and put in 4 cores with a 512-bit memory interface." :laugh:
 

stevty2889

Diamond Member
Dec 13, 2003
7,036
8
81
Originally posted by: Markfw900
Originally posted by: housecat
Yeah, gotta quell those guys who learn so much about processors that they end up going against the mindless hordes who say "AMD ROXSORZ!@!!!!@@!"

My only question is: if AMD designed the A64 to be ready for dual core since its creation (yes, they did, AMD fanboy noobs), then why is AMD releasing their dual core chip after Intel, when Intel has to create a "new" processor?

You'd think AMD would be able to release it virtually anytime, and anytime before Intel would be better.

Last I heard, AMD was scheduled to release it first!

Intel moved their release date up by an entire quarter, and I also think the first AMD dual cores are supposed to be for workstations, like the Opteron, rather than for the desktop. Intel's workstation dual cores aren't supposed to come out until 2006, afaik. In either case, I don't see dual core being very useful until there are more multithreaded applications to take advantage of it, so I think they need to keep getting more performance out of single core chips, or there won't be much improvement for the already cpu-intensive single-threaded applications.
 

whorush

Member
Oct 16, 2004
132
0
0
agreed on the increase in surface area. i don't think it's 75%, though; it should be close to 100%, since i do believe they are just 2 plain old chips.

also, according to the inquirer, who i think broke the story, HT will be disabled, because with HT the chips were getting too hot.

i really think that AMD will take some serious server/workstation share with the dual core chips, and i think there will be a lot of upgrading.
 

Jeff7181

Lifer
Aug 21, 2002
18,368
11
81
Originally posted by: whorush
agreed on the increase in surface area. i don't think it's 75%, though; it should be close to 100%, since i do believe they are just 2 plain old chips.

also, according to the inquirer, who i think broke the story, HT will be disabled, because with HT the chips were getting too hot.

i really think that AMD will take some serious server/workstation share with the dual core chips, and i think there will be a lot of upgrading.

That all depends on whether they EVER plan on implementing HT... I think I read somewhere that HT accounts for 5% of the transistors on the Prescott. Plus there's bound to be some shared stuff between the two cores, though that may be offset by the additional transistors needed to attach the two cores, 'cause won't they be able to access each other's L2 cache? There would probably be some extra transistors in there to make that possible. So you're right... it may be closer to 100%... but I think AT LEAST 75% is a safe bet. =)
 

VIAN

Diamond Member
Aug 22, 2003
6,575
1
0
Note: After reading this post, look for my next post on the third page, a very critical companion to this post.

I have an AMD AXP and I'm gonna get an A64 this year, but boy, would I love HT on that baby. It's one of those Intel technologies that every processor could do with.

As for 64-bit: the technology is useless right now, and probably won't be useful for another 1-2 years, once more and more people jump on Win64, whenever it releases.

And for all of you saying we can't compare these technologies: you're right, but we aren't comparing. He just asked which one you would like to have now. No-brainer: HT, because at least it's of some use, unlike 64-bit.