x64 w/LOTS of RAM

drag

Elite Member
Jul 4, 2002
8,708
0
0
For high-performance databases it's very useful.

Basically, if you have a database that's 8 gigs big, you can copy it into RAM, and I/O speeds become very, very fast.

It also helps for working on large datasets. For instance, if you have to do some non-linear video editing, you can work on your HD video and run calculations that use many gigs worth of RAM. With regular old x86, things begin to break down when you get to dataset sizes over 512 megs (or so I'm told).

But for normal desktop usage, if your system and all your applications don't use up more than, say, 1.5 gigs of RAM, you are not going to see much, if any, performance increase if you add an extra 15.5 gigs of RAM. Although if you want to run your entire operating system and all your user files out of RAM, then it would be pretty freaking fast.


 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
You probably won't notice anything spectacular unless you have an app that requires huge amounts of memory. 32-bit NT can currently handle up to 64G of physical memory, so upgrading to a 64-bit version will only help you in the per-process VM area, and since there are very few 64-bit apps for Windows yet, I doubt you have one that would take advantage of it.

It pretty much boils down to: if you have to ask, then no, it won't help.
 

spyordie007

Diamond Member
May 28, 2001
6,229
0
0
32-bit NT can handle up to 64G physical memory
Not really a correction, but just a point to add: 64GB is not for all 32-bit versions of NT; XP Pro will only allow 4GB of RAM, so going from XP Pro on 32-bit to XP Pro on 64-bit would allow you to use more than 4GB of total RAM.

Of course, if you have 4GB or less then it really doesn't matter much :D
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
Not really a correction, but just a point to add: 64GB is not for all 32-bit versions of NT; XP Pro will only allow 4GB of RAM, so going from XP Pro on 32-bit to XP Pro on 64-bit would allow you to use more than 4GB of total RAM.

Well those are artificially imposed limits to make you pay more money to use your hardware, but yes I think you need Advanced Server to actually use the 64G of memory.
 

NogginBoink

Diamond Member
Feb 17, 2002
5,322
0
0
Originally posted by: Nothinman
Not really a correction, but just a point to add: 64GB is not for all 32-bit versions of NT; XP Pro will only allow 4GB of RAM, so going from XP Pro on 32-bit to XP Pro on 64-bit would allow you to use more than 4GB of total RAM.

Well those are artificially imposed limits to make you pay more money to use your hardware, but yes I think you need Advanced Server to actually use the 64G of memory.


A 32-bit architecture can address 4GB of RAM (2^32 bytes = 4GB).

Windows and Intel use special programming tricks (Address Windowing Extensions) to map certain physical blocks of RAM into and out of virtual address space.

The vast, vast, vast majority of software doesn't use AWE. It requires special programming that's only worth it for very RAM-hungry server software packages.

In practical terms, under most circumstances with most software, 32-bit Windows can use only 4GB RAM.
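The figures in the posts above fall straight out of the address widths involved: 32 bits of flat addressing gives the 4GB per-process ceiling, and PAE's 36-bit physical addresses give the 64G figure Nothinman quoted. A quick sanity check of the arithmetic:

```python
# Address-space arithmetic behind the limits discussed in this thread.
GB = 2 ** 30

flat_32bit = 2 ** 32     # what a 32-bit pointer can address
pae_36bit = 2 ** 36      # PAE widens physical addresses to 36 bits

print(flat_32bit // GB)  # 4  -> the 4GB ceiling a 32-bit process sees
print(pae_36bit // GB)   # 64 -> the 64G physical limit with PAE
```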
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
Most Windows apps don't need anywhere near the 2G they get now, so it's largely irrelevant. Windows itself can use up to 64G using PAE; even if a single process can't see it all at once, you'll still be able to use the memory for other processes and filesystem caching.
 

imported_BikeDude

Senior member
May 12, 2004
357
1
0
Originally posted by: Nothinman
Well those are artificially imposed limits to make you pay more money to use your hardware, but yes I think you need Advanced Server to actually use the 64G of memory.

That is not the only reason for the imposed 4GB limit.

See http://blogs.msdn.com/carmencr/archive/2004/08/06/210093.aspx:
"This is important because we?ve found that many devices and device drivers, especially in the consumer space, happily assume they?ll never have to address memory at an address over the 4GB boundary."

Most Windows apps don't need anywhere near the 2G they get now

"Most" is irrelevant if one or two of the apps you actually care about fall into the >2GB category.

Adobe Photoshop is an excellent example. Its executable is marked largeaddressaware and will happily use 2GB of memory (4GB under 64-bit Windows). When that memory is exhausted, it starts writing to its own scratch file(s). If it is running on a configuration with lots of memory (>4GB), it lets file reads/writes go through the OS's cache manager (thus utilizing the extra memory); otherwise it bypasses the disk cache.

And before anyone mentions the /3GB boot flag... check out http://blogs.msdn.com/oldnewthing/ and search for PAE, AWE and 3GB. Raymond wrote a whole slew of interesting articles on the subject a year ago. (Basically, most people should avoid /3GB, especially now that x64 Windows is available.)
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
"This is important because we?ve found that many devices and device drivers, especially in the consumer space, happily assume they?ll never have to address memory at an address over the 4GB boundary."

So? That just means that DMA pools have to be allocated from "low" memory, that's already a requirement for most ISA devices. High memory can still be used for processes and caching.

"Most" is irrelevant if one or two of the apps you actually care about fall into the >2GB category.

Once again, if you have to ask it probably doesn't affect you. If you're running one of those apps then you probably know that they'll be able to take advantage of the additional memory.

Adobe Photoshop is an excellent example. Its executable is marked largeaddressaware and will happily use 2GB of memory (4GB under 64-bit Windows). When that memory is exhausted, it starts writing to its own scratch file(s). If it is running on a configuration with lots of memory (>4GB), it lets file reads/writes go through the OS's cache manager (thus utilizing the extra memory); otherwise it bypasses the disk cache.

Bypassing the disk cache in almost any instance is stupid, IMO. As is the scratch file, which I would guess is just a hack-around for the poor memory management on OS 9 that is only still there because no one's gotten around to removing it.

(Basically, most people should avoid /3GB, especially now that x64 Windows is available)

Duh. Because it limits the kernel to 1G of VM which probably also limits how much space it has for PTEs and disk caching. And since 99% of all Windows apps aren't marked largeaddressaware, it won't be of any real benefit except in corner cases.
 

Valkerie

Banned
May 28, 2005
1,148
0
0
Someone said that Unreal Tournament 2004 can use 2GB of RAM? Will the game be faster on x64 than on 32-bit?

What about a processor identifying the RAM? Are there limitations with any Intel or AMD chips? I thought Xeons/Opterons handle lots of RAM very well? Or is it the mobo architecture working with the OS?
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
Someone said that Unreal Tournament 2004 can use 2GB of RAM? Will the game be faster on x64 than on 32-bit?

The only thing I can think of is really large textures and I would imagine that bandwidth between your video card and main memory would be more of a factor than anything else.

What about a processor identifying the RAM? Are there limitations with any Intel or AMD chips? I thought Xeons/Opterons handle lots of RAM very well? Or is it the mobo architecture working with the OS?

Opterons only implement 48-bit virtual addressing right now, which covers 256 terabytes, though the architecture defines a full 64-bit space, so there are enough addresses to cover 16 exabytes. Non-EM64T Xeons are 32-bit chips with lots of cache, so they still only do 64G of memory with PAE and 4G of VM unless you use AWE to hack around that. EM64T Xeons are essentially Intel's version of AMD64, but I couldn't find a doc that says how much of the 64-bit space they've actually implemented; not that it matters, since I doubt you'll find a single box that can hold enough terabytes of memory to exceed it.
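The two big numbers above come straight from the bit widths; a quick check:

```python
# Address-space sizes from the bit widths quoted above.
TB = 2 ** 40  # one terabyte
EB = 2 ** 60  # one exabyte

print(2 ** 48 // TB)  # 256 -> 48-bit addressing covers 256 terabytes
print(2 ** 64 // EB)  # 16  -> a full 64-bit space covers 16 exabytes
```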
 

imported_BikeDude

Senior member
May 12, 2004
357
1
0
Originally posted by: Nothinman
"This is important because we?ve found that many devices and device drivers, especially in the consumer space, happily assume they?ll never have to address memory at an address over the 4GB boundary."

So? That just means that DMA pools have to be allocated from "low" memory, that's already a requirement for most ISA devices. High memory can still be used for processes and caching.

So you're saying all drivers will just work? Despite what the MS guy said? You have high faith in the capabilities of third-party device driver developers, then. (Remember: we're not talking about mainstream nVidia or ATI drivers here, but rather soundcard drivers and various oddly shaped hardware device drivers -- stuff you typically never see in a server.)

Once again, if you have to ask it probably doesn't affect you. If you're running one of those apps then you probably know that they'll be able to take advantage of the additional memory.

That is a generalisation. You're assuming all graphics artists even know what memory is. (The truth is probably somewhere in the middle. E.g. the guys over at the http://dpreview.com/ forums often recommend a minimum of 2GB of memory for PS, but that doesn't mean they understand the finer points of PAE or 64-bit OSes.)

Bypassing the disk cache in almost any instance is stupid, IMO. As is the scratch file, which I would guess is just a hack-around for the poor memory management on OS 9 that is only still there because no one's gotten around to removing it.

By going through the disk cache, OTOH, they increase the chance that the OS will interfere and steal pages from PS' process. That aside, the docs for FILE_FLAG_NO_BUFFERING state: "When combined with FILE_FLAG_OVERLAPPED, the flag gives maximum asynchronous performance, because the I/O does not rely on the synchronous operations of the memory manager." So it is not completely without its merits.

But certainly, I was surprised to learn that they bypass the disk cache unless a certain amount of physical memory is present. Adobe seems to indicate that benchmarks show this is the best approach, and I'm inclined to believe them. (It probably varies from system to system -- it is interesting to note that many seem to recommend 15k SCSI drives for PS systems.)

And since 99% of all Windows apps aren't marked largeaddressaware

Well, again: PS falls into this category (marked largeaddressaware)... I'd be surprised if other apps, like AutoCAD, aren't marked. How are you able to say 99% of Windows apps aren't marked? Have you looked at the PE header of a significant number of apps? Should small utilities be counted too? (I wouldn't care about those -- do you?)
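For what it's worth, checking the bit isn't hard. Here's a minimal sketch (no error handling beyond a signature check, and it assumes a well-formed image); the offsets come from the PE/COFF layout: e_lfanew lives at 0x3C in the DOS header, and Characteristics is the last WORD of the 20-byte COFF header that follows the "PE\0\0" signature.

```python
import struct

IMAGE_FILE_LARGE_ADDRESS_AWARE = 0x0020  # PE/COFF Characteristics flag

def is_large_address_aware(data: bytes) -> bool:
    """Check a PE image's COFF Characteristics for the LAA bit."""
    # DOS header field e_lfanew (offset 0x3C) points to the "PE\0\0" signature.
    pe_offset, = struct.unpack_from("<I", data, 0x3C)
    if data[pe_offset:pe_offset + 4] != b"PE\0\0":
        raise ValueError("not a PE image")
    # Characteristics sits 18 bytes into the COFF header, i.e. 22 bytes
    # past the start of the 4-byte signature.
    characteristics, = struct.unpack_from("<H", data, pe_offset + 22)
    return bool(characteristics & IMAGE_FILE_LARGE_ADDRESS_AWARE)
```

Feed it the raw bytes of an .exe, e.g. `is_large_address_aware(open("app.exe", "rb").read())`, and you could survey as many apps as you like.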
 

imported_BikeDude

Senior member
May 12, 2004
357
1
0
Originally posted by: Valkerie
What about a processor identifying the RAM? Are there limitations with any Intel or AMD's? I thought Xeon's/Opteron's handle lots of RAM very well? Or is it the mobo architecture working with the OS?

There are some issues as PCI devices tend to map their memory just below the 4GB marker. On my configuration that leaves a big gaping 768MB hole.

E stepping Opterons can re-map that area, whereas earlier steppings require some help from the BIOS. So on my Tyan motherboard, I have two options for "Memory hole", either "Software" or "Hardware". E stepping was introduced with the 252 I think, although AMD's site shows that there are E4 stepping 244 out there as well (the ones I bought in May were CG stepping though, so only "Software" remapping helps for me :( )

Curiously enough, Windows 2003 Standard ed. (both 32-bit and 64-bit) is now able to see my 4GB memory just fine, whereas a freshly installed 32-bit XPSP2 sees about 3GB. (I assumed XPSP2 would behave the same as its server cousin -- SP2 even added NUMA support and all...) 64-bit XP should be fine of course.
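That "about 3GB" lines up with the hole size, assuming the 768MB figure from above and no remapping:

```python
# Visible memory with a 768MB PCI hole just below 4GB and no remapping.
MB = 2 ** 20
GB = 2 ** 30

installed = 4 * GB
pci_hole = 768 * MB            # PCI devices mapped just below the 4GB mark

visible = (installed - pci_hole) / GB
print(visible)                 # 3.25 -> roughly the 3GB XPSP2 reports
```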

I dunno how this is solved with Xeons though.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
So you're saying all drivers will just work? Despite what the MS guy said? You have high faith in the capabilities of third-party device driver developers, then. (Remember: we're not talking about mainstream nVidia or ATI drivers here, but rather soundcard drivers and various oddly shaped hardware device drivers -- stuff you typically never see in a server.)

I never said they'd just work, but fixing them would be trivial; hell, I could probably fumble through it myself if the source was available.

That is a generalisation. You're assuming all graphics artists even know what memory is. (The truth is probably somewhere in the middle. E.g. the guys over at the http://dpreview.com/ forums often recommend a minimum of 2GB of memory for PS, but that doesn't mean they understand the finer points of PAE or 64-bit OSes.)

The artists don't have to understand how memory works, but they should have someone around who does. Those in corporations have IT departments to call upon, and those who work on their own definitely should take the time to understand how their computer works if it's their main source of income.

By going through the disk cache, OTOH, they increase the chance that the OS will interfere and steal pages from PS' process. That aside, the docs for FILE_FLAG_NO_BUFFERING state: "When combined with FILE_FLAG_OVERLAPPED, the flag gives maximum asynchronous performance, because the I/O does not rely on the synchronous operations of the memory manager." So it is not completely without its merits.

Unless the NT VM is really stupid, it won't steal pages from PS' working set for the page cache; that would defeat the purpose, since PS is the process generating the page faults that are filling the page cache. And I'm a little skeptical about the use of the FILE_FLAG_NO_BUFFERING and FILE_FLAG_OVERLAPPED flags to get maximum async performance. The only reason I could see it helping is if Adobe was trying to keep only one copy of the data in memory and their custom buffering and scratch disks were duplicating the efforts of the page cache, so they just bypass it to free some memory. But I don't claim to be an expert on the NT VM system, so maybe there are quirks in there that they're working around.

Well, again: PS falls into this category (marked largeaddressaware)... I'd be surprised if other apps, like AutoCAD, aren't marked. How are you able to say 99% of Windows apps aren't marked? Have you looked at the PE header of a significant number of apps? Should small utilities be counted too? (I wouldn't care about those -- do you?)

Just the fact that most developers I've talked to aren't the brightest bulbs, and that marking an executable largeaddressaware requires extra work, is enough to convince me that 99% or more won't be marked largeaddressaware. AutoCAD may very well be, and probably some other things like 3DSMax, LightWave and Exchange, but I'm sure that for every one app you can name that is largeaddressaware, someone can find 100 that aren't. And yes, I would say include every app you can think of, because even a relatively small tool (cat, more, less, etc.) could very easily be used on an extremely large dataset.