You guys have already made enough ridiculous comments for a lifetime. No thanks. Also, fragmentation of the pagefile does not degrade performance
You guys have already made enough ridiculous comments for a lifetime. No thanks.
It's quite obvious that at 15 you have a lot to learn. Start off by refraining from pathetic words like 'tweaker'; using them makes you seem like the amateur you are. Fragmentation of the pagefile does degrade performance. I know facts are hard to accept, but I've learned that people your age don't listen and think they know everything; you will grow out of it. Then again, maybe you will not.
This isn't a fact; it's wrong. Also, I've provided clear evidence as to the understood definition. You guys have nothing but what you are typing. Also, fragmentation of the pagefile does not degrade performance
One of the limitations of the Windows NT/2000 defragmentation interface is that it is not possible to defragment files that are open for exclusive access. Thus, standard defragmentation programs can neither show you how fragmented your paging files or Registry hives are, nor defragment them. Paging and Registry file fragmentation can be one of the leading causes of performance degradation related to file fragmentation in a system.
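If you want to see that exclusive-access limitation for yourself, here is a minimal sketch in C (assuming the paging file lives at C:\pagefile.sys) that tries to open the file the way a user-mode defragmenter would have to; while Windows is running, the open fails with a sharing violation.

/* Minimal sketch: the paging file is held open for exclusive access by the
   system, so an ordinary user-mode open fails. The path is an assumption;
   your paging file may live on another drive. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE h = CreateFileA("C:\\pagefile.sys", GENERIC_READ,
                           FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                           OPEN_EXISTING, 0, NULL);

    if (h == INVALID_HANDLE_VALUE) {
        /* Expect ERROR_SHARING_VIOLATION (32) while Windows is running. */
        printf("CreateFile failed, error %lu\n", GetLastError());
    } else {
        printf("Unexpectedly opened the paging file.\n");
        CloseHandle(h);
    }
    return 0;
}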
This isn't a fact; it's wrong. Also, I've provided clear evidence as to the understood definition. You guys have nothing but what you are typing.
I'm not the one arguing with the facts; you guys are:
Microsoft Windows 2000 Server Operations Guide:
Part No. 097-0002722
Page 698:
Virtual Memory:
The space on the hard disk that Windows 2000 uses as memory. The amount of memory, from the perspective of a process, can be much greater than the actual physical memory in the computer. The operating system does this in a way that is transparent to the application, by paging data that does not fit in physical memory to and from the disk at any given instant.
Page 298:
Set the Same Initial and Maximum Size:
Setting the paging file's initial size and maximum size to the same value increases efficiency because the operating system does not need to expand the file during processing. Setting different values for initial and maximum size can contribute to disk fragmentation.
Your OPINION on something is not a FACT.
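For what it's worth, here is a minimal sketch in C of how that "same initial and maximum size" advice could be applied programmatically. It rewrites the PagingFiles registry value, which is where Windows stores the setting behind the Control Panel dialog; the pagefile path and the 4096 MB figure are assumptions for illustration, and a reboot is needed before the change takes effect.

/* Minimal sketch (not a supported API): set the paging file's initial and
   maximum size to the same value by rewriting the PagingFiles value.
   The drive letter and 4096 MB size are assumptions for illustration. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* REG_MULTI_SZ format: "<path> <initial MB> <maximum MB>", followed by
       an extra NUL that terminates the multi-string. */
    const char value[] = "C:\\pagefile.sys 4096 4096\0";
    HKEY key;
    LONG rc;

    rc = RegOpenKeyExA(HKEY_LOCAL_MACHINE,
                       "SYSTEM\\CurrentControlSet\\Control\\"
                       "Session Manager\\Memory Management",
                       0, KEY_SET_VALUE, &key);
    if (rc != ERROR_SUCCESS) {
        printf("RegOpenKeyEx failed: %ld\n", rc);
        return 1;
    }

    rc = RegSetValueExA(key, "PagingFiles", 0, REG_MULTI_SZ,
                        (const BYTE *)value, sizeof(value));
    if (rc != ERROR_SUCCESS)
        printf("RegSetValueEx failed: %ld\n", rc);
    else
        printf("Pagefile set to a fixed 4096 MB; reboot to apply.\n");

    RegCloseKey(key);
    return 0;
}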
A program instruction on an Intel 386 or later CPU can address up to 4 GB of memory, using its full 32 bits. This is normally far more than the RAM of the machine. (2 to the 32nd power is exactly 4,294,967,296, or 4 GB; 32 binary digits allow the representation of 4,294,967,296 numbers, counting 0.) So the hardware provides for programs to operate in terms of as much as they wish of this full 4 GB space as Virtual Memory, those parts of the program and data which are currently active being loaded into Physical Random Access Memory (RAM). The processor itself then translates ("maps") the virtual addresses from an instruction into the correct physical equivalents, doing this on the fly as the instruction is executed. The processor manages the mapping in terms of pages of 4 kilobytes each - a size that has implications for how the system manages virtual memory.
Can the Virtual Memory be turned off on a really large machine?
Strictly speaking, Virtual Memory is always in operation and cannot be "turned off." What is meant by such wording is "set the system to use no page file space at all."
An imaginary memory area supported by some operating systems (for example, Windows but not DOS) in conjunction with the hardware. You can think of virtual memory as an alternate set of memory addresses. Programs use these virtual addresses rather than real addresses to store instructions and data. When the program is actually executed, the virtual addresses are converted into real memory addresses.
The purpose of virtual memory is to enlarge the address space, the set of addresses a program can utilize. For example, virtual memory might contain twice as many addresses as main memory. A program using all of virtual memory, therefore, would not be able to fit in main memory all at once. Nevertheless, the computer could execute such a program by copying into main memory those portions of the program needed at any given point during execution.
To facilitate copying virtual memory into real memory, the operating system divides virtual memory into pages, each of which contains a fixed number of addresses. Each page is stored on a disk until it is needed. When the page is needed, the operating system copies it from disk to main memory, translating the virtual addresses into real addresses.
The process of translating virtual addresses into real addresses is called mapping. The copying of virtual pages from disk to main memory is known as paging or swapping.
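To make those definitions a bit more concrete, here is a minimal sketch in C (the 256 MB figure is an arbitrary assumption) that reads the 4 KB page size mentioned above and reserves a stretch of virtual address space; the reservation costs essentially no physical RAM until a page is committed and touched.

/* Minimal sketch: query the page size and reserve virtual address space
   without committing physical memory. The 256 MB reservation is arbitrary. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);
    printf("Page size: %lu bytes\n", si.dwPageSize);  /* typically 4096 */

    /* Reserve 256 MB of virtual addresses; nothing is backed by RAM or the
       paging file yet. */
    void *region = VirtualAlloc(NULL, 256 * 1024 * 1024,
                                MEM_RESERVE, PAGE_NOACCESS);
    if (region == NULL) {
        printf("VirtualAlloc failed, error %lu\n", GetLastError());
        return 1;
    }

    /* Commit a single page and touch it; only now does the system map a
       physical page to this virtual address. */
    char *page = VirtualAlloc(region, si.dwPageSize,
                              MEM_COMMIT, PAGE_READWRITE);
    if (page != NULL)
        page[0] = 42;

    VirtualFree(region, 0, MEM_RELEASE);
    return 0;
}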
You can probably find the book at your local Borders.
It does not surprise me they are Linux users since I have come to expect this elitist mentality from them.
If you look at the Sysinternals page you'll see that Mark also wrote 'contig', an app to defragment a single file. Why do you think he wrote that one? Wouldn't it make more sense to just use the built-in defrag tool to defrag all of the non-special files?
Actually, no. There are situations where the built-in defragger is extra slow due to large fragmented files - I've found that using contig on the worst files results in a much faster overall defrag. Why the built-in defragger doesn't do this automagically I have no idea.
Originally posted by: Nothinman
unlike self proclaimed forum experts who are good at cutting and pasting
Wow, you just described yourself.
Actually, no. There are situations where the built-in defragger is extra slow due to large fragmented files - I've found that using contig on the worst files results in a much faster overall defrag. Why the built-in defragger doesn't do this automagically I have no idea.
I never said there were no cases for its use, but do you really sit there and watch the defragger as it works on your drive?
Windows CE supports a subset of the virtual memory functions available under Win32. Windows CE implements most of the virtual memory functions that are supported in Windows NT, with the exception of the VirtualXxxEx APIs, which manipulate virtual memory in other processes. Also, because Windows CE does not feature paging file support, it does not implement the VirtualLock and VirtualUnlock functions.
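As a rough illustration of that common subset, here is a minimal sketch in C (assuming a desktop Windows NT-family target) that sticks to VirtualAlloc/VirtualFree and then calls VirtualLock, one of the functions the paragraph says Windows CE leaves out because it has no paging file.

/* Minimal sketch: reserve and commit memory with the portable VirtualAlloc
   subset, then pin it with VirtualLock, which Windows CE does not implement. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    SIZE_T size = 64 * 1024;  /* one 64 KB allocation-granularity unit */

    /* Reserve and commit in one call; available on both Windows NT and CE. */
    void *buf = VirtualAlloc(NULL, size, MEM_RESERVE | MEM_COMMIT,
                             PAGE_READWRITE);
    if (buf == NULL) {
        printf("VirtualAlloc failed, error %lu\n", GetLastError());
        return 1;
    }

    /* NT only: pin the pages into physical memory so they cannot be paged
       out to the paging file. */
    if (!VirtualLock(buf, size))
        printf("VirtualLock failed, error %lu\n", GetLastError());

    VirtualUnlock(buf, size);
    VirtualFree(buf, 0, MEM_RELEASE);
    return 0;
}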