
Peculiar - Disk Size difference

prateekchanda

Junior Member
I got a peculiar issue


Screen Shot

Have a look at the size difference between the Used Space in the C: drive's Properties, i.e. 56.5 GB, and the space used by all the files and folders in the root of C: with all files visible, i.e. 59.2 GB or 54.4 GB depending on what you're comparing. That's a gap of 2.7 GB and -2.1 GB respectively.

I think my PC is on booze.

Any explanations, Einsteins?
__________________
Wall-E, it's a PC
Eva, she's the Mac
 
Possibly the amount assigned to virtual memory/page file or shadow copies (restore points) or NTFS hard links or reparse points or some other MS nonsensical use of disk space.
 
NTFS has overhead which does not show up: the MFT files, and there is also an MFT Reserved zone, in addition to the possibilities Gary cited.
 
Originally posted by: prateekchanda
I got a peculiar issue
Screen Shot
Have a look at the size difference between the Used Space in the C: drive's Properties, i.e. 56.5 GB, and the space used by all the files and folders in the root of C: with all files visible, i.e. 59.2 GB or 54.4 GB depending on what you're comparing. That's a gap of 2.7 GB and -2.1 GB respectively.

This is normal; there are parts of the drive that (by default) you don't have permission to enumerate, so you can't count their files (such as the System Volume Information directory).
 
Originally posted by: GaryJohnson
Possibly the amount assigned to virtual memory/page file or shadow copies (restore points) or NTFS hard links or reparse points or some other MS nonsensical use of disk space.

How are these things 'nonsensical'?
 
Sometimes virtual memory gets used before the system is out of physical memory. To me, this is nonsensical. We only use virtual memory because we're out of RAM; it should only be used when you're totally out of RAM, and at that point there should be a warning to the user.
 
Originally posted by: GaryJohnson
Sometimes virtual memory gets used before the system is out of physical memory. To me, this is nonsensical. We only use virtual memory because we're out of RAM; it should only be used when you're totally out of RAM, and at that point there should be a warning to the user.

That's not really true. While the system might prepare the page file based on virtual reservations, it doesn't sit and page while memory sits free.
 
Sometimes virtual memory gets used before the system is out of physical memory. To me, this is nonsensical.

It makes perfect sense. If the system is idle, it can write pages that haven't been touched in a while out to the pagefile while leaving them in memory; that way, if memory pressure increases, it has less work to do to keep things going.

We only use virtual memory because we're out of RAM; it should only be used when you're totally out of RAM. And at that point there should be a warning to the user.

That's an incredibly simplistic view of VM and paging, and any system that implemented it like that would run like crap.
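The behavior described above (idle pages written to the pagefile while RAM is still free) is observable on any modern OS. The thread is about Windows, but as an illustrative sketch, on Linux you can see swap in use while plenty of RAM remains available just by reading /proc/meminfo (Windows exposes the same idea through its own performance counters):

```python
def meminfo():
    """Parse Linux's /proc/meminfo into a {field: kiB} dict."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, _, rest = line.partition(":")
            info[key.strip()] = int(rest.split()[0])  # first field is kiB
    return info

m = meminfo()
avail = m.get("MemAvailable", m["MemFree"])       # MemAvailable needs kernel >= 3.14
swap_used = m.get("SwapTotal", 0) - m.get("SwapFree", 0)
print(f"available RAM: {avail // 1024} MiB")
print(f"swap in use:   {swap_used // 1024} MiB")
```

On a lightly loaded machine with swap enabled you will often see both numbers nonzero at once, which is exactly the proactive pre-paging being argued about here.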
 
@bsobel - if what you say were true, people wouldn't observe significantly better performance after disabling their page file on systems with a lot of RAM.

@Nothinman - the way Windows implements it now runs like crap. If it worked the way I describe, it would probably perform worse, but only after the system had run out of physical memory, and the user would at least be informed about why their system has turned to pudding and what they can do about it.
 
Originally posted by: GaryJohnson
@bsobel - if what you say is true, people wouldn't observe significantly better performance after disabling their page file on systems with a lot of RAM.

They don't; it's a placebo. I've run and benchmarked both. I run with 32 GB of memory, and my page file is on.
 
@Nothinman - the way Windows implements it now runs like crap. If it worked the way I describe, it would probably perform worse, but only after the system had run out of physical memory, and the user would at least be informed about why their system has turned to pudding and what they can do about it.

The VM is probably the most difficult part of the system to tune because everyone's workload is different. What works well for one person won't work well for a lot of others. I definitely think MS could do better, or at least expose more tuning knobs so people could adjust for themselves, but I also think that most people love to employ lots of hyperbole when talking about the performance of Windows.
 
Originally posted by: Nothinman
@Nothinman - the way Windows implements it now runs like crap. If it worked the way I describe, it would probably perform worse, but only after the system had run out of physical memory, and the user would at least be informed about why their system has turned to pudding and what they can do about it.

The VM is probably the most difficult part of the system to tune because everyone's workload is different. What works well for one person won't work well for a lot of others. I definitely think MS could do better, or at least expose more tuning knobs so people could adjust for themselves, but I also think that most people love to employ lots of hyperbole when talking about the performance of Windows.

Can you imagine all of the problems people would get into if there were MORE knobs to fiddle with? Especially related to VM!
 
Can you imagine all of the problems people would get into if there were MORE knobs to fiddle with? Especially related to VM!

Yeah, but that's their own fault. People are going to try stupid things no matter what, so it's better to give the people with a clue the tools they need than to hide them from everyone to protect the stupid people from themselves.
 