Virtual Memory Low

holyghost

Member
Jul 26, 2001
198
0
0
I am getting "Virtual Memory Low" warnings on my PC, and it's causing my system to lag. How can I solve the problem?
 

CTho9305

Elite Member
Jul 26, 2000
9,214
1
81
Originally posted by: Budman
Buy more ram.

Or, if it's a specific program that causes the problem, see if you can figure out whether it's a memory leak or a crash that causes it to blow up and eat RAM unnecessarily. But in general, Budman is right.
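If you want to watch a suspect program over time, something like this rough Win32 sketch (untested; the PID is a placeholder you'd read off Task Manager, and you link against psapi.lib) logs its memory counters every few seconds. A working set or pagefile usage number that only ever climbs is the classic sign of a leak:

[code]
/* Hypothetical sketch: poll a process's memory use to spot a leak.
   Win32 C; link with psapi.lib. The PID below is only a placeholder. */
#include <windows.h>
#include <psapi.h>
#include <stdio.h>

int main(void)
{
    DWORD pid = 1234;  /* replace with the suspect process's PID */
    HANDLE h = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ, FALSE, pid);
    if (!h) { printf("OpenProcess failed: %lu\n", GetLastError()); return 1; }

    for (;;) {
        PROCESS_MEMORY_COUNTERS pmc;
        if (GetProcessMemoryInfo(h, &pmc, sizeof(pmc))) {
            printf("WorkingSet: %lu KB  PagefileUsage: %lu KB\n",
                   (unsigned long)(pmc.WorkingSetSize / 1024),
                   (unsigned long)(pmc.PagefileUsage / 1024));
        }
        Sleep(5000);  /* sample every 5 seconds */
    }
}
[/code]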
 

JK949

Senior member
Jul 6, 2003
377
0
0
More RAM. When you run out of onboard memory your system uses virtual memory on your hard drive. You need at least 512 meg.
You could increase your virtual memory allocation, but that's not the best way to fix your problem. My roommate's computer does the same thing: he runs it 24 hours a day and never does a reboot or shutdown to clear system resources and cached memory.
 

Spencer278

Diamond Member
Oct 11, 2002
3,637
0
0
Originally posted by: CTho9305
Originally posted by: JK949
you need at least 512 meg.

How can you make that assertion without knowing his workload?

It is a rather safe assertion if he is getting Virtual Memory Low errors. It doesn't really take much of a workload to get a large swap file; I'm up to 717 megs. Besides, it can't hurt to have a swap file larger than 512 megs.
 

JK949

Senior member
Jul 6, 2003
377
0
0
I do believe that my assertion is correct, but there is one other possibility: a virus. Yes, there have been many viruses that simply create dummy files and documents, or run resource-sucking rogue programs in the background that can't be detected. But since most people are pretty good about keeping their AV software up to date, this is unlikely to be the problem. I stick by my memory advice.
 

thegorx

Senior member
Dec 10, 2003
451
0
0
maybe the swap file is corrupt or the settings are set too low

How much free memory do you have?
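If you want a hard number rather than eyeballing Task Manager, a rough Win32 sketch like this (untested) prints the memory load and how much physical memory and pagefile headroom is left:

[code]
/* Rough sketch: report physical memory and pagefile usage on 2000/XP. */
#define _WIN32_WINNT 0x0500
#include <windows.h>
#include <stdio.h>

int main(void)
{
    MEMORYSTATUSEX ms;
    ms.dwLength = sizeof(ms);
    if (!GlobalMemoryStatusEx(&ms)) {
        printf("GlobalMemoryStatusEx failed: %lu\n", GetLastError());
        return 1;
    }
    printf("Memory load: %lu%%\n", ms.dwMemoryLoad);
    printf("Physical: %I64u MB free of %I64u MB\n",
           ms.ullAvailPhys / (1024 * 1024), ms.ullTotalPhys / (1024 * 1024));
    printf("Pagefile limit: %I64u MB free of %I64u MB\n",
           ms.ullAvailPageFile / (1024 * 1024), ms.ullTotalPageFile / (1024 * 1024));
    return 0;
}
[/code]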
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
More RAM. When you run out of onboard memory your system uses virtual memory

Not true. Virtual Memory is always in use; when you run low on physical memory, the OS will page memory that hasn't been recently touched out to disk. The pagefile is not Virtual Memory, no matter how many times MS mislabels it as such.
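A tiny sketch (untested) of the distinction: reserving virtual address space costs neither RAM nor pagefile space, and only committed pages ever need physical or pagefile backing:

[code]
/* Rough illustration: virtual address space vs. committed, backed memory. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Reserve 256 MB of virtual address space: no RAM or pagefile used yet */
    void *p = VirtualAlloc(NULL, 256 * 1024 * 1024, MEM_RESERVE, PAGE_NOACCESS);
    if (!p) { printf("reserve failed\n"); return 1; }

    /* Commit one page: only now does the OS guarantee backing store for it */
    if (!VirtualAlloc(p, 4096, MEM_COMMIT, PAGE_READWRITE)) {
        printf("commit failed\n");
        return 1;
    }
    ((char *)p)[0] = 1;  /* touch it so a physical page is actually assigned */

    VirtualFree(p, 0, MEM_RELEASE);
    return 0;
}
[/code]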

maybe the swap file is corrupt or the settings are set too low

It can't be corrupt; it's basically reset on boot, and if the pages inside of it are corrupt then other data on his disk would also be corrupt.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,570
10,203
126
Originally posted by: Nothinman
More RAM. When you run out of onboard memory your system uses virtual memory

Not true. Virtual Memory is always in use; when you run low on physical memory, the OS will page memory that hasn't been recently touched out to disk. The pagefile is not Virtual Memory, no matter how many times MS mislabels it as such.

maybe the swap file is corrupt or the settings are set too low

It can't be corrupt; it's basically reset on boot, and if the pages inside of it are corrupt then other data on his disk would also be corrupt.

At least in Win9x and Win 3.1x, it was definitely possible for the pagefile to get corrupted, and to cause strange errors. I don't know about the design of NT/W2K/XP's pagefile, although I always set up W2K and XP to "clear pagefile on shutdown". Along with being a mitigating workaround for write-cache-flush/shutdown-delay issues, I suppose that probably helps to "clean" the pagefile on shutdown. OTOH, if a particular storage subsystem + hardware is susceptible to data corruption due to incomplete write-cache-flushing on shutdown, I suppose it would be hypothetically possible that an incomplete pagefile clear could also cause pagefile corruption, but I believe that it is cleared from start to finish, and whatever data structure MS uses is probably rooted at the start of the file. I've never had anything that even remotely appeared to be related to pagefile corruption under NT/W2K/XP though (except when a disk sector went bad in the pagefile; the OS didn't like that one bit).
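For reference, the "clear pagefile on shutdown" policy is the ClearPageFileAtShutdown registry value (you can also flip it through Local Security Policy). A rough sketch of setting it programmatically; untested, needs admin rights, and only takes effect after a reboot:

[code]
/* Sketch: enable "clear pagefile at shutdown" via the registry. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HKEY key;
    DWORD one = 1;
    LONG rc = RegOpenKeyEx(HKEY_LOCAL_MACHINE,
        "SYSTEM\\CurrentControlSet\\Control\\Session Manager\\Memory Management",
        0, KEY_SET_VALUE, &key);
    if (rc != ERROR_SUCCESS) { printf("RegOpenKeyEx failed: %ld\n", rc); return 1; }

    rc = RegSetValueEx(key, "ClearPageFileAtShutdown", 0, REG_DWORD,
                       (const BYTE *)&one, sizeof(one));
    RegCloseKey(key);
    return (rc == ERROR_SUCCESS) ? 0 : 1;
}
[/code]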
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
Well, I haven't seen the page-in/out code myself since MS doesn't give it out, but in general no data in the pagefile should be used after a reboot, so anything corrupt in it should be overwritten before it's read again; unless there's general filesystem or memory corruption, the data should be fine.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,570
10,203
126
Originally posted by: Nothinman
Well, I haven't seen the page-in/out code myself since MS doesn't give it out, but in general no data in the pagefile should be used after a reboot, so anything corrupt in it should be overwritten before it's read again; unless there's general filesystem or memory corruption, the data should be fine.

You would think so, for a logical, well-designed system, but this *is* MS we're talking about. They still haven't gotten Hibernate to work correctly, after 4 W2K service packs and now, soon, 2 XP service packs.

When hibernating, the OS should flush (write) all "dirty" filesystem write-cache data to disk, and simply discard any read-cache data, before hibernation. But it doesn't. If you hibernate W2K or XP, and then modify the filesystem (for example, running in a dual-boot configuration), and then "unhibernate", the OS will still show the old filesystem data. Even if you hit F5 to refresh it. I think that the only way to force refreshing would be to disable and then re-enable the disk drive driver in Device Manager, and even then, that probably isn't possible if that disk holds the paging file and/or registry.

So if you continue to use the OS, and then write to the (previously modified, unbeknownst to the OS, due to poorly-implemented cache-coherency policies) filesystem, you will likely end up with filesystem corruption.

My theory is that in terms of the paging file, in NT it's a true paging file, everything is simply written out in nice neat 4KB chunks. In Win 3.x and Win9x, it actually swapped out 16-bit protected-mode segments, so there probably exists some sort of internal filesystem-like data-structure that keeps track of those things, because the same file was used for swapping and paging.
 

bsobel

Moderator Emeritus
Elite Member
Dec 9, 2001
13,346
0
0
When hibernating, the OS should flush (write) all "dirty" filesystem write-cache data to disk, and simply discard any read-cache data, before hibernation. But it doesn't. If you hibernate W2K or XP, and then modify the filesystem (for example, running in a dual-boot configuration), and then "unhibernate", the OS will still show the old filesystem data. Even if you hit F5 to refresh it. I think that the only way to force refreshing, would be to disable and then re-enable the disk drive driver in Device Manager, and even then, that probably isn't possible if that disk hold the paging file and/or registry.

BS. Hibernation does not support modification of the underlying disk structure period. It has nothing to do with 'simply discard any read-cache data'. There is no way to make what you want work (and do people really try to do things like this and expect it to work?)

Bill
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
BS. Hibernation does not support modification of the underlying disk structure period. It has nothing to do with 'simply discard any read-cache data'. There is no way to make what you want work (and do people really try to do things like this and expect it to work?)

I did a while back with a FAT partition until I realized what was happening. It wouldn't be so bad if you had the option to unmount/remount the partition so that the filesystem could be back in a consistent state with what the OS had in memory. Now you have me curious as to whether that would work with Linux or not, obviously I wouldn't trust it for something like /home but for a non-critical shared partition that can be umount/mounted easily it should work fine IMO.
 

CTho9305

Elite Member
Jul 26, 2000
9,214
1
81
Originally posted by: Nothinman
BS. Hibernation does not support modification of the underlying disk structure period. It has nothing to do with 'simply discard any read-cache data'. There is no way to make what you want work (and do people really try to do things like this and expect it to work?)

I did a while back with a FAT partition until I realized what was happening. It wouldn't be so bad if you had the option to unmount/remount the partition so that the filesystem could be back in a consistent state with what the OS had in memory. Now you have me curious as to whether that would work with Linux or not, obviously I wouldn't trust it for something like /home but for a non-critical shared partition that can be umount/mounted easily it should work fine IMO.

And break every app that has open file handles? MS is doing the right thing.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
And break every app that has open file handles? MS is doing the right thing.

That was why I said "obviously I wouldn't trust it for something like /home but for a non-critical shared partition that can be umount/mounted easily it should work fine IMO".
 

VirtualLarry

No Lifer
Aug 25, 2001
56,570
10,203
126
Originally posted by: bsobel
When hibernating, the OS should flush (write) all "dirty" filesystem write-cache data to disk, and simply discard any read-cache data, before hibernation. But it doesn't. If you hibernate W2K or XP, and then modify the filesystem (for example, running in a dual-boot configuration), and then "unhibernate", the OS will still show the old filesystem data. Even if you hit F5 to refresh it. I think that the only way to force refreshing, would be to disable and then re-enable the disk drive driver in Device Manager, and even then, that probably isn't possible if that disk hold the paging file and/or registry.

BS. Hibernation does not support modification of the underlying disk structure period.

First you say "BS", and then you agree with me by your statement, that the Hibernate feature does not work properly under the conditions that I have described. Your method of communication is strange, to say the least.

Originally posted by: bsobel
It has nothing to do with 'simply discard any read-cache data'. There is no way to make what you want work (and do people really try to do things like this and expect it to work?)
Bill

Funny, quite a few people expect it to work the way that I describe, and there is no reason why it shouldn't work that way, if it were properly implemented. Implementing cache hierarchies is a basic computer-science concept. The fact that a proper design for such was not implemented in MS's flagship OS, in such a way that it works in all cases that a user might expect it to, is IMHO "broken". The fact that it is only implemented in a half-a##ed way, for a singular special-case (laptop suspend-to-disk, essentially), but that fact is not clearly documented, is also broken. It would be easy enough, upon resume from Hibernate, to check the filesystem volume timestamp that is auto-updated every time that it is written to by modern MS OSes, and if that timestamp is different than that recorded for that volume in the Hibernate file, to take an alternate action, in an attempt to prevent further filesystem corruption. But it doesn't. So the implementation doesn't even enforce its apparent usage restriction (that Hibernate doesn't support filesystem modification between Hibernate and un-Hibernate events), in a way that safeguards user filesystem data.

Btw, I know plenty of users that use Hibernate to upgrade hardware. I certainly don't think that's a wise idea, but I would simply like to point out that users don't see any immediate problems with that, they see it as a feature to be used as they please, and they attempt to use it in every way possibly imagined.

You are also clearly incorrect that it is simply not possible to make it work, safely, in the manner that I've described.

Yes, open file handles do add complexity into the equation, and part of that is because the basic approach to filesystems in general is somewhat flawed from a caching and transactional viewpoint. (It is the analog of a CPU that allows interrupts to occur, at times other than defined sequence-points in the microcode. In other words, the way that filesystems operate in most consumer OSes (MS OSes) today, is similar to how DSPs handle interrupts. The way that filesystems should behave, is more similar to how x86 CPUs behave. It always amazes me how the "hardware guys" have been getting the design right for years, and the "software guys" still don't get the picture. But I digress.)

Assuming that there are no file handles open, that cannot be simply closed, the file's path in the filesystem stored, and then re-opened when resuming from Hibernate, then it really should be "trivial" to implement.

When the OS goes into Hibernate, it should send a power-management window-message to top-level application windows. The app should then prepare for hibernation by flushing any internally-cached file state data to the OS'es file I/O buffers. After all apps have done this, then the OS should then flush its I/O buffers to disk, and discard stale cached filesystem and filesystem-related block device I/O-buffer data.
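For what it's worth, Windows does already broadcast a WM_POWERBROADCAST message to top-level windows for suspend/hibernate and resume, so the notification plumbing exists. A rough sketch (untested; FlushMyBuffers and RevalidateState are just placeholders, not real functions) of what handling it looks like on the application side:

[code]
/* Rough sketch of an app reacting to suspend/hibernate notifications. */
#include <windows.h>

static LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    if (msg == WM_POWERBROADCAST) {
        if (wParam == PBT_APMSUSPEND) {
            /* about to suspend/hibernate: flush internally cached file data */
            /* FlushMyBuffers(); */
        } else if (wParam == PBT_APMRESUMESUSPEND) {
            /* back up again: revalidate anything that may have gone stale */
            /* RevalidateState(); */
        }
        return TRUE;
    }
    if (msg == WM_DESTROY) { PostQuitMessage(0); return 0; }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}

int WINAPI WinMain(HINSTANCE hInst, HINSTANCE hPrev, LPSTR cmd, int show)
{
    WNDCLASS wc = {0};
    MSG m;
    wc.lpfnWndProc = WndProc;
    wc.hInstance = hInst;
    wc.lpszClassName = "PwrDemo";
    RegisterClass(&wc);
    CreateWindow("PwrDemo", "Power demo", WS_OVERLAPPEDWINDOW | WS_VISIBLE,
                 CW_USEDEFAULT, CW_USEDEFAULT, 300, 200, NULL, NULL, hInst, NULL);
    while (GetMessage(&m, NULL, 0, 0) > 0) { TranslateMessage(&m); DispatchMessage(&m); }
    return 0;
}
[/code]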

Yes, if the user then modifies the filesystem's state, in a manner incompatible with the resumption of certain apps from their hibernated state (say, by deleting a file that was in use at the time of hibernation), then the app will clearly crash (possibly similarly to if one used Process Explorer to force-close an open file handle), or the OS could detect this and fail to restore that app, but it wouldn't cause the potential corruption of the filesystem at a global level.

Regardless of the broken-ness of Hibernate with regards to applications, it IS broken that the OS's own Cache Manager, with respect to cached filesystem data (for performance reasons), doesn't deal with changes properly. At the very least it should detect that the volume has been modified in the interim, and refresh all of the cached volume information. MS got oplocks basically right WRT network filesystems, so it's surprising to me that Hibernate is as broken as it is. I'm quite curious now, actually, how it behaves with shared network drives. I have not yet tested that scenario.
 

CTho9305

Elite Member
Jul 26, 2000
9,214
1
81
Originally posted by: VirtualLarry
Originally posted by: bsobel
When hibernating, the OS should flush (write) all "dirty" filesystem write-cache data to disk, and simply discard any read-cache data, before hibernation. But it doesn't. If you hibernate W2K or XP, and then modify the filesystem (for example, running in a dual-boot configuration), and then "unhibernate", the OS will still show the old filesystem data. Even if you hit F5 to refresh it. I think that the only way to force refreshing, would be to disable and then re-enable the disk drive driver in Device Manager, and even then, that probably isn't possible if that disk hold the paging file and/or registry.

BS. Hibernation does not support modification of the underlying disk structure period.

First you say "BS", and then you agree with me by your statement, that the Hibernate feature does not work properly under the conditions that I have described. Your method of communication is strange, to say the least.

Originally posted by: bsobel
It has nothing to do with 'simply discard any read-cache data'. There is no way to make what you want work (and do people really try to do things like this and expect it to work?)
Bill

Funny, quite a few people expect it to work the way that I describe, and there is no reason why it shouldn't work that way, if it were properly implemented.
Are these people computer scientists?

Implementing cache hierarchies is a basic computer-science concept. The fact that a proper design for such was not implemented in MS's flagship OS, in such a way that it works in all cases that a user might expect it to, is IMHO "broken". The fact that it is only implemented in a half-a##ed way, for a singular special-case (laptop suspend-to-disk, essentially), but that fact is not clearly documented, is also broken. It would be easy enough, upon resume from Hibernate, to check the filesystem volume timestamp that is auto-updated every time that it is written to by modern MS OSes, and if that timestamp is different than that recorded for that volume in the Hibernate file, to take an alternate action, in an attempt to prevent further filesystem corruption. But it doesn't. So the implementation doesn't even enforce its apparent usage restriction (that Hibernate doesn't support filesystem modification between Hibernate and un-Hibernate events), in a way that safegards user filesystem data.
No matter what MS does, there would be problems. Most programs read data into their own buffers... for example, winamp preloads a few hundred kb (or more) of an mp3 to pre-decompress it for better playback. Regardless of the OSes buffering behavior, this application WOULD BE AFFECTED if the underlying FS was modified.

Btw, I know plenty of users that use Hibernate to upgrade hardware. I certainly don't think that's a wise idea, but I would simply like to point out that users don't see any immediate problems with that, they see it as a feature to be used as they please, and they attempt to use it in every way possibly imagined.
Users are stupid and lazy.

You are also clearly incorrect that it is simply not possible to make it work, safely, in the manner that I've described.
See example above.

Yes, open file handles do add complexity into the equation, and part of that is because the basic approach to filesystems in general is somewhat flawed from a caching and transactional viewpoint. (It is the analog of a CPU that allows interrupts to occur, at times other than defined sequence-points in the microcode. In other words, the way that filesystems operate in most consumer OSes (MS OSes) today, is similar to how DSPs handle interrupts. The way that filesystems should behave, is more similar to how x86 CPUs behave. It always amazes me how the "hardware guys" have been getting the design right for years, and the "software guys" still don't get the picture. But I digress.)
Not sure what you're trying to say. Filesystems DO only modify data at certain points, otherwise multithreaded applications would blow up when two threads hit data at the same time.

The whole point of write locks on files is so that programs don't HAVE to worry about data changing beneath them. Circumventing the write lock by modifying a file while the system is hibernated is stupid and obviously going to break stuff.

Files that aren't locked may or may not cause problems if they change.

Assuming that there are no file handles open, that cannot be simply closed, the file's path in the filesystem stored, and then re-opened when resuming from Hibernate, then it really should be "trivial" to implement.
Yep. However, if you close a program's files for it, it's going to have trouble when you resume it. If you don't, then your assumption can't be supported.

When the OS goes into Hibernate, it should send a power-management window-message to top-level application windows. The app should then prepare for hibernation by flushing any internally-cached file state data to the OS'es file I/O buffers. After all apps have done this, then the OS should then flush its I/O buffers to disk, and discard stale cached filesystem and filesystem-related block device I/O-buffer data.
There we go. Of COURSE you can do it if you require application support. But how many people really want to rewrite their programs? Besides, what you're asking software to do is not exactly trivial. What if I'm running, say, UT in the middle of a game and hibernate? When it comes back up, it has to reload all its data (textures, levels, etc), and hope that the current state (e.g. where the player is) is still valid (and not inside a wall suddenly). Mozilla would have to reload all its chrome - a functionality that I don't think is properly implemented yet because it's pretty difficult. Also, for a program like Moz, a LARGE amount of the time spent launching the application is loading chrome (and for UT, loading other files), so basically on a hibernate resume you will end up practically relaunching every app.... losing the whole benefit of hibernation in the first place.

What if a user modifies a running executable? Kill the app and relaunch it? That could cause data loss.

Yes, if the user then modifies the filesystem's state, in a manner incompatible with the resumption of certain apps from their hibernated state (say, by deleting a file that was in use at the time of hibernation), then the app will clearly crash (possibly similarly to if one used Process Explorer to force-close an open file handle), or the OS could detect this and fail to restore that app, but it wouldn't cause the potential corruption of the filesystem at a global level.
So the OS, on resume, has to know what file every application is using, check to see if it's modified, AND see if the modification is "incompatible" with the application's state? Sounds like a slow process, and difficult since different apps can handle different changes.

Regardless of the broken-ness of Hibernate with regards to applications, it IS broken that the OS's own Cache Manager, with respect to cached filesystem data (for performance reasons), doesn't deal with changes properly. At the very least it should detect that the volume has been modified in the interim, and refresh all of the cached volume information. MS got oplocks basically right WRT network filesystems, so it's surprising to me that Hibernate is as broken as it is. I'm quite curious now, actually, how it behaves with shared network drives. I have not yet tested that scenario.
Without modification of every application, you can't guarantee functionality. Would you rather deal with hibernation knowing that you can't modify files at all (makes sense...), or knowing there's a 75% chance that each of your running apps will crash, but a 25% chance that the modified apps will handle changed data?

I really think it's best for now that they just require you NOT to modify any files while the OS is hibernated. Besides, the only people who CAN hibernate and then modify files are power users running multiple OSes, who should understand why you CAN'T modify the files. Your parents running just XP probably never even realized they could modify stuff while the machine is hibernated.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
I really think it's best for now that they just require you NOT to modify any files while the OS is hibernated. Besides, the only people who CAN hibernate and then modify files are power users running multiple OSes, who should understand why you CAN'T modify the files. Your parents running just XP probably never even realized they could modify stuff while the machine is hibernated.

That's just the thing: you don't need to be a power user to dual boot these days, and the number of people who understand how hibernation works is pretty small. I modified a shared vfat partition while Windows was hibernated for months without a problem, before I realized what was happening. From the user's perspective the OS is shut down and the filesystems are unmounted, so it makes sense to think that you could modify them even though that's not true.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,570
10,203
126
Originally posted by: CTho9305
Are these people computer scientists?

No, and they shouldn't have to be (the users of the OS I mean) - the OS should prevent them from doing unsafe things, without them being aware that they are unsafe. At least "FORMAT" asks the user "Y/N?".

Originally posted by: CTho9305
No matter what MS does, there would be problems. Most programs read data into their own buffers... for example, winamp preloads a few hundred kb (or more) of an mp3 to pre-decompress it for better playback. Regardless of the OSes buffering behavior, this application WOULD BE AFFECTED if the underlying FS was modified.

Yes, the *application*, but the entire filesystem wouldn't get scrambled, as could potentially be the case with the situation as it is now.

Say WinAmp pre-buffers some data, read from a file. If I Hibernate, as long as I don't modify *that file*, things should work out ok. The Hibernated app state should store the runtime library's state, which would contain the file-pointer position within the file, and the OS's Hibernated state should keep track that WinAmp has file X:\Y\Z.mp3 open for read-only access. When un-Hibernated, the OS should open "X:\Y\Z.mp3" again and attach it to that process's open file-handle. Assuming that the particular file isn't modified in the interim, there should be no problem here.
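A toy user-mode illustration (untested; the path is only an example) of that "remember the path and offset, re-open and seek back later" idea:

[code]
/* Toy sketch: drop a read handle, then re-establish it at the saved offset. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    const char *path = "C:\\music\\song.mp3";  /* placeholder file */
    char buf[4096];
    DWORD got = 0;
    DWORD offset;
    HANDLE h;

    h = CreateFile(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                   OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) { printf("open failed\n"); return 1; }
    ReadFile(h, buf, sizeof(buf), &got, NULL);

    /* "hibernate": remember where we were, then drop the handle */
    offset = SetFilePointer(h, 0, NULL, FILE_CURRENT);
    CloseHandle(h);

    /* "resume": re-open the same path and seek back to the saved offset */
    h = CreateFile(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                   OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) { printf("re-open failed\n"); return 1; }
    SetFilePointer(h, (LONG)offset, NULL, FILE_BEGIN);
    ReadFile(h, buf, sizeof(buf), &got, NULL);  /* continues where it left off */
    CloseHandle(h);
    return 0;
}
[/code]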

Originally posted by: CTho9305
See example above.

Actually, that was a trivial example that could easily be made to work, but I agree, there could be more problematic scenarios.

Originally posted by: CTho9305
Not sure what you're trying to say. Filesystems DO only modify data at certain points, otherwise multithreaded applications would blow up when two threads hit data at the same time.

Actually, short of a re-architected application - OS-filesystem interface layer, the only defined "sequence points" are at an application process's creation and shutdown. The fact that the OS allows changes to global shared filesystem state between those two points is a severe design error that nearly all consumer OSes have had for ages. What should happen is that applications receive their own copy-on-write shadow volume copy of the filesystem, and only when the application is successfully shut down (or an explicit filesystem volume shadow copy flush/sync is done, which AFAIK there is currently no API or semantics to do such a thing), then the modified application-level filesystem state is written back into the global filesystem state. (What I was talking about had nothing to do with threads, that's a runtime-library/application-level synchronization issue.)

Originally posted by: CTho9305
The whole point of write locks on files is so that programs don't HAVE to worry about data changing beneath them. Circumventing the write lock by modifying a file while the system is hibernated is stupid and obviously going to break stuff.
Files that aren't locked may or may not cause problems if they change.

Yes, but again, that breakage should be limited to the application itself, NOT global "random" filesystem corruption.

Originally posted by: CTho9305
Assuming that there are no file handles open, that cannot be simply closed, the file's path in the filesystem stored, and then re-opened when resuming from Hibernate, then it really should be "trivial" to implement.
Yep. However, if you close a programs files for it, it's going to have trouble when you resume it. If you don't, then your assumption can't be supported.

Perhaps I mis-spoke when I used the term "closed" there. What I was talking about was not that the OS force-closes file handles, at least in terms of the file-handle semantics that the app sees, but rather that the OS "hibernates" the file handles that the app has open, by storing the necessary filesystem information (pathname) to "unhibernate" that file handle later. That's all. The application doesn't even see it coming. It "wakes up" later on, and file handle #134 still points to filesystem path "X:\Y.Z".

Originally posted by: CTho9305
When the OS goes into Hibernate, it should send a power-management window-message to top-level application windows. (...)
There we go. Of COURSE you can do it if you require application support. But how many people really want to rewrite their programs?

Are you aware that Standby mode already requires similar application support? Most of the necessary low-level support is provided by the run-time libraries that ship with Windows-targeted compilers, so rarely does the programmer have to do much extra work, unless they have non-standard functionality that they need to implement. Hibernate is basically a super-set of Standby.

Originally posted by: CTho9305
Besides, what you're asking software to do is not exactly trivial. What if I'm running, say, UT in the middle of a game and hibernate? When it comes back up, it has to reload all its data (textures, levels, etc), and hope that the current state (e.g. where the player is) is still valid (and not inside a wall suddenly). Mozilla would have to reload all its chrome - a functionality that I don't think is properly implemented yet because it's pretty difficult. Also, for a program like Moz, a LARGE amount of the time spent launching the application is loading chrome (and for UT, loading other files), so basically on a hibernate resume you will end up practically relaunching every app.... losing the whole benefit of hibernation in the first place.

I guess you mis-understood what I was saying. Sorry. Most of those loaded files, do not have active file handles open to each of their files. They are loaded into the applications' process memory space upon startup, and discarded on shutdown. When the applications' process memory space was saved by Hibernate, all of that data would stay loaded and saved in the apps' memory. Resuming from Hibernate would simply restore it all. No need to re-load.

Originally posted by: CTho9305
What if a user modifies a running executable? Kill the app and relaunch it? That could cause data loss.

Not sure what this comment is supposed to refer to.

Are you talking about the in-memory process representation of an executable, or the on-disk version? Once an executable is loaded, the on-disk version can be modified/replaced, without an issue, on NT-based, and many other OSes. That's how upgrades to running software work. In order for the changes to the on-disk version to take effect, yes, a program would have to be re-started. But that has nothing to do with Hibernate functionality.

If the in-memory executable was modified, that modified process memory state would simply be saved in the Hibernation data.

Originally posted by: CTho9305
So the OS, on resume, has to know what file every application is using,

Why not? It normally does all the time anyway, that's the OS's "job", with respect to maintaining the filesystem.

Originally posted by: CTho9305
check to see if it's modified, AND see if the modification is "incompatible" with the application's state?

No, that would be the application's job, and the reason why there needs to be communication between the OS and the application, and support within the application, with respect to Hibernate functionality, so that the application can manage its state properly.

Originally posted by: CTho9305
Without modification of every application, you can't guarantee functionality. Would you rather deal with hibernation knowing that you can't modify files at all (makes sense...), or knowing there's a 75% chance that your each of your running apps will crash, but a 25% chance that the modified apps will handle changed data?

I'm not asking for support for changing an otherwise actively-used file out from under an application, while that application is hibernating. I'm simply asking that the OS do the sane thing, and that it not result in arbitrary filesystem corruption, when modifying an unrelated file on the same volume.

Originally posted by: CTho9305
I really think it's best for now that they just require you NOT to modify any files while the OS is hibernated. Besides, the only people who CAN hibernate and then modify files are power users running multiple OSes, who should understand why you CAN'T modify the files.

Or perhaps MS should implement functionality that works, and only then, instead of releasing half-working functionality just to obtain bullet-point feature approval from PHBs. It was bad enough that they had the other APM/ACPI shutdown IDE write-cache problem for so long, and only recently fixed it, all the while denying that it affected any OS other than Win9x.

It's not at all clear to most people that Hibernate only works for one specific case, and not most general cases. The fact that it can silently corrupt your data in those other general cases is even more insidious, as is the fact that they could detect and prevent that from happening. Hibernate, as currently implemented, violates both the "Principle of least surprise", and the principle of "Do no harm". Both are egregious things to do, in a consumer-oriented OS.
 

CTho9305

Elite Member
Jul 26, 2000
9,214
1
81
Originally posted by: VirtualLarry
Originally posted by: CTho9305
Are these people computer scientists?

No, and they shouldn't have to be (the users of the OS I mean) - the OS should prevent them from doing unsafe things, without them being aware that they are unsafe. At least "FORMAT" asks the user "Y/N?".
I was referring to the "these people" you referred to - people who think they know better than MS.

Originally posted by: CTho9305
No matter what MS does, there would be problems. Most programs read data into their own buffers... for example, winamp preloads a few hundred kb (or more) of an mp3 to pre-decompress it for better playback. Regardless of the OSes buffering behavior, this application WOULD BE AFFECTED if the underlying FS was modified.

Yes, the *application*, but the entire filesystem wouldn't get scrambled, as could potentially be the case with the situation as it is now.
I have yet to scramble anything more severe than a file that was open. I don't intend to try either ;).

Say WinAmp pre-buffers some data, read from a file. If I Hibernate, as long as I don't modify *that file*, things should work out ok. The Hibernated app state should store the runtime library's state, which would contain the file-pointer position within the file, and the OS's Hibernated state should keep track that WinAmp has file X:\Y\Z.mp3 open for read-only access. When un-Hibernated, the OS should open "X:\Y\Z.mp3" again and attach it to that process's open file-handle. Assuming that the particular file isn't modified in the interim, there should be no problem here.
... that's already what happens - open files stay open. A program doesn't have to know the system was hibernated to work properly.

Originally posted by: CTho9305
Not sure what you're trying to say. Filesystems DO only modify data at certain points, otherwise multithreaded applications would blow up when two threads hit data at the same time.

Actually, short of a re-architected application - OS-filesystem interface layer, the only defined "sequence points" are at an application process's creation and shutdown. The fact that the OS allows changes to global shared filesystem state between those two points is a severe design error that nearly all consumer OSes have had for ages. What should happen is that applications receive their own copy-on-write shadow volume copy of the filesystem, and only when the application is successfully shut down (or an explicit filesystem volume shadow copy flush/sync is done, which AFAIK there is currently no API or semantics to do such a thing), then the modified application-level filesystem state is written back into the global filesystem state. (What I was talking about had nothing to do with threads, that's a runtime-library/application-level synchronization issue.)
One of us is missing something, or we have a communications problem.

Originally posted by: CTho9305
The whole point of write locks on files is so that programs don't HAVE to worry about data changing beneath them. Circumventing the write lock by modifying a file while the system is hibernated is stupid and obviously going to break stuff.
Files that aren't locked may or may not cause problems if they change.

Yes, but again, that breakage should be limited to the application itself, NOT global "random" filesystem corruption.
As I said above, I've only seen per-file breakage caused by an application crashing.

Originally posted by: CTho9305
Assuming that there are no file handles open, that cannot be simply closed, the file's path in the filesystem stored, and then re-opened when resuming from Hibernate, then it really should be "trivial" to implement.
Yep. However, if you close a programs files for it, it's going to have trouble when you resume it. If you don't, then your assumption can't be supported.

Perhaps I mis-spoke when I used the term "closed" there. What I was talking about was not that the OS force-closes file handles, at least in terms of the file-handle semantics that the app sees, but rather that the OS "hibernates" the file handles that the app has open, by storing the necessary filesystem information (pathname) to "unhibernate" that file handle later. That's all. The application doesn't even see it coming. It "wakes up" later on, and file handle #134 still points to filesystem path "X:\Y.Z".
That's what happens. For all the program knows, there was just a long time since it last got CPU time. All its files that were open remain open.

Originally posted by: CTho9305
When the OS goes into Hibernate, it should send a power-management window-message to top-level application windows. (...)
There we go. Of COURSE you can do it if you require application support. But how many people really want to rewrite their programs?

Are you aware that Standby mode already requires similar application support? Most of the necessary low-level support is provided by the run-time libraries that ship with Windows-targeted compilers, so rarely does the programmer have to do much extra work, unless they have non-standard functionality that they need to implement. Hibernate is basically a super-set of Standby.
I was unaware that software support was required, as I have yet to see an app that broke on standby (drivers aren't apps), including quick and dirty assembly programs.

Originally posted by: CTho9305
Besides, what you're asking software to do is not exactly trivial. What if I'm running, say, UT in the middle of a game and hibernate? When it comes back up, it has to reload all its data (textures, levels, etc), and hope that the current state (e.g. where the player is) is still valid (and not inside a wall suddenly). Mozilla would have to reload all its chrome - a functionality that I don't think is properly implemented yet because it's pretty difficult. Also, for a program like Moz, a LARGE amount of the time spent launching the application is loading chrome (and for UT, loading other files), so basically on a hibernate resume you will end up practically relaunching every app.... losing the whole benefit of hibernation in the first place.

I guess you mis-understood what I was saying. Sorry. Most of those loaded files, do not have active file handles open to each of their files. They are loaded into the applications' process memory space upon startup, and discarded on shutdown. When the applications' process memory space was saved by Hibernate, all of that data would stay loaded and saved in the apps' memory. Resuming from Hibernate would simply restore it all. No need to re-load.
So that's what happens right now.

Originally posted by: CTho9305
What if a user modifies a running executable? Kill the app and relaunch it? That could cause data loss.

Not sure what this comment is supposed to refer to.

Are you talking about the in-memory process representation of an executable, or the on-disk version? Once an executable is loaded, the on-disk version can be modified/replaced, without an issue, on NT-based, and many other OSes. That's how upgrades to running software work. In order for the changes to the on-disk version to take effect, yes, a program would have to be re-started. But that has nothing to do with Hibernate functionality.
Oh.

Originally posted by: CTho9305
So the OS, on resume, has to know what file every application is using,

Why not? It normally does all the time anyway, that's the OS's "job", with respect to maintaining the filesystem.
Yes....

Originally posted by: CTho9305
check to see if it's modified, AND see if the modification is "incompatible" with the application's state?

No, that would be the application's job, and the reason why there needs to be communication between the OS and the application, and support within the application, with respect to Hibernate functionality, so that the application can manage its state properly.
The problem is that an application can be in ANY state when the hibernate occurs, and you want it to be able, from ANY state, to reverify its files? It's hard enough ensuring that you won't have deadlocks/races with standard multithreaded code - adding even more cases makes the problem even harder.

Originally posted by: CTho9305
Without modification of every application, you can't guarantee functionality. Would you rather deal with hibernation knowing that you can't modify files at all (makes sense...), or knowing there's a 75% chance that your each of your running apps will crash, but a 25% chance that the modified apps will handle changed data?

I'm not asking for support for changing an otherwise actively-used file out from under an application, while that application is hibernating. I'm simply asking that the OS do the sane thing, and that it not result in arbitrary filesystem corruption, when modifying an unrelated file on the same volume.
Haven't seen the problem ;).

Originally posted by: CTho9305
I really think it's best for now that they just require you NOT to modify any files while the OS is hibernated. Besides, the only people who CAN hibernate and then modify files are power users running multiple OSes, who should understand why you CAN'T modify the files.

Or perhaps MS should implement functionality that works, and only then, instead of releasing half-working functionality just to obtain bullet-point feature approval from PHBs. It was bad enough that they had the other APM/ACPI shutdown IDE write-cache problem for so long, and only recently fixed it, all the while denying that it affected any OS other than Win9x.
Specific facts please.

It's not at all clear to most people, that Hibernate only works for one specific case, and not most general cases. The fact that it can silently corrupt your data in those other general cases is even more insideous, as is the fact that they could detect and prevent that from happening. Hibernate, as currently implemented, violates both the "Principle of least surprise", and the principle of "Do no harm". Both are egregious things to do, in a consumer-oriented OS.
Granted, it should have a warning label. (Maybe an extra bit on an NTFS filesystem saying "Hibernated!" so that when it's mounted under another OS, you are warned that you might be causing data loss).

I hate this stupid new preview window in fusetalk so I'm going to post first, and edit anything that needs to be fixed ;)