
How long until we see a true multitasking OS?

When I'm talking preemptive I simply mean something can undergo a context switch and not bomb out when it returns. For NT, anything up in user mode will work golden like this. I'm sure Linux has some ring that does the same. Down in kernel mode it gets a bit more complicated. Not everything can be preempted without problems. Some actions must be atomic, so semaphores, locks and the like are used to make them survive the preemption.

IRQLs handle pretty much everything else. They provide a nifty method to determine what can and cannot be preempted (i.e. you can preempt my i8042prt.sys keyboard driver any time you want... except when the IRQL is jacked way up while it's pulling something off the buffer. Preempting at that point could result in data coming off the FIFO buffer but the buffer pointer not getting updated.)

So there are always going to be components that are not preemptive (you can't preempt the preemption code for example 😛 ) but the details of the discussion are really silly. Both Linux and Windows are very much preemptive multitasking systems in every regard that counts.
 
Originally posted by: ProviaFan
I have personally experienced this many times: start something single-threaded and resource-intense (e.g. a FLAC encoder) on a dual core CPU running Windows XP. That one core will be pegged, and the system will come lagging (sometimes almost to a halt). Audio will skip with no end, despite being streamed from a different disk than the FLAC encoder is using. The Adobe DNG converter also causes horrible lagging in a similar manner, for example. I don't know what the problem is, and while I don't like jumping on the "blame Windows" bandwagon, I don't know what else is at fault - an application, no matter how much CPU it needs, should never be allowed under a modern operating system to bog everything down as often happened with Windows 3.1.

Edit: so concerning the previous post... the OS does not make some applications yield, though it should.


hm. I'm thinking you are confusing contention for system resources with poor thread handling. Using encoding as an example: if it's single threaded you're basically going to try and peg one processor as hard as you can, right? If you expect the system to be responsive just because the other CPU is "idle" you may be in for a surprise. First off, driver CPU usage doesn't show up in Task Manager. If one processor is pegged you are clearly moving a lot of data around. Who is servicing all the interrupts for that I/O? If one CPU is pegged (remember, drivers don't show usage) then there is little to no time left on that CPU to handle anything else. Your "idle" CPU may very well be working its butt off handling OS tasks that are related to the thread on the first CPU.

Another possibility is that you're bogging down other areas of your system. Rack up enough disk queue length and it won't matter if you have 20 CPUs... the system is going to run slow.

The summary: the graphs in task manager and the sluggish 'feel' of your system are simply not enough information to determine root cause of your poor performance. It's all really more complicated than that.


 
What you people need to keep in mind is that everything comes with a trade-off. Everything in an OS is a compromise in design.

So you have two things you want:
A. A system that is very responsive.
B. A system that is able to process data quickly.
The catch is that these two goals work against each other.

A very responsive system must always be prepared to be interrupted by user demands, but in doing so you're hurting overall throughput and degrading the efficiency of the system.

Whereas an efficient, fast system requires that you spend long stretches processing data; every switch back and forth costs you performance because your instructions and data get flushed out of cache.
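You can see this trade-off in miniature inside a single process. CPython exposes its own "quantum" via sys.setswitchinterval; this sketch (timings will vary a lot by machine, so take the numbers as illustrative only) runs the same CPU-bound work with very frequent vs. very infrequent thread switches:

```python
import sys
import threading
import time

def spin(n):
    # Pure CPU-bound work: the "process data quickly" side of the trade-off.
    x = 0
    for _ in range(n):
        x += 1
    return x

def timed_run(switch_interval):
    # sys.setswitchinterval is CPython's version of a scheduler quantum:
    # how long one thread may run before the interpreter considers
    # preempting it in favour of another.
    sys.setswitchinterval(switch_interval)
    threads = [threading.Thread(target=spin, args=(500_000,)) for _ in range(4)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start

# A tiny quantum means snappier interleaving but more switching overhead;
# a long quantum favours raw throughput.
frequent = timed_run(0.000001)
infrequent = timed_run(0.1)
print(f"frequent switches: {frequent:.3f}s, infrequent: {infrequent:.3f}s")
```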

Also keep in mind that the GUI, the file manager, the application, and the sound system (Windows uses a sound-server type thing with software mixing unless you're using special drivers) are all user-space applications. So as you're operating your computer and playing music and such, you're not just using 2 or 3 applications, you're using dozens; most of them just aren't obvious.

So imagine, say, you have a demanding wife or mother or something. You have 2 chores you have to do: you have to clean the dishes by hand (dishwasher broke) and you have to rake the leaves.

Now a busy user (the user's applications) is like her yelling at you to do the dishes, then as she looks out the window she yells at you to rake the yard. Back and forth all day it goes like that. If you react each time she yells you will spend most of your time simply running back and forth between tasks and you won't get jack done.

So as a little innocent OS you have to choose: ignore her so you can get work done, and piss her off by taking all afternoon; or jump every time she yells so you don't piss her off by making her wait before she sees that you're doing work.

So when the GUI locks up or the sound stutters, that is often just the system making a mistake in the scheduling. Stuff like that is very hard to avoid.

Whereas if you made your system smooth as silk, it would also end up taking a big performance penalty.
 
:thumbsup: drag!

It's starting to feel like this has been beaten to death, but I figured I would post some pictures of the Yonah architecture (since we've spent so much time discussing Core Duo).
From Intel:
http://www.intel.com/technology/itj/200...Intro_to_Core_Duo/figures/figure_2.gif
And another more detailed image (though not in English):
http://www.pcinlife.com/article_photo/a...a_ydg/arts/yonah_microarchitecture.png

Notice the shared buses and shared L2 cache. If you are running a single thread that is thrashing all your cache or saturating the bus it doesn't matter what the core utilization is. In many cases adding a 2nd core only moves the bottleneck to the next weakest link.
 
Guys - I understand OS design intimately. Yes, I know about preemptive multitasking and how it works. I know when an operating system is not able to preempt an application. I know WHY this is all happening; I'm asking why it HAS to happen. Since we're talking about multiple processors, where most of the processing power is going to be wasted - WHY are we not seeing strides toward an OS design which takes these concepts to heart?

Yes, I know that designing an operating system isn't peanuts; in fact it is more complicated than any application software aside from a compiler. What I am trying to determine is why technology is heading in this direction but it's not being taken advantage of.
 
WHY are we not seeing strides toward an OS design which takes these concepts to heart?

We are, I can't speak for Windows but my Linux systems exhibit none of the problems that you're complaining about. =)
 
It would be nice if you give an actual specific use case where this happens. Unless you're still not understanding, it's been made clear that there should be no real lockups even in Windows (NT based).
 
Originally posted by: Nothinman
WHY are we not seeing strides toward an OS design which takes these concepts to heart?

We are, I can't speak for Windows but my Linux systems exhibit none of the problems that you're complaining about. =)

Ditto for me. I haven't ever used a multi-CPU Windows box and I haven't used Windows at home for several years now, so it's a bit hard to contrast.

A few years ago, when Red Hat was working on Red Hat 8 and such, Linux developers were really struggling with how to deal with desktop responsiveness issues. For instance, drawing windows and GUI responsiveness in X was faster than it was in Windows, and both of those were much faster than in OS X, but users were telling the developers that no, OS X was the fastest, Windows next and X last.

So a lot of changes took place all over X and all over the kernel to help solve those issues and make everything 'seem' as fast as Windows and OS X. Which is why both Windows and Linux are copying OS X's 3D-driven 2D compositing interface. It just makes sense.


Now Windows XP is just plain _old_. With things like preemptiveness and desktop responsiveness it was way ahead of Linux.

But that was 5 years ago. Linux hasn't had the issues you're describing for a while now, at least not in my experience, and I know for a fact that it runs very well and multitasks well on a dual core machine.

I figure that if you want to look at a modern system, don't look at XP. XP's core is ancient even if SP1 and SP2 replaced big hunks of the system with updates. Most of that stuff was programmed 6 or 7 years ago for W2K and NT.

So check out a modern Linux system or check out Vista if you want to see actual progress.
 
A few years ago, when Red Hat was working on Red Hat 8 and such, Linux developers were really struggling with how to deal with desktop responsiveness issues. For instance, drawing windows and GUI responsiveness in X was faster than it was in Windows, and both of those were much faster than in OS X, but users were telling the developers that no, OS X was the fastest, Windows next and X last.

That was also back when most distros defaulted to starting X at a nice level of -10 to boost its priority a bit. Now that's not necessary and is even considered a bad thing to do.
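For reference, a process can read and raise its own nice value straight from Python's standard library; actually lowering it (like the old -10 for X) requires root, since unprivileged users can only go up:

```python
import os

# os.nice(increment) adds to this process's nice value (higher nice =
# lower priority) and returns the new value. Only root may pass a
# negative increment to boost priority.
before = os.nice(0)  # an increment of 0 just reports the current value
after = os.nice(5)   # politely deprioritize ourselves by 5
print("nice went from", before, "to", after)
```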
 
Yep. Nowadays if you have 'preempt' options selected in the kernel config, the scheduling algorithm is designed to automatically give higher priority to interactive processes.

I remember the big difference that stuff made to the 'feel' of the desktop. Nowadays when moving stuff around on a 'preemptive' kernel I can barely tell the difference between a system running at 100% CPU usage and one with an idle CPU. It's funny that now I have to have an icon telling me CPU usage, whereas before I could tell the CPU was loaded just by trying to view a webpage, move a window, or even just move the mouse around.
 
Why? Why isn't a multiprocessing OS aware enough to load balance at the very least?

Because the application in question would likely freeze if the OS attempted to do so, because the app is not multiprocessor or multi-core aware. Go ahead - set a startup affinity to assign the app to your second core and watch it crash. Personally I'd rather not go back to the days where the OS controlled all aspects of multitasking and threading, because it was horribly inefficient.

I've been working with WinFrame, Citrix and Terminal Services at a corporate level for almost 10 years, and I've seen single-processor 300MHz P3s handle several thousand threads with application responsiveness that beat local desktops running processors at 2-3x the speed. I've built dual-P3 Citrix servers running lowly NT4, handling all the desktop apps for 50+ users, that flat-out demolished Windows XP on 2GHz P4s. Two years ago I was working with a non-profit that was running 12 local desktops off of a single P3 700MHz Dell server, and remote application responsiveness running bloated MS apps and databases was lightning fast. It took at least a 2.8GHz P4 desktop to beat the 700MHz Dell terminal server in terms of perceptual application speed. There was no app that wouldn't launch or load damn near before your finger was off the mouse.

Have any of you heard of the quantum setting in Windows operating systems? No? Then I'm guessing for most of you, your frame of reference in terms of 'multitasking' performance likely involves a game.

The lag you feel with desktop operating systems when a few apps start bogging down the system is caused by two things: (1) inefficient context switching with lazily configured, desktop-optimized operating systems, and (2) poorly written apps. It has little to do with how operating systems are inherently designed, because this aspect of their architecture has (unfortunately) changed little in mainstream operating systems in 10 years. Multicore processors may be new, but running Windows and *nix OSes on multiprocessors is old hat.

There's little incentive for MS or Apple to dramatically re-tool their desktop OSes for improved multitasking when current applications and user demands don't call for it. We all want the marquee apps we spent money on to be as fast as possible, and we want our DVD rippers to peg the CPU at 100%, thinking they're actually using all that CPU when they're not. We don't want to sacrifice 5-10% of total system resources to produce a 1000x increase in multitasking efficiency, because maximum frame rate and multimedia encoding benchmarks are more important.
 
There was no app that wouldn't launch or load damn near before your finger was off the mouse.

A lot of that was due to the fact that Windows has an efficient way to share application libraries, isn't it? So starting up a new instance of an application for another user has about the same amount of overhead as opening a new window of an already-running application...
(in other words there is no disk access slowing everything down.)

(very interesting post, btw)

I know that with X terminal systems one of the things to watch out for is people using Firefox a lot. Since it does all its own rendering and such, each instance has its own large amount of memory unique to every user. Whereas OpenOffice.org, even though it is horribly bloated, is mostly shared, with very little of the data unique to each user. So comparatively it scales much better on a multiuser system than Firefox does.
 
More than likely the app that appeared to be hanging the whole system was a symptom, not the cause.

More than likely it's because the app in question accessed a subsystem, or made a driver access a subsystem because of an I/O call that did something very bad at Ring 0. Apps running in protected mode will have a pretty hard time nuking a modern, microkernel-style OS running abstraction layers. Memory leaks are a problem, but they can be cleaned up.

An example was NT 3.51, which didn't allow games and video-oriented API calls into Ring 0. There was no DirectX because there was no direct access outside the abstraction layer. The result was you couldn't kill that OS with a fraction of the garbage that will crash XP, which is why I found NT 3.51 more robust in some instances than many newer OSes. However, MS decided it needed to appease gamers and hence gave your video and sound drivers almost as much kernel-mode privilege as the frikken OS itself after NT 4.0. MS then turned around and got some of its senses back with Server 2003, tightening up Ring 0 a bit. This is why there is *no* Windows XP server.
 
Luckily for us, they're implementing a great deal of the Server 2003 base into Vista. A lot of the drivers have been moved back to user land again.
 
Because the application in question would likely freeze if the OS attempted to do so, because the app is not multiprocessor or multi-core aware. Go ahead - set a startup affinity to assign the app to your second core and watch it crash. Personally I'd rather not go back to the days where the OS controlled all aspects of multitasking and threading, because it was horribly inefficient.

Setting a process's affinity shouldn't cause it to crash; if it does, it's a bug in the app and the app is doing something extremely stupid anyway. And the OS does control all aspects of multitasking and threading, at least in the resource-allocation respects.
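To illustrate the point: on Linux, affinity is just a scheduler attribute you can flip back and forth from Python's standard library (a sketch; os.sched_setaffinity is Linux-only), and a correctly written app never even notices:

```python
import os

# CPU affinity is a per-process scheduler attribute. Changing it from
# outside can't "crash" a well-behaved app, because the app never sees
# which CPU it happens to be running on.
pid = 0  # 0 means the calling process
original = os.sched_getaffinity(pid)
print("allowed CPUs:", sorted(original))

os.sched_setaffinity(pid, {min(original)})  # pin to a single core
print("pinned to:", sorted(os.sched_getaffinity(pid)))

os.sched_setaffinity(pid, original)         # restore the full set
```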

The lag you feel with desktop operating systems when a few apps start bogging down the system is caused by two things (1) Inefficient Context switching with lazily configured desktop optimized operating systems, and (2) Poorly written apps.

I can't really say anything about #2 because there's only so much the OS can do to keep processes under control while still letting them do whatever they were designed for. But #1 seems way off; I would bet money that the context-switching code in XP and Server 2003 is exactly the same. Pro and Server may have slightly different settings for a process's max timeslice and such, which will affect interactivity when you're running CPU-intensive processes, but the context switches themselves will take the same amount of time as long as the hardware is the same.

An example was NT 3.51, which didn't allow games and video orientated API calls into Ring (0). There was no DirectX because there was no Direct access outside the abstraction layer.

The API calls for doing accelerated 3D work exactly the same now as they did back in NT 3.51; no, DirectX wasn't around, but NT 3.51 did ship with OpenGL, and if you had a card with proper drivers you could do hardware acceleration just like today. The fact that it shipped with an OpenGL screensaver should make that pretty obvious. No process could directly access the hardware back then, nor can they now in XP. That's what DirectX is for: it's the abstraction layer that mediates userland access to the drivers, which do the actual talking to the hardware.

However, MS decided it needed to appease gamers and hence allowed your video and sound card almost as much priveledge as the frikken OS to kernel mode after NT 4.0.

The video and sound card have always had full access to the hardware - technically all they do is DMA to/from memory, but they still do that directly. And the drivers have always had full access, even in NT 3.51, because they need to be able to access the hardware in order to actually use it.

This is why there is *no* Windows XP server.

Yes there is - it's called Win2K3 Server, since Win2K3 is built on the XP codebase.
 
Originally posted by: ChronoReverse
Luckily for us, they're implementing a great deal of the Server 2003 base into Vista. A lot of the drivers have been moved back to user land again.
:thumbsup:

I love how when the crappy beta nvidia drivers crash Vista can just reload it, much better than bluescreening
 
Originally posted by: spyordie007
Originally posted by: ChronoReverse
Luckily for us, they're implementing a great deal of the Server 2003 base into Vista. A lot of the drivers have been moved back to user land again.
:thumbsup:

I love how when the crappy beta nvidia drivers crash Vista can just reload it, much better than bluescreening

Have you seen the black "notes" screen during video driver install? Pretty nifty.

 
Originally posted by: Smilin
Originally posted by: spyordie007
Originally posted by: ChronoReverse
Luckily for us, they're implementing a great deal of the Server 2003 base into Vista. A lot of the drivers have been moved back to user land again.
:thumbsup:

I love how when the crappy beta nvidia drivers crash Vista can just reload it, much better than bluescreening

Have you seen the black "notes" screen during video driver install? Pretty nifty.

I remember seeing that and thinking, "Wow, no more blue screen of death... now it's a black screen of death..."
 
Originally posted by: SunnyD
Originally posted by: Smilin
Originally posted by: spyordie007
Originally posted by: ChronoReverse
Luckily for us, they're implementing a great deal of the Server 2003 base into Vista. A lot of the drivers have been moved back to user land again.
:thumbsup:

I love how when the crappy beta nvidia drivers crash Vista can just reload it, much better than bluescreening

Have you seen the black "notes" screen during video driver install? Pretty nifty.

I remember seeing that and thinking, "Wow, no more blue screen of death... now it's a black screen of death..."


The BSOD is still there.

The black notes screen is the screen where everything goes black and flips into text mode with ascii music notes displayed while a tone sounds. It's the graphics driver being loaded and the graphics card being initialized. Pulling this off outside of the boot process is pretty impressive.
 
Originally posted by: Smilin


The BSOD is still there.

The black notes screen is the screen where everything goes black and flips into text mode with ascii music notes displayed while a tone sounds. It's the graphics driver being loaded and the graphics card being initialized. Pulling this off outside of the boot process is pretty impressive.

How is that impressive? Unix systems have been able to do that since X Windows existed.
 
Well, normally with X you'd have to ssh into the machine and stop and restart it that way. There is no automatic detection and reset if the GUI goes south.
 
Originally posted by: drag
Well normally with X you'd have to ssh into the machine and stop restart it that way. There is no automatic detection and reset if the GUI goes south.

Well, yes, it is not automatically detected, so I would say that is somewhat impressive on Windows, but it can be done outside the boot process.

Also, I was pretty sure there was a hotkey to kill X and drop you back into a terminal if the "GUI goes south."
 
Also, I was pretty sure there was a hotkey to kill X and drop you back into a terminal if the "GUI goes south."

You're probably thinking of Ctrl+Alt+Backspace. By default that'll kill the X server, and if gdm, kdm, etc. is running it'll restart it for you. But since drivers like nvidia and fglrx have large portions of their code in the kernel, it's not always possible to restart X gracefully; if the kernel module goes south you're pretty much screwed.
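(For what it's worth, whether that zap key works at all is itself an X server setting; on servers where it's been disabled, a ServerFlags entry in xorg.conf along these lines turns it back on:)

```
Section "ServerFlags"
    # Allow Ctrl+Alt+Backspace to kill the X server
    Option "DontZap" "false"
EndSection
```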
 
Well, that, and if X goes bad it will often steal your keyboard too, so Ctrl+Alt+Backspace may not work. Although it's pretty rare that you can't recover if you have a remote computer to ssh into the stricken one.

(nvidia/ATI shoveling too much into the kernel is another reason why proprietary drivers suck)
 