Microsoft begins distributing Windows 8 to OEMs


Veliko

Diamond Member
Feb 16, 2011
3,597
127
106
How about they create a lean OS and see how many people buy it? I would bet they would sell tons of it.

Microsoft sales person:
"Do you want a bloated OS with 30 features you never use or a lean, fast OS with only the basics"?

I'm guessing more people would respond "lean and fast".

Yes, until you start to describe exactly what you mean by 'the basics'.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
Modelworks said:
Every popular OS on the market right now has problems with dependencies, where one application relies on something else being installed. They are all based on designs from 10 years ago. Other OS designs that use methods to get away from the legacy issues are out there; they just don't have the software support. These newer OS designs have self-contained programs that rely on nothing outside of themselves; there are no services or dependencies.

That's not a new design; the problem with that design is that it only works in very small, limited systems where the space overhead of shared libraries is a concern. Luckily those are getting more and more rare. Even an Android phone has enough space to run a full Linux distribution these days.

Modelworks said:
The programs tell the kernel what they need and the kernel decides if they can have it, no services, no shared dependencies.

You can do that right now if you want. Just design your program completely in C or ASM and avoid the standard C libraries by making direct syscalls to the kernel. It's really, really ugly but it works if you put in the time.
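For instance, here's roughly what that looks like on Linux/x86-64 (a minimal sketch of my own; the syscall numbers and registers are from that ABI, and it's built with -nostdlib so nothing shared gets pulled in):

Code:
/* build: gcc -nostdlib -static -o raw raw.c */

static long syscall3(long n, long a, long b, long c)
{
    long ret;
    __asm__ volatile ("syscall"
                      : "=a"(ret)
                      : "a"(n), "D"(a), "S"(b), "d"(c)
                      : "rcx", "r11", "memory");
    return ret;
}

void _start(void)
{
    static const char msg[] = "no libc, no shared libraries\n";
    syscall3(1, 1, (long)msg, sizeof msg - 1);  /* 1 = write, fd 1 = stdout */
    syscall3(60, 0, 0, 0);                      /* 60 = exit, status 0 */
}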

Modelworks said:
It is great to have programs that are just a single exe. No .dll, .so, .o, or any of that legacy junk. Imagine running an OS where all the applications are just a single file that requires no installation, just click and run, and you know that it has everything it needs to work and you can remove it just by deleting the file.

Now imagine an exploit in a piece of encryption or HTTP code: how do you know which of your apps are affected? Now you have to go out and manually update every single one of them instead of just replacing libssl.so.
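And that's exactly where shared libraries earn their keep. A trivial sketch (assuming only that the OpenSSL dev headers are installed):

Code:
#include <stdio.h>
#include <openssl/crypto.h>

int main(void)
{
    /* resolved at load time from whatever libcrypto.so is installed;
       patch the library on disk and every dynamically linked app
       picks up the fix on its next launch, no recompiles needed */
    printf("%s\n", SSLeay_version(SSLEAY_VERSION));
    return 0;
}

/* build: gcc sslver.c -lcrypto ; running ldd on the result lists libcrypto.so */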

Now imagine you've got a SQL query tool that wants to pass data directly to your spreadsheet app: how is it supposed to do that without any standard interfaces? Since you don't have a registry or any services, you now need to configure one app with the path and any parameters needed for the spreadsheet to accept data, and that's hoping that you know all of that.

Just because MS dropped the ball hard on things like package management, app dependencies, the registry, etc. doesn't mean they're bad things. When done properly they make for an awesome, integrated system like Debian has been putting out for years. Sure, both have their warts, but I'm sure you'd feel the tradeoffs are worth it if you actually had to use, support, and develop for a system with nothing shared.

wanderer27 said:
These days there's quite a bit more memory available, and CPU performance is orders of magnitude greater than what was available back then.
With all this extra capacity (on both fronts), developers pretty much just run things through the compiler and leave it at that.

And with more people using runtimes like .NET, resource usage goes up a good bit. But the benefits of an environment like .NET far outweigh the additional resources. More standardized apps with a much lower chance of exploitability are a huge benefit that most users don't realize.

wanderer27 said:
Granted, compilers are (hopefully) better these days, but there's still room for optimization that just doesn't seem to happen.
You just don't hear much these days about getting down into machine-level programming to tweak performance.

Because it's just not necessary and in most places would be considered a waste of time. Would you rather the developers worked on adding new features and fixing bugs or reading through profiler output to determine how to shave 2ms off their app startup time? It's a tradeoff and lots of the time it's better to err on the side of safety instead of performance. Games are probably the only place where that kind of performance tuning is really required these days.
 

wanderer27

Platinum Member
Aug 6, 2005
2,173
15
81
Nothinman said:
Because it's just not necessary and in most places would be considered a waste of time. Would you rather the developers worked on adding new features and fixing bugs or reading through profiler output to determine how to shave 2ms off their app startup time? It's a tradeoff and lots of the time it's better to err on the side of safety instead of performance. Games are probably the only place where that kind of performance tuning is really required these days.

Well, it's been a while since I looked, but when you can look through an exe or dll (or whatever) and see text strings referring to library calls, etc. . . . well, that's just nothing short of bloat.

Worst case it's just laziness, and to me it's really indicative of a lack of optimization. There's no real need to have things like that in a compiled object.

I'm not disputing that compiled libraries and the like (.NET and so forth) make it easier/quicker to develop; they do. It's just not conducive to tight code.


 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
wanderer27 said:
Well, it's been a while since I looked, but when you can look through an exe or dll (or whatever) and see text strings referring to library calls, etc. . . . well, that's just nothing short of bloat.

Worst case it's just laziness, and to me it's really indicative of a lack of optimization. There's no real need to have things like that in a compiled object.

I'm not disputing that compiled libraries and the like (.NET and so forth) make it easier/quicker to develop; they do. It's just not conducive to tight code.

So you're really worried about several K of text that has no bearing on performance at all? I can run strings on just about any binary here and see random bits of text from function names to the help output. But are you saying that you think /bin/ls can be better optimized and that someone should spend the time to remove all of those strings in order to make the binary like 20K smaller?
 

wanderer27

Platinum Member
Aug 6, 2005
2,173
15
81
Nothinman said:
So you're really worried about several K of text that has no bearing on performance at all? I can run strings on just about any binary here and see random bits of text from function names to the help output. But are you saying that you think /bin/ls can be better optimized and that someone should spend the time to remove all of those strings in order to make the binary like 20K smaller?

:D

I know what you're saying.

I just look at it in the light that all this extra padding from dozens or hundreds of subfiles adds to overall system bloat.

The biggest thing to me, though, is that this indicates things haven't been optimized as well as they could be.
Maybe I'm just being too old school on this, from the days when you had to scrape to get things to fit in memory.

Basically:

- Ease of Use = Bloat

- Performance <> Ease of Use


I just think there could be a bit less bloat on the ease-of-use side of things, that's all.


 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
wanderer27 said:
:D

I know what you're saying.

I just look at it in the light that all this extra padding from dozens or hundreds of subfiles adds to overall system bloat.

The biggest thing to me, though, is that this indicates things haven't been optimized as well as they could be.
Maybe I'm just being too old school on this, from the days when you had to scrape to get things to fit in memory.

Basically:

- Ease of Use = Bloat

- Performance <> Ease of Use


I just think there could be a bit less bloat on the ease-of-use side of things, that's all.

But that's like nitpicking about the rear-view mirror when talking about the performance of your car. Sure, taking it out will reduce the overall weight, but the performance gain will be virtually zero, and you've removed something that is pretty damned useful.
 

wanderer27

Platinum Member
Aug 6, 2005
2,173
15
81
Nothinman said:
But that's like nitpicking about the rear-view mirror when talking about the performance of your car. Sure, taking it out will reduce the overall weight, but the performance gain will be virtually zero, and you've removed something that is pretty damned useful.

You need to take into account the (apparent) lack of performance optimization as well, though.

That's what I'm trying to point out: the evident bloat also suggests that the effort hasn't gone into performance tweaks either.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
wanderer27 said:
You need to take into account the (apparent) lack of performance optimization as well, though.

That's what I'm trying to point out: the evident bloat also suggests that the effort hasn't gone into performance tweaks either.

Except it's not evidence of that at all. Static strings in a binary help performance because you avoid looking up the string somewhere else, although that makes translation a bitch. How would you propose someone print out the help text of a CLI binary without storing it in the binary? If the compiler could optimize them away into some other, faster form, don't you think it would?

That's like saying a spoiler and a flame sticker on a car are evidence that the owner did a ton of performance tuning on it. They're not; the two are completely separate and don't indicate anything either way about each other.
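To make it concrete, here's a toy sketch: the help text is baked into the binary's read-only data section, which is exactly what strings digs up, and printing it is a single pointer reference.

Code:
#include <stdio.h>

/* this literal ends up in .rodata, plainly visible to `strings` */
static const char help[] = "usage: demo [-v] [-h] FILE\n";

int main(void)
{
    fputs(help, stdout);  /* no lookup in any external file */
    return 0;
}

/* build and inspect: gcc -O2 demo.c && strings ./a.out | grep usage */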
 

wanderer27

Platinum Member
Aug 6, 2005
2,173
15
81
Nothinman said:
Except it's not evidence of that at all. Static strings in a binary help performance because you avoid looking up the string somewhere else, although that makes translation a bitch. How would you propose someone print out the help text of a CLI binary without storing it in the binary? If the compiler could optimize them away into some other, faster form, don't you think it would?

That's like saying a spoiler and a flame sticker on a car are evidence that the owner did a ton of performance tuning on it. They're not; the two are completely separate and don't indicate anything either way about each other.

Well, to be fair, I'm also basing this a bit on past experience.

I was decompiling a DLL (a long time ago), and when comparing it with what I ended up with, there were definitely some optimizations to be done.

That's been quite some time back, but I wouldn't be surprised if it's still the case.

Making things easy to access and making them high performance just aren't going to be the same thing.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
wanderer27 said:
Well, to be fair, I'm also basing this a bit on past experience.

I was decompiling a DLL (a long time ago), and when comparing it with what I ended up with, there were definitely some optimizations to be done.

That's been quite some time back, but I wouldn't be surprised if it's still the case.

Making things easy to access and making them high performance just aren't going to be the same thing.

Decompiling a library won't get you the same code that the author compiled; it's not possible to perfectly reverse engineer a lossy process like compilation. If the code you got back was readable at all, you should be happy.
 

Texashiker

Lifer
Dec 18, 2010
18,811
198
106
Regardless of the bloat of Windows 7, I do not think the time is right for Microsoft to be releasing another operating system. Because we have more options these days, Microsoft might find people turning away from desktops and more toward smartphones.

Take the time developers are putting into a desktop OS and put it toward a portable OS that can be used on an e-reader or smartphone.

I see a time in the near future where a desktop is primarily for work environments. When people leave their job, they are going to turn toward portable devices instead of going home and getting on their PC running Windows.

When Windows 8 is released, people might say, "I do not want to learn something new, so I will just use my smartphone."
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
Texashiker said:
Regardless of the bloat of Windows 7, I do not think the time is right for Microsoft to be releasing another operating system. Because we have more options these days, Microsoft might find people turning away from desktops and more toward smartphones.

Take the time developers are putting into a desktop OS and put it toward a portable OS that can be used on an e-reader or smartphone.

I see a time in the near future where a desktop is primarily for work environments. When people leave their job, they are going to turn toward portable devices instead of going home and getting on their PC running Windows.

When Windows 8 is released, people might say, "I do not want to learn something new, so I will just use my smartphone."

They are, with Windows Phone 7 and Windows Embedded in general. They're just not good at it and can't figure out why.
 

ViRGE

Elite Member, Moderator Emeritus
Oct 9, 1999
31,516
167
106
I bet Microsoft could make an OS that uses less than 100 megs of memory and very little CPU, but they refuse to do so. Yet they did it in 1995, 1998, and 2000. Unlike everything else in the world, Microsoft products are getting larger and less efficient.
Maybe. I'm not so sure that would be good for performance, though. There's a very real chance of regressing on the space-time tradeoff if you try to absolutely minimize memory usage. Particularly at a time when memory capacity is still growing in line with Moore's Law and single-threaded CPU performance isn't growing nearly as quickly, it makes more sense to trade as much space for time as is reasonable.
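A toy illustration of that tradeoff (my own sketch, nothing Windows-specific): spend 64KB of memory on a lookup table so that counting set bits becomes two array reads instead of a loop.

Code:
#include <stdio.h>
#include <stdint.h>

static uint8_t popcount_table[1 << 16];  /* 64KB of space spent... */

static void build_table(void)
{
    for (int i = 1; i < (1 << 16); i++)
        popcount_table[i] = (uint8_t)(popcount_table[i >> 1] + (i & 1));
}

static int popcount32(uint32_t x)        /* ...to make this two loads */
{
    return popcount_table[x & 0xFFFF] + popcount_table[x >> 16];
}

int main(void)
{
    build_table();
    printf("bits set in 0xDEADBEEF: %d\n", popcount32(0xDEADBEEFu));
    return 0;
}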
 

vcsx

Member
Jun 1, 2010
34
0
0
Win 8 will probably be lighter on resources and have larger UI elements so that it can be used easily by touch on tablets. Right now a lot of people complain that Win7 isn't finger-friendly due to some of its buttons (like the arrow that opens the text color palettes, or font lists) being too tiny. I have a TM2 Tablet PC and I stick to the pen for input. I can use my fingers too, but sometimes I miss the button I really wanted to touch.
 

Modelworks

Lifer
Feb 22, 2007
16,240
7
76
Nothinman said:
That's not a new design; the problem with that design is that it only works in very small, limited systems where the space overhead of shared libraries is a concern. Luckily those are getting more and more rare. Even an Android phone has enough space to run a full Linux distribution these days.

It is entirely new. The easiest way to picture how these newer kernels, or multikernels, work is to ask what an OS would look like if each IP on the internet were a program and the kernel were the connections between them.
Each IP, or program, is self-contained, has what it needs to do its task, and shares only information, not resources.

Nothinman said:
You can do that right now if you want. Just design your program completely in C or ASM and avoid the standard C libraries by making direct syscalls to the kernel. It's really, really ugly but it works if you put in the time.

No you can't, not like this; these programs do not work on current popular OS kernels. They simply can't; the architecture is that different. There are no APIs or libraries. The closest way I can describe it is the ability to write programs that generate code containing only what they need to execute and nothing else. It isn't linked against libraries and doesn't reuse code from other programs or files.


Nothinman said:
Now imagine an exploit in a piece of encryption or HTTP code: how do you know which of your apps are affected? Now you have to go out and manually update every single one of them instead of just replacing libssl.so.

You are going to have to get out of the library mindset to understand. If an exploit exists in one application, the chances of it existing in another are nearly zero because they do not share code. The code generated is unique to that application.

Nothinman said:
Now imagine you've got a SQL query tool that wants to pass data directly to your spreadsheet app: how is it supposed to do that without any standard interfaces? Since you don't have a registry or any services, you now need to configure one app with the path and any parameters needed for the spreadsheet to accept data, and that's hoping that you know all of that.

Simple. The SQL tool sends a simple message, very similar to email in how it reads: Subject: SQL data; To: spreadsheet applications; From: SQL tool; RE: type of data. Neither application knows the other exists, or whether the message has been read, until the other application responds. The applications proceed to discuss how the data will be transferred and what will be transferred, and the exchange is made once both parties agree to the terms. Neither application ever has direct access to the other's data or to any shared systems. Sharing of information between applications is not kept in the background or moved around memory without the user knowing. Everything can be viewed easily by the user, who can see exactly what information is being accessed and what information is being shared.

The kernel has a series of message lines, and this is the only way applications can exchange data. The onus is placed on the program to decide which applications it will converse with and what information it will provide. It's just like a bunch of people in a room together: you can choose what you want to say and whom to say it to, and nobody in the room can reach into your mind and pull out things you don't want them to know.
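To be clear, no shipping kernel exposes anything like this, and Barrelfish's real interface looks different, but as a purely hypothetical sketch of the idea (stubbed so it compiles and runs as a plain simulation):

Code:
#include <stdio.h>
#include <string.h>
#include <stddef.h>

struct msg {
    char subject[64], to[64], from[64], type[64];
    const void *body;
    size_t len;
};

/* stand-in for the kernel's "message line": a real system would copy the
   message into the recipient's queue, and neither side would ever touch
   the other's memory. Here we just log the envelope. */
static int msg_send(const struct msg *m)
{
    printf("To: %s\nFrom: %s\nSubject: %s\nRE: %s (%zu bytes)\n",
           m->to, m->from, m->subject, m->type, m->len);
    return 0;
}

int main(void)
{
    const char *csv = "id,name\n1,foo\n";
    struct msg m = {0};

    strcpy(m.subject, "SQL data");
    strcpy(m.to, "spreadsheet applications");
    strcpy(m.from, "SQL tool");
    strcpy(m.type, "text/csv");
    m.body = csv;
    m.len = strlen(csv);

    return msg_send(&m);  /* the reply/negotiation step is elided */
}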

Nothinman said:
Just because MS dropped the ball hard on things like package management, app dependencies, the registry, etc. doesn't mean they're bad things. When done properly they make for an awesome, integrated system like Debian has been putting out for years. Sure, both have their warts, but I'm sure you'd feel the tradeoffs are worth it if you actually had to use, support, and develop for a system with nothing shared.

Package management isn't even an issue with newer designs. There is nothing to manage; either a program is installed or it isn't. Very few of the systems I program for have shared libraries. Thankfully I stayed mainly on embedded gear, where everything but Linux was dedicated code that didn't make use of libraries. Shared code on embedded gear is trouble waiting to happen; embedded hardware is extremely intolerant of copy-and-paste coding. I guess that is why I picked up the newer kernel designs so easily.

About the closest public information on some of these kernels would be the Barrelfish kernel:
http://www.barrelfish.org/
 

Texashiker

Lifer
Dec 18, 2010
18,811
198
106

Load Windows 7 on a Pentium II 400MHz computer with 256 megs of memory, and let's see if it's still faster than XP.

Correct me if I am wrong, but Windows 7 "seems" to be faster because it caches programs in memory. Let's limit Windows 7 to 512 megs of system memory or less and see how fast it runs.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
Triodos said:

It's true; I was really surprised.

Modelworks said:
It is entirely new. The easiest way to picture how these newer kernels, or multikernels, work is to ask what an OS would look like if each IP on the internet were a program and the kernel were the connections between them.
Each IP, or program, is self-contained, has what it needs to do its task, and shares only information, not resources.

Sounds like nothing more than a spin on a microkernel. And one that would have more overhead and complication than current microkernels.

Modelworks said:
No you can't, not like this; these programs do not work on current popular OS kernels. They simply can't; the architecture is that different. There are no APIs or libraries. The closest way I can describe it is the ability to write programs that generate code containing only what they need to execute and nothing else. It isn't linked against libraries and doesn't reuse code from other programs or files.

Even if the communication happens over IPC, sockets, etc., there has to be some API; otherwise the app can't interact with any hardware, other programs, and so on.

Modelworks said:
You are going to have to get out of the library mindset to understand. If an exploit exists in one application, the chances of it existing in another are nearly zero because they do not share code. The code generated is unique to that application.

Ah, so everyone gets to waste huge amounts of time reimplementing things that have already been done a dozen times before? And they'll all likely have their own set of exploits because no one knows how to handle strings, memory management, etc. properly.

Modelworks said:
Simple. The SQL tool sends a simple message, very similar to email in how it reads: Subject: SQL data; To: spreadsheet applications; From: SQL tool; RE: type of data. Neither application knows the other exists, or whether the message has been read, until the other application responds. The applications proceed to discuss how the data will be transferred and what will be transferred, and the exchange is made once both parties agree to the terms. Neither application ever has direct access to the other's data or to any shared systems. Sharing of information between applications is not kept in the background or moved around memory without the user knowing. Everything can be viewed easily by the user, who can see exactly what information is being accessed and what information is being shared.

The kernel has a series of message lines, and this is the only way applications can exchange data. The onus is placed on the program to decide which applications it will converse with and what information it will provide. It's just like a bunch of people in a room together: you can choose what you want to say and whom to say it to, and nobody in the room can reach into your mind and pull out things you don't want them to know.

Simple on paper, but not to implement. Computers aren't people; they're not able to make the judgement calls your first paragraph describes. So there needs to be an API, with documented data types and other things.

Basically you're reimplementing DCOM, CORBA, IPC, D-Bus, etc. in some new, incompatible manner. How useful...


Modelworks said:
Package management isn't even an issue with newer designs. There is nothing to manage; either a program is installed or it isn't. Very few of the systems I program for have shared libraries. Thankfully I stayed mainly on embedded gear, where everything but Linux was dedicated code that didn't make use of libraries. Shared code on embedded gear is trouble waiting to happen; embedded hardware is extremely intolerant of copy-and-paste coding. I guess that is why I picked up the newer kernel designs so easily.

In the general case it's still an issue; you're just ignoring it by bundling everything with your program and wasting tons of time and effort in doing so.

Thankfully, Linux is becoming more popular on embedded systems so devices can have proper shared libraries and package management and get out of the dark ages of NIH syndrome.

Texashiker said:
Load Windows 7 on a Pentium II 400MHz computer with 256 megs of memory, and let's see if it's still faster than XP.

Correct me if I am wrong, but Windows 7 "seems" to be faster because it caches programs in memory. Let's limit Windows 7 to 512 megs of system memory or less and see how fast it runs.

I don't even think I would consider XP to be usable with 256M of memory after you've patched it all up and installed everything.

Win7 doesn't just seem faster; it is faster because of SuperFetch, and you can't just strike that off the list and say it doesn't count. XP caches things in memory too; it's just not as smart about it as Vista and up.
 

catilley1092

Member
Mar 28, 2011
159
0
76
Seriously, I'd prefer for XP to have a proper burial before another Windows OS is released. If Win 8 were to be released in late 2012, MS would be actively supporting four OSes.

Really, MS ought to get their maximum dollar out of 7 before moving on. Win 7 is currently selling seven copies per second; why interrupt a steady stream of cash flow? Plus, if Win 8 were instead released in mid-April 2014 (only 18 months later), a 128-bit version could be released without everyone wondering "should I go 128, 64, or 32 bit?" When 7 was released (when most new computers were 64-bit), the 32 vs. 64 debate continued for nearly a year. We need 32-bit to be laid to rest when Win 8 is released, or we'll have to wait until Win 9 to see it, maybe by 2015.

I'm ready to move forward. If Win 8 doesn't offer a 128-bit OS (I'm sure AMD and Intel are preparing for this), then I'll stick with 7. 64-bit computing, although now the norm, has been on the market since 2004/2005 (XP Pro 64-bit).

Where is even the beta for Win 8? I'm a TechNet member and haven't seen it yet.

Cat
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
catilley1092 said:
I'm ready to move forward. If Win 8 doesn't offer a 128-bit OS (I'm sure AMD and Intel are preparing for this), then I'll stick with 7. 64-bit computing, although now the norm, has been on the market since 2004/2005 (XP Pro 64-bit).

Why would they have 128-bit builds when there aren't even beta 128-bit chips that I'm aware of, and no one has come close to exhausting the virtual address space provided by 64-bit chips? Today's x86-64 parts only implement 48 of those bits, which is already 256TiB of virtual address space per process.
 

StinkyPinky

Diamond Member
Jul 6, 2002
6,983
1,281
126
Texashiker said:
Load Windows 7 on a Pentium II 400MHz computer with 256 megs of memory, and let's see if it's still faster than XP.

Correct me if I am wrong, but Windows 7 "seems" to be faster because it caches programs in memory. Let's limit Windows 7 to 512 megs of system memory or less and see how fast it runs.

Who cares? Why limit the OS to 1999 hardware? How many PCs have 512MB of RAM anymore?

Even the crappiest budget netbooks ship with 1GB.