Triodos said:
It's true, I was really surprised.
Modelworks said:
It is entirely new. The easiest way to picture how these newer kernels, or multi-kernels, work is to ask what an OS would look like if each IP on the internet were a program and the kernel were the connections between them.
Each IP, or program, is self-contained, has what it needs to do its task, and only shares information, not resources.
Sounds like nothing more than a spin on a microkernel. And one that would have more overhead and complication than current microkernels.
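To make the comparison concrete, here is a minimal sketch of "sharing information, not resources" on a conventional kernel, using ordinary POSIX message queues. The queue name and message text are made up for illustration; the point is that message passing between isolated processes is already standard IPC, not a new architecture.

```c
/* Sketch: processes exchange only bytes through the kernel, never memory.
   Queue name and message contents are placeholders for this example.
   On Linux, link with -lrt. */
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = 128 };

    /* Each process only sees the queue handle, never the peer's memory. */
    mqd_t q = mq_open("/demo_queue", O_CREAT | O_RDWR, 0600, &attr);
    if (q == (mqd_t)-1) { perror("mq_open"); return 1; }

    const char *note = "row count: 42";          /* information...    */
    mq_send(q, note, strlen(note) + 1, 0);       /* ...not resources  */

    char buf[128];
    if (mq_receive(q, buf, sizeof buf, NULL) > 0)
        printf("received: %s\n", buf);

    mq_close(q);
    mq_unlink("/demo_queue");
    return 0;
}
```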
Modelworks said:
No you can't, not like this; these programs do not work on current popular OS kernels. They simply can't. The architecture is that different. There are no APIs or libraries. The closest I can come to describing it is the ability to write programs that generate code containing only the code needed to execute and nothing else. It isn't linked to libraries and doesn't re-use code from other programs or files.
Even if the communication happens over IPC, sockets, etc., there has to be some API, otherwise the app can't interact with any hardware, other programs, etc.
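Put another way, even the barest possible exchange presupposes an interface both sides agree on. Here is a minimal sketch with an invented header layout, sent over an ordinary pipe:

```c
/* Sketch: even a "library-free" exchange implies an interface.
   The header layout below is invented for illustration; the point is that
   both programs must agree on it, which makes it a de facto API. */
#include <stdint.h>
#include <string.h>
#include <unistd.h>

struct msg_header {            /* the agreed layout is the contract */
    uint32_t length;           /* payload bytes that follow         */
    uint32_t type;             /* what the payload means            */
};

/* Writes a header plus payload to any byte stream (pipe, socket, ...). */
static int send_msg(int fd, uint32_t type, const void *payload, uint32_t len)
{
    struct msg_header h = { .length = len, .type = type };
    if (write(fd, &h, sizeof h) != (ssize_t)sizeof h) return -1;
    return write(fd, payload, len) == (ssize_t)len ? 0 : -1;
}

int main(void)
{
    int fds[2];
    if (pipe(fds) != 0) return 1;              /* stand-in transport */
    const char *text = "hello";
    return send_msg(fds[1], 1, text, (uint32_t)strlen(text));
}
```

The moment the receiver has to know that a 4-byte length comes first, that agreement is an API, whatever you want to call it.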
Modelworks said:
You are going to have to get out of the library mindset to understand. If an exploit exists in one application, the chances of it existing in another are nearly zero, because they do not share code. The code generated is unique to that application.
Ah, so everyone gets to waste huge amounts of time reimplementing things that have already been done a dozen times before? And they'll all likely have their own set of exploits, because no one knows how to handle strings, memory management, etc. properly.
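For the record, this is the sort of bug class I mean. A rough sketch, with made-up buffer sizes, of the kind of hand-rolled string handling a shared, well-audited routine exists to prevent:

```c
/* Sketch of the bug class: a hand-rolled copy with no bounds check.
   Buffer sizes and strings are purely illustrative. */
#include <stdio.h>

/* Copies until the terminator with no idea how big dst actually is. */
static void copy_unsafe(char *dst, const char *src)
{
    while ((*dst++ = *src++) != '\0')
        ;
}

int main(void)
{
    char small[8];
    const char *long_input = "this string is longer than eight bytes";

    copy_unsafe(small, "ok");          /* fine only because the input fits */
    /* copy_unsafe(small, long_input);    would silently overflow 'small' */

    /* The bounded version every application would have to rediscover: */
    snprintf(small, sizeof small, "%s", long_input);  /* truncates safely */
    printf("%s\n", small);
    return 0;
}
```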
Modelworks said:
Simple. The SQL tool sends a simple message, very similar to email in how it reads, like: Subject: SQL data, To: spreadsheet applications, From: SQL tool, RE: type of data. Neither application knows the other exists, or whether the message has been read, until the other application responds. The applications then discuss how the data will be transferred and what will be transferred, and the exchange is made once both parties agree to the terms. Neither application ever has direct access to the other's data or to any shared systems. Sharing of information between applications is not kept in the background or moved around memory without the user knowing; everything can be viewed easily by the user, who can see exactly what information is being accessed and what information is being shared.
The kernel has a series of message lines, and this is the only way applications can exchange data. The work is placed on the program to decide which applications it will converse with and what information it will provide. It's just like a group of people in a room together: you can choose what you want to say and whom to say it to, and nobody in the room can reach into your mind and pull out things you don't want them to know.
Simple on paper, but not to implement. Computers aren't people; they can't make the judgement calls your first paragraph describes. So there needs to be an API, with documented data types and the rest.
Basically you're reimplementing DCOM, CORBA, IPC, D-Bus, etc. in some new, incompatible manner. How useful...
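To make "documented data types" concrete, here is a rough sketch of the email-style message from the quote expressed as an actual data type. The field names follow the SQL-tool example above; everything else (sizes, negotiation steps, the row format) is invented for illustration:

```c
/* Sketch: the email-like offer as a documented data type. Field names come
   from the quoted example; sizes and enum values are invented. */
#include <stdio.h>

enum msg_kind { MSG_OFFER, MSG_ACCEPT, MSG_DATA };   /* negotiation steps */

struct mail_msg {
    enum msg_kind kind;
    char subject[64];     /* "SQL data"                 */
    char to[64];          /* "spreadsheet applications" */
    char from[64];        /* "SQL tool"                 */
    char re[64];          /* type of data on offer      */
};

/* Builds an offer; in the described design the kernel would queue this on
   a "message line" and the receiver might simply never reply. */
static struct mail_msg make_offer(void)
{
    struct mail_msg m = { .kind = MSG_OFFER };
    snprintf(m.subject, sizeof m.subject, "SQL data");
    snprintf(m.to,      sizeof m.to,      "spreadsheet applications");
    snprintf(m.from,    sizeof m.from,    "SQL tool");
    snprintf(m.re,      sizeof m.re,      "rows of comma-separated text");
    return m;
}

int main(void)
{
    struct mail_msg offer = make_offer();
    printf("To: %s\nFrom: %s\nSubject: %s\nRE: %s\n",
           offer.to, offer.from, offer.subject, offer.re);
    return 0;
}
```

Defining and publishing a layout like this for every kind of exchange is exactly the work that CORBA IDL or D-Bus interface descriptions already formalize.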
Modelworks said:
Package management isn't even an issue with newer designs. There is nothing to manage; either a program is installed or it isn't. Very few of the systems I program for have shared libraries. Thankfully I stayed mainly on embedded gear, where everything but Linux was dedicated code that didn't make use of libraries. Shared code on embedded gear is trouble waiting to happen, and embedded hardware is extremely intolerant of copy-and-paste coding. I guess that is why I picked up the newer kernel designs so easily.
In the general case it's still an issue; you're just ignoring it by bundling everything with your program, and wasting tons of time and effort in doing so.
Thankfully, Linux is becoming more popular on embedded systems so devices can have proper shared libraries and package management and get out of the dark ages of NIH syndrome.
Texashiker said:
Load Windows 7 on a Pentium II 400 MHz computer with 256 megs of memory, and let's see if it's still faster than XP.
Correct me if I am wrong, but Windows 7 "seems" to be faster because it caches programs in memory. Let's limit Windows 7 to 512 megs of system memory or less, and let's see how fast it runs.
I don't even think I would consider XP usable with 256 MB of memory after you've patched it all up and installed everything.
Win7 doesn't just seem faster; it is faster, because of SuperFetch, and you can't just strike that off the list and say it doesn't count. XP caches things in memory too; it's just not as smart about it as Vista and up.