Debian hardened, and a bit about new Sun hardware


drag

Elite Member
Jul 4, 2002
8,708
0
0
Originally posted by: Sunner
Originally posted by: n0cmonkey
Originally posted by: drag
dammit... get back later..

Haha! I have confused and defeated you! :swashbuckler;

:confused: :confused: :confused: :confused: :confused:

Drag sure confused me...

Nope, I started to reply and accidentally hit Enter at the wrong time, so basically I had your entire comment as one big quote and nothing else.

Then my boss came in and I had a review, so I didn't have time to finish the comment and gave up. (Got a raise BTW :D)




Originally posted by: Sunner
Originally posted by: n0cmonkey
Originally posted by: drag
I wonder how these would perform for the backend server of a thin client network? Maybe Sun should push their sunrays a little harder. :D



Exactly. That's what everybody is aiming for. They are going to beat MS not by making desktops that are better than Microsoft's, but by simply making expensive desktop OSes obsolete.

And instead go with expensive hardware? That's a problem for Sun; they could do much better on the thin client side, especially with the hardware they're using (HyperSPARCs?!). Those Sun Rays are like $600+ a piece. Definitely neat, though. I only got to play with them for a couple of minutes, but I think it's almost the perfect solution for a corporate network.

But the benefits just aren't there for enthusiasts like the ones on this site. It's also an issue for home users, at least until broadband gets more widespread.

Yeah, we bought a bunch of SunRays when they were relatively new; they were only ~$250 at the time.
Now they're damn near as expensive as the low-end boxes we buy (HP with a 2.6 GHz P4, 256 MB and a 17" LCD).
Insane.

I just looked and their cheapest thin client is $359.00. But that's with no monitor. It's the cool one too.

"Thick" clients are interesting, but I think a thin client is an even better solution, unless the thick clients updated themselves every time they booted up. Also, the fact I can leave my Sun's thin client equipped cubicle, move to another cubicle, and pick up exactly where I left off is just amazing.

There are some substantial problems with thin clients....

For a long time I thought thin clients were the "in" thing for corporate networks. I still do, and every once in a while they get popular, but then the popularity dies off. I didn't understand why (I still love X terminals for business/university campuses), but reading the PDF from Red Hat about the stateless desktop made me realize the problems.

Namely, if the server goes down, or the network gets disconnected, or anything at all happens anywhere between the thin client and the server, then everything everybody is working on is toast and nothing can get done. How many people want to lose a day's work for a couple hundred to a thousand people simply because of a router storm or some misconfigured switch somewhere?

I like the Sun Ray's ability to do what "screen" does for a terminal/ssh connection, though.

Now, for Red Hat's setup, you have several modes of operation that are designed to work around these limitations. I am sure you know how distributed file systems work (AFS, InterMezzo, Red Hat's GFS, MS's DFS...); the desktops will be set up on top of that, based on their needs.

For their idea to work they need several things:
1. A read-only root; any machine-specific configuration and /tmp need to be stored in RAM.
2. The end user has no need, nor will ever have any need whatsoever, for the root password for anything non-administrative. Ever. No root access will be required for anything at all; if a program needs to ask for the password or gain root access in any way, then that's a bug. The root file system will be mounted read-only, and the end user only has to know about the desktop programs he/she uses.
3. Completely automatic setup of hardware, home folders, and printers.

And probably a couple of other things. For instance, burning CD-ROMs would be handled through HAL and D-Bus instead of suid'd cdrecord-related programs.
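Just to make the first requirement concrete, here's a minimal Python sketch of the kind of check a stateless client could run at boot. The writable paths are assumptions on my part, not anything from Red Hat's paper:

#!/usr/bin/env python3
# Illustrative only: confirm that / is mounted read-only and that the
# machine-specific, writable locations live on tmpfs (i.e. in RAM).
WRITABLE_TMPFS = ["/tmp", "/var/run"]  # assumed writable paths; adjust per image

def mount_table():
    """Parse /proc/mounts into {mountpoint: (fstype, [options])}."""
    table = {}
    with open("/proc/mounts") as mounts:
        for line in mounts:
            device, mountpoint, fstype, options = line.split()[:4]
            table[mountpoint] = (fstype, options.split(","))
    return table

def check_stateless_layout():
    mounts = mount_table()
    problems = []
    if "ro" not in mounts.get("/", ("", []))[1]:
        problems.append("/ is not mounted read-only")
    for path in WRITABLE_TMPFS:
        fstype, _ = mounts.get(path, ("missing", []))
        if fstype != "tmpfs":
            problems.append("%s is not on tmpfs (found %s)" % (path, fstype))
    return problems

if __name__ == "__main__":
    for problem in check_stateless_layout():
        print("WARNING:", problem)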

Their idea is that you'd have a minimum of two images for your desktops: the current one and the development one. Any changes you want to make to the end users' machines, you make to the development image. Once you're ready, you switch over and the development image becomes "current". The desktop machines will have a crontab task that periodically checks that the system is up to date; if it's not in sync with the current image, it will do an rsync against the image (or something similar).
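Something like this little Python sketch is roughly what I imagine that cron job doing. The server name, rsync module, and exclude list are made up for illustration; a real setup would need more careful excludes for the machine-specific paths:

#!/usr/bin/env python3
# Hypothetical cron job: keep this desktop in sync with the "current" image.
import subprocess

IMAGE_SOURCE = "imageserver::desktop-current/"   # assumed rsync module holding the current image
LOCAL_ROOT = "/"
EXCLUDES = ["--exclude=/tmp/", "--exclude=/proc/", "--exclude=/sys/"]  # writable / virtual paths

def image_in_sync():
    # A dry run prints the files that *would* change; empty output means we match the image.
    result = subprocess.run(
        ["rsync", "-a", "--delete", "--dry-run", "--out-format=%n"]
        + EXCLUDES + [IMAGE_SOURCE, LOCAL_ROOT],
        capture_output=True, text=True, check=True)
    return result.stdout.strip() == ""

def sync_with_image():
    subprocess.run(["rsync", "-a", "--delete"] + EXCLUDES + [IMAGE_SOURCE, LOCAL_ROOT], check=True)

if __name__ == "__main__":
    if not image_in_sync():
        sync_with_image()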

Then each client will have a different amount of "thickness" depending on your needs. I suppose that for lots of people it would be best to have a completely thin client.

Then the next would be the "instantional" (or something like that) client with localized applications; this computer would be expected to be connected to the network at all times. Not everything would be localized, and the entire root image wouldn't be downloaded; it would live on a distributed file system. The machine would cache just what it needs for the local applications, the user's home folder would be local, and whenever a change is made to the user's system it would be synchronized with the file server. It would look very similar to Windows' "roaming profiles" setup from the end user's perspective. (Thus a server could go down, or the network could go out for a short time, and the user would never know the difference.)

Next up would be things like laptops: things that are only connected to the network periodically and can be taken somewhere, or that connect over the internet or some other unreliable connection. These would have a full-on read-only file system, with everything the end user needs stored locally. (I suppose machine-specific configuration would be stored separately from the main root system, on the hard drive instead of in RAM.) When the laptop is connected to the network, it would first sync the user's files with the archived versions on the file server, and then update the operating system.
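For the laptop case, the reconnect step could be as simple as this Python sketch. The server name and home-directory export are hypothetical; the point is just the order of operations (user files first, then the OS image):

#!/usr/bin/env python3
# Hypothetical "reconnect" script for the laptop case described above.
import os
import socket
import subprocess
import sys

FILE_SERVER = "fileserver.example.com"                     # assumed hostname
USER = os.environ.get("USER", "user")
LOCAL_HOME = os.path.expanduser("~") + "/"
REMOTE_HOME = "%s:/export/home/%s/" % (FILE_SERVER, USER)  # assumed export path

def server_reachable(host, port=22, timeout=5):
    try:
        socket.create_connection((host, port), timeout).close()
        return True
    except OSError:
        return False

def rsync(src, dst):
    # -a preserves permissions/times, -u avoids clobbering files that are newer on the destination
    subprocess.run(["rsync", "-au", src, dst], check=True)

if __name__ == "__main__":
    if not server_reachable(FILE_SERVER):
        sys.exit("file server not reachable; staying offline")
    rsync(LOCAL_HOME, REMOTE_HOME)   # push local changes to the archived copy
    rsync(REMOTE_HOME, LOCAL_HOME)   # pull anything that changed elsewhere
    # ...then refresh the read-only OS image, e.g. with the same image-sync job sketched above.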

The final goal is to be able to take a PC, toss it out the window, and within minutes have another PC in its place with a file-by-file, bit-by-bit exact copy of the system that was running on the now-destroyed PC.
 

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
Originally posted by: drag

There are some substantial problems with thin clients....

For a long time I thought thin clients were the "in" thing for corporate networks. I still do, and every once in a while they get popular, but then the popularity dies off. I didn't understand why (I still love X terminals for business/university campuses), but reading the PDF from Red Hat about the stateless desktop made me realize the problems.

Namely, if the server goes down, or the network gets disconnected, or anything at all happens anywhere between the thin client and the server, then everything everybody is working on is toast and nothing can get done. How many people want to lose a day's work for a couple hundred to a thousand people simply because of a router storm or some misconfigured switch somewhere?

The network problem shouldn't affect much work on these thin clients, since I believe the work would be saved and you could log back in to get it. With some of the newer Sun technologies, the server going down would not affect much either. Yes, some work would be lost, but with a couple of commands everything should be back up. Between NAS/SAN and SunFire ?800s, it gets a lot easier. And Sun systems can go a while without crashing. :cool:

I like the Sun Ray's ability to do what "screen" does for a terminal/ssh connection, though.

Now, for Red Hat's setup, you have several modes of operation that are designed to work around these limitations. I am sure you know how distributed file systems work (AFS, InterMezzo, Red Hat's GFS, MS's DFS...); the desktops will be set up on top of that, based on their needs.

SAN/NAS should solve that problem too, pretty much.

Their idea is that you'd have a minimum of two images for your desktops: the current one and the development one. Any changes you want to make to the end users' machines, you make to the development image. Once you're ready, you switch over and the development image becomes "current". The desktop machines will have a crontab task that periodically checks that the system is up to date; if it's not in sync with the current image, it will do an rsync against the image (or something similar).

That could be interesting, but wouldn't be tough with regular machines or thin clients.

Then each client will have a different amount of "thickness" depending on your needs. I suppose that for lots of people it would be best to have a completely thin client.

Then the next would be the "instantional" (or something like that) client with localized applications; this computer would be expected to be connected to the network at all times. Not everything would be localized, and the entire root image wouldn't be downloaded; it would live on a distributed file system. The machine would cache just what it needs for the local applications, the user's home folder would be local, and whenever a change is made to the user's system it would be synchronized with the file server. It would look very similar to Windows' "roaming profiles" setup from the end user's perspective. (Thus a server could go down, or the network could go out for a short time, and the user would never know the difference.)

Next up would be things like laptops: things that are only connected to the network periodically and can be taken somewhere, or that connect over the internet or some other unreliable connection. These would have a full-on read-only file system, with everything the end user needs stored locally. (I suppose machine-specific configuration would be stored separately from the main root system, on the hard drive instead of in RAM.) When the laptop is connected to the network, it would first sync the user's files with the archived versions on the file server, and then update the operating system.

The final goal is to be able to take a PC, toss it out the window, and within minutes have another PC in its place with a file-by-file, bit-by-bit exact copy of the system that was running on the now-destroyed PC.

Those are definitely some interesting thoughts. Nothing a little centralized login and rsync couldn't handle though. ;)
 

drag

Elite Member
Jul 4, 2002
8,708
0
0
Yeah, I think the concept should be useful.

Some of the technology should be available in Fedora Core 3 when it comes out. One piece is the "read-only root"; it should be available as an RPM package you can add on.

I was thinking that the security implications are interesting.
 

sciencewhiz

Diamond Member
Jun 30, 2000
5,885
8
81
The reason there hasn't been stack protection in Debian for years now is that none of the current mechanisms work on all 11(?) architectures.
 

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
Pretty sure ProPolice is working on all 12 of OpenBSD's archs. If one is missing, it's probably VAX.
 

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
Missing: HP PA-RISC, IA-64, IBM S/390

Maybe DEC MIPS too, but that should work, I'd think. Debian has a large developer base, though; they could try to contribute something back to the community by helping to add support for those archs, couldn't they? :p
 

Sunner

Elite Member
Oct 9, 1999
11,641
0
76
Originally posted by: n0cmonkey
Missing: HP PA-RISC, IA-64, IBM S/390

Maybe DEC MIPS too, but that should work, I'd think. Debian has a large developer base, though; they could try to contribute something back to the community by helping to add support for those archs, couldn't they? :p

Didn't know Debian supported IA-64; oh well, I forgot about PA-RISC as well.
Maybe the Debian guys got tired of contributing stuff back after the XFree86 guys kept refusing their patches over and over :roll:
Good thing we're getting rid of that project. >Happy X.org user< :)
 

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
Originally posted by: Sunner
Originally posted by: n0cmonkey
Missing: HP PA-RISC, IA-64, IBM S/390

Maybe DEC MIPS too, but that should work, I'd think. Debian has a large developer base, though; they could try to contribute something back to the community by helping to add support for those archs, couldn't they? :p

Didn't know Debian supported IA-64; oh well, I forgot about PA-RISC as well.
Maybe the Debian guys got tired of contributing stuff back after the XFree86 guys kept refusing their patches over and over :roll:
Good thing we're getting rid of that project. >Happy X.org user< :)

XFree refused patches from nearly everyone. The ProPolice guy is working with OpenBSD. If you can work with Theo, you can work with anyone. :p
 

drag

Elite Member
Jul 4, 2002
8,708
0
0
Originally posted by: Sunner
Originally posted by: drag
Here is a processor chart for the ProPolice GCC patches.

works in:
Intel x86
powerpc
sparc
VAX
mips
mips64
Motorola 68k
alpha
sparc64
arm
amd64

Is there anything missing that Debian supports?
S/390? I wonder how many Debian installs are running on S/390 systems anyway... I doubt it's very many.


Probably a few. But how many S/390s are out there being used as servers? Not very many; they aren't too hot at that sort of thing.

I wouldn't be surprised if those things already have something like that... Remember, the OS doesn't actually run the computer, it just runs on the computer. VM or z/OS or VSE/ESA or whatever runs the actual computer, and I'd bet that thing has some hardcore memory management stuff. The Linux stuff runs in a "partition" on the server, which isn't just disk space, but a CPU-time and memory slice in its own little world.

For example, our S/390 is dual core, not for SMT or anything like that, but both CPUs execute the same instructions, and at the end, if they return different results, then you know it has a problem. (Well, the computer knows it has a problem; it'll call whoever is down in Texas (I think) by itself and tell THEM, then they call you back and tell you when they can come in and get it fixed.)

So I'd bet that they have a similar setup with RAM... or at least a special subsystem of some sort that monitors that sort of thing. Not that I know anything about it, really.
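As a toy illustration of that paired-CPU checking (purely conceptual; the real thing happens in hardware, instruction by instruction, not in Python):

# Run the same work twice and flag any mismatch, the way the paired CPUs
# described above compare their results before trusting them.
def checked(fn, *args):
    first = fn(*args)
    second = fn(*args)
    if first != second:
        raise RuntimeError("results disagree -- possible hardware fault")
    return first

print(checked(sum, range(1000)))  # prints 499500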

On a side note:
Of course, only MS has the balls to compare that sort of setup running a Linux OS against Windows running on a 900 MHz Xeon-based Dell computer in a price/performance comparison, and then use that as "proof" that Linux has a higher TCO. Morons.
/rant

But I don't think that not being able to run a PATCH for GCC on all architectures would be grounds for them refusing to use it, would it?

I mean, Debian has lots of platform-specific packages. You can't use GRUB on PPC, right? Do they have to have exactly the same GCC version on all platforms in order to get it into testing? There has to be some platform-specific tweaking and such for GCC...

 

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
Maybe they have very strict standardization for everything. I don't think a couple of architectures not supporting one thing should affect much, especially if they're working on support for those archs.
 

sciencewhiz

Diamond Member
Jun 30, 2000
5,885
8
81
Originally posted by: n0cmonkey
Maybe they have very strict standardization for everything. I don't think a couple of architectures not supporting one thing should affect much, especially if they're working on support for those archs.

Things won't move past unstable unless they work on all architectures.

Whenever this type of thing comes up on the security list, there are always many people who want it, and nobody who wants to develop it.