If You Use Linux Read This! - Maybe You Don't Know

Crusty

Lifer
Sep 30, 2001
12,684
2
81
Costs of operating a server OS have very little to do with the initial purchase price; in the long run, the cost of maintaining it will be much more than the purchase price.
 

EricMartello

Senior member
Apr 17, 2003
910
0
0
But with that example at least you have the option of removing CUPS and the package manager will give you a warning about all of the other packages that it's removing in order to complete your request. Sure it'll let you shoot yourself in the foot if you want, but you do get fair warning and at least you have the choice. It's not even possible to remove printing support from Windows so it's not exactly a good comparison.

The presence of printing support within windows does not require that a daemon be loaded. You can stop the print spooler process quite easily either through services.msc or command line...and there is nothing to uninstall because windows will not install printer drivers if no printer is connected.

Yes, parts of the system do need to be done in ASM for things like the very early stages of booting. I'm pretty sure there's no way to kick an x86 CPU into long-mode or even real mode without ASM.

Early stages of booting are handled by the bios. The transition from bios to software happens within the boot sector & boot loader portion of the boot drive, which does not need to be done in ASM. You do realize that compilers spit out machine code...

And using XP as a comparison is indeed valid because it's still in mainstream use. It's not my fault Windows users are clinging to almost decade old software for no good reason. I wouldn't be surprised if there's still more XP machines out there than Vista and Win7 combined.

I'd liken your statement of conjecture regarding the current "mainstream" usage of XP to the current "mainstream" usage of Linux boxes with the 2.4 kernel...but we're not talking about market share, we're talking about current "production quality" software and comparing the latest versions of each...so no, comparing the latest version of linux to XP is not a valid comparison. You would need to compare 2.4 kernel linux to XP if you want to use XP as your basis for comparison.

I just tried it again on Win7 in the hopes that they fixed it and nope, they haven't. I installed Handbrake (which took longer than it would've on Linux), started up a transcode and set its process to Above Normal and immediately the UI became choppy and it took ~30s to minimize FF. Sure it wasn't completely unresponsive but it rendered the system effectively unusable.

No, it did what you wanted it to - it gave the transcoding process more CPU time than everything else. What happens when you do not touch the thread settings? Is the program underperforming? By the way, I have adjusted thread priority on Sony Vegas encodes and I did not experience any type of UI slowdowns. Maybe with linux you need to manually adjust the process priority for it to work right...but as I said before, with windows if you leave it alone it works great.

Generally at home it's a VM; because the process priority also influences the I/O priority, I can lower it and reduce I/O contention. The system is still usable without touching it, but lowering it helps when both are doing a lot of I/O.

If you need a robust VM you should be using an OS better suited to that task...if only there was an OS that was designed from the ground-up to allow for high performance virtualization...oh wait, there is...it's called Open Solaris and it's also free.

No, it's much simpler and quicker to look at the log. It takes ~2s to run 'less /var/log/Xorg.0.log' then hit G to see what the last error was. If you're unwilling to do even the smallest bit of investigation then you shouldn't be using that computer.

If I wanted to see the last error in a log I'd just use "tail /path/to/log.txt". See, I'm more efficient than you because I don't need to press G.
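
Both are one-liners, for the record (same Xorg log as the example above; the line count is arbitrary):

tail -n 5 /var/log/Xorg.0.log    # print the last few lines and exit
less +G /var/log/Xorg.0.log      # open at the end of the file, but keep search and scrollback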

I know exactly what happened without looking at the log file, I would not reinstall something unless it was necessary. See the issue above with X user interface fonts being considered "dependencies" when they should not be (i.e. attached to cups).


Novell did their first demo of XGL on a lower-end laptop with a 2-3 generation old ATI card. There's no way Aero would even run on that hardware, let alone run well.

And frankly for normal, 2D desktop usage whichever is faster is irrelevant. I've never had X 2D performance seem slow or laggy to me.

XGL what? Who uses it? Not I...who uses Novell for desktop or workstation systems? Not anyone...so who cares? Nobody. It's like saying Tesla should have gotten more credit for being so ahead of his time...no, he sucked at marketing and got pwned by Edison, so now history credits Edison with all the breakthroughs in electricity, even though Tesla was responsible for a lot more of our technology as it is today with his inventions and discoveries.

2D matters since that is the current desktop/workstation GUI paradigm...and windows still has the best 2D performance. It's not just lag, it's rendering windows, things like smooth-scrolling text, etc.

Obviously the entire driver isn't in userland, there has to at least be a small shim for device setup, IRQ acknowledgement, etc. But as much as possible was moved to userland as can be seen by the automatic recovery done instead of a STOP error when the driver runs into unexpected problems.

That would have required an overhaul of WDM. I can neither confirm nor deny that MS overhauled WDM for Vista/Win7 but it is likely they did. I have not seen a BSOD in Win7 to date.

I didn't say it was difficult, I said most developers can't deal with it. There's a difference. With all of the craptastic apps for Windows I'm actually surprised most developers can hold jobs.

Most apps are for Windows and there are plenty of good ones out there.

Ubuntu 10.04 ships with the 2.6.32 kernel which has ext4, TRIM, etc there already.

Yes, TRIM only works with ext4 partitions so you need one to have the other...and last I checked you don't have a lot of Ubuntu enterprise servers. It's a desktop OS, it's not certified or intended for enterprise-grade usage. You could use it for a server...but would that be the smart thing to do? Probably not.

And yes, I saw Keith Packard's stuff about GEM. GL performance in X has always been comparable to Windows for me so my hopes are more for virtualization 3D support than improved performance.

You can run 3D games on Linux just about as well as you can on Windows if the video card driver is there...and as long as you have an AMD or Nvidia card you should be ok. They're bringing STEAM to linux (and Mac)...that says something.

But that aside, virtualization is still a "new" thing. If you really need hardcore VM support you should set up the system with the appropriate software...either Open Solaris or something like CentOS with the Xen kernel. You can then set up all the instances you need and get decent performance.

I do because all it does is add more attack vectors, mostly from web browsers, and makes it seem like people who don't understand the system can still admin it which isn't the case with any OS. I've seen more botched SBS servers, AD setups, etc than I'd care to talk about. And all of them were done by Windows admins that thought they knew more than they did because they were able to successfully make it through the SBS setup wizard.

The resources X and a desktop use are negligible and will be paged out if necessary. There are reasons for not having X on a server but performance isn't one of them.

X+window manager loads a bunch of crap into memory and yes, its presence does use resources. You want to waste swap space for it? That's your call...

Not sure what your attack vector statement is all about because you should be restricting administrative access by port/IP on the firewall. SSH works fine for me...and aside from that, there are web-based admin tools "webmin, cpanel/whm, etc" which make managing servers quite painless.
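
A minimal sketch of the kind of firewall restriction I mean (203.0.113.0/24 is a placeholder for whatever your actual admin network is):

iptables -A INPUT -p tcp --dport 22 -s 203.0.113.0/24 -j ACCEPT    # allow SSH from the admin net
iptables -A INPUT -p tcp --dport 22 -j DROP                        # drop SSH from everyone else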
 

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
Since your beef seems to be with closed-source drivers...let me just say that a lot of the well-performing drivers on linux are either closed-source from the device manufacturer (vs linux generic implementation) or the manufacturer does provide open-source drivers. It is not unreasonable for them to expect some kind of NDA because drivers are low-level enough that trade secrets may be revealed. If you invested a lot of money into something of your own, would you turn around and let anyone freely benefit from your investment if you knew that doing so would devalue your investment? Chances are you would not.

Honestly as long as it works as intended, I don't care if the driver is open source or binary. I'm not looking for a conspiracy in every nook and cranny "OMG what's in this closed-source driver? They're watching us, man!!"

They don't have to expose how the hardware works, just the APIs for interacting with it. If that information is trade secret, they screwed the pooch too early into the development of the hardware.
I don't think they're watching us, I just trust a lot of open source hackers more than I trust a lot of no-name, not part of the community, just out of school, never really written a big project before, don't really use the OS anyways "developers" from the third world.
It's like if someone from nVidia said video drivers were too complex for open source developers to write, and then someone found a vulnerability in the closed source nVidia driver. If they're too complex for open source developers to write, and too complex for nVidia to write, who can write them?

EDIT:
I also don't have a problem with companies providing open source drivers for the hardware they make. Documentation on how to write the driver would be a million times more useful to developers who then have to troubleshoot, support, and maintain that code; but sometimes code is all you can get. If the company is devoted to the community, and continues to keep the code working (with new and old hardware) I'd be incredibly impressed. Hell, I'd make sure their hardware was on my wishlist.
I vote with my money. nVidia isn't high on the list. AMD/ATI and Intel on the other hand are much more likely purchases. As is RaLink for wireless and Intel for wired networking gear.
 

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
Actually, America was largely built upon violent rebellion AND CAPITALISM. We rebelled because we were making a lot of money here in the States and didn't wanna pay taxes to Britain. Anti-capitalism is unamerican in the purest sense. I reiterate that the problem is not free software, but the notion that ALL software should be freely available.

And by capitalism, you mean economic tyranny. A certain American hero once wanted to sell illegal black market tea for more than the "expensive" (and by expensive I mean better quality and costing less money) tea being sold by the British. Unfortunately he wasn't able to form a monopoly, because that privilege was given to another company.
Oh, and instead of being a man about it, he pussy-footed just a bit by dressing up as a Native American, so that they could take the blame. What a hero indeed!

BSD has a very liberal and generous license. You can basically take a BSD OS and brand it as your own without having to release the source to anyone. Its license is very business-friendly, which is why Apple and Mac OS exist...but it lacks the large community that linux has. I'd say a lot of BSD users are just as bad as Mac OS users...they have a big inferiority complex and quickly resort to pointing out how "superior" BSD is to Linux, yet benchmarks (both synthetic and real world) tend to favor Linux over BSD.
BSD is superior (and truly FREE), there's no doubt. ;)
Who cares? Most BSD users I know and interact with don't use it exclusively. Why? Because it isn't the best thing to use to solve every problem.


Jews are cheap, but that's a whole nother story.
Did you really just say that? Really? Are you being serious, or just "edgy?" Or did I somehow warp into 4chan?
 

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
And I just ran into some more MS fail today. Downloaded the ~53M package of SQL 2005 Express to install on a freshly installed Win2K8 R2 server and it wouldn't even install properly. It kept saying that it couldn't find the MSI for the SQL Native Client even though it was included in the package and I verified the MSI was in the right directory. Installing the Native Client manually first allowed the installer to finish, but it's retarded that stupid shit like that was necessary.

So yes, I stand by my assessment that Linux package management is many magnitudes better than the installers for Windows.

Wait, you're saying that apt-get install postgresql-8.4, yum install postgresql, or even pkg_add -i postgresql-server and then configuring it is easier than tracking down the db installer on the web, downloading it, double clicking it, holding the installer's hand, and finally configuring it?
Or that apt-get install postgresql-8.4, yum update postgresql, or pkg_add -iu postgresql-server is easier than tracking the update on the web, downloading it, and holding the installer's hand?
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
The presence of printing support within windows does not require that a daemon be loaded. You can stop the print spooler process quite easily either through services.msc or command line...and there is nothing to uninstall because windows will not install printer drivers if no printer is connected.

I understand how to stop services in Windows. If the Windows print services were separate packages like they were in CUPS there would be something to remove, but like so many other parts of Windows it's just lumped in with the rest of the OS so that you can't remove it.

You're the one that mentioned removing CUPS and having it take a bunch of other packages with it which doesn't have an analogy in Windows since you can't remove the print spooler there. If you had just disabled the CUPS service like you would the Windows print spooler you wouldn't have had those dependency issues.
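
For example, on Debian/Ubuntu you can preview exactly what a removal would drag along, or just turn the service off instead (a sketch; the update-rc.d syntax differs slightly between releases):

apt-get -s remove cups     # simulate only: lists every dependent package that would go with it
/etc/init.d/cups stop      # or just stop the daemon now
update-rc.d cups disable   # and keep it from starting at boot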

Early stages of booting are handled by the bios. The transition from bios to software happens within the boot sector & boot loader portion of the boot drive, which does not need to be done in ASM. You do realize that compilers spit out machine code...

So you're telling me all of the code in the function protected_mode_jump in arch/x86/boot/pmjump.S is in assembly for fun?

Boot loaders like GRUB and LILO are run in real mode. Otherwise you wouldn't be able to use them to boot memtest, DOS, etc because you can't switch back to real mode from protected mode.

I'd liken your statement of conjecture regarding the current "mainstream" usage of XP to the current "mainstream" usage of Linux boxes with the 2.4 kernel...but we're not talking about market share, we're talking about current "production quality" software and comparing the latest versions of each...so no, comparing the latest version of linux to XP is not a valid comparison. You would need to compare 2.4 kernel linux to XP if you want to use XP as your basis for comparison.

However you want to spin it, the Windows process scheduler is still broken in Win7.

No, it did what you wanted it to - it gave the transcoding process more CPU time than everything else. What happens when you do not touch the thread settings? Is the program underperforming? By the way, I have adjusted thread priority on Sony Vegas encodes and I did not experience any type of UI slowdowns. Maybe with linux you need to manually adjust the process priority for it to work right...but as I said before, with windows if you leave it alone it works great.

No, it didn't do what I wanted. Raising the priority of a process in Linux doesn't adversely affect the usage of the system like it does in Windows.

I don't need to adjust them in Linux, stop putting words in my mouth. In general neither OS needs you to fudge with the process priorities, but doing so doesn't make the system unusable like it does in Windows.
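
And when I do want to deprioritize something on Linux it's the standard tools (HandBrakeCLI is just an example workload here, and the ionice classes assume an I/O scheduler like CFQ that honors them):

nice -n 19 HandBrakeCLI -i in.mkv -o out.mp4    # start the transcode at the lowest CPU priority
renice 19 -p 1234                               # or lower an already-running PID
ionice -c3 -p 1234                              # idle I/O class: it only gets otherwise-unused disk time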

If you need a robust VM you should be using an OS better suited to that task...if only there was an OS that was designed from the ground-up to allow for high performance virtualization...oh wait, there is...it's called Open Solaris and it's also free.

And it's also shit to use. I'm not installing Open Solaris on my workstation for any reason and VM support is probably the lowest on the list of potential reasons. ZFS is probably the only thing that might make me consider it but forcing Solaris on myself isn't worth it just for ZFS.

If I wanted to see the last error in a log I'd just use "tail /path/to/log.txt". See, I'm more efficient than you because I don't need to press G.

I know exactly what happened without looking at the log file, I would not reinstall something unless it was necessary. See the issue above with X user interface fonts being considered "dependencies" when they should not be (i.e. attached to cups).

A) I use less because generally I end up searching back into the file because I need more than 1 screen's worth to see everything.
B) The fonts are a dependency because they're used for rendering.
C) I was speaking in general, not specifically about the problem you created yourself by trying to remove packages without understanding the consequences.

XGL what? Who uses it? Not I...who uses Novell for desktop or workstation systems? Not anyone...so who cares? Nobody. It's like saying Tesla should have gotten more credit for being so ahead of his time...no, he sucked at marketing and got pwned by Edison, so now history credits Edison with all the breakthroughs in electricity, even though Tesla was responsible for a lot more of our technology as it is today with his inventions and discoveries.

2D matters since that is the current desktop/workstation GUI paradigm...and windows still has the best 2D performance. It's not just lag, it's rendering windows, things like smooth-scrolling text, etc.

Of course no one uses XGL now, but it was used for the first compositing demos. Since then the compositing stuff got merged into Xorg and various window managers.

I understand why 2D matters, my point was that the 2D speed of X is more than fast enough for normal usage right now so speeding it up won't result in any tangible changes.

That would have required an overhaul of WDM. I can neither confirm nor deny that MS overhauled WDM for Vista/Win7 but it is likely they did. I have not seen a BSOD in Win7 to date.

Whether you've seen a BSOD or not is irrelevant, all that means is driver quality went up. I've seen video drivers crash in Vista and just cause the display to reset and a little bubble pops up in the notification area telling you that the driver encountered an error and has been restarted. That was years ago back when Vista first came out but I remember it vividly because of the frustration it caused me trying to play a game.

Most apps are for Windows and there are plenty of good ones out there.

I agree. But saying there's "plenty of good ones" doesn't contradict the fact that the majority of apps released for Windows are utter shit.

Yes, TRIM only works with ext4 partitions so you need one to have the other...and last I checked you don't have a lot of Ubuntu enterprise servers. It's a desktop OS, it's not certified or intended for enterprise-grade usage. You could use it for a server...but would that be the smart thing to do? Probably not.

Unless you're stalking me how would you know how many enterprise servers I'm responsible for?

And it's not just ext4, XFS does TRIM as well and Ubuntu is just fine for servers. If you use the alternate install or LTS discs you don't get a Gnome desktop by default and it's all the same software you would get with RHEL/CentOS. And you can get support from Canonical directly or one of their partners. So why wouldn't it be smart?
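
And enabling TRIM is just a mount option (sketch of an /etc/fstab line; the device and the remaining options are illustrative, and the running kernel has to support discard on that filesystem):

/dev/sda1  /  ext4  discard,errors=remount-ro  0  1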

You can run 3D games on Linux just about as well as you can on Windows if the video card driver is there...and as long as you have an AMD or Nvidia card you should be ok. They're bringing STEAM to linux (and Mac)...that says something.

But that aside, virtualization is still a "new" thing. If you really need hardcore VM support you should set up the system with the appropriate software...either Open Solaris or something like CentOS with the Xen kernel. You can then set up all the instances you need and get decent performance.

I know, I don't game much these days but when I do it's on Linux.

I wouldn't inflict Solaris or CentOS on myself for a server let alone a desktop. I get decent normal usage performance with what I have now, but 3D in a VM is still beta at best. However it's looking like with KVM and GEM we might have native 3D support in guests fairly soon.

X+window manager loads a bunch of crap into memory and yes, its presence does use resources. You want to waste swap space for it? That's your call...

Not sure what your attack vector statement is all about because you should be restricting administrative access by port/IP on the firewall. SSH works fine for me...and aside from that, there are web-based admin tools "webmin, cpanel/whm, etc" which make managing servers quite painless.

The resources used by X are small and disk space is cheap, performance isn't a reason to not install X. But security is because having all of that extra software installed, like web browsers, just adds more things to go wrong. I don't install X unless necessary either, but I don't do it because I'm convinced it makes my server faster because it doesn't. And Webmin was crap the last time I touched it.

Honestly as long as it works as intended, I don't care if the driver is open source or binary. I'm not looking for a conspiracy in every nook and cranny "OMG what's in this closed-source driver? They're watching us, man!!"

It's not about conspiracies or big brother, it's about being able to properly debug and fix the driver when there's a problem which is impossible with non-free modules.

Wait, you're saying that apt-get install postgresql-8.4, yum install postgresql, or even pkg_add -i postgresql-server and then configuring it is easier than tracking down the db installer on the web, downloading it, double clicking it, holding the installer's hand, and finally configuring it?
Or that apt-get install postgresql-8.4, yum update postgresql, or pkg_add -iu postgresql-server is easier than tracking the update on the web, downloading it, and holding the installer's hand?

Kind of, except change "holding the installer's hand" to "tracking down why the installer doesn't work at all and fixing it by doing something that the installer should've done itself without my help".
 

EricMartello

Senior member
Apr 17, 2003
910
0
0
They don't have to expose how the hardware works, just the APIs for interacting with it. If that information is trade secret, they screwed the pooch too early into the development of the hardware.

Ok, even if there was an API for certain drivers which are secretive, the API would be limited. You wouldn't have access to the things the company doesn't want revealed. There are certain optimizations, coding techniques, proprietary functions that can allow a given driver to perform better on the same hardware. Either way, you would need to know what the company doesn't want the general developer community knowing in order to effectively utilize the API.

I don't think an NDA is unreasonable if they allow you access to the info you need to make drivers for a given platform...in general it is in a company's best interest to allow drivers to be created for as many platforms as possible, but the company also wants to maintain a level of QC. If the device doesn't work right who do you think is going to be inundated with support calls? Hint: not the developer.

And by capitalism, you mean economic tyranny. A certain American hero once wanted to sell illegal black market tea for more than the "expensive" (and by expensive I mean better quality and costing less money) tea being sold by the British. Unfortunately he wasn't able to form a monopoly, because that privilege was given to another company.
Oh, and instead of being a man about it, he pussy-footed just a bit by dressing up as a Native American, so that they could take the blame. What a hero indeed!

I never said anything about perceived honor or morality; I'm merely stating that the USA is a country built on war and capitalism...and if you live here, that's what you gotta accept whether you like it or not. It's no secret, nor is it any surprise, that many financially successful people can be associated with shady or questionable ventures...but hey, for all its faults the USA is still the best country in the world.


BSD is superior (and truly FREE), there's no doubt. ;)
Who cares? Most BSD users I know and interact with don't use it exclusively. Why? Because it isn't the best thing to use to solve every problem.

I like the organization of BSD but it is definitely lacking in the performance department...and it's not really cutting-edge, it's more "tried and true". To each his own.

Did you really just say that? Really? Are you being serious, or just "edgy?" Or did I somehow warp into 4chan?

It's a well known fact.

I understand how to stop services in Windows. If the Windows print services were separate packages like they were in CUPS there would be something to remove, but like so many other parts of Windows it's just lumped in with the rest of the OS so that you can't remove it.

Considering that linux is just a kernel + a bunch of crap lumped onto the kernel, comparing it to windows is apples to oranges in that sense. It's not a bad thing that windows has a "functional core" rather than just a kernel.

You're the one that mentioned removing CUPS and having it take a bunch of other packages with it which doesn't have an analogy in Windows since you can't remove the print spooler there. If you had just disabled the CUPS service like you would the Windows print spooler you wouldn't have had those dependency issues.

Well if the package manager worked right, it wouldn't cause dependency issues. Obviously, removing cups should not remove a font required for X to operate. Shoddy programming and nothing else.

So you're telling me all of the code in the function protected_mode_jump in arch/x86/boot/pmjump.S is in assembly for fun?

No, it's written in ASM because that is the only way to get the kind of granular control the developer was looking for. You could do bootstrap code in C if you were inclined.

Boot loaders like GRUB and LILO are run in real mode. Otherwise you wouldn't be able to use them to boot memtest, DOS, etc because you can't switch back to real mode from protected mode.

Yeah, that was accurate when the 286 was cutting edge...you can switch between protected mode and real mode operation, and two points of note: all x86 CPUs start in real mode by default, and the system bios operates entirely in real mode.

I don't need to adjust them in Linux, stop putting words in my mouth. In general neither OS needs you to fudge with the process priorities, but doing so doesn't make the system unusable like it does in Windows.

If you buy a fast car, it runs well...but then if you start "tuning" it without knowing what you are actually doing, small changes can have adverse effects on its performance. It's the same with windows. It works fine if you leave it alone 99% of the time...so complaining about altering the process priorities when there is seldom a need to do so is kinda pointless.

And it's also shit to use. I'm not installing Open Solaris on my workstation for any reason and VM support is probably the lowest on the list of potential reasons. ZFS is probably the only thing that might make me consider it but forcing Solaris on myself isn't worth it just for ZFS.

I haven't really used Open Solaris enough to draw any conclusions about it, but from what I have seen, it is versatile enough and not a bad OS.

A) I use less because generally I end up searching back into the file because I need more than 1 screen's worth to see everything.
B) The fonts are a dependency because they're used for rendering.
C) I was speaking in general, not specifically about the problem you created yourself by trying to remove packages without understanding the consequences.

a) You said you just wanted to see the last line...tail will give that to you.

b) And that is why they shouldn't be removed when cups is removed. If there is other software that shares a dependency, it needs to be considered by the package manager and clearly this is not what happens.

c) Yes, now it is my fault that whoever did the depsolving code for linux doesn't know how to properly assess dependencies. Also, this is a strong case for moving linux into a "core OS + software" model vs a "kernel only + software" model. Consistency and structure are not a bad thing in this case.

Of course no one uses XGL now, but it was used for the first compositing demos. Since then the compositing stuff got merged into Xorg and various window managers.

I've seen nothing in X that can come close to GDI/Aero...yet.

I understand why 2D matters, my point was that the 2D speed of X is more than fast enough for normal usage right now so speeding it up won't result in any tangible changes.

Fast enough for you, maybe...not by my standards...and not when it cannot beat Windows 98's 2D performance.

Whether you've seen a BSOD or not is irrelevant, all that means is driver quality went up. I've seen video drivers crash in Vista and just cause the display to reset and a little bubble pops up in the notification area telling you that the driver encountered an error and has been restarted. That was years ago back when Vista first came out but I remember it vividly because of the frustration it caused me trying to play a game.

I'm sure that you will experience the same type of frustration when installing any new OS...remember when the 2.6 kernel came out? It wasn't exactly plug-n-play...it had its fair share of glitches and compatibility issues...and today, Vista is a solid OS second only to Win7.

I agree. But saying there's "plenty of good ones" doesn't contradict the fact that the majority of apps released for Windows are utter shit.

There's a lot of shitty __insert name of OS___ apps too.

Unless you're stalking me how would you know how many enterprise servers I'm responsible for?

Are you suggesting that you are running Ubuntu on enterprise servers? Because that was the only reason I said that.

And it's not just ext4, XFS does TRIM as well and Ubuntu is just fine for servers. If you use the alternate install or LTS discs you don't get a Gnome desktop by default and it's all the same software you would get with RHEL/CentOS. And you can get support from Canonical directly or one of their partners. So why wouldn't it be smart?

It's not smart for the same reason you wouldn't run beta code in a production environment. Not saying you can't use Ubuntu for a server OS, but I am saying that I would not. What do you have against CentOS??
 

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
Ok, even if there was an API for certain drivers which are secretive, the API would be limited. You wouldn't have access to the things the company doesn't want revealed. There are certain optimizations, coding techniques, proprietary functions that can allow a given driver to perform better on the same hardware. Either way, you would need to know what the company doesn't want the general developer community knowing in order to effectively utilize the API.

The point of having a decent API is to keep the secrets and let developers do what they do. Granted, for the bigger platforms (Linux, Windows, etc.) developing the driver in house is probably a lot better than making those communities do it (depending on the hardware in question), but having the ability to spread beyond the major players for very little work (releasing enough information for a driver to be written) shouldn't be a problem.

I don't think an NDA is unreasonable if they allow you access to the info you need to make drivers for a given platform...in general it is in a company's best interest to allow drivers to be created for as many platforms as possible, but the company also wants to maintain a level of QC. If the device doesn't work right who do you think is going to be inundated with support calls? Hint: not the developer.

I go to my software distributor for help, not the hardware vendor. If I have reason to believe the hardware is broken, of course the manufacturer or whatever is the place to go.

Of course I don't have access to hardware manufacturers' support systems, so I can't say for sure, but I'm guessing most users of *BSD go to their respective distribution for help with underperforming hardware.

NDAs only mean that certain developers can work on a driver. If that developer gets too busy, hit by a bus, moves to a different project, etc. that driver starts to lag behind the others. Distributing documentation (and possibly giving developers a channel to ask questions, get help, etc.) keeps this from being a problem. The documentation doesn't have to be public, but an NDA makes sure it stays with the person that signed the NDA, limiting the development community available to help.


I like the organization of BSD but it is definitely lacking in the performance department...and it's not really cutting-edge, it's more "tried and true". To each his own.
Evolution not revolution. If the performance isn't up to your expectations, you're doing it wrong. There is a definite chance you're using the wrong tool for the job. It's a common problem. ;)
No one system is right for every situation.

It's not smart for the same reason you wouldn't run beta code in a production environment. Not saying you can't use Ubuntu for a server OS, but I am saying that I would not. What do you have against CentOS??

He hates yum/rpm. He's a long time Debian user, so using Debian based systems is easier for him. To each their own, right? ;)

I've successfully run plenty of beta code in production enterprise environments. It usually takes a bit more care and attention, but if the code base is solid it usually isn't a problem (and often times fixes problems).
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
Considering that linux is just a kernel + a bunch of crap lumped onto the kernel, comparing it to windows is apples to oranges in that sense. It's not a bad thing that windows has a "functional core" rather than just a kernel.

Obviously when speaking about "Linux" I'm referring to a distribution, talking about just the kernel isn't interesting unless the conversation is specifically about kernel-only stuff like drivers, filesystems, etc.

And I do consider it a bad thing that the various pieces aren't properly separated. If they were things would be simpler and I'm hoping that the development of Windows CORE installs will push that even further than it is now.

I like the organization of BSD but it is definitely lacking in the performance department...and it's not really cutting-edge, it's more "tried and true". To each his own.

Yea, except I'd replace "tried and true" with slow moving. Everything's essentially done via committee so it's more of a PITA to get new things accepted.

Well if the package manager worked right, it wouldn't cause dependency issues. Obviously, removing cups should not remove a font required for X to operate. Shoddy programming and nothing else.

Without seeing the exact scenario it's hard to judge, but I'd say the package manager is working properly. You may not agree with the dependencies as they're setup but they're done that way for a reason. Developers don't add unnecessary dependencies to their packages for fun.

No, it's written in ASM because that is the only way to get the kind of granular control the developer was looking for. You could do bootstrap code in C if you were inclined.

Did you even look at the function I mentioned? Please, tell me how you'd switch to protected mode from real mode without any assembly code.

Yeah, that was accurate when the 286 was cutting edge...you can switch between protected mode and real mode operation, and two points of note: all x86 CPUs start in real mode by default, and the system bios operates entirely in real mode.

It looks like it's just a fluke that it works, but yea it does seem possible to go back and forth. But yes, I know and mentioned that PCs start up in real mode initially.

If you buy a fast car, it runs well...but then if you start "tuning" it without knowing what you are actually doing, small changes can have adverse effects on its performance. It's the same with windows. It works fine if you leave it alone 99% of the time...so complaining about altering the process priorities when there is seldom a need to do so is kinda pointless.

Except that I do know how process scheduling works and I know that it shouldn't work like it does in Windows.

I haven't really used Open Solaris enough to draw any conclusions about it, but from what I have seen, it is versatile enough and not a bad OS.

I guess I wouldn't call it bad, however I've become spoiled by the package management in Debian and will choose something with it over something without that kind of integration every time unless there's some hard requirement that Linux can't meet but Solaris can. And AFAIK the only two features that Solaris has that Linux has no analogy for are ZFS and DTrace and neither of those are worth putting up with the other stuff for.

a) You said you just wanted to see the last line...tail will give that to you.

b) And that is why they shouldn't be removed when cups is removed. If there is other software that shares a dependency, it needs to be considered by the package manager and clearly this is not what happens.

c) Yes, now it is my fault that whoever did the depsolving code for linux doesn't know how to properly assess dependencies. Also, this is a strong case for moving linux into a "core OS + software" model vs a "kernel only + software" model. Consistency and structure are not a bad thing in this case.

A) And sometimes tail shows you enough and sometimes not. I've just gotten used to less because it gives me the ability to scroll back if it's not enough.
B) I meant rendering print jobs, not UI elements.
C) Except that X and CUPS would both fall into the "non-core" category and be maintained separately from the kernel and each other so you'd be in the exact same situation.

I've seen nothing in X that can come close to GDI/Aero...yet.

I've seen nothing that Aero does that makes me care about it either. Maybe the benefits are more subtle and I just don't realize it, but as I've said, I've never really seen any performance or other issues with X as it is now.

Fast enough for you, maybe...not by my standards...and not when it cannot beat Windows 98's 2D performance.

What's so slow about it? I mean, do you have a real, specific example of X being consistently so slow that it causes problems or even just annoys you?


There's a lot of shitty __insert name of OS___ apps too.

Sure, but on the Linux side of things as long as you stick to apps that are available via your package manager you can be confident that they won't break anything else. In Windows it's 50/50 whether the installer will hose your system or not. You're relegated to testing on another machine/VM or just running setup.exe and crossing your fingers.

Are you suggesting that you are running Ubuntu on enterprise servers? Because that was the only reason I said that.

Yes. Generally RHEL gets used because it's specifically named in requirements for things like Oracle but otherwise I would definitely recommend Ubuntu or even Debian if possible.

It's not smart for the same reason you wouldn't run beta code in a production environment. Not saying you can't use Ubuntu for a server OS, but I am saying that I would not. What do you have against CentOS??

The only real difference between beta and production code is the fact that someone slapped a production label on the box. You can run all of the regression test suites you want and have all of the QA people in the world test your software but it'll never be bug free.

The only thing I have against CentOS is that it's a relabeled version of RHEL and I'm not a huge fan of RHEL. Mainly because yum sucks and I dislike their /etc/sysconfig stuff. I'd choose either over Windows most of the time, but given a choice I'll go with Ubuntu or Debian any day.
 

JD50

Lifer
Sep 4, 2005
11,888
2,788
136
Nothinman - just out of curiosity, why do you like apt-get so much more than yum? I've always preferred yum over apt-get, but that's probably because I'm more familiar with yum. My only real experience with apt-get was when I was trying to make a local apt-get repository, I found it much easier and more straightforward to do that with yum.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
Nothinman - just out of curiosity, why do you like apt-get so much more than yum? I've always preferred yum over apt-get, but that's probably because I'm more familiar with yum. My only real experience with apt-get was when I was trying to make a local apt-get repository, I found it much easier and more straightforward to do that with yum.

I mainly dislike yum because it's done in Python and is slow as hell. dpkg itself is C with some support scripts in Perl, and apt and aptitude are C++.

It definitely is (was?) easier to create your own repo for yum than for apt but I haven't had a real reason to do that lately so that's a secondary concern at best for me.
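
For comparison, the minimal version of each (the paths are made up; dpkg-scanpackages ships in the dpkg-dev package):

createrepo /srv/repo/rpms    # yum: index a directory of RPMs in one shot
dpkg-scanpackages /srv/repo/debs /dev/null | gzip -9 > /srv/repo/debs/Packages.gz    # apt: build the Packages index by hand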
 

JD50

Lifer
Sep 4, 2005
11,888
2,788
136
I mainly dislike yum because it's done in Python and is slow as hell. dpkg itself is C with some support scripts in Perl, and apt and aptitude are C++.

It definitely is (was?) easier to create your own repo for yum than for apt but I haven't had a real reason to do that lately so that's a secondary concern at best for me.

Oh ok. I thought maybe there were some features that apt-get had that yum was missing, and that was your reasoning. I usually work in environments that don't touch the internet, so creating offline repositories is a pretty big deal for me.
 

Crusty

Lifer
Sep 30, 2001
12,684
2
81
I don't know why you guys are saying it's difficult to set up a repo for apt-get.

The /etc/apt/mirror.list is very similar in syntax to sources.list and you only need to set up a cron job to run apt-mirror to keep it up to date.
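
Something like this (the archive URL and schedule are only examples; mirror.list takes the same deb lines as sources.list):

# /etc/apt/mirror.list
deb http://archive.ubuntu.com/ubuntu lucid main restricted universe

# crontab entry to refresh the mirror nightly
0 4 * * * /usr/bin/apt-mirror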
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
Oh ok. I thought maybe there were some features that apt-get had that yum was missing, and that was your reasoning. I usually work in environments that don't touch the internet, so creating offline repositories is a pretty big deal for me.

There may be, but if so I can't remember them right now since it's been a long time since I've had to touch an RPM-based distro.

I don't know why you guys are saying it's difficult to set up a repo for apt-get.

The /etc/apt/mirror.list is very similar in syntax to sources.list and you only need to set up a cron job to run apt-mirror to keep it up to date.

That's just for mirroring an official repository, creating one for your own packages was a bit of work IIRC.
 

sourceninja

Diamond Member
Mar 8, 2005
8,805
65
91
We run 20+ ubuntu servers (most in VM's) without any issue. We build them all with ubuntu JeOS images (or now with ubuntu 10.04 by pressing F4 and selecting virtual machine). This lets us install the most minimal OS possible then install the services we need.

So far (and by that I mean 5 years) we have had 0 downtime, needed 0 support, and we are very happy with the results. Ubuntu currently runs our web, smtp, dns, printing, scm, vpn, messaging, reporting, archival databases, moodle, and internal resources like support software, wiki, etc.

Prior to ubuntu we used debian. We switched because most of us were already using ubuntu on the desktop, paid support was available, and it shipped with some things we were either getting from 3rd party repos or compiling ourselves.

The only things not currently running on ubuntu are novell services, oracle, and sungard's banner application (basically just some oracle app services).
 

EricMartello

Senior member
Apr 17, 2003
910
0
0
And I do consider it a bad thing that the various pieces aren't properly separated. If they were things would be simpler and I'm hoping that the development of Windows CORE installs will push that even further than it is now.

Windows has always used a DLL model for linking applications that share common functions together. If I were building an OS from scratch, I would probably take a "layered" approach which is not a far stretch from Windows' current form...but with Linux, the basic model is "kernel + everything else" and nothing really ties it together. It's like a hub-spoke model which has benefits when it comes to stability but has drawbacks when it comes to performance and consistency. If there was a linux "core" system that could boot, and shared a uniform structure and set of basic features, I would be pleased. For example, a bootable kernel + kernel-integrated command interpreter as a minimum "kernel distribution".


Without seeing the exact scenario it's hard to judge, but I'd say the package manager is working properly. You may not agree with the dependencies as they're setup but they're done that way for a reason. Developers don't add unnecessary dependencies to their packages for fun.

Not for fun but lazy/sloppy coding...you should know. Debian used to have a reputation for being one of the "cleanest" distros around and maybe that's why you're a fan of APT. As it got bigger, it got a bit more convoluted...tho I wouldn't say it is bloated. I will admit that CentOS/RHEL is bloated.

Did you even look at the function I mentioned? Please, tell me how you'd switch to protected mode from real mode without any assembly code.

It looks like it's just a fluke that it works, but yea it does seem possible to go back and forth. But yes, I know and mentioned that PCs start up in real mode initially.

Assembly code and writing a program entirely in ASM are two different things. I am not an ASM developer, but if I needed to flip back to real mode from protected mode I'd need to tell the CPU to save its state to memory and reset it...like a soft reset.

Flipping between real/protected modes is largely a 286-486 era thing...back when most people were running DOS and system bios was a lot more simplistic. Nowadays, the bios can fully initialize a system including memory and IRQ allocations...there is little need to fudge that unless you are doing something non-standard.

And AFAIK the only two features that Solaris has that Linux has no analogy for are ZFS and DTrace and neither of those are worth putting up with the other stuff for.

Linux can't have ZFS due to Open Solaris' licensing...but they are working on BTRFS which is supposed to compete with ZFS. When it comes to filesystems I'm not too adventurous...I'll stick with ext for now.


What's so slow about it? I mean, do you have a real, specific example of X being consistently so slow that it causes problems or even just annoys you?

Yes, for example dragging a window will be "choppy" or even laggy. If a website has flash, it can slow the whole system up as the page is rendered. Smooth scrolling is "inefficiently implemented". As long as X is using a "client-server" model, even when locally deployed, it will fail. It's an outdated paradigm that unfortunately has never been replaced...so almost all nixes get stuck with a shitty windowing system because there really are not better alternatives.

Sure, but on the Linux side of things as long as you stick to apps that are available via your package manager you can be confident that they won't break anything else. In Windows it's 50/50 whether the installer will hose your system or not. You're relegated to testing on another machine/VM or just running setup.exe and crossing your fingers.

Ever since system restore I've never "hosed" a windows system...certainly not when installing software or an update. Those days are long gone, the same with having to run scandisk in the event of an unclean shutdown.

Yes. Generally RHEL gets used because it's specifically named in requirements for things like Oracle but otherwise I would definitely recommend Ubuntu or even Debian if possible.

Fair enough. I downloaded the latest ubuntu server edition and will give it a try. Unfortunately it doesn't work with Cpanel but I may have other uses for it.

The only real difference between beta and production code is the fact that someone slapped a production label on the box. You can run all of the regression test suites you want and have all of the QA people in the world test your software but it'll never be bug free.

Right...but beta code is fresh and may have new bugs, critical bugs and security holes that were not found before. It's a peace of mind thing...that's why we have a beta testing period...where it is tested in live, non-critical environments.

The only thing I have against CentOS is that it's a relabeled version of RHEL and I'm not a huge fan of RHEL. Mainly because yum sucks and I dislike their /etc/sysconfig stuff. I'd choose either over Windows most of the time, but given a choice I'll go with Ubuntu or Debian any day.

It's not like they hide that fact...yeah, it's RHEL relabeled but RHEL is a damn good OS and a rock-solid kernel. The important thing is that the distro you choose works with the software/apps you intend to use.
 

VinDSL

Diamond Member
Apr 11, 2006
4,869
1
81
www.lenon.com
We run 20+ ubuntu servers (most in VM's)[...]

Prior to ubuntu we used debian.[...]

The only things not currently running on ubuntu is novell services, oracle, and [some oracle apps].
Interesting!

Choice of (server) OS: FreeBSD, CentOS, Debian, Fedora Core, Suse, and Windows.

Been running Ubuntu on the desktop since 9.04 (converted Ubu hater). Never used it for server apps.

And, you run them in VMs, eh?!?!?!?

Hrm... :hmm:

I think I need to try this!
 

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
Yea, except I'd replace "tried and true" with slow moving. Everything's essentially done via committee so it's more of a PITA to get new things accepted.
That depends on the BSD. Each BSD has a different method of getting things accepted, and not all are committee based.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
Windows has always used a DLL model for linking applications that share common functions together. If I were building an OS from scratch, I would probably take a "layered" approach which is not a far stretch from Windows' current form...but with Linux, the basic model is "kernel + everything else" and nothing really ties it together. It's like a hub-spoke model which has benefits when it comes to stability but has drawbacks when it comes to performance and consistency. If there was a linux "core" system that could boot, and shared a uniform structure and set of basic features, I would be pleased. For example, a bootable kernel + kernel-integrated command interpreter as a minimum "kernel distribution".

And Linux uses ELF shared libraries, which is the same exact thing.

From a high level, technical standpoint both OSes work largely the same. The base kernel (ntoskrnl.exe and all of its drivers) and everything else. The product "Windows" also has the bundled userland of Explorer and the core apps MS decides to ship but there's no magic glue that Windows has which Linux lacks. If you want an accurate comparison you need to look at a finished product like RHEL, CentOS, Debian, etc not just the kernel.
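
You can see it on any binary (ldd is standard; /bin/ls is just a convenient example):

ldd /bin/ls    # prints the ELF shared libraries it links against -- same idea as a DLL dependency list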

Not for fun but lazy/sloppy coding...you should know. Debian used to have a reputation for being one of the "cleanest" distros around and maybe that's why you're a fan of APT. As it got bigger, it got a bit more convoluted...tho I wouldn't say it is bloated. I will admit that CentOS/RHEL is bloated.

Sure, most of what makes Debian better is the package quality. The tools themselves are only a small part of it. But apt is still much better to work with than yum simply because yum is done in Python and slow as hell.

If you're really convinced that your problem was the result of slop submit a bug report. Incorrect dependencies are something that we want to fix. Otherwise stop bitching because this isn't Windows, you do have the ability to tell Debian about their bugs and speak directly with the developers.

Assembly code and writing a program entirely in ASM are two different things. I am not an ASM developer, but if I needed to flip back to real mode from protected mode I'd need to tell the CPU to save its state to memory and reset it...like a soft reset.

Which requires ASM even though you keep saying otherwise.

Flipping between real/protected modes is largely a 286-486 era thing...back when most people were running DOS and system bios was a lot more simplistic. Nowadays, the bios can fully initialize a system including memory and IRQ allocations...there is little need to fudge that unless you are doing something non-standard.

Flipping back and forth was just a hack so that apps didn't have to deal with real-mode segmentation and such. The only things I can think of that would've done that were DOS4GW and NetWare.

Linux can't have ZFS due to Open Solaris' licensing...but they are working on BTRFS which is supposed to compete with ZFS. When it comes to filesystems I'm not too adventurous...I'll stick with ext for now.

I know why they're not being ported, I just don't care enough about them to force Solaris upon myself. MD+LVM+XFS is more than good enough.

Yes, for example dragging a window will be "choppy" or even laggy. If a website has flash, it can slow the whole system up as the page is rendered. Smooth scrolling is "inefficiently implemented". As long as X is using a "client-server" model, even when locally deployed, it will fail. It's an outdated paradigm that unfortunately has never been replaced...so almost all nixes get stuck with a shitty windowing system because there really are not better alternatives.

Flash is a problem in itself that has nothing to do with X.

I don't think I've ever seen window dragging, resizing, etc being laggy or choppy. If you have then I'd guess that it's either the hardware that sucks or the window manager you're using.

Ever since system restore I've never "hosed" a windows system...certainly not when installing software or an update. Those days are long gone, the same with having to run scandisk in the event of an unclean shutdown.

Then you're probably not keeping up on your patches because just a few months ago MS released an update that caused millions of machines to BSOD on bootup after applying the patch. The only way to fix it was to boot up into the rescue console and uninstall the patch.

And depending on the circumstance of the shutdown chkdsk will still run. It's more rare than in the past, but it still happens.

Right...but beta code is fresh and may have new bugs, critical bugs and security holes that were not found before. It's a peace of mind thing...that's why we have a beta testing period...where it is tested in live, non-critical environments.

If by "we" you mean your company, then that just proves my point even more. You shouldn't need to beta test software yourself since that's the job of the developer. The fact that people even have to consider testing updates for weeks or months after the developer released them just exemplifies the fact that "production" code is just as buggy as beta code.

It's not like they hide that fact...yeah, it's RHEL relabeled but RHEL is a damn good OS and a rock-solid kernel. The important thing is that the distro you choose works with the software/apps you intend to use.

I know they don't hide it and I applaud their efforts. But if I don't like the parent distro I'm not going to like the child either. The only way I touch either of them is if the software being run on the server lists them as hard requirements.

VinDSL said:
Choice of (server) OS: FreeBSD, CentOS, Debian, Fedora Core, Suse, and Windows.

Been running Ubuntu on the desktop since 9.04 (converted Ubu hater). Never used it for server apps.

And, you run them in VMs, eh?!?!?!?

Hrm...

I think I need to try this!

As usual that's backwards. I wouldn't touch FreeBSD for work use except for very odd circumstances, Fedora wouldn't even be on the list since CentOS is almost always a better choice, same with SuSE. And Debian and Ubuntu would probably be interchangeable, although for a server that someone else would be working with I'd probably give the nod to Ubuntu because of the name and more regular release cycles.

And Windows goes wherever the software requires it. Until Samba 4 is a lot more mature or someone makes a suite of awesome wrappers around OpenLDAP, Kerberos, BIND, etc to compete with AD, Windows will almost always be a necessity. And a few things, like MS SQL, are better than the free alternatives. Although the licensing usually means that if you're considering PostgreSQL or MySQL you don't want to pay for MS SQL so they're not usually considered together.

And anyone not looking at virtualization is doing themselves and anyone they're working with a huge disservice. With the availability of Xen, KVM, VirtualBox, VMware Server, VMware ESXi, etc there's virtually no excuse to waste physical hardware on a single server.

n0cmonkey said:
That depends on the BSD. Each BSD has a different method of getting things accepted, and not all are committee based.

Since the big 3 all have NFP orgs to protect their trademarks and such I figured they all only had a handful of people with CVS commit access. While maybe not officially a committee in all 3 it seems to be essentially the same thing.
 

Cogman

Lifer
Sep 19, 2000
10,284
138
106
I mainly dislike yum because it's done in Python and is slow as hell. dpkg itself is C with some support scripts in Perl, and apt and aptitude are C++.

It definitely is (was?) easier to create your own repo for yum than for apt but I haven't had a real reason to do that lately so that's a secondary concern at best for me.
Glad to see I'm not the only one that hates applications purely based on what language they are programmed in :D

I'm just not a big fan of python. Sure, it's easier to code in. But from everything I've seen, it churns out some terrible programs. They are slow, memory hogging beasts. (Take a look at Frets on Fire for a great example of why anything more intense than a button should not be done in python)
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
Glad to see I'm not the only one that hates applications purely based on what language they are programmed in :D

I'm just not a big fan of python. Sure, it's easier to code in. But from everything I've seen, it churns out some terrible programs. They are slow, memory hogging beasts. (Take a look at Frets on Fire for a great example of why anything more intense than a button should not be done in python)

That's been my impression as well, although I haven't tried Frets on Fire in particular. I'm also not a fan of the "whitespace controls program flow" stuff, but that's a secondary concern.
 

JD50

Lifer
Sep 4, 2005
11,888
2,788
136
I'm finally putting a decent amount of effort into learning Python. I learned a little bit of Perl when I worked on a project about a year ago, other than that I've done mostly shell scripting. I was trying to decide between Python and Perl, I chose Python because I mainly work with RHEL products and they're using Python for almost everything nowadays.

The white space business did take a little time to get used to, but it doesn't bother me any more. I also don't like the fact that variables don't have a special character preceding them.

I read countless python vs. perl articles/threads/posts because I was having a hard time deciding. I don't think I ever really saw a definitive reason to use one language over the other (for mostly sys admin stuff) so I went with the one that my distro of choice favors.