
I have tons of Linux questions

Page 2 - Seeking answers? Join the AnandTech community: where nearly half-a-million members share solutions and discuss the latest tech.
These changes seem to be desktop oriented. Does the media keep mentioning the desktop advances Linux makes while ignoring the server-oriented advances? Or are developers the ones ignoring possible server advances?
 
Right now Linux is aiming at the desktop in a big way. A BIG way.


I'll do my best to summarize Linus; I don't have time to find the links right now.

He said that Linux does servers, and Linux does servers well. But it's easy compared to desktops. With servers you're dealing with dedicated and well-known hardware controlled by computer-centric people. In this role Linux is very successful. But desktops are very hard in comparison: you're dealing with mostly random bits of hardware slapped together for people who have absolutely no understanding of computers, and unless it works right away everybody gets lost. By aiming at the corporate desktop you can gain a foothold. If everybody is using Linux as a server (the vast majority of companies have some sort of Linux server somewhere, no matter how insignificant), then that makes it very easy for some to integrate Linux desktops into the workplace. By gaining corporate desktop support, people become used to seeing and using Linux. People also run software at home that they use for work. That makes widespread adoption of Linux in people's homes more likely.


For example, look in the 2.6.4 kernel menuconfig at all the SCSI controllers Linux can run vs. the number of video cards. By gaining market share and corporate backing, people open up hardware for Linux, but no desktop support means no support for desktop hardware. And thus we end up with crap like ATI's and Nvidia's driver support.

So getting Linux on the desktop means that we get better-running Linux computers. And since people open up the hardware (NDAs SUCK), it makes it much easier for other Free software projects like the BSDs to flourish.



But people aren't going to abandon servers anytime soon. Server adoption is critical for desktop adoption.

Linux has 20-30% server market share. This is very good, and much money is getting poured into development of Linux for the server market even in these economically depressed times.

I am sure you remember the old benchmarks of 2.6 vs 2.4 vs FreeBSD vs OpenBSD? They show a dramatic network performance increase between the 2.4 and 2.6 kernels. Stuff like Security-Enhanced Linux can make a server almost invulnerable to traditional modes of attack. Improvements in scheduling and file locking mechanisms have made Linux scalable up to 32 (maybe 64?) processors from a previous 8 (2.4 can handle more than 8, but after that you run into race conditions that rob any performance increase).

Nobody is going to want to give any of this up.

Stuff like sysfs/udev has direct applications for server markets. For instance, you want an array of 400-500 disk drives? You're going to have a hard time with the traditional /dev major/minor kernel device model. Udev can create as many /dev/ file links as you want. (As I understand it.)

Here, found this; it talks about the advantages of udev and has a couple of URLs. (Like I said, I don't understand the major/minor number stuff and its relationship to sysfs and udev exactly.)
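For what it's worth, udev works from little plain-text rules files. Here's a sketch of what one might look like; the file name is made up and the exact match/assignment syntax varies between udev versions (early releases used a different form), so treat this as illustrative only:

```
# /etc/udev/rules.d/10-local.rules  (hypothetical example)
# Give every SCSI disk an extra stable name under /dev/disks/,
# however many show up; no pre-assigned minor numbers needed.
KERNEL=="sd*", SYMLINK+="disks/%k"
```

The point is that the names are generated at runtime from what sysfs reports, instead of being carved in stone ahead of time.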

So I think that nobody is going to abandon Linux server support any time soon.

BTW, IBM has recently begun to offer 32-way Power computers that come with Red Hat offered as the default installed OS. Or a first-time OS, or something like that. IBM propaganda

And then you end up with freakish things like LinuxBIOS (http://LinuxBIOS), which uses C programs and a Linux kernel to replace a motherboard's BIOS. Using this they were once able to get a 13-machine cluster completely operational from cold boot in 13 seconds. (Just found that out from Slashdot. Funny stuff.)

 
Most of that makes sense.

Originally posted by: drag

For example, look in the 2.6.4 kernel menuconfig at all the SCSI controllers Linux can run vs. the number of video cards. By gaining market share and corporate backing, people open up hardware for Linux, but no desktop support means no support for desktop hardware. And thus we end up with crap like ATI's and Nvidia's driver support.

So getting Linux on the desktop means that we get better-running Linux computers. And since people open up the hardware (NDAs SUCK), it makes it much easier for other Free software projects like the BSDs to flourish.

Do you think companies will release docs and open drivers or do you think they'll pull the same crap nVidia and ATI are?
 
This all sounds like it is going to require reworking drivers and the way people understand the system now. I hope they have good plans for backwards compatibility.

In a way, yes. The drivers all need to be updated to use sysfs instead of procfs, and some utilities will need to be redone to use sysfs files instead of procfs files. For now procfs is still intact for compatibility, but I wouldn't be surprised to see it be one of the first things changed in 2.7.
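To make the procfs/sysfs difference concrete: both are just virtual file trees you can poke at with ordinary tools. Something like this on a typical Linux box (paths assume both filesystems are mounted in the usual places):

```shell
# procfs: one big grab-bag of kernel info, mostly flat text files
cat /proc/version

# sysfs: one directory per subsystem/device, one value per file
ls /sys
```

Same idea either way: kernel state exposed as files, which is why the utilities only need their paths redone rather than a whole new interface.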

That's kind of silly. In my dmesg I get information about all of the hardware in the system, whether there is a driver for it or not. Makes life a lot easier.

But it's not necessary since I have other tools to get the same information and things like lspci can give me more information.

What do you mean it can get overwritten?

dmesg just prints the kernel log buffer, which is a FIFO; enough kernel messages and you lose all the bootup messages. That's all I mean.
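A toy illustration of that ring-buffer behavior in shell, with a hypothetical 3-line buffer standing in for the kernel's much larger one:

```shell
# keep only the last 3 lines, like a fixed-size kernel log buffer
log() { buf="$(printf '%s\n%s' "$buf" "$1" | tail -n 3)"; }

log "boot: found cpu"; log "boot: found disk"
log "usb: device plugged"; log "net: link up"

printf '%s\n' "$buf"   # the earliest boot message has been pushed out
```

Once the buffer wraps, the only way to get the early boot messages back is from wherever syslog saved them at boot time.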

It just seems like replacing /dev is going to make a lot of commands different in the future.

They're not replacing /dev but they are replacing devfs. /dev will just be populated by udev which is a userland daemon instead of devfs which was in the kernel.
 
Originally posted by: Nothinman
This all sounds like it is going to require reworking drivers and the way people understand the system now. I hope they have good plans for backwards compatibility.

In a way, yes. The drivers all need to be updated to use sysfs instead of procfs, and some utilities will need to be redone to use sysfs files instead of procfs files. For now procfs is still intact for compatibility, but I wouldn't be surprised to see it be one of the first things changed in 2.7.

That's kind of silly. In my dmesg I get information about all of the hardware in the system, whether there is a driver for it or not. Makes life a lot easier.

But it's not necessary since I have other tools to get the same information and things like lspci can give me more information.

We both get what we want through different methods. Many different ways to do the same thing, kinda part of the Unix philosophy. 😉

What do you mean it can get overwritten?

dmesg just prints the kernel log buffer, which is a FIFO; enough kernel messages and you lose all the bootup messages. That's all I mean.

I think that's a miscommunication. I think he was talking about the file, and you are talking about the buffer.

It just seems like replacing /dev is going to make a lot of commands different in the future.

They're not replacing /dev but they are replacing devfs. /dev will just be populated by udev which is a userland daemon instead of devfs which was in the kernel.

I get it now 🙂
 
Do you think companies will release docs and open drivers or do you think they'll pull the same crap nVidia and ATI are?

Hard to say.

I think that Nvidia and ATI are actually working REALLY hard on maintaining their drivers for Linux, but it's not easy. Just a hunch, a guess. Most of the Linux development team are hostile towards the thought of using closed source drivers in their kernel. They aren't going out of their way to stop it, but they aren't helping out in any way either. So that's about the same thing.

I still remember the coup that Nvidia pulled when they first released their unified driver scheme (back when I was using Windows 98). ATI was left in the dust due to bad drivers, and it really got people liking Nvidia. On my card I got a 10-20% boost in performance just by updating my drivers alone. With a history like that, I am not surprised that Nvidia is unwilling to disclose driver secrets. I also believe that they got trapped into using closed source drivers themselves and lost a great deal of control over their own software.

Nobody likes that.

If you want to sell hardware that is useful for Linux, you have to contribute to GPL software. There is just no way around it. Most hardware manufacturers do it once they realise that drivers are easy and cheap to write (comparatively) if they are open vs. closed. They just want to make a profit, and they make profits by selling hardware, not developing drivers.

After all, if Nvidia released their drivers, or helped build open source drivers, and ATI took their code and made drivers for themselves, does Nvidia lose anything? ATI's drivers would be GPL'd too. People will still buy their cards simply because they desire fast video cards for complex games. (And if ATI doesn't GPL their drivers and just leeches technology from Nvidia, then nobody will buy their cards anyways because the drivers will still suck @ss anyways.)

Hell, they might even do something smart like developing a sort of ISA for video cards so that everybody has a standardized way to interact with them. After all, they are pretty much their own little super floating-point number-crunching computers. They have their own BIOS, memory, CPU and hardware buses. The drivers are going to end up being miniature OSes themselves.

For example, if you look at the .plan from "finger johnc@idsoftware.com | less" you see how ATI and Nvidia agreed with id that Doom shouldn't support vendor-specific vertex renderers and should instead use a third, platform-neutral one, just because it's better to find good standards. The spirit is willing, but the mind has yet to put 2 and 2 together.

Check out the "chipset secrets" section of this article for how some companies benefited from opening up their hardware before everybody else did and ended up becoming successful in the cluster arena.
 
They're not replacing /dev but they are replacing devfs. /dev will just be populated by udev which is a userland daemon instead of devfs which was in the kernel.

I get it now

Doh. Ya, I remember the stuff about how udev gets around the major/minor limitations. Udev is able to handle the numbers dynamically; with devfs and before, all the numbers had to be agreed to and set in stone beforehand, so you could run out of possible number pairs if you weren't careful. (I think that's right.)
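On the major/minor thing: every traditional device node really is just a pair of numbers you can see with ordinary tools. GNU stat shown here; /dev/null has had major 1, minor 3 on Linux for as long as I can remember:

```shell
# the "1, 3" shown where the file size would normally be
# is the major, minor pair of the device node
ls -l /dev/null

# the same two numbers, printed in hex by GNU stat
stat -c 'major=%t minor=%T' /dev/null
```

Under the old static scheme every possible device had to have a pair like that reserved in advance, which is exactly the table udev lets you stop worrying about.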
 
Drag, that LinuxBIOS link is awesome. If only I had a supported board to play with. 🙂

On the NV/ATI closed source drivers, I'll give you an example of the difference between the two. My previous card (before this GF3Ti200) was a Radeon 64MB VIVO DDR. Using the DRI drivers, it worked well in 3D, and I could use ALL of the features of the card. Vid-in and vid-out both worked flawlessly. Now, with the closed-source NV drivers, there is no way I can set up vid-out on this GF3. It just isn't supported by the drivers (for this card). The Linux Dets seem very lacking in their support of the additional features of a given card. That said, I get about the same FPS in Linux as I do in Windows, so there's no performance complaint with the card at all. In case anyone is wondering, I'm using an MSI GF3Ti200 VTD-128.

EDIT:
In reference to #2:
By the way, since devices are treated as files, all you need is an image of a filesystem to have multiple filesystems within a partition, but one will sit inside another. For instance, I have a bunch of ISOs stuck in /usr/share/iso, and you can mount the images wherever you like using mount -o loop /path/to/image.iso /path/to/mount. I currently have an ISO of the Gutenberg project mounted in an SMB share path so everyone in the house has access to the CD without needing the CD in their system. 🙂
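If anyone wants to try that from scratch, you can even build such an image yourself. Something like this should work on most Linux systems (mke2fs comes from e2fsprogs; the final mount still needs root, so it's only shown as a comment):

```shell
# make a 1 MB empty file and format it as an ext2 filesystem;
# no root needed for these two steps
dd if=/dev/zero of=/tmp/demo.img bs=1024 count=1024 2>/dev/null
mke2fs -F -q /tmp/demo.img    # -F: it's a plain file, not a real device

# then, as root:  mount -o loop /tmp/demo.img /mnt/demo
```

After mounting, /tmp/demo.img behaves exactly like a tiny partition, which is the same trick the ISO mounting above relies on.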

And in reference to #1:
CRLF is \r\n; \n is newline/line feed, and \r is carriage return. The difference between \r and \n is how they are viewed on some systems: a carriage return does not always indicate a forced new line, though most viewers nowadays will treat it as such.
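You can see the bytes directly with printf and od, and strip the carriage returns with tr; all standard tools, nothing exotic:

```shell
# DOS line ending is the two-byte pair \r \n; Unix is just \n
printf 'hello\r\n' | od -c | head -n 1

# convert DOS text to Unix text by deleting the \r bytes
printf 'hello\r\nworld\r\n' | tr -d '\r'
```

That tr one-liner is basically what the various dos2unix utilities do.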
 
EDIT:
In reference to #2:
By the way, since devices are treated as files, all you need is an image of a filesystem to have multiple filesystems within a partition, but one will sit inside another. For instance, I have a bunch of ISOs stuck in /usr/share/iso, and you can mount the images wherever you like using mount -o loop /path/to/image.iso /path/to/mount. I currently have an ISO of the Gutenberg project mounted in an SMB share path so everyone in the house has access to the CD without needing the CD in their system.

But even so, the filesystem of the partition is still FAT, ext3 or whatever, and the fact that you can put files containing other filesystems on it doesn't change the parent filesystem. Sure, it's semantics, but the difference needs to be clear so that no one gets confused about what's possible and what's not.
 