How would you refocus Linux development?

Chaotic42

Lifer
Jun 15, 2001
34,545
1,707
126
Other than The Onion, Slashdot is my favorite humor site, but sometimes they run relatively serious articles about software. Someone asked a question that I thought would make for interesting discussion. Here's the question:

-
"The majority of Slashdot readers are no doubt appreciative of Linux in the general sense, but I suspect we all have some application or aspect of the platform that we wish were more stable, performant, feature-rich, etc. So my question is: if you were able to devote a 'significant' number of resources (read: high-quality developers) to a particular app or area of the kernel, and were able to set the focus for those resources (stability, performance, new features, etc.), what application or kernel area would you attempt to improve, and what would aspect you focus on improving?"
-

I thought it was a good question. What would you improve?
 

drag

Elite Member
Jul 4, 2002
8,708
0
0
Quality assurance, user interface testing, and streamlining userspace APIs (for example, standardizing the audio API; there are so many different ways of doing sound in Linux that it's a confusing mess).
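Just to show what even the 'standard' route looks like, here's a minimal sketch of playing one second of silence through ALSA's C API. This is a hypothetical example, assuming the usual "default" device and a recent alsa-lib (for snd_pcm_set_params); OSS, esd, aRts, etc. all look completely different, which is the problem.

/* Minimal ALSA playback sketch: one second of silence on "default".
 * Build with: gcc alsa_sketch.c -lasound */
#include <alsa/asoundlib.h>

int main(void)
{
    snd_pcm_t *pcm;
    static short buf[44100 * 2];   /* one second of interleaved stereo, all zeroes */

    if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0) {
        fprintf(stderr, "cannot open PCM device\n");
        return 1;
    }
    /* 16-bit signed LE, 2 channels, 44100 Hz, allow resampling, 500 ms latency */
    snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                       SND_PCM_ACCESS_RW_INTERLEAVED, 2, 44100, 1, 500000);

    snd_pcm_writei(pcm, buf, 44100);   /* 44100 frames of silence */
    snd_pcm_drain(pcm);
    snd_pcm_close(pcm);
    return 0;
}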
 

Brazen

Diamond Member
Jul 14, 2000
4,259
0
0
I would like to see better management capabilities in an enterprise environment. Basically, what I'm saying is it needs something akin to Active Directory and Group Policies. I think Samba might be incorporating some AD and GP abilities in Samba4, but I would like to see the ability to control Linux workstations.

The other thing is the "average user" experience. This has actually come along nicely in the last year or two, but there are still some little things, such as finding out my IP address (on my Xubuntu Feisty box I have to go to ifconfig on the CLI) or mounting remote file shares on user login (I can't get pam_mount to work, and again, this should be easy to do from a GUI).
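The frustrating part is that the IP information is trivially available to any program. Here's a hedged sketch of what a GUI applet would do under the hood, using the standard getifaddrs(3) call (illustrative only, not code from any actual applet):

/* List each interface's IPv4 address, ifconfig-style. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <ifaddrs.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    struct ifaddrs *ifap, *ifa;

    if (getifaddrs(&ifap) == -1) {
        perror("getifaddrs");
        return 1;
    }
    for (ifa = ifap; ifa != NULL; ifa = ifa->ifa_next) {
        /* Only report interfaces that actually have an IPv4 address */
        if (ifa->ifa_addr && ifa->ifa_addr->sa_family == AF_INET) {
            struct sockaddr_in *sin = (struct sockaddr_in *)ifa->ifa_addr;
            printf("%-8s %s\n", ifa->ifa_name, inet_ntoa(sin->sin_addr));
        }
    }
    freeifaddrs(ifap);
    return 0;
}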
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
It would be nice to clean up the current power management mess and get the whole swsusp vs uswsusp vs Suspend2/TuxOnIce thing sorted out, but so far I don't have any problem patching my kernel for that.

But frankly I'm happy with Debian sid overall. Sure, there are rough edges, but they're usually easier to work around than whatever I run into whenever I try to run Windows.
 

darkfoon

Member
Jun 14, 2006
49
0
66
I'd like to see considerable effort put into writing quality open-source alternatives to closed-source blob drivers.
I really am getting tired of having to load low quality binary drivers (I'm looking at you, ATI/AMD) just to get my hardware to work the way I need it to.

Failing that, I'd like to see programmers work through disputes together, although that problem can't exactly be solved by throwing a bunch of skilled coders at it. Choices are good, yes, but when somebody doesn't like one little thing and somebody else is completely unwilling to compromise, the first person goes and branches the code, and now we have two projects that are the same except for one esoteric difference. Yes, the differences grow over time and the two projects (sometimes) become very different, but to Linux newbies it's confusing, and it hurts adoption.
 

drag

Elite Member
Jul 4, 2002
8,708
0
0
I'd like to see considerable effort put into writing quality open-source alternatives to closed-source blob drivers.
I really am getting tired of having to load low quality binary drivers (I'm looking at you, ATI/AMD) just to get my hardware to work the way I need it to.

Well that's something for the end user to decide.

If people didn't put up with 'blob' drivers then nobody would be making them.

Right now you can take care of the situation by buying better hardware. Currently the only category of hardware that requires 'blob' drivers for Linux is high-performance 3D acceleration. (For regular video and average 3D performance Intel is what you'd want.) Everything else has nice open source drivers available for it one way or another.

I understand this is easier said than done, but if you're building or buying a machine where you want to run Linux, it's not that difficult.


The thing is, by the time it takes a project to produce working drivers by reverse engineering the hardware, that hardware is pretty much already obsolete.

For example, it took years for the open source Linux drivers for R200 hardware to surpass the quality and speed of ATI's own proprietary drivers. By the time that happened, ATI had dropped all support for it anyway. For the R300/R400 drivers, by the time they were stable and useful for the desktop it was almost impossible to find new versions of the cards in retail outlets.

For Broadcom 802.11B/G wireless cards we now have stable drivers, but only as of recently. Now you're going to start seeing 802.11B/G/N cards out there from Broadcom, which means you'll have a whole other class of hardware that people are going to use ndiswrapper for. For 'n' hardware it won't take nearly as long, but it's still going to take a while after those cards reach the market.

For modern hardware, Linux devs have figured out that the only way to keep up with the consumer market is to have direct cooperation from the hardware manufacturers themselves. For stuff to appear on a timely basis they need to be able to talk to engineers and find fixes for hardware bugs and other such things.

The engineering and hardware design folks are probably more than willing to help out, but the business side of things needs economic justification for this sort of stuff. It's up to Linux end-users to provide that justification. If bad companies like ATI and Nvidia continue to sell lots of hardware, then what economic justification is there for 'good' companies to take on the risk and expense of being more 'open'?


If they don't, then it's probably just going to get worse.

Modern video cards that provide 3D acceleration do most of it through software. GPUs as we see them today are just specialized processors tuned to support the sorts of calculations required for high-speed graphics. They are not 'DirectX expressed in hardware' or anything like that; DirectX/OpenGL is practically all software that runs on both your CPU and GPU.

Look at the latest fad in accelerated graphics: programmable shaders for doing 2D video and 3D special effects. You have GLSL compilers and that sort of thing that take code written in special languages and make it run on your GPU. It can be used for practically anything.
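To give a rough idea of what that 'compiler' step looks like, this sketch is approximately how an OpenGL 2.0 program hands a trivial, made-up GLSL fragment shader to the driver, which compiles it for whatever GPU is present. It assumes Mesa-style headers and an existing GL context:

/* Hand a GLSL shader to the driver's compiler. Link with -lGL. */
#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>

static const char *frag_src =
    "void main() {\n"
    "    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);\n"   /* plain red */
    "}\n";

GLuint compile_fragment_shader(void)
{
    GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
    GLint ok = 0;

    glShaderSource(shader, 1, &frag_src, NULL);
    glCompileShader(shader);   /* the driver's GLSL compiler runs here */
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    return ok ? shader : 0;    /* 0 signals a compile failure */
}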

Both ATI and Nvidia are producing specialized hardware for the HPC (high performance computing) market. Their GPUs are very fast at certain types of calculations that are useful for some kinds of scientific computing. So they are selling boxes that are pretty much just stripped-down video cards to this market. But even then they don't allow direct access to the hardware; they don't let the scientists know how to access it.

What they do is provide special libraries and software abstractions to make it easier to program on their GPUs: 'CUDA' for Nvidia and 'CTM' for ATI/AMD. This provides a standard interface that abstracts the changes between different generations of cards (so that older software will still be compatible with newer hardware), simplifies programming, and also hides their 'IP' from prying eyes.

The long-term trend of all this is that you're going to start seeing GPUs integrated into your central CPU as just another core.

For desktop purposes you're probably not going to see any benefit from massive numbers of similar cores on your CPU. Sure, going from 1 core to 2 cores is very good, and going from 2 to 4 is probably useful also, but I really doubt that going from 4 to 8 or 16 cores is going to provide any benefit for the desktop, even for the most hardcore PC user.

Intel currently has 80-core prototype CPUs...

So what is going to happen is that you're going to see specialized cores in your computer. That is, you'll have 2-4 generic cores designed much as they are today, and then you'll see a few more cores specially designed to accelerate specific workloads. The most obvious move would be to include a number of GPUs as just another bunch of cores.

Unless ATI opens up, what may happen is that you're going to require special drivers just to be able to access the full capabilities of very basic hardware.
 

xtknight

Elite Member
Oct 15, 2004
12,974
0
71
Desktop-ization: X.org development, desktop interactivity, and user interfaces. Bring what has been developed to the user in a convenient way, and develop what hasn't been developed. Make the most out of the groundwork that is already there.

- It would be nice (no pun intended) if the foreground window got more priority in the scheduler. Some optional communication bus from X.org<>kernel scheduler?
- X.org needs to be able to detect every monitor on the planet flawlessly. Especially widescreens. Maybe this is already done, but I hadn't seen much progress until Gutsy.
- Multi-monitor support should be perfected, including multiple backgrounds on GNOME/etc. Displayconfig-gtk should be polished.
- Compiz needs to be better tested on various hardware. It needs to work on multiple screens without a problem. Silly problems like black screens and no window decorations need to vanish. Solidify it...
- GUI for audio config (surround sound/2.0/etc)

There's a lot of stuff in Linux just scattered all over the place with no nice, or visible, GUI: some power management, CPU frequency adjustment, temperature monitoring, multi-monitor adjustment, and sound card speaker arrangement configuration (and testing sound on each speaker, etc). I don't know if this is true for all distros, but I know Ubuntu is missing a lot of this. Some of it is partially implemented, but it doesn't support all hardware, or it's tough to install, barebones, inaccurate, inconsistent, undocumented, lacking features, etc...
 

drag

Elite Member
Jul 4, 2002
8,708
0
0
Not saying these things are fixed by any stretch of the imagination, but to let you know that there is progress...



- It would be nice (no pun intended) if the foreground window got more priority in the scheduler. Some optional communication bus from X.org<>kernel scheduler?

Kernel 2.6 used special heuristics to try to do that. Its goal was to detect interactive processes and give them higher priority so that the interface would be more responsive.

It was a definite improvement over 2.4, but it's still not ideal.

So what they ended up doing lately is scrapping the whole complicated affair and going with a much simpler 'fair' scheduler that is designed to give equal access to everything. This should improve performance somewhat and make giving certain apps higher priority actually make sense.
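(For reference, the userspace half of 'giving certain apps higher priority' is just the ordinary nice mechanism. A hypothetical 'focused window gets more CPU' policy would boil down to something like this sketch, where the PID is assumed to come from the window manager:)

/* Sketch: boost a process's priority by lowering its nice value.
 * Lowering nice below 0 requires root (or CAP_SYS_NICE). */
#include <stdio.h>
#include <sys/types.h>
#include <sys/time.h>
#include <sys/resource.h>

int boost_process(pid_t pid)
{
    /* -5 is an arbitrary example; 0 is the default, -20 the maximum boost */
    if (setpriority(PRIO_PROCESS, pid, -5) == -1) {
        perror("setpriority");
        return -1;
    }
    return 0;
}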


- X.org needs to be able to detect every monitor on the planet flawlessly. Especially widescreens. Maybe this is already done, but I hadn't seen much progress until Gutsy.

It's not 'Gutsy' that is doing much of anything; it's X.org. (Of course, Ubuntu doesn't really mention that so much.)

With X.org 7.3 they are trying to make everything in X hotpluggable. With modern Linux, hotplugging and detecting devices on the fly 'just works' most of the time; it's very good at autodetecting everything.

However, even though 'Linux' can do this, 'X' can't actually use any of it. X can't be reconfigured on the fly, and X can't autodetect inputs and outputs dynamically. So you've always depended on outside applications, or the users themselves, to detect this stuff, then reconfigure the xorg.conf file, and then manually restart X for the changes to take effect.

With X.org 7.3 they are trying to get X caught up with Linux. It's supposed to support sysfs, dbus, HAL, and other such things that are designed to provide an inter-process bus for applications to communicate over and for system notification.

So by making X work more closely with Linux and GNOME/KDE applications, Linux will autodetect everything and send notifications to X and the user's desktop. That way things like mice and additional monitors can be detected and configured on the fly.

For example, I saw a demo where a fellow plugged his Intel-based laptop into a projector. The projector was autodetected, a notification appeared on the desktop, and the desktop automatically expanded to fill both display devices.

Then when he unplugged the projector, the desktop automatically shrank back down, and windows and such were moved so that they didn't 'disappear' off the edge of the screen.
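The plumbing underneath that demo is the new RandR work. As a rough sketch (assuming RandR 1.2 and libXrandr), this is how a desktop tool might enumerate outputs and their connection state:

/* Enumerate RandR outputs, roughly what a hotplug notifier would do.
 * Build with: gcc outputs.c -lX11 -lXrandr */
#include <stdio.h>
#include <X11/Xlib.h>
#include <X11/extensions/Xrandr.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    XRRScreenResources *res;
    int i;

    if (!dpy) {
        fprintf(stderr, "cannot open display\n");
        return 1;
    }
    res = XRRGetScreenResources(dpy, DefaultRootWindow(dpy));
    for (i = 0; i < res->noutput; i++) {
        XRROutputInfo *out = XRRGetOutputInfo(dpy, res, res->outputs[i]);
        printf("%s: %s\n", out->name,
               out->connection == RR_Connected ? "connected" : "disconnected");
        XRRFreeOutputInfo(out);
    }
    XRRFreeScreenResources(res);
    XCloseDisplay(dpy);
    return 0;
}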


Of course this is all dependent on the video drivers, so don't expect anything X.org does to fix ATI's crappiness or make Nvidia's proprietary stuff work better with other applications and tools.

 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
- It would be nice (no pun intended) if the foreground window got more priority in the scheduler. Some optional communication bus from X.org<>kernel scheduler?

The means are already there; Xorg could talk to the kernel via netlink, but no one's done it, and I don't really see a reason to, either.
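Not that the plumbing itself is hard; opening a netlink socket is a few lines of C. Here's a sketch that subscribes to NETLINK_ROUTE link events and blocks for one message; an Xorg<->scheduler channel would need its own netlink family, which is hypothetical and doesn't exist:

/* Open a netlink socket subscribed to link up/down events. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>

int main(void)
{
    char buf[4096];
    ssize_t len;
    struct sockaddr_nl sa;
    int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);

    if (fd < 0) {
        perror("socket");
        return 1;
    }
    memset(&sa, 0, sizeof(sa));
    sa.nl_family = AF_NETLINK;
    sa.nl_groups = RTMGRP_LINK;    /* multicast group for link events */
    if (bind(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
        perror("bind");
        return 1;
    }
    len = recv(fd, buf, sizeof(buf), 0);   /* blocks until one event arrives */
    printf("received %zd bytes of netlink messages\n", len);
    close(fd);
    return 0;
}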

Kernel 2.6 used special heuristics to try to do that. Its goal was to detect interactive processes and give them higher priority so that the interface would be more responsive.

It was a definite improvement over 2.4, but it's still not ideal.

2.4 did that too; basically any process that spends most of its time waiting on I/O gets a boost because it's not using much CPU time. The main downside is that it counts all I/O, because I don't think there's a way to differentiate between disk, network, keyboard, etc. inside the scheduler. At least not one quick enough to warrant the check.

So what they ended up doing lately is scrapping the whole complicated affair and going with a much simpler 'fair' scheduler that is designed to give equal access to everything. This should improve performance somewhat and make giving certain apps higher priority actually make sense.

Heavily sleeping processes still get bonuses with CFS, and performance shouldn't really be better, just more uniform.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
If I could run all or most Windows PC games in Ubuntu without hassle, then I wouldn't use Windows. Period.

I can barely get Windows games to run on Windows, let alone Linux.
 

Brazen

Diamond Member
Jul 14, 2000
4,259
0
0
Originally posted by: Nothinman
If I could run all or most Windows PC games in Ubuntu without hassle, then I wouldn't use Windows. Period.

I can barely get Windows games to run on Windows, let alone Linux.

Yeah, but what do you expect with warez? :D
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
Yeah, but what do you expect with warez?

Actually, back when I pirated sh!t a lot, I had a much simpler time with the warez copies of games since all of the copy protection crap was removed for me.
 

Brazen

Diamond Member
Jul 14, 2000
4,259
0
0
Originally posted by: Nothinman
Yeah, but what do you expect with warez?

Actually, back when I pirated sh!t a lot, I had a much simpler time with the warez copies of games since all of the copy protection crap was removed for me.

Ha, yeah, that is probably one of the biggest things driving me away from proprietary software toward open source.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
I'd work better with other free software projects.

Besides the recent debacle with OpenBSD and some Atheros drivers, which is a nice ironic mirror to the earlier Broadcom one, where's the problem?
 

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
Originally posted by: Nothinman
I'd work better with other free software projects.

Besides the recent debacle with OpenBSD and some Atheros drivers, which is a nice ironic mirror to the earlier Broadcom one, where's the problem?

Anything that Linux may have used that was originally BSD/ISC/whatever and rolled under the GPL without providing the sources back to the original author under a compatible license.

Also the situations where Linux (hell, some BSDs fall into this category too) prefers to go with proprietary solutions or doesn't fight for documentation while others continue to push. Free Software would be farther ahead if EVERYONE worked together a bit better. :)

I think the ath debacle is different from the bwc debacle, although a small part of the ath stuff was blown out of proportion (whereas all of the bwc one was).
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
Anything that Linux may have used that was originally BSD/ISC/whatever and rolled under the GPL without providing the sources back to the original author under a compatible license.

Virtually all of the Linux kernel's history is there in git, so anyone can audit it if they like. But BSD (not sure about the others) doesn't require giving back anything under the original license other than the original code. So as long as the original code used is still marked as BSD, everything's fine.

Also the situations where Linux (hell, some BSDs fall into this category too) prefers to go with proprietary solutions or doesn't fight for documentation while others continue to push. Free Software would be farther ahead if EVERYONE worked together a bit better.

While I would like to agree with this, I don't think it's true. Broadcom, nVidia, ATI/AMD, etc. aren't going to release anything totally free anytime soon, no matter how many people push. I'm actually surprised that nVidia and ATI/AMD have stuck with it as long as they have, considering how often Linux kernel internals change and cause them extra work.

I think the ath debacle is different from the bwc debacle, although a small part of the ath stuff was blown out of proportion (whereas all of the bwc one was).

In my eyes the main difference is that the bwc code was checked directly into OpenBSD, while the ath stuff was just posted to lkml for review, so the latter wasn't even committed before everyone got up in arms over it.
 

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
Originally posted by: Nothinman
Virtually all of the Linux kernel's history is there in git, so anyone can audit it if they like. But BSD (not sure about the others) doesn't require giving back anything under the original license other than the original code. So as long as the original code used is still marked as BSD, everything's fine.

Yeah, it doesn't require it. But some companies do it. Some people do it. But others are a bit greedier. ;)

While I would like to agree with this, I don't think it's true. Broadcom, nVidia, ATI/AMD, etc. aren't going to release anything totally free anytime soon, no matter how many people push. I'm actually surprised that nVidia and ATI/AMD have stuck with it as long as they have, considering how often Linux kernel internals change and cause them extra work.

You never know until you try.

In my eyes the main difference is that the bwc code was checked directly into OpenBSD, while the ath stuff was just posted to lkml for review, so the latter wasn't even committed before everyone got up in arms over it.

It was just as available either way. :p

I'd like to think they were both mistakes that shouldn't have happened and could have been handled a bit nicer. Seeing the misinformation coming out of the Linux camp is staggering, though. Removing the licenses on files is just wrong, especially when they state you can't do that. :p

I guess people just weren't paying attention.
 

drag

Elite Member
Jul 4, 2002
8,708
0
0
While I would like to agree with this, I don't think it's true. Broadcom, nVidia, ATI/AMD, etc. aren't going to release anything totally free anytime soon, no matter how many people push. I'm actually surprised that nVidia and ATI/AMD have stuck with it as long as they have, considering how often Linux kernel internals change and cause them extra work.


The reason I figure Nvidia/ATI continue their efforts is mostly that Linux is the OS of choice for high-end visual workstations and movie-making type stuff. I don't think they ever really gave 2 shits about the Linux consumer market.

Both seem to care a bit more now, but they have bigger fish to fry (i.e., Vista DRM-related design restrictions).

I'd like to think they were both mistakes that shouldn't have happened and could have been handled a bit nicer.

That's the major problem right there.

Too many drama queens on both sides.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
Yeah, it doesn't require it. But some companies do it. Some people do it. But others are a bit greedier.

You can't expect any more than the license requires, and that's if you're lucky, which is why so many Linux people use the GPL.

You never know until you try.

You think ndiswrapper and reverse engineering their chipsets was the first thing people tried wrt Broadcom?

It was just as available either way.

Irrelevant; one was an official commit to the project's official source tree, the other was a posting to a mailing list asking for review. Reviews which did their job and caught the problem before it hit any repos.

I'd like to think they were both mistakes that shouldn't have happened and could have been handled a bit nicer. Seeing the misinformation coming out of the Linux camp is staggering, though. Removing the licenses on files is just wrong, especially when they state you can't do that.

Sure, removing the license is wrong, but AFAIK the Broadcom debacle involved copy/pasting chunks of GPL'd code without any attribution as to the source or the license of that code, which is just as bad.

I guess people just weren't paying attention.

In the Atheros case I can see where the Linux developer would be confused: since everyone knows that the BSD license allows for inclusion in other licensed works, (s)he probably assumed that you could also relicense that code without any problems. But IMO the Broadcom issue was either pure ignorance about the GPL or total disregard for the GPL's guidelines, because everyone knows that the GPL requires you to give back your changes, even if they don't understand all of the terms, like what a derivative work is or that it's only required when you redistribute the product.
 

drag

Elite Member
Jul 4, 2002
8,708
0
0
What makes Broadcom double-hilarious is that Broadcom has its own Linux drivers.

Yep, completely separate and closed source; they have always had their own Linux drivers for their wireless stuff. This is because all those Linksys routers and most other common home-style wireless routers run Linux, and they use Broadcom wireless chipsets.

This is partially why the authors of the bcm43xx drivers did not want to BSD-license all their code, even though they were perfectly willing to BSD-license most of it from the beginning: they didn't want to give Broadcom legal justification for its continued violation of Linux's licensing terms.
 

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
Originally posted by: Nothinman
You can't expect any more than the license requires, and that's if you're lucky, which is why so many Linux people use the GPL.

I can expect more from people who want Free software to succeed. Linux and BSD people are on the same side.

And GPLing BSD/ISC/whatever software is making it just as closed as proprietary licenses make it. It's sad. That's all. I'll try to expect less from now on. ;)

You think ndiswrapper and reverse engineering their chipsets was the first thing people tried wrt Broadcom?

Of course not, and there's nothing wrong with reverse engineering the chipsets. The Broadcom effort is a good example of reverse engineering that went well; the ath one isn't. Reyk and others did a lot of work reverse engineering the blob. The MadWifi and proprietary BSD guys pooh-poohed the effort because it might make Atheros angry, and they wanted to think there was possibly some taint in the code. Well, it's been declared taint-free, but instead of working with Reyk and friends to help reverse engineer the rest of it, the Linux-based changes are all going to be GPLed. So they aren't available to the rest of us.

Oh well, expected too much again. :p

Irrelevant; one was an official commit to the project's official source tree, the other was a posting to a mailing list asking for review. Reviews which did their job and caught the problem before it hit any repos.

Irrelevant, the incorrect changes are still available. Hell, they're more available than the bad code stuck in a CVS repo.

Sure, removing the license is wrong, but AFAIK the Broadcom debacle involved copy/pasting chunks of GPL'd code without any attribution as to the source or the license of that code, which is just as bad.

Agreed. Like I said, mistakes were made on both sides.

In the Atheros case I can see where the Linux developer would be confused: since everyone knows that the BSD license allows for inclusion in other licensed works, (s)he probably assumed that you could also relicense that code without any problems. But IMO the Broadcom issue was either pure ignorance about the GPL or total disregard for the GPL's guidelines, because everyone knows that the GPL requires you to give back your changes, even if they don't understand all of the terms, like what a derivative work is or that it's only required when you redistribute the product.

Because the following line of text is so complicated (emphasis mine):
* Permission to use, copy, modify, and distribute this software for any
* purpose with or without fee is hereby granted, provided that the above
* copyright notice and this permission notice appear in all copies.

So removing those lines of code was either pure ignorance about copyright, licenses, etc. or total disregard for the license, the author, the law, or the code.

I didn't come to argue about this. I just wanted to make a quick post on topic, and I tried to approach it diplomatically (especially since I think both sides fucked up and could do more to make it all better). Oh well.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
I can expect more from people who want Free software to succeed. Linux and BSD people are on the same side.

They're on the same side, but the BSD license is too permissive for most Linux kernel developers; they don't want Atheros to be able to take their changes and put them into a new closed-source HAL.

And GPLing BSD/ISC/whatever software is making it just as closed as proprietary licenses make it. It's sad. That's all. I'll try to expect less from now on.

Not at all; the software's still free and the source is still available, but for the parts that have been GPL'd you still have to respect the GPL. For the parts that are BSD, you can do whatever you want with them. And the GPL parts have a greater chance of staying open because they're under the GPL.

Irrelevant, the incorrect changes are still available. Hell, they're more available than the bad code stuck in a CVS repo.

It's completely relevant; lots of stuff gets posted to lkml that never goes anywhere, and it's an open list, so anyone can post whatever they want. I could post a patch tomorrow that ripped out all of the licensing information from all of the kernel files, but it wouldn't mean a thing because it would never make it into Linus' tree.

Because the following line of text is so complicated (emphasis mine):

So? When most people think of the BSD license, they know the code can be included in closed source software, and to most people that is equal to a license change. If you're allowed to change the license from BSD->closed, then why not BSD->GPL? Following that logic, what's wrong with removing the BSD license when you change the project over to GPL?

I know that's not correct, and you can't relicense someone else's code without their permission, but I'm sure that train of logic has been followed by many people in the past, and it just happened that someone got caught this time. Most people don't understand licensing or copyright, and the fact that the BSD license is one step away from public domain doesn't help.