Debian proposes to drop 7 architectures

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
I want to welcome those left out in the cold (despite what the email says) to the BSD world. Both NetBSD and OpenBSD will support most of the dropped archs. :beer:

Email here.

Therefore, we're planning on not releasing most of the minor architectures
starting with etch. They will be released with sarge, with all that
implies (including security support until sarge is archived), but they
would no longer be included in testing.

This is a very large step, and while we've discussed it fairly thoroughly
and think we've got most of the bugs worked out, we'd appreciate hearing
any comments you might have.

This change has superseded the previous SCC (second-class citizen
architecture) plan that had already been proposed to reduce the amount of
data Debian mirrors are required to carry; prior to the release of sarge,
the ftpmasters plan to bring scc.debian.org on-line and begin making
non-release-candidate architectures available from scc.debian.org for
unstable.

Note that this plan makes no changes to the set of supported release
architectures for sarge, but will take effect for testing and unstable
immediately after sarge's release with the result that testing will
contain a greatly reduced set of architectures, according to the
following objective criteria:

- it must first be part of (or at the very least, meet the criteria for)
scc.debian.org (see below)

- the release architecture must be publicly available to buy new

- the release architecture must have N+1 buildds where N is the number
required to keep up with the volume of uploaded packages

- the value of N above must not be > 2

- the release architecture must have successfully compiled 98% of the
archive's source (excluding architecture-specific packages)

- the release architecture must have a working, tested installer

- the Security Team must be willing to provide long-term support for
the architecture

- the Debian System Administrators (DSA) must be willing to support
debian.org machine(s) of that architecture

- the Release Team can veto the architecture's inclusion if they have
overwhelming concerns regarding the architecture's impact on the
release quality or the release cycle length

- there must be a developer-accessible debian.org machine for the
architecture.

We project that applying these rules for etch will reduce the set of
candidate architectures from 11 to approximately 4 (i386, powerpc, ia64
and amd64 -- which will be added after sarge's release when mirror space
is freed up by moving the other architectures to scc.debian.org).
This will drastically reduce the architecture coordination required in
testing, giving us a more limber release process and (it is hoped) a
much shorter release cycle on the order of 12-18 months.
 

drag

Elite Member
Jul 4, 2002
8,708
0
0
It's a PROPOSAL.

Not policy. As far as I can tell nobody has actually decided on anything yet.
 

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
Originally posted by: drag
It's a PROPOSAL.

Not policy. As far as I can tell nobody has actually decided on anything yet.

Fixed.

I still think it's a stupid idea.
 

cleverhandle

Diamond Member
Dec 17, 2001
3,566
3
81
Originally posted by: n0cmonkey
I still think it's a stupid idea.
I sooo disagree. It's been what, 2+ years since the last stable release? That's ridiculous. It's easy to say to people here to just use testing or unstable, but that's not always a great option. If I want truly trouble-free maintenance on a Linux box, I want Debian stable. Not for stability in the sense of crashing, but for stability of config files - I can't confidently do an unattended security update for testing or unstable because I have to worry about whether some new config file needs to be merged with one of my own. For one or two machines under close supervision that's no problem, but for lots of machines with limited management time it's a big issue. So right now I'm stuck with ancient packages because the newest perl won't compile on the three Irix boxen left in the Debian community or because GNOME 2.10 won't run on the 10-year old PA-RISC machine in someone's basement. That's stupid. I respect the ideal of achieving clean code through multi-platform releases, but at some point you need to serve the needs of the many rather than chase after that ideal.
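To make the config-file worry concrete, here's a rough sketch (Python, not anything dpkg actually runs) of the decision an upgrade faces for each config file it owns; the file name is made up and the logic is only the gist of how dpkg treats conffiles:

```python
def conffile_action(local_sum, old_pkg_sum, new_pkg_sum):
    """Roughly the choice faced for one config file during an upgrade.

    Each argument stands in for a checksum: the file currently on disk,
    the copy the old package version shipped, and the copy the new
    package version ships.
    """
    if new_pkg_sum == old_pkg_sum:
        return "keep the local file (the package didn't change it)"
    if local_sum == old_pkg_sum:
        return "install the new version (the admin never touched it)"
    # The admin edited the file AND the package changed it: something
    # has to be merged by hand, so an unattended upgrade stalls here.
    return "prompt: keep the local copy, take the new one, or merge by hand"

# The awkward case: a locally edited /etc/hypothetical.conf meets a
# package that also ships a changed version of it.
print(conffile_action("edited", "shipped-old", "shipped-new"))
```

It's that last case - local edits colliding with upstream changes - that makes unattended upgrades on testing or unstable a gamble, and it's rare on stable precisely because packages there hardly change.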

Besides, if I understand the recent announcements correctly, it's not even that those architectures will be utterly dropped, just that they won't hold back the "official" stable version. The individual arch maintainers are still welcome to release their own stable versions when they feel it's appropriate.

Finally, it's not fair to compare Debian to Open/Net BSD. While I love and use BSD, the ports collection is a far cry from the huge package database and management infrastructure that Debian provides. For a server that only needs one or two big apps, BSD is fine. But for complex installations, a reasonably up-to-date Debian stable release would be much, much simpler.

 

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
Originally posted by: cleverhandle
Originally posted by: n0cmonkey
I still think it's a stupid idea.
I sooo disagree. It's been what, 2+ years since the last stable release? That's ridiculous. It's easy to say to people here to just use testing or unstable, but that's not always a great option. If I want truly trouble-free maintenance on a Linux box, I want Debian stable. Not for stability in the sense of crashing, but for stability of config files - I can't confidently do an unattended security update for testing or unstable because I have to worry about whether some new config file needs to be merged with one of my own. For one or two machines under close supervision that's no problem, but for lots of machines with limited management time it's a big issue. So right now I'm stuck with ancient packages because the newest perl won't compile on the three Irix boxen left in the Debian community or because GNOME 2.10 won't run on the 10-year old PA-RISC machine in someone's basement. That's stupid. I respect the ideal of achieving clean code through multi-platform releases, but at some point you need to serve the needs of the many rather than chase after that ideal.

They need to get those package maintainers to either get things to work or mark them as broken on those archs and move along. They've decided to spend a horrible amount of time between releases, and that's a shame. They shouldn't be proposing to screw users over because of their own inability to keep up with the rest of the world.

Besides, if I understand the recent announcements correctly, it's not even that those architectures will be utterly dropped, just that they won't hold back the "official" stable version. The individual arch maintainers are still welcome to release their own stable versions when they feel it's appropriate.

There will no longer be a testing or unstable branch on those archs. To me that means they're dead.

Finally, it's not fair to compare Debian to Open/Net BSD. While I love and use BSD, the ports collection is a far cry from the huge package database and management infrastructure that Debian provides. For a server that only needs one or two big apps, BSD is fine. But for complex installations, a reasonably up-to-date Debian stable release would be much, much simpler.

I don't understand your point here. Is it just the number of ports/packages as compared to the collection Debian has? If so, what is the rest of this paragraph about? Is there a second point I've missed here? :confused:
 

cleverhandle

Diamond Member
Dec 17, 2001
3,566
3
81
Originally posted by: n0cmonkey
They need to get those package maintainers to either get things to work or mark them as broken on those archs and move along. They've decided to spend a horrible amount of time between releases, and that's a shame. They shouldn't be proposing to screw users over because of their own inability to keep up with the rest of the world.
Just marking things as broken doesn't seem like a solution. For one, there's still the sheer number of packages, developers, and arches around. Even assuming that people are relatively decent and responsible, you still end up with a lot of people spending a lot of time doing clerical work: sending out email reminders/nags, monitoring build logs, etc. Seems like a waste of time considering the return. Second, if I understand the package/release structure correctly, there's a bigger issue with broken packages. It's fine to mark some oddball package as broken and leave it out of the dist for that arch. But what about more central packages like GTK or GCC? Some of those packages surely have problems with unusual architectures after new releases, but you can't mark them as broken because too many others depend on them. To put it a different way (again if I understand it correctly), you can't mark a version of a package broken on an arch, only the whole package.
There will no longer be a testing or unstable branch on those archs. To me that means they're dead.
I've heard other people make similar statements, but I just don't get them. First off, there would still be an unstable branch - it's whatever the currently compiling set of packages is for that arch. And there would be a stable branch containing whatever set of packages the arch maintainer believes to be well-tested and stable. So just no testing. A loss? Yeah, I guess. Earth-shattering? I don't see why. Ultimately, people that use obscure architectures will have to face the fact that a giant community of volunteers isn't going to bend over backwards multiple times over to serve them at the expense of the other 99% of the users. That's not unreasonable.
I don't understand your point here. Is it just the number of ports/packages as compared to the collection Debian has? If so, what is the rest of this paragraph about? Is there a second point I've missed here? :confused:
Sorry, I could have been more detailed. Yes, it's partially the number of packages - obviously, more packages means more work. But it's also the consistency of packaging and the supporting infrastructure that are different. BSD is a clean, simple, classic UNIX system. Debian has much more infrastructure - even a stripped down Debian install has the debconf, interfaces, and alternatives stuff. And the infrastructure grows quickly from there with things like the menu and font systems. That's all well-designed stuff, but it still means extra maintenance for a Debian package maintainer - more rules and policies to follow and subsystems to keep an eye on. More work. The other issue with complicated installs for BSD (Open, at least) is that the package manager is still pretty limited. When PHP gets a security update, I can't remove or upgrade it without removing everything that depends on it (the whole webmail system). And then there are the other apt/dpkg capabilities like pinning, multiple repos, and automatic configuration that Deb has over the BSD's. That's not so much a developer idea like the other stuff, but it does impact what system I'll choose to use for a given purpose. Again, I'm not saying that Debian is simply better than BSD, just that there's a lot more complexity in the system and that comparing them isn't really fair to either one.
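Since pinning comes up again below and isn't obvious, roughly: it lets you tell apt which source's version of a package should win. A toy sketch of the idea in Python - not apt's real resolver; the pin numbers and the php4 versions are invented, and version comparison here is naively string-based:

```python
# Toy model of apt pinning: each source ("origin") gets a priority, and
# the candidate version comes from the highest-priority source instead
# of simply being the newest thing available anywhere.  Real apt reads
# these priorities from /etc/apt/preferences.
pins = {"stable": 900, "testing": 400, "unstable": 50}

available = {
    "php4": [("4.3.10-2", "stable"),
             ("4.3.10-15", "testing"),
             ("4.3.10-16", "unstable")],
}

def candidate(package):
    """Pick the version to install, preferring higher-pinned origins."""
    return max(available[package],
               key=lambda ver_origin: (pins[ver_origin[1]], ver_origin[0]))

# Even though unstable carries the newest php4, the pin keeps us on stable:
print(candidate("php4"))   # ('4.3.10-2', 'stable')
```

The point is that you can feed a box stable, testing, and unstable sources at the same time and still control exactly which one it tracks, package by package if you want.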
 

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
Originally posted by: cleverhandle
Just marking things as broken doesn't seem like a solution. For one, there's still the sheer number of packages, developers, and arches around. Even assuming that people are relatively decent and responsible, you still end up with a lot of people spending a lot of time doing clerical work: sending out email reminders/nags, monitoring build logs, etc. Seems like a waste of time considering the return. Second, if I understand the package/release structure correctly, there's a bigger issue with broken packages. It's fine to mark some oddball package as broken and leave it out of the dist for that arch. But what about more central packages like GTK or GCC? Some of those packages surely have problems with unusual architectures after new releases, but you can't mark them as broken because too many others depend on them. To put it a different way (again if I understand it correctly), you can't mark a version of a package broken on an arch, only the whole package.

That's ridiculous. If a package works on 10 archs but is broken on 1, it should be marked as broken on that one arch.

gcc should be part of base, not a package. It's too important. Even so, it's a big package and should get some time to update. You don't have to update to every new release.

Of course some of those bigger packages have issues with obscure archs, but if the BSD people can get them working the linux people can too.

I'm not sure why there would be a lot of clerical work that isn't worth the effort. Set up a mailing list. Package maintainers sign up to the list. Wow - everything that doesn't involve actually building a package can be handled right there.

I've heard other people make similar statements, but I just don't get them. First off, there would still be an unstable branch - it's whatever the currently compiling set of packages is for that arch. And there would be a stable branch containing whatever set of packages the arch maintainer believes to be well-tested and stable. So just no testing. A loss? Yeah, I guess. Earth-shattering? I don't see why. Ultimately, people that use obscure architectures will have to face the fact that a giant community of volunteers isn't going to bend over backwards multiple times over to serve them at the expense of the other 99% of the users. That's not unreasonable.

I misread the unstable, testing, stable thing. The email is a bit confusing.

"The world is i386" mentality is a linux thing, and it's getting annoying. These other archs have their place, and like I said the BSD community would love to have more members. :)

I can see dropping useless archs; but sparc, sparc64, etc. are not useless or rare.


Sorry, I could have been more detailed. Yes, it's partially the number of packages - obviously, more packages means more work. But it's also the consistency of packaging and the supporting infrastructure that are different.

Ok, Debian has a lot more packages (packages/ports/whatever) than BSD. I can't speak for Net or Free, but OpenBSD's consistency is superb.

Of course the infrastructures are different.

BSD is a clean, simple, classic UNIX system. Debian has much more infrastructure - even a stripped down Debian install has the debconf, interfaces, and alternatives stuff.

I'm not sure you noticed, but every networking operating system has interfaces. :confused:

I've got a good selection of applications in OpenBSD base, as well as all the configuration programs you could ever need (vi and X config programs).

And the infrastructure grows quickly from there with things like the menu and font systems. That's all well-designed stuff, but it still means extra maintenance for a Debian package maintainer - more rules and policies to follow and subsystems to keep an eye on.

There are rules and whatnot for OpenBSD ports too. I'm not sure what the difference is.

More work. The other issue with complicated installs for BSD (Open, at least) is that the package manager is still pretty limited. When PHP gets a security update, I can't remove or upgrade it without removing everything that depends on it (the whole webmail system).

Huh?

And then there are the other apt/dpkg capabilities like pinning, multiple repos, and automatic configuration that Deb has over the BSD's.

I have no clue what pinning is.

There are usually master sites and secondary sites for the source if you're using ports, and there are plenty of mirrors for packages.

Most (I haven't tried ALL ports yet) ports have a default config that works well for plenty of users.

That's not so much a developer idea like the other stuff, but it does impact what system I'll choose to use for a given purpose. Again, I'm not saying that Debian is simply better than BSD, just that there's a lot more complexity in the system and that comparing them isn't really fair to either one.

While I don't think OpenBSD or NetBSD are the answers to every question, I do think it'll be the answer to "Where do you go for good SPARC support?"
 

cleverhandle

Diamond Member
Dec 17, 2001
3,566
3
81
Originally posted by: n0cmonkey
Originally posted by: cleverhandle
To put it a different way (again if I understand it correctly), you can't mark a version of a package broken on an arch, only the whole package.
That's ridiculous. If a package works on 10 archs but is broken on 1, it should be marked as broken on that one arch.
I think you're misunderstanding me. Sure, you can mark a package broken only on one particular architecture, but you can't do it for a particular version. So if package A updates from 2.5 to 2.6 and 2.6 was designed with little thought for non-i386 machines, you can't say "2.6 is broken, use 2.5" - you're stuck with marking package A as broken for that arch (for any version, period) and removing it from the dist. For lots of libraries, that's going to knock out a bunch of dependencies and be a bad solution.

Again, I'm not at all certain that I understand Debian's build/release policy in this regard. That's just the gist I've gotten from following some of the mailing list threads.
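If that reading is right, the constraint looks roughly like this - a toy model of testing migration, not Debian's actual scripts, and the package and arch names are only examples:

```python
# Toy model: a new version migrates to testing only once it has built on
# every release architecture.  There is no knob for "keep the old version
# on mips"; the choices are "everyone waits" or "drop the arch".
release_arches = ["i386", "powerpc", "sparc", "mips"]

build_ok = {
    ("libfoo", "2.6", "i386"): True,
    ("libfoo", "2.6", "powerpc"): True,
    ("libfoo", "2.6", "sparc"): True,
    ("libfoo", "2.6", "mips"): False,   # one broken arch...
}

def can_migrate(pkg, version, arches):
    return all(build_ok.get((pkg, version, arch), False) for arch in arches)

# ...and the whole version is stuck for everybody:
print(can_migrate("libfoo", "2.6", release_arches))                  # False
# Dropping the arch from the release set is the blunt instrument:
print(can_migrate("libfoo", "2.6", ["i386", "powerpc", "sparc"]))    # True
```

Either everyone waits for the broken arch or the arch leaves the release set; there's no "ship 2.5 on mips and 2.6 everywhere else" middle ground.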
gcc should be part of base, not a package. It's too important. Even so, it's a big package and should get some time to update. You don't have to update to every new release.

Of course some of those bigger packages have issues with obscure archs, but if the BSD people can get them working the linux people can too.
I was mostly just using those as examples of packages that obviously have many dependencies. But like you say, they're so big that they will get the work done regardless. I think the more likely candidates for problems are mid-level libraries that get less attention - I think that something like a gstreamer library on mips botched up GNOME in testing for over a month.
"The world is i386" mentality is a linux thing, and it's getting annoying. These other archs have their place, and like I said the BSD community would love to have more members. :)

I can see dropping useless archs; but sparc, sparc64, etc. are not useless or rare.
Yeah, I tend to agree with you on Sparc and it seems that a lot of others do too. I suspect that Sparc will end up making the cut somehow.
The other issue with complicated installs for BSD (Open, at least) is that the package manager is still pretty limited. When PHP gets a security update, I can't remove or upgrade it without removing everything that depends on it (the whole webmail system).

Huh?
Geez, I hope I'm misunderstanding, but I thought this was the deal for OpenBSD. If package B depends on A, and you need to upgrade A, then B has to be uninstalled while A is removed/reinstalled. This has a tendency to squash config files or leave a lot of extra cruft lying around in some cases. At the very least, it's a pain to go back and reinstall all of A's dependencies when there are many of them. Am I totally missing something about package upgrades? (That would be great...)
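The pain, as I understand it, is basically a dependency walk - a toy sketch only, nothing to do with the real pkg_* tools, and the package names are invented:

```python
# Toy model of upgrade-by-reinstall: to replace a package you first have
# to peel off everything that depends on it, then put it all back.
depends_on = {
    "webmail-app": ["php4"],
    "php4-imap": ["php4"],
    "php4": [],
}

def dependents(target):
    """Everything that would have to be uninstalled before removing target."""
    out = set()
    for pkg, deps in depends_on.items():
        if target in deps:
            out.add(pkg)
            out |= dependents(pkg)
    return out

# A security update to php4 the old way means removing all of these first,
# reinstalling php4, then reinstalling them (and hoping the configs survive):
print(dependents("php4"))   # {'webmail-app', 'php4-imap'}
```

With a deep dependency tree that's a lot of churn just to patch one library.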
I have no clue what pinning is.

There are usually master sites and secondary sites for the source if you're using ports, and there are plenty of mirrors for packages.

Most (I haven't tried ALL ports yet) ports have a default config that works well for plenty of users.
I'm not dissing BSD. I love it. And most of the time, it's perfectly adequate for my needs. Just pointing out a difference in style - Debian tries to accommodate nearly every eventuality with some kind of system. BSD is more hands off - let the admin handle it manually. Each has their place, but I think that the Debian approach becomes a bigger burden for the maintainers.
While I don't think OpenBSD or NetBSD are the answers to every question, I do think it'll be the answer to "Where do you go for good SPARC support?"
If things keep on going the way they look now, that seems pretty inevitable.

 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
AFAIK this was discussed by a small handful of people in a closed, unannounced meeting. A lot of people are angry and this won't happen without some huge waves. And on top of it all their 'main' 4 architectures don't even meet their requirements yet; from what I've read none of them have more than 1 buildd, so as of right now etch will be released with 0 supported architectures. And I saw on debian-alpha that 2 more boxes are awaiting hosting to be donated to Debian, so ironically Alpha might be the first arch to meet all of the requirements =)

I sooo disagree. It's been what, 2+ years since the last stable release? That's ridiculous

And the delays have not been directly affected by the additional architecture support. The main slowdown was the d-i rewrite, which yes did take extra work to support all of the architectures, but they've already spent the time making it work, so why drop them after the fact? Most packages work fine on all architectures without any extra work by the maintainer, and if there is extra work it'll be required to make it work on x86, AMD64 and PPC anyway. The kernels for each arch need a separate maintainer, but that's already in place.

IMO someone is just looking for a place to point the blame and the arch support is the easiest target on the surface, but once you actually look below the surface it's obvious that it's not a problem at all.

I doubt I'll be posting in this thread again; I've already gone through this on Ars at http://episteme.arstechnica.com/eve/ubb.x/a/tpc/f/96509133/m/858005422731 and I don't feel like having the same conversation here.
 

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
Originally posted by: cleverhandle
I think you're misunderstanding me. Sure, you can mark a package broken only on one particular architecture, but you can't do it for a particular version. So if package A updates from 2.5 to 2.6 and 2.6 was designed with little thought for non-i386 machines, you can't say "2.6 is broken, use 2.5" - you're stuck with marking package A as broken for that arch (for any version, period) and removing it from the dist. For lots of libraries, that's going to knock out a bunch of dependencies and be a bad solution.

That makes sense. Again, NetBSD and OpenBSD get decent support on a number of archs. Debian has a lot more users than those two (probably more than those two combined). They can do it too.

Again, I'm not at all certain that I understand Debian's build/release policy in this regard. That's just the gist I've gotten from following some of the mailing list threads.

I was mostly just using those as examples of packages that obviously have many dependencies. But like you say, they're so big that they will get the work done regardless. I think the more likely candidates for problems are mid-level libraries that get less attention - I think that something like a gstreamer library on mips botched up GNOME in testing for over a month.

I wonder if NetBSD had this working...

Yeah, I tend to agree with you on Sparc and it seems that a lot of others do too. I suspect that Sparc will end up making the cut somehow.

Alpha is still viable, but quiet.
ARM is still out there and kicking. In fact new products pop up all the time.
MIPS is fairly prevalent, but almost dead (AMD is still using it).


Geez, I hope I'm misunderstanding, but I thought this was the deal for OpenBSD. If package B depends on A, and you need to upgrade A, then B has to be uninstalled while A is removed/reinstalled. This has a tendency to squash config files or leave a lot of extra cruft lying around in some cases. At the very least, it's a pain to go back and reinstall all of A's dependencies when there are many of them. Am I totally missing something about package upgrades? (That would be great...)

That was true for a long time. It _just_ changed. ;)

The ports/packages tools have been REDONE for 3.7.

I'm not dissing BSD. I love it. And most of the time, it's perfectly adequate for my needs. Just pointing out a difference in style - Debian tries to accommodate nearly every eventuality with some kind of system. BSD is more hands off - let the admin handle it manually. Each has their place, but I think that the Debian approach becomes a bigger burden for the maintainers.

So Debian's method isn't scalable? :p

I personally don't care. People should use what they want, I can diggit. :beer:

If things keep on going the way they look now, that seems pretty inevitable.

With Linus using a G5 and IBM offering up cash and prizes for some PPC Linux stuff, other archs should start to get the attention they deserve. Linux is becoming i386-only less and less. It's a shame Debian is considering going in the other direction.
 

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
Originally posted by: Nothinman
AFAIK this was discussed by a small handful of people in a closed, unannounced meeting. A lot of people are angry and this won't happen without some huge waves. And on top of it all their 'main' 4 architectures don't even meet their requirements yet; from what I've read none of them have more than 1 buildd, so as of right now etch will be released with 0 supported architectures. And I saw on debian-alpha that 2 more boxes are awaiting hosting to be donated to Debian, so ironically Alpha might be the first arch to meet all of the requirements =)

I sooo disagree. It's been what, 2+ years since the last stable release? That's ridiculous

And the delays have not been directly affected by the additional architecture support. The main slowdown was the d-i rewrite, which yes did take extra work to support all of the architectures, but they've already spent the time making it work, so why drop them after the fact? Most packages work fine on all architectures without any extra work by the maintainer, and if there is extra work it'll be required to make it work on x86, AMD64 and PPC anyway. The kernels for each arch need a separate maintainer, but that's already in place.

IMO someone is just looking for a place to point the blame and the arch support is the easiest target on the surface, but once you actually look below the surface it's obvious that it's not a problem at all.

I doubt I'll be posting in this thread again; I've already gone through this on Ars at http://episteme.arstechnica.com/eve/ubb.x/a/tpc/f/96509133/m/858005422731 and I don't feel like having the same conversation here.

I'll try and check that out later. Although I occasionally post at ars, I don't like the board as much. :p
 

silverpig

Lifer
Jul 29, 2001
27,703
12
81
I agree with n0c on marking packages as stable/broken for individual archs. That's what gentoo does. If a new package is released and it's certified stable for x86 but still testing for amd64, it will be installed on a stable x86 system and not on a stable amd64 system. I don't see why this is such a big deal.
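Roughly what that looks like - a toy version of Gentoo's per-arch KEYWORDS idea, not real portage code, with made-up package versions (and naive string version comparison):

```python
# In Gentoo an ebuild carries per-arch keywords: "x86" means stable on
# x86, "~amd64" means still in testing on amd64.  A stable system on a
# given arch just takes the newest version marked stable *for that arch*.
keywords = {
    ("foo", "1.0"): {"x86", "amd64"},     # stable everywhere
    ("foo", "1.1"): {"x86", "~amd64"},    # stable on x86, testing on amd64
}

def best_stable(pkg, arch):
    stable = [ver for (name, ver), kw in keywords.items()
              if name == pkg and arch in kw]   # the plain (stable) keyword only
    return max(stable) if stable else None

print(best_stable("foo", "x86"))    # 1.1
print(best_stable("foo", "amd64"))  # 1.0 -- older, but stable there
```

So the arch that lags simply keeps the older stable version; nothing holds the faster arches back.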
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
I'll try and check that out later. Although I occasionally post at ars, I don't like the board as much.

I'm not a big fan of the board software, but IMO it's a much more technical forum although the users are a little more abrasive. But I guess that comes with the territory =)

I don't see why this is such a big deal.

Because releasing a distro on 12 architectures but only having 3 of them actually work 100% is stupid. If they were able to say "broken on IA64" and close the bug half of the maintainers would probably do just that and never fix anything. Debian's QA is actually attempting to produce a quality product, it's not optional like in Gentoo.
 

cleverhandle

Diamond Member
Dec 17, 2001
3,566
3
81
Originally posted by: n0cmonkey
Again, NetBSD and OpenBSD get decent support on a number of archs. Debian has a lot more users than those two (probably more than those two combined). They can do it too.
Correct me if I'm wrong, but aren't ports free to ignore architectures if they won't build there? That was my impression, but a lazy Google search isn't getting me any information.
ARM is still out there and kicking. In fact new products pop up all the time.
Yeah, but ARM is best in the embedded space, and I don't really see the value in supporting that with a vast, general-purpose OS. If you're going to run GNOME on a PDA, fixing a couple of routine build errors will be the least of your concerns. Alpha and MIPS I just don't know much about...

That was true for a long time. It _just_ changed. ;)

The ports/packages tools have been REDONE for 3.7.
Hoo-ray!
 

silverpig

Lifer
Jul 29, 2001
27,703
12
81
Originally posted by: Nothinman
I'll try and check that out later. Although I occasionally post at ars, I don't like the board as much.

I'm not a big fan of the board software, but IMO it's a much more technical forum although the users are a little more abrasive. But I guess that comes with the territory =)

I don't see why this is such a big deal.

Because releasing a distro on 12 architectures but only having 3 of them actually work 100% is stupid. If they were able to say "broken on IA64" and close the bug half of the maintainers would probably do just that and never fix anything. Debian's QA is actually attempting to produce a quality product, it's not optional like in Gentoo.

Gentoo works on a multitude of architectures. You can have a perfectly stable system on any of the architectures it supports, but what versions of the packages you have on a stable amd64 platform will be different than on a stable x86 platform.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
You can have a perfectly stable system on any of the architectures it supports, but what versions of the packages you have on a stable amd64 platform will be different than on a stable x86 platform.

Which is extremely stupid.
 

cleverhandle

Diamond Member
Dec 17, 2001
3,566
3
81
Originally posted by: silverpig
Gentoo works on a multitude of architectures. You can have a perfectly stable system on any of the architectures it supports, but what versions of the packages you have on a stable amd64 platform will be different than on a stable x86 platform.
You don't understand what "stable" means to Debian. A box crashing is only part of it...
 

drag

Elite Member
Jul 4, 2002
8,708
0
0
I think that it would be a mistake to drop all of them..

The problem with Debian not releasing a new stable has very little to do with the number of arches. The trouble is that they are going after a moving target...

Linux continues to get better. More usable, more stable, more secure (generally). So it's a very big temptation for Debian to go 'oh, just add the next newer version of X, it's so much better'.

Hell, they just upgraded the KDE desktop they are going to use. That's nuts.

What would fix Debian is to go to a time-based release.
Something like a new release every 16 months, guaranteed. Strong roadmap. Releases supported for 2 generations...

legacy, stable, testing, unstable, and the various experimental branches.

That's the only way I can see Debian going with a timely release. Time-based releases are proven (OpenBSD model); feature-based releases are proven to be silly (Debian model).
 

silverpig

Lifer
Jul 29, 2001
27,703
12
81
Originally posted by: Nothinman
You can have a perfectly stable system on any of the architectures it supports, but what versions of the packages you have on a stable amd64 platform will be different than on a stable x86 platform.

Which is extremely stupid.

Latest stable version of the linux kernel is 2.6.11.4 or something like that. The kernel in debian stable is what? 2.4.19?
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
Latest stable version of the linux kernel is 2.6.11.4 or something like that. The kernel in debian stable is what? 2.4.19?

2.6 packages are available via apt for sarge and sid if you want. And really, kernel versions mean nothing to distributions; RHEL 3 was using 2.4.9 until very recently, and RHEL 4, which was only released a few weeks ago, is the first release from RH to even offer a 2.6 kernel.

That's the only way I can see Debian going with a timely release. Time-based releases are proven (OpenBSD model); feature-based releases are proven to be silly (Debian model).

OpenBSD doesn't support ports as much as Debian supports its packages AFAIK. A time-based release won't help them close RC bugs; there are currently nearly 700 RC bugs.

http://bugs.debian.org/release-critical/
 

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
Originally posted by: cleverhandle
Originally posted by: n0cmonkey
Again, NetBSD and OpenBSD get decent support on a number of archs. Debian has a lot more users than those two (probably more than those two combined). They can do it too.
Correct me if I'm wrong, but aren't ports free to ignore architectures if they won't build there? That was my impression, but a lazy Google search isn't getting me any information.

They can mark it broken, but they definitely try to fix things. Obviously not all of the ports maintainers have access to all of the archs, so they have to rely on others for help.
 

n0cmonkey

Elite Member
Jun 10, 2001
42,936
1
0
Originally posted by: Nothinman
OpenBSD doesn't support ports as much as Debian supports its packages AFAIK. A time-based release won't help them close RC bugs; there are currently nearly 700 RC bugs.

http://bugs.debian.org/release-critical/

In OpenBSD you can submit a formal bug for a port. Most people take the problems to the ports@ mailing list or the maintainer directly. I see two open PRs. OpenBSD takes ports seriously.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
In OpenBSD you can submit a formal bug for a port. Most people take the problems to the ports@ mailing list or the maintainer directly. I see two open PRs. OpenBSD takes ports seriously.

But AFAIK they're not supported, not audited, and wouldn't be considered release critical, right? Would a grave bug in a port delay a release?