
Installing Debian on a SAN

sourceninja

Diamond Member
Anyone have any experience installing Debian with QLogic 2200 cards as the only disk?

I have a Dell 2U that I'm trying to install Debian on. It will not find any disk drives to install on. However, if I put in the Ubuntu 6.06 server install disc, it finds the SAN and allows me to install.

The Debian disc says it cannot find a disk and asks me to choose a driver. I select the correct driver, qla2xxx, and it still cannot find a disk.

Any suggestions?
 
I would, but I don't even know what to look for to gather enough information to make it worth their time, beyond saying "it doesn't work."
 
The only thing I could think of is installing to a local IDE drive and then looking through dmesg (I think that would be the right place) to see what hardware is being detected.

You are using Debian Etch (4.0), right?
 
My guess would be that your QLogic card is one of the ones whose driver has non-free firmware embedded in it, which Debian removed. Check the other VTs (virtual terminals) to see if the module gave any output telling you what's wrong.
 
Ahh, yep, that is the problem right there. I guess there is nothing I can do but not use Debian. I'm not going to rebuild my own live CD just to install. These servers do not have floppy drives, and installing to local hard drives is not an option.
 
There are a dozen different ways to install Debian.

In fact, using the installation CD is something I don't do very often.


This guy has links to about 50 different ways to install it, with variations:
http://linuxmafia.com/faq/Debian/installers.html

My personal favorite for situations like this is the debootstrap and chroot method.
http://www.debian.org/releases/etch/i386/apds03.html.en

It's a pretty simple process.
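Roughly, the debootstrap-and-chroot flow looks like this. This is a sketch, not a full walkthrough: the device name /dev/sda1, the mount point, and the mirror URL are examples you'd adjust for your own SAN LUN and network, and every step needs root from a live CD that can actually see the SAN (e.g. the Ubuntu disc that worked):

```shell
# Boot a live CD that detects the SAN, then prepare the LUN
# (device name is an example -- check dmesg for yours)
mkfs.ext3 /dev/sda1
mount /dev/sda1 /mnt

# Install a minimal Etch base system into the mounted LUN
debootstrap etch /mnt http://ftp.debian.org/debian

# Chroot in to finish up: kernel, bootloader, fstab, network
mount --bind /dev /mnt/dev
mount -t proc proc /mnt/proc
chroot /mnt /bin/bash
```

From inside the chroot you install a kernel and bootloader and edit /etc/fstab and the network config, then reboot off the SAN.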

There are guides on building and installing a Linux kernel, and it's fairly easily done from a chroot environment. You'll want to be sure to list any modules you need to access the card in /etc/initramfs-tools/modules.
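For reference, /etc/initramfs-tools/modules is just a plain list, one module name per line (qla2xxx being the driver named earlier in this thread), and the initramfs has to be regenerated afterward:

```shell
# /etc/initramfs-tools/modules -- one module name per line
# qla2xxx is the QLogic 2x00-series Fibre Channel driver
echo qla2xxx >> /etc/initramfs-tools/modules

# Rebuild the initramfs for the current kernel so the module
# is available at boot, before the root filesystem is mounted
update-initramfs -u
```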

 
Yeah, too much work. I don't have time to deal with things like this. I do for my own geek projects, but explaining why this is a good idea to my boss just will not work. The solution will be another distro; we are now using Ubuntu on these machines. The boss wants these simple servers with common hardware set up in minutes, not hours. This looks bad for us. I talked up to the boss how easy Linux is. We did a test on our VMware ESX servers, and everything was great. I showed how we could save lots of cash with Linux+Dell instead of Sun+Solaris. We made the switch, and all the local-drive servers worked great. Then we went to add the SAN servers and hit a snag.

It is my fault; I tested with an Ubuntu live CD I had. I figured that if Ubuntu worked, Debian would work, because they are very close to the same thing. However, when I said I could research it, my boss said, "Well, Ubuntu works, let's use that." Fortunately, the only machine we are moving off Sun that is on the SAN is a development MySQL server that is not publicly accessible, but it is under a huge load. Ubuntu 6.06 server fit well and has the version of MySQL we wanted. So it is Debian on local-drive and virtual machines, and Ubuntu (for now) on SAN machines.

Maybe I'll get time in the future to set up a Debian install to run on these. If I do, I can then just SAN-copy it and use it as a template for all new machines. However, my boss really wants hands-off servers, and I'm not sure he is going to be keen on compiling kernels. After all, a major reason we want to leave Solaris is that we had to compile a lot of packages that Sun did not offer.
 
How often have you done debootstrap installs?

It doesn't take any longer, really, and you generally end up with a leaner machine, since the normal installer pulls in extra stuff.

And with the kernel stuff, if you do it the 'Debian way' you end up with a kernel package you can just install on every machine.
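The 'Debian way' here means building the kernel with the kernel-package tool, which wraps the build into a .deb. A rough sketch, assuming an Etch-era 2.6.18 source tree (the paths and revision string are examples):

```shell
# Install the Debian kernel build tooling
apt-get install kernel-package libncurses5-dev

cd /usr/src/linux-2.6.18        # path is an example
make menuconfig                 # enable qla2xxx as a module or built-in

# Build a .deb of the kernel, with an initrd
make-kpkg --initrd --revision=custom.1.0 kernel_image

# The result is a package you can copy to every machine:
dpkg -i ../linux-image-2.6.18_custom.1.0_i386.deb
```

Build it once, then install the same .deb on each box instead of compiling on every machine.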

It's not like you have to sit there babysitting the systems, either. Just use ssh to set up the first system. I've installed Debian from miles away, no sweat. All I have to do is make sure I have ssh access to the machine.


I don't blame you for not wanting to do it. I'd be disappointed too that Debian made stuff harder for you because of their politics.

But it doesn't even have to be Debian. Similar techniques work well for Ubuntu or any other system you'd like. It's just that with Debian, other people have already done it and written setup guides before you.

Installing from CD-ROM is really an irritating and slow way of doing things.

First off, you generally need hardware just for the install that you don't need at any other time: keyboards, video cards, monitors, mice, etc. That stuff is fairly worthless on a server unless something is wrong with the hardware. It's just extra stuff you have to cart around with you.


For example, it's fairly easy to do a PXE boot for a Linux system.

What you'd do is set up a DHCP server, NFS, and a TFTP server. Set up a generic netboot Linux system with ssh access. Boot your systems off the network, disk-less style, with NFS. Ssh into the machines. You already have a system prepared in a tarball. Untar it, run a script to set the hostname, IP address, routing, fstab, and whatever other custom configuration you want, and give it a root password.

The script takes 20 seconds, and the tarball maybe 15 minutes or so to untar. You're finished; reboot the machine and move on to the next one.
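The steps above, once you've sshed into the netbooted target, amount to something like this. It's a sketch: the tarball path, the NFS mount, and the firstboot.sh script with its flags are hypothetical names for things you'd build yourself, and the device name depends on the hardware:

```shell
# On the disk-less, netbooted target, over ssh:

# Prepare the real disk and lay down the prebuilt system image
mkfs.ext3 /dev/sda1
mount /dev/sda1 /mnt
tar -xzpf /net/images/debian-base.tar.gz -C /mnt   # path is an example

# Run your per-host setup script (hypothetical name and flags)
/mnt/root/firstboot.sh --hostname db01 --ip 10.0.0.21

# Set a root password inside the new system, then reboot into it
chroot /mnt passwd root
reboot
```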

That is even too much work for some people. For that, there's the FAI project:
http://www.informatik.uni-koeln.de/fai/

This is for managing massive amounts of machines.

Boot the system off the network and it does it all itself. You use a configuration engine like Cfengine so the custom configurations are already set up for the machines. Start up a bunch of new workstations, go home, come back to a datacenter. 😛


Then the nice thing about a configuration engine is that you're not having to ssh into every machine. You have it set up for Sun or Windows or Linux or whatever. Plus it has logging and roles and such for good auditing. (It's still a pain in the rear to set up, though.)

Of course, you also set up a local package cache like apt-proxy or approx, so you're not installing off the internet like a weenie all the time but from a local LAN cache, where it goes very fast. As fast as pushing down system images. 🙂
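With approx, for instance, the server side is a small config fragment mapping local names to upstream mirrors (the hostname aptcache below is an example; approx listens on port 9999 by default):

```shell
# /etc/approx/approx.conf on the cache box -- name -> upstream mirror
debian     http://ftp.debian.org/debian
security   http://security.debian.org/debian-security
```

Then each client's /etc/apt/sources.list points at the cache instead of the internet, e.g. `deb http://aptcache:9999/debian etch main`.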

I know it's not what you want to do right now, and the preparation takes more effort than just walking around with a bunch of burnt CD-ROMs, but keep in mind that there are many ways to skin a cat.
 