Where do programs go when you install?

Red Squirrel

No Lifer
May 24, 2003
71,322
14,088
126
www.anyf.ca
This is something I've always wondered. When you install a program in Linux, whether it's through yum, apt, or the ./configure method, where exactly does it go?

I have never been able to find any references to the program using locate.

I have a situation here where I just managed to install ZoneMinder using the ./configure method; at least I think it worked. The make and make install did not generate any human-readable output, nor a message at the end saying it was successful, so I really don't know. I can start the service, if that means anything.

I'm trying to figure out where the web interface is so I can point Apache to it. It's not in /var/www/html/zm like the configure flag specified. Maybe I have to copy it manually, but the problem is I can't find it. I ran updatedb, then locate zoneminder, which returned zero results, and then locate zm, which returned a few binaries but that's it.

So where exactly do programs go when they're installed? I would assume this program has a few hundred files, not just a few binaries. Not to mention the web interface files.
 

Crusty

Lifer
Sep 30, 2001
12,684
2
81
If it's in your path and you can run it, you can use which to find its location.

$ which sudo
/usr/bin/sudo
$ which bash
/bin/bash
$ which reboot
/sbin/reboot
 

kermit32

Banned
Jan 12, 2012
8
0
0
Use 'which' or 'whereis'. More generally, since all *nix systems have a strong file system structure, you'll notice as you check installed client or server software piece by piece that there are only a few locations where software ever gets installed.
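For example, using ls, which lives in a standard location on pretty much every system:

```shell
# 'which' only searches the directories in $PATH:
which ls
# 'whereis' also checks the standard binary, source, and man-page locations:
whereis ls
# The short list of places almost all software ends up:
ls -d /bin /sbin /usr/bin /usr/sbin /usr/lib /usr/local 2>/dev/null
```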
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
The short answer is "where it's supposed to go". You need to read the documentation to figure out what to do next. It should tell you the default locations for whatever you're looking for if you need them. I just skimmed over the ZM docs and it looks like they explain how you should configure apache.

For packages, that means files will be strewn about /usr, wherever they belong. You can use the package manager to find them if you need to, but generally that's not something you have to worry about.
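For example (I'm using 'zoneminder' as a hypothetical package name here, and /usr/bin/env as an example file; swap in whatever you're after):

```shell
# List every file a package installed; tries rpm first, then dpkg:
rpm -ql zoneminder 2>/dev/null || dpkg -L zoneminder 2>/dev/null \
    || echo "package 'zoneminder' is not installed on this machine"

# The reverse question: which package owns a file you already found?
rpm -qf /usr/bin/env 2>/dev/null || dpkg -S /usr/bin/env 2>/dev/null \
    || echo "no rpm/dpkg package manager here"
```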

For compiled things, somewhere below /usr/local or /opt by default. Although I'm sure some apps will have other defaults.
 

Red Squirrel

No Lifer
May 24, 2003
71,322
14,088
126
www.anyf.ca
I found some of the binaries using which, but figured there would be way more files somewhere else. I did manage to find the web interface, I think (I just get an internal server error, so I have to troubleshoot that), but nothing was where it was "supposed to be", whatever that means. The tutorial I used had some paths in the ./configure string, but the files were not in those paths, so I'm not sure what those paths were for. The web interface appears to be a single file, so I guess it's compiled into some kind of binary; I was expecting a bunch of PHP files. So I guess I was just overestimating the number of files this program has.

I've learned that most of the time the documentation is never enough. The documentation assumes a 100% perfect scenario where everything works in one shot. That is rarely the case, and it's usually a fight to get one building block at a time to work. The docs basically say to just go to localhost/zm. Well, it's not that easy: I have to actually configure Apache for that path to be valid, which involves finding the web interface's location and then figuring out why it's not working.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
Red Squirrel said:
I found some of the binaries using which, but figured there would be way more files somewhere else. I did manage to find the web interface, I think (I just get an internal server error, so I have to troubleshoot that), but nothing was where it was "supposed to be", whatever that means. The tutorial I used had some paths in the ./configure string, but the files were not in those paths, so I'm not sure what those paths were for. The web interface appears to be a single file, so I guess it's compiled into some kind of binary; I was expecting a bunch of PHP files. So I guess I was just overestimating the number of files this program has.

I've learned that most of the time the documentation is never enough. The documentation assumes a 100% perfect scenario where everything works in one shot. That is rarely the case, and it's usually a fight to get one building block at a time to work. The docs basically say to just go to localhost/zm. Well, it's not that easy: I have to actually configure Apache for that path to be valid, which involves finding the web interface's location and then figuring out why it's not working.

Most of the paths given to ./configure scripts are top-level, meaning if you pass /usr, the files will go into /usr/bin, /usr/sbin, /usr/lib, etc. instead of /usr/local.
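Here's the GNU convention written out as the variables an autoconf-generated Makefile derives from the prefix (a sketch, not any particular package's Makefile):

```shell
prefix=/usr               # what you pass as --prefix
exec_prefix=$prefix       # architecture-dependent files
bindir=$exec_prefix/bin   # user binaries
sbindir=$exec_prefix/sbin # system binaries
libdir=$exec_prefix/lib   # libraries
datadir=$prefix/share     # architecture-independent data
echo "bindir=$bindir libdir=$libdir datadir=$datadir"
```

With no --prefix at all, most autoconf packages default to /usr/local, which is why hand-compiled software usually lands there.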

The docs tell you that the package comes with an Apache config file that makes /zm work; you just have to symlink it into sites-available. I don't think the docs assume a 100% perfect scenario, they just assume that you know the system well enough to debug the little issues that might come up from your system's configuration, which doesn't seem to be the case for you most of the time.
 

Fallen Kell

Diamond Member
Oct 9, 1999
6,249
561
126
In general, if you are compiling software using the "./configure; make; make install" method, it will typically go under "/usr" or "/usr/local". You can usually determine that from the "configure" script with "./configure --help" (look for the "--prefix" option, as that is what you would change to move the location; it will typically say something along the lines of "if prefix not set, default location is xxxxx").

Personally, I always set the prefix, as I typically do not like installing software under /usr or /usr/local; I prefer to place it on an NFS-automounted directory so that any other Linux system I have will have access to the software as well: one install, and multiple systems can use it.
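A sketch of what that looks like (the automount path and package are made up, adjust to taste):

```shell
# Build once, installing into the shared tree instead of /usr/local:
#   ./configure --prefix=/net/software/zoneminder
#   make
#   make install
# Every client that can see the automount then just extends its PATH:
PATH="/net/software/zoneminder/bin:$PATH"
export PATH
echo "$PATH" | cut -d: -f1   # the shared bin directory now comes first
```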
 

Red Squirrel

No Lifer
May 24, 2003
71,322
14,088
126
www.anyf.ca
Fallen Kell said:
Personally, I always set the prefix, as I typically do not like installing software under /usr or /usr/local; I prefer to place it on an NFS-automounted directory so that any other Linux system I have will have access to the software as well: one install, and multiple systems can use it.

Does that actually work? I thought it was compiled specifically for that system. If yes, I may start doing this as well; it will save a lot of headaches when installing stuff that is complex to set up. I can have it all preconfigured in one self-contained folder, zip it up as a backup, and then just drop it on any system, or share it, etc...
 

Crusty

Lifer
Sep 30, 2001
12,684
2
81
It is specifically compiled for that system. That's why you can't use something compiled for ARM on x86, or something compiled for x86-64 on ARM. It is not compiled for your unique physical CPU; that would be absolutely ridiculous, because all software would have to be distributed as source...

How do you think binary packages are distributed? You get packages built for your architecture. Don't you write code? You really should understand the whole compiling -> assembling -> linking -> loading process for a binary executable, including the differences between static and dynamic linking.
 

Red Squirrel

No Lifer
May 24, 2003
71,322
14,088
126
www.anyf.ca
No code I've written has ever worked on another system when I just transfer the binary. So that's why I always figured it was the same for any other program.

Like if I compile a "hello world" program on CentOS, then move the binary to Debian, it won't work; same if I go from an AMD system to a different chip or to Intel, etc... Are there certain flags they use so that it works? I've always wanted to be able to code an app that will work on any x64 system, but I've always had to recompile for each system. I've heard of static linking; is that what they do? I've never managed to figure out how to do that, and I've always been told it's a bad idea anyway, for some reason. To me it sounds like heaven if it means a binary will work on any system.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
Red Squirrel said:
No code I've written has ever worked on another system when I just transfer the binary. So that's why I always figured it was the same for any other program.

Like if I compile a "hello world" program on CentOS, then move the binary to Debian, it won't work; same if I go from an AMD system to a different chip or to Intel, etc... Are there certain flags they use so that it works? I've always wanted to be able to code an app that will work on any x64 system, but I've always had to recompile for each system. I've heard of static linking; is that what they do? I've never managed to figure out how to do that, and I've always been told it's a bad idea anyway, for some reason. To me it sounds like heaven if it means a binary will work on any system.

If you do it properly, it does. That's how packages work: they don't contain source, they contain binaries compiled for a system that meets a certain set of criteria.
 

Red Squirrel

No Lifer
May 24, 2003
71,322
14,088
126
www.anyf.ca
So how would I "do it properly"? That's kinda vague.

I usually just do g++ -o binary program.cpp

In some cases I'll add -pthread, or any other arguments required depending on what headers I'm using and what they require. Things usually get ugly once 3rd-party libraries start to get involved.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
Red Squirrel said:
So how would I "do it properly"? That's kinda vague.

I usually just do g++ -o binary program.cpp

In some cases I'll add -pthread, or any other arguments required depending on what headers I'm using and what they require. Things usually get ugly once 3rd-party libraries start to get involved.

As long as you understand which libraries you're linking against and which versions have compatible ABIs, it's not hard. It can get ugly, but it's obviously workable, since virtually every other developer in the world manages it. That is one reason it's recommended so often to stick with packages from the official repositories and only use software from other developers that you trust.
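ldd is the tool for that: it shows exactly which shared libraries a binary was linked against, i.e. what any target system has to provide with a compatible ABI (using /bin/ls as a stand-in for your own binary):

```shell
ldd /bin/ls
# Each "libfoo.so.N => /path" line is a soname the binary needs; the
# version number in the soname is the ABI contract the target must honor.
```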

Running through the tutorials for creating a deb or rpm should help you understand more about how it all works.