
There has to be a way to do this

Red Squirrel

No Lifer
OK, there has to be some way to compile a Linux app so it runs on any other box without having to recompile it.

I have this app I code on a test server, then put on a production server when it's ready; if there's a bug, I fix it and reupload it. It's a royal PITA having to reupload all the source code to the production server every time, recompile, and redo all the permissions. Is there any way I can compile it on the dev server so that it works on the production one? Like a static compile or something?

Any help would be appreciated. The compiler I use is G++.
 
Using your model you should be building a package to install on your production servers. Use the native package management system from your production server's distribution. Now you develop for that environment: you know exactly what will be installed on the system, and you have strict control over the dependencies associated with your package. I'm sure there are hundreds of tutorials out there on exactly how to do this in <insert distribution here>.

But if you insist on doing everything by hand, you need to learn how the linking and loading process works: how does a binary go about finding its needed library files at execution time? http://www.linuxjournal.com/article/6463 That's a great article with lots of general information about the process. It's fundamental to understanding how a program executes and what the operating system does to allow it to happen.
 
Slight hijack that may also be helpful to the OP, but what's the difference between dynamic linking and dynamic loading?
 
It's a royal PITA having to reupload all the source code to the production server every time, recompile, and redo all the permissions. Is there any way I can compile it on the dev server so that it works on the production one? Like a static compile or something?

As Crusty says, that's what packages are for. You generally have a dev server set up so you compile and build the package there with all of the right permissions, dependencies, etc., and then just do a normal 'rpm -Fvh blah.rpm' or 'yum localinstall blah.rpm'.

Slight hijack that may also be helpful to the OP, but what's the difference between dynamic linking and dynamic loading?

Probably depends on the person using the terms. I'd say they're probably supposed to mean the same thing but I could see someone using the latter term to mean using dlopen() to load a library manually at runtime instead of just letting the linker handle it at build time.
 
I'll look into the packages; I suppose once I figure it out it's just a matter of scripting it for every time I do an update. I plan to mostly stay within Red Hat/RPM environments anyway, so I don't have to worry about supporting other platforms, though it would be nice, maybe in the future. It would be great if all this stuff was standardized across the board.
 
I'll look into the packages; I suppose once I figure it out it's just a matter of scripting it for every time I do an update

There's not much scripting to do to create a package, and RPMs are very easy to do. As long as you have an automated build process already in place, you can just put that into the .spec with some other metadata and boom, rpm will build the package for you.
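A minimal .spec skeleton looks something like this — every name, path, and version here is a placeholder, and a real spec would also have Source0 and a %prep section:

```
Name:           myapp
Version:        1.0
Release:        1%{?dist}
Summary:        Example application
License:        GPL

%description
Example app packaged for deployment to production servers.

%build
make %{?_smp_mflags}

%install
make install DESTDIR=%{buildroot}

%files
/usr/bin/myapp
```

Then 'rpmbuild -ba myapp.spec' produces the RPM you push to production and install with 'rpm -Fvh', and file ownership/permissions come along in the package instead of being redone by hand.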
 
I see that you have found reason number 10456 that linux sucks.

Not really; when handled properly, this is one area where Linux is better than Windows. In Windows, most installers just dump random versions of libraries into system32 and hope for the best. It's simpler for the user but causes lots of problems.
 
Originally posted by: Nothinman
I see that you have found reason number 10456 that linux sucks.

Not really, when handled properly this is one area where Linux is better than Windows. In Windows most installers just dump random versions of libraries into system32 and hope for the best. It's simpler for the user but causes lots of problems.

Yes, because dumping everything in /usr/lib is so much cleaner. Linux package managers just put lipstick on a pig. apt-get is then the eyelash liner.

 
Yes, because dumping everything in /usr/lib is so much cleaner. Linux package managers just put lipstick on a pig. apt-get is then the eyelash liner.

Do you have a better idea? It's not a perfect solution, but it is much cleaner because each library has its version in its filename, so there are no conflicts. Having a package file that says it requires blah version 2.0 is much better than having an installer that's twice as big because it includes its own version of blah and then blindly overwrites the one in system32 when it installs. Proper package management alone removes the need for all of the crap Windows does to detect and restore shared libraries whenever an install overwrites one.
 
I really like Linux package management. I think it's a very slick system, and I also think we'd have something a lot more like it on the Windows side if Windows software OEMs would have played ball five or ten years ago. The Linux community of altruistic iconoclasts seems more willing to collaborate on this stuff than the capitalists of the Windows world.
 
The only thing with Windows I really like is how you can compile a single EXE and give it to anyone, and it will work (assuming you don't link to some DLL). In Linux you have to give the source code in some form or another. If you really don't want them to see the code, I suppose you could write an app that removes all the line breaks and just makes it ridiculously hard to read, but bottom line, they still have the code.

I guess in one way it's good, as it forces open source, but maybe this is why a lot of vendors don't like to support Linux. For example, you would not see MS make Office for Linux, as they don't want to give out their source.
 
The only thing with Windows I really like is how you can compile a single EXE and give it to anyone, and it will work (assuming you don't link to some DLL).

The same issues exist on Windows; the reason it works like you say most of the time is that there's a more standard set of base libraries. You still end up linking to mfc.dll, odbc32.dll, etc. If you happen to give a binary that uses ODBC to someone who doesn't have MDAC/ODBC installed, you'll have the exact same problem.

In Linux you have to give the source code in some form or another. If you really don't want them to see the code, I suppose you could write an app that removes all the line breaks and just makes it ridiculously hard to read, but bottom line, they still have the code.

No, you don't have to give them the source. VMware, id Software, Opera, etc. are all closed source and run just fine on Linux.

I guess in one way it's good, as it forces open source, but maybe this is why a lot of vendors don't like to support Linux. For example, you would not see MS make Office for Linux, as they don't want to give out their source.

I'd bet that if MS decided to port Office to Linux they'd actually take the time to understand the build tools so that they wouldn't have to give out the source. Actually they'd probably just bundle their own copies of tons of libraries bloating up everyone's system but either way they wouldn't give out the source.
 
Well, you could compile on every single distro, but that would be just nuts considering there are thousands out there, unless there IS a way to make a binary execute on any system, which was my original question. It just led to package managers, which are just a fancy way of distributing/compiling source.
 
If the systems all have the correct libraries installed, then yes, it will run on any of them, assuming they have the same architecture. The problem lies in the fact that distros don't all use the same version of major stuff like libc, so you have problems going from RH to Debian, for example... but that's why you build packages for those distros, because you know those will work.
 
Which is a big issue, as there are so many different distros and all of them are different. For example, there are 10 Fedoras alone, then there are the different Red Hats, then CentOS, and that's not even getting into the Debian side yet. There's got to be a way to make a single catch-all binary that has all the libraries built in.
 
Originally posted by: RedSquirrel
Which is a big issue, as there are so many different distros and all of them are different. For example, there are 10 Fedoras alone, then there are the different Red Hats, then CentOS, and that's not even getting into the Debian side yet. There's got to be a way to make a single catch-all binary that has all the libraries built in.

Yes, but the key point is that you have to package the right dependencies, not rebuild source. Even that's a big chore, I agree. But isn't it the case that all those distros fall into a much smaller group of major families? But anyway, that diversity is the price, I guess, for getting a lot of good stuff for no money.
 
Well, you could compile on every single distro, but that would be just nuts considering there are thousands out there, unless there IS a way to make a binary execute on any system, which was my original question. It just led to package managers, which are just a fancy way of distributing/compiling source.

And distributing your product is what you're looking to do so it makes sense to go that route.

Which is why, when you see a project out there, you generally only see 3 or 4 downloads: one for the source and 2 or 3 packages for the various distributions they want to support. Everyone else is left to their own devices. This works for 99% of the projects out there, so why doesn't it work for you?

Which is a big issue, as there are so many different distros and all of them are different. For example, there are 10 Fedoras alone, then there are the different Red Hats, then CentOS, and that's not even getting into the Debian side yet. There's got to be a way to make a single catch-all binary that has all the libraries built in.

So you want to support everything from FC1 up to FC10? Good luck with that. Things change way too fast, and most people only want to hear about bugs in the most recent release and maybe one release back, never any further, because it's too much work. If you want to run FC4, then that's your choice, and you get all of the pain that comes with it.

Yes, but the key point is that you have to package the right dependencies, not rebuild source. Even that's a big chore, I agree

It's not really that big of a chore; RPM can figure out most of the dependencies for you, so all you really have to do is have a VM for whichever distro you're building on and try it. Sure, it'll take a lot of disk space to cover those distros, but if you're going to say "Yup, this works on FC5," don't you want to actually see it build and run on FC5 first?
 
Not an application developer, but this sounds like the reason one author was saying Mac development was much better - less code management since there's a more solid framework to build from.
 
Linux package management only sucks for people who think Linux is an operating system.

Why should you expect Red Hat packages to work without issue on Ubuntu? They are two different operating systems that happen to use the same programs and kernel. You don't see Solaris programmers getting pissed that some AIX application won't run. I mean, they're both Unix, right? You pick your supported operating systems (Windows, OS X, Red Hat, Debian, etc.) or you supply the source. Those are really the only two ways to go about it.

Personally, I prefer the way OS X handles installing; I just hate the way it handles uninstalling. But I'd take a Linux distro's package management system over InstallShield any day.

Also, saying you want to support every version of Fedora is like saying you want DOS to Windows 7 support on your app.
 
Not an application developer, but this sounds like the reason one author was saying Mac development was much better - less code management since there's a more solid framework to build from.

The core system is definitely more consistent, but Apple adds new things all of the time. If you compile something on OS X 10.5 do you know if it'll run on 10.4 or 10.3 without testing it? Of course not. And then there was the PPC->x86 migration, I'm sure Apple's developers loved that...
 
Originally posted by: Nothinman
Yes, because dumping everything in /usr/lib is so much cleaner. Linux package managers just put lipstick on a pig. apt-get is then the eyelash liner.

Do you have a better idea? It's not a perfect solution, but it is much cleaner because each library has its version in its filename, so there are no conflicts. Having a package file that says it requires blah version 2.0 is much better than having an installer that's twice as big because it includes its own version of blah and then blindly overwrites the one in system32 when it installs. Proper package management alone removes the need for all of the crap Windows does to detect and restore shared libraries whenever an install overwrites one.

You mean like Wise? Or InstallShield? Or any of the other installers? I dig RPM/APT on *nix but I've not targeted it as a dev. Still seems hit or miss, especially since there are so many flavors.

On Windows, MSI technology is pretty nice. Prior to that it could get ugly, and it still isn't easy. And at least there are options on Windows... XP and later have SxS, and .NET has a good assembly loading scheme.
 
You mean like Wise? Or InstallShield? Or any of the other installers?

No, not at all. All they really do is present a GUI, copy files and add registry entries. They don't do any real dependency checking. If MS had made their MSI system more complete like rpm or dpkg then maybe, but not until that happens.
 
Originally posted by: Nothinman
You mean like Wise? Or InstallShield? Or any of the other installers?

No, not at all. All they really do is present a GUI, copy files and add registry entries. They don't do any real dependency checking. If MS had made their MSI system more complete like rpm or dpkg then maybe, but not until that happens.

Both rpm and dpkg are crap because they don't manage files; they just check whether the metadata for a package is there. So half the time you have to force them to install anyway.
 