C++: Best way to make compile-time options run-time?

cKGunslinger

Lifer
Nov 29, 1999
16,408
57
91
So, I have this C++ program that has a config.h file with many #define's, and the code has many #if's and #else's used to control which code gets compiled in or out. This was originally done for speed, as a variable compare or check each cycle was too much overhead. Now I want to change this to run-time options (faster hardware & new requirements), with as little overhead as possible.

I am assuming that function pointers, initialized once at startup, are the best option for this (with a config.txt or config.ini used to populate the values?) Are there any other methods out there you prefer or have used?
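For the config.txt side, I'm picturing something like simple key=value lines parsed once at startup (a rough sketch; the names and format here are just made up for illustration):

```cpp
#include <fstream>
#include <map>
#include <sstream>
#include <string>

// Parse simple "key=value" lines into a map, once at startup.
// Blank lines and '#' comments are skipped.
std::map<std::string, std::string> loadConfig(std::istream& in)
{
    std::map<std::string, std::string> opts;
    std::string line;
    while (std::getline(in, line)) {
        if (line.empty() || line[0] == '#')
            continue;
        std::string::size_type eq = line.find('=');
        if (eq == std::string::npos)
            continue;  // ignore malformed lines
        opts[line.substr(0, eq)] = line.substr(eq + 1);
    }
    return opts;
}
```

Then the init code would read the map once and set up whatever variables or pointers the rest of the program uses.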


*Note - I don't need to change the options while the program is running, only at initialization. I am just trying to get rid of the need to build multiple executables based on different config parameters.


Thanks for your suggestions.
 

DaveSimmons

Elite Member
Aug 12, 2001
40,730
670
126
On Windows you could also use the registry, but a text file is more flexible, since it can be edited with Notepad rather than only through whatever settings menu you add to your program.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,572
10,208
126
Originally posted by: cKGunslinger
*Note - I don't need to change the options while the program is running, but only at initialization. I am just trying to get rid of the need to build multiple executables, based on different config parameters.
Well, you could do something like the old DOS LIST.COM program does: use some hardcoded variables in the executable, and then patch it to change them. (You could even have the command-line parser self-patch the program, although that would likely trigger any resident virus scanner running on the user's machine.)

The most obvious and simple thing would be to go through the code and make everything a "normal" variable, and change #ifdefs into normal if() statements, etc.
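Something like this, I mean (made-up option and function names, just to show the shape of it):

```cpp
// Stub implementations, purely for illustration.
int fastPath(int x) { return x * 2; }
int slowPath(int x) { return x + 1; }

// Before: selected at compile time with #ifdef USE_FAST_PATH / #else.
// After: a plain variable, assigned once at init from the config file.
bool useFastPath = false;

int process(int x)
{
    // One branch per call -- this is exactly the per-iteration
    // cost the compile-time options were originally avoiding.
    return useFastPath ? fastPath(x) : slowPath(x);
}
```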
 

cKGunslinger

Originally posted by: BingBongWongFooey
Function pointers? What for? Why not just use variables like you would for anything else?

Performance reasons. This is a high-performance process on a custom CPU/FPGA combination, and the extra time needed for decision branching is just too much overhead (which was the reason for compile-time options in the first place.) And since these values will never change while the system is running, it is quite wasteful to do the compare each iteration when we know what the value will be once we get started.

With function pointers, you just have to back up to the last function called and replace it with an init-time pointer to the correct function (one for each decision branch), and leave it at that. At least, that's how I've always done this type of optimization. I was just fishing for other suggestions from those, I suppose, with real-time system experience.
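In other words, something like this (simplified sketch, names made up):

```cpp
// Two implementations of the same step (made-up examples).
int fastFilter(int x) { return x >> 1; }
int safeFilter(int x) { return x / 2; }

// The pointer is selected exactly once, at init time.
int (*filter)(int) = 0;

void initFromConfig(bool fastMode)
{
    // The decision branch happens here, once, at startup...
    filter = fastMode ? fastFilter : safeFilter;
}

int step(int x)
{
    return filter(x);  // ...so the hot loop is an indirect call, no compare.
}
```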
 

VirtualLarry

Originally posted by: cKGunslinger
With function pointers, you just have to back up to the last function called and replace it with an init-time pointer to the correct function (one for each decision branch), and leave it at that. At least, that's how I've always done this type of optimization. I was just fishing for other suggestions from those, I suppose, with real-time system experience.
Ahh, I get it. Back-patching, like the MS floating-point runtimes do (or did), back when CPUs didn't always have a built-in FPU and they had to software-emulate the routines. I'm not sure, but I think they emitted a nop or whatnot along with the FPU opcodes, trapped the unhandled FPU exceptions, and patched those bytes into a hardcoded FPU runtime call instead, for future iterations.

Self-modifying code is indeed usually still faster than either calls through function pointers, or runtime branching, although I wonder how many "gotchas" that has with really recent modern CPUs.

If you want something even more hardcore... you could go with something akin to HP's Dynamo technology - runtime dynamic optimized re-compiling of code. But that probably has far too much back-end overhead for your application, to say nothing of the development overhead.

Edit: Nevermind about the Dynamo comment, I just think it's neat technology so I bring it up a bunch. But the fact that the code is being dynamically re-written would make execution time non-deterministic, and likely wholly unsuitable for any sort of real-time system. I forgot about that constraint.
 

cKGunslinger

Originally posted by: VirtualLarry
Originally posted by: cKGunslinger
With function pointers, you just have to back up to the last function called and replace it with an init-time pointer to the correct function (one for each decision branch), and leave it at that. At least, that's how I've always done this type of optimization. I was just fishing for other suggestions from those, I suppose, with real-time system experience.
Ahh, I get it. Back-patching, like the MS floating-point runtimes do (or did), back when CPUs didn't always have a built-in FPU and they had to software-emulate the routines. I'm not sure, but I think they emitted a nop or whatnot along with the FPU opcodes, trapped the unhandled FPU exceptions, and patched those bytes into a hardcoded FPU runtime call instead, for future iterations.

Self-modifying code is indeed usually still faster than either calls through function pointers, or runtime branching, although I wonder how many "gotchas" that has with really recent modern CPUs.

If you want something even more hardcore... you could go with something akin to HP's Dynamo technology - runtime dynamic optimized re-compiling of code. But that probably has far too much back-end overhead for your application, to say nothing of the development overhead.

Yeah, I got a headache from just reading your description of Dynamo. I can only imagine trying to implement this. :p

 

Apathetic

Platinum Member
Dec 23, 2002
2,587
6
81
I used to do a lot of real-time programming. It's hard to give specific advice since I don't know what CPU you're using and what your real-time constraints are, but function pointers are one of the best ways to go. They are (usually) fairly quick, and it takes a known, constant amount of time to make a function call. The main disadvantage is that all the functions have to take the same number of parameters, and adding more parameters will slow down the call slightly (pushing and popping values to/from the stack).

I usually didn't bother with #ifdefs though. During the initialization phase, I would just assign the function pointer to the proper function and start calling it. If memory is tight and you can't fit all of the functions in the same binary image then you're probably stuck using the #ifdefs.
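Roughly like this, with all the variants sharing one signature so they're interchangeable (made-up names, just to illustrate):

```cpp
// Every variant shares one signature, as noted above.
typedef void (*StepFn)(double* buf, int n);

void stepPlain(double* buf, int n)  { for (int i = 0; i < n; ++i) buf[i] *= 2.0; }
void stepScaled(double* buf, int n) { for (int i = 0; i < n; ++i) buf[i] *= 0.5; }

StepFn step = stepPlain;  // assigned once during the init phase

void init(int mode)
{
    // Pick the variant from the config value; after this,
    // the real-time loop just calls step() with no decisions.
    static const StepFn table[] = { stepPlain, stepScaled };
    step = table[mode];
}
```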

Dave
 

cKGunslinger

Originally posted by: Apathetic
I used to do a lot of real-time programming. It's hard to give specific advice since I don't know what CPU you're using and what your real-time constraints are, but function pointers are one of the best ways to go. They are (usually) fairly quick, and it takes a known, constant amount of time to make a function call. The main disadvantage is that all the functions have to take the same number of parameters, and adding more parameters will slow down the call slightly (pushing and popping values to/from the stack).

I usually didn't bother with #ifdefs though. During the initialization phase, I would just assign the function pointer to the proper function and start calling it. If memory is tight and you can't fit all of the functions in the same binary image then you're probably stuck using the #ifdefs.

Dave

Thanks. Yeah, memory isn't really a problem with this program (very small footprint + lots of RAM), just throughput and schedule. I've started down the function pointer route, with a manual input message at startup supplying the values to set them up. I think this'll be the best solution.