Socket/Server performance issue/question

Page 2 - AnandTech Forums

Red Squirrel

No Lifer
May 24, 2003
71,304
14,081
126
www.anyf.ca
Sounds more complicated than it's worth, tbh.

I'll stick to appname -compile (-compile calls up g++ -o appname appname.cpp plus other args like -pthread or debug flags) or just do it manually for more specific parameters, but I can also use appname -compile=" -O3" for example. It's coded into my setup.


That said, I noticed my program can only handle 1020 connections no matter what. Am I possibly hitting a system limit of some sort? It's odd: it runs very well, then just dead-stops at 1020. If one of those connections is killed, boom, I can open another, but when it's at 1020 and I connect, I get booted off.

Could this be a limitation of Linux? During this time my server is fairly responsive outside of that application, though. Even current sessions in that app are responsive, but I can't open new ones.
 

QuixoticOne

Golden Member
Nov 4, 2005
1,855
0
0
Make is really GOOD to know how to use even if you don't CHOOSE to use it. You WILL have to hack other people's makefiles OFTEN just to get THEIR code to compile.

Alternatives to learn in addition to make are: smake, ANT, jam.
Learning something about make + ANT at least is highly recommended.

Your process is limited in the number of file descriptors it is allowed to open (a per-process limit).
http://man.sourcentral.org/FC5/2+getrlimit

man getrlimit
find /usr/include -type f -print0 | xargs -0 grep NR_OPEN

http://www.techiesabode.com/ar...cle_w.php?article_id=2

-O2 optimizations are "normal" to use for production code. MAYBE turn OFF assert checking if it is a critical slowdown for your code; otherwise leave it ON. Don't use -g if you don't need debug data in the production code, and do use -O2 plus the usual platform-specific bounds-checking / stack-smashing protection options.

Use -O0 for maximum debuggability but slower performance.

-O3 enables a few potentially "unsafe" optimizations -- unless your code is very carefully constructed it may give wrong results under -O3. You have to be clear about what is const, what is volatile, what is variant or invariant in a given context, and be very careful (as always!) about thread safety et al., and you MAY survive the -O3 torture test. Hang on to your pants; it'll either be very FAST running or very FAST crashing.

Originally posted by: RedSquirrel
Sounds more complicated than it's worth, tbh.

I'll stick to appname -compile (-compile calls up g++ -o appname appname.cpp plus other args like -pthread or debug flags) or just do it manually for more specific parameters, but I can also use appname -compile=" -O3" for example. It's coded into my setup.


That said, I noticed my program can only handle 1020 connections no matter what. Am I possibly hitting a system limit of some sort? It's odd: it runs very well, then just dead-stops at 1020. If one of those connections is killed, boom, I can open another, but when it's at 1020 and I connect, I get booted off.

Could this be a limitation of Linux? During this time my server is fairly responsive outside of that application, though. Even current sessions in that app are responsive, but I can't open new ones.

 

Red Squirrel

No Lifer
May 24, 2003
71,304
14,081
126
www.anyf.ca
What would be a safe value to set the file descriptor limit to? Is there a reason why it would be bad to have it too high? I'm experimenting with it right now; I just set it to 65535 to see what happens.
 

QuixoticOne

Golden Member
Nov 4, 2005
1,855
0
0
There is a *per system* descriptor limit also, so your per process limit can't reasonably be larger than the per system limit. I believe the article I linked discussed the values / header locations / etc. of both values.

There is no real reason you can't increase either the per-process limit or the per-system limit to several times their default values, AFAIK.

The per-process limit, like the per-process memory limit, is just there to preserve system stability and usability against a malware or buggy process that starts consuming excessive resources for no good reason -- the system overall should stay responsive and have mostly free resources even if an individual process is consuming all it possibly can. If you have a reason to need more for a given system/process, of course tune the values to suit your needs.

IIRC in some thread you might've mentioned something about running some really old OS version like Fedora Core 5. If that was you and you can do so, you might want to update it to a much newer version (maybe 10.0 in a few weeks), since IIRC the bugs in the old versions, in both the kernel and userspace tools, were pretty severe at times. I wouldn't be shocked if you hit some kinds of limits / bugs if you're using the system as hard as you seem to be. Ignore what I've said if the Fedora Legacy project is still providing good contemporary kernel & user tool updates for your version -- IDK what the story with that is. I think the Fedora mainline project considers FC5 EOL by now, though.

Oh, makefiles... nothing to be afraid of. Here's a simple but useful one that will intelligently rebuild any / all programs that need it without recompiling ones that don't. Make does this by comparing the timestamps of the output files vs. the sources they depend on.

Makefile --

CFLAGS=-g -O0
INCDIRS=-I/usr/local/include
LIBDIRS=-L/usr/local/lib

all: hello goodbye comeback

# NOTE: each recipe line below must start with a TAB character,
# not spaces.  $@ is the target name, $< is the first prerequisite
# (here, the .c file).
hello: hello.c hello.h hello2.h
	gcc $(CFLAGS) $(INCDIRS) $(LIBDIRS) -o $@ $<

goodbye: goodbye.c goodbye.h aloha.h
	gcc $(CFLAGS) $(INCDIRS) $(LIBDIRS) -o $@ $<

comeback: comeback.c comeback.h return.h
	gcc $(CFLAGS) $(INCDIRS) $(LIBDIRS) -o $@ $<

# These rules compile straight to executables, so no .o files are
# ever produced; clean just removes the binaries and core dumps.
clean:
	-rm -f hello goodbye comeback *core*


bash% make hello
bash% make goodbye
bash% make comeback
bash% make all

Originally posted by: RedSquirrel
What would be a safe value to set the file descriptor limit to? Is there a reason why it would be bad to have it too high? I'm experimenting with it right now, I just set it to 65535 to see what happens.