Good C++ compiler for Windows

Red Squirrel

No Lifer
May 24, 2003
70,316
13,661
126
www.anyf.ca
I have Borland C++ 5.02, but that is quite outdated and I should probably be using something more up to date, unless someone can say otherwise...

So I'm wondering what everyone uses these days for C++. I'm hoping for something free that has a GUI, and maybe even something that does resources (dialogs, etc.), but that's just a plus. I've used Dev-C++ in the past but found it couldn't compile a lot of things, such as socket code, but perhaps that has changed now.

Or on the more advanced/non-free side there's MS VC++, but the only thing I'm worried about is: if I start using that, am I limited to Microsoft code, like MFC?
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
gcc works on Windows; Dev-C++ is just a front-end for it. MinGW will get you a native compiler with no dependencies on Cygwin. The MS compilers are free; it's the full-blown development tools that cost money. Borland's compilers are also free, IIRC. The Intel compiler for Linux is free for non-commercial use; it might be the same on Windows.
 

itachi

Senior member
Aug 17, 2004
390
0
0
Intel C++ and MSVC produce the fastest binaries, with Intel in the lead. In terms of performance, gcc really isn't competitive anymore. If you're stingy, poor, or only programming for fun, go with gcc; it's a free optimizing compiler. If you plan on using it for professional applications, go with MSVC.

icc is, without much doubt, the fastest compiler out there, but it's extremely sensitive and can produce slow binaries with large overhead when you don't know the effect each flag will have on your program and your target processor. For example, never let the compiler decide how many times to unroll loops; take into account the length and architecture of your processor's pipeline to determine the unroll factor.
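A minimal sketch of what that looks like with icc (the #pragma unroll(n) form is icc's own syntax; the factor of 4 here is purely illustrative, not a recommendation for any particular chip):

```cpp
// Sketch: fix the unroll factor instead of letting icc guess.
// icc-specific pragma; other compilers will ignore or warn on it.
#include <cstddef>

void scale(float* data, std::size_t n, float k) {
#pragma unroll(4)   // illustrative factor; tune it to your target pipeline
    for (std::size_t i = 0; i < n; ++i)
        data[i] *= k;
}
```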

The Windows Platform SDK gives you access to the Win32 API, and the .NET Framework SDK gives you access to the .NET API.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
Intel C++ and MSVC produce the fastest binaries, with Intel in the lead. In terms of performance, gcc really isn't competitive anymore. If you're stingy, poor, or only programming for fun, go with gcc; it's a free optimizing compiler. If you plan on using it for professional applications, go with MSVC.

gcc isn't meant to be an optimizing compiler; its main goal is portability. And really, in 99% of the applications out there the speed differences will be so minimal I doubt anyone notices.
 

itachi

Senior member
Aug 17, 2004
390
0
0
In terms of what? gcc is a compiler for Linux; it offers portability in terms of architectures, but not operating systems. The Windows version of gcc, MinGW, is done outside of the gcc development group. Taken from the gcc website: "...The GCC development effort uses an open development environment and supports many other platforms in order to foster a world-class optimizing compiler..."

In 100% of the programs out there, if the speed difference isn't huge, people won't notice without a frame of reference. And it doesn't matter whether they can really notice the difference or not: if the program has less overhead and takes up less CPU time, then the program will perform better when the user has programs running in the background. Any program that exploits instruction-level parallelism will perform better than its counterpart, and the idea behind optimizing compilers is to take architecture into account when producing the machine code. The out-of-order execution core can't compensate for serial code.
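To make the ILP point concrete, here's a hedged sketch in plain C++: the same sum written as one long dependency chain versus two independent accumulators an out-of-order core can advance in parallel (note that the reassociation changes floating-point rounding slightly):

```cpp
// Sketch: sumSerial forms one long dependency chain; sumIlp keeps two
// independent chains the out-of-order core can run side by side.
double sumSerial(const double* a, int n) {
    double s = 0.0;
    for (int i = 0; i < n; ++i)
        s += a[i];                // each add depends on the previous one
    return s;
}

double sumIlp(const double* a, int n) {
    double s0 = 0.0, s1 = 0.0;
    int i = 0;
    for (; i + 1 < n; i += 2) {   // two accumulators, two dependency chains
        s0 += a[i];
        s1 += a[i + 1];
    }
    if (i < n) s0 += a[i];        // odd leftover element
    return s0 + s1;
}

int main() {
    double a[5] = {1, 2, 3, 4, 5};
    return sumSerial(a, 5) == sumIlp(a, 5) ? 0 : 1;  // exact for small ints
}
```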
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
gcc is not explicitly for Linux; in fact, if GNU/Hurd ever became usable, it would be the main development platform. gcc runs natively on just about everything. Hell, a quick Google search even says it ran on VMS at one point, but I have no idea if that's still true. The fact that there is a separate project to make it work properly on Windows only says that Windows itself is the problem.

And it doesn't matter whether they can really notice the difference or not

Sure it does. If you're going to optimize per-CPU, you'll most likely need to have at least 5 versions of your program up for download, and that alone will confuse people enough for them to either call someone for help or just move on looking for something simpler.

if the program has less overhead and takes up less CPU time, then the program will perform better when the user has programs running in the background. Any program that exploits instruction-level parallelism will perform better than its counterpart, and the idea behind optimizing compilers is to take architecture into account when producing the machine code. The out-of-order execution core can't compensate for serial code.

In 99% of the cases it won't matter at all. Nearly all programs are I/O bound, be it network, disk, or user. So when your app is running in the background, chances are it's waiting for some form of input or sitting idle. Sure, if you're working on SETI or LAME, then yes, performance matters, but for most people it's irrelevant.
 

itachi

Senior member
Aug 17, 2004
390
0
0
Originally posted by: Nothinman
gcc is not explicitly for Linux; in fact, if GNU/Hurd ever became usable, it would be the main development platform. gcc runs natively on just about everything. Hell, a quick Google search even says it ran on VMS at one point, but I have no idea if that's still true. The fact that there is a separate project to make it work properly on Windows only says that Windows itself is the problem.
No; to state the obvious, it means that Windows isn't part of the open source community. Stating that gcc runs natively on just about everything is redundant; just because you have the static libraries doesn't mean a program compiled for one architecture is guaranteed to run on another. Files that compile on Linux won't compile on Solaris or IRIX.
Sure it does. If you're going to optimize per-CPU, you'll most likely need to have at least 5 versions of your program up for download, and that alone will confuse people enough for them to either call someone for help or just move on looking for something simpler.
Software that's been optimized on a per-CPU basis for Windows typically comes in an all-in-one package: the main binary checks the running CPU and loads the respective DLL at runtime. Other than that, you can have the compiler generalize the target; with icc, you can do so by setting the loop unrolling to something like 4. No pipeline is shorter than that, and no recent CPU detects a branch mispredict anywhere near there.
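Roughly like this (a sketch only: the DLL names and the exported process symbol are made up for illustration, while __cpuid, LoadLibraryA, and GetProcAddress are the real MSVC/Win32 calls):

```cpp
// Sketch: pick a per-CPU DLL at runtime. The DLL names and the "process"
// export are hypothetical; CPUID/LoadLibrary is the standard mechanism.
#include <windows.h>
#include <intrin.h>

typedef void (*ProcessFn)(float*, int);

HMODULE LoadBestDll() {
    int regs[4];
    __cpuid(regs, 1);                     // leaf 1: feature flags in ECX/EDX
    bool hasSse2 = (regs[3] >> 26) & 1;   // EDX bit 26 = SSE2
    // Load the most specific build this CPU supports.
    return LoadLibraryA(hasSse2 ? "app_sse2.dll" : "app_generic.dll");
}

int main() {
    HMODULE dll = LoadBestDll();
    if (!dll) return 1;
    ProcessFn process = (ProcessFn)GetProcAddress(dll, "process");
    if (process) {
        // ... call process(data, n) as usual ...
    }
    FreeLibrary(dll);
    return 0;
}
```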
In 99% of the cases it won't matter at all. Nearly all programs are I/O bound, be it network, disk, or user. So when your app is running in the background, chances are it's waiting for some form of input or sitting idle. Sure, if you're working on SETI or LAME, then yes, performance matters, but for most people it's irrelevant.
If a program reads a file 1 character at a time, then the program will be severely limited by I/O transactions; if it loads 20 MB into memory, the transfer will take a lot less time and the dependency will be reduced significantly.
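A quick sketch of those two extremes in standard C++ (the 64 KB buffer is an arbitrary illustrative size; stdio buffers behind the scenes either way, so the per-call overhead is what differs here):

```cpp
// Sketch: character-at-a-time vs. block reads, standard C++ only.
#include <cstdio>
#include <vector>

long countSlow(std::FILE* f) {            // one library call per byte
    long n = 0;
    while (std::fgetc(f) != EOF) ++n;
    return n;
}

long countFast(std::FILE* f) {            // one library call per 64 KB
    std::vector<char> buf(64 * 1024);
    long n = 0;
    std::size_t got;
    while ((got = std::fread(buf.data(), 1, buf.size(), f)) > 0)
        n += static_cast<long>(got);
    return n;
}

int main(int argc, char** argv) {
    if (argc < 2) return 1;
    std::FILE* f = std::fopen(argv[1], "rb");
    if (!f) return 1;
    std::printf("%ld bytes\n", countFast(f));  // swap in countSlow to compare
    std::fclose(f);
    return 0;
}
```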
Unless 99% of the programs out there perform no computations, or do so in a manner that can't exploit any ILP, that's a hugely skewed and ignorant figure.

Oh, and 97% of Windows programs are compiled using MSVC. Imagine how Windows would run if everything was compiled to reduce file size (highly serial code).
 

jlbenedict

Banned
Jul 10, 2005
3,724
0
0
Borland C++ BuilderX has a personal edition of the new version 6... looks like it's $69.

The previous version of BuilderX was free; looks like they upped the ante on the new version.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
No; to state the obvious, it means that Windows isn't part of the open source community. Stating that gcc runs natively on just about everything is redundant; just because you have the static libraries doesn't mean a program compiled for one architecture is guaranteed to run on another. Files that compile on Linux won't compile on Solaris or IRIX.

Most of the systems that gcc runs on aren't open source. What libraries and headers are available on each system is irrelevant; gcc itself runs and compiles on them just fine, even if you have to do a little work to make your program conform to what's available on that system.

If a program reads a file 1 character at a time, then the program will be severely limited by I/O transactions; if it loads 20 MB into memory, the transfer will take a lot less time and the dependency will be reduced significantly.

And if you read 20 MB at a time and a person only has 64 MB of memory, the OS will start paging like mad and cause performance degradation of the whole system.

Unless 99% of the programs out there perform no computations, or do so in a manner that can't exploit any ILP, that's a hugely skewed and ignorant figure.

Of course they perform computations, but most apps are I/O bound in some manner, whether it's disk, network, some other device, or human.

Oh, and 97% of Windows programs are compiled using MSVC. Imagine how Windows would run if everything was compiled to reduce file size (highly serial code).

Windows already runs like crap in general; whether that would be an improvement or not is up in the air.
 

itachi

Senior member
Aug 17, 2004
390
0
0
Originally posted by: Nothinman
Most of the systems that gcc runs on aren't open source. What libraries and headers are available on each system is irrelevant; gcc itself runs and compiles on them just fine, even if you have to do a little work to make your program conform to what's available on that system.
What does that have to do with anything? gcc is an ANSI-compliant compiler. Well, guess what: so are MSVC (with the right flags) and icc. So if you write a program on Windows without using any of the Win32 API, it'll compile on Linux using gcc too. Does that mean MSVC and icc were designed with the intention of being portable? No. Portability is a characteristic of the language itself, not the compiler.
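For instance, a program like this builds unchanged anywhere (assuming the usual invocations; /Za is MSVC's real switch for disabling its language extensions):

```cpp
// Sketch: nothing below is platform-specific, so it builds the same with
// g++, icc, or MSVC (cl /Za disables Microsoft extensions for strictness).
#include <iostream>
#include <string>

int main() {
    std::string line;
    while (std::getline(std::cin, line))
        std::cout << line.size() << '\n';   // echo each line's length
    return 0;
}
```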
And if you read 20 MB at a time and a person only has 64 MB of memory, the OS will start paging like mad and cause performance degradation of the whole system.
Man, you are really reaching. If someone has 64 MB of system memory, then they shouldn't give a sht about performance. Optimizations are done with a generalized or specific target in mind; if some software says "min system requirements: 512 MB of RAM" and you try to run it with 4 MB, don't blame the software.
Of course they perform computations, but most apps are I/O bound in some manner, whether it's disk, network, some other device, or human.
And yet you completely missed my point: TCP/IP stacks are optimized and disks utilize buffers. Disk buffers are designed to transfer a lot of information in the shortest amount of time possible; if you read everything at once without processing, you won't have to deal with another disk access until you run out of data to process.
Windows already runs like crap in general; whether that would be an improvement or not is up in the air.
Practically all games are designed to run on Windows, and game developers do use optimizing compilers. There's no field that's as demanding of processing power as the gaming market; distributed computing doesn't even require a fast computer, it just requires a bunch of them running in parallel.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
What does that have to do with anything?

You said that gcc isn't native on Windows because Windows isn't part of the open source community, which is just plain wrong.

Man, you are really reaching. If someone has 64 MB of system memory, then they shouldn't give a sht about performance. Optimizations are done with a generalized or specific target in mind; if some software says "min system requirements: 512 MB of RAM" and you try to run it with 4 MB, don't blame the software.

But if you're reading in the entire file at once for no reason, then it is the software's fault. Obviously 1 character and 20 MB are extremes used to show a point, and it's up to the programmer to decide what balance should be used.

And yet you completely missed my point: TCP/IP stacks are optimized and disks utilize buffers. Disk buffers are designed to transfer a lot of information in the shortest amount of time possible; if you read everything at once without processing, you won't have to deal with another disk access until you run out of data to process.

And if you fill a large chunk of memory with the entire file just to save yourself from doing a read loop, you're going to kill the performance of everything else. If the OS decides that it needs to free up memory to make room for your crap, it's going to drop pages from the other executables that you haven't run recently and possibly shove the private modified data from those processes into the pagefile. The OS's page caching is key in keeping I/O performance high on a machine, but it's not a crutch to avoid doing smart buffered reads in your app.

Practically all games are designed to run on Windows, and game developers do use optimizing compilers. There's no field that's as demanding of processing power as the gaming market; distributed computing doesn't even require a fast computer, it just requires a bunch of them running in parallel.

Games are a completely separate market. Of course they require a good optimizing compiler and, in a lot of cases, still hand-written assembly. A lot of games also bypass most of the "optimized" TCP/IP stack by using UDP and generating their own packets and protocols, but I don't see how any of this is relevant to the OP's thread.
 

itachi

Senior member
Aug 17, 2004
390
0
0
Originally posted by: Nothinman
You said that gcc isn't native on Windows because Windows isn't part of the open source community, which is just plain wrong.
No, what I said was that the Windows port isn't part of the GNU GCC project.
But if you're reading in the entire file at once for no reason, then it is the software's fault. Obviously 1 character and 20 MB are extremes used to show a point, and it's up to the programmer to decide what balance should be used.
That was just an example. If you load 20 MB of data into memory, then once that data is loaded, all of it can be processed regardless of I/O transaction speed; that's the point I was making. I wasn't saying to load 20 MB regardless of how fast the program can process the data and how long before it can read another block; I was just using that as an example of how I/O bottlenecks are overcome.

And if you fill a large chunk of memory with the entire file just to save yourself from doing a read loop, you're going to kill the performance of everything else. If the OS decides that it needs to free up memory to make room for your crap, it's going to drop pages from the other executables that you haven't run recently and possibly shove the private modified data from those processes into the pagefile. The OS's page caching is key in keeping I/O performance high on a machine, but it's not a crutch to avoid doing smart buffered reads in your app.
The Windows memory manager won't load up the whole file; it'll load it on a demand basis. You could tell the OS to load 1 GB worth of data, but until your program actually touches part of it, the paging scheme won't load even a quarter of it.
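A memory-mapped file shows that behavior directly; here's a sketch using the real Win32 calls (data.bin is a hypothetical filename, and error handling is mostly trimmed):

```cpp
// Sketch: map a file without reading it up front. Pages are only faulted
// in as the view is actually touched: demand paging in action.
#include <windows.h>
#include <cstdio>

int main() {
    HANDLE file = CreateFileA("data.bin", GENERIC_READ, FILE_SHARE_READ,
                              NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (file == INVALID_HANDLE_VALUE) return 1;

    HANDLE mapping = CreateFileMappingA(file, NULL, PAGE_READONLY, 0, 0, NULL);
    if (!mapping) { CloseHandle(file); return 1; }

    const char* view = (const char*)MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0);
    if (view) {
        // Until this access, none of the file body has been read; touching
        // view[0] faults in one page (plus whatever read-ahead the VM does).
        std::printf("first byte: %d\n", view[0]);
        UnmapViewOfFile(view);
    }
    CloseHandle(mapping);
    CloseHandle(file);
    return 0;
}
```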
Games are a completely separate market. Of course they require a good optimizing compiler and, in a lot of cases, still hand-written assembly. A lot of games also bypass most of the "optimized" TCP/IP stack by using UDP and generating their own packets and protocols, but I don't see how any of this is relevant to the OP's thread.
The TCP/IP argument wasn't part of the gaming argument; I was using it to show another area where C code is optimized.
And where did you get the notion that games are written in assembly? The only "assembly" that would be practical to code would be for the graphics card, but programmers don't even have access to that; the driver and HAL handle it through an API.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
No, what I said was that the Windows port isn't part of the GNU GCC project.

Go back and read it again; you said Windows isn't part of the OSS community.

The Windows memory manager won't load up the whole file; it'll load it on a demand basis. You could tell the OS to load 1 GB worth of data, but until your program actually touches part of it, the paging scheme won't load even a quarter of it.

I understand how demand paging works, but you're still going to end up loading more than you're actively using because of read-ahead. And I don't know all of the heuristics that the Windows VM uses to determine when to page things out; it might see that you requested 20 MB and decide to page things out because it's guessing that you'll be using it in the future.

And where did you get the notion that games are written in assembly? The only "assembly" that would be practical to code would be for the graphics card, but programmers don't even have access to that; the driver and HAL handle it through an API.

I didn't mean the entire game was written in assembly.
 

itachi

Senior member
Aug 17, 2004
390
0
0
Originally posted by: Nothinman
Go back and read it again; you said Windows isn't part of the OSS community.
And I stand by that. Windows isn't part of the open source community, and because of that the GNU GCC project doesn't release Windows ports of it. MinGW is the Windows port of gcc, and it wasn't GNU that released it.
I understand how demand paging works, but you're still going to end up loading more than you're actively using because of read-ahead. And I don't know all of the heuristics that the Windows VM uses to determine when to page things out; it might see that you requested 20 MB and decide to page things out because it's guessing that you'll be using it in the future.
It doesn't page things in until the page gets requested. When the kernel requests a page, it looks in the page table for the running process; if the page exists in the table, it loads it (and if it can't find it, a page fault occurs, depending on the hierarchy level). If the page doesn't exist in the table, the kernel reads more from the disk (demand-based paging).
I didn't mean the entire game was written in assembly.
Can you show me some references? Games are predominantly dependent on memory transfer rate (communication is done through memory for the most part), and more memory-to-memory transfers reduce register pressure significantly, which would improve performance. C++ compilers always keep variables in memory, so I can't imagine how assembly-optimized code would be beneficial.
 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
And I stand by that. Windows isn't part of the open source community, and because of that the GNU GCC project doesn't release Windows ports of it. MinGW is the Windows port of gcc, and it wasn't GNU that released it.

Then why are there ports for so many other systems that aren't a part of the OSS community? Sun, HP/DEC/Compaq, SGI, etc. all have commercial compilers that they would rather sell you for their systems, so why would they have any part in helping the GNU project get gcc working on their systems?

It doesn't page things in until the page gets requested. When the kernel requests a page, it looks in the page table for the running process; if the page exists in the table, it loads it (and if it can't find it, a page fault occurs, depending on the hierarchy level). If the page doesn't exist in the table, the kernel reads more from the disk (demand-based paging).

As I said, I know how demand paging works. But the NT VM has to have some heuristics to decide when it needs to start evicting things from physical memory, and I can't believe it would just be something simple like testing whether > X% of memory is used. So even though your file may be demand-paged (with the read-ahead that you seem to be ignoring), the VM is also looking at the rest of the system to decide whether or not it should free up some physical memory, which will result in pages being evicted from memory. This means either other executables' and libraries' pages or some process's private writable data will have to be stored in the pagefile, but the effect is the same: when you switch back to another process, more data will need to be paged back into memory and the system will seem sluggish.

Can you show me some references? Games are predominantly dependent on memory transfer rate (communication is done through memory for the most part), and more memory-to-memory transfers reduce register pressure significantly, which would improve performance. C++ compilers always keep variables in memory, so I can't imagine how assembly-optimized code would be beneficial.

There are only two or three companies that have ever released the source code of their old games: id and 3D Realms. id was going to release the Q3 code recently, but it was delayed since they had just licensed the engine out again; I'm not sure if they ever released it after that. Q2 was released a while back, though it's like 7 years old now.

Looking at Q2, there's assembly code for screen copying, polygon model drawing, alias model transform and projection, horizontal 8-bpp span drawing with 16-pixel subdivision, edge clipping and emission, edge processing, turbulent texture mapping, horizontal 8-bpp transparent span drawing, 8-bpp surface block drawing, and sound.
 

itachi

Senior member
Aug 17, 2004
390
0
0
Originally posted by: Nothinman
Then why are there ports for so many other systems that aren't a part of the OSS community? Sun, HP/DEC/Compaq, SGI, etc. all have commercial compilers that they would rather sell you for their systems, so why would they have any part in helping the GNU project get gcc working on their systems?
What are you even arguing anymore? Windows is a completely different architecture from Linux. Linux was designed to be compatible with Unix; Solaris, HP-UX, AIX, and IRIX are all licensed variants of Unix. Windows has no association with Unix. Sun, HP/DEC/Compaq, and SGI don't contribute to gcc; people who understand their architectures contribute (while this may not have been true in the past, it is now).
As I said, I know how demand paging works. But the NT VM has to have some heuristics to decide when it needs to start evicting things from physical memory, and I can't believe it would just be something simple like testing whether > X% of memory is used. So even though your file may be demand-paged (with the read-ahead that you seem to be ignoring), the VM is also looking at the rest of the system to decide whether or not it should free up some physical memory, which will result in pages being evicted from memory. This means either other executables' and libraries' pages or some process's private writable data will have to be stored in the pagefile, but the effect is the same: when you switch back to another process, more data will need to be paged back into memory and the system will seem sluggish.
The Windows VMM uses a modified FIFO algorithm to handle paging. The demand-based paging scheme deals with the working set of a process: whenever a program changes its working set, the older set is removed from the active paged pool and the newer set is added. The newer set consists of pages from both the zero and free page lists. When a program requests a new empty page for both reading and writing, the memory manager looks in the zero page list; if it's empty, it checks the free page list for a page to zero, and if that's empty too, it looks for the oldest page and swaps it out of main memory. When it requests an existing page that exists in the page file, it loads that page and adjacent pages along with it. Windows will rarely, if ever, swap out the main pages for a running process.
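A toy model of the allocation order described above (purely illustrative pseudo-implementation, nothing like NT's actual code; it just acts out zero list, then free list, then oldest-page eviction):

```cpp
// Toy model: zero list first, then a free-list page zeroed on demand,
// then evict the oldest resident page (FIFO). Illustrative only.
#include <deque>
#include <cstdio>

struct Pager {
    std::deque<int> zeroList, freeList, residentFifo;  // page frame numbers

    int getZeroedPage() {
        if (!zeroList.empty()) {                 // 1. ready-to-use zeroed page
            int p = zeroList.front(); zeroList.pop_front();
            return claim(p);
        }
        if (!freeList.empty()) {                 // 2. free page, zero it now
            int p = freeList.front(); freeList.pop_front();
            std::printf("zeroing page %d\n", p);
            return claim(p);
        }
        int victim = residentFifo.front();       // 3. evict the oldest page
        residentFifo.pop_front();
        std::printf("swapping out page %d\n", victim);
        return claim(victim);
    }

    int claim(int p) { residentFifo.push_back(p); return p; }
};

int main() {
    Pager pg;
    pg.zeroList.push_back(0);
    pg.freeList.push_back(1);
    for (int i = 0; i < 3; ++i)
        std::printf("got page %d\n", pg.getZeroedPage());
    return 0;
}
```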
There are only two or three companies that have ever released the source code of their old games: id and 3D Realms. id was going to release the Q3 code recently, but it was delayed since they had just licensed the engine out again; I'm not sure if they ever released it after that. Q2 was released a while back, though it's like 7 years old now.

Looking at Q2, there's assembly code for screen copying, polygon model drawing, alias model transform and projection, horizontal 8-bpp span drawing with 16-pixel subdivision, edge clipping and emission, edge processing, turbulent texture mapping, horizontal 8-bpp transparent span drawing, 8-bpp surface block drawing, and sound.
Didn't know that; good to know.
 

CTho9305

Elite Member
Jul 26, 2000
9,214
1
81
Geeks :p

What are you even arguing anymore? Windows is a completely different architecture from Linux. Linux was designed to be compatible with Unix; Solaris, HP-UX, AIX, and IRIX are all licensed variants of Unix. Windows has no association with Unix.
Windows has a POSIX subsystem... not that anything other than SFU actually uses it. (Cygwin uses its own wrappers to run programs as Win32 apps.)

Or on the more advanced/non-free side there's MS VC++, but the only thing I'm worried about is: if I start using that, am I limited to Microsoft code, like MFC?
Nope. You can compile all kinds of code. The compiler doesn't really affect that - I think you can use Win32 API calls from gcc (Cygwin, MinGW) too if you want.
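Something like this builds the same way with either toolchain (a minimal sketch; MinGW links user32 by default, while MSVC wants user32.lib on the command line):

```cpp
// Sketch: a raw Win32 API call, no MFC/OWL involved.
// MinGW:  g++ hello.cpp -o hello.exe
// MSVC:   cl hello.cpp user32.lib
#include <windows.h>

int WINAPI WinMain(HINSTANCE, HINSTANCE, LPSTR, int) {
    MessageBoxA(NULL, "Hello from the raw Win32 API", "Demo", MB_OK);
    return 0;
}
```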
 

EagleKeeper

Discussion Club Moderator, Elite Member
Staff member
Oct 30, 2000
42,589
5
0
Originally posted by: CTho9305

Or on the more advanced/non-free side there's MS VC++, but the only thing I'm worried about is: if I start using that, am I limited to Microsoft code, like MFC?
Nope. You can compile all kinds of code. The compiler doesn't really affect that - I think you can use Win32 API calls from gcc (Cygwin, MinGW) too if you want.

If you write code using the Microsoft Foundation Classes (MFC), your code will be closely tied to a compiler that can support those classes.

The same goes for Borland if you use the OWL classes. Some compilers will work with both (Watcom did so in the '90s, and its resulting code was actually tighter than both MS's and Borland's).

However, if you write generic code against the basic Windows header files, you can have a Windows program that is not compiler-specific.

Likewise, you can use a Borland/Microsoft/etc. compiler to write generic code that links against specific libraries, with those libraries intended to target a given platform.

There are some third-party vendors that provide libraries which allow you to target a given OS platform at link time. Your source code does not have to be modified.

However, you must develop your code to call the specific interface libraries that give you that flexibility.

So again you are tying code to something (third party) for ease of use.

 

Nothinman

Elite Member
Sep 14, 2001
30,672
0
0
What are you even arguing anymore? Windows is a completely different architecture from Linux. Linux was designed to be compatible with Unix; Solaris, HP-UX, AIX, and IRIX are all licensed variants of Unix. Windows has no association with Unix. Sun, HP/DEC/Compaq, and SGI don't contribute to gcc; people who understand their architectures contribute (while this may not have been true in the past, it is now).

The architecture is irrelevant; as I mentioned, gcc supported VMS at one point (not sure if it still does), and that's nothing like Unix and definitely not OSS. And as was mentioned, Windows has a POSIX subsystem, and it was installed and enabled by default at least up to Win2K.

windows will rarely, if ever, swap out the main pages for a running process.

My experiences running Windows seem to say otherwise.