
Using C++ for web apps?

Red Squirrel

No Lifer
Anyone use C++ directly instead of PHP etc. for web apps? What is the best way to implement that, via CGI? Is the whole thing just compiled into a single binary, with query strings used as normal? I imagine you have to code everything from the ground up, such as handling HTTP headers (send and receive). I like C++ for its nice OOP, as well as the fact that a compiled app in a lower-level language is always going to be much faster and thus handle heavy traffic or spikes better.

Been toying with the idea of redesigning my sites, possibly using C++ for the back end. It's overkill as far as performance requirements go, given I don't have much traffic, but I figure, why not. Just curious if anyone has done this before and whether you'd recommend it, have recommendations on how to go about it, or think it's just a terrible idea.
 
You could use C++. You don't have to use CGI, however; you could open a socket, listen, spawn a thread off the back of each request, and produce a tiny little app server.

While it's possible, you aren't going to get it to be faster than Java and its ecosystem of libraries. Why? Because enormous amounts of time have been spent optimising every aspect of the JVM and its libraries to produce very high levels of performance. Not only would a JVM do this faster, it would be a lot easier. I personally wouldn't bother; C++ is a square peg for a round hole.

Go is quite an interesting language if what you need is a website with some system access: it's systems-capable but with a lot less general programming overhead. However, if you want system access in a website, I would want to stick that behind a serious security API rather than give the entire site the ability.
 
I used C++ EXEs on one IIS site to generate data plots as JPEG images, but the HTML pages and main logic were done in ASP (not ASPX, this was in 2000).

I'd never reinvent the wheel and do the main site in C++ though, too much extra work.
 
If I were doing a new webapp and wanted OOP, I'd use either Groovy or Java for the backend. I've been getting my feet wet with Groovy recently at work and it's an interesting language - basically a shorthand way to write Java. And if you want to drop some Java into the code, you can do that as well.

Then use JavaScript for the front-end stuff.
 
While it's possible, you aren't going to get it to be faster than Java and its ecosystem of libraries. Why? Because enormous amounts of time have been spent optimising every aspect of the JVM and its libraries to produce very high levels of performance. Not only would a JVM do this faster, it would be a lot easier. I personally wouldn't bother; C++ is a square peg for a round hole.


Have you ever looked at the task manager while Minecraft or some other Java app is running? :biggrin: Java is probably one of the worst languages from a performance standpoint. 😛 For a web app it would definitely be fine, but performance is definitely its weakness.

My main attraction to C++ is that I know it somewhat well, and I already have a lot of reusable code on hand from previous projects. Though I guess I should force myself to learn new languages too.
 
PHP is pretty close to C++; the main debugging annoyances are that it's not type-safe and that it doesn't check that variables exist before you use them. You can also mix PHP and raw HTML on the same page.
 
Even if you could get your server-side code to be 500% more efficient than Java, it would still go unnoticed, as the latency of the internet and of database connections are the biggest performance hits.
 
Yup. This is why Python and Ruby aren't terrible ideas for web languages. They are anywhere from 10-100x slower than C++, yet they still beat disk IO.

Unless you are doing some really complex stuff in each endpoint, there is really no reason to use a low-level language like C++.

That said, modern C++11 looks pretty nice. It goes a long way toward making C++ less of a beast to work with.
 
Have you ever looked at the task manager while Minecraft or some other Java app is running? :biggrin: Java is probably one of the worst languages from a performance standpoint.

That's BS. Yes, it might use more memory, but the actual performance is pretty much on par with C++. Do you also know that you can configure how much memory Java is allowed to use? And that the JVM reserves a certain amount of memory, and that amount is displayed in the task manager, but the actual app might use only half of it?

I used to believe this too, but that was 10 years ago, when 8 GB of RAM wasn't $50.
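For reference, the heap ceiling mentioned above is set with the standard `-Xms`/`-Xmx` JVM flags (the jar name here is just a placeholder):

```shell
# Start with a 256 MB heap, never let it grow past 512 MB.
java -Xms256m -Xmx512m -jar myapp.jar
```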
 
I think it comes down to whether it is something you want the browser to render or the server to render. If the browser, then xsi/javascript/java; if the server, whatever language you prefer (perl, python, php, c++, java (servlet)). It is true that some languages have better interface/support libraries for some specific web applications, but at the end of the day I don't know how much it matters. The problem is that if you use off-the-shelf building blocks and (a) you don't keep them up to date and (b) a security flaw is found, then (c) your machine will be hacked.

This doesn't mean that home-grown solutions are invulnerable (security by obscurity is pretty weak), but it does mean that cookie-cutter hacks are less likely to impact your infrastructure.
-
Me? I've used a bit of them all over the years, from hacking the server-side stuff directly into the apache pipeline to writing a custom server from the ground up. Language-wise I'm partial to java/c++, but php and python are quite popular. For client-side stuff javascript/xsi has a lot of benefits, though I personally disable javascript in my browser; most folks don't bother.
-
As to performance, C++ is faster than java, but this is somewhat irrelevant because java is fast enough for 99% of what is done. Computers today are just plain fast as long as you're not doing too much I/O - at least for web stuff. Anyway, it is impossible to answer your question because 'web stuff' is too generic; the choice (at least for myself) is heavily influenced by the specifics. While I noted above that I tend to favor C++, for animation it probably wouldn't be my first choice, and I would be reluctant to use perl for security reasons.
-
 
For our apps (and I'm fairly sure this is true of most web apps), 99% of our time is spent in the database. Only a small fraction of our rendering time is actually spent in java (even when going through some of the craziest code paths).

Processing power is plentiful right now. What isn't is IO and network speed. Until that changes, it really doesn't matter what language you choose (unless you end up writing the next Twitter or Facebook - but you can't plan for that).
 
That's BS. Yes, it might use more memory, but the actual performance is pretty much on par with C++. Do you also know that you can configure how much memory Java is allowed to use? And that the JVM reserves a certain amount of memory, and that amount is displayed in the task manager, but the actual app might use only half of it?

I used to believe this too, but that was 10 years ago, when 8 GB of RAM wasn't $50.

Actually I'm talking about CPU usage more than RAM. Even on my Core i7, a Java app like Minecraft makes a pretty decent dent in it, when much more complex apps written in C++ or even C# or VB aren't as bad.

For a website this matters less, but I don't see the benefit of using Java at all, when even PHP would be faster.
 
AFAIK Minecraft is pretty badly written, so it's not a fair benchmark; I could just as easily make you a crappy, slow app in C++ full of memory leaks.
 
CPU usage is actually getting more relevant across most application domains, I think. Where it used to be evaluated in the context of a single dedicated server, measuring what percentage of that machine's CPU cycles were in use at any given time, it is now a matter of "how much billed-by-the-cycle cloud computing resources are you consuming?" It was more or less a common belief that for web apps, spending most of their time waiting on requests, CPU was rarely a significant factor. But if you're scaling up a big web app with a large user base the amount of CPU on average consumed by the server processes might be very significant.
 
Yup. This is why Python and Ruby aren't terrible ideas for web languages. They are anywhere from 10-100x slower than C++, yet they still beat disk IO.

Unless you are doing some really complex stuff in each endpoint, there is really no reason to use a low-level language like C++.

That said, modern C++11 looks pretty nice. It goes a long way toward making C++ less of a beast to work with.

I include user input under 'I/O', along with disk/network. If you include the keyboard bottleneck, you really don't need most of the CPU-efficient languages (most people just call CPU performance "performance", but I see performance holistically).

That said, if you're just dealing with data (and many apps are), then your DB is doing most of the work, and what it's programmed in/on is almost entirely irrelevant. It depends on the use case.

In summary though, it's not 1989 anymore. More important than 'speed' (CPU crunching) are fault-tolerant, multi-node web services. You don't really need C++ to get a 386DX-16 running well.
 
CPU usage is actually getting more relevant across most application domains, I think. Where it used to be evaluated in the context of a single dedicated server, measuring what percentage of that machine's CPU cycles were in use at any given time, it is now a matter of "how much billed-by-the-cycle cloud computing resources are you consuming?" It was more or less a common belief that for web apps, spending most of their time waiting on requests, CPU was rarely a significant factor. But if you're scaling up a big web app with a large user base the amount of CPU on average consumed by the server processes might be very significant.

Of all the large scale* web apps I've worked on, the biggest performance gains were always from SQL optimization, DB indexing, and caching. I am sure for sites like Amazon and Google, code efficiency can play a part, but for the majority of web apps, it just isn't a priority.


*I believe the smallest had a user base of 350,000 users. That seems large scale, but I really have no idea exactly where it fits in the big picture of web apps.
 
Actually I'm talking about CPU usage more than RAM. Even on my Core i7, a Java app like Minecraft makes a pretty decent dent in it, when much more complex apps written in C++ or even C# or VB aren't as bad.

For a website this matters less, but I don't see the benefit of using Java at all, when even PHP would be faster.

Client side Java performance is not comparable to server side Java performance.
 
Yeah, I am thinking more in terms of cost than performance. There's no question that an individual worker thread is going to spend most of its time waiting on input, the DB, etc. But scale this up across a few hundred or thousand workers, all running on a pay-as-you-go cloud platform, and maybe it's a bigger deal how much CPU each is using. I'm just thinking that the world is changing. It used to be that we bought a CPU, and how much of it we used was really only relevant from a performance perspective (and power utilization). Now, increasingly, we'll be buying a shared slice of a CPU, so effectively every additional cycle has a cost implication.
 
I don't see the pay-as-you-go model taking off in big business, though. Having your own dedicated servers will always be a better deal, even if it costs more up front. What if I am sharing CPU with someone who crashes the server, or my performance is impacted through no fault of my own? I just can't see a business ignoring these concerns until we have supercomputers running datacenters and hosting websites. I could be wrong, though.

I forgot to mention in my other post: there are some trading programs that require the most efficient code and performance possible. From what I've been told, they even position their servers geographically as close to the Wall St. servers as possible to minimize latency. But I guess when a second of delay could potentially cost millions, you do what you can.
 
More like when a few microseconds cost you a few dollars!
 
What if I am sharing CPU with someone who crashes the server, or my performance is impacted through no fault of my own? I just can't see a business ignoring these concerns until we have supercomputers running datacenters and hosting websites. I could be wrong, though.

Current cloud platforms like Azure and EC2 already deal with that issue. You're running in a VM and getting billed for the amount of host CPU it uses. I think businesses will move more toward cloud deployment in the future as well; the benefits are too striking. Some will rent servers from Amazon and the like, and some will build their own. The latter group is likely to be financial institutions and others with significant regulatory privacy and security concerns. But, private or public, the cloud model will likely win out, and if it isn't Microsoft billing you for time on Azure, it will be your own global IT group charging you for time on the company cloud.
 
So, in summary, efficiency is always better than inefficiency...
but if you actually profile 90% of the applications out there, they are waiting on I/O of some sort, making a swap from Ruby to Scala irrelevant.
(Ruby on Rails being what Twitter was originally built on, and Scala being what it's built with now.)

My theory was always that if I created the next Twitter, my CPU performance wouldn't matter or be a bottleneck until I reached the popularity of Twitter.
At that point, I'd pay someone much smarter than me to rewrite it in Scala with one of my millions of dollars. Or whatever is better than Scala and seems to be the top dog.
 