GPGPU use for LAMP/WAMP... is it feasible?

phaxmohdem

Golden Member
Aug 18, 2004
1,839
0
0
www.avxmedia.com
With the recent release of the GTX280 and the upcoming ATI launches, there has been a lot of talk about the GPGPU capabilities of these cards using CUDA and similar programming languages.

I was wondering if there is an existing port of Apache/MySQL/PHP for a GPGPU, or if this is even possible.

As an operator of my own WAMP web server from home, I think it would be interesting to be able to pop in a relatively cheap 8800GT, for example, and use it as a coprocessor to boost database response times, or to assist Apache by offering more threads to serve simultaneous pages.

I do realize that for larger databases the limited memory of a graphics card could be a limiting factor, and I also realize that the data a web server deals with may not be very compatible with the floating-point nature of graphics cards. I'm just curious whether this has been discussed or tried, and what pitfalls would have to be overcome.
 

drebo

Diamond Member
Feb 24, 2006
7,034
1
81
It probably can and will be done, but most likely more as proof of concept than any actual practical use.
 

scootermaster

Platinum Member
Nov 29, 2005
2,411
0
0
Originally posted by: phaxmohdem
With the recent release of the GTX280 and the upcoming ATI launches, there has been a lot of talk about the GPGPU capabilities of these cards using CUDA and similar programming languages.

I was wondering if there is an existing port of Apache/MySQL/PHP for a GPGPU, or if this is even possible.

As an operator of my own WAMP web server from home, I think it would be interesting to be able to pop in a relatively cheap 8800GT, for example, and use it as a coprocessor to boost database response times, or to assist Apache by offering more threads to serve simultaneous pages.

I do realize that for larger databases the limited memory of a graphics card could be a limiting factor, and I also realize that the data a web server deals with may not be very compatible with the floating-point nature of graphics cards. I'm just curious whether this has been discussed or tried, and what pitfalls would have to be overcome.

I do GPGPU stuff, and I seriously doubt you'd get much of any benefit from using a GPU for those sorts of applications. The dynamic, ephemeral nature of web requests isn't really a good match for what a GPU can do. Unless we're talking huge scale here, like Google with their MapReduce framework or something.
 

phaxmohdem

Golden Member
Aug 18, 2004
1,839
0
0
www.avxmedia.com
@scootermaster

I'm actually kind of interested in learning GPGPU programming myself. I've got several 8-series GeForce cards I could use with CUDA; what would you suggest as a good starting point for a GPU programming n00b such as myself? I've got basic Java/VB skills and a fair understanding of data structures, I've just never messed with GPU-specific tasks before.
 

scootermaster

Platinum Member
Nov 29, 2005
2,411
0
0
Originally posted by: phaxmohdem
@scootermaster

I'm actually kind of interested in learning GPGPU programming myself. I've got several 8-series GeForce cards I could use with CUDA; what would you suggest as a good starting point for a GPU programming n00b such as myself? I've got basic Java/VB skills and a fair understanding of data structures, I've just never messed with GPU-specific tasks before.

Well, I would print out the nvcc and general CUDA documentation and read through them.

On the one hand, there really isn't anything to "CUDA" programming. It's just C (or C++ converted to C). On the other hand, actually getting the GPU (or "device", as it's called) to do anything takes some doing. And, naturally, getting it to do anything efficiently, and faster than a normal computer (or "host", as they say), well, that's the tricky part.
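
Just to give you an idea of what I mean by "host" and "device", the canonical first program is something like a vector add. This is only a sketch (the array size, names, and the 256-thread block are arbitrary, and I've left out error checking), but it compiles with nvcc and shows the split:

// vecadd.cu -- minimal sketch of the host/device split, nothing tuned
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// device code: each thread handles one element
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // host side: ordinary memory, filled on the CPU
    float *ha = (float *)malloc(bytes), *hb = (float *)malloc(bytes), *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // device side: allocate GPU memory and copy the inputs over
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // launch a grid of blocks, 256 threads per block
    int threads = 256, blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(da, db, dc, n);

    // copy the result back and spot-check it
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}

Getting that far is easy; making something like it actually beat the CPU is where all the work is.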

So, it depends... getting something to compile with CUDA shouldn't take that long. There are examples and tutorials. But hacking Apache so that its thread pools map onto CUDA thread grids/warps, and knowing how many registers to allocate, and whether or not you can get by with 24-bit or 16-bit arithmetic and all that... that's more complicated (read: impossible).

People seem to think GPGPU programming is going to save the world. The reality is that it really only shines in certain situations. The real boon will be just accessing the GPU as if it were another core (i.e. "well, our two cores are busy doing stuff, but hey, why don't we use the GPU... it's not doing anything right now"). That's what's going to happen with things like Grand Central and OpenCL. They'll use the GPU because why the heck not, but it's not like all of a sudden normal everyday tasks are going to get much faster just because GPUs are massively parallel, or have fast shared memory, or any of the other things people talk about that somehow make GPUs "better".

(You see a lot of graphs about the peak FLOPS of GPUs versus CPUs, and how they're diverging. Well, yes, they are. But it's not like the people at Intel are stupid. There's a reason CPUs are made the way they are. So it's dumb to look at GPUs as "better".)

I mean, it's not as if inherently non-parallel tasks (or things that aren't obviously parallel) are suddenly going to run at full throughput on GPUs just because they're dispatched there.

So, in conclusion, I'll answer your question with another question: Do you just want to get your GPU to "do" something, or do you want it to do something that's really useful to do on a GPU?

If you really want to get into it, you're going to have to familiarize yourself with all the levels of parallelism (application level, task level, instruction level, etc.) and see what you can come up with that will make your application (or, sigh, a "kernel", as they call it) shine on a GPU.
 

phaxmohdem

Golden Member
Aug 18, 2004
1,839
0
0
www.avxmedia.com
Wow, thank you for the detailed reply. I have no intention of messing with Apache or similar applications, as I know I would be about 20,000 leagues over my head :)

I was actually in the middle of a fairly informative article just posted on Tom's Hardware when I read your reply; so far it has basically expounded on everything you just mentioned. Good read, in my n00b opinion :) [ http://www.tomshardware.com/re...dia-cuda-gpu,1954.html ]

I have no real reason to tinker with CUDA other than just that... to tinker. (I wouldn't mind having a GPU-accelerated render engine for Lightwave 3D or Maya, though, if the GPU could actually be of use in that scenario.)
 

hans007

Lifer
Feb 1, 2000
20,212
18
81
Yeah, I'm just starting to look into CUDA, and I'm fairly certain things like web servers are not what it's made for.

It seems fairly limited. I was actually thinking that one thing that might work well on it is data encryption. So I suppose if you were running a web server that had huge swaths of highly encrypted data that had to be decrypted on the fly, it could help there.
 

scootermaster

Platinum Member
Nov 29, 2005
2,411
0
0
Originally posted by: Crusty
Yeah, I bet it would make a nice SSL offload engine :)

I don't actually know much about encryption, other than, you know, prime numbers (and I had a class with Len Adleman once), but GPUs thrive in environments that are massively parallel and supremely independent. So if you could break up a web page or something into chunks that didn't depend on other chunks, and then de/encrypt those chunks independently of each other, then absolutely, GPUs would shine in that arena.
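
Structurally it would look something like the sketch below: one thread per independent chunk, with a real cipher (counter-mode AES, say) swapped in for the toy keystream I'm using purely as a placeholder. Don't take the "crypto" seriously; it's only there to show the shape of the parallelism:

// parallel_decrypt.cu -- sketch only: one thread per 16-byte chunk, toy "cipher"
// A real version would run AES-CTR or similar per chunk; the point is that
// counter-mode chunks don't depend on each other, so they map onto threads.
#include <cstdio>
#include <cstdint>
#include <cuda_runtime.h>

// toy keystream mixing the key with the chunk counter (NOT real crypto)
__device__ uint8_t keystream_byte(uint64_t key, uint64_t counter, int i) {
    uint64_t x = key ^ (counter * 0x9E3779B97F4A7C15ULL) ^ (uint64_t)i;
    x ^= x >> 33; x *= 0xFF51AFD7ED558CCDULL; x ^= x >> 33;
    return (uint8_t)x;
}

__global__ void decryptChunks(uint8_t *data, size_t nbytes, uint64_t key) {
    size_t chunk = blockIdx.x * blockDim.x + threadIdx.x;  // one thread = one chunk
    size_t offset = chunk * 16;
    if (offset >= nbytes) return;
    for (int i = 0; i < 16 && offset + i < nbytes; ++i)
        data[offset + i] ^= keystream_byte(key, chunk, i);  // XOR keystream = decrypt
}

int main() {
    const size_t nbytes = 1 << 20;        // pretend this is one big encrypted response
    uint8_t *d_data;
    cudaMalloc(&d_data, nbytes);
    cudaMemset(d_data, 0xAB, nbytes);     // stand-in for ciphertext copied from the host

    size_t nchunks = (nbytes + 15) / 16;
    int threads = 256;
    int grid = (int)((nchunks + threads - 1) / threads);
    decryptChunks<<<grid, threads>>>(d_data, nbytes, 0xDEADBEEFCAFEBABEULL);
    cudaDeviceSynchronize();

    printf("processed %zu bytes as %zu independent chunks\n", nbytes, nchunks);
    cudaFree(d_data);
    return 0;
}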

So, go implement it! At worst, you can publish a paper about it; better yet, get a patent and start a company!

 

Crusty

Lifer
Sep 30, 2001
12,684
2
81
Originally posted by: scootermaster
Originally posted by: Crusty
Yeah, I bet it would make a nice SSL offload engine :)

I don't actually know much about encryption, other than, you know, prime numbers (and I had a class with Len Adleman once), but GPUs thrive in environments that are massively parallel and supremely independent. So if you could break up a web page or something into chunks that didn't depend on other chunks, and then de/encrypt those chunks independently of each other, then absolutely, GPUs would shine in that arena.

So, go implement it! At worst, you can publish a paper about it; better yet, get a patent and start a company!

Well, serving web requests is by nature very parallel (think thousands of requests a second). So if you're handling a high volume of requests, using the power of the chip to do the encryption should provide a fairly good boost.
 

DaveSimmons

Elite Member
Aug 12, 2001
40,730
670
126
zlib compression of multiple pages at once is one idea: similar to encryption, but something that's much more widely used.

It's a simple, fixed set of instructions for the compression, so it makes more sense than trying to parallelize general web server tasks. It's also a distinct stage in the server pipeline, and the data (page responses) could be batched together.
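
Something like this host-side sketch of that stage (plain zlib per page; the page contents are made up). Each page compresses independently, which is exactly the property you'd need before handing the loop off to a GPU or a thread pool:

/* batch_deflate.c -- sketch of a "compress the whole batch" stage using zlib.
   Each page is independent of the others; here it's a plain loop just to show
   the structure of the stage. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <zlib.h>

#define BATCH 4

int main(void) {
    /* stand-ins for a batch of rendered page responses */
    const char *pages[BATCH] = {
        "<html><body>page one</body></html>",
        "<html><body>page two</body></html>",
        "<html><body>page three</body></html>",
        "<html><body>page four</body></html>",
    };

    for (int i = 0; i < BATCH; ++i) {
        uLong srcLen = (uLong)strlen(pages[i]);
        uLongf dstLen = compressBound(srcLen);   /* worst-case compressed size */
        Bytef *dst = malloc(dstLen);

        /* no page depends on any other page, so the iterations are independent */
        if (compress2(dst, &dstLen, (const Bytef *)pages[i], srcLen, Z_BEST_SPEED) == Z_OK)
            printf("page %d: %lu -> %lu bytes\n", i, srcLen, (unsigned long)dstLen);

        free(dst);
    }
    return 0;
}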
 

degibson

Golden Member
Mar 21, 2008
1,389
0
0
Originally posted by: Crusty
Originally posted by: scootermaster
Originally posted by: Crusty
Yeah, I bet it would make a nice SSL offload engine :)

I don't actually know much about encryption, other than, you know, prime numbers (and I had a class with Len Adleman once), but GPUs thrive in environments that are massively parallel and supremely independent. So if you could break up a web page or something into chunks that didn't depend on other chunks, and then de/encrypt those chunks independently of each other, then absolutely, GPUs would shine in that arena.

So, go implement it! At worst, you can publish a paper about it; better yet, get a patent and start a company!

Well, serving web requests is by nature very parallel (think thousands of requests a second). So if you're handling a high volume of requests, using the power of the chip to do the encryption should provide a fairly good boost.

Web serving is very control-parallel. GPUs are really designed for tons of data-level parallelism (e.g. SIMD).
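
To make that concrete: threads in a warp that take different branches get serialized, so "every request does its own thing" wastes most of the chip, while "do the same thing to every element" keeps it busy. Rough illustration (the work inside the branches is made up):

// divergence.cu -- rough illustration of data-parallel vs. control-parallel work
#include <cuda_runtime.h>

// data-parallel: every thread applies the same operation to its own element -- ideal
__global__ void scaleAll(float *x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= 2.0f;
}

// "control-parallel": each thread picks its own code path, the way distinct web
// requests would. Threads in the same warp that diverge are executed serially.
__global__ void perRequest(const int *requestType, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    switch (requestType[i]) {                      // divergent branch: the warp walks
        case 0:  out[i] = 1.0f;            break;  // through each taken case in turn
        case 1:  out[i] = out[i] * out[i]; break;
        case 2:  out[i] = -out[i];         break;
        default: out[i] = 0.0f;            break;
    }
}

int main() {
    const int n = 1024;
    float *x; int *t;
    cudaMalloc(&x, n * sizeof(float));
    cudaMalloc(&t, n * sizeof(int));
    cudaMemset(x, 0, n * sizeof(float));
    cudaMemset(t, 0, n * sizeof(int));
    scaleAll<<<4, 256>>>(x, n);
    perRequest<<<4, 256>>>(t, x, n);
    cudaDeviceSynchronize();
    cudaFree(x); cudaFree(t);
    return 0;
}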