Nvidia to launch dedicated GPGPU brand

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
http://www.thestreet.com/_dm/smallbusinesstech/smallbusinesstech/10347867.html
-in PART-

In June, Nvidia will launch a new brand dedicated to selling the G80 -- not as a graphics accelerator for PCs and workstations, but as a chip intended to take on data-crunching computing chores currently handled by microprocessors.

Nvidia calls the concept "GPU computing" and contends that the 128 individual "stream" processors packed into the G80 chip make it ideally suited for such computational heavy lifting.

According to CEO Jen-Hsun Huang, the G80 boasts 10 times the floating point computational muscle of today's top-of-the-line PC microprocessor.

"We believe GPU computing will usher in an era of the personal supercomputer, and will dramatically accelerate the adoption of new methods from computational chemistry to computational finance to computational genomics," Huang said in a February conference call with financial analysts.

The effort is being spearheaded by Andy Keane, who joined Nvidia last year and has worked at microprocessor outfits like Intel (INTC) and MIPS Technologies (MIPS).

Last year, speculation grew that Nvidia was secretly developing a PC microprocessor based on the x86 instruction set to better compete with the recently merged Advanced Micro Devices (AMD) and ATI Tech, which fields both graphics chips and microprocessors, as well as Intel, which already has both capabilities.
i thought i'd start a new thread

the old one is here

Nvidia at work on combined CPU with graphic - On 65nm in 2008



*confirmed*


and toldjaso :p

:Q


BiG NEWS!

 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
sadly, it's not equipped with any of the coding logic present in all x86 CPUs, nor is the chip even x86... so, are we to expect program coders to make their software, and Microsoft's OS, compatible with this new CPU as well as x86?
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
Posting this in one forum wasn't enough?

Say it after me:
Rebranded Nvidia graphics card.

That's all this is.
 

destrekor

Lifer
Nov 18, 2005
28,799
359
126
Originally posted by: Phynaz
Posting this in one forum wasn't enough?

Say it after me:
Rebranded Nvidia graphics card.

That's all this is.

careful how you categorize it... because it's not for graphical computations, at least that's what I get out of it.
 

Aluvus

Platinum Member
Apr 27, 2006
2,913
1
0
Originally posted by: destrekor
sadly, it's not equipped with any of the coding logic present in all x86 CPUs, nor is the chip even x86... so, are we to expect program coders to make their software, and Microsoft's OS, compatible with this new CPU as well as x86?

I suspect the idea is more to market the chips as general-purpose coprocessors that can take on tasks that processors are not as good at, much in the way that Folding@Home has developed a client that can offload certain types of work units onto certain GPUs for large performance gains.

IOW, the idea is to supplement x86 rather than replace it.

In that regard it isn't really that different, from the software side, from supporting x87, SSE, or some other x86 extension.
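To make the coprocessor idea concrete, here is a minimal sketch of what such an offload looks like with the CUDA toolkit Nvidia ships for the G80 (CUDA comes up later in this thread); the kernel, array size, and names are purely illustrative and not anything Nvidia has announced for the new brand.

#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

// Illustrative data-parallel kernel: each GPU thread scales one element.
// The host program remains ordinary x86 code; only this function runs on the GPU.
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main(void)
{
    const int n = 1 << 20;              // 1M floats, size chosen arbitrarily
    size_t bytes = n * sizeof(float);

    float *host = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i)
        host[i] = (float)i;

    // Copy the working set to the coprocessor, run the kernel, copy back.
    float *dev;
    cudaMalloc((void **)&dev, bytes);
    cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);

    scale<<<(n + 255) / 256, 256>>>(dev, 2.0f, n);
    cudaMemcpy(host, dev, bytes, cudaMemcpyDeviceToHost);

    printf("host[42] = %f\n", host[42]);   // expect 84.0

    cudaFree(dev);
    free(host);
    return 0;
}

The CPU still runs the OS and everything else; the GPU is only handed the data-parallel inner loop, which is the sense in which it supplements rather than replaces x86.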
 

kobymu

Senior member
Mar 21, 2005
576
0
0
First it says that it will be based on the x86 IS (Instruction Set), then the article goes on and on about how good it will be at parallel processing, without letting anyone forget for a moment that this is "GPU computing".

It sounds very weird.

The thing is, it is either an x86 or it isn't, i.e. it can run Windows XP or it can't, and by the looks of it, it can't.

"Based on the x86 IS" probably means it will have some similarities with the x86 IS but will not be fully compatible. This usually means that existing compilers that can produce x86 machine code will only need to be altered to some extent, so any existing code will eventually run on this GPU computer.

This can (1) save Nvidia a TON of money, mostly in R&D, by using existing, modified compilers instead of jumping through the hoops of creating a compiler from scratch. (2) It also means that a LOT of existing code will be able to run on their GPU computers without any significant changes (time to market is a very strong selling point these days).

However, this does NOT seem like it (the GPU computer) will be able to exist in its own right, i.e. this GPU computer will probably be either A. a regular computer (x86), maybe with a regular MB with slots for GCPUs, or B. some kind of embedded, semi-scalable hardware running Nvidia's proprietary OS, which will probably be a modified *nix OS (less likely but not altogether unrealistic).

/edit

Cliff note:

Originally posted by: apoppin
BiG NEWS!
Nope. :p
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Originally posted by: Phynaz
Posting this in one forum wasn't enough?

Say it after me:
Rebranded Nvidia graphics card.

That's all this is.

*No* it isn't :p

--and this is CPUs in case you haven't noticed
:roll:

--different group of posters entirely ... i posted the *other previous related* thread in both forums without a single complaint :p

anyway ... learn to read:

1. "not as a graphics accelerator for PCs and workstations, but as a chip intended to take on data-crunching computing chores currently handled by microprocessors. "

2. G80 boasts 10 times the floating point computational muscle of today's top-of-the-line PC microprocessor.

... following so far?

3. "We believe GPU computing will usher in an era of the personal supercomputer, and will dramatically accelerate the adoption of new methods from computational chemistry to computational finance to computational genomics," Huang said in a February conference call with financial analysts.

evidently it is a G80 specifically made to address the 'niche' market that needs "super computing" ... likely it is a "reworked" G80 ... *not* the one you will play Crysis with ;)

of course we need more details

stay tuned ;)
 

formulav8

Diamond Member
Sep 18, 2000
7,004
522
126
Didn't ATI/AMD already release something like this? I could have sworn I heard something like this has already been done by AMD? :confused: I could very well be wrong of course.


JAson
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Originally posted by: formulav8
Didn't ATI/AMD already release something like this? I could have sworn I heard something like this has already been done by AMD? :confused: I could very well be wrong of course.


JAson

yes ... and ... no

yes ... the x1900 series can do FoH and other computations ... and IS used for that

however, this CGPU seems to be a "specialized" g80 ... for a niche market that has "super computational" needs

not one you can just stick in your rig and play games with [like the x1900 series]
 

DAPUNISHER

Super Moderator CPU Forum Mod and Elite Member
Super Moderator
Aug 22, 2001
31,754
31,725
146
Originally posted by: kobymu
First it says that it will be based on the x86 IS (Instruction Set), then the article goes on and on about how good it will be at parallel processing, without letting anyone forget for a moment that this is "GPU computing".

It sounds very weird.

The thing is, it is either an x86 or it isn't, i.e. it can run Windows XP or it can't, and by the looks of it, it can't.

"Based on the x86 IS" probably means it will have some similarities with the x86 IS but will not be fully compatible. This usually means that existing compilers that can produce x86 machine code will only need to be altered to some extent, so any existing code will eventually run on this GPU computer.

This can (1) save Nvidia a TON of money, mostly in R&D, by using existing, modified compilers instead of jumping through the hoops of creating a compiler from scratch. (2) It also means that a LOT of existing code will be able to run on their GPU computers without any significant changes (time to market is a very strong selling point these days).

However, this does NOT seem like it (the GPU computer) will be able to exist in its own right, i.e. this GPU computer will probably be either A. a regular computer (x86), maybe with a regular MB with slots for GCPUs, or B. some kind of embedded, semi-scalable hardware running Nvidia's proprietary OS, which will probably be a modified *nix OS (less likely but not altogether unrealistic).

/edit

Cliff note:

Originally posted by: apoppin
BiG NEWS!
Nope. :p
You reference a "GPU computer", but wouldn't it make more sense for this to be a drop-in co-processor for Torrenza, and whatever Intel calls the open socket standard they announced at their developer forum?
A. a regular computer (x86), maybe with a regular MB with slots for GCPUs
Seems like that is where your thoughts were going, but perhaps you aren't familiar with what the Torrenza initiative entails?
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
<<Seems like that is where your thoughts were going, but perhaps you aren't familiar with what the Torrenza initiative entails?>>

are you talking AMD's *Fusion*?
:confused:

nope ... *evidently* this "new brand" is a re-marked and repackaged G80 targeted specifically for the "Pro" 'super-computing' niche market
--obviously it is *optimized* for its new job ... [since g80 cannot currently be used for FoH, for example]
... and will be really expensive

no doubt it will work with CPUs :p
 

DAPUNISHER

Super Moderator CPU Forum Mod and Elite Member
Super Moderator
Aug 22, 2001
31,754
31,725
146
Originally posted by: apoppin
<<Seems like that is where your thoughts were going, but perhaps you aren't familiar with what the Torrenza initiative entails?>>

are you talking AMD's *Fusion*?
:confused:

nope ... *evidently* this "new brand" is a re-marked and repackaged G80 targeted specifically for the "Pro" 'super-computing' niche market
--obviously it is *optimized* for its new job ... [since g80 cannot currently be used for FoH, for example]
... and will be really expensive

no doubt it will work with CPUs :p
The technology elements of Torrenza are closely related to those of the AMD Fusion project, which targets the integration of graphics processing units (or other coprocessing functions) and CPU cores onto one chip. As a programmatic distinction, Torrenza refers to external acceleration technology (including graphics processing units in PCIe slots), while Fusion refers to integrated acceleration technology.


If Nv is going to "go it alone" well, good luck with that.
 

DAPUNISHER

Super Moderator CPU Forum Mod and Elite Member
Super Moderator
Aug 22, 2001
31,754
31,725
146
I looked around a bit to dispel a little of my ignorance; I understand a little better now
Both NVIDIA and AMD/ATI want to start selling silicon in HPC clusters. This is a niche market, but it's a profitable niche precisely because the folks in that market have an insatiable appetite for performance above all else, and they have pockets deep enough to satisfy that need with whatever works. This isn't to say that the HPC market is insensitive to system price (and power consumption considerations), but in the overall price/performance equation programmer time relates to hardware costs much differently than it does outside of the HPC niche.
from Ars news back in Feb.

I saw this elsewhere
The IBM Roadrunner supercomputer will connect 16,000 Opteron processors to 16,000 Cell Broadband Engines in an effort to reach 1 Petaflop of processing power. This would make the system the fastest supercomputer in the world. However, it is not clear if this system configuration should be considered an example of a coprocessing architecture because the Opteron and Cell processors will be running independent operating systems and communicating using software-based message-passing protocols.

n/m my silly comment.


 

DAPUNISHER

Super Moderator CPU Forum Mod and Elite Member
Super Moderator
Aug 22, 2001
31,754
31,725
146
Originally posted by: apoppin
i think they *have* to ... unless they partner with intel which doesn't look to happen ;)
I see what you were getting at about CUDA now, after seeing that Ars article. I was just thinking it made more sense to leverage the open socket standards from both AMD & Intel, but Ars pointed out it is a very lucrative niche market they are targeting. Pardon me, it is just my ignorance showing, again. :)

 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
brave new world

that is what i meant by "big news"

this is just nvidia's piece of it

and AMD already has x1900 for FoH ... i expect they will aim for the pro supercomputing market the same way as nvidia with r600/r660 ;)

nvidia is looking for $500M a year from this new brand of g80 ;)

not exactly 'change'
 

kobymu

Senior member
Mar 21, 2005
576
0
0
@DAPUNISHER

While the Torrenza initiative does show great, although as yet unfulfilled, promise, I don't see any technical hurdles that could be a show-stopper for Nvidia in the backplane department, i.e. there are existing technologies that Nvidia can license from other companies to give their GCPUs a fast, reliable, high-bandwidth, low-latency interconnect. Also bear in mind that not only is Nvidia a member of the HyperTransport Consortium, it already has a working, functional implementation of it on the market (wiki). Additionally, IMHO, developing a new proprietary backplane technology is well within Nvidia's ability to accomplish, if they chose to do so.

Backplane technology will not be Nvidia's biggest challenge in this endeavor; in my opinion that lies in the software department. As I described in my previous post, a very significant if not crucial component in introducing a new ISA (Instruction Set Architecture, from wiki: "The complete specification of the interface between computer programs that have been written and the underlying computer hardware that carries out the actual work.") to the market is software. It always has been, and unless Nvidia can offer some kind of killer application (wiki), and assuming it can somehow withstand all the R&D cost of designing a new ISA (which will be a LOT if they decide to go that route), it will face tremendous difficulty penetrating a market in which a few multi-thousand-pound dinosaur companies reign supreme, big names like IBM, HP, Cray and SGI.

Going for a new ISA would be borderline insane, especially considering the type of market these processors are targeted toward (if it were an ARM-like processor, then maybe). On the other hand, going head to head with a company like Intel, with its fabrication infrastructure, would also be the equivalent of suicide (and I haven't even touched on the legal implications of such a move). Venturing into the CPU market, any kind of it, will be very difficult indeed; however, gunning for the small in-between window I described, that pseudo-x86 ISA, may be Nvidia's best gamble for the reasons I have mentioned.

If Nvidia, or any other company for that matter, can design a good GCPU, providing the appropriate peripheral hardware to support the processor shouldn't be such a difficult task; the biggest challenges will be on the software side.
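A rough sketch of the software burden being described here, under the assumption that the part is programmed through something like CUDA: until vendor tools and libraries mature, anyone shipping software for it has to keep a plain-CPU path alongside the GPU path. The function names and the workload below are made up purely for illustration.

#include <stdio.h>
#include <cuda_runtime.h>

// GPU path: data-parallel kernel (hypothetical workload).
__global__ void sum_squares_gpu(const float *x, float *partial, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        partial[i] = x[i] * x[i];
}

// CPU fallback: the same work, written the way it already exists today.
static float sum_squares_cpu(const float *x, int n)
{
    float s = 0.0f;
    for (int i = 0; i < n; ++i)
        s += x[i] * x[i];
    return s;
}

int main(void)
{
    const int n = 4096;
    float x[4096];
    for (int i = 0; i < n; ++i) x[i] = 1.0f;

    int devices = 0;
    cudaError_t err = cudaGetDeviceCount(&devices);

    if (err == cudaSuccess && devices > 0) {
        // Coprocessor present: offload the squaring, finish the sum on the CPU.
        float *dx, *dp, partial[4096];
        cudaMalloc((void **)&dx, n * sizeof(float));
        cudaMalloc((void **)&dp, n * sizeof(float));
        cudaMemcpy(dx, x, n * sizeof(float), cudaMemcpyHostToDevice);
        sum_squares_gpu<<<(n + 255) / 256, 256>>>(dx, dp, n);
        cudaMemcpy(partial, dp, n * sizeof(float), cudaMemcpyDeviceToHost);
        float s = 0.0f;
        for (int i = 0; i < n; ++i) s += partial[i];
        printf("GPU path: %f\n", s);
        cudaFree(dx); cudaFree(dp);
    } else {
        // No supported coprocessor: the old x86 code still has to work.
        printf("CPU path: %f\n", sum_squares_cpu(x, n));
    }
    return 0;
}

Every duplicated path like this is code the vendor either helps with (compilers, libraries, developer support) or leaves to the customer, which is why the software side dwarfs the backplane question.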
 

DAPUNISHER

Super Moderator CPU Forum Mod and Elite Member
Super Moderator
Aug 22, 2001
31,754
31,725
146
kobymu,

It definitely appears that the software dev is the big hurdle, and an expensive one, just as you stated. The speculation at this point seems to be that, much like AMD, they will target just those who need the performance so much that they are willing to take on the costly programming time to leverage the hardware. At least for now, and as long as the focus is so specific, couldn't they avoid the need for a killer app?

BTW, my apologies for what reads to me now as a condescending remark on my part in an earlier post.

 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Originally posted by: DAPUNISHER
kobymu,

It definitely appears that the software dev is the big hurdle, and an expensive one, just as you stated. The speculation at this point seems to be that, much like AMD, they will target just those who need the performance so much that they are willing to take on the costly programming time to leverage the hardware. At least for now, and as long as the focus is so specific, couldn't they avoid the need for a killer app?

BTW, my apologies for what reads to me now as a condescending remark on my part in an earlier post.

i don't think nvidia faces this hurdle ... their big job will probably be the BIOS ...each company that uses the "g80 super computer" probably writes their own SW ;)

i believe FoH was adapted and written for x1900 ... and it is still a video card ... i "imagine" the g80 will be more optimized for 'super computing' ... yet nvidia will not actually write the programs.

there doesn't seem to be a lot of info available yet
 

kobymu

Senior member
Mar 21, 2005
576
0
0
The problem is you need to understand the work that needs to be done behind the scenes in order to support the developers who want to utilize your hardware. Designing a good ISA is one thing; supporting it from the software side is a whole different ball game.

Current ISAs have the benefit of a middle man, the most obvious example being Microsoft Visual Studio for Windows (which ultimately supports the x86 ISA), where all a developer needs to do is go to MSDN. But don't let that fool you: CPU companies still need to support their products, or more precisely the developers for their products, even when a middle man (the software companies) exists. Here are a few examples.

HP developer portal:
http://h21007.www2.hp.com/dspp/pp/pp_Overview_IDX/1,1419,1,00.html

Specific example: HP Vector math library for 64-bit Windows:
http://h21007.www2.hp.com/dspp/tech/tec...wareDetailPage_IDX/1,1703,5296,00.html


IBM developerWorks - developer portal:
http://www-128.ibm.com/developerworks/

Specific example: Mathematical Acceleration Subsystem Support:
http://www-306.ibm.com/software/support/rss/other/2021.xml?rss=s2021&ca=rssother

Sun developer portal:
http://developers.sun.com/

Specific example: Sun N1 Grid Engine 6
http://www.sun.com/software/gridware/

SGI developer portal:
http://www.sgi.com/developers/

Specific example: SGI ProPack for Linux:
http://www.sgi.com/products/software/linux/propack.html

Intel developer portal:
http://softwarecommunity.intel.com/isn/home/

Specific example: Intel Integrated Performance Primitives (Intel IPP) Open Source Computer Vision Library (OpenCV) FAQ:
http://www.intel.com/support/performancetools/libraries/ipp/sb/cs-010656.htm

Open Source Computer Vision Library - OpenCV Coding Style Guide:
http://www.intel.com/technology/computing/opencv/coding_style/

Just browse these links for a few moments each and you will start to get an idea of just how much software needs to be written by the CPU companies themselves to support their ISA.

CPU companies need to support two kinds of developers: 1) developers in software companies who want to develop software for their hardware, and 2) in the business sphere (especially in the HPC market), companies that purchase the software they need but also have to customize that software for their needs through its APIs. It is in the interest of CPU companies to make their hardware as accessible as possible to every kind of developer, and the way to do that is by providing developers with as many development tools, utilities, ready-to-use libraries and other resources as possible.
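As a concrete example of the kind of ready-to-use library being described, here is a small host-side sketch using cuBLAS, the BLAS library Nvidia ships alongside CUDA (the legacy C interface); the sizes and data are illustrative.

#include <stdio.h>
#include "cublas.h"   // Nvidia's BLAS library for the GPU, shipped with CUDA

int main(void)
{
    const int n = 1024;
    float x[1024], y[1024];
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    cublasInit();                                   // start the library / GPU context

    // The vendor library owns the device memory handling and the kernel itself;
    // the developer never writes GPU code for this.
    float *dx, *dy;
    cublasAlloc(n, sizeof(float), (void **)&dx);
    cublasAlloc(n, sizeof(float), (void **)&dy);
    cublasSetVector(n, sizeof(float), x, 1, dx, 1);
    cublasSetVector(n, sizeof(float), y, 1, dy, 1);

    cublasSaxpy(n, 3.0f, dx, 1, dy, 1);             // y = 3*x + y, computed on the GPU
    cublasGetVector(n, sizeof(float), dy, 1, y, 1);

    printf("y[0] = %f\n", y[0]);                    // expect 5.0

    cublasFree(dx);
    cublasFree(dy);
    cublasShutdown();
    return 0;
}

Multiply that by FFTs, solvers, compilers, debuggers and profilers, and you get a sense of the catalogue a vendor has to build and maintain before the HPC crowd will take the part seriously.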

@DAPUNISHER
When I referred to a killer application I meant it in the sense that if Nvidia can offer, with their hardware, some kind of never-before-seen performance in the HPC market (within a certain price range) using some kind of existing software, i.e. software that only has to be rewritten to take advantage of Nvidia's ISA, then Nvidia has it made: other companies will jump at the opportunity to become Nvidia's middle man so they can profit from a newly formed niche in the market. If, however, Nvidia cannot accomplish that, they will have to battle all the other companies that have existing products in the HPC market, on all fronts.

In a nutshell: if Nvidia can offer some kind of killer application, it can afford not to be its own middle man; without a killer app, it will have to fill that void itself.

Designing good hardware is one thing (*1), making sure it is utilized properly is another (*2), and convincing others to use it is a different front altogether (*3).

Originally posted by: apoppin
i don't think nvidia faces this hurdle ... their big job will probably be the BIOS ...each company that uses the "g80 super computer" probably writes their own SW ;)

If 1) their GCPUs aren't the 'root' processor (the one that handles the OS); 2) existing software wouldn't need to be rewritten to take advantage of the new CPU (only recompiled); and finally 3) Nvidia provides that mysterious compiler that can take code for another ISA and produce native code for theirs, then yes, their biggest job would be rewriting the BIOS. ;)

i believe FoH was adapted and written for x1900 ... and it is still a video card ... i "imagine" the g80 will be more optimized for 'super computing' ... yet nvidia will not actually write the programs.

there doesn't seem to be a lot of info available yet

This is exactly what I'm referring to: Folding@Home utilized ATI's API to accomplish the huge 20x+ performance gain.

http://www.anandtech.com/showdoc.aspx?i=2849&p=3
With help from ATI, the Folding@Home team has created a version of their client that can utilize ATI's X19xx GPUs with very impressive results.

If it were just some DirectX routine calls, it would have worked on G80 as well.
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
Originally posted by: kobymu
The problem is you need to understand the work that needs to be done behind the scenes in order to support the developers who want to utilize your hardware. Designing a good ISA is one thing; supporting it from the software side is a whole different ball game.

*snip*

If it were just some DirectX routine calls, it would have worked on G80 as well.

are we *disagreeing* on ANY points?
:confused:

i don't think so ...

you have just "expanded" on what i believe
[and i thank you] :)

i understand that nvidia has a pretty talented team of SW engineers that probably worked closely with the designers of the G80 ... i know that nvidia helps game Devs a lot in making and tweaking their games and engines ... that's why they did the TWIMTBP program [to advertise nvidia]

i imagine they will have similar assistance for the "super computing" teams

if you spend a couple of grand for a g80 [i am guessing, from Quadro prices], you *should* get some SW support and help with recompiling code :p