
The next big thing

pcm81

Senior member
Mar 11, 2011
598
16
81
Traditionally we had a cardinal difference between AMD and Nvidia, because we had two types of architectures: SIMD vs. MIMD (Single Instruction, Multiple Data vs. Multiple Instruction, Multiple Data). As a result, AMD cards, being architecturally simpler, were more power efficient and could squeeze more SIMD processors onboard than Nvidia's CUDA cores. Now that both companies are using MIMD architecture, the two camps are less different from each other than before. Usually the SIMD architecture is better when working with very large sets of data, because you can squeeze in more cores per watt of power. When data sets become smaller, the MIMD architecture wins, because all cores can be scheduled to work all the time, versus having half of the SIMD cores idle, waiting for work.
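To make the idle-lane point concrete, here is a minimal toy model in C (my own illustration, not how any real GPU is implemented) of a 32-wide SIMD group hitting a branch that splits its lanes: the hardware runs both paths back to back with the non-taking lanes masked off, so a 50/50 split wastes half the lane slots.

Code:
/* Toy model: a 32-wide SIMD group ("warp"/"wavefront") hitting a branch
   that splits its lanes. Both paths execute in sequence with the
   non-taking lanes masked off, so those lane slots are wasted. */
#include <stdio.h>

#define WIDTH 32

int main(void) {
    int take_a[WIDTH];
    int issued = 0, useful = 0;

    /* Hypothetical split: even lanes take path A, odd lanes path B. */
    for (int lane = 0; lane < WIDTH; lane++)
        take_a[lane] = (lane % 2 == 0);

    /* Pass 1: path A executes; odd lanes are masked off (idle). */
    for (int lane = 0; lane < WIDTH; lane++) {
        issued++;                    /* the lane slot is spent either way */
        if (take_a[lane]) useful++;  /* only these lanes did real work    */
    }
    /* Pass 2: path B executes; even lanes are masked off (idle). */
    for (int lane = 0; lane < WIDTH; lane++) {
        issued++;
        if (!take_a[lane]) useful++;
    }

    printf("utilization with a 50/50 divergent branch: %d%%\n",
           100 * useful / issued);   /* prints 50% */
    return 0;
}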

Now, it seems we have a common trend of "core clusters", where SIMD cores are divided into MIMD clusters, with each cluster able to use a separate instruction queue. Clearly this comes at the price of increased complexity in the die design compared to a true SIMD model. This explains why AMD and Nvidia both produce top-tier cards of very similar thermal and computational performance.
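A loose CPU-side analogy for those clusters (again just an illustration, not vendor hardware behavior): two threads, each fetching from its own instruction stream, make progress on completely different code at the same time, which is exactly what a single shared-instruction SIMD unit cannot do.

Code:
/* Two "clusters", each with its own instruction stream, standing in for
   the MIMD-cluster idea. Build: cc clusters.c -pthread */
#include <pthread.h>
#include <stdio.h>

static void *cluster0(void *arg) {          /* instruction stream #1 */
    long sum = 0;
    for (long i = 0; i < 1000000; i++) sum += i;
    printf("cluster0 summed: %ld\n", sum);
    return NULL;
}

static void *cluster1(void *arg) {          /* instruction stream #2 */
    double prod = 1.0;
    for (int i = 0; i < 50; i++) prod *= 1.01;
    printf("cluster1 compounded: %.4f\n", prod);
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    /* Neither "cluster" ever waits for the other's branch path. */
    pthread_create(&t0, NULL, cluster0, NULL);
    pthread_create(&t1, NULL, cluster1, NULL);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    return 0;
}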

It seems to me that the company trends are reversing: Nvidia stripped DP compute capability from their cards, hence simplified the design, while AMD introduced MIMD clusters, hence complicated the design.

Are we seeing a complete role reversal? In a few years, will it be the industry standard to buy AMD to compute and Nvidia to play?

I don't want a flame war; try to restrain your comments to technically sound discussion / theories about core architecture trends.
 

SolMiester

Diamond Member
Dec 19, 2004
5,330
17
76
I see it as NV realised they were giving too much away with compute on gamer cards, so they stripped Fermi's compute features out of Kepler in order to push the clocks higher... it spreads their products better across the market and separates pro from gamer more....
Not sure what you are getting at regarding AMD for compute in a few years' time; NV has been in the compute business for years and will continue to lead it....

AMD...well, they are just playing catch up IMO
 

JAG87

Diamond Member
Jan 3, 2006
3,921
3
76
Are we seeing a complete role reversal? In a few years, will it be the industry standard to buy AMD to compute and Nvidia to play?

This is something you can only believe, not a fact.

I have been buying nvidia for all purposes (with gaming at the top of the list) for years, and I couldn't care less what architecture is under the hood.

It's software that counts (because that's what I interact with as the end user), and nvidia's is better.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
I think it's just that GK104 is compute restricted. GK110 won't be.

For the longest time nVidia was the only show in town for the professional market, be it workstations or HPC. Most pro apps were designed using nVidia hardware: the programmers, testers, etc. all had nVidia cards in their workstations, so they designed their software to run on nVidia. They didn't design it *not* to run on ATI/AMD, but it isn't as heavily tested, and they aren't likely to change this unless there's a financial incentive to do so. If, for example, one company, say NewTek or Maxon, optimized their programs for AMD (as well as nVidia, of course) and saw a substantial increase in sales, Autodesk might decide to follow. To prevent this, nVidia offers support (in many forms) to most of these companies, which removes any incentive to do anything except recommend nVidia cards to customers when/if they complain about a show-stopping bug with an AMD card. Now, Adobe going OpenCL will help AMD a lot in the workstation segment. It at least gives them a level playing field.

As consumers we should all be hoping for competition. It's just like with Radeon vs. Geforce, competition will improve performance while driving down prices.

I'm not as informed about HPC, but I assume it would work similarly. They might be more interested in changing for an improvement in absolute performance. In the past nVidia was just better at HPC. We'll have to see what happens going forward. Anyone who believes nVidia isn't going to try to maintain their lead just because GK104 isn't a good compute chip is, I believe, mistaken.
 

thilanliyan

Lifer
Jun 21, 2005
12,062
2,275
126
It's software that counts (because that's what I interact with as the end user), and nvidia's is better.

That is your opinion, not fact. I have in fact had a worse experience with nV drivers than AMD ones. Therefore AMD's software is better...?

:confused:
 

balane

Senior member
Dec 15, 2006
666
0
76
It's software that counts (because that's what I interact with as the end user), and nvidia's is better.

This is incorrect. AMD Software is better simply because I say so.
 

Yuriman

Diamond Member
Jun 25, 2004
5,530
141
106
And I've never had a problem with either so they're equally good.
 

hokies83

Senior member
Oct 3, 2010
837
2
76
I had massive issues with AMD drivers... I got a 7970, got a few things to work, and that was it.

In 10 years of using Nvidia drivers I have never had an issue... I RMAed my 7970 and got GTX 670s.

Again, this was me... not saying everybody is having issues, just saying I did.
 

richaron

Golden Member
Mar 27, 2012
1,357
329
136
Are we seeing a complete role reversal? In a few years, will it be the industry standard to buy AMD to compute and Nvidia to play?

I think we might be...

nVidia has historically offered good compute results with all their cards. AMD has jumped on the bandwagon with an open API (which has been waiting to be utilised...) & great compute. It seems nV has jumped off the wagon with mainstream parts (& their 'closed' API), reducing the potential market.

I would assume the next step is for developers to code for open standards (which both vendors' cards run, but which AMD's mainstream cards run much better).
 

Arkadrel

Diamond Member
Oct 19, 2010
3,681
2
0
This explains why AMD and Nvidia both produce top-tier cards of very similar thermal and computational performance.
Isn't the 7970 something like 500% (5 times) as fast as the 680 when it comes to DP floating-point compute?
And something like 45% ahead in single-precision floating point?

That *may* be due to Nvidia locking things down on the 680 so they can release a professional card later on at some point, but those figures are hardly what I would call
"very similar thermal and computational performance".
Currently the AMD cards spank the Nvidia ones in computational performance.
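For a rough sanity check, here is a back-of-envelope calculation from the published specs (assumed here: HD 7970 at 925 MHz with 2048 ALUs and 1/4-rate DP; GTX 680 at 1006 MHz with 1536 ALUs and 1/24-rate DP). These are theoretical peaks, not measured throughput, so the quoted 5x/45% figures, presumably from application benchmarks, will differ.

Code:
/* Theoretical peak = clock (GHz) x ALUs x 2 ops/cycle (fused multiply-add),
   scaled by the DP throughput fraction. Spec numbers are assumptions from
   the published data sheets, not measurements. */
#include <stdio.h>

static double gflops(double ghz, int alus, double rate) {
    return ghz * alus * 2.0 * rate;  /* rate = fraction of SP throughput */
}

int main(void) {
    double amd_sp = gflops(0.925, 2048, 1.0);       /* ~3789 GFLOPS */
    double amd_dp = gflops(0.925, 2048, 1.0/4.0);   /* ~947 GFLOPS  */
    double nv_sp  = gflops(1.006, 1536, 1.0);       /* ~3090 GFLOPS */
    double nv_dp  = gflops(1.006, 1536, 1.0/24.0);  /* ~129 GFLOPS  */
    printf("SP ratio 7970/680: %.2fx\n", amd_sp / nv_sp);  /* ~1.23x */
    printf("DP ratio 7970/680: %.1fx\n", amd_dp / nv_dp);  /* ~7.4x  */
    return 0;
}

On paper that works out to roughly 1.2x in SP and over 7x in DP, so the DP gap is, if anything, understated above.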
 
Last edited:

Pottuvoi

Senior member
Apr 16, 2012
416
2
81
Both AMD and Nvidia are driving toward huge processing capacity.
GK104 Kepler was meant for mainstream gaming... GK110 is a clear indicator that they intend to pursue heavy compute as well (it seems to be a very interesting chip).
http://www.nvidia.com/content/PDF/kepler/NVIDIA-Kepler-GK110-Architecture-Whitepaper.pdf

Currently the only way forward is driven by power efficiency, and it leads directly to massive parallelism with very simple cores.
Here is a very good talk about the subject (the end of Moore's law, etc.):
http://mediasite.colostate.edu/Medi...aspx?peid=22c9d4e9c8cf474a8f887157581c458a1d#

There was an interesting paper at Nvidia's research site about project Echelon, the architecture following Maxwell, but sadly the whole site is under maintenance at the moment.
Here's the link in case it ever comes back:
http://research.nvidia.com/sites/default/files/publications/IEEE_Micro_2011.pdf
 
Last edited:

f1sherman

Platinum Member
Apr 5, 2011
2,243
1
0
Isn't the 7970 something like 500% (5 times) as fast as the 680 when it comes to DP floating-point compute?
And something like 45% ahead in single-precision floating point?

That *may* be due to Nvidia locking things down on the 680 so they can release a professional card later on at some point, but those figures are hardly what I would call

Currently the AMD cards spank the Nvidia ones in computational performance.

79000 Tetrablops/s means nothing per se.

When there is no use for it, or software for it, other than Bitcoin and LuxMark.
AMD can't even put GPGPU to use by making an H.264 encoder for themselves, let alone develop a complete ecosystem for some crazy physicists.


DP? Where the hell do you need it? In code debugging and in some obscure algorithms.
90% of GPGPU in the wild is done with single precision.

EDIT: OK, they both (AMD, NVIDIA) got $12M from the Energy Department for an exascale system :)
 
Last edited:

Arkadrel

Diamond Member
Oct 19, 2010
3,681
2
0
Okay, single-precision performance then... still, a 45% difference is what I'd call "huge", not similar.

AMD went overboard with all the GPGPU stuff, I think, but for the first time since gods know when,
AMD is beating Nvidia at it (I'd have liked it if they focused more on the gaming performance, personally).

AMD can't even put GPGPU to use by making an H.264 encoder for themselves.
Why would they need to? Isn't it up to the software guys to come up with some use for the compute capabilities and then write the software that makes use of it?

Anyway, aren't there OpenCL H.264 encoding programs out there? So it's a non-issue, right?

***edit: you mean the no-show of VCE so far? Yeah... that's kinda sad.
It's supposed to be a QuickSync-like thing on the GPU's side. It's lame that it's taken AMD this long; it should have been out when the cards launched.
 
Last edited:

f1sherman

Platinum Member
Apr 5, 2011
2,243
1
0
Mini-Kepler is inherently unsuitable for GPGPU because it has been optimized for rendering workloads, not GPGPU.

That does not mean it's a total dud, or that certain GPGPU apps/code cannot be optimized/written to hide that fact.

EDIT: LOL yeah, VCE is becoming a soap opera
 
Last edited:

Arkadrel

Diamond Member
Oct 19, 2010
3,681
2
0
EDIT: LOL yeah, VCE is becoming a soap opera
I actually tried to Google it to see if I could find anything.
Can't find any benches or anything legit with it.

All I found was:
1) AMD APP SDK v2.7 comes with OpenCL 1.2
2) One of the new key features of OpenCL 1.2 is "Video encode using VCE Encode (Win7)."


So it looks like you need to install the AMD APP SDK v2.7 (with OpenCL 1.2) and use Win7
to be able to use VCE.
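If you want to check whether the driver stack is even advertising the capability, a quick way is to dump each GPU's OpenCL version and extension strings. This is a minimal sketch of my own using only standard OpenCL 1.x host calls; the exact extension name VCE shows up under isn't given anywhere in this thread, so it just prints everything for eyeballing.

Code:
/* Dump every GPU's OpenCL name, version, and extension string.
   Build: cc vcequery.c -lOpenCL */
#include <stdio.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id plats[8];
    cl_uint np = 0;
    clGetPlatformIDs(8, plats, &np);
    for (cl_uint p = 0; p < np; p++) {
        cl_device_id devs[8];
        cl_uint nd = 0;
        clGetDeviceIDs(plats[p], CL_DEVICE_TYPE_GPU, 8, devs, &nd);
        for (cl_uint d = 0; d < nd; d++) {
            char name[256], ver[256], ext[8192];
            clGetDeviceInfo(devs[d], CL_DEVICE_NAME, sizeof name, name, NULL);
            clGetDeviceInfo(devs[d], CL_DEVICE_VERSION, sizeof ver, ver, NULL);
            clGetDeviceInfo(devs[d], CL_DEVICE_EXTENSIONS, sizeof ext, ext, NULL);
            printf("%s (%s):\n%s\n\n", name, ver, ext);
        }
    }
    return 0;
}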

The harder part is finding software that can use it <.< lol.
I'm not 100% sure, but I think CyberLink's PowerDirector 10 can use it.

I would love to see some benches, where its working.



***edit:
lol, I went to CyberLink's homepage and found the PowerDirector 10 page, hit "see all new features", and it's a dead link.
I Googled CyberLink PowerDirector 10 + AMD VCE and found some news updates on their site mentioning PowerDirector 10 now working with AMD's VCE (that was a news update from May 22).

Bonus points to anyone who can actually find a benchmark with this program showing VCE working.
 
Last edited:

Siberian

Senior member
Jul 10, 2012
258
0
0
The GK104 is a midrange chip. Go look at the specs for the GK110; it's in a class all its own.