***Official Unigine Heaven 4.0 benchmark scores! Thread***

Page 4 - AnandTech Forums

futurefields

Diamond Member
Jun 2, 2012
6,470
32
91
Could have just been a hitch off the stairs coming out of that room as well; who knows. I take nothing from Heaven minimums, and even as-is it's nothing more than a ROP/tessellation benchmark with overkill AA and overkill tessellation.

This is the most taxing scene I believe.
 

AndyE

Junior Member
Apr 15, 2013
11
0
0
Does anybody know the clock speed on AndyE's Titans?
Yes, I know :)

I changed the settings with EVGA precision to:
shader +160 MHz
memory +160 MHz
Power = 100% (default)
Temp = 80 °C (default)
Fans = 85% (max)

While the fans are noisy at these settings, the cards keep running at ca. 70 °C. (I usually keep the system loaded for 24-hour operation, so temperature trumps noise; I want the system's components as cool as possible.)

There is a point in Unigine Heaven 4.0's performance curve where the GPUs don't matter anymore for increased scores. The limiting factor is the software architecture of the benchmark application, as it uses only 2 threads on the Intel CPU. The only way to raise Heaven results in this situation is to raise the frequency of a few CPU cores. Increasing the GPU frequency doesn't help, and neither does increasing the number of Intel CPU cores, as the extra cores aren't used at all. This is typical for many benchmarks, as they aren't optimized for the throughput capabilities of such systems: 3DMark performs better on a highly overclocked quad-core CPU than on a 16-core dual-Xeon system. If you are only interested in benchmark results, take the smaller and cheaper CPU with OC headroom.

The result shown in my previous post (4257 points) was achieved with the above settings and the i7-3930K set at 4.1 GHz. With identical settings but the CPU at its stock 3.2 GHz frequency, the result drops into the 3300 range. Looking at the relatively low power consumption of the GPUs during the run, a higher CPU frequency would surely push the result higher. As said, it's a software-architecture thing, not GPU capability alone.
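A rough sanity check of that claim, assuming the score scales linearly with CPU clock in the CPU-bound regime (the linear model is my assumption; the clocks and scores are the ones reported above):

```python
# If Heaven 4.0 is CPU-bound on two threads, the score should scale
# roughly with CPU frequency. Linear scaling is an assumption here;
# the scores and clocks are the ones from the runs above.
base_clock_ghz = 3.2   # stock i7-3930K
oc_clock_ghz = 4.1     # overclocked
base_score = 3300      # "the 3300 range" at stock clocks

predicted = base_score * (oc_clock_ghz / base_clock_ghz)
print(round(predicted))  # 4228 -- the measured result at 4.1 GHz was 4257
```

The prediction lands within 1% of the measured 4257 points, which is consistent with the benchmark being CPU-limited at these settings.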

I am not using the system for graphics workload but compute stuff. Out of interest, I checked some of the more common GPU benchmarks like Unigine, 3dMark and Furmark.

[image: original.jpg]

The chart above visualizes my argument.

Quite a lot of discussion has filled internet-forum threads like this one about the utility of a GTX Titan vs. alternatives like the GTX 680 or GTX 690. "Overpriced", "marginal impact", and "cheaper alternatives available" were often-read arguments.

With that in mind, I thought I'd share a graphic from the San Diego Supercomputer Center, which shows the performance of CPUs and GPUs for Amber, one of the widely used applications in the field of bioinformatics. The slide is from a recent presentation at GTC 2013 in March.


A few comments:
  • For performance reasons, Amber leverages mixed-precision calculation (i.e. single precision for the individual multiplications of vector elements and double precision for their summation).
  • A single GTX Titan seems to be 37% faster than an 8-node dual-Xeon E5-2670 cluster (the 16 CPUs alone are 25,600 US$ at newegg.com).
  • One Titan is slightly faster than 4 GTX 680s in one compute node.
  • It is 22% faster than 2 x K10 cards (the K10 is the "pro" version of the GTX 690, roughly equivalent to it; a K10 card is currently listed at Amazon at 3,000 US$).
  • Due to its higher frequency, it is 28% faster than its professional K20X brethren, and due to scaling issues between 2 GPUs it is still faster than two of them. (The K20X is the more expensive version (4,600 US$) of the K20, which is currently listed at newegg.com for 3,500 US$ each.)
  • The performance metric measured is nanoseconds of folding simulation per one day of compute time. The time steps are usually 2 femtoseconds, so one nanosecond requires 500,000 iterations of the force calculations between the atoms under investigation.
[image: original.jpg]
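The 500,000 figure in the last bullet follows directly from the units; a one-line check, purely illustrative:

```python
# One nanosecond of simulated time at a 2-femtosecond timestep:
ns = 1e-9          # one nanosecond, in seconds
timestep = 2e-15   # two femtoseconds, in seconds

steps_per_ns = ns / timestep
print(round(steps_per_ns))  # 500000
```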


rgds,
Andy
 
Last edited:

Rikard

Senior member
Apr 25, 2012
428
0
0
^^I think we found the cause of the global warming!

Sapphire HD7950 Vapor-X
ASIC 87.7%
1230/1625 MHz @ 1.15 V
Score: 984
FPS: 39.0

I have a catch-22 in that I need more voltage to go higher on the core, but that makes the VRM too hot, so the OC above is a trade-off. Pretty typical high-ASIC behaviour, isn't it? Doesn't need high voltage, but also doesn't respond well to overvolting due to leakage?
 

John Tauwhare

Member
Dec 26, 2012
137
5
81
Might as well post mine here too.

[image: Capture-2_zpsaf6f3148.png]


When I get more time I will try to tweak it and see if I can get a better score. Heaven 4.0 is a lot tougher to run than Valley.

NVIDIA for the score chart, 2 x 660..



Beaten by 6. The shame!!

Stock settings, whatever they are; I'm a complete noob to graphics cards. CPU at 4.8 GHz and ~45 °C. 405 W total.
 

Face2Face

Diamond Member
Jun 6, 2001
4,100
215
106
John, this is an easy win for you. Just Google "how to overclock a GTX 660" and it's game over for me :D

BTW - if you change your resolution to 1920x1080, it may be enough?
 

FalseChristian

Diamond Member
Jan 7, 2002
3,322
0
71
I don't know how to post my score page. Maybe someone can tell me how to do it
without making fun of me.

Anyhow: 60.6 fps, 1811 score, albeit at 1680x1050.
 

futurefields

Diamond Member
Jun 2, 2012
6,470
32
91
John, this is an easy win for you. Just Google "how to overclock a GTX 660" and it's game over for me :D

BTW - if you change your resolution to 1920x1080, it may be enough?

He's running 2 660's you have a single 7950, how would it be a win for him?
 

futurefields

Diamond Member
Jun 2, 2012
6,470
32
91
I don't know how to post my score page. Maybe someone can tell me how to do it
without making fun of me.

Anyhow: 60.6 fps, 1811 score, albeit at 1680x1050.

In the start menu, open the snipping tool. Save the image and upload to tinypic (no registration needed).
 

John Tauwhare

Member
Dec 26, 2012
137
5
81
I don't know how to post my score page. Maybe someone can tell me how to do it
without making fun of me.

Anyhow: 60.6 fps, 1811 score, albeit at 1680x1050.

Process I use:
1. get the Unigine HTML result on screen
2. press Ctrl+PrintScreen
3. paste into MS Picture Manager
4. crop to the score page
5. save (as BMP)
6. use an online utility to convert the BMP to JPG
7. upload the JPG to Photobucket
8. copy the IMG code into your AT reply
THERE HAS GOT TO BE AN EASIER WAY.
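Steps 3-6 of the list above can be collapsed with a few lines of Python, assuming the Pillow imaging library is installed; the filenames and crop box below are placeholders, not anything from the posts:

```python
# Crop a saved screenshot straight to JPEG -- no BMP detour and no
# online converter needed. Requires Pillow (pip install Pillow).
from PIL import Image

# Stand-in for the real capture; in practice you'd use
# Image.open("screenshot.bmp") on your saved screenshot.
screenshot = Image.new("RGB", (1920, 1080))

# (left, upper, right, lower) -- a placeholder box around the score page
score_area = screenshot.crop((600, 300, 1320, 780))
score_area.save("heaven_score.jpg", "JPEG")
print(score_area.size)  # (720, 480)
```

That leaves only the upload step, which most image hosts accept a JPG for directly.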
 

Rikard

Senior member
Apr 25, 2012
428
0
0
THERE HAS GOT TO BE AN EASIER WAY.
There is. I just type it. Given what other 7950 owners have posted, my results are very believable, so I couldn't be bothered to do the whole snapshot+upload thing. Now, quad Titan is a different matter...
 

Rikard

Senior member
Apr 25, 2012
428
0
0
^^I will trade you my first born child for that setup. Please say you actually run all of them in one box!
 

AndyE

Junior Member
Apr 15, 2013
11
0
0
Please say you actually run all of them in one box!

No :)

Two systems:

1) single i7-3930K/X79 board (picture above)
2) dual-Xeon E5-2665 board with better data-transfer rates between memory and GPUs (currently being built)

I'm switching GPUs back and forth to check implications and performance of different architecture/software combinations. Mostly compute-bound work.

Andy
 

Kenmitch

Diamond Member
Oct 10, 1999
8,505
2,250
136
Process I use:
1. get the Unigine HTML result on screen
2. press Ctrl+PrintScreen
3. paste into MS Picture Manager
4. crop to the score page
5. save (as BMP)
6. use an online utility to convert the BMP to JPG
7. upload the JPG to Photobucket
8. copy the IMG code into your AT reply
THERE HAS GOT TO BE AN EASIER WAY.

MSI Afterburner 2.3.1 has screen capture and video capture built in. You just need to go into settings and set up the hotkeys and file formats. It makes it easy to capture screenshots; then you just need to snip and upload the file.

Using the saved HTML file makes it easy to cheat. A person could just edit the HTML to whatever they wanted. Not saying anybody does this, just that it's possible.

Example: Don't add these fake results!



Not like a person would know the difference anyways. Of course this GPU doesn't exist yet....Or does it :)
 
Last edited: