Refurbished EVGA?

ocre

Golden Member
Dec 26, 2008
1,594
7
81
I don't think it being refurbished is the bad part. As far as that goes, I know plenty of people who have gotten refurb Newegg stuff and had zero issues thus far.

What I want to ask is: why a 580? For that price there are so many better card options right now.
 

digitaldurandal

Golden Member
Dec 3, 2009
1,828
0
76
Why would you pay 420 bucks for a 580? The extra RAM isn't worth 120 bucks, and unless you get a real kick out of playing with volt mods and the like, I wouldn't get it over a recent-generation card.
 

lambchops511

Senior member
Apr 12, 2005
659
0
0
I don't think it being refurbished is the bad part. As far as that goes, I know plenty of people who have gotten refurb Newegg stuff and had zero issues thus far.

What I want to ask is: why a 580? For that price there are so many better card options right now.

I won't be doing any OC.

My application is compute. My understanding was that the 6xx series isn't as good, at least until Big Kepler comes out.

Is there a better-priced 580 out there? I only know newegg.com.

And yes, the 3 GB is nice, but if going down to 1.5 GB would save me big bucks I wouldn't mind. I'm still coding my app, so I don't know yet if I can use the extra memory or not.
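By the way, finding out at runtime how much device memory is actually free is only a few lines with the CUDA runtime API. A minimal sketch (error checking omitted):

Code:
#include <cuda_runtime.h>
#include <cstdio>

int main() {
    // Query free and total memory on the current device.
    size_t free_bytes = 0, total_bytes = 0;
    cudaMemGetInfo(&free_bytes, &total_bytes);
    std::printf("free: %zu MiB / total: %zu MiB\n",
                free_bytes >> 20, total_bytes >> 20);
    return 0;
}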

Unless someone can explicitly show me AMD is better at compute, and by "better" I also mean having matrix libraries and something similar to CUDA's Thrust, I won't consider AMD chips.
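To be concrete about the library support I mean: with Thrust (bundled with the CUDA toolkit), a GPU sort and reduction is just a few lines. A minimal sketch, not from my actual app:

Code:
#include <thrust/device_vector.h>
#include <thrust/sort.h>
#include <thrust/reduce.h>
#include <cstdio>

int main() {
    thrust::device_vector<float> d(1 << 20, 1.0f); // 1M floats on the GPU
    thrust::sort(d.begin(), d.end());              // sorts on the device
    float sum = thrust::reduce(d.begin(), d.end(), 0.0f);
    std::printf("sum = %f\n", sum);
    return 0;
}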
 

Durvelle27

Diamond Member
Jun 3, 2012
4,102
0
0
256-bit vs. 384-bit memory bus

The GTX 670 still outperforms the GTX 580 for less money, and it consumes less power.
 

lambchops511

Senior member
Apr 12, 2005
659
0
0
256-bit vs. 384-bit memory bus

The GTX 670 still outperforms the GTX 580 for less money, and it consumes less power.

I was under the impression that even for single-precision FP, the 680 does not beat the 580 because of its smaller cache. My application is not gaming; I am writing custom CUDA apps. Less power is definitely nice, though.

I am using single precision, not double precision. I don't care about the GTX 580's double-precision performance.
 

lambchops511

Senior member
Apr 12, 2005
659
0
0
http://www.anandtech.com/show/5818/nvidia-geforce-gtx-670-review-feat-evga/15
http://blog.accelereyes.com/blog/2012/04/26/benchmarking-kepler-gtx-680/

These two (and others like them) may give some idea of compute performance. It seems like the 600 series is pretty good at single precision but poor at double precision. The 7970 seems to be pretty beastly at some compute tasks, but that's out of the question with CUDA.

Thanks for the 2nd link. This is much more convincing. I just learnt about clAmdBlas, but it doesn't seem to be as mature as NVIDIA's cuBLAS. I will be sticking with team green; the developer support plus the libraries available are just overwhelming. It seems like the GTX 680 is cache-bound and memory-bound, and hence large matrices see lower GFLOPS.
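For context, the kind of call those GFLOPS charts are timing boils down to a single cuBLAS SGEMM. A minimal sketch (assumes d_A, d_B, d_C are N x N column-major matrices already allocated on the device; error checking omitted):

Code:
#include <cublas_v2.h>

void sgemm_example(const float *d_A, const float *d_B, float *d_C, int N) {
    cublasHandle_t handle;
    cublasCreate(&handle);

    const float alpha = 1.0f, beta = 0.0f;
    // C = alpha * A * B + beta * C, column-major, leading dimension N.
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                N, N, N, &alpha, d_A, N, d_B, N, &beta, d_C, N);

    cublasDestroy(handle);
}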
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I was under the impression that even for single-precision FP, the 680 does not beat the 580 because of its smaller cache. My application is not gaming; I am writing custom CUDA apps. Less power is definitely nice, though.

I am using single precision, not double precision. I don't care about the GTX 580's double-precision performance.

For single precision performance, the calculation is extremely simple:

# of shaders x shader clock speed x operations per clock cycle

GTX 580 = 512 SPs x (772 MHz GPU clock x 2 shader multiplier) x 2 ops/cycle = 1.581 TFLOPS
GTX 680 = 1536 SPs x 1058 MHz shader clock x 2 ops/cycle = 3.25 TFLOPS
HD 7970 = 2048 SPs x 925 MHz shader/GPU clock x 2 ops/cycle = 3.79 TFLOPS
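If you want to sanity-check the arithmetic, it's one line per card (a trivial sketch; the inputs are just the reference specs above):

Code:
#include <stdio.h>

/* peak SP TFLOPS = shaders * shader clock (MHz) * ops per cycle / 1e6 */
static double peak_tflops(int shaders, double shader_clock_mhz, int ops) {
    return shaders * shader_clock_mhz * ops / 1e6;
}

int main(void) {
    printf("GTX 580: %.3f TFLOPS\n", peak_tflops(512, 772.0 * 2.0, 2)); /* 1.581 */
    printf("GTX 680: %.3f TFLOPS\n", peak_tflops(1536, 1058.0, 2));     /* 3.250 */
    printf("HD 7970: %.3f TFLOPS\n", peak_tflops(2048, 925.0, 2));      /* 3.789 */
    return 0;
}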

In matrix multiply, a single GTX 680 is almost 2x faster than a C2070.

[matmul.png: matrix multiply benchmark chart]


However, that doesn't mean all compute programs will scale perfectly linearly.

[fft2d1.png: 2D FFT benchmark chart]


In some cases, the 580 can still come out on top of the 680:

[sort.png: sort benchmark chart]


Source

*** Looks like JS17 beat me to it. Didn't see his post prior to posting mine. ***
 

lambchops511

Senior member
Apr 12, 2005
659
0
0
For single precision performance, the calculation is extremely simple:

# of shaders x shader clock speed x operations per clock cycle

GTX 580 = 512 SPs x (772 MHz GPU clock x 2 shader multiplier) x 2 ops/cycle = 1.581 TFLOPS
GTX 680 = 1536 SPs x 1058 MHz shader clock x 2 ops/cycle = 3.25 TFLOPS
HD 7970 = 2048 SPs x 925 MHz shader/GPU clock x 2 ops/cycle = 3.79 TFLOPS
***

Those calculations don't mean too much; MMM (matrix-matrix multiply) is generally memory-bound, and the extra cache the Fermi chips have over the GTX 680 really helps for MMM.
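To illustrate the cache point: the standard trick in a hand-written CUDA matmul is to stage tiles of A and B in shared memory, so each element is fetched from DRAM once per tile instead of once per multiply-add. A minimal sketch (square N x N matrices with N divisible by TILE; not my actual code):

Code:
#define TILE 16

__global__ void matmul_tiled(const float *A, const float *B, float *C, int N) {
    __shared__ float As[TILE][TILE];
    __shared__ float Bs[TILE][TILE];

    int row = blockIdx.y * TILE + threadIdx.y;
    int col = blockIdx.x * TILE + threadIdx.x;
    float acc = 0.0f;

    for (int t = 0; t < N / TILE; ++t) {
        // Stage one tile of A and one tile of B into on-chip shared memory.
        As[threadIdx.y][threadIdx.x] = A[row * N + t * TILE + threadIdx.x];
        Bs[threadIdx.y][threadIdx.x] = B[(t * TILE + threadIdx.y) * N + col];
        __syncthreads();

        // Each element staged above is reused TILE times from fast memory.
        for (int k = 0; k < TILE; ++k)
            acc += As[threadIdx.y][k] * Bs[k][threadIdx.x];
        __syncthreads();
    }
    C[row * N + col] = acc;
}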

This is why we need more benchmarks. You're right that the C2070 may be slower, but on the other hand I am getting 6 GiB of memory per card! The GTX 580 is only 3 GiB, IIRC.