So if I've decided on a 980TI, is the best one the Zotac 980Ti Amp! Extreme Edition?


Timmah!

Golden Member
Jul 24, 2010
1,571
935
136
Seasonic makes top notch PSUs. 750W Seasonic = 750W @ 24/7 100% load :)

That PSU should easily handle 2x GTX 780/780 Tis maxed out. If you buy reference blowers with the Titan cooler, they do not need any space between them. They'll run perfectly fine at 82-83°C.

The blower 780/Tis are specifically designed to function 24/7 at 100% load under this scenario.



If you were happy with GTX590, I'd try to find 2 780/780Ti and go that route. Your PSU can handle it and your gaming performance will be more than satisfactory while your Octane performance will be much better than a single 980Ti. Alternatively, even a single used 780Ti would be a better investment for you than an 800 EURO 980Ti. Then when Pascal comes out, you can check its performance in Octane and upgrade again. I mean from what I am seeing there is no way I would buy an 800-850 EURO 980Ti based on everything you described. It's just a horrendous price/performance option for your tasks.

Plus, think about it this way: 780Ti x 2 is so much faster in Octane (62% faster than a 980Ti) that you can literally downclock them 10-15% to lower noise levels or temperatures and still beat a 980Ti easily.
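The arithmetic behind that claim can be sanity-checked. A minimal sketch, assuming the 62% figure above and roughly linear clock-to-performance scaling (an approximation, not a measured result):

```python
# Do two downclocked 780 Tis still beat a single 980 Ti in Octane?
# Assumes Octane throughput scales roughly linearly with GPU clock.

dual_780ti_vs_980ti = 1.62   # 2x 780 Ti = 62% faster than one 980 Ti (figure from above)

for downclock in (0.10, 0.15):
    effective = dual_780ti_vs_980ti * (1 - downclock)
    print(f"Downclocked {downclock:.0%}: {effective:.2f}x a stock 980 Ti")

# Even at -15%, 1.62 * 0.85 = 1.377x, comfortably ahead of a single 980 Ti.
```

So even the worst case of the suggested downclock leaves a ~38% lead over a stock 980Ti.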

Oh, is that right? I mean about the blower fan - is it not loud as hell? I admit I had 2 GTX 460s in that setup once, and that was awful - they had open axial fans though... the upper card especially just screamed; it was unbearable, and I got rid of it as fast as I could.

I agree that what you say sounds reasonable, but for the reasons I stated I did not consider it to be a possibility until now. I will think about it.

EDIT: Oh, I did not realize - it crossed my mind just now while looking at what used Kepler cards go for on the local bazaar site - I need as much VRAM as possible. So a 980Ti with its 6GB is the superior option to a pair of 3GB Kepler cards in this regard, even if performance-wise it is not.
 
Last edited:

railven

Diamond Member
Mar 25, 2010
6,604
561
126
Oh, is that right? I mean about the blower fan - is it not loud as hell? I admit I had 2 GTX 460s in that setup once, and that was awful - they had open axial fans though... the upper card especially just screamed; it was unbearable, and I got rid of it as fast as I could.

I agree that what you say sounds reasonable, but for the reasons I stated I did not consider it to be a possibility until now. I will think about it.

The ref blower on the 780 Ti, I believe, is the same one that was used for the 980 Ti? If yes, it is loud. I'm grateful to be rid of the need for it.

I also had a GTX 460 a while back that was never loud to me. When I ran two 660 Tis, those too were not loud. The 980 Ti ref was definitely louder. However, I'm not sure if our levels of "loud" are the same.
 

Timmah!

Golden Member
Jul 24, 2010
1,571
935
136
The ref blower on the 780 Ti, I believe, is the same one that was used for the 980 Ti? If yes, it is loud. I'm grateful to be rid of the need for it.

I also had a GTX 460 a while back that was never loud to me. When I ran two 660 Tis, those too were not loud. The 980 Ti ref was definitely louder. However, I'm not sure if our levels of "loud" are the same.

But did you have them in that setup like in the pic above, right next to each other? It's more or less OK if there is one free slot between the cards.

If yes, well, maybe our "standards" are different, as you say. I guess the difference in use is important too. If your cards are under load while gaming, with headphones on in the process, I guess the noise is irrelevant. If, however, you run renders and can't do anything for a few hours but wait next to the computer and listen to the fans screaming, it can get old pretty fast.
 

railven

Diamond Member
Mar 25, 2010
6,604
561
126
But did you have them in that setup like in the pic above, right next to each other? It's more or less OK if there is one free slot between the cards.

If yes, well, maybe our "standards" are different, as you say. I guess the difference in use is important too. If your cards are under load while gaming, with headphones on in the process, I guess the noise is irrelevant. If, however, you run renders and can't do anything for a few hours but wait next to the computer and listen to the fans screaming, it can get old pretty fast.

The GTX 660 Tis probably had a slot between them.

More to the point, what I was getting at is that in a configuration like that the cards are going to get hot, and if you use the standard fan profile they will get loud. A setup like that will probably get the 780 Tis hotter than my 980 Ti ran alone, and the ref blower on a single 980 Ti was annoying; I could not imagine two of them going at 60-70%.

I'm not trying to talk you in any direction, just that I always heard great things about that Titan ref blower design, and perhaps back in the day it was amazing, but after experiencing it first hand, it isn't as quiet as I was led to believe haha. I'm glad my water kit keeps the temps low, because the blower never goes above its stock minimum of 22%, so I never hear the thing.
 

bradly1101

Diamond Member
May 5, 2013
4,689
294
126
www.bradlygsmith.org
Don't forget either the $15 off $15 Visa Checkout or $25 off $200 AMEX Newegg stacking deals.

As far as overclocking goes, even if one 980Ti reaches 1525MHz and the other just 1450MHz, that's only a 5% difference. That will have no impact on your overall gaming experience. The more money you can save now, the more you have to spend on a next-gen card. By December 2017, there will be a $650 card that's 60-80% faster than a 980Ti. What's the point of sweating the extra 5% of OCing headroom if you have to spend $70-80 extra for it? I wouldn't do it, but that's me.
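The 5% figure works out like this (a quick check; since games rarely scale 1:1 with core clock, the real-world gap is even smaller than the clock gap):

```python
# How much does the silicon-lottery spread between two overclocked 980 Tis matter?
good_chip = 1525   # MHz, a strong overclocker
avg_chip = 1450    # MHz, an average one

spread = (good_chip - avg_chip) / avg_chip
print(f"Clock difference: {spread:.1%}")   # ~5.2%
# FPS scales sub-linearly with core clock, so the in-game gap will be below 5%.
```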

I mean if you are going to spend extra on a 980Ti, focus on other value-added items.

For example, the MSI Gaming 980Ti has a free mousepad thrown in, while the MSI Lightning 980Ti LE edition throws in a free copy of Black Ops 3.

At least the 980Ti Lightning LE has one of the best and quietest GPU coolers and it has high-end components. So from that perspective, it's at least somewhat justifiable to spend a bit extra over the card I linked.

[Chart: GPU cooler fan noise under load]




The Zotac Extreme was a great card at the beginning of the 980Ti's life-cycle, but at this point I don't think it's worth the $650 anymore. If you intend to overclock on your own, I don't see how the MSI Lightning LE could be a worse option, and it'll for sure be quieter.

I don't have the Lightning (I shoulda waited, dammit!) but I can attest to MSI's other offering in enhanced 980Tis (the Gaming 6G) as being very quiet even at 100% fan speed. It helps that it's in a case as well insulated as the 550D.
 

Timmah!

Golden Member
Jul 24, 2010
1,571
935
136
The GTX 660 Tis probably had a slot between them.

More to the point, what I was getting at is that in a configuration like that the cards are going to get hot, and if you use the standard fan profile they will get loud. A setup like that will probably get the 780 Tis hotter than my 980 Ti ran alone, and the ref blower on a single 980 Ti was annoying; I could not imagine two of them going at 60-70%.

I'm not trying to talk you in any direction, just that I always heard great things about that Titan ref blower design, and perhaps back in the day it was amazing, but after experiencing it first hand, it isn't as quiet as I was led to believe haha. I'm glad my water kit keeps the temps low, because the blower never goes above its stock minimum of 22%, so I never hear the thing.

TBH I fear it would not be OK, so I will probably pass on this option. What do I do if it turns out to be too hot, too loud, or both? Sell what I just bought? Even if RussianSensation is right that from a performance point of view 2 older GPUs are the superior solution, this is a pretty big concern.

Additionally, as I said, there is the VRAM issue, and on top of it something specific to me living in a smaller country: I assume those 780s/780Tis I am supposed to get are meant to be bought used - easier said than done over here. It's not like I live in a big country like the US or England, where I'd have hundreds of opportunities to buy them second-hand. I can find maybe 10 people selling them, some of them living, say, a hundred miles away - do I feel like traveling there to get them? That is probably the only option, since I'm definitely not buying used hardware on someone's good word that it's going to work.

So ultimately there seem to be 2 choices for me: either I get a 980Ti or wait for Pascal. If I decide on a 980Ti, which one is preferable - the MSI Gaming 6G for 594 EUR or the EVGA Hybrid for 669 EUR? The MSI has been restocked at the local e-shop, so it's a possibility once again.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
So ultimately there seem to be 2 choices for me: either I get a 980Ti or wait for Pascal.

Honestly, you have to decide. It seems like if you don't get a card soon your rig will be unusable, so how can you wait for Pascal?

If I decide on a 980Ti, which one is preferable - the MSI Gaming 6G for 594 EUR or the EVGA Hybrid for 669 EUR? The MSI has been restocked at the local e-shop, so it's a possibility once again.

MSI Gaming is quieter at idle/low load since the fans don't turn on, while the EVGA Hybrid is quieter at load. If you want the coolest and quietest card at load and don't mind the premium, the Hybrid wins.

https://www.youtube.com/watch?v=VaSTljmsa74
 

Timmah!

Golden Member
Jul 24, 2010
1,571
935
136
Honestly, you have to decide. It seems like if you don't get a card soon your rig will be unusable, so how can you wait for Pascal?



MSI Gaming is quieter at idle/low load since the fans don't turn on, while the EVGA Hybrid is quieter at load. If you want the coolest and quietest card at load and don't mind the premium, the Hybrid wins.

https://www.youtube.com/watch?v=VaSTljmsa74

Thanks!

I can wait because I have a replacement card (580) in the meantime. It's just a pain to use when I need to do something in Octane - and coincidentally, nowadays I do. It's not a rule though; it's a bunch of random side-jobs, which will actually help me finance the 980Ti or whatever else I decide to buy.

I admit the most sensible choice would be to wait for Pascal, if I don't want to go the route of buying a bunch of used Kepler cards. But there is vanity involved - I actually want to buy something :-D and sooner rather than later.

But the 980Ti excites me less and less with each passing day - the more I read about Pascal, anyway. I was actually looking yesterday at the Titan X - admittedly an even less sensible choice these days than the 980Ti - but it has 12GB of VRAM, which is actually a very big plus. So far the largest VRAM I've had the chance to work with is the 3GB 780Ti at my work - and there have already been a few projects where I was limited by it and could not get as much geometry into my scenes as I would have liked.

But yeah, since the performance is about the same as a 980Ti, about 300 EUR more for 2x as much VRAM is a bit too much. At least it has something that makes you say wow.

BTW it's pretty annoying that for all the money they ask for it, it actually has a worse cooling solution than many custom 980Tis. And it doesn't even seem to have a backplate, WTF? For that money it should be watercooled like that EVGA Hybrid.

I am curious about the upcoming dual-chip card too. I assume it will be a dual GM200 card for 1500+, thus outside my budget, but if by chance it turned out to be 2x GM204 like the Tesla M60, for the same price as, say, the Titan X (like the 690 next to the original Titan), now that would be something worth considering. It would probably be the ideal solution for me if it had 8GB of VRAM per GPU... I would really wish for something like that at this point.

This leads me to a question: what is wrong with all those "board" partners these days - MSI, Gigabyte, Asus, EVGA, etc.? They seem to be incredibly conservative with their designs. The cards pretty much differ only by the number of fans and the like, but I remember times not so long ago when you actually got cards like the GTX 460 x2 - custom designs that went beyond the reference models by Nvidia or AMD. Do these companies prohibit that nowadays?
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Thanks!

This leads me to a question: what is wrong with all those "board" partners these days - MSI, Gigabyte, Asus, EVGA, etc.? They seem to be incredibly conservative with their designs. The cards pretty much differ only by the number of fans and the like, but I remember times not so long ago when you actually got cards like the GTX 460 x2 - custom designs that went beyond the reference models by Nvidia or AMD. Do these companies prohibit that nowadays?

I wouldn't say conservative at all. For single GPUs, the level of quality of cards nowadays is incredible. Look how many 390/390X/970/980/980Ti cards are cool and quiet, and on the NV side have huge overclocks out of the factory. Even the Sapphire Fury is whisper quiet and cool despite using well over 250W of power. We've come a LONG way from the GTX480/7970 blower days. After-market cards from AMD/NV have never been better.

Since you have a fairly niche use case for professional applications on a consumer GPU, I can understand why you are frustrated with lack of dual-GPU offerings like dual 980s on 1 board. That has more to do with overall market demand for such products, not the conservatism of AIBs.

Look at the Titan Z - it came out at $3000 US on June 20, 2014. Newegg has those for $1550 and almost no one buys them. Even EVGA has refurbished ones for $1299, and they've been sitting on their website for a while now without selling out, despite just 2 of them being posted.

Generally speaking, dual-chip cards aren't very popular, despite NV/AMD trying really hard to make a $1500-3000 market segment with them. For example, if a dual-GM200/Fiji X2 card launched now for $1500+, yeah, some people would buy it, but most would rather buy 2 after-market 980Ti cards that will cost less, overclock better, and be easier to resell individually. Also, very few consumers want to spend $1500+ on a single card.
 

Timmah!

Golden Member
Jul 24, 2010
1,571
935
136
I wouldn't say conservative at all. For single GPUs, the level of quality of cards nowadays is incredible. Look how many 390/390X/970/980/980Ti cards are cool and quiet, and on the NV side have huge overclocks out of the factory. Even the Sapphire Fury is whisper quiet and cool despite using well over 250W of power. We've come a LONG way from the GTX480/7970 blower days. After-market cards from AMD/NV have never been better.

Since you have a fairly niche use case for professional applications on a consumer GPU, I can understand why you are frustrated with lack of dual-GPU offerings like dual 980s on 1 board. That has more to do with overall market demand for such products, not the conservatism of AIBs.

Look at the Titan Z - it came out at $3000 US on June 20, 2014. Newegg has those for $1550 and almost no one buys them. Even EVGA has refurbished ones for $1299, and they've been sitting on their website for a while now without selling out, despite just 2 of them being posted.

Generally speaking, dual-chip cards aren't very popular, despite NV/AMD trying really hard to make a $1500-3000 market segment with them. For example, if a dual-GM200/Fiji X2 card launched now for $1500+, yeah, some people would buy it, but most would rather buy 2 after-market 980Ti cards that will cost less, overclock better, and be easier to resell individually. Also, very few consumers want to spend $1500+ on a single card.

I guess you are right. It's probably more down to lack of demand than anything else. I just recall the days when a custom design did not mean just a different board layout or a different aftermarket cooler.

Regarding the Titan Z, aside from the fact that 1500 is a lot of money for most people, it's not selling out because it's about the same price as 2x 980Tis, which are indeed superior performance-wise. It did not sell when it was new because of the exorbitant 3000 price, and there are better choices now.

Yeah, I am frustrated. After 4 and a half years I actually want to, and can, spend my money on a GPU again, and there is no proper upgrade path for me. But maybe Nvidia will introduce a Titan Z2 for 1000 tomorrow :-D
 

Timmah!

Golden Member
Jul 24, 2010
1,571
935
136
One more shameless bump for this thread. I've realized the Titan X has exactly 6x more CUDA cores than the GTX 580 had. The 580's hot-clocked cores are nowadays pretty much on par with Maxwell cores, more or less, up to 1500MHz... yet the Titan X is only about 2.5x to 3x faster than the 580 in Octane. Do you know some other app where the performance increase between the two is directly proportional to the core-count increase? Is it 6x faster in games? Otherwise I wonder what's so wonderful about these new architectures, if a hypothetical Fermi chip with 3072 cores would probably dump them into the ground?
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Otherwise I wonder what's so wonderful about these new architectures, if a hypothetical Fermi chip with 3072 cores would probably dump them into the ground?

I am going to say it's because it wouldn't be possible to make a 3072-core Fermi chip right now. There are many reasons for this. The Fermi architecture might have internal bottlenecks elsewhere, which means adding extra cores wouldn't allow it to scale well. Secondly, not all cores are equal, as you noticed: the 2880 CUDA core 780Ti is 2X faster than the 512 CUDA core GTX 580. Therefore, don't assume that a single Fermi CUDA core takes as much transistor space as a single Maxwell core; they cannot be compared directly.

You cannot use that logic even when comparing Kepler vs. Maxwell as 128 Maxwell cores provide 90% of the performance of 192 Kepler cores at the same clocks. Efficiency changes.

The 6800 Ultra had 16 pipelines @ 400MHz.
The GTX 580 has 512 cores @ 772MHz core / 1544MHz shader.

That means if you use simple math, the GTX 580 should be between 61.76X and 123.52X faster, right...?

No, it's only 18X faster.
http://forums.anandtech.com/showthread.php?t=2298406
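The "simple math" above can be reproduced directly (naive scaling = unit count times clock; the 18X figure is the measured result from the linked thread, not something this math predicts):

```python
# Naive scaling: performance ~ (parallel units) x (clock), comparing
# a 6800 Ultra (16 pixel pipelines @ 400 MHz) to a GTX 580
# (512 CUDA cores @ 772 MHz core / 1544 MHz shader clock).
units_6800, clock_6800 = 16, 400
units_580 = 512

naive_low = units_580 * 772 / (units_6800 * clock_6800)    # using core clock
naive_high = units_580 * 1544 / (units_6800 * clock_6800)  # using shader clock

print(f"Naive prediction: {naive_low:.2f}x to {naive_high:.2f}x")  # 61.76x to 123.52x
print("Measured: ~18x")  # "units" are not comparable across architectures
```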

That's why the few top electrical engineers who design GPUs get paid so much, and why there are so few of them. They have to know the most optimal way to design future GPU architectures, and that means abandoning what you thought worked well during the last 2-4 years and adapting. That's why whoever designed VLIW and GCN is almost a genius, since that person was able to design an architecture that was flexible for so many years. Think about it: who wouldn't want a GPU architecture that could scale and scale and scale with just newer nodes, more functional units and higher GPU clocks? It's not that simple, as eventually all GPU architectures hit bottlenecks and require a full redesign.

Someone who actually designs GPUs or has GPU architectural knowledge would be able to give you a great answer, but if it were so easy to just scale existing architectures with higher transistor density, NV/AMD wouldn't be spending $3-4 billion on new architectures every so often. Look at DX12 and asynchronous compute shaders. If you need massive parallelism, lots of DirectCompute power and flexibility for future software, then Fermi, Kepler and Maxwell are already outdated for NV. You need something entirely new or heavily redesigned. That's why you cannot just scale architectures that were never meant to adapt to software that takes advantage of things the old architecture was never designed to do well. This is similar to how the HD5870/6970 were good for games but horrible for parallelism/compute - that's how GCN came about. Similarly, let's say GCN was ahead of its time for parallelism, but nothing is free. It had to give up something to be better at something else: pixel shading power, geometry performance, INT16 texture performance, polygon throughput, voxel lighting/voxelization performance and conservative rasterization are big problem areas for that AMD architecture.

The irony here is that some programs you run as an end user may not benefit much from future GPU architectures, because future GPU architectures focus on the most popular software and future trends. If the program(s) you use start becoming outdated from a software perspective, future architectures won't linearly improve your performance. It sounds to me like Octane is one of those programs. Think about it: if the goal is to make more realistic games, you have to focus on, say, lighting. Then the engineers have to figure out the future trends, all the known lighting techniques and how to use scarce transistors to maximize their goals. Let's say they pick voxel lighting as the future and throw their resources at that technique. The result would be a dramatic improvement for that software technique:

[Chart: voxel lighting performance across GPU generations]


Let's say the software you use doesn't even use voxelization - well, too bad. The next gen's graphics card just used up 5% of its transistors on making sure lighting runs 3X faster than last generation's hardware, because that's the future trend. That's the risk of designing new GPU architectures: you must focus on something, knowing you can't do everything for everyone. Why am I telling you this? Because we cannot possibly predict how well Pascal will improve performance in Octane, since we'd have to figure out what areas of the GPU architecture Octane stresses the most and what NV's focus for Pascal is. That's how you could easily end up with a situation where an $80 GTX580 offers half the performance of a $1000 Pascal chip in Octane - as stupid as that sounds, that is the reality of GPU design.

I'll give you an example of this on the AMD side. During the HD7970 series, there was a heavy focus on double precision performance, but since the focus shifted to perf/watt, this was put aside in the current Fiji architecture. There are some distributed computing users on this very forum who use double precision software. Look what happens to them:

2048 shader HD7970GHz = 1.075 TFLOPS of FP64 performance
4096 shader Fury X = 0.538 TFLOPS of FP64 performance

That means a $650 2015 Fury X with twice as many cores/shaders is theoretically up to twice as slow as a $120 HD7970GHz in FP64 software.
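Those FP64 numbers follow directly from the shader counts, clocks and each chip's FP64 rate (1/4-rate for Tahiti, 1/16-rate for Fiji; both chips run at roughly 1.05 GHz):

```python
# FP64 throughput = shaders x 2 FLOPs/clock (FMA) x clock (GHz) x FP64 rate, in TFLOPS
def fp64_tflops(shaders, clock_ghz, fp64_rate):
    return shaders * 2 * clock_ghz * fp64_rate / 1000

hd7970ghz = fp64_tflops(2048, 1.05, 1 / 4)   # Tahiti: 1/4-rate FP64
fury_x = fp64_tflops(4096, 1.05, 1 / 16)     # Fiji: 1/16-rate FP64

print(f"HD 7970 GHz: {hd7970ghz:.3f} TFLOPS FP64")  # ~1.075
print(f"Fury X:      {fury_x:.3f} TFLOPS FP64")     # ~0.538
```

Doubling the shaders while cutting the FP64 rate from 1/4 to 1/16 nets out to exactly half the double-precision throughput.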

Thankfully for you, there are people who use Octane and will benchmark the latest NV cards so that you know if something is worth purchasing. You don't have to spend 1000 EUR on Pascal without knowing how well it'll perform. Thank you, Internet!
 
Last edited:

Timmah!

Golden Member
Jul 24, 2010
1,571
935
136
I am going to say it's because it wouldn't be possible to make a 3072-core Fermi chip right now. There are many reasons for this. The Fermi architecture might have internal bottlenecks elsewhere, which means adding extra cores wouldn't allow it to scale well. Secondly, not all cores are equal, as you noticed: the 2880 CUDA core 780Ti is 2X faster than the 512 CUDA core GTX 580. Therefore, don't assume that a single Fermi CUDA core takes as much transistor space as a single Maxwell core; they cannot be compared directly.

You cannot use that logic even when comparing Kepler vs. Maxwell as 128 Maxwell cores provide 90% of the performance of 192 Kepler cores at the same clocks. Efficiency changes.

The 6800 Ultra had 16 pipelines @ 400MHz.
The GTX 580 has 512 cores @ 772MHz core / 1544MHz shader.

That means if you use simple math, the GTX 580 should be between 61.76X and 123.52X faster, right...?

No, it's only 18X faster.
http://forums.anandtech.com/showthread.php?t=2298406

That's why the few top electrical engineers who design GPUs get paid so much, and why there are so few of them. They have to know the most optimal way to design future GPU architectures, and that means abandoning what you thought worked well during the last 2-4 years and adapting. That's why whoever designed VLIW and GCN is almost a genius, since that person was able to design an architecture that was flexible for so many years. Think about it: who wouldn't want a GPU architecture that could scale and scale and scale with just newer nodes, more functional units and higher GPU clocks? It's not that simple, as eventually all GPU architectures hit bottlenecks and require a full redesign.

Someone who actually designs GPUs or has GPU architectural knowledge would be able to give you a great answer, but if it were so easy to just scale existing architectures with higher transistor density, NV/AMD wouldn't be spending $3-4 billion on new architectures every so often. Look at DX12 and asynchronous compute shaders. If you need massive parallelism, lots of DirectCompute power and flexibility for future software, then Fermi, Kepler and Maxwell are already outdated for NV. You need something entirely new or heavily redesigned. That's why you cannot just scale architectures that were never meant to adapt to software that takes advantage of things the old architecture was never designed to do well. This is similar to how the HD5870/6970 were good for games but horrible for parallelism/compute - that's how GCN came about. Similarly, let's say GCN was ahead of its time for parallelism, but nothing is free. It had to give up something to be better at something else: pixel shading power, geometry performance, INT16 texture performance, polygon throughput, voxel lighting/voxelization performance and conservative rasterization are big problem areas for that AMD architecture.

The irony here is that some programs you run as an end user may not benefit much from future GPU architectures, because future GPU architectures focus on the most popular software and future trends. If the program(s) you use start becoming outdated from a software perspective, future architectures won't linearly improve your performance. It sounds to me like Octane is one of those programs. Think about it: if the goal is to make more realistic games, you have to focus on, say, lighting. Then the engineers have to figure out the future trends, all the known lighting techniques and how to use scarce transistors to maximize their goals. Let's say they pick voxel lighting as the future and throw their resources at that technique. The result would be a dramatic improvement for that software technique:

[Chart: voxel lighting performance across GPU generations]


Let's say the software you use doesn't even use voxelization - well, too bad. The next gen's graphics card just used up 5% of its transistors on making sure lighting runs 3X faster than last generation's hardware, because that's the future trend. That's the risk of designing new GPU architectures: you must focus on something, knowing you can't do everything for everyone. Why am I telling you this? Because we cannot possibly predict how well Pascal will improve performance in Octane, since we'd have to figure out what areas of the GPU architecture Octane stresses the most and what NV's focus for Pascal is. That's how you could easily end up with a situation where an $80 GTX580 offers half the performance of a $1000 Pascal chip in Octane - as stupid as that sounds, that is the reality of GPU design.

I'll give you an example of this on the AMD side. During the HD7970 series, there was a heavy focus on double precision performance, but since the focus shifted to perf/watt, this was put aside in the current Fiji architecture. There are some distributed computing users on this very forum who use double precision software. Look what happens to them:

2048 shader HD7970GHz = 1.075 TFLOPS of FP64 performance
4096 shader Fury X = 0.538 TFLOPS of FP64 performance

That means a $650 2015 Fury X with twice as many cores/shaders is theoretically up to twice as slow as a $120 HD7970GHz in FP64 software.

Thankfully for you, there are people who use Octane and will benchmark the latest NV cards so that you know if something is worth purchasing. You don't have to spend 1000 EUR on Pascal without knowing how well it'll perform. Thank you, Internet!

Thanks for the elaborate response. I would give you some karma points for it if these boards allowed it :D

I was under the impression that Fermi cores were the same as Kepler cores except for the hotclock thing; at least I remember reading something like that back when the GTX 680 was released. Perhaps I was misinformed. It kind of made sense, though: because of the lack of hotclocks you had to compare one Fermi core to 2 Kepler ones, making the 680 comparable to a hypothetical Fermi card with 768 cores... which more or less coincided with the 40 percent performance improvement between the 680 and the 580.
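That hotclock equivalence can be put into numbers (a rough rule of thumb only, treating one hot-clocked Fermi core as two Kepler cores; the core counts are the cards' stock specs):

```python
# Rough Fermi-to-Kepler core equivalence via the hotclock rule of thumb:
# one Fermi core at 2x shader clock ~ two Kepler cores at 1x clock.
gtx680_kepler_cores = 1536
gtx580_fermi_cores = 512

fermi_equivalent_680 = gtx680_kepler_cores / 2          # 768 "Fermi" cores
core_ratio = fermi_equivalent_680 / gtx580_fermi_cores  # 1.5x

print(f"GTX 680 ~ {fermi_equivalent_680:.0f} hot-clocked Fermi cores "
      f"({core_ratio:.1f}x a GTX 580)")
# Roughly in line with the 680's real-world lead over the 580 (~40%),
# once clock differences are accounted for.
```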

I have no doubt that the people who design these architectures are tenfold smarter than I am :)

I don't know about Octane; the thing is still being developed. Right now version 3.0 is pending. It was brand new in 2009. Of course it may be the kind of app which can't fully use the potential of the new GPUs, as you say. I am only trying to say that since it's still in development, I would kind of expect the devs to adapt it to current architectures as much as possible.

One thing that interests me about Pascal, and what makes me most uneasy about my choice of whether to wait for it or not, is its FP16 functionality. I recall reading on the Octane boards once that Octane actually uses half precision - so perhaps it will benefit mightily from this new Pascal feature? I would love to know that right now, but since even the Octane devs don't have access to any Pascal hardware yet, I guess nobody knows for sure.

Finally, I found out there is actually an EVGA Titan X with the same hybrid cooler as the GTX 980Ti... of all the available cards right now, this one probably comes closest to what I want. But it's probably even more expensive than the regular Titan X, lol. Not on sale over here, so it's a moot point anyway.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
Thanks for the elaborate response. I would give you some karma points for it if these boards allowed it :D

Finally, I found out there is actually an EVGA Titan X with the same hybrid cooler as the GTX 980Ti... of all the available cards right now, this one probably comes closest to what I want. But it's probably even more expensive than the regular Titan X, lol. Not on sale over here, so it's a moot point anyway.

Thanks ;)

I think since you got a GTX580 replacement, I would just save up for the Pascal Titan. The current Titan X came out March 31, 2015 and retail prices haven't really fallen. With Pascal's Titan, we should get 16GB of ~1TB/sec HBM2. The card is going to be much better for compute vs. Fermi, Kepler and Maxwell, since compute is a big renewed focus with Pascal. Given that NV is going all-in on compute with Pascal, while Kepler and Maxwell eschewed compute in favour of pure perf/watt, I think for you specifically, since you want more VRAM and compute, the Pascal Titan will be the ticket.



I also think you'd feel much better about spending 900-1000 Euro on a card that's cutting edge, not one that's 8 months old.
 

Timmah!

Golden Member
Jul 24, 2010
1,571
935
136
Thanks ;)

I think since you got a GTX580 replacement, I would just save up for the Pascal Titan. The current Titan X came out March 31, 2015 and retail prices haven't really fallen. With Pascal's Titan, we should get 16GB of ~1TB/sec HBM2. The card is going to be much better for compute vs. Fermi, Kepler and Maxwell, since compute is a big renewed focus with Pascal. Given that NV is going all-in on compute with Pascal, while Kepler and Maxwell eschewed compute in favour of pure perf/watt, I think for you specifically, since you want more VRAM and compute, the Pascal Titan will be the ticket.



I also think you'd feel much better about spending 900-1000 Euro on a card that's cutting edge, not one that's 8 months old.

You are absolutely right. I was thinking exactly the same today - if I eventually decide to spend 1000 EUR on a GPU, it will be the Pascal Titan, not the current one. So it's either that or a 980Ti now... well, more like in December.