[techeye] High end Kepler -- 2013?


96Firebird

Diamond Member
Nov 8, 2010
5,742
340
126
If there were working engineering samples we would've known by now.

Or do you think NVIDIA is being completely silent about Kepler because they want to? No, it's because they have nothing to show.

They were completely silent about their last release, the GTX 580. Rumors started around October 15th, with a release 25 days later...

http://hardforum.com/showthread.php?t=1554229

http://www.anandtech.com/show/4008/nvidias-geforce-gtx-580

A little off-topic, but I browsed over the GTX 580 rumor thread at HardOCP. The timing of the card was definitely a surprise to some. :)
 

wahdangun

Golden Member
Feb 3, 2011
1,007
148
106
It's hard to say either way. I know with Fermi, NVIDIA was eager to talk even six months before release.


Haha, that reminds me of this:

nvidiafeb20.jpg


It's totally the most WTF announcement I've ever heard. I mean, an announcement of an announcement, lol.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
In the current PC era characterized by console ports/game engines that lack realism, it would be amazing to see a GPU maker trying to add some unique features that may encourage game developers to produce more realistic games.

Seems like a big risk to be taking if they are devoting large resources to it. Unless, of course, they already had something in the works that meant a future version of this GPU would end up powering something else that developers will push hard (and hence reap the backdraft of ports, with huge benefits for them).

Ageia's PPU was just a vector processor with cache coherency; there was nothing else special about it. In terms of features GPUs have long exceeded that, and there's no reason that I know of that you'd ever need "Ageia" hardware to do physics (particularly kinematics, which is what Ageia's hardware was built for).

From an engineering standpoint, having additional pure arithmetic units with no pixel paths tied to them at all could make some sense. First off, you encourage much wider utilization of features like PhysX because there wouldn't be any real performance hit (marginal, due to increased particles etc., but more or less a rounding error), and the FLOPS/mm improvement would also help them out in both the Tesla and Quadro segments. I'm not saying that is what nVidia is doing, but I can certainly see *why* they would take that approach.
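To put that in concrete terms, here is a minimal sketch of the kind of kinematics work being talked about, written as a plain CUDA program. The kernel name, particle layout, and setup are purely illustrative assumptions of mine, not anything taken from PhysX:

#include <cuda_runtime.h>
#include <cstdio>

// Minimal Euler-step kinematics: each thread advances one particle.
// Names and data layout are made up for illustration -- this is not PhysX code.
__global__ void integrateParticles(float3 *pos, float3 *vel,
                                    float3 gravity, float dt, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    // v += a*dt; p += v*dt -- just multiply-adds, the same ALU work
    // a GPU already does for shading. No dedicated "physics" silicon needed.
    vel[i].x += gravity.x * dt;
    vel[i].y += gravity.y * dt;
    vel[i].z += gravity.z * dt;

    pos[i].x += vel[i].x * dt;
    pos[i].y += vel[i].y * dt;
    pos[i].z += vel[i].z * dt;
}

int main()
{
    const int n = 1 << 20;                      // a million particles
    float3 *pos, *vel;
    cudaMalloc(&pos, n * sizeof(float3));
    cudaMalloc(&vel, n * sizeof(float3));
    cudaMemset(pos, 0, n * sizeof(float3));
    cudaMemset(vel, 0, n * sizeof(float3));

    // One 16 ms step under gravity, 256 threads per block.
    integrateParticles<<<(n + 255) / 256, 256>>>(pos, vel,
                                                 make_float3(0.f, -9.8f, 0.f),
                                                 0.016f, n);
    cudaDeviceSynchronize();
    printf("advanced %d particles\n", n);

    cudaFree(pos);
    cudaFree(vel);
    return 0;
}

Nothing in it needs anything beyond ordinary arithmetic units, which is the point about Ageia-style hardware being unnecessary; spare ALUs with no pixel paths attached would soak this kind of work up essentially for free.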

Or do you think NVIDIA is being completely silent about Kepler because they want to?

If history is a guide, nVidia being quiet is the last thing in the world AMD fans want to see.
 

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
They were completely silent about their last release, the GTX 580. Rumors started around October 15th, with a release 25 days later...

http://hardforum.com/showthread.php?t=1554229

http://www.anandtech.com/show/4008/nvidias-geforce-gtx-580

A little off-topic, but I browsed over the GTX 580 rumor thread at HardOCP. The timing of the card was definitely a surprise to some. :)

Introducing a new architecture is not the same as releasing a new card. Not a great comparison.
 

Stuka87

Diamond Member
Dec 10, 2010
6,240
2,559
136
Definitely. Like AMD invented the use of multiple-GPUs in CF and NV copied with their SLI technology, how AMD introduced SM3.0 first with X1800 series and NV copied with their 6800 series, how AMD had the world's first DX10 GPU in 2900XT and G80 came out months later with the same feature set, how AMD had the most advanced Tessellation design in HD5800 series and NV copied it with Fermi, how AMD developed the world's first GPGPU scalar architecture and NV copied, how AMD provided Analytical Anti-Aliasing and Super Sample Anti-Aliasing for DX10+ games first, how AMD provided custom game settings/profiles and custom resolutions in driver control panel, how AMD was the first to have working surround 3D gaming....etc. :thumbsup:

Did you actually just try and say that nVidia developed SLI? Because they most certainly did not. Nor were they first on the scene with it, making that point kind of moot. They did have multi-GPU before ATI, but that's not because nVidia spent all this time developing it. They simply bought the rights and the engineers from 3dfx.
 

notty22

Diamond Member
Jan 1, 2010
3,375
0
0
How so? In both instances, the chips get delivered, the boards get built, the drivers are built, and the cards get released.
Agreed. New GPU wafers were produced (GF110), new boxes, all on the QT; almost no one saw it coming. No credible leaks. And yet it hard launched.
 

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
How so? In both instances, the chips get delivered, the boards get built, the drivers are built, and the cards get released.

The GTX 580 didn't bring a new architecture. It was a GTX 480 with a fully functional GF100/110 core and less leaky transistors plus some other very minor tweaks to improve power consumption. Nothing else.

Anyway, don't know why you're arguing. Kepler hasn't been sampled to anyone.
 

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
Agreed. New gpu wafers were produced (GF110), new boxes, all on the QT, almost no one saw it coming. No credible leaks. And yet it hard launched.

Because it required pretty much nothing to be launched. It was just a rehash, nothing else. Or are you suggesting the differences going from GF100 to GF110 are comparable to those of making a new architecture (Kepler)?
 

Concillian

Diamond Member
May 26, 2004
3,751
8
81
PhysX has the potential. The problem right now is that it is proprietary and not gaining traction. I'm talking about the future, not the past or present.

Not only is it not gaining traction, there's no way it can in the current development environment, where consoles are given equal or greater weight in development time than the PC on almost every game made. Developers simply want to deal with as few PC-only features as they can at this point.
 

96Firebird

Diamond Member
Nov 8, 2010
5,742
340
126
Because it required pretty much nothing to be launched. It was just a rehash, nothing else. Or are you suggesting the differences going from GF100 to GF110 are comparable to those of making a new architecture (Kepler)?

Once the chips are in house, there is little difference. Kepler (presumably) is already designed, architecture-wise. You said there is no working silicon, but you have no way of knowing that. You claimed that because we haven't heard anything, they have no engineering samples. I pointed to the fact that the GTX 580 was launched less than a month after rumors started. So really, neither of us knows anything.

Do you think Nvidia got its batch of GF110 chips, tested them, made their boards, gave them to AIB partners, and launched, all in less than a month?
 


Grooveriding

Diamond Member
Dec 25, 2008
9,147
1,330
126
How so? In both instances, the chips get delivered, the boards get built, the drivers are built, and the cards get released.

Let me fix this for you to account for what actually had to be done for the 580:

How so? The chips get delivered, and the cards get released.

480/580 PCB = the same.
480/580 Drivers = the same.
Chip = the same, with another shader cluster enabled on the 580 and refinements accumulated in the time since the 480's release.

I saw the 580 on Chiphell 6 weeks before it released.

No one has any idea when a high-end GTX 680 Kepler will come out, of course, but this is February. April is the minimum, and summer is most likely.
 

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
Once the chips are in house, there is little difference. Kepler (presumably) is already designed, architecture-wise. You said there is no working silicon, but you have no way of knowing that. You claimed that because we haven't heard anything, they have no engineering samples. I pointed to the fact that the GTX 580 was launched less than a month after rumors started. So really, neither of us knows anything.

Do you think Nvidia got its batch of GF110 chips, tested them, made their boards, gave them to AIB partners, and launched, all in less than a month?

And you're missing the point entirely, yet again. GF100 to GF110 DOES NOT EQUAL GF110 to Kepler (whatever its codename turns out to be). It's a NEW architecture on a NEW process node. How is that difficult to understand?

Also, how do you figure we have no way of knowing that? We've almost always known when engineering samples for a new architecture (whether CPU or GPU) are in the wild, or when engineering samples are being given to OEMs.

And of course Kepler has already been designed; that's obvious. It had to be designed well over a year ago. Actually getting working engineering samples of your design is a different thing entirely, which is what I'm arguing.
 

Red Storm

Lifer
Oct 2, 2005
14,233
234
106
*Obligatory doom and gloom rhetoric*


I think they'll be fine. They may be late, but they'll be fine. We'll just have to deal with high prices for a little while longer.
 

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81


Yes, and? What does that have to do with anything? AMD probably looked at XDR2 and then decided to go with a wider bus instead, given how the new architecture worked.

As for release dates, I was right: Tahiti hard launched in January. Unfortunately, even though AMD has all the Pitcairn cards ready for launch, they haven't released them because there's no competition from NVIDIA.

AMD is doing a top-to-bottom GPU release like the HD 5000 series, instead of a middle-up one like the HD 6000 series.

Also, remember those laptops and motherboards with working 28nm GPU samples that AMD demoed over nine months ago? AMD has a much better grasp on new process nodes than NVIDIA, and that's the way it's been for around three years.
 

tviceman

Diamond Member
Mar 25, 2008
6,734
514
126
www.facebook.com
Yes, and? What does that have to do with anything?

It has to do with everything you are trying to assert right now. It shows that you believe whatever info you can find that fits what you want to believe. But in reality, no matter how strongly you assert yourself, neither you nor anyone else here (as far as I am aware) knows for sure when or if Kepler taped out, how many times it has been respun, when it will launch, or exactly how it will perform. We'll talk about it all again soon, and we can look back together and discuss how your assertions (and my predictions) held up in hindsight.
 

formulav8

Diamond Member
Sep 18, 2000
7,004
523
126
AMD has a much better grasp on new process nodes than NVIDIA, and that's the way it's been for around three years.

Definitely true. AMD has majorly outclassed NVIDIA since, I think, 55nm or 65nm? :confused: NVIDIA has definitely been behind the last few years when it comes to shrinking nodes.
 

96Firebird

Diamond Member
Nov 8, 2010
5,742
340
126
I saw the 580 on Chiphell 6 weeks before it released.

No one has any idea when a high-end GTX 680 Kepler will come out, of course, but this is February. April is the minimum, and summer is most likely.

Wouldn't mind a link to that...

And LOL_Wut_Axel, you never have any idea when chips arrive; that much we know for sure. Unless you work in shipping/receiving at Nvidia, you won't know.

The truth of the matter is, neither you, nor Groover, nor the article's author, nor I have any clue when Kepler is releasing.
 

Arzachel

Senior member
Apr 7, 2011
903
76
91
PhysX has the potential. The problem right now is that it is proprietary and not gaining traction. I'm talking about the future, not the past or present.

Many people confuse using the GPU for physics calculations with PhysX.

PhysX is proprietary middleware. Developers won't use it for meaningful gameplay, because that would basically cut their potential userbase in half, and AMD won't license it, because its closed nature would give Nvidia every chance to screw them over. What's worse, because PhysX is the default physics middleware for UE3, it actually inhibits developers from moving to OpenCL and the like, which slows down the adoption of GPU-based physics! You get steam reacting to the movement of the player character, but you won't see, for example, a game where you manipulate steam into various shapes to solve puzzles any time soon.
 

boxleitnerb

Platinum Member
Nov 1, 2011
2,605
6
81
PhysX was never meant for "gameplay physics", rather for purely graphical effects.

If Nvidia made PhysX open source, then it could take off.

And you can't be serious about PhysX slowing down GPU-based physics. Right now the ONLY solution we have in this regard IS PhysX. It is not Nvidia's fault that the people around AMD and Bullet are happily sleeping, doing nothing. When did AMD say they fully support Bullet? How many years ago was that? How many games do we have with GPU-accelerated physics using Bullet? Zero. You may not like it, but Nvidia is at least doing something while the rest of the world sits on its buttocks and criticizes Team Green for actually putting themselves out there and being pioneers.
 

Joseph F

Diamond Member
Jul 12, 2010
3,522
2
0
This new toolkit backs that rumour

CUDA Toolkit 4.1



"LLVM's modular design allows third-party software tool developers to provide a custom LLVM solution for non-NVIDIA processor architectures, enabling CUDA applications to run across other vendors" (AMD)

http://developer.nvidia.com/cuda-toolkit-41

Did I not read this right, or does this mean that we, AMD users, can finally use CUDA on our GPUs? *head explodes
 

ViRGE

Elite Member, Moderator Emeritus
Oct 9, 1999
31,516
167
106
Did I not read this right, or does this mean that we, AMD users, can finally use CUDA on our GPUs? *head explodes
Unfortunately you didn't read it right. "Non-NVIDIA processor architectures" means ARM and x86. CUDA LLVM is closed source, and NVIDIA has complete control over which architectures it can compile to.
 

Olikan

Platinum Member
Sep 23, 2011
2,023
275
126
And you can't be serious about PhysX slowing down GPU-based physics. Right now the ONLY solution we have in this regard IS PhysX. It is not Nvidia's fault that the people around AMD and Bullet are happily sleeping, doing nothing. When did AMD say they fully support Bullet? How many years ago was that? How many games do we have with GPU-accelerated physics using Bullet? Zero. You may not like it, but Nvidia is at least doing something while the rest of the world sits on its buttocks and criticizes Team Green for actually putting themselves out there and being pioneers.

There is a reason that almost no company uses PhysX, Bullet, or Havok...

It's easier to code decent physics in the game engine than to build a game engine around Havok, for example.