Oh my goodness, a $240 part doesn't do something as well as a $500 part?
No, a $240 part doesn't do something as well as a $200 part.
Actually, AMD has no excuse for botching up their tessellation engine:
http://en.wikipedia.org/wiki/TruForm
They have had hardware tessellation since the Radeon 8500.
They should have wiped the floor with NVIDIA in tessellation performance, yet it turned out the other way around.
I suspect this is another area where AMD's small-die philosophy has hit performance/features in a bad way.
So why do you keep repeating, over and over and over, that AMD's tessellator is lacking if it's going to come down to a game-by-game evaluation to determine the optimal tessellation setting for each particular future title? Why not simply adopt a "wait and see" attitude like most everybody else here? Only time will ultimately determine whether AMD's 6800 series has enough tessellation power for the games released during its useful lifespan.
'Overkill' depends entirely on what it is you're tessellating, and how.
So you can hardly make the claim that AMD's tessellator is lacking.
Have you seen the tessellation and GPGPU benchmarks?
Yes I can. Compared to nVidia's tessellator, it looks bleak, VERY bleak.
You may argue that the tessellator is not that important in the overall picture, but NOT that the tessellator itself is not lacking, when the performance difference is so incredibly large:
http://www.geeks3d.com/20100826/tessmark-opengl-4-gpu-tessellation-benchmark-comparative-table/
Look at the Extreme and Insane results: even nVidia's GTX 460 is about three times as fast as AMD's 5870, and the 470 is twice as fast as that. That is a factor-of-6 performance difference between AMD's best and nVidia's best.
This is nothing short of a massacre. Not lacking you say? Denial I say.
Just downloaded and ran TessMark with my 5870s. Neat little program. Anyhow, when you look at how many wireframes are going on in there, it's like WOWZERS.
What people are more concerned with is how it will affect their gaming experience, not how big their synthetic e-peen will be.
@Scali
Is tessellation any different from subdivision while modeling? I know it's not exactly the same thing. What I mean is, does it basically increase model density the same way SubD does?
That's the most important point. Some people just don't get it.
Needless to say, when selecting anything beyond medium tessellation I saw diminishing returns with a huge performance loss. To me this sounds like AA: once you get past 4xAA the penalty grows exponentially and the visual quality gain is only really detectable in zoomed-in still screenshots. Hardly something I would even concern myself with unless it were a life-or-death situation. If the time comes that this kind of tessellation becomes available to gamers and offers a "tangible" visual gain, I'll reconsider my view. Until then, I hardly see insane amounts of tessellation as relevant to today's gamer, in today's games.
What if Cayman does outperform Fermi in tessellation?
Yes, pretty much. A very common form of subdivision in modeling software is NURBS (Non-Uniform Rational B-Splines); Maya is built almost entirely around the concept. With DX11 tessellation, you can program the hardware to subdivide NURBS patches in real time.
But you can also do other things, such as displacement mapping... or a combination of techniques.
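To give a rough idea of what that means in practice, here is a little Python sketch (my own toy illustration, not anyone's actual engine code; it uses a bicubic Bezier patch rather than a full NURBS patch to keep the math short). It does on the CPU what the tessellator stage conceptually does in hardware between the hull and domain shaders: take a tiny control cage and evaluate it on a uniform (u, v) grid to produce a dense triangle mesh.

def de_casteljau(points, t):
    # Evaluate a cubic Bezier curve given by 4 control points at parameter t.
    pts = [list(p) for p in points]
    for level in range(3):                      # three reduction steps for a cubic
        for i in range(3 - level):
            pts[i] = [(1 - t) * a + t * b for a, b in zip(pts[i], pts[i + 1])]
    return pts[0]

def eval_patch(ctrl, u, v):
    # Evaluate a 4x4 bicubic Bezier patch at (u, v): collapse each row along u,
    # then collapse the resulting 4 points along v.
    row_points = [de_casteljau(row, u) for row in ctrl]
    return de_casteljau(row_points, v)

def tessellate(ctrl, factor):
    # Uniform tessellation: (factor + 1)^2 vertices, 2 * factor^2 triangles.
    verts = []
    for j in range(factor + 1):
        for i in range(factor + 1):
            verts.append(eval_patch(ctrl, i / factor, j / factor))
    tris = []
    for j in range(factor):
        for i in range(factor):
            a = j * (factor + 1) + i
            b, c = a + 1, a + factor + 1
            tris += [(a, b, c), (b, c + 1, c)]
    return verts, tris

# A toy 4x4 control cage with a gentle curve in it.
ctrl = [[(i, j, 0.2 * (i - 1.5) ** 2) for i in range(4)] for j in range(4)]
for f in (1, 4, 16, 64):
    v, t = tessellate(ctrl, f)
    print(f"tess factor {f:3d}: {len(v):5d} verts, {len(t):5d} tris")

The control data stays at 16 points no matter what; only the factor changes, and the triangle count grows with the square of it. That is also why the TessMark numbers explode at the higher presets.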
A feature Nvidia has never had before, and one they actively blocked from being added to DX10.
Well, the reason I'm asking is I have a model I'm making using SubD (HyperNURBS in C4D). The basic model, without SubD, is 728 polys (quads). It's about as smooth as Legos. Way too low-res for game use in anything but very distant shots. If I SubD with a factor of only 6, the poly count jumps to over 2.9 million. At that poly count the model appears virtually solid. Way too dense for any game use. A SubD factor of 4 gives it 187K. High-res enough to appear perfectly smooth for any rendering short of Hollywood. This is a model with a lot of curves, so it's not because it's boxy. Unless I'm not understanding something, I can't see any use for a Tess factor above 4 in games.
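For what it's worth, those numbers line up with quad-based subdivision where each level splits every quad into four (which is what HyperNURBS effectively does). A quick sanity check in Python, assuming the "factor" here means the number of subdivision levels:

base_quads = 728
for level in range(7):
    # Each Catmull-Clark-style level turns every quad into four quads.
    print(f"SubD level {level}: {base_quads * 4 ** level:>9,} quads")
# level 4 -> 186,368 (~187K), level 6 -> 2,981,888 (~2.9M), matching the post.

One caveat: a DX11 tessellation factor isn't the same unit; it's roughly the number of segments each patch edge is split into, not a subdivision level, so "factor 4" in the hardware sense is far cheaper than SubD level 4.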
Partly incorrect: GeForce3 supported RT-patches in DirectX 8.
The rest, that's a pretty strong accusation, have any proof to back it up?
http://www.tomshardware.com/reviews/opengl-directx,2019-7.html
"Initially planned for Direct3D 10 (which explains its presence in the Radeon HD series), it seems that Microsoft, ATI, and Nvidia weren't able to reach an agreement in time, and so it disappeared from the specifications, only to return with a vengeance with Direct3D 11."
Speed is proportional to the price of the hardware. You expect each card to perform according to its price.
And that's the thing with tessellation... it shifts this metric in nVidia's favour currently. Not only on the fastest cards, but on all cards.
I will help him out. Go get the specs of Vista from before MS changed them, because NV was crying to MS.
I don't see how that shifts the entire price/performance picture in nVidia's favor, though. I mean, look at the big picture.
