Worst and best graphics tech ever?


dawp

Lifer
Jul 2, 2005
11,347
2,710
136
Best would be the video display, period.
Worst: gimmicky 3D displays that require glasses.
 

pcm81

Senior member
Mar 11, 2011
598
16
81
Best would be the video display, period.
Worst: gimmicky 3D displays that require glasses.

The reason I went with color display is that it's hard to trace back when the first "display" came about. Do white sheets with projectors count? How about shadow figures on a cave wall? We seem to agree on 3D.
 

Sable

Golden Member
Jan 7, 2006
1,130
105
106
Are we including engines? id Tech 5. 22GB for Rage? Seriously? 22 fucking GB! Mega textures my ass. And they LOOK like ass close up as well.
 

Wall Street

Senior member
Mar 28, 2012
691
44
91
Worst - 3dfx T-Buffer - The technology never really caught on, and blurring several frames together produced a horribly unrealistic effect.

Best - Anisotropic Filtering - For a very small performance hit, it made the boundaries between mipmap levels almost impossible to see, making textures look incredibly sharp and reducing the shimmer found on the first few cards.
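For anyone curious how cheap it is to use nowadays: anisotropic filtering is enabled per texture through the long-standing EXT_texture_filter_anisotropic extension. A minimal OpenGL sketch, assuming a valid context and a loader such as GLEW:

```cpp
// Minimal sketch: enabling anisotropic filtering on a bound texture through
// the EXT_texture_filter_anisotropic extension. Assumes a valid OpenGL
// context and that a loader such as GLEW has exposed the extension enums.
#include <GL/glew.h>

void enable_max_anisotropy(GLuint texture)
{
    glBindTexture(GL_TEXTURE_2D, texture);

    // Trilinear filtering so the driver blends between mipmap levels;
    // AF then takes extra samples along the axis of anisotropy to keep
    // oblique surfaces sharp instead of blurring them.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    // Query the hardware limit (commonly 16x) and request it.
    GLfloat max_aniso = 1.0f;
    glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &max_aniso);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, max_aniso);
}
```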
 

IlllI

Diamond Member
Feb 12, 2002
4,927
11
81
My list of worst:

The crappy graphics coupled with shitty drivers that Intel used to have on their motherboards. At least in the past couple of years there has been some improvement.
Lucid Virtu, a gimmick that does absolutely nothing.
Intel Larrabee. Spent billions and has no product to show for it.
 

Remobz

Platinum Member
Jun 9, 2005
2,564
37
91
I agree 100%. My Hercules 4500 Kyro 2 64MB was an awesome video card, but it had bad drivers, which wasn't helped by the lack of support from game companies. :(

Brings back memories. I owned one as well.

I always felt I should have bought a 3dfx card.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,003
126
The problem I have with FXAA is that it often doesn't do a good job with edges, it's being promoted in favor of rotated grid sampling (or sparse grid if 8 or more samples are used), and nVidia doesn't support MSAA hacks anymore.
FXAA is very fast and looks much better than no AA. It can be enabled in very heavy games with little performance hit, unlike 2xMSAA which is often too slow for them.

And nVidia most certainly supports MSAA hacks – not sure where you got that idea from.

I know that I'm probably the only person in the world who doesn't like lossy texture compression, but I think it's bad because lossless texture compression is technologically possible. While lossless compression wouldn't allow textures as large or as numerous to be kept in memory, it would look better overall IMO. I've never thought 8k^2 textures were necessary, especially since we have AF and since texel density can vary. Given the same color depth, a 4k^2 texture is only 1/4 the size of an 8k^2 texture, and lossless compression would surely save at least 15% on average.
Sorry, no. Bigger lossy textures will always look better than smaller lossless textures, assuming the lossy compression is a good standard.
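To put rough numbers on the trade-off being argued here (the figures below are illustrative, and the ~15% lossless estimate is the poster's, not a measured value): an uncompressed 32-bit 8k^2 texture is 256 MiB before mipmaps, a 4k^2 one is a quarter of that, and the common lossy block formats give a fixed 4:1 or 8:1 ratio versus 32-bit color.

```cpp
// Back-of-the-envelope texture memory figures (no mipmaps counted).
#include <cstdio>

int main()
{
    const double MiB = 1024.0 * 1024.0;

    // Uncompressed RGBA8: 4 bytes per texel.
    double tex8k = 8192.0 * 8192.0 * 4.0;          // 256 MiB
    double tex4k = 4096.0 * 4096.0 * 4.0;          // 64 MiB, i.e. 1/4 of 8k^2

    // Fixed-ratio lossy block compression (DXT5/BC3 = 1 byte/texel,
    // DXT1/BC1 = 0.5 bytes/texel), versus a hypothetical lossless scheme
    // that averages ~15% savings, as suggested above.
    double tex8k_bc3      = 8192.0 * 8192.0 * 1.0;  // 64 MiB  (4:1)
    double tex8k_bc1      = 8192.0 * 8192.0 * 0.5;  // 32 MiB  (8:1)
    double tex8k_lossless = tex8k * 0.85;           // ~218 MiB (15% saved)

    std::printf("8k uncompressed : %6.1f MiB\n", tex8k / MiB);
    std::printf("4k uncompressed : %6.1f MiB\n", tex4k / MiB);
    std::printf("8k BC3 (lossy)  : %6.1f MiB\n", tex8k_bc3 / MiB);
    std::printf("8k BC1 (lossy)  : %6.1f MiB\n", tex8k_bc1 / MiB);
    std::printf("8k lossless ~15%%: %6.1f MiB\n", tex8k_lossless / MiB);
    return 0;
}
```

In other words, an 8k lossy texture fits in roughly the same footprint as a 4k uncompressed one, which is the basis of the "bigger lossy beats smaller lossless" argument.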

Coming in 3rd place would have to be DX as opposed to OpenGL. DX is proprietary, closed source, and has standardized things too much. I think it has severely limited the progress of graphics. I think it has done a few good things, including advocating unified shaders, but it has mostly made things worse IMO.
LOL. So standards are a bad thing now, huh?

“Severely limited progress”? “Made things worse”? Is that why OpenGL has been playing catch-up for years?

Perhaps you’d like to go back to the DOS days where games had to program hardware components individually?

As for best graphics tech, I would have to say 3dfx's RGSSAA, which is still the very highest quality 13 years after 3dfx first revealed it and 12 years after they hard-launched the first product that could do it. It made the competition's ordered grid sampling look pathetic, to say the least.
As has already been pointed out, both AMD and nVidia have had this feature for a while.
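For anyone who hasn't seen why the sampling pattern matters: with an ordered grid the four samples line up in two columns and two rows, so near-vertical and near-horizontal edges only get two coverage steps per pixel, while rotating the grid gives every sample a unique x and a unique y. A small illustration of the textbook 4x patterns (generic offsets, not any vendor's exact positions):

```cpp
// Illustrative 4x sample offsets within a pixel (center at 0,0; offsets in
// pixel units). Textbook patterns only, not vendor-exact positions.
#include <cstdio>

struct Offset { float x, y; };

// Ordered grid: a 2x2 lattice. Projected onto either axis it yields only
// two distinct coordinates, so axis-aligned edges get two gradations.
const Offset ogss4[4] = {
    {-0.25f, -0.25f}, { 0.25f, -0.25f},
    {-0.25f,  0.25f}, { 0.25f,  0.25f},
};

// Rotated/sparse grid: the same four samples rotated so every sample has a
// unique x and a unique y, giving four gradations on axis-aligned edges.
const Offset rgss4[4] = {
    {-0.375f, -0.125f}, { 0.125f, -0.375f},
    { 0.375f,  0.125f}, {-0.125f,  0.375f},
};

int main()
{
    std::puts("pattern   sample offsets (x, y)");
    for (int i = 0; i < 4; ++i)
        std::printf("OGSS %d:  (% .3f, % .3f)\n", i, ogss4[i].x, ogss4[i].y);
    for (int i = 0; i < 4; ++i)
        std::printf("RGSS %d:  (% .3f, % .3f)\n", i, rgss4[i].x, rgss4[i].y);
    return 0;
}
```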

You seem to start these threads about once every 3 months and repeat the same things over and over. What do you hope to accomplish with this?
 

Anarchist420

Diamond Member
Feb 13, 2010
8,645
0
76
As has already been pointed out, both AMD and nVidia have had this feature for a while.
I never said they didn't. :) 3dfx simply invented it.
You seem to start these threads about once every 3 months and repeat the same things over and over. What do you hope to accomplish with this?
Nothing... I just wanted some opinions on what people thought. I don't recall starting a thread quite like this before.
LOL. So standards are a bad thing now, huh? “Severely limited progress”? “Made things worse”? Is that why OpenGL has been playing catch-up for years? Perhaps you’d like to go back to the DOS days where games had to program hardware components individually?
Yes, I think it would be better if each game were programmed for each GPU. I'm not sure that OpenGL kept trying to compete with D3D; I think it just gave up. I acknowledge that I could be wrong about that.

And nVidia most certainly supports MSAA hacks – not sure where you got that idea from.
What I meant was I heard that they don't officially support them. Someone said so on nvidia's forums.
FXAA is very fast and looks much better than no AA. It can be enabled in very heavy games with little performance hit, unlike 2xMSAA which is often too slow for them.
It may look better than no AA and be fast, but I still think it's trying to replace rotated grid AA. I just don't see why it's proposed when rotated grid SS/MS hacks could be used instead. Even though not every engine supports rotated grid SS/MS hacks, perhaps engines that don't should be discouraged.
Bigger lossy textures will always look better than smaller lossless textures, assuming the lossy compression is a good standard.
That's subjective.

I'm sorry that I'm retarded.:)
 

Sable

Golden Member
Jan 7, 2006
1,130
105
106
Intel Larrabee. Spent billions and has no product to show for it.
Errr...

[Image: Intel Xeon Phi coprocessor]
 

DooKey

Golden Member
Nov 9, 2005
1,811
458
136
HyperMemory has to be the worst tech ever.

Best ever is multi-GPU capability tied with 3D rendering.
 

GotNoRice

Senior member
Aug 14, 2000
329
5
81
For the worst: any software by Lucidlogix, such as the Virtu series.
 

Cogman

Lifer
Sep 19, 2000
10,286
145
106
Best? OpenGL. Worst? OpenGL 3.0.

There were so many promises that just fell through. It was supposed to be the answer to DX10; instead it was more like OpenGL 2.0 without the fixed-function pipeline.

The original OpenGL was revolutionary. Since then it has just sort of perpetually stagnated.
 

hawtdawg

Golden Member
Jun 4, 2005
1,223
7
81
Worst: Intel IGPs. Intel has been holding back the low-end GPU segment for years; only now are its IGPs starting to set a decent baseline in performance.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
As for best graphics tech, I would have to say 3dfx's RGSSAA, which is still the very highest quality 13 years after 3dfx first revealed it and 12 years after they hard-launched the first product that could do it.

You mean SGI's accumulation buffer techniques? 3dfx flat-out stole everything in the T-Buffer from SGI, and they did a *very* poor job of it too (the morons didn't even have proper LOD settings at launch). 3dfx's "quality" was actually shockingly bad out of the box, rather an embarrassment to the industry honestly. They never could get texture filtering working properly (the math behind LOD determination for a multi-frame setup versus a single larger buffer was too complicated for them), they never had anisotropic filtering built into the part, and it never worked with any sort of shaders. The SGI parts that 3dfx ripped the idea of the T-Buffer off from handled all of these things.

I wouldn't say the T-Buffer was the poorest 3D tech ever (man, I'd have to think a long time over that one), but it certainly was in the running. Even if you think the exact capabilities of the T-Buffer were the greatest thing ever, IRIX machines were doing all of it in clearly superior fashion before 3dfx.
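For context, the multi-pass averaging being described is the classic accumulation-buffer technique from the old OpenGL "Red Book": render the scene several times with a small jitter and average the passes. A minimal legacy-OpenGL sketch, assuming a context created with an accumulation buffer and a caller-supplied render_scene_jittered() function (a hypothetical name for your own scene drawing code):

```cpp
// Minimal sketch of classic accumulation-buffer AA / motion blur
// (legacy, fixed-function-era OpenGL). Assumes the context has an
// accumulation buffer; render_scene_jittered() is the caller's own
// (hypothetical) function that draws the scene with the given jitter.
#include <GL/gl.h>

void render_scene_jittered(float jitter_x, float jitter_y, float time_offset);

void render_accumulated(int num_passes, float shutter_time)
{
    glClear(GL_ACCUM_BUFFER_BIT);

    for (int i = 0; i < num_passes; ++i)
    {
        // Sub-pixel jitter for AA, sub-frame time offset for motion blur.
        float jx = ((i % 2) ? 0.25f : -0.25f);
        float jy = ((i / 2 % 2) ? 0.25f : -0.25f);
        float t  = shutter_time * (float(i) + 0.5f) / float(num_passes);

        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        render_scene_jittered(jx, jy, t);

        // Add this pass into the accumulation buffer, pre-scaled by 1/N.
        glAccum(GL_ACCUM, 1.0f / float(num_passes));
    }

    // Copy the averaged result back into the color buffer.
    glAccum(GL_RETURN, 1.0f);
}
```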
 

Anarchist420

Diamond Member
Feb 13, 2010
8,645
0
76
You mean SGI's accumulation buffer techniques? 3dfx flat-out stole everything in the T-Buffer from SGI, and they did a *very* poor job of it too (the morons didn't even have proper LOD settings at launch). 3dfx's "quality" was actually shockingly bad out of the box, rather an embarrassment to the industry honestly. They never could get texture filtering working properly (the math behind LOD determination for a multi-frame setup versus a single larger buffer was too complicated for them), they never had anisotropic filtering built into the part, and it never worked with any sort of shaders. The SGI parts that 3dfx ripped the idea of the T-Buffer off from handled all of these things. I wouldn't say the T-Buffer was the poorest 3D tech ever (man, I'd have to think a long time over that one), but it certainly was in the running. Even if you think the exact capabilities of the T-Buffer were the greatest thing ever, IRIX machines were doing all of it in clearly superior fashion before 3dfx.
I wasn't aware that SGI made consumer-grade graphics processors other than what they did for the N64.
 

BFG10K

Lifer
Aug 14, 2000
22,709
3,003
126
I never said they didn't. 3dfx simply invented it.
They were the first to implement RGSS/SGSS in consumer space, but I don’t think it’s accurate to say that they invented it.

Nothing... I just wanted some opinions on what people thought. I don't recall starting a thread quite like this before.
You’ve repeatedly started topics about items in the OP, e.g. railing against texture compression: http://forums.anandtech.com/showthread.php?t=2235026&highlight=

The only thing new in this thread is FXAA, which seems to have taken the place of forcing 32 bit depth buffers.

Yes, I think it would be better if each game were programmed for each GPU.
You mean like consoles? Oh wait, you wanted them to die: http://forums.anandtech.com/showthread.php?t=2250392&highlight=

You ever written any code? Hardware abstraction and APIs provide numerous advantages over programming hardware directly, not just for games.

My GTX680 came out after all my games were released. Following your paradigm, I guess I can’t play any of my 129 installed games unless they get patched to contain code for the GTX680?

Even in the late 90s we were still getting games programmed for individual hardware (S3Metal, RRedline, Glide, etc.). What a mess. DirectX has been one of the best things to ever happen to PC gaming.

It’s a fantasy to think it’s viable for modern PC developers to program hardware directly like they used to. Again, if you want that sort of thing, go back to DOS or buy a console. In either case you’ll see a tremendous loss of functionality over what modern abstraction provides.

What I meant was I heard that they don't officially support them. Someone said so on nvidia's forums.
The codes are actively added to shipping drivers by the driver programmers, and they function in shipping games. That sounds like support to me.

It may look better than no AA and be fast, but I still think it's trying to replace rotated grid AA. I just don't see why it's proposed when rotated grid SS/MS hacks could be used instead. Even though not every engine supports rotated grid SS/MS hacks, perhaps engines that don't should be discouraged.
It’s not trying to replace it; it’s an option for games where MSAA is either too slow or doesn’t work. It’s also vastly superior to MLAA.
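Since the distinction keeps coming up: FXAA is purely a post-process over the finished frame, which is why it is nearly free and why it can miss or smear detail in a way MSAA cannot. A deliberately simplified CPU sketch of the principle (luminance-based edge detection plus a blend across the edge); this is not the actual FXAA shader, just an illustration of the idea:

```cpp
// Greatly simplified, CPU-side illustration of the idea behind post-process
// AA such as FXAA: detect edges from luminance contrast in the finished
// image and blend across them. Not the real FXAA algorithm.
#include <vector>
#include <algorithm>

struct Image {
    int w, h;
    std::vector<float> rgb;                 // w*h*3, values in [0,1]
    float* px(int x, int y) { return &rgb[(y * w + x) * 3]; }
};

static float luma(const float* p)           // perceptual luminance estimate
{
    return 0.299f * p[0] + 0.587f * p[1] + 0.114f * p[2];
}

void postprocess_aa(Image& img, float edge_threshold = 0.1f)
{
    Image src = img;                        // read from a copy, write in place
    for (int y = 1; y < img.h - 1; ++y) {
        for (int x = 1; x < img.w - 1; ++x) {
            float lC = luma(src.px(x, y));
            float lN = luma(src.px(x, y - 1));
            float lS = luma(src.px(x, y + 1));
            float lW = luma(src.px(x - 1, y));
            float lE = luma(src.px(x + 1, y));

            // Local contrast: if the neighbourhood is flat, leave it alone.
            float range = std::max({lC, lN, lS, lW, lE}) -
                          std::min({lC, lN, lS, lW, lE});
            if (range < edge_threshold)
                continue;

            // Blend the centre texel toward the average of its neighbours,
            // weighted by how strong the local contrast is.
            float t = std::min(range, 0.5f);
            for (int c = 0; c < 3; ++c) {
                float avg = 0.25f * (src.px(x, y - 1)[c] + src.px(x, y + 1)[c] +
                                     src.px(x - 1, y)[c] + src.px(x + 1, y)[c]);
                img.px(x, y)[c] = (1.0f - t) * src.px(x, y)[c] + t * avg;
            }
        }
    }
}
```

Because it only ever sees the final shaded pixels, a filter like this works with deferred renderers and costs a single cheap pass, but it also cannot recover sub-pixel geometry the way MSAA's extra coverage samples can.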