Best would be the video display, period.
Worst: gimmicky 3D displays that require glasses.
The reason why I went with the color display is that it is hard to trace back when the first "display" came about. Do white sheets with projectors count? How about shadow figures on a cave wall? We seem to agree on 3D.
I agree 100%. My Hercules 4500 Kyro II 64MB was an awesome video card, but it had bad drivers, which was not helped by the lack of support from game companies.
> The problem I have with FXAA is that it often doesn't do a good job with edges, it's being promoted in favor of rotated-grid sampling (or sparse-grid if 8 or more samples are used), and nvidia doesn't support MSAA hacks anymore.

FXAA is very fast and looks much better than no AA. It can be enabled in very heavy games with little performance hit, unlike 2xMSAA, which is often too slow for them.
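For what it's worth, the core idea behind FXAA-style post-process AA can be sketched in a few lines: estimate local contrast from luma and blend high-contrast pixels with their neighbors. This toy (the function name and threshold are made up here) only captures the flavor of the technique; real FXAA does directional edge searches over the finished frame, which is why it runs so fast compared to MSAA.

```python
# Toy sketch of post-process AA in the spirit of FXAA: find high-contrast
# pixels from luma and blend them with neighbors. This is NOT the real
# FXAA algorithm, just an illustration of why it is cheap (pure filtering).

def postprocess_aa(row, threshold=0.25):
    """row: list of luma values in [0, 1]; returns a smoothed copy."""
    out = list(row)
    for i in range(1, len(row) - 1):
        local_min = min(row[i - 1], row[i], row[i + 1])
        local_max = max(row[i - 1], row[i], row[i + 1])
        if local_max - local_min > threshold:          # contrast test
            out[i] = (row[i - 1] + 2 * row[i] + row[i + 1]) / 4
    return out

# A hard edge gets softened; flat areas are untouched.
print(postprocess_aa([0.0, 0.0, 1.0, 1.0]))  # → [0.0, 0.25, 0.75, 1.0]
```

Because it only ever reads the final image, it also blurs texture detail and sub-pixel geometry, which is exactly the "doesn't do a good job with edges" complaint above.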
> I know that I'm probably the only person in the world who doesn't like lossy texture compression, but I think it's bad because lossless texture compression is technologically possible. While lossless texture compression wouldn't allow for quite as large or as many textures to be stored in memory, it would look better overall IMO. I've never thought 8k^2 textures were necessary, especially since we have AF and since texel density can vary. Given the same color depth, a 4k^2 texture is only 1/4 the size of an 8k^2 texture, and lossless texture compression would surely save at least 15% on average.

Sorry, no. Bigger lossy textures will always look better than smaller lossless textures, assuming the lossy compression is a good standard.
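The arithmetic in the quoted argument does check out; here is a quick sketch of it, assuming uncompressed RGBA8 (4 bytes per texel) and no mipmaps. The 15% lossless saving is the poster's own assumption, not a measured figure.

```python
# Memory math from the post above: RGBA8 (4 bytes/texel), no mipmap chain.

def texture_bytes(side, bytes_per_texel=4):
    """Uncompressed size of a square texture in bytes."""
    return side * side * bytes_per_texel

size_8k = texture_bytes(8192)   # 256 MiB uncompressed
size_4k = texture_bytes(4096)   # 64 MiB uncompressed

# A 4k^2 texture is exactly 1/4 the size of an 8k^2 one, as claimed.
assert size_4k * 4 == size_8k

# If lossless compression really saved 15% on average:
lossless_4k = size_4k * 0.85

print(size_8k // 2**20, size_4k // 2**20, lossless_4k / 2**20)
# → 256 64 54.4  (MiB)
```

By comparison, fixed-rate lossy formats like S3TC/BCn guarantee 4:1 or 8:1 on every texture, which is the counterargument being made in the reply.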
> Coming in 3rd place would have to be DX as opposed to OpenGL. DX is proprietary, closed source, and has standardized things too much. I think it has severely limited the progress of graphics. It has done a few good things, including advocating unified shaders, but it has mostly made things worse IMO.

LOL. So standards are a bad thing now, huh?
As has already been pointed out, both AMD and nVidia have had this feature for a while.

As for best graphics tech, I would have to say 3dfx's RGSSAA, which is still the very highest quality 13 years after 3dfx first revealed it and 12 years after they hard-launched the first product that could do it. It made the competition's ordered-grid sampling look pathetic, to say the least.
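The reason rotated-grid sampling beats ordered-grid with the same sample count: for 4x, an ordered grid gives only 2 distinct horizontal (and vertical) offsets, while rotating the pattern gives all 4 samples unique x and y offsets, so near-vertical and near-horizontal edges get twice as many gradient steps. A toy sketch (the offsets are illustrative, not 3dfx's actual pattern):

```python
# Compare 4x ordered-grid vs rotated-grid sample positions within one pixel.
# Offsets are illustrative only, not 3dfx's real sample pattern.
import math

ogss = [(0.25, 0.25), (0.75, 0.25), (0.25, 0.75), (0.75, 0.75)]

# Rotate the ordered grid ~26.6 degrees about the pixel center so that
# every sample ends up with a unique x and a unique y offset.
angle = math.atan(0.5)
cx = cy = 0.5
rgss = []
for x, y in ogss:
    dx, dy = x - cx, y - cy
    rgss.append((round(cx + dx * math.cos(angle) - dy * math.sin(angle), 3),
                 round(cy + dx * math.sin(angle) + dy * math.cos(angle), 3)))

# Near-vertical edges are resolved by how many distinct x offsets exist:
print(len({x for x, _ in ogss}))  # → 2 (ordered grid: only 2 columns)
print(len({x for x, _ in rgss}))  # → 4 (rotated grid: 4 distinct columns)
```

Same cost per pixel, twice the effective edge resolution on the slopes where aliasing is most visible.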
> As has already been pointed out, both AMD and nVidia have had this feature for a while.

I never said they didn't.
> You seem to start these threads about once every 3 months and repeat the same things over and over. What do you hope to accomplish with this?

Nothing... I just wanted some opinions on what people thought. I don't recall starting a thread quite like this before.
> LOL. So standards are a bad thing now, huh? "Severely limited progress"? "Made things worse"? Is that why OpenGL has been playing catch-up for years? Perhaps you'd like to go back to the DOS days, where games had to program hardware components individually?

Yes, I think it would be better if each game were programmed for each GPU. I'm not sure that OpenGL tried to continue to compete with D3D; I think it just gave up. I acknowledge that I could be wrong about that.
> And nVidia most certainly supports MSAA hacks – not sure where you got that idea from.

What I meant was that I heard they don't officially support them. Someone said so on nvidia's forums.
> FXAA is very fast and looks much better than no AA. It can be enabled in very heavy games with little performance hit, unlike 2xMSAA, which is often too slow for them.

It may look better than no AA and be fast, but I still think it's trying to replace rotated-grid AA. I just don't see why it's proposed when rotated-grid SS/MS hacks could be used instead. Even though not 100% of engines supported rotated-grid SS/MS hacks, perhaps engines that don't support them should be discouraged.
> Bigger lossy textures will always look better than smaller lossless textures, assuming the lossy compression is a good standard.

That's subjective.
Errr... Intel Larrabee: spent billions and has no product to show for it.
> I never said they didn't. 3dfx simply invented it.

They renamed the accumulation buffer as the T-Buffer, but they didn't invent it.
> FXAA is very fast and looks much better than no AA.

No, it looks like a steaming pile of crap.
> "No product"? Phi is a product.

We're talking about graphics here, so he's partly correct. Xeon Phi does not render video; it's strictly for HPC.
> Worst: Intel IGP. Intel has been holding back the low-end GPU segment for years. Only now is their IGP starting to set a decent baseline in performance.

It did what it needed to do.
> You mean SGi's accumulation buffer techniques? 3dfx flat-out stole everything in the T-Buffer from SGi, and they did a *very* poor job of it too (morons didn't even have proper LOD settings at launch). 3dfx's "quality" was actually shockingly bad out of the box, rather an embarrassment to the industry, honestly. They never could get texture filtering working properly (the math behind LOD determination based on a multi-frame setup versus a larger single frame was too complicated for them), they never had anisotropic filtering built into the part, and it never worked with any sort of shaders. The SGi parts that 3dfx ripped the T-Buffer idea off from handled all of these things. I wouldn't say the T-Buffer was the poorest 3D tech ever, man, I'd have to think a long time over that one, but it certainly was in the running for the poorest. Even if you think the exacting capabilities of the T-Buffer were the greatest thing ever, Irix machines were doing all of it in clearly superior fashion prior to 3dfx.

I wasn't aware that SGi made consumer-grade graphics processors other than what they did for the N64.
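The accumulation-buffer technique both posters are arguing about boils down to the same loop whether it's SGI's accumulation buffer or 3dfx's T-Buffer: render the scene N times with sub-pixel jitter and average the passes. A minimal sketch, where `render_pass` is a made-up stand-in renderer (a 1-D row of pixels crossed by a vertical edge; the vertical jitter is ignored by this stand-in):

```python
# Sketch of accumulation-buffer style AA: N jittered passes, averaged.
# render_pass is a stand-in, not any real API: it "renders" one row of
# pixels with a vertical edge at x = 2.3, shifted by the jitter.

def render_pass(jitter_x, jitter_y, width=4):
    """1 if the jittered pixel-center sample lands left of the edge."""
    return [1.0 if x + 0.5 + jitter_x < 2.3 else 0.0 for x in range(width)]

jitters = [(-0.25, -0.25), (0.25, -0.25), (-0.25, 0.25), (0.25, 0.25)]

accum = [0.0] * 4
for jx, jy in jitters:
    for i, v in enumerate(render_pass(jx, jy)):
        accum[i] += v                      # accumulate each pass
final = [v / len(jitters) for v in accum]  # resolve: divide by pass count

print(final)  # → [1.0, 1.0, 0.5, 0.0]  (edge pixel gets partial coverage)
```

The same multi-pass machinery also gives motion blur and depth of field for free (jitter in time or lens position instead of screen space), which is what 3dfx marketed the T-Buffer on.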
> I never said they didn't. 3dfx simply invented it.

They were the first to implement RGSS/SGSS in consumer space, but I don't think it's accurate to say that they invented it.
> Nothing... I just wanted some opinions on what people thought. I don't recall starting a thread quite like this before.

You've repeatedly started topics about items in the OP, e.g. railing against texture compression: http://forums.anandtech.com/showthread.php?t=2235026&highlight=
> Yes, I think it would be better if each game were programmed for each GPU.

You mean like consoles? Oh wait, you wanted them to die: http://forums.anandtech.com/showthread.php?t=2250392&highlight=
> What I meant was that I heard they don't officially support them. Someone said so on nvidia's forums.

The codes are actively added to shipping drivers by the driver programmers, and they function in shipping games. That sounds like support to me.
> It may look better than no AA and be fast, but I still think it's trying to replace rotated-grid AA. I just don't see why it's proposed when rotated-grid SS/MS hacks could be used instead. Even though not 100% of engines supported rotated-grid SS/MS hacks, perhaps engines that don't support them should be discouraged.

It's not trying to replace it; it's an option for where MSAA is either too slow and/or doesn't work. It's also vastly superior to MLAA.