gsellis
Diamond Member
Dec 4, 2003
Originally posted by: obsidian
So he claims NVIDIA's brilinear has quality degradation, but ATI's doesn't? Please...
Brilinear?
Originally posted by: ChkSix
I should add that the only reason I would choose Nvidia here over ATi is architecture. Unless the ATi offering was knocking the daylights out of Nvidia's offering across the board (which isn't the case, they are basically even throughout the spectrum), I cannot see spending 500 dollars on something that is roughly two years old with enhancements such as additional pipelines and speed increases on its core. I love bleeding edge technology, and I am aware that there will always be problems to iron out with everything brand spanking new. But that's how it works in our hobby/profession. Technology should always progress forward, even if the designs are wrong and don't work like they should the first time around, because you can always learn from the mistakes of something new and perfect them, which leads you to another bold push in another direction. You can't learn anything new if you just tinker with the same thing over and over again, already knowing it inside and out, backwards and forwards.
Turbine engines would never have come about if the prop hadn't been completely perfected and someone hadn't said, okay, out with the old, let's try something bold and new and walk down a path never taken before by anyone. The same holds true for the new scramjet and hypersonic engines being developed now that the turbine has reached a point of perfection. Quantum mechanics and the uncertainty principle came at the turn of the century, which led to microprocessors and the computer revolution. If no one had the courage to push the envelope to its breaking point, we would all still be using punch cards or the basic Chinese computing device (forget the name) to do our mathematical computing for us instead of desktops, laptops, pocket PCs and the like.
Progression can only come if one is willing to take a chance and dive into the unknown. And this time around, ATi disappointed me by not doing that, instead stretching out a seriously old design, at least if one thinks of it in terms of technology.
Originally posted by: caz67
ATi or Nvidia!
Both are great products.
The next-gen cards offer enough firepower for the average enthusiast and the extreme gamer.
Let's all be thankful that they are going to produce better products in the future.
We are all winners.
Originally posted by: obsidian
So he claims NVIDIA's brilinear has quality degradation, but ATI's doesn't? Please...
Originally posted by: caz67
ATi or Nvidia!
Both are great products.
The next-gen cards offer enough firepower for the average enthusiast and the extreme gamer.
Let's all be thankful that they are going to produce better products in the future.
We are all winners.
Damn, this guy is good.
Why can't everyone just have his outlook?
:beer: :beer: :beer:
People did report that mipmap transitions were still distinct with brilinear at its debut, so in that sense they were right to complain (well, that and the fact that nV introduced it only for UT2K3, so it was an app-specific, hand-coded "optimization"). ATi's "trylinear," OTOH, both doesn't seem to betray mipmap transitions and is a generic (not app-specific) optimization, so I don't think it warrants as much indignation on a technical level. I'm sure nV bore the brunt of the storm by being the first "caught" with it, though, and I agree that ATi is as guilty as nV from an ethical perspective for hiding it from end-users until caught.
Originally posted by: VisableAssassin
NV came up with a fast method: their brilinear. It ran fairly fast but reduced IQ slightly (some notice it, some don't), so the IQ was slightly less than what it should have been.
But when NV did this, people b1tched up and down about it. Now ATi comes along and pulls the same stunt. Sure, it's harder to catch right off, but in my opinion it's the same stunt, and nobody raises as much hell as they did with NV. That's why I said: NV does it, it's wrong; ATi does it, it's okay. I don't get it.
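Since "brilinear" keeps coming up, here's a rough C sketch of how it differs from full trilinear, for anyone who hasn't seen the terms spelled out. This is purely illustrative and is not anything from NVIDIA's (or ATi's) drivers; bilinear_sample() is a dummy stand-in for a real texture fetch and the BAND constant is invented for the example.

#include <math.h>
#include <stdio.h>

typedef struct { float r, g, b; } Color;

/* Placeholder texture fetch so the sketch compiles; a real sampler would
   read and blend 4 texels from the chosen mip level. */
static Color bilinear_sample(int mip_level, float u, float v)
{
    Color c = { (float)mip_level, u, v };  /* dummy data, not a real texture */
    return c;
}

static Color lerp_color(Color a, Color b, float t)
{
    Color c = { a.r + (b.r - a.r) * t,
                a.g + (b.g - a.g) * t,
                a.b + (b.b - a.b) * t };
    return c;
}

/* Full trilinear: always blend the two nearest mip levels (8 texel reads). */
static Color trilinear(float lod, float u, float v)
{
    int   mip  = (int)floorf(lod);
    float frac = lod - (float)mip;
    return lerp_color(bilinear_sample(mip, u, v),
                      bilinear_sample(mip + 1, u, v), frac);
}

/* "Brilinear": only blend inside a narrow band around the mip boundary.
   Outside the band it falls back to plain bilinear (4 reads), which is
   faster but can leave the mip transition visible if the band is narrow. */
#define BAND 0.15f

static Color brilinear(float lod, float u, float v)
{
    int   mip  = (int)floorf(lod);
    float frac = lod - (float)mip;

    if (frac <= BAND)
        return bilinear_sample(mip, u, v);        /* stay on mip N */
    if (frac >= 1.0f - BAND)
        return bilinear_sample(mip + 1, u, v);    /* jump to mip N+1 */

    /* remap the remaining range back to 0..1 and blend as usual */
    float t = (frac - BAND) / (1.0f - 2.0f * BAND);
    return lerp_color(bilinear_sample(mip, u, v),
                      bilinear_sample(mip + 1, u, v), t);
}

int main(void)
{
    float lod = 1.3f;  /* pretend LOD partway between mip 1 and mip 2 */
    Color t = trilinear(lod, 0.5f, 0.5f);
    Color b = brilinear(lod, 0.5f, 0.5f);
    printf("trilinear r=%.2f  brilinear r=%.2f\n", t.r, b.r);
    return 0;
}

The point is just that the narrower the blend band gets, the more pixels end up with plain bilinear, which is where the visible mip transitions people complained about come from.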
Well, he just typed "22" instead of "45." Given the time pressure he's usually under with his reviews, I find it an understandable mistake. Besides, B3D readers are more than likely adept enough to catch the error, particularly since the contradictory picture is right there.
Originally posted by: DAPUNISHER
That's a hell of a typo Pete.
R3x0 probably doesn't have whatever hardware tweaks are required to run "trylinear." RV3x0 does, and R420 seems to have been built off of RV360 (both are 130nm and low-k, whereas R3x0 is 150nm and not low-k).
Originally posted by: lithium726
So if ATi has been using this new and improved trilinear on its 9600 series, why hasn't it been used on the 9500, 9700, or 9800? Wouldn't that improve performance? I have a 9600 Pro and I think it's a great card for the 180 dollars I paid for it in Oct of 2003, but why only on the 9600 and X800?
Software can only exploit what's present in hardware, and the current assumption is that R3x0 hardware doesn't allow for "trylinear." If "try" is in fact as good-looking as tri but faster, ATi has no reason not to want to enable it now.
Originally posted by: Bar81
ATI has made no indication it is a hardware feature, rather they seem to imply it's a software algorithm in their drivers.
Yes, well, that's never stopped any company in the history of man before, has it? Specifically and currently, if Apple can get away with calling their PCs the most powerful desktop supercomputers and the first 64-bit CPUs, then it's pretty much free rein for chaos.
Originally posted by: ChkSix
My response: Engineers don't write those .pdfs; they build and design the hardware and drivers. And once a .pdf is written, it is usually checked and rechecked by the company's lawyers for legal purposes before it ever makes it out the front door and into the hands of the consumer.
I don't see everyday 9800s clocking at 475/450 with an extra four pipelines, an extra two vertex shaders, and better pixel and vertex shader performance. If they did do this with their 9800 cards, no one on God's green earth would buy an X800 for 500 dollars when they can get the same performance and IQ out of a card costing almost half as much today.
Actually, Damage himself (the author of that TechReport "ATi X800 filter games" article) used the 61.11 drivers with the 6800s when he benched them against the X800s, so he was in fact comparing "trylinear" to "brilinear" scores. So, technically, I think his numbers are still valid.
I think many others might disagree here, bro. Me personally, I think it makes their benchmarking very invalid. But like you said, it does become extremely difficult as well as controversial now to get accurate results.
A: Brilinear. Note that Dave @ B3D has said bri has improved since its debut, so it may not be as bad at betraying mipmap transitions as the six-month-old 3DC article describes. In fact, he seemed to imply in one recent B3D post that bri and try may now look very similar.
Originally posted by: gsellis
Brilinear?
Originally posted by: Matthias99
One could also make the point that, if you're unsure about your design, you shouldn't ask your customers to pay $500 to beta-test it for you. The tendency to rush incomplete or unfinished products out the door is something I don't like about the computer industry (in terms of both hardware and software). Now, obviously the 6800 works, so this may not really apply to NVIDIA in this case. But if they come out with a vastly improved NV45 in six months, anyone who jumped on the NV40 bandwagon now is not going to be happy. Of course, the same thing could happen with ATI and R480, which is why I'm not buying *anything* right now. It rarely pays to be an early adopter (ATI's 9700Pro and NVIDIA's GF4Ti cards being rare examples of first-gen hardware that worked and didn't become obsolete quickly -- folks who bought 5800Ultras, or the rev1 5600s, were probably not as thrilled).
Progress is good, but progress for the sake of progress is not necessarily a good thing.
Originally posted by: stickybytes
So should the wait-and-see approach be taken with the 6800 GT as well? Will there be a refresh for the GTs?
Originally posted by: Matthias99
Originally posted by: stickybytes
So should the wait-and-see approach be taken with the 6800 GT as well? Will there be a refresh for the GTs?
Your guess is as good as mine. *Generally* refresh parts only come in at the top -- but NVIDIA might also introduce a whole line of boards based on NV45. I mean, during the last generation they put out the 5800/5800U, then replaced those with the 5900/5900U (and added the 5900SE and 5900XT), and then later put out the 5950U. They might do something similar here, they might not. I haven't seen or heard any firm plans yet.
I'm just gonna sit tight with my OCed 9800Pro until everything is on the market, prices can stabilize, and my magic 8-ball stops saying "Situation Unclear".
The issue being discussed is adaptive trilinear, not brilinear. Don't confuse the two.
So he claims NVIDIA's brilinear has quality degradation, but ATI's doesn't? Please...
Originally posted by: BFG10K
The issue being discussed is adaptive trilinear, not brilinear. Don't confuse the two.
So he claims NVIDIA's brilinear has quality degradation, but ATI's doesn't? Please...
It takes eight samples when it requires it. If it can get away with fewer without impacting IQ, then it does so.
If it only runs full trilinear when it feels like it, it's cheating, and I'd consider it brilinear, BFG.
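To make the "eight samples when it requires it" idea concrete, here's a very rough sketch of what an adaptive-trilinear decision could look like. This is a guess at the general concept only, not ATi's actual algorithm: the mip_similarity input and SKIP_THRESHOLD are made up, and it reuses the Color, bilinear_sample() and lerp_color() helpers from the brilinear sketch earlier in the thread.

/* Sketch of the idea behind an adaptive ("try") trilinear filter: do the
   full two-mip blend only when skipping it would visibly change the result.
   Nobody outside ATi knows the real heuristic; this is illustration only. */
static Color adaptive_trilinear(float lod, float u, float v,
                                float mip_similarity)
{
    int   mip  = (int)floorf(lod);
    float frac = lod - (float)mip;

    /* mip_similarity: 0 = adjacent mip levels look very different,
       1 = the next mip is effectively just a blur of this one. */
    const float SKIP_THRESHOLD = 0.9f;

    if (mip_similarity > SKIP_THRESHOLD) {
        /* Blending two near-identical levels changes nothing you can see,
           so take the 4-sample bilinear path and bank the performance. */
        return bilinear_sample(frac < 0.5f ? mip : mip + 1, u, v);
    }

    /* Otherwise take the full 8 samples and blend, i.e. real trilinear. */
    return lerp_color(bilinear_sample(mip, u, v),
                      bilinear_sample(mip + 1, u, v), frac);
}

The difference from the brilinear sketch is what triggers the shortcut: a fixed LOD band there, versus some judgment about whether skipping the blend is actually visible here. Whether ATi's heuristic really is that conservative is exactly what's being argued about.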
