ATI+AMD confirmed

Gstanfor

Banned
Oct 19, 1999
3,307
0
0
GPU

In other words, a processor that accelerates the entire 3D pipeline (as defined at the time) in CPU-independent hardware.
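To make that concrete, here's a rough C sketch of the stages that definition covers - transform, lighting, triangle setup/clipping, then a hand-off to rasterization. It's purely illustrative (toy math, made-up names), not any vendor's actual hardware design; the point is just that the "per-vertex work" part is what a plain rasterizer left to the CPU.

```c
/* A minimal, illustrative fixed-function pipeline: per-vertex transform and
 * lighting, trivial clipping, then a hand-off to rasterization. Toy code,
 * not any vendor's hardware design. */
#include <stdio.h>
#include <math.h>

typedef struct { float x, y, z; } Vec3;
typedef struct { Vec3 pos, normal; } Vertex;

/* Transform: rotate around the Y axis (a stand-in for the full
 * model-view-projection matrix multiply done for every vertex). */
static Vec3 transform(Vec3 p, float angle) {
    Vec3 r = {  p.x * cosf(angle) + p.z * sinf(angle),
                p.y,
               -p.x * sinf(angle) + p.z * cosf(angle) };
    return r;
}

/* Lighting: a single diffuse (N dot L) term per vertex. */
static float light(Vec3 n, Vec3 l) {
    float d = n.x * l.x + n.y * l.y + n.z * l.z;
    return d > 0.0f ? d : 0.0f;
}

/* Triangle setup / trivial clipping: reject a triangle entirely behind
 * the viewer (z <= 0 in this toy convention). */
static int clipped(Vec3 a, Vec3 b, Vec3 c) {
    return a.z <= 0.0f && b.z <= 0.0f && c.z <= 0.0f;
}

int main(void) {
    Vertex tri[3] = {
        { { -1.0f, 0.0f, 2.0f }, { 0.0f, 0.0f, -1.0f } },
        { {  1.0f, 0.0f, 2.0f }, { 0.0f, 0.0f, -1.0f } },
        { {  0.0f, 1.0f, 2.0f }, { 0.0f, 0.0f, -1.0f } },
    };
    Vec3 light_dir = { 0.0f, 0.0f, -1.0f };

    /* Per-vertex work: the part a pure rasterizer left on the CPU and a
     * GPU (GeForce 256 onward) pulled into dedicated hardware. */
    Vec3  p[3];
    float shade[3];
    for (int i = 0; i < 3; i++) {
        p[i]     = transform(tri[i].pos, 0.3f);
        shade[i] = light(tri[i].normal, light_dir);
    }

    if (!clipped(p[0], p[1], p[2]))
        printf("triangle survives clipping, average shade %.2f -> rasterize\n",
               (shade[0] + shade[1] + shade[2]) / 3.0f);
    return 0;
}
```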

No, even radeon 7xxx stuffed up Ascension's rendering badly IIRC. Geforce was definitely the way to play that game (after the patches) - I bought a Voodoo3 3000 especially for that game, and it looked like crap compared to how my GF2 Pro rendered it (and performed worse to boot).
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
come on a 'respected' link . . . NOT nVidia marketing propaganda
The technical definition of a GPU is "a single chip processor with integrated transform, lighting, triangle setup/clipping, and rendering engines that is capable of processing a minimum of 10 million polygons per second."

in other words if it processes 9.9 million polygons per second it is NOT a GPU. :p

sure

and even nVidia calls it a TECHNICAL definition [read "marketing" blurb to sell their product . . . they even quote themselves -like you do] . . .

 

Gstanfor

Banned
Oct 19, 1999
3,307
0
0
Why wouldn't I quote nvidia when they were the first to bring a fully functional GPU to consumer space?

I also gave you my definition.

You'll find that ATi's definition of "VPU" (why they insist on being different I'll never know), 3DLabs' definition and S3's definition align very closely.
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
'Your' definition is the "technical" one . . . the word *programmable* is the only difference compared with 'my' "general" definition. ;)

no wonder we can't agree on anything - if we can't even agree on the simplest of 'definitions'. :p

[that and your eXtreme stubbornness]
:D
 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
A GPU is a processor that handles essentially all the graphics work. Anything before the GeForce only handled the rasterization part. (though really, the T&L of the GeForce was so nearly useless that it may as well have been just a rasterizer)
 

archcommus

Diamond Member
Sep 14, 2003
8,115
0
76
It makes me mad that the Inq article mentions so much that AMD would be completely dead in five years if it wasn't for this merger. True, they're an entire generation behind at this point, but who's to say that will still be the case a year or 18 months from now? AMD has been the underdog since I bought my first Thunderbird, and now suddenly that means they can't cut it without help from ATI? It seems like we just all got used to AMD being on top (in our minds, not financially), and now that they're behind people scream they're going to die. I think AMD would've been fine without this merger, just like they were years ago when the Pentium III was king.

However, that's not to say it doesn't benefit them. Surely it does; with a GPU company under their wing the possibilities are endless. But overall it still upsets me. Regardless of what other opportunities this may bring, it still seems better and more balanced to have two CPU and two GPU/chipset companies, all independent of one another. I know as of now most agreements and products between the companies are still slated for release, but once we start approaching the generation after that, I have a feeling we'll see no more ATI Intel chipsets, and too many ATI AMD chipsets that completely crush any competition from nVidia because they're "in" with AMD and know how to make a better chipset for the platform. The future to me just looks unbalanced and lacking in options. And with ATI focusing on too many other things I can definitely see nVidia being the only option in high-end discrete graphics.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
come on a 'respected' link . . . NOT nVidia marketing propaganda

nVidia marketing created the term GPU - seriously. It is one that we all adopted - so whether you like it or not, their marketing propaganda actually, technically, is the definition of the word :p

'Your' definition is the "technical" one . . . the word *programmable* is the only difference compared with 'my' "general" definition.

GPU literally stands for Graphics Processing Unit. Rasterizers only handled rasterization. There is a very real difference between the two; the Kyro2 was a rasterizer despite the fact that it was overwhelmingly faster than the GeForce256 in almost every aspect. Calling a Rage128-based chip a GPU is like saying that the older P4 3.2EE is a multi-core CPU just because it does a lot of what the multi-core chips do and it performs fairly well.

the GeForce was much faster, but i didn't like its IQ -the nVidia GPU's colors were "washed out"

I can tell you for certain that you didn't get a Gainward board. You didn't see nV IQ - you saw a poor IHV's IQ. People confused them frequently back then and some carried that misconception over. You take anything made by nV in numerous years and crank DV and I can assure you that ATi will look EXTREMELY washed out comparatively (of course, cranked all the way up is gross overkill IMO).

though really, the T&L of the GeForce was so nearly useless that it may as well have been just a rasterizer

You know that even Beyond3D was forced to change their stance to one of the T&L unit on the GeForce being too powerful for gaming. Hard-wired T&L was the de facto standard for game support until around the time of the R9700Pro and is still utilized in games shipping today. Fixed function not being effective was a very poor propaganda effort by 3dfx and its PR wing (which B3D was founded in part by).
 

apoppin

Lifer
Mar 9, 2000
34,890
1
0
alienbabeltech.com
i was looking forward to continuing this discussion - all day. :p

not :D

anyway, i had an Asus Geforce 256 [32MB SGRAM, i think it was called if i remember right] and i was disappointed . . . although it was very fast compared to my old Rage Fury 32, the colours were not at all 'vibrant' . . . and i remember a LOT of others complaining about the same thing.

Of course - the term "GPU" was coined by nVidia - and of course, ATi had to use another word - VPU [visual processing unit] - to describe its add-in processor's function.

you can argue semantics all you want . . . in simplest terms, a GPU is a dedicated graphics [or visuals] rendering device for a PC . . . and the popular term came to include ATi's processors and also earlier processors.

And your comparison is pretty weak . . . The Rage in its final form was ATI's first multitexturing renderer - i believe it had two [2!] pixel pipelines . . . pretty advanced compared to its previous efforts, and it also supported pretty good 32-bit color while the Voodoo3 was still 16-bit. :p

edit: i just looked it up, this old GPU wasn't as bad as you make it out to be:
ATI implemented a then-new caching technique they called "Twin Cache Architecture" with the Rage 128. The Rage 128 used an 8 KB buffer to store texels that were used by the 3D engine. In order to improve performance even more, ATI engineers also incorporated an 8 KB pixel cache used to write pixels back to the frame buffer.

* 8 million transistors, 0.25 micrometer fabrication
* Highly optimized superscalar 128-bit engine
* Integral hardware support for DVD and Digital TV
* 3D Feature Set
o Single pass multitexturing (Dual texel pipe delivering 2 pixels per clock)
o Hardware support for vertex arrays, Fog and fog table support
o 16-bit or 32-bit color rendering
o Alpha blending, vertex and Z-based fog, video textures, texture lighting
o Single clock bilinear and trilinear texture filtering and texture compositing
o Perspective-correct mip-mapped texturing with chroma-key support
o Vertex and Z-based reflections, shadows, spotlights, LOD biasing
o Hidden surface removal using 16, 24, or 32-bit Z-buffering
o Gouraud and specular shaded polygons
o Line and edge anti-aliasing, bump mapping, 8-bit stencil buffer
* Integrated DVD/MPEG-2 decode
* 32 MB frame buffer (16 on VR), 250 MHz RAMDAC, AGP 2x with AGP texturing

looks like a primitive GPU to me
:Q
 

Gstanfor

Banned
Oct 19, 1999
3,307
0
0
Well, I had (still have) a Gainward GeForce 1 SDR and a Hercules GeForce2 Pro. Neither of them suffered from off colors or blurry output. Plenty of other brands (that relied on the reference design, just like the faulty 7900 GTs) did though.

There is another factor that does not get considered often in this context though. ATi uses a 10-bit DAC on their cards whereas nvidia and almost everyone else in the consumer 3d market (bar Matrox I think) uses an 8-bit DAC. Given that rendering APIs only allow for 8 bits of precision per color component (unless HDR is in effect), I'm not sure that there is much advantage to a higher precision DAC, though some might notice a slight difference.
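For a rough sense of where a wider DAC could plausibly matter (my own illustration, not a description of any particular RAMDAC): the API hands over 256 levels per color component either way, but once a gamma ramp remaps those levels on output, an 8-bit DAC can collapse some of them onto the same step, while a 10-bit DAC keeps them all distinct.

```c
/* Rough numbers behind the 8-bit vs 10-bit DAC point above: the API delivers
 * 256 levels per color component either way; the extra DAC precision only
 * buys headroom when a gamma ramp remaps those levels on output.
 * Illustrative only - not how any particular RAMDAC works internally. */
#include <stdio.h>
#include <math.h>

/* Count how many distinct output codes survive a gamma remap when the
 * 256 possible 8-bit framebuffer values are quantized to 'out_bits'. */
static int distinct_after_gamma(int out_bits, double gamma) {
    int max_out = (1 << out_bits) - 1;
    int count = 0, prev = -1;
    for (int in = 0; in < 256; in++) {
        int out = (int)(pow(in / 255.0, 1.0 / gamma) * max_out + 0.5);
        if (out != prev) { count++; prev = out; }
    }
    return count;
}

int main(void) {
    /* With an 8-bit DAC, several framebuffer levels land on the same output
     * step after gamma correction (potential banding); a 10-bit DAC keeps
     * all 256 inputs distinct. */
    printf("8-bit DAC : %3d of 256 levels remain distinct\n",
           distinct_after_gamma(8, 2.2));
    printf("10-bit DAC: %3d of 256 levels remain distinct\n",
           distinct_after_gamma(10, 2.2));
    return 0;
}
```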
 

Gstanfor

Banned
Oct 19, 1999
3,307
0
0
Oh, and fox5, T&L was far from useless on the GF1, especially if you owned a K6-series CPU, whose greatest weakness was floating point calculations - guess what got offloaded to the card that the CPU would have otherwise had to do?
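Back-of-the-envelope, the per-vertex work in question is mostly a 4x4 matrix multiply plus a lighting dot product - all floating point, exactly the K6's weak spot. The figures below are assumed for illustration (a hypothetical scene, not a measurement), just to show the scale of what a T&L unit could take off the CPU:

```c
/* A back-of-the-envelope look at what "offloading T&L" saved the CPU:
 * every vertex needs (at least) a 4x4 matrix multiply for transform plus a
 * dot product or so for lighting - all floating point work.
 * The workload figures are illustrative assumptions, not measurements. */
#include <stdio.h>

int main(void) {
    long vertices_per_frame = 20000;   /* hypothetical scene complexity      */
    long frames_per_second  = 60;
    long flops_transform    = 16 + 12; /* 4x4 matrix * vec4: 16 mul + 12 add */
    long flops_lighting     = 9;       /* one N.L diffuse term, roughly      */

    long flops = vertices_per_frame * frames_per_second *
                 (flops_transform + flops_lighting);

    printf("~%.1f MFLOPS of per-vertex work moved off the CPU\n",
           flops / 1e6);
    return 0;
}
```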
 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
You know that even Beyond3D was forced to change their stance to one of the T&L unit on the GeForce being too powerful for gaming. Hard-wired T&L was the de facto standard for game support until around the time of the R9700Pro and is still utilized in games shipping today. Fixed function not being effective was a very poor propaganda effort by 3dfx and its PR wing (which B3D was founded in part by).

Most of those games required fixed function T&L, but still used the cpu for most things. For years, you could disable fixed function T&L (through the use of an external app) and have minimal change in the image quality. For that matter, the Geforce 2 is the power level that was accepted as the minimum for years; the original Geforce didn't really benefit as it was just too slow to run most games. (and even its T&L unit wasn't that fast - once CPUs broke 1 GHz they had a fair amount of spare power they could use instead of the fixed function unit)
Not saying fixed function T&L is useless, but its prime time was the Geforce 2, not the Geforce.

Oh, and fox5, T&L was far from useless on the GF1, especially if you owned a K6-series CPU, whose greatest weakness was floating point calculations - guess what got offloaded to the card that the CPU would have otherwise had to do?

Oops, yeah I guess if you look at it that way. I was always one to upgrade my cpu sooner than my graphics card (mainly because the graphics card market at that time was so imperfect...you couldn't have everything you wanted in a single upgrade; the bar ATI raised with the 9700 pro for excellence in all areas was really needed), so I tended to have the top-of-the-line cpu paired with the non-T&L graphics card.
My Geforce 3 was very disappointing. Despite having a 1.4 GHz Athlon, the Geforce 3 wasn't very fast (it didn't even provide the boost that its fillrate suggested it would, let alone the fact that it was the first T&L card I owned), image quality was extremely subpar, drivers were crap when I first got it (which was actually part of the reason for the poor performance - with later drivers it performed as expected, but by then I had a 9700 pro that blew it away; image quality on the Geforce 3 didn't get better with drivers though), and it couldn't run any dx8-level games very fast at all. In fact, at times I almost wish I had gone for a fast Geforce 2 instead, which would have performed better in many games, except then games started coming out that wouldn't work on a Geforce 2 or were severely crippled in graphical features.
 

Gstanfor

Banned
Oct 19, 1999
3,307
0
0
I don't know how you could possibly have found the GF3's IQ to be lacking. It introduced MSAA & nvidia's top-quality anisotropic filtering (which they regrettably dropped for nv40 and g7x - it had better make a return for G80...)

Speedwise, early on my Leadtek GF3 Ti200 performed the same as my GF2 Pro, but as drivers matured the GF3 far outstripped the GF2 - it was a case of nvidia mastering the use of the crossbar memory controller.
 

DARQ MX

Senior member
Jun 4, 2005
640
0
0
yea it is pretty cool and all, but 5.4 billion? Holy crap.

I hope AMD still has their stuff together during the Core 2 release. Cause that is a pretty big number. And AMD sales could get hurt this quarter from that.

Well, I hope AMD will not come out with something that will prevent us from using Nvidia in the future. Cause now that AMD is buying ATi, I hope this does not change anything.

I think it would have been a little wiser to try and buy Nvidia because they are the highest-selling video card maker.
 

DARQ MX

Senior member
Jun 4, 2005
640
0
0
The big plus I can think of that ATi will gain is new technology from AMD for them to utilize.

How does a cooler running ATi video card sound to anyone?

I mean, I heard many people complain about the immense heat the X1000 series makes. Not only can it burn someone (if they're dumb enough to just leave their hand on there), it also heats up the case a bit more.

I could not even touch the HSF on an X1900 GT after 15 mins of it being on. It was unbearably HOT. I've never seen an Nvidia G70 or G71 feel that burning hot on my hand when I was taking it out right after I turned my customer's rig off.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
anyway, i had an Asus Geforce 256 [32MB SGRAM, i think it was called if i remember right] and i was disappointed . . . although it was very fast compared to my old Rage Fury 32, the colours were not at all 'vibrant' . . . and i remember a LOT of others complaining about the same thing.

Like I said, you didn't have a Gainward. I knew this from your comment about washed-out colors. You bought a very crappy board; that wasn't due to the GPU at all. I just built a business-use rig (for a friend's parents) and put a GF6200 ($30 - should be good for Vista too) in the system and was honestly shocked at just how much better the 2D image quality was compared to my BBA board on the same monitor.

looks like a primitive GPU to me

Simplest possible terms - a GPU can execute code. The original Radeon was ATi's first GPU. There is a very real, fundamental difference between a GPU and a rasterizer. This has nothing to do with ATi vs. nV - the Rage128 series were TNT competitors; they were never meant to go head to head with the GeForce (that was the Radeon's job).
 

Nyati13

Senior member
Jan 2, 2003
785
1
76
Originally posted by: archcommus
It makes me mad that the Inq article mentions so much that AMD would be completely dead in five years if it wasn't for this merger. True, they're an entire generation behind at this point, but who's to say that will still be the case a year or 18 months from now?

I would argue that AMD is only half a generation behind right now (K8 vs Conroe), and that is only against 20% of Intel's production by the end of the year. K8L should be out in dual-core form mid-2007 (say July), so that's a 12-month gap.
 

5150Joker

Diamond Member
Feb 6, 2002
5,549
0
71
www.techinferno.com
What AMD/ATi need to do right now to survive is to leverage existing technology on a smaller process at dirt cheap prices. Sell X1900 GTs and AMD 64 chips at a fraction of the cost of nVidia/Intel parts while making razor thin margins on them (just don't fall into the red). As long as they can keep going for about a year until AMD has a response to Conroe/Merom they'll be alright.
 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
Originally posted by: Gstanfor
I don't know how you could possibly have found the GF3's IQ to be lacking. It introduced MSAA & nvidia's top-quality anisotropic filtering (which they regrettably dropped for nv40 and g7x - it had better make a return for G80...)

Speedwise, early on my Leadtek GF3 Ti200 performed the same as my GF2 Pro, but as drivers matured the GF3 far outstripped the GF2 - it was a case of nvidia mastering the use of the crossbar memory controller.

My Geforce 3 (PNY brand) had horrible analog output, among the worst I'd ever seen. It was just fine on DVI, but I didn't have a DVI monitor at the time.
Oh, and Quincunx AA sucked.
I wouldn't be surprised if part of my poor performance was due to having a VIA chipset; I may have had one of the ones with poor AGP performance.

I would argue that AMD is only half a generation behind right now (K8 vs Conroe), and that is only against 20% of Intel's production by the end of the year. K8L should be out in dual-core form mid-2007 (say July), so that's a 12-month gap.

I would argue a full generation behind.
I'd consider the Athlon 64, Prescott, and Yonah to all be of the same generation. Sure, Prescott wasn't an improvement over Northwood, but it was an attempt at one with quite a bit of silicon added. If Prescott had worked out as Intel wanted, it might be the leading processor right now. A >=5 GHz NetBurst processor probably would have done to the Athlon 64s what Northwood did to the XPs.
Conroe is next gen, and K8L will be AMD's answer.
 

Gstanfor

Banned
Oct 19, 1999
3,307
0
0
Yeah, your troubles stemmed from the card brand and the motherboard chipset you were using imho. VIA & nvidia never really worked well together in a system imo.

To be fair, the GF3's IQ wasn't 100% perfect. In DXTC modes higher than DXTC1 you could see a fair amount of banding. That wasn't corrected until the GF4.