
Nvidia's Future GTX 580 Graphics Card Gets Pictured (Rumours)

Really? Matrox released an entire series of gaming cards that allow triple monitors to be run on one card and brought multi-monitor gaming to the masses? Please provide a link, because I don't quite remember that.

Well, they brought it to the masses; the masses just decided not to buy it (I think that hasn't really changed with Eyefinity; a lot of people still think 3 monitors is just too much for a bit of gaming):
http://www.anandtech.com/show/911/11
http://www.anandtech.com/show/2054
 
That's not what he said and I'm pretty sure you realize that.
Can I ask why you thought he meant that Nvidia just has better marketing, and not actually better technology implementations? I mean, dude, you have to stop pushing sometime.
You find yourself actually sitting there saying that Nvidia isn't good at making new tech, just better at marketing it????
WTH man?
Tessellation has been in ATI cards for years. Yes. It's no wonder the tech never went anywhere until NV offered it as well. They push technology. ATI gives a poke and asks if you would like to use this tech. And if nobody answers, it becomes idle, dies, or never goes anywhere.

He never said that at all:

AMD is good at making new tech... Nvidia is just better at marketing it, and making game developers use it. Tessellation has been in ATI cards for years and years... yet no one wanted it in games because Nvidia cards didn't have it.

But you're right Sickamore, Nvidia does have better marketing/game developer relations.

Good job putting words in his mouth. But hey I guess that suits the argument you were trying to make, so why not?

Did a google of his username:
http://semiaccurate.com/forums/searc...earchid=314609

Don't waste your breath.

vBulletin Message said:
Sorry - no matches. Please try some different terms.
What were you trying to prove again? Oh yeah, ad hominem.
 
Really, the only two things 3dfx did were make 3D acceleration an affordable reality AND introduce SLI. But man, were those two things huge. 😀

Of course, 3dfx's contribution to video cards is not to be underestimated...
But nVidia has since surpassed them, if you ask me...
The things I listed had a very large impact on the performance of games and/or graphical realism.
Hardware T&L especially is not to be underestimated either. nVidia marketed the card that introduced it as the world's first GPU. It's no coincidence that we refer to any graphics chip as a 'GPU' these days.
 
Then I will: nVidia's G80 was the first unified shader architecture.

ATi has shown some rigged demos, but never an actual consumer product.

Yeah, but most of those aren't really ATi's doing... LPCM audio is just an implementation of the HDMI standard. Even Intel had that on their IGPs before nVidia, I believe (and possibly before ATi?)
As I said, something like Eyefinity was done first by Matrox. Matrox is the true pioneer of multi-monitor technology.
GDDR5 and smaller process nodes are technologies developed by other companies; ATi just adopted them before nVidia did, but didn't have anything to do with the development of the actual technology itself.
And those technologies didn't do all that much for graphics in themselves. They just made the same graphics technology slightly faster and/or cheaper.

Are you sure about that Scali? Because according to http://www.anandtech.com/show/1719/7 it seems that ATI designed Xenos with unified shaders, prior to G80? I could have the dates wrong though.

Yes, that's what I meant: ATI physics never made it out of tech demos into actual products, apparently. Kinda sad if they did have a lead at one point.

Also ATI had a hand in GDDR5 development: http://www.anandtech.com/show/2679/8

But doesn't Matrox's triplehead stop at 3, and Eyefinity goes up to 24? Sure it's an extension of existing technology, but earlier in this thread you said extensions count. I guess it's a matter of degree, to you.
 
Are you sure about that Scali? Because according to http://www.anandtech.com/show/1719/7 it seems that ATI designed Xenos with unified shaders, prior to G80? I could have the dates wrong though.

I don't know about that (consoles don't exist in my world), I thought the Xenos was a derivative of their DX9 hardware, in which case it wouldn't have unified shaders. I know that ATi talked about unified shaders for years, but at the very least nVidia was the first to deliver unified shaders to the PC platform.

Yes, that's what I meant: ATI physics never made it out of tech demos into actual products, apparently. Kinda sad if they did have a lead at one point.

I don't think they did. I think all they had was demos.

But doesn't Matrox's triplehead stop at 3, and Eyefinity goes up to 24?

That's just nomenclature. Matrox also has cards that do more than triplehead. I don't know if they go up to 24, but that's just a number from there on, isn't it? The technology itself is multi-monitor; once you have it, you can scale it up as far as you want.

Sure it's an extension of existing technology, but earlier in this thread you said extensions count. I guess it's a matter of degree, to you.

Well, this I wouldn't even call an extension.
Matrox was the one to bring dualhead, triplehead and beyond to the market, years before other companies did. Matrox also pioneered the use of this technology in Windows desktops and games. So what AMD does is nothing new. Matrox did that back in 2002.
 
I don't know about that (consoles don't exist in my world), I thought the Xenos was a derivative of their DX9 hardware, in which case it wouldn't have unified shaders. I know that ATi talked about unified shaders for years, but at the very least nVidia was the first to deliver unified shaders to the PC platform.

Yeah I don't use consoles either, but they ARE a big market... anyway, apparently ATI was first to unified shaders, but NV was first to unified shaders in PCs.
 
anyway, apparently ATI was first to unified shaders, but NV was first to unified shaders in PCs.

We'd have to dive into the dates for that, but I really can't be bothered.
Unified shaders in themselves aren't that spectacular... It's not like the Xenos was so much better and faster than their existing DX9 cards.
G80 actually used those unified shaders to give us Cuda. Now THAT is innovation.
 
We'd have to dive into the dates for that, but I really can't be bothered.
Unified shaders in themselves aren't that spectacular... It's not like the Xenos was so much better and faster than their existing DX9 cards.
G80 actually used those unified shaders to give us Cuda. Now THAT is innovation.

http://en.wikipedia.org/wiki/GeForce_8_Series#GeForce_8800_Series
http://en.wikipedia.org/wiki/Xbox_360

The Xbox 360 came out in late 2005. G80 came out in late 2006.

I agree that NV is usually the innovator, but let's give credit where it's due.
 
We'd have to dive into the dates for that, but I really can't be bothered.
Unified shaders in themselves aren't that spectacular... It's not like the Xenos was so much better and faster than their existing DX9 cards.
G80 actually used those unified shaders to give us Cuda. Now THAT is innovation.

Using parallel cores to execute parallel code? GENIUS. Seriously though, CUDA is not innovative, just evolutionary.
 
Using parallel cores to execute parallel code? GENIUS. Seriously though, CUDA is not innovative, just evolutionary.

Oh come on... We never even referred to shaders as 'programmable cores' prior to Cuda. Why not? Because they weren't anything like CPUs.
They were programmable shaders, literally. They were simple operations for shading only, which could be 'programmed' (more like 'sequenced').
Cuda took a giant leap there, adding a lot of features such as full IEEE float compliance, full random memory access, shared memory, as well as a more complete instruction set, so that GPUs were now capable of more or less the same things as CPUs, and could be programmed through C/C++.

Sure, we've had parallel architectures before, but we're talking about GPUs here. For GPUs this certainly was a first, and quite a leap. It opened up the GPU for tons of new applications.

I realize it's difficult for people who have never programmed GPUs, or at least haven't programmed them in the early days, with register combiners and the first generation of shaders, to have a good idea of how far we've come... but really, try and get a proper perspective first, this is getting ridiculous guys.
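
To make concrete what "programmed through C/C++" with shared memory looks like, here is a minimal sketch; it is not from the thread, it is written against present-day CUDA for brevity, and the kernel name blockSum and the sizes are made up for illustration. It sums an array block by block using on-chip shared memory and ordinary loads/stores, launched from plain host code.

```cuda
// Minimal CUDA sketch: per-block sum reduction using shared memory.
// Hypothetical example; names and sizes are chosen only for illustration.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void blockSum(const float* in, float* out, int n)
{
    __shared__ float tile[256];                       // on-chip shared memory, one tile per block
    int gid = blockIdx.x * blockDim.x + threadIdx.x;

    tile[threadIdx.x] = (gid < n) ? in[gid] : 0.0f;   // arbitrary read from global memory
    __syncthreads();                                  // block-wide synchronisation

    // Tree reduction within the block.
    for (int stride = blockDim.x / 2; stride > 0; stride /= 2) {
        if (threadIdx.x < stride)
            tile[threadIdx.x] += tile[threadIdx.x + stride];
        __syncthreads();
    }

    if (threadIdx.x == 0)
        out[blockIdx.x] = tile[0];                    // scatter write: one partial sum per block
}

int main()
{
    const int n = 1 << 20;
    const int threads = 256, blocks = (n + threads - 1) / threads;

    float *in, *out;
    cudaMallocManaged(&in, n * sizeof(float));
    cudaMallocManaged(&out, blocks * sizeof(float));
    for (int i = 0; i < n; ++i) in[i] = 1.0f;

    blockSum<<<blocks, threads>>>(in, out, n);
    cudaDeviceSynchronize();

    float total = 0.0f;
    for (int i = 0; i < blocks; ++i) total += out[i];
    printf("sum = %f\n", total);                      // expect 1048576.0

    cudaFree(in);
    cudaFree(out);
    return 0;
}
```

The block-wide synchronisation and the scatter write at the end are exactly the kind of thing that fixed-function shading and the first generations of programmable shaders could not express.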
 
Lonbjerg, I still don't get it... also this does feel like a personal attack.

I guess you don't.
Keys did.
That's why he isn't wasting any more time on you.
Your posts there (S/A), combined with your lack of GPU engineering knowledge, mean you are not worth the trouble... even if you post garbage, it's not worth the time to debunk it.

Come back when you can tell the difference between marketing and engineering.

Here is some marketing for you:
http://blogs.amd.com/play/2009/06/02/why-we-should-get-excited-about-directx-11/

Here is some engineering:
http://www.geeks3d.com/20101028/test-asus-eah6870-1gb-review-direct3d-performances-part-4/

It's that simple.
 
As I said, it is evolutionary. The whole evolution of GPUs is towards becoming more general purpose (more like CPUs). This is in the DX standards (the push for DirectCompute and other stuff like that). CUDA is not that innovative. It is just part of the evolution towards more general purpose. You said it yourself: GPUs were now capable of more or less the same things as CPUs and could be programmed through C/C++. Those two things have existed for a while, so it isn't that big an innovation. Sure, the technical details took a while, but that isn't really innovation.

I am not saying AMD innovates much either. Consumer companies rarely innovate as it takes a while to turn innovations into products.

So now we don't allow people who post on websites some people don't like? It seems rather personal to bring up what websites he likes to frequent. Don't bother him about stuff when you couldn't understand why AMD's share of DX11 cards went down even though it outsold Nvidia.
 
Oh come on... We never even referred to shaders as 'programmable cores' prior to Cuda. Why not? Because they weren't anything like CPUs.
They were programmable shaders, literally. They were simple operations for shading only, which could be 'programmed' (more like 'sequenced').
Cuda took a giant leap there, adding a lot of features such as full IEEE float compliance, full random memory access, shared memory, as well as a more complete instruction set, so that GPUs were now capable of more or less the same things as CPUs, and could be programmed through C/C++.

Sure, we've had parallel architectures before, but we're talking about GPUs here. For GPUs this certainly was a first, and quite a leap. It opened up the GPU for tons of new applications.

I realize it's difficult for people who have never programmed GPUs, or at least haven't programmed them in the early days, with register combiners and the first generation of shaders, to have a good idea of how far we've come... but really, try and get a proper perspective first, this is getting ridiculous guys.

True, but I think people are taking issue with the triple counting. One hardware change (programmable shaders), and we count the API and PhysX utilizing that API as two additional things?
 
As I said, it is evolutionary. The whole evolution of GPUs is towards becoming more general purpose (more like CPUs).

Whatever you want to call it, I'm not going to bother debating that; that's not the point.
The point is that this progress tends to go in bursts. I am pinpointing some of the innovations that gave a burst to this 'evolution', as you want to call it.

This is in the DX standards (the push for DirectCompute and other stuff like that). CUDA is not that innovative.

AHAHAHAHHAHAHAHAHAHA LOL ROFL!
Sorry, that was just TOO funny.
Push for DirectCompute? Really? Man, Cuda was introduced with the first DX10 hardware. DirectCompute was introduced in DX11, about 3 years later!
And guess what? DirectCompute builds upon the groundwork laid by Cuda (as does OpenCL). If it wasn't for Cuda, we don't know what DX11 would look like, whether it would even have DirectCompute yet, or whether it would be anything like it is today.
But what we do know is that even that early nVidia DX10 hardware fully supports DirectCompute and OpenCL. Why? Because the minimum requirements and general computational model are based on Cuda.

You HAVE to give nVidia credit there. They were a generation ahead of the rest, literally.
I guess you either just don't know the facts, or you can't give credit where credit's due.
 
True, but I think people are taking issue with the triple counting. One hardware change (programmable shaders), and we count the API and PhysX utilizing that API as two additional things?

Programmable shaders refers to the GeForce 3 (and if you like, the register combiners in GeForce256/2).
Those were programmable, but NOT in the GPGPU sense. You could NOT do Cuda or PhysX on them. That wasn't until many years later.
But I already said that earlier.
I also already explained why PhysX and Cuda are not the same thing.
Stop trolling.
 
Whatever you want to call it, I'm not going to bother debating that; that's not the point.
The point is that this progress tends to go in bursts. I am pinpointing some of the innovations that gave a burst to this 'evolution', as you want to call it.



AHAHAHAHHAHAHAHAHAHA LOL ROFL!
Sorry, that was just TOO funny.
Push for DirectCompute? Really? Man, Cuda was introduced with the first DX10 hardware. DirectCompute was introduced in DX11, about 3 years later!
And guess what? DirectCompute builds upon the groundwork laid by Cuda (as does OpenCL). If it wasn't for Cuda, we don't know what DX11 would look like, whether it would even have DirectCompute yet, or whether it would be anything like it is today.
But what we do know is that even that early nVidia DX10 hardware fully supports DirectCompute and OpenCL. Why? Because the minimum requirements and general computational model are based on Cuda.

You HAVE to give nVidia credit there. They were a generation ahead of the rest, literally.
I guess you either just don't know the facts, or you can't give credit where credit's due.

It's ironic how CUDA is making their cards pretty bad in the grand scheme of things. One of their greatest achievements, just in the wrong place.
 