Why is Nvidia so fashionably late with their card?

Jun 9, 2002
As we know, Nvidia, for once in its GPU cycle (that I can remember), is fashionably late. Why? Well, the first logical answer that comes to mind is that they were preparing the consumer market for another small step up in GPU performance but were taken aback when ATI came around and opened something greater than a Can O' Whoop @ss on them. With that, they had to tweak some plans in order to put out a card that would compete with ATI's goliath. A quick but wise approach is my theory, as I will explain.

What's one of the most burdensome things CPU manufacturers encounter as they develop better processors? The answer I'm looking for is the development of a new CPU architecture. As we know, with a new architecture in development, a manufacturer may find themselves revamping a lot of stuff... transistor layout, cache location/size, trace locations/lengths... et cetera. Another question: what's one of the ways CPU manufacturers obtain better performance from CPUs built on the same architecture? The answer I'm looking for is... increasing the clock frequency! Of course there are many other factors, but let's take these as the basics to look at.

To explain my point, I ask you to look at the specs on the Nvidia FX 5800... What's its core clock speed? What's its memory clock speed? From what many have been informed of, they are 500MHz and 1000MHz respectively. Now what are ATI's core and memory clocks set at? Of course, being generous, we'll give it the 325MHz and 310MHz found on the top-of-the-line cards... (not that you can't overclock a regular or lower-clocked 9700/Pro).
Now I ask you this: how wide is ATI's memory bus? 256 bits, right? How wide is Nvidia's memory bus? 128 bits. That's the kicker! Nvidia's FX 5800 looks like it's going to have only half the memory bus width of the 9700 series. Not a good sign... I take that to be the reason they cranked up their core and memory frequencies by 53% and 61% respectively... a better sign... so much so that, according to benchmarks, it looks like it's competing very well against the 9700 Pro.

The big kicker to all you bragging 9700 Pro people... don't get me wrong, I respect those who are proud of their 9700 Pro... but then there are those who demean those of us who have a different opinion, even if it rests in neutral territory... back to my point... the kicker that will give ATI's Radeon 9700 Pro and their next-generation card a good run for their money all goes back to my opening statement concerning bus width.
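To put rough numbers behind that bus-width point, here's a quick back-of-the-envelope Python sketch. The peak_bandwidth_gbs helper is just something I threw together for illustration, and the widths/clocks are the commonly quoted specs (128-bit @ 500MHz DDR for the FX 5800 Ultra, 256-bit @ 310MHz DDR for the 9700 Pro), so treat the exact figures as approximate:

```python
# Back-of-the-envelope peak memory bandwidth:
# (bus width in bytes) x (memory clock) x 2 transfers per clock (DDR).
# Widths and clocks below are the commonly quoted specs, not measured numbers.
def peak_bandwidth_gbs(bus_width_bits, mem_clock_mhz):
    bytes_per_transfer = bus_width_bits / 8        # bits -> bytes per transfer
    transfers_per_sec = mem_clock_mhz * 1e6 * 2    # DDR: two transfers per clock
    return bytes_per_transfer * transfers_per_sec / 1e9

print(peak_bandwidth_gbs(256, 310))   # Radeon 9700 Pro: ~19.8 GB/s
print(peak_bandwidth_gbs(128, 500))   # GeForce FX 5800 Ultra: ~16.0 GB/s

# And the clock bumps mentioned above (roughly the ~53% and ~61% figures):
print(f"core: +{500 / 325 - 1:.0%}, memory: +{500 / 310 - 1:.0%}")
```

So even with roughly half the 9700 Pro's bus width, the much higher clocks keep the FX 5800 in the same bandwidth ballpark, which is exactly why those benchmarks come out close.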

Nvidia WILL (and this is said with speculation) open up their memory bus width in the next generation or two... hopefully the first one after their NV30 release. With that done, memory bandwidth will have effectively doubled... theoretically from 16GB/s to 32GB/s... like WHOA. Putting out a new architecture AND a super fast core and memory would have been a lot to handle in my opinion... and of course, what's the point of selling a card with all the bells and whistles right away? You have to have the customer go through one hoop at a time in order to introduce what's "new and improved"... more like take people's money gradually <wink wink>. Anyhow... I don't know if anyone else has speculated this in articles, but I dare to say that I'm the first... hehe. Nvidia is still here... lurking... waiting. You watch.
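And here's what that speculated doubling looks like on paper, using the same back-of-the-envelope math with a hypothetical 256-bit bus at the same 500MHz DDR memory clock. This is pure speculation on my part, not an announced spec:

```python
# Same formula as above: shipping 128-bit FX 5800 Ultra vs a hypothetical
# 256-bit part at the same 500MHz DDR memory clock (speculation, not a spec).
for bus_bits, label in [(128, "FX 5800 Ultra, 128-bit"), (256, "hypothetical 256-bit part")]:
    gbs = bus_bits / 8 * 500e6 * 2 / 1e9
    print(f"{label}: {gbs:.0f} GB/s")   # 16 GB/s vs 32 GB/s
```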

Furthermore, just to back myself up, I bought a 9500 Pro. Unfortunately, it and I had driver issues. My GeForce3 actually gave me MORE FPS (DOUBLE when I was with my party of 8 members) in Dungeon Siege than the 9500 Pro! Shocker! It did, however, boost my Counter-Strike FPS by 10... but that's all it did well for CS. The rest of it sucked... walls shimmied back and forth when running through hallways... I tried everything... even reinstalling Windows so that there were no traces of Nvidia on the HDD... sorry, this card is going back to Newegg today. Sorry Charley, I'm sticking with my ever-reliable GeForce3 (thank God for their software engineers). Gotta love ATI's hardware... but the software... sigh... why me? I tried, at least.
 

Bopple

Member
Jan 29, 2003
No offense, but I can't see any sense in your logic.

As we all know, nVIDIA chose 128-bit DDR2 and ATi chose 256-bit DDR1.
And you conclude the GFFX competes very well against the 9700 Pro - as if that's praise.

No, it's not like that.
nVIDIA chose 128-bit DDR2 and it was not good enough, so they had to put that BS cooler on the board.
And even with that, its temperature sits at about 150F (70C). On top of that, its overclockability is very low and it consumes excessive power.
Even when it was overclocked, the performance gains were insignificant.
What does that tell you? It's already overclocked at the factory by the hands of nVIDIA themselves, and to the limit of the card's potential.

You see? You're complimenting this BS.
It would already be improper to see these things happen and say nothing about it, but you actually praise it?
 

Fallen Kell

Diamond Member
Oct 9, 1999
I agree, but the MAIN reason they were late was shifting to a 0.13 micron process. Their design was fine; it just took the fab 6 months longer than they originally thought to get the 0.13 micron process working properly.

Also, if you take that into account, the NV30 is fairly comparable to the 9700 Pro. ATI and NVIDIA focused on different things for their cards: NVIDIA went for a more programmable system with increased fill rate, while ATI went for more memory bus bandwidth and optimizations for AA and floating-point computation.
 
Jun 9, 2002
Hey Bopple,
I don't know if you realized it... but my main point was to forecast what Nvidia might do. I used facts that everyone knows in order to make my assumption more tangible to the reader. Please show me the "hard facts" that back up your claims. All my evidence can be dug up easily by just grabbing an article on the matter at www.tomshardware.com. Now, it doesn't speculate... but facts are facts when it comes to hardware architecture. I'll avoid getting into a debate on the heat or noise that the card is said to produce. If you're wise, you'll probably take off a side panel of your case and add additional cooling with an external fan... and noise? What about it? Since when is owning a Harley-Davidson in the form of a video card a bad thing?! That's that. Nice to see someone agrees, "Fallen" :)
 

Bopple

Member
Jan 29, 2003
I saw your point about nvidia's next move. I agree on what nvidia will do in the future. But then again, we all assumed as much.
And that future also applies to ATi and the other companies.

But about the present situation, I can't agree.
nvidia went with 130nm and 128-bit DDR2 - I don't want to get into whether that was wise or not, since in the future it may be a moot point (it depends on how smoothly ATi gets through the 130nm process).

But even with a new wafer process, unless you're overclocking or using trash materials, you can't turn a chip with only ~10% more transistors into a 150F monster.
And that with that enormous cooler on top.

I call it cheating, but you praise it.