Originally posted by: Jeff7181
You're one of those guys whose voice gets higher and who can't sit still when he talks about computers, aren't you?

ROFL!!! 😀
Originally posted by: Insomniak
I mean, do you know how much eDRAM costs? Not to mention that there are only 2 or 3 manufacturers of it in the first place. Those specs are just too astronomical a jump, methinks.

Actually, not much. The GC has DRAM built into the Flipper chip; if nVidia were to use the type of RAM that the GC uses, it could add that 16MB quite cheaply. The theoretical maximum bandwidth between Flipper and its embedded memory is 10GB/s+, and we've all seen some of the stuff the GC can push out. Now, on the other hand, if nVidia does pull it off and manages to offer the card at a reasonable price (at or below current top-end prices), then, holy crap.
Originally posted by: gorillaman
If ATI did it, would you call it a waste then? No offense, just curious. And where did you get 1/4th from?
What kind of memory is used for L2 cache again?
Originally posted by: Jeff7181
Originally posted by: videoclone
🙂 I think nVidia will be making something fast, but I don't think it will be based on a new fab; I think they will stick with the 0.13-micron process. It's more mature, and they're not going to make the same mistake twice and have their new core delayed a year due to poor fabrication maturity and yield! The core speed will be 550-600MHz, but I don't think anything higher than that. They will stick with an extravagant cooling solution for the beast, and I wouldn't be surprised if the card ends up being even bigger than the old GeForce FXs. They may also change the name back to GeForce 5 or GeForce FX2.
You're one of those guys whose voice gets higher and who can't sit still when he talks about computers, aren't you?
Originally posted by: reever
Originally posted by: gorillaman
If ATI did it, would you call it a waste then? No offense, just curious. And where did you get 1/4th from?
It doesn't matter who does it; if either company did it, it would most likely drive the cost up immensely and offer little to no performance increase. And I get 1/4th because the peak bandwidth of the memory in the GC chip is 10GB/s, while the memory that will be used in the R420/NV40 will run anywhere from 30-45GB/s.
What kind of memory is used for L2 cache again?
They use SRAM for the cache; it really can only come in small densities, and it has adequate bandwidth.
Originally posted by: gorillaman
OK, here is what I have come up with so far:
Here are the nVidia NV40 Specs:
- 0.09u process
- 300-350 Million Transistors
- 750-800 MHz Core clock speed
- 16 MB Embedded DRAM (134 million transistors)
- 1.4 GHz 256-512 MB DDR-II Memory
- 8 Pixel Rendering Pipelines (4 texels each)
- 16 Vertex Shader Engines
- 204.8 GB/sec Bandwidth (eDRAM)
- 44.8 GB/sec Bandwidth (DDR-II)
- 25.6 GigaTexels per Second
- 3 Billion Vertices per Second
- DirectX 9.1 (or even DirectX 10) features
To compare, here are the nVidia NV30 Specs:
- 0.13u process
- 500 MHz Core clock speed
- 500 MHz (1 GHz effective) 128-256 MB DDR-II Memory
- 125 million transistors
- 8 pixel pipelines with 2 texturing units each
- 16 texture layers per rendering pass
- 3.2 gigapixels per second
- 6.4 gigatexels per second
- 360-400 million vertices per second
- 16 GB/sec memory bandwidth
- DirectX 9.0+ features (Pixel Shader 2.0+, Vertex Shader 2.0+, etc.)
- 128 and 64-bit Floating-Point Pixel Processing
- Quad Vertex Shader Engine
- Improved FSAA (Programmable Grid AA or Adaptive AA)
- Improved HSR (Lightspeed Memory Architecture III)
- AGP 3.0 (AGP 8x)
Note: This data is not official and shouldn't be treated as such.
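For what it's worth, those rumored numbers at least hang together arithmetically. A quick Python scribble to sanity-check them; the bus widths are my guesses, picked to match the listed figures, not anything nVidia has stated:

```python
# Peak bandwidth = effective transfer rate x bus width in bytes.
# The bus widths below are assumptions chosen to match the rumored figures.
def bandwidth_gb_s(effective_clock_hz, bus_width_bits):
    return effective_clock_hz * (bus_width_bits / 8) / 1e9

print(bandwidth_gb_s(1.4e9, 256))   # 44.8  -> the NV40 DDR-II figure (256-bit bus)
print(bandwidth_gb_s(800e6, 2048))  # 204.8 -> the eDRAM figure (2048-bit internal bus)
print(bandwidth_gb_s(1.0e9, 128))   # 16.0  -> the NV30 figure (128-bit bus)

# Texel rate = core clock x pipelines x texture units per pipeline.
print(800e6 * 8 * 4 / 1e9)          # 25.6 GTexels/s, as listed
```

So whoever made the list up did their multiplication consistently, whatever its origin.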
Originally posted by: reever
Originally posted by: gorillaman
Reever seems to have "Conspiracy Theory Syndrome"... LOL. Everyone lies to him and he knows it without a doubt. Let him be the low IQ on the totem pole, right where he constantly puts himself, because he can't control what he types. It's the metacarpal version of Tourette's syndrome. Don't pay him any mind; he doesn't know any better. And I know that I don't feel like making time for things like this. He's really a good kid, but his knuckles are red and swollen from smashing his fingers with a hard rubber mallet whenever they act up. I hope he seeks treatment.
It's so much easier to make personal attacks than it is to discuss technology, right?
Originally posted by: jiffylube1024
Originally posted by: gorillaman
OK, here is what I have come up with so far:
Here are the nVidia NV40 Specs:
- 0.09u process
- 300-350 Million Transistors
- 750-800 MHz Core clock speed
- 16 MB Embedded DRAM (134 million transistors)
- 1.4 GHz 256-512 MB DDR-II Memory
- 8 Pixel Rendering Pipelines (4 texels each)
- 16 Vertex Shader Engines
- 204.8 GB/sec Bandwidth (eDRAM)
- 44.8 GB/sec Bandwidth (DDR-II)
- 25.6 GigaTexels per Second
- 3 Billion Vertices per Second
- DirectX 9.1 (or even DirectX 10) features
To compare, here are the nVidia NV30 Specs:
- 0.13u process
- 500 MHz Core clock speed
- 500 MHz (1 GHz effective) 128-256 MB DDR-II Memory
- 125 million transistors
- 8 pixel pipelines with 2 texturing units each
- 16 texture layers per rendering pass
- 3.2 gigapixels per second
- 6.4 gigatexels per second
- 360-400 million vertices per second
- 16 GB/sec memory bandwidth
- DirectX 9.0+ features (Pixel Shader 2.0+, Vertex Shader 2.0+, etc.)
- 128 and 64-bit Floating-Point Pixel Processing
- Quad Vertex Shader Engine
- Improved FSAA (Programmable Grid AA or Adaptive AA)
- Improved HSR (Lightspeed Memory Architecture III)
- AGP 3.0 (AGP 8x)
Note: This data is not official and shouldn't be treated as such.
^ Bwahahahahaha! Those were a "guesstimate" made on some board by a hardware guru named ChairmanSteve well over a year ago (and used in many publications, too).
I'll dig up the link from where it was originally posted, but it has NOTHING to do with Nvidia (this may take a while).
Also, just look at the features:
0.09 micron - we already know ATI and Nvidia will be using 0.13.
eDRAM - not likely.
750-800MHz core speed - that used to be incredibly optimistic, but after the FX cards, something like 500-650MHz is probably doable for the core.
Originally posted by: reever
But it would be quite a waste, as even the peak bandwidth of that memory would still be 1/4th the bandwidth of the regular memory. Unless the on-chip RAM is running at astronomical speeds (which it probably won't, considering how cache runs right now), it's pretty much a waste of silicon.

Assuming the chip ran at 500-600MHz, running on-chip cache at that speed wouldn't be much of a problem, assuming they could integrate it without a huge increase in voltage and heat.

Originally posted by: reever
And what that actually has to do with the on-chip memory remains to be seen; plus, comparing what a console can push out to what a PC can has always been a moot comparison.

What it has to do is this: more on-chip memory can help gfx chips in a very big way. And any other chip, for that matter. Even if it wasn't 16MB, even if it was only 512KB, it would do wonders for the chip.

Originally posted by: reever
It doesn't matter who does it; if either company did it, it would most likely drive the cost up immensely and offer little to no performance increase.

If either company did do it, it would mean quite a performance increase compared to the same chip with no on-chip memory.

Originally posted by: reever
Cache on processors runs at core speed and only gives 20-25GB/s of bandwidth; main video memory would still have higher bandwidth. If you need more bandwidth, you would have to increase the bus width and associativity of the cache, which would make costs skyrocket and prove just about impossible to manufacture. Intel is the master at making cache, and not even they can do it effectively.
You don't seem to understand the nature of cache fully, do you? How would it help? What would the cache hold? Framebuffer, textures, instructions? If it held anything but instructions, you would need at least 16MB of memory for it to affect anything; there is no point in holding 512KB of textures or framebuffer.

Only 25GB/s? Well, heck me, as if that number were small! As I said, by using the right type of SRAM, it could be done relatively cheaply.

At 1.5GHz, the Pentium 4's L2 cache offers 48GB/s of throughput, while a theoretical 1.5GHz Pentium III would only offer 24GB/s of available bandwidth.

Remember that the P4 is running at 3 times the speed of current graphics processors, so that works out to about 16GB/s unless they increased the data path.
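Those P4 numbers line up if you assume the L2 moves 256 bits per core clock (the port width is my assumption, not a quoted spec):

```python
# Cache bandwidth = core clock x port width in bytes (one transfer per clock).
def cache_bw_gb_s(core_clock_hz, port_width_bits):
    return core_clock_hz * (port_width_bits / 8) / 1e9

print(cache_bw_gb_s(1.5e9, 256))  # 48.0 -> the quoted P4 L2 figure
print(cache_bw_gb_s(1.5e9, 128))  # 24.0 -> the "theoretical 1.5GHz PIII" figure
print(cache_bw_gb_s(500e6, 256))  # 16.0 -> a ~500MHz GPU with the same port width
```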
GPU - "Flipper" (system LSI)
Manufacturing Process: 0.18 microns NEC Embedded DRAM Process
Clock Frequency: 162 MHz
Embedded Frame Buffer/Z Buffer: Approx. 2 MB, Sustainable Latency: 6.2 ns (1T-SRAM)
Embedded Texture Cache: Approx. 1 MB, Sustainable Latency: 6.2 ns (1T-SRAM)
Texture Read Bandwidth: 10.4 GB/second (Peak)
Main Memory Bandwidth: 2.6 GB/second (Peak)
Xbox specs, for comparison: unified memory architecture with a single 64MB bank shared between CPU and graphics tasks; memory bandwidth of 6.4GB/second (less than the GameCube's 10.4GB/second peak); 125 million polygons per second.
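Same arithmetic applied to the console figures above; again, the bus widths are my guesses, chosen to match the quoted peaks:

```python
# Peak bandwidth = transfer rate x bus width in bytes (widths are assumptions).
print(162e6 * (512 / 8) / 1e9)  # ~10.37 -> Flipper's 10.4GB/s texture read bandwidth
print(400e6 * (128 / 8) / 1e9)  # 6.4    -> Xbox's 6.4GB/s unified memory (200MHz DDR, 128-bit)
```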
Originally posted by: gorillaman
And can we also speculate that GPU/VPUs will become more CPU-like? It's almost like having L2 cache in the core of the graphics processor. What kind of memory is used for L2 cache again?
From the looks of Doom 3 and Half-Life 2 bringing today's top hardware to a crawl, it's not out of the question that the hardware makers would feel the "market need" to go hardcore on the next-gen GPU/VPUs. People were used to getting well over 100fps in most games easily; now it is questionable whether the games can be played with any real enjoyment at all. The software companies, including Microsoft of course, are probably pushing hardware vendors to pick up the pace to keep up with DX9/9.1 and so on. Some heavy-duty hardware is needed right now to push and shade these super-high-powered games.
IMHO
By using the right type of SRAM, it could be done relatively cheaply.
Can the CPU hold the entire OS in its cache? No, it can't, so it transfers the parts it doesn't need back to main memory. And that's what'll happen on gfx chips, if they use cache.
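That's basically how any cache works. A toy sketch of the idea (hypothetical Python, nothing to do with a real driver): a small fast store sits in front of a big slow one and evicts the least-recently-used entries when it fills up.

```python
from collections import OrderedDict

# Toy LRU cache: a small fast memory in front of a large slow one.
# Names and sizes are illustrative only.
class TinyCache:
    def __init__(self, capacity_entries):
        self.capacity = capacity_entries
        self.entries = OrderedDict()  # key -> data, oldest first

    def access(self, key, main_memory):
        if key in self.entries:               # hit: serve from fast memory
            self.entries.move_to_end(key)
            return self.entries[key]
        data = main_memory[key]               # miss: go out to slow memory
        self.entries[key] = data
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)  # evict the least-recently-used entry
        return data

main_memory = {f"tex{i}": f"<texture {i}>" for i in range(100)}
cache = TinyCache(capacity_entries=4)
for k in ["tex0", "tex1", "tex0", "tex2", "tex3", "tex4", "tex0"]:
    cache.access(k, main_memory)  # tex1 gets evicted along the way
```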
Originally posted by: BoomAM
You are too stubborn to argue with.
We don't even know if the NV40 will have cache yet.