Originally posted by: OCguy
That sounds more credible than anything Charlie has ever written, but I just want benchmarks at this point.
1.5GB memory standard? Sweet!
Originally posted by: dguy6789
Only 384 bit memory? What happened to the 512 bit + GDDR5?
My guess is it's not needed. At 384-bit, bandwidth is already extreme, so why waste the extra money on a 512-bit board?
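For rough numbers (the data rates below are assumptions, not announced Fermi specs): bandwidth = (bus width / 8) x effective data rate. A 384-bit bus at an assumed 4.0 Gbps GDDR5 gives 48 bytes x 4.0 G/s = 192 GB/s, while a 512-bit bus at the same rate gives 64 x 4.0 = 256 GB/s but needs a much more complex board. For comparison, the 5870's 256-bit bus at 4.8 Gbps works out to about 154 GB/s.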
Originally posted by: Stoneburner
So, Nvidia is releasing one product that'll keep the 5800 series and Larrabee at bay in separate fields. That'd be a truly impressive feat if accomplished.
Originally posted by: toyota
I asked two people at NVIDIA why Fermi is late; NVIDIA's VP of Product Marketing, Ujesh Desai and NVIDIA's VP of GPU Engineering, Jonah Alben. Ujesh responded: because designing GPUs this big is "fucking hard".
Fermi's dual warp scheduler selects two warps, and issues one instruction from each warp to a group of sixteen cores, sixteen load/store units, or four SFUs.
Originally posted by: OCguy
Originally posted by: toyota
I asked two people at NVIDIA why Fermi is late; NVIDIA's VP of Product Marketing, Ujesh Desai and NVIDIA's VP of GPU Engineering, Jonah Alben. Ujesh responded: because designing GPUs this big is "fucking hard".
That really says nothing new about the timing. He says "widespread" Q1, which leaves him wiggle room to have some parts available for Xmas.
The 5870 is barely even available a week after launch, but you can still get one. That is probably what we will see with this card as well.
Oh, and @ the "hard" quote :laugh:
Originally posted by: wlee15
Fermi's dual warp scheduler selects two warps, and issues one instruction from each warp to a group of sixteen cores, sixteen load/store units, or four SFUs.
I'm not sure you can really call this MIMD.
http://www.nvidia.com/content/...itectureWhitepaper.pdf
o Full IEEE 754-2008 32-bit and 64-bit precision
o Full 32-bit integer path with 64-bit extensions
o Memory access instructions to support transition to 64-bit addressing
• NVIDIA GigaThread™ Engine
o 10x faster application context switching
o Concurrent kernel execution
o Out of Order thread block execution
o Dual overlapped memory transfer engines
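The "concurrent kernel execution" bullet above is the streams feature. A minimal sketch of how you'd use it with the standard CUDA runtime API (kernel names and sizes here are made up for illustration, and whether the kernels actually overlap is up to the hardware):

// Launch two independent kernels into separate CUDA streams so the GPU
// may run them concurrently. On pre-Fermi parts these would serialize.
#include <cuda_runtime.h>

__global__ void scaleA(float *a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) a[i] *= 2.0f;
}

__global__ void scaleB(float *b, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) b[i] += 1.0f;
}

int main() {
    const int n = 1 << 20;
    float *a, *b;
    cudaMalloc(&a, n * sizeof(float));
    cudaMalloc(&b, n * sizeof(float));

    cudaStream_t s0, s1;
    cudaStreamCreate(&s0);
    cudaStreamCreate(&s1);

    // Each kernel goes into its own stream; independent streams are what
    // allow concurrent kernel execution on Fermi-class hardware.
    scaleA<<<(n + 255) / 256, 256, 0, s0>>>(a, n);
    scaleB<<<(n + 255) / 256, 256, 0, s1>>>(b, n);

    cudaDeviceSynchronize();
    cudaStreamDestroy(s0);
    cudaStreamDestroy(s1);
    cudaFree(a);
    cudaFree(b);
    return 0;
}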
Hardware Execution
CUDA's hierarchy of threads maps to a hierarchy of processors on the GPU; a GPU executes
one or more kernel grids; a streaming multiprocessor (SM) executes one or more thread blocks;
and CUDA cores and other execution units in the SM execute threads. The SM executes
threads in groups of 32 threads called a warp. While programmers can generally ignore warp
execution for functional correctness and think of programming one thread, they can greatly
improve performance by having threads in a warp execute the same code path and access
memory in nearby addresses.
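In code, that grid -> block -> warp -> thread hierarchy looks like this; a rough sketch, with the kernel name and sizes just picked for illustration:

// Each thread handles one element. Consecutive threads in a warp (groups
// of 32) touch consecutive addresses, so the accesses coalesce, and all
// threads take the same code path (no divergence), which is exactly what
// the excerpt above recommends for performance.
#include <cuda_runtime.h>

__global__ void add(const float *x, const float *y, float *out, int n) {
    // Global thread index: blockIdx/blockDim/threadIdx map the grid of
    // thread blocks onto the data.
    int i = blockIdx.x * blockDim.x + threadIdx.x;

    // Thread i reads element i: threads 0..31 of a warp read 32 adjacent
    // floats in one go.
    if (i < n) out[i] = x[i] + y[i];
}

// Host-side launch: 256 threads per block = 8 warps per block.
// add<<<(n + 255) / 256, 256>>>(x, y, out, n);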
Originally posted by: Kakkoii
Originally posted by: wlee15
Fermi's dual warp scheduler selects two warps, and issues one instruction from each warp to a group of sixteen cores, sixteen load/store units, or four SFUs.
I'm not sure you can really call this MIMD.
http://www.nvidia.com/content/...itectureWhitepaper.pdf
Oh awesome find.
Found this interesting:
o Full IEEE 754-2008 32-bit and 64-bit precision
o Full 32-bit integer path with 64-bit extensions
o Memory access instructions to support transition to 64-bit addressing
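The IEEE 754-2008 bullet is mostly about fused multiply-add (single rounding for a*b+c) on both 32-bit and 64-bit floats. A tiny sketch of what exercises the 64-bit path in CUDA (the kernel name is made up, and this just uses the standard device-side fma()):

// Double-precision AXPY using a fused multiply-add.
#include <cuda_runtime.h>

__global__ void axpy(double a, const double *x, double *y, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = fma(a, x[i], y[i]);  // compiles to a double-precision FMA
}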
Anand wrote:
Fermi will support DirectX 11 and NVIDIA believes it'll be faster than the Radeon HD 5870 in 3D games. With 3 billion transistors, it had better be.
Ujesh is willing to take total blame for GT200. As manager of GeForce at the time, Ujesh admitted that he priced GT200 wrong. NVIDIA looked at RV670 (Radeon HD 3870) and extrapolated from that to predict what RV770's performance would be. Obviously, RV770 caught NVIDIA off guard and GT200 was priced much too high.
I asked Jonah if that meant Fermi would take a while to move down to more mainstream pricepoints. Ujesh stepped in and said that he thought I'd be pleasantly surprised once NVIDIA is ready to announce Fermi configurations and price points. If you were NVIDIA, would you say anything else?
Jonah did step in to clarify. He believes that AMD's strategy simply boils down to targeting a different price point. He believes that the correct answer isn't to target a lower price point first, but rather build big chips efficiently. And build them so that you can scale to different sizes/configurations without having to redo a bunch of stuff. Putting on his marketing hat for a bit, Jonah said that NVIDIA is actively making investments in that direction. Perhaps Fermi will be different and it'll scale down to $199 and $299 price points with little effort? It seems doubtful, but we'll find out next year.
Originally posted by: her209
Jesus, what kind of power supply do you need to run that?
Originally posted by: Kakkoii
I really don't like the idea of our CPUs being our GPUs also. GPUs advance a lot quicker than CPUs do. I don't want to have to upgrade my CPU/GPU every time I want better graphics performance.