
NVIDIA Pascal Thread

It's Pascal's ability to create up to 16 different viewpoints from a single rendered scene that makes single card VR much faster than previous gens.

IIRC it uses 8 viewpoints for a single headset, 4 for each eye: one main view plus 3 extra around the edges. Because you don't see as much detail there, they can lower the detail to reduce the performance hit.
 
It's viewPORTS, not viewPOINTS. It can do 2 viewpoints at once from what we know, not 16.

So what's the difference between a viewpoint and a viewport?

Whatever it's called, they definitely said 16 in the video, with up to 8 being used per headset.
 
Chiphell's paparazzi spotted a GeForce GTX 1070 @ Shenzhen. 😛

 
^ Nice - Love the sleepwear.

Rumors are floating around stating the 1070 is only 15-20% slower than the 1080.

Add some overclocking to the mix and you can have an extremely capable card for 1440p for under $400.
 
So what's the difference between a viewpoint and a viewport?

Whatever it's called, they definitely said 16 in the video, with up to 8 being used per headset.



A viewport gives you only a 2D transform to scale and translate the image plane. A viewpoint is a 4D transform + a perspective projection. You can't render different point of views with a viewport change, you need a different viewpoint transform for that. Apparently Pascal allows you to render two viewpoints (left and right eye I suspect) at the same time without transforming and/or submitting geometry twice.
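The distinction can be sketched in a few lines of plain Python (made-up numbers, no real graphics API): a viewport change only scales and translates the already-projected 2D image, while a new viewpoint re-projects the 3D point from a different camera position, which is why two eyes need two viewpoint transforms.

```python
# Sketch of viewport vs viewpoint (hypothetical values, no real API).

def apply_viewport(x, y, scale, offset_x, offset_y):
    """Viewport: a 2D scale + translate of the already-projected image plane."""
    return (x * scale + offset_x, y * scale + offset_y)

def project_from_viewpoint(point, eye, focal=1.0):
    """Viewpoint: move the point into the eye's space, then perspective-divide.
    A full implementation would use a 4x4 view matrix; this keeps only the
    translation + projection to show why the result differs per eye."""
    px, py, pz = (point[i] - eye[i] for i in range(3))
    return (focal * px / pz, focal * py / pz)

point = (0.0, 0.0, 5.0)  # a point 5 units in front of the viewer
left  = project_from_viewpoint(point, eye=(-0.03, 0.0, 0.0))  # left eye
right = project_from_viewpoint(point, eye=( 0.03, 0.0, 0.0))  # right eye

# The two eyes see the point at different screen positions (stereo parallax);
# no scale/translate of one image can turn it into the other in general.
print(left, right)
```

The takeaway matches the post above: a viewport is cheap post-projection bookkeeping, while a viewpoint requires re-running the projection, which is the work Pascal reportedly avoids duplicating for the second eye.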
 
^ Nice - Love the sleepwear.

Rumors are floating around stating the 1070 is only 15-20% slower than the 1080.

Add some overclocking to the mix, you can have an extremely capable card for 1440p for under $400.

No point to buy the 1080 then if 1070 is that close. I feel like the 1070 is going to end up bottlenecked in some way though, due to less memory bandwidth. Hopefully not.
 
^ Nice - Love the sleepwear.

Rumors are floating around stating the 1070 is only 15-20% slower than the 1080.

Add some overclocking to the mix, you can have an extremely capable card for 1440p for under $400.

It can't be 15-20% slower, because the TFLOPS difference is ~40% and the memory bandwidth difference is 25%.
 
Just look at GTX 980 vs 970...

5 TFLOPS vs 3.9 TFLOPS: the GTX 980 has 28% more TFLOPS and also 14% more memory bandwidth.
In real life the GTX 980 is 18-20% faster than the GTX 970 at 1440p.

1070 vs 1080: the TFLOPS difference is 10 points bigger than between the 970 and 980, and the memory bandwidth difference is 11 points bigger.
So the 1080 should be another 5-10% faster on top of that.
The 1080 should be 25-30% faster than the 1070.
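As a sanity check, the percentage gaps quoted above follow from the raw numbers. Note the assumptions: the 1070/1080 figures are the rumored specs discussed in this thread, and the 970's bandwidth is taken as the ~196 GB/s effective figure that the 14% claim implies.

```python
# Spec numbers as used in this thread (1070/1080 figures are rumors, not
# confirmed; the 970 bandwidth is the ~196 GB/s effective figure implied
# by the 14% claim, not the nominal 224 GB/s).
tflops    = {"970": 3.9, "980": 5.0, "1070": 6.5, "1080": 9.0}
bandwidth = {"970": 196, "980": 224, "1070": 256, "1080": 320}  # GB/s

def pct_more(a, b):
    """How much larger a is than b, in percent."""
    return (a / b - 1) * 100

print(f"980 vs 970 TFLOPS:      +{pct_more(tflops['980'], tflops['970']):.0f}%")
print(f"980 vs 970 bandwidth:   +{pct_more(bandwidth['980'], bandwidth['970']):.0f}%")
print(f"1080 vs 1070 TFLOPS:    +{pct_more(tflops['1080'], tflops['1070']):.0f}%")
print(f"1080 vs 1070 bandwidth: +{pct_more(bandwidth['1080'], bandwidth['1070']):.0f}%")
```

This reproduces the 28%/14% gaps for the 980 vs 970 and the wider ~38%/25% gaps for the 1080 vs 1070 that the argument rests on.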

Edit: interesting video on how fast the 1070 will be:
https://www.youtube.com/watch?v=G0nb1QVBt2w
 
9 TFLOPS -> 6.5 TFLOPS is ~27.8% slower.
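The ~28% and ~40% figures floating around this thread are both arithmetically right; they just take opposite baselines for the same 2.5 TFLOPS gap:

```python
# Same gap, two baselines: "slower than the 1080" vs "faster than the 1070".
slower = (9.0 - 6.5) / 9.0 * 100   # 1070 relative to the 1080
faster = (9.0 - 6.5) / 6.5 * 100   # 1080 relative to the 1070
print(f"{slower:.1f}% slower, {faster:.1f}% faster")
```

So "the 1070 is ~27.8% slower" and "the 1080 is ~38.5% faster" describe the same two numbers.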

For what it's worth, VideoCardz claims to have published the final specs for the 1080, and it lists "only" 8.2 TFlops. If that's correct, and the 6.5 TFlop number is correct for the 1070, then the cards are not as far apart as originally thought (in that metric, at least).
 
For what it's worth, VideoCardz claims to have published the final specs for the 1080, and it lists "only" 8.2 TFlops. If that's correct, and the 6.5 TFlop number is correct for the 1070, then the cards are not as far apart as originally thought (in that metric, at least).

It just depends on what clocks you use. FLOPS = frequency * CUDA cores * 2, so 8.2 TFLOPS is based on the 1.6 GHz base clock. That's the way Nvidia has traditionally done it in the past. If 6.5 is also at base clock, that could be 1920 CC @ 1.7 GHz, or maybe 2240 CC @ 1.45 GHz.

Given that the 6.5 number was given at the same time as the 9, which seems to be FLOPS at boost clock, it's logical to me that the 6.5 TFLOPS number is also at boost clock. That fits a 1920 CC part pretty well, and makes the 1070 75% of a full GP104, at least in CUDA cores.
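The formula in the post above (FLOPS = clock x CUDA cores x 2, from fused multiply-add doing 2 FLOPs per cycle) makes the candidate configurations easy to check. The core-count/clock combinations below are the poster's guesses, not confirmed specs:

```python
def peak_tflops(cuda_cores, clock_ghz):
    """Peak single-precision TFLOPS: cores * clock * 2 (FMA = 2 FLOPs/cycle)."""
    return cuda_cores * clock_ghz * 2 / 1000

# Full GP104 at the 1.6 GHz base clock -> the 8.2 TFLOPS figure
print(peak_tflops(2560, 1.60))   # ~8.19
# Hypothetical 1070 configs that both land near 6.5 TFLOPS:
print(peak_tflops(1920, 1.70))   # ~6.53 (75% of the cores, higher clock)
print(peak_tflops(2240, 1.45))   # ~6.50 (more cores, lower clock)
```

Both cut-down configurations land within rounding distance of 6.5 TFLOPS, which is why the TFLOPS number alone can't settle the core count.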
 
It just depends what clocks you use. FLOPS = Frequency * CC * 2, so 8.2TFlops is based on the 1.6GHz base clock. That's the way nVidia has traditionally done it in the past. If 6.5 is also base clocks, that could be 1920CC @ 1.7GHz, or maybe 2240 @ 1.45GHz.

Given that the 6.5 number was also given at the same time as the 9 though which would seem to be FLOPS at boost clocks, it's logical to me at least that the 6.5 TFLOPS number is also at boost clocks. That seems to fit pretty well a 1920CC part, and makes the 1070 75% of a full GP104 at least in CUDA cores.

:thumbsup: Yep, very good point.
 
It just depends what clocks you use. FLOPS = Frequency * CC * 2, so 8.2TFlops is based on the 1.6GHz base clock. That's the way nVidia has traditionally done it in the past. If 6.5 is also base clocks, that could be 1920CC @ 1.7GHz, or maybe 2240 @ 1.45GHz.

Given that the 6.5 number was also given at the same time as the 9 though which would seem to be FLOPS at boost clocks, it's logical to me at least that the 6.5 TFLOPS number is also at boost clocks. That seems to fit pretty well a 1920CC part, and makes the 1070 75% of a full GP104 at least in CUDA cores.

I was under the impression that Nvidia, Wikipedia, and everyone else calculate FLOPS (whether TFLOPS or GFLOPS) based on the listed maximum boost clock of the reference design.
 
Given that the 6.5 number was also given at the same time as the 9 though which would seem to be FLOPS at boost clocks, it's logical to me at least that the 6.5 TFLOPS number is also at boost clocks. That seems to fit pretty well a 1920CC part, and makes the 1070 75% of a full GP104 at least in CUDA cores.

Sounds like good reasoning to me. 1920 CCs @ 1.7 GHz boost clock would be a nice card. Faster than a GTX 980 for less money, with support for newer features: a pretty good deal. The only bummer would be if the number of ROPs was cut by an equal amount; guess we'll have to wait and see.
 
A viewport gives you only a 2D transform to scale and translate the image plane. A viewpoint is a 4D transform + a perspective projection. You can't render different point of views with a viewport change, you need a different viewpoint transform for that. Apparently Pascal allows you to render two viewpoints (left and right eye I suspect) at the same time without transforming and/or submitting geometry twice.

What they were saying is that it renders one scene in 3D and has it viewed from multiple angles. The program can define those angles, up to 16: one each for the main 2D image for each eye, and several more for the outer image.
 
I'm guessing Ryan Smith will review the new cards? If so, I hope he's fair and objective. AnandTech's review quality has gone down in the last few years because they have been very lenient and forgiving. Not bashing AnandTech: they still have the most comprehensive and in-depth analysis, but I wish they were stricter and less forgiving. And please include 1080p benchmarks as well. I'm tired of people saying the target market for these cards uses higher than 1080p. Whether there's any truth to that or not, 1080p is, and will remain, the most popular screen resolution for the next 2-3 years.
 