Bottom Line Telecom has 141 3930Ks in stock


LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
So even being x2 faster would mean a 2% performance drop.

Your point?

[Image: Jean-Luc Picard facepalm]


It doesn't work like that.


The Picard Facepalm communicates a condescending attitude; it is an unacceptable form of baiting and flaming.

Reading below, it accomplishes exactly as much.

Administrator Idontcare
 
Last edited by a moderator:

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
No...

I would love for it to have it, too, but it doesn't. Motherboard manufacturers can claim PCIe 3.0 support or being PCIe 3.0 ready, but to get CPU support you'll need Ivy Bridge-E. Again, Sandy Bridge-E CPUs WILL NOT support PCIe 3.0.

Among the reasons being given are that Intel has no products to test and validate it with; another is supposed engineering issues. Whatever the case, it's not good news, because Ivy Bridge CPUs will have PCIe 3.0 support and SB-E is supposed to be better when it comes to longevity.

Thanks for clearing that up. Information on this is a little hazy. :)

Looks like AMD's new platform will not support 3.0 at all. Makes you wonder how much it is really needed in the next year. It remains to be seen if 3x16 is bandwidth limited with the top cards in 2012, but likely not.
 

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
And how many enthusiasts will be stuck on one of these once it is an issue??? Not many.

The HD 7970 will be here in Q1 2012...

That's not to say it'll be an immediate issue, but for future-proofing it's good to have. Another thing to take into account is that PCIe 3.0 x4 = PCIe 2.0 x8.
 

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
Thanks for clearing that up. Information on this is a little hazy. :)

Looks like AMD's new platform will not support 3.0 at all. Makes you wonder how much it is really needed in the next year. It remains to be seen if 3x16 is bandwidth limited with the top cards in 2012, but likely not.

No problem. :)

Now that I think about it, the most important thing I can see regarding PCIe 3.0 is the bandwidth it enables for other devices. 16 lanes of PCIe 3.0 = 32 lanes of PCIe 2.0. Right now for AMD GPUs the difference between 2.0 x8 and x16 only starts to be noticed at 2560x1600 with a Radeon HD 6990, and none of the new GPUs will be close to matching its performance, so it won't be a problem.

I am seeing the potential for true Tri-CF/SLI on a Performance motherboard and for a reasonable cost if the HD 7800 series or its NVIDIA equivalent support PCIe 3.0. x8/x4/x4, anyone? That would be the equivalent of 2.0 x16/x8/x8...

In the future we could also be talking about new RAID controllers and what not using it.
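
(As a rough sketch of the arithmetic behind that equivalence: the constants below are the published per-direction line rates for PCIe 2.0 and 3.0, and the little Python script is only illustrative, not something from the thread.)

    # Per-direction, per-lane line rates (published figures):
    #   PCIe 2.0: 5 GT/s with 8b/10b coding    -> ~500 MB/s per lane
    #   PCIe 3.0: 8 GT/s with 128b/130b coding -> ~985 MB/s per lane
    GEN2_MBPS = 5_000 * 8 / 10 / 8      # = 500.0
    GEN3_MBPS = 8_000 * 128 / 130 / 8   # ~= 984.6

    def link_mbps(per_lane_mbps, lanes):
        """Raw one-way bandwidth of a link with the given lane count."""
        return per_lane_mbps * lanes

    print(round(link_mbps(GEN3_MBPS, 16)))  # ~15754 MB/s -- PCIe 3.0 x16
    print(round(link_mbps(GEN2_MBPS, 32)))  # 16000 MB/s  -- PCIe 2.0 x32, roughly the same

    # And the 3.0 x8/x4/x4 tri-GPU split lines up with 2.0 x16/x8/x8:
    for lanes in (8, 4, 4):
        print(lanes, round(link_mbps(GEN3_MBPS, lanes)))  # ~7877, ~3938, ~3938 MB/s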
 

grkM3

Golden Member
Jul 29, 2011
1,407
0
0
the 2011 platform will have plenty of bandwidth to handle 2 next-gen cards even at PCIe 2.0

if two slots get x16 that means they each have all the bandwidth that x16 can handle. they don't get bottlenecked by the other lanes; they are their own, so to say, and will be plenty for the next-gen cards.

what you are talking about will bottleneck the cpu way before it bottlenecks the lanes.

it's up to the board maker to configure their lanes, but if a board has 3 x16 slots and quad-channel ram you can bet it will run anything you throw at it for the next 3 years.

yeah, 3.0 doubles the bandwidth, but the cpu will choke up if there is a card that can use that much bandwidth.

that's like running 8 gtx 580s on a sandy now vs 3 years from now running 3 next-next-gen cards.

I just got back from the future and brought back 3 cards from the year 2016, and they are pcie 3.0. do you honestly think my pcie 3.0 2500k will run those cards to their potential? it's like running a gtx 580 now on a p4

when we have monster cards that use pcie 3.0 we will be well into the haswell die shrink at 14nm
 
Last edited:

Ajay

Lifer
Jan 8, 2001
16,094
8,114
136
They're delayed and around 1600 dollars :biggrin:

I know... I'm waiting on the 2687W.

What mobo are you going with? EVGA's Classified SR-3 is due out by the end of the year (IIRC); it supports dual CPUs and PCIe 3.0 (Patsburg-T).
 

greenhawk

Platinum Member
Feb 23, 2011
2,007
1
71
Site sounded sketchy, but the reseller ratings were pretty solid (assuming they are real). Still sounds a little fishy, like they have the stock and will sell it now, but ship after launch.

Site is good. Even one of the few I have found that will ship outside the States. My current keyboard and SSD were from them. The SSD worked out 40% cheaper than local, and the keyboard was released 2 months before it was out here (same cost in the end for it, though).

As to the "have stock, will sell, but not ship": I suspect they are trying to abide by Intel so they do not get blacklisted by the company (i.e., selling before it is official). Just look at it as pre-ordering :)
 

Imouto

Golden Member
Jul 6, 2011
1,241
2
81
It doesn't work like that.

Cuz you said so? If you swallowed the whole PCIe 3.0 propaganda, that isn't my problem. Take a look at the review I posted and tell me why it's different instead of posting some 4chan stuff.

The supposed need for a PCIe 2.0 x16 link by current graphics cards is bullshit. Not even a 590 or 6990 can stress the current PCIe slots; do you get it already? And ppl like you are willing to go for PCIe 3.0 without a single reason aside from "it's new".

Gimme a break.
 

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
Cuz you said so? If you swallowed the whole PCIe 3.0 propaganda, that isn't my problem. Take a look at the review I posted and tell me why it's different instead of posting some 4chan stuff.

The supposed need for a PCIe 2.0 x16 link by current graphics cards is bullshit. Not even a 590 or 6990 can stress the current PCIe slots; do you get it already? And ppl like you are willing to go for PCIe 3.0 without a single reason aside from "it's new".

Gimme a break.

No, it doesn't work like that, because you don't understand the science behind it. You can't just say "oh, we only lost 2% performance moving from PCIe 2.0 x16 to x8 with a high-end graphics card, so if a new one comes out and it's ~100% faster it'll only be 4%". That's not how bandwidth works. If the card can't be fed enough data you'll see much bigger performance penalties.

Easy way to illustrate it: take a storage drive that can only do about 30MB/s and transfer data to it over a USB 2.0 bus. At 30MB/s it can hit some overhead, but it'd only be bottlenecked by 5% or less. Now give that drive up to 2x higher theoretical throughput on the same bus. Does that mean the penalty would only be up to 10%? Of course not, because now the drive can't be fed enough data and falls short of its potential by a much larger margin. The same thing happens with graphics cards: if the bus can't feed the card enough data, the card won't perform anywhere near its optimum.

That's not to say we'll see meaningful performance differences with the new 28nm GPUs comparing PCIe 3.0 x16 and 2.0 x16 (we probably won't, because none of those will be as fast as an HD 6990), but your reasoning itself is wrong.

And there are definite advantages to PCIe 3.0, which I mentioned earlier. Each lane has twice the bandwidth of a PCIe 2.0 lane, and I mentioned in my previous comment the benefits that has.
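
(A minimal toy model of that point, assuming a hypothetical effective bus rate of ~29 MB/s after protocol overhead; the exact numbers are made up for illustration, but they show why the penalty grows much faster than the device's own speed.)

    # Toy model: effective throughput is capped by the slower of the device's
    # own capability and the bus feeding it, so the percentage penalty does
    # not simply scale with how fast the device is.
    def penalty_pct(device_mbps, bus_mbps):
        """Percent of the device's own capability lost to the bus cap."""
        effective = min(device_mbps, bus_mbps)
        return (1.0 - effective / device_mbps) * 100.0

    BUS_MBPS = 29.0  # hypothetical effective USB 2.0 rate after overhead

    print(round(penalty_pct(30.0, BUS_MBPS), 1))  # 3.3  -- the 30 MB/s drive barely notices
    print(round(penalty_pct(60.0, BUS_MBPS), 1))  # 51.7 -- a 2x faster drive loses half, not just 2 x 3.3%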
 

Imouto

Golden Member
Jul 6, 2011
1,241
2
81
No, it doesn't work like that, because you don't understand the science behind it. You can't just say "oh, we only lost 2% performance moving from PCIe 2.0 x16 to x8 with a high-end graphics card, so if a new one comes out and it's ~100% faster it'll only be 4%". That's not how bandwidth works. If the card can't be fed enough data you'll see much bigger performance penalties.

Easy way to illustrate it: take a storage drive that can only do about 30MB/s and transfer data to it over a USB 2.0 bus. At 30MB/s it can hit some overhead, but it'd only be bottlenecked by 5% or less. Now give that drive up to 2x higher theoretical throughput on the same bus. Does that mean the penalty would only be up to 10%? Of course not, because now the drive can't be fed enough data and falls short of its potential by a much larger margin. The same thing happens with graphics cards: if the bus can't feed the card enough data, the card won't perform anywhere near its optimum.

That's not to say we'll see meaningful performance differences with the new 28nm GPUs comparing PCIe 3.0 x16 and 2.0 x16 (we probably won't, because none of those will be as fast as an HD 6990), but your reasoning itself is wrong.

And there are definite advantages to PCIe 3.0, which I mentioned earlier. Each lane has twice the bandwidth of a PCIe 2.0 lane, and I mentioned in my previous comment the benefits that has.

As the review I posted shows, a GTX 480 dealing with an x8 link has only an overall 2% performance drop, and 1% at 2560x1600. And it doesn't stop there: at x4 the performance drop is 7% and 5%, respectively.

[Chart from the linked review: GTX 480 performance vs. PCIe link width]

So a GTX 480 (a really powerful card even today) does really well with just 1/4 of the available PCIe 2.0 bandwidth. That's AGP 8x, something from 10 years ago.

Again, what's the benefit of doubling each lane's bandwidth? Doesn't it make more sense to build more flexible and powerful chipsets and to enhance QPI and HyperTransport?

Longevity is not a problem for SB-E, since PCIe is backward and forward compatible. By the time there's a need for more bandwidth than PCIe 2.0 offers, SB-E will be as old as, or older than, a Pentium III or AGP is now.
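
(For what it's worth, the "that's AGP 8x" comparison roughly checks out on raw per-direction numbers; a quick sketch using the standard published rates, nothing more.)

    # AGP 8x: ~66.67 MHz base clock, 8 transfers per clock, 32-bit (4-byte) bus
    AGP_8X_MBPS = 66.67 * 8 * 4                # ~2,133 MB/s
    # PCIe 2.0: 5 GT/s per lane with 8b/10b coding -> ~500 MB/s per lane per direction
    PCIE2_X4_MBPS = (5_000 * 8 / 10 / 8) * 4   # 2,000 MB/s

    print(round(AGP_8X_MBPS))    # 2133
    print(round(PCIE2_X4_MBPS))  # 2000 -- a PCIe 2.0 x4 link is in the same ballpark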
 

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
As the review I posted shows, a GTX 480 dealing with an x8 link has only an overall 2% performance drop, and 1% at 2560x1600. And it doesn't stop there: at x4 the performance drop is 7% and 5%, respectively.

[Chart from the linked review: GTX 480 performance vs. PCIe link width]

So a GTX 480 (a really powerful card even today) does really well with just 1/4 of the available PCIe 2.0 bandwidth. That's AGP 8x, something from 10 years ago.

Again, what's the benefit of doubling each lane's bandwidth? Doesn't it make more sense to build more flexible and powerful chipsets and to enhance QPI and HyperTransport?

Longevity is not a problem for SB-E, since PCIe is backward and forward compatible. By the time there's a need for more bandwidth than PCIe 2.0 offers, SB-E will be as old as, or older than, a Pentium III or AGP is now.

You still don't get it. Whatever, forget it. *sigh*
 

grkM3

Golden Member
Jul 29, 2011
1,407
0
0
if you put 3 gtx 580s on one card it still would not bottleneck a 16x slot.
 

RampantAndroid

Diamond Member
Jun 27, 2004
6,591
3
81
Right now for AMD GPUs the difference between 2.0 x8 and x16 only starts to be noticed at 2560x1600 with a Radeon HD 6990, and none of the new GPUs will be close to matching its performance, so it won't be a problem.

If no new top-of-the-line GPUs will match the 6990, then why the hell are you even talking about PCIe 3? As in years past, I can bet that nVidia's top-of-the-line SINGLE GPU offering will be equal to a 6990. Maybe better.

Better yet, how many people own a 6990 (or a 590) versus those doing SLI or Crossfire? People doing SLI or CF will not see any real advantage to PCIe 3. And if mobos support it, but not CPUs... well, people have an upgrade path.
 

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
If no new top-of-the-line GPUs will match the 6990, then why the hell are you even talking about PCIe 3? As in years past, I can bet that nVidia's top-of-the-line SINGLE GPU offering will be equal to a 6990. Maybe better.

Better yet, how many people own a 6990 (or a 590) versus those doing SLI or Crossfire? People doing SLI or CF will not see any real advantage to PCIe 3. And if mobos support it, but not CPUs... well, people have an upgrade path.

I swear, people here NEVER bother to read.

I am seeing the potential for true Tri-CF/SLI on a Performance motherboard and for a reasonable cost if the HD 7800 series or its NVIDIA equivalent support PCIe 3.0. x8/x4/x4, anyone? That would be the equivalent of 2.0 x16/x8/x8...
And the new cards won't even be close to Radeon HD 6990 performance, whether it's from AMD or NVIDIA, so get it out of your mind. The new Enthusiast cards will be a 50-65% improvement over the current ones, and the HD 6990 is an 80-95% improvement over the current single GPU (depending on scaling, compared to the HD 6970).

And unlike what you're saying, history has shown us that the 100% improvements you've touted have never existed. Look at the Radeon HD 4890 vs. the HD 5870 in most games. The jump there was bigger than what we'll see now, and even then it was 60-75% in the vast majority of scenarios.


Stop flaming. There is no need to be this vitriolic:
I swear, people here NEVER bother to read.
Administrator Idontcare
 
Last edited by a moderator:

Imouto

Golden Member
Jul 6, 2011
1,241
2
81
Again, read what? Maybe you think you posted something frigging clever, but no one else is noticing it. Can you explain it again for us, the peasants?

Or maybe you're mistaken about what PCIe 3.0 means and think it will magically deliver a lot of bandwidth, when the bottleneck has been a chipset issue for years, not the PCIe slots. NF200, anyone? X58?
 

LOL_Wut_Axel

Diamond Member
Mar 26, 2011
4,310
8
81
Again, read what? Maybe you think you posted something frigging clever, but no one else is noticing it. Can you explain it again for us, the peasants?

Or maybe you're mistaken about what PCIe 3.0 means and think it will magically deliver a lot of bandwidth, when the bottleneck has been a chipset issue for years, not the PCIe slots. NF200, anyone? X58?

What's your point?

PCIe 3.0 with 16 lanes would be equivalent to PCIe 2.0 with 32 lanes AND cheaper.

And I already told you what benefits that would bring for high-end cards. If you still don't get it now, you won't even if I keep repeating it, even though it's a simple concept.
 

RampantAndroid

Diamond Member
Jun 27, 2004
6,591
3
81
I swear, people here NEVER bother to read.

And the new cards won't even be close to Radeon HD 6990 performance, whether it's from AMD or NVIDIA, so get it out of your mind. The new Enthusiast cards will be a 50-65% improvement over the current ones, and the HD 6990 is an 80-95% improvement over the current single GPU (depending on scaling, compared to the HD 6970).

And unlike what you're saying, history has shown us that the 100% improvements you've touted have never existed. Look at the Radeon HD 4890 vs. the HD 5870 in most games. The jump there was bigger than what we'll see now, and even then it was 60-75% in the vast majority of scenarios.

Dude, you realize that right now on 1366, if you do SLI with TWO CARDS they each get 16 lanes. Of PCIe 2.0. And if I go with what you're saying about next gen, not even the 6990 can push us to levels needing PCIe 3. So "true" tri-SLI and CF are already there; the bandwidth demands are already met. Is PCIe 3.0 a good thing? Sure. Is it needed now, or even within the next... 2 years? I don't think so.

Stop telling us to read. You've made no points that make sense.

Also, I never said 100%. Go look at the 5970 (same generation as the GTX 480) compared to the 580. The 580 pretty well holds even.

http://www.guru3d.com/article/radeon-hd-6990-review/

Gee, I see a 30% improvement over a 580. A "680" would not surprise me if it came toe to toe with a 6990.
 
Last edited:

RampantAndroid

Diamond Member
Jun 27, 2004
6,591
3
81
What's your point?

PCIe 3.0 with 16 lanes would be equivalent to PCIe 2.0 with 32 lanes AND cheaper.

And I already told you what benefits that would bring for high-end cards. If you still don't get it now, you won't even if I keep repeating it, even though it's a simple concept.

It brings bandwidth that no cards will be able to use any time soon. There's no speed increase. It's just bandwidth. And bandwidth, as you already pointed out, is only beneficial if you need it.
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Yep. Looks like my suspicions were right.

Just so you know, a Radeon HD 7970 and its NVIDIA equivalent will be significantly faster than a GTX 480, and those cards will consequently be pushing a lot more memory bandwidth.


So this is how it is with you. Ya spent the better part of a year hyping BD, which when it comes to SLI sucks compared to SB. So are you suggesting that a workstation/server chip needs PCIe 3.0 today because the next generation is supposed to be 50% faster than the present generation? SLI on SB does very well with PCIe 2.0 x8. There are a few SLI tests and reviews, and that's when SB spreads its wings. We're supposed to listen to a guy that was so wrong about BD? All we need to do is look at the SLI reviews for these all-powerful GPUs coming from NV and worry our present setups won't be able to handle them. That's just nuts. PCIe x16 will do nicely for some time to come.
 

3DVagabond

Lifer
Aug 10, 2009
11,951
204
106
So this is how it is with you. Ya spent the better part of a year hyping BD, which when it comes to SLI sucks compared to SB. So are you suggesting that a workstation/server chip needs PCIe 3.0 today because the next generation is supposed to be 50% faster than the present generation? SLI on SB does very well with PCIe 2.0 x8. There are a few SLI tests and reviews, and that's when SB spreads its wings. We're supposed to listen to a guy that was so wrong about BD? All we need to do is look at the SLI reviews for these all-powerful GPUs coming from NV and worry our present setups won't be able to handle them. That's just nuts. PCIe x16 will do nicely for some time to come.

His position on BD is irrelevant. You are trying to discredit him rather than his position. The point that half as many PCIe lanes will be just as fast and potentially cheaper is a valid one. Maybe we can make NF200 chips redundant as well, saving costs. Although I'll bet nVidia will still make you buy one if you want to be tri- or quad-SLI compliant with only 16 lanes. :shrug: