Intel's new branding mess...


taltamir

Lifer
Mar 21, 2004
13,576
6
76
Originally posted by: Eeqmcsq
It's still a bit mind-boggling to me right now. Unless the final numbering scheme makes it obvious, trying to figure out the number of cores at a glance will also be annoying. That's one part of AMD's naming scheme I like: the X for the number of cores. It's straightforward and easily scales with future core counts.

Intel also said that the i7 name will make sense once the entire family is released...
All I'm seeing is that it's used for two DIFFERENT architectures, and that i5 and i3 have different features disabled compared to i7 (one of them, that is)... oh, and an i3 or i5 can be faster than an i7 if you get a higher-clocked one. Ugh, what a mess. But to get back to "will make sense": I still have no idea what the i or the 7 stand for.
 

VirtualLarry

No Lifer
Aug 25, 2001
56,352
10,050
126
Originally posted by: taltamir
Intel also said that the i7 name will make sense once the entire family is released...
All I'm seeing is that it's used for two DIFFERENT architectures, and that i5 and i3 have different features disabled compared to i7 (one of them, that is)... oh, and an i3 or i5 can be faster than an i7 if you get a higher-clocked one. Ugh, what a mess. But to get back to "will make sense": I still have no idea what the i or the 7 stand for.

Intel should have called ALL Nehalem derivatives "Core i7" (like Windows 7), and then just differentiated with model numbers. Possibly with a prefix or postfix letter to indicate socket type.
 

ilkhan

Golden Member
Jul 21, 2006
1,117
1
0
I'm betting on i9, as it has more than 8 threads and thread count seems to be the distinguishing feature.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Except the thread count doesn't match... AND they have the same number of threads across differently numbered i#s.
 

ilkhan

Golden Member
Jul 21, 2006
1,117
1
0
Bah. My brain is fried. I just want a SATA 6Gb/s SSD + 8GB RAM + Arrandale 12-13" laptop. Forget the rest. I seriously have no interest in s1156 as a desktop platform (except potentially for an ultra-low-power WHS).

I'll have a Gulftown or Bloomfield + G300 or Larrabee for gaming; the laptop will be my daily machine.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
I have no interest in 1366 as a platform... a useless 3-channel RAM controller, and a useless X58 chip burning power, taking up space, and giving you nothing but two x16 slots, at the cost of added latency compared to a single x16 slot integrated directly into the CPU...
Let me just say that even if they cost the same, I'd consider s1156 the superior platform.
 

imported_Shaq

Senior member
Sep 24, 2004
731
0
0
We will see when it comes out. If they are that good, why would they cancel the lower i7 SKUs? It must give them a larger profit margin, or they would have left everything on 1366. Their marketing department can spin it however they want, though. And useless today doesn't mean useless forever.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Originally posted by: Shaq
We will see when it comes out. If they are that good, why would they cancel the lower i7 SKUs? It must give them a larger profit margin, or they would have left everything on 1366. Their marketing department can spin it however they want, though. And useless today doesn't mean useless forever.

You answered your own question right there.
 

ilkhan

Golden Member
Jul 21, 2006
1,117
1
0
Originally posted by: taltamir
I have no interest in 1366 as a platform... a useless 3-channel RAM controller, and a useless X58 chip burning power, taking up space, and giving you nothing but two x16 slots, at the cost of added latency compared to a single x16 slot integrated directly into the CPU...
Let me just say that even if they cost the same, I'd consider s1156 the superior platform.
I didn't say that it was worthless (although the limit on PCI-E bandwidth is a heavy limitation going forward), just that it doesn't seem helpful for what I need.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
although the limit on PCI-E bandwidth is a heavy limitation going forward
PCI-E bandwidth is identical for a single-card setup, and in fact 1156 has the advantage of a direct-to-CPU connection!

Only if you populate TWO x16 v2 slots do you lose out on bandwidth, so a "heavy limitation" it is not.

Let me say that again: as long as you only use ONE card, 1156 is going to have FASTER PCI-E (though not significantly faster, since the latency difference isn't that big a deal).
For two slots, X58 is faster.
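Rough numbers behind that, if anyone wants them. This is just a back-of-envelope sketch assuming the usual ~500 MB/s per PCIe 2.0 lane per direction, so treat the figures as approximate:

```python
# Back-of-envelope PCIe 2.0 slot bandwidth, ~500 MB/s per lane per direction
# (after 8b/10b encoding). Figures are approximate.
PER_LANE_GBS = 0.5

def slot_bandwidth(lanes):
    """Per-direction bandwidth (GB/s) of a slot with the given electrical width."""
    return lanes * PER_LANE_GBS

# One GPU: both platforms give it a full x16 link.
print("Single card, either platform:", slot_bandwidth(16), "GB/s")  # 8.0

# Two GPUs: Lynnfield/P55 splits its 16 CPU lanes into x8 + x8,
# while X58 can feed two full x16 slots.
print("Two cards on P55, per card:  ", slot_bandwidth(8), "GB/s")   # 4.0
print("Two cards on X58, per card:  ", slot_bandwidth(16), "GB/s")  # 8.0
```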
 

ilkhan

Golden Member
Jul 21, 2006
1,117
1
0
While it applies to the GPUs as well, the chipsets only provide 6 or 8 PCI-E x1 lanes, and SATA 6Gb/s and USB 3.0 each offer more bandwidth than a single v2.0 lane supports.
DMI itself will likely bottleneck future s1156 systems. QPI offers quite a bit more bandwidth to the NB (and those lanes) than DMI offers to the PCH (and its lanes). While not a significant factor for gaming, it's a future concern.

But yeah, single GPUs will likely run slightly better on s1156 than on s1366, clock speeds and all else being equal (and in fact, comparisons with X58 using two memory sticks will interest me greatly).
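For reference, these are the rough per-direction figures I'm working from. They're ballpark numbers (the DMI and QPI rates vary by part), not anything official:

```python
# Approximate per-direction bandwidths of the links being compared here.
# Ballpark figures only; real-world throughput is lower after overhead.
links_gbs = {
    "PCIe 2.0 x1 lane":  0.5,   # ~500 MB/s after 8b/10b encoding
    "SATA 6Gb/s port":   0.6,   # ~600 MB/s usable of the 6 Gb/s line rate
    "USB 3.0 port":      0.5,   # ~500 MB/s usable of the 5 Gb/s line rate
    "DMI (CPU <-> PCH)": 1.0,   # roughly a PCIe x4-class link
    "QPI (CPU <-> X58)": 9.6,   # 4.8 GT/s link; 6.4 GT/s parts reach ~12.8
}

for name, gbs in links_gbs.items():
    print(f"{name:18} ~{gbs:.1f} GB/s per direction")
```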
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
X58: CPU <QPI> X58 "northbridge" (contains ONLY: two video-card connections, one CPU link, one southbridge link) <DMI> southbridge
P55: CPU (with the northbridge built into the die) <DMI> southbridge

The limitations you list are not alleviated by the X58. If anything, the X58 is worse off for having that extra "jump" between the southbridge and the CPU.
They are certainly going to be a problem, and I foresee an X68 and a P65 to deal with them, but the X58 doesn't bring anything to the table other than more bandwidth for dual video-card SLOTS.
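Same picture spelled out as a little adjacency list, just to make the link count explicit. The labels are my own shorthand, not Intel's:

```python
# The two topologies above as simple adjacency lists.
# Labels are my own shorthand; "PCIe x16" means the GPU-facing lanes.
x58_platform = {
    "CPU (Bloomfield)":  ["QPI -> X58"],
    "X58 northbridge":   ["QPI -> CPU",
                          "PCIe 2.0 x16 (GPU slot 1)",
                          "PCIe 2.0 x16 (GPU slot 2)",
                          "DMI -> ICH10 southbridge"],
    "ICH10 southbridge": ["DMI -> X58", "SATA/USB/etc."],
}

p55_platform = {
    "CPU (Lynnfield)": ["PCIe 2.0 x16 (GPU, on-die)", "DMI -> P55"],
    "P55 PCH":         ["DMI -> CPU", "SATA/USB/etc."],
}

for name, platform in (("X58", x58_platform), ("P55", p55_platform)):
    print(name)
    for node, links in platform.items():
        print(f"  {node}: " + ", ".join(links))
```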
 

ilkhan

Golden Member
Jul 21, 2006
1,117
1
0
You're looking at connections, which is fine. I'm looking at what they can do without changing the *CPU socket*. With the northbridge built in, they have to change the socket to add pins. The extra jump allows them to change the NB without changing the CPU socket.
As far as I know (and this could be totally bogus), they're limited by the socket pin count as to what they can add to DMI (which is basically a set of PCI-E lanes, AFAIK) going to the P55 southbridge.
QPI has bandwidth to spare and could support a higher PCI-E lane count on X68 without maxing out the current QPI connection. It'd put non-GPU lanes on the northbridge, but it should be theoretically possible.
On the other hand, they may be able to increase the DMI clock speed. I don't know; it's beyond my knowledge. But that's my understanding at the moment.

I need to find a good reference for the per-direction bandwidth of the different connections.
Also remember that a PCI-E lane is just a lane. It doesn't care what it's connected to, and it doesn't care how many others are connected to the same thing.

Edit: Typos
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
They are limited by CPU and chipset design. Actually changing the number of pins on the socket takes an engineer a few MINUTES (of course, the pins aren't gonna DO anything unless you actually modify the chip in question). Integrating a northbridge or a higher-rate data port into the CPU takes a whole bunch of time to design.

BOTH chips could increase bandwidth; it just takes some design effort, and that is the key.

Any change they want to make requires engineers to design it and then test it. So I would say the two chipsets are equally limited as far as "future connectors" are concerned.
 

ilkhan

Golden Member
Jul 21, 2006
1,117
1
0
It takes more than a few minutes to create a new socket that doesn't require a new line of chips, etc.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Originally posted by: ilkhan
It takes more than a few minutes to create a new socket that doesn't require a new line of chips, etc.

Which is kind of my point... for those extra pins to DO something, you need to modify the chips in question, and it's the chip modification and testing that take a lot of time, not changing the socket to have a few more pins.
 

ilkhan

Golden Member
Jul 21, 2006
1,117
1
0
That was kind of my point too. Without any design changes, X58 has more bandwidth coming out of the CPU socket.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Originally posted by: ilkhan
That was kind of my point too. Without any design changes, X58 has more bandwidth coming out of the CPU socket.

No, it doesn't. It has worse bandwidth to a single card and to the southbridge, in exchange for better bandwidth for TWO video cards.
 

ilkhan

Golden Member
Jul 21, 2006
1,117
1
0
Is there anything in the way the connections are designed that forces the lanes on the northbridge of X58 to be used exclusively for GPUs? As far as I understand, PCI-E is PCI-E.

The northbridge on X58 is just a QPI<->PCI-E bridge. It could be changed on X68 to 64 PCI-E v3 lanes for all we know, and there's bandwidth available for that upgrade. Whereas on s1156, they'd bottleneck that much bandwidth over the DMI link. Changing the lanes on the CPU would require much more validation at minimum, and a new electrical spec and/or socket design at maximum. Does that make sense?

And as I understand it, the CPU die is connected to the PCI-E/GPU die via QPI. Latency may be lower with the shorter distances, but I believe there is still a QPI link involved. IDC and I have had some interesting speculation on this and basically left it at "we don't know enough yet".

Edit: and I hope you aren't frustrated with this conversation. It helps me think through things and consider new input, even if we disagree. Maybe I wasn't being clear. I was thinking that once we hit PCI-E v3, at 1GB/s per lane the bandwidth needs from the socket double, and s1156 and s1366 are both going to be around during and after that jump.
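Quick math on that doubling, using approximate per-lane rates (the v3 figure assumes the ~1 GB/s per lane that's been floated):

```python
# Approximate per-lane, per-direction throughput:
PCIE2_GBS = 0.5   # PCIe 2.0, ~500 MB/s after 8b/10b encoding
PCIE3_GBS = 1.0   # PCIe 3.0, assumed ~1 GB/s per lane

for lanes in (16, 8, 4):
    print(f"x{lanes:<2}: PCIe 2.0 = {lanes * PCIE2_GBS:4.1f} GB/s, "
          f"PCIe 3.0 = {lanes * PCIE3_GBS:4.1f} GB/s per direction")
```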
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
http://www.anandtech.com/cpuch...ts/showdoc.aspx?i=3570
http://www.anandtech.com/cpuch...howdoc.aspx?i=3570&p=2

Well, the question is... is it easier to add bandwidth to a northbridge/CPU hybrid (one based on a design that had more bandwidth),
or is it easier to redesign a northbridge to have some southbridge functionality while still communicating with a second discrete southbridge?

I would think the latter is the worse choice. The X58 has three connections: video-card-only PCI-E lanes, a QPI link to the CPU, and a DMI link to the southbridge. The Lynnfield core has the video-card-only PCI-E integrated into it, making the QPI link superfluous, and the exact same DMI link to the southbridge.

The X58 already uses DMI to communicate with the southbridge, where all the SATA, USB, etc. connections live, and those are the connections that supposedly require a ton more bandwidth in the future.
So you would have to replace the X58-to-southbridge DMI link with a better link as well.

Do you think the X68 is going to be a chipset that connects via QPI to a Nehalem without a built-in GPU PCI-E link, and then connects to the southbridge via QPI, or to a new ICH11 that has a bunch of southbridge functions removed and relocated into the northbridge? It makes no sense; it requires too many changes, too much work, and too much deviation from the current model.

But if you take the model that already has an integrated northbridge/CPU hybrid and upgrade that northbridge... or maybe upgrade its single link to the southbridge...

And Intel demonstrated they can make such changes quickly with the Nehalem-to-Lynnfield transition.

What I foresee is that the next platform will take the Lynnfield and P55 platform... replace USB 2.0 with USB 3.0 on the P55 chip, replace SATA 3Gb/s with SATA 6Gb/s on the P55 chip, MAYBE add some more speed from CPU to southbridge (either via a reintroduction of QPI or just a boost in DMI speed), and replace the PCI-E v2 x16 link on the CPU with a PCI-E v3 x16 link. All on a smaller process.

As process tech improves and companies run out of ideas for how to wring out more performance, their first idea was "more cores"... another good, solid way to increase performance has been "integration": move the memory controller from the northbridge chip into the CPU, then move the rest of the northbridge into the CPU as well... next? Either mix in a GPU (they plan to) or move the southbridge functions in as well...
 

Idontcare

Elite Member
Oct 10, 1999
21,118
58
91
Originally posted by: taltamir
http://www.anandtech.com/cpuch...ts/showdoc.aspx?i=3570
http://www.anandtech.com/cpuch...howdoc.aspx?i=3570&p=2

Well, the question is... is it easier to add bandwidth to a northbridge/CPU hybrid (one based on a design that had more bandwidth),
or is it easier to redesign a northbridge to have some southbridge functionality while still communicating with a second discrete southbridge?

Just to clarify, that would be "our" question, yes?

(as in you, ilkhan, and possibly myself if I shoulder my way into the conversation)

The guys who are actually making the decisions that lead to the technology products we buy are asking a very different question; it starts something like "which is less costly, less risky to time-to-market, and more likely to have a higher ROI...".

I too wonder where x16 PCIe is going; it's one thing to say it is good enough for today's most advanced GPUs, but it is entirely another to state that x16 will be enough for single-slot GPU solutions for N+1 and N+2 in the coming year or two.

It might not be of practical consequence, though; like RAM bandwidth, maybe it just hardly matters in real-world apps.

Does anyone know of a review where the authors intentionally reduced the PCIe slot's electrical width (x16 -> x14 -> x12 -> x10 -> ... -> x1) while testing the impact on fps as well as GPGPU/CUDA stuff?

That data would give us some insight into just how much bandwidth headroom is still there with today's GPU products in x16 electrical PCIe slots.
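Failing such a review, the kind of back-of-envelope estimate I'm after would look something like this. The workload numbers are made up purely for illustration, not measured from any real game:

```python
# A purely hypothetical headroom estimate: how much PCIe 2.0 bandwidth is
# left at each electrical width if a game streams a given amount of data
# to the GPU per frame. The workload figures below are invented.
PCIE2_PER_LANE_GBS = 0.5  # ~500 MB/s per lane per direction

def headroom(lanes, mb_per_frame, fps):
    """Return (available GB/s, required GB/s) for the given workload."""
    available = lanes * PCIE2_PER_LANE_GBS
    required = mb_per_frame * fps / 1000.0
    return available, required

# Hypothetical: 50 MB of textures/geometry uploaded per frame at 60 fps.
for lanes in (16, 8, 4, 1):
    avail, need = headroom(lanes, mb_per_frame=50, fps=60)
    print(f"x{lanes:<2}: {avail:4.1f} GB/s available vs {need:.1f} GB/s needed")
```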
 

Denithor

Diamond Member
Apr 11, 2004
6,300
23
81
I could not find any GPGPU/CUDA benchmarks for x8 vs x16 lanes, but I did find this to be quite interesting.

P45 vs X48 Crossfire Performance

Testing done on Gigabyte P45-DQ6 and X48-DQ6 boards with Q9650 and dual 4850 GPUs.

The page linked there is for Crysis; it shows a pretty significant reduction in fps (30-50%) going from dual x16 down to dual x8 slots.

The next-to-last page (9) shows a similar trend in CoD: WaW at high resolution (2560x1600) with 4xAA/16xAF enabled.

Basically, it looks like you can exceed the bandwidth of x8 slots if you run at high resolutions and/or with AA/AF. If anything, I would expect this trend to increase going forward as games get more and more graphically intensive.

Regarding GPGPU/CUDA: not a direct benchmark, but from personal experience, crunching F@H on two GTX 260s in dual x8 slots I get exactly the same PPD from each card as I did from a single card in an x16 slot.

EDIT: One additional thought I had: we have already seen massive improvements in dual- and triple-card performance on i7 versus Core 2 systems. I think this is due to the increased bandwidth going from the cores to the PCIe lanes (via QPI); the cores are able to keep the GPUs fed more consistently, so everything runs better. Now, i5 will have the PCIe lanes integrated on-die, directly in contact with the cores.

The question is this: will the reduced latency offered by that arrangement be enough to overcome the halved bandwidth of dual x8 channels when feeding two GPUs?