Sandy Bridge die-size estimate from photo


Sp12

Senior member
Jun 12, 2010
So, running a ring bus over the top of the die doesn't actually increase size, just complexity? What sort of effect will that have on yields compared to a traditional setup (if it affects them at all)?

Also, how does adding another stop increase the bandwidth of a ring bus? The 'channel' is still the same width; it just makes more stops. Does the Tokyo bullet train carry more people with every stop added?
 

Nemesis 1

Lifer
Dec 30, 2006
We're not discussing trains. Read about the SB ring bus; there are some really good articles on it. Intel has done here what I would call AMD's worst nightmare: the ring bus speed scales with clock speed, and the LLC scales with clock speed. Now it's up to you to read up and find out why the stops add bandwidth. You have stopped before at traffic lights that are badly timed, where the flow of traffic isn't equal for both sets of lanes: you're stopped at the light while the other light is green but has no traffic.

Better yet, think of it as a semi-automatic weapon vs. an automatic weapon that just keeps firing as long as your finger is holding the trigger down. The automatic weapon will empty the clip faster than the semi-automatic weapon, which needs you to pull the trigger each time you fire. Which weapon has the most bandwidth, so to speak?
 

Nemesis 1

Lifer
Dec 30, 2006
Each stop has data waiting that gets released, then moves to the next stop, then receives new data. Just like an automatic weapon: rapid firing of data. Really, one member here did a marbles-at-the-stops analogy that explains it about as well as it can be explained. Read what he wrote; he is much better at communicating than I am. I really don't like humans much, so my patience isn't so good and it's getting worse.
 

Idontcare

Elite Member
Oct 10, 1999
Also, how does adding another stop increase the bandwidth of a ring bus? The 'channel' is still the same width; it just makes more stops. Does the Tokyo bullet train carry more people with every stop added?

Think of a highway with an absurdly ridiculous number of lanes such that no amount of traffic on the highway would ever lead to a logjam.

Now how do you get the cars on and off this highway? The on-ramps and exit-ramps of course. But they are single-lane, so a finite number of cars can get onto (or off of) the highway at any given entry/exit.

For SB this finite limit is 96GB/s per on-ramp/exit-ramp.

Add more on-ramps and exit-ramps. How many cars are on the highway now?

Now of course there really is a limit to the number of cars that can be on the highway, and of course there is a limit to the total volume of data that can be "in-flight" on the ring bus, but that value is likely so ridiculously high that it just isn't a practical concern.
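
To put rough numbers on the analogy, here is a back-of-the-envelope sketch (assuming the 96GB/s per-stop figure above comes from a 32-byte-wide ring at 3GHz, and idealizing away arbitration so every stop can inject in the same cycle):

```python
# Back-of-the-envelope aggregate ring bandwidth vs. number of stops.
# Assumes a 32-byte-wide data ring at 3 GHz (which gives the 96 GB/s
# per-stop figure quoted above) and, unrealistically, that every stop
# can put data onto the ring in the same cycle.

RING_WIDTH_BYTES = 32   # assumed data width per ring stop
CLOCK_GHZ = 3.0         # assumed ring/core clock

per_stop_gbs = RING_WIDTH_BYTES * CLOCK_GHZ  # = 96 GB/s per on-ramp/exit-ramp

for stops in (4, 6, 8):
    print(f"{stops} stops -> ~{stops * per_stop_gbs:.0f} GB/s aggregate injection bandwidth")
```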
 

Sp12

Senior member
Jun 12, 2010
Thanks, that was elucidating indeed!

If these fora had a rep system you would have some right there.
 

ilkhan

Golden Member
Jul 21, 2006
It's 32 bytes per stop every cycle (thus bandwidth scales per stop; each stop adds a potential 32 bytes in flight each cycle), right? But latency increases because it takes a cycle per stop, and more stops means longer before getting to the relevant destination.
 

Idontcare

Elite Member
Oct 10, 1999
It's 32 bytes per stop every cycle (thus bandwidth scales per stop; each stop adds a potential 32 bytes in flight each cycle), right? But latency increases because it takes a cycle per stop, and more stops means longer before getting to the relevant destination.

The latency aspect is a good question. Has there been anything official about this?

I've no doubt there is a latency penalty that comes from scaling to more stops inasmuch as adding more stops means adding more and more wire-delay to the physical half-trip distance of the bus itself.

But does a "stop" actually incur a delay for data that is not intended to "exit" the bus at that stop? I know some ring-bus designs do operate this way, but I am not aware of it being a requirement for all ring-bus implementations, so I am really just asking this as a question and not trying to state it as fact.
 

ilkhan

Golden Member
Jul 21, 2006
But does a "stop" actually incur a delay for data that is not intended to "exit" the bus at that stop? I know some ring-bus designs do operate this way, but I am not aware of it being a requirement for all ring-bus implementations, so I am really just asking this as a question and not trying to state it as fact.
No idea. As I understand it, data moves one stop per clock, not the whole route.
 

Nemesis 1

Lifer
Dec 30, 2006
From Ars Technica:

Moving beyond the core, Sandy Bridge's larger layout is the result of Intel's decision to move to a more flexible, more easily scalable method for building SoCs. In this respect, Kanter's article points out a fact about the Sandy Bridge ring bus that I had overlooked.

Previous multicore designs had used a crossbar switch to connect the cores and other on-die components. A crossbar is an efficient and fast way to get multiple devices to talk to one another, but when you add a device, you have to add a port. Intel doesn't want to have to redo the crossbar every time it adds another device to the on-chip bus, so it has chosen to go with a ring bus design instead.


A ring bus is simpler and more easily scalable—you just hang however many devices off the ring. No big rewiring effort is necessary. The downside is that it's less efficient, but this shouldn't be an issue at lower core counts.

Eventually, Intel will have to move to something like Tilera's tile-based architecture, but that day is a ways off.
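
To illustrate the scaling argument in that quote (a purely schematic sketch, not real wiring or area figures): a crossbar's crosspoint count grows roughly with the square of its port count, while a ring just gains one stop per added device.

```python
# Schematic comparison of how the two interconnects grow as devices are added.
# A crossbar switching n agents needs on the order of n*n crosspoints (and a
# new port every time a device is added); a ring just hangs one more stop off
# the loop. Illustrative only -- not actual Sandy Bridge wiring or area numbers.

def crossbar_crosspoints(ports: int) -> int:
    return ports * ports        # n inputs x n outputs

def ring_stops(devices: int) -> int:
    return devices              # one stop per device on the ring

for n in (4, 8, 12, 16):
    print(f"{n:2d} devices: crossbar ~{crossbar_crosspoints(n):3d} crosspoints, ring {ring_stops(n):2d} stops")
```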
 

IntelUser2000

Elite Member
Oct 14, 2003
As far as I understand it, it's by design that the data on the ring advances one stop per cycle, so each additional stop increases maximum latency by 1 cycle. Whether the data takes the maximum number of cycles to get to another point is entirely determined by how many stops it has to pass through to arrive there.

Hence their term, "takes the lowest latency route".
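
A minimal sketch of that idea, assuming a bidirectional ring where data advances one stop per cycle and the only cost is the hop count in whichever direction is shorter:

```python
# Toy model of hop latency on a bidirectional ring: data advances one stop per
# cycle and travels in whichever direction reaches the destination in fewer hops
# (the "lowest latency route"). Illustrative only, not Intel's actual protocol.

def ring_hops(src: int, dst: int, stops: int) -> int:
    """Cycles for data to go from stop src to stop dst on a ring with `stops` stops."""
    clockwise = (dst - src) % stops
    counter_clockwise = (src - dst) % stops
    return min(clockwise, counter_clockwise)

STOPS = 8  # assumed number of ring stops, purely for illustration
worst_case = max(ring_hops(0, d, STOPS) for d in range(STOPS))
print(f"worst-case hop latency on an {STOPS}-stop ring: {worst_case} cycles")  # 4
```

On a one-directional ring the worst case would instead grow by a full cycle per added stop, which matches the "each additional stop increases maximum latency by 1 cycle" reading above.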
 

ydnas7

Member
Jun 13, 2010
http://www.opensparc.net/pubs/preszo/09/sunmicro_rainbowfalls_hotchips09.pdf
Crossbars are the main alternative to a ring bus (other than a tile array).
Just as multi-threaded software has diminishing returns, so it is with hardware.
Intel's ring bus probably carries a higher power-consumption penalty than a more traditional system, but it is so much more readily scalable, and it is suitable for differing core speeds/voltages (thus getting other power savings).
 

Nemesis 1

Lifer
Dec 30, 2006
Yeah, Intel says the data will go to the lowest-latency stop that's available for new data.

As you increase the CPU clock, latency lowers and bandwidth increases.

I found it interesting that at XS, one of the O/C heroes has an SB running @ 5.3GHz on air. It's on a Facebook link, and I don't have an account, so I didn't see the results. But in one thread I read, replies mention 50,000 points in CPU Vantage.

The memory controller on SB deserves to be discussed as well; it's very impressive.
 