
First Sandy Bridge Numbers?

I'm not an expert on Sandy Bridge since I've started being more patient and just waiting for launches now, but what IDC meant by it being the wrong SKU for gamers is that there are different versions of Sandy Bridge.

This processor is Sandy Bridge-DT, sharing a lot of similarities with Clarkdale.
If you really need every ounce of performance, you should be looking at Sandy Bridge B2, the Bridge equivalent of Gulftown.
 
Isn't B2 gonna be the version you can only get in dual-socket form though? My understanding was that 1155 was going to have everything up to upper-midrange quad-core CPUs, which right now aren't yet fully utilized in games, and that ALL of those CPUs will have integrated graphics. It's fully reasonable to assume that there are gonna be people who simply won't test the integrated graphics on these CPUs, as they're intended for a market which won't need it to perform at all. The only reason someone doing gaming benchmarks would test the IGP is if Intel incorporates a means to switch between Intel integrated and ATI/NV dedicated when possible, since there would obviously be some inherent power-savings benefit from disabling your dedicated card when not in use.
 
If I dragged this horse any closer to the water I'd be drowning it.

I hear Bulldozer has a single FPU per module... I sure hope the LAST thing anyone with a BD sample bothers to check is how the FPU performance stacks up. Why would anyone care to check out one of the key microarchitecture differences of a new chip? Beats the shit out of me, I guess.

I wonder if anyone even has Bulldozer samples out in the wild? Things have really been oddly quiet. Then again, that may be a good thing. They made a lot of noise about Phenom and it was a letdown... they didn't make much noise about K8 and it was great (IIRC, haha).
 
Isn't that kinda different? I mean what if we replace "Llano" and "Bobcat" with Bulldozer+GPU. 🙂

It's not like the upcoming 6 and the 8 core Sandy Bridge chips will help gaming performance anyway. And 3-channel memory? Doesn't even do much for Nehalem.

I think users with a single high-end GPU could buy this version.


On a side note: Apparently there are both H65/H67 and P65/P67 chipsets, hinting that there are LGA1155 Sandy Bridge chips that lack GPU functionality. There was an early rumor which suggested 6 cores might be possible for the LGA1155 socket.

True enough. I assume you're referring to 4-channel memory with Intel's Sandy high end. I think maybe when a program is able to use AVX, that bandwidth will come in handy.

The chip we're discussing is 2-channel memory with 2-4 cores, versus a 6-8 core high-end CPU. Recently I heard something, and I don't believe it, but it may in fact be true. My brother-in-law didn't tell me, but Bod did, and he has many contacts. He was told that AVX on the high end was 512 bits, unlike the 256 bits in this Sandy Bridge for low and midrange parts. It's a long wait to see if that's a fact or not, but Intel did say AVX can go out to 1024 bits.
 
I wonder if anyone even has Bulldozer samples out in the wild? Things have really been oddly quiet. Then again, that may be a good thing. They made a lot of noise about Phenom and it was a letdown... they didn't make much noise about K8 and it was great (IIRC, haha).
Well, not so much performance-wise, but Hammer was way late and hyped.
 
True enough. I assume you're referring to 4-channel memory with Intel's Sandy high end.

Again, the LGA135x with 3-channel is the enthusiast version. There's probably a small possibility that they'll release super-ultra-high end with the LGA2011, but I doubt it.
 
Knowing Intel lately, they'll duct-tape a new socket to X58 and keep their 2-year-old platform as the flagship.

Now that they have terminated all of the competition, I'm sure stretching that out to 20 years on the same chipset is alright with everyone.
 
Again, the LGA135x with 3-channel is the enthusiast version. There's probably a small possibility that they'll release super-ultra-high end with the LGA2011, but I doubt it.

It's been a while since I've heard anything about the 1355 socket. Do you have a recent link? I tried Google and didn't come up with anything really new, other than the 2011 server version. Also the high-end desktop B2, which I found listed as socket 2011 with 4 memory channels.
 
I wonder if anyone even has Bulldozer samples out in the wild? Things have really been oddly quiet. Then again, that may be a good thing. They made a lot of noise about Phenom and it was a letdown... they didn't make much noise about K8 and it was great (IIRC, haha).

Bulldozer hasn't taped out yet, so no one has one in the wild or otherwise.

To their credit, it was Barcelona, not Phenom, that they made a lot of noise about, and pretty much all those guys have been fired, er, I mean resigned to pursue other endeavors...

AMD was smart enough to feed "dancing in the halls" Charlie the leaked pre-hype speculation for Phenom so that the paper trail never officially led back to people who wanted a job after that thing hit the reviewers.
 
It's been a while since I've heard anything about the 1355 socket. Do you have a recent link? I tried Google and didn't come up with anything really new, other than the 2011 server version. Also the high-end desktop B2, which I found listed as socket 2011 with 4 memory channels.

Yeah...That's what I thought as well... That the high end enthusiast platform was going to be socket 2011.

I remember reading about socket 1355 a while back but again, I haven't seen ANYTHING about it recently. It does seem weird that Intel would have 3 sockets (1155, 1355, 2011), right? I mean...1155 and 2011 seems more likely? Anyone know what happened to socket 1355?

And I know that the quad-channel DDR3 won't likely offer much (if any) performance boost in most apps... But I'm sure it'll help in some cases. And I don't think you'll be required to use 4 sticks of DDR3 if you don't want to. So really, it's nice that Intel is giving you the option. I'd assume (I could be wrong, but it'd make logical sense) that if you want to you'll be able to just use 2 sticks of DDR3 in dual-channel mode. I can't see any downsides there...
 
I thought they were supposed to have taped out already as reported by Charlie in an AMD analyst day in Nov 2009? Link.


Contrary to reports circulating on the web, AMD's much-vaunted Bulldozer core is not currently taping-out, but Llano is - or at least will likely be doing so imminently.

Bulldozer, on the other hand, will likely not tape-out until later in 2010, and we've heard whisperings that it's probably not even planned for 2011, and if it is, it will be towards the very end of that year.

http://www.hexus.net/content/item.php?item=21117

Mind you, I am not basing my statement on the info contained in rumor sites, merely providing the public link as counter-info. Rumors can be found anywhere; when BD tapes out you can count on AMD letting analysts know asap.
 
I thought that was what they did as it was an AMD Analyst Day that S|A supposedly based the article upon. I confess, I did not really read the article. I merely read the headline, saw "analyst day" somewhere in the article, and assumed it was an official announcement.

Well then, I guess it just reaffirms my belief based upon earlier discussions that Bulldozer isn't quite likely to come around early in 2011. More like end of 2011.
 
Charlie misinterpreted what was stated by AMD as having taped out... he assumed BD, but it was actually Bobcat and Llano.
 
My take on Sandy Bridge graphics: the magic is the use of the cache like in a console.
PS2 - framebuffer is on the GPU
PS3 - the Cell's 8 vector units have a 256KB scratchpad each
Xbox 360 - Xenos GPU has 10MB eDRAM
Wii - 3MB eDRAM (texture memory and framebuffer)
As Sandy Bridge's cache is directly accessible to both the GPU and the CPU, significant speedups are available if a program is designed/ported with the expectation of using the cache.
A 6MB cache is not a framebuffer for 1920x1080, but it is a framebuffer for 1600x900.
8MB is a framebuffer for 1920x1080.
Take a game which is internally designed at 1920x540 and 6MB will cover 1 x 32-bit framebuffer and 1 x 16-bit Z buffer; have an 8MB cache and there is 2MB left over for normal cache functions.

In short, a PC game will still be poor on Sandy Bridge, but a console game (Xbox 360/PS3/Wii) can be ported to Sandy Bridge and perform like a console game, no need for discrete graphics.
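As a sanity check on the buffer sizes above (a quick sketch, assuming 32-bit color = 4 bytes/pixel, a 16-bit Z buffer = 2 bytes/pixel, and "MB" meaning MiB):

```python
# Rough single-buffer sizes in MiB, to check the cache-as-framebuffer numbers.
def buffer_mib(width, height, bytes_per_pixel):
    return width * height * bytes_per_pixel / (1024 * 1024)

print(round(buffer_mib(1600, 900, 4), 2))   # 5.49 -> fits in a 6MB cache
print(round(buffer_mib(1920, 1080, 4), 2))  # 7.91 -> needs the 8MB cache

# 1920x540 with a 32-bit color buffer plus a 16-bit Z buffer:
total = buffer_mib(1920, 540, 4) + buffer_mib(1920, 540, 2)
print(round(total, 2))                      # 5.93 -> just under 6MB
```

So the claimed fits do work out, with the 1920x540 color+Z case squeaking in under 6MB as described.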

Hypothetical Sandy Bridge speedup over 'dale:
clock speed (mature 32nm, high-k, SOI) 733MHz -> 1200MHz: x1.6
cache as frame/Z/texture buffer (highly variable effect for games): x1.5
AVX: x1.2
improved integrated memory controller: x1.2
improved GPU (wildcard, but perhaps the HD engine is embedded in the GPU): x1.1
improved CPU: x1.05

Multiply these together and you get a figure around a 4x speed increase over the 'dales.
Not bad for keeping the same number of similar GPUs in an integrated product.
This does of course require the game to be aware of and expecting the cache like it would on a console... not like a PC.

So the 6MB cache version will rock in a laptop limited to 1600x900 resolution, but 8MB is needed to rock in a desktop at 1920x1080 (although with good programming a 6MB buffer may still be very handy).
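Multiplying those estimates out (these are purely the hypothetical factors listed above, not measurements; note the exact clock ratio 1200/733 is closer to 1.64 than 1.6):

```python
# Product of the per-item speedup estimates listed above.
factors = [
    1200 / 733,  # clock speed (~1.64x)
    1.5,         # cache as frame/Z/texture buffer
    1.2,         # AVX
    1.2,         # improved integrated memory controller
    1.1,         # improved GPU
    1.05,        # improved CPU
]

speedup = 1.0
for f in factors:
    speedup *= f
print(round(speedup, 2))  # 4.08 (about 3.99 if you use the rounded 1.6)
```

Every factor here is a guess stacked on a guess, so the honest takeaway is "roughly 4x if everything pans out," not a precise figure.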
 
Hypothetical Sandy Bridge speedup over 'dale:
clock speed (mature 32nm, high-k, SOI) 733MHz -> 1200MHz: x1.6
cache as frame/Z/texture buffer (highly variable effect for games): x1.5
AVX: x1.2
improved integrated memory controller: x1.2
improved GPU (wildcard, but perhaps the HD engine is embedded in the GPU): x1.1
improved CPU: x1.05

If the early rumors are correct, the top end clock for the IGP will be 1.4GHz. I'm assuming that's Graphics Turbo clock. Because the Core i5 661 already has 900MHz IGP, the advancement there is 56%, translating to 40-50% in real world.

20% improvement using better memory controller is quite a lot, dare I say not likely. Integrating the GMCH into the CPU die will have little effect because the GPU already has fast connection to the memory controller. If everything else is the same, the only improvement will be faster communication with the CPU.

The bottleneck in the current HD Graphics is still the HSR algorithm, though it's a considerable advancement from the near-zero HSR of the previous generation.

The big thing is if they have improved the execution units on the GPU, and how tight the integration of the CPU and the GPU is. AMD keeps talking about Fusion, but in terms of "Fusion" I'd say Sandy Bridge has an edge here. I wonder what they can share? 🙂
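The clock-advancement figure quoted above works out as follows (using the rumored numbers, which may not be final):

```python
# Core i5 661 IGP: 900MHz; rumored Sandy Bridge IGP top (turbo?) clock: 1.4GHz.
old_clock_mhz = 900
new_clock_mhz = 1400
advancement = new_clock_mhz / old_clock_mhz - 1
print(f"{advancement:.0%}")  # 56%
```

The 40-50% "real world" translation is then just allowing for the usual imperfect scaling with clock.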
 
If the early rumors are correct, the top end clock for the IGP will be 1.4GHz. I'm assuming that's Graphics Turbo clock. Because the Core i5 661 already has 900MHz IGP, the advancement there is 56%, translating to 40-50% in real world.

20% improvement using better memory controller is quite a lot, dare I say not likely. Integrating the GMCH into the CPU die will have little effect because the GPU already has fast connection to the memory controller. If everything else is the same, the only improvement will be faster communication with the CPU.

The bottleneck in the current HD Graphics is still the HSR algorithm, though it's a considerable advancement from the near-zero HSR of the previous generation.

The big thing is if they have improved the execution units on the GPU, and how tight the integration of the CPU and the GPU is. AMD keeps talking about Fusion, but in terms of "Fusion" I'd say Sandy Bridge has an edge here. I wonder what they can share? 🙂

Any way you slice it, THANK GOD Intel is finally getting their IGP stuff up to "sort of reasonable" standards. IMHO, the horrible-ness (not a word, I know, haha) of the Intel GMA stuff has kept PC gaming down a bit. (I.e., if you want to target a big audience, you really must program for the least common denominator, a la WoW.)
 
The big thing is if they have improved the execution units on the GPU, and how tight the integration of the CPU and the GPU is. AMD keeps talking about Fusion, but in terms of "Fusion" I'd say Sandy Bridge has an edge here. I wonder what they can share? 🙂

Sandy Bridge IS Fusion... AMD keeps talking about it, Intel is actually doing it.
 
Sandy Bridge IS Fusion... AMD keeps talking about it, Intel is actually doing it.

And yet we don't have any idea how well (or poorly) it has been implemented because the leaker chose to do all the benches w/5870 thus completely sidestepping the Fusion aspects of the chip...
 
And yet we don't have any idea how well (or poorly) it has been implemented because the leaker chose to do all the benches w/5870 thus completely sidestepping the Fusion aspects of the chip...

True, we don't know how well it performs yet... it might just be a whole lot of hooha that would never amount to anything. But I doubt that is the case, and it is the fusion of GPU and CPU 😛
 
Sandy Bridge IS Fusion... AMD keeps talking about it, Intel is actually doing it.

I still would put money on either Bobcat or Llano being available for purchase before Sandy Bridge.

Both companies are doing it, not just Intel. Intel just didn't hype the move.
 
I still would put money on either Bobcat or Llano being available for purchase before Sandy Bridge.

Both companies are doing it, not just Intel. Intel just didn't hype the move.

Let me rephrase that: Intel is doing it first.
I think Intel will be first to market with this, by quite a bit.
 
Let me rephrase that: Intel is doing it first.
I think Intel will be first to market with this, by quite a bit.

I think for Intel it is purely a business decision; there are no technical milestones preventing them from releasing SB today if they wanted to.

But releasing SB today would not be a "maximal gross margin" decision.

For Intel, SB SKUs would replace Clarkdale/Arrandale SKUs. But the die itself is larger, being monolithic versus MCM, and would require more 32nm capacity versus the current product, which uses up some of the well-depreciated 45nm capacity.

Intel may well release SB as a preemptive move against Llano/Bobcat just to score the PR points of always being able to claim the first monolithic Fusion silicon to market, but I have every expectation the release will be timed entirely by business reasons and not technology ones.
 