ITT: We discuss processors for Steam boxes, and whether Westmere 2C/4T should be resurrected.


cbn

Lifer
Mar 27, 2009
If Intel dedicates as much die space to the 14nm Broadwell iGPU as they've hinted, then a Broadwell i3 will be a decent Steambox APU.

Yes, I'm sure it would be a great SteamOS APU, but I think it would probably cost too much for most people.

1.) For someone who already owns a nice Windows gaming tower, all that is really needed for a Steam box is something like the following (running SteamOS, of course, rather than Android):

http://liliputing.com/2014/09/minix-neo-z64-is-a-pint-sized-129-pc-with-android-or-windows.html

[Images: minix-neo-z64.jpg, minix-neo-z64_02.jpg]

This assumes the 10/100 LAN and 802.11n Wi-Fi are fast enough to stream the games. (I will investigate the networking specs more deeply at a later point.)
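As a rough sanity check, here is a back-of-the-envelope sketch. The stream bitrates and the usable-throughput figures are my own assumptions (Steam In-Home Streaming at 1080p60 is commonly discussed in the 10-30 Mbps range), not measured numbers for this box:

```python
# Rough headroom check for Steam In-Home Streaming over the Minix's networking.
# All bitrate and throughput figures below are assumptions, not measurements.

STREAM_BITRATE_MBPS = {"conservative": 10, "typical": 20, "high quality": 30}

LINK_CAPACITY_MBPS = {
    "10/100 LAN (wired)": 94,        # ~94 Mbps usable after protocol overhead (assumed)
    "802.11n 1x1 (real world)": 40,  # rough real-world figure (assumed)
    "802.11n 2x2 (real world)": 80,  # rough real-world figure (assumed)
}

for link, capacity in LINK_CAPACITY_MBPS.items():
    for quality, bitrate in STREAM_BITRATE_MBPS.items():
        headroom = capacity / bitrate
        verdict = "OK" if headroom >= 1.5 else "marginal"
        print(f"{link}: {quality} ({bitrate} Mbps) -> {headroom:.1f}x headroom, {verdict}")
```

By this sketch the wired port looks fine at any sensible bitrate, while single-stream 802.11n gets marginal at the high-quality end.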

2.) For someone who doesn't own a nice Windows gaming desktop (or a sufficiently powerful Windows machine), the hardware needed for SteamOS becomes a trickier proposition, because the games can no longer be streamed; the SteamOS machine must run them directly via OpenGL. How much should a person spend in this scenario? Spend too much and there comes a point where the investment is too large to justify limiting usage purely to SteamOS. Spend too little and the selection of playable games may be excessively limited.
 

Zodiark1593

Platinum Member
Oct 21, 2012
While Westmere was (and still is) a reasonably capable CPU, I fail to see why you'd use it over Broadwell. Even if you want to go low cost, it would probably cost less to omit the GPU on Broadwell than to move Westmere to the new process and test it for a while.
 

NTMBK

Lifer
Nov 14, 2011
2.) For someone who doesn't own a nice Windows gaming desktop (or a sufficiently powerful Windows machine), the hardware needed for SteamOS becomes a trickier proposition, because the games can no longer be streamed; the SteamOS machine must run them directly via OpenGL. How much should a person spend in this scenario? Spend too much and there comes a point where the investment is too large to justify limiting usage purely to SteamOS. Spend too little and the selection of playable games may be excessively limited.

In this case I would definitely recommend Windows instead of SteamOS. The selection of games is just significantly bigger, and there are plenty of games from e.g. EA and Ubisoft which are never going to come to Steam.

I would probably recommend something like the Zotac ZBOX EN760 or the ASRock Vision X 420D. An OEM-built mini-PC with soldered-down parts and integrated cooling can be so much more compact and efficient than a custom build.
 

Denithor

Diamond Member
Apr 11, 2004
The original concept has a small amount of merit, based on the fact that it would utilize existing older equipment that likely isn't running at full capacity. Shrinking that architecture down to a smaller node is just silly; the more recent architectures are already there and are more efficient, with better IPC to boot.

The major drawbacks would be the craptastic iGPU of that era (today's aren't great, but they are much improved) and the higher power consumption/heat generation of the larger process node.
 

ShintaiDK

Lifer
Apr 22, 2012
While Westmere was (and still is) a reasonably capable CPU, I fail to see why you'd use it over Broadwell. Even if you want to go low cost, it would probably cost less to omit the GPU on Broadwell than to move Westmere to the new process and test it for a while.

Yep.

The cheapest way is actually to use Haswell/Broadwell, and if you don't want the Intel GPU/memory controller, simply do as with Clarkdale and use QPI.
 

NTMBK

Lifer
Nov 14, 2011
The major drawbacks would be the craptastic iGPU of that era (today's aren't great, but they are much improved)

The original idea was to replace the iGPU. Westmere's IGP and memory controller were on a separate die; you could keep the 32nm CPU but pair it with a new GPU and memory controller.
 

cbn

Lifer
Mar 27, 2009
Even if you want to go low cost, it would probably cost less to omit the GPU on Broadwell

Unfortunately, I don't think a budget system builder would realize any savings by buying a Broadwell processor without an iGPU.

The reason is that the price of a video card (even a very small one) would most likely be greater than what the iGPU would have added to the price of the processor.
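To illustrate the point, here is a hypothetical back-of-the-envelope comparison; every price below is an illustrative assumption, not actual Intel or board-partner pricing:

```python
# Hypothetical BOM comparison: iGPU price adder vs. the cheapest possible
# discrete card. All dollar figures are made-up illustrative assumptions.

IGPU_ADDER = 15.0  # what the iGPU might add to the CPU's price (assumed)

# A discrete card has to duplicate components the iGPU shares with the platform:
discrete_card = {
    "GPU die": 15.0,
    "dedicated VRAM": 10.0,
    "PCB + slot connector": 8.0,
    "power delivery": 5.0,
    "cooler": 7.0,
}

card_total = sum(discrete_card.values())
print(f"iGPU adder to CPU price:      ${IGPU_ADDER:.2f}")
print(f"Cheapest discrete card (BOM): ${card_total:.2f}")
print(f"Penalty for going discrete:   ${card_total - IGPU_ADDER:.2f}")
```

Even with generous guesses, the duplicated PCB, VRAM, power delivery, and cooling keep the card above what the iGPU adds to the CPU's price.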
 

Denithor

Diamond Member
Apr 11, 2004
Unfortunately, I don't think a budget system builder would realize any savings by buying a Broadwell processor without an iGPU.

The reason is that the price of a video card (even a very small one) would most likely be greater than what the iGPU would have added to the price of the processor.

Which is exactly the logic behind AMD's APU lineup.

Besides which, I very seriously doubt Intel would ever be willing to open up their fabs to outside designers. Not saying it couldn't happen, just very unlikely.
 

cbn

Lifer
Mar 27, 2009
I very seriously doubt Intel would ever be willing to open up their fabs to outside designers. Not saying it couldn't happen, just very unlikely.

Well, even Paul Otellini was open to letting others use Intel fabs.

With Brian Krzanich, I think the openness to letting others use Intel fabs is even greater.

The question here would be which Intel fab and process would be used for the on-package graphics/memory controller chip?

Since I expect 32nm Westmere to be an extreme-bargain desktop chip, 22nm would probably be most likely for the on-package graphics, possibly with Nvidia Kepler or Maxwell IP, depending on what Nvidia charges for the graphics license.
 

Maximilian

Lifer
Feb 8, 2004
Won't Intel just steal others' designs if other companies use their fabs? I remember something about IBM doing that to Cyrix, or something along those lines.

TSMC won't steal anything; they don't care, since they have nothing to sell except fab capacity.
 
Mar 10, 2006
Won't Intel just steal others' designs if other companies use their fabs? I remember something about IBM doing that to Cyrix, or something along those lines.

TSMC won't steal anything; they don't care, since they have nothing to sell except fab capacity.

Why would a high-integrity company like Intel "steal" a processor design from a customer?

Do you really think Intel would risk its reputation and potentially very costly lawsuits?
 

TuxDave

Lifer
Oct 8, 2002
Won't Intel just steal others' designs if other companies use their fabs? I remember something about IBM doing that to Cyrix, or something along those lines.

TSMC won't steal anything; they don't care, since they have nothing to sell except fab capacity.

Sounds like a great way to get sued into oblivion.
 

cbn

Lifer
Mar 27, 2009
According to this chart from Intel, the company's 22nm process has roughly the same transistor area as TSMC's 28nm:

(Notice how Intel's 22nm lines up with TSMC's 28nm on the y-axis.)

[Image: 8857531-1393813792762722-ProfG.png]


Of course, Intel's 22nm process benefits from FinFETs, whereas TSMC's 28nm uses planar transistors.
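As a quick sanity check on the chart: if node names scaled geometry directly, a "22nm" process should be roughly 1.6x denser than a "28nm" one, so the chart showing them level implies the marketing names overstate the gap. A minimal sketch, using nothing but the node names:

```python
# If process names translated directly into feature area, a 22nm process
# would be (28/22)^2 denser than a 28nm one. The chart instead shows parity.

intel_node, tsmc_node = 22.0, 28.0
ideal_ratio = (tsmc_node / intel_node) ** 2
print(f"Name-implied density advantage of 22nm over 28nm: {ideal_ratio:.2f}x")
print("Chart shows roughly 1.0x, so node names alone don't capture density.")
```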

So with this in mind, how much Kepler graphics IP would you include on an Intel 22nm on-package graphics/memory controller chip? How much Maxwell graphics IP?

Assume Intel and/or Nvidia designs a good DDR3 memory controller (or maybe even a DDR4 memory controller).

Just try to keep in mind this is meant to be a value gamer desktop processor.
 

Zodiark1593

Platinum Member
Oct 21, 2012
Unfortunately, I don't think a budget system builder would realize any savings by buying a Broadwell processor without an iGPU.

The reason is that the price of a video card (even a very small one) would most likely be greater than what the iGPU would have added to the price of the processor.

The problem is, what you're proposing has already been in use for a while in the laptop segment. Most laptop dGPUs are actually just the GPU chip soldered to the motherboard alongside some VRAM and connected to the CPU via an internal PCIe bus. There is no physical slot or card in this setup, and it is relatively low cost and low(er) power.
 

Vesku

Diamond Member
Aug 25, 2005
I don't think the lack of companies using Intel's offered fab capacity is due to concerns about stealing, at least not primarily; it has more to do with a lack of depth in terms of experience and tools. Either a company already has engineers experienced with the more traditional contract foundries (TSMC/UMC/SMIC), can hire from the overall pool of experienced people to design what it wants, or can pay another company to do part or all of the design work. The industry has to start somewhat from scratch in building up the expertise and resources to design on Intel's processes. So development costs are definitely higher than with the traditional foundries, and it's doubtful Intel is offering its superior processes at an equal or lower price per wafer either.

If Intel gets really serious about being a contract foundry, we'll probably see them spin off or financially support a design company focused on Intel processes. This is assuming they haven't done so already; I did not dig to see whether they had.
 

cbn

Lifer
Mar 27, 2009
Unfortunately, I don't think a budget system builder would realize any savings by buying a Broadwell processor without an iGPU.

The reason is that the price of a video card (even a very small one) would most likely be greater than what the iGPU would have added to the price of the processor.

The problem is, what you're proposing has already been in use for a while in the laptop segment. Most laptop dGPUs are actually just the GPU chip soldered to the motherboard alongside some VRAM and connected to the CPU via an internal PCIe bus. There is no physical slot or card in this setup, and it is relatively low cost and low(er) power.

I did think about laptop discrete GPUs when I wrote that, but please consider the following factors:

A mobile discrete GPU needs its own RAM and memory controller, and it also needs its own power delivery and cooling, right? By using on-package graphics, these components (and others) would be shared rather than duplicated.

In fact, lower cost is probably one reason we see iGPUs replacing discrete graphics in some laptops.
 

Zodiark1593

Platinum Member
Oct 21, 2012
I did think about laptop discrete GPUs when I wrote that, but please consider the following factors:

A mobile discrete GPU needs its own RAM and memory controller, and it also needs its own power delivery and cooling, right? By using on-package graphics, these components (and others) would be shared rather than duplicated.

In fact, lower cost is probably one reason we see iGPUs replacing discrete graphics in some laptops.

In the low-cost segment, however, GPU performance is a relatively low priority, though even lower-end iGPUs are more than capable of HTPC tasks and even light gaming on the side. For those who really need GPU performance, that is why dGPUs exist.

So I still fail to see what it is you're trying to accomplish here.
 

cbn

Lifer
Mar 27, 2009
In the low-cost segment, however, GPU performance is a relatively low priority, though even lower-end iGPUs are more than capable of HTPC tasks and even light gaming on the side. For those who really need GPU performance, that is why dGPUs exist.

So I still fail to see what it is you're trying to accomplish here.

To put it very bluntly and succinctly, I think Apple may very well catch Intel off guard with respect to desktop performance with ARM.

Take Apple's Cyclone CPU core, or one of its successors, put it in a form factor that is not thermally constrained (e.g., an Apple TV made more like a Mac Mini), boost the clocks, and we could see a whole wave of desktop-like apps that we are not used to seeing in the ARM ecosystem follow soon afterward.

Therefore, I believe Intel needs to approach and think about the value desktop in a way they never have before.

Once ARM catches Intel off guard, it will be very difficult for Intel to regain what they lost, IMO (phone SoCs are perhaps a good example of this). Instead, I would like to see Intel take some kind of proactive and aggressive stance now rather than have to react defensively later on.

P.S. At some point I also think Intel needs to plan on offering eMMC 5.0/UFS 2.0, or one of their successors, as an option for their big-core APUs. This way the cheapest class of big-core motherboards can have a BOM-lowering form of primary storage if necessary.
 

NTMBK

Lifer
Nov 14, 2011
This idea actually reminds me a lot of the original Xbox: an Intel Pentium III with a combined GPU/memory controller from Nvidia, attached via the FSB. Back to the future :cool:
 

cbn

Lifer
Mar 27, 2009
This idea actually reminds me a lot of the original Xbox: an Intel Pentium III with a combined GPU/memory controller from Nvidia, attached via the FSB. Back to the future :cool:

From the AnandTech article I linked and quoted in the OP:

http://www.anandtech.com/show/2901/2

To make matters worse, the on-package chipset is a derivative of the P45 lineage. It’s optimized for FSB architectures, not the QPI that connects the chipset to Clarkdale.

So I wonder how much R&D would be required for an on-package memory controller optimized for QPI?

Could Nvidia or others do this by themselves?

Also, if Intel were to relaunch Clarkdale "as is", including the H57/H55 chipset, how much should they charge for it? Maybe $15 total for the 32nm 2C/4T processor, the 45nm on-package graphics chip, and the 65nm H57 PCH? That should allow the platform to compete with, or better yet cut off, an infusion of future large-core ARM chips on the desktop. In fact, the more I think about this idea, the more I believe making Intel dual big-core processors with Hyper-Threading ubiquitous would also help pave the way for the more expensive Intel processors like Core M to be successful on alternative OSes.

Using a superior processor design (like an Intel big dual core with HT) on an old node, versus an inferior processor design (quad-core Atom) on an advanced node, for alternative-OS desktops: which is more economically viable long term? I would have to say the former is the better path to take, if at all possible.

P.S. Regarding the 65nm PCH, I do realize that is old process tech. However, other extreme-bargain chips from Intel have also used very old process tech for the PCH. One example would be Pine Trail Atom (released in 2010): for Pine Trail, Intel used the NM10 Express chipset, built on 130nm process tech. With 130nm first used for 2001 chips, the process tech behind the Pine Trail chipset was 9 years old at the time. At the time of this writing, 65nm is only 8 years old.
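A trivial check of that arithmetic, using the commonly cited introduction years for those Intel nodes (treat them as approximations):

```python
# Process-age comparison: node name -> approximate year of introduction.
node_intro = {"130nm": 2001, "65nm": 2006}

print(2010 - node_intro["130nm"], "years old: 130nm NM10 when Pine Trail shipped (2010)")
print(2014 - node_intro["65nm"], "years old: 65nm PCH at the time of writing (2014)")
```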
 

cbn

Lifer
Mar 27, 2009
It's irrelevant. Also, it's used in every single (Intel) NUMA server.

So it would be extremely low cost, then.

That brings up an interesting comparison:

1. Intel re-launching 2C/4T Westmere with the original 45nm Ironlake GMA graphics/memory controller chip "as is", once again as Clarkdale, with either H57 or H55, for $15 total (or some similarly low price)

vs.

2. Intel or a third party re-launching 2C/4T Westmere with a new on-package graphics/memory controller chip, which I assume would likewise be matched with an H57 or H55 chipset.

Assuming Intel would sell the re-launched Clarkdale i3 with H57 for $15, I wonder how third-party graphics IP would compare cost-wise and value-wise to this original stock configuration. I'd have to imagine an on-package Nvidia graphics/memory controller chip wouldn't have to cost much to deliver a good boost in performance over the original 45nm Ironlake GMA chip. Similarly, I'm sure a more modern graphics uarch from Intel or Imagination Technologies could do likewise.

The question then becomes which graphics IP offers the best bang for the buck in the intended market segment (which at the moment is the Linux desktop: SteamOS, or Ubuntu with the Steam client). However, if this ultra-low-budget Intel x86 desktop were successful, I would also expect it to spill over to Android. (Yes, I really do believe Android gaming desktops/HTPCs/consoles are coming, especially after Apple breaks first ground with a fast-clocked Cyclone-class CPU, or a fast-clocked version of whatever follows Cyclone.) With that said, I really hope Intel gets there first.
 

ShintaiDK

Lifer
Apr 22, 2012
You have to remember how Intel's nodes move. 45nm in volume is out of the question. And even 32nm is already more or less down to chipsets only today.

TDP/performance is the next issue. Why use an obsolete 32nm product when you can do everything cheaper and more efficiently on a newer node with newer uarchs? You already have to develop everything besides the CPU anyway.
 

cbn

Lifer
Mar 27, 2009
TDP/performance is the next issue. Why use an obsolete 32nm product when you can do everything cheaper and more efficiently on a newer node with newer uarchs?

Since it is a desktop, not having the latest node shouldn't be a problem. (In fact, 32nm clocks quite high on the desktop, although cooling could become an issue if frequencies were pushed to extremes.)

Now, as for using newer uarchs on 22nm and beyond, I would be concerned about the amount of logic that would need to be disabled to make an extreme-budget gamer desktop chip ($15 and below, including the PCH). Sure, the cost per transistor is somewhat lower on advanced nodes, but the chip has more total transistors, and more of them are being disabled to create the differentiation. Some of these disabled units will come from defects, but how much volume is that really going to add? I would think not much, and most of the necessary volume would have to be created by disabling perfectly good logic.
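A minimal sketch of that volume argument, using the simple Poisson yield model Y = exp(-A*D0); the die area and defect density below are illustrative assumptions, not Intel figures:

```python
import math

# Poisson yield model: fraction of fully good dice is exp(-area * defect_density).
D0 = 0.1            # defects per cm^2 on a mature node (assumed)
die_area_cm2 = 1.0  # hypothetical budget-desktop die size (assumed)

yield_good = math.exp(-die_area_cm2 * D0)
print(f"Fully good dice:                      {yield_good:.1%}")
print(f"Defective (harvestable at best) dice: {1 - yield_good:.1%}")
# With ~90% of dice fully good (and not every defect landing somewhere
# salvageable), a high-volume cut-down SKU cannot be fed by defects alone;
# most of its volume must come from fusing off perfectly good logic.
```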

Alternatively, there are always chips like Braswell (a quad-core 14nm Atom SoC with 16 Gen8 EUs, optimized for mobile transistors). And while I think something like this is fine for a high-end tablet (it is a tablet chip re-purposed for the desktop, after all), I have to believe it is less than optimal for an x86 gamer desktop, for many reasons.

Here are some of them:

1. Quad small (i.e., Atom) cores: This is not a good idea for an x86 gamer desktop, because most of the existing x86 games suited to its low-voltage 16EU iGPU would be single- or dual-threaded games. Two large cores would have been a better use of the silicon die area here if the chip were designed from the ground up as a specialized budget desktop gamer chip (see the sketch after this list).

2. Optimized for mobile transistors: While the low leakage is great for mobile, on the desktop the low drive current and low maximum frequencies make for a poorer value. For optimum value on the desktop, I would like to see a die optimized for higher voltage/frequency per mm2 of silicon area.

3. SoC: While integrating the PCH is beneficial for saving space in the tight confines of a phone or an 8" tablet, I have read that it does nothing (or very little) for performance. In fact, in some cases integrating the PCH can bloat the die to the point where some CPU and GPU die area needs to be sacrificed in order to keep costs down.
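Here is the sketch referenced in point 1: an Amdahl's-law comparison of two big cores against four small ones for a game with limited thread scaling. The parallel fraction and the per-thread advantage of a big core are assumptions chosen for illustration:

```python
# Amdahl's law: speedup = 1 / ((1 - P) + P / N) for parallel fraction P, N cores.

def speedup(parallel_fraction: float, cores: int) -> float:
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

P = 0.6               # fraction of a typical game's work that scales with threads (assumed)
BIG_PER_THREAD = 2.0  # big core's per-thread perf vs. an Atom-class core (assumed)

two_big = BIG_PER_THREAD * speedup(P, 2)
four_small = 1.0 * speedup(P, 4)
print(f"2 big cores:   {two_big:.2f}x one small core")    # ~2.86x
print(f"4 small cores: {four_small:.2f}x one small core")  # ~1.82x
```

Under these assumptions the two big cores come out well ahead, which is the crux of point 1.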
 

cbn

Lifer
Mar 27, 2009
You have to remember how Intel's nodes move. 45nm in volume is out of the question.

I wish I had better access to information regarding Intel's fabs, but according to the following list, these are the fabs currently in use:

http://en.wikipedia.org/wiki/List_of_Intel_manufacturing_sites

Fab sites:

D1X Hillsboro, Oregon, USA 300 mm, 14 nm
D1D Hillsboro, Oregon, USA 300 mm, 14 nm
D1C Hillsboro, Oregon, USA 300 mm, 22/14 nm
Fab 12 Chandler, Arizona, USA 300 mm, 65 nm
Fab 32 Chandler, Arizona, USA 300 mm, 22/14 nm
Fab 42 Chandler, Arizona, USA 450 mm, 14 nm
Fab 11 Rio Rancho, New Mexico, USA 300 mm, 45/32 nm
Fab 11X Rio Rancho, New Mexico, USA 300 mm, 45/32 nm
Fab 17 Hudson, Massachusetts, USA 200 mm, 130 nm
Fab 24 Leixlip, Ireland 300 mm, 14 nm
Fab 28 Kiryat Gat, Israel 300 mm, 22 nm
Fab 68 Dalian, China 300 mm, 65 nm

Perhaps, if there were a need, some of the 65nm fabs could at least partly transition to 45nm to increase capacity.
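For what it's worth, grouping that list by node makes the volume picture clearer (data exactly as listed above, which may be outdated):

```python
# Group the wiki fab list by process node to see where capacity sits.
from collections import defaultdict

fabs = [
    ("D1X", "14nm"), ("D1D", "14nm"), ("D1C", "22/14nm"), ("Fab 12", "65nm"),
    ("Fab 32", "22/14nm"), ("Fab 42", "14nm"), ("Fab 11", "45/32nm"),
    ("Fab 11X", "45/32nm"), ("Fab 17", "130nm"), ("Fab 24", "14nm"),
    ("Fab 28", "22nm"), ("Fab 68", "65nm"),
]

by_node = defaultdict(list)
for name, node in fabs:
    by_node[node].append(name)

for node, names in by_node.items():
    print(f"{node:>8}: {', '.join(names)}")
# Only Fab 11/11X remain on 45/32nm, which fits ShintaiDK's point that
# 45nm volume production is essentially gone.
```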