Nehalem


Idontcare

Elite Member
Oct 10, 1999
Ah yes, I am with you now. I was not making the connection with the IA-32 work they did for the Itanium platform. Now I see.

Yes indeed, that is quite a promising path they could go down if they wanted. Kind of like virtualization of the x86 hardware itself.

Use an uber translator/compiler to migrate your OS across a heterogeneous network of hardware if you like.

The concept has merit, the technology exists (as proved on Itanium) but is it the strategy and vision at Intel? And if it is, is it on a timeline to intersect Nehalem?
 

Nemesis 1

Lifer
Dec 30, 2006
Ya, it's really exciting because Larrabee isn't that far off. Then we will know. That's what I meant by Nehalem possibly leveraging Elbrus tech (the EPIC compiler) through PCIe with Larrabee. It's really getting exciting in the CPU world. What's NV going to do? What's AMD's Bulldozer going to be like? It's exciting times; changes are occurring so fast now. Who knows what can happen.
 

Idontcare

Elite Member
Oct 10, 1999
Originally posted by: Nemesis 1
Ya, it's really exciting because Larrabee isn't that far off. Then we will know. That's what I meant by Nehalem possibly leveraging Elbrus tech (the EPIC compiler) through PCIe with Larrabee. It's really getting exciting in the CPU world. What's NV going to do? What's AMD's Bulldozer going to be like? It's exciting times; changes are occurring so fast now. Who knows what can happen.

In theory I suppose the compiler could be used to better leverage anything plugged into a PCIe slot (assuming the IC on the PCIe card is supported by the compiler).

Meaning NVIDIA, ATI, Larrabee, PhysX, Killer NIC...they could all, to varying degrees of utility, be "drafted" by an Elbrus compiler as a resource for pumping numbers through FPU and INT pipes.

It's not impossible, just not clear to me yet whether Intel would even want to go down that path. The software angle is really IBM's gig; Intel seems intent (to date, IPF excluded) on letting brute hardware carry them to higher gross margins.
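To make that "drafting" idea concrete, here is a minimal, purely hypothetical C sketch of a runtime routing an FP kernel to whatever PCIe compute device is present. Every function name here is invented for illustration and corresponds to no real Intel or Elbrus API.

```c
/* Purely hypothetical sketch: no such Intel/Elbrus API exists publicly.
 * It only illustrates the idea of a runtime routing an FP kernel to
 * whatever compute device happens to be present on the PCIe bus. */
#include <stdio.h>

#define N 8

/* Plain CPU fallback: y = a*x + y */
static void cpu_saxpy(float a, const float *x, float *y, int n) {
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

/* Stand-in for probing the bus for an accelerator (GPU, PhysX card...).
 * A real runtime would enumerate PCIe devices; here we just say "none". */
static int accel_present(void) { return 0; }

/* Imaginary accelerator path; in this sketch it simply falls back. */
static void accel_saxpy(float a, const float *x, float *y, int n) {
    cpu_saxpy(a, x, y, n); /* pretend this ran on the add-in card */
}

int main(void) {
    float x[N], y[N];
    for (int i = 0; i < N; i++) { x[i] = (float)i; y[i] = 1.0f; }

    /* The "drafting" step: pick the best available execution resource. */
    void (*saxpy)(float, const float *, float *, int) =
        accel_present() ? accel_saxpy : cpu_saxpy;

    saxpy(2.0f, x, y, N);
    printf("y[7] = %.1f\n", y[7]); /* 2*7 + 1 = 15.0 */
    return 0;
}
```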
 

Nemesis 1

Lifer
Dec 30, 2006
One should also remember that on Intel's Nehalem some PCIe lanes connect directly to the core. Intel is getting everything in place. I really want Nehalem out yesterday, simply because I can't wait to see what the Israeli team comes up with for Gesher.

The wait for Gesher for me is less than the wait for C2D was. Nehalem will be good. Gesher should be a revolutionary step in processor design.
 

Nemesis 1

Lifer
Dec 30, 2006
I look at what the Israeli team did with the Pro core and I am impressed. Notice I used the word "should", dmens. Only because this is going to be that team's time to really shine, and I expect something revolutionary within Intel's guidelines.
 

BrownTown

Diamond Member
Dec 1, 2005
We know absolutely nothing at all about Gesher. In addition, even Intel does not really know where it will end up, because last I checked the design was not complete and the 32nm node wasn't making anything more interesting than SRAM arrays. So even Intel can only speculate on its expected performance characteristics based on simulations and such, with very little hard data.
 

IntelUser2000

Elite Member
Oct 14, 2003
The New Architecture
To reduce power you need to reduce the number of transistors, especially ones which don't provide a large performance boost. Switching to VLIW means they can immediately cut out the hefty x86 decoders.
Out-of-order hardware will go with it, as it is huge, consumes masses of power, and in a VLIW design is completely unnecessary. The branch predictors may also go on a diet or even be removed completely, as the Elbrus compiler can handle even complex branches. With the x86 baggage gone the hardware can be radically simplified; the limited architectural registers of x86 will no longer be a limiting factor. Intel could use a design with a single large register file covering integer, floating point and even SSE; 128 x 64-bit registers sounds reasonable.
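For readers unfamiliar with the term: a VLIW machine executes compiler-packed bundles of independent operations, which is why the quote says the out-of-order hardware can go. A toy sketch of what a bundle looks like; the format below is invented for illustration and is not Elbrus's or Itanium's real encoding.

```c
/* Toy illustration of a VLIW bundle. The format is invented; it is not
 * Elbrus's or Itanium's real encoding. The point: the COMPILER guarantees
 * the slots hold independent operations, so the hardware can issue all of
 * them in one cycle with no out-of-order machinery. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t int_op;  /* slot 0: integer ALU op            */
    uint32_t fp_op;   /* slot 1: floating-point op         */
    uint32_t mem_op;  /* slot 2: load/store                */
    uint32_t br_op;   /* slot 3: branch, often predicated  */
} vliw_bundle;        /* all four slots issue together     */

int main(void) {
    vliw_bundle b = { 0x01, 0x02, 0x03, 0x00 }; /* placeholder opcodes */
    printf("one %zu-byte instruction word, up to 4 ops per cycle\n",
           sizeof b);
    return 0;
}
```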

The so-called instruction-set "deficiencies" do not exist on desktops. Back when the 486 (or even the Pentium) was out, decoders did take up significant transistors. Now the chips are much bigger and decoders are an insignificant fraction of the die size and transistor count. Look at all the VLIW implementations: Itanium, Crusoe/Efficeon. The CPUs based on the x86 instruction set (A64/Phenom/C2D) stomp them!

Some of the most efficient performance CPUs are x86 CPUs, and they take a small number of transistors. The initial talk of Itanium having 2x more cores is long gone now. VLIW pushes work done by hardware onto the software, the programmers aren't really willing to port for it, and so the infrastructure fails. Transmeta tried to get around that with Code Morphing Software (CMS), but the performance was terrible, so they went under. So designers add more transistors to make up for the lack of proper software optimization, and it ends up the same eventually. I don't think x86 will go away anytime soon. Intel's focus on x86 for their MIDs/UMPCs with Silverthorne, and the dumping of XScale, is an example of that.

The bigger and faster CPUs get, the less significant the differences in instruction sets and the cost of decoders become.
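A quick back-of-envelope on that decoder point. The per-core decode-block transistor count below is an illustrative guess, while the chip totals are the commonly cited figures (486 ~1.2M transistors, Core 2 Duo "Conroe" ~291M).

```c
/* Back-of-envelope only: the 0.3M-transistor decode block is an
 * illustrative guess, not measured die data; the chip totals are the
 * commonly cited figures. */
#include <stdio.h>

int main(void) {
    double decoder      = 0.3e6;  /* assumed x86 decode block, per core */
    double i486_total   = 1.2e6;
    double conroe_total = 291e6;
    double conroe_cores = 2;

    printf("486:    decoder ~ %.0f%% of the chip\n",
           100.0 * decoder / i486_total);
    printf("Conroe: decoders ~ %.1f%% of the chip\n",
           100.0 * conroe_cores * decoder / conroe_total);
    return 0;
}
```

With these assumed numbers the decoder share drops from roughly a quarter of the chip to a fraction of a percent, which is exactly the argument being made.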

We know absolutely nothing at all about Gesher. In addition, even Intel does not really know where it will end up, because last I checked the design was not complete and the 32nm node wasn't making anything more interesting than SRAM arrays. So even Intel can only speculate on its expected performance characteristics based on simulations and such, with very little hard data.

I know Gesher can do 7xDP FP/SSE operations per cycle per core :).
 

Nemesis 1

Lifer
Dec 30, 2006
Originally posted by: BrownTown
We know absolutely nothing at all about Gesher. In addition, even Intel does not really know where it will end up, because last I checked the design was not complete and the 32nm node wasn't making anything more interesting than SRAM arrays. So even Intel can only speculate on its expected performance characteristics based on simulations and such, with very little hard data.



Not true. We know the Israeli team is doing it. We know it's supposed to scale to 16+ cores. We know it's going to be on 32nm. I believe it will be 3D gates, as Intel has always said it would be, just like they said high-k and metal gates on 45nm, which I said would happen when 65nm came out, just because I believed Intel's guidance.

IntelUser2000, THE Elbrus compiler doesn't do away with x86 software, only the hardware, as the Elbrus compiler can morph it. Once it's copied to disk it will be in EPIC. If Intel can make a simpler, faster chip and not lose performance, I think they will go for the higher margins. Not only that, but 16 cores on a chip; those cores will be nothing like Nehalem's. The Elbrus compiler can also read all high-level languages.
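For what it's worth, here is a minimal C sketch of the translate-once-and-cache flow being described, the same basic scheme Transmeta's CMS used. All of the names and structures are invented for illustration; nothing here is from a real Elbrus or Intel product.

```c
/* Minimal sketch of translate-once-and-cache binary translation.
 * Every name and structure here is invented for illustration. */
#include <stdio.h>

#define CACHE_SLOTS 4

typedef struct {
    unsigned x86_id;      /* identity of the original x86 block      */
    char     native[64];  /* "translated" EPIC-style code (stand-in) */
    int      valid;
} tcache_entry;

/* In the scheme described above this cache would persist on disk. */
static tcache_entry tcache[CACHE_SLOTS];

static void translate(unsigned x86_id, char *out) {
    /* The expensive step: in a real system, a full compiler pass. */
    snprintf(out, 64, "native-code-for-block-%u", x86_id);
}

static const char *run_block(unsigned x86_id) {
    tcache_entry *e = &tcache[x86_id % CACHE_SLOTS];
    if (!e->valid || e->x86_id != x86_id) {  /* miss: translate once */
        translate(x86_id, e->native);
        e->x86_id = x86_id;
        e->valid  = 1;
        printf("translating block %u\n", x86_id);
    }
    return e->native;                        /* hit: run native code */
}

int main(void) {
    run_block(42);        /* first run pays the translation cost      */
    puts(run_block(42));  /* second run comes straight from the cache */
    return 0;
}
```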
 

IntelUser2000

Elite Member
Oct 14, 2003
IntelUser2000, THE Elbrus compiler doesn't do away with x86 software, only the hardware, as the Elbrus compiler can morph it. Once it's copied to disk it will be in EPIC. If Intel can make a simpler, faster chip and not lose performance, I think they will go for the higher margins. Not only that, but 16 cores on a chip; those cores will be nothing like Nehalem's. The Elbrus compiler can also read all high-level languages.

It's not going to be that simple. You are speculating, for the most part, about Nehalem/Gesher being related to Elbrus anyway.

-Of course it's going to be on 32nm; it says so on Intel's tick-tock roadmap.
-16+ cores?? You mean like 32 cores?? Unless they want single-thread performance to go back to the level of something like the Athlon 64, or they want a 1000mm2 die, they won't do that.
-A simpler, faster chip that doesn't lose performance?? Like Silverthorne/Niagara/Cell, right?? Their multi-threading performance is good, of course, but their single-threaded performance blows, which us PC users will still care about 5+ years from now, because programmers are lazy and things like games won't be optimized for more than 4 cores for the most part. When you make the core simpler you are bound to lose something.

Even after 30+ years of mainstream computing, a trusted, proven technology is still the way to go. Going radical like Itanium/P4/Crusoe will lead to failures. Of course that may be less true in servers, but desktops?? HA!! There's no magic in CPU design. Intel did not do something radical with Core 2; they developed what they thought was a proven architecture, unlike their NetBurst team.
 

Nemesis 1

Lifer
Dec 30, 2006
No, not on Nehalem. But we have seen Elbrus influence already, in C2D.

1). Macro-fusion.

2). Intel's EFI, used to make Apple's OS run on C2D. Intel's firmware.

Those are just two I know about. I am sure there are more.

I think you're missing what I said. Intel's Larrabee is Intel's first tera-scale product.

It uses 16 simple in-order cores capable of 2 threads each, so that's 32 threads. Intel says it's x86. That doesn't mean it's x86 hardware, only that it reads x86 instructions.

What is Intel going to do with 32 threads in 2008 or 2009? This is where I believe we will see a modified EPIC Elbrus compiler, plus a lot of shared cache and a large vertex engine. I enjoy that people won't accept the handwriting on the wall; it's so much more rewarding when it comes to pass. Also, Intel has recently made a deal with Rambus, so it's not much of a limb to crawl out on to expect XDR memory with Larrabee. It's not that long before Intel's spring IDF. We will know a lot more then about Nehalem and Intel's first tera-scale core, Larrabee. Can't wait. To be right once again.
 

Nemesis 1

Lifer
Dec 30, 2006
Well, just a little help for my case.

It's from X-bit:


The first incarnation of Intel's Tera-Scale initiative was an 80-core Teraflops research chip built using 65nm process technology, whose cores Intel calls tiles due to the fact that they are very simplistic and hardly resemble modern central processing units' cores, organized into a 10x8 2D mesh network and operating at 4GHz clock-speed. Each tile consisted of a processing engine (PE) connected to a 5-port router with mesochronous interfaces, which forward packets between tiles. According to Intel, the 80-core chip has a VLIW (very long instruction word) micro-architecture, which [on the architectural level] is generally similar to the latest processing engine of ATI's code-named R600 graphics processing unit.
 

Nemesis 1

Lifer
Dec 30, 2006
Ya, I know what the tera-scale proof-of-concept project is. But dude, I am talking about this:


Clearing up the confusion over Intel's Larrabee, part II
By Jon Stokes | Published: June 04, 2007 - 10:54PM CT

Another Larrabee-related presentation from Intel has surfaced, this time with even more details on the first iteration of the company's forthcoming GPU/HPC processor and add-in board. Because the information presented in the slideshow dovetails perfectly with what I laid out in my previous article on Larrabee, I'm presenting this short article as a sequel to that one. I'll presume that you've read the first article, so instead of recapping it, I'll just dive right into the new details.

Update: The Larrabee-related slides have now been removed from the linked presentation. Clearly, they weren't supposed to be public, and Intel has now remedied the mistake.

Larrabee as a GPU
The new slides, which actually date to a March 7 presentation by Intel's Ed Davis, indicate that the first Larrabee products will have the following characteristics:

Package size: 49.5mm x 49.5mm
Process: 45nm
Clockspeed: 1.7-2.5GHz
Power: >150W
I mentioned in the previous article that Larrabee would have a "fixed-function" unit that, in its GPU incarnation, would contain some sort of raster hardware. One of the Larrabee slides, excerpted below, shows the texture sampler situated next to each of the chip's two memory controllers.


[Slide: A Larrabee GPU]
In a nutshell, a texture sampler represents one stage in the standard DirectX and OpenGL 3D rendering pipeline. The texture sampler loads texture maps from memory, filters them as necessary for level-of-detail, and feeds them into the pixel processing portion of the pipeline (i.e., shading and rendering).
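To give a feel for what that fixed-function stage computes, here is a simplified single-channel bilinear filter in C. Real samplers add mipmap selection, wrap modes, and multi-channel formats, so treat this as a sketch of the idea only.

```c
/* Simplified sketch of the sampler's core job: bilinear filtering of a
 * single-channel texture. Real hardware adds mipmap selection, wrap
 * modes, and multi-channel formats. */
#include <stdio.h>

#define W 2
#define H 2

static const float tex[H][W] = { {0.0f, 1.0f},
                                 {1.0f, 0.0f} };

/* u, v in [0,1]: blend the four surrounding texels by distance. */
static float sample_bilinear(float u, float v) {
    float x = u * (W - 1), y = v * (H - 1);
    int   x0 = (int)x,     y0 = (int)y;
    int   x1 = x0 + 1 < W ? x0 + 1 : x0;
    int   y1 = y0 + 1 < H ? y0 + 1 : y0;
    float fx = x - (float)x0, fy = y - (float)y0;

    float top = tex[y0][x0] * (1 - fx) + tex[y0][x1] * fx;
    float bot = tex[y1][x0] * (1 - fx) + tex[y1][x1] * fx;
    return top * (1 - fy) + bot * fy;
}

int main(void) {
    printf("center sample = %.2f\n", sample_bilinear(0.5f, 0.5f)); /* 0.50 */
    return 0;
}
```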

Larrabee's "pixel/vertex shaders" are implemented by the in-order cores described in the previous article. Note that in the previous article, I stated that a Larrabee GPU product would have at least 10 such cores. The new slide says that Larrabee products will have from 16 to 24 cores and adds the detail that these cores will operate at clockspeeds between 1.7 and 2.5GHz (150W minimum). The number of cores on each chip, as well as the clockspeed, will vary with each product, depending in its target market (mid-range GPU, high-end GPU, HPC add-in board, etc.).

These simple, in-order cores can do two double-precision scalar floating-point operations per cycle, in addition to the SIMD capabilities that I described in the previous article. Each core contains 32KB of split L1 cache, accessible with 1 clock cycle of latency. It also appears that each core will have 256KB of the chip's large, shared pool of L2 for private (read-write for that core, read-only for the other cores) use, with that cache having a 10-cycle access latency. The cache line width is 64B.

To return to slide 16, all of Larrabee's components (cores, texture sampler, memory controller) will be connected by a ring bus that will be familiar to students of IBM's Cell processor. This ring bus has a width of 256 bytes/cycle.
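Taking the quoted figures at face value, some quick arithmetic helps put them in perspective. The 1- and 10-cycle latencies and the 256 B/cycle ring width come from the article; the 95% L1 hit rate and the 2.0 GHz clock are assumptions for illustration, not from the slides.

```c
/* Quick arithmetic on the quoted Larrabee numbers. Latencies and ring
 * width are from the article; the hit rate and clock are assumed. */
#include <stdio.h>

int main(void) {
    /* Average memory access time, ignoring misses past L2:
       AMAT = L1_latency + L1_miss_rate * L2_latency */
    double l1_lat = 1.0, l2_lat = 10.0, l1_hit = 0.95; /* hit rate assumed */
    printf("AMAT ~ %.1f cycles\n", l1_lat + (1.0 - l1_hit) * l2_lat);

    /* Ring bus: 256 bytes/cycle at an assumed 2.0 GHz clock */
    printf("ring ~ %.0f GB/s\n", 256.0 * 2.0);
    return 0;
}
```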

In line with what I previously described, Intel makes clear in slide 16 that Larrabee will do some amount of ray-tracing with its in-order cores.

From GPUs to HPC
An "easter egg" on slide 16, accessible with any PDF editing software (I used Illustrator) reveals a board-level layout of a Larrabee GPU.


[Slide: Board layout of a Larrabee GPU]
The board shows a Larrabee chip connected to eight banks of GDDR and mounted on a PCIe 2-compatible daughterboard. The board has two power connectors: one for 150 watts and another for 75 watts. There are also two display outs and what appears to be a video in. The apparent package size given is 49.5 x 49.5mm.

In contrast to the daughtercard/GPU design shown in slide 16, a later slide gives a block diagram of a higher-end, HPC-oriented variant of Larrabee that uses Intel's forthcoming common systems interconnect (CSI) to gang together four 24-core processors. It's fairly clear from the block diagram that this layout shows a four-socket server design where all four sockets contain Larrabee parts. Such a design would be one node in a compute cluster that would almost certainly contain general-purpose CPUs from Intel as well.

Interestingly, Intel sees a role for Turbo Memory to boost the performance of these nodes. I'll be talking about Turbo Memory (a.k.a., Robson technology) in a future article, so I'll save that discussion for later.

Overall, the presentation, which was originally unearthed (and subsequently misunderstood) by TGDaily, is an attempt to show part of Intel's developing vision for the era of many-core computing, except Intel prefers the term "tera-scale" to "many-core." The presentation also includes some discussion of the company's research from the 80-core "Polaris" prototype project. Recall that, unlike Larrabee, the Polaris chip has a 2D mesh interconnect that makes use of a crossbar switch. The fact that Polaris is as much about system-level interconnects as it is about on-chip interconnects is reinforced in slide 14, which shows a mix of optical fiber and hybrid lasers that are used to get data onto and off of the chip. (For more on Polaris and interconnects, see "Beyond the teraflops: Why Intel really put 80 cores on a single chip").
The presentation also includes some details about Intel's 32nm "Gesher" CPU, due out in 2009. In brief, it's 4-8 cores, 4GHz, 7 double-precision FLOPs/cycle (scalar + SSE), 32KB L1 (3 clocks), 512KB L2 (9 clocks), and 2-3MB L3 (33 clocks). The cores are arranged on a ring bus, just like Larrabee's, that transmits 256 bytes/cycle.

________________________________________________________________

You see where it says, on the Polaris tera-scale system (the one you're referring to):

HYBRID LASER. This was a big deal, and it actually moves us a lot closer to the proof-of-concept tera-scale chip than many realize.
For now, though, the Larrabee tera-scale project is all we will see within the next year plus some months.

Also, most are saying that each Larrabee core will do 4 threads. For now I will stick to 2 threads till I see better information; if it's 4 threads, great.

Gesher looks interesting, doesn't it?
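Some quick arithmetic on the quoted Gesher figures (7 DP FLOPs/cycle/core, 4 GHz, 4-8 cores); this is straight multiplication of the article's numbers, nothing more.

```c
/* Straight multiplication of the quoted Gesher figures: 7 DP FLOPs per
 * cycle per core, 4 GHz, 4-8 cores. No assumptions beyond the article. */
#include <stdio.h>

int main(void) {
    double flops_per_cycle = 7.0, clock_ghz = 4.0;
    for (int cores = 4; cores <= 8; cores += 4)
        printf("%d cores -> %.0f peak DP GFLOPS\n",
               cores, flops_per_cycle * clock_ghz * cores);
    return 0;
}
```

That puts the quoted design in the 112-224 peak DP GFLOPS range per chip.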
 

Nemesis 1

Lifer
Dec 30, 2006
I will bold one paragraph here.

By Wolfgang Gruener

What we know about Larrabee

Intel has recently shared more information with the public about its intent in the realm of general-purpose GPUs (GPGPU). In a presentation from March 7 of this year, Intel discussed its data-parallelism programming implementation called Ct. The presentation discusses the use of flat vectors and very long instruction words (VLIW, as utilized in ATI/AMD's R600). In essence, the Ct application programming interface (API) bridges the gap by allowing it to work with existing legacy APIs and libraries as well as co-exist with current multiprocessing APIs (Pthreads and OpenMP), yet provides "extended functionality to address irregular algorithms."



There are several things to point out from the image above, which is a block diagram of a board utilizing Larrabee. First is the PCIe 2.0 interface with the system. Intel is currently testing PCIe 2.0 as part of the Bearlake-X (Beachwood) chipset (commercial name: X38), which could be coming out as part of the Wolfdale 45 nm processor rollout late this year or early in 2008. Larrabee won't arrive until 2009, but our sources indicate that if you buy an X38-based board, you will be able to run a Larrabee board in such a system.

In the upper right-hand corner the power connections indicate 150 watts and 75 watts. These correspond to the 8-pin and 6-pin power connections that we have seen on the recent ATI HD 2900 XT. Intel expects the power consumption of such a board to be higher than 150 watts. There are video outputs to the far left as well as a video in. Larrabee appears to have VIVO functionality as well as HDMI output, based on the audio-in block seen at the top left.
A set of BSI connections sits next to the audio-in connection. We are not positive what the abbreviation stands for, but we speculate that these are connections for using these cards in parallel, like ATI's CrossFire or Nvidia's SLI technologies. Finally, there is the size of the processor (package). That is over twice the size of current GPUs, as ATI's R600 is roughly 21 mm by 20 mm (420 mm²). Intel describes the chip as a "discrete high end GPU" on a general-purpose platform, using at least 16 cores and providing a "fully programmable performance of 1 TFlops."

Moving on, we can see that Larrabee will be based on a multi-SIMD configuration. From other discussions about the chip across the net, it would seem that each core is scalar but works using Vec16 instructions. That would mean that, for graphics applications, it could work on blocks of 2x2 pixels at a time. These "in-order" execution SIMDs will have floating-point 16 (FP16) precision as outlined by IEEE 754. Also of note is the use of a ring memory architecture. In a presentation by Intel Chief Architect Ed Davis called "tera Tera Tera", Davis outlines that the internal bandwidth on the bus will be 256 B/cycle and the external memory will have a bandwidth of 128 GB/s. This is extremely fast and achievable based on the 1.7-2.5 GHz projections for the core frequency. Attached to each core will be some form of texturing unit as well as a dynamically partitioned cache and a ring stop on the memory ring.
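As a sanity check on the "1 TFlops" claim quoted above: with 16 cores of 16-wide SIMD, the peak falls out of simple multiplication. The 2 ops per lane per cycle (a multiply-add) is my assumption, not stated in the article.

```c
/* Sanity check of the "1 TFlops" figure from the quoted specs: 16 cores,
 * 16-wide SIMD, 1.7-2.5 GHz. The 2 ops/lane/cycle (a multiply-add) is an
 * assumption, not stated in the article. */
#include <stdio.h>

int main(void) {
    double cores = 16, lanes = 16, ops_per_lane = 2.0; /* MADD assumed */
    double clocks_ghz[] = { 1.7, 2.1, 2.5 };

    for (int i = 0; i < 3; i++)
        printf("%.1f GHz -> %.0f GFLOPS\n", clocks_ghz[i],
               cores * lanes * ops_per_lane * clocks_ghz[i]);
    return 0;
}
```

Under those assumptions the chip lands between roughly 870 and 1280 GFLOPS across the quoted clock range, so the 1 TFlops figure is at least arithmetically plausible.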

In the final image below you will notice that each device will have 17 GB/s of bandwidth per link. These links tie into a next-generation southbridge titled "ICH-n", as the exact version is yet to be determined. From discussions with those in the industry, it would appear that the external memory might not be soldered onto the board but might in fact be plug-in modules. The slide denotes DDR3, GDDR, as well as FBD or fully buffered DIMMs. It will be interesting to see in what form this will actually be implemented, but that is the fun of speculation.

The current layout of project Larrabee is a deviation from previous Intel roadmap targets. In a 2005 whitepaper entitled "Platform 2015: Intel Processor and Platform Evolution for the Next Decade", the company outlines a series of XScale processors based on Explicitly Parallel Instruction Computing, or EPIC. Intel has deviated slightly from its initial roadmap since the release of this paper: Intel sold XScale to Marvell last year, which makes it a rather unlikely product for Larrabee, and could have opened up the discussion for other processing units.

What is interesting is that rumors that Intel was looking for talent for an upcoming "project" involving graphics started circulating more than a year and a half ago. In August of last year, you could apply for positions on CareerBuilder and Intel's own website. A generic job description currently exists on Intel's website.



Concluding note

While this is an interesting approach to graphics, physics, and general-purpose processing, we will see the meat in the final product, as well as in its success in gaining acceptance among independent software vendors (ISVs). In our opinion, the concept of the GPGPU is the most significant development in the computing environment in at least 15 years. The topic has been gaining ground lately, and this new implementation from Intel could take things to a whole new level. As for the graphics performance, only time will tell.

It will be interesting to see which role Nvidia will play in Intel's strategy. Keep a close eye on this one.

 

Phynaz

Lifer
Mar 13, 2006
GPGPU.

What a joke.

A general purpose piece of specialized hardware.

Kind of like jumbo shrimp I suppose.
 

Nemesis 1

Lifer
Dec 30, 2006
I thought you were an enthusiast. If this stuff doesn't intrigue you, not much will. Or is it that you don't want to see a Larrabee GPU? This stuff sounds great to me.

For years now I have heard guys saying they want to see more competition, because it spurs innovation. Then, when something truly innovative comes along, you reply with "tera-scale is only a concept." Well, Larrabee is more than a concept, and it's tera-scale. It's real and it's coming. Then you say:

GPGPU.

What a joke.

A general purpose piece of specialized hardware.

Kind of like jumbo shrimp I suppose.


Did you read anything? It's much more than a general purpose piece of specialized hardware.


 

IntelUser2000

Elite Member
Oct 14, 2003
HAHAHA. Whatever was left of my opinion of you as a legitimate tech guy was vaporized by this: "We will know a lot more then about Nehalem and Intel's first tera-scale core, Larrabee. Can't wait. To be right once again." When somebody claims they are right solely based on personal bias, you start to doubt them.

BTW, that Jon Stokes is the same guy that claimed that Silverthorne is based on Penryn, and Moorestown on Nehalem. LOL. When the ISSCC 2008 preview revealed it's a 2-issue in-order processor, he immediately assumed it'll be slower than the Cortex-A8 since it'll be based on Intel's Pentium. HAHAH. You can't be more ignorant and stupid than that. Maybe the K5 performs as well as Core because it has the same 4-issue out-of-order design.

Who knows why Intel went with VLIW on the Tera-Scale project?? Let me guess: I think it's because they wanted the core as small as possible so they could fit enough cores to claim that "Teraflop" performance, which is really idiotic since it only achieves peak throughput in extremely limited situations.

Maybe Intel wants to go back to IA-64 because they assume by then AMD will be gone and there'll be no competition on the desktop, so who cares if Intel makes a slow, hot CPU?? There's no competition, right??

In truth, I believe the Core 2 introduction made us think that Intel is more capable than we thought before. I have used their other products. Lots of people claim that Intel's desktop mobos are stable, blah blah. Maybe they were good back in the Pentium MMX era. Now I sometimes wonder how a company this big can't fix such minor bugs. Their boards had problems related to the mic all the way from the 915 to the 965. Despite the claims of being completely different chipsets, the same errata carried over from generation to generation.

I don't like to quote this site: http://www.fudzilla.com/index....=view&id=5037&Itemid=1

But if the rumors are true, won't that be fun??(Hint: No, not really)
 

Nemesis 1

Lifer
Dec 30, 2006
This is a discussion on Nehalem/Larrabee. I am only speculating on info available to all. Yes, there is contradicting information. But I look at what I know to be FACT.

One main fact: Intel did buy Elbrus. If you know anything about this compiler, it's really that good. We all know for a fact that Intel has wanted to go to EPIC (VLIW).

But AMD64 crossed them up. So Intel bought Elbrus after AMD64 came out.

From that tech comes Intel's EFI, Intel's firmware for Apple. Also macro-fusion, and whatever else Intel has used.

This Ct that is mentioned for Larrabee smells like, sounds like, and therefore is likely derived from the Elbrus compiler. It was mentioned in both posted articles, among others, that Larrabee uses VLIW (EPIC), yet Intel says it can do x86 instructions = Elbrus. Tons of shared cache = Elbrus. Vertex engine = Elbrus. The R600 uses VLIW but it will never read x86 instructions.

I'm presenting a good speculative look at what might be.

So you flame.

Instead, just offer your thoughts on where this tech is going based on all known info, whether it's good or bad information. If you find a solution that works for all the available info, odds are you're going to be very close to the real deal. The Elbrus compiler is the only solution that fits all the known info.
 

dmens

Platinum Member
Mar 18, 2005
But AMD64 crossed them up. So Intel bought Elbrus after AMD64 came out.

From that tech comes Intel's EFI, Intel's firmware for Apple. Also macro-fusion, and whatever else Intel has used.

That's a load of crap. EFI and macro-fusion are two completely different things, neither of which has anything at all to do with compilers.
 

Nemesis 1

Lifer
Dec 30, 2006
Macro-fusion is front-end hardware. Macro-fusion is the ability to join two x86 instructions together into a single micro-op. This improves CPU performance and lowers CPU power consumption, since the CPU executes only one micro-op instead of two.
Macro-fusion was new in C2D. Intel also owns Elbrus hardware, ya know. You can read about the hardware; it may shock ya into reality. Some of the cache stuff is really good. Read about it.
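To illustrate what macro-fusion actually acts on: the compare-and-branch pair at the bottom of a counted loop like the one below is the canonical fusible pattern on Core 2. The assembly in the comments is typical compiler output, shown for illustration only.

```c
/* The compare-and-branch pair at the bottom of a counted loop is the
 * canonical pattern Core 2's front end fuses into one micro-op. The
 * assembly in the comments is typical compiler output, illustrative only. */
#include <stdio.h>

int main(void) {
    int sum = 0;
    for (int i = 0; i < 100; i++)  /* loop back-edge compiles to:      */
        sum += i;                  /*   cmp eax, 100                   */
                                   /*   jl  .loop   <- cmp+jl decode   */
                                   /*   into a single fused micro-op   */
    printf("sum = %d\n", sum);     /* 0+1+...+99 = 4950                */
    return 0;
}
```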

In January 2006, Apple, Inc. shipped their first Intel-based Macintosh computers. These systems use EFI and the EFI Framework instead of Open Firmware, which had been used on their previous PowerPC-based systems.[8] On April 5, 2006 Apple released Boot Camp, which produces a Windows XP driver disk as well as a non-destructive partitioning tool to help users easily install Windows XP. A firmware update was also released which added legacy BIOS support to its EFI implementation. Subsequent Macintosh models shipped with the newer firmware. Now all current Macintosh systems are able to boot legacy BIOS operating systems like Windows XP and Vista. And Macs can run all things x86, yet they only changed firmware. That's some nice firmware.

So all of a sudden x86 cores run on a UNIX-based operating system. Solaris on SPARC can handle high-end jobs where x86/Linux doesn't have a look-in. Note SPARC: recall that Sun has a tech agreement with Elbrus, and Intel now owns Elbrus.

As I said before, the line is being drawn in the sand: SUN, APPLE, and INTEL all happy together, all because of a small Russian company called Elbrus.
 

CTho9305

Elite Member
Jul 26, 2000
Originally posted by: IntelUser2000
BTW, that Jon Stokes is the same guy that claimed that Silverthorne is based on Penryn, and Moorestown on Nehalem. LOL. When the ISSCC 2008 preview revealed it's a 2-issue in-order processor, he immediately assumed it'll be slower than the Cortex-A8 since it'll be based on Intel's Pentium. HAHAH. You can't be more ignorant and stupid than that. Maybe the K5 performs as well as Core because it has the same 4-issue out-of-order design.

He may not be good at predicting the future, but he is absolutely not stupid.
 

zsdersw

Lifer
Oct 29, 2003
Originally posted by: Nemesis 1
Macro-fusion is front-end hardware. Macro-fusion is the ability to join two x86 instructions together into a single micro-op. This improves CPU performance and lowers CPU power consumption, since the CPU executes only one micro-op instead of two.
Macro-fusion was new in C2D. Intel also owns Elbrus hardware, ya know. You can read about the hardware; it may shock ya into reality. Some of the cache stuff is really good. Read about it.

It's the capstone of hubris for you to talk down to dmens about stuff like macro-op fusion, etc.

 

Nemesis 1

Lifer
Dec 30, 2006
How's that? I have read his posts; they're very good. I think he works for Intel, but that doesn't mean he knows where the macro-fusion hardware came from. Elbrus tech is, after all, Intel's now. One simply has to read about the Elbrus CPU and what its hardware could do. If Intel owns tech and hardware that would benefit them, they will use it.

Half my original post didn't post; why, I don't know. Read about the Elbrus caches, you might get a surprise. No Americans want to give the Russians any credit. Too bad. When Sun made its deal with Elbrus it wasn't good, as Russia was still the USSR. Now that the Cold War is over and the Iron Curtain is down, no one cares.

Software is software. Firmware is software kept in memory so as to interface between hardware and OS. The Elbrus software compiler can read all high-level languages. Seeing as the people who wrote the Elbrus compiler software now work for Intel, it was not a problem for Intel to use some of that software in firmware. Throw gobs of cache on the processor and you can decode in real time. The K5 sucked because its decoders sucked at x86-to-RISC translation, so a 4-issue core performed really, really poorly. Plus it was a year late to the party.

 

Idontcare

Elite Member
Oct 10, 1999
Originally posted by: Nemesis 1
How's that?

It's all well and good to speculate on what the future could potentially hold, but there are those who know very well what the future more likely holds...because they are actually part of creating it.

Those people may not desire to share this info with you, be it for personal reasons or professional ones, but you personally should want to operate with that info.

The absence of info is info as well. dmens says a lot when he says nothing; it would serve you well to listen to the silence more often.

You can live and die by all the press releases and tentative connections made in the media between every company on the planet, or you can give a little credit to (put a little faith in) the people who are actually on the ground working with this stuff.

Intel's next-gen stuff is not going to be created by press releases from 5 years ago. It is going to be created by the people working with this stuff right now. Why ignore them, or put more confidence in your tentative theories than in theirs?

Who is more likely to have insider info that they can leverage to guide them in their postings (as in knowing what NOT to talk about, or what TO talk about): someone in the trenches right NOW working on designing this stuff, or some old codger who likes to piece together press releases to paint a glorious picture of Intel's righteous future?

I am no longer in the trenches, but I was in them recently enough to know how little the crap on the web reflects what is going on inside the company. You are only fooling yourself by dismissing the "opinions" of people who are merely "expressing opinions and not speaking for their employer".