AnandTech Forums > Hardware and Technology > CPUs and Overclocking

Old 11-09-2012, 09:32 PM   #51
Edrick
Golden Member
 
Edrick's Avatar
 
Join Date: Feb 2010
Location: Boston MA
Posts: 1,525
Default

Quote:
Originally Posted by Olikan View Post
if they manage to do this or don't, it doesn't matter... it's the competition that made Intel at least think about doing it
I think that money (i.e., sales) has more to do with it than competition. If AMD went out of business and Intel had the x86 market all to itself, do you honestly feel it would slow down its release cycle? Of course not; it has to sell you a new CPU every few years, or else Intel stops making money. They are, in fact, in competition with themselves more than anything else.
__________________
Core i7 4770
Gigabyte Z87X-UD3H (F5 BIOS)
G.Skill RipjawsZ 8GB @ 2400mhz 10-12-12-31
Gigabyte GTX 660
Samsung 840 Pro 256GB
Antec Eleven Hundred
Edrick is offline   Reply With Quote
Old 11-09-2012, 09:47 PM   #52
CTho9305
Elite Member
 
Join Date: Jul 2000
Posts: 9,214
Default

Quote:
Originally Posted by pm View Post
I worked on Poulson. Powering it up for the first time will stick in my head as one of the coolest things that I have done while working at Intel. There was a point right after it came back from the fab, we had packaged parts and we socketed the first Poulson that went into a system for the first time and then everyone in the control room turned and looked at me and said "ok, Patrick, get it running" and I did. And that, my friends, was truly awesome. I handed off control to another guy less than 5 minutes later, and overall, I played a pretty minor role in things, but I was the very first person to get it to reset on a system and that will stick in my head as pretty cool.
You guys build some crazy-fast caches.
__________________
*Not speaking for any company
CTho9305 is offline   Reply With Quote
Old 11-09-2012, 09:51 PM   #53
WhoBeDaPlaya
Diamond Member
 
WhoBeDaPlaya's Avatar
 
Join Date: Sep 2000
Location: Dallas, TX
Posts: 5,525
Default

Quote:
Originally Posted by CTho9305 View Post
You guys build some crazy-fast caches.
They're so fast, setups become holds, and holds become setups!
__________________
BlueSmoke.net
Silverstone Temjin TJ09S - 4770K @ 4.55GHz - MSI Z87-GD65 - 32GB Crucial Ballistix Elite 1866 CL9 - MSI Twin Frozr 780 - X-Fi Titanium Fatal1ty Pro - BFG EX-1000 - Antec Kuhler 920 - 6x Noctua NF-P12 - Samsung 840 500GB - Samsung F3 1TB - OptiArc 7261S - Logitech G19 - 2x Klipsch Promedia 2.1
WhoBeDaPlaya is offline   Reply With Quote
Old 11-10-2012, 12:05 AM   #54
lambchops511
Senior Member
 
Join Date: Apr 2005
Posts: 659
Default

Quote:
Originally Posted by Edrick View Post
They are, in fact, in competition with themselves more than anything else.
Sad but true.
lambchops511 is offline   Reply With Quote
Old 11-10-2012, 04:55 AM   #55
Olikan
Golden Member
 
Join Date: Sep 2011
Posts: 1,730
Default

Quote:
Originally Posted by ShintaiDK View Post
Competition also has a tendency to drive quality to the bottom, something that is shown way too often, often as a chance to destroy another competitor with higher-quality products in the short run.

http://www.legitreviews.com/news/14087/
Olikan is offline   Reply With Quote
Old 11-10-2012, 08:00 AM   #56
Cogman
Diamond Member
 
Cogman's Avatar
 
Join Date: Sep 2000
Location: A nomadic herd of wild fainting goats
Posts: 9,714
Default

Quote:
Originally Posted by SunnyD View Post
Itanium: A solution in search of a problem.
It solves the same problem that the x86 architecture solves... That is its problem.

Itanium is a truly remarkable architecture and it is a shame that it never became consumer grade. I could see it easily beating the pants off of x86 given the same amount of research effort.
__________________
CogBlog - Random Babblings of Cogman mainly focused on software.
Cogman is offline   Reply With Quote
Old 11-10-2012, 08:04 AM   #57
ShintaiDK
Diamond Member
 
ShintaiDK's Avatar
 
Join Date: Apr 2012
Location: Copenhagen
Posts: 9,552
Default

Quote:
Originally Posted by Cogman View Post
It solves the same problem that the x86 architecture solves... That is its problem.

Itanium is a truly remarkable architecture and it is a shame that it never became consumer grade. I could see it easily beating the pants off of x86 given the same amount of research effort.
Instead we got a semi-bolted-on x64 with around 20-30 years more legacy and luggage.
ShintaiDK is offline   Reply With Quote
Old 11-10-2012, 02:13 PM   #58
Cerb
Elite Member
 
Cerb's Avatar
 
Join Date: Aug 2000
Posts: 14,670
Default

Quote:
Originally Posted by ShintaiDK View Post
So by your logic Intel should have doubled performance from P4 to Core 2? Yet it was "only" 40%? And actually only 20% compared to Pentium-M. Not exactly fitting into the competition conspiracy, is it?
http://www.anandtech.com/bench/Product/93?vs=54

Double or more in quite a few cases, and a similar difference in release years, even (I'm not trying to twist numbers, either--E6550 v. 660 is even worse for the P4). So, not his logic, but measured performance, even against the EE, which, thanks to its big cache, still holds up today as an alright light-use desktop CPU.

TBF, though, that was a bit of a misstep on Intel's part (Netburst in general), rather than typical generational improvements. The Pentium-M comparison is a much more fair one. Not only that, but for what most of us use our computers for (prepackaged binaries with optimizations balanced for many CPUs, and more irregular branching than loopy data crunching), even this latest and greatest Itanium would almost certainly run like a snail, so comparing it to our typical 10% gen over gen now isn't really fair.

Quote:
Originally Posted by ShintaiDK View Post
If competition is the reason, why are ARM cores so dull and turtle slow in terms of performance increase?
Um, they're not? A8->A9->A15 were all pretty significant jumps. The absolute performance is just stuck very low, and I don't see where the money would come from to change that too much.

Quote:
Originally Posted by WhoBeDaPlaya View Post
Details in ISSCC paper :
http://www.intel.com/content/dam/www...sscc-paper.pdf

This is Intel's 20ft dildo of the CPU world
Most computer buyers want the best x86 Intel can offer. Where's the R&D playground? As long as Itanium is profitable, Intel will keep adding better low-level RAS to it, then migrate it down over a couple of generations to the Xeons. IA64 was never fundamentally good, but it has to exist to keep running software that HP's users are tied to, so why not go crazy with it? No matter how great Intel's internal R&D projects get to be, there's always something to be said for bringing a design out to production, to make sure certain features really work.

Itanium is dead, for any mass market user, and has been since about the time it was released. But it's still got niches, and still has living software from companies that used computer systems supported by companies that were gobbled up, over time, by Compaq and HP. While it never met any of its hopes and dreams, because they were all about magic pixie dust*, Intel does in fact profit off of Itanium, so they'll keep improving it to keep profiting, until they can wean HP's users off of it, to x86, which could very well take another 10+ years.

Quote:
Originally Posted by Cogman View Post
It solves the same problem that the x86 architecture solves... That is its problem.

Itaniumn is a truly remarkable architecture and it is a shame that it never became consumer grade. I could see it easily beating the pants off of x86 given the same amount of research effort.
Will people pay for quad channel RAM in low-end computers, where they can currently get by with a single channel? It would be like P4 Celerons with SDR all over again. No amount of research effort has any possibility of fixing the problem that dynamic and static binary sizes are huge, and brute force (more RAM bandwidth and IOs, and much larger caches) is the only way around that. A quality 32-bit-instruction-word (or smaller) RISC, without the crazy idea of compiler-based parallelism and speculation, would have had some real promise. The only viable solution for IA64 would be to come up with a suitably cheap RAM technology that is as fast as on-chip caches--just moving it away from the chip to fill in the DIMM slots pretty much precludes that.

Quote:
Originally Posted by ShintaiDK View Post
Instead we got a semi bolted on x64 with a round of 20-30 years more legacy and luggage.
And it is awesome, complete with the 12345 password!

Really, though, if Intel had actually made something better, we might have gotten away with an x86 emulation layer. On both x86 and non-x86 fronts, they decided not to (or, rather, put too much faith in what wasn't going to work out, without enough plan Bs), so here we are.

* It's one thing when solving a problem is known to be doable, but hard. It's another when it's not even known to be doable, and compiler-only IPC improvements with the memory wall upon us is that latter kind of problem.
__________________
"The computer can't tell you the emotional story. It can give you the exact mathematical design, but what's missing is the eyebrows." - Frank Zappa
Cerb is online now   Reply With Quote
Old 11-10-2012, 02:31 PM   #59
ShintaiDK
Diamond Member
 
ShintaiDK's Avatar
 
Join Date: Apr 2012
Location: Copenhagen
Posts: 9,552
Default

Quote:
Originally Posted by Cerb View Post
http://www.anandtech.com/bench/Product/93?vs=54

Double or more in quite a few cases, and a similar difference in release years, even (I'm not trying to twist numbers, either--E6550 v. 660 is even worse for the P4). So, not his logic, but measured performance, even against the EE, which, thanks to its big cache, still holds up today as an alright light-use desktop CPU.
Why are you comparing products that are 2½ years apart?

It's like doing this comparison:
http://www.anandtech.com/bench/Product/701?vs=54


Quote:
Originally Posted by Cerb View Post
And it is awesome, complete with the 12345 password!

Really, though, if Intel had actually made something better, we might have gotten away with an x86 emulation layer. On both x86 and non-x86 fronts, they decided not to (or, rather, put too much faith in what wasn't going to work out, without enough plan Bs), so here we are.
The problem here is that you need a tech that's 3-5 times faster than the old one to innovate. Else we are stuck in the past. x64 was most likely made on a napkin at lunch, and is a true example of the risk vs. reward issue with competition.

Quote:
Originally Posted by Cerb View Post
Um, they're not? A8->A9->A15 were all pretty significant jumps. The absolute performance is just stuck very low, and I don't see where the money would come from to change that too much.
So they are turtle slow in increase. Thanks.

Last edited by ShintaiDK; 11-10-2012 at 02:36 PM.
ShintaiDK is offline   Reply With Quote
Old 11-10-2012, 05:20 PM   #60
Cerb
Elite Member
 
Cerb's Avatar
 
Join Date: Aug 2000
Posts: 14,670
Default

Quote:
Originally Posted by ShintaiDK View Post
Why are you comparing products with 2Ĺ years apart?
Because the following were being mentioned, and their performance compared:
1. The Pentium 4, which stopped speeding up about 2006.
2. The Pentium-M, which also stopped speeding up about that time.
3. The Core 2, which stopped speeding up about 2 years later than that.

Therefore, either release P4 v. release C2D, or mature P4 v. mature C2D, were the fairest options, since no P-M was there to choose.

Quote:
Its like doing this comparision:
http://www.anandtech.com/bench/Product/701?vs=54
That would be around 4 years, and nowhere was IB being mentioned, so no, it would not at all be the same. I did not include any P-M bench, because I found none there.

Quote:
The problem here is that you need a tech that's 3-5 times faster than the old one to innovate. Else we are stuck in the past. x64 was most likely made on a napkin at lunch, and is a true example of the risk vs. reward issue with competition.
A favorite quote of mine, by Felix von Leitner:
"If you canít explain it on a beer coaster, itís too complex."

If it can be made on a napkin, and other engineers can get behind it, that doesn't sound bad at all. I'll take that over pie in the sky any day.

Yet, I would very much disagree with your claim about needing 3-5x faster to innovate. To innovate, you need something different that people will want to buy. There's plenty of evidence that processors that aren't faster have managed to do that quite well, by having other useful properties others don't, such as lower cost and ease of integration with otherwise custom hardware.

To be 3-5x faster, you've got to figure out how to get there, and right now, nobody has a way that works. IA64's didn't, and other predicating schemes don't even seem to simulate well enough to catch on (wish branches probably being the most promising, as of late). Nobody has yet figured out a magic ILP pill for CPUs, short of all RAM being on-die or on-package SRAM. Plenty of work has gone into it, and we have gotten better compilers, better CPUs, and working transactional memory, for some of that trouble, but no massive performance boost.

x86-64 got rid of the worst legacy crap, then took all the good things x86 has, and didn't screw them up. IA64 threw out all the good (code density, efficient immediates, nice MMU/PTE setup, not relying on profiled builds for decent performance, etc.), and didn't add anything good itself other than short pipelines (part of what made it a nice number cruncher), and some super duper RAS (to help kill off competition, which worked), while taking RISC to a silly extreme.

ATM, performance is very much limited by R&D. We're close enough to the physical limits that no one without hundreds of millions of dollars to put into it, will be able to come close to what Intel has pulled off with x86. Even as it gets slightly better per SDRAM generation, slow memory alone is enough to create a massive barrier to entry, even before any patent concerns start to crop up. When silicon finally gets replaced, maybe we'll see some real shaking up. Until then, no fancy shmancy ISA is going to do more than get a generation or so's worth of performance benefit over another, and even that will require time and money similar to what was put in to that 'other' one's processors.

Quote:
So they are turtle slow in increase.
Stop changing goalposts. An increase, by its very definition, is relative, not absolute. x86 hasn't increased in performance as much as ARM is now since the late 90s. Nearly doubling performance for multiple generations, with only a couple years between them, is not a slow increase in performance. We haven't seen that in our desktops or servers for over 12 years, now.
__________________
"The computer can't tell you the emotional story. It can give you the exact mathematical design, but what's missing is the eyebrows." - Frank Zappa
Cerb is online now   Reply With Quote
Old 11-10-2012, 05:57 PM   #61
ShintaiDK
Diamond Member
 
ShintaiDK's Avatar
 
Join Date: Apr 2012
Location: Copenhagen
Posts: 9,552
Default

Quote:
Originally Posted by Cerb View Post
Because the following were being mentioned, and their performance compared:
1. The Pentium 4, which stopped speeding up about 2006.
2. The Pentium-M, which also stopped speeding up about that time.
3. The Core 2, which stopped speeding up about 2 years later than that.
You completely missed the context obviously.
ShintaiDK is offline   Reply With Quote
Old 11-10-2012, 07:44 PM   #62
Cerb
Elite Member
 
Cerb's Avatar
 
Join Date: Aug 2000
Posts: 14,670
Default

Quote:
Originally Posted by ShintaiDK View Post
You completely missed the context obviously.
The CPUs actually performed much better than stated; the scaling was actually much higher than stated. ARM's performance increases have been huge (but when will they hit a wall?). That x86 was 100% terrible and should have been replaced by a fundamentally unproven (still, today) concept was and still is wrongheaded. That x86 was, or is, bad in any way but decoder width, still has yet to be borne out. Looking to an ISA to solve a physical problem is denying reality: power, L and C, SRAM, DRAM, flash and platters are fundamentally limiting performance, and scaling will not be substantially greater for anything that can't side-step those barriers.

If it scaled easily, it wouldn't be nearly lock-step with x86 in performance scaling. x86 has scaled up every bit as well, thus far, if not better (you were even the one to link to the benches showing it).
Quote:
The reason why Itanium scales better is because (...)
Yet, that assumption has yet to be shown true. The ILP needs to exist for wider issue to do anything, and then the compiler needs to be able to output efficient execution paths for it that can utilize the ILP, and then the instructions and data need to both be in low-level caches at just the right times. Algorithms for compilers that can figure that out with dynamic data sets may well be in the realm of unsolvable problems. Having a wider CPU doesn't make any of the software run faster.
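
A minimal sketch of that point in C (hypothetical code, not from any real workload or compiler output): no amount of issue width or clever static scheduling helps the first loop, because every step depends on the previous one, while the second loop exposes independent work that a wide core can actually use.

Code:
/* Hypothetical illustration: the first loop is one long dependency chain
 * (each iteration needs the previous result), so a 6-wide or 12-wide core,
 * or an EPIC compiler packing huge bundles, can't issue more of it per
 * cycle than a narrow core can.  The second loop keeps four independent
 * accumulators in flight, so extra width actually gets used. */
#include <stddef.h>

double serial_chain(const double *a, size_t n) {
    double acc = 0.0;
    for (size_t i = 0; i < n; i++)
        acc = acc * 1.000001 + a[i];    /* every step waits on the last */
    return acc;
}

double independent_chains(const double *a, size_t n) {
    double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
    size_t i;
    for (i = 0; i + 4 <= n; i += 4) {   /* four chains in parallel */
        s0 += a[i];
        s1 += a[i + 1];
        s2 += a[i + 2];
        s3 += a[i + 3];
    }
    for (; i < n; i++)                  /* leftover elements */
        s0 += a[i];
    return (s0 + s1) + (s2 + s3);
}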

That said, I'm not sure what Olikan was going for. HP would not be happy with nothing new, and Intel would make less money that way. It was possible to increase performance by X, while maintaining profitability, so Intel increased performance by X. Those customers using Itanium will be more likely to upgrade and/or expand and/or not jump ship, as long as there is more performance around the corner. Even if Intel has a complete demise planned for IA64, they'd need to keep up with a few more generations of Itanium just to handle a smooth conversion.
__________________
"The computer can't tell you the emotional story. It can give you the exact mathematical design, but what's missing is the eyebrows." - Frank Zappa
Cerb is online now   Reply With Quote
Old 11-11-2012, 08:00 AM   #63
Cogman
Diamond Member
 
Cogman's Avatar
 
Join Date: Sep 2000
Location: A nomadic herd of wild fainting goats
Posts: 9,714
Default

Quote:
That x86 was, or is, bad in any way but decoder width, still has yet to be borne out
They go through momentous efforts in order to make x86 scale. The whole concept of microcode was born out of x86's inability to scale well. In fact, out-of-order processing was originally an Itanium idea moved over to x86.

I mean, just think about it: the x86 instruction set is so bad that it is cheaper for Intel to essentially translate x86 into another instruction set and run that than to make pure x86 faster.

Quote:
If it scaled easily, it wouldn't be nearly lock-step with x86, in performance scaling.
Actually, the fact that it has been lock-step with x86 should be proof enough that it scales easily. Intel doesn't put nearly the same amount of research into Itanium as they do into x86. The fact that Itanium has been able to stay competitive with x86 with a severely reduced research staff and budget should be more than ample proof that it scales well.

Quote:
Algorithms for compilers that can figure that out with dynamic data sets may well be in the realm of unsolvable problems.
How do you know? Itanium is a one-of-its-kind architecture. Nobody else does it quite like it. On top of that, it isn't very popular, so there has been no incentive for compiler writers to work with it that way. On the other hand, people once said that it was impossible for compilers to efficiently use SIMD instructions, yet here we are today with most compilers supporting auto-vectorization and working feverishly to make it better and better.
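
For what it's worth, here's a sketch of what that looks like today (assuming something like GCC or Clang at -O3 for an x86-64 target; the exact behavior depends on compiler and target): a plain loop like this one typically gets turned into packed SSE/AVX code with no intrinsics and no hand-holding.

Code:
/* A sketch of the kind of loop modern compilers auto-vectorize on their
 * own.  Built with something like "gcc -O3" or "clang -O3" for x86-64,
 * the generated code is typically packed SSE/AVX multiplies and adds --
 * no intrinsics, no assembly.  (Treat the flags and the exact output as
 * assumptions; they vary by compiler version and target.) */
void saxpy(float *restrict y, const float *restrict x, float a, int n) {
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}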

Quote:
Having a wider CPU doesn't make any of the software run faster.
Then why was the SSE instruction set such a big boon for most processor-intensive applications? Why do CUDA and OpenCL work?
__________________
CogBlog - Random Babblings of Cogman mainly focused on software.
Cogman is offline   Reply With Quote
Old 11-11-2012, 09:36 AM   #64
alyarb
Platinum Member
 
Join Date: Jan 2009
Posts: 2,387
Default

Quote:
Originally Posted by Cogman View Post
They go through momentous efforts in order to make x86 scale. The whole concept of microcode was born out of x86's inability to scale well. In fact, out-of-order processing was originally an Itanium idea moved over to x86.

I mean, just think about it: the x86 instruction set is so bad that it is cheaper for Intel to essentially translate x86 into another instruction set and run that than to make pure x86 faster.
Without knowing much at all about Itanium, I still can't see this. I know the concept of EPIC is old, but OoO is older. A host of Power, MIPS, SPARC and Pentiums were using OoO in systems worldwide while Itanium was still on paper.

How was OoO an original Itanium idea? I thought Itanium's original idea was to let the compiler figure everything out.

The die area devoted to decoding legacy instructions is pretty insignificant, right? It really doesn't cost much to maintain compatibility, and most programs use newer instructions anyway that run on wider hardware. I see the very small cost in area, but where is the cost to performance?

When you are looking at 9 million transistors per square millimeter, how do you compute the performance decrease incurred by Ivy Bridge by keeping legacy decoder hardware onboard?

I do not understand how unused hardware slows down other functions if they are totally independent. For instance, Intel disables AVX on a lot of CPUs, yet all non-AVX instructions decode, execute, and retire at the expected rate.
alyarb is offline   Reply With Quote
Old 11-11-2012, 04:16 PM   #65
Cerb
Elite Member
 
Cerb's Avatar
 
Join Date: Aug 2000
Posts: 14,670
Default

Quote:
Originally Posted by Cogman View Post
They go through momentous efforts in order to make x86 scale. The whole concept of microcode was born out of x86's inability to scale well. In fact, out-of-order processing was originally an Itanium idea moved over to x86.
LOL. Whatever you're smoking, put it down.

Microcode was born out of a desire or need to separate program code from machine-executed assembly language code, so that microarchitectures could change without a program needing to be rewritten from scratch. It has also been used to more compactly implement complicated instructions. Commercial use of strictly microcoded computers--that is, ones that just executed microcode--predates the 8086 by at least 4 years (IBM S/360), and x86 has used microcode since the very first 8086, long before scaling issues ever presented themselves (and then, when they did, x86 was able to hang with the best of them, and has been ever since the P6, so...). I have been unable to find the earliest use of microcode.

Microcode is neither good nor bad, and CPUs have been using it since before it even got named that (the S/360 and VAX were remarkable in that they were designed around such a feature, so are easy to put dates to).

OoOE predates the 8086 by 14 years (CDC 6600), with fully-OoOE predating it by 12 years (the IBM System/360 Model 91), and thus Itanium by 35-37 years. It was never going to be an Itanium feature. OoOE's increasing on-chip complexity was one of several problems that EPIC was supposed to get around by doing it all in software, since they were reaching a limit of 1 IPC (which, BTW, was based on assuming that clock speeds would keep on increasing; not that >1 IPC was impossible to do). Unfortunately, there was no free lunch, so they needed massive caches, wider execution, and multithreading to keep up.

How Itanium came to be (notice the links at the bottom):
http://www.hpl.hp.com/news/2001/apr-jun/itanium.html

Quote:
I mean, just think about it: the x86 instruction set is so bad that it is cheaper for Intel to essentially translate x86 into another instruction set and run that than to make pure x86 faster.
How is that x86 being a bad ISA? Does that mean every other ISA that isn't powering a microcontroller is a bad ISA? By that logic, every ISA for almost every high-performance CPU since the mid 90s has been bad. Have they?

The reality is that RISC works. It runs faster when it's simpler. The other reality is that x86, unlike VAX, was not hobbled by its ISA, so they were able to decode legacy instructions and run them fast on new CPUs. x86 could get most of the advantages of RISC, without moving to a RISC ISA. That's a sign of excellent engineering and business decisions, to me, not a bad ISA (x86 got many things right, but people see VLIs and go, "ew, bad!").

Quote:
Actually, the fact that it has been lock step with x86 should be proof enough that it scales easily. Intel doesn't put nearly the same amount of research into Itanium as they do x86. The fact that Itanium has been able to be competitive with x86 with a severely reduced researching staff and budget should be more than ample proof that it scales well.
Nothing gets as much money spent on it as Intel's x86 development, and Itanium has never improved performance the way we want x86 to improve (significant per-clock-per-thread gains). x86, IA64, and Power have all scaled very similarly over the last decade or so, just taking their own niches into account. What x86 has gotten that no other has is massive gains per thread per GHz with existing binaries. There's no free lunch for that, like multithreading or more cores (also, that's secondary with Itanium's existing users--it was a barrier towards replacing better ISAs for mass market use, but has little to nothing to do with IA64 v. Power v. SPARC, today).

I could throw in AMD's ability to compete with Intel until recently as one example of the same with less money put in, but I have food to attend to (i.e., x86 doesn't require that much; Intel spends that much to make damn sure nobody can ever pull out another K8).

Quote:
Then why was the SSE instruction set such a big boon for most processor intensive applications? Why does CUDA and OpenCL work?
CUDA and OpenCL work the same way multicore CPUs with Hyperthreading work, on GPUs, anyway. SSEn are vector extensions, and are taking advantage of known DLP which has no undetermined dependencies. Anything vectorization can speed up by leaps and bounds more or less has ILP 'solved', or TLP 'solved', or both (EP). It's a totally different problem space than business logic or GUIs, which commonly have 10% or more of the code as conditional branches, and for which each run of a small window of functions might be best executed with a different instruction ordering.
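
A rough sketch of that contrast in C (hypothetical code, not a benchmark): the first function is the shape SSE, CUDA, and OpenCL love, with every element independent; the second is the branchy "business logic" shape, where what happens next depends on the data just read, and wider SIMD or wider issue buys very little.

Code:
/* Hypothetical contrast.  scale_all() is pure data parallelism: every
 * element is independent, so SIMD units or GPU threads map onto it
 * directly.  classify() is the branchy, state-carrying shape: which path
 * runs next depends on the data just read, so vector hardware and extra
 * issue width mostly sit idle. */
#include <stddef.h>

void scale_all(float *v, float k, size_t n) {
    for (size_t i = 0; i < n; i++)
        v[i] *= k;                      /* no cross-iteration dependencies */
}

int classify(const int *codes, size_t n) {
    int state = 0;
    for (size_t i = 0; i < n; i++) {
        if (codes[i] < 0)               /* data-dependent, hard-to-predict branch */
            state = state * 31 + codes[i];
        else if (codes[i] & 1)
            state += 1;
        else
            state -= 1;
    }
    return state;
}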
__________________
"The computer can't tell you the emotional story. It can give you the exact mathematical design, but what's missing is the eyebrows." - Frank Zappa

Last edited by Cerb; 11-11-2012 at 04:20 PM.
Cerb is online now   Reply With Quote
Old 11-12-2012, 05:58 AM   #66
Olikan
Golden Member
 
Join Date: Sep 2011
Posts: 1,730
Default

Quote:
Originally Posted by Cerb View Post
That said, I'm not sure what Olikan was going for.
Where I am going is that competition is good... even for Intel; it makes them realize where their mistakes were.

I can clearly see a meeting where someone asks:
"Why is Red Hat dropping support?"
"All of its customers prefer POWER7, because it's way faster."

...and suddenly Itanium gets a massive performance boost
Olikan is offline   Reply With Quote
Old 11-12-2012, 06:50 AM   #67
Idontcare
Administrator
Elite Member
 
Idontcare's Avatar
 
Join Date: Oct 1999
Location: 台北市
Posts: 20,123
Default

Quote:
Originally Posted by Olikan View Post
Where I am going is that competition is good... even for Intel; it makes them realize where their mistakes were.

I can clearly see a meeting where someone asks:
"Why is Red Hat dropping support?"
"All of its customers prefer POWER7, because it's way faster."

...and suddenly Itanium gets a massive performance boost
If you follow the money, though, the competition is not with Intel; it is with HP. HP is the company that tied itself to the mast of the S.S. Itanium.

If Itanium goes away it would be a blip on Intel's revenue dashboard. But HP needs Itanium if it is to compete with IBM and Oracle. And by compete I don't mean in terms of hardware sales, which are purely secondary; I mean compete in terms of revenues from software and support sales (something Intel doesn't get to enjoy/partake in).

It has to be a rather uncomfortable business position for HP to be in, completely and utterly dependent on Intel's decision makers when it comes to pricing and roadmaps. The IBM and Oracle guys don't have to contend with that.

Imagine if AMD decided to not only outsource their fabs to globalfoundries (which they did) but to also outsource the design of the CPUs themselves to glofo.

That would be weird and not expected to really work out in the long run for a number of business reasons. But for some reason people expect it to work out for HP and Intel.
Idontcare is offline   Reply With Quote
Old 11-12-2012, 07:33 AM   #68
mrmt
Platinum Member
 
Join Date: Aug 2012
Posts: 2,077
Default

Quote:
Originally Posted by Idontcare View Post
It has to be a rather uncomfortable business position for HP to be in, completely and utterly dependent on Intel's decision makers when it comes to pricing and roadmaps. The IBM and Oracle guys don't have to contend with that.

(...)

That would be weird and not expected to really work out in the long run for a number of business reasons. But for some reason people expect it to work out for HP and Intel.
I don't think that Oracle is in such a better position than HP regarding hardware. Oracle shelled out $7 billion to acquire Sun in 2009, and since then, hardware sales have just been slumping. It is always a bad note on their earnings calls, and Oracle isn't really keen to change this reality.

I see HP's problem more in terms of not having a fully comprehensive package for their customers, such as IBM and Oracle have, than in not having in-house hardware design. They must always leverage someone else's software for their servers, and sometimes that someone is a vendor that can go after their client and offer the entire deal with a nice discount.

Quote:
Originally Posted by Idontcare View Post
Imagine if AMD decided to not only outsource their fabs to globalfoundries (which they did) but to also outsource the design of the CPUs themselves to glofo.
But they are taking exactly this route. ARM will design their CPU cores and GLF will manufacture them...
mrmt is online now   Reply With Quote
Old 11-12-2012, 07:58 AM   #69
Genx87
Lifer
 
Join Date: Apr 2002
Location: Earth
Posts: 35,308
Default

Quote:
Originally Posted by Idontcare View Post
If you follow the money, though, the competition is not with Intel; it is with HP. HP is the company that tied itself to the mast of the S.S. Itanium.

If Itanium goes away it would be a blip on Intel's revenue dashboard. But HP needs Itanium if it is to compete with IBM and Oracle. And by compete I don't mean in terms of hardware sales, which are purely secondary; I mean compete in terms of revenues from software and support sales (something Intel doesn't get to enjoy/partake in).

It has to be a rather uncomfortable business position for HP to be in, completely and utterly dependent on Intel's decision makers when it comes to pricing and roadmaps. The IBM and Oracle guys don't have to contend with that.

Imagine if AMD decided to not only outsource their fabs to globalfoundries (which they did) but to also outsource the design of the CPUs themselves to glofo.

That would be weird and not expected to really work out in the long run for a number of business reasons. But for some reason people expect it to work out for HP and Intel.
HP is probably the 2nd-worst-run technology company, right behind AMD.
__________________
"Communism can be defined as the longest route from capitalism to capitalism."
"Capitalism is the unequal distribution of wealth. Socialism is the equal distribution of poverty"
"Because you can trust freedom when it is not in your hand. When everybody is fighting for their promised land"
Genx87 is offline   Reply With Quote
Old 11-12-2012, 08:01 AM   #70
Tuna-Fish
Senior Member
 
Join Date: Mar 2011
Posts: 536
Default

--

Last edited by Tuna-Fish; 11-12-2012 at 08:06 AM. Reason: I should learn to refresh before replying
Tuna-Fish is offline   Reply With Quote
Old 11-12-2012, 04:29 PM   #71
Cerb
Elite Member
 
Cerb's Avatar
 
Join Date: Aug 2000
Posts: 14,670
Default

Quote:
Originally Posted by Olikan View Post
where i am going is that competition is good... even for intel, make them realize where were theyr mistakes

i can clearly see a meeting where someone ask:
"why Red Hat is dropping support?"
"all of it's consumer prefer power7, because it way faster"

...suddenly itanium gets massive performance boost
That never got the chance to happen. P3 and P4 Xeons could hang with Itanium from the get-go, so nobody that wanted any flexibility did more than dabble in Itanium. Those companies either used Windows for ease of use and rapid development, or Linux or FreeBSD for cheap-per-computer Unixy goodness. The Linux/Itanium overlap was always small, and hardware support quality always dodgy--there was never enough demand for Itanium computers to become COTS computers, so there were never enough users. Linux is probably the only OS out there where radically changing hardware isn't all that hard to do.

Meanwhile, if you ran VMS or NonStop*, and needed to upgrade, or had software deeply tied to HP-UX, Itanium was what you got. Same with AIX, or using software for any elderly IBM series, like those now running on z--your upgrade path is decided for you, but vendor support keeps on keeping on.

When business people talk about x86 not being ready for their big data needs, despite numerous companies doing just fine with COTS hardware, plenty of I/O, and petabytes of data, what they really mean is that the cost of moving from their HP partially-vertical, or IBM fully vertical, systems is monumental, compared to buying more Power or Itanium boxes**, and keeping up with their related upgrades. Intel has made Itanium good at scaling out since day 1, regardless of per-thread performance, so with scalable software systems made for clusters back before clusters were trendy, it works out fairly well.

There isn't the kind of competition where businesses can choose to affordably uproot themselves, having been deep into proprietary systems for a long time. If IBM, Oracle, or HP get sufficiently expensive, they will. Until then, most of the big iron vendors are working with long-time loyal customers. Even when not dealing with such customers, a potential customer is buying into the vendor's hardware and software ecosystem, not just a computer. Performance needs to improve so as to reduce energy costs per workload over time, and keep up with workloads as they grow, without costing so much to the customer that they'll move to COTS, much more than it needs to beat the other team.


* NonStop on x86 might be nice...
** Or Oracle or Fujitsu SPARC
__________________
"The computer can't tell you the emotional story. It can give you the exact mathematical design, but what's missing is the eyebrows." - Frank Zappa
Cerb is online now   Reply With Quote
Old 11-13-2012, 09:48 AM   #72
Edrick
Golden Member
 
Edrick's Avatar
 
Join Date: Feb 2010
Location: Boston MA
Posts: 1,525
Default

Before Itanium, HP had their own, even more proprietary CPU, PA-RISC. So making the move to Itanium, which had the chance of being more universal, made sense to HP at the time. Even if Itanium stays an HP-only product, they are no worse off than they were before.

With that said, I do not see Itanium going away for at least 10 years.
__________________
Core i7 4770
Gigabyte Z87X-UD3H (F5 BIOS)
G.Skill RipjawsZ 8GB @ 2400mhz 10-12-12-31
Gigabyte GTX 660
Samsung 840 Pro 256GB
Antec Eleven Hundred
Edrick is offline   Reply With Quote
Old 11-13-2012, 10:04 AM   #73
Idontcare
Administrator
Elite Member
 
Idontcare's Avatar
 
Join Date: Oct 1999
Location: 台北市
Posts: 20,123
Default

Quote:
Originally Posted by Edrick View Post
Before Itanium, HP had their own, even more proprietary CPU, PA-RISC. So making the move to Itanium, which had the chance of being more universal, made sense to HP at the time. Even if Itanium stays an HP-only product, they are no worse off than they were before.

With that said, I do not see Itanium going away for at least 10 years.
I agree; way too much has been invested in the creation of the existing Itanium infrastructure, between software, compilers, and hardware, for it all to be abandoned and support to evaporate in a span of just 10 years.
Idontcare is offline   Reply With Quote
Old 11-13-2012, 12:19 PM   #74
Vesku
Platinum Member
 
Join Date: Aug 2005
Location: Seattle, WA
Posts: 2,914
Default

Yes, Intel is beholden to support Itanium for quite some time given its big iron nature. HP sued Oracle for trying to break a contract guaranteeing Itanium support. It would be pretty amazing if HP didn't have contractual ties with Intel in regards to Itanium.

http://allthingsd*****/20120801/hp-w...t-with-oracle/
Vesku is offline   Reply With Quote
Old 11-13-2012, 12:30 PM   #75
Edrick
Golden Member
 
Edrick's Avatar
 
Join Date: Feb 2010
Location: Boston MA
Posts: 1,525
Default

Quote:
Originally Posted by Vesku View Post
Yes, Intel is beholden to support Itanium for quite some time given its big iron nature. HP sued Oracle for trying to break a contract guaranteeing Itanium support. It would be pretty amazing if HP didn't have contractual ties with Intel in regards to Itanium.

http://allthingsd*****/20120801/hp-w...t-with-oracle/

What would also be pretty amazing is if the next few releases of Itanium (starting with this one) actually gained market share.
__________________
Core i7 4770
Gigabyte Z87X-UD3H (F5 BIOS)
G.Skill RipjawsZ 8GB @ 2400mhz 10-12-12-31
Gigabyte GTX 660
Samsung 840 Pro 256GB
Antec Eleven Hundred
Edrick is offline   Reply With Quote