
nVidia scientist on Larrabee


Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: taltamir
it is the most common analogy for processors and it is the absolute worst analogy for them because they are NOTHING alike.

I think you might be forgetting the very purpose of communicating concepts by way of analogies.

Analogies involve the "structural alignment" of two (or more) structured representations (representations containing objects, their relations, and their attributes, along with relations between relations) so that the common elements in the representations are mapped onto each other. Structural alignment occurs under three primary constraints: systematicity, one-to-one mapping, and parallel connectivity.

That or you are conflating analogy with metaphor?

Cars/engines/travel/commutes all make great analogies to things in semiconductors because the purpose of a vehicle is to move around people and objects while the purpose of a great number of IC's is to move around electrons and bits.

But cars make piss-poor metaphors for things relating to semiconductors, most notably because of the mechanical vs non-mechanical aspects of the two products.

Saying "My surgeon is a butcher" is a metaphor as both a surgeon and a butcher use knives to cut meat and the statement is meant to communicate that the surgeon treats their patients like pieces of dead meat.

Saying "my surgeon is a race car driver" is not a metaphor, its an analogy meant to imply that the surgeon is perhaps quick and speedy or has a one-track mind or even potentially reckless with one goal in mind (if your not first your last Ricky Bobby).

This is not to say I am defending Nemesis' analogy; it's a de facto operating expectation that any analogy offered by Nemesis has been butchered in a race-car-driver sort of way that only my surgeon can properly explain.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
haha :)...
Well, the problem with analogies is that you should use them to explain a specific thing and that is IT. You first say HOW they are similar, then you show how that ONE operation occurs in the same way in each.
The problem is that people instead use them to extrapolate completely new things.
 

ArchAngel777

Diamond Member
Dec 24, 2000
5,223
61
91
Originally posted by: Idontcare
Originally posted by: BenSkywalker
I can't tell if you really don't know much about layout and design as to make such a meaningless comparison, or if you think the rest of us don't and you are just being lazy for the sake of your own convenience in keeping the rebuttal short. Help me understand.

OK, I'll help you understand then. Using the most simplistic form of filtering you need to be pushing 9 instructions per pixel (that is assuming a fully optimized TMU unit and a very basic filter), so to compete with even a GTX260 192 Larry is going to need to push 332,100 MIPS. If we figure for 20 cores, and are generous on the instructions it can retire per clock, give it 44 MIPS/MHz (Cell is 3.2 MIPS/MHz by way of comparison; I'm giving Larry more than an order of magnitude benefit of the doubt, perfect scaling from the PPro), then it would need to be clocked at 7.54GHz. I'm being exceptionally generous on all counts to Larry in this assessment: comparing it to an outdated part that is being phased out, giving it an order of magnitude more performance than the closest current architecture, and using the simplest possible filtering.

In a more realistic sense, it would need to be closer to 30GHz to hit parts in its timeline at 20 cores, and Intel seems to be leaning towards 16 cores atm. No, I don't know exactly what kind of clock rates Intel is going to hit, but over 600mm² and 30GHz? I'd be willing to wager fairly heavily that that is a pipe dream even the most fringe lunatic of Intel fans wouldn't want to wager on.

And for the record, that is just to handle basic fillrate- the overwhelming majority of the GPUs will be sitting idle.

Edit- Figured I should probably point out that the current top-tier GPU solution would require a bit over 20GHz; I realized upon review it made it look like a stretch going from 7.5GHz to 30GHz. Also, 1.6GHz does give us a guideline in the sense that the odds of anyone getting a 600mm² die to ~20-30GHz are about as close to 0 as you can get using anything resembling current technology. Also, if we removed the TMU these numbers would go up by an order of magnitude just in computation time; additional stalls from read/writes would put that closer to two orders of magnitude.

Now there is an acceptable level of justified opinion, thanks for taking the time to go into it!

It does naturally beg a degree of self-assessment though - unless we presume ourselves to be superiorly intelligent to the decision makers at Intel, we must assume Intel knew ALL of this before they even assembled the layout team for Larrabee nearly 3 yrs ago. And they certainly knew it 2 yrs ago, and 1 year ago, and today.

So why then would Intel choose to ignore this information and develop a product with such woefully obvious inadequacies? Why would they release such a woefully inadequate product next year?

Just saying we've got to be giving ourselves a lot of credit in the grey matter department and assuming Intel's decision makers are operating with a commensurate depletion of it in order to be so confident as to assume we know what they don't and that we foresee in their competition something which they do not.

Are we so supremely confident in ourselves as to make such an assertion?


Excellent post! Also, I don't want to take anything away from Ben's posts either as they are great. We will have to wait and see!
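(An aside for anyone who wants to check the quoted arithmetic: a minimal C sketch is below. Every input is an assumption lifted from the quoted post, namely the 332,100 MIPS target, the 9 instructions per pixel, and the very generous 44 MIPS/MHz of whole-chip throughput; none of it is measured data.)

/* Sketch of the quoted fillrate math; all inputs are the post's own assumptions. */
#include <stdio.h>

int main(void)
{
    const double target_mips       = 332100.0; /* MIPS the post claims are needed to match a GTX 260-192 */
    const double instr_per_pixel   = 9.0;      /* simplest bilinear-style filtering, per the post */
    const double chip_mips_per_mhz = 44.0;     /* whole-chip instructions retired per cycle, the post's generous guess */

    /* Pixel throughput implied by the MIPS target */
    double gpixels_per_sec = target_mips / instr_per_pixel / 1000.0;

    /* Clock needed if the whole 20-core chip really retired 44 instructions per cycle */
    double required_ghz = target_mips / chip_mips_per_mhz / 1000.0;

    printf("Implied fill target: %.1f Gpixels/s\n", gpixels_per_sec); /* ~36.9 */
    printf("Required clock:      %.2f GHz\n", required_ghz);          /* ~7.55, the post's 7.54 GHz */
    return 0;
}

Plugging the post's "more realistic" instruction counts into the same two lines is what pushes the required clock toward the 20-30GHz figures he mentions.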
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Originally posted by: Nemesis 1
Originally posted by: taltamir
Originally posted by: BenSkywalker
Any video encoding app would benefit without having to be patched or re-coded, for example.

That would ignore what a non-x86-based alternative with a comparable layout could do. Some here may demand ignorance to enter into a discussion, but in all seriousness x86 is about the poorest architecture you could imagine for this type of design. Spending decode hardware when tiny die space per core is the essence of your design goal is rather foolish. What makes this worse, far worse, is that applications will still require a recompile in order to run on Larrabee; it isn't an OoO architecture, and default x86 code would roll over and die running on it (it wouldn't be surprising to see a normal processor be faster on anything with decent amounts of branching).

With several hundred thousand transistors per core wasted on decode hardware, more transistors utilized to give full I/O functionality to each core, and a memory setup that is considerably more complex than any of the other vector-style processor choices available, Larrabee is making an awful lot of compromises to potential performance in order to be more Intel-like than it needs to be.

Everyone seems to be taking the stance that Larrabee must have a lot going for it because of how much Intel is putting into it. Itanium anyone? Everyone with so much as an extremely small dose of understanding knew that Itanium was going to be a huge failure in the timeframe it hit. Sadly, a VLIW setup for something like Larrabee would end up being a much better option than where they are headed.

I guess, the best way to think of it is that Intel clearly sees a major movement as does everyone else in computing power. The problem is, Intel wants to take as much lousy outdated broken down crap with them as they can. We already have x86 as our main CPUs to handle that garbage, why do we need more of the same wasted die space on our GPUs? To make it so that lousy existing x86 code that isn't well suited for extreme levels of parallelization can be recompiled in an easier fashion? So let's prop up our outdated poorly structured code base for a short term gain and hold back everything else in the long term? Just doesn't make sense to me.

First accurate post of the thread... Larrabee is set to waste over 40% of its total space on redundant x86 decode hardware (one decoder for each core), and for what? No SSE, no out-of-order... NOTHING is going to run on it without a serious recompile and recode... so why bother with it in the first place? It's wasting space on a gimmick.
The way I see it, Intel is banking on taking a loss (by wasting nearly half the die on NOTHING) for the chance to get x86 to become the standard; if that happens, they are granted legal monopoly status and no one may compete with them. It seems as clear as day that this could be their only course of action, since Intel engineers are not stupid.

I think this is why the professor in question is joining the fight... nVidia is the one company that stands a chance at breaking the x86 stranglehold and potentially getting us heterogeneous computing, although I wouldn't be surprised if they would just opt to displace Intel as the only legal monopoly backed by stupid, misapplied patent laws.
Your first bold:
So true, x86 must be recompiled. BUT all SSE2 can be recompiled by simply adding the (vec) prefix; that's easy. You would also do well to find out what kind of recompile this is. Not all compiles are EQUAL.


Your second bold: we all know it wasn't Intel who kept us in x86 hell now, was it?

x86 on Larrabee has to be recompiled. What does that mean? I heard some talking here like they know what Intel has done on the x86 side of things. You said 40% of the x86 Larrabee die is a waste; I will take issue with that. Since the compiler has been brought up, and it damn well better be: that compiler is native C/C++. Now I really don't know what that means, other than that Larrabee runs on a software layer natively. Now that's what I don't understand. How can it run on a software layer natively? But that's what they're saying. Or I'm just not getting it. But the compiler Intel has can do some interesting things, as we're all going to find out. Intel only said Larrabee is an x86 CPU. It never said how those SSE instructions would run, only that they will, WITH a recompile. Since SSE2 is so common, the simple port of the (vec) prefix makes it a one-time run-and-play recompile.

But we're stuck here on x86, when it's the vertex unit, the ring bus, the compiler, and more than anything else the scatter/gather and the use of a memory mask. OpenCL. Yet we're talking about x86, when none here knows the x86 hardware involved in this UNIT.

Screw Larrabee. I am excited for the software render tech; the sooner the better. I know why MS has been bad-mouthing Apple. I told you guys you'd be happy with the ATI 4000s. This has nothing to do with Larrabee yet. But wait till you see Nehalem on Snow. LOL!

I seem to have missed that one... When I said recompile, I meant rewrite the compiler and rewrite the code. Technically code should just WORK when recompiled... but there are many black-box DLLs, as well as the simple fact that different compilers compile things differently.
My point was that SSE2 is the minimum expected by EVERYTHING right now; it's simply been around for so long that having an x86 chip sans the SSE2 (or SSE1 for that matter, or MMX) instruction set means trouble. This is simply not an "easy transition": without at least SSE2 you're going from modern code to code meant to work on an ancient x86 design... Combine that with going from a few cores to a lot of cores and things get even more hairy.
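(For anyone following the recompile-versus-rewrite tangent, here is a minimal sketch in plain C of the kind of loop the "just recompile it" argument has in mind: scalar source that an auto-vectorizing compiler can retarget to SSE2, AVX, or a wider vector ISA purely by changing build flags. Nothing in it is Larrabee-specific; the flags in the comment are ordinary GCC/Clang options used only as an illustration. Code built on hand-written SSE intrinsics, closed-source DLLs, or out-of-order behaviour is exactly the code that does not get this for free, which is the point being made above.)

/* saxpy.c: scalar C with independent loop iterations, the textbook
 * auto-vectorization case. Recompiling with a different target flag lets
 * the compiler emit whatever SIMD width the hardware offers, e.g.
 *   gcc -O3 -msse2 -c saxpy.c    (128-bit vectors)
 *   gcc -O3 -mavx2 -c saxpy.c    (256-bit vectors)
 * No source changes are needed; hand-written intrinsics would need them. */
#include <stddef.h>

void saxpy(float a, const float *x, float *restrict y, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i]; /* each iteration is independent */
}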
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
It would seem we're a bit off on what Larrabee will be. Four models is the latest gossip.

8-core, 16-core, 32-core and 64-core. I believe 40 or 48. 64 cores is a little hard to swallow. But who knows?
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: Nemesis 1
It would seem we're a bit off on what Larrabee will be. Four models is the latest gossip.

8-core, 16-core, 32-core and 64-core. I believe 40 or 48. 64 cores is a little hard to swallow. But who knows?

The bolded part - why do you say this?
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
Well, nVidia has 320 cores and ATI has 800 "cores" (well, really 1/5 of that in 5-way units, each of them like a 5-core... core?).
Basically it just depends on how much you subdivide each core to see how many you can fit on the die. If the cores are small enough they can fit a lot of them in there.
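(A toy illustration of the counting point above, using the 320 and 800 figures exactly as quoted in the post rather than verified specs for either part: the same pool of ALUs produces very different "core" counts depending on how coarsely you group them.)

/* How "core" counts shift with the chosen granularity.
 * The 800 and 320 figures are simply the numbers quoted in the post. */
#include <stdio.h>

int main(void)
{
    const int ati_alus   = 800; /* marketed as 800 stream processors */
    const int vliw_width = 5;   /* the post's "5-way" grouping */
    const int nv_alus    = 320; /* figure quoted in the post */

    printf("Counted per ALU:         %d vs %d\n", ati_alus, nv_alus);
    printf("Counted per 5-wide unit: %d vs %d\n", ati_alus / vliw_width, nv_alus);
    /* 800 ALUs collapse to 160 units once each 5-wide group counts as one
     * "core"; the number you report depends entirely on where you draw the
     * boundary around a core. */
    return 0;
}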
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: taltamir
Well, nVidia has 320 cores and ATI has 800 "cores" (well, really 1/5 of that in 5-way units, each of them like a 5-core... core?).
Basically it just depends on how much you subdivide each core to see how many you can fit on the die. If the cores are small enough they can fit a lot of them in there.

This is my point: there are way too many assumptions going on in this thread regarding core size, core count, clockspeed, IPC, power consumption, etc, etc.

Naturally everyone argues for these various attributes to be restricted (or not) in size and magnitude in such a way that magically their predisposition towards Intel's assured success or assured failure turns out to be irrefutable by their bounded logic.

I have yet to see any compelling datapoints to date that suggest Larrabee will be a smashing success or a dismal failure, but if we want to gorge ourselves on opinion well then this thread delivers.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
Originally posted by: chizow

They're very different in the sense a CPU die is often comprised mainly of cache, which is the simplest transistor and always the first used to validate a new process node. That's very different from GPUs which are already dedicating upwards of 50% to execution units, a ratio that is only growing with every new iteration.

Look here: http://www.3dnews.ru/_imgdata/img/2009/02/06/112525.jpg

and here:
http://billkosloskymd.typepad....2008/02/04/tukwila.jpg

Tell me, how much of that 699mm² die is cache, hmm? (That's the die pic for Itanium Tukwila, with 30MB of on-die cache.) Larrabee had to greatly simplify its cores to increase its core count. The circuits in GPUs are definitely more complex than caches, but comparing them in this case is flawed.
 

IntelUser2000

Elite Member
Oct 14, 2003
8,686
3,787
136
Originally posted by: alyarb
If Anand is right, 160 flops per clock doesn't sound too good. At 2GHz that 10-core is only doing 320 GFLOPS. Pretty good for a CPU, but for a GPU? Isn't the production version supposed to have 32 cores? Or is it 10? Are gamers supposed to buy a 320 GFLOPS card?

It's 160 flops per clock in DP. Its single-precision flops per clock is 320, so at 2GHz it'll reach 640 GFLOPS.

16-wide SIMD x 2 FLOPs/cycle (FMA) x 32 cores x 2GHz = 2 TFLOPS. The rumor from Fudzilla was that they can make a 48-core version, so we'd end up at 3 TFLOPS.

600mm² die size / 48 = 12.5mm², but what's going to happen is the cores are going to be smaller than that, because the chip also needs space for the "uncore" part, for example the ring bus.
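(Spelling that arithmetic out in a short C sketch; the SIMD width, FMA factor, core counts, clock and die size are all taken from the post and the Fudzilla rumor it cites, not from confirmed Larrabee specifications.)

/* Peak-throughput arithmetic from the post above. All inputs are rumored
 * or assumed figures, not confirmed Larrabee specs. */
#include <stdio.h>

static double peak_gflops(int simd_lanes, int flops_per_lane_per_clk,
                          int cores, double clock_ghz)
{
    return (double)simd_lanes * flops_per_lane_per_clk * cores * clock_ghz;
}

int main(void)
{
    /* 16-wide SIMD, FMA counted as 2 FLOPs per lane per cycle */
    printf("32 cores @ 2 GHz: %.0f GFLOPS (SP)\n", peak_gflops(16, 2, 32, 2.0)); /* 2048, ~2 TFLOPS */
    printf("48 cores @ 2 GHz: %.0f GFLOPS (SP)\n", peak_gflops(16, 2, 48, 2.0)); /* 3072, ~3 TFLOPS */

    /* Upper bound on area per core from the post's 600 mm^2 guess */
    printf("600 mm^2 / 48 cores = %.1f mm^2 per core, before uncore\n", 600.0 / 48.0);
    return 0;
}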
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
To phrase my next response I want to point out a couple of things that happened not all that long ago and didn't concern Intel at all.

AMD almost launched a 4850 that would have been utterly obliterated in the market; it wouldn't have been viable at launch in the $100 price segment. Very late in the game Wavey (Dave B) convinced them to make changes to the board, increasing its performance a very healthy amount and making it a part that made some serious waves in the market. They seriously underestimated where nVidia was going to be and almost embarrassed themselves by launching a part that would have been too slow, with too little RAM, to be remotely competitive.

nVidia not all that long ago launched a couple of different parts at more than $100 over where they needed to be because they underestimated (or perhaps accurately estimated where ATi would have been if not for the changes) the competition and assumed that they would have a clear lock on the first few tiers of high-end performance in the graphics market.

Between AMD and nVidia they are, beyond any doubt, the top two companies in the field of graphics hardware today. No one else has come out with anything remotely in their league, and many companies have rolled over and died in their wakes.

Both of them can and do make some fairly large judgemental mistakes.

It does naturally beg a degree of self-assessment though - unless we presume ourselves to be superiorly intelligent to the decision makers at Intel, we must assume Intel knew ALL of this before they even assembled the layout team for Larrabee nearly 3 yrs ago.

Three years ago Intel made several very poor predictions. They did not keep these secret, so it's pretty easy to tell what they were. For many years GPUs trailed CPUs by several build processes, with both AMD and Intel being 2-5 steps ahead of the GPUs. As time moved on, the GPUs increasingly got closer to the CPUs, and eventually we ended up where we are today: 40nm GPUs are arriving at about the time volume shipment of 32nm CPUs should start. Intel saw this as a sign that GPUs were not going to be able to continue their torrid increases in performance, as they were pushing up against the same type of limitations Intel themselves had.

Intel decided that the days of GPUs scaling were over.

Those are not my words, those are Intel's. We know, as a point of fact, they were wrong. Anyone with a very moderate level of capacity and understanding back when they first thought that also knew that they were wrong. At no point did it take consultation with the top engineers in the world to conclude that at best, Intel was a bunch of idiots making that call. Not borderline, they were flat out morons.

Another very poor prediction they made: rasterizers were running into limitations scaling. Nothing ever indicated this to be true in any way whatsoever; there has never been even the slightest evidence of this, so I honestly can't say where their logic came from. I suppose from the standpoint that we were closing in on being 'fillrate complete' (a made-up term by me, a simple way of stating that fillrate is a 'solved' issue; we aren't there yet, but close) they could state that rasterizers couldn't improve much on that angle, but when something handles a task perfectly it doesn't seem to be a good plan of attack to go after that area. If it were a company with any sort of knowledge about the graphics industry at all, they would have known that at the point when raw fill was no longer a concern, shader die space would see an exponential increase, but this is the same company that made the staggering mistake of stating that the days of GPUs scaling were over. We know, for a fact, that they were wrong in no uncertain terms.

Just saying we've got to be giving ourselves a lot of credit in the grey matter department and assuming Intel's decision makers are operating with a commensurate depletion of it in order to be so confident as to assume we know what they don't and that we foresee in their competition something which they do not.

I've been making calls like this for over a decade on these forums (my registration date was the day these particular forums went live; I was here for a while before that). I have no problems at all standing by my record on predictions relating to the graphics industry and the many flawed attempts by all sorts of companies with FAR more expertise in the graphics industry than Intel. Of course, every time I point out what is certain to be a rather terrible failure, the loyalists for that company get extremely upset and always bring up the argument that the people making these choices know a lot more than I do. Of course, they have all since lost their jobs because clearly, they didn't ;)

In reality the biggest mistake companies make is their own arrogance when undertaking a project such as this.

This is where we get to the biggest mistake Intel made with Larry. They thought ray tracing would be great because they use it in movies. They looked over where they were at and where they were likely to be three years out and decided that they could make a real-time ray tracer using software emulation, allowing for complete flexibility, and that no one would be able to do anything remotely comparable. This is a lesson they should have learned from Itanic. Developers aren't going to switch unless you give them a BIG reason to do so. Larry, even if a custom software render engine hand-tuned by Abrash himself (who I would wager heavily is the top person they can hope to enlist; he taught Carmack tricks back in the day) were offered, still will not be able to compete with GPUs. Could it have if Intel had been accurate with their predictions years ago that GPUs were no longer scaling? Yes, it could have. But Intel was so shockingly wrong with their initial prediction that it isn't really close.

Getting game developers to move from rasterization to ray tracing is as much of a hurdle as it was trying to get developers to move their mainstream applications from x86 to IA64, we know how great that worked for Intel.

Just saying we've got to be giving ourselves a lot of credit in the grey matter department and assuming Intel's decision makers are operating with a commensurate depletion of it in order to be so confident as to assume we know what they don't and that we foresee in their competition something which they do not.

Itanium? i740? NetBurst.... well OK, I didn't know in advance Netburst was going to be a huge failure, but the other two were very obvious from the outset. They weren't close calls either, the entire concept for market acceptance was at best moronic.

Intel is without a doubt utterly brilliant in the x86 market; outside of that, their record places them far closer to 'utterly inept and bankrupt' than 'dominating market force'.
 

vueltasdando

Junior Member
Apr 18, 2009
3
0
0
Originally posted by: BenSkywalker
It is interesting that your entire post ignores the history of the graphics market and how companies have managed to succeed and fail no matter the relative size of their competition. ATi, Matrox and 3Dfx all utterly dwarfed nVidia not all that long ago. As of right now, nVidia has close to enough liquid assets to pay cash outright for AMD in its entirety. While it is nice to assume that money alone will ultimately give you a major advantage, the real limitations are going to come to how much die space you are dealing with. Are you going to end up with a better processor spending $5 Trillion on R&D at 90nm or spending $100 Million on R&D at 32nm? It is true that having larger cash reserves will help you expand your transistor budget, but not enough to overcome staggering mistakes in architecture.

Everything you are talking about happened when GPUs were just beginning and the cost of developing a processor was low. 3dfx made the first commodity 3D graphics accelerator, and when nVidia surpassed ATI with the TNT cards the situation was nearly the same. Now, the competition's size matters.

Originally posted by: BenSkywalker
Intel spent billions on IA64, and while some in this thread seem to ignore the fact- IA64 was supposed to replace x86, Intel made this abundantly clear publicly- had x86 emulation up and running, spent billions and failed in no uncertain terms. Intel has also previously attempted to enter the high-end graphics market with the i740; while it did much better than Itanium, at best it managed a short life of being moderately competitive with mid-tier offerings and resigned itself to integrated status fairly quickly. Not close to the huge financial failure that Itanium was, but very far removed from being their entry into the booming graphics market.

False. IA64 was meant to replace old RISC architectures in the enterprise market for HP and others, and was a "first step" for Intel. It was in no way meant to be an x86 successor.

Originally posted by: BenSkywalker
Despite its enormous financial advantages, Intel fails with rather shocking frequency any time they step outside of the x86 market.

False. There is no such thing as an "x86 market". There is a datacenter market, a console market, a phone market, a PC market, etc., but not an "x86 market", because the consumer doesn't care about the underlying architecture. And, by the way, Intel had the crown in the smartphone market with XScale ARM (RISC) processors until the technology was sold to Marvell. They had a first step in the console market with the Xbox (not a bad one). And well, they possess the biggest market share in PCs, notebooks, netbooks and servers of all ranges. Yes, of course, in all of those markets with x86 processors, but those markets were dominated by RISC and other processor types earlier, and Intel dethroned them with x86.
And hey, your assumption doesn't deny the possibilities of Larrabee in the GPU market; it's x86 :)

Originally posted by: BenSkywalker
The very dangerous reality that Intel faces right now is that AMD and nVidia are both nigh fillrate-complete for rasterizing needs; the amount of die space they will be able to dedicate to shader cores is going to grow significantly faster in relative terms than anyone's build process. Their cores are, by a staggering amount, more powerful than the most optimistic estimates for Larry on a transistor-for-transistor basis. This isn't going to change unless Intel completely abandons their notion that x86 and software rasterization is the way to go. It is a failed concept in every way from a market perspective.

I dont think Larrabee will surpass nvidia and ati from the beginning, but thats not a short race. DX11 will force GPU manufacturers to add complexity in their pipes, and new designs to fit new specs. Larrabee wont need changes to support DX11,12,... whatever. And as GPU apis get flexible, Larrabee will get momentum. Of course, it needs to be dollar per dollar competitive from day one, al least in midrange.

Originally posted by: BenSkywalker
Try and say what you will about the brilliance of Intel, but simply point to a huge success of theirs outside of a vice grip on a commodity ISA market. It doesn't exist. The only thing history has proven to us about Intel, more than that they find a way to triumph in the x86 market, is that they will find a way to fail when they step outside of it.

As i have said before, Intel dominates markets that were RISCs not far ago.

Originally posted by: BenSkywalker
Intel's ignorance of the market they are attempting to enter is another major factor that shouldn't be overlooked. People point to the fact that Intel has hired people with extensive industry experience, such as those from 3DLabs. AMD and nVidia both offer integrated graphics that are more powerful than anything 3DLabs ever created. Intel does not have talent, experience, or skill in the market they are trying to enter in any fashion that should convince anyone they are capable of taking the market over.

Where the hell do you work? Companies have money and infrastructure, not knowledge. Knowledge is a human trait. And money can buy human talent.

 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
You forget your login info Nemesis?

3dfx was the first commodity 3d graphics accelerator

No, it wasn't. 3Dfx had the first commodity 3d rasterizer the public adopted in anything resembling mass market numbers.

and when nVidia surpassed ATI, with TNT cards, the situation was near the same

Again, no it wasn't- and the Riva128 handled besting ATi on a performance basis quite easily. 3Dfx still owned the gaming market and ATi still dwarfed nVidia in the OEM space.

False. IA64 was to replace old RISC architectures in enterprise market for HP and others, and a "first step" for Intel. No was in any way meant to be a x86 succesor.

Merced had a built in x86 mode from the design phase on. I don't know what very poorly written web sites you read, but they aren't giving you a very clear picture.

False. There is no such a thing like "x86 market".

People lose IQ points reading things that stupid :)

The had a first step in console market with XBox (not a bad one).

And the staggering number of processors sold in consoles this generation, give me a second..... carry the nine...... to the fifth...... 0. Must have made an incredible impression.

Yes, of course, in all of those markets with x86 processors, but those markets were domained by RISCs and other processor types earlier, and Intel dethroned them with x86.

What markets does x86 currently dominate? Desktop PCs, workstations and servers. The latter two for the exact same reason AMD and nVidia dominate high end 3D graphics now, simple scales of economy.

DX11 will force GPU manufacturers to add complexity in their pipes, and new designs to fit new specs. Larrabee wont need changes to support DX11,12,... whatever. And as GPU apis get flexible, Larrabee will get momentum.

In terms of new effects that can be done under DX11, why don't you tell me: end visual results that were not possible with DX9, that is. DX11 simply makes things simpler, and D3D is something Intel is very much trying to run away from; ray tracing is no part of it.

As i have said before, Intel dominates markets that were RISCs not far ago.

Intel took specialty markets away via scales of economy. Intel is the specialty player this time.

Companies have money and infraestructure, not knowledge. Knowledge is a human trait. And money can buy human talent.

Are you 12 or 13? I have to assume that no matter what nationality you are, by the time you are in high school you will understand what intellectual property is. I detest responding to drivel this ignorant, but I fear some people may actually believe it.
 

vueltasdando

Junior Member
Apr 18, 2009
3
0
0
Originally posted by: BenSkywalker
You forget your login info Nemesis?

Who's Nemesis?

Originally posted by: BenSkywalker
3dfx was the first commodity 3d graphics accelerator

No, it wasn't. 3Dfx had the first commodity 3d rasterizer the public adopted in anything resembling mass market numbers.

Oh my god, what kind of problem do you have? So, 3dfx was the first commodity 3D graphics accelerator. To intelligent people, a 3D rasterizer is a kind of 3D accelerator. What are you missing? There were no real 3D accelerators by the time 3dfx hit the market.


Originally posted by: BenSkywalker
False. IA64 was to replace old RISC architectures in enterprise market for HP and others, and a "first step" for Intel. No was in any way meant to be a x86 succesor.

Merced had a built in x86 mode from the design phase on. I don't know what very poorly written web sites you read, but they aren't giving you a very clear picture.

You can't release a new processor in such a market without a software base. There is no reason (to living, intelligent humans) to suppose that the x86 mode was placed there for targeting desktops. Have you ever seen a desktop machine with an Itanium? Or even a mid-to-low-end server solution?

And well... I read Wikipedia, and sites like that... do you even read?

Originally posted by: BenSkywalker
False. There is no such a thing like "x86 market".

People lose IQ points reading things that stupid :)

Yeah... and so it seems you have read a lot of stupid things like that... How many IQ points have you lost already?

Originally posted by: BenSkywalker
The had a first step in console market with XBox (not a bad one).

And the staggering number of processors sold in consoles this generation, give me a second..... carry the nine...... to the fifth...... 0. Must have made an incredible impression.

24 million consoles... not bad for a beginner...

Originally posted by: BenSkywalker
Yes, of course, in all of those markets with x86 processors, but those markets were domained by RISCs and other processor types earlier, and Intel dethroned them with x86.

What markets does x86 currently dominate? Desktop PCs, workstations and servers. The latter two for the exact same reason AMD and nVidia dominate high end 3D graphics now, simple scales of economy.

What did you fail to understand?

Originally posted by: BenSkywalker
DX11 will force GPU manufacturers to add complexity in their pipes, and new designs to fit new specs. Larrabee wont need changes to support DX11,12,... whatever. And as GPU apis get flexible, Larrabee will get momentum.

In terms of new effects that can be done under DX11 why don't you tell me. End visual results that were not possible with DX9 that is. DX11 simply makes things simpler, and D3D is something Intel is very much trying to run away from, ray tracing is no part of it.

So you are saying that nVidia and ATI didn't include new hardware to support DX10...?
By your reasoning (if it can be called that...), why doesn't DX9 hardware support DX10?
Intel has insisted Larrabee is meant to be a rasterizer, not a ray tracer.

Originally posted by: BenSkywalker
As i have said before, Intel dominates markets that were RISCs not far ago.

Intel took specialty markets away via scales of economy. Intel is the specialty player this time.

Yeah, yeah. Scales of economy? Faster and cheaper processors are "scales of economy"?

Originally posted by: BenSkywalker
Companies have money and infraestructure, not knowledge. Knowledge is a human trait. And money can buy human talent.

Are you 12 or 13? I have to assume that no matter what nationality you are by the time you are in High School you will understand what Intellectual Property is. I detest responding to drivel this ignorant, but I fear some people may actually believe it.

WTF does intellectual property matter here?

But there is no problem here. Your wisdom clearly eclipses all the talent behind the Intel team.
BTW, what's your job? I only hope it's not a technical one, for your employer's sake.
I wonder, are you a bot?






 

AmberClad

Diamond Member
Jul 23, 2005
4,914
0
0
vueltasdando - Welcome to Anandtech. However, please familiarize yourself with the forum rules regarding personal attacks, otherwise you might find your stay here a short one...

Originally posted by: vueltasdando
Oh my god, what a dumb you are.
.
.
.
Really, what kind of jerk are you?. I am really curious about...
.
.
.
Yeah... you seem to have lost all your IQ time ago...
.
.
.
Do you have a hard time reading? Look for help, please.

AmberClad
Video Moderator
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
Originally posted by: vueltasdando
Originally posted by: BenSkywalker
You forget your login info Nemesis?

Whos Nemesis?

Originally posted by: BenSkywalker
3dfx was the first commodity 3d graphics accelerator

No, it wasn't. 3Dfx had the first commodity 3d rasterizer the public adopted in anything resembling mass market numbers.

Oh my god, what a dumb you are. So, 3dfx was the first commodity 3d graphics accelerator. To intelligent ppl a 3d rasterizer is a kind of 3d accelerator. What are you missing? There were no real 3d accelerators by the time 3dfx hit the market.


Originally posted by: BenSkywalker
False. IA64 was to replace old RISC architectures in enterprise market for HP and others, and a "first step" for Intel. No was in any way meant to be a x86 succesor.

Merced had a built in x86 mode from the design phase on. I don't know what very poorly written web sites you read, but they aren't giving you a very clear picture.

Really, what kind of jerk are you?. I am really curious about...
You cant release a new processor in such market without a software base. There is no reason (to living intelligent humans) to suposse that the x86 mode was a placed for targeting desktops. Have you ever seen a desktop machine with Itanium? Or even a mid-low server solution?

And well... i read wikipedia, and sites like that... do you even read?

Originally posted by: BenSkywalker
False. There is no such a thing like "x86 market".

People lose IQ points reading things that stupid :)

Yeah... you seem to have lost all your IQ time ago...

Originally posted by: BenSkywalker
The had a first step in console market with XBox (not a bad one).

And the staggering number of processors sold in consoles this generation, give me a second..... carry the nine...... to the fifth...... 0. Must have made an incredible impression.

24 millions consoles... not bad for a beginner...

Originally posted by: BenSkywalker
Yes, of course, in all of those markets with x86 processors, but those markets were domained by RISCs and other processor types earlier, and Intel dethroned them with x86.

What markets does x86 currently dominate? Desktop PCs, workstations and servers. The latter two for the exact same reason AMD and nVidia dominate high end 3D graphics now, simple scales of economy.

Do you have a hard time reading? Look for help, please.

Originally posted by: BenSkywalker
DX11 will force GPU manufacturers to add complexity in their pipes, and new designs to fit new specs. Larrabee wont need changes to support DX11,12,... whatever. And as GPU apis get flexible, Larrabee will get momentum.

In terms of new effects that can be done under DX11 why don't you tell me. End visual results that were not possible with DX9 that is. DX11 simply makes things simpler, and D3D is something Intel is very much trying to run away from, ray tracing is no part of it.

So you are saying that nvidia and ati didnt included new hardware to support DX10...?
With your reasoning (if can be called that way...) why DX9 hardware doesnt support DX10?

Originally posted by: BenSkywalker
As i have said before, Intel dominates markets that were RISCs not far ago.

Intel took specialty markets away via scales of economy. Intel is the specialty player this time.

Yeah, yeah. Scales of economy?. Faster and cheaper processors are "scales of economy"?

Originally posted by: BenSkywalker
Companies have money and infraestructure, not knowledge. Knowledge is a human trait. And money can buy human talent.

Are you 12 or 13? I have to assume that no matter what nationality you are by the time you are in High School you will understand what Intellectual Property is. I detest responding to drivel this ignorant, but I fear some people may actually believe it.

WTF matters intellectual property here?

But there is no problem here. Your wisdom clearly eclipses all talent behind Intel team.
btw, whats your job? I only hope not a technical one, for your employers sake.
I wonder, are you a bot?








I am. I'm just a hardware junky. Welcome. Amber has good advice. We get hot and heavy here, in a civil way; it's the best way. You really didn't need to attack young Skywalker, he's doing well on his own. If you're around here after Larrabee is released, you will also see how to properly deal with these types. This thread will come back from the dead. Then whoever was correct will be known and all false, wild assumptions will be exposed. That works on both sides of all debates.


But I do have a rather vague question for young Skywalker. Why would you ask me that question? To refresh your memory, your question was: "You forget your login info Nemesis?" Only one person would ask that question, BEN. Do you understand what I am saying? I won't make it any more clear than this.
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
There were no real 3d accelerators by the time 3dfx hit the market.

Rendition, off the top of my head, was available prior to any part being released by 3Dfx. Forgive me for not having to rely on extremely poorly written web sites; some of us were actually around during that timeframe and had been working with computers for many years.

Have you ever seen a desktop machine with Itanium?

Yes.

And well... i read wikipedia,

That explains a lot. Find more reliable sources.

24 millions consoles... not bad for a beginner...

And 0 this generation. They were so good, no one wanted anything to do with them this generation.

So you are saying that nvidia and ati didnt included new hardware to support DX10...?

What part did you fail to understand? When did I say there were no hardware differences? The added hardware is there to improve performance in certain cases; not adding the hardware simply means you won't see the performance improvement. It isn't difficult to follow.

WTF matters intellectual property here?

Everything. If you don't understand that, you should look into it, and try to find better sources than Wiki.

This thread comes back from the dead.

Try this: I have never hesitated to voice my viewpoint when something is going to outright fail. You have a decade's worth of my posts to go through. Find an example of my market prediction being wrong on the consumer PC market.

btw, whats your job?

The most important part of my job deals with predicting sales trends. I get a lot of bonus checks at my job. If you have held a job in any comparable field, you should be able to figure out the rest.
 

vueltasdando

Junior Member
Apr 18, 2009
3
0
0
Originally posted by: BenSkywalker
There were no real 3d accelerators by the time 3dfx hit the market.

Rendition off the top of my head was available prior to any part being released by 3Dfx. Forgive me for not having to rely on extremely poorly written web sites, some of us were actually around during that timeframe and had been working with computers for many years.

Why don't you read what I actually write?
3dfx was the first commodity 3D graphics accelerator. Of course there were a lot of OpenGL workstations before, but I don't call a $5k machine commodity hardware.
And no, Rendition hardware wasn't sold as common-use hardware.


Originally posted by: BenSkywalker
Have you ever seen a desktop machine with Itanium?

Yes.

Well, of course anybody can build an Itanium machine. I mean a real-world business machine, not a geek machine.


Originally posted by: BenSkywalker
And well... i read wikipedia,

That explains a lot. Find more reliable sources.

Like your thread posts? Don't make me laugh.

Originally posted by: BenSkywalker
24 millions consoles... not bad for a beginner...

And 0 this generation. They were so good, noone wanted anything to do with them this generation.

The fact that Intel has no processors in this console generation doesn't have anything to do with technical capability, nor performance. The fastest general-purpose single-threaded processors are Intel's.

Originally posted by: BenSkywalker
So you are saying that nvidia and ati didnt included new hardware to support DX10...?

What part did you fail to understand? When did I say there were no hardware differences? The difference in added hardware is to improve performance in certain cases, not adding hardware simply means you won't see the performance improvement. It isn't difficult to follow.

It gets hard to argue with you. Once your reasoning breaks down, you change the question.

Originally posted by: BenSkywalker
WTF matters intellectual property here?

Everything. If you don't understand that, you should look into it, and try and find better sources then Wikki.

Engineers don't erase their memories when they change employers. Remember it.


Originally posted by: BenSkywalker
btw, whats your job?

The most important part of my job deals with predicting sales trends. I get a lot of bonus checks at my job. If you have held a job in any comparable field, you should be able to figure out the rest.

So you are not a technician, true?


 

rgallant

Golden Member
Apr 14, 2007
1,361
11
81
Quote:
Originally Posted by XS Janus
I love the "new" Intel and its "presence" among mere mortals these last few years
Come back soon, and good luck. I'm sure you'll have great news by then

we'll put the planet on Turbo. Nehalem waterfalling to mainstream will be exciting, and then... Lrb.
That will be a year like the year of Conroe.
I'll make sure the Intel guys stay open.

Francois
__________________


-Dr.Who on what he thinks is coming; not much, but hey, he should be in the know.

-http://www.xtremesystems.org/f...ab4df9269661c&t=222402

-Conroe = a shift to smarter chips, clock for clock, but also high clocks at the same time.
 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: IntelUser2000
Originally posted by: chizow

They're very different in the sense a CPU die is often comprised mainly of cache, which is the simplest transistor and always the first used to validate a new process node. That's very different from GPUs which are already dedicating upwards of 50% to execution units, a ratio that is only growing with every new iteration.

Look here: http://www.3dnews.ru/_imgdata/img/2009/02/06/112525.jpg

and here:
http://billkosloskymd.typepad....2008/02/04/tukwila.jpg

Tell me, how much of that 699mm² die is cache, hmm? (That's the die pic for Itanium Tukwila, with 30MB of on-die cache.) Larrabee had to greatly simplify its cores to increase its core count. The circuits in GPUs are definitely more complex than caches, but comparing them in this case is flawed.
I'm not sure what you are getting at (perhaps the same thing), but my point was that cache, the simplest transistor, makes up a significant portion of traditional CPUs, allowing clock speeds to scale higher. Those die shots are consistent with my point, showing 40-50% of the die space dedicated to cache.

That's very different from GPUs, which are dedicating at least that much of their die space to execution units rather than cache. Translated to Larrabee, where each discrete core will be a mix of both cache and functional/execution units, that sets the expectation of clock speeds closer to those of a GPU rather than Intel's CPUs, especially given the estimated size and TDP of Larrabee.

 

chizow

Diamond Member
Jun 26, 2001
9,537
2
0
Originally posted by: Idontcare
Now there is an acceptable level of justified opinion, thanks for taking the time to go into it!

It does naturally beg a degree of self-assessment though - unless we presume ourselves to be superiorly intelligent to the decision makers at Intel, we must assume Intel knew ALL of this before they even assembled the layout team for Larrabee nearly 3 yrs ago. And they certainly knew it 2 yrs ago, and 1 year ago, and today.

So why then would Intel choose to ignore this information and develop a product with such woefully obvious inadequacies? Why would they release such a woefully inadequate product next year?

Just saying we've got to be giving ourselves a lot of credit in the grey matter department and assuming Intel's decision makers are operating with a commensurate depletion of it in order to be so confident as to assume we know what they don't and that we foresee in their competition something which they do not.

Are we so supremely confident in ourselves as to make such an assertion?

Idc, I'm not sure if you're purposefully avoiding the white elephant in the room here, but have you considered that the business-minded individuals and decision makers at Intel might've trumped the more technical and performance-focused wishes of the design teams?

I mean, Intel wouldn't have any vested interest in doing everything in their power to ensure x86 is firmly entrenched in any new computing standard or frontier, would they? To me it's obvious why they're taking an approach, from both a hardware and a software standpoint, that has just about everyone in the industry scratching their heads asking "Why?"
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
Why you dont read what i actually write?
3dfx was the first commodity 3d graphics accelerator. Of course there were a lot of OGL workstations before, but i dont call commodity hardware a 5k$ machine.
And no Rendition hardware wasnt sold as common use hardware.

Since you like this site: Rendition, a $5K workstation-class part? Heh, seriously man, that is extremely amusing for anyone that remembers those parts.

Engineers dont erase their memories when change employer. Remember it.

IP laws can cost companies billions, remember it.

So you are a not technician, true?

Depends on what definition you are using. If you are asking am I trained in Comp Sci, then yes, I am. Have I worked with computers for a living for years? Again, yes. Have I worked with 3D technology for years? Yes. Am I currently making way the hell more money than I did when I worked directly with these things for a living? Yes.
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: chizow
Originally posted by: Idontcare
Now there is an acceptable level of justified opinion, thanks for taking the time to go into it!

It does naturally beg a degree of self-assessment though - unless we presume ourselves to be superiorly intelligent to the decision makers at Intel, we must assume Intel knew ALL of this before they even assembled the layout team for Larrabee nearly 3 yrs ago. And they certainly knew it 2 yrs ago, and 1 year ago, and today.

So why then would Intel choose to ignore this information and develop a product with such woefully obvious inadequacies? Why would they release such a woefully inadequate product next year?

Just saying we've got to be giving ourselves a lot of credit in the grey matter department and assuming Intel's decision makers are operating with a commensurate depletion of it in order to be so confident as to assume we know what they don't and that we foresee in their competition something which they do not.

Are we so supremely confident in ourselves as to make such an assertion?

Idc, I'm not sure if you're purposefully avoiding the white elephant in the room here, but have you considered that the business-minded individuals and decision makers at Intel might've trumped the more technical and performance-focused wishes of the design teams?

I mean, Intel wouldn't have any vested interest in doing everything in their power to ensure x86 is firmly entrenched in any new computing standard or frontier, would they? To me it's obvious why they're taking an approach, from both a hardware and a software standpoint, that has just about everyone in the industry scratching their heads asking "Why?"

I agree, that is a plausible scenario; such a white elephant has existed at Intel in the past (the NetBurst marketing-driven strategy).

As I've said in other posts though, I see Intel's business model as distinctly different from that of other technology companies (IBM, AMD, Sun, DEC, NV) in that Intel is about revenue and gross margins, with technology merely a means to that end, while the vast majority of other companies seem to prioritize the creation of technology first and foremost, with hopes that customers and money will follow.

Intel's GMA and Netburst technologies are derided in enthusiast forums like these but there's no doubt they are products that were created to generate profits and they did/do that in spectacular fashion.

I have no beef with folks who merely wish to express their opinion regarding Larrabee's performance or likelihood of success, but I do like to listen to the justification for these opinions as usually there is a grain of possibility behind every one of them.

Not all possibilities carry the same likelihood though; some are far-fetched (a non-zero probability of occurring) but nonetheless possible. I do have concerns for the folks who have already decided (with zero data for justification) that all scenarios but their preferred outcome have zero probability of occurring...
 

Idontcare

Elite Member
Oct 10, 1999
21,110
64
91
Originally posted by: BenSkywalker
Engineers dont erase their memories when change employer. Remember it.

IP laws can cost companies billions, remember it.

My interpretation of this dialogue between you and vueltasdando is that you are discussing how the world should work in theory whereas vueltasdando is discussing how the world works in practice.

In theory engineers who have the talent to create IP at company A would not leverage that knowledge and their talent to create new IP at company B.

In reality though this happens all the time; in fact, as vueltasdando is hinting at, it is basically impossible for it not to happen. In my tenure at Texas Instruments I was the progenitor of some 20 patents; as it was my brain that was the very reason behind the creation of that IP, it would be impossible for me to set aside the experience and knowledge I have of that IP (which remains a TI asset) while I create IP at my current employer.

And in practice what happens is that my employer says to me, "You were responsible for creating IP xyz at TI; you know the logic behind the creation of that IP as well as the limits to which you covered it in the patent, so can you envision a way to create new IP for us which goes beyond your prior IP at TI in such a way as to enable us to file a new patent, which would then enable us to make products that do not infringe on the IP you created at TI?"

That's the reality, and that's why it matters who Intel has hired and what their experience is, and it is why they (Intel) bother to go to the lengths of making any of it public knowledge.

You also appear to be conflating knowledge of IP with infringement of IP. Intel can know of NV's IP by way of patent review (patents are required to capture all there is to know about the IP in order for the IP to be covered by the patent), or by way of reverse-engineering, or by way of hiring analysts, etc. It is legal to intimately know the IP of your competitors (provided you come by it legally, of course), and it is legal to hire people with the express task of creating products with new IP that avoid infringing the IP of the competitor.

Originally posted by: BenSkywalker
So you are a not technician, true?

Depends on what definition you are using. If you are asking am I trained in Comp Sci then yes, I am. Have I worked with computers for a living for years, again, yes. Have I worked with 3D technology for years, yes. Am I currently making way the hell more money then I did when I worked directly with these things for a living, yes.

I find this repeated talk of your implied checking account size to be distasteful. Not that you have to care or should care what I think, but I suspect I'm not the only one who feels that such efforts to bolster your credibility by implying you are a big dog in the real world are actually having the opposite effect.

In my experience it's the guys who feel they have to resort to talking about the size of their paycheck as a means to establish credibility that tend to be the ones that don't have much else worth listening to. If you don't intend to create this added dimension to the perception of your online persona, then I hope this feedback provides food for thought.
 

taltamir

Lifer
Mar 21, 2004
13,576
6
76
While NetBurst generated money, did it generate as much as it could have? I thought it was largely Intel's existing contracts and people's lack of knowledge driving its sales, with AMD gaining massive traction and market penetration during the NetBurst days...