Intel's Discrete GPU

40sTheme

Golden Member
Sep 24, 2006
1,607
0
0
Yes, 1.2 TFLOPS means quite a bit. The only problem, though, is that those are merely floating-point operations. That's great for those who work with CAD and the like, but for gamers it doesn't mean much beyond more precise and extensive high-level shader effects that use floating-point math (HDR and its ilk). In other words, you can't map textures and rasterize with floating-point operations. So, to gamers, a massive amount of FLOPS doesn't mean a whole lot. But for development/engineering/etc., 1.2 TFLOPS brings a lot to the table.
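
For a rough sense of where a headline figure like 1.2 TFLOPS comes from (these are my own back-of-the-envelope assumptions, not anything Intel has published), peak throughput is just cores x vector lanes x ops per lane per cycle x clock:

[code]
/* Back-of-the-envelope peak-FLOPS estimate. All four inputs are assumptions
 * for illustration only, not quoted Intel specs. */
#include <stdio.h>

int main(void)
{
    const double cores        = 16.0; /* assumed mini-core count        */
    const double lanes        = 16.0; /* assumed SIMD width per core    */
    const double ops_per_lane = 2.0;  /* multiply-add counts as 2 FLOPs */
    const double clock_ghz    = 2.3;  /* assumed clock frequency        */

    double gflops = cores * lanes * ops_per_lane * clock_ghz;
    printf("Theoretical peak: %.0f GFLOPS (%.2f TFLOPS)\n",
           gflops, gflops / 1000.0);
    return 0;
}
[/code]

Plug in whatever core count and clock you like; the point is that the figure is a theoretical peak, with every unit busy every cycle, which says nothing about how much of it a game will ever see.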
 
MercenaryForHire

Jan 31, 2002
40,819
2
0
I'm pretty sure that 1.2TFLOP number is for their massively parallel POLARIS CPU, and not their discrete GPU.

Either that, or Intel is full of crap (again) about what their next GPU will be able to do, given that the estimated stats from the PS3's Folding@Home numbers are about 25GFLOPS per unit. ;)

- M4H
 

Phynaz

Lifer
Mar 13, 2006
10,140
819
126
Originally posted by: MercenaryForHire
I'm pretty sure that 1.2TFLOP number is for their massively parallel POLARIS CPU, and not their discrete GPU.

Either that, or Intel is full of crap (again) about what their next GPU will be able to do, given that the estimated stats from the PS3's Folding@Home numbers are about 25GFLOPS per unit. ;)

- M4H

Nope, it's their GPU. Details are here.



 
MercenaryForHire

Jan 31, 2002
40,819
2
0
Originally posted by: Phynaz
Originally posted by: MercenaryForHire
I'm pretty sure that 1.2TFLOP number is for their massively parallel POLARIS CPU, and not their discrete GPU.

Either that, or Intel is full of crap (again) about what their next GPU will be able to do, given that the estimated stats from the PS3's Folding@Home numbers are about 25GFLOPS per unit. ;)

- M4H

Nope, it's their GPU. Details are here.

That's the original link from the OP, but looking over it again, I stand corrected. What they seem to be saying is that in theory they could get this kind of raw performance out of a massively parallel GPGPU (General Purpose GPU).

However, as we all know, raw numbers mean jack, especially on paper. :p

- M4H
 

kobymu

Senior member
Mar 21, 2005
576
0
0
http://www.beyond3d.com/images/articles/IntelFuture/Image12-big.jpg

VLIW has had very little success in the CPU-centric/general-purpose space, and now Intel wants to implement it on mini-cores!? :confused:

So the question is, does the core support x86 instructions at all? If single-threaded performance is still roughly acceptable, it might make some sense for it to do so, and then you could think of the Vec16 FPU as an 'on-core' coprocessor that exploits VLIW extensions to the x86 instruction set. Or, the entire architecture might be VLIW with absolutely no trace of x86 in it. Obviously, this presentation doesn't give us a clear answer on the subject. And rumours out there might just be speculating on Larrabee being x86, so that doesn't tell us much either.
Weird.
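
To make the Vec16 idea a bit more concrete (purely illustrative; the 16-wide width comes from the slide, everything else here is my own assumption), this is the kind of loop such a unit would ideally chew through as one wide multiply-add per 16 elements, whether it gets there through x86 extensions or a native VLIW encoding:

[code]
/* Illustrative sketch of a 16-wide multiply-add (a*x + y) loop. A compiler
 * for a hypothetical Vec16 unit would map the inner 16 iterations onto a
 * single wide instruction; on plain x86 it just runs as scalar C. */
#include <stddef.h>

#define VEC_WIDTH 16

void scale_add_vec16(float *y, const float *x, float a, size_t n)
{
    size_t i;

    /* Main loop: 16 independent multiply-adds per step. */
    for (i = 0; i + VEC_WIDTH <= n; i += VEC_WIDTH)
        for (size_t lane = 0; lane < VEC_WIDTH; ++lane)
            y[i + lane] = a * x[i + lane] + y[i + lane];

    /* Scalar tail for the leftover elements. */
    for (; i < n; ++i)
        y[i] = a * x[i] + y[i];
}
[/code]

The data-parallel pattern is the same either way; the open question from the slide is just how the instruction stream reaches the hardware.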

Hmmm... reading on, I found this:

Interestingly, one of our ninjas reports having gotten a few of Intel's acolytes on the side and perceived a nearly staggering lack of appreciation from them for just how important and resource-intensive the software development infrastructure and support side can be...
Well doh, if you are going to COMPLETELY change the way you do hardware-assisted rendering, aka rewrite ALL the 3D software assets companies have accumulated over the years, then don't be surprised that companies are slow to adopt...

... One hopes for their sake that the senior people have a finer appreciation for this element. But when one remembers how many games updated for dual core CPUs last year also included a note that Intel's HT technology (introduced in 2002!) received significant benefits too. . . well, let's say that confidence on that point is hard to come by.
OK, that is actually a good point. :eek:

[edit] I should post this quote in the Nvidia GCPU thread (in AT/CPU), maybe if it is coming straight from Intel's mouth, it will be more 'persuasive'.[/edit]
 

Cookie Monster

Diamond Member
May 7, 2005
5,161
32
86
Sounds like a GPU intended for general-purpose computing. I wouldn't be surprised if they get slaughtered by ATi and nVIDIA though; that's what happened last time, when they launched the i740.
 

kobymu

Senior member
Mar 21, 2005
576
0
0
I need to go on a semi-verbose rant for a moment, so I apologize up front if anyone thinks I'm thread-trashing.

If you take a few steps back for a moment and try to look at the big picture, I think most will agree with me that there is an approaching 'war' between Intel, Nvidia and AMD, a war over what will be the next 'winning' architecture for hardware-assisted rendering.

I think 'it' (the current situation) is evolving into a war because the winning architecture will lay the groundwork for what will eventually be, after a couple of generations, the architecture for hardware-assisted real-time photorealism. That is the future of hardware-assisted real-time rendering.

As economies of scale have proven time and time again, even if the market IS big enough for several players/companies, it will eventually consolidate around a single one. AMD, Nvidia and Intel all want to be that company! Every single one of them is pushing, hard, to reach that goal. We are seeing just the start of it.

While it is completely understandable that every company will approach this endeavor differently, I foresee problems, big ones.

First of all, many software companies, the majority of them in fact, revolve around the "time to market" concept. From every programmer I talk to, what I hear most is how crazy their bosses are because they promised the client they would finish the project in an unreasonable time frame, and about the 12-hour (sometimes 16-hour) hauls they have to pull to meet the deadlines. This is possible mainly due to one very important aspect of programming: reusability.

Software companies rely on reusable code, they depend on it, but most importantly they accumulate it; I really cannot stress this enough. IP (intellectual property) in the software industry is a huge deal, so huge that some of the big names that were once considered big-iron companies are now considered IP companies, names like Cisco and IBM. IBM, for example, gets a lot of press for its supercomputers, but from a financial point of view its software assets are a bigger slice of its technology-assets pie; if IBM started selling off its technology assets, it would make more money from its software than from its CPU or server assets. Hardware assets aren't as valuable as they used to be (with the exception of fabrication facilities), and IBM didn't get that much for its laptop business. Back when DEC started going under and its CEO began selling the company off piece by piece, they got the most for their software and their IP.

Medium and smaller software companies live or die by their IP. I have seen some go under for not keeping up with current technologies fast enough, i.e. not developing enough IP for the newer technologies/markets, and I have seen other companies go through an explosion (in the good sense) for the single reason that they had the right IP.

For example, I have seen a small DB software company, 5 employees total, grow to 500 in 2 years just because they concentrated on corporate DB software, things like ERP, CRM and ERM; that company is now worth a few dozen million USD. They own, maintain, and continually develop their core IP, which they reuse with every new project they undertake, just changing the interfaces and customizing it to the client's specific needs. The client usually gets the client-specific customized code, at some additional cost, but the core IP is licensed for use only and the code remains closed (the project ships with the core IP as binaries only), so only the software company can use/reuse it. That way they can run many, many projects at the same time, and each and every one of those projects earns them a very nice amount.

Going back to the subject at hand, as I mentioned in the beginning, the new offerings that Intel, Nvidia and AMD are trying to push are not targeted at replacing the current discrete GPU; they are aimed at a future architecture that will eventually be able to achieve hardware-assisted real-time photorealism. I personally don't think the 3D architecture in current GPUs can or will easily attain that level of performance. I think the architecture should change, but it should change in measured steps.

The further away from the current x86 architecture these solutions are, the harder and longer it will take for them to materialize. x86 has one of the biggest repositories of code in this segment, probably more than any other architecture. If the next-generation G/CPUs, or whatever they are, stray too far from existing solutions, it will shake the entire 3D software industry.

If the article in the first post is correct, then Intel's approach to this is the equivalent of a U-turn with a triple somersault. Documentation, specification guidelines, libraries, tools, utilities, all will have to be rewritten; some will need to be rewritten from scratch to make use of the new architecture. It will just make too much existing code obsolete.

In short, I don't like this approach; it is just too radical.

Reading through the article again, it seems very unlikely that a mini-core can accept both VLIW and x86 (or bare-bones x86). I know Intel has an impressive track record with decoders, but come on, it's a MINI core; where will it fit?

Going for a partial x86 or pseudo-x86 (with the appropriate additions) architecture just makes a lot more sense; Intel's move doesn't. Maybe they can produce a magical compiler that will consume slightly modified existing code and produce from it binaries native to their new architecture? I hear they have an extremely capable software team, but this would not be, by any means, a small feat. Maybe Intel has some kind of rabbit in its hat? Maybe they are just relying on their ability to push their new architecture by sheer force?

I don't know, but I do know this: I have a BAD feeling about this one.

edit for spelling