What I can say is that GPU product cycles tend to run faster than on the CPU side. AMD is working to incorporate the GPU pace of innovation and execution across all of our products; we call this AMD Velocity.
Q: Will Fusion have anything to offer in the mid-range, perhaps even high-end, as far as GPU performance goes?
A: There are all sorts of synthetic tests, but we are interested in the performance gains we are able to achieve in actual applications. In fact, our goal is to achieve high performance in that segment exclusively.

Translation: No.
I don't know why people are expecting these APUs to be super fast. It's just the next evolutionary step from a Radeon 4200, GeForce 9300, etc. It's the same integrated segment.
Cheaper motherboards for SFF computers and maybe some more income for AMD? Kinda seems like one of those "nothing to see here" situations.
I'm not a fan of added components on the CPU; Intel has already shown us what limitations can come of adding even something as simple as a PCI-E controller to a CPU. A GPU is going to add a lot of heat RIGHT next to your CPU core, which is not attractive. I'm sure many were just hoping there might have been something to gain from such an integration, when in fact it's more limiting (dare I say worthless?) to a power user.
Having that extra core on the CPU package could lead to interesting things in the near future. With proper coding and architecture you can easily offload physics or some other specific task to that die. Obviously software and hardware would have to work together for this, but there is potential for these to be useful even for gamers.
I think they're already doing this with OpenCL, which offloads floating-point calculations onto the GPU. Being an open standard, I'm sure it can be implemented for either a dedicated PCI card or an on-package GPU; it shouldn't matter.
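To make that concrete, here is a minimal sketch of that kind of offload using the standard OpenCL C host API. The kernel name (vadd), the array size, and the lack of error checking are my own simplifications for illustration; real code would check every return value. The point is that the host code just asks for CL_DEVICE_TYPE_GPU, so it doesn't care whether the GPU is a discrete card or sitting on the CPU package:

/* Minimal OpenCL sketch: add two float arrays on whatever GPU is found.
 * Illustrative only: no error handling, first platform/device assumed. */
#include <stdio.h>
#include <CL/cl.h>

static const char *src =
    "__kernel void vadd(__global const float *a,\n"
    "                   __global const float *b,\n"
    "                   __global float *c) {\n"
    "    size_t i = get_global_id(0);\n"
    "    c[i] = a[i] + b[i];\n"
    "}\n";

int main(void) {
    enum { N = 1024 };
    float a[N], b[N], c[N];
    for (int i = 0; i < N; ++i) { a[i] = (float)i; b[i] = 2.0f * i; }

    /* Grab the first platform and its first GPU device; this works the
     * same for a discrete card or an on-package GPU. */
    cl_platform_id plat;
    clGetPlatformIDs(1, &plat, NULL);
    cl_device_id dev;
    clGetDeviceIDs(plat, CL_DEVICE_TYPE_GPU, 1, &dev, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
    cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);

    /* Build the kernel from source at runtime. */
    cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "vadd", NULL);

    /* Copy inputs to device buffers, allocate an output buffer. */
    cl_mem ba = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof a, a, NULL);
    cl_mem bb = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof b, b, NULL);
    cl_mem bc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof c, NULL, NULL);

    clSetKernelArg(k, 0, sizeof ba, &ba);
    clSetKernelArg(k, 1, sizeof bb, &bb);
    clSetKernelArg(k, 2, sizeof bc, &bc);

    /* Launch one work-item per array element, then read the result back. */
    size_t global = N;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, bc, CL_TRUE, 0, sizeof c, c, 0, NULL, NULL);

    printf("c[10] = %g (expected 30)\n", c[10]);
    /* Cleanup of buffers/kernel/queue/context omitted for brevity. */
    return 0;
}

And that's the portability point: swap CL_DEVICE_TYPE_GPU for CL_DEVICE_TYPE_CPU and the same kernel runs on the host cores instead, no other changes needed.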
I think I have asked this question before, but does anyone think heterogeneous computing could lead to the practical end of the multi-core era?
If x86 code can be handled by a GPU, why continue to multiply CPU cores? Wouldn't making CPU cores wider lead to larger gains per dollar?
LMAO, yes, bottleneck, let's not forget the children, haha. Doesn't ATI make the Xbox graphics subsystem? They're not going to undermine themselves like that, of course; there's too much money in console gaming, and kids aren't the only ones playing these days.
Poor countries are historically quite a few product cycles behind us, that's all; it got me laughing.
