OK, I think we are done with the debate on 6 & 8 vs 4 cores.
I would like to know a little more about how tessellation works. From what I've read it's basically like subdivision or something like that. But I'm wondering if someone can explain how it works, why it's so special, and what it's all about?
In 3 or fewer paragraphs per post, please.
- So does OpenCL run on both CPUs and GPUs?
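On the tessellation question: the way I understand it (and I'm no graphics guy), the tessellator takes a coarse patch and chops its edges into segments based on a tessellation factor, generating new vertices that the shaders can then displace for extra detail. Just to show the basic "subdivision" idea, here's a toy C sketch of splitting one edge. The tessellate_edge helper is made up by me; this is only my mental model, not how the hardware actually does it:

#include <stdio.h>

/* Toy sketch of the "subdivision" idea behind tessellation: split one patch
   edge into 'factor' segments by interpolating new vertices between the two
   endpoints. Real hardware tessellators work on whole patches via hull/domain
   shaders; this only illustrates the concept. */
static void tessellate_edge(float ax, float ay, float bx, float by, int factor)
{
    for (int i = 0; i <= factor; ++i) {
        float t = (float)i / (float)factor;   /* 0.0 .. 1.0 along the edge */
        printf("vertex %d: (%.2f, %.2f)\n", i, ax + t * (bx - ax), ay + t * (by - ay));
    }
}

int main(void)
{
    tessellate_edge(0.0f, 0.0f, 1.0f, 0.0f, 4);   /* factor 4 -> 5 vertices along the edge */
    return 0;
}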
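And on the OpenCL question: as far as I know, yes, it runs on both, as long as you have a runtime installed that exposes a CPU device (AMD's APP SDK does, for example). Here's a rough host-side C sketch that just lists every OpenCL device and whether it's a CPU or a GPU; the calls are the standard OpenCL host API, error handling is mostly omitted, and you'd link with -lOpenCL:

#include <stdio.h>
#include <CL/cl.h>   /* OpenCL host API; link with -lOpenCL */

/* Enumerate all OpenCL platforms and print each device with its type. */
int main(void)
{
    cl_platform_id platforms[8];
    cl_uint nplat = 0;
    clGetPlatformIDs(8, platforms, &nplat);

    for (cl_uint p = 0; p < nplat; ++p) {
        cl_device_id devices[16];
        cl_uint ndev = 0;
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 16, devices, &ndev);

        for (cl_uint d = 0; d < ndev; ++d) {
            char name[256] = {0};
            cl_device_type type = 0;
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof(name), name, NULL);
            clGetDeviceInfo(devices[d], CL_DEVICE_TYPE, sizeof(type), &type, NULL);
            printf("%s: %s\n",
                   (type & CL_DEVICE_TYPE_GPU) ? "GPU" :
                   (type & CL_DEVICE_TYPE_CPU) ? "CPU" : "other",
                   name);
        }
    }
    return 0;
}

Whether a CPU device actually shows up depends on which drivers/runtimes you have installed, so don't be surprised if you only see the GPU.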
Here's something interesting I've only seen posted on this other forum so far, but... um, check out this post:
If it's anywhere near real... I'm freakin' PUMPED!!!
http://www.amdzone.com/phpbb3/viewtopic.php?f=532&t=138369&start=25#p198909
IBM's chips are in a whole different class with regards to size, power, and thermals, and cannot be compared to those produced by AMD and Intel.
Well, from the Hot Chips presentation (last year), IBM is releasing 5.2GHz chips with 24MB of L3 and 192MB of L4 on a 45nm process. So there is no reason an AMD chip designed for higher clocks (remember the STARS int core hasn't been touched in over 10 years, and this is an all-new front end) on 32nm SOI HKMG couldn't be clocking in the range you speak of. Look at the L2 cache latency: the only way that works (more access cycles but lots of data) is with a high clock speed.
OK, the first one's encoding, which we don't have to argue about; you'd need a gigantic number of cores or a tiny sample before that runs into problems.
Never played Mafia 2, so I can't argue with that, but I've played ME1 and 2 on an E8400, and if you're claiming they're bottlenecked by the CPU, then either they're completely not optimized for more than 2 threads (which IIRC is not the case) or your quad is running at 2GHz, since my good old E8400 gets over 60fps there.
A great case where you may see a large percentage increase, which isn't really that interesting with a refresh rate of 60Hz.
I really should look up some old posts from when the dual/quad debate was hot; it should be quite funny in retrospect.
IBM's chips are in a whole different class with regards to size, power, and thermals, and cannot be compared to those produced by AMD and Intel.
If Intel and AMD's size, power and thermal budgets were raised, they would certainly have produced very different chips.
Not really. Intel has both x86 and IA-64 chips in that size range, and if you factor out cache size as well, the IBM chips aren't that big. Also, that's a 45nm process; 32nm SOI should be faster both because of the shrink and because of the added HKMG.
Thermals/power (the same thing, really) do matter, but again we are talking about adding the move from 45nm to 32nm HKMG into the equation. I'm not saying they're directly comparable, but it's just as valid (if not more so) to look at what IBM has done with the same process as it is to compare AMD to Intel.
Are you aware what the Z-chips are for? Or how they are deployed, how they are cooled, and how much power they use?
Yes, and clock speed isn't the only thing determining power requirements.
Have you actually worked with a Z-series from IBM, or even just read the technical manual for it?
I haven't worked on them, but I have worked in very large datacentres with them (I'm a network engineer/architect).
If you have any of those experiences, you would know that what clockspeeds IBM can get is no indication of what AMD/Intel may accomplish.
I never said Bulldozer will hit 5.0GHz, but it shows what the process can do on a new design, not a 10+ year old front end + int core. Saying the Z draws too much power without considering workloads isn't fair either.
You may guess all you want, and that would be well within your right, but using IBM's accomplishment for their own mainframe series as a basis for what AMD/Intel can do is certainly more than just a little off.
Not really, no more than using 45nm STARS or SB to gauge Bulldozer clock speed. You can get what, a 3.6GHz quad from AMD, and that front end/int core has largely not changed since K7. Just look at the L2 cache latency change from STARS to Bulldozer, 15 cycles to what, 18, while reducing L1 size, adding stages to the pipeline, etc.
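Just to put rough numbers on that latency point (back-of-the-envelope only, and the clock speeds here are made-up examples, not announced parts): absolute latency is cycles divided by clock, so more cycles at a higher clock can still come out the same or better in nanoseconds.

#include <stdio.h>

/* Back-of-the-envelope: L2 latency in ns = cycles / clock (GHz).
   15 and 18 cycles are the figures mentioned above; the clock speeds
   are hypothetical, picked only to show the relationship. */
int main(void)
{
    printf("15 cycles @ 3.6 GHz = %.2f ns\n", 15.0 / 3.6);  /* about 4.17 ns */
    printf("18 cycles @ 4.5 GHz = %.2f ns\n", 18.0 / 4.5);  /* 4.00 ns */
    return 0;
}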
Even if we were to totally discount the entirely different architecture, size and power, the fact that these Z-chips will be deployed in cages that come with their own modular refrigeration units (the bases of the cages are actually ref units) as vital cooling (not just a dinky heatsink+fan, or radiator+fan) should tell you that these clockspeeds are in no way indicative of anything that will be deployed by Intel/AMD. It's simply no use dragging IBM into the picture when talking about mass-market products from Intel/AMD.
Again, not really; it's far more efficient to cool like this. I work on far bigger, more power-hungry bits of kit that are fan cooled, i.e. forwarding rates up to 92 Tbit/s (the systems I have worked on aren't that big, but still several racks), and the power allocation just for the fans internal to the device is over 600 watts per rack, and you need very good localised cold airflow on top of that as well. While the heat load is more spread out in my situation, it's far higher as well.
You could have just based your guess on what AMD/Intel has already done, and historical clockspeed increases due to process changes without drastic uarch changes, plus account for the new uarch meant for higher clocks, etc. I'd have no argument with that. Dragging IBM's mainframe chips into the picture is a whole different story entirely.
Well, the thing is IBM is able to cool these things pretty efficiently and doesn't have the gigantic variation Intel/AMD have to live with. Custom cooling solutions do have their advantages, as does being able to spend a considerable amount of money on them (a cooling solution that costs a few thousand bucks is a bit easier to justify if the product you're buying costs millions).
BTW, what is the max Turbo Core for Bulldozer? I heard it will be 1GHz+ when not all cores are being used and 500MHz when all cores are busy.
Is that the 8 or 16 core? If it's the 8 core, then damn, that's sweet.
It's probably a fake, so I will wait and see what JF has to say before I start getting my hopes up.
- So does OpenCL run on both CPUs and GPUs?
- Virtualization on a desktop? Ya, I do it, and it's kinda cool, but isn't that what servers are for? Why not just go whole hog and run a browser over a networked VM?
- Westmere/SB, and I think BD, have hardware AES encryption (quick sketch a couple of posts down). Again, it is massively faster than a pure software implementation on a general-purpose CPU could ever hope to be.
- Flash video? I'm glad you brought it up. You can play 1080p Youtube now on a 1.4GHz Core 2 Duo? How? GPU acceleration.
Again, the most demanding tasks we do on a CPU can be ported over to the GPU or other specialized hardware or instructions (like AES, for example). Encryption, decryption, video encoding, video decoding, 3D rendering, scientific modeling, and the user interface itself can all be offloaded to the GPU. . . . Look at ARM. Their A9 chips can do hardware 1080p encoding in real time. I don't think you can do that even on a multi-core CPU.
Most of those special cases are handled by dedicated hardware. Take a look at a picture of the Tegra 2 found in the LG Optimus 2X.
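On the hardware AES point a couple of posts up: on Westmere/SB that's the AES-NI instructions, where e.g. AESENC does a whole AES round in one instruction. A tiny C sketch using the compiler intrinsic; the state and round key below are just demo filler values, and you'd compile with -maes on gcc/clang:

#include <stdio.h>
#include <emmintrin.h>
#include <wmmintrin.h>   /* AES-NI intrinsics; compile with -maes */

int main(void)
{
    /* Arbitrary demo values standing in for the AES state and one round key. */
    __m128i state = _mm_set_epi32(0x03020100, 0x07060504, 0x0b0a0908, 0x0f0e0d0c);
    __m128i rkey  = _mm_set_epi32(0x00000000, 0x00000001, 0x00000002, 0x00000003);

    /* One AES encryption round in hardware:
       ShiftRows + SubBytes + MixColumns + AddRoundKey. */
    state = _mm_aesenc_si128(state, rkey);

    unsigned char out[16];
    _mm_storeu_si128((__m128i *)out, state);
    for (int i = 0; i < 16; ++i)
        printf("%02x", out[i]);
    printf("\n");
    return 0;
}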
Here's something interesting I've only seen posted on this other forum so far, but... um, check out this post:
If it's anywhere near real... I'm freakin' PUMPED!!!
http://www.amdzone.com/phpbb3/viewtopic.php?f=532&t=138369&start=25#p198909
We have only disclosed the 500MHz all-core boost number; we have not disclosed the max.
I'm fairly skeptical.
The other problem is that it doesn't tell us anything about the BD chip being used. It could very well be a 16-core server chip that was overclocked. Great numbers, but if it's for a $1500 chip, I wouldn't consider it very competitive with the 2600K.
I think if that were the 16-core chip... AMD would be in serious, serious trouble. Because if 16 cores isn't even double the speed of the 6-core 980X... um, then... yeah, BD truly SUCKS.
I am pretty sure AMD didn't spend the last 7 years on a uarch that requires nearly 3x more cores to compete with Intel, LOL!
If anything, this has to be some form of the 8-core BD, since BD itself is 8 cores. The 16-core chip is an MCM.
Well, technically speaking, BD refers to the individual module. But I still think of BD as the base 8-core, non-MCM part =P