Go Back   AnandTech Forums > Hardware and Technology > CPUs and Overclocking

Old 07-11-2013, 02:08 AM   #26
Idontcare
Administrator
Elite Member
 
Idontcare's Avatar
 
Join Date: Oct 1999
Location: 台北市
Posts: 20,474
Default

Quote:
Originally Posted by Exophase View Post
I more or less agree with this notion too. If a legitimate compiler optimization breaks a benchmark that doesn't necessarily make the optimization wrong, it makes the benchmark bad. If the compiler optimization does nothing but break that benchmark then the optimization is dishonest.

As far as I'm concerned you can't break a non-synthetic benchmark, and generally you can't even break a good synthetic benchmark. nbench is quite bad (some parts worse than others). It's also very very old. If the writers realized this part could be broken like this, which they should have but may not have, they may have also thought no compiler would bother because compilers were a fair bit more primitive back then.
110% agree, especially with the bolded part.

I remember well when SUN broke the SPEC benchmark by some compiler optimization that improved scores in just one test out of the suite by something like 10x (or was it even more).

So they scored the same in 14 out of 15 tests, but in that one test their score went from something like 8.7 to an astonishing 93 (IIRC), which then pulled the weighted average in such a way that it seemed like they were suddenly kicking Power6's ass in Spec.

Completely broke the purpose of the benchmark, and the compiler trick wasn't really applicable to real world software.

Everyone knew it, the subtest results were there for all to see...but that didn't stop SUN marketing from hyping and grandstanding their uber processor for its SPEC scores

(and yes, the irony never escaped me that while we were SUN's foundry, I knew their chips were crappy and delivered bottom-tier performance, the Atoms of the big-iron world )
Idontcare is offline   Reply With Quote
Old 07-11-2013, 03:37 AM   #27
BLaber
Member
 
Join Date: Jun 2008
Posts: 184
Default

Quote:
Originally Posted by Idontcare View Post
Very true. I don't see too many people complaining that successive video card driver releases basically optimize the drivers by way of profiling specific games and rolling out optimizations that are absolutely game and hardware specific.

In the end these video driver profiles improve gameplay, improve the performance of the hardware when processing the given software, and the consumer gets more for their money.

If that is what compilers are doing, and in doing it the performance optimizations are being captured and represented in benchmark scores, then that is a good thing.
Optimizing for the games & apps that people buy systems to run makes sense, it's worth the effort, but what's going on here is not even remotely comparable to games & apps optimization.
__________________
AMD Phenom II X550 @ X4 3.9Ghz Nb @ 2.6Ghz / Asus crosshair v / Corsair H70 / 4Gb x 2 Gskill @ 1600 / 3 X Asus 5850 crossfire / Asus Xonar D2x / Corsair HX750 / Silverstone Raven rv-01
BLaber is offline   Reply With Quote
Old 07-11-2013, 03:57 AM   #28
Abwx
Diamond Member
 
Join Date: Apr 2011
Posts: 4,803
Default

As the French saying goes, he who has drunk will drink again...

http://techreport.com/review/17732/i...-optimizations
Abwx is offline   Reply With Quote
Old 07-11-2013, 05:27 AM   #29
beginner99
Platinum Member
 
Join Date: Jun 2009
Posts: 2,196
Default

Quote:
Originally Posted by CTho9305 View Post
Great post, Exophase. A friend who works on an ARM design has been ranting for a while about Antutu and Geekbench and the quality of code currently coming out of JITs on ARM... I really hate the cross-ISA situation in terms of benchmarking. The worst part is that generally-credible reviewers don't caveat their articles enough, so people actually give credit to these results. It's worse than the 80s IPC comparisons across RISC/CISC because the macroscopic workload characteristics aren't even the same.
I agree that if a compiler gets optimized for a specific benchmark it's cheating and idiotic. However, the thing with the JIT is ARM's problem and they must solve it. Like it or not, the software is part of the whole stack, and even if your CPU and ISA rock and are 10x faster when properly optimized for, they're still useless if no software does that optimization or your compiler sucks. I guess you get the point.

I'm not defending Intel, but on the other hand you can't always say an Intel CPU is only faster due to the better compiler. You're free to make a compiler for your own CPU, either from scratch or by contributing to GCC.
beginner99 is offline   Reply With Quote
Old 07-11-2013, 05:52 AM   #30
galego
Banned
 
Join Date: Apr 2013
Posts: 1,099
Default

Quote:
Originally Posted by Intel17 View Post
Because ARM chips win Geekbench despite evidence that Geekbench is intentionally crippled on Intel processors. From a recent interview with Silvermont's lead architect,

Quote:
Q: I saw very interesting comparisons of Silvermont with Saltwell in the disclosure. What puzzles me, though, is that it is very difficult to get a read on CPU-limited performance of these low power micro-architectures. For example, a benchmark like "Geekbench" paints "Saltwell" in a rather unflattering light compared to the ARM contemporaries, but then you see benchmarks such as AnTuTu showing a 2C/4T Saltwell taking leadership positions against a 4C/4T "Krait" or even Cortex A15 in integer and memory bandwidth, while even staying competitive on floating point! Could you help me to understand how Saltwell compares to the competition from what you have seen with more sophisticated measurements, and then from there I have a lot better context to think about Silvermont performance?

A: Geekbench is interesting: you look at the results, and the main “unflattering” results are in a few sub-benchmarks, where the IA version is set up to handle denorms precisely, and the input dataset is configured to be 100% denorms. This is not normal FP code, not a normal setup, and possibly not even an apples-to-apples comparison to how ARM is handling these numbers. So we view this as an anomaly. (The Geekbench developer agrees with us)
Saltwell trails A15 in raw IPC, but its higher frequency and threads are able to help compensate.
Saltwell trails Krait on very basic workloads like DMIPS, but on more complicated workloads Saltwell’s robust architecture will pull ahead.
I bolded the relevant parts. This Intel engineer claims that the Geekbench developer agrees. Therefore, if all of this is right, this would be a case where the benchmark needs to be adjusted/improved, not one where the benchmark has been deliberately rigged to cripple the competition. Nobody cheats a benchmark and then admits they did. That would be stupid.

Quote:
Originally Posted by AnandThenMan View Post
I'm with SiliconWars on this one. OEMs don't give a flying bleep about Intel's benchmark tricks and games, so why even bother?
But history shows that OEMs have been fooled before. From the Intel-AMD FTC complaint (verified in the settlement):

Quote:
59. To the public, OEMs, ISVs, and benchmarking organizations, the slower performance of non-Intel CPUs on Intel-compiled software applications appeared to be caused by the non-Intel CPUs rather than the Intel software. Intel failed to disclose the effects of the changes it made to its software in or about 2003 and later to its customers or the public. Intel also disseminated false or misleading documentation about its compiler and libraries. Intel represented to ISVs, OEMs, benchmarking organizations, and the public that programs inherently performed better on Intel CPUs than on competing CPUs. In truth and in fact, many differences were due largely or entirely to the Intel software. Intel’s misleading or false statements and omissions about the performance of its software were material to ISVs, OEMs, benchmarking organizations, and the public in their purchase or use of CPUs. Therefore, Intel’s representations that programs inherently performed better on Intel CPUs than on competing CPUs were, and are, false or misleading. Intel’s failure to disclose that the differences were due largely to the Intel software, in light of the representations made, was, and is, a deceptive practice. Moreover, those misrepresentations and omissions were likely to harm the reputation of other x86 CPUs companies, and harmed competition.
I find it reasonable that some mobile OEMs could be cheated again.

Quote:
Originally Posted by Idontcare View Post
Very true. I don't see too many people complaining that successive video card driver releases basically optimize the drivers by way of profiling specific games and rolling out optimizations that are absolutely game and hardware specific.

In the end these video driver profiles improve gameplay, improve the performance of the hardware when processing the given software, and the consumer gets more for their money.
I am one of those complaining. Yes, those optimizations are improvements, but you are ignoring the true problem.

The true problem is that the optimization is made for only a few games (for instance, half a dozen of them) and then, curiously, those few good-performing games are used in reviews over and over. The end user gets the false impression that this is the general performance of the hardware, unaware that that performance is not achievable in the 99% of games for which no improvement was made in the driver.

Last edited by galego; 07-11-2013 at 06:41 PM. Reason: settlement -> complaint
galego is offline   Reply With Quote
Old 07-11-2013, 07:13 AM   #31
SlimFan
Member
 
Join Date: Jul 2013
Location: Austin, TX
Posts: 75
Default

It's not clear to me at all that phone OEMs know anything about performance, benchmarks, or what's important to end-user experience. Quad core phones are pretty much useless in today's market with today's workloads. The only things that the additional cores make faster are the benchmarks that we're talking about here.

ARM has been selling Dhrystone performance for a very long time, and DMIPS/MHz, DMIPS/mW, and total DMIPS score (reached by multiplying by the number of cores, even though it's a single threaded benchmark). This is what much of the ARM ecosystem has been using to make decisions. These parts typically went into a closed system (before App stores) that made it almost impossible to run 3rd party software to figure out how fast they were. This was all wonderful, because nobody really cared. As long as the preloaded software ran at an acceptable performance level, all was well.

In today's market, you have app stores where arbitrary code is now run on the products. Suddenly you can download new code that may or may not run very well. The inherent performance of the product is now visible to the end user in a brand new way.

This is now a new world for the OEMs. I wouldn't assume that they are that much more evolved about performance, benchmarks, and what portions of the platform matter to the end user experience. This is not the PC/server space where you've had Intel, AMD, IBM, Sun, etc., fighting over benchmarks, multiple versions of SPEC, and virtually limitless software that can be run on each and every platform.

Now the only thing that anyone can run are these toy benchmarks with crazy results. End users and reviewers run them, and this is now publicly seen by people looking to make a purchase. An OEM that ignored these would be taking a big risk and either assuming that their customers are smarter than the reviewers, or that none of their customers would ever read these reviews.
SlimFan is offline   Reply With Quote
Old 07-11-2013, 07:26 AM   #32
mrmt
Platinum Member
 
Join Date: Aug 2012
Posts: 2,484
Default

Quote:
Originally Posted by galego View Post
But history shows that OEM have been fooled before. From the Intel-AMD FTC settlement:

You are not quoting the settlement, you are quoting the complaint.
mrmt is online now   Reply With Quote
Old 07-11-2013, 07:53 AM   #33
beginner99
Platinum Member
 
Join Date: Jun 2009
Posts: 2,196
Default

Quote:
Originally Posted by mrmt View Post
You are not quoting the settlement, you are quoting the complaint.
lol. epic fail. but what else is to be expected from current troll number one.
beginner99 is offline   Reply With Quote
Old 07-11-2013, 08:15 AM   #34
AnandThenMan
Platinum Member
 
AnandThenMan's Avatar
 
Join Date: Nov 2004
Posts: 2,562
Default

I just don't think the phone makers give a hairy rat's *** about a bench like this, and neither does the target buyer. The form factor, screen, "must have" appeal etc. etc. are much higher on the list. If you ask just about any smartphone owner, "hey, did you see the latest AnTuTu bench? Your next phone has to be the one that does the best on it," they will not understand a word you're saying, let alone even begin to care.
AnandThenMan is offline   Reply With Quote
Old 07-11-2013, 08:21 AM   #35
mrmt
Platinum Member
 
Join Date: Aug 2012
Posts: 2,484
Default

Quote:
Originally Posted by AnandThenMan View Post
I just don't think the phone makers give a hairy rats *** about a bench like this, and neither does the target buyer. The form factor, screen, "must have" appeal etc. etc. are much higher on the list. If you ask just about any smart phone owner, hey did you see the latest AnTuTu bench? Your next phone has to be the one that does the best on it, they will not understand a word you're saying let alone even begin to care.
How much would it cost AMD/ARM to develop their own compiler/benchmarks, and why didn't they do this before?
mrmt is online now   Reply With Quote
Old 07-11-2013, 08:27 AM   #36
sontin
Platinum Member
 
Join Date: Sep 2011
Posts: 2,232
Default

I guess nobody takes a benchmark from an IHV seriously...
sontin is online now   Reply With Quote
Old 07-11-2013, 08:50 AM   #37
simboss
Junior Member
 
Join Date: Jan 2013
Posts: 24
Default

Quote:
Originally Posted by mrmt View Post
How much would it cost to AMD/ARM to develop their own compiler/benchmarks and why didn't do this before?
ARM and AMD do develop their own compilers.

The trick (well, one of them) from AnTuTu and/or Intel is that they use a compiler that no one else can or wants to use on Android, whereas the other benchmarks use GCC, which is the officially supported compiler on Android for both ARM and x86.

As for benchmarks, how much credibility would you give to a benchmark developed by ARM, AMD or Intel?
If they were open source, they would still be vaguely reliable, but a closed-source benchmark would be pretty much useless.
I have even done one myself

Code:
if (my_arch) score = 100000000000LL;
else score = 1; /* let's not give them 0 */
simboss is offline   Reply With Quote
Old 07-11-2013, 08:58 AM   #38
AnandThenMan
Platinum Member
 
AnandThenMan's Avatar
 
Join Date: Nov 2004
Posts: 2,562
Default

lol
AnandThenMan is offline   Reply With Quote
Old 07-11-2013, 09:40 AM   #39
Genx87
Lifer
 
Join Date: Apr 2002
Location: Earth
Posts: 36,193
Default

Quote:
Originally Posted by Idontcare View Post
Very true. I don't see too many people complaining that successive video card driver releases basically optimize the drivers by way of profiling specific games and rolling out optimizations that are absolutely game and hardware specific.

In the end these video driver profiles improve gameplay, improve the performance of the hardware when processing the given software, and the consumer gets more for their money.

If that is what compilers are doing, and in doing it the performance optimizations are being captured and represented in benchmark scores, then that is a good thing.
Oh, you weren't around on the video card forum about a decade ago then? Any optimization was considered cheating. I agree with you: if the optimization can be used in real-world applications, then it is legit. But if it is a trick that only works within the benchmark, then it is crap and misrepresents the capabilities of the processor.
__________________
"Communism can be defined as the longest route from capitalism to capitalism."
"Capitalism is the unequal distribution of wealth. Socialism is the equal distribution of poverty"
"Because you can trust freedom when it is not in your hand. When everybody is fighting for their promised land"
Genx87 is online now   Reply With Quote
Old 07-11-2013, 09:53 AM   #40
AnandThenMan
Platinum Member
 
AnandThenMan's Avatar
 
Join Date: Nov 2004
Posts: 2,562
Default

Is there a more slippery slope than benchmarking? Probably not. The responsibility to keep things as fair as possible ends up falling on the review sites; they are the last line of defense against vendors trying to cheat their way to the top. And you can be sure that if the respective vendors can cheat, they will.
AnandThenMan is offline   Reply With Quote
Old 07-11-2013, 10:53 AM   #41
SlimFan
Member
 
Join Date: Jul 2013
Location: Austin, TX
Posts: 75
Default

I think you're right that these benchmarks shouldn't be involved in any phone maker's or phone buyer's decision process. But I think you're wrong in assuming they aren't. Review sites haven't really caught onto the worthlessness of these benchmarks. No, nobody says "did you see the latest AnTuTu score," but if a phone comes out with bad scores relative to others, people say "yeah, that phone sucks."

Why else is there a phone SOC arms race? Because nobody cares? How do you think OEMs measure who's "winning" the race?
SlimFan is offline   Reply With Quote
Old 07-11-2013, 11:56 AM   #42
Nothingness
Senior Member
 
Join Date: Jul 2013
Posts: 615
Default

Quote:
Originally Posted by SlimFan View Post
No, nobody says "did you see the latest AnTuTu score"
Did you miss the ABI Research report or the recent leak of Bay Trail AnTuTu score?
Nothingness is online now   Reply With Quote
Old 07-11-2013, 12:41 PM   #43
krumme
Platinum Member
 
Join Date: Oct 2009
Posts: 2,216
Default

Intel is acting like they did in the old days fighting AMD. The difference this time is they are not the big gorilla but a small player, and most phone sites are not dependent on info and a good relationship with Intel.

It's not like an Intel power engineer just happens to come by gsmarena or t3 with a voltmeter, SunSpider and Ohm's law. What do they care? They just need the newest pink fast and will eat the PR from Samsung or Apple.

The end result might even backfire, with some ARM-bent benchmarks dominating and not showing the potential of the new Atom core. After all, it's a little in Apple's and Samsung's interest to show the customers who have already bought their phones that it was the best choice, as nobody can tell the difference anyway.

Intel's best bet for pulling this AnTuTu stunt is that, after all, the OEMs have more important problems to fight. Like what looks more and more like a stagnating high-end phone market.
krumme is offline   Reply With Quote
Old 07-11-2013, 01:09 PM   #44
Kidster3001
Junior Member
 
Join Date: Jul 2013
Posts: 1
Default

While I agree that most benchmarks favor one platform or another, I think the OP is missing the point about compilers.

Compilers aren't there to convert your source code directly into machine code in the same sequence, so that the same steps run in the same order on all platforms. The whole idea behind compilers is to create the most efficient code possible for the target platform. If a given compiler can figure out how to make the work easier, for example when you are dealing with loops and constants, then good for the compiler.

If you create and initialize an array, would you prefer your compiler to use a loop to set the contents, or would you prefer it to call memset() in libc? The second compiler is going to generate code that performs much faster.

Why not use the compiler that generates the best code for each platform? If you don't then you're really just benchmarking the compiler.
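To make the loop-vs-memset point above concrete, here is a minimal sketch (my own illustration, not from the thread): a store-a-constant loop in the exact shape that optimizing compilers' loop-idiom recognition passes look for.

```c
#include <stddef.h>

/* A loop-based initializer. Modern optimizing compilers (GCC, Clang,
 * ICC) recognize this store-a-constant idiom and will typically replace
 * the whole loop with a single call to memset(buf, 0, n), which is
 * usually much faster for large n. */
static void clear_loop(unsigned char *buf, size_t n) {
    for (size_t i = 0; i < n; i++)
        buf[i] = 0;
}
```

Compiling with something like `gcc -O2 -S` and inspecting the assembly will often show a `memset` call in place of the loop, though the exact behavior depends on compiler version and flags.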
Kidster3001 is offline   Reply With Quote
Old 07-11-2013, 01:14 PM   #45
sefsefsefsef
Member
 
Join Date: Jun 2007
Posts: 196
Default

Quote:
Originally Posted by Kidster3001 View Post
Why not use the compiler that generates the best code for each platform? If you don't then you're really just benchmarking the compiler.
This is actually OP's complaint. AnTuTu on Intel is just benchmarking ICC, not AnTuTu.
sefsefsefsef is offline   Reply With Quote
Old 07-11-2013, 02:01 PM   #46
jfpoole
Member
 
Join Date: Jul 2013
Location: Toronto, Ontario
Posts: 26
Default

Quote:
Originally Posted by galego View Post
I bolded relevant parts. This Intel engineer claims that the Geekbench developer agrees. Therefore, it all of this is right, this would be a case where the benchmark needs to be adjusted/improved, not one where the benchmark has been deliberately cheated to cripple competence. Nobody cheats a benchmark and then admits that did. It is stupid.
John from Primate Labs here (the company behind Geekbench).

I wanted to provide some details about what's going on with the floating point workloads the Silvermont architect referenced. Two of the Geekbench 2 floating point workloads (Sharpen Image and Blur Image) have a fencepost error. This error causes the workloads to read uninitialized memory, which can contain denorms (depending on the platform). This causes a massive drop in performance, and isn't representative of real-world performance.

We only found out about this issue a couple of months ago. Given that Geekbench 3 will be out in August, and fixing the issue in Geekbench 2 would break the ability to compare Geekbench 2 scores, we made the call not to fix the issue in Geekbench 2.

If you've got any questions about this (or about anything Geekbench) please let me know and I'd be happy to answer them. My email address is john at primatelabs dot com if you'd prefer to get in touch that way.
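For readers unfamiliar with the bug class described above, here is a hedged sketch of a fencepost error (my own illustration, not Geekbench's actual code): an off-by-one loop bound that reads one element past the initialized region, where the stray bytes may decode as denormal floats that are far slower to process on some CPUs.

```c
#define W 8

/* Hypothetical image-row sum, NOT Geekbench's real code. The "<="
 * bound is the fencepost error: it reads src[W], one element past the
 * initialized region. That slot holds whatever bytes happen to be in
 * memory, which on some platforms decode as denormal floats, and
 * arithmetic on denorms can be orders of magnitude slower. */
static float sum_row_buggy(const float *src) {
    float acc = 0.0f;
    for (int i = 0; i <= W; i++)   /* bug: should be i < W */
        acc += src[i];
    return acc;
}

static float sum_row_fixed(const float *src) {
    float acc = 0.0f;
    for (int i = 0; i < W; i++)    /* correct bound */
        acc += src[i];
    return acc;
}
```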
jfpoole is offline   Reply With Quote
Old 07-11-2013, 02:01 PM   #47
KompuKare
Senior Member
 
Join Date: Jul 2009
Posts: 474
Default

Quote:
Originally Posted by sefsefsefsef View Post
This is actually OP's complaint. AnTuTu on Intel is just benchmarking ICC, not AnTuTu.
That, plus as Exophase said in the OP, the kind of optimization ICC has applied to AnTuTu is not something it could do without knowledge of the benchmark. In other words, Intel seems to be specifically targeting this benchmark to look good. Seems like a lot of trouble, but Intel does have past form for this kind of thing. So yes, the Intel compiler is genuinely a very good compiler, but a certain percentage of the Intel compiler budget seems to be set aside for this kind of shenanigans.

Quote:
Originally Posted by Exophase View Post
In this case I'm sure Intel could claim that they're performing a legitimate optimization. Frankly, I doubt it; this kind of optimization would be difficult to recognize and apply in generic code. It'd also be for little benefit, because I've never seen someone use code like this to set or clear huge sets of bits. That part is kind of the catch, because this optimization would make the code slower if the run lengths weren't sufficiently large. In nbench's case they are, but there's no way the compiler could have known that on its own.

What's more, this optimization wasn't present in ICC until a recent release. Somehow I don't think that they just now discovered it has general-purpose value. The more likely case is that they discovered they could manipulate AnTuTu's scores. Seems to coincide well with this third-party report appearing showing how amazing Atom's perf/W is - using nothing but AnTuTu. Or the leaked scores seen for CloverTrail+ and now BayTrail that are AnTuTu. Is this really a coincidence?
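To make the quoted optimization concrete, here is a sketch of the technique Exophase describes (my own illustration, not ICC's actual output): a bit-at-a-time loop versus a run-length version that handles the ragged edges bit by bit and fills the aligned middle bytes with memset. As the quote notes, the rewrite only pays off when runs are long, which is why a compiler can't safely assume it helps in generic code.

```c
#include <string.h>

/* Naive loop: set bits [lo, hi) in bitmap bm one at a time. */
static void set_bits_loop(unsigned char *bm, int lo, int hi) {
    for (int i = lo; i < hi; i++)
        bm[i >> 3] |= (unsigned char)(1u << (i & 7));
}

/* Run-length version: set unaligned leading/trailing bits one at a
 * time, then memset the whole bytes in the middle. Faster for long
 * runs, slower for short ones. */
static void set_bits_runs(unsigned char *bm, int lo, int hi) {
    while (lo < hi && (lo & 7) != 0) {
        bm[lo >> 3] |= (unsigned char)(1u << (lo & 7));
        lo++;
    }
    while (hi > lo && (hi & 7) != 0) {
        hi--;
        bm[hi >> 3] |= (unsigned char)(1u << (hi & 7));
    }
    if (hi > lo)
        memset(bm + (lo >> 3), 0xFF, (size_t)(hi - lo) >> 3);
}
```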
__________________
_______________
|| i5-3570K at stock || Asus P8Z77-V LX || 8GB DDR3 || 120GB Sandisk Extreme
|| Gigabyte HD7950 || Corsair CX400 (Seasonic) || Nec 2170NX (1600x1200)
KompuKare is online now   Reply With Quote
Old 07-11-2013, 02:05 PM   #48
TuxDave
Lifer
 
TuxDave's Avatar
 
Join Date: Oct 2002
Posts: 10,461
Default

Quote:
Originally Posted by jfpoole View Post
John from Primate Labs here (the company behind Geekbench).

I wanted to provide some details about what's going on with the floating point workloads the Silvermont architect referenced. Two of the Geekbench 2 floating point workloads (Sharpen Image and Blur Image) have a fencepost error. This error causes the workloads to read uninitialized memory, which can contain denorms (depending on the platform). This causes a massive drop in performance, and isn't representative of real-world performance.

We only found out about this issue a couple of months ago. Given that Geekbench 3 will be out in August, and fixing the issue in Geekbench 2 would break the ability to compare Geekbench 2 scores, we made the call not to fix the issue in Geekbench 2.

If you've got any questions about this (or about anything Geekbench) please let me know and I'd be happy to answer them. My email address is john at primatelabs dot com if you'd prefer to get in touch that way.
Nice of you to chime in. Thanks for your comments.
__________________
post count = post count + 0.999.....
(\__/)
(='.'=)This is Bunny. Copy and paste bunny into your
(")_(")signature to help him gain world domination.
TuxDave is offline   Reply With Quote
Old 07-11-2013, 02:07 PM   #49
Nothingness
Senior Member
 
Join Date: Jul 2013
Posts: 615
Default

Quote:
Originally Posted by Kidster3001 View Post
If you create and initialize an array would you prefer your compiler to use a loop to set the contents or would you prefer your compiler to call memset() in libc? The second compiler here is going to generate code that performs much faster.
You'll be happy then to know that recent versions of gcc do just that to one part of the Stream benchmark: the loop is changed into a call to memcpy.

The problem here is that the code that icc transforms into a form of memset certainly doesn't look like a memset. The transformation is really impressive and is probably useless outside of that particular loop in that particular benchmark.

Exophase also mentions the optimization in icc only happened recently and conveniently just before AnTuTu v3 release. Call me paranoid if you want...

My experience with icc is the same as that of many people I know: if your code can't be vectorized, icc isn't significantly faster than gcc, if at all. Ah well, except if your code looks like a benchmark for which Intel has already tweaked icc
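The Stream transformation mentioned above is easy to picture. Here is a sketch (my own, assuming the well-known Stream "Copy" kernel, not gcc's actual output): the loop below is in exactly the shape gcc's loop-idiom recognition can rewrite as a memcpy call, so the "loop" benchmark ends up timing libc's memcpy instead.

```c
#include <stddef.h>

/* Stream-style "Copy" kernel: c[i] = a[i]. With optimization enabled
 * (e.g. -O3 / -ftree-loop-distribute-patterns), gcc can replace this
 * whole loop with a single memcpy(c, a, n * sizeof(double)) call. */
static void stream_copy(double *c, const double *a, size_t n) {
    for (size_t i = 0; i < n; i++)
        c[i] = a[i];
}
```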
Nothingness is online now   Reply With Quote
Old 07-11-2013, 02:10 PM   #50
StrangerGuy
Diamond Member
 
StrangerGuy's Avatar
 
Join Date: May 2004
Posts: 7,121
Default

Quote:
Originally Posted by krumme View Post
Intel is acting like they did in the old days fighting amd. The difference this time is they are not the big gorilla but a small player, and most sites for phones is not dependant on info and good relationship with Intel.

Its not like a Intel power engineer just happen to come by gsmarena or t3 with a voltmeter, sunspider and ohms law. What do they care? They just need the newest pink fast and will eat the pr from Samsung or Apple.

The end result might even backfire and the result be some arm bend benchmarks dominating not showing the potential of the new Atom core. After all its a little in apple and samsung interest to show a few of the customers that have already bougt their phones, it was the best choice, as nobody can tell the difference anyway.

Intels best bet to pull this antutu is after all the oem have more important problems to fight. Like what looks more and more like a stagnating high end phone market.
I'm sure ARM licensees are going to bend over their asses to Intel over doctored benchmarks so they can join the line of Intel slaves like Asus to suffer stuff like the RDRAM fiasco.
StrangerGuy is offline   Reply With Quote