Why Intel is losing ground...

May 30, 2005
142
0
0
I originally posted this in the Tom's benchmark thread (you know which one), but it seems to deserve its own thread here. My original intent was to point out the problems in Intel's Netburst design, problems that specifically result in a chip that runs hotter and slower than it should. I've added a little more to make it easier to follow.



The funny thing about Intel's hyperthreading is that, with all the other changes they made to the NetBurst architecture, they ended up negating most of hyperthreading's advantages. It would be far more powerful if the chip had been designed to work out the potential chip-level bottlenecks. Instead, Intel's long pipelines, combined with branch mispredictions and instruction-prefetch mechanisms that produced mis-speculated results at the hardware level, forced them to add a "replay" mechanism whose sole job is to re-execute the affected instructions so they produce correct results.
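As I understand replay, the general idea is this (a rough sketch only, not Netburst's actual implementation; the cycle counts and loop length are made up for illustration): the scheduler issues a load's dependent ops assuming the load hits L1, and when it misses, those dependents have already run with a bogus value and must go around a replay loop and execute again.

```python
# Toy sketch of a replay-style scheduler (general idea only; Netburst's real
# mechanism is more involved, and all cycle counts here are invented).
# Dependents of a load are issued speculatively on the assumption that the
# load hits L1; if it misses, they re-execute.

def issue_with_replay(load_hits_l1, n_dependents, replay_loop=7):
    """Return total cycles spent on a load plus its dependent ops."""
    cycles = 1 + n_dependents          # optimistic back-to-back issue
    while not load_hits_l1:
        # Dependents executed with a bogus value; send them around the
        # replay loop and try again when the data finally arrives.
        cycles += replay_loop + n_dependents
        load_hits_l1 = True            # assume one replay pass suffices here
    return cycles

print(issue_with_replay(True, 3))    # hit:  1 + 3 = 4 cycles
print(issue_with_replay(False, 3))   # miss: 4 + 7 + 3 = 14 cycles
```

The point of the sketch is just that the replay hardware burns extra cycles and execution slots doing the same work twice, which is exactly the heat-for-nothing problem described above.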

It seems to me that Intel's philosophy with Netburst was to scale to the highest clock speeds possible, and to use hyperthreading, branch prediction, instruction prefetch and the like to improve the efficiency of the processor; that is, to make it do more work while under "full" load. The problem is, their plans completely backfired, and that is what forced Intel to include correcting mechanisms for branch misprediction and erroneous instruction prefetch; correcting mechanisms that increase die size, add heat and slow the processor down. As far as I can tell, there's nothing that can fix or speed up a slow hyperthread except an architecture redesign. But that's not all: not only do Intel's processors deal with these issues, but the disadvantages of the correcting mechanisms for branches and prefetching stack up to make hyperthreading even less efficient than it would be otherwise.

I don't know for sure, but I hypothesize that Intel has been stuck with Netburst's problematic architecture; it wouldn't do to spend billions redesigning it right after it was developed, and doing so could also cost future sales through bad publicity. So the natural thing to do was keep quiet about the failures of the design.

So it's no wonder that Intel is abandoning Netburst. The way I see it, Intel had a very good idea with Netburst, but it didn't match reality. Their simulations were likely faulty if they "verified" that Netburst would improve the Pentium line. The ironic thing is, they would likely be much more competitive if they had just stayed with the architecture they used in the Pentium 3; remember the early benchies where the slower P3 beat the then-new, faster P4.

Then there's Intel's Itanium, where Intel cleverly thought to market a very expensive chip that would cost its clients millions in code rewriting and ended up giving marginal improvements. All these moves did was give AMD the chance it needed to expand into the market, and AMD used every bit of that chance it could. Case in point: 32/64-bit Opterons for servers, with lower cooling costs and no millions spent rewriting code. Then the Linux community seemed to develop a fondness for AMD when they developed their 64-bit OSes last year. But we should remember that AMD moved to 64-bit computing in 2003 specifically to gain marketshare in the lucrative server market; what's happening now in the personal-computer realm is the result of that push. AMD has also had multi-core processors in mind for some time now, which likely wouldn't be here if Intel were our only player. They are also

This brings me to the reason why Intel is losing ground, technologically and in the market, while AMD is gaining ground in both. Intel has lost sight of what made them successful: giving the software community what it wanted. Instead of serving the market's desires, Intel is trying to control them by pushing things that won't really add much benefit while at the same time making them more inconvenient to adopt. Case in point: Itanium, a new motherboard for Intel's dual-core, constantly changing RAM support, etc. AMD is becoming more successful because they adopted that very approach to the market, offering more of what people want while making upgrading relatively painless. It is true that AMD used to be the x86 clone, but they've finally broken ground to become truly innovative in their technology and perceptive to market needs and desires. In the meantime, Intel is chasing perfectionist pipe dreams that turn out not to be. Unless Intel gets its act together, AMD's technology is going to be superior for some time to come, and that technology is also going to be spot-on with what people want. This is true for 32- to 64-bit computing, for the move to multi-core processors, for support of memory standards, and for not forcing developers to waste millions writing code that works only on your own proprietary processor for only marginal improvements.

I suspect that Intel's redesigned architecture will feature some sort of "Hyperthreading 2" that functions more like hyperthreading was supposed to, with enough L1 and L2 cache per hyperthread to reduce thrashing and prefetch errors, increased bandwidth and lower latencies to cache, and hopefully better power management. Of course, AMD is going to continue doing more of what's making them successful, but as it stands Intel needs to get back to basics instead of pursuing unrealistic dreams for its technology.
 

Dothan

Banned
Jun 5, 2005
90
0
0
intel is gaining ground !!!

they are gaining more and more each day !!!

now intel has taken over apple share, another 3-5% of market !!!

intel have big items in pipeline for future too !!! like dothan x 2, 45nm process tech, etc...
 

clarkey01

Diamond Member
Feb 4, 2004
3,419
1
0
Originally posted by: Dothan
intel is gaining ground !!!

they are gaining more and more each day !!!

now intel has taken over apple share, another 3-5% of market !!!

intel have big items in pipeline for future too !!! like dothan x 2, 45nm process tech, etc...

2%.

Apple sold about 1.5 million Macs last quarter, or was that for the year? I'll find you the link. That's not a lot.

Instead of saying "oh, the X2 is going to die once Yonah is out in Q1 2006," why don't you compare present-day silicon (X2 vs. Pentium D)? Do you really think AMD's best offering by Q1 2006 is going to be an X2 @ 2.4GHz? AMD has quad CPUs for Q1, and their new Fab is on track and going through testing as we speak. 300mm wafers were shipped to the place last month.
 

clarkey01

Diamond Member
Feb 4, 2004
3,419
1
0
Originally posted by: Dothan
intel is gaining ground !!!

they are gaining more and more each day !!!

now intel has taken over apple share, another 3-5% of market !!!

intel have big items in pipeline for future too !!! like dothan x 2, 45nm process tech, etc...

Your numbers are way off.

"Last quarter, according to some reports, Apple sold 1.07 million computers, which is not really a lot. Still, it could be important for IBM to provide Apple with PowerPC chips in order to leverage the influence of this architecture. For instance, all three next-generation game consoles from Nintendo, Microsoft and Sony will be powered by chips that use PowerPC architecture. Furthermore, IBM offers consumer electronics designers to design PowerPC derivative processors for their needs."


http://www.xbitlabs.com/news/cpu/display/20050609121500.html



 

Duvie

Elite Member
Feb 5, 2001
16,215
0
71
Originally posted by: Dothan
intel is gaining ground !!!

they are gaining more and more each day !!!

now intel has taken over apple share, another 3-5% of market !!!

intel have big items in pipeline for future too !!! like dothan x 2, 45nm process tech, etc...



What world do you live in?? Deal with current parts, please, not stuff in the future..

Now you are talking about 45nm parts...LOL!!!! Oh Intel employee please go away!!!


If you even backed up a third of what you say, I would respect you about 20%... That is why you are a troll... You state things that cannot even be backed up... start producing links to back up what you say, troll, or it is just wasted bandwidth...
 

Sahakiel

Golden Member
Oct 19, 2001
1,746
0
86
Originally posted by: Technomancer
I originally posted this in the Tom's benchmark thread (you know which one), but it seems to deserve its own thread here. My original intent was to point out the problems in Intel's Netburst design, problems that specifically result in a chip that runs hotter and slower than it should. I've added a little more to make it easier to follow.
The problem is simple: reliance on process technology. If you want to go even further, you can blame the industry as a whole for its reliance on linearization. Going even further, blame the field of physics for promoting linear approximations. Lastly, the ultimate problem lies with humans and their inability to grasp intricate concepts beyond "what goes up must come down."
Intel bet on the foundry. AMD bet on cost.


The funny thing about Intel's hyperthreading is that, with all the other changes they made to the NetBurst architecture, they ended up negating most of hyperthreading's advantages. It would be far more powerful if the chip had been designed to work out the potential chip-level bottlenecks. Instead, Intel's long pipelines, combined with branch mispredictions and instruction-prefetch mechanisms that produced mis-speculated results at the hardware level, forced them to add a "replay" mechanism whose sole job is to re-execute the affected instructions so they produce correct results.
Some simple examples:
The sole job of the processor scheduler is to force "correctness" in the first place. Taking x86 as an example, the ISA is in-order, meaning the order of instructions that goes in determines program flow. You cannot execute newer instructions first while waiting for older instructions that require more time even if they use different operands. Nowadays, we have a whole bunch of tricks to speed up execution including OOOE, but from a software viewpoint, the ISA is still in-order.

Pipelining isn't automatically correct 100% of the time. The first designs required stalls to maintain correctness. Newer designs use speculative execution and stall much less frequently.

OOOE helped reduce stalling and introduced speculative execution. Even though the next instruction may be independent, the previous instruction may raise an exception or branch. Therefore, speculative instructions are not "committed" until the processor knows all previous instructions are "correct."
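That commit rule can be sketched in a few lines (a hypothetical instruction format, not any real ISA; the "reorder buffer" here is just a list in program order):

```python
# Minimal sketch of in-order commit over speculative execution: results are
# buffered in a reorder buffer (ROB) and only become architecturally visible
# in program order. Instruction format is invented for illustration.

def run(instrs):
    """Each instr is (name, result, faults). Execution may have happened in
    any order; commit walks the ROB head-to-tail and stops at the first
    fault, squashing everything younger -- which is exactly why speculative
    results must not be "committed" early."""
    committed = []
    for name, result, faults in instrs:   # ROB, in program order
        if faults:
            # An older instruction faulted: squash all younger, speculative work.
            break
        committed.append((name, result))
    return committed

rob = [("add", 7, False), ("load", None, True), ("mul", 42, False)]
print(run(rob))   # [('add', 7)] -- the mul ran speculatively but is squashed
```

The `mul` may well have finished executing before the `load` even faulted; the ROB is what keeps its result invisible to software.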

Branch prediction increased speculative execution. Tournament predictors make multiple predictions using different schemes, then choose the best for the current instruction flow. IIRC, the Alpha 21264 made good use of a tournament predictor.
Long pipelines (more than about 5 stages) with very accurate predictors introduced a new kind of branch prediction: you couple a simple single-cycle predictor with a slower, multi-cycle one. The first predictor is very fast but inaccurate (usually around 80-90%). The second predictor is a lot more accurate but slower, requiring multiple cycles. If the second predictor produces a different result than the faster one, the first prediction is overridden and the processor flushes and restarts that portion of the code.
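Here's a rough sketch of that override scheme (illustrative penalties, not any real chip's numbers; an always-taken guess and a 2-bit counter stand in for the fast and slow predictors):

```python
# Sketch of an "overriding" predictor pair: a fast 1-cycle guess, backed by a
# slower predictor that can override it. The always-taken fast guess and the
# 2-bit counter are illustrative stand-ins, and the penalties are made up.

class TwoBitCounter:
    """Classic saturating counter: predicts taken when state >= 2."""
    def __init__(self):
        self.state = 2
    def predict(self):
        return self.state >= 2
    def update(self, taken):
        self.state = min(3, self.state + 1) if taken else max(0, self.state - 1)

def simulate(outcomes, override_penalty=3, mispredict_penalty=20):
    fast_guess = True            # trivial static predictor: always taken
    slow = TwoBitCounter()
    cycles_lost = 0
    for taken in outcomes:
        final = slow.predict()   # arrives a few cycles after fast_guess
        if final != fast_guess:
            cycles_lost += override_penalty    # partial flush: restart the front end
        if final != taken:
            cycles_lost += mispredict_penalty  # full flush when the branch resolves
        slow.update(taken)
    return cycles_lost

# A loop branch: taken 7 times, then falls through once.
print(simulate([True] * 7 + [False]))   # -> 20 (only the final fall-through misses)
```

The override costs a few cycles but saves the much larger full-pipeline flush, which is why the scheme pays off as pipelines get longer.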


It seems to me that Intel's philosophy with Netburst was to scale to the highest clock speeds possible, and to use hyperthreading, branch prediction, instruction prefetch and the like to improve the efficiency of the processor; that is, to make it do more work while under "full" load.
Well, most would consider a processor that idles a lot during heavy load to be a very bad design.

The problem is, their plans completely backfired, and that is what forced Intel to include correcting mechanisms for branch misprediction and erroneous instruction prefetch; correcting mechanisms that increase die size, add heat and slow the processor down. As far as I can tell, there's nothing that can fix or speed up a slow hyperthread except an architecture redesign. But that's not all: not only do Intel's processors deal with these issues, but the disadvantages of the correcting mechanisms for branches and prefetching stack up to make hyperthreading even less efficient than it would be otherwise.
The effects of hyperthreading are debatable. That's why Intel has marketing teams. However, correction logic is present in just about every modern processor design. I'm sure some ASICs don't need any, but any processor you buy for your desktop PC uses logic to correct speculative execution. The extra logic does add heat and die area, and it does slow down execution.


I don't know for sure, but I hypothesize that Intel has been stuck with Netburst's problematic architecture; it wouldn't do to spend billions redesigning it right after it was developed, and doing so could also cost future sales through bad publicity. So the natural thing to do was keep quiet about the failures of the design.
That would be the smart thing to do from a business standpoint.

So it's no wonder that Intel is abandoning Netburst. The way I see it, Intel had a very good idea with Netburst, but it didn't match reality. Their simulations were likely faulty if they "verified" that Netburst would improve the Pentium line. The ironic thing is, they would likely be much more competitive if they had just stayed with the architecture they used in the Pentium 3; remember the early benchies where the slower P3 beat the then-new, faster P4.
"Ironically," the P3 architecture couldn't beat the original Pentium architecture until a die shrink and a refresh. Clock for clock, Pentium Pros could not keep up, especially with 16-bit code. Anyone who only needed two processors used Pentiums if they could. Pentium IIs performed faster thanks to a die shrink and a new instruction decoder that improved 16-bit execution at the expense of 32-bit execution.
Moral of the story? If you double the pipeline, you'll lose clock-for-clock performance. I doubt the simulations were "faulty" so much as "incomplete."
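That moral is easy to sanity-check with napkin math (all numbers here are illustrative, not measurements of any real chip):

```python
# Napkin math for "double the pipeline, lose clock-for-clock performance."
# Deeper pipelines raise the clock but also deepen every misprediction flush;
# the numbers below are invented for illustration.

def net_speedup(clock_gain, branch_freq, mispredict_rate, old_flush, new_flush):
    """Frequency buys you `clock_gain`; deeper flushes tax every branch."""
    old_cpi = 1 + branch_freq * mispredict_rate * old_flush
    new_cpi = 1 + branch_freq * mispredict_rate * new_flush
    return clock_gain * old_cpi / new_cpi

# Doubling stages: say ~1.6x clock (not 2x -- latch overhead) but 2x flush depth,
# with ~20% branches and a 5% miss rate.
print(round(net_speedup(1.6, 0.20, 0.05, old_flush=10, new_flush=20), 2))   # -> 1.47
```

So the deep pipeline can still win overall, just by much less than the clock gain suggests, and the gap widens as prediction accuracy drops.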


Then there's Intel's Itanium, where Intel cleverly thought to market a very expensive chip that would cost its clients millions in code rewriting and ended up giving marginal improvements. All these moves did was give AMD the chance it needed to expand into the market, and AMD used every bit of that chance it could. Case in point: 32/64-bit Opterons for servers, with lower cooling costs and no millions spent rewriting code. Then the Linux community seemed to develop a fondness for AMD when they developed their 64-bit OSes last year. But we should remember that AMD moved to 64-bit computing in 2003 specifically to gain marketshare in the lucrative server market; what's happening now in the personal-computer realm is the result of that push. AMD has also had multi-core processors in mind for some time now, which likely wouldn't be here if Intel were our only player. They are also
I think you accidentally cut off some text there. At any rate...
Itanium is the result of the old RISC vs. CISC wars. As an ISA, x86 epitomizes just about everything anti-RISC. RISC succeeded by steamrolling old processor philosophies. For example, one of the most successful early minicomputer architectures was VAX. VAX succumbed to early RISC a long time ago for one reason: it emphasized flexibility in pursuit of speed, like x86 is now doing.
Itanium attempted to lay a new foundation for mainstream computing. The decision to push server-side first stemmed from the idea that initial costs would be astronomical. Intel would use the high budgets of the server industry to lay the foundation for wider adoption with mature tools and compilers. That was the philosophy behind Itanium.
AMD had neither the budget nor the influence to attempt something similar. I'm sure the former Alpha members would have loved to resurrect their beloved lineage. Therefore, AMD had little choice but to focus on improving x86, and that included the extension to 64-bit. Dual core is likely the result of very little progress in ILP research. We've been squeezing blood from stone for some time with few revolutionary changes. Multi-core is a concept that's been around longer than the Opteron; it's just that we've had solutions with better price/performance ratios in the meantime.


This brings me to the reason why Intel is losing ground, technologically and in the market, while AMD is gaining ground in both. Intel has lost sight of what made them successful: giving the software community what it wanted.
I would disagree, but more due to philosophy. I think the software community has lost sight of the user and Intel hasn't really helped or detracted because it's not even connected at this point.

Instead of serving the market's desires, Intel is trying to control them by pushing things that won't really add much benefit while at the same time making them more inconvenient to adopt. Case in point: Itanium, a new motherboard for Intel's dual-core, constantly changing RAM support, etc.
I'm sorry, but am I safe to assume you like the fact that AMD has three sockets for three market segments, all of which will need to be replaced when AMD follows Intel and moves to DDR-II? Or perhaps you're implying that AMD will stick with DDR for quite a while? I wonder why AMD doesn't release dual-core for S754. I wonder why AMD suddenly decided to push validation of DDR500. I find it ironic that AMD has much tighter control over what memory its processors use than Intel does.
As businesses, AMD and Intel will utilize every scrap of influence they have to dictate market forces or risk losing control of the market. When you lose control of the market, you no longer dictate the future, you react to it. Reactive business policy doesn't help your company especially when development cycles have minimum latencies of 5 years.

AMD is becoming more successful because they adopted that very approach to the market, offering more of what people want while making upgrading relatively painless.
Excuse me, but users hate upgrades and tend to be very short-sighted with money (it's much easier to market $1/day than $30/month; if you're good, you can get away with $1.50/day as opposed to $40/month). The fact that MS is nowhere near ready with a mainstream 64-bit OS pretty much nullifies the best marketing tool AMD has. Except, of course, that 64 > 32. AMD is gaining ground with consumers because PCs are evolving into appliances. Nobody really knows how efficient their microwave emitter is, but ask them how many watts it uses...


It is true that AMD used to be the x86 clone, but they've finally broken ground to become truly innovative in their technology and perceptive to market needs and desires.
Innovative is debatable, and perceptive is like calling a blind man who finds a wallet on the ground "perceptive." AMD didn't sell lower-cost processors because of market needs; they sold them cheaper because they had to. The sub-$1000 PC permanently changed the way users see computers. Calling AMD perceptive is like driving slow on the highway because you're low on gas and thinking you're psychic because you drove through the speed trap under the limit.
AMD designed K8 with cost in mind. The onboard memory controller improves performance while reducing system cost. With higher clockspeeds, the improvement would decrease. HyperTransport is serial to reduce system cost. The pipeline mirrors a significant portion of the K7 which reduces design costs. Dual core provides improved performance for servers and workstations at a lower system cost. The fact that process technology has stifled clock speeds made K8 look better after the fact by pushing server-level requirements down into the mainstream a lot sooner than expected.

In the meantime, Intel's chasing perfectionist pipedreams that turn out not to be. Unless Intel gets their act together, AMD technology is going to be superior for some time to come, and that technology is also going to be spot-on with what people want. This is true for 32 to 64-bit computing, true for moving to multi-core processors, true for support of memory standards, true for not forcing developers to waste millions in writing code that can work only on your own proprietary processor with only marginal improvements.
Intel will lose marketshare and influence because AMD leads the performance war, much like in the Athlon/P3 era. However, AMD is in no position to influence what people want. People want a cheap, easy-to-use computer that's reliable. So far, we're 1 for 3. A new ISA could help improve the prospects for 2 and 3 by improving the abilities of software, but nobody wants to pay for it. That's why Intel failed: they underestimated the short-sightedness of people who don't understand the technology.


I suspect that Intel's redesigned architecture will feature some sort of "Hyperthreading 2" that functions more like hyperthreading was supposed to, with enough L1 and L2 cache per hyperthread to reduce thrashing and prefetch errors, increased bandwidth and lower latencies to cache, and hopefully better power management. Of course, AMD is going to continue doing more of what's making them successful, but as it stands Intel needs to get back to basics instead of pursuing unrealistic dreams for its technology.

I could be wrong, but from what I understand, simply increasing cache size won't help hyperthreading to a significant degree. I think the problem lies with design.
Increased bandwidth is what got Intel in trouble with marketing in the first place. Pentium 4 boasted much higher bandwidth numbers, but offered little improvement at the time. Not only did those specs apply to the far off future, but they also depended on a shift in computing that has proven very slow and difficult to develop.
Lower cache latency is an architectural problem heavily intertwined with the rest of the processor. Short of a revolutionary design change, there won't be much difference without a different processor.
AMD's recent success is largely accidental. It was getting down to basics that killed Intel with Netburst; otherwise, the Pentium 4 would be past 5GHz and the K8 would be past 3GHz.
 

fatty4ksu

Golden Member
Mar 5, 2005
1,282
0
0
Originally posted by: Dothan
intel is gaining ground !!!

they are gaining more and more each day !!!

now intel has taken over apple share, another 3-5% of market !!!

intel have big items in pipeline for future too !!! like dothan x 2, 45nm process tech, etc...

:thumbsup:
 

Rock Hydra

Diamond Member
Dec 13, 2004
6,466
1
0
Originally posted by: fatty4ksu
Originally posted by: Dothan
intel is gaining ground !!!

they are gaining more and more each day !!!

now intel has taken over apple share, another 3-5% of market !!!

intel have big items in pipeline for future too !!! like dothan x 2, 45nm process tech, etc...

:thumbsup:

:disgust:
 

theMan

Diamond Member
Mar 17, 2005
4,386
0
0
Originally posted by: fatty4ksu
Originally posted by: Dothan
intel is gaining ground !!!

they are gaining more and more each day !!!

now intel has taken over apple share, another 3-5% of market !!!

intel have big items in pipeline for future too !!! like dothan x 2, 45nm process tech, etc...

:thumbsup:

woah! the trolls are teaming up!!!
 

RichUK

Lifer
Feb 14, 2005
10,341
678
126
Originally posted by: theman
Originally posted by: fatty4ksu
Originally posted by: Dothan
intel is gaining ground !!!

they are gaining more and more each day !!!

now intel has taken over apple share, another 3-5% of market !!!

intel have big items in pipeline for future too !!! like dothan x 2, 45nm process tech, etc...

:thumbsup:

woah! the trolls are teaming up!!!

who wants to start a tag team, WWF style .. us against the trolls :D

 
Jun 7, 2005
58
0
0
Whoever has the best technology has little to do with who sells the most product :) How often do consumers buy the best...They buy HP or Dell... Get the OEM's on your side, your market share will increase :D
 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
The ironic thing is, they would likely be much more competitive if they had just stayed with the architecture they used in the Pentium 3; remember the early benchies where the slower P3 beat the then-new, faster P4.

You really think so? The P3 may have beaten the early P4s, but the Athlons crushed the late P3s and early P4s (well, Athlons with DDR memory, anyhow).
The PM is based on the P3 architecture, which actually dates back to the Pentium Pro anyhow. Interesting how it turned out better for Intel to continue pursuing the ideas that made x86 CPUs competitive in the first place, rather than to try to make x86 CPUs more like other architectures.
The P3 couldn't have been instantly reworked into the PM, though. If Intel had stayed with the P3, they would have been in just as bad a position as Apple was with the G4: a horribly outdated, underpowered processor that can't clock as high as the competition, supports outdated memory technologies, and doesn't perform as well MHz for MHz. However, if the P4 had been designed as a direct successor to the P3, that might have worked out well; it would have basically been the Pentium M.
 

BitByBit

Senior member
Jan 2, 2005
474
2
81
I believe Intel's reasoning behind Netburst's architecture, if memory serves, was that Intel saw the future as high bandwidth and high clock speed, with more emphasis on floating-point performance and less on integer performance.
At the time, Intel no doubt thought increases in pipeline depth, with corresponding increases in clock speed, were an easier route to higher performance than trying to extract more ILP from x86 code.
I don't subscribe to the theory that Intel was trying to fool the public into believing performance is determined by clock speed alone, and that the P4 was therefore far superior to the competition. Intel's PR department may well have simply capitalised on the decisions made by the P4's designers, which seems to me the more likely turn of events.
The P4's biggest problem is, in my view, its single decoder.
As I understand it, the trace cache is only useful if the ops the schedulers require are in it; if they aren't, then the instructions must be fetched from memory and squeezed through that single decoder, which obviously spells trouble.
The P4's limited decode ability brings me to another point: Hyperthreading.
In order to fully benefit from the core's ability to execute instructions from two threads simultaneously, it must, of course, be able to get them into the schedulers rapidly enough. Having a single decoder means the P4 can only decode a single instruction per clock, greatly limiting its ability to do what HT is supposed to do.
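That decode-bandwidth point can be illustrated with a trivial sharing model (hypothetical numbers; as noted above, the trace cache often hides the single decoder, so treat this as a worst-case sketch):

```python
# Toy model of SMT throughput limited by decode width. Numbers are
# hypothetical; this is the decoder-bound worst case, not a P4 measurement.

def smt_throughput(decode_width, demand_a, demand_b):
    """Two threads share the decoders; each wants some number of
    instructions decoded per cycle. Returns per-thread decode rates."""
    total_demand = demand_a + demand_b
    if total_demand <= decode_width:
        return demand_a, demand_b          # decoders keep both threads fed
    # Oversubscribed: split the width in proportion to demand.
    share = decode_width / total_demand
    return demand_a * share, demand_b * share

print(smt_throughput(1, 1.0, 1.0))   # single decoder: each thread gets 0.5/cycle
print(smt_throughput(3, 1.0, 1.0))   # 3-wide: both threads fully fed
```

With a single shared decoder, each thread decodes at half speed whenever both miss the trace cache, so the second thread can actively slow the first, which matches the benchmarks that show HT losing performance.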
A second decoder should have been introduced with Prescott.
The Athlon has a far stronger case for SMT than the P4, not least because of its decode ability.
It also has a wider execution core, which is no doubt difficult to keep full given the difficulty of extracting ILP from x86 code, which strengthens its case again.
The Netburst architecture being 'narrow and deep' just isn't suited to SMT, in my opinion.
Certain benchmarks show a hit in performance with HT enabled.
Its designers should have focused on increasing its single-thread performance through an additional decoder, and perhaps through the re-introduction of some of the features that were omitted from the P4 - the barrel shifter, etc.
Overall, the P4 is not a bad performer, but it could have been better.
Needless to say, Intel has learned lessons here, and we will see that reflected in Conroe.
Being a 'wide' architecture - wider than the Athlon - it will be able to benefit from HT far more than the P4 has, and I wouldn't be surprised if the gains from enabling HT on Conroe were over 40%. Let's just hope Intel doesn't cripple it with too few decoders.



 

exar333

Diamond Member
Feb 7, 2004
8,518
8
91
Fact is, AMD has made some steps, but not nearly the strides you say they have made...market share has slowly advanced and still the average user has no idea who or what they are...they need PR before they can really challenge anything...plus, they don't make any money, while Intel makes it hand over fist...

 

Xed

Golden Member
Nov 15, 2003
1,452
0
71
Pretty sure I read that AMD's CPU department made money in their last report, but the flash business was killing them.

But yes I agree they need to advertise, at least some.
 

Duvie

Elite Member
Feb 5, 2001
16,215
0
71
Originally posted by: RichUK
Originally posted by: theman
Originally posted by: fatty4ksu
Originally posted by: Dothan
intel is gaining ground !!!

they are gaining more and more each day !!!

now intel has taken over apple share, another 3-5% of market !!!

intel have big items in pipeline for future too !!! like dothan x 2, 45nm process tech, etc...

:thumbsup:

woah! the trolls are teaming up!!!

who wants to start a tag team, WWF style .. us against the trolls :D


Nahhh... I will let them TAG each other, if you know what I mean... I am sure they can touch each other from their cubicles!!!
 

Fox5

Diamond Member
Jan 31, 2005
5,957
7
81
Having a single decoder means the P4 can only decode a single instruction per clock, greatly limiting its ability to do what HT is supposed to do.
A second decoder should have been introduced with Prescott.

Maybe a 2nd decoder would have greatly increased heat or lowered the maximum potential clock?

BTW, I thought Conroe was exactly the same width as the Athlon.

Fact is, AMD has made some steps, but not nearly the strides you say they have made...market share has slowly advanced and still the average user has no idea who or what they are...they need PR before they can really challenge anything...plus, they don't make any money, while Intel makes it hand over fist...

AMD gained a lot of ground with the original Athlon versus the P3 and early P4s, but then lost a lot with the Athlon XP; only now, with the Athlon 64, are they back up to the pre-good-P4 level.

Pretty sure I read that AMD's CPU department made money in their last report, but the flash business was killing them.

The flash business is very strange for AMD; it's very low margin, yet AMD has spent all this work trying to make their chips high margin. Maybe AMD should start making flash-based products, like Apple's iPod, which can charge a higher price because of a certain uniqueness. Come on AMD, let's see some Athlon XP + flash-based PDAs or low-end PCs or something!