R Read - "Our semi-custom APUs" = Xbox 720 + PS4?


zaydq

Senior member
Jul 8, 2012
782
0
0
Sony has already stated they are going to push 4K with the PS4. 1080p/3D is likely to be their baseline target; 'normal' 1080p will be the low-resolution option. MS hasn't come out and said that they were going to match Sony explicitly, but I would be surprised if they let Sony clearly blow them away on this front.

Sounds like Sony is shovelling BS to get some hype stirring. 4K TVs are going to be exotic merchandise for quite some time... game devs aren't going to waste time building a game around 4K television resolutions when ~2% of people are going to own them. On top of all that, you'd need an elitist gaming computer to even run 4K res at playable framerates, let alone a plastic Sony box being able to do it... sorry for railing on ya, but you don't honestly believe Sony when they say that, do you?
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
The 7770 doesn't break 80W even under FurMark; strip the memory and you won't break 100W even with an 18-35W CPU on the die.

Add a 35W CPU and you are easily breaking 100 watts; add a 35W CPU from AMD and you are getting an absolute dog, too.
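
As a back-of-the-envelope check on that budget (a rough sketch only; the GDDR5 saving and the CPU TDP range are figures floated in this thread, not measurements):

Code:
# Back-of-the-envelope APU power budget using figures floated in this thread.
GPU_7770_FURMARK_W = 80      # claimed worst case for a desktop HD 7770 board
GDDR5_SAVING_W = (15, 20)    # assumed saving from dropping the card's own GDDR5
CPU_TDP_W = (18, 35)         # assumed TDP range for the on-die CPU cores

best = GPU_7770_FURMARK_W - GDDR5_SAVING_W[1] + CPU_TDP_W[0]
worst = GPU_7770_FURMARK_W - GDDR5_SAVING_W[0] + CPU_TDP_W[1]
print(f"rough APU budget: {best}-{worst} W")   # ~78-100 W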

How else do you explain Kinect being the most successful launch of all time, and that it is now pushing 20mil sales?

I stated that, for the most part, 360 owners didn't buy Kinect - 50 million of them didn't. I stated a fact.

MS/ATI were there first. By any definition that would allow Cell to be called 8-core, the XB360 already has 51.

Cores 4-51 can execute code independently?

Without delving into it, think of it as one powerful core and then seven very weak, subordinate cores that rely on the main core for all their I/O.

That isn't how Cell works at all; in fact, I think you would find most devs would much prefer that it did work that way. The vector cores explicitly manage data; it is rather a pain the way it is done.

4K TVs are going to be exotic merchandise for quite some time

As were 1080p TVs when the PS3 came out. 3D TVs didn't exist at all, and yet the PS3 supports both and has games using both settings. Sony will use their first-party developers to get it done; I'm not saying it will be commonplace, but Sony has a rather large incentive to push 4K displays.

On top of all that, you'd need an elitist gaming computer to even run 4K res at playable framerates, let alone a plastic Sony box being able to do it.

Not really.

http://images.anandtech.com/graphs/graph5699/45157.png

No, Crysis 3 won't be doing that resolution on a console, but it would be trivial for Sony to hit those numbers on PSN style games with a decent GPU, not even a really strong one for Q4 2013.
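
For a sense of the scale involved, the raw pixel arithmetic (nothing more than resolution times refresh rate):

Code:
# Pixels per frame and per second at common render targets (plain arithmetic).
resolutions = {"720p": (1280, 720), "1080p": (1920, 1080), "4K UHD": (3840, 2160)}
for name, (w, h) in resolutions.items():
    print(f"{name}: {w * h / 1e6:.2f} MPix/frame, {w * h * 60 / 1e9:.2f} GPix/s at 60 fps")
# 4K is 4x the pixel load of 1080p, which is why a simple PSN-style title is a
# very different proposition from Crysis 3 at that resolution.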

Just like last gen was HD? Nearly all top-end games of last gen were significantly below 720p.

It won't be the norm, but 4K games will be pushed by Sony. They have another multi-billion-dollar company segment to help promote. 1080p will be the baseline for them this generation.

Well, technically PS2 supported 1080p.

1080i. GT4 - which sold ~10 million units - supported that resolution. Was it commonplace? Absolutely not. Was it used? Yes, in an *extremely* high-profile game.

GFLOPS is not, and never has been, a measure of the speed of a CPU.

http://www.top500.org/

It isn't ideal, but when comparing cross platform architectures it is the easiest tool we have. And yes, not only is it used, it is the industry standard.
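
Since the thread keeps trading peak-FLOPS figures, here is how the paper number is usually derived - a sketch with ballpark figures that are assumptions, not vendor-confirmed specs:

Code:
def peak_gflops(cores, sp_flops_per_cycle, clock_ghz):
    """Theoretical peak = cores x single-precision FLOPs/cycle/core x clock (GHz)."""
    return cores * sp_flops_per_cycle * clock_ghz

# Ballpark, commonly cited figures -- treat them as assumptions, not official specs.
print(peak_gflops(6, 8, 3.2))   # Cell SPEs available to PS3 games: ~153.6 GFLOPS
print(peak_gflops(3, 8, 3.2))   # Xenon's three VMX128 cores:       ~76.8 GFLOPS
print(peak_gflops(8, 8, 1.6))   # eight rumoured Jaguar cores:      ~102.4 GFLOPS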

When comparing Jaguar throughput to last gen, it's fair to divide the last gen by 5 or so for differences in efficiency.

At the clockspeeds you quoted, using your guidelines, that makes Jaguar barely faster than Cell, seven years later. That is a good generational leap?

Again, raw numbers don't tell the full story. Those Xenos FLOPS are (like the NV2A FLOPS) Vec4+1 flops. In other words, you get actual full throughput only on loads where you have 4 identical ops per pixel per cycle. For geometry, this is often close enough. For anything else, we are talking about 30% typical per-pixel utilization.

That doesn't take away from the lack of progress in everything besides shader hardware.
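
To put numbers on the quoted utilization point (illustrative only; the 240 GFLOPS paper figure is the commonly cited Xenos spec, and the 90% geometry utilization is an assumption):

Code:
# Paper vs. effective throughput for a Vec4+1 shader ALU design like Xenos.
XENOS_PAPER_GFLOPS = 240   # 48 ALUs x (4+1) lanes x 2 FLOPs (MADD) x 0.5 GHz
utilization = {
    "vec4-friendly geometry work": 0.9,   # assumed; 'often close enough' per the quote
    "typical per-pixel work": 0.3,        # the ~30% figure from the quote
}
for workload, u in utilization.items():
    print(f"{workload}: ~{XENOS_PAPER_GFLOPS * u:.0f} effective GFLOPS")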

It's not like anything could actually use the full FB bandwidth.

MSAA

I don't think you understand how small Jaguar is.

Cell's PPC core is 11.32mm² at 45nm. I think everyone here is forgetting that cache doesn't get smaller just because functional units are. The die size of most modern processors isn't mainly a function of their execution units (outside of Cell). If Jaguar has next to no cache at all, which is what everyone in this thread is claiming based on how much die size it will utilize, then it will be a rather big step backwards in performance versus current offerings.

To this date, consoles have always used expensive boutique ram.

But not mainboards. How many additional layers of PCB is the move to higher bus width going to require?

If you want to compare Cell with its peers, go have a look at the Tilera thingies -- they are roughly comparable, and you can now get 8 of their cores in something like a quarter of that space.

The PPC core is 11.32mm² and the vector core is 6.47mm², both at 45nm. I'm assuming you must work for Tilera, since they haven't released any die specifications that would let us compare how large their functional units are.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
You can easily fit an AMD APU + a discrete GPU into a console for hybrid CF. HD7950M is a slightly underclocked desktop HD7870 2GB part and uses only 50W of power.
http://www.notebookcheck.net/AMD-Radeon-HD-7950M.72676.0.html

With an AMD APU and a discrete mobile HD7950M you'd come in well below 170W of power consumption. The question is not whether it's physically possible but whether it's cost effective to do so.

The launch Xbox 360 drew 180W in gaming.
The launch PS3 drew 240W in gaming.



The 50W HD7950M is comparable to the 45W GeForce Go 7900 GTX, which is very close to the RSX GPU used in the PS3.

You guys need to take into account that you cannot compare the power consumption of desktop GPUs to mobile GPUs. Mobile parts are an entirely different breed as they are binned (think of binned Intel Core i5/i7 Ivy Bridge chips).

With 4GB of GDDR5, a GTX680M with its 1344 SPs @ 720MHz uses just 100W of power. Will a desktop GTX670 underclocked to 720MHz use just 100W of power? Not a chance.

If anyone is going to estimate the power consumption of GPUs, you have to start looking at mobile parts, since neither MS nor Sony will ever use a desktop GPU in their consoles, and a desktop GPU was never used last generation either. Mobile GPUs have totally different bins, voltages and clocks, and the option of halving the memory bus to curb power consumption is always on the table (exactly what was done with RSX, which moved to 22.4 GB/sec compared to 44.8 GB/sec for the desktop variant).
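
To make the bus-halving point concrete, the arithmetic behind those RSX numbers (a sketch; the 1400 MHz effective data rate is an assumption chosen to reproduce the figures quoted above):

Code:
def peak_bandwidth_gb_s(bus_width_bits, effective_mhz):
    """Peak DRAM bandwidth = bus width in bytes x effective data rate."""
    return (bus_width_bits / 8) * effective_mhz * 1e6 / 1e9

print(peak_bandwidth_gb_s(256, 1400))   # full desktop-style bus: 44.8 GB/s
print(peak_bandwidth_gb_s(128, 1400))   # RSX's halved bus:       22.4 GB/s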

These types of GPUs in consoles are not only highly binned mobile chips, but they are custom made to extract even higher power efficiency.

I am not saying that an APU will be used, but power consumption is definitely not a limitation. Based on the estimated power consumption of the RSX GPU in the PS3, it doesn't take a lot of hard math to see that PS3's Cell was not really more power efficient than AMD's current APUs, and yet Sony had no problem dealing with its power consumption. No matter what happens, it's impossible for Sony to end up with an inferior console on the CPU side, as even a $60 AMD CPU will mop the floor with Cell in modern x86 gaming code. The key is going to be the GPU, not the CPU. Until we know the details of what GPUs will be in the next Xbox or the PS4, there isn't much point discussing the CPU side. As we've seen with PS3 vs. Xbox 360, a Cell CPU that was superior on paper accounted for squat, as the PS3 fell flat on its face 90% of the time, being GPU limited anyway.

This is a theme that persisted for most of PS3's life in the last 6 years:

""Both games aim for 30 frames per second, dropping v-sync if the target is not met - and it's immediately apparent that it's the PS3 version that has the most issues in maintaining that goal." Even the best CPU in the world cannot save you if your GPU is slower and it showed with PS3 vs. 360 in prob. 90% of console ports.

Whichever console has the better GPU setup next generation will have the best graphics most of the time. The key to the best graphics is a discrete/dedicated GPU, which is why I see a discrete GPU as a must for next-generation consoles unless MS and Sony plan to sell them for $299. Not going with a discrete GPU is suicide, because if the competitor goes with a discrete GPU, your console is toast even with AMD's best APU. That's a lot of risk to take. You could end up with a 400-shader Trinity going up against a 1280-SP Pitcairn, and then you might as well quit unless you are Nintendo.

I can see the GPU and CPU on the same package, as on the Wii U, but it won't be exclusively an APU (i.e., CPU+GPU on the same die like IVB or Trinity). AMD's current-generation APU is just not fast enough at 384 shaders to last another 6-7 years. Another reason it won't be exclusively an APU is that the Wii U never used one. If the Wii U never used an APU - and knowing how cost-conscious Nintendo is, I am sure they evaluated the possibilities - then it's doubtful MS and Sony will take this compromised approach. And finally, probably the most important reason it won't be exclusively an APU is that AMD has nothing between Trinity and Kaveri. Kaveri is only expected to have 512 GCN shaders, which is still worse than even an HD7770. It's also supposed to have Steamroller CPU cores, but that CPU is unlikely to launch in 2013 based on current rumors of delays to 2014.

MS could just as easily go with a more advanced PowerPC architecture like Nintendo did.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
Cores 4-51 can execute code independently?
No. Can the SPEs? No. Tada! The memory model necessitates that any concurrent tasks be either scheduled before-hand, or dynamically scheduled by software running on the PPE, because only the PPE can do a reasonable job of running software that can handle synchronization. They are not remotely equals.

The vector cores explicitly manage data, it is rather a pain the way it is done.
In the 60s and 70s, that was considered a major problem to solve, because doing just that was rather a pain - not a feature - and that hasn't changed (instead, GPGPUs and uCs are getting virtual memory support... gee, I wonder why?). Slicing up code and data into overlays efficiently takes more time than optimizing with a proper fully-fledged MMU in place, even for the best low-level programmers, and still rarely gets even close to the same kind of real performance. In fact, you could dictate 4KB pages, have the OS do contiguous allocations, and do all the DMA at page size and offsets only, and have 99% of what manual memory management tuning will get you, with 1% of the time investment. The only upside is higher peak performance numbers on paper.

A full MMU would take little space (no need for hardware fill on faults, or anything else that makes them big on high-performance CPUs) and add only a small (1-2 cycle?) latency to memory access, which would typically be fully hidden by pipelining, so it would offer no practical downsides (assuming manual software fill, and a crash on any fault - determinism and memory abstraction don't have to be enemies). The emergent benefit would be the ability to implement fully-fledged fine-grained multitasking, which would help make it actually worth using.
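
As a rough illustration of the "DMA at page size and offsets only" idea - a minimal sketch with hypothetical names, not anything from an actual SDK:

Code:
# The host hands out page-aligned memory; the coprocessor side only ever sees
# (page base, offset, length) descriptors. Purely illustrative.
PAGE_SIZE = 4096

def dma_descriptors(base_addr, length):
    """Split one logical transfer into page-granular DMA descriptors."""
    descs, addr, remaining = [], base_addr, length
    while remaining > 0:
        offset = addr % PAGE_SIZE
        chunk = min(PAGE_SIZE - offset, remaining)
        descs.append((addr - offset, offset, chunk))
        addr += chunk
        remaining -= chunk
    return descs

print(dma_descriptors(0x10F00, 10000))   # an unaligned 10 KB transfer spanning four pages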
 

Arzachel

Senior member
Apr 7, 2011
903
76
91
Add a 35W CPU and you are easily breaking 100 watts; add a 35W CPU from AMD and you are getting an absolute dog, too.

What kind of moon math are you using? Just going from GDDR5 to GDDR3 decreases power consumption by 15-20W. Although I'm pretty sure going from GDDR5 to no memory at all would net an even bigger decrease, even the 15W figure would put the peak total under 100W.

http://www.top500.org/

It isn't ideal, but when comparing cross platform architectures it is the easiest tool we have. And yes, not only is it used, it is the industry standard.

Good thing the industry is led by engineering types instead of investors who want a hard number even if it's absolutely unrepresentative! ...Oh, wait.

At the clockspeeds you quoted, using your guidelines, that makes Jaguar barely faster than Cell, seven years later. That is a good generational leap?

Seeing as the only metric you seem to be interested in is peak throughput, I've got some snake oil Bulldozer to sell you.

Cell's PPC core is 11.32mm² at 45nm. I think everyone here is forgetting that cache doesn't get smaller just because functional units are. The die size of most modern processors isn't mainly a function of their execution units (outside of Cell). If Jaguar has next to no cache at all, which is what everyone in this thread is claiming based on how much die size it will utilize, then it will be a rather big step backwards in performance versus current offerings.

A single Bobcat core is 4.9mm² at 40nm, and Jaguar is 3.1mm² at 28nm with around 10% more transistors. I don't exactly see your point.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
Good thing the industry is led by engineering types instead of investors who want a hard number even if it's absolutely unrepresentative! ...Oh, wait.
Even better, notice that the list is dominated by traditionally proven types of processors, all utilizing abstracted memory management, with caches much larger than pages; pretty much the only exception is the x86+Tesla computers (it would be hard to argue that Fermi is traditional at this point).
 

BenSkywalker

Diamond Member
Oct 9, 1999
9,140
67
91
You can easily fit an AMD APU + a discrete GPU into a console for hybrid CF.

Forget CrossFire; use the APU's GPU for OpenCL so they have some compute power to use. Higher resolution and more complex shaders are nice, but to me, seeing *every* game have cool physics effects running in OpenCL would be *much* better.

The memory model necessitates that any concurrent tasks be either scheduled before-hand, or dynamically scheduled by software running on the PPE, because only the PPE can do a reasonable job of running software that can handle synchronization. They are not remotely equals.

How do you hand Xenos general code to compute from Xenon? Since you said the 360 is a 51-core part, I'm really interested in that detail, as I'm sure a great deal of the 360 development community would be too :)

In the 60s and 70s, that was considered a major problem to solve, because doing just that was rather a pain

You even quoted me-

it is rather a pain the way it is done.

Why the rant?

Just going from GDDR5 to GDDR3 decreases power consumption by 15-20W.

Do you have any links to that? Badly behaving early GDDR5 RAM is the only time I can find RAM consuming that much power total.

Good thing the industry is led by engineering types instead of investors who want a hard number even if it's absolutely unrepresentative!

Computational throughput is normally measured using FLOPS. How else should you measure it? How it makes you feel on the inside?

Seeing as the only metric you seem to be interested in is peak throughput, I've got some snake oil Bulldozer to sell you.

*For a console* I would take Bulldozer over a comparable-gen i7. Highly optimized, hand-tuned code would make Bulldozer look very good (there are already examples of Bulldozer besting the i7 decently). The console market is *not* like the PC market. BTW - it would be a very cold day in hell before I put one of AMD's recent CPUs in my PC, but for a console? It makes more sense than Intel.

A single Bobcat core is 4.9@40nm and 3.1@28nm for Jaguar with around 10% more transistors. I don't exactly see your point.

The point is that most of a CPU's die size is due to its cache, not its functional cores. A Jaguar core could be 1nm and a quad-core part still wouldn't shrink down as much as some people in this thread are claiming, due to the size the CPU's caches are going to take up.

Even better, notice that the list is dominated by traditionally proven types of processors, all utilizing abstracted memory management, with caches much larger than pages; pretty much the only exception is the x86+Tesla computers (it would be hard to argue that Fermi is traditional at this point).

The Tesla parts are looking to own the top spot in those rankings in the near future; Roadrunner held the top spot for a while using a bunch of Cells, and I would expect Xeon Phi to make an appearance before too long as well.
 

Arzachel

Senior member
Apr 7, 2011
903
76
91
Do you have any links to that? Badly behaving early GDDR5 RAM is the only time I can find RAM consuming that much power total.

http://www.rage3d.com/reviews/video/his_hd5550_hd5570_silence/index.php?p=5

Computational throughput is normally measured using FLOPS. How else should you measure it? How it makes you feel on the inside?

The issue is that peak computational throughput taken by itself tells you very little about performance at a given workload.

*For a console* I would take Bulldozer over a comparable-gen i7. Highly optimized, hand-tuned code would make Bulldozer look very good (there are already examples of Bulldozer besting the i7 decently). The console market is *not* like the PC market. BTW - it would be a very cold day in hell before I put one of AMD's recent CPUs in my PC, but for a console? It makes more sense than Intel.

I agree - I'd love to see modified Piledriver cores instead of Jaguar, but that's not likely. The point, though, was that peak FLOPS is pretty meaningless by itself.

The point is that most of a CPU's die size is due to its cache, not its functional cores. A Jaguar core could be 1nm and a quad-core part still wouldn't shrink down as much as some people in this thread are claiming, due to the size the CPU's caches are going to take up.

Even if the uncore stays the same size as on 40nm, the total die size would increase by a few square mm tops compared to Bobcat.
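
To put rough numbers on the cache-versus-cores point - a back-of-the-envelope sketch in which the per-MB L2 area is an assumption for illustration, not a die-shot measurement:

Code:
# Rough 28nm area budget for a 4-core Jaguar module.
JAGUAR_CORE_MM2 = 3.1   # per core at 28nm, the figure quoted above
L2_MM2_PER_MB = 5.0     # assumed: 1 MB of L2 SRAM plus interface logic
L2_MB = 2               # Jaguar's shared L2 per 4-core module

cores = 4 * JAGUAR_CORE_MM2
cache = L2_MB * L2_MM2_PER_MB
print(f"cores: {cores:.1f} mm2, L2: {cache:.1f} mm2 "
      f"({100 * cache / (cores + cache):.0f}% of the module)")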
 

psoomah

Senior member
May 13, 2010
416
0
0
Not going with a discrete GPU is suicide, because if the competitor goes with a discrete GPU, your console is toast even with AMD's best APU. That's a lot of risk to take. You could end up with a 400-shader Trinity going up against a 1280-SP Pitcairn, and then you might as well quit unless you are Nintendo.

I can see the GPU and CPU on the same package, as on the Wii U, but it won't be exclusively an APU (i.e., CPU+GPU on the same die like IVB or Trinity). AMD's current-generation APU is just not fast enough at 384 shaders to last another 6-7 years.

There is a smoking gun here.

With an AMD CPU and GPU in the 720 and PS4 being highly probable based on the information to hand - and assuming that will be the case - the question becomes: which CPU and GPU? For that, Rory Read's recent statement, "Our semi-custom APUs already have a number of confidential high-volume design wins in place," points the way.

The logical inference is an AMD APU, which implies specs sufficient for next-gen needs. The question then becomes: which APU? The logical candidate is Kaveri, with its shared memory, unified address space and HSA components.

That leads to supply: can it be produced in time and in sufficient numbers?

a) AMD is on record that it has working Kaveri silicon in hand, scheduled for a 2Q 2013 release.
b) AMD is now free to source its APUs beyond GF.
c) GF, TSMC and Samsung are all capable of fabricating Kaveri at 28nm.
d) Recent rumors of a Trinity 2.0 in 1H 2013 and Kaveri retail delayed to 2H 2013.

That delay makes sense if AMD is prioritizing Xbox Kaveri production because capacity is limited for the period it's needed.

The pieces appear to be in place to make a Kaveri-driven next-gen Xbox possible.

Where Sony fits in this picture is murkier. Kaveri is their logical choice also, but if production bottlenecks exist, Sony ain't gonna be at the front of the line. Charlie D recently said Xbox in 2013, PS4 in 2014. Consider that Sony got caught with its pants around its ankles when it found out how far along MS was with a 2013 console release, and subsequently decided to throw overboard whatever PS4 processor/GPU route it was pursuing in favor of a far timelier and more financially feasible, nearly turnkey AMD APU - essentially copying MS. That puts Sony well behind the curve versus MS in just about every area, perhaps most vitally in access to 28nm production. A 2014 release may be the best Sony can do. It's gonna suck for them to give MS a year's lead again, but going with essentially the same hardware is going to be good for their bottom line when they do release, and their game development costs aren't going to suck. It also gives them time to refine their version of Kinect.

But hoo-boy, with Nintendo wandering in the street with their finger up their nose, if Sony, with similar hardware, is a year behind MS while gamers are so avid for next-gen capability, MS is going to gain a massive sales lead.

MS and AMD have surely been collaborating deeply for quite a while to develop the programming infrastructure for the HSA-oriented APU that is going into their next-gen console.

For AMD this is win, win, win. Getting an HSA-capable APU into the next-gen consoles is huge: it supercharges development of the programming tools and infrastructure and provides substantial momentum toward making HSA successful.

This is where the ATI acquisition is really starting to pay off.
 

Ajay

Lifer
Jan 8, 2001
16,094
8,112
136
But Kabini is the chip AMD has on hand, and it has Jaguar cores and even less graphical prowess than Trinity: AMD_Already_Tests_Next_Gen_Low_Power_Kabini_Chip.

Of course, even though Kaveri won't be ready as a consumer product, it could still be under development for consoles in custom form under contract (since MS or Sony would be funding the production). Kaveri could be a very good move coupled with a GPU. For 2D applications, multimedia, maybe even Flash games, there would be plenty of power. When a 3D title is launched, the GPU could take over and provide the extra power needed. This makes the most sense to me.
 

Ferzerp

Diamond Member
Oct 12, 1999
6,438
107
106
That an AMD cpu and gpu will be in the 720 and PS4 is known

If by "known" you mean "rumored", sure. The basis of your entire argument is speculation. Everything that follows is then speculation.

edit: The problem with a lot of the crap that gets spewed around these days is that people just don't sanity-check how the available options can actually perform. They see a buzzword and automatically accept any rumor, regardless of how that would work. I mean, people actually think Apple would drop Intel for current ARM levels of performance in laptops. There's just a major disconnect between reality and what people are willing to accept.
 

psoomah

Senior member
May 13, 2010
416
0
0
If by "known" you mean "rumored", sure. The basis of your entire argument is speculation. Everything that follows is then speculation.

Valid observation on use of the word 'known'. I have modified my post accordingly.
 

anongineer

Member
Oct 16, 2012
25
0
0
b) AMD is now free to source its APUs beyond GF.

This isn't a get-out-of-jail-free card:

"GLOBALFOUNDRIES waived the exclusivity arrangement for AMD to manufacture certain 28nm APU products at GLOBALFOUNDRIES for a specified period."

They don't say which APUs, and they don't say for how long. It could very well be that whatever gets fabbed elsewhere will eventually have to come back to GF28, which would be a sad day.

Speaking of which,

c) GF, TSMC and Samsung are all capable of fabricating Kaveri at 28nm.

A side note: there is a substantial amount of foundry process-specific stuff that has to be swapped out, like the SRAMs for caches. Heck, even the standard-cell logic gates aren't necessarily the same area, speed, or power.

To retarget a design means taking it through the entire backend physical design flow, again. That requires time, workers, and money, all of which seem to be in short supply these days.

d) Recent rumors of a Trinity 2.0 in 1H 2013 and Kaveri retail delayed to 2H 2013.

MS has always launched Xboxes in November. If they do so again, that would collide with the rumours of a delayed Kaveri.
 

Cerb

Elite Member
Aug 26, 2000
17,484
33
86
How do you hand Xenos general code to compute from Xenon? Since you said the 360 is a 51-core part, I'm really interested in that detail, as I'm sure a great deal of the 360 development community would be too :)
How do you hand an SPE general code to compute from the PPE? You don't. You give it specialized code that only performs small compute loops. If you don't give it any, it won't have any. OTOH, software on the PPE can be complex enough to actually make decisions about what it is going to run next. The SPEs are vector coprocessors with scratch space, working only at the behest of the PPE.

Why the rant?
You're missing rather basic problems that Cell exacerbates rather than helps with, and calling them benefits. Overlays are bad, have been bad, and will remain bad. They used to be necessities, but they no longer are, and abandoning them is a good thing. If memory is to be divided up into chunks, those chunks need to be much smaller than the available working memory. It's the same problem the PDPs had, the same problem x86 OSes running on pre-386 CPUs had*, and every now and then somebody decides to rehash it as the best new thing. Flat virtual memory with small arbitrary page sizes is categorically a superior way to arrange memory compared to overlays, which split memory into chunks dictated by physical limits or address-range limits.

Working on sizable chunks wouldn't be bad, to keep things simple, but a 128-entry software-filled TLB (for pre-filling), with a cache that only holds whole pages (so 4K-64K as the limits), would make such a coprocessor far more flexible - to the point that amazing low-level programmers wouldn't have to struggle only to end up wasting its FPUs. FPUs that sit wasted because the workload can't be made to use them are worse than having fewer of them.

*Shame we didn't get solid well-supported VM OSes until ~95.
 

NTMBK

Lifer
Nov 14, 2011
10,411
5,677
136
I stated that, for the most part, 360 owners didn't buy Kinect - 50 million of them didn't. I stated a fact.

70 million 360s have been sold, but how many of those are still working? Back in 2009 the failure rate was put at 54%. (Obviously the failure rate has been reduced since then through hardware shrinks, but that's still a hell of a lot of dead 360s.) So the ratio of Kinects sold to functional 360s is far higher than the raw sales figures suggest.
 

Ancalagon44

Diamond Member
Feb 17, 2010
3,274
202
106
Computational throughput is normally measured using FLOPS. How else should you measure it? How it makes you feel on the inside?

Most people know how misleading FLOPS can be, so relative computational throughput can only really be measured with benchmarks. There is no point in using FLOPS.


*For a console* I would take Bulldozer over a comparable-gen i7. Highly optimized, hand-tuned code would make Bulldozer look very good (there are already examples of Bulldozer besting the i7 decently). The console market is *not* like the PC market. BTW - it would be a very cold day in hell before I put one of AMD's recent CPUs in my PC, but for a console? It makes more sense than Intel.

First of all, I doubt that hand-optimized code running on BD would ever be faster than hand-optimized code running on Ivy Bridge. Ivy Bridge is simply a better design for the usage scenarios that matter in games.

But more specifically, were I a console engineer, all other things being equal, I would choose hardware that didn't require such heavy optimization to produce acceptable results. Look at PS3 vs. 360 - the 360 is a lot easier to develop for, and it shows. Whatever the peak theoretical output of the PS3 is, it's irrelevant, because consumers see multi-platform games running better on the 360.
 

Ajay

Lifer
Jan 8, 2001
16,094
8,112
136
MS has always launched Xboxes in November. If they do so again, that would collide with the rumours of a delayed Kaveri.

As I mentioned above, just because AMD's release of Kaveri is delayed doesn't mean custom silicon for M$, built off Kaveri's design, is delayed. If M$ is footing the bill for production, then AMD now has the cash to get M$'s console chip shipped.
 

NTMBK

Lifer
Nov 14, 2011
10,411
5,677
136
As I mentioned above, just because AMD's release of Kaveri is delayed doesn't mean custom silicon for M$, built off Kaveri's design, is delayed. If M$ is footing the bill for production, then AMD now has the cash to get M$'s console chip shipped.

Wow, you replaced the "S" with a "$"... that's like... so funny... because Microsoft like money...

It wasn't funny in 1995, and it's not funny now.
 

Ajay

Lifer
Jan 8, 2001
16,094
8,112
136
Wow, you replaced the "S" with a "$"... that's like... so funny... because Microsoft like money...

It wasn't funny in 1995, and it's not funny now.

:rolleyes: I've probably been doing that since b/4 1995, big deal. Are you the Microsoft police?
 

Bobisuruncle54

Senior member
Oct 19, 2011
333
0
0
:rolleyes: I've probably been doing that since b/4 1995, big deal. Are you the Microsoft police?

Don't you know? Microsoft are cool now because they brought out a decent product recently (W7) and are the underdog when it comes to MP3 players, phones and tablets. You should be typing "MS" instead of "M$" and Apple as "$$$Apple$$$moneygrabbing*****$$$$".

Oh and Android as "thou-can't-do-no-wrongeth".

:biggrin:
 

DrBoss

Senior member
Feb 23, 2011
415
1
81
No one on here actually uses Apple products, do they?
I figured their products were reserved for the computer illiterate.
 

DrBoss

Senior member
Feb 23, 2011
415
1
81
I'm sure quite a few do, and why not? Is this some hipster forum where people think corporations have personality traits?
I'm sure Apple is perfectly aware of their perceived "personality" i.e. "brand" which they've cultivated through years of "hip" advertising.

Sorry for the thread derailment.

*sent from my LG Env2 dumbphone
 

thilanliyan

Lifer
Jun 21, 2005
12,040
2,254
126
*sent from my LG Env2 dumbphone

Hahaha those signatures really irritate me when I see them in emails, etc. ("sent from my iPhone"..."sent from my Blackberry device"...) I DON'T CARE WHAT DEVICE YOU USED TO SEND ME A MESSAGE!! :D
 

Bobisuruncle54

Senior member
Oct 19, 2011
333
0
0
I'm sure Apple is perfectly aware of their perceived "personality" i.e. "brand" which they've cultivated through years of "hip" advertising.

Sorry for the thread derailment.

*sent from my LG Env2 dumbphone

Personality is not interchangeable with brand; there's a key difference - i.e., behaviour versus perception.

Anyway, back to the thread.