Fudzilla: Bulldozer performance figures are in


Abwx

Lifer
Apr 2, 2011
11,885
4,873
136
Quantum Mechanics RELIES heavily on statistics......

Would you suggest that supply chain management is a form of Quantum Mechanics?

Hell, gambling relies heavily on statistics. Would you claim that gambling is a form of Quantum Mechanics?

Threading only briefly brushes with statistics (at best) and it certainly isn't something at the forefront of the problem.

These are completely different cases, separated by the usual continuous/discrete domain of definition. What differentiates these statistics are the domains of the departure and arrival sets.

Quantum-mechanical statistics: from the complex numbers (C) to the real numbers (R).
All three other cases: from the natural numbers (N) to the positive rationals (Q+).
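(To make the distinction concrete, an illustrative aside that is not from the original post: quantum statistics take a complex amplitude to a real probability via the Born rule, while ordinary frequency counting, as in threading, gambling, or supply chains, takes integer counts to positive rationals.)

$$P(x) = |\psi(x)|^2, \qquad \psi : X \to \mathbb{C}, \quad P(x) \in \mathbb{R}_{\ge 0}$$

$$p_i = \frac{n_i}{N}, \qquad n_i, N \in \mathbb{N}, \quad p_i \in \mathbb{Q}_+ \cap [0,1]$$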
 

ElFenix

Elite Member
Super Moderator
Mar 20, 2000
102,402
8,574
126
so... are we any closer to having a clue as to what bulldozer's performance might be than we were 4 months ago?
 

Cogman

Lifer
Sep 19, 2000
10,286
145
106
These are completely different cases, separated by the usual continuous/discrete domain of definition. What differentiates these statistics are the domains of the departure and arrival sets.

Quantum-mechanical statistics: from the complex numbers (C) to the real numbers (R).
All three other cases: from the natural numbers (N) to the positive rationals (Q+).

:) My point being that just because you see stats enter into QM, that doesn't mean that everything using stats is using "advanced theoretical physics".
 

Abwx

Lifer
Apr 2, 2011
11,885
4,873
136
:) My point being that just because you see stats enter into QM, that doesn't mean that everything using stats is using "advanced theoretical physics".

You are right. :)

Wuliheron confused "same theories" with theories using the "same tools"; hence, such propositions are not commutative.
 

wuliheron

Diamond Member
Feb 8, 2011
3,536
0
0
You still on this kick? Multithreading is about as old as computer science itself. It was extensively studied in the early '60s and has been used for as long as there have been servers. Multithreading is only new in personal-computer, consumer-level applications (which, btw, is really a small market compared to business and server applications).

As for investing in multithreaded tech: all of the companies you listed HAVE been investing HEAVILY in trying to make multithreading easier to do. http://en.wikipedia.org/wiki/OpenMP <- This, for example, has been around since 1997, and pretty much every company you listed has tried to make it successful (Intel especially).
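(For illustration, a minimal OpenMP sketch of the "easier multithreading" being referred to; the array, its size, and the names are made up for the example and are not from the post.)

// Parallelize a loop with a single pragma; compile with e.g. g++ -fopenmp sum.cpp
#include <cstdio>
#include <vector>

int main() {
    std::vector<double> data(1000000, 1.0);
    double sum = 0.0;

    // OpenMP splits the iterations across threads; reduction(+:sum)
    // gives each thread a private partial sum and combines them at the end.
    #pragma omp parallel for reduction(+:sum)
    for (long i = 0; i < static_cast<long>(data.size()); ++i)
        sum += data[i] * data[i];

    std::printf("%f\n", sum);
    return 0;
}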

You still don't grasp the depth of the problem. Theoretically it should be possible to have the damned chips thread themselves or at least create a single compiler that will work every time. Yeah, we can get by with trial and error and creating tools to make the trial and error process easier, but those merely deal with the symptoms of the problem.
 

Cogman

Lifer
Sep 19, 2000
10,286
145
106
You still don't grasp the depth of the problem. Theoretically it should be possible to have the damned chips thread themselves or at least create a single compiler that will work every time. Yeah, we can get by with trial and error and creating tools to make the trial and error process easier, but those merely deal with the symptoms of the problem.

I have quite a firm grasp on the issue.

At the chip level we DO thread things (though at this level "thread" really isn't the correct term; we do things concurrently). Look up pipelining, out-of-order execution, and superscalar design. At the chip level, we "thread" things as much as humanly possible while still putting on the facade of serial execution. Hell, we completely throw away that facade with things like SIMD instructions.
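(For illustration, a minimal SIMD sketch using SSE intrinsics; the function and array names are made up for the example. A single _mm_add_ps adds four float lanes at once, openly dropping the one-operation-at-a-time facade.)

#include <immintrin.h>
#include <cstdio>

// Add two float arrays four lanes at a time. Assumes n is a multiple of 4
// to keep the example short.
void add_arrays(const float* a, const float* b, float* out, int n) {
    for (int i = 0; i < n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);            // load 4 floats from a
        __m128 vb = _mm_loadu_ps(b + i);            // load 4 floats from b
        _mm_storeu_ps(out + i, _mm_add_ps(va, vb)); // one instruction adds all 4
    }
}

int main() {
    float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    float out[8];
    add_arrays(a, b, out, 8);
    std::printf("%f %f\n", out[0], out[7]); // prints 9.000000 9.000000
    return 0;
}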

The "damn" chips can't actually manage the threading because they would be stepping on the toes of software. The "damn" chips are made to allow the "damn" software do whatever it wants or needs to do. And as it should. Would you really propose that we lock every single operating system into one single threading model? That would be a terrible idea.

"create a single compiler that will work every time". Works every time? GCC, ICC, VC++ all seem to do a nice job of working every time. It is pretty rare for any of those compilers to be 100% totally broken. Creating a "unified" compiler would also be a terrible idea. The more people attacking the idea of compiling software, the better. That is how improvements are made, not through setting one standard for all to follow.

"Yeah, we can get by with trial and error and creating tools". Humans make errors. There is no escaping it. That is why we do that design->test->design process. There is no magic approach that is going to somehow escape the need for test and verification.

It isn't a problem of lack of tools or lack of research. It is a problem of training developers to use those tools and training them to use them well. The research is there; the ability to create highly threaded programs is there. People have been using them for a LONG time. The only thing that is truly lacking is the business motivation to highly thread things. Most applications don't NEED to use every single resource given to them. Computers have gotten so fast that most programs and programmers simply don't care.

Why do you think we have seen such a rise of Tablet PCs and computing? People are buying and happily using machines that are FAR slower than their dedicated PC counterparts. This is because people don't need petaflops of processing power.
 

OCGuy

Lifer
Jul 12, 2000
27,224
37
91
I don't want to look through this whole thread for the image, but how close are we to the 90 day mark that the AMD slide said BD would launch?
 

wuliheron

Diamond Member
Feb 8, 2011
3,536
0
0
I have quite a firm grasp on the issue....

Would you really propose that we lock every single operating system into one single threading model? That would be a terrible idea.

No, I'm proposing that the "damned chip" can determine the best course of action for whatever you feed it.

"create a single compiler that will work every time". Works every time? GCC, ICC, VC++ all seem to do a nice job of working every time. It is pretty rare for any of those compilers to be 100&#37; totally broken. Creating a "unified" compiler would also be a terrible idea. The more people attacking the idea of compiling software, the better. That is how improvements are made, not through setting one standard for all to follow.

That is how improvements are made because they are not attacking the root problems, but merely addressing the symptoms. A unified compiler is a terrible idea only because we have neither the theory nor the experience to create a good one.

"Yeah, we can get by with trial and error and creating tools". Humans make errors. There is no escaping it. That is why we do that design->test->design process. There is no magic approach that is going to somehow escape the need for test and verification.

It isn't a problem of lack of tools or lack of research. It is a problem of training developers to use those tools and training them to use them well. The research is there; the ability to create highly threaded programs is there. People have been using them for a LONG time. The only thing that is truly lacking is the business motivation to highly thread things. Most applications don't NEED to use every single resource given to them. Computers have gotten so fast that most programs and programmers simply don't care.

Why do you think we have seen such a rise of Tablet PCs and computing? People are buying and happily using machines that are FAR slower than their dedicated PC counterparts. This is because people don't need petaflops of processing power.

People also buy McDonald's, cheap beer, and whatever political rhetoric they prefer. That doesn't mean they don't want more, just that they're often willing to settle for less when the alternatives just aren't that compelling.
 

RussianSensation

Elite Member
Sep 5, 2003
19,458
765
126
I don't want to look through this whole thread for the image, but how close are we to the 90 day mark that the AMD slide said BD would launch?

I am pretty sure right now the consensus is that BD is supposed to launch September 19th. But that depends on whether AMD can get enough flying monkeys from the zoo for the launch and whether Katy Perry is on board for the theme song.

If I were you, I'd be unloading those 5850s and getting ready for HD7950s instead. That's far more exciting on AMD's side :) Those 5850s still have value on eBay due to Bitcoin, right?
 

sequoia464

Senior member
Feb 12, 2003
870
0
71
I am pretty sure right now the consensus is that BD is supposed to launch September 19th. But that depends on whether AMD can get enough flying monkeys from the zoo for the launch and whether Katy Perry is on board for the theme song.

If I were you, I'd be unloading those 5850s and getting ready for HD7950s instead. That's far more exciting on AMD's side :) Those 5850s still have value on eBay due to Bitcoin, right?

You seem to know quite a bit about the BD launch event with Katy Perry and all, but I thought the flying monkeys were still a part of the NDA.
 

Voo

Golden Member
Feb 27, 2009
1,684
0
76
A unified compiler is a terrible idea only because we have neither the theory nor the experience to create a good one.
Maybe I'm misunderstanding you, but if not: every optimizing compiler I know of, and at least every successful one today, optimizes on intermediate code that abstracts the underlying machine code away. It's much simpler to do escape analysis, dead code elimination, CSE, and whatnot on intermediate code (possibly using several different variants) than not.

Plugging in a different backend to create the machine code, do the register allocation, and so on is the simple part there. You wouldn't gain much, or anything, by designing your compiler for only one particular backend in the first place (that's just how the usual compiler architecture works).
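(For illustration, a small, hypothetical example of the machine-independent optimizations mentioned above; the functions are made up. Common subexpression elimination and dead code elimination both operate on the compiler's intermediate representation, before any particular backend is chosen, which is why they work the same no matter what machine code comes out the other end.)

// What the optimizer sees, written as C++ source for readability:
int before(int a, int b) {
    int x = (a * b) + 3;   // a * b computed here...
    int y = (a * b) + 7;   // ...and computed again here
    int unused = a - b;    // computed but never read afterwards
    return x + y;
}

// After CSE (compute a * b once) and DCE (drop the unread value),
// the intermediate code is roughly equivalent to:
int after(int a, int b) {
    int t = a * b;         // common subexpression, computed once
    return (t + 3) + (t + 7);
}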
 

wuliheron

Diamond Member
Feb 8, 2011
3,536
0
0
Maybe I'm misunderstanding you, but if not: every optimizing compiler I know of, and at least every successful one today, optimizes on intermediate code that abstracts the underlying machine code away. It's much simpler to do escape analysis, dead code elimination, CSE, and whatnot on intermediate code (possibly using several different variants) than not.

Plugging in a different backend to create the machine code, do the register allocation, and so on is the simple part there. You wouldn't gain much, or anything, by designing your compiler for only one particular backend in the first place (that's just how the usual compiler architecture works).

You are still arguing about what is easier to do given the current limitations of the technology and of programmers, rather than what is theoretically possible to do. So yes, you are misunderstanding me.
 

Voo

Golden Member
Feb 27, 2009
1,684
0
76
You are still arguing about what is easier to do given the current limitations of the technology and of programmers, rather than what is theoretically possible to do. So yes, you are misunderstanding me.
No, I'm saying it doesn't make any sense to do what you're proposing, because optimizing code means, for the largest part, using machine-independent representations. So your claim that "one compiler" is a bad idea doesn't make any sense.

It's obvious to anyone who's ever written a compiler that you don't optimize the generated machine code: it's harder, doesn't allow reuse, and is inferior in pretty much every possible way.
 

Cogman

Lifer
Sep 19, 2000
10,286
145
106
No, I'm proposing that the "damned chip" can determine the best course of action for whatever you feed it.

Let's start here, then. What is "the best course of action"? If everyone is misunderstanding you, explain your ideas more clearly and fully.
 

podspi

Golden Member
Jan 11, 2011
1,982
102
106
I don't want to look through this whole thread for the image, but how close are we to the 90 day mark that the AMD slide said BD would launch?

IIRC, it implied BD would ship by the end of August.

So we're looking at a launch sometime in September. If you frequent SA, Charlie there insists the date is not the 19th, although he doesn't offer any alternatives. I personally believe him but YMMV.
 

wuliheron

Diamond Member
Feb 8, 2011
3,536
0
0
Let's start here, then. What is "the best course of action"? If everyone is misunderstanding you, explain your ideas more clearly and fully.

I think what we are doing now is the "best course of action". We're doing the best we can with what we've got, as people have done since the Stone Age. However, that is no reason to assume it can't be done better and that multicore cpu processing can't be completely automated. My only argument is that it is the current limitations of the underlying theories and our lack of experience that prevent us from doing better.

In other words, our ignorance. The first step towards overcoming ignorance is accepting that you actually are ignorant.
 

Cogman

Lifer
Sep 19, 2000
10,286
145
106
I think what we are doing now is the "best course of action". We're doing the best we can with what we've got, as people have done since the Stone Age. However, that is no reason to assume it can't be done better and that multicore cpu processing can't be completely automated. My only argument is that it is the current limitations of the underlying theories and our lack of experience that prevent us from doing better.

In other words, our ignorance. The first step towards overcoming ignorance is accepting that you actually are ignorant.

In context, you said that the chips should be taking the best course of action. What is that? How are chips today not taking the "best course of action"?

However, that is no reason to assume it can't be done better and that multicore cpu processing can't be completely automated.
How is multicore cpu processing NOT automated? Computers are completely automated machines; they can't do things that aren't automated.

My only argument is that it is the current limitations of the underlying theories and our lack of experience that prevent us from doing better.
What are the limitations of our current theories?

Please provide FIRM examples of what you are talking about, none of this "Mankind is stupid, we can do better!" crap. Engineers are completely open to new ideas, but simply saying "all of our ideas are infantile" just doesn't cut it. HOW are they limited? Identify the problems with the theories; don't just proclaim "Everything just sucks!"

I'm completely open to the possibility that there are much better ways of doing things. I'm not open to people spouting crap about how bad things are without providing any sort of example of how things are bad. Nor am I fine with someone saying that 50-year-old theories are "new".
 

Black96ws6

Member
Mar 16, 2011
140
0
0
Anyone know if these 6 benches are also fake?

Where is dug777? He is like from the future and sh!t. Only he knows how fast BD truly is.

You must've missed "from OBR" at the top or the "OBR Edition" over the chip in the screenshot.

Definitely fakes; he already admitted he was lying.

Nobody has true benchmarks; AMD has done a good job of keeping them under wraps. You can look at that as either a good thing (going to blow Intel out of the water and catch them by surprise a la 64) or a bad thing (BD is barely faster than PH2, and only because of the higher clock speeds).
 

Nemesis 1

Lifer
Dec 30, 2006
11,366
2
0
You still don't grasp the depth of the problem. Theoretically it should be possible to have the damned chips thread themselves or at least create a single compiler that will work every time. Yeah, we can get by with trial and error and creating tools to make the trial and error process easier, but those merely deal with the symptoms of the problem.

The compiler war is over. Intel won. They need only add a disclaimer. BD will not use Intel's AVX compilers, and Intel will not give away what they have been working on for years for others to use. You can say all the other compilers work well all you want; the Mitosis compiler is not in those compilers and never will be. Of course it's not called the Mitosis compiler now, but when it was in development that's what it was called. P-slices (VEX prefix) are exactly what replaces, say, a REX prefix with much, much shorter code. It's in Intel's AVX PDF, so I didn't pull this from my arse. I also have the papers that tell me how it all works. I had to pay for those papers, so I'm not giving anything away.
 