Quantum Mechanics RELIES heavily on statistics......
Would you suggest that supply chain management is a form of Quantum Mechanics?
Hell, gambling relies heavily on statistics. Would you claim that gambling is a form of Quantum Mechanics?
Threading only briefly brushes with statistics (at best) and it certainly isn't something at the forefront of the problem.
so... are we any closer to having a clue as to what bulldozer's performance might be than we were 4 months ago?
These are completely different cases, separated by the usual continuous/discrete domain of definition.
What differentiates these statistics are the definition domains of the departure and arrival sets.
QM statistics: from the complex number set (C) to the real number set (R).
All three other cases: from the integer number set (N) to the positive rational number set (Q+).
My point being that just because you see stats enter into QM, that doesn't mean that everything using stats is using "advanced theoretical physics".
Latest update (considering the source) with B2 stepping: http://wccftech.com/amd-bulldozer-z...rs-detailed-fx8150-fx8120-hit-retail-q3-2011/
You still on this kick? Multithreading is about as old as computer science itself. It was extensively studied in the early '60s and has been used for as long as there have been servers. Multithreading is only new in personal-computer, consumer-level applications (which, btw, is really a small market compared to business and server applications).
As for investing in multithreaded tech: all the companies you listed HAVE been investing HEAVILY in trying to make multithreading easier to do. http://en.wikipedia.org/wiki/OpenMP <- This, for example, has been around since 1997, and pretty much every company you listed has tried to make it successful (Intel especially).
You still don't grasp the depth of the problem. Theoretically it should be possible to have the damned chips thread themselves or at least create a single compiler that will work every time. Yeah, we can get by with trial and error and creating tools to make the trial and error process easier, but those merely deal with the symptoms of the problem.
I have quite a firm grasp on the issue....
Would you really propose that we lock every single operating system into one single threading model? That would be a terrible idea.
"create a single compiler that will work every time". Works every time? GCC, ICC, VC++ all seem to do a nice job of working every time. It is pretty rare for any of those compilers to be 100% totally broken. Creating a "unified" compiler would also be a terrible idea. The more people attacking the idea of compiling software, the better. That is how improvements are made, not through setting one standard for all to follow.
"Yeah, we can get by with trial and error and creating tools". Humans make errors. There is no escaping it. That is why we do that design->test->design process. There is no magic approach that is going to somehow escape the need for test and verification.
It isn't a problem of lack of tools or lack of research. It is a problem of training developers to use those tools, and training them to use them well. The research is there; the ability to create highly threaded programs is there. People have been using them for a LONG time. The only thing that is really, truly lacking is the business motivation to highly thread things. Most applications don't NEED to use every single resource given to them. Computers have gotten so fast that most programs and programmers simply don't care.
Why do you think we have seen such a rise of Tablet PCs and computing? People are buying and happily using machines that are FAR slower than their dedicated PC counterparts. This is because people don't need petaflops of processing power.
I don't want to look through this whole thread for the image, but how close are we to the 90 day mark that the AMD slide said BD would launch?
I am pretty sure right now the consensus is that BD is supposed to launch September 19th. But that depends if AMD can get enough flying monkeys from the zoo for the launch and if Katy Perry is on-board for the theme song.
If I were you, I'd be unloading those 5850s and getting ready for HD 7950s instead. That's far more exciting on AMD's side.

Those 5850s still have value on eBay due to Bitcoin, right?
Maybe I'm misunderstanding you, but if not: every optimizing compiler I know of, and certainly every successful one today, optimizes on intermediate code that abstracts the underlying machine code away. It's much simpler to do escape analysis, dead code elimination, CSE and whatnot on intermediate code (possibly using several different variants) than not.

A unified compiler is a terrible idea only because we have neither the theory nor the experience to create a good one.
Plugging in a different backend to create the machine code, do the register allocation and so on is the simple part there - you wouldn't gain much, or anything, by designing your compiler for only one particular backend in the first place (that's just how the usual compiler architecture works...).
No, I'm saying it doesn't make any sense to do what you're proposing, because optimizing code means, for the largest part, using machine-independent representations. So your claim that "one compiler" is a bad idea doesn't make any sense.

You are still arguing about what is easier to do because of the current limitations of the technology and programmers, rather than what is theoretically possible to do. So yes, you are misunderstanding me.
No, I'm proposing that the "damned chip" can determine the best course of action for whatever you feed it.
Let's start here, then. What is "the best course of action"? If everyone is misunderstanding you, explain your ideas more clearly and fully.
I think what we are doing now is the "best course of action". We're doing the best we can with what we've got, as people have done since the Stone Age. However, that is no reason to assume it can't be done better and that multicore CPU processing can't be completely automated. My only argument is that it is the current limitations of the underlying theories and our lack of experience that prevent us from doing better.
In other words, our ignorance. The first step towards overcoming ignorance is in accepting that you actually are ignorant.
"However, that is no reason to assume it can't be done better and that multicore cpu processing can't be completely automated."

How is multicore CPU processing NOT automated? Computers are completely automated machines; they can't do things that aren't automated.
"My only argument is that it is the current limitations of the underlying theories and our lack of experience that prevent us from doing better."

What are the limitations of our current theories?
Anyone know if these 6 benches are also fake?
Where is dug777? He is like from the future and sh!t. Only he knows how fast BD truly is.