Is there any calculation that current processors cannot calculate ?


sdifox

No Lifer
Sep 30, 2005
99,418
17,569
126
Originally posted by: Kyteland
Originally posted by: sdifox
Originally posted by: dinkumthinkum
Originally posted by: sdifox
P and NP. There are problems that can not be solved efficiently. Those are known as NP (not possible) problems.

Eeek!! Way off dude. Did you ever read those links yourself?

P stands for Polynomial-time. This means there is an algorithm on a Deterministic Turing Machine which worst-case solves the problem in polynomial time, if the problem is in P.

NP stands for Non-deterministic Polynomial time. This means there is an algorithm on a NON-deterministic Turing Machine which worst-case solves the problem in POLYNOMIAL time, if the problem is in NP.

And before you jump in with further misconceptions: a NON-deterministic Turing Machine can -- in parallel -- consider many solutions simultaneously and only output the one which gets the acceptance result. Needless to say, no one has managed to create a "real" non-deterministic Turing Machine. It has NOTHING to do with random numbers.

Another way of characterizing NP is that it is the problems which have polynomial-size PROOFS for a given input that they accept it. This is easy to see: simply generate ALL polynomial-size proofs for a given input and use a non-deterministic Turing Machine to check every one in parallel, in polynomial time.

Generally problems outside of P are considered "intractable", which is just a shorthand way of saying that they may take too much time to be practical. But even problems outside of P may have average-case polynomial-time algorithms which are quite useful: Hindley-Milner type inference and checking is a good example. It is DEXPTIME-complete in its simplest form, but the average case is linear time.

The set of NP-hard problems are those problems for which it is the case that every problem in NP can be reduced to them. The NP-complete problems are NP-hard AND also in NP themselves.

And P, NP are just the first two levels of the polynomial hierarchy, which is all part of PSPACE, which is part of Turing-computable. Then there's a whole arithmetic hierarchy of undecidable problems, which no computer will ever be able to solve.

So yea...
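For what it's worth, the "polynomial-size proof" view of NP above is easy to demonstrate with subset-sum, a classic NP-complete problem. Finding a subset that hits a target may take exponential time in the worst case, but checking a proposed certificate is a fast polynomial-time loop. A minimal sketch (function and variable names are just for illustration):

```python
# Subset-sum: given numbers and a target, does some subset sum to the target?
# Finding such a subset may take exponential time in the worst case, but
# verifying a proposed certificate (a candidate subset) is a polynomial-time
# check -- the "polynomial-size proof" characterization of NP.

def verify_certificate(numbers, target, subset_indices):
    """Polynomial-time check that the given indices form a valid certificate."""
    if len(set(subset_indices)) != len(subset_indices):
        return False  # indices must be distinct
    if any(i < 0 or i >= len(numbers) for i in subset_indices):
        return False  # indices must be in range
    return sum(numbers[i] for i in subset_indices) == target

numbers = [3, 34, 4, 12, 5, 2]
print(verify_certificate(numbers, 9, [2, 4]))  # 4 + 5 == 9 -> True
print(verify_certificate(numbers, 9, [0, 1]))  # 3 + 34 != 9 -> False
```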

lol, he was asking about whether there are problems that current computers can't compute and I pointed him to P and NP. What is the problem? Once he gets that, he'll get what he is asking.

I think he was referring to the bolded part. ;)

ok... so I missed a few words :)
 

PlasmaBomb

Lifer
Nov 19, 2004
11,636
2
81
Originally posted by: silverpig
There are plenty of simulations you can run at very low resolution today that we would love to bump up the resolution on in order to be more accurate but can't because of the lack of processor power.

This - there is a nice simulation of the galaxy done on a supercomputer (I think it was posted here a while ago), which runs pretty nicely for a while... before assploding due to accumulated computational rounding errors and the like.
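The rounding issue is easy to reproduce at home; a toy sketch of how representation error accumulates over many timesteps:

```python
# 0.1 has no exact binary floating-point representation, so each addition
# carries a tiny rounding error; over many "timesteps" the errors accumulate
# into a visible drift, just as they do in long-running simulations.

total = 0.0
for _ in range(1_000_000):
    total += 0.1

print(total == 100000.0)      # False
print(abs(total - 100000.0))  # small but nonzero accumulated drift
```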

Damb... beaten on the astrophysics angle.

Although I will leave you with this - Space, more complicated than you think...
 

Gibsons

Lifer
Aug 14, 2001
12,530
35
91
Originally posted by: EULA
No one has mentioned PI?

I think the OP was talking about things that are do-able in theory, but aren't done currently due to lack of processing power. I don't think there's any amount of processing power that would allow you to completely calculate pi. Or divide by zero for that matter...
 

Nuwave

Member
Jun 30, 2008
118
0
0
Googolplex

Not sure if that really counts, as it is actually just a number. But it can be argued that it takes processing time to 'print' any amount of text to the screen, and therefore printing a googolplex to the screen within a year requires more processing power than I can possibly conceive of.
 

HolyFire

Member
Nov 8, 2008
34
0
66
Here is a nice question with a simple yes-or-no answer:

"Can the white player in chess guarantee a win?"

Solvable in theory, but far beyond the reach of the combined processing power of all the computers which will exist on earth in the foreseeable future.
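A rough back-of-envelope of why, using Shannon's classic estimate of around 10^120 possible chess games; the machine speed below (10^18 positions per second, roughly an exascale machine) is an assumed round number:

```python
# Back-of-envelope scale of brute-forcing chess. Shannon's classic estimate
# puts the number of possible game variations around 10**120; the machine
# speed (10**18 positions/second, roughly exascale) is an assumed round number.

game_tree = 10**120
positions_per_second = 10**18
seconds_per_year = 60 * 60 * 24 * 365

years = game_tree // (positions_per_second * seconds_per_year)
print(f"~10^{len(str(years)) - 1} years to enumerate every variation")
```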
 

Modelworks

Lifer
Feb 22, 2007
16,240
7
76
Originally posted by: Gibsons
Originally posted by: EULA
No one has mentioned PI?

I think the OP was talking about things that are do-able in theory, but aren't done currently due to lack of processing power. I don't think there's any amount of processing power that would allow you to completely calculate pi. Or divide by zero for that matter...

Yeah I am interested in things that would be useful to the masses, like cancer research, not just things that are done for the sake of doing them, like pi.
 

Gannon

Senior member
Jul 29, 2004
527
0
0
Originally posted by: Modelworks
Originally posted by: gevorg
Poincare conjecture

How would this benefit people by solving it ?


I'm really looking for things that would be useful to the masses.

Searching really large datasets(tm) (better search). Language processing: consider that we have enormous amounts of CPU power and still don't have a device that can universally translate, understand, decode and learn even one language automatically, without any kind of training or user intervention, without error. Also consider metarecognition (i.e. picking out objects in an image), which requires enormously more time and resources than a human being spends doing pattern recognition.

Also, artificial intelligence would benefit enormously from more computing power. Right now A.I. is in the dark ages. I want my computer to think for me: imagine not having to visit websites, or have people write sites like anandtech, and instead have a group of AIs or a single AI do that kind of thing for you by merely sifting information on the web, being so humanly creative that you can't distinguish something a person has written from something an AI has written.

Imagine never having to work again because AI computes faster and is smarter than all human beings on earth. Right now computers are better mathematicians in terms of computational speed than all mathematicians combined; now if they could learn to think and do as humans do at that speed, that would obsolete entire populations of experts quite quickly, as human beings become more redundant and useless in the face of their own creations.
 

Vee

Senior member
Jun 18, 2004
689
0
0
Originally posted by: superunknown98
Heh, nobody mentioned the weather? To accurately predict the weather would require a lot of power.

Accurately predicting the weather is not even theoretically possible. It would not fail due to insufficient computing power; it fails because it's actually not possible. The reason is that arbitrarily small differences in input conditions will eventually have very significant consequences.
(Please note that this is a mathematical property of the system! It affects the simulation itself as profoundly as the reality it tries to emulate! Even a change in the floating point precision by a single bit will, after a while, lead to a radically different prediction.)

The only thing that we can do, and ever will be able to do, is to compute an approximation that is going to be increasingly incorrect, as it tries to look farther into the future.
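A toy sketch of that sensitivity, using the chaotic logistic map x → 4x(1−x) as a stand-in for the atmosphere (purely an illustrative assumption):

```python
# Sensitive dependence on initial conditions: two starting values differing
# by one part in a trillion diverge completely under the chaotic logistic
# map x -> 4x(1-x). The same mechanism limits long-range weather forecasts,
# no matter how much precision the simulation carries.

x, y = 0.4, 0.4 + 1e-12
for _ in range(100):
    x = 4.0 * x * (1.0 - x)
    y = 4.0 * y * (1.0 - y)

print(abs(x - y))  # the 1e-12 gap has grown by many orders of magnitude
```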


Originally posted by: EULA
No one has mentioned PI?


The exact value of PI is known. We also know that the exact value cannot be expressed with a rational number. That is hardly anything special, since it is a property PI shares with most numbers. The exact value of PI is instead expressed with the Greek letter PI.
Using our knowledge of the exact value of PI, we can express a rational number arbitrarily close to PI. But, of course, never PI itself.
There is no task to compute.

In terms of computing a decimal approximation with an arbitrary number of correct decimals, we can certainly do that. No problem.
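As a sketch of that, Machin's formula pi/4 = 4·arctan(1/5) − arctan(1/239) yields as many correct decimals as you like using only integer arithmetic:

```python
# Computing pi to any desired number of decimals with plain integer
# arithmetic, via Machin's formula: pi/4 = 4*arctan(1/5) - arctan(1/239).

def arctan_inv(x, one):
    """arctan(1/x) scaled by the integer 'one', via the Taylor series."""
    total = term = one // x
    x2 = x * x
    n, sign = 3, -1
    while term:
        term //= x2
        total += sign * (term // n)
        n += 2
        sign = -sign
    return total

def pi_digits(digits):
    """Integer floor(pi * 10**digits), i.e. pi to 'digits' decimals."""
    guard = 10  # extra digits to absorb truncation error
    one = 10 ** (digits + guard)
    pi = 4 * (4 * arctan_inv(5, one) - arctan_inv(239, one))
    return pi // 10 ** guard

print(pi_digits(30))  # 3141592653589793238462643383279
```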
 

yh125d

Diamond Member
Dec 23, 2006
6,886
0
76
Weather prediction is very much theoretically possible. It is drastically limited by processing power, yes, but far more so by our inability to collect the data required to make a full, accurate prediction.

If you were unlimited by processing power and had the ability to collect every possible variable, the weather could be accurately predicted. However, this would entail taking into account such things as a supernova hundreds of parsecs away, as over the lifetime of the universe such an event can and would affect the weather (assuming Earth was still in existence by the time such an event began to influence our corner of the universe).


Also:

We already know the exact value of pi as has been stated. It's 22/7. Pretty simple really, the only tricky part is that there is no expressible rational number for pi in mathematics as we know it.
 

CycloWizard

Lifer
Sep 10, 2001
12,348
1
81
Originally posted by: yh125d
Weather prediction is very much theoretically possible. It is drastically limited by processing power, yes, but far more so by our inability to collect the data required to make a full, accurate prediction.

If you were unlimited by processing power and had the ability to collect every possible variable, the weather could be accurately predicted. However, this would entail taking into account such things as a supernova hundreds of parsecs away, as over the lifetime of the universe such an event can and would affect the weather (assuming Earth was still in existence by the time such an event began to influence our corner of the universe).
More or less right. However, Vee is also correct because measuring all of the initial conditions required to solve the system deterministically would disturb the system due to the uncertainty principle.
Also:

We already know the exact value of pi as has been stated. It's 22/7. Pretty simple really, the only tricky part is that there is no expressible rational number for pi in mathematics as we know it.
Wrong. pi=3.14159265... 22/7=3.1428571...
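Easy to check (22/7 is already off in the third decimal place):

```python
# 22/7 is a venerable approximation of pi, not its value:
import math

print(math.pi)                # 3.141592653589793
print(22 / 7)                 # 3.142857142857143
print(abs(math.pi - 22 / 7))  # ~0.00126 -- wrong from the third decimal on
```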
 

CP5670

Diamond Member
Jun 24, 2004
5,660
762
126
More or less right. However, Vee is also correct because measuring all of the initial conditions required to solve the system deterministically would disturb the system due to the uncertainty principle.

I don't think that is what she meant. We can't ever get a completely exact measurement, and the system itself is known to be unstable (in the sense that its behavior might not depend continuously on the initial data) regardless of how it's observed.
 

CycloWizard

Lifer
Sep 10, 2001
12,348
1
81
Originally posted by: CP5670
I don't think that is what she meant. We can't ever get a completely exact measurement, and the system itself is known to be unstable (in the sense that its behavior might not depend continuously on the initial data) regardless of how it's observed.
I don't think that's exactly what Vee meant either, but she got the principle right, even if only by accident. :p Even if a system is unstable, we can still accurately predict future behavior if we know the initial conditions to a sufficient degree of accuracy. One can even solve some unstable systems analytically, though generally only if they are linear.
 

CP5670

Diamond Member
Jun 24, 2004
5,660
762
126
Yeah, try and solve this symbolically. :D

It really depends on how and in what sense the instability occurs. Its effects may not be prominent at small time scales, so you can predict it accurately as long as you don't go too far into the future, or it could be that the instability is localized and the system is still stable in a weaker, averaged sense.
 

CycloWizard

Lifer
Sep 10, 2001
12,348
1
81
Originally posted by: CP5670
Yeah, try and solve this symbolically. :D

It really depends on how and in what sense the instability occurs. Its effects may not be prominent at small time scales, so you can predict it accurately as long as you don't go too far into the future, or it could be that the instability is localized and the system is still stable in a weaker, averaged sense.
I have solved the Navier-Stokes equations many, many times, including in cases with instabilities. They may be solved symbolically in cases where the equations are linear (i.e. when inertial contributions may be neglected) and numerically in cases where they are not. All the instability does is increase the number of solutions possible by adding in non-uniqueness. All of the cases may still be solved in principle.

It has also been shown that the solution to the NS equations exists, which is a fundamental problem in and of itself. Now if you can find a way to find the general analytical solution, you will get about a million dollars. It has already been done for the Navier equations of solid mechanics (many years ago, in fact), though the solid mechanics equations don't have the inertial terms. That's one of the reasons fluid mechanics is much more complex than solid mechanics and why I primarily stick to solid mechanics despite my many years of training as a chemical engineer. :p

Of course, the NS equations are simply a continuum approximation to the true phenomena occurring at smaller time and length scales. To get a more accurate model, we would need direct numerical simulation (DNS) of each molecule in the system. This generally requires a time resolution on the order of picoseconds for numeric stability, so it's pretty tricky to use it for a long time scale problem like, say, the weather. :p But for even better accuracy, we would need to go to atomistic scaling, which current computational limits keep at only a few hundred atoms. Even this is not entirely accurate because it generally does not account for certain phenomena. In the end, they are still all approximations of reality. The question, then, is: what is considered an "accurate" prediction?
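The timestep/stability trade-off mentioned above can be illustrated with a toy explicit finite-difference scheme for 1D diffusion (a deliberately simplified stand-in, not an NS or DNS solver): the scheme is stable only when dt ≤ dx²/(2D), and a step just past the limit blows up.

```python
# Toy illustration of a numerical stability limit: an explicit finite
# difference scheme for 1D diffusion u_t = D*u_xx is stable only when
# dt <= dx**2 / (2*D). Just past that limit, the solution explodes.

def max_amplitude(dt, steps=200, n=50, D=1.0):
    dx = 1.0 / n
    u = [0.0] * n
    u[n // 2] = 1.0  # initial spike; boundary values stay fixed at 0
    for _ in range(steps):
        nxt = u[:]
        for i in range(1, n - 1):
            nxt[i] = u[i] + D * dt / dx**2 * (u[i-1] - 2*u[i] + u[i+1])
        u = nxt
    return max(abs(v) for v in u)

dt_limit = (1.0 / 50) ** 2 / 2.0      # dx**2 / (2*D)
print(max_amplitude(0.8 * dt_limit))  # stable: spike spreads and decays
print(max_amplitude(1.2 * dt_limit))  # unstable: blows up far past 1e6
```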
 

CP5670

Diamond Member
Jun 24, 2004
5,660
762
126
All the instability does is increase the number of solutions possible by adding in non-uniqueness. All of the cases may still be solved in principle.

It can be unstable even if you have uniqueness, but in any case you would have to figure out which branch of the solution is the physically correct one. I don't know, does that cause trouble in practice?

It has also been shown that the solution to the NS equations exists, which is a fundamental problem in and of itself.

This is not actually known in the general case. It's the problem with the million dollar bounty you mentioned.

In the end, they are still all approximations of reality. The question, then, is: what is considered an "accurate" prediction?

I guess the real test is how well it works in practice. Whatever they do with actual weather forecasts these days, there certainly seems to be room for improvement. :D
 

CycloWizard

Lifer
Sep 10, 2001
12,348
1
81
Originally posted by: CP5670
It can be unstable even if you have uniqueness, but in any case you would have to figure out which branch of the solution is the physically correct one. I don't know, does that cause trouble in practice?
In most cases, only one branch gives physically meaningful results, so the others may be excluded. For example, a project I'm working on now requires that I compute the static geometry of certain objects. Some of the geometries require the solution of cubic equations, which give one real solution and a pair of complex conjugate solutions for something like the radius. Generally, I chuck the imaginary ones right away because obviously one can't really have an imaginary radius. However, plugging in physically realistic values for the one I solved last night causes the "real" solution to return imaginary results. Thus, I have to stay up late tonight and figure out which of the apparent complex solutions gives a real answer. Good times. These aren't exactly instability problems, but the idea is pretty much the same. The math tells you the answers, but you need physical insight to determine which one is useful and/or real.
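The root-filtering step can be sketched generically with cmath (a hypothetical r³ = c example, not the actual geometry problem from the post): the cubic has one real root and a complex-conjugate pair, and a tolerance on the imaginary part picks out the physical radius.

```python
# Filtering physically meaningful roots (a hypothetical r**3 = c example,
# not the actual geometry from the post): a real cubic can yield one real
# root plus a complex-conjugate pair, and a small tolerance on the
# imaginary part picks out the usable "radius".

import cmath

def cube_roots(c):
    """All three complex cube roots of c."""
    r = abs(c) ** (1 / 3)
    theta = cmath.phase(complex(c))
    return [r * cmath.exp(1j * (theta + 2 * cmath.pi * k) / 3) for k in range(3)]

def real_roots(roots, tol=1e-9):
    """Keep only roots whose imaginary part is negligible."""
    return [z.real for z in roots if abs(z.imag) < tol]

print(real_roots(cube_roots(8.0)))  # ~[2.0]: the conjugate pair is discarded
```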
This is not actually known in the general case. It's the problem with the million dollar bounty you mentioned.
Ah, you're right. I was thinking of the existence of a weak solution, which has already been shown. I've been working on numerical solutions for too long, apparently. :eek:
 

CP5670

Diamond Member
Jun 24, 2004
5,660
762
126
In most cases, only one branch gives physically meaningful results, so the others may be excluded. For example, a project I'm working on now requires that I compute the static geometry of certain objects. Some of the geometries require the solution of cubic equations, which give one real solution and a pair of complex conjugate solutions for something like the radius. Generally, I chuck the imaginary ones right away because obviously one can't really have an imaginary radius. However, plugging in physically realistic values for the one I solved last night causes the "real" solution to return imaginary results. Thus, I have to stay up late tonight and figure out which of the apparent complex solutions gives a real answer.

That seems simple enough. What happens if you don't know all the branches though? I don't work that much with PDEs but it's common with unstable numerical problems in general to have an infinite number of branches and no control over which one is being computed. (so you want to know something about how much they can deviate from each other)

Ah, you're right. I was thinking of the existence of a weak solution, which has already been shown. I've been working on numerical solutions for too long, apparently.

Yeah, you can always get existence if you keep weakening the concept of a solution by enough. :p I remember for the standard linear classes of equations, you could show weak existence in a few lines but making that into a classical solution took much more work.
 

CycloWizard

Lifer
Sep 10, 2001
12,348
1
81
Originally posted by: CP5670
That seems simple enough. What happens if you don't know all the branches though? I don't work that much with PDEs but it's common with unstable numerical problems in general to have an infinite number of branches and no control over which one is being computed. (so you want to know something about how much they can deviate from each other)
If you're having this problem, you're probably doing something wrong. :p There is almost always some metric (e.g. energy or something analogous) that can be computed in order to determine the "most likely" branch for your system. By turning the problem into a sequential optimization problem, an entirely new arsenal is at your disposal for following the "most correct" branch of the problem, though the computational cost is pretty high.
Yeah, you can always get existence if you keep weakening the concept of a solution by enough. :p I remember for the standard linear classes of equations, you could show weak existence in a few lines but making that into a classical solution took much more work.
If symbolic solutions were easily obtainable for most problems, they would all already be solved and we'd be out on the street. Very few people understand the concept of weak solutions, even among those with PhDs in engineering/science. Job security FTW.
 

CP5670

Diamond Member
Jun 24, 2004
5,660
762
126
Originally posted by: CycloWizard
If you're having this problem, you're probably doing something wrong. :p There is almost always some metric (e.g. energy or something analogous) that can be computed in order to determine the "most likely" branch for your system. By turning the problem into a sequential optimization problem, an entirely new arsenal is at your disposal for following the "most correct" branch of the problem, though the computational cost is pretty high.

You need to have a way of actually finding those branches though. That is sometimes but not always possible. I'm looking into this signal prediction scheme at the moment, which only converges on some (complicated and hard to characterize) set of input data and blows up rapidly otherwise. The results are unique but very unstable and I can easily make examples where it gives bogus results, although it seems to work well on some types of real world data.

If symbolic solutions were easily obtainable for most problems, they would all already be solved and we'd be out on the street. Very few people understand the concept of weak solutions, even among those with PhDs in engineering/science. Job security FTW.

By classical solution I just meant something with enough regularity that it would "make sense" in the PDE, but you're quite right. It's a very simple idea too, basically just integration by parts in the linear case. :p
 

CycloWizard

Lifer
Sep 10, 2001
12,348
1
81
Originally posted by: CP5670
You need to have a way of actually finding those branches though. That is sometimes but not always possible. I'm looking into this signal prediction scheme at the moment, which only converges on some (complicated and hard to characterize) set of input data and blows up rapidly otherwise. The results are unique but very unstable and I can easily make examples where it gives bogus results, although it seems to work well on some types of real world data.
Just out of curiosity, what kind of signal are you trying to predict? It sounds like you're trying to feed-forward a nonlinear system, and I can see where that would be tricky (at best).
 

CP5670

Diamond Member
Jun 24, 2004
5,660
762
126
It can be anything bandlimited (possibly in a distributional sense), although I'm mainly considering speech audio signals as part of a compression method. I basically want to make the whole thing stable by finding additional conditions on the signal and/or modifying the algorithm, and am also trying to determine what types of sampling points work in it, which is a tricky issue in general. That is essentially my thesis problem actually. :p