Probability Question for Y'All...

Woodchuck2000

Golden Member
Jan 20, 2002
1,632
1
0
Righty then... I was having an argument with some of my mathsy friends down the pub about the following scenario.

Consider a particle on a discrete 1-D axis. It moves randomly, one step at a time, starting at 0. It has a 50/50 chance of moving left/right.

After a given time, what's the expected position of the particle? (I'm interested in your answers before clouding your thoughts with any argument.)
 

f95toli

Golden Member
Nov 21, 2002
1,547
0
0
There is no "expected position". The position follows a pretty complicated probability distribution (Bernoulli, if I remember correctly). As the number of steps goes to infinity the distribution becomes Gaussian.

But yes, the most likely position is zero.

 

djhuber82

Member
May 22, 2004
51
0
0
As the particle's position is discrete, its distribution is NOT Gaussian. This is a binomial distribution. The probability that the particle is at position x=k after n moves is the same as the probability of getting (n+k)/2 heads after n coin flips. As other posters have mentioned, the expected position is zero. The variance is proportional to n.
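That binomial description can be checked directly. Here is a quick Python sketch (the function name `position_pmf` is my own) that builds the exact distribution of the particle's position after n steps:

```python
from math import comb

def position_pmf(n):
    """Exact distribution of a +/-1 random walk's position after n steps.

    Reaching position k requires (n + k)/2 right-steps, so
    P(X = k) = C(n, (n + k)/2) / 2^n when n + k is even, and 0 otherwise.
    """
    pmf = {}
    for k in range(-n, n + 1):
        if (n + k) % 2 == 0:
            pmf[k] = comb(n, (n + k) // 2) / 2 ** n
        else:
            pmf[k] = 0.0  # wrong parity: unreachable after n steps
    return pmf

pmf = position_pmf(4)
print(pmf[0])                                  # C(4, 2) / 16 = 0.375
print(pmf[1])                                  # 0.0 (odd position, even n)
print(sum(k * p for k, p in pmf.items()))      # mean: 0.0
print(sum(k * k * p for k, p in pmf.items()))  # variance: 4.0 (= n)
```

Note the mean is 0 and the variance comes out equal to n, matching the claim above.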
 

f95toli

Golden Member
Nov 21, 2002
1,547
0
0
As n->inf the binomial distribution converges to a Gaussian distribution (the fact that it is discrete is irrelevant).

Btw, the "expectation value" is zero, but there is no single "expected position" since the position follows a distribution (the expectation value is just the first moment; it happens to coincide with the most likely position here, but the particle is still very likely to end up somewhere other than zero).
 

CycloWizard

Lifer
Sep 10, 2001
12,348
1
81
Originally posted by: f95toli
As n->inf the binomial distribution converges to a Gaussian distribution (the fact that it is discrete is irrelevant).

Btw, the "expectation value" is zero, but there is no single "expected position" since the position follows a distribution (the expectation value is just the first moment; it happens to coincide with the most likely position here, but the particle is still very likely to end up somewhere other than zero).
:thumbsup:
 

Mday

Lifer
Oct 14, 1999
18,646
1
76
Originally posted by: f95toli
As n->inf the binomial distribution converges to a Gaussian distribution (the fact that it is discrete is irrelevant).

Btw, the "expectation value" is zero, but there is no single "expected position" since the position follows a distribution (the expectation value is just the first moment; it happens to coincide with the most likely position here, but the particle is still very likely to end up somewhere other than zero).

This is the most correct answer. But I'd still say 0, since I don't think the original poster meant "expected" math-literally.
 

DrPizza

Administrator Elite Member Goat Whisperer
Mar 5, 2001
49,606
166
111
www.slatebrookfarm.com
What you are describing is a "random walk." You can google for more information. There are some interesting mathematics behind it, especially in 2 and 3 dimensions. (or higher dimensions)

random walk
A process in which the position of a particle changes by discrete steps of fixed length, and the direction of each step is chosen randomly. Random walks have interesting mathematical properties that vary greatly depending on the number of dimensions in which the walk takes place and whether it is confined to a lattice. For a random walk in one dimension there are only two directions to choose from. Imagine a drunken person wandering on the number line who starts at 0, and then moves left or right (+/-1) with probability 1/2. The probability that the walker will eventually return to his starting point is 1; in other words, it is certain to happen. The same is true for a random walk in the plane, moving on the integer lattice points, with probability 1/4 in each of the coordinate directions: the probability of ending up back at the starting point is 1. However, the situation changes in three dimensions. Suppose a drunken fly moves randomly from one point to another in a three-dimensional lattice with a probability of 1 in 6 of arriving at any of the six adjacent lattice points on each move. No matter how long the fly roams, it has only a 0.34054... probability of ever getting back to where it started. Probabilists say that random walks on the line and plane are recurrent, whereas random walks in three dimensions or more are transient. Effectively, this is because there is so much more "space" in three or more dimensions. The numbers giving the probability of eventually returning to the starting point are known as random walk constants. The random thermal perturbations in a liquid are responsible for a random walk phenomenon known as Brownian motion, and the collisions of molecules in a gas are a random walk responsible for diffusion.
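The recurrence/transience claims in that definition can be sanity-checked with a Monte Carlo sketch in Python (function names, trial counts, and the step cap are my own choices; truncating each walk at a finite number of steps slightly undercounts returns):

```python
import random

def returns_to_origin(dim, max_steps, rng):
    """Simulate one lattice walk; True if it ever revisits the origin."""
    pos = [0] * dim
    for _ in range(max_steps):
        axis = rng.randrange(dim)          # pick a coordinate direction
        pos[axis] += rng.choice((-1, 1))   # step +/-1 along it
        if all(c == 0 for c in pos):
            return True
    return False

def return_rate(dim, trials=5000, max_steps=500, seed=0):
    """Fraction of walks that return to the origin within max_steps."""
    rng = random.Random(seed)
    hits = sum(returns_to_origin(dim, max_steps, rng) for _ in range(trials))
    return hits / trials

print(return_rate(1))  # near 1: the 1-D walk is recurrent
print(return_rate(3))  # near 0.34: the 3-D walk is transient
```

The 1-D estimate creeps toward 1 as you raise the step cap, while the 3-D estimate stays pinned near the 0.34054... constant quoted above.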
 

Woodchuck2000

Golden Member
Jan 20, 2002
1,632
1
0
DrPizza - that's what I'm looking for. I wasn't sure of the technical name for the phenomenon. I've heard some people suggest that even in one dimension, the expected value is not 0.

The argument is as follows. After any number of moves, you could define the position of the particle as a new 'zero point' and argue that the particle ought to tend to that point after n moves. Given the equal probability of a left/right move, at <any> time, the particle's probability function will suggest that its current position is probabilistically the most likely point for it to be at in n moves' time. Because of this, the original zero point very quickly becomes quite unlikely!

Thoughts?
 

eLiu

Diamond Member
Jun 4, 2001
6,407
1
0
This 'random walker' will converge to a Gaussian distribution as the number of walks approaches infinity...so we can say with confidence that the expectation is 0 & variance is 1.

This simple example of a random walk can actually be used to derive the diffusion equation in 1D. Note that the diffusion equation has a similarity solution in the form of the gaussian 'bell curve'.

In fact you can consider random walks in N dimensions to find diffusion in N-dimensions and it leads you to the formulation/proof of the famous "Central Limit Theorem" of probability.

Check out this link for a very mathematical treatment of the 3 sentences preceding this one: http://www-math.mit.edu/18.366/lec/
 

eLiu

Diamond Member
Jun 4, 2001
6,407
1
0
Originally posted by: Woodchuck2000
DrPizza - that's what I'm looking for. I wasn't sure of the technical name for the phenomenon. I've heard some people suggest that even in one dimension, the expected value is not 0.

The argument is as follows. After any number of moves, you could define the position of the particle as a new 'zero point' and argue that the particle ought to tend to that point after n moves. Given the equal probability of a left/right move, at <any> time, the particle's probability function will suggest that its current position is probabilistically the most likely point for it to be at in n moves' time. Because of this, the original zero point very quickly becomes quite unlikely!

Thoughts?

Yes, that's fine. But at time N, you expect it to be at 0, so you would just be redefining 0 as its current position. And the particle's 'probability function' will suggest that its most likely position is 0.

Think about it...if my choices are left and right (heads or tails), then the probability of me taking 100 steps left is *very* low. It's much more likely that I will take 50 steps left and 50 steps right (regardless of order) and end up back at the origin. Blah blah blah at odd # of steps--think in terms of asymptotics (sp)...the "expectation" characterizes the behavior we expect to see as time->inf. That is, locally after 5 throws, maybe you did get 5 heads in a row...but who cares? After 100000000 throws, you should have very near a 1:1 heads:tails ratio.

Edit: Now that I think of it, this is a cute little program that illustrates the random walk of a molecule in 2D and also provides you with a (Gaussian-looking) histogram. Kind of expands your thinking a little bit.
 

CycloWizard

Lifer
Sep 10, 2001
12,348
1
81
Just because I'm a huge dork, I wrote a random walk program in MATLAB. After 500 iterations, it ended with a location of +162. Then MATLAB crashed, because it's the devil, so I couldn't plot the probability or anything. Maybe later.
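For anyone without MATLAB, a rough Python equivalent of that experiment might look like this (function name, seed, and trial counts are my own choices):

```python
import random

def random_walk(steps, rng=random):
    """One 1-D walk: running position over `steps` fair +/-1 moves."""
    pos = 0
    path = [0]
    for _ in range(steps):
        pos += rng.choice((-1, 1))
        path.append(pos)
    return path

rng = random.Random(42)
finals = [random_walk(500, rng)[-1] for _ in range(1000)]
mean = sum(finals) / len(finals)
print(mean)  # close to 0 across many trials, even though any single
             # 500-step walk (like the +162 above) can stray far out
```

A single run ending at +162 is unusual but not shocking: the standard deviation after 500 steps is sqrt(500), roughly 22, and individual walks routinely wander several standard deviations out.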
 

Woodchuck2000

Golden Member
Jan 20, 2002
1,632
1
0
Originally posted by: eLiu
Originally posted by: Woodchuck2000
DrPizza - that's what I'm looking for. I wasn't sure of the technical name for the phenomenon. I've heard some people suggest that even in one dimension, the expected value is not 0.

The argument is as follows. After any number of moves, you could define the position of the particle as a new 'zero point' and argue that the particle ought to tend to that point after n moves. Given the equal probability of a left/right move, at <any> time, the particle's probability function will suggest that its current position is probabilistically the most likely point for it to be at in n moves' time. Because of this, the original zero point very quickly becomes quite unlikely!

Thoughts?

Yes, that's fine. But at time N, you expect it to be at 0, so you would just be redefining 0 as its current position. And the particle's 'probability function' will suggest that its most likely position is 0.

Think about it...if my choices are left and right (heads or tails), then the probability of me taking 100 steps left is *very* low. It's much more likely that I will take 50 steps left and 50 steps right (regardless of order) and end up back at the origin. Blah blah blah at odd # of steps--think in terms of asymptotics (sp)...the "expectation" characterizes the behavior we expect to see as time->inf. That is, locally after 5 throws, maybe you did get 5 heads in a row...but who cares? After 100000000 throws, you should have very near a 1:1 heads:tails ratio.

Edit: Now that I think of it, this is a cute little program that illustrates the random walk of a molecule in 2D and also provides you with a (Gaussian-looking) histogram. Kind of expands your thinking a little bit.
Hmm, maybe I didn't phrase that well...

After defining an original zero-point, we set the particle moving, hoping to plot a nice curve detailing the likelihood of the particle being at any given point on our axis.

Now at any given time, the most likely position after a further time interval is the current position, wherever that may be, not the original zero-point (Agreed?).

We would therefore <expect> the particle to drift off and end up nowhere near the original 0, as in CycloWizard's Matlab program.

Would the overall frequency distribution still be Gaussian? I'm fairly sure that the discreteness of this example shouldn't matter...

Good answers though guys, will read your various sources and get back to you all!


 

TuxDave

Lifer
Oct 8, 2002
10,572
3
71
Originally posted by: Woodchuck2000

Now at any given time, the most likely position after a further time interval is the current position, wherever that may be, not the original zero-point (Agreed?).

That's due to the nature of conditional probability. The probability of it ending at position +2 after two steps isn't the same as the probability of it ending in position +2 given that after 1 step it is at position +1.


We would therefore <expect> the particle to drift off and end up nowhere near the original 0, as in CycloWizard's Matlab program.

Well, if you wanted to compare the probability of the particle being at point 0 vs 'not being at point 0', I would agree that the probability of 'not being at point 0' is greater. But the real question is: is the probability of being at point 0 greater than the probability of being at point +1? How about point 0 vs point +10? etc....
 

TuxDave

Lifer
Oct 8, 2002
10,572
3
71
Originally posted by: DrPizza
What you are describing is a "random walk." You can google for more information. There are some interesting mathematics behind it, especially in 2 and 3 dimensions. (or higher dimensions)

random walk
A process in which the position of a particle changes by discrete steps of fixed length, and the direction of each step is chosen randomly. Random walks have interesting mathematical properties that vary greatly depending on the number of dimensions in which the walk takes place and whether it is confined to a lattice. For a random walk in one dimension there are only two directions to choose from. Imagine a drunken person wandering on the number line who starts at 0, and then moves left or right (+/-1) with probability 1/2. The probability that the walker will eventually return to his starting point is 1; in other words, it is certain to happen. The same is true for a random walk in the plane, moving on the integer lattice points, with probability 1/4 in each of the coordinate directions: the probability of ending up back at the starting point is 1. However, the situation changes in three dimensions. Suppose a drunken fly moves randomly from one point to another in a three-dimensional lattice with a probability of 1 in 6 of arriving at any of the six adjacent lattice points on each move. No matter how long the fly roams, it has only a 0.34054... probability of ever getting back to where it started. Probabilists say that random walks on the line and plane are recurrent, whereas random walks in three dimensions or more are transient. Effectively, this is because there is so much more "space" in three or more dimensions. The numbers giving the probability of eventually returning to the starting point are known as random walk constants. The random thermal perturbations in a liquid are responsible for a random walk phenomenon known as Brownian motion, and the collisions of molecules in a gas are a random walk responsible for diffusion.

Ouch... that 3D case makes my brain hurt. I can't reason it out. I guess the reason you can't simplify to a 1D case is that you need all the 1D motions to end up at 0 at the same time. Each direction is bound to cross its own 1D zero point, but not necessarily at the same time as the others.
 

eLiu

Diamond Member
Jun 4, 2001
6,407
1
0
After defining an original zero-point, we set the particle moving, hoping to plot a nice curve detailing the likelihood of the particle being at any given point on our axis.

Now at any given time, the most likely position after a further time interval is the current position, wherever that may be, not the original zero-point (Agreed?).

We would therefore <expect> the particle to drift off and end up nowhere near the original 0, as in CycloWizard's Matlab program.

Would the overall frequency distribution still be Gaussian? I'm fairly sure that the discreteness of this example shouldn't matter...

Good answers though guys, will read your various sources and get back to you all!

Well that's an entirely different situation...look up "conditional probability." What you're doing now is conditioning on some event, which changes the probabilistic universe.

I doubt that makes much sense, so here's an example:

I throw 2 fair coins independently. If they both show heads, I win; else I lose.
Now straight up, the probability of me winning is 1/4 (1/2 * 1/2).
But now I ask...what is the probability of winning *given* that the first coin landed heads up? It's 1/2.

By knowing something about the first coin, it completely changes the probability of me winning.

In the same way, by knowing something about how the first 500 moves turned out, that tells me about where I can end up in the next 500 moves. For example, if I start at 0 and end up at +100 after 500 moves, then I know it's impossible to reach -500 after the next 500 moves.

So when you try to "reset" the situation in the scenario you describe, you are trying to compare apples & oranges.
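That two-coin example can be verified by brute-force enumeration; a minimal Python sketch:

```python
from itertools import product

# The four equally likely outcomes of two independent fair coins.
outcomes = list(product("HT", repeat=2))

# Unconditional probability of winning (both heads).
wins = [o for o in outcomes if o == ("H", "H")]
print(len(wins) / len(outcomes))  # 0.25

# Condition on the first coin landing heads: shrink the universe first,
# then count wins inside the reduced universe.
first_heads = [o for o in outcomes if o[0] == "H"]
wins_given = [o for o in first_heads if o == ("H", "H")]
print(len(wins_given) / len(first_heads))  # 0.5
```

The same "shrink the universe" step is exactly what redefining the zero point after 500 moves does to the walk.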

Edit: TuxDave, isn't it funny how a lot of mathematical concepts/theorems/etc make perfect sense in 1 & 2D, but as soon as you get to 3D and higher, everything gets crazy? heh it seems like for many things, the jump from 1D to 2D isn't a big deal...but from 2D to 3D can be weird. But from 3D and onward, it's all the same.
 

djhuber82

Member
May 22, 2004
51
0
0
Originally posted by: f95toli
As n->inf the binomial distribution converges to a Gaussian distribution (the fact that it is discrete is irrelevant).

With appropriate scaling of the x-axis, this does "converge to a normal distribution" in some sense. But the fact that the position is discrete means that it cannot possibly BE a normal distribution. The position is always an integer, so the probability of finding the particle between 0.1 and 0.9 is ZERO. If the position followed a normal distribution, then this probability would be nonzero.

Originally posted by: eLiu
This 'random walker' will converge to a Gaussian distribution as the number of walks approaches infinity...so we can say with confidence that the expectation is 0 & variance is 1.

You seem to be forgetting about the scaling of the x-axis I mentioned above. The variance is proportional to the number of steps taken. Think about it: if you release the particle at x=0 and let it take 100 steps you're going to find it a lot farther away from the origin than if you only let it take 10 steps.
 

eLiu

Diamond Member
Jun 4, 2001
6,407
1
0
Originally posted by: djhuber82
Originally posted by: f95toli
As n->inf the binomial distribution converges to a Gaussian distribution (the fact that it is discrete is irrelevant).

With appropriate scaling of the x-axis, this does "converge to a normal distribution" in some sense. But the fact that the position is discrete means that it cannot possibly BE a normal distribution. The position is always an integer, so the probability of finding the particle between 0.1 and 0.9 is ZERO. If the position followed a normal distribution, then this probability would be nonzero.

Originally posted by: eLiu
This 'random walker' will converge to a Gaussian distribution as the number of walks approaches infinity...so we can say with confidence that the expectation is 0 & variance is 1.

You seem to be forgetting about the scaling of the x-axis I mentioned above. The variance is proportional to the number of steps taken. Think about it: if you release the particle at x=0 and let it take 100 steps you're going to find it a lot farther away from the origin than if you only let it take 10 steps.

Then let it take a 'sufficiently large' number of 'sufficiently small' steps. Yes it's discrete, but we're speaking asymptotically (sp again). Besides, no one is out to prove any theorems in this thread...if you want a formal-ish proof, refer to my first link.

And 10 steps or 100 steps, we still expect it to be back at 0. Trial per trial, the 100 step walker can get out much farther than 10. But across a large number of trials, it'd even out and you make a nice histogram that would have a bell-curve-esque look to it.
 

djhuber82

Member
May 22, 2004
51
0
0
My point was about the VARIANCE, not the expected value. The expected value is zero and independent of the number of trials. So yes, both the 100 step trials and the 10 step trials will produce bell-like curves centered at zero. But the 100 step bell will be wider, flatter, and lower (since the area under it must remain equal to 1) than the 10 step bell. In the limit that the particle takes an infinite number of steps, its pdf flattens out so much that it is essentially zero everywhere. At this point the variance is infinite and you have an equal chance of finding the particle anywhere. While you could make an argument that the expected value is still zero for the infinite-step case, expected value means very little when the variance is infinite.
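The mean-stays-0, variance-grows-like-n behaviour can be computed exactly from the binomial pmf; a short Python sketch (the function name is my own):

```python
from math import comb

def walk_stats(n):
    """Exact mean and variance of a +/-1 walk after n steps, via the binomial pmf."""
    mean = second_moment = 0.0
    for heads in range(n + 1):
        k = 2 * heads - n                # position after `heads` right-steps
        p = comb(n, heads) / 2 ** n      # binomial probability of that many heads
        mean += k * p
        second_moment += k * k * p
    return mean, second_moment - mean ** 2

for n in (10, 100, 1000):
    m, v = walk_stats(n)
    print(n, m, v)  # mean stays 0; variance equals n, so the bell keeps widening
```

The variance comes out exactly n for every step count, which is the widening-and-flattening of the bell described above.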
 

eLiu

Diamond Member
Jun 4, 2001
6,407
1
0
Originally posted by: djhuber82
My point was about the VARIANCE, not the expected value. The expected value is zero and independent of the number of trials. So yes, both the 100 step trials and the 10 step trials will produce bell-like curves centered at zero. But the 100 step bell will be wider, flatter, and lower (since the area under it must remain equal to 1) than the 10 step bell. In the limit that the particle takes an infinite number of steps, its pdf flattens out so much that it is essentially zero everywhere. At this point the variance is infinite and you have an equal chance of finding the particle anywhere. While you could make an argument that the expected value is still zero for the infinite-step case, expected value means very little when the variance is infinite.

Shoot...I misread your post...i.e. missed the word 'variance.' Sorry djhuber82, you're definitely correct here.
 

Shalmanese

Platinum Member
Sep 29, 2000
2,157
0
0
You're all wrong: at a given time n, the probability that the particle will lie at 0 is maximal provided n is even. If n is odd, then the probability it is at 0 is 0.
 

TuxDave

Lifer
Oct 8, 2002
10,572
3
71
Originally posted by: Shalmanese
You're all wrong: at a given time n, the probability that the particle will lie at 0 is maximal provided n is even. If n is odd, then the probability it is at 0 is 0.

touche....
 

djhuber82

Member
May 22, 2004
51
0
0
Yes, this is one of the ways the binomial distribution differs from a normal distribution.

Originally posted by: djhuber82
As the particle's position is discrete, its distribution is NOT Gaussian. This is a binomial distribution. The probability that the particle is at position x=k after n moves is the same as the probability of getting (n+k)/2 heads after n coin flips.

So obviously (n+k) must be even.