Thanks for any advice. I'd just like to know where I'm going wrong with this.
This is from an exercise in CLRS that I've been mulling over for the last week, but I haven't really figured out where I'm going wrong.
Question: Use indicator random variables to compute the expected value of the sum of n dice.
Before even starting on the problem, we know that the expected value of the sum of n dice is simply n times the expected value (mean) of one die roll, which is 1*(1/6) + 2*(1/6) + 3*(1/6) + 4*(1/6) + 5*(1/6) + 6*(1/6) = 21/6 = 3.5.
Therefore, the expected value of the sum of n dice would be 3.5n.
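(Just to double-check that figure before going further, here's a rough Python simulation; the function name and trial count are my own arbitrary choices for a quick sketch:)

import random

def average_sum_of_dice(n, trials=100_000):
    # Empirical mean of the sum of n fair six-sided dice.
    total = 0
    for _ in range(trials):
        total += sum(random.randint(1, 6) for _ in range(n))
    return total / trials

print(average_sum_of_dice(4))  # comes out close to 3.5 * 4 = 14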
This makes perfect sense, but I can't seem to figure out how to use indicator random variables to do the same problem. Going by the text, this is what I came up with:
Given a sample space S and event A, an indicator random variable is defined as:
I{A} = 1 if A occurs, 0 if A does not occur
Let X be the outcome of a single die roll.
Let Xi be the indicator for the event that the die roll comes up i, where 1 <= i <= 6.
This gives:
Xi = I{i} = 1 if the roll comes up i, 0 otherwise
By a lemma in the text:
E[X_A] = Pr{A}, where X_A = I{A}
So that gives:
E[Xi] = Pr{i} = 1/6
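(Same kind of sanity check as above, just to confirm the 1/6 figure empirically; again a rough sketch with an arbitrary trial count:)

import random
from collections import Counter

trials = 600_000
counts = Counter(random.randint(1, 6) for _ in range(trials))
for i in range(1, 7):
    # Empirical estimate of E[Xi] = Pr{i}: fraction of rolls showing face i
    print(i, counts[i] / trials)  # each should be near 1/6, about 0.1667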
It follows that:
E[X] = E[X1] + E[X2] + E[X3] + E[X4] + E[X5] + E[X6]
The expected value of a single die roll, then, would be:
E[X] = 1/6 + 1/6 + 1/6 + 1/6 + 1/6 + 1/6 = 1
Obviously, this is not quite there. Somewhere along the way, the outcome value needs to be factored in, but I can't give a reason other than the fact that I know it should be in there.
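(One thing I did notice when poking at this in Python: on any single roll, exactly one of the six indicators is 1 and the rest are 0, so X1 + ... + X6 is always exactly 1, and its expectation can't be anything but 1. A quick sketch of what I mean; the helper name is just mine:)

import random

def indicators(roll):
    # Xi = 1 if the roll shows face i, else 0, for i = 1..6
    return [1 if roll == i else 0 for i in range(1, 7)]

roll = random.randint(1, 6)
xs = indicators(roll)
print(roll, xs, sum(xs))  # sum(xs) == 1 every time: exactly one face occurs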
I'd appreciate any help. Thanks.
This class is pretty annoying. Homework (textbook questions) isn't graded, but exam questions are styled after the textbook questions, and we don't get solutions. Makes it kind of hard to take a test when you don't know whether you've been doing the homework correctly or not.
