The data you have in the table already define the curve. They are in the form of Y (Cumulative Probability, expressed as a % instead of a fraction) versus X (Score).
The typical "Bell Curve" appearance of the Normal Distribution comes from plotting Probability on the Y axis versus z. z is just a way to scale the actual data into a unitless range running from z = -infinity to z = +infinity. It is defined by the formula z = {(actual "X" value) - (mean of the X's)} / (Standard Deviation of the X's). The cumulative form of the curve is obtained mathematically by integrating the curve from z = -infinity up to the z value for your "X". In practice, you could approximate it by breaking the X axis into pieces and then, moving from left to right, adding each piece's average "Y" value times its width to the running total.
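The piece-by-piece approximation above can be sketched in a few lines of Python. This is only an illustration of the idea (a midpoint Riemann sum of the bell curve from far left up to X), not part of the original answer; the function names are my own.

```python
import math

def normal_pdf(x, mean, sd):
    """Height of the Bell Curve (probability density) at x."""
    z = (x - mean) / sd
    return math.exp(-0.5 * z * z) / (sd * math.sqrt(2 * math.pi))

def approx_cumulative(x, mean, sd, steps=2000):
    """Approximate the Cumulative Probability at x by summing thin
    slices from far left (standing in for -infinity) up to x."""
    lo = mean - 10 * sd            # 10 SDs left is effectively -infinity
    width = (x - lo) / steps
    total = 0.0
    for i in range(steps):
        mid = lo + (i + 0.5) * width            # center of this slice
        total += normal_pdf(mid, mean, sd) * width  # slice's area
    return total
```

At the mean this sum comes out to about 0.5, as it should: half the area of a symmetric bell sits to the left of its center.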
In your data table, you have the Cumulative Probability already in the Percentile column, so the graph of Percentile (on the Y axis) versus Score (on the X axis) looks like an S-shaped curve rising from left to right: slowly on the left, quickly in the middle, and leveling off on the right. It's pretty hard to get the Bell Curve out of these data because the table gives Scores for evenly-spaced values of Cumulative Probability, whereas the usual Bell Curve gives Probability for evenly-spaced intervals of Score. However, you can still do calculations on these data using the formulas for the Normal Distribution. (So far we are ASSUMING those are the right formulas because the Normal Distribution is the right model for the data - and we plan to look closely at the results to decide whether or not that is reasonable.)
Now, if we had the values of X, the Mean of X, and the Standard Deviation of the X's, we could calculate the z value for each X given, look up in a table the Cumulative Probability for that z, and so predict the Percentile for each Score. In the current case we have to work backwards. We have the Cumulative Probabilities (as Percentile values), so we can use the table to see what value of z corresponds to each Percentile. For each z we now have three of the four quantities in the formula for z (z, X, Mean of X), so we can solve for the Standard Deviation of X. We can do that 19 times, and the results all should be pretty close to each other IF we have used the correct model for the data. The average of those 19 estimates can then be used as the "real" Standard Deviation of the X's.

Finally, that puts us in a position to calculate the predicted Cumulative Probability (Percentile) for each Score value. Comparing these to the Percentile values in the data table provided gives us a feel for whether the data really do fit the Normal Distribution curve. If our answer is "close enough", then we can proceed to use all the other tools of that distribution to predict (calculate) other values, like what percentage of people actually met or exceeded the minimum score for passing, 1440. In Excel, one function, NORMDIST(), can produce both Probability and Cumulative Probability values for you to examine. As inputs it needs the values for X, Mean of X, Standard Deviation of X, and a logical switch. You just calculated all those values; the switch is the "cumulative" argument (TRUE for Cumulative Probability, FALSE for the bell-shaped density) - check the Excel Help file for this statistical function.
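Here is a small sketch of that back-solving procedure in Python. The (Score, Percentile) pairs below are HYPOTHETICAL stand-ins for your real table (use all 19 of your rows), and the mean is likewise only an assumed value; the logic - invert each Percentile to a z, solve z = (X - mean)/SD for SD, average the estimates, then predict Percentiles to compare against the table - is exactly the procedure described above.

```python
from statistics import NormalDist, mean as avg

# HYPOTHETICAL (Score, Percentile) pairs standing in for the real table.
table = [(1200, 10), (1350, 30), (1440, 50), (1530, 70), (1680, 90)]

x_mean = 1440  # assumed Mean of X (e.g., the table's 50th-percentile Score)

std = NormalDist()  # standard normal: mean 0, SD 1
estimates = []
for score, pct in table:
    if pct == 50:
        continue  # z = 0 at the median, so this row can't pin down the SD
    z = std.inv_cdf(pct / 100)              # z for this Cumulative Probability
    estimates.append((score - x_mean) / z)  # solve z = (X - mean)/SD for SD

sd = avg(estimates)  # the "real" SD, if the model fits

# Predicted Percentile for each Score, to compare against the table.
model = NormalDist(x_mean, sd)
predicted = [(score, round(model.cdf(score) * 100, 1)) for score, pct in table]
```

If the Normal model is right, the individual SD estimates cluster tightly and the predicted Percentiles land close to the table's; large scatter in either is the "fit is poor" signal discussed next.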
On the other hand, if the judgment is "No, the fit is poor and the real data don't fit a classic symmetrical Bell Curve," then we need to decide what the proper statistical model is and try its tools.
As a complete aside, IF the Normal Distribution is the correct statistical model, these data say that about 62% of people writing the California Bar Exam in 2008 failed. I bet that's by design, so that the plain "average" law student won't be good enough (yet) to get a license to practice.