shortylickens' Bull's Eye analogy is a good one. Note one dilemma: to determine accuracy, you need to know the actual answer to high precision. But to assess precision, you can use common statistical tools, because this parameter is characterized completely by the data itself, with almost no external reference.
In the shots-on-target analogy, you visually judge where the middle of the group of holes is and how far, on average, each hole lies from that middle. Mathematically, you'd establish a coordinate system, calculate the mean position of all the holes, then the Standard Deviation of the collection. That is a measure of precision. But you could do all of that with no bull's eye at all - no external reference point. However, you do immediately start to look for one type of external reference: some commonly accepted measure of what counts as acceptable precision for this case. That is how you decide whether the precision is good or not. In the shots-on-target analogy, you might ask an "expert" target shooter how small the group of holes should be, and he / she certainly will ask you one important question before answering: how far was the shooter from the target?
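To make the "no bull's eye needed" point concrete, here is a minimal sketch in Python. The hole coordinates are invented for illustration; the only point is that the center and the spread come entirely from the data, with no reference mark on the target:

```python
import numpy as np

# (x, y) positions of the bullet holes, in cm, in an arbitrary
# coordinate system -- no reference to the bull's eye at all
holes = np.array([
    [1.2, 0.8],
    [1.5, 1.1],
    [0.9, 1.0],
    [1.3, 0.7],
    [1.1, 1.2],
])

center = holes.mean(axis=0)                      # middle of the group
radial = np.linalg.norm(holes - center, axis=1)  # distance of each hole from it

print(f"group center: {center}")
print(f"mean radial distance: {radial.mean():.3f} cm")
print(f"std dev of radial distance: {radial.std(ddof=1):.3f} cm")
```

Whether that spread number is "good" is exactly the part the data cannot tell you - that is where the expert's "how far was the shooter?" question comes in.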
When you want to assess accuracy, you do need external information - in fact, two pieces of information. The most obvious is: is the data average close to the "truth"? In the shots-on-target analogy, the "truth" is established visually - it is the center of the bull's eye. Mathematically, the question translates into: how much (numerically) does the data mean differ from the known "true" value? Often it is hard to establish the "truth". The second, related item is: how precisely known is the "truth"? If the external information source can tell you the correct "true" value, it also should provide a numeric estimate of the precision of that value. For example: this calibration weight standard is 2.0040 g, with a Standard Deviation of 0.00073 g. Given that information, you can use standard statistical tools to decide whether your measurement instrument's data shows it to be in acceptable agreement with the known "truth".
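Here is one reasonable way to run that accuracy check, sketched in Python. The certified value and its Standard Deviation are the numbers from the example above; the instrument readings are invented. The approach (a two-sided z-test that combines the uncertainty of your mean with the uncertainty of the standard itself) is just one common choice, not the only one:

```python
import numpy as np
from scipy import stats

true_value = 2.0040   # certified mass of the standard, g
true_sd = 0.00073     # stated uncertainty of that certified value, g

# hypothetical readings from the instrument under test, g
readings = np.array([2.0051, 2.0048, 2.0055, 2.0046, 2.0050])

mean = readings.mean()
se_mean = readings.std(ddof=1) / np.sqrt(len(readings))

# Combine the uncertainty of our mean with the uncertainty of the
# "truth" itself, then ask how many combined standard errors apart they are
z = (mean - true_value) / np.sqrt(se_mean**2 + true_sd**2)
p = 2 * stats.norm.sf(abs(z))  # two-sided p-value

print(f"bias estimate: {mean - true_value:+.5f} g  (z = {z:.2f}, p = {p:.3f})")
```

A small p-value says the disagreement with the certified value is too large to blame on random scatter - i.e., there is evidence of a real bias, an accuracy problem.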
Accuracy is often used to answer the question: can I rely on the answer you give me to be correct, or is there some consistent bias in your results? Precision, on the other hand, tends to answer a different type of question: if your answer today differs from what I expect as a target value, or from the data obtained yesterday, should I be concerned, or should I dismiss the difference as small random variation of no significance?
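That "should I be concerned?" question also has a standard statistical answer. A minimal sketch, with invented data for two days of measurements, using Welch's two-sample t-test (one common choice when the two days may not have equal variance):

```python
import numpy as np
from scipy import stats

# hypothetical readings from two different days
yesterday = np.array([9.98, 10.02, 10.01, 9.99, 10.00])
today     = np.array([10.03, 10.05, 10.01, 10.04, 10.02])

t, p = stats.ttest_ind(yesterday, today, equal_var=False)  # Welch's t-test
print(f"t = {t:.2f}, p = {p:.3f}")
# A small p suggests a real shift between the days; a large p says the
# difference is within the expected random variation and can be ignored.
```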