Another matlab question


TecHNooB

Diamond Member
Sep 10, 2005
7,458
1
76
What's the max # of decimal places matlab can display? I have it set at 16 places right now, but for some reason, when I put these values into the auto-grader, if I write I(1,1) instead of the actual value stored in I(1,1), I get a pass, but writing the actual value gets me a fail. I'm dancing on the edge of this boundary, so I need to know if I can get more precision.
 

TecHNooB

Diamond Member
Sep 10, 2005
7,458
1
76
blah, ended up using fprintf with %0.40f

stupid matlab, hiding digits from me!
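
For anyone curious, the difference looks something like this (pi is just a stand-in for my actual value):

x = pi;
disp(x)                  % default 'format short' shows only 5 significant digits
format long
disp(x)                  % ~16 significant digits, all that a double actually has
fprintf('%0.40f\n', x)   % 40 decimal places; everything past ~16 is noise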
 

Fenixgoon

Lifer
Jun 30, 2003
32,886
12,165
136
Cuz I optimized the hell out of something and every digit counts :p The hidden digits made all the difference.

IIRC matlab stores most numbers as double precision. The digits are there; it just doesn't display them unless you tell it otherwise.
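
Quick demo of what I mean (nothing here is specific to the OP's code):

format short
x = 2/3    % displays 0.6667
format long
x          % displays 0.666666666666667 -- same stored double, more digits shown
x == 2/3   % logical 1: the display format never changes the stored bits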
 

gwai lo

Senior member
Sep 29, 2004
347
0
0
I don't know what you're doing exactly... but for what it's worth, MATLAB uses double precision by default, and you're way beyond that if you're looking at 40 decimal places.

You're into round-off error territory now, so it may be something to think about.
 

TecHNooB

Diamond Member
Sep 10, 2005
7,458
1
76
I don't know what you're doing exactly... but for what it's worth, MATLAB uses double precision by default, and you're way beyond that if you're looking at 40 decimal places.

You're into round-off error territory now, so it may be something to think about.

Ya, I just arbitrarily chose a number so I could see what was trailing :p
 

eLiu

Diamond Member
Jun 4, 2001
6,407
1
0
Uhm, I'm confused here.
MATLAB's floating point calculations operate on IEEE standard double precision floating point numbers. That's 64 bits of data. IIRC it's 1 bit for the sign, 11 bits for the exponent, and 52 bits for the mantissa (the data).

The mantissa alone is meaningful out to about 16 significant decimal digits. You can type eps(number) (replace number with an actual number) in MATLAB to get the spacing between "number" and the next representable double of that magnitude. That spacing also describes the possible error from adding (or any arithmetic operation on) 2 floating point numbers; i.e. if x, y are real numbers and fl(x), fl(y) are their floating point representations, then roughly speaking fl(fl(x) + fl(y)) = (x + y)*(1 + d), where |d| is on the order of eps. As you can imagine, doing large numbers of floating point operations will *decrease* the number of meaningful digits. For example, after using backslash to solve a very ill-conditioned linear system, it is entirely possible that you can only trust 3 or 4 digits of the output!
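
Concretely, a few things to try at the prompt:

eps(1)              % 2.2204e-16: spacing of doubles near 1
eps(1e6)            % ~1.1642e-10: spacing grows with magnitude
(0.1 + 0.2) - 0.3   % ~5.5511e-17: a single add already shows eps-level error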

Printing 40 digits of "precision" is meaningless. Unless you're working primarily with denormalized numbers (in which case you should strongly reconsider your algorithm), those extra digits are just noise. With denormalized numbers the extra decimal places can carry some meaning, but the absolute spacing between representable numbers stops shrinking with magnitude, so you lose relative precision and computation gets dangerous.
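
You can see the denormal weirdness directly (rough sketch):

realmin              % 2.2251e-308: smallest normalized double
eps(realmin)         % 4.9407e-324 = 2^-1074: the absolute spacing bottoms out here
eps(realmin/2^20)    % still 4.9407e-324 -- all denormals share that spacing,
                     % so relative precision decays as the number shrinks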

Anyway, could you post a code snippet? Supposing that I(1,1) = X, where X is the explicit value stored there, X - I(1,1) will be 0, always. Floating point arithmetic evaluates this exactly.
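
Something like this is what I'd want to see; I is just a hypothetical stand-in for whatever the OP has:

I = rand(3);     % hypothetical stand-in for the OP's matrix
x = I(1,1);      % copies the exact same 64 bits
x - I(1,1)       % exactly 0, every time
% but retyping a rounded printout gives a *different* double:
y = str2double(sprintf('%.10f', I(1,1)));
y - I(1,1)       % generally nonzero -- maybe that's what the grader is seeing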

If you are in a situation where "every digit counts," that means you haven't optimized it. High sensitivity to floating point error is never a good sign. If your code genuinely requires higher precision, you should consider a language like C or Java, which have adaptive/arbitrary precision arithmetic libraries available. But really, I have run into few situations where double precision is inadequate. (High accuracy & consistent intersections of curves in 3D is one example.)
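
(For completeness: if you happen to have the Symbolic Math Toolbox, MATLAB itself can do variable-precision arithmetic via vpa, e.g.:)

digits(50)        % request 50 significant digits (Symbolic Math Toolbox)
vpa(sym(1)/3)     % 0.33333...3 to 50 digits, computed symbolically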
 

Born2bwire

Diamond Member
Oct 28, 2005
9,840
6
71
1 digit escapes when you do that :p For everyone asking about why I picked 40, it was arbitrary, and yes, there are a lot of trailing zeros.

No, technically you can assume a guaranteed accuracy of 15 digits. That last digit is up for grabs because the round-off error is going to be reflected in it. In addition, the machine epsilon is a little larger than 1e-16, which is why people are usually careful in saying that double precision has approximately 16 digits of accuracy.
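
E.g. the classic 0.1 case shows where the faithful digits end:

fprintf('%.20f\n', 0.1)   % prints 0.10000000000000000555
% the first ~16 significant digits are faithful; past that is representation noise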
 

eLiu

Diamond Member
Jun 4, 2001
6,407
1
0
No, technically you can assume a guaranteed accuracy of 15 digits. That last digit is up for grabs because the round-off error is going to be reflected in it. In addition, the machine epsilon is a little larger than 1e-16, which is why people are usually careful in saying that double precision has approximately 16 digits of accuracy.

Machine epsilon is only ~1e-16 (2^(-52) to be specific; that's what MATLAB's eps(1) returns) when the numbers you're working with are in the range 1 to 2. From 2 to 4 it doubles, from 4 to 8 it doubles again, etc.
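
E.g.:

eps(1)     % 2.2204e-16
eps(2)     % 4.4409e-16: doubles at 2
eps(4)     % 8.8818e-16: doubles again at 4
eps(0.5)   % 1.1102e-16: halves below 1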

Also, depending on how many floating point ops have occurred, the answer may have far fewer trustworthy digits (example: a linear direct solve on a very ill-conditioned system).
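
A classic demo of that (hilb is just a convenient ill-conditioned test matrix):

A = hilb(12);                  % Hilbert matrix, classically ill-conditioned
cond(A)                        % ~1.7e16, on the order of 1/eps
xtrue = ones(12,1);
b = A*xtrue;
x = A\b;                       % direct solve via backslash
norm(x - xtrue)/norm(xtrue)    % relative error far above eps: most digits are garbage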
 