I have a Numerical Methods assignment due, and I've been scratching my head over this one. Among other things, we are required to write a program that finds the smallest positive root of a certain equation. I'm confident about the value of that smallest positive root, both because my program works for the most part and because I've checked the result with a calculator. My fixed-point method also finds the same smallest root.
The problem is that we are required to accept multiple input values (initial guesses). The actual smallest root is at 3.0764xxx. Given an input of 3.5 or 4.0, Newton's method converges on a farther root no matter what. The next root after the 3.0764 one we want is at 3.199xxx, and there are many more after that. So when they grade our programs with multiple initial guesses, only guesses lying between roughly 0.237 and 3.137 (the latter being just shy of the midpoint between the smallest root and the one after it) will actually cause Newton's method to converge on the smallest positive root of 3.0764. Give it 3.5, and it converges to the following root of 3.199. A minimal sketch of the behavior I mean is below; f(x) there is just a stand-in cubic with roots at 1, 3, and 3.2 (NOT my actual assignment equation), chosen to mimic two close roots like 3.0764 and 3.199:
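```python
def f(x):
    # Stand-in cubic with roots at 1.0, 3.0, and 3.2 (hypothetical,
    # not the assignment's equation).
    return (x - 1.0) * (x - 3.0) * (x - 3.2)

def fprime(x):
    # Derivative of the stand-in cubic, by the product rule.
    return ((x - 3.0) * (x - 3.2)
            + (x - 1.0) * (x - 3.2)
            + (x - 1.0) * (x - 3.0))

def newton(x0, tol=1e-12, max_iter=100):
    # Plain Newton iteration: x <- x - f(x)/f'(x) until the step is tiny.
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            return x
    return x

# Guesses past the midpoint between the two close roots drift to the
# farther root, exactly like my program does with 3.5 or 4.0:
for guess in (2.9, 3.05, 3.5, 4.0):
    print(guess, "->", round(newton(guess), 6))
```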
I've talked to my prof, but it's hard to get a clear answer out of him. Any ideas? Is it even possible to ALWAYS converge to the SMALLEST positive root, even when the initial guess is a fair bit outside that range (i.e., Newton's method "sees" the 3.199 root or some other one and converges there instead)?
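To make the question concrete, here's the kind of thing I've been imagining (just a rough sketch, reusing the stand-in f(x) and newton() from above): step rightward from 0 in small increments until f changes sign, then polish the first bracket found with Newton. Is something like this the standard fix, or is there a cleaner way?

```python
def smallest_positive_root(step=0.01, x_max=100.0):
    # Scan right from 0 until f changes sign, then run Newton from the
    # middle of that first bracket. The step must be smaller than the
    # gap between the two closest roots (here 3.0 and 3.2), or the
    # scan can hop over a pair of roots entirely.
    a = 0.0
    fa = f(a)
    x = step
    while x <= x_max:
        fx = f(x)
        if fa * fx <= 0.0:          # sign change: a root lies in [a, x]
            return newton(0.5 * (a + x))
        a, fa = x, fx
        x += step
    return None  # no sign change found in (0, x_max]

print(smallest_positive_root())  # -> ~1.0 with the stand-in cubic
```

(I realize Newton could in principle still jump out of the bracket; a bisection fallback inside [a, x] would be more robust, but the sketch shows the idea.)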