Have just picked up mention of this problem from Melody's wrap ... it looks interesting.
It's what's known as a fixed point iteration.
It can be written as
\(\displaystyle x_{n+1}=x_{n}+\sin x_{n}+\frac{1}{6}\sin^{3}x_{n}\) ,
(writing x rather than n, since n suggests an integer).
Choose a starting value x0 for x, substitute it into the rhs to calculate x1, substitute that back into the rhs to calculate x2, and so on. Provided that the process converges to some number, we have a solution (a root) of the equation
\(\displaystyle x = x + \sin x + \frac{1}{6}\sin^{3}x\) .
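To see the scheme in action, here is a quick numerical sketch (the function name `step` and the starting value are my own choices for illustration):

```python
import math

def step(x):
    """One step of the fixed-point iteration x -> x + sin x + (1/6) sin^3 x."""
    s = math.sin(x)
    return x + s + s**3 / 6

x = 3.0  # starting value x0
for n in range(4):
    x = step(x)
    print(n + 1, x, abs(x - math.pi))
```

Starting from x0 = 3, a single step already lands within a few millionths of pi, and the second step is at machine precision.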
Assuming that, for a suitable starting value, this particular iteration converges to pi (and it can be shown that it does), we can let
\(\displaystyle x_{n}=\pi + \epsilon_{n} \text{ and } x_{n+1}=\pi + \epsilon_{n+1}\) ,
where
\(\displaystyle \epsilon_{n} \text{ and } \epsilon_{n+1}\)
are the errors in successive iterates.
The object of the exercise is to determine an approximate relationship between them, since this tells us just how quickly the iteration is converging.
We have,
\(\displaystyle \sin x_{n}=\sin(\pi+\epsilon_{n})=-\sin \epsilon_{n}\),
and expanding by Maclaurin's series,
\(\sin x_{n} = -\left(\epsilon_{n}-\frac{\epsilon^{3}_{n}}{3!}+\frac{\epsilon^{5}_{n}}{5!}-\frac{\epsilon^{7}_{n}}{7!}+\dots\right)\).
From that, we find that
\(\displaystyle \sin^{3}x_{n} = -\left(\epsilon^{3}_{n}-\frac{60\epsilon^{5}_{n}}{5!}+\frac{546\epsilon^{7}_{n}}{7!}-\dots \right)\),
(nb. the easiest way of doing that on paper is to use the identity
\(\displaystyle \sin3A=3\sin A-4\sin^{3}A\) ).
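That expansion can be checked numerically. Rearranging the identity gives \(\sin^{3}A=(3\sin A-\sin 3A)/4\), and expanding each sine produces the series coefficients above; the sketch below (function name mine) compares the truncated series against the exact value for a small argument:

```python
import math

# sin^3(A) = (3 sin A - sin 3A)/4, from sin 3A = 3 sin A - 4 sin^3 A.
# Expanding both sines gives eps^3 - (60/5!) eps^5 + (546/7!) eps^7 - ...
def sin_cubed_series(eps):
    return eps**3 - 60 * eps**5 / math.factorial(5) + 546 * eps**7 / math.factorial(7)

eps = 0.05
exact = math.sin(eps) ** 3
approx = sin_cubed_series(eps)
print(exact, approx, abs(exact - approx))  # agreement to the omitted O(eps^9) term
```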
Substituting into the iteration and simplifying
(\(\displaystyle \text{the } \epsilon_{n} \text{ and } \epsilon^{3}_{n}\) terms on the rhs cancel out),
\(\displaystyle \epsilon_{n+1}=\frac{9}{5!}\epsilon^{5}_{n}-\frac{90}{7!}\epsilon^{7}_{n}+\dots\) (but check my arithmetic),
showing that the iteration has fifth order (quintic) convergence.
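The leading coefficient can also be confirmed empirically: one step from x0 = 3 should give an error close to \(\frac{9}{5!}\epsilon_{0}^{5}\). A quick check (names mine):

```python
import math

def step(x):
    # One step of x -> x + sin x + (1/6) sin^3 x
    s = math.sin(x)
    return x + s + s**3 / 6

x0 = 3.0
e0 = x0 - math.pi        # initial error, about -0.14
e1 = step(x0) - math.pi  # error after one step
ratio = e1 / e0**5
print(ratio, 9 / math.factorial(5))  # both close to 0.075
```

The observed ratio differs from 9/120 only by the higher-order \(\epsilon^{7}\) correction.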
So, for this iteration with a starting value x0 = 3, meaning an initial error of about 0.14, we could expect an error in x1 to be roughly 9*0.14^5/120 = 0.000004 and for x2 roughly 9*0.000004^5/120, about 8e-29, and so on.
For comparison, Newton-Raphson is usually second order convergent. That is, the error of an iterate is roughly the square of the preceding one.
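For a concrete comparison, here is Newton-Raphson on \(f(x)=x^{2}-2\) (my example, chosen because its quadratic convergence is typical): each step's error is roughly proportional to the square of the previous one, so the number of correct digits only doubles per step rather than quintupling.

```python
import math

# Newton-Raphson for f(x) = x^2 - 2: x_{n+1} = x - f(x)/f'(x) = (x + 2/x)/2
x = 1.0
root = math.sqrt(2)
for n in range(4):
    x = (x + 2.0 / x) / 2.0
    print(n + 1, x, abs(x - root))
```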
- Bertie