Why does the following simple formula converge QUINTICALLY, or order 5, to the constant Pi?

\(\displaystyle n_{k+1}=n_{k}+\sin(n_{k})+\frac{1}{6}\sin^{3}(n_{k})\) . Any insights will be greatly appreciated. Thanks a lot.

Guest Dec 25, 2015

#7 (+10) Best Answer

Have just picked up mention of this problem from Melody's wrap, ... , looks interesting.

It's what's known as a fixed point iteration.

It can be written as

\(\displaystyle x_{n+1}=x_{n}+\sin x_{n}+\frac{1}{6}\sin^{3}x_{n}\) ,

(x rather than n, n suggests an integer).

Choose a starting value x0 for x, substitute it into the rhs to calculate x1, substitute that back into the rhs to calculate x2, and so on. Provided that the process converges to some number, we have a solution (a root) of the equation

\(\displaystyle x = x + \sin x + \frac{1}{6}\sin^{3}x\) .
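The procedure is easy to try directly; a minimal Python sketch of the iteration (using the starting value x0 = 3 that appears later in the thread):

```python
import math

def g(x):
    # One step of the fixed-point iteration: x -> x + sin x + (1/6) sin^3 x
    return x + math.sin(x) + math.sin(x) ** 3 / 6

x = 3.0  # starting value x0
for _ in range(3):
    x = g(x)

# After only two or three steps, x agrees with pi to full double precision.
print(x, math.pi)
```

Double precision saturates after the second step, so seeing the quintic digit growth beyond that would need arbitrary-precision arithmetic.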

Assuming that, for a suitable starting value, this particular iteration converges to pi (and it can be shown that it does), we can let

\(\displaystyle x_{n}=\pi + \epsilon_{n} \text{ and } x_{n+1}=\pi + \epsilon_{n+1}\) ,

where

\(\displaystyle \epsilon_{n} \text{ and } \epsilon_{n+1}\)

are the errors in successive iterates.

The object of the exercise is to determine an approximate relationship between them, since this tells us just how quickly the iteration is converging.

We have,

\(\displaystyle \sin x_{n}=\sin(\pi+\epsilon_{n})=-\sin \epsilon_{n}\),

and expanding by Maclaurin's series,

\(\sin x_{n} = -\left(\epsilon_{n}-\frac{\epsilon^{3}_{n}}{3!}+\frac{\epsilon^{5}_{n}}{5!}-\frac{\epsilon^{7}_{n}}{7!}+\dots\right)\).

From that, we find that

\(\displaystyle \sin^{3}x_{n} = -\left(\epsilon^{3}_{n}-\frac{60\epsilon^{5}_{n}}{5!}+\frac{546\epsilon^{7}_{n}}{7!}-\dots \right)\),

(nb. the easiest way of doing that on paper is to use the identity

\(\displaystyle \sin3A=3\sin A-4\sin^{3}A\), i.e. \(\displaystyle \sin^{3}A=\tfrac{1}{4}\left(3\sin A-\sin 3A\right)\) ).
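The coefficients above can be spot-checked numerically; a quick sketch comparing the truncated series for \(\sin^{3}\epsilon\) (the positive-sign version, before the overall minus from \(\sin x_{n}=-\sin\epsilon_{n}\)) against the exact value at a small argument:

```python
import math

# Spot-check: sin^3(e) = e^3 - 60 e^5/5! + 546 e^7/7! - ...
eps = 0.01
series = eps ** 3 - 60 * eps ** 5 / math.factorial(5) + 546 * eps ** 7 / math.factorial(7)
exact = math.sin(eps) ** 3

# The leftover is the first omitted term, of order eps^9.
print(abs(exact - series))
```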

Substituting into the iteration, and simplifying,

(\(\displaystyle \text{the } \epsilon_{n} \text{ and } \epsilon^{3}_{n}\) terms on the rhs cancel out),

\(\displaystyle \epsilon_{n+1}=\frac{9}{5!}\epsilon^{5}_{n}-\frac{90}{7!}\epsilon^{7}_{n}+\dots\) (but check my arithmetic),

showing that the iteration has fifth order (quintic) convergence.
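The predicted error law can be checked in double precision for the first step (using the starting value x0 = 3 discussed below):

```python
import math

def g(x):
    # the fixed-point iteration from the thread
    return x + math.sin(x) + math.sin(x) ** 3 / 6

x0 = 3.0
e0 = x0 - math.pi            # initial error, about -0.1416
e1 = g(x0) - math.pi         # error after one step

# leading term of the error recurrence: e_{n+1} ~ (9/5!) e_n^5
predicted = 9 / math.factorial(5) * e0 ** 5
print(e1, predicted)         # agree to within about half a percent
```

The small discrepancy is the next term of the series, \(-\tfrac{90}{7!}\epsilon^{7}_{n}\).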

So, for this iteration with a starting value x0 = 3, meaning an initial error of about 0.14, we could expect an error in x1 of roughly 9*0.14^5/120 ≈ 0.000004, and in x2 of roughly 9*0.000004^5/120, about 8e-29, and so on.

For comparison, Newton-Raphson is usually second-order convergent. That is, the error of each iterate is roughly proportional to the square of the preceding one.
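For contrast, here is a minimal Newton-Raphson sketch (on f(x) = x^2 - 2, a generic example chosen for illustration, not from the thread) showing the usual second-order behaviour: the number of correct digits roughly doubles per step rather than quintupling.

```python
import math

def newton_step(x):
    # Newton-Raphson for f(x) = x^2 - 2: x -> x - f(x)/f'(x)
    return x - (x * x - 2) / (2 * x)

x = 1.0
for _ in range(5):
    print(abs(x - math.sqrt(2)))  # the errors roughly square at each step
    x = newton_step(x)
```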

- Bertie

Guest Dec 27, 2015

#2 (+5)

The formula is basically: n = sin(n) + 1/6 sin^3(n) >>> 1st iteration. The result of this is fed back into the formula to get the result of the 2nd iteration, which converges quintically to Pi.

Example: if n = 3, then feeding it into the above formula you get 3.1415884055156107677901598795611. Now, if you feed this result back into the formula, you get, on the 2nd iteration, 3.1415926535897932384626433831757, which converges very rapidly towards the actual value of Pi, and so on.

This is obviously in radians, but I don't understand WHY it converges so rapidly, i.e. why each iteration quintuples the number of accurate digits of Pi. The 2nd iteration above gave about 27 accurate decimal digits, so the third iteration should give at least 27 x 5 = 135 accurate digits; in fact, it gives 140 or so, and so on.
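The quoted iterates are easy to reproduce (in double precision, which caps the visible accuracy well short of the 27+ digits quoted):

```python
import math

def g(n):
    # Pi ~ n + sin(n) + (1/6) sin^3(n), applied repeatedly
    return n + math.sin(n) + math.sin(n) ** 3 / 6

n1 = g(3.0)
print(n1)   # 3.1415884055156107..., matching the quoted 1st iteration
n2 = g(n1)
print(n2)   # indistinguishable from pi in double precision
```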

Guest Dec 26, 2015

#3 (+5)

Sorry, a minor typo: Pi ~ n + sin(n) + 1/6 sin^3(n). So, for n = 3,

Pi~3 +0.14112000805986722210074480280811 + 4.6839745574354568941507675299565e-4

Pi~3.1415884055156107677901598795611........1st iteration...........and so on.

Guest Dec 26, 2015

#4 (+5)

It doesn't always converge to pi. Try a value for n greater than 2pi, for example. And sometimes it takes a while to get going even when it does converge to pi. For example, here's a graph where n = 2pi - 10^{-8}:

[graph not shown]
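To illustrate the first point, a small sketch (the starting value 7 is my own choice, not from the thread): on (2pi, 3pi) we have sin x > 0, so the iterates increase and the iteration settles on the fixed point 3pi rather than pi.

```python
import math

def g(x):
    # the fixed-point iteration under discussion
    return x + math.sin(x) + math.sin(x) ** 3 / 6

x = 7.0          # between 2*pi and 3*pi
for _ in range(10):
    x = g(x)

print(x, 3 * math.pi)   # converges to 3*pi, not pi
```

The fixed points are all integer multiples of pi; which one you reach depends on where you start.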

Alan Dec 26, 2015

#5 (+5)

The following might be more helpful. Put the result of one iteration into the expression symbolically, and then expand it as a Taylor series:
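Alan's symbolic expansion isn't reproduced above; the same result can be recovered with exact rational series arithmetic (a stdlib-only sketch, not his original code). Writing x_n = pi + e, one iteration gives the error e_{n+1} = e - sin e - (1/6) sin^3 e, whose series is 3e^5/40 - e^7/56 + ..., i.e. the quintic error law:

```python
from fractions import Fraction
from math import factorial

N = 9  # keep coefficients of e^0 .. e^8

def mul(a, b):
    # Cauchy product of two truncated power series (coefficient lists)
    c = [Fraction(0)] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] += ai * bj
    return c

# Maclaurin coefficients of sin e: e - e^3/3! + e^5/5! - e^7/7!
sin_e = [Fraction(0)] * N
for k in range(4):
    sin_e[2 * k + 1] = Fraction((-1) ** k, factorial(2 * k + 1))

sin3 = mul(mul(sin_e, sin_e), sin_e)

# e_{n+1} = e_n - sin(e_n) - (1/6) sin^3(e_n)
err = [Fraction(0)] * N
err[1] = Fraction(1)
for i in range(N):
    err[i] -= sin_e[i] + sin3[i] / 6

print(err[5], err[7])  # 3/40 and -1/56, i.e. 9/5! and -90/7!
```

Note that 3/40 = 9/120 = 9/5! and 1/56 = 90/5040 = 90/7!, matching Bertie's result in #7.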

Alan Dec 26, 2015

#6 (+5)

Do you want to be shown why, or would you prefer to be guided so that you derive the result yourself ?

Guest Dec 26, 2015
