\section{FIXED POINTS and ZEROS of NONLINEAR FUNCTIONS.} Let $f:[a,b]\rightarrow E$ be a continuous function. The fixed point problem is to find $x\in[a,b]$ such that $x=f(x)$; such an $x$ is called a {\it fixed point\/} of $f$. The root finding problem is to find $x\in[a,b]$ such that $f(x)=0$; such an $x$ is called a {\it root\/} or {\it zero\/} of $f$. \medskip\noindent{\bf Brouwer Fixed Point Theorem}. Let $f:[a,b]\rightarrow[a,b]$ be continuous. Then $\exists$ a point $s\in[a,b]$ such that $s=f(s)$. \medskip\noindent{\bf Definition}. A function $f:[a,b]\rightarrow E$ is said to satisfy a Lipschitz condition with Lipschitz constant $L$ on $[a,b]$ if $|f(x)-f(y)|\le L|x-y|$ $\forall x,y\in[a,b]$. Note: If $f:[a,b]\rightarrow E$ is differentiable and $|f'(x)|\le L$ on $[a,b]$, then by the Mean Value Theorem $f$ satisfies a Lipschitz condition with Lipschitz constant $L$ on $[a,b]$. \medskip\noindent{\bf Theorem}. Let $f:[a,b]\rightarrow[a,b]$ satisfy a Lipschitz condition with Lipschitz constant $L<1$. Then for any $x_0\in[a,b]$, the fixed point iteration $$x_{n+1}=f(x_n),\quad n=0,1,2,\ldots$$ converges to the unique fixed point $s$ of $f$ in $[a,b]$. Proof. The Lipschitz condition implies that $f$ is continuous, and therefore by the Brouwer Fixed Point Theorem $\exists$ a fixed point $s\in[a,b]$. Suppose there were two distinct fixed points $s_1$ and $s_2$. Then $|s_1-s_2|=|f(s_1)-f(s_2)|\le L|s_1-s_2|<|s_1-s_2|$, a contradiction. Hence there is a unique fixed point $s$. The error $e_n=x_n-s$ satisfies $$|e_n|=|x_n-s|=|f(x_{n-1})-f(s)|\le L|x_{n-1}-s|=L|e_{n-1}|\le\cdots\le L^n|e_0|.$$ Since $L<1$, $e_n\rightarrow0\Rightarrow x_n\rightarrow s$.\QED \midinsert \centerline{\vtop{\null \epsfxsize=6in\epsffile{fpiter.eps}}} \endinsert Any root finding problem $f(x)=0$ can be converted to a fixed point problem $$x=g(x)=x-h(x)f(x),$$ where $h(s)\ne0$ at the zero $s$ of $f$, so that $s=g(s)$ iff $f(s)=0$. Different choices for the function $h(x)$ lead to different algorithms (fixed point iterations $x_{n+1}=g(x_n)$) for solving $f(x)=0$.
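The contraction theorem above can be tried out numerically; the following is a minimal sketch (the function name {\tt fixed\_point} and the tolerances are illustrative, not from the notes). The map $f(x)=\cos x$ takes $[0,1]$ into itself and satisfies $|f'(x)|=|\sin x|\le\sin 1<1$ there, so the theorem guarantees convergence.

```python
# Fixed point iteration x_{n+1} = f(x_n), a sketch of the theorem above.
# f(x) = cos(x) maps [0,1] into itself with Lipschitz constant sin(1) < 1,
# so the iteration converges to the unique fixed point from any x0 in [0,1].
import math

def fixed_point(f, x0, tol=1e-12, max_iter=100):
    """Iterate x_{n+1} = f(x_n) until successive iterates agree to tol."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) <= tol:
            return x_next
        x = x_next
    return x

s = fixed_point(math.cos, 0.5)
print(s)                     # ~0.7390851332, the fixed point of cos
print(abs(s - math.cos(s)))  # residual |s - f(s)|, near machine precision
```

Since $L\approx0.67$ here, the error shrinks by roughly that factor per step (linear convergence), which is why on the order of seventy iterations are needed for $10^{-12}$ accuracy.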
\medskip\hrule \medskip\centerline{\bf Root finding algorithms.} \noindent{\bf Bisection}. Let $f:[a,b]\rightarrow E$ be continuous and $f(a)f(b)<0$. Choose $\epsilon>0$ and set $L\:a$, $R\:b$, $FL\:f(a)$, $FR\:f(b)$.\hfil {\tt \break\indent while $R-L>\epsilon$ do\hfil \break\indent\indent $S\:\displaystyle{L+R\over2};\; FS\:f(S)$;\hfil\break\indent\indent if $FS=0$ then exit; \hfil\break\indent\indent if $FS*FL>0$ then\hfil\break\indent\indent\indent $L\:S,\; FL\:FS$ \hfil\break\indent\indent else\hfil\break\indent\indent\indent $R\:S,\; FR\:FS$;\hfil\break\indent end do\hfil\break\indent if $|FL|<|FR|$ then $S\:L$ else $S\:R$;\qquad\qquad {\rm The best estimate of the zero is $S$}.} \medskip\noindent{\bf Newton's method}. Let $P(x)$ be the linear polynomial interpolating $f$ and $f'$ at $x_0$ (the tangent line to $f$ at $x_0$). Approximate $f(x)=0$ by $P(x)=0$, and approximate the root $s$ of $f$ by the root $x_1$ of $P$. Repeat until convergence. \midinsert\vglue -.7in \centerline{\vtop{\null \epsfxsize=3in\epsffile{newton.eps}}\hfil \vtop{\null\vskip30pt\hsize=2.5in $$x_{n+1}=x_n-{f(x_n)\over f'(x_n)},\quad n=0,1,2,\ldots$$}} \vglue -.7in \endinsert \hrule \medskip\noindent{\bf Theorem}. Let $f:E\rightarrow E$ be three times continuously differentiable, $f(s)=0$ and $f'(s)\ne0$. Then for $x_0$ sufficiently close to $s$, Newton's method converges to $s$ quadratically. Proof. Let $\displaystyle g(x)=x-{f(x)\over f'(x)}$. Observe that Newton's method $\displaystyle x_{n+1}=x_n-{f(x_n)\over f'(x_n)}$ is equivalent to the fixed point iteration $x_{n+1}=g(x_n)$, and $s=g(s)$, $g'(s)=0$. Let $e_n=x_n-s$, $I=[s-|e_0|,s+|e_0|]$, $\displaystyle m=\max\limits_I{|g''(x)|\over2}$. If $x_n\in I$, then by Taylor expansion of $g$ about $s$ (with $\zeta$ between $x_n$ and $s$), using $g'(s)=0$, $\displaystyle |e_{n+1}| =|x_{n+1}-s| =|g(x_n)-g(s)| =\left|g'(s)(x_n-s) +{g''(\zeta)\over2!}(x_n-s)^2\right| ={|g''(\zeta)|\over2}e_n^2\le m\,e_n^2$.
Therefore $m|e_0|<1\Rightarrow x_n\in I$ $\forall n$ by induction and $\displaystyle |e_n|\le{1\over m}(m|e_0|)^{2^n}\Rightarrow e_n\rightarrow0\Rightarrow x_n\rightarrow s$. Now if $x_n\rightarrow s$, $$\lim\limits_{n\rightarrow\infty}{x_{n+1}-s\over (x_n-s)^2}=\lim\limits_{n\rightarrow\infty}{g''(\zeta)\over 2!}={g''(s)\over 2!}$$ (quadratic convergence), since $\zeta$ lies between $x_n$ and $s$, and $g''$ is continuous.\QED \medskip\noindent{\bf Theorem}. Let $f\in C^2(E)=\{f:E\rightarrow E\;|\;f \hbox{ is twice continuously differentiable}\}$. Given $x_0$, define $x_n$ and $h_n$ by $x_{n+1}=x_n+h_n$, $h_n=-f(x_n)/f'(x_n)$. Let $I_0$ be the interval $\hbox{int}(x_0,x_0+2h_0)$ and suppose that $2|h_0|M\le|f'(x_0)|$, where $M=\max\limits_{I_0}|f''(x)|$. Then $x_n\in I_0$ for $n=1,2,3,\ldots$ and $\lim\limits_{n\rightarrow\infty}x_n=\alpha$ is the unique root of $f(x)$ in $I_0$. \medskip\noindent{\bf Theorem}. Let $f\in C^2[a,b]$, and suppose that $f'(x)\ne0$ $\forall x\in[a,b]$, $f''(x)$ does not change sign in the interval $[a,b]$, and that $f(a)f(b)<0$. If $\displaystyle\left|{f(a)\over f'(a)}\right|<b-a$ and $\displaystyle\left|{f(b)\over f'(b)}\right|<b-a$, then Newton's method converges to the unique zero $s$ of $f$ in $[a,b]$ for every starting point $x_0\in[a,b]$.
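Newton's method itself is short enough to state as code; the following is a minimal sketch (the function name {\tt newton}, the stopping test on $|f(x_n)|$, and the tolerances are illustrative choices, not from the notes). With $f(x)=x^2-2$ and $x_0=1$ the hypotheses of the convergence theorem hold, and the iterates home in on $\sqrt2$ with the error roughly squaring at each step.

```python
# Newton's method: x_{n+1} = x_n - f(x_n)/f'(x_n), a sketch.
# Illustrated on f(x) = x^2 - 2, f'(x) = 2x, whose positive root is sqrt(2).
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Iterate Newton steps until |f(x_n)| <= tol or max_iter is reached."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) <= tol:
            return x
        x = x - fx / fprime(x)   # Newton step h_n = -f(x_n)/f'(x_n)
    return x

root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
print(root)  # ~1.41421356237, i.e. sqrt(2)
```

Starting from $x_0=1$ the iterates are $1.5$, $1.41\overline{6}$, $1.414215\ldots$: the number of correct digits roughly doubles per step, in contrast to the fixed rate of bisection or plain fixed point iteration.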