\section{NUMERICAL INTEGRATION (QUADRATURE)} Let $f\in C^4[a,b]$, $h=(b-a)/n$, $x_i=a+ih$, $f_i=f(x_i)$ for $i=0,1,\ldots,n$. \noindent{\bf Trapezoidal rule}. Approximate $f(x)$ by the continuous linear spline $s(x)=\sum_{i=0}^n f_iB_{i,2}(x)$ with breakpoints $x_0$, $\ldots$, $x_n$ that interpolates $f(x)$ at $x_0$, $\ldots$, $x_n$. Then $$\int_a^b f(x)dx\approx\int_a^b s(x)dx = T_n=h\left[{1\over2}f_0 + f_1 +\cdots+f_{n-1}+{1\over2}f_n\right].$$ In $[x_i,x_{i+1}]$, $f(x)=s(x) + {f''(\xi)\over2!}(x-x_i)(x-x_{i+1}) \Rightarrow$ $$\eqalign{ \int_{x_i}^{x_{i+1}} f(x)dx&= \int_{x_i}^{x_{i+1}} s(x)dx + \int_{x_i}^{x_{i+1}}{f''(\xi)\over2!}(x-x_i)(x-x_{i+1})dx \cr &={h\over2}(f_i+f_{i+1}) + {f''(\lambda_i)\over2!} \int_{x_i}^{x_{i+1}} (x-x_i)(x-x_{i+1})dx \cr &={h\over2}(f_i+f_{i+1}) - {f''(\lambda_i)h^3 \over12}\cr}$$ by the Mean Value Theorem for Integrals, which applies because $(x-x_i)(x-x_{i+1})$ does not change sign on $[x_i,x_{i+1}]$. Thus $$\int_a^b f(x)dx =T_n-{h^3\over12} \sum_{i=0}^{n-1} f''(\lambda_i) = T_n-{h^3\over12} (nf''(\eta)) =T_n-{(b-a)\over12}f''(\eta)h^2,$$ where $\sum_{i=0}^{n-1} f''(\lambda_i)=nf''(\eta)$ for some $\eta\in(a,b)$ by the Intermediate Value Theorem applied to the continuous function $f''$. \hrule\medskip\noindent{\bf Simpson's rule}. Approximate $f(x)$ by the continuous quadratic spline $s(x)=\sum_{i=0}^n \alpha_i B_{i,3}(x)$ with breakpoints $x_0$, $x_2$, $x_4$, $\ldots$, $x_{n-2}$, $x_n$ ($n$ even) that interpolates $f(x)$ at $x_0$, $x_1$, $x_2$, $\ldots$, $x_{n-1}$, $x_n$.
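The composite trapezoidal rule and its ${\cal O}(h^2)$ error are easy to check numerically. The following sketch (function and variable names are illustrative, not from the text) doubles $n$ and observes the error dropping by a factor of about $4$, as the error formula predicts.

```python
import math

def trapezoid(f, a, b, n):
    # T_n = h*(f_0/2 + f_1 + ... + f_{n-1} + f_n/2),  h = (b-a)/n
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

# Error is -(b-a)/12 * f''(eta) * h^2, so doubling n cuts it by ~4.
t1 = trapezoid(math.sin, 0.0, math.pi, 100)   # exact integral is 2
t2 = trapezoid(math.sin, 0.0, math.pi, 200)
ratio = (2.0 - t1) / (2.0 - t2)               # close to 4
```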
Then $$\int_a^b f(x)dx\approx\int_a^b s(x)dx = {h\over3}\left[ f_0+4f_1 +2f_2 +\cdots +2f_{n-2} + 4f_{n-1} + f_n\right].$$ Over the interval $[x_i,x_{i+2}]$ with $i$ even, $s(x)$ is just the quadratic polynomial interpolating $f$ at $x_i$, $x_{i+1}$, $x_{i+2}$, so $$\eqalign{ \int_{x_i}^{x_{i+2}} f(x)dx&= \int_{x_i}^{x_{i+2}} s(x)dx + \int_{x_i}^{x_{i+2}} f\left[x_i,x_{i+1},x_{i+2},x\right] \prod_{j=i}^{i+2} (x-x_j)\,dx \cr &= \int_{x_i}^{x_{i+2}} s(x)dx + \int_{x_i}^{x_{i+2}} \bigl(f\left[ x_i,x_{i+1},x_{i+2},x_{i+1}\right] \cr &\qquad\qquad{} + f\left[x_i,x_{i+1},x_{i+2},x_{i+1},x\right](x-x_{i+1})\bigr) \prod_{j=i}^{i+2} (x-x_j)\,dx \cr &={h\over3}\bigl(f_i+4f_{i+1}+f_{i+2}\bigr) + {f^{(4)}(\lambda_i) \over 4!} \int_{x_i}^{x_{i+2}} (x-x_i)(x-x_{i+1})^2(x-x_{i+2})\,dx \cr &={h\over3}\bigl(f_i+4f_{i+1}+f_{i+2}\bigr) - {f^{(4)}(\lambda_i) \over 90}h^5.\cr}$$ (The term containing $f\left[x_i,x_{i+1},x_{i+2},x_{i+1}\right]$ integrates to zero by symmetry about $x_{i+1}$, and the Mean Value Theorem for Integrals applies because $(x-x_i)(x-x_{i+1})^2(x-x_{i+2})\le0$ on $[x_i,x_{i+2}]$.) Therefore, summing over all $n/2$ intervals, $$\int_a^b f(x)dx = {h\over3}\left[ f_0+4f_1 +2f_2 +\cdots +2f_{n-2} + 4f_{n-1} + f_n\right] - {(b-a)f^{(4)}(\eta)\over180}h^4.$$ Note that {\it Simpson's rule is exact for cubic polynomials}, even though it was only constructed to be exact for quadratics. \bigskip\hrule\medskip\noindent{\bf Gaussian quadrature}. Let $Q_{n+1}(x)$ be a polynomial of degree $n+1$ orthogonal to every polynomial of degree $\le n$ with respect to the inner product $$\la p,q\ra=\int_a^b p(x)q(x)w(x)\,dx.$$ Let $f(x)$ be a polynomial of degree $\le 2n+1$, $$L_i(x)=\prod_{k=0\atop k\ne i}^n {x-x_k\over x_i-x_k}$$ the $i$th Lagrange polynomial for $i=0$, 1, \dots, $n$, where $x_0$, $\ldots$, $x_n$ are the real, distinct zeros of $Q_{n+1}$, which lie in $(a,b)$. By polynomial division, $$f(x)=p(x)Q_{n+1}(x) + r(x),$$ where $p(x)$ and $r(x)$ are polynomials of degree $\le n$.
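The exactness of Simpson's rule for cubics can be verified directly with a small sketch (names illustrative): even the coarsest case $n=2$ reproduces $\int_0^1 x^3\,dx = 1/4$, because the $f^{(4)}$ in the error term vanishes.

```python
def simpson(f, a, b, n):
    # (h/3)*(f_0 + 4f_1 + 2f_2 + ... + 2f_{n-2} + 4f_{n-1} + f_n), n even
    assert n % 2 == 0, "Simpson's rule needs an even number of subintervals"
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h)
                          for i in range(1, n))
    return h * s / 3

# Exact for cubics even with n = 2, since the error involves f^(4) == 0.
cubic = simpson(lambda x: x**3, 0.0, 1.0, 2)   # exact value 1/4
```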
Then $$\eqalign{ \int_a^bf(x)w(x)\,dx &= \int_a^b\left[ p(x)Q_{n+1}(x) + r(x) \right]w(x)\,dx = \int_a^b r(x)w(x)\,dx \cr &=\int_a^b\left[ \sum_{i=0}^n r(x_i)L_i(x)\right]w(x)\,dx = \sum_{i=0}^n r(x_i) \int_a^b L_i(x)w(x)\,dx \cr &= \sum_{i=0}^n w_i\,f(x_i),\cr}$$ where $$w_i=\int_a^b L_i(x)w(x)\,dx \qquad\hbox{for }i=0,1,\ldots,n.$$ (The first equality uses the orthogonality of $Q_{n+1}$ to $p$; the last uses $r(x_i)=f(x_i)$, which holds because $Q_{n+1}(x_i)=0$.) The Gaussian quadrature formula $$\int_a^bf(x)w(x)\,dx \approx \sum_{i=0}^n w_i\,f(x_i)$$ is {\it exact\/} if $f(x)$ is a polynomial of degree $\le2n+1$, and approximate otherwise. \medskip\noindent{\bf Theorem}. Gaussian quadrature formulas are numerically stable (all $w_i>0$) and $$\lim_{n\to\infty} \sum_{i=0}^n w_i\,f(x_i) = \int_a^bf(x)w(x)\,dx \qquad\hbox{for any }f\in C[a,b].$$ \hrule\medbreak Gaussian quadrature was resurrected by the space program in the 1960s for control algorithms in on-board computers because it is accurate, fast, and has very low memory requirements. For a large collection of commonly occurring weight functions $w(x)$ and intervals $(a,b)$, the abscissas $x_i$ and weights $w_i$ have been tabulated in a book by Stroud and Secrest ({\sl Gaussian Quadrature Formulas}, Prentice-Hall, 1966). Gaussian quadrature is excellent for two classes of problems: \item{(1)}real-time integration, \item{(2)}improper integrals. \medskip\hrule\smallskip Example: $$\eqalign{\int_0^3 {e^{\sin x}\over \sqrt x}dx &= \sqrt 3\int_0^1 e^{\sin 3t}\left({1\over\sqrt t}\right) dt=6.15259\cr &\approx \sqrt3\sum_{i=0}^4 w_ie^{\sin3t_i} =6.15302\cr}$$ where the $t_i$ are the roots of the degree-5 orthogonal polynomial $Q_5(x)$ for the weight $w(x)=1/\sqrt x$ and interval $(0,1)$, and $w_i= \int_0^1 L_i(x)/\sqrt x\,dx$. \bigskip\hrule\medskip\noindent{\bf Romberg integration}.
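For the classical case $w(x)\equiv1$ on $(-1,1)$ (Gauss--Legendre), the rule with $n=2$ can be written out explicitly: the three nodes are the zeros of the degree-3 Legendre polynomial $(5x^3-3x)/2$, and the rule is exact through degree $2n+1=5$. A sketch (not tied to the $1/\sqrt x$ example above; names illustrative):

```python
import math

# 3-point Gauss-Legendre on (-1, 1): nodes are the zeros of the degree-3
# Legendre polynomial (5x^3 - 3x)/2, i.e. 0 and +-sqrt(3/5);
# the weights are w_i = integral of L_i(x) over (-1, 1).
nodes = [-math.sqrt(0.6), 0.0, math.sqrt(0.6)]
weights = [5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0]

def gauss3(f):
    # Exact whenever f is a polynomial of degree <= 2n+1 = 5.
    return sum(w * f(x) for w, x in zip(weights, nodes))

quartic = gauss3(lambda x: x**4)   # exact: 2/5
sextic = gauss3(lambda x: x**6)    # degree 6 > 5, so no longer exact (2/7)
```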
Using the Euler-Maclaurin summation formula (Dahlquist and Bj\"orck, {\sl Numerical Methods}, 1975) or Everett's interpolation formula (Henrici, {\sl Elements of Numerical Analysis}, 1964) it is possible to prove $$\int_a^bf(x)dx = T_n + a_2h^2 + a_4h^4 +\cdots+ a_{2k}h^{2k} + {\cal O}\bigl(h^{2k+2}\bigr),$$ where $h=(b-a)/n$, the $a_{2j}$ are constants independent of $h$, and $f\in C^{2k+2}[a,b]$. Combining the two relations $$\eqalignno{ \int_a^bf(x)dx&= T_n + a_2h^2 + a_4h^4 + a_6h^6 + \cdots\cr \int_a^bf(x)dx&= T_{2n} + a_2\left({h\over2}\right)^2 + a_4 \left({h\over2}\right)^4 + a_6\left({h\over2}\right)^6 + \cdots\cr \noalign{\hbox{gives}} \int_a^bf(x)dx&= {4T_{2n}-T_n\over3} + a_4^{(1)} \left({h\over2}\right)^4 + a_6^{(1)} \left({h\over2}\right)^6 + \cdots= T_{2n}^{(1)} + a_4^{(1)} \left({h\over2}\right)^4 + a_6^{(1)} \left({h\over2}\right)^6 + \cdots.\cr \noalign{\hbox{Similarly}} \int_a^bf(x)dx&= {4T_{4n}-T_{2n}\over3} + a_4^{(1)} \left({h\over4}\right)^4 + a_6^{(1)} \left({h\over4}\right)^6 + \cdots= T_{4n}^{(1)} + a_4^{(1)} \left({h\over4}\right)^4 + a_6^{(1)} \left({h\over4}\right)^6 + \cdots.\cr}$$ Now combining these last two equations gives $$\int_a^bf(x)dx = {16T_{4n}^{(1)}-T_{2n}^{(1)}\over 15} + a_6^{(2)} \left({h\over4}\right)^6 + \cdots= T_{4n}^{(2)} + a_6^{(2)} \left({h\over4}\right)^6 + \cdots.$$ This process can be represented in a triangular table, where the entries in each column and row are more accurate than those in the preceding column or row: $$\matrix{{\cal O}(h^2)&{\cal O}(h^4)&{\cal O}(h^6)& {\cal O}(h^8)&\ldots\cr T_n\cr T_{2n}&T_{2n}^{(1)}\cr T_{4n}&T_{4n}^{(1)}& T_{4n}^{(2)}\cr T_{8n}&T_{8n}^{(1)}& T_{8n}^{(2)}& T_{8n}^{(3)}\cr \vdots&\vdots&\vdots&\vdots&\ddots\cr}\qquad T_{2^kn}^{(j)} = {2^{2j}T_{2^kn}^{(j-1)} - T_{2^{k-1}n}^{(j-1)}\over 2^{2j}-1},\quad 1\le j\le k.$$ {\bf Theorem}. If $f\in C^{2k+2}[a,b]$, then $\displaystyle \int_a^bf(x)dx = T_{2^kn}^{(k)} + {\cal O}\bigl(h^{2k+2}\bigr)$.
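The triangular table is straightforward to build: each new row halves $h$ (reusing all previously computed function values when forming $T_{2n}$ from $T_n$) and then extrapolates across the row with the recurrence above. A sketch starting from a single interval ($n=1$; names illustrative):

```python
import math

def romberg(f, a, b, k):
    # R[i][j] holds T_{2^i}^{(j)}; row i uses 2^i subintervals of [a, b].
    R = [[0.0] * (k + 1) for _ in range(k + 1)]
    R[0][0] = (b - a) / 2 * (f(a) + f(b))        # T_1
    n, h = 1, (b - a)
    for i in range(1, k + 1):
        h /= 2
        # T_{2n} = T_n / 2 + h_new * (sum of f at the new midpoints)
        R[i][0] = R[i - 1][0] / 2 + h * sum(
            f(a + (2 * m + 1) * h) for m in range(n))
        n *= 2
        for j in range(1, i + 1):
            # Richardson extrapolation: kills the h^{2j} error term.
            R[i][j] = (4**j * R[i][j - 1] - R[i - 1][j - 1]) / (4**j - 1)
    return R[k][k]

val = romberg(math.sin, 0.0, math.pi, 4)   # exact integral is 2
```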
\bigskip\hrule\medskip\noindent{\bf Adaptive quadrature}. Consider a problem like $$\int_{-1}^1 \left( 1-|x|^{1/10}\right)^{10}dx.$$ All the methods described so far will either miss the spike at $x=0$ entirely or require an extremely small $h$ throughout $[-1,1]$. Clearly the right approach is to ``adapt'' the step $h$ to the local behavior of the integrand. In general, an adaptive quadrature algorithm uses two rules, Rule 1 and Rule 2, with (error in Rule 2) $\ll$ (error in Rule 1). Rule 2 serves to estimate the error in Rule 1, and the step size $h$ in Rule 1 is adapted to keep that error under control. Suppose that $\int_a^bf(x)dx$ is required with absolute accuracy $\epsilon$, that $\int_a^wf(x)dx$ has been computed so far, and that an estimate $h$ exists for an appropriate local step at $w$. The acceptable error from $w$ to $w+h$ is then $\epsilon h/(b-a)$. The algorithm is: \medskip\settabs\+\hskip30pt&\hskip30pt&\cr \+\tt do while $w
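A concrete instance of the two-rule scheme is adaptive Simpson quadrature: Rule 1 is Simpson's rule on the current interval, Rule 2 is Simpson's rule on its two halves, and their difference divided by 15 estimates the error of Rule 2 (the same Richardson argument used in Romberg integration). The following recursive sketch is one common variant, not the specific algorithm these notes give; names are illustrative.

```python
import math

def adaptive_simpson(f, a, b, eps):
    # Rule 1: Simpson on [a, b].  Rule 2: Simpson on [a, m] + [m, b].
    # |Rule2 - Rule1| / 15 estimates the error of Rule 2.
    def simp(fa, fm, fb, a, b):
        return (b - a) / 6.0 * (fa + 4.0 * fm + fb)

    def recurse(a, b, fa, fm, fb, whole, eps):
        m = 0.5 * (a + b)
        lm, rm = 0.5 * (a + m), 0.5 * (m + b)
        flm, frm = f(lm), f(rm)
        left = simp(fa, flm, fm, a, m)
        right = simp(fm, frm, fb, m, b)
        if abs(left + right - whole) <= 15.0 * eps:
            # Accept, adding the Richardson correction term.
            return left + right + (left + right - whole) / 15.0
        # Otherwise split, giving each half half the error budget
        # (the epsilon*h/(b-a) apportioning described above).
        return (recurse(a, m, fa, flm, fm, left, 0.5 * eps) +
                recurse(m, b, fm, frm, fb, right, 0.5 * eps))

    m = 0.5 * (a + b)
    fa, fm, fb = f(a), f(m), f(b)
    return recurse(a, b, fa, fm, fb, simp(fa, fm, fb, a, b), eps)

val = adaptive_simpson(math.exp, 0.0, 1.0, 1e-9)   # exact: e - 1
```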