
Numerical Methods

Board Coverage

| Board | Paper | Notes |
| --- | --- | --- |
| AQA | Paper 2 | Sign change, iteration, Newton-Raphson, Simpson's rule |
| Edexcel | P2 | Similar |
| OCR (A) | Paper 2 | Includes fixed-point iteration and convergence |
| CIE (9709) | P2, P3 | Numerical solutions of equations, integration in P2/P3 |
info

The formula booklet gives the Newton-Raphson formula and the trapezium/Simpson's rule. You must know when each method is applicable and its limitations.


1. Locating Roots: Sign Change

1.1 Sign change theorem

Theorem. If $f$ is continuous on $[a,b]$ and $f(a)$ and $f(b)$ have opposite signs, then there exists at least one root $\alpha \in (a,b)$ such that $f(\alpha) = 0$.

1.2 Limitations

The sign change theorem tells us a root exists but says nothing about:

  • How many roots are in the interval (there could be 1, 3, 5, ...).
  • The exact location of the root.
warning

A sign change is sufficient but not necessary for a root. If $f(x) = x^2$, then $f(-1) = f(1) = 1$ (no sign change), but there is a root at $x = 0$. Additionally, a sign change can arise from a discontinuity rather than a root: $f(x) = 1/x$ has $f(-1) = -1$ and $f(1) = 1$, but no root.

Intuition. The sign change theorem is the Intermediate Value Theorem applied to the special case of crossing zero. If you walk from a point below sea level to one above sea level, you must cross sea level at some point — provided the ground is continuous (no teleporting).
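
The sign change theorem leads directly to the bisection method: repeatedly halve a bracketing interval, keeping the half on which the sign change survives. The sketch below is illustrative only; the helper name `bisect`, the tolerance, and the test equation $x^3 - 2x - 5 = 0$ (Problem 1 in the problem set) are assumptions, not exam-board material.

```python
# Minimal sketch: locate a root by a sign change, then bisect.

def bisect(f, a, b, tol=1e-6, max_iter=100):
    """Halve [a, b] while keeping a sign change; return the midpoint."""
    fa = f(a)
    if fa * f(b) > 0:
        raise ValueError("No sign change on [a, b]: the theorem does not apply.")
    for _ in range(max_iter):
        m = (a + b) / 2
        fm = f(m)
        if fm == 0 or (b - a) / 2 < tol:
            return m
        if fa * fm < 0:
            b = m             # sign change survives on [a, m]
        else:
            a, fa = m, fm     # sign change survives on [m, b]
    return (a + b) / 2

# x^3 - 2x - 5 changes sign on [2, 3], so a root is guaranteed there.
print(bisect(lambda x: x**3 - 2*x - 5, 2, 3))   # ~2.094551
```
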


2. Fixed-Point Iteration

2.1 Method

To solve $f(x) = 0$, rewrite it as $x = g(x)$ and iterate:

$$x_{n+1} = g(x_n)$$

starting from an initial guess $x_0$. If the sequence converges to $\alpha$, then $f(\alpha) = 0$.

2.2 Convergence condition

Theorem. If $g$ is continuously differentiable near a fixed point $\alpha$ and $|g'(\alpha)| < 1$, then the iteration $x_{n+1} = g(x_n)$ converges to $\alpha$ for all starting values sufficiently close to $\alpha$.

If $|g'(\alpha)| > 1$, the iteration diverges.

Proof (linear convergence). Near $\alpha$, by Taylor's theorem:

$$g(x_n) = g(\alpha) + g'(\alpha)(x_n - \alpha) + O((x_n-\alpha)^2)$$

Since $g(\alpha) = \alpha$:

$$x_{n+1} - \alpha = g'(\alpha)(x_n - \alpha) + O((x_n-\alpha)^2)$$

For $x_n$ close to $\alpha$:

$$|x_{n+1} - \alpha| \approx |g'(\alpha)| \cdot |x_n - \alpha|$$

If $|g'(\alpha)| < 1$, then $|x_{n+1} - \alpha| < |x_n - \alpha|$: the error shrinks, so the iteration converges. If $|g'(\alpha)| > 1$, the error grows and the iteration diverges. $\blacksquare$

2.3 Rearrangement choices

Different rearrangements of $f(x) = 0$ give different $g(x)$, and some converge while others diverge.

Example. Solve $x^3 + x - 1 = 0$ (a numerical check of both rearrangements follows this list).

  • $g(x) = 1 - x^3$: $g'(x) = -3x^2$. Near the root $\alpha \approx 0.68$: $|g'(\alpha)| \approx 1.39 > 1$. Diverges.
  • $g(x) = \sqrt[3]{1-x}$: $g'(x) = \dfrac{-1}{3(1-x)^{2/3}}$. Near $\alpha$: $|g'(\alpha)| \approx 0.72 < 1$. Converges.
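
The contrast between the two rearrangements is easy to see numerically. The sketch below is illustrative: the helper name `fixed_point`, the starting value 0.7, and the step count are assumptions.

```python
# Iterate x_{n+1} = g(x_n) for the two rearrangements of x^3 + x - 1 = 0.

def fixed_point(g, x0, n_steps=10):
    xs = [x0]
    for _ in range(n_steps):
        xs.append(g(xs[-1]))
    return xs

bad = fixed_point(lambda x: 1 - x**3, 0.7)              # |g'(alpha)| ~ 1.39
good = fixed_point(lambda x: (1 - x) ** (1 / 3), 0.7)   # |g'(alpha)| ~ 0.72

print(bad[:5])    # oscillates further and further from the root near 0.6823
print(good[-1])   # settles near 0.6823, the root of x^3 + x - 1 = 0
```
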
tip

In exams, if a question asks you to show that a particular rearrangement converges, compute $g'(x)$ at the root and show $|g'(\alpha)| < 1$. If asked why a rearrangement fails, show $|g'(\alpha)| > 1$.

2.4 Geometric interpretation

The fixed-point iteration $x_{n+1} = g(x_n)$ can be visualised on a cobweb (staircase) diagram. Plot $y = g(x)$ and $y = x$. Starting from $x_0$ on the $x$-axis, go vertically to $y = g(x_0) = x_1$, then horizontally to $y = x$, then vertically to $y = g(x_1) = x_2$, and so on.

  • If $0 < g'(\alpha) < 1$: the path forms a staircase moving steadily towards $\alpha$ (monotone convergence).
  • If $-1 < g'(\alpha) < 0$: the path zigzags inward around $\alpha$ (oscillatory convergence).
  • If $|g'(\alpha)| > 1$: the staircase or zigzag moves outward (divergence).

The closer $|g'(\alpha)|$ is to zero, the faster the convergence. When $g'(\alpha) = 0$, the iteration achieves quadratic convergence (similar to Newton-Raphson), since the leading error term in the Taylor expansion vanishes.


3. Newton-Raphson Method

3.1 Derivation from the tangent line

To solve $f(x) = 0$, start from an estimate $x_n$ and draw the tangent to $y = f(x)$ at $x_n$. The tangent line is:

$$y - f(x_n) = f'(x_n)(x - x_n)$$

Setting $y = 0$ (where the tangent crosses the $x$-axis) and calling that crossing point $x_{n+1}$:

$$0 - f(x_n) = f'(x_n)(x_{n+1} - x_n)$$

$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$$

This is the Newton-Raphson formula.
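
Translated into code, the formula is only a few lines. The sketch below is illustrative: the function names, tolerance, and stopping rule are assumptions, not part of the formula booklet.

```python
# Minimal sketch of the Newton-Raphson iteration derived above.

def newton_raphson(f, f_prime, x0, tol=1e-10, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fpx = f_prime(x)
        if fpx == 0:
            raise ZeroDivisionError("Horizontal tangent: f'(x_n) = 0.")
        x_new = x - f(x) / fpx          # the Newton-Raphson formula
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: x^3 - 2x - 5 = 0 (also Problem 4 in the problem set).
print(newton_raphson(lambda x: x**3 - 2*x - 5, lambda x: 3*x**2 - 2, x0=2))  # ~2.0945515
```
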

3.2 Proof of local quadratic convergence

Theorem. If $f$ is twice continuously differentiable, $f(\alpha) = 0$, $f'(\alpha) \neq 0$, and $x_0$ is sufficiently close to $\alpha$, then Newton-Raphson converges quadratically: $|x_{n+1} - \alpha| \leq C|x_n - \alpha|^2$.

Proof (sketch). By Taylor's theorem about $x_n$:

$$0 = f(\alpha) = f(x_n) + f'(x_n)(\alpha - x_n) + \frac{f''(\xi)}{2}(\alpha - x_n)^2$$

for some $\xi$ between $\alpha$ and $x_n$. Dividing by $f'(x_n)$ and rearranging:

$$\frac{f(x_n)}{f'(x_n)} = (x_n - \alpha) - \frac{f''(\xi)}{2f'(x_n)}(x_n - \alpha)^2$$

$$x_n - \frac{f(x_n)}{f'(x_n)} = \alpha + \frac{f''(\xi)}{2f'(x_n)}(x_n - \alpha)^2$$

$$x_{n+1} - \alpha = \frac{f''(\xi)}{2f'(x_n)}(x_n - \alpha)^2$$

Taking absolute values: $|x_{n+1} - \alpha| = \dfrac{|f''(\xi)|}{2|f'(x_n)|}\,|x_n - \alpha|^2 \leq C|x_n - \alpha|^2$ for $x_n$ close enough to $\alpha$. $\blacksquare$

Intuition. Quadratic convergence means the number of correct decimal places roughly doubles with each iteration. If you have 2 correct digits, the next iteration gives about 4, then 8, then 16. This is dramatically faster than the linear convergence of fixed-point iteration (which adds roughly a fixed number of correct digits per step).

3.3 Failures

Newton-Raphson fails when:

  • $f'(x_n) = 0$ (the tangent is horizontal, so the formula divides by zero).
  • $f'(x_n)$ is close to zero (the next iterate jumps far away).
  • The starting point is not close enough to the root.
warning

If Newton-Raphson diverges or behaves erratically, it often helps to try a different starting point.

3.4 Horizontal tangent failure

When $f'(x_n) = 0$ at some iterate, the Newton-Raphson formula requires division by zero and the method breaks down entirely. Even when $f'(x_n)$ is merely small, the step $x_{n+1} = x_n - f(x_n)/f'(x_n)$ becomes very large, sending the iterate far from the root.

Example. Let $f(x) = x^3 - 3x + 3$. Then $f'(x) = 3x^2 - 3$, so $f'(1) = 0$. Starting at $x_0 = 1$:

The formula gives $x_1 = 1 - f(1)/f'(1) = 1 - 1/0$, which is undefined.

Even starting at $x_0 = 0.9$: $f(0.9) = 0.729 - 2.7 + 3 = 1.029$ and $f'(0.9) = 2.43 - 3 = -0.57$, so $x_1 = 0.9 - 1.029/(-0.57) = 0.9 + 1.805 = 2.705$.

The root is near $\alpha \approx -2.10$, so the iterate has been sent in the wrong direction. The next iterate $x_2$ will be pulled back, but convergence is erratic compared to a well-chosen starting point.
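
The jump above is easy to reproduce numerically. The one-step helper below is an assumed illustration, not an exam technique.

```python
# Reproduce the example: one Newton-Raphson step for f(x) = x^3 - 3x + 3.

def newton_step(f, f_prime, x):
    return x - f(x) / f_prime(x)

f = lambda x: x**3 - 3*x + 3
fp = lambda x: 3*x**2 - 3

print(newton_step(f, fp, 0.9))   # ~2.705: thrown in the wrong direction (root is near -2.10)
# newton_step(f, fp, 1.0) would raise ZeroDivisionError, since f'(1) = 0.
```
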

warning

Before using Newton-Raphson, check that the tangent is not close to horizontal near your starting point.

3.5 Slow convergence near inflection points

The quadratic convergence proof in Section 3.2 requires $f'(\alpha) \neq 0$. When the root is also a stationary point (for example a repeated root or a stationary point of inflection), so that $f'(\alpha) = 0$, convergence degrades from quadratic to linear.

Theorem. If $f(\alpha) = 0$, $f'(\alpha) = 0$, $f''(\alpha) \neq 0$, and $x_0$ is sufficiently close to $\alpha$, then Newton-Raphson converges linearly with rate $1/2$:

$$|x_{n+1} - \alpha| \approx \frac{1}{2}|x_n - \alpha|$$

Proof sketch. Expanding by Taylor's theorem to third order about $\alpha$:

$$f(x_n) = \frac{f''(\alpha)}{2}(x_n - \alpha)^2 + \frac{f'''(\alpha)}{6}(x_n - \alpha)^3 + O((x_n - \alpha)^4)$$

$$f'(x_n) = f''(\alpha)(x_n - \alpha) + \frac{f'''(\alpha)}{2}(x_n - \alpha)^2 + O((x_n - \alpha)^3)$$

Therefore:

$$x_{n+1} - \alpha = x_n - \alpha - \frac{f(x_n)}{f'(x_n)}$$

$$= (x_n - \alpha) - \frac{\frac{f''(\alpha)}{2}(x_n - \alpha)^2 + O((x_n - \alpha)^3)}{f''(\alpha)(x_n - \alpha) + O((x_n - \alpha)^2)}$$

$$= (x_n - \alpha) - \frac{\frac{f''(\alpha)}{2}(x_n - \alpha) + O((x_n - \alpha)^2)}{f''(\alpha) + O(x_n - \alpha)}$$

$$= (x_n - \alpha) - \frac{1}{2}(x_n - \alpha)\left(1 + O(x_n - \alpha)\right)$$

$$= \frac{1}{2}(x_n - \alpha) + O((x_n - \alpha)^2)$$

So $|x_{n+1} - \alpha| \approx \frac{1}{2}|x_n - \alpha|$ as $x_n \to \alpha$: the error is halved each step (linear convergence with rate $1/2$), not squared. $\blacksquare$

Example. $f(x) = (x-1)^3$ has a root at $x = 1$ where $f'(1) = 0$. The Newton-Raphson iteration becomes:

$$x_{n+1} = x_n - \frac{(x_n - 1)^3}{3(x_n - 1)^2} = x_n - \frac{x_n - 1}{3} = \frac{2x_n + 1}{3}$$

Starting at $x_0 = 4$: $x_1 = 3$, $x_2 = 7/3 \approx 2.333$, $x_3 = 17/9 \approx 1.889$, $x_4 = 43/27 \approx 1.593$, $x_5 = 113/81 \approx 1.395$, ...

The error is multiplied by $2/3$ each step (linear, not quadratic). Here $f''(1) = 0$ as well, so this is a triple root and the rate is $2/3$ rather than the $1/2$ of the theorem above, but the convergence is still linear. Compare: standard Newton-Raphson with $f'(\alpha) \neq 0$ would give roughly 1, then 2, then 4, then 8 correct digits. Here each step only adds a fixed fraction of a digit.
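
The linear rate is easy to verify numerically. The short loop below is an illustrative sketch that prints each iterate and its error.

```python
# Newton-Raphson for f(x) = (x - 1)^3: each step multiplies the error by 2/3.

x = 4.0
for n in range(6):
    print(n, x, x - 1)        # iterate x_n and its error x_n - 1
    x = (2 * x + 1) / 3       # the Newton step simplifies to this for (x - 1)^3

# Errors: 3, 2, 4/3, 8/9, 16/27, 32/81 -- the ratio is 2/3 at every step.
```
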


4. The Trapezium Rule

4.1 Formula

$$\int_a^b f(x)\,dx \approx \frac{h}{2}\left[y_0 + 2y_1 + \cdots + 2y_{n-1} + y_n\right]$$

where $h = (b-a)/n$ and $y_i = f(a+ih)$.
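
A direct transcription of this formula is shown below; the helper name `trapezium` and the test integral are illustrative assumptions.

```python
# Minimal composite trapezium rule, transcribing the formula above.
import math

def trapezium(f, a, b, n):
    h = (b - a) / n
    ys = [f(a + i * h) for i in range(n + 1)]            # y_0, ..., y_n
    return (h / 2) * (ys[0] + 2 * sum(ys[1:-1]) + ys[-1])

# Example: integral of ln x on [1, 4] with 6 strips (Problem 8 below).
print(trapezium(math.log, 1, 4, 6))   # ~2.5297; the exact value is 8 ln 2 - 3
```
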

4.2 Error analysis

Theorem. The error in the composite trapezium rule is:

$$E_T = -\frac{(b-a)^3}{12n^2}\,f''(\eta)$$

for some $\eta \in (a, b)$, provided $f$ is twice continuously differentiable on $[a, b]$.

Derivation. Consider a single strip $[x_i, x_{i+1}]$ of width $h$. The trapezium rule approximates $\int_{x_i}^{x_{i+1}} f(x)\,dx$ by the area of a trapezium:

$$\int_{x_i}^{x_{i+1}} f(x)\,dx \approx \frac{h}{2}\left[f(x_i) + f(x_{i+1})\right]$$

To find the error, expand $f$ about the midpoint $m_i = x_i + h/2$. Let $\delta = h/2$:

$$f(x_i) = f(m_i - \delta) = f(m_i) - \delta\, f'(m_i) + \frac{\delta^2}{2}f''(m_i) - \frac{\delta^3}{6}f'''(\zeta_1)$$

$$f(x_{i+1}) = f(m_i + \delta) = f(m_i) + \delta\, f'(m_i) + \frac{\delta^2}{2}f''(m_i) + \frac{\delta^3}{6}f'''(\zeta_2)$$

Adding these:

$$f(x_i) + f(x_{i+1}) = 2f(m_i) + \delta^2 f''(m_i) + O(h^3)$$

The trapezium approximation for this strip is:

$$\frac{h}{2}\left[f(x_i) + f(x_{i+1})\right] = h\,f(m_i) + \frac{h\,\delta^2}{2}f''(m_i) + O(h^4) = h\,f(m_i) + \frac{h^3}{8}f''(m_i) + O(h^4)$$

The exact integral over this strip is (by Taylor expansion of the integral):

$$\int_{x_i}^{x_{i+1}} f(x)\,dx = h\,f(m_i) + \frac{h^3}{24}f''(m_i) + O(h^5)$$

Therefore the error on a single strip is:

$$E_i = \frac{h}{2}\left[f(x_i) + f(x_{i+1})\right] - \int_{x_i}^{x_{i+1}} f(x)\,dx = \left(\frac{1}{8} - \frac{1}{24}\right)h^3 f''(m_i) + O(h^4) = \frac{h^3}{12}f''(m_i) + O(h^4)$$

Summing over all $n$ strips and applying the Intermediate Value Theorem to $f''$ (which is continuous), there exists $\eta \in (a, b)$ with $\sum_{i=0}^{n-1} f''(m_i) = n\,f''(\eta)$, so:

$$E_T = \sum_{i=0}^{n-1} E_i = \frac{h^3}{12}\sum_{i=0}^{n-1} f''(m_i) + O(n h^4) = \frac{n h^3}{12}\,f''(\eta) + O(n h^4)$$

Since $nh = b - a$, the leading term is $\dfrac{(b-a)h^2}{12}\,f''(\eta)$, i.e.

$$E_T = \frac{(b-a)^3}{12n^2}\,f''(\eta) + O(h^3)$$

(Here $E_i$ was defined as the approximation minus the exact integral; the theorem above measures the integral minus the approximation, which is where its minus sign comes from. Either way, the key point is the $h^2$ scaling.) $\blacksquare$

Key consequence. The error is proportional to $h^2 = (b-a)^2/n^2$. Doubling the number of strips ($n \to 2n$) reduces the error by a factor of 4, since $h \to h/2$ and $h^2 \to h^2/4$.
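
A quick numerical check of this scaling is sketched below. The integrand $e^{-x^2}$ is an assumed example, and the exact value comes from the error function.

```python
# Sketch of the h^2 scaling: doubling n cuts the trapezium error by roughly 4.
import math

def trapezium(f, a, b, n):
    h = (b - a) / n
    ys = [f(a + i * h) for i in range(n + 1)]
    return (h / 2) * (ys[0] + 2 * sum(ys[1:-1]) + ys[-1])

f = lambda x: math.exp(-x * x)
exact = math.sqrt(math.pi) / 2 * math.erf(1.0)   # exact value of the integral on [0, 1]
for n in (4, 8, 16, 32):
    print(n, abs(trapezium(f, 0, 1, n) - exact))  # each error is roughly a quarter of the previous
```
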

4.3 Error bound

From the error formula, if $|f''(x)| \leq M$ for all $x \in [a,b]$:

$$|E_T| \leq \frac{(b-a)^3}{12n^2}\,M$$

This gives a guaranteed upper bound on the absolute error. If we require the error to satisfy $|E_T| < \varepsilon$, we need:

$$n > \sqrt{\frac{(b-a)^3\, M}{12\,\varepsilon}}$$

Example. Approximate $\displaystyle\int_0^1 e^{-x^2}\,dx$ with the trapezium rule. Here $f(x) = e^{-x^2}$, so $f'(x) = -2x\,e^{-x^2}$ and $f''(x) = (4x^2 - 2)e^{-x^2}$. On $[0,1]$: $|f''(x)| \leq 2$ (achieved at $x = 0$, where $f''(0) = -2$).

For error $< 10^{-4}$:

$$n > \sqrt{\frac{1^3 \times 2}{12 \times 10^{-4}}} = \sqrt{\frac{2}{0.0012}} = \sqrt{1666.\bar{6}} \approx 40.8$$

So $n = 42$ strips suffice (rounding up to the nearest even number, which is convenient if one later wishes to compare with Simpson's rule).
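
The arithmetic for the required number of strips can be checked in a couple of lines; the variable names below are assumptions made for the sketch.

```python
# Solve (b - a)^3 * M / (12 n^2) < 1e-4 for n, with M = 2 on [0, 1].
import math

a, b, M, eps = 0.0, 1.0, 2.0, 1e-4
n_min = math.sqrt((b - a) ** 3 * M / (12 * eps))
print(n_min)              # ~40.8
print(math.ceil(n_min))   # 41 strips already guarantee the bound; 42 keeps n even
```
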


5. Simpson's Rule

5.1 Formula

For an even number $n$ of strips:

$$\int_a^b f(x)\,dx \approx \frac{h}{3}\left[y_0 + 4y_1 + 2y_2 + 4y_3 + 2y_4 + \cdots + 4y_{n-1} + y_n\right]$$

The coefficients follow the pattern $1, 4, 2, 4, 2, \ldots, 4, 1$.

5.2 Derivation

Simpson's rule approximates $f$ by quadratic arcs over pairs of strips. Over $[x_{2k}, x_{2k+2}]$, a unique quadratic passes through $(x_{2k}, y_{2k})$, $(x_{2k+1}, y_{2k+1})$ and $(x_{2k+2}, y_{2k+2})$. Integrating this quadratic gives the area $\dfrac{h}{3}(y_{2k} + 4y_{2k+1} + y_{2k+2})$.
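
The coefficient pattern can be transcribed directly; the sketch below is an illustrative implementation (the helper name `simpson` and the even-$n$ check are assumptions, not booklet material).

```python
# Minimal composite Simpson's rule with the 1, 4, 2, ..., 2, 4, 1 weights.
import math

def simpson(f, a, b, n):
    if n % 2 != 0:
        raise ValueError("Simpson's rule needs an even number of strips.")
    h = (b - a) / n
    ys = [f(a + i * h) for i in range(n + 1)]
    return (h / 3) * (ys[0] + ys[-1]
                      + 4 * sum(ys[1:-1:2])    # odd-index ordinates
                      + 2 * sum(ys[2:-1:2]))   # interior even-index ordinates

# Example: integral of e^{-x^2} on [0, 2] with 4 strips (Problem 5 below).
print(simpson(lambda x: math.exp(-x * x), 0, 2, 4))   # ~0.8818
```
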

5.3 Error bound

$$|E| \leq \frac{(b-a)^5}{180n^4}\,M$$

where $|f^{(4)}(x)| \leq M$ on $[a,b]$.

Intuition. Simpson's rule has error $O(1/n^4)$ compared to $O(1/n^2)$ for the trapezium rule. Doubling the number of strips reduces the error by a factor of 16 for Simpson's rule vs. 4 for the trapezium rule. This is because quadratic approximations match the curvature of the function much better than linear ones.

tip

Simpson's rule requires an even number of strips. The trapezium rule works with any number. Simpson's rule is exact for cubics (since the error depends on $f^{(4)}$).


6. Comparison of Methods

| Method | Convergence | Requirement | Speed |
| --- | --- | --- | --- |
| Sign change | N/A | Continuous $f$ | Locates only |
| Bisection | Linear | Sign change | Slow |
| Fixed-point | Linear | $\lvert g'\rvert < 1$ | Moderate |
| Newton-Raphson | Quadratic | $f'(\alpha)\neq 0$ | Fast |
| Trapezium rule | $O(1/n^2)$ | Any $f$ | Integration |
| Simpson's rule | $O(1/n^4)$ | Even $n$ strips | Integration |

7. Comparison of Root-Finding Methods

7.1 When to use each method

Sign change (bisection): Use when you need a guaranteed, robust result and have a sign change. Always converges, but slowly (halving the interval each step). No derivative needed. Excellent for initial bracketing before switching to a faster method.

Fixed-point iteration: Use when the equation is easily rearranged into $x = g(x)$ and $|g'(\alpha)| < 1$ can be verified. Simple to implement but may converge slowly or diverge entirely depending on the rearrangement. Different rearrangements of the same equation can give radically different convergence behaviour.

Newton-Raphson: Use when $f'(x)$ is easy to compute and a good starting estimate is available. Extremely fast (quadratic convergence) when it works, but can fail catastrophically if $f'(x_n)$ is small, if the starting point is poor, or if the root is a repeated root (as at a stationary point of inflection).

7.2 Practical strategy

In practice, numerical software often combines methods (a minimal sketch follows this list):

  1. Bracket the root using a sign change to find an interval $[a, b]$ containing the root.
  2. Refine using Newton-Raphson or fixed-point iteration starting from the midpoint or a favourable endpoint.
  3. Verify the result by checking that $f(x_n)$ is sufficiently close to zero.
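
A minimal sketch of this bracket-then-refine strategy is shown below; the helper name `hybrid_solve` and the switchover width of 0.5 are arbitrary assumptions.

```python
# Sketch: bisection narrows the bracket, Newton-Raphson finishes, then verify.

def hybrid_solve(f, f_prime, a, b, tol=1e-12):
    fa = f(a)
    assert fa * f(b) < 0, "Need a sign change to start."
    # Step 1: bisect until the bracket is reasonably small.
    while b - a > 0.5:
        m = (a + b) / 2
        if fa * f(m) < 0:
            b = m
        else:
            a, fa = m, f(m)
    # Step 2: Newton-Raphson from the midpoint of the final bracket.
    x = (a + b) / 2
    for _ in range(50):
        step = f(x) / f_prime(x)
        x -= step
        if abs(step) < tol:
            break
    # Step 3: verify that f(x) is close to zero.
    assert abs(f(x)) < 1e-8
    return x

print(hybrid_solve(lambda x: x**3 - 2*x - 5, lambda x: 3*x**2 - 2, 2, 3))  # ~2.0945515
```
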
info

Newton-Raphson is the method of choice when derivatives are available and the function is well-behaved. Fixed-point iteration is useful when the problem naturally gives a contraction mapping. Bisection is the reliable fallback when nothing else is guaranteed to work.

7.3 Cost comparison

| Method | Cost per step | Convergence rate | Total cost to reach tolerance $\varepsilon$ |
| --- | --- | --- | --- |
| Bisection | 1 evaluation | $1/2$ (linear) | $O(\log_2(1/\varepsilon))$ steps |
| Fixed-point | 1 evaluation | $\lvert g'(\alpha)\rvert$ (linear) | $O(\log_{1/\lvert g'\rvert}(1/\varepsilon))$ steps |
| Newton-Raphson | 2 evaluations | Quadratic | $O(\log_2 \log_2(1/\varepsilon))$ steps |

Newton-Raphson requires both $f$ and $f'$ per step (two evaluations), but its quadratic convergence means it needs far fewer steps in total. For high accuracy requirements, Newton-Raphson is overwhelmingly more efficient despite the higher per-step cost.


Problem Set

Problem 1 Show that $x^3 - 2x - 5 = 0$ has a root in the interval $[2, 3]$.

Solution 1 $f(x) = x^3 - 2x - 5$. $f(2) = 8 - 4 - 5 = -1 < 0$, $f(3) = 27 - 6 - 5 = 16 > 0$.

Since $f$ is continuous and changes sign on $[2,3]$, by the sign change theorem there is a root in $(2,3)$.

If you get this wrong, revise: Sign Change Theorem — Section 1.1.

Problem 2 Use fixed-point iteration $x_{n+1} = \sqrt[3]{2x_n + 5}$ with $x_0 = 2$ to find $x_3$. Give your answer to 4 decimal places.

Solution 2
$x_0 = 2.0000$
$x_1 = \sqrt[3]{9} = 2.0801$
$x_2 = \sqrt[3]{2(2.0801)+5} = \sqrt[3]{9.1602} = 2.0924$
$x_3 = \sqrt[3]{2(2.0924)+5} = \sqrt[3]{9.1848} = 2.0943$

If you get this wrong, revise: Fixed-Point Iteration — Section 2.

Problem 3 Show that the iteration $x_{n+1} = \dfrac{x_n^3 + 5}{2}$ for solving $x^3 - 2x - 5 = 0$ will not converge near the root.

Solution 3 $g(x) = \dfrac{x^3+5}{2}$, so $g'(x) = \dfrac{3x^2}{2}$.

Near the root $\alpha \approx 2.09$: $g'(\alpha) = \dfrac{3(2.09)^2}{2} \approx \dfrac{3 \times 4.37}{2} \approx 6.55 > 1$.

Since $|g'(\alpha)| > 1$, the iteration diverges near $\alpha$.

If you get this wrong, revise: Convergence Condition — Section 2.2.

Problem 4 Use Newton-Raphson with $x_0 = 2$ to find $x_2$ for $f(x) = x^3 - 2x - 5 = 0$. Give your answer to 4 decimal places.

Solution 4 $f'(x) = 3x^2 - 2$, so $x_{n+1} = x_n - \dfrac{x_n^3 - 2x_n - 5}{3x_n^2 - 2}$.

$x_0 = 2$: $f(2) = -1$, $f'(2) = 10$, so $x_1 = 2 - (-1/10) = 2.1000$.

$x_1 = 2.1$: $f(2.1) = 9.261 - 4.2 - 5 = 0.061$, $f'(2.1) = 13.23 - 2 = 11.23$, so $x_2 = 2.1 - 0.061/11.23 = 2.0946$.

If you get this wrong, revise: Newton-Raphson Method — Section 3.

Problem 5 Use Simpson's rule with 4 strips to approximate $\displaystyle\int_0^2 e^{-x^2}\,dx$ to 4 decimal places.

Solution 5 $h = 0.5$. Values: $y_0 = 1$, $y_1 = e^{-0.25} \approx 0.7788$, $y_2 = e^{-1} \approx 0.3679$, $y_3 = e^{-2.25} \approx 0.1054$, $y_4 = e^{-4} \approx 0.0183$.

$$\int_0^2 e^{-x^2}\,dx \approx \frac{0.5}{3}\left[1 + 4(0.7788) + 2(0.3679) + 4(0.1054) + 0.0183\right] = \frac{0.5}{3}\left[1 + 3.1152 + 0.7358 + 0.4216 + 0.0183\right] = \frac{0.5}{3}(5.2909) \approx 0.8818$$

If you get this wrong, revise: Simpson's Rule — Section 5.

Problem 6 Show that the equation $e^x = 3x$ has exactly two real roots, and locate them.

Solution 6 Let $f(x) = e^x - 3x$, so $f'(x) = e^x - 3$.

$f'(x) = 0 \implies x = \ln 3 \approx 1.099$.

$f(\ln 3) = 3 - 3\ln 3 \approx 3 - 3.296 = -0.296 < 0$.

Since $f''(x) = e^x > 0$, this is a global minimum. The minimum value is negative, and $f(x) \to \infty$ as $x \to \pm\infty$, so $f(x) = 0$ has exactly two roots.

$f(0) = 1 > 0$, $f(1) = e - 3 < 0$: root in $(0,1)$. $f(1) < 0$, $f(2) = e^2 - 6 > 0$: root in $(1,2)$.

If you get this wrong, revise: Sign Change Theorem — Section 1.

Problem 7 The Newton-Raphson formula fails when applied to $f(x) = x^{1/3}$ starting from $x_0 = 1$. Explain why.

Solution 7 $f(x) = x^{1/3}$, $f'(x) = \dfrac{1}{3}x^{-2/3}$.

$$x_{n+1} = x_n - \dfrac{x_n^{1/3}}{\frac{1}{3}x_n^{-2/3}} = x_n - 3x_n = -2x_n$$

So $x_1 = -2$, $x_2 = 4$, $x_3 = -8$, ... The iterates oscillate in sign and diverge.

The problem is that the tangent at the root $x = 0$ is vertical ($f'(x) \to \infty$ as $x \to 0$), so each Newton-Raphson step overshoots the root and doubles the distance from it.

If you get this wrong, revise: Failures — Section 3.3.

Problem 8 Use the trapezium rule with 6 strips to approximate $\displaystyle\int_1^4 \ln x\,dx$.

Solution 8 $h = 0.5$. Values: $\ln 1 = 0$, $\ln 1.5 \approx 0.4055$, $\ln 2 \approx 0.6931$, $\ln 2.5 \approx 0.9163$, $\ln 3 \approx 1.0986$, $\ln 3.5 \approx 1.2528$, $\ln 4 \approx 1.3863$.

$$\mathrm{Approx} = \frac{0.5}{2}\left[0 + 2(0.4055+0.6931+0.9163+1.0986+1.2528) + 1.3863\right] = 0.25\left[8.7326 + 1.3863\right] = 0.25 \times 10.1189 \approx 2.5297$$

(Exact: $[x\ln x - x]_1^4 = 4\ln 4 - 4 + 1 = 8\ln 2 - 3 \approx 2.5452$.)

If you get this wrong, revise: The Trapezium Rule — Section 4.

Problem 9 For the equation $x^3 + x - 3 = 0$, show that $x_{n+1} = \sqrt[3]{3 - x_n}$ converges near the root.

Solution 9 $f(1) = -1 < 0$, $f(1.2) = 1.728 + 1.2 - 3 = -0.072 < 0$, $f(1.3) = 2.197 + 1.3 - 3 = 0.497 > 0$.

So the root is near $\alpha \approx 1.21$.

$g(x) = (3-x)^{1/3}$, $g'(x) = -\dfrac{1}{3}(3-x)^{-2/3}$.

$$|g'(\alpha)| = \frac{1}{3}(3-1.21)^{-2/3} = \frac{1}{3}(1.79)^{-2/3} \approx \frac{1}{3 \times 1.474} \approx 0.226 < 1$$

Converges since $|g'(\alpha)| < 1$.

If you get this wrong, revise: Convergence Condition — Section 2.2.

Problem 10 Explain why the sign change theorem does not guarantee a root of $f(x) = \dfrac{1}{x-2}$ in the interval $[1, 3]$.

Solution 10 $f(1) = -1 < 0$ and $f(3) = 1 > 0$, so there is a sign change. However, $f$ is not continuous on $[1,3]$: it has a vertical asymptote at $x = 2$. The sign change theorem requires continuity, so it does not apply here. There is no root of $1/(x-2) = 0$.

If you get this wrong, revise: Limitations — Section 1.2.

Problem 11 Consider $f(x) = (x-2)^3$. The root is $\alpha = 2$. Apply Newton-Raphson starting from $x_0 = 5$. Show that the convergence is linear, find the convergence rate, and explain why quadratic convergence is lost.

Solution 11 $f(x) = (x-2)^3$, $f'(x) = 3(x-2)^2$.

$$x_{n+1} = x_n - \dfrac{(x_n-2)^3}{3(x_n-2)^2} = x_n - \dfrac{x_n - 2}{3} = \dfrac{2x_n + 2}{3}$$

Starting from $x_0 = 5$: $x_1 = 4$, $x_2 = 10/3 \approx 3.333$, $x_3 = 26/9 \approx 2.889$, $x_4 = 70/27 \approx 2.593$.

Errors: $e_0 = 3$, $e_1 = 2$, $e_2 = 4/3 \approx 1.333$, $e_3 = 8/9 \approx 0.889$, $e_4 = 16/27 \approx 0.593$.

Ratio: $e_{n+1}/e_n = 2/3$ for all $n$. This is linear convergence with rate $2/3$.

Quadratic convergence is lost because $f'(\alpha) = f'(2) = 0$. The root coincides with a stationary point (inflection point), violating the condition $f'(\alpha) \neq 0$ in the quadratic convergence theorem. As shown in Section 3.5, when $f'(\alpha) = 0$ and $f''(\alpha) \neq 0$, Newton-Raphson degrades to linear convergence with rate $1/2$. Here $f''(x) = 6(x-2)$, so $f''(2) = 0$ as well (triple root), giving rate $2/3$ instead of $1/2$.

If you get this wrong, revise: Slow Convergence Near Inflection Points — Section 3.5.

Problem 12 (a) Use the trapezium rule with $n = 4$ strips to approximate $\displaystyle\int_0^2 \frac{1}{1+x^2}\,dx$.

(b) Given that $f''(x) = \dfrac{6x^2 - 2}{(1+x^2)^3}$, find an upper bound $M$ for $|f''(x)|$ on $[0,2]$ and hence bound the error in your approximation.

Solution 12 (a) $h = 0.5$.

$y_0 = f(0) = 1$, $y_1 = f(0.5) = 1/1.25 = 0.8$, $y_2 = f(1) = 0.5$, $y_3 = f(1.5) = 1/3.25 \approx 0.3077$, $y_4 = f(2) = 0.2$.

$$\mathrm{Approx} = \frac{0.5}{2}\left[1 + 2(0.8 + 0.5 + 0.3077) + 0.2\right] = 0.25\left[1 + 2(1.6077) + 0.2\right] = 0.25 \times 4.4154 \approx 1.1039$$

(Exact value: $\arctan 2 \approx 1.1071$.)

(b) $f''(x) = \dfrac{6x^2 - 2}{(1+x^2)^3}$. Checking the endpoints and sample interior points: $f''(0) = -2$, $f''(1) = 4/8 = 0.5$, $f''(2) = 22/125 = 0.176$; the largest magnitude on $[0,2]$ is at $x = 0$.

So $M = 2$ works (taking $|f''(x)| \leq 2$ on $[0,2]$).

Error bound: $|E_T| \leq \dfrac{(2-0)^3}{12 \times 4^2} \times 2 = \dfrac{8}{192} \times 2 = \dfrac{1}{12} \approx 0.0833$.

The actual error is $|1.1071 - 1.1039| = 0.0032$, well within the bound.

If you get this wrong, revise: Error Analysis — Section 4.2 and Error Bound — Section 4.3.

Problem 13 The equation $e^{-x} = x$ has a single real root $\alpha$.

(a) Show that $\alpha \in (0.5, 0.7)$.

(b) Consider the rearrangement $x_{n+1} = e^{-x_n}$. Show that this iteration converges near $\alpha$.

(c) Consider the rearrangement $x_{n+1} = -\ln x_n$. Determine whether this converges near $\alpha$.

Solution 13 (a) $f(x) = e^{-x} - x$. $f(0.5) = e^{-0.5} - 0.5 \approx 0.6065 - 0.5 = 0.1065 > 0$. $f(0.7) = e^{-0.7} - 0.7 \approx 0.4966 - 0.7 = -0.2034 < 0$. By the sign change theorem, $\alpha \in (0.5, 0.7)$.

(b) $g_1(x) = e^{-x}$, $g_1'(x) = -e^{-x}$. At $\alpha \approx 0.567$: $|g_1'(\alpha)| = e^{-\alpha} = \alpha \approx 0.567 < 1$. Converges.

(c) $g_2(x) = -\ln x$, $g_2'(x) = -1/x$. At $\alpha \approx 0.567$: $|g_2'(\alpha)| = 1/\alpha \approx 1.763 > 1$. Diverges.

Both rearrangements solve the same equation, but only $x_{n+1} = e^{-x_n}$ converges near the root.

If you get this wrong, revise: Rearrangement Choices — Section 2.3.

Problem 14 Newton-Raphson is applied to $f(x) = x^3 - 3x + 2$ (which has a double root at $x = 1$ and a single root at $x = -2$). Starting from $x_0 = 0.5$, compute $x_1$ and $x_2$, and comment on the rate of convergence towards $x = 1$.

Solution 14 $f(x) = x^3 - 3x + 2 = (x-1)^2(x+2)$, $f'(x) = 3x^2 - 3$.

$x_0 = 0.5$: $f(0.5) = 0.125 - 1.5 + 2 = 0.625$, $f'(0.5) = 0.75 - 3 = -2.25$, so $x_1 = 0.5 - 0.625/(-2.25) = 0.5 + 0.2778 = 0.7778$.

$x_1 = 0.7778$: $f(0.7778) = 0.4703 - 2.3334 + 2 = 0.1369$, $f'(0.7778) = 1.8151 - 3 = -1.1849$, so $x_2 = 0.7778 - 0.1369/(-1.1849) = 0.7778 + 0.1155 = 0.8933$.

The iterates are approaching $x = 1$ but slowly. Errors: $e_0 = 0.5$, $e_1 \approx 0.2222$, $e_2 \approx 0.1067$. The ratios are $e_1/e_0 \approx 0.44$ and $e_2/e_1 \approx 0.48$, so the convergence is approximately linear.

At the double root $x = 1$: $f(1) = 0$ and $f'(1) = 0$. Since $f'(\alpha) = 0$, quadratic convergence is lost (as in Section 3.5). For a double root, Newton-Raphson converges linearly with rate approximately $1/2$.

If you get this wrong, revise: Slow Convergence Near Inflection Points — Section 3.5.

Problem 15 (a) Use the trapezium rule with $n = 2$ strips to approximate $\displaystyle\int_0^1 \sqrt{x}\,dx$.

(b) The exact value is $2/3$. Compute the actual error.

(c) Use the error bound formula to find a theoretical upper bound for the error with $n = 2$ strips.

Solution 15 (a) $h = 0.5$. $y_0 = \sqrt{0} = 0$, $y_1 = \sqrt{0.5} \approx 0.7071$, $y_2 = \sqrt{1} = 1$.

$$\mathrm{Approx} = \frac{0.5}{2}\left[0 + 2(0.7071) + 1\right] = 0.25 \times 2.4142 = 0.6036$$

(b) Exact: $2/3 \approx 0.6667$. Actual error: $|0.6667 - 0.6036| = 0.0631$.

(c) $f(x) = x^{1/2}$, $f'(x) = \frac{1}{2}x^{-1/2}$, $f''(x) = -\frac{1}{4}x^{-3/2}$.

On $(0, 1]$: $|f''(x)| = \frac{1}{4}x^{-3/2}$, which is unbounded as $x \to 0^+$. The error bound requires $f''$ to be bounded on $[a,b]$, but here $|f''(x)| \to \infty$ as $x \to 0^+$.

If we instead apply the bound on $[\varepsilon, 1]$ for small $\varepsilon > 0$: $M = \frac{1}{4}\varepsilon^{-3/2}$, which blows up as $\varepsilon \to 0$. This illustrates a limitation of the error bound: it requires $f''$ to be bounded, which fails when $f$ has a vertical tangent at an endpoint.

If you get this wrong, revise: Error Bound — Section 4.3.

Problem 16 For the equation $\cos x = x$, let $g(x) = \cos x$.

(a) Verify that a fixed point $\alpha$ exists in $(0, \pi/2)$.

(b) Show that $|g'(\alpha)| < 1$, and hence that the iteration $x_{n+1} = \cos x_n$ converges.

(c) Starting from $x_0 = 0.5$, find $x_3$ to 6 decimal places.

Solution 16 (a) Consider $g(x) - x = \cos x - x$, which is continuous. At $x = 0$: $\cos 0 - 0 = 1 > 0$. At $x = \pi/2$: $\cos(\pi/2) - \pi/2 = -\pi/2 < 0$. Since $g(x) - x$ changes sign on $(0, \pi/2)$, a fixed point exists there.

(b) $g'(x) = -\sin x$. At the fixed point $\alpha \approx 0.7391$: $|g'(\alpha)| = \sin(0.7391) \approx 0.6736 < 1$. Converges.

(c)
$x_0 = 0.500000$
$x_1 = \cos(0.5) = 0.877583$
$x_2 = \cos(0.877583) = 0.639012$
$x_3 = \cos(0.639012) = 0.802685$

If you get this wrong, revise: Convergence Condition — Section 2.2.

Problem 17 A student uses Newton-Raphson to solve $f(x) = \tan x - 1 = 0$ starting from $x_0 = 1.3$.

(a) Compute $x_1$.

(b) The root is $\alpha = \pi/4 \approx 0.7854$. Explain why starting at $x_0 = 1.3$ may not be ideal, and identify a value of $x_0$ between $0$ and $\pi/2$ where Newton-Raphson would fail entirely.

Solution 17 (a) $f(x) = \tan x - 1$, $f'(x) = \sec^2 x = 1/\cos^2 x$.

$x_0 = 1.3$: $f(1.3) = \tan(1.3) - 1 \approx 3.6021 - 1 = 2.6021$ and $f'(1.3) = \sec^2(1.3) = 1 + \tan^2(1.3) \approx 13.98$, so $x_1 = 1.3 - 2.6021/13.98 \approx 1.3 - 0.1862 = 1.1138$.

This is still far from $\alpha \approx 0.7854$. The function is very steep here (large $f'$), so each step is small and progress towards the root is slow.

(b) Newton-Raphson fails outright only when $f'(x_0) = 0$, i.e. $\sec^2 x_0 = 0$, which never happens since $\sec^2 x \geq 1$ for all $x$: there is no starting value in $(0, \pi/2)$ with a horizontal tangent. The practical difficulty is near $\pi/2$. As $x_0 \to \pi/2^-$, both $f(x_0)$ and $f'(x_0)$ blow up, the step $f(x_0)/f'(x_0) = (\tan x_0 - 1)\cos^2 x_0$ shrinks towards zero, and evaluating $\tan x$ so close to the asymptote is numerically unreliable, so convergence from such a starting point is very slow.

A better starting point is $x_0 = 1.0$: $f(1) = \tan 1 - 1 \approx 0.5574$, $f'(1) = \sec^2 1 \approx 3.426$, so $x_1 = 1 - 0.5574/3.426 \approx 0.8373$, already close to $\alpha \approx 0.7854$.

If you get this wrong, revise: Horizontal Tangent Failure — Section 3.4.



tip

Diagnostic Test

Ready to test your understanding of Numerical Methods? The diagnostic test contains the hardest questions within the A-Level specification for this topic, each with a full worked solution.

Unit tests probe edge cases and common misconceptions. Integration tests combine Numerical Methods with other pure mathematics topics to test synthesis under exam conditions.

See Diagnostic Guide for instructions on self-marking and building a personal test matrix.