Numerical solutions of equations, integration in P2/P3
info
The formula booklet gives the Newton-Raphson formula and the trapezium/Simpson's rule. You
must know when each method is applicable and its limitations.
The sign change theorem tells us a root exists but says nothing about:
How many roots are in the interval (there could be 1, 3, 5, ...).
The exact location of the root.
warning
A sign change is sufficient (given continuity) but not necessary for a root. If f(x)=x², then
f(−1)=f(1)=1 (no sign change), but there is a root at x=0. Additionally, a sign change
could arise from a discontinuity rather than a root: f(x)=1/x has f(−1)=−1 and
f(1)=1, but no root.
Intuition. The sign change theorem is the Intermediate Value Theorem applied to the special case
of crossing zero. If you walk from a point below sea level to one above sea level, you must cross
sea level at some point — provided the ground is continuous (no teleporting).
Theorem. If g is continuously differentiable near a fixed point α and
∣g′(α)∣<1, then the iteration xn+1=g(xn) converges to α for all starting
values sufficiently close to α.
If ∣g′(α)∣>1, the iteration diverges.
Proof (linear convergence). Near α, by Taylor's theorem:
g(xn)=g(α)+g′(α)(xn−α)+O((xn−α)²)
Since g(α)=α:
xn+1−α=g′(α)(xn−α)+O((xn−α)²)
For xn close to α:
∣xn+1−α∣≈∣g′(α)∣⋅∣xn−α∣
If ∣g′(α)∣<1, then ∣xn+1−α∣<∣xn−α∣: the error shrinks, so the
iteration converges. If ∣g′(α)∣>1, the error grows and the iteration diverges.
■
Different rearrangements of f(x)=0 give different g(x), and some converge while others
diverge.
Example. Solve x³+x−1=0.
g(x)=1−x³: g′(x)=−3x². Near the root α≈0.68:
∣g′(α)∣≈1.39>1. Diverges.
g(x)=(1−x)^(1/3): g′(x)=−(1/3)(1−x)^(−2/3). Near α:
∣g′(α)∣≈0.72<1. Converges.
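These two rearrangements can be checked numerically. The sketch below (plain Python; the function names and the 200-step cut-off are illustrative choices, not part of the example) iterates the convergent rearrangement and confirms both the root and the size of g′ at the root:

```python
def fixed_point(g, x0, n):
    """Iterate x_{n+1} = g(x_n) a fixed number of times (a minimal sketch)."""
    x = x0
    for _ in range(n):
        x = g(x)
    return x

# Convergent rearrangement: g(x) = (1 - x)^(1/3)
g = lambda x: (1 - x) ** (1 / 3)
root = fixed_point(g, 0.5, 200)

# |g'(root)| = (1/3)(1 - root)^(-2/3), which should be about 0.72 < 1
gradient = (1 / 3) * (1 - root) ** (-2 / 3)
```

Running the same loop with g(x) = 1 − x³ instead would wander away from the root, matching the |g′(α)| ≈ 1.39 > 1 analysis above.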
tip
In exams, if a question asks you to show that a particular rearrangement converges, compute
g′(x) at the root and show ∣g′(α)∣<1. If asked why a rearrangement fails, show
∣g′(α)∣>1.
The fixed-point iteration xn+1=g(xn) can be visualised using the cobweb diagram. Plot
y=g(x) and y=x. Starting from x0 on the x-axis, go vertically to y=g(x0)=x1,
then horizontally to y=x, then vertically to y=g(x1)=x2, and so on.
If 0<g′(α)<1: the diagram forms a staircase stepping inward (monotone convergence).
If −1<g′(α)<0: the cobweb spirals inward, the iterates alternating sides of α (oscillatory convergence).
If ∣g′(α)∣>1: the staircase or cobweb moves outward (divergence).
The closer ∣g′(α)∣ is to zero, the faster the convergence. When g′(α)=0, the
iteration achieves quadratic convergence (similar to Newton-Raphson), since the leading error term
in the Taylor expansion vanishes.
Theorem. If f is twice continuously differentiable, f(α)=0, f′(α)≠0, and
x0 is sufficiently close to α, then Newton-Raphson converges quadratically:
∣xn+1−α∣≤C∣xn−α∣².
Intuition. Quadratic convergence means the number of correct decimal places roughly doubles
with each iteration. If you have 2 correct digits, the next iteration gives about 4, then 8,
then 16. This is dramatically faster than the linear convergence of fixed-point iteration (which
adds roughly a fixed number of correct digits per step).
When f′(xn)=0 at some iterate, the Newton-Raphson formula requires division by zero and the
method breaks down entirely. Even when f′(xn) is merely small, the step
xn+1=xn−f(xn)/f′(xn) becomes very large, sending the iterate far from the root.
Example. Let f(x)=x³−3x+3. Then f′(x)=3x²−3, so f′(1)=0. Starting at
x0=1:
The formula gives x1=1−f(1)/f′(1)=1−1/0, which is undefined.
Even starting at x0=0.9: f(0.9)=0.729−2.7+3=1.029, f′(0.9)=2.43−3=−0.57.
x1=0.9−1.029/(−0.57)=0.9+1.805=2.705.
The root is near α≈−2.10, so the iterate has been sent in the wrong direction. The
next iterate x2 will be pulled back, but convergence is erratic compared to a well-chosen
starting point.
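The breakdown above can be guarded against in code. Below is a minimal Newton-Raphson sketch (the function name and the 1e-14 threshold are arbitrary illustrative choices) that refuses to divide by a near-zero derivative:

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson with a guard against a (near-)horizontal tangent."""
    x = x0
    for _ in range(max_iter):
        fp = fprime(x)
        if abs(fp) < 1e-14:  # tangent essentially horizontal: the method breaks down
            raise ZeroDivisionError("f'(x) ~ 0; pick a different starting point")
        x_next = x - f(x) / fp
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Well-behaved case: x^3 + x - 1 = 0 from x0 = 1
root = newton(lambda x: x**3 + x - 1, lambda x: 3 * x**2 + 1, 1.0)

# The breakdown case from the example: f'(1) = 0 exactly
try:
    newton(lambda x: x**3 - 3 * x + 3, lambda x: 3 * x**2 - 3, 1.0)
    failed = False
except ZeroDivisionError:
    failed = True
```

The guard turns a silent division-by-zero (or a wild jump from a tiny derivative) into an explicit request for a better starting point.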
warning
Before applying Newton-Raphson, sketch the curve or evaluate f′(x0) to check that the
tangent is not close to horizontal near your starting point.
The quadratic convergence proof in Section 3.2 requires f′(α)≠0. When the root coincides
with a stationary point, so that f′(α)=0, convergence degrades from quadratic to
linear.
Theorem. If f(α)=0, f′(α)=0, f′′(α)≠0, and x0 is sufficiently
close to α, then Newton-Raphson converges linearly with rate 1/2:
∣xn+1−α∣≈(1/2)∣xn−α∣
Proof sketch. Expanding by Taylor's theorem about α and writing en=xn−α:
f(xn)≈(1/2)f′′(α)en² and f′(xn)≈f′′(α)en, so
xn+1−α=en−f(xn)/f′(xn)≈en−(1/2)en=(1/2)en.
So ∣xn+1−α∣→(1/2)∣xn−α∣ as xn→α: the error is halved each
step (linear convergence with rate 1/2), not squared. ■
Example. f(x)=(x−1)³ has a root at x=1 where f′(1)=0. The Newton-Raphson iteration
becomes:
xn+1=xn−(xn−1)³/(3(xn−1)²)=xn−(xn−1)/3=(2xn+1)/3
Starting at x0=4: x1=3, x2=7/3≈2.333, x3=17/9≈1.889,
x4=43/27≈1.593, x5=113/81≈1.395, ...
The error is multiplied by 2/3 each step (linear, not quadratic). Compare: standard Newton-Raphson
with f′(α)≠0 would give roughly 1, then 2, then 4, then 8 correct digits. Here each step
only adds a fixed fraction of a digit.
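The constant 2/3 error ratio can be verified directly (a quick numerical check, not exam material):

```python
# Newton-Raphson on f(x) = (x - 1)^3: the error should shrink by 2/3 per step
f = lambda x: (x - 1) ** 3
fp = lambda x: 3 * (x - 1) ** 2

x = 4.0
ratios = []
for _ in range(10):
    x_next = x - f(x) / fp(x)  # algebraically equals (2x + 1)/3
    ratios.append((x_next - 1) / (x - 1))
    x = x_next
```

Every recorded ratio equals 2/3 (up to rounding), confirming linear convergence at rate 2/3 for this triple root.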
(The negative sign in the trapezium error formula ET=−((b−a)h²/12)f′′(η), η∈(a,b),
arises from the exact derivation via the Euler-Maclaurin formula; the key point
is the h² scaling.) ■
Key consequence. The error is proportional to h²=(b−a)²/n². Doubling the number of strips
(n→2n) reduces the error by a factor of 4, since h→h/2 and h²→h²/4.
From the error formula, since ∣f′′(η)∣≤M for all η∈[a,b]:
∣ET∣≤(b−a)³M/(12n²)
This gives a guaranteed upper bound on the absolute error. If we require the error to satisfy
∣ET∣<ε, we need:
n>√((b−a)³M/(12ε))
Example. Approximate ∫₀¹ e^(−x²) dx with the trapezium rule. Here
f(x)=e^(−x²), so f′(x)=−2xe^(−x²) and f′′(x)=(4x²−2)e^(−x²). On [0,1]:
∣f′′(x)∣≤2 (achieved at x=0, where f′′(0)=−2).
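The example can be finished numerically. The sketch below implements the composite trapezium rule, compares against the exact value (expressible via the error function, `math.erf`), and checks the guaranteed bound with M = 2 and n = 10 strips (n = 10 is an illustrative choice):

```python
import math

def trapezium(f, a, b, n):
    """Composite trapezium rule with n strips."""
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

f = lambda x: math.exp(-x * x)
exact = math.sqrt(math.pi) / 2 * math.erf(1)  # integral of e^(-x^2) over [0, 1]

n = 10
estimate = trapezium(f, 0.0, 1.0, n)
bound = 2 * (1 - 0) ** 3 / (12 * n**2)  # (b - a)^3 M / (12 n^2) with M = 2
actual_error = abs(estimate - exact)
```

The actual error comfortably satisfies the guaranteed bound of 1/600 ≈ 0.00167, as the theory promises.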
Simpson's rule approximates f by quadratic arcs over pairs of strips. Over [x_{2k}, x_{2k+2}], a
unique quadratic passes through (x_{2k}, y_{2k}), (x_{2k+1}, y_{2k+1}), (x_{2k+2}, y_{2k+2}).
Integrating this quadratic gives the area (h/3)(y_{2k}+4y_{2k+1}+y_{2k+2}).
Intuition. Simpson's rule has error O(1/n⁴) compared to O(1/n²) for the trapezium rule.
Doubling the number of strips reduces the error by a factor of 16 for Simpson's rule vs. 4 for the
trapezium rule. This is because quadratic approximations match the curvature of the function much
better than linear ones.
tip
Simpson's rule requires an even number of strips. The trapezium rule works with any
number. Simpson's rule is exact for cubics (since the error depends on f⁽⁴⁾).
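Both claims in the tip can be checked: exactness for cubics, and the roughly 16-fold error reduction on doubling n (tested here on the same e^(−x²) integrand as above, with `math.erf` supplying the exact value; the strip counts are illustrative):

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule; n must be even."""
    if n % 2 != 0:
        raise ValueError("Simpson's rule needs an even number of strips")
    h = (b - a) / n
    odds = sum(f(a + i * h) for i in range(1, n, 2))
    evens = sum(f(a + i * h) for i in range(2, n, 2))
    return (h / 3) * (f(a) + 4 * odds + 2 * evens + f(b))

# Exact for cubics: integral of x^3 over [0, 2] is 4, recovered with only n = 2
cubic = simpson(lambda x: x**3, 0.0, 2.0, 2)

# Error drops by about 16x when n doubles
g = lambda x: math.exp(-x * x)
exact = math.sqrt(math.pi) / 2 * math.erf(1)
err8 = abs(simpson(g, 0.0, 1.0, 8) - exact)
err16 = abs(simpson(g, 0.0, 1.0, 16) - exact)
```

The ratio err8/err16 comes out close to 16, the O(1/n⁴) signature.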
Sign change (bisection): Use when you need a guaranteed, robust result and have a sign change.
Always converges but slowly (halving the interval each step). No derivative needed. Excellent for
initial bracketing before switching to a faster method.
Fixed-point iteration: Use when the equation is easily rearranged into x=g(x) and
∣g′(α)∣<1 can be verified. Simple to implement but may converge slowly or diverge
entirely depending on the rearrangement. Different rearrangements of the same equation can give
radically different convergence behaviour.
Newton-Raphson: Use when f′(x) is easy to compute and a good starting estimate is available.
Extremely fast (quadratic convergence) when it works, but can fail catastrophically if f′(xn) is
small or if the starting point is poor, and slows to linear convergence at a repeated root
(where f′(α)=0).
In practice, numerical software often combines methods:
Bracket the root using sign change to find an interval [a,b] containing the root.
Refine using Newton-Raphson or fixed-point iteration starting from the midpoint or a
favourable endpoint.
Verify the result by checking f(xn) is sufficiently close to zero.
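The three-step recipe above can be sketched as follows (illustrative only; the function name, step counts, and tolerances are arbitrary choices):

```python
def hybrid_solve(f, fprime, a, b, bisect_steps=5, tol=1e-12):
    """Bracket with bisection, refine with Newton-Raphson, then verify."""
    if f(a) * f(b) >= 0:
        raise ValueError("need a sign change on [a, b] to bracket a root")
    for _ in range(bisect_steps):      # 1. shrink the bracket by bisection
        m = (a + b) / 2
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    x = (a + b) / 2                    # 2. refine from the midpoint
    for _ in range(50):
        x = x - f(x) / fprime(x)
        if abs(f(x)) < tol:            # 3. verify f(x) is close to zero
            return x
    raise RuntimeError("did not converge")

root = hybrid_solve(lambda x: x**3 + x - 1, lambda x: 3 * x**2 + 1, 0.0, 1.0)
```

The few bisection steps guarantee Newton-Raphson starts close to the root, combining the robustness of one method with the speed of the other.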
info
Newton-Raphson is the method of choice when derivatives are available and the function is
well-behaved. Fixed-point iteration is useful when the problem naturally gives a contraction
mapping. Bisection is the reliable fallback when nothing else is guaranteed to work.
Newton-Raphson requires both f and f′ per step (two evaluations), but its quadratic convergence
means it needs far fewer steps in total. For high accuracy requirements, Newton-Raphson is
overwhelmingly more efficient despite the higher per-step cost.
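A rough count of steps makes the efficiency point concrete (same equation x³+x−1=0 as earlier; the 1e-10 tolerances are arbitrary illustrative choices):

```python
f = lambda x: x**3 + x - 1
fp = lambda x: 3 * x**2 + 1

# Bisection: halve [0, 1] until the bracket is narrower than 1e-10
a, b, bisect_steps = 0.0, 1.0, 0
while b - a > 1e-10:
    m = (a + b) / 2
    if f(a) * f(m) <= 0:
        b = m
    else:
        a = m
    bisect_steps += 1

# Newton-Raphson: iterate from 0.5 until |f(x)| < 1e-10
x, newton_steps = 0.5, 0
while abs(f(x)) > 1e-10:
    x = x - f(x) / fp(x)
    newton_steps += 1
```

Bisection needs 34 halvings to shrink the interval below 1e-10, while Newton-Raphson reaches comparable accuracy in a handful of steps.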
Problem 10
Explain why the sign change theorem does not guarantee a root of f(x)=1/(x−2) in the interval [1,3].
Details
Solution 10. f(1)=−1<0 and f(3)=1>0, so there is a sign change. However, f is not continuous on [1,3] — it has a vertical asymptote at x=2. The sign change theorem requires continuity, so it does not apply here. There is no root of 1/(x−2)=0.
If you get this wrong, revise: Limitations — Section 1.2.
Problem 11
Consider f(x)=(x−2)³. The root is α=2. Apply Newton-Raphson starting from x0=5.
Show that the convergence is linear, find the convergence rate, and explain why quadratic
convergence is lost.
Details
Solution 11. f(x)=(x−2)³, f′(x)=3(x−2)².
xn+1=xn−(xn−2)³/(3(xn−2)²)=xn−(xn−2)/3=(2xn+2)/3
Starting from x0=5: x1=4, x2=10/3≈3.333,
x3=26/9≈2.889, x4=70/27≈2.593.
Ratio: en+1/en=2/3 for all n. This is linear convergence with rate 2/3.
Quadratic convergence is lost because f′(α)=f′(2)=0. The root coincides with a stationary
point (here also an inflection point), violating the condition f′(α)≠0 in the quadratic convergence
theorem. As shown in Section 3.5, when f′(α)=0 and f′′(α)≠0, Newton-Raphson
degrades to linear convergence with rate 1/2. Here f′′(x)=6(x−2), so f′′(2)=0 as well
(triple root), giving rate 2/3 instead of 1/2. In general, at a root of multiplicity m,
Newton-Raphson converges linearly with rate (m−1)/m.
(b) f′′(x)=(6x²−2)/(1+x²)³. On [0,2], the numerator 6x²−2 is maximised at
x=2 where it equals 6(4)−2=22. The denominator (1+x²)³ is minimised at x=0 where
it equals 1. We need to maximise ∣f′′(x)∣.
Checking critical points: f′′′(x)=0 gives potential extrema of f′′. Alternatively, evaluate at
endpoints and critical points. f′′(0)=−2, f′′(1)=4/8=0.5, f′′(2)=22/125=0.176.
So M=max∣f′′∣=2, attained at x=0.
Problem 14
Newton-Raphson is applied to f(x)=x³−3x+2 (which has a double root at x=1 and a single
root at x=−2). Starting from x0=0.5, compute x1 and x2, and comment on the rate of
convergence towards x=1.
Details
Solution 14. f(0.5)=0.625 and f′(0.5)=−2.25, so x1=0.5−0.625/(−2.25)≈0.7778.
Then f(x1)≈0.1372 and f′(x1)≈−1.1852, giving x2≈0.8935.
The iterates are approaching x=1 but slowly. Errors: e0=0.5, e1≈0.2222,
e2≈0.1065. The ratio e2/e1≈0.48, and e1/e0≈0.44. This is
approximately linear convergence.
At the double root x=1: f(1)=0, f′(1)=0. Since f′(α)=0, quadratic convergence
is lost (as in Section 3.5). For a double root, Newton-Raphson converges linearly with rate
approximately 1/2.
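The rate-1/2 claim for this double root can be confirmed numerically (a quick check; the 12-step count is an arbitrary choice):

```python
# Newton-Raphson on f(x) = x^3 - 3x + 2 = (x - 1)^2 (x + 2) from x0 = 0.5
f = lambda x: x**3 - 3 * x + 2
fp = lambda x: 3 * x**2 - 3

x = 0.5
for _ in range(12):
    x_prev = x
    x = x - f(x) / fp(x)
late_ratio = abs(x - 1) / abs(x_prev - 1)  # error ratio after many steps
```

The early ratios (0.44, 0.48, ...) creep toward 1/2, and after a dozen steps the ratio is 1/2 to several decimal places.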
(b) Exact: 2/3≈0.6667. Actual error: ∣0.6667−0.6036∣=0.0631.
(c) f(x)=x^(1/2), f′(x)=(1/2)x^(−1/2), f′′(x)=−(1/4)x^(−3/2).
On (0,1]: ∣f′′(x)∣=(1/4)x^(−3/2), which is unbounded as x→0⁺. The error bound
requires f′′ to be bounded on [a,b], but ∣f′′(x)∣→∞ as x→0.
If we instead apply the bound on [ε,1] for small ε>0:
M=(1/4)ε^(−3/2), which blows up as ε→0. This illustrates a
limitation of the error bound: it requires f′′ to be bounded, which fails when f has a vertical
tangent at an endpoint.
If you get this wrong, revise: Error Bound — Section 4.3.
Details
Problem 16
For the equation cosx=x, let g(x)=cosx.
(a) Verify that a fixed point α exists in (0,π/2).
(b) Show that ∣g′(α)∣<1, and hence that the iteration xn+1=cosxn converges.
(c) Starting from x0=0.5, find x3 to 6 decimal places.
Details
Solution 16
(a) g(0)=cos0=1>0 and g(π/2)=cos(π/2)=0<π/2.
Since g is continuous and g(x)−x changes sign on (0,π/2) (check: g(0)−0=1>0,
g(π/2)−π/2=−π/2<0), a fixed point exists.
(b) g′(x)=−sinx. At the fixed point α≈0.7391:
∣g′(α)∣=sin(0.7391)≈0.6736<1. Converges.
(c) x1=cos(0.5)≈0.877583, x2=cos(x1)≈0.639012, x3=cos(x2)≈0.802685.
So x3≈0.802685 to 6 decimal places.
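The iteration of Problem 16 is easy to reproduce in a few lines; run long enough, it settles on the fixed point of cos (≈0.739085, sometimes called the Dottie number). The 100-step count is an arbitrary choice:

```python
import math

x = 0.5
history = [x]
for _ in range(100):
    x = math.cos(x)
    history.append(x)

x3 = history[3]      # the value asked for in part (c)
limit = history[-1]  # fixed point of cos, approx 0.739085
```

Because g′(α) ≈ −0.67 is negative, successive iterates alternate sides of the fixed point: the classic inward cobweb spiral.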
Problem 17
A student uses Newton-Raphson to solve f(x)=tanx−1=0 starting from x0=1.3.
(a) Compute x1.
(b) The root is α=π/4≈0.7854. Explain why starting at x0=1.3 may not be
ideal, and identify a value of x0 between 0 and π/2 where Newton-Raphson would fail
entirely.
Details
Solution 17
(a) f(1.3)=tan(1.3)−1≈2.6021 and f′(1.3)=sec²(1.3)≈13.975, so
x1=1.3−2.6021/13.975≈1.114.
This is still far from α=0.7854. The function is very steep here (large f′), so the step
is small but the iterate is far from the root.
(b) Newton-Raphson fails entirely when f′(x0)=0, i.e., sec²x0=0, which never happens since
sec²x≥1 for all x. However, f itself is undefined at x0=π/2, where tanx has a vertical
asymptote, so the method cannot even start there. As x0→π/2⁻, cosx0→0 and
f′(x0)→∞, while f(x0)=tanx0−1→∞. The step
f(x0)/f′(x0)=(tanx0−1)cos²x0 tends to zero, so progress from a starting point very close to
π/2 is extremely slow, and the extreme steepness of the function there makes the iteration
numerically unstable.
A better starting point is x0=1.0: f(1)=tan(1)−1≈0.5574,
f′(1)=sec²(1)≈3.426. x1=1−0.5574/3.426≈0.8373, already close to
α=0.7854.
Diagnostic Test
Ready to test your understanding of Numerical Methods? The diagnostic test contains the hardest questions within the A-Level specification for this topic, each with a full worked solution.
Unit tests probe edge cases and common misconceptions. Integration tests combine Numerical Methods with other pure mathematics topics to test synthesis under exam conditions.
See Diagnostic Guide for instructions on self-marking and building a personal test matrix.