Tests edge cases, boundary conditions, and common misconceptions for numerical methods.
UT-1: Newton-Raphson Divergence Near a Turning Point
Question:
The function f(x) = x³ − 2x + 2 has a root near x ≈ −1.77.
(a) Show that f(x) = 0 has exactly one real root.
(b) Apply the Newton-Raphson formula with initial value x₀ = 0. Compute x₁, x₂, and x₃. Describe the behaviour of the iteration.
(c) Explain why the iteration fails to converge, referring to the value of f′(x₀) and the geometry of the Newton-Raphson method.
(d) Find a value of x₀ for which the Newton-Raphson method does converge to the root.
[Difficulty: hard. Tests the classic failure case of Newton-Raphson where the iterates fall into a cycle, unable to cross a positive local minimum that separates them from the root.]
Solution:
(a) f′(x) = 3x² − 2 = 0 at x = ±√(2/3) ≈ ±0.816, giving a local maximum f(−0.816) ≈ 3.09 and a local minimum f(0.816) ≈ 0.91. Since the local minimum is positive (≈ 0.91), the graph crosses the x-axis only once (to the left of the local maximum, where f rises from −∞ to the maximum). So there is exactly one real root.
(b) x₁ = 0 − f(0)/f′(0) = 0 − 2/(−2) = 1; x₂ = 1 − f(1)/f′(1) = 1 − 1/1 = 0; x₃ = 1. The iteration oscillates between 0 and 1 and never converges.
(c) The iteration fails because at x₀ = 0 the tangent to the curve has gradient f′(0) = −2 and meets the x-axis at x = 1, on the opposite side of the local minimum from the root at x ≈ −1.77. The tangent at x = 1 then projects straight back to x = 0, so the iteration is trapped in a 2-cycle between x = 0 and x = 1.
More precisely, both iterates lie to the right of the local minimum at x ≈ 0.816, where f ≈ 0.91 > 0; this positive minimum acts as a barrier that the iteration never crosses, so the iterates never approach the root on the far side.
(d) Any x₀ < −√(2/3) ≈ −0.816 will converge: on this interval f is strictly increasing (f′ > 0) and concave down (f″ = 6x < 0), so each tangent meets the x-axis at or to the left of the root, and the iterates then increase monotonically to x ≈ −1.77. For example, x₀ = −2 gives x₁ = −1.8, x₂ ≈ −1.7699, converging rapidly.
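The trapped 2-cycle and the convergent choice of starting value are easy to check numerically. A minimal Python sketch (illustrative, not part of the original question; the function names are my own):

```python
def newton_iterates(f, fprime, x0, n):
    """Return the Newton-Raphson iterates [x0, x1, ..., xn]."""
    xs = [x0]
    for _ in range(n):
        x = xs[-1]
        xs.append(x - f(x) / fprime(x))
    return xs

f = lambda x: x**3 - 2*x + 2
fp = lambda x: 3*x**2 - 2

cycle = newton_iterates(f, fp, 0.0, 6)   # 0, 1, 0, 1, ... — trapped 2-cycle
good = newton_iterates(f, fp, -2.0, 6)   # converges to the root near -1.7693
```

From x₀ = 0 the iterates alternate 0, 1, 0, 1, …; from x₀ = −2 they settle on x ≈ −1.76929 within a few steps.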
UT-2: Fixed-Point Iteration — Convergence Depends on the Rearrangement
Question:
The equation x = g(x) is to be solved by fixed-point iteration xₙ₊₁ = g(xₙ), where g(x) = ½(x + 3/x).
(a) Show that the equation x = g(x) is equivalent to x² = 3, and hence state the positive root α.
(b) Compute x₁, x₂, x₃ starting from x₀ = 2.
(c) Verify the convergence criterion |g′(α)| < 1 at the root.
(d) A student proposes the rearrangement x = x² − 3 + x (i.e. g(x) = x² − 3 + x) to solve x² = 3. Show that this iteration diverges when x₀ = 2, and explain why by evaluating |g′(√3)|.
[Difficulty: hard. Tests the fixed-point iteration convergence theorem and the dependence of convergence on the choice of rearrangement.]
Solution:
(a) x = ½(x + 3/x) ⇔ 2x = x + 3/x ⇔ x = 3/x ⇔ x² = 3. The positive root is α = √3 ≈ 1.73205.
(b) x₁ = ½(2 + 3/2) = 1.75; x₂ = ½(1.75 + 3/1.75) ≈ 1.73214; x₃ ≈ 1.73205. The iteration converges very quickly to √3.
(c) g′(x) = ½(1 − 3/x²), so g′(√3) = ½(1 − 1) = 0. Since |g′(√3)| = 0 < 1, the fixed-point iteration converges (with quadratic convergence, since g′(α) = 0).
(d) g(x) = x² − 3 + x, so g′(x) = 2x + 1.
At x = √3: g′(√3) = 2√3 + 1 ≈ 4.464.
Since |g′(√3)| ≈ 4.464 > 1, the fixed-point theorem tells us this iteration diverges near the root.
Verification with x₀ = 2:
x₁ = 4 − 3 + 2 = 3
x₂ = 9 − 3 + 3 = 9
x₃ = 81 − 3 + 9 = 87
Clearly diverging.
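The contrast between the two rearrangements can be checked directly. A small Python sketch (illustrative, not part of the original solution):

```python
import math

def fixed_point(g, x0, n):
    """Return the iterates [x0, g(x0), g(g(x0)), ...]."""
    xs = [x0]
    for _ in range(n):
        xs.append(g(xs[-1]))
    return xs

good = fixed_point(lambda x: 0.5 * (x + 3 / x), 2.0, 5)  # g'(sqrt(3)) = 0
bad = fixed_point(lambda x: x**2 - 3 + x, 2.0, 3)        # |g'(sqrt(3))| > 1
```

The first iteration converges to √3 ≈ 1.73205 within a few steps; the second produces 2, 3, 9, 87, diverging immediately.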
UT-3: Trapezium Rule — Overestimate vs Underestimate
Question:
(a) Use the trapezium rule with 4 strips to estimate ∫₀² √x dx, giving your answer to 4 decimal places.
(b) Determine whether your estimate is an overestimate or underestimate, justifying your answer by examining the concavity of f(x) = √x.
(c) The exact value of the integral is (4√2)/3. Calculate the percentage error of the trapezium rule estimate.
(d) How many strips would be needed to guarantee that the trapezium rule estimate of ∫₀² √x dx is within 0.001 of the exact value? (The error bound for the trapezium rule with n strips on [a, b] is ((b − a)³/(12n²)) · max over [a, b] of |f″(x)|.)
[Difficulty: hard. Tests the trapezium rule with concavity analysis for error direction, and the error bound formula for determining required strip count.]
Solution:
(a) n = 4 strips, width h = (2 − 0)/4 = 0.5.

x:       0     0.5              1     1.5              2
y = √x:  0     1/√2 ≈ 0.7071    1     √1.5 ≈ 1.2247    √2 ≈ 1.4142

T₄ = (0.5/2)[0 + 2(0.7071 + 1 + 1.2247) + 1.4142]
= 0.25[0 + 5.8636 + 1.4142] = 0.25 × 7.2778 ≈ 1.8194
(b) f(x) = x^(1/2). f′(x) = (1/2)x^(−1/2). f″(x) = −(1/4)x^(−3/2).
For x ∈ (0, 2]: f″(x) = −(1/4)x^(−3/2) < 0.
Since f″(x) < 0 on (0, 2], the curve is concave down. The trapezium rule underestimates when the curve is concave down: the chords joining successive ordinates lie below the curve, so the trapezia omit the area between chord and curve. Consistent with this, T₄ ≈ 1.8194 is below the exact value (4√2)/3 ≈ 1.8856.
(c) The exact value is (4√2)/3 ≈ 1.8856, so the percentage error is |1.8194 − 1.8856|/1.8856 × 100% ≈ 3.5%.
(d) The error bound: |E| ≤ ((b − a)³/(12n²)) · max over [a, b] of |f″(x)|.
f″(x) = −(1/4)x^(−3/2), which is unbounded as x → 0⁺.
The maximum of |f″(x)| on [0, 2] does not exist (it diverges), so the error bound formula is not directly applicable: f″ is unbounded at x = 0.
However, on the subinterval [ε, 2] for any ε > 0: max |f″(x)| = (1/4)ε^(−3/2).
This illustrates a limitation of the trapezium rule error bound: it requires the second derivative to be bounded, which fails here. For integrands whose derivatives are singular at an endpoint, alternative methods or a change of variable are needed.
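A short Python sketch of the composite trapezium rule confirms both the value of T₄ and the underestimate (function names are my own, for illustration):

```python
import math

def trapezium(f, a, b, n):
    """Composite trapezium rule with n strips on [a, b]."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

t4 = trapezium(math.sqrt, 0, 2, 4)       # about 1.8195 with full-precision ordinates
exact = 4 * math.sqrt(2) / 3             # about 1.8856
error_pct = 100 * (exact - t4) / exact   # about 3.5%
```

Note that t4 < exact, as the concavity argument in (b) predicts.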
(c) f″(x) = 12x² − 24x + 16, so f″(1) = 12 − 24 + 16 = 4 > 0, and x = 1 is a local minimum.
f(1) = 1 − 4 + 8 − 8 + 3 = 0.
The point (1, 0) is a local minimum.
(d) Expanding, (x − 1)⁴ = x⁴ − 4x³ + 6x² − 4x + 1, while f(x) = x⁴ − 4x³ + 8x² − 8x + 3.
These are different, so the question's claim that f(x) = (x − 1)⁴ is incorrect. In fact f(x) = (x − 1)²(x² − 2x + 3), so x = 1 is a double root of f.
The actual stationary-point structure: f′(x) = 4x³ − 12x² + 16x − 8 = 4(x³ − 3x² + 4x − 2) = 4(x − 1)(x² − 2x + 2).
x² − 2x + 2 = (x − 1)² + 1 > 0 for all real x, so x = 1 is the only stationary point.
Newton-Raphson applied to f′(x) = 0 converges rapidly to the stationary point because f′(x) = 4(x − 1)((x − 1)² + 1) has a simple root at x = 1 (f″(1) = 4 ≠ 0), so the iteration function g satisfies g′(1) = 0, ensuring quadratic convergence near the root.
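Treating the stationary-point equation f′(x) = 0 as a root-finding problem, the rapid convergence can be verified numerically. A hedged Python sketch (the starting value x₀ = 2 is an arbitrary choice of mine):

```python
def newton_iterates(h, hprime, x0, n):
    """Newton-Raphson iterates for h(x) = 0."""
    xs = [x0]
    for _ in range(n):
        x = xs[-1]
        xs.append(x - h(x) / hprime(x))
    return xs

fp = lambda x: 4*x**3 - 12*x**2 + 16*x - 8   # f'(x)
fpp = lambda x: 12*x**2 - 24*x + 16          # f''(x)

xs = newton_iterates(fp, fpp, 2.0, 6)  # converges rapidly to x = 1
```

After only a handful of iterations the iterates agree with x = 1 to machine precision, as quadratic convergence predicts.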
(a) Show that the equation cos x = x has exactly one real root in the interval [0, π/2].
(b) Use the fixed-point iteration xₙ₊₁ = cos xₙ with x₀ = 0.5 to find the root to 5 decimal places.
(c) Explain why this iteration converges by examining |g′(x)| where g(x) = cos x.
(d) Use the Newton-Raphson method with x₀ = 0.5 to find the root to 5 decimal places. Compare the number of iterations required with the fixed-point method.
[Difficulty: hard. Combines the intermediate value theorem with two numerical methods, comparing their convergence rates.]
Solution:
(a) Let h(x) = cos x − x. We need h(x) = 0.
h(0) = 1 − 0 = 1 > 0.
h(π/2) = 0 − π/2 < 0.
By the intermediate value theorem, there is at least one root in [0, π/2].
h′(x) = −sin x − 1 < 0 for all x in [0, π/2] (since sin x ≥ 0 there).
Since h is strictly decreasing, there is at most one root. Combined with existence, there is exactly one root.
(b) x₀ = 0.5:
x₁ = cos(0.5) ≈ 0.87758
x₂ = cos(0.87758) ≈ 0.63901
x₃ = cos(0.63901) ≈ 0.80269
x₄ = cos(0.80269) ≈ 0.69478
x₅ = cos(0.69478) ≈ 0.76820
x₆ = cos(0.76820) ≈ 0.71917
x₇ = cos(0.71917) ≈ 0.75236
x₈ = cos(0.75236) ≈ 0.73012
x₉ = cos(0.73012) ≈ 0.74512
x₁₀ = cos(0.74512) ≈ 0.73501
This converges slowly (oscillating above and below the root). After approximately 25-30 iterations, xₙ ≈ 0.73909.
To 5 decimal places: α ≈ 0.73909.
(c) g(x) = cos x, g′(x) = −sin x.
At the root α ≈ 0.739: |g′(α)| = |−sin(0.739)| ≈ 0.674.
Since |g′(α)| ≈ 0.674 < 1, the fixed-point iteration converges (linearly, since 0 < |g′(α)| < 1).
(d) Newton-Raphson converges in 3 iterations (x₁ ≈ 0.75522, x₂ ≈ 0.73914, x₃ ≈ 0.73909) to 5 decimal places, compared with approximately 25-30 iterations for the fixed-point method. Newton-Raphson has quadratic convergence, while the fixed-point iteration converges only linearly here.
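The iteration-count comparison can be reproduced in a few lines of Python. A sketch (the stopping rule on successive differences is my assumption, not specified in the question):

```python
import math

def iterate_until(step, x0, tol=5e-6, max_iter=100):
    """Apply x -> step(x) until successive iterates differ by less than tol."""
    x, n = x0, 0
    while n < max_iter:
        x_new = step(x)
        n += 1
        if abs(x_new - x) < tol:
            return x_new, n
        x = x_new
    return x, n

# Fixed-point step and Newton-Raphson step for cos x = x
fp_root, fp_n = iterate_until(math.cos, 0.5)
nr_root, nr_n = iterate_until(
    lambda x: x - (math.cos(x) - x) / (-math.sin(x) - 1), 0.5)
```

Both runs reach the root α ≈ 0.73909, but Newton-Raphson does so in a handful of iterations while the fixed-point iteration needs on the order of thirty.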
IT-3: Estimating ∫₀¹ e^(−x²) dx Using the Trapezium Rule (with Integration)
Question:
The integral ∫₀¹ e^(−x²) dx cannot be evaluated in terms of elementary functions.
(a) Use the trapezium rule with 4 strips to estimate ∫₀¹ e^(−x²) dx, giving your answer to 5 decimal places.
(b) Use the trapezium rule with 8 strips and compare. Determine whether doubling the number of strips approximately quarters the error (as predicted by the error bound).
(c) Determine whether the trapezium rule overestimates or underestimates this integral, by examining the concavity of f(x) = e^(−x²).
(d) The Maclaurin series for e^(−x²) is 1 − x² + x⁴/2! − x⁶/3! + ⋯. Use the first four terms to estimate the integral, and compare with your trapezium rule estimate.
[Difficulty: hard. Combines the trapezium rule with series expansion to estimate a non-elementary integral, and error analysis via concavity.]
The exact value (to 5 d.p.) is ∫₀¹ e^(−x²) dx ≈ 0.74682.
Error with T₄: |0.74298 − 0.74682| = 0.00384.
Error with T₈: |0.74580 − 0.74682| = 0.00102.
Ratio: 0.00384/0.00102 ≈ 3.8, close to the predicted factor of 4. This confirms that doubling the number of strips approximately quarters the error.
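The error-quartering check can be reproduced numerically; with full-precision ordinates the ratio comes out at about 4.0 (a slightly lower ratio from the tabulated values reflects rounding of the estimates). A Python sketch:

```python
import math

def trapezium(f, a, b, n):
    """Composite trapezium rule with n strips on [a, b]."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

f = lambda x: math.exp(-x * x)
exact = math.sqrt(math.pi) / 2 * math.erf(1)  # integral of e^(-x^2) on [0,1]
t4 = trapezium(f, 0, 1, 4)
t8 = trapezium(f, 0, 1, 8)
ratio = (exact - t4) / (exact - t8)           # about 4, as the bound predicts
```

Both t4 and t8 come out below the exact value, consistent with the net underestimate discussed in (c).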
(c) f(x) = e^(−x²).
f′(x) = −2x e^(−x²).
f″(x) = −2e^(−x²) + 4x²e^(−x²) = e^(−x²)(4x² − 2).
f″(x) = 0 when 4x² = 2, i.e. x = 1/√2 ≈ 0.707.
For 0 ≤ x < 1/√2: 4x² − 2 < 0, so f″(x) < 0 (concave down).
For 1/√2 < x ≤ 1: 4x² − 2 > 0, so f″(x) > 0 (concave up).
Since the concavity changes within the interval, the trapezium rule will overestimate on the concave-down portion and underestimate on the concave-up portion. The net effect depends on the relative magnitudes. In this case, the trapezium rule underestimates overall (both T₄ and T₈ are below the exact value).
(d) Using the first four terms of the Maclaurin series: