So far, in this module of our course in Engineering Computations, you have learned to solve ordinary differential equations numerically, stepping in time with Euler's method.
You also learned that Euler's method is a first-order method: a Taylor series expansion shows that stepping in time with Euler makes an error—called the _truncation error_—proportional to the time increment, $\Delta t$.
In this lesson, you'll work with oscillating systems. Euler's method doesn't do very well with oscillating systems, but we'll show you a clever way to fix this. (The modified method is still first order, however.) We will also confirm the order of convergence by computing the error using different values of $\Delta t$.
As always, we will need our best-loved numerical Python libraries, and we'll also re-use the eulerstep() function from the previous lesson. So let's get that out of the way.
import numpy
from matplotlib import pyplot
%matplotlib inline
pyplot.rc('font', family='serif', size='14')
def eulerstep(state, rhs, dt):
    '''Update a state to the next time increment using Euler's method.

    Arguments
    ---------
    state : array of dependent variables
    rhs : function that computes the RHS of the DiffEq
    dt : float, time increment

    Returns
    -------
    next_state : array, updated after one time increment'''
    next_state = state + rhs(state) * dt
    return next_state
A prototypical mechanical system is a mass $m$ attached to a spring, in the simplest case without friction. The elastic constant of the spring, $k$, determines the restoring force it will apply to the mass when displaced by a distance $x$. The system then oscillates back and forth around its position of equilibrium.
#### Simple spring-mass system, without friction.

Newton's law applied to the frictionless spring-mass system is:
\begin{equation} -k x = m \ddot{x} \end{equation}

Introducing the parameter $\omega = \sqrt{k/m}$, the equation of motion is rewritten as:
\begin{equation} \ddot{x} + \omega^2 x = 0 \end{equation}

where a dot above a dependent variable denotes the time derivative. This is a second-order differential equation for the position $x$, having a known analytical solution that represents simple harmonic motion:
$x(t) = x_0 \cos(\omega t)$
The solution represents oscillations with period $P = 2 \pi/ \omega $ (the time between two peaks), and amplitude $x_0$.
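As a quick sanity check, we can verify numerically that $x(t) = x_0 \cos(\omega t)$ satisfies $\ddot{x} + \omega^2 x = 0$, using a central finite difference for the second derivative. This is just a sketch, assuming the parameter values $\omega=2$ and $x_0=2$ used later in this lesson:

```python
import numpy

ω = 2.0   # assumed parameter value, matching the example below
x0 = 2.0  # assumed initial position

t = numpy.linspace(0, 2*numpy.pi/ω, 201)  # one full period
x = x0*numpy.cos(ω*t)

# central-difference approximation of the second derivative
h = t[1] - t[0]
xdd = (x[2:] - 2*x[1:-1] + x[:-2]) / h**2

# residual of the equation of motion at the interior points:
# it should be small (it only reflects the finite-difference error)
residual = numpy.max(numpy.abs(xdd + ω**2 * x[1:-1]))
print(residual)
```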
It's useful to write a second-order differential equation as a set of two first-order equations: in this case, for position and velocity, respectively:
\begin{eqnarray} \dot{x} &=& v \nonumber\\ \dot{v} &=& -\omega^2 x \end{eqnarray}

Like we did in Lesson 2 of this module, we write the state of the system as a two-dimensional vector,
\begin{equation} \mathbf{x} = \begin{bmatrix} x \\ v \end{bmatrix}, \end{equation}

and the differential equation in vector form:

\begin{equation} \dot{\mathbf{x}} = \begin{bmatrix} v \\ -\omega^2 x \end{bmatrix}. \end{equation}

Several advantages come from writing the differential equation in vector form, both theoretical and practical. In the study of dynamical systems, for example, the state vector lives in a state space called the phase plane, and many things can be learned from studying solutions to differential equations graphically on a phase plane.
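For instance, for simple harmonic motion the exact trajectory in the phase plane is an ellipse: the quantity $\omega^2 x^2 + v^2$ (proportional to the total energy) stays constant in time. A short sketch, assuming the parameter values used later in this lesson:

```python
import numpy

ω = 2.0   # assumed parameter value
x0 = 2.0  # assumed initial position

t = numpy.linspace(0, 2*numpy.pi/ω, 100)
x = x0*numpy.cos(ω*t)        # exact position
v = -x0*ω*numpy.sin(ω*t)     # exact velocity

# ω²x² + v² is proportional to the total energy: it should be
# constant, so the (x, v) trajectory traces an ellipse
invariant = ω**2 * x**2 + v**2
print(invariant.min(), invariant.max())
```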
Practically, writing the equation in vector form results in more general, compact code. Let's write a function to obtain the right-hand side of the spring-mass differential equation, in vector form.
def springmass(state):
    '''Computes the right-hand side of the spring-mass differential
    equation, without friction.

    Arguments
    ---------
    state : array of two dependent variables [x v]^T

    Returns
    -------
    derivs: array of two derivatives [v -ω*ω*x]^T'''
    derivs = numpy.array([state[1], -ω**2*state[0]])
    return derivs
This worked example follows Reference [1], section 4.3 (note that the source is open access). We set the parameters of the system, choose a time increment equal to one-twentieth of the oscillation period, and decide to solve the motion for a duration equal to 3 periods.
ω = 2
period = 2*numpy.pi/ω
dt = period/20 # we choose 20 time intervals per period
T = 3*period # solve for 3 periods
N = round(T/dt)
print(N)
print(dt)
60
0.15707963267948966
Next, set up the time array and initial conditions, initialize the solution array with zero values, and assign the initial values to the first elements of the solution array.
t = numpy.linspace(0, T, N)
x0 = 2 # initial position
v0 = 0 # initial velocity
#initialize solution array
num_sol = numpy.zeros([N,2])
#Set initial conditions
num_sol[0,0] = x0
num_sol[0,1] = v0
We're ready to solve! Step through the time increments, calling the eulerstep() function with the springmass right-hand-side derivatives and the time increment as inputs.
for i in range(N-1):
    num_sol[i+1] = eulerstep(num_sol[i], springmass, dt)
Now, let's compute the position with respect to time using the known analytical solution, so that we can compare the numerical result with it. Below, we make a plot including both numerical and analytical values in our chosen time range.
x_an = x0*numpy.cos(ω * t)
# plot solution with Euler's method
fig = pyplot.figure(figsize=(6,4))
pyplot.plot(t, num_sol[:, 0], linewidth=2, linestyle='--', label='Numerical solution')
pyplot.plot(t, x_an, linewidth=1, linestyle='-', label='Analytical solution')
pyplot.xlabel('Time [s]')
pyplot.ylabel('$x$ [m]')
pyplot.title('Spring-mass system with Euler\'s method (dashed line).\n');
Yikes! The numerical solution exhibits a marked growth in amplitude over time, which certainly is not what the physical system displays. What is wrong with Euler's method?
Try repeating the calculation above using smaller values of the time increment, dt, and see if the results improve. Try dt=P/40, P/160 and P/2000.
Although the last case, with 2000 steps per oscillation, does look good enough, see what happens if you then increase the time of simulation, for example to 20 periods. Run the case again: what do you see now?
We consistently observe a growth in amplitude in the numerical solution, worsening over time. The solution does improve when we reduce the time increment dt (as it should), but the amplitude still displays unphysical growth for longer simulations.
The thing is, Euler's method has a fundamental problem with oscillatory systems. Look again at the approximation made by Euler's method to get the position at the next time interval:
\begin{equation} x(t_i+\Delta t) \approx x(t_i) + v(t_i) \Delta t \end{equation}

It uses the velocity value at the beginning of the time interval to step the solution to the future.
A graphical explanation can help here. Remember that the derivative of a function corresponds to the slope of the tangent at a point. Euler's method approximates the derivative using the slope at the initial point in an interval, and advances the numerical position with that initial velocity. The sketch below illustrates two consecutive Euler steps on a function with high curvature.
#### Sketch of two Euler steps on a curved function.

Since Euler's method makes a linear approximation to project the solution into the future, assuming the value of the derivative at the start of the interval, it's not very good on oscillatory functions.
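The growth in amplitude can also be quantified. Combining the two state variables into the complex number $z = x + i\,v/\omega$ turns the system into $\dot{z} = -i\omega z$, and one Euler step multiplies $z$ by $(1 - i\omega\Delta t)$, a factor with magnitude $\sqrt{1 + \omega^2\Delta t^2} > 1$. A small sketch, using the parameter values from the example above:

```python
import numpy

ω = 2.0
dt = (2*numpy.pi/ω)/20   # 20 steps per period, as in the example above

# one Euler step multiplies z = x + i v/ω by (1 - iωΔt),
# whose magnitude is always greater than 1
growth_per_step = abs(1 - 1j*ω*dt)
print(growth_per_step)       # slightly above 1
print(growth_per_step**20)   # amplitude growth after one full period
```

With 20 steps per period the amplitude grows by a factor of roughly 2.5 every period, which matches the rapid growth in the plot. Refining $\Delta t$ shrinks the per-step factor, but it never reaches exactly 1, so the growth slows down without ever disappearing.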
A clever idea that improves on Euler's method is to use the updated value of the derivatives for the second equation.
Pure Euler's method applies:
\begin{eqnarray} x(t_0) = x_0, \qquad x_{i+1} &=& x_i + v_i \Delta t \nonumber\\ v(t_0) = v_0, \qquad v_{i+1} &=& v_i - {\omega}^2 x_i \Delta t \end{eqnarray}

What if in the equation for $v$ we used the value $x_{i+1}$ that was just computed? Like this:
\begin{eqnarray} x(t_0) = x_0, \qquad x_{i+1} &=& x_i + v_i \Delta t \nonumber\\ v(t_0) = v_0, \qquad v_{i+1} &=& v_i - {\omega}^2 x_{i+1} \Delta t \end{eqnarray}

Notice the $x_{i+1}$ on the right-hand side of the second equation: that's the updated value, giving the acceleration at the end of the time interval. This modified scheme is called the Euler-Cromer method, to honor clever Mr Cromer, who came up with the idea [2].
Let's see what it does. Study the function below carefully—it helps a lot if you write things out on a piece of paper!
def euler_cromer(state, rhs, dt):
    '''Update a state to the next time increment using the Euler-Cromer method.

    Arguments
    ---------
    state : array of dependent variables
    rhs : function that computes the RHS of the DiffEq
    dt : float, time increment

    Returns
    -------
    next_state : array, updated after one time increment'''
    mid_state = state + rhs(state)*dt  # Euler step
    mid_derivs = rhs(mid_state)        # updated derivatives
    next_state = numpy.array([mid_state[0], state[1] + mid_derivs[1]*dt])
    return next_state
We've copied the whole problem set-up below, to get the solution in one code cell, for easy trial with different parameter choices. Try it out!
ω = 2
period = 2*numpy.pi/ω
dt = period/200 # 200 time intervals per period
T = 800*period  # simulation time: 800 periods
N = round(T/dt)
print('The number of time steps is {}.'.format( N ))
print('The time increment is {}'.format( dt ))
# time array
t = numpy.linspace(0, T, N)
x0 = 2 # initial position
v0 = 0 # initial velocity
#initialize solution array
num_sol = numpy.zeros([N,2])
#Set initial conditions
num_sol[0,0] = x0
num_sol[0,1] = v0
for i in range(N-1):
    num_sol[i+1] = euler_cromer(num_sol[i], springmass, dt)
The number of time steps is 160000.
The time increment is 0.015707963267948967
Recompute the analytical solution, and plot it alongside the numerical one, when you're ready. We computed a crazy number of oscillations, so we'll need to choose the time range to plot carefully.

First, get the analytical solution. Then we plot the first few periods of the oscillatory motion: numerical and analytical.
x_an = x0*numpy.cos(ω * t) # analytical solution
iend = 800 # in number of time steps
fig = pyplot.figure(figsize=(6,4))
pyplot.plot(t[:iend], num_sol[:iend, 0], linewidth=2, linestyle='--', label='Numerical solution')
pyplot.plot(t[:iend], x_an[:iend], linewidth=1, linestyle='-', label='Analytical solution')
pyplot.xlabel('Time [s]')
pyplot.ylabel('$x$ [m]')
pyplot.title('Spring-mass system, with Euler-Cromer method.\n');
The plot shows that Euler-Cromer does not have the problem of growing amplitudes. We're pretty happy with it in that sense.
But if we plot the end of a long period of simulation, you can see that it does start to deviate from the analytical solution.
istart = 400
fig = pyplot.figure(figsize=(6,4))
pyplot.plot(t[-istart:], num_sol[-istart:, 0], linewidth=2, linestyle='--', label='Numerical solution')
pyplot.plot(t[-istart:], x_an[-istart:], linewidth=1, linestyle='-', label='Analytical solution')
pyplot.xlabel('Time [s]')
pyplot.ylabel('$x$ [m]')
pyplot.title('Spring-mass system, with Euler-Cromer method. \n');
Looking at the last few oscillations in a very long run shows a slight phase difference, even with a very small time increment. So although the Euler-Cromer method fixes a big problem with Euler's method, it still has some error. It's still a first-order method!
You'll often find the presentation of the Euler-Cromer method with the reverse order of the equations, i.e., the velocity equation solved first, then the position equation solved with the updated value of the velocity. This makes no difference in the results: it's just a convention among physicists.
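The velocity-first variant can be sketched as follows, hard-coding the spring-mass right-hand side and assuming $\omega = 2$ as in the examples above:

```python
import numpy

ω = 2.0  # assumed parameter value, matching the examples above

def euler_cromer_v_first(state, dt):
    '''One Euler-Cromer step with the velocity equation solved first.
    state : array [x, v] for the spring-mass system.'''
    x, v = state
    v_new = v - ω**2 * x * dt  # velocity first...
    x_new = x + v_new * dt     # ...then position, using the new velocity
    return numpy.array([x_new, v_new])

# sanity run: the amplitude stays bounded over many periods
dt = (2*numpy.pi/ω)/200
state = numpy.array([2.0, 0.0])
for i in range(20*200):        # 20 full periods
    state = euler_cromer_v_first(state, dt)
print(state)
```

Like the position-first version, this scheme keeps the oscillation amplitude bounded: the position never strays far beyond the initial amplitude of 2.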
The Euler-Cromer method is equivalent to a semi-implicit Euler method.
We've said that both Euler's method and the Cromer variant are first-order accurate: the error goes as the first power of $\Delta t$. In Lesson 2 of this module, we showed this using a Taylor series. Let's now confirm it numerically.
Because simple harmonic motion has a known analytical function that solves the differential equation, we can directly compute a measure of the error made by the numerical solution.
Suppose we ran a numerical solution in the interval from $t_0$ to $T=N\Delta t$. We could then compute the error, as follows:
\begin{equation} e = x_N - x_0 \cos(\omega T) \end{equation}

where $x_N$ represents the numerical solution at the $N$-th time step.
How could we confirm the order of convergence of a numerical method? In the lucky scenario of having an analytical solution to directly compute the error, all we need to do is solve numerically with different values of $\Delta t$ and see if the error really varies linearly with this parameter.
In the code cell below, we compute the numerical solution with different time increments. We use two nested for-statements: one iterates over the values of $\Delta t$, and the other iterates over the time steps from the initial condition to the final time. We save the results in a new variable called num_sol_time, which is an array of arrays. Check it out!
dt_values = numpy.array([period/50, period/100, period/200, period/400])
T = 1*period

num_sol_time = numpy.empty_like(dt_values, dtype=numpy.ndarray)

for j, dt in enumerate(dt_values):
    N = int(T/dt)
    t = numpy.linspace(0, T, N)

    #initialize solution array
    num_sol = numpy.zeros([N,2])

    #Set initial conditions
    num_sol[0,0] = x0
    num_sol[0,1] = v0

    for i in range(N-1):
        num_sol[i+1] = eulerstep(num_sol[i], springmass, dt)

    num_sol_time[j] = num_sol.copy()
We will need to compute the error with our chosen norm, so let's write a function for that. It includes a line to obtain the values of the analytical solution at the needed instant of time, and then it takes the difference with the numerical solution to compute the error.
def get_error(num_sol, T):
    x_an = x0 * numpy.cos(ω * T) # analytical solution at final time
    error = numpy.abs(num_sol[-1,0] - x_an)
    return error
All that is left to do is to call the error function with our chosen values of $\Delta t$, and plot the results. A logarithmic scale on the plot confirms close to linear scaling between error and time increment.
error_values = numpy.empty_like(dt_values)

for j in range(len(dt_values)):
    error_values[j] = get_error(num_sol_time[j], T)

# plot the solution errors with respect to the time increment
fig = pyplot.figure(figsize=(6,6))
pyplot.loglog(dt_values, error_values, 'ko-') #log-log plot
pyplot.loglog(dt_values, 10*dt_values, 'k:')
pyplot.grid(True)    #turn on grid lines
pyplot.axis('equal') #make axes scale equally
pyplot.xlabel('$\Delta t$')
pyplot.ylabel('Error')
pyplot.title('Convergence of the Euler method (dotted line: slope 1)\n');
What do you see in the plot of the error as a function of $\Delta t$? It looks like a straight line, with a slope close to 1. On a log-log convergence plot, a slope of 1 indicates that we have a first-order method: the error scales as ${\mathcal O}(\Delta t)$—using the "big-O" notation. It means that the error is proportional to the time increment: $ e \propto \Delta t.$
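The slope can also be estimated directly from the computed errors: between two runs, the observed order of convergence is $\log(e_1/e_2)/\log(\Delta t_1/\Delta t_2)$. A self-contained sketch (re-defining the helper functions so the cell runs on its own):

```python
import numpy

ω = 2.0
x0, v0 = 2.0, 0.0
period = 2*numpy.pi/ω
T = period

def eulerstep(state, rhs, dt):
    return state + rhs(state)*dt

def springmass(state):
    return numpy.array([state[1], -ω**2*state[0]])

# error at the final time, for a sequence of decreasing time increments
dt_values = [period/100, period/200, period/400]
errors = []
for dt in dt_values:
    N = round(T/dt)
    state = numpy.array([x0, v0])
    for i in range(N):
        state = eulerstep(state, springmass, dt)
    errors.append(abs(state[0] - x0*numpy.cos(ω*N*dt)))

# observed order between successive runs: log(e1/e2) / log(dt1/dt2)
orders = [numpy.log(errors[i]/errors[i+1]) / numpy.log(dt_values[i]/dt_values[i+1])
          for i in range(len(errors)-1)]
print(orders)
```

Each entry should come out close to 1, confirming first-order convergence without reading a slope off the plot.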
Another improvement on Euler's method is achieved by stepping the numerical solution to the midpoint of a time interval, computing the derivatives there, and then going back and updating the system state using the midpoint derivatives. This is called modified Euler's method.
If we write the vector form of the differential equation as:
\begin{equation} \dot{\mathbf{x}} = f(\mathbf{x}), \end{equation}then modified Euler's method is: \begin{align} \mathbf{x}_{n+1/2} & = \mathbf{x}_n + \frac{\Delta t}{2} f(\mathbf{x}_n) \\ \mathbf{x}_{n+1} & = \mathbf{x}_n + \Delta t \,\, f(\mathbf{x}_{n+1/2}). \end{align}
We can now write a Python function to update the state using this method. It's equivalent to a so-called Runge-Kutta second-order method, so we call it rk2_step().
def rk2_step(state, rhs, dt):
    '''Update a state to the next time increment using modified Euler's method.

    Arguments
    ---------
    state : array of dependent variables
    rhs : function that computes the RHS of the DiffEq
    dt : float, time increment

    Returns
    -------
    next_state : array, updated after one time increment'''
    mid_state = state + rhs(state) * dt*0.5
    next_state = state + rhs(mid_state)*dt
    return next_state
Let's see how it performs with our spring-mass model.
dt_values = numpy.array([period/50, period/100, period/200, period/400])
T = 1*period

num_sol_time = numpy.empty_like(dt_values, dtype=numpy.ndarray)

for j, dt in enumerate(dt_values):
    N = int(T/dt)
    t = numpy.linspace(0, T, N)

    #initialize solution array
    num_sol = numpy.zeros([N,2])

    #Set initial conditions
    num_sol[0,0] = x0
    num_sol[0,1] = v0

    for i in range(N-1):
        num_sol[i+1] = rk2_step(num_sol[i], springmass, dt)

    num_sol_time[j] = num_sol.copy()
error_values = numpy.empty_like(dt_values)

for j in range(len(dt_values)):
    error_values[j] = get_error(num_sol_time[j], T)
# plot of convergence for modified Euler's
fig = pyplot.figure(figsize=(6,6))
pyplot.loglog(dt_values, error_values, 'ko-')
pyplot.loglog(dt_values, 5*dt_values**2, 'k:')
pyplot.grid(True)
pyplot.axis('equal')
pyplot.xlabel('$\Delta t$')
pyplot.ylabel('Error')
pyplot.title('Convergence of modified Euler\'s method (dotted line: slope 2)\n');
The convergence plot, in this case, does look close to a slope-2 line. Modified Euler's method is second-order accurate: the effect of computing the derivatives (slope) at the midpoint of the time interval, instead of the starting point, is to increase the accuracy by one order!
Using the derivatives at the midpoint of the time interval is equivalent to using the average of the derivatives at $t$ and $t+\Delta t$: this corresponds to a second-order Runge-Kutta method, or RK2, for short. Combining derivatives evaluated at different points in the time interval is the key to Runge-Kutta methods that achieve higher orders of accuracy.
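That averaged-derivative form is known as Heun's method. A minimal sketch (a second-order variant equivalent to rk2_step() for linear systems, not the method defined above), tried on the spring-mass system with the same assumed parameters:

```python
import numpy

def heun_step(state, rhs, dt):
    '''One step of Heun's method: average the derivatives evaluated at
    the start of the interval and at a full Euler prediction.'''
    predicted = state + rhs(state)*dt                  # Euler predictor
    return state + 0.5*(rhs(state) + rhs(predicted))*dt

ω = 2.0
def springmass(state):
    return numpy.array([state[1], -ω**2*state[0]])

# integrate over one full period: the state should return near [x0, 0]
dt = (2*numpy.pi/ω)/100
state = numpy.array([2.0, 0.0])
for i in range(100):
    state = heun_step(state, springmass, dt)
print(state)
```

For a linear right-hand side like the spring-mass system, Heun's method and the midpoint-based rk2_step() produce identical updates; the two RK2 variants differ only for nonlinear systems.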
1. Linge S., Langtangen H.P. (2016). Solving Ordinary Differential Equations. In: Programming for Computations - Python. Texts in Computational Science and Engineering, vol 15. Springer, Cham. https://doi.org/10.1007/978-3-319-32428-9_4 (open access, reusable under a CC-BY-NC license).
2. Cromer, A. (1981). Stable solutions using the Euler approximation. American Journal of Physics, 49(5), 455-459. https://doi.org/10.1119/1.12478