Numerical Analysis Methods

Numerical methods can be understood through approximation.

For problems that have no analytic solution, or whose analytic solution is too expensive to compute, the question is how to obtain, with less computation, a reasonably good numerical (approximate) solution that meets the accuracy requirements.
img
The most basic premise of numerical analysis is the tolerance of error.

Example:
img
img

Basic approximate calculation methods:

  1. Discretization (e.g. numerical integration)
  2. Recursion (converting a complex computational process into many repetitions of a simple one, e.g. the second example above)
  3. Approximate substitution (replacing a mathematical problem involving an infinite process with a finite number of computer calculations; in the first example above, Taylor's formula cannot be evaluated infinitely many times, so it must be truncated and substituted approximately)

It is not that the analytic solution is unobtainable, but that the numerical solution is more cost-effective.

The most basic numerical calculations for the most commonly used mathematical models are presented next.

img

Some basic concepts about error

Sources and Classification of Errors

img
img
The first two kinds of error are unrelated to the numerical method itself; numerical methods should focus on truncation error and rounding error.

The concept of algorithmic stability: if small initial perturbations accumulate rapidly and the error keeps growing, the algorithm is unstable!
When the error stays controlled or gradually decreases, the algorithm is called stable.

Errors and Error Limits

Absolute error: \(\epsilon(x) = x^* - x\), where \(x^*\) is an approximation of \(x\).
It can be positive, negative, or 0.

The exact value \(x\) is generally impossible to know, so we usually estimate an upper bound on the absolute error, \(|\epsilon(x)| \le \eta\) 👉 the absolute error limit \(\eta\).
It is obviously positive.

Relative error: \(\epsilon_r(x) = \frac{x^* - x}{x}\)
It can be positive, negative, or 0.

img
Under the same premise:
img

Relative error and relative error limits are dimensionless quantities.
Relative error limits describe the degree of approximation better than absolute error limits do.

Significant figures

img

img
Accurate to the jth decimal place 🟰 retaining k + j significant digits 🟰 absolute error limit not exceeding \(0.5\times10 ^{-j}\)

\(x^*\) can be expressed as \(\pm0.x_1x_2...x_n \times 10^m\), where \(x_1 \ne 0\).
If \(x^*\) has n significant digits, then the relative error limit \(\eta_r\) of \(x^*\) can be taken as \(\frac{1}{2x_1}\times10^{1-n}\).
Conversely, if the relative error limit \(\eta_r\) of \(x^*\) is \(\frac{1}{2(x_1 + 1)}\times10^{1-n}\), then \(x^*\) has n significant digits.

Proof:
img
img

Errors and Error Limits for Arithmetic Operations

Addition and subtraction:

img
img

Multiplication:

img

Division:

img
When the absolute value of the divisor is small, the absolute error of the quotient will be very large.

Powers and logarithms:

img
img

Principles to be followed in approximate calculations

img
Why?
img
img
img

Some noteworthy issues

  • Avoid subtracting two nearly equal numbers 👉 this can cause a serious loss of significant digits
    img
  • Avoid a divisor whose absolute value is much smaller than the dividend's 👉 a slight error in the divisor causes a large change in the result
    img
  • Prevent large numbers from "eating" small numbers (see the sketch after this list)
    When accumulating floating-point sums, after exponent alignment the mantissa of the smaller number may become all zeros
  • Simplify calculation steps and reduce the number of operations
    img
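Here is a minimal sketch (assuming IEEE-754 double precision, as in standard Python floats) of the "big eats small" effect and why summation order matters:

```python
# A big number "eating" a small one: after exponent alignment the small
# addend's mantissa bits are shifted out entirely.
big = 1.0e16
small = 1.0
print(big + small == big)  # True: the 1.0 is lost against 1e16

# Summation order matters for the same reason: adding the small terms
# first lets them accumulate before they meet the large term.
terms = [1.0e16] + [1.0] * 1000
print(sum(terms) - 1.0e16)          # 0.0    (the large head eats every 1.0)
print(sum(sorted(terms)) - 1.0e16)  # 1000.0 (small terms accumulate first)
```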

Interpolation

img
Given a set of discrete data, we would like to construct a simple, easy-to-compute function g(x) as an approximate replacement for f(x).

Existence and uniqueness of interpolating polynomials

img
img
This determinant can be computed with the Vandermonde determinant formula; since the \(x_i\) are distinct nodes, the value of the determinant is nonzero.

The interpolating polynomial pn(x) can thus be determined by solving a system of linear equations.
However, when n is large this method is very computationally expensive.

Lagrange Interpolation

n + 1 interpolation nodes 👉 n + 1 constraints 👉 n + 1 interpolating basis functions 👉 the nth-degree Lagrange interpolating polynomial

Interpolating basis functions
img
The interpolating basis functions are easy to write down: each is a polynomial of degree \(n\) with \(n\) known zeros, and the constant coefficient only needs to make the basis function equal 1 at its own node \(x_i\).

Lagrange interpolating polynomials are linear combinations of interpolating basis functions

The nth-degree Lagrange interpolating polynomial written this way satisfies all the constraints, and the interpolating polynomial exists and is unique, so it is exactly the interpolating polynomial we want.
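A minimal sketch of this construction (the node lists and function below are illustrative, not from the original text):

```python
# Lagrange interpolation: each basis function l_i(x) is a degree-n polynomial
# equal to 1 at x_i and 0 at every other node; the interpolant is the linear
# combination of the basis functions with the sampled values as coefficients.
def lagrange(xs, ys, x):
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj) / (xi - xj)  # basis function l_i evaluated at x
        total += yi * li
    return total

# Interpolating f(x) = x^2 from three nodes recovers the quadratic exactly.
print(lagrange([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 1.5))  # 2.25
```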

Error analysis:

img

Suppose at a particular point \(x\) the error \(f(x) - L_n(x)\) can be expressed as \(c\,\omega(x)\), i.e. \(c\prod_{i =0}^{n}(x-x_i)\), where c is not a constant but a function of x.

Next, construct a function with n + 2 zeros that contains the error expression \(f(x) - L_n(x)\).

In the interpolation interval \([a, b]\), pick a point \(x\) that is not an interpolation node; the error at that point is \(c_x\omega(x)\).

The constructed function is \(F(t) = f(t) - L(t) - c_x\omega(t)\).
This function is 0 at all interpolation nodes and at x, for a total of n + 2 zeros.

img

Take the (n+1)st derivative with respect to t of both sides of \(F(t) = f(t) - L(t) - c_x\omega(t)\) (t has no functional relationship with x).
img
Substitute the point \(\xi\):
img
Note that \(\xi\) depends on our choice of x through some unknown functional relationship.

img
img
img

Newton interpolation

Lagrange interpolation is simple, but with each additional node every original basis function \(l_i(x)\) must be recalculated, so it is not inheritable 👉 adding a node gives every basis function a new zero, and all the coefficients change.

Hence Newton interpolation is introduced.
img
With one more node, it suffices to construct a new \(\phi(x)\) on top of the previous ones and add it to the polynomial; the earlier coefficients need not change (the newly added \(\phi\) is 0 at all the earlier interpolation nodes).

Feature: the value at \(x_i\) is determined only by the terms \(c_0\phi_0(x)\) through \(c_i\phi_i(x)\).

Introducing the concept of the divided difference gives Newton's interpolation formula.
img

The divided differences are computed by building a divided-difference table, which makes the calculation convenient.
img
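A minimal sketch of the table and of evaluating the Newton form (names are illustrative):

```python
# Divided-difference table, overwritten in place: after pass k, table[i]
# holds f[x_i, ..., x_{i+k}]; the running table[0] values are the Newton
# coefficients f[x_0], f[x_0,x_1], ... along the top edge of the table.
def newton_coeffs(xs, ys):
    n = len(xs)
    table = list(ys)               # 0th-order divided differences
    coeffs = [table[0]]
    for k in range(1, n):
        for i in range(n - k):
            table[i] = (table[i + 1] - table[i]) / (xs[i + k] - xs[i])
        coeffs.append(table[0])
    return coeffs

def newton_eval(xs, coeffs, x):
    # Horner-style evaluation of c_0 + c_1(x-x_0) + c_2(x-x_0)(x-x_1) + ...
    result = coeffs[-1]
    for c, xi in zip(reversed(coeffs[:-1]), reversed(xs[:len(coeffs) - 1])):
        result = result * (x - xi) + c
    return result

xs, ys = [0.0, 1.0, 2.0], [0.0, 1.0, 4.0]
print(newton_eval(xs, newton_coeffs(xs, ys), 1.5))  # 2.25, matches Lagrange
```

Adding a node only appends one diagonal to the table and one coefficient, which is exactly the inheritance property described above.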

Some properties of the divided difference:

The divided difference \(f[x_0,x_1,...x_k]\) is a linear combination of the values of \(f(x)\) at \(x_0,x_1,...x_k\), where the coefficients are reciprocals of products of node differences, of the form
img
This property can be proved by mathematical induction: the 0th-order divided difference is just the function value, which obviously holds; after that, the case k = n follows from the case k = n-1.

Since the divided difference is a linear combination of function values, its value is independent of the order in which the nodes are arranged.

The divided difference \(f[x,x_0,x_1,...x_k]\) can likewise be written in combinatorial form, except that one of the nodes is replaced by x, so it is a function of x.

By definition
img
It can be shown that if \(f[x,x_0,x_1,...x_k]\) is an nth-degree polynomial in x, then \(f[x,x_0,x_1,...x_k,x_{k+1}]\) is a polynomial of degree n-1 in x.

This can be seen qualitatively from the combinatorial form: the degree contributed by f(x) is fixed, while the denominator \((x-x_0)(x-x_1)...(x-x_k)\) gains one more factor, so the degree drops by one.
It can also be deduced from the defining equation: \(f[x,x_0,x_1,...x_k,x_{k+1}]\) can be written as \(\frac{f[x_0,x_1,...x_k,x_{k+1}]-f[x,x_0,x_1,...x_k]}{x_{k+1} - x}\). Since \(f[x,x_0,x_1,...x_k]\) is an nth-degree polynomial and \(f[x_0,x_1,...x_k,x_{k+1}]\) is a constant, the numerator is 0 at \(x=x_{k+1}\), so the nth-degree numerator contains the factor \((x-x_{k+1})\), which cancels with the denominator; the whole is therefore a polynomial of degree n-1.

Furthermore,
img

In terms of divided differences, Newton's interpolation can be written as
img
The form obviously holds at the point \(x_0\), and at any other point it also holds by the definition of the divided difference \(f[x,x_0] = \frac{f(x) - f(x_0)}{x - x_0}\).
Likewise, \(f[x,x_0,x_1] = \frac{f[x_0,x_1] - f[x,x_0]}{x_1 - x}\) gives \(f[x,x_0] = f[x,x_0,x_1](x - x_1) + f[x_0,x_1]\),
hence \(f(x) = f(x_0) + f[x_0,x_1](x-x_0)+f[x,x_0,x_1](x-x_0)(x-x_1)\).
When generalized to n+1 points, it is
img
Notice that the part up through the \(c_n\) term is the Newton interpolation form, and the trailing \(R_n\) is the remainder term of Newton's interpolation.

By the uniqueness of interpolating polynomials, the Newton and Lagrange interpolating polynomials of the same degree n are exactly equal, so their remainder terms are also equal.
img

Hermite Interpolation

This is Lagrange interpolation with more constraints.
It is now additionally required that the derivative of the interpolating polynomial at each interpolation node equal that of f(x).

Write the interpolating basis functions along the same lines

n + 1 interpolating nodes πŸ‘‰ 2n + 2 constraints πŸ‘‰ 2n + 2 interpolating basis functions
img
The interpolating basis functions are likewise easy to write down: \(\alpha_i\) has value 0 and derivative 0 at every node other than \(x_i\), so it contains the factors \((x - x_j)^2\) for \(j \ne i\); since it is a polynomial of degree 2n+1, use undetermined coefficients \((ax+b)\) to make the basis function's value 1 and derivative 0 at \(x_i\).

\(\beta_i\) likewise contains that factor of degree 2n, and since \(\beta_i\) has value 0 at \(x_i\) it also contains the factor \((x-x_i)\); so only a constant undetermined coefficient is needed to make the derivative of \(\beta_i\) equal 1 at \(x_i\).

Error analysis:

Similar to the proof for the Lagrange interpolating polynomial, the interpolation remainder is obtained as
img

Note that \(R_{2n+1}(x)\) is 0 at the interpolation nodes, and \({R_{2n+1}}'(x)\) is also 0 there 👈 the role of \(\omega(x)\).

Piecewise interpolation

Because of the Runge phenomenon, we split the large interval into segments and fit each segment with a low-degree interpolating polynomial built from a few interpolation nodes.
img
Drawback: not smooth enough at the segment joints.

Error analysis:

On each segment the error analysis is the same as for Lagrange interpolation.

img
where \(h_i\) is the length of the ith segment.
The error analysis for other piecewise interpolations is similar: the segment with the largest Lagrange remainder gives the overall upper bound on the error.
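A small sketch of the motivation, using Runge's classic example \(f(x)=\frac{1}{1+25x^2}\) (the node counts are illustrative): a single degree-10 interpolant on equally spaced nodes oscillates badly near the interval ends, while piecewise linear interpolation stays tame.

```python
import numpy as np

f = lambda x: 1.0 / (1.0 + 25.0 * x**2)
nodes = np.linspace(-1.0, 1.0, 11)          # 11 equally spaced nodes
xs = np.linspace(-1.0, 1.0, 401)

coeffs = np.polyfit(nodes, f(nodes), 10)    # one degree-10 interpolant
high_degree = np.polyval(coeffs, xs)
piecewise = np.interp(xs, nodes, f(nodes))  # piecewise linear interpolant

print(np.max(np.abs(high_degree - f(xs))))  # roughly 1.9: Runge oscillation
print(np.max(np.abs(piecewise - f(xs))))    # roughly 0.05: well behaved
```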

Curve fitting

Curve fitting is the process of finding, within a given function class H and according to some criterion, a curve that reflects the overall distribution of the given data without large local fluctuations.

Unlike interpolation, which only has to satisfy the interpolation conditions at the nodes, curve fitting aims to reflect the distribution of the given data.
Interpolation is exact locally at the interpolation nodes, while curve fitting allows error at any given point and aims to be accurate in the overall distribution.

Least Squares Method for Curve Fitting

A starting point for the least squares method:
img
Find the g(x) in the function class H that minimizes the sum of squared residuals.

Approach: undetermined coefficients; solve the equations given by the least squares condition.

Linear fitting
img
img
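A minimal sketch of a straight-line least squares fit (the data are made up for illustration):

```python
# Linear least squares y ~ a + b*x: minimize the sum of squared residuals
# over the undetermined coefficients a, b.
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

A = np.vstack([np.ones_like(x), x]).T        # design matrix [1, x]
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
a, b = coef
print(f"fit: y = {a:.3f} + {b:.3f} x")       # close to y = 1 + 2x

# Equivalent normal equations (A^T A) c = A^T y:
c = np.linalg.solve(A.T @ A, A.T @ y)
```

Polynomial fitting works the same way, with more columns \(x^2, x^3, ...\) in the design matrix.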

Polynomial fitting
img
img
img
img
img
img

Numerical integration

Although integrals in math problems can often be evaluated analytically by finding an antiderivative and applying the Newton-Leibniz formula, in practice the antiderivative often cannot be found, or the integrand is given in discrete form (e.g. in interpolation problems there is no way to know what f(x) is, yet we may need its integral).

Therefore, following the definition of the integral, the computer evaluates it by accumulation.
img

Mechanical quadrature formulas and algebraic precision

img
The mechanical quadrature formula attempts to approximate \(f(\xi)\).
img
img
img

Error analysis:
img
Introducing the concept of algebraic precision
img
For the formula to hold for all polynomials of degree up to m, it suffices that the quadrature formula holds separately for \(f(x) = x^0,x^1,...,x^m\) 👈 by the linearity of the integral, i.e. \(\int(f(x)+h(x))dx=\int f(x)dx+\int h(x)dx\)

img
Significance of the theorem: given a set of nodes, the integration accuracy of the mechanical quadrature formula is guaranteed.
img
It also shows that approximating f(x) by degree-n Lagrange interpolation before integrating has algebraic precision n.
The coefficients \(A_i\) in the mechanical quadrature formula can be computed as \(\int_a^bl_i(x)\,dx\).

Construction of quadrature formulas

From the previous analysis, there exist specific coefficients such that the quadrature formula built from k + 1 nodes has algebraic precision at least k.

They can be found by solving the equations given by the conditions for algebraic precision.
img
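A minimal sketch of this undetermined-coefficients construction (the helper name is illustrative): require exactness for \(1, x, ..., x^n\) and solve the resulting linear system for the weights.

```python
# Given nodes x_0..x_n on [a, b], enforce  sum_i A_i x_i^k = (b^{k+1} - a^{k+1})/(k+1)
# for k = 0..n, i.e. exactness on the monomial basis, and solve for the A_i.
import numpy as np

def quadrature_weights(nodes, a, b):
    n = len(nodes)
    V = np.vander(nodes, n, increasing=True).T   # row k holds x_i^k
    moments = np.array([(b**(k + 1) - a**(k + 1)) / (k + 1) for k in range(n)])
    return np.linalg.solve(V, moments)

# Three equally spaced nodes on [0, 1] reproduce Simpson's weights 1/6, 4/6, 1/6.
print(quadrature_weights(np.array([0.0, 0.5, 1.0]), 0.0, 1.0))
```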

It is also possible to approximate the integrand by degree-n Lagrange interpolation, again with algebraic precision n.
img
When the integrand itself is an algebraic polynomial of degree at most n, the degree-n Lagrange interpolating polynomial is in fact identical to the integrand (the Lagrange remainder is 0), so the interpolatory quadrature formula is exact.

It can be proved:
img
If the algebraic precision is at least n, then the formula is exact for \(l_j(x)\), since \(l_j(x)\) is a polynomial of degree n; thus \(\int_{a}^{b} l_j(x) \,dx = \sum_{i=0}^{n}A_il_j(x_i)\). Because \(l_j(x_i)\) is 1 only when \(i=j\) and 0 at all other nodes, \(\int_{a}^{b} l_j(x) \,dx = A_j\). This holds for all n + 1 interpolating basis functions, and by the linearity of the integral the quadrature formula is of interpolatory type.

Conversely, if the quadrature formula is of interpolatory type, it means we have approximated the integrand by degree-n Lagrange interpolation. Split the integrand \(f(x)\) into two parts, one a polynomial of degree at most n and the other of higher degree; since the part of degree at most n is integrated exactly after the approximation, the algebraic precision is at least n.

Newton-Cotes quadrature formulas

The mechanical quadrature formula above placed no restriction on the choice of the \(x_i\); since \(A_i = \int_a^bl_i(x)\,dx\), whenever the nodes or the integration interval change, the coefficients \(A_i\) must be recomputed.

But if we normalize the integration interval and require the \(x_i\) to be the points dividing the interval into n equal parts, replacing the \(A_i\) with the Cotes coefficients \(C_i\), we find that the coefficients are independent of the integration interval \([a,b]\) and of the node values \(x_i\), depending only on the total number of nodes via n.

  • Normalize the integration interval
    img
  • Nodes are the n-equal-division points of the interval
    img

In the final form of the integral, the only variable quantity is n, so the Cotes coefficients hold for every integral whose nodes satisfy the conditions.

Remember the common Cotes coefficients
img

Error analysis:

img
Note that here \(f^{(n+1)}(\xi)\) cannot be treated as a constant and pulled outside the integral; recalling the error analysis of Lagrange interpolation, \(\xi\) is a function of \(x\), and hence of \(t\).
This is what makes the error analysis of quadrature formulas difficult.

img
Because \((t-0)(t-1)...(t-n)\) with n even is centrally symmetric about the point \((\frac{n}{2},0)\), its integral is zero.

Stability analysis

img
The truncation error is given by the remainder formula in the previous section.
Rounding errors are caused by the machine's limited word length.

The stability of the Newton-Cotes formula is analyzed next
img
Here the \(≈\) in the first line is the truncation error (the algebraic precision is at least n; terms of degree higher than n may be integrated inexactly).

img

Common Newton-Cotes formulas

Trapezoidal formula

img
img
img
Here the step from the first line to the second uses the second mean value theorem for integrals to pull \(f''(\xi)\) out; \(t^2-t\) does not change sign on the interval \([0,1]\) (it stays ≤ 0).
The second mean value theorem for integrals is as follows:
img

Simpson's formula

(Three nodes, degree-2 Lagrange, algebraic precision 3)
img
img
Since Simpson's formula has algebraic precision 3, analyzing its error requires writing \(f(x)\) as the sum of some cubic interpolating polynomial and a remainder, and then integrating the remainder.
If Lagrange interpolating polynomials are used, Simpson's formula cannot be error-analyzed the way the trapezoidal formula was: the second mean value theorem for integrals cannot be applied directly to pull \(f'''(\xi)\) out, because the other factor \((t-0)(t-1)(t-2)(t-3)\) does not keep a constant sign on the integration interval.

So Hermite interpolating polynomials are used instead, which satisfy the constant-sign condition.
img
img
img

Cotes formula

(Five nodes, degree-4 Lagrange, algebraic precision 5)
img
img

The three quadrature formulas and their remainders

img

Composite quadrature

Within a certain range, the algebraic precision of a quadrature formula improves as interpolation nodes are added, but high-degree interpolation suffers from the Runge phenomenon, which degrades the approximation, and so the integral of the approximating function degrades too.
Also, when n is large the Newton-Cotes coefficients are hard to solve for and negative terms appear, making high-order Newton-Cotes formulas numerically unstable.
Moreover, from the discussion of the remainder term, the smaller the integration interval, the smaller the truncation error of the quadrature formula.

Therefore we often divide the integration interval into a number of small subintervals, use a low-order quadrature formula built from a few nodes on each subinterval, and then add the results to get a quadrature formula on the whole interval. This is the basic idea of composite quadrature.

Composite trapezoidal (T) formula

img
img

Error analysis:
The total error of the composite quadrature formula equals the sum of the errors over all the subintervals.
Note that h here is the step size \(\frac{b-a}{n}\) of the composite trapezoidal formula, not \(b-a\).
img
The last line uses the intermediate value theorem.
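A minimal sketch of the composite T formula and its \(O(h^2)\) behavior (the test integral is illustrative):

```python
# Composite trapezoid with step h = (b - a)/n: endpoints weighted 1/2,
# interior nodes weighted 1.
import math

def composite_trapezoid(f, a, b, n):
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

# Integral of sin on [0, pi] is 2; the error shrinks ~4x each time n doubles.
for n in (8, 16, 32):
    print(n, abs(composite_trapezoid(math.sin, 0.0, math.pi, n) - 2.0))
```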

Composite Simpson (S) formula

The interval \([a,b]\) is actually divided into m blocks, each using Simpson's formula; Simpson's step size is \(\frac{1}{2}\) of the block length, so in total there are \(n\) equal parts, \(n = 2m\), with step size \(\frac{b-a}{n}\).
The coefficients of Simpson's formula are \(\frac{1}{6} ,\, \frac{4}{6}, \, \frac{1}{6}\).
img
img

Error analysis:
Note that there are m blocks, and the step size h is the length of each small step.
img

Composite Cotes (C) formula

The coefficients of the Cotes formula are\(\frac{7}{90} ,\, \frac{32}{90} ,\, \frac{12}{90} ,\, \frac{32}{90} ,\, \frac{7}{90}\)
img

Error analysis:
Note that there are m blocks, and the step size h is the length of each small step.
img

Romberg's quadrature formula

From the error analysis above, the truncation error of composite quadrature decreases as the number of subdivisions n increases.
How do we choose a step size that meets the accuracy requirement?

Step halving:
Take the trapezoidal formula as an example.
img
img

The process converges slowly

An analysis of the composite formula
img
Note the approximate substitution from the first line to the second: after dividing the large interval into n equal parts, the integral over the large interval is approximated by the accumulated sum of the discrete function values on the small subintervals multiplied by the step size.
img
Expanding \(\overline{T}\) and combining like terms reveals that
img

Qualitatively comparing the errors of the composite T, composite S and composite C formulas:
img
img
img
It can be found that:
Each halving of the step reduces the composite T error to about \(\frac{1}{4}\) of its former value.
Each halving of the step reduces the composite S error to about \(\frac{1}{16}\) of its former value.
Each halving of the step reduces the composite C error to about \(\frac{1}{64}\) of its former value.

From the above analysis, the Richardson extrapolation table is obtained.
img
img
Accelerate with Richardson extrapolation while halving the step of the composite T formula.
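A minimal sketch of Romberg's scheme (step halving that reuses old nodes, plus columnwise Richardson extrapolation; names are illustrative):

```python
# Row k starts from the composite trapezoid value with 2^k intervals; each
# halving only evaluates f at the new midpoints. Column j applies Richardson
# extrapolation, gaining roughly two orders of accuracy per column.
import math

def romberg(f, a, b, levels):
    R = [[0.5 * (b - a) * (f(a) + f(b))]]        # T with one interval
    for k in range(1, levels):
        h = (b - a) / 2**k
        new_pts = sum(f(a + (2 * i - 1) * h) for i in range(1, 2**(k - 1) + 1))
        row = [0.5 * R[k - 1][0] + h * new_pts]  # halved-step trapezoid value
        for j in range(1, k + 1):                # Richardson extrapolation
            row.append(row[j - 1] + (row[j - 1] - R[k - 1][j - 1]) / (4**j - 1))
        R.append(row)
    return R[-1][-1]

print(abs(romberg(math.sin, 0.0, math.pi, 6) - 2.0))  # tiny, near machine precision
```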

Numerical differentiation

Given the values of a number of discrete points of an unknown function, we wish to obtain (an approximation to) the derivative of that function.

Numerical differentiation of difference quotient type

img
img
img
img
The error is
img
img

Interpolative Numerical Differentiation

Approximate the function with an interpolating polynomial, then differentiate.
img

Error analysis:
img
When x is an interpolation node, i.e. we want the derivative at a node, the second error term vanishes because \(\omega_{n+1}(x) = 0\).

Common interpolatory numerical differentiation formulas (generally it suffices to write out the Lagrange interpolation and differentiate):

Two-point formula
img
Two-point formula with remainder term
img

Three-point formula
img
img
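A minimal sketch of the central difference quotient, which is the middle three-point formula \(f'(x) \approx \frac{f(x+h)-f(x-h)}{2h}\) with \(O(h^2)\) truncation error:

```python
import math

def central_diff(f, x, h):
    return (f(x + h) - f(x - h)) / (2.0 * h)

# Derivative of sin at 0 is 1; shrinking h by 10 shrinks the error ~100x,
# until rounding error eventually takes over for very small h.
for h in (1e-1, 1e-2, 1e-3):
    print(h, abs(central_diff(math.sin, 0.0, h) - 1.0))
```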

Numerical solution of ordinary differential equations

This class of problems gives an ordinary differential equation and asks for the function satisfying it.
An nth-order ordinary differential equation has the form
img
An nth-order ODE is generally given with n initial values or n boundary values.

Introduction

img

Sufficient condition for a unique solution:
img

How to compute \(y(x)\)?
img

Convention: \(y(x_n)\) is the exact value of the sought function y(x) at \(x_n\); \(y_n\) is the approximate value at \(x_n\).

Additional math knowledge:

img
img
img

Differentiating the multivariate function f(x,u,t) with respect to x, where u is a function of x and t is independent of x 👉 \(f'(x,u,t) = f_x(x,u,t) + f_u(x,u,t)u'(x)\); likewise the nth-order derivative can be viewed as the derivative of the (n-1)st-order derivative.
Then \(\frac{\partial f(x_n,y_n)}{\partial x}\) above evaluates to \(f_x(x_n,y_n)+f_y(x_n,y_n)y'(x_n) = f_x(x_n,y_n)+f_y(x_n,y_n)f(x_n,y_n)\).

Euler's method

Euler's method is a numerical method for ordinary differential equations.
The basic idea is not to seek an analytic expression for \(y(x)\), but to partition the solution interval \([a,b]\) into equal steps and find approximations \(y_n\) to \(y(x)\) at the partition nodes \(x_n\).

Because we already know the expression for the derivative function,
this can be done by combining the initial value with numerical integration over each partition interval
(of course, the derivative can also be converted into a difference quotient over the partition nodes).
img
h is the length of the partition interval.

Route 1: replacing the derivative with a difference quotient

img
img
img
img

The one-step Euler methods need only the information from the current step to compute the next; the two-step Euler method needs information from both the current and the previous step.
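A minimal sketch of the simplest case, the explicit one-step Euler method (problem and step count are illustrative):

```python
# Explicit one-step Euler: y_{n+1} = y_n + h f(x_n, y_n) on an equally
# spaced partition of [a, b].
import math

def explicit_euler(f, a, b, y0, n):
    h = (b - a) / n
    x, y = a, y0
    ys = [y0]
    for _ in range(n):
        y = y + h * f(x, y)   # advance along the tangent at (x_n, y_n)
        x = x + h
        ys.append(y)
    return ys

# y' = y, y(0) = 1 on [0, 1]; exact answer is e.
print(explicit_euler(lambda x, y: y, 0.0, 1.0, 1.0, 100)[-1])  # ~2.7048
print(math.e)                                                   # 2.71828...
```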

Route 2: initial value + numerical integration

img
img
img
img
Trapezoidal Euler method
img

The numerical solution of an ODE is a discrete set of function values; an exact expression is hard to obtain by solving, but after computing the y(x) values, the solution can be approximated by an interpolating function.

Local truncation error analysis

img
Note the difference between this and algebraic precision.
The local truncation error, assuming the previous step \(y_n\) is exact, is the error of the numerical integration over one partition interval. Obviously, the larger the interval, the larger the integration error, so the error is measured in powers of h.
Algebraic precision, by contrast, says that the quadrature formula is exact for algebraic polynomials of degree at most n.

The local truncation error can be analyzed with a Taylor expansion.
Because the previous step is exact, \(y(x_n) = y_n\), and the differential equation gives \(y'(x_n) = f(x_n,y(x_n))\).

Taylor expansion:
\(y(x_{n + 1}) = y(x_n + h) = y(x_n) + y'(x_n)h + \frac{y''(\xi)}{2}h^2\)

  • Explicit one-step Euler
    \(y_{n+1} = y_n + hf(x_n,y_n) = y(x_n) + hf(x_n, y(x_n)) = y(x_n) + hy'(x_n)\)
    Therefore the error is \(T_{n+1} = \frac{y''(\xi)}{2}h^2 = O(h^2)\)
    First-order accuracy
  • Implicit one-step Euler
    \(y_{n+1} = y_n + hf(x_{n+1},y_{n+1})\)
    A slightly imprecise assumption is made here: the computed \(y_{n+1}\) is approximated as \(y(x_{n+1})\) when substituted into \(f\),
    so \(y_{n+1} = y(x_n) + hf(x_{n+1}, y(x_{n+1})) = y(x_n) + hy'(x_{n+1})\)
    We can likewise Taylor-expand \(y'(x_{n+1})\),
    obtaining \(y_{n+1} = y(x_n) + h(y'(x_{n}) + O(h))\),
    so \(y_{n+1} = y(x_n) + hy'(x_{n}) + O(h^2)\)
    The truncation error is \(T_{n+1} =y(x_{n+1}) - y_{n+1} = \frac{y''(\xi)}{2}h^2 - O(h^2) = O(h^2) - O(h^2) = O(h^2)\)
    First-order accuracy
  • Explicit two-step Euler
    The approach is similar: Taylor-expand \(y(x_{n-1})\) at \(x_n\), then take the difference
    img
    Second-order accuracy
  • Trapezoidal Euler method (implicit one-step)
    The trapezoidal formula is the arithmetic average of the explicit and implicit one-step Euler methods; the proof is omitted
    img
    Second-order accuracy

img

Improved Euler's method

As the above shows, the trapezoidal method has a smaller local truncation error than the explicit and implicit Euler methods thanks to its second-order accuracy, but it is an implicit algorithm that requires an iterative solution and is computationally expensive.
The explicit Euler method is an explicit algorithm that is cheap but less accurate.

The improved Euler method therefore first computes \(y_{n+1}\) with an explicit Euler step, then substitutes it into the right-hand side of the trapezoidal method to find a more precise \(y_{n+1}\).
The improved Euler method is also called the predict-correct method: the first \(y_{n+1}\) is the predicted value and the second is the corrected value.
The improved Euler method has second-order accuracy.
img
img
Of course, the predict-correct process can also be carried out several times.
img
Geometric meaning:
use the information at \((x_n,y_n)\) to estimate a value, re-estimate using the information at the estimated point, and take the average of the two.
img
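A minimal sketch of one predict-correct step repeated over the interval (problem is illustrative):

```python
# Improved Euler: predict with explicit Euler, correct with the trapezoidal
# average of the slopes at both ends of the step.
def improved_euler(f, a, b, y0, n):
    h = (b - a) / n
    x, y = a, y0
    for _ in range(n):
        k1 = f(x, y)                 # slope at the left end
        y_pred = y + h * k1          # predictor (explicit Euler)
        k2 = f(x + h, y_pred)        # slope estimated at the right end
        y = y + 0.5 * h * (k1 + k2)  # corrector (trapezoidal average)
        x = x + h
    return y

# Same problem y' = y, y(0) = 1 on [0, 1]: second-order accuracy shows.
print(improved_euler(lambda x, y: y, 0.0, 1.0, 1.0, 100))  # ~2.71824 vs e = 2.71828
```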

The Runge-Kutta method

From the above analysis, the improved Euler method predicts one extra value in the partition interval, improving on the explicit Euler method.
The idea of the Runge-Kutta method is to predict several more slopes \(k_i\) in the interval and use their weighted average as an approximation to \(k^*\).

img
How the pth-order R-K formula is constructed can be worked out by Taylor expansion.

Take the construction of the second-order R-K formula as an example.
img
The expansion process is abbreviated.
img
img
Four unknowns and three equations give infinitely many solutions, and the R-K formula for any set of coefficients satisfying the system has second-order accuracy.

These include the improved Euler method and the modified Euler method.
img

The third- and fourth-order R-K formulas can be derived the same way.

Third-order R-K formula: 8 unknowns, 6 equations, infinitely many solutions.
The most commonly used is the Kutta formula.
img

The most commonly used fourth-order formulas are the standard fourth-order Runge-Kutta formula and the Gill formula.
img
img
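A minimal sketch of the standard fourth-order Runge-Kutta step (weights 1/6, 2/6, 2/6, 1/6; the test problem is illustrative):

```python
# Classical RK4: four slope samples per step, weighted average as k*.
import math

def rk4_step(f, x, y, h):
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h / 2 * k1)
    k3 = f(x + h / 2, y + h / 2 * k2)
    k4 = f(x + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# y' = y, y(0) = 1 on [0, 1]: only 10 steps already gives ~6 correct digits.
x, y, h = 0.0, 1.0, 0.1
for _ in range(10):
    y = rk4_step(lambda x, y: y, x, y, h)
    x += h
print(abs(y - math.e))  # ~2e-6
```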

In practice we generally choose Runge-Kutta formulas of order four or lower.
Runge-Kutta formulas above fourth order are too computationally expensive, and the accuracy does not necessarily increase; sometimes it even decreases.

Variable step-size Runge-Kutta

We know that a small step size gives high accuracy and a large step size gives low computational cost, so how do we choose the right step size?

Variable step-size Runge-Kutta is designed to solve this problem.

img
img
This is easy to understand: suppose the error of \(y_{n+1}^{(h/2)}\) is \(a\). Since the error of \(y_{n+1}^{(h)}\) is roughly \(16\) times that of \(y_{n+1}^{(h/2)}\), and the two values are not necessarily on the same side of the exact solution on the number line, the minimum distance between them is \(15a\), i.e. \(|y_{n+1}^{(h)} - y_{n+1}^{(h/2)}|>15a\). So if \(|y_{n+1}^{(h)} - y_{n+1}^{(h/2)}|<15\epsilon\), then \(a<\epsilon\), and the accuracy requirement is met.

With a way to judge whether the chosen step size meets the accuracy requirement, we can adjust the step size to the character of the solution: use a larger step where the solution varies gently and a smaller step where it varies sharply.

Convergence and Stability

Convergence and stability are harder to analyze.
We next analyze the convergence and stability only of simpler models.

Convergence

Definition of convergence
img
Although in our previous error analysis the smallest error was \(O(h^5)\), which clearly tends to 0 as \(h \to 0\), those were all local errors, premised on the previous step's result being exact. Even if each step's error tends to 0, \(h \to 0\) also means \(n \to \infty\): there are now infinitely many steps.
But \(n\) and \(h\) must satisfy \(n \cdot h = (x_n-x_0)\), a constant.

Next we analyze only single-step explicit methods

Convergence is determined by the convergence theorem:
img

Proof:
img
The error can first be split into the local error plus \(|\overline y_{n+1} - y_{n+1}|\).
The local error is \(O(h^{p+1})\); the latter part can be turned, using the Lipschitz condition, into a constant times the error of the previous step.
This yields a recursive formula.

img

img
Use \(e^x>(1+x)\) to relax the bound so that \(nh\) can be collapsed into a constant.
Since \(|\epsilon_0| = 0\), the final error is 0.

stability

We have already introduced the definition of stability, a concept used to judge whether rounding errors (small perturbations) in an algorithm accumulate and grow.

img

Here we study only \(y'=\lambda y \, (\lambda < 0)\), i.e. the case \(y = e^{\lambda x} \, (\lambda < 0)\).

Explicit Euler method:
img
Note the assumptions here, which show that this stability analysis of the model is only qualitative.
img

Implicit Euler method:
img
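A minimal sketch of the contrast on the test equation \(y' = \lambda y\) with \(\lambda = -50\) (the values are illustrative): explicit Euler's per-step amplification factor is \(1 + h\lambda\), implicit Euler's is \(1/(1 - h\lambda)\).

```python
# Exact solution decays to 0. Explicit Euler blows up when |1 + h*lambda| > 1,
# while implicit Euler decays for any h > 0 when lambda < 0.
lam, h, steps = -50.0, 0.1, 20   # h*lambda = -5, outside the interval (-2, 0)
ye, yi = 1.0, 1.0
for _ in range(steps):
    ye = ye * (1 + h * lam)      # explicit Euler amplification: -4 per step
    yi = yi / (1 - h * lam)      # implicit Euler amplification: 1/6 per step
print("explicit:", ye)           # about 1e12: unstable
print("implicit:", yi)           # tiny: stable
```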

Finding the roots of an equation

As the title says, this chapter examines numerical methods for finding real roots of equations (generally polynomial equations of degree greater than 4, or transcendental equations; low-degree polynomial equations have root formulas).

Three steps:
img

Bisection method

A very classical method; its principle is the zero-existence theorem, and it presupposes a given interval containing a root.
img
img
An invariant is maintained during the iteration: the left and right endpoints satisfy \(f(left) \cdot f(right)<0\). Each iteration evaluates the midpoint of the interval and bisects the interval containing the root.
img
The result is taken as the midpoint of the final root-containing interval.

img
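A minimal sketch of the bisection loop (the tolerance and example equation are illustrative):

```python
# Maintain f(left) * f(right) < 0 and halve the bracketing interval until
# it is shorter than the tolerance; return its midpoint.
def bisect(f, left, right, tol=1e-8):
    assert f(left) * f(right) < 0, "need a bracketing interval"
    while right - left > tol:
        mid = 0.5 * (left + right)
        if f(left) * f(mid) <= 0:    # the root stays in the left half
            right = mid
        else:                        # the root stays in the right half
            left = mid
    return 0.5 * (left + right)

# Positive root of x^2 - 2 = 0 is sqrt(2).
print(bisect(lambda x: x * x - 2.0, 1.0, 2.0))  # ~1.41421356
```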

Iterative method

img
Specific Steps:
img
It can be shown that a convergent sequence of iterations must converge to a solution of the equation
img
Note that the intermediate transformation requires \(φ(x)\) to be continuous.

Geometric meaning of the iterative method:
img
Note that the intersection of \(y = φ(x)\) and \(y = x\) is the root.

Two problems remain with the method:

  1. Not all iteration sequences converge; how do we find a convergent iteration sequence (i.e. how to choose the initial value \(x_0\) and the iteration equation \(x=φ(x)\))?
  2. Convergence of a sequence is defined in terms of limits, but we cannot compute indefinitely (i.e. how to decide when to stop the computation, which requires an error analysis of the iteration up to the kth step).

These two issues are addressed in the next section
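Before moving on, a minimal sketch of the basic iteration loop (assuming a convergent \(φ\); the stopping rule is justified by the error analysis in the next section):

```python
# Fixed-point iteration x_{k+1} = phi(x_k): if the sequence converges and
# phi is continuous, the limit solves x = phi(x).
import math

def fixed_point(phi, x0, tol=1e-10, max_iter=200):
    x = x0
    for _ in range(max_iter):
        x_new = phi(x)
        if abs(x_new - x) < tol:     # stopping rule
            return x_new
        x = x_new
    raise RuntimeError("did not converge: try another phi or initial value")

# x = cos(x) has a root near 0.739; |phi'(x)| = |sin x| < 1 in that region.
print(fixed_point(math.cos, 0.5))  # ~0.7390851
```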

Convergence and error estimation of iterative methods

This section analyzes the convergence of the iterative method and the error of the iterative process

Convergence of the iterative method

Predict whether the sequence converges 👉 convergence theorem (a sufficient condition)
The convergence theorem guarantees that each iteration brings \(x\) closer to the exact solution \(x^*\).
img
Self-mapping proves the existence of the root; the absolute value of the derivative being less than 1 proves the uniqueness of the root and convergence.
In addition, by Weierstrass's theorem a continuous function on a closed interval attains its maximum and minimum there, so as long as \(|φ'(x)|<1\) holds, there must exist L with \(|φ'(x)| \le L < 1\).
img
Proved using the mean value theorem of differentiation.
img
Proved using the mean value theorem of differentiation.
img

When the convergence theorem is not satisfied, one may need to change the iteration equation or the interval \([a,b]\) containing the initial value. How do we tell whether the iteration function is badly chosen or the interval \([a,b]\) is too large?

Introducing the concept of local convergence
img
If local convergence holds, then as long as the interval \([a,b]\) is chosen small enough, convergence is guaranteed; so one can conclude that it is not the iteration function that is badly chosen, but that the interval \([a,b]\) is too large.

img

The theorem is easy to prove. First, local convergence only requires proving convergence, since the existence and uniqueness of the solution clearly hold. Also, \(φ'(x)\) is continuous and \(|φ'(x^*)| < 1\), so there exists a neighborhood in which \(|φ'(x)| \le L < 1\); then \(|φ(x) - x^*| = |φ(x) - φ(x^*)| \le L|x - x^*|\), which means each iteration brings the new \(x\) closer to the exact solution, so the sequence converges.

The theorem makes it simple to determine whether the iteration equation is locally convergent, but it requires knowing the exact solution in advance, which is clearly impossible.

Therefore the following less stringent criterion is used to judge whether the iteration equation is well chosen.
img

Iterative process error estimation

Under the conditions of the convergence theorem, the error analysis is simple.
img

Proof:
img

Significance:
img

Convergence speed and acceleration of iterative methods

img
img
img

Accelerating the process:
img
img
Similar to predict-correct, and a bit like Richardson extrapolation.

img
From this equation, solve for \(x^*\).
Or simply use the conclusion:
img
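The exact acceleration scheme is in the figures above; as an assumption on my part, here is a sketch of Aitken's Δ² acceleration, one standard scheme of this predict-correct flavor: take two predictions, then eliminate the leading error term to estimate \(x^*\).

```python
# Aitken / Steffensen-style acceleration (illustrative, not necessarily the
# scheme in the figures): x* ~ x - (x1 - x)^2 / (x2 - 2*x1 + x).
def aitken_accelerate(phi, x0, steps=5):
    x = x0
    for _ in range(steps):
        x1 = phi(x)                    # first prediction
        x2 = phi(x1)                   # second prediction
        denom = x2 - 2.0 * x1 + x
        if denom == 0.0:
            return x2
        x = x - (x1 - x) ** 2 / denom  # extrapolated estimate of x*
    return x

import math
print(aitken_accelerate(math.cos, 0.5))  # ~0.7390851, in far fewer iterations
```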

Newton's method

Newton's method is a general method for constructing iteration equations; the equations it constructs satisfy local convergence and converge quadratically (i.e. \(φ'(x^*) = 0\)).

The approach: to solve \(f(x) = 0\), Taylor-expand \(f(x)\) at an approximate solution \(x_0\) and keep terms up to first order: \(f(x) ≈ f(x_0) + f'(x_0)(x - x_0) = 0\). This is a linear equation that can be solved directly when \(f'(x_0) \ne 0\), giving a new approximate solution \(x_1 = x_0 - \frac{f(x_0)}{f'(x_0)}\). Next we Taylor-expand at \(x_1\) and repeat the process. The iteration formula is therefore \(x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}\), where \(f'(x) \ne 0\).

The iterative formula constructed in this way satisfies the following properties:
img
img
Note the conditions the property requires: a simple root + \(f(x)\) has continuous second-order derivatives.

Proof:
img
img

Geometric meaning
img

Calculation steps
img
Note that the iteration termination condition is \(|x_{n+1} - x_n| < \epsilon\).
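A minimal sketch of these steps (function names are illustrative):

```python
# Newton's method x_{n+1} = x_n - f(x_n)/f'(x_n), stopping when
# |x_{n+1} - x_n| < eps.
def newton(f, df, x0, eps=1e-12, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        if dfx == 0.0:
            raise ZeroDivisionError("f'(x) = 0: Newton step undefined")
        x_new = x - fx / dfx
        if abs(x_new - x) < eps:       # termination condition
            return x_new
        x = x_new
    raise RuntimeError("did not converge: try a better initial value")

# sqrt(2) as the root of x^2 - 2; quadratic convergence near a simple root.
print(newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0))  # 1.4142135623730951
```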

Selection of the initial value of Newton's method

Newton's method is a locally convergent algorithm; if the initial value \(x_0\) is poorly chosen, the iteration sequence may fail to converge.
For Newton's method to converge, the \(x_1\) computed from the iteration formula must be closer to the exact root \(x^*\) than \(x_0\) is.
From the geometric meaning of Newton's method, when \(f'(x)\) is 0 or very small, convergence is slow at best.

img
img
img
Condition two exists to make condition one meaningful.

The damped ("downhill") version of Newton's method
img

the end ......