Technical background
The announcement of the 2024 Nobel Prizes in Physics and Chemistry marked the scientific community's official recognition of the AI era. AI is changing how people interact across all kinds of scenarios in human society, and most AI practitioners focus on data fitting and on controlling error and computing power. Traditional numerical computation takes a different approach from data fitting: it uses polynomials to compute parameters exactly, which is called interpolation. Of course, the accuracy of an interpolation algorithm is relative to its boundary conditions, and different interpolation algorithms have different remainder terms as the number of points changes. In model training, the data points themselves carry errors, so forcing an interpolation algorithm onto them leads to overfitting; interpolation is therefore retained mainly in traditional computing scenarios that demand high accuracy.
Linear interpolation
Given two points \((x_1,y_1)\) and \((x_2,y_2)\), we look for the linear function that interpolates them.
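One standard way to construct it, for instance, is the point-slope form
\[
f(x)=\frac{y_2-y_1}{x_2-x_1}(x-x_1)+y_1.
\]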
This expression can be rewritten in a slightly different but equivalent form.
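Grouping the terms by \(y_1\) and \(y_2\), for example, gives the symmetric two-point form
\[
f(x)=\frac{x-x_2}{x_1-x_2}\,y_1+\frac{x-x_1}{x_2-x_1}\,y_2,
\]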
from which it is straightforward to check that \(f(x_1)=y_1\) and \(f(x_2)=y_2\).
Quadratic interpolation
Given three points \((x_1,y_1),(x_2,y_2),(x_3,y_3)\), assume the interpolating function has the form \(f(x)=ax^2+bx+c\). Substituting the three points gives a system of simultaneous equations that can be written in matrix form.
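Collecting the unknown coefficients into a vector (with the columns ordered as \(x_i^2, x_i, 1\), one natural convention), the system reads
\[
\begin{pmatrix} x_1^2 & x_1 & 1 \\ x_2^2 & x_2 & 1 \\ x_3^2 & x_3 & 1 \end{pmatrix}
\begin{pmatrix} a \\ b \\ c \end{pmatrix}
=
\begin{pmatrix} y_1 \\ y_2 \\ y_3 \end{pmatrix}.
\]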
Solving for the coefficients \(a,b,c\) then becomes a matrix inversion problem, which can be carried out by hand with elementary row operations, and the resulting inverse matrix can be verified by multiplying it back against the original coefficient matrix.
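With the shorthand \(D_1=(x_1-x_2)(x_1-x_3)\), \(D_2=(x_2-x_1)(x_2-x_3)\), \(D_3=(x_3-x_1)(x_3-x_2)\), the inverse works out to
\[
\begin{pmatrix} x_1^2 & x_1 & 1 \\ x_2^2 & x_2 & 1 \\ x_3^2 & x_3 & 1 \end{pmatrix}^{-1}
=
\begin{pmatrix}
\frac{1}{D_1} & \frac{1}{D_2} & \frac{1}{D_3} \\
-\frac{x_2+x_3}{D_1} & -\frac{x_1+x_3}{D_2} & -\frac{x_1+x_2}{D_3} \\
\frac{x_2 x_3}{D_1} & \frac{x_1 x_3}{D_2} & \frac{x_1 x_2}{D_3}
\end{pmatrix},
\]
and multiplying this by the original matrix indeed gives the identity.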
With the inverse matrix in hand, the parameter values \(a,b,c\) are obtained by applying it to \((y_1,y_2,y_3)^{T}\); here we write out the resulting function form directly.
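Substituting the coefficients back into \(f(x)=ax^2+bx+c\) and regrouping the terms by \(y_1,y_2,y_3\) gives
\[
f(x)=\frac{(x-x_2)(x-x_3)}{(x_1-x_2)(x_1-x_3)}\,y_1+\frac{(x-x_1)(x-x_3)}{(x_2-x_1)(x_2-x_3)}\,y_2+\frac{(x-x_1)(x-x_2)}{(x_3-x_1)(x_3-x_2)}\,y_3,
\]
which again satisfies \(f(x_i)=y_i\) at all three points.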
Lagrange interpolation
Observing the pattern of the linear and quadratic interpolation functions above, a generalized form can be written down.
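In the same weighted-sum style as the linear and quadratic cases, it is
\[
f(x)=\sum_{i=1}^{N}c_i(x,x_1,x_2,\dots,x_N)\,y_i,
\]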
where the coefficient function is \(c_i(x,x_1,x_2,...,x_N)=\prod_{j=1}^{i-1}\frac{x-x_j}{x_i-x_j}\prod_{k=i+1}^{N}\frac{x-x_k}{x_i-x_k}\). This yields an analytical expression for the degree-\(N-1\) interpolating polynomial through \(N\) data points, which is the Lagrange interpolation method, and it satisfies the constraints \(f(x_i)=y_i\).
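As a minimal sketch of how this formula can be evaluated numerically (the function name `lagrange_interpolate` and the NumPy-based double loop are illustrative choices, not taken from the original post):

```python
import numpy as np

def lagrange_interpolate(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial of the points (xs, ys) at x."""
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)
    x = np.asarray(x, dtype=float)
    result = np.zeros_like(x)
    for i in range(len(xs)):
        # Coefficient function c_i(x) = prod_{j != i} (x - x_j) / (x_i - x_j)
        ci = np.ones_like(x)
        for j in range(len(xs)):
            if j != i:
                ci = ci * (x - xs[j]) / (xs[i] - xs[j])
        result = result + ci * ys[i]
    return result

# The interpolant reproduces the data points exactly: f(x_i) = y_i
print(lagrange_interpolate([0.0, 1.0, 2.0], [1.0, 3.0, 2.0], [0.0, 1.0, 2.0]))  # -> [1. 3. 2.]
```

Note that each coefficient \(c_i\) is built from elementwise operations of the same shape as the query points, which is the property referred to in the comparison section below.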
Newton's interpolation
The function expression from linear interpolation can be reorganized a little further in form.
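Pulling \(y_1\) out in front and attaching the slope to the factor \((x-x_1)\), for instance, gives
\[
f(x)=y_1+\frac{y_2-y_1}{x_2-x_1}(x-x_1).
\]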
Similarly, the quadratic interpolation function can be rearranged into an analogous form.
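One such rearrangement appends a correction term that vanishes at both \(x_1\) and \(x_2\):
\[
f(x)=y_1+\frac{y_2-y_1}{x_2-x_1}(x-x_1)+\frac{\dfrac{y_3-y_2}{x_3-x_2}-\dfrac{y_2-y_1}{x_2-x_1}}{x_3-x_1}(x-x_1)(x-x_2).
\]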
Now define a first-order difference quotient.
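In the usual bracket notation it reads
\[
f[x_i,x_{i+1}]=\frac{y_{i+1}-y_i}{x_{i+1}-x_i}.
\]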
Its meaning is the average rate of change over the interval \((x_i,x_{i+1})\). With the first-order difference quotient in place, the second-order difference quotient can be defined recursively.
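In the same notation, the recursion is
\[
f[x_i,x_{i+1},x_{i+2}]=\frac{f[x_{i+1},x_{i+2}]-f[x_i,x_{i+1}]}{x_{i+2}-x_i}.
\]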
The difference quotient of order \(m\) follows the same recursion.
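Written out, it is
\[
f[x_1,x_2,\dots,x_{m+1}]=\frac{f[x_2,x_3,\dots,x_{m+1}]-f[x_1,x_2,\dots,x_m]}{x_{m+1}-x_1}.
\]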
The functional form of Newton's interpolation can then be written in terms of these difference quotients.
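Extending the pattern of the rearranged linear and quadratic forms, and writing \(f[x_1]=y_1\) for the leading term, the formula is
\[
f(x)=f[x_1]+f[x_1,x_2](x-x_1)+f[x_1,x_2,x_3](x-x_1)(x-x_2)+\dots+f[x_1,x_2,\dots,x_N](x-x_1)(x-x_2)\cdots(x-x_{N-1}),
\]
which satisfies \(f(x_i)=y_i\) at every data point.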
Comparison of interpolation forms
For the Lagrange and Newton interpolation algorithms, the order of interpolation is the same, and the polynomial interpolating a given set of points is unique; in other words, the functions produced by the two methods are equivalent. So what are the respective advantages and disadvantages? Consider a situation where \(N\) data points have been interpolated and a new data point is introduced, bringing the total to \(N+1\). With Lagrange interpolation, all of the coefficients have to be recomputed from scratch. With Newton interpolation, the first \(N\) coefficients are unchanged and only one new coefficient needs to be calculated, which greatly reduces the amount of computation caused by updating the number of points. That does not mean Lagrange interpolation is useless: in today's era of tensor computing, each Lagrange coefficient is a tensor operation of the same shape, whereas the recursive form of Newton interpolation is somewhat awkward to express as tensor computation.
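A minimal sketch of this incremental property (the helper name `newton_coefficients` and the sample data are illustrative, not from the original post): computing the divided-difference coefficients for \(N\) points and then again after appending one more point shows that the first \(N\) coefficients are reused unchanged, whereas every Lagrange coefficient function \(c_i\) depends on the full node set and would have to be recomputed.

```python
import numpy as np

def newton_coefficients(xs, ys):
    """Divided-difference coefficients f[x_1], f[x_1,x_2], ..., f[x_1,...,x_N]."""
    xs = np.asarray(xs, dtype=float)
    coef = np.asarray(ys, dtype=float).copy()
    n = len(xs)
    for k in range(1, n):
        # Overwrite in place from the bottom up so that the lower-order
        # difference quotients are still available when they are needed.
        for i in range(n - 1, k - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - k])
    return coef

xs, ys = [0.0, 1.0, 2.0], [1.0, 3.0, 2.0]
print(newton_coefficients(xs, ys))                   # -> [ 1.   2.  -1.5]
print(newton_coefficients(xs + [4.0], ys + [5.0]))   # first three entries unchanged
```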
Summary outline
This article introduces the basic forms of the Lagrange and Newton interpolation algorithms by way of linear and quadratic interpolation. The final functional form of the two algorithms is the same, but the amount of computation needed to solve for the parameters differs across scenarios, so the more appropriate interpolation algorithm should be chosen according to the application at hand.
Copyright statement
This article was first linked to:/dechinphy/p/
Author ID: DechinPhy
More original articles:/dechinphy/
Buy the blogger coffee:/dechinphy/gallery/image/