Rolling Gaussian Process Regression with Application to Regime Shifts

Abstract

Gaussian Process Regression (GPR) can be applied to the problem of estimating a spatially-varying field on a regular grid, based on noisy observations made at irregular positions. In cases where the field has a weak time dependence, one may desire to estimate the present-time value of the field using a time window of data that rolls forward as new data become available, leading to a sequence of solution updates. We introduce “rolling GPR” (or moving-window GPR) and present a procedure for implementing it that is more computationally efficient than solving the full GPR problem at each update. Furthermore, regime shifts (sudden large changes in the field) can be detected by monitoring the change in posterior covariance of the predicted data during the updates, and their detrimental effect is mitigated by shortening the time window as the variance rises, and then lengthening it again as it falls (but within prior bounds). A set of numerical experiments is provided that demonstrates the viability of the procedure.

Share and Cite:

Menke, W. (2022) Rolling Gaussian Process Regression with Application to Regime Shifts. Applied Mathematics, 13, 859-868. doi: 10.4236/am.2022.1311054.

1. Introduction

Gaussian Process Regression (GPR) is a popular data assimilation method that can be used to reconstruct a field, $m\left({x}_{1},{x}_{2},\cdots \right)\in ℝ$, that varies with spatial coordinates, ${x}_{i}\in ℝ$ ( $1\le i\le K$ ). Observations of the field at particular positions, together with prior information on its value at these positions and its spatial correlation function, are used to estimate the field at arbitrary positions. GPR is conceptually similar to interpolation, but is not true interpolation because the reconstructed field does not, in general, reproduce the observations (except for the special case called Kriging). This behavior is advantageous in the case of noisy data, because the reconstruction contains fewer defects caused by data outliers.

In many real-world applications, the field that is being reconstructed also has time, $t\in ℝ$, dependence, that is, $m\left(t,{x}_{1},{x}_{2},\cdots \right)$. Furthermore, observations often are not made synchronously, but rather in a sporadic and ongoing manner. When the goal is to estimate the current state of the field, older measurements become obsolete and only the most recent observations are relevant. The choice of the length of the observation window, T, affects both the resolution of the reconstruction (for resolution improves with the number of data) and the variance of the reconstruction (for the inclusion of new data decreases variance but the inclusion of obsolete data increases it). The issue of window length is particularly acute when the field is relatively stable, except for undergoing sporadic reorganizations. These so-called regime shifts are common features of fields associated with the Earth’s biosphere, climate system and geodynamo.

We use a formulation of GPR in which the model parameters, $m=\left[{m}^{\left(t\right)};{m}^{\left(c\right)}\right]\in {ℝ}^{M+N}$, and corresponding spatial coordinates, $x=\left[{x}^{\left(t\right)};{x}^{\left(c\right)}\right]\in {ℝ}^{M+N}$ (where the semicolon implies vertical concatenation), are divided into a target group, t, of length M and a control group, c, of length N. Only the control group is observed by data, d. The control points represent observations of the field at irregularly-spaced positions and the target points represent its values on a regularly-spaced grid. A prior estimate of the model parameters, $\stackrel{^}{m}$, their covariance, ${C}_{m}$, and the covariance of the data, ${\sigma }_{d}^{2}I$, are assumed to be known. The GPR estimate of the model parameters is then:

$\begin{array}{l}\Delta m\equiv \left[\begin{array}{c}\Delta {m}^{\left(t\right)}\\ \Delta {m}^{\left(c\right)}\end{array}\right]=\left[\begin{array}{c}{C}_{m}^{\left(tc\right)}\\ {C}_{m}^{\left(cc\right)}\end{array}\right]{A}^{-1}\Delta d\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{ }\text{with}\text{\hspace{0.17em}}A\equiv {C}_{m}^{\left(cc\right)}+{\sigma }_{d}^{2}I\\ \text{and}\text{\hspace{0.17em}}\text{ }\text{ }{m}^{est}=\Delta m+\stackrel{^}{m}\text{\hspace{0.17em}}\text{ }\text{and}\text{\hspace{0.17em}}\text{ }\Delta d\equiv d-{\stackrel{^}{m}}^{\left(c\right)}\text{\hspace{0.17em}}\text{ }\text{and}\text{\hspace{0.17em}}\text{ }{d}^{pre}={\left[{m}^{\left(c\right)}\right]}^{est}\\ {C}_{{m}^{est}}^{\left(cc\right)}={\sigma }_{d}^{2}{C}_{m}^{\left(cc\right)}{A}^{-2}{C}_{m}^{\left(cc\right)}\text{\hspace{0.17em}}\text{ }\text{ }\text{and}\text{\hspace{0.17em}}\text{ }\text{ }{C}_{{m}^{est}}^{\left(tt\right)}={\sigma }_{d}^{2}{C}_{m}^{\left(tc\right)}{A}^{-2}{C}_{m}^{\left(tc\right)\text{T}}\end{array}$ (1)

The formula for the solution, $\Delta m$, should be evaluated from right to left for maximum computational efficiency. The matrix inverse, ${A}^{-1}$, appears both in the estimated solution, ${m}^{est}$, and its posterior covariance, ${C}_{{m}^{est}}$, which is an important diagnostic of the solution’s quality. In typical cases, $A$ is far from being sparse, so when N is large, ${A}^{-1}$ can be computationally expensive to compute. Alternative methods that omit the calculation of ${A}^{-1}$, such as solving the linear system, $Au=\Delta d$, and then calculating $\Delta m=\left[{C}_{m}^{\left(tc\right)};{C}_{m}^{\left(cc\right)}\right]u$, are nearly as expensive and do not provide a (simple) pathway for computing the posterior covariance.
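For concreteness, Equation (1) can be sketched in Python with NumPy as follows. The function and variable names are our own illustrations, not taken from any published code; the vector $u={A}^{-1}\Delta d$ is formed first so that the remaining products are evaluated right to left:

```python
import numpy as np

def gpr_solve(C_tc, C_cc, d, m_hat_t, m_hat_c, sigma_d2):
    """Evaluate Equation (1): GPR estimate and posterior covariances.

    C_tc : M x N prior covariance between target and control points.
    C_cc : N x N prior covariance among control points.
    """
    N = C_cc.shape[0]
    A = C_cc + sigma_d2 * np.eye(N)
    A_inv = np.linalg.inv(A)          # retained for the rolling updates below
    delta_d = d - m_hat_c             # Delta d
    u = A_inv @ delta_d               # right-to-left evaluation of Delta m
    m_est_t = m_hat_t + C_tc @ u      # target estimate
    m_est_c = m_hat_c + C_cc @ u      # control estimate; equals d_pre
    A_inv2 = A_inv @ A_inv            # A^{-2}
    C_post_cc = sigma_d2 * (C_cc @ A_inv2 @ C_cc)
    C_post_tt = sigma_d2 * (C_tc @ A_inv2 @ C_tc.T)
    return m_est_t, m_est_c, C_post_cc, C_post_tt, A, A_inv
```

The explicit ${A}^{-1}$ is kept because both the posterior covariance and the rolling updates described below require it.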

Suppose that we know the GPR solution when the control group has ${N}_{A}$ elements, ordered by time of observation. Our goal is to compute an updated solution after we delete the first ${N}_{1}$ observations from the group, leaving ${N}_{2}={N}_{A}-{N}_{1}$, and then append ${N}_{3}$ elements, so that it now has ${N}_{B}={N}_{2}+{N}_{3}$ elements. This process, which we term rolling GPR (or moving-window GPR), is repeated many times as new data become available and old data are deemed obsolete. The process ensures that the estimate of the field is kept up-to-date.

The structure of this paper is as follows: The process of performing rolling GPR is detailed in the Methodology section, and issues concerning its implementation are discussed. This process is divided into four conceptually distinct parts: discarding obsolete data from the GPR problem, appending new data, detecting and correcting error in the solution, and detecting and responding to regime changes through changes in window length. The complete process of performing rolling GPR is outlined in the Procedure section. The Examples section provides a demonstration of the technique, applied to the reconstruction of a two-dimensional field with a regime shift, using an exemplary synthetic dataset and a Gaussian prior covariance function. Finally, a discussion of the efficiency of the discard-append process, together with summary remarks, is presented in the Discussion and Conclusions section.

2. Methodology

Discarding Obsolete Data. The quantities $\stackrel{^}{m}$, ${x}^{\left(c\right)}$, $d$, ${C}_{m}^{\left(tc\right)}$, ${C}_{m}^{\left(cc\right)}$, $A$ and ${A}^{-1}$ all need to be modified when data are discarded. The vectors, ${\stackrel{^}{m}}^{\left(c\right)}$ and $d$ are modified by retaining only their last ${N}_{2}$ elements:

$\begin{array}{l}{\stackrel{^}{m}}^{\left(c\right)}\equiv \left[\begin{array}{c}{\stackrel{^}{m}}^{\left({c}_{1}\right)}\\ {\stackrel{^}{m}}^{\left({c}_{2}\right)}\end{array}\right]\text{\hspace{0.17em}}\text{is replaced with}\text{\hspace{0.17em}}{\stackrel{^}{m}}^{\left({c}_{2}\right)}\\ d\equiv \left[\begin{array}{c}{d}^{\left(1\right)}\\ {d}^{\left(2\right)}\end{array}\right]\text{\hspace{0.17em}}\text{is replaced with}\text{\hspace{0.17em}}{d}^{\left(2\right)}\end{array}$ (2)

Here, ${c}_{1}$ and ${c}_{2}$ refer to the deleted group and retained group, respectively. The corresponding xs must be modified in an identical manner. Similarly, only one block within the covariance matrices is retained:

$\begin{array}{l}{C}_{m}^{\left(tc\right)}\equiv \left[\begin{array}{cc}{C}_{m}^{\left(t{c}_{1}\right)}& {C}_{m}^{\left(t{c}_{2}\right)}\end{array}\right]\text{\hspace{0.17em}}\text{is replaced with}\text{\hspace{0.17em}}{C}_{m}^{\left(t{c}_{2}\right)}\\ {C}_{m}^{\left(cc\right)}\equiv \left[\begin{array}{cc}{C}_{m}^{\left({c}_{1}{c}_{1}\right)}& {C}_{m}^{\left({c}_{1}{c}_{2}\right)}\\ {C}_{m}^{\left({c}_{2}{c}_{1}\right)}& {C}_{m}^{\left({c}_{2}{c}_{2}\right)}\end{array}\right]\text{\hspace{0.17em}}\text{is replaced with}\text{\hspace{0.17em}}{C}_{m}^{\left({c}_{2}{c}_{2}\right)}\end{array}$ (3)

The updating of ${A}^{-1}$ requires more effort, but uses only well-known techniques. At the start of the process, the ${N}_{A}×{N}_{A}$ matrix, $A$, and its inverse, ${A}^{-1}$, are known. These matrices are partitioned into submatrices:

$A\equiv \left[\begin{array}{cc}X& {Z}^{\text{T}}\\ Z& Y\end{array}\right]\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{and}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{A}^{-1}\equiv \left[\begin{array}{cc}P& {R}^{\text{T}}\\ R& Q\end{array}\right]$ (4)

Here, $X$ and $P$ are ${N}_{1}×{N}_{1}$, $Y$ and $Q$ are ${N}_{2}×{N}_{2}$, and $Z$ and $R$ are ${N}_{2}×{N}_{1}$, with ${N}_{1}+{N}_{2}={N}_{A}$. We mention an identity involving $Z$, $R$, $P$ and $Q$ that will be used later in the paper:

${Z}^{\text{T}}R{P}^{-1}\left[I-{R}^{\text{T}}Z\right]+{Z}^{\text{T}}QZ={Z}^{\text{T}}R{P}^{-1}-{Z}^{\text{T}}R{P}^{-1}{R}^{\text{T}}Z+{Z}^{\text{T}}QZ=0$ (5)

It can be derived by multiplying out the block matrix form of ${A}^{-1}A=I$, solving the diagonal elements for $X$ and $Y$, and substituting the result into the off-diagonal elements. The process of removing the first ${N}_{1}$ data corresponds to replacing $A$ with $Y$, and ${A}^{-1}$ with ${Y}^{-1}$. An efficient method for computing ${Y}^{-1}$ can be designed using the Woodbury identity :

${\left(M+UWV\right)}^{-1}={M}^{-1}-{M}^{-1}U{\left({W}^{-1}+V{M}^{-1}U\right)}^{-1}V{M}^{-1}$ (6)

Here, $M$, $W$, $U$ and $V$ are conformable matrices. The reader may confirm by direct substitution that:

$\begin{array}{l}W=I\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{and}\text{\hspace{0.17em}}\text{\hspace{0.17em}}M=A\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{and}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{\left(M+UWV\right)}^{-1}=\left[\begin{array}{cc}{X}^{-1}& 0\\ 0& {Y}^{-1}\end{array}\right]\\ {U}_{N,2{N}_{1}}=\left[\begin{array}{cc}{I}_{{N}_{1},{N}_{1}}& {0}_{{N}_{1},{N}_{1}}\\ {0}_{{N}_{2},{N}_{1}}& -{Z}_{{N}_{2},{N}_{1}}\end{array}\right]\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{and}\text{\hspace{0.17em}}\text{\hspace{0.17em}}{V}_{2{N}_{1},N}=\left[\begin{array}{cc}{0}_{{N}_{1},{N}_{1}}& -{\left[{Z}^{\text{T}}\right]}_{{N}_{1},{N}_{2}}\\ {I}_{{N}_{1},{N}_{1}}& {0}_{{N}_{1},{N}_{2}}\end{array}\right]\end{array}$ (7)

are consistent choices of the several matrices (with the subscripts indicating the sizes of selected matrices). Thus, ${Y}^{-1}$ can be read from the lower-right-hand block of ${A}^{-1}-{A}^{-1}U{\left(I+V{A}^{-1}U\right)}^{-1}V{A}^{-1}$. Note that the matrix, $\left(I+V{A}^{-1}U\right)$, is $\left(2{N}_{1}\right)×\left(2{N}_{1}\right)$, so that its inverse requires less computational effort than does inverting the ${N}_{2}×{N}_{2}$ matrix, $Y$, as long as relatively few data are being removed. Below, we develop an analytic formula for ${\left(I+V{A}^{-1}U\right)}^{-1}$ that further reduces the size of the matrices that must be inverted to ${N}_{1}×{N}_{1}$. An explicit formula for ${Y}^{-1}$ is obtained by manipulating the block matrix form of the Woodbury formula, starting with:

$\begin{array}{c}\left[I+V{A}^{-1}U\right]=I+\left[\begin{array}{cc}0& -{Z}^{\text{T}}\\ I& 0\end{array}\right]\left[\begin{array}{cc}P& {R}^{\text{T}}\\ R& Q\end{array}\right]\left[\begin{array}{cc}I& 0\\ 0& -Z\end{array}\right]\\ =\left[\begin{array}{cc}\left[{I}_{{N}_{1},{N}_{1}}-{Z}^{\text{T}}R\right]& {Z}^{\text{T}}QZ\\ P& \left[{I}_{{N}_{1},{N}_{1}}-{R}^{\text{T}}Z\right]\end{array}\right]\end{array}$ (8)

Note that the off-diagonal blocks of this matrix are symmetric, and the diagonal blocks are transposes of one another. Its inverse has the form:

$\begin{array}{l}{\left[I+V{A}^{-1}U\right]}^{-1}=\left[\begin{array}{cc}{I}_{{N}_{1},{N}_{1}}& {C}_{{N}_{1},{N}_{1}}\\ {D}_{{N}_{1},{N}_{1}}& {I}_{{N}_{1},{N}_{1}}\end{array}\right]\\ \text{with}\text{\hspace{0.17em}}C={Z}^{\text{T}}R{P}^{-1}={Z}^{\text{T}}R{P}^{-1}{R}^{\text{T}}Z-{Z}^{\text{T}}QZ\\ \text{and}\text{\hspace{0.17em}}D={R}^{\text{T}}Z{\left[{Z}^{\text{T}}QZ\right]}^{-1}=-{\left[{P}^{-1}-C\right]}^{-1}\end{array}$ (9)

The formulas for $C$ and $D$ were derived by multiplying out the block matrix form of ${\left[I+V{A}^{-1}U\right]}^{-1}\left[I+V{A}^{-1}U\right]=I$, solving the diagonal elements for $C$ and $D$, and applying identity Equation (5). Note that both $C$ and $D$ are symmetric matrices. The products ${A}^{-1}U$ and $V{A}^{-1}$ have the forms:

$\begin{array}{l}{A}^{-1}U=\left[\begin{array}{cc}P& {R}^{\text{T}}\\ R& Q\end{array}\right]\left[\begin{array}{cc}I& 0\\ 0& -Z\end{array}\right]=\left[\begin{array}{cc}P& -{R}^{\text{T}}Z\\ R& -QZ\end{array}\right]\equiv \left[\begin{array}{cc}P& S\\ R& T\end{array}\right]\\ V{A}^{-1}=\left[\begin{array}{cc}0& -{Z}^{\text{T}}\\ I& 0\end{array}\right]\left[\begin{array}{cc}P& {R}^{\text{T}}\\ R& Q\end{array}\right]=\left[\begin{array}{cc}-{Z}^{\text{T}}R& -{Z}^{\text{T}}Q\\ P& {R}^{\text{T}}\end{array}\right]\equiv \left[\begin{array}{cc}{S}^{\text{T}}& {T}^{\text{T}}\\ P& {R}^{\text{T}}\end{array}\right]\\ \text{with}\text{\hspace{0.17em}}S\equiv -{R}^{\text{T}}Z\text{\hspace{0.17em}}\text{and}\text{\hspace{0.17em}}T=-QZ\end{array}$ (10)

Thus, the expression, ${A}^{-1}U{\left(I+V{A}^{-1}U\right)}^{-1}V{A}^{-1}$, can be simplified to:

$\begin{array}{l}{A}^{-1}U{\left(I+V{A}^{-1}U\right)}^{-1}V{A}^{-1}=\left[\begin{array}{cc}P& S\\ R& T\end{array}\right]\left[\begin{array}{cc}I& C\\ D& I\end{array}\right]\left[\begin{array}{cc}{S}^{\text{T}}& {T}^{\text{T}}\\ P& {R}^{\text{T}}\end{array}\right]\\ =\left[\begin{array}{cc}\left\{P\left({S}^{\text{T}}+CP\right)+S\left(D{S}^{\text{T}}+P\right)\right\}& \left\{P\left({T}^{\text{T}}+C{R}^{\text{T}}\right)+S\left(D{T}^{\text{T}}+{R}^{\text{T}}\right)\right\}\\ \left\{R\left({S}^{\text{T}}+CP\right)+T\left(D{S}^{\text{T}}+P\right)\right\}& \left\{R\left({T}^{\text{T}}+C{R}^{\text{T}}\right)+T\left(D{T}^{\text{T}}+{R}^{\text{T}}\right)\right\}\end{array}\right]\end{array}$ (11)

The lower right-hand block of Equation (11) becomes the new ${A}^{-1}$ :

$\begin{array}{l}A\text{\hspace{0.17em}}\text{is replaced with}\text{\hspace{0.17em}}Y\\ {A}^{-1}\text{\hspace{0.17em}}\text{is replaced with}\text{\hspace{0.17em}}{Y}^{-1}=Q-\left(\left(R{T}^{\text{T}}+T{R}^{\text{T}}\right)+RC{R}^{\text{T}}+TD{T}^{\text{T}}\right)\end{array}$ (12)

This formula has been tested numerically, and was found to be correct to within machine precision. Accuracy and speed are improved through the use of coding techniques that utilize and enforce the symmetry of all the symmetric matrices.
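A minimal NumPy sketch of the discard step follows (the function name discard_update and its interface are ours). Only ${N}_{1}×{N}_{1}$ matrices are inverted, which is cheap when few data are removed:

```python
import numpy as np

def discard_update(A, A_inv, N1):
    """Drop the first N1 rows/columns of A; update A^{-1} via Eqs. (9)-(12)."""
    Z = A[N1:, :N1]                      # lower-left block of A, Eq. (4)
    P = A_inv[:N1, :N1]
    R = A_inv[N1:, :N1]
    Q = A_inv[N1:, N1:]
    P_inv = np.linalg.inv(P)             # N1 x N1 inverse only
    C = Z.T @ R @ P_inv                  # Eq. (9)
    D = -np.linalg.inv(P_inv - C)        # Eq. (9)
    T = -Q @ Z                           # Eq. (10)
    Y_inv = Q - ((R @ T.T + T @ R.T) + R @ C @ R.T + T @ D @ T.T)  # Eq. (12)
    return A[N1:, N1:], Y_inv            # new A (= Y) and its inverse
```

In simple random tests this agrees with a direct inverse of the retained block to machine precision, consistent with the statement above.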

Appending New Data. As before, the quantities ${\stackrel{^}{m}}^{\left({c}_{2}\right)}$, ${x}^{\left({c}_{2}\right)}$, ${d}^{\left(2\right)}$, ${C}_{m}^{\left(t{c}_{2}\right)}$, ${C}_{m}^{\left({c}_{2}{c}_{2}\right)}$, $A$ and ${A}^{-1}$ all need to be modified when new data are appended. The vectors, ${\stackrel{^}{m}}^{\left(c\right)}$ and $d$ are modified by appending ${N}_{3}$ elements:

${\stackrel{^}{m}}^{\left({c}_{2}\right)}\text{\hspace{0.17em}}\text{is replaced with}\text{\hspace{0.17em}}{\stackrel{^}{m}}^{\left(c\right)}\equiv \left[\begin{array}{c}{\stackrel{^}{m}}^{\left({c}_{2}\right)}\\ {\stackrel{^}{m}}^{\left({c}_{3}\right)}\end{array}\right]\text{\hspace{0.17em}}\text{and}\text{\hspace{0.17em}}{d}^{\left(2\right)}\text{\hspace{0.17em}}\text{is replaced with}\text{\hspace{0.17em}}d\equiv \left[\begin{array}{c}{d}^{\left(2\right)}\\ {d}^{\left(3\right)}\end{array}\right]$ (13)

The corresponding xs must be modified in an identical manner. Similarly, new blocks are appended to the covariance matrices:

$\begin{array}{l}{C}_{m}^{\left(t{c}_{2}\right)}\text{\hspace{0.17em}}\text{is replaced with}\text{\hspace{0.17em}}{C}_{m}^{\left(tc\right)}\equiv \left[\begin{array}{cc}{C}_{m}^{\left(t{c}_{2}\right)}& {C}_{m}^{\left(t{c}_{3}\right)}\end{array}\right]\\ {C}_{m}^{\left({c}_{2}{c}_{2}\right)}\text{\hspace{0.17em}}\text{is replaced with}\text{\hspace{0.17em}}{C}_{m}^{\left(cc\right)}\equiv \left[\begin{array}{cc}{C}_{m}^{\left({c}_{2}{c}_{2}\right)}& {C}_{m}^{\left({c}_{2}{c}_{3}\right)}\\ {C}_{m}^{\left({c}_{3}{c}_{2}\right)}& {C}_{m}^{\left({c}_{3}{c}_{3}\right)}\end{array}\right]\end{array}$ (14)

As before, the updating of ${A}^{-1}$ requires more effort, but uses a well-known technique based on the Bordering method. We rename the existing matrix, $A$, to $X$, as it will become the upper-left block of the modified $A$. Both $X$ and its inverse, ${X}^{-1}$, are known ${N}_{2}×{N}_{2}$ matrices. We seek to add the blocks, $Y$ and $Z$, associated with the ${N}_{3}$ new data, creating an augmented ${N}_{B}×{N}_{B}$ matrix, $A$, with ${N}_{B}={N}_{2}+{N}_{3}$. The matrix, $A$, and its inverse satisfy:

${A}^{-1}A\equiv \left[\begin{array}{cc}P& {R}^{\text{T}}\\ R& Q\end{array}\right]\left[\begin{array}{cc}X& {Z}^{\text{T}}\\ Z& Y\end{array}\right]=\left[\begin{array}{cc}{I}_{{N}_{2}{N}_{2}}& {0}_{{N}_{2}{N}_{3}}\\ {0}_{{N}_{3}{N}_{2}}& {I}_{{N}_{3}{N}_{3}}\end{array}\right]$ (15)

Multiplying out the equation and solving for $P$, $Q$ and $R$ yields:

$\begin{array}{l}PX+{R}^{\text{T}}Z=I\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{so}\text{\hspace{0.17em}}P=\left[I-{R}^{\text{T}}Z\right]{X}^{-1}\\ P{Z}^{\text{T}}+{R}^{\text{T}}Y=0\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{so}\text{\hspace{0.17em}}{R}^{\text{T}}=-P{Z}^{\text{T}}{Y}^{-1}\\ RX+QZ=0\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{so}\text{\hspace{0.17em}}R=-QZ{X}^{-1}\\ R{Z}^{\text{T}}+QY=I\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{so}\text{\hspace{0.17em}}-QZ{X}^{-1}{Z}^{\text{T}}+QY=I\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{so}\text{\hspace{0.17em}}Q={\left[Y-Z{X}^{-1}{Z}^{\text{T}}\right]}^{-1}\end{array}$ (16)

The matrices then become:

$\begin{array}{l}A\text{\hspace{0.17em}}\text{is replaced with}\text{\hspace{0.17em}}\left[\begin{array}{cc}X& {Z}^{\text{T}}\\ Z& Y\end{array}\right]\\ {A}^{-1}\text{\hspace{0.17em}}\text{is replaced with}\text{\hspace{0.17em}}\left[\begin{array}{cc}P& {R}^{\text{T}}\\ R& Q\end{array}\right]\end{array}$ (17)

These formulas have been tested numerically, and are correct to within machine precision. Accuracy and speed are improved through the use of coding techniques that utilize and enforce the symmetry of all the symmetric matrices.
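The append step can be sketched similarly (again, the name append_update and its interface are our own). Only an ${N}_{3}×{N}_{3}$ inverse is required:

```python
import numpy as np

def append_update(X, X_inv, Z, Y):
    """Border X (the old A) with new blocks Z (N3 x N2) and Y (N3 x N3),
    returning the enlarged A and its inverse via Equations (16)-(17)."""
    Q = np.linalg.inv(Y - Z @ X_inv @ Z.T)    # Eq. (16); N3 x N3 inverse only
    R = -Q @ Z @ X_inv                        # Eq. (16)
    P = (np.eye(X.shape[0]) - R.T @ Z) @ X_inv  # Eq. (16)
    A_new = np.block([[X, Z.T], [Z, Y]])      # Eq. (17)
    A_new_inv = np.block([[P, R.T], [R, Q]])  # Eq. (17)
    return A_new, A_new_inv
```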

Detecting and Correcting Errors in the Solution. Numerical tests (not shown) indicate that the delete/append process can be repeated many (e.g., hundreds of) times without significant loss of precision, at least for the class of matrices encountered in GPR. However, as a precaution, we recommend that the inverse be corrected every few hundred iterations, either through the direct calculation of ${A}^{-1}$ from $A$, or through one application of the Schultz method:

${A}^{-1}\text{\hspace{0.17em}}\text{ }\text{is replaced by}\text{\hspace{0.17em}}\text{ }{A}^{-1}\left(2I-A{A}^{-1}\right)$ (18)

One way to monitor accuracy is to compute the absolute value of just one (or a few) of the diagonal elements of the error matrix, $\Phi \equiv A{A}^{-1}-I$. For fixed i, the quantity $|{\Phi }_{ii}|$ can be computed very efficiently, because only one inner product (between the ith row of $A$ and the ith column of ${A}^{-1}$) need be performed. The correction process then can be initiated when $|{\Phi }_{ii}|$ exceeds a threshold, chosen to be a small fraction, say ${10}^{-8}$.
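Both the cheap error monitor and the correction of Equation (18) are a few lines of NumPy (function names ours):

```python
import numpy as np

def phi_ii(A, A_inv, i):
    """|Phi_ii| from one inner product: row i of A with column i of A^{-1}."""
    return abs(A[i, :] @ A_inv[:, i] - 1.0)

def schulz_refine(A, A_inv):
    """One Schultz step, Eq. (18): replace A^{-1} with A^{-1}(2I - A A^{-1})."""
    return A_inv @ (2.0 * np.eye(A.shape[0]) - A @ A_inv)
```

Because the Schultz iteration converges quadratically, a single step applied to a slightly drifted inverse is usually sufficient to restore near machine precision.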

Readjusting Window Length. The posterior data variance,

${\sigma }^{2}={N}_{B}^{-1}{e}^{\text{T}}e\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{with}\text{\hspace{0.17em}}\text{ }e=d-{d}^{pre}$ (19)

is a measure of how well the reconstruction fits the data. The quantity, ${\chi }^{2}\equiv {N}_{B}{\sigma }^{2}/{\sigma }_{d}^{2}$, is approximately chi-squared distributed with ${N}_{B}$ degrees of freedom, and therefore has an expected value of ${N}_{B}$ and variance of $2{N}_{B}$. Thus, the expected value of ${\sigma }^{2}$ is ${\sigma }_{d}^{2}$, its variance is $2{\sigma }_{d}^{4}/{N}_{B}$ and its 95% confidence bound is ${\sigma }_{95}^{2}={\sigma }_{d}^{2}+2{\sigma }_{d}^{2}{\left[2/{N}_{B}\right]}^{1/2}$. However, ${\sigma }^{2}$ can be expected to rise well above this bound during a regime shift due to model error, that is, two distinct spatial patterns are being comingled and cannot be fit simultaneously. Shortening the window length tends to bring ${\sigma }^{2}$ closer to the bound, because obsolete data are being discarded more rapidly. This suggests the strategy of defining a threshold, guided by the value of ${\sigma }_{95}^{2}$, decreasing the window length while ${\sigma }^{2}$ is above it (by setting ${N}_{1}>{N}_{3}$) and increasing it once ${\sigma }^{2}$ has dropped below it (by setting ${N}_{1}<{N}_{3}$ until ${N}_{B}$ has reached some upper limit).
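One plausible readjustment policy is sketched below; the specific shrink and grow rates, and the function name adjust_counts, are our own choices, since the discussion above constrains only the signs of ${N}_{1}-{N}_{3}$:

```python
import math

def adjust_counts(sigma2, sigma_d2, N_B, N3, N_min=50, N_max=90):
    """Pick N1, the number of data to discard next, from the posterior variance."""
    sigma95 = sigma_d2 * (1.0 + 2.0 * math.sqrt(2.0 / N_B))  # 95% bound on sigma^2
    if sigma2 > sigma95 and N_B > N_min:
        return min(2 * N3, N_B - N_min)      # N1 > N3: shorten the window
    if N_B < N_max:
        return max(N3 - (N_max - N_B), 0)    # N1 < N3: lengthen the window
    return N3                                # hold the window length fixed
```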

3. Procedure

Step 1. Decide upon initial values of ${N}_{1}$, ${N}_{2}$ and ${N}_{3}$. The choice ${N}_{1}={N}_{3}$ implies that the same number of data are appended as are discarded, leaving the window length, ${N}_{B}={N}_{2}+{N}_{3}$, unchanged. The rolling process is most efficient when ${N}_{2}\gg {N}_{1}$ and ${N}_{2}\gg {N}_{3}$.

Step 2. Solve the GPR problem for an initial group of ${N}_{A}={N}_{1}+{N}_{2}$ data, as in Equation (1).

Step 3. Discard ${N}_{1}$ data by modifying $\stackrel{^}{m}$ and $d$, and their corresponding xs, as in Equation (2), ${C}_{m}^{\left(tc\right)}$ and ${C}_{m}^{\left(cc\right)}$ as in Equation (3) and $A$ and ${A}^{-1}$ as in Equation (12).

Step 4. Append ${N}_{3}$ data by modifying $\stackrel{^}{m}$ and $d$, and their corresponding xs, as in Equation (13), ${C}_{m}^{\left(tc\right)}$ and ${C}_{m}^{\left(cc\right)}$ as in Equation (14) and $A$ and ${A}^{-1}$ as in Equation (17).

Step 5. (Optional) Monitor the error, $|{\Phi }_{ii}|$, where $\Phi \equiv \left[A{A}^{-1}-I\right]$, for a single index, i, and when it exceeds a threshold of, say, ${10}^{-8}$, refine ${A}^{-1}$ as in Equation (18).

Step 6. Solve the GPR problem for ${N}_{B}={N}_{2}+{N}_{3}$ data, obtaining ${m}^{est}$ and ${d}^{pre}$ as in Equation (1).

Step 7. (Optional) Compute the posterior data variance, ${\sigma }^{2}$, as in Equation (19) and reassign the values of ${N}_{1}$, ${N}_{2}$ and ${N}_{3}$, as needed (as discussed in the Readjusting Window Length section).

Step 8. Once ${N}_{3}$ new data are available, iterate the procedure, starting at Step 3.

4. Examples

Test Scenario. In these examples, the two-dimensional field, $m\left(x,y\right)$, $0\le x\le 30$, $0\le y\le 30$, is reconstructed on a regular 30 × 30 grid, using data that are acquired at the steady rate of 10 observations per time interval, ∆t. The observations are made at randomly chosen positions and have uncorrelated and uniform error with prior variance, ${\sigma }_{d}^{2}={10}^{-4}$. The true field experiences a regime shift at time, 25∆t, when it abruptly changes from a four-lobed to a three-lobed pattern:

$m\left(t,x,y\right)=\left\{\begin{array}{ll}\mathrm{sin}\left(2\pi x/L\right)\mathrm{sin}\left(2\pi y/L\right)\hfill & \left(t<25\right)\hfill \\ \mathrm{sin}\left(\pi x/L\right)\mathrm{sin}\left(3\pi y/L\right)\hfill & \left(t\ge 25\right)\hfill \end{array}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{with}\text{\hspace{0.17em}}L=30$ (20)

The field is assumed to have zero prior value and Gaussian prior covariance:

$\begin{array}{l}\stackrel{^}{m}=0\text{\hspace{0.17em}}\text{ }\text{and}\text{\hspace{0.17em}}\text{ }{C}_{ij}={\gamma }^{2}\mathrm{exp}\left(-\frac{{r}_{ij}^{2}}{2{s}^{2}}\right)\\ \text{with}\text{\hspace{0.17em}}{r}_{ij}^{2}={\left({x}_{i}-{x}_{j}\right)}^{2}+{\left({y}_{i}-{y}_{j}\right)}^{2}\text{\hspace{0.17em}}\text{and}\text{\hspace{0.17em}}{\gamma }^{2}=\frac{1}{3}\text{\hspace{0.17em}}\text{and}\text{\hspace{0.17em}}s=0.22\end{array}$ (21)
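The test scenario is easy to reproduce; the prior covariance of Equation (21) and the regime-shifting field of Equation (20) can be sketched as follows (function names ours; t is taken in units of $\Delta t$):

```python
import numpy as np

def gaussian_cov(xa, ya, xb, yb, gamma2=1.0 / 3.0, s=0.22):
    """Gaussian prior covariance, Eq. (21), between two sets of points."""
    r2 = (xa[:, None] - xb[None, :]) ** 2 + (ya[:, None] - yb[None, :]) ** 2
    return gamma2 * np.exp(-r2 / (2.0 * s ** 2))

def true_field(t, x, y, L=30.0):
    """Regime-shifting true field, Eq. (20)."""
    if t < 25:
        return np.sin(2 * np.pi * x / L) * np.sin(2 * np.pi * y / L)
    return np.sin(np.pi * x / L) * np.sin(3 * np.pi * y / L)
```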

Example without Window Length Readjustment. In this example (Figure 1(A)), the size of the rolling set of observations increases with time up to an upper bound of 90. For times, $t<25\Delta t$, the field is correctly reconstructed, showing the correct four-lobed pattern, and the posterior variance is about equal to the prior variance. The field is incorrectly reconstructed during the time interval, $25\Delta t\le t\le 33\Delta t$, when the window comingles the two patterns, and the posterior variance is about one thousand times higher than the prior variance. For times, $t\ge 34\Delta t$, the field is correctly reconstructed, showing the correct three-lobed pattern, and the posterior variance returns to its original level. This example confirms the ability of the method to reconstruct the field in the presence of regime shifts.

Figure 1. Example of rolling GPR. (A) Case where the window length is held constant. The window length (top curve) grows to ${N}_{B}=90$ and then is held constant. The two-dimensional field, $m\left(x,y\right)$ (colors), abruptly changes from a four-lobed pattern to a three-lobed pattern at time, $t=25\Delta t$. The posterior data variance, ${\sigma }^{2}$ (bottom curve), is low, except near the time of the change. (B) Case where the window length is varied. The posterior data variance, ${\sigma }^{2}$ (top black curve), is monitored and the time at which it exceeds a threshold (top red curve) is detected. The window length (bottom black curve) is decreased during the interval of high variance, and then increased afterwards, within bounds (bottom two red curves).

Example with Window Length Readjustment. In this example (Figure 1(B)), the posterior data variance, ${\sigma }^{2}$ (a measure of the misfit of the data), is monitored, and the size of the rolling set of observations is reduced when it increases past a threshold (but not below a lower bound of ${N}_{B}=50$). Once the error has declined below the threshold, the set size is allowed to increase, up to an upper bound of ${N}_{B}=90$. The duration of the interval of elevated error is reduced by a factor of about two compared to the first experiment. The presence of the second three-lobed pattern is evident at an earlier time than in the first example, demonstrating the utility of the window length readjustment.

5. Discussion and Conclusions

The main advantage of the rolling Gaussian Process Regression method that we describe here is its ability to reconstruct a time-varying field without any assumptions about its dynamics. This is in contrast to other data assimilation techniques, such as Kalman filtering , in which the differential equation describing the time dependence is assumed to be known.

The ${N}_{B}×{N}_{B}$ matrix, $A$, that arises in Gaussian Process Regression is a non-sparse, symmetric positive definite matrix that takes ${N}_{B}^{3}/3+O\left({N}_{B}^{2}\right)$ floating point operations to invert. Careful counting of operations reveals that the discard/append process described above takes about $6{N}_{1}{N}_{B}^{2}+4{N}_{3}{N}_{B}^{2}$ operations. Thus, for ${N}_{1}={N}_{3}$, it is more efficient when ${N}_{1}/{N}_{B}<1/30$, that is, when just a few percent of the data are being updated.
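Reading the operation count above as roughly $\left(6{N}_{1}+4{N}_{3}\right){N}_{B}^{2}$ and comparing it with the ${N}_{B}^{3}/3$ cost of a full inversion gives, for ${N}_{1}={N}_{3}$, the break-even condition $10{N}_{1}{N}_{B}^{2}<{N}_{B}^{3}/3$, i.e. ${N}_{1}/{N}_{B}<1/30$. A one-line check of this arithmetic (ours):

```python
def update_cheaper(N1, N3, NB):
    """True when the rolling update beats a full inversion, by the counts above."""
    return (6 * N1 + 4 * N3) * NB ** 2 < NB ** 3 / 3
```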

The procedure for implementing “rolling GPR” (or moving-window GPR) that we present here is more computationally efficient than solving the full GPR problem at each update, at least when the number of data that are deleted/appended is only a few percent of the total number used in the calculation. Regime shifts (sudden large changes in the field) can be detected by monitoring the posterior data variance (a measure of the misfit of the data) during the updates, and their detrimental effect is mitigated by shortening the time window as the variance rises, and then lengthening it again as it falls (but within prior bounds). The numerical experiments presented here demonstrate the viability and usefulness of the procedure.

Acknowledgements

The author thanks Roger Creel of Columbia University (New York, USA) for the helpful discussion.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

References

Krige, D.G. (1951) A Statistical Approach to Some Basic Mine Valuation Problems on the Witwatersrand. Journal of the Chemical, Metallurgical and Mining Society of South Africa, 52, 119-139.

Rasmussen, C.E. and Williams, C.K.I. (2006) Gaussian Processes for Machine Learning. The MIT Press, Cambridge, 272 p. https://doi.org/10.7551/mitpress/3206.001.0001

Menke, W. and Creel, R. (2021) Gaussian Process Regression Reviewed in the Context of Inverse Theory. Surveys in Geophysics, 42, 473-503. https://doi.org/10.1007/s10712-021-09640-w

Menke, W. (2022) Environmental Data Analysis with MATLAB or Python: Principles, Applications and Prospects. 3rd Edition, Elsevier, Amsterdam.

Scheffer, M., Carpenter, S., Foley, J.A., Folke, C. and Walker, B. (2001) Catastrophic Shifts in Ecosystems. Nature, 413, 591-596. https://doi.org/10.1038/35098000

Beisner, B.E., Haydon, D.T. and Cuddington, K. (2003) Alternative Stable States in Ecology. Frontiers in Ecology and the Environment, 1, 376-382.

Smol, J.P., Wolfe, A.P., Birks, H.J.B., Douglas, M.S.V., Jones, V.J., Korhola, A., Pienitz, R., Rühland, K., Sorvari, S., Antoniades, D., Brooks, S.J., Fallu, M.-A., Hughes, M., Keatley, B.E., Laing, T.E., Michelutti, N., Nazarova, L., Nyman, M., Paterson, A.M., Perren, B., Quinlan, R., Rautio, M., Saulnier-Talbot, É., Siitonen, S., Solovieva, N. and Weckström, J. (2005) Climate-Driven Regime Shifts in the Biological Communities of Arctic Lakes. Proceedings of the National Academy of Sciences of the United States of America, 102, 4397-4402. https://doi.org/10.1073/pnas.0500245102

Zhang, Q.-B. and Fang, O. (2020) Tree Rings Circle an Abrupt Shift in Climate. Science, 370, 1037-1038. https://doi.org/10.1126/science.abf1700

Alley, R.B., Marotzke, J., Nordhaus, W.D., Overpeck, J.T., Peteet, D.M., Pielke Jr., R.A., Pierrehumbert, R.T., Rhines, P.B., Stocker, T.F., Talley, L.D. and Wallace, J.M. (2003) Abrupt Climate Change. Science, 299, 2005-2010. https://doi.org/10.1126/science.1081056

Glatzmaier, G.A. and Roberts, P.H. (1995) A Three-Dimensional Self-Consistent Computer Simulation of a Geomagnetic Field Reversal. Nature, 377, 203-209. https://doi.org/10.1038/377203a0

Meynadier, L. (1993) Geomagnetic Field Intensity and Reversals during the Past Four Million Years. Nature, 366, 234-238. https://doi.org/10.1038/366234a0

Woodbury, M. (1950) Inverting Modified Matrices. Memorandum Report 42, Statistical Research Group, Princeton University, Princeton.

Householder, A.S. (1975) The Theory of Matrices in Numerical Analysis. Dover Publications, New York.

Westlake, J.R. (1968) A Handbook of Numerical Matrix Inversion and Solution of Linear Equations. John Wiley & Sons, Hoboken.

Li, R.C. (2014) Matrix Perturbation Theory. In: Hogben, L., Ed., Handbook of Linear Algebra, 2nd Edition, CRC Press, Boca Raton.

Petersen, K.B. and Pedersen, M.S. (2012) The Matrix Cookbook. https://www.academia.edu/42127358/The_Matrix_Cookbook

Petkovic, M.D. (2014) Generalized Schultz Iterative Methods for the Computation of Outer Inverses. Computers & Mathematics with Applications, 67, 1837-1847. https://doi.org/10.1016/j.camwa.2014.03.019

Grewal, M.S. and Andrews, A.P. (2014) Kalman Filtering: Theory and Practice Using MATLAB. 4th Edition, Wiley-IEEE Press, Hoboken.

Hunger, R. (2007) Floating Point Operations in Matrix-Vector Calculus (Version 1.3). Technical Report, Technische Universität München, Associate Institute for Signal Processing, München.