Tuning of Prior Covariance in Generalized Least Squares


1. Introduction

Generalized Least Squares (GLS, also called least squares with prior information) is a tool for statistical inference [1]-[6] that is widely used in geotomography [7]-[12] and geophysical inversion [13] [14], as well as in other areas of the physical sciences and engineering. One of the features that makes GLS especially useful in the imaging of multidimensional fields (for example, density, velocity, viscosity) is its ability to implement, in a natural and versatile way, prior information about the behavior of the field. Widely-used types of prior information include the field being smooth, as quantified by its low-order derivatives [15], having a specified power spectral density or autocovariance [7] [15], and satisfying a specified partial differential equation (such as the geostrophic flow equation [16] or the diffusion equation [4]). The word “regularization” is sometimes used to describe the effect of prior information on the solution process [17].

We review the Generalized Least Squares (GLS) method here, following the notation in [6], in order to provide context and to establish nomenclature. In GLS, observations (or data) and prior information (or inferences) are combined to arrive at a best estimate of initially-unknown model parameters (which might, for example, represent a field sampled on a regular grid). The data are assumed to satisfy the linear equation $Gm=d$, where $d\in {\mathbb{R}}^{N}$ is a vector of data, $m\in {\mathbb{R}}^{M}$ is a vector of model parameters, and $G$ is a known “kernel” matrix associated with the data. Prior information is assumed to satisfy a linear equation $Hm=h$, where $h\in {\mathbb{R}}^{K}$ is a vector of prior values and $H$ is a kernel matrix associated with the prior information. GLS problems are assumed to be over-determined, with $N+K>M$. For observed data ${d}^{obs}$, known prior information ${h}^{pri}$ and a specified model $m$, the prediction error is $e\equiv {d}^{obs}-Gm$ and the prior information error is $\mathcal{l}\equiv {h}^{pri}-Hm$. These errors are assumed to be Normally-distributed with zero mean and prior covariance ${C}_{d}$ and ${C}_{h}$, respectively. Then, the normalized errors $\stackrel{\u02dc}{e}\equiv {C}_{d}^{-1/2}e$ and $\stackrel{\u02dc}{\mathcal{l}}\equiv {C}_{h}^{-1/2}\mathcal{l}$ are independent and identically-distributed Normal random variables with zero mean and unit variance. Bayes' theorem can be used to show that the best estimate ${m}^{est}$ of the solution is the one that minimizes the generalized error $\Phi \equiv E+L$, with $E\equiv {\stackrel{\u02dc}{e}}^{\text{T}}\stackrel{\u02dc}{e}$ and $L\equiv {\stackrel{\u02dc}{\mathcal{l}}}^{\text{T}}\stackrel{\u02dc}{\mathcal{l}}$ [1] [2] [5]. The solution can be expressed in a variety of equivalent forms, among which is the widely-used version [6]:

${m}^{est}={Z}^{-1}\left({G}^{\text{T}}{C}_{d}^{-1}{d}^{obs}+{H}^{\text{T}}{C}_{h}^{-1}{h}^{pri}\right)\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{with}\text{\hspace{0.17em}}Z\equiv {G}^{\text{T}}{C}_{d}^{-1}G+{H}^{\text{T}}{C}_{h}^{-1}H$ (1)
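As a concrete sketch, (1) amounts to a few lines of linear algebra. The sizes and matrix values below are small, hypothetical examples of our own, not taken from any study cited here:

```python
import numpy as np

# Hypothetical toy problem: N = 3 data, M = 2 model parameters, K = 2 prior rows.
G = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
d_obs = np.array([1.0, 2.0, 3.1])
H = np.eye(2)                      # prior: the model parameters themselves ...
h_pri = np.zeros(2)                # ... are close to zero
Cd = 0.1 * np.eye(3)               # data covariance
Ch = 100.0 * np.eye(2)             # large variance = weak prior

Cd_inv = np.linalg.inv(Cd)
Ch_inv = np.linalg.inv(Ch)
Z = G.T @ Cd_inv @ G + H.T @ Ch_inv @ H
m_est = np.linalg.solve(Z, G.T @ Cd_inv @ d_obs + H.T @ Ch_inv @ h_pri)
```

With the prior this weak, `m_est` lies close to the ordinary least-squares solution of $Gm=d^{obs}$, illustrating that GLS reduces to ordinary least squares as ${C}_{h}\to \infty$.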

The assumption of linear kernels $G$ and $H$ is a very restrictive one. In the well-studied nonlinear generalization [1] [6], the products $Gm$ and $Hm$ are replaced with vector functions $g\left(m\right)$ and $h\left(m\right)$. Then, a common solution method is to linearize the data and prior information equations around a trial solution ${m}^{\left(0\right)}$ :

$\begin{array}{l}{G}^{\left(0\right)}\Delta m=\Delta d\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{with}\text{\hspace{0.17em}}\text{\hspace{0.05em}}{G}_{ij}^{\left(0\right)}\equiv {\frac{\partial {g}_{i}}{\partial {m}_{j}}|}_{{m}^{\left(0\right)}}\text{\hspace{0.17em}}\text{\hspace{0.05em}}\text{and}\text{\hspace{0.17em}}\text{\hspace{0.05em}}\Delta d\equiv {d}^{obs}-g\left({m}^{\left(0\right)}\right)\\ {H}^{\left(0\right)}\Delta m=\Delta h\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{with}\text{\hspace{0.17em}}\text{\hspace{0.05em}}{H}_{ij}^{\left(0\right)}\equiv {\frac{\partial {h}_{i}}{\partial {m}_{j}}|}_{{m}^{\left(0\right)}}\text{\hspace{0.17em}}\text{\hspace{0.05em}}\text{and}\text{\hspace{0.17em}}\text{\hspace{0.05em}}\Delta h\equiv {h}^{pri}-h\left({m}^{\left(0\right)}\right)\end{array}$ (2)

and $\Delta m=m-{m}^{\left(0\right)}$. The solution is then found by iterative application of (1) to (2); that is, by the Gauss-Newton method [3]. Alternatively, a gradient-descent method [18] can be used that employs:

${{\nabla}_{m}\Phi |}_{{m}^{\left(0\right)}}=-2{G}^{\left(0\right)\text{T}}{C}_{d}^{-1}\left({d}^{obs}-g\left({m}^{\left(0\right)}\right)\right)-2{H}^{\left(0\right)\text{T}}{C}_{h}^{-1}\left({h}^{pri}-h\left({m}^{\left(0\right)}\right)\right)$ (3)

The latter approach is preferred for very large *M*, since the convergence rate of gradient descent is independent of the problem dimension [18], whereas the effort required to solve the *M* × *M* system (1) by a direct method scales as *M*^{3} [19].
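A gradient-descent iteration of this kind can be sketched as follows (toy sizes and values of our own; for a guaranteed-stable illustration the step size is set from the largest eigenvalue of $Z$, a computation a truly dimension-free implementation would avoid):

```python
import numpy as np

# Minimize Phi(m) = E + L by steepest descent using the gradient (3), linear case.
rng = np.random.default_rng(0)
N, M = 12, 4
G = rng.standard_normal((N, M))
m_true = rng.standard_normal(M)
d_obs = G @ m_true                 # noise-free toy data
H = np.eye(M); h_pri = np.zeros(M)
Cd_inv = np.eye(N)                 # unit data variance
Ch_inv = 1e-4 * np.eye(M)          # weak prior

Z = G.T @ Cd_inv @ G + H.T @ Ch_inv @ H
step = 1.0 / (2.0 * np.linalg.eigvalsh(Z).max())   # stable step size

m = np.zeros(M)
for _ in range(5000):
    grad = (-2.0 * G.T @ Cd_inv @ (d_obs - G @ m)
            - 2.0 * H.T @ Ch_inv @ (h_pri - H @ m))
    m -= step * grad
```

Because the data are noise-free and the prior is weak, the iterate converges to (essentially) `m_true`; in practice a conjugate-gradient or quasi-Newton scheme would converge in far fewer iterations.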

We now discuss issues related to the covariance matrices that appear in GLS. The data covariance ${C}_{d}$ quantifies the uncertainty of the observations, and the information covariance ${C}_{h}$ quantifies the uncertainty of the prior information. Prior knowledge of the inherent accuracy of the measurement technique is needed to assign ${C}_{d}$, and prior knowledge of the physically-plausible solutions, perhaps stemming from an understanding of the underlying physics, is needed to assign ${C}_{h}$. These assignments are often very subjective, especially when correlations are believed to occur (that is, when ${C}_{d}$ and ${C}_{h}$ have non-zero off-diagonal elements). For example, one geotomographic study [7] reconstructs a two-dimensional field using a ${C}_{h}$ that represents the autocovariance of the field and that depends upon a scale length *q*. The value of *q* is chosen on the basis of broad physical arguments that, while plausible, leave considerable room for subjectivity.

The matrices ${C}_{d}$ and ${C}_{h}$ together contain $\frac{1}{2}N\left(N+1\right)+\frac{1}{2}K\left(K+1\right)$ elements, many more than the $\left(N+K\right)$ constraints imposed by the data $d$ and prior information $h$. Consequently, insufficient information is available to uniquely solve for all the elements of ${C}_{d}$ and ${C}_{h}$. However, it sometimes may be possible to parameterize ${C}_{d}\left(q\right)$ and/or ${C}_{h}\left(q\right)$ in terms of $q\in {\mathbb{R}}^{J}$, and to ask whether an initial estimate of $q$ can be improved. As long as $\left(M+J\right)<\left(N+K\right)$, adequate information may be available to determine a best estimate ${q}^{est}$. We refer to the process of determining ${q}^{est}$ as “tuning”, since in typical practice it requires that the initial covariances already be close to their true values.

As an example of a parametrized covariance, we consider the case where the model parameters represent a sampled version of a continuous function $m\left(x\right)$, where $x\in \mathbb{R}$ is an independent variable; that is, ${m}_{n}=m\left({x}_{n}\right)$, with ${x}_{n}\equiv n\Delta x$ and $\Delta x$ the sampling interval. The prior information that $m\left(x\right)$ is approximately oscillatory with wavenumber *q* can be modeled by:

$H=I\text{\hspace{0.17em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}\text{and}\text{\hspace{0.17em}}\text{\hspace{0.05em}}\text{\hspace{0.05em}}{h}^{pri}=0\text{\hspace{0.17em}}\text{\hspace{0.05em}}\text{and}\text{\hspace{0.17em}}\text{\hspace{0.05em}}{\left[{C}_{h}\right]}_{nm}={\sigma}_{h}^{2}\mathrm{cos}\left(q\left|{x}_{n}-{x}_{m}\right|\right)$ (4)

In this case, ${C}_{h}$ approximates the autocovariance of $m\left(x\right)$, which is assumed to be stationary. The goal of tuning is to provide a best estimate ${q}^{est}$ of the wavenumber, as well as a best estimate ${m}^{est}$ of the model parameters. This problem is further developed in Example 4, below.
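Constructing the covariance in (4) on a uniform grid is straightforward; the grid size, $\Delta x$, ${\sigma}_{h}$ and trial *q* below are illustrative values of our own:

```python
import numpy as np

# Oscillatory prior covariance [Ch]_nm = sigma_h^2 * cos(q |x_n - x_m|), as in (4).
dx = 1.0
x = np.arange(101) * dx            # x_n = n * dx
sigma_h = 10.0
q = 0.15                           # trial wavenumber
Ch = sigma_h**2 * np.cos(q * np.abs(x[:, None] - x[None, :]))
```

`Ch` is symmetric with ${\sigma}_{h}^{2}$ on the diagonal; because $\mathrm{cos}\left(q\tau \right)$ is a valid stationary autocovariance, the sampled matrix is positive semi-definite (in fact of very low rank, so a small diagonal term is often added in practice).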

Although the GLS formulation is widely used in geotomography and geophysical imaging, the tuning of covariance is typically implemented in a very limited fashion, through the use of trade-off curves [7]-[12]. In this procedure, a scalar parameter *q* controls the relative size of ${C}_{d}$ and ${C}_{h}$ ; that is, ${C}_{h}\left(q\right)=q{C}_{h}^{\left(0\right)}$, where ${C}_{h}^{\left(0\right)}$ is specified [20]. The GLS problem is then solved for a suite of values of *q*, the functions $E\left(q\right)$ and $L\left(q\right)$ are tabulated, and the resulting trade-off curve $E\left(L\right)$ is used to identify a solution $m\left({q}_{0}\right)$ that has acceptably low *E* and *L* (for example, Figure 1 of [20]). As we will show below, this ad hoc procedure is not a consistent extension of GLS, because it results in a different *q* than the one implied by Bayes’ principle. A more consistent approach is to apply Bayes’ theorem directly to estimate both the model parameters $m$ and the covariance parameters $q$. Such an approach has been implemented in the context of ordinary least squares [21] and of the Markov chain Monte Carlo (MCMC) inversion method [22] (a computationally-intensive alternative to GLS). An important and novel result of this paper is a computationally-efficient procedure for tuning GLS in a Bayes-consistent manner.

2. Bayesian Extension of GLS

The general process of using Bayes’ theorem to construct a posterior probability density function (p.d.f.) that depends on unknown parameters and of estimating those parameters through the maximization of probability is very well understood [23]. In the current case, the p.d.f. has *M* model parameters and *J* covariance parameters, so the maximization process (implemented, say, with a gradient ascent method) must search an $\left(M+J\right)$ -dimensional space. Our main purpose here is to show that the process can be organized in a way that makes use of the GLS solution (1) and thus reduces the dimensionality of the searched space to *J*.

The GLS solution (1) yields the $m$ that minimizes the generalized error $\Phi \left(m\right)$, or equivalently, the $m$ that maximizes the Normal posterior probability density function (p.d.f.) $p\left(m|{d}^{obs},{h}^{pri}\right)$ :

$\begin{array}{l}{m}^{est}=\underset{m}{\mathrm{arg}\mathrm{max}}p\left(m|{d}^{obs},{h}^{pri}\right)\\ \text{with}\text{\hspace{0.17em}}\text{\hspace{0.05em}}p\left(m|{d}^{obs},{h}^{pri}\right)\propto p\left({d}^{obs}|m\right)p\left({h}^{pri}|m\right)\end{array}$ (5)

Here, Bayes' theorem [23] is used to relate the Normal posterior p.d.f. $p\left(m|{d}^{obs},{h}^{pri}\right)$ to the Normal likelihood $p\left({d}^{obs}|m\right)$ and the Normal prior $p\left({h}^{pri}|m\right)$. When poorly known parameters $q$ are added to the problem, they must be treated as additional random variables [22]. Writing $q\equiv {\left[{q}^{\left(d\right)},{q}^{\left(h\right)}\right]}^{\text{T}}$, with ${q}^{\left(d\right)}$ appearing in the likelihood and ${q}^{\left(h\right)}$ appearing in the prior, we have:

$\begin{array}{l}{m}^{est},{q}^{est}=\underset{m,{q}^{\left(d\right)},{q}^{\left(h\right)}}{\mathrm{arg}\mathrm{max}}p\left(m,q|{d}^{obs},{h}^{pri}\right)\\ \text{with}\text{\hspace{0.17em}}\text{\hspace{0.05em}}p\left(m,q|{d}^{obs},{h}^{pri}\right)\propto p\left({d}^{obs}|m,{q}^{\left(d\right)}\right)p\left({h}^{pri}|m,{q}^{\left(h\right)}\right)p\left({q}^{\left(d\right)}\right)p\left({q}^{\left(h\right)}\right)\end{array}$ (6)

Here, we have assumed that $q$ and $m$ are not correlated with one another. The maximization with respect to the two variables can be performed as a sequence of two single-variable maximizations:

$m\left({q}_{0}\right):\underset{m}{\mathrm{arg}\mathrm{max}}p\left(m,{q}_{0}|{d}^{obs},{h}^{pri}\right)\text{\hspace{0.17em}}\left(\text{at fixed}\text{\hspace{0.17em}}{q}_{0}\right)$ (7a)

${q}^{est}:\underset{{q}_{0}}{\mathrm{arg}\mathrm{max}}p\left(m\left({q}_{0}\right),{q}_{0}|{d}^{obs},{h}^{pri}\right)$ (7b)

${m}^{est}=m\left({q}^{est}\right)$ (7c)

In the special case of the uniform prior $p\left({q}^{\left(d\right)}\right)p\left({q}^{\left(h\right)}\right)\propto \text{constant}$, the maximization in (7a) is the GLS solution at fixed ${q}_{0}$. For the Normal p.d.f.:

$\begin{array}{l}p\left(m\left({q}_{0}\right),{q}_{0}|{d}^{obs},{h}^{pri}\right)\\ ={\left(2\pi \right)}^{-\frac{1}{2}\left(N+K\right)}{\left(\mathrm{det}{C}_{d}\right)}^{-\frac{1}{2}}{\left(\mathrm{det}{C}_{h}\right)}^{-\frac{1}{2}}\mathrm{exp}\left(-\frac{1}{2}E\right)\mathrm{exp}\left(-\frac{1}{2}L\right)\end{array}$ (8)

the maximization (7b) is equivalent to the minimization of an objective function $\Psi \left(q\right)$, defined as:

$\Psi \equiv -2\left[\mathrm{ln}p+\frac{1}{2}\left(N+K\right)\mathrm{ln}\left(2\pi \right)\right]=\mathrm{ln}\left(\mathrm{det}{C}_{d}\right)+\mathrm{ln}\left(\mathrm{det}{C}_{h}\right)+E+L$ (9)

The quantity $\mathrm{ln}\left(\mathrm{det}{C}_{d}\right)$ is best computed by finding the Cholesky decomposition ${C}_{d}=D{D}^{\text{T}}$, the algorithm [24] for which is implemented in many software environments, including MATLAB® and PYTHON/linalg. Then, $\mathrm{ln}\left(\mathrm{det}{C}_{d}\right)=2{\displaystyle {\sum}_{n}\mathrm{ln}\left({D}_{nn}\right)}$ (and similarly for $\mathrm{ln}\left(\mathrm{det}{C}_{h}\right)$ ). The nonlinear optimization problem of minimizing $\Psi \left(q\right)$ can be implemented using a gradient descent method, provided that the derivative $\partial \text{\hspace{0.05em}}\Psi /\partial {q}_{m}$ can be calculated [18]. In the next section, we derive analytic formulas for this and related derivatives.
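A minimal helper along these lines (the function name is ours):

```python
import numpy as np

def logdet_spd(C):
    """ln(det C) for a symmetric positive-definite C via Cholesky C = D D^T."""
    D = np.linalg.cholesky(C)
    return 2.0 * np.sum(np.log(np.diag(D)))

# Example: for C = diag(1, 2, 4), ln(det C) = ln 8.
C = np.diag([1.0, 2.0, 4.0])
val = logdet_spd(C)
```

Working with the log of the Cholesky diagonal avoids the overflow/underflow that forming $\mathrm{det}{C}_{d}$ directly would incur for large *N*.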

3. Solution Method and Formula for Derivatives

The process of simultaneously estimating the covariance parameters ${q}^{est}$ and model parameters ${m}^{est}$ consists of six steps. First, the analytic forms of the covariance matrices ${C}_{d}\left(q\right)$ and ${C}_{h}\left(q\right)$ are specified, and their derivatives $\partial {C}_{d}/\partial {q}_{m}$ and $\partial {C}_{h}/\partial {q}_{m}$ are computed analytically. Second, an initial estimate ${q}^{\left(0\right)}$ is identified. Third, the covariance matrices ${C}_{d}\left({q}^{\left(0\right)}\right)$ and ${C}_{h}\left({q}^{\left(0\right)}\right)$ are inserted into (1), yielding model parameters $m\left({q}^{\left(0\right)}\right)$. Fourth, using formulas developed below, the value of the derivative $\partial \text{\hspace{0.05em}}\Psi /\partial {q}_{m}$ is calculated at ${q}^{\left(0\right)}$. Fifth, a gradient descent method employing $\partial \text{\hspace{0.05em}}\Psi /\partial {q}_{m}$ is used to iteratively perturb ${q}^{\left(0\right)}$ towards the minimum of $\text{\Psi}$ at ${q}^{est}$ (repeating steps three through five in the process). Sixth, the estimated model parameters are computed as ${m}^{est}=m\left({q}^{est}\right)$. This process is depicted in Figure 1.

Our derivation of $\partial \text{\hspace{0.05em}}\Psi /\partial {q}_{m}$ uses three matrix derivatives, $\partial {M}^{-1}/\partial q$, $\partial {M}^{-1/2}/\partial q$ and $\partial \mathrm{ln}\left(\mathrm{det}M\right)/\partial q$, that may be unfamiliar to some readers, so we derive them here for completeness. Let $M\left(q\right)$ be a square, invertible, differentiable matrix. Differentiating ${M}^{-1}M=I$ yields $\left[\partial {M}^{-1}/\partial {q}_{m}\right]M+{M}^{-1}\left[\partial M/\partial {q}_{m}\right]=0$, which can be rearranged into ( [25], their (36)):

$\frac{\partial {M}^{-1}}{\partial {q}_{m}}=-{M}^{-1}\left[\frac{\partial M}{\partial {q}_{m}}\right]{M}^{-1}$ (10)
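Relation (10) is easy to verify numerically on an assumed one-parameter family $M\left(q\right)=A+qB$ (the matrices below are our own toy values):

```python
import numpy as np

# Verify d(M^-1)/dq = -M^-1 (dM/dq) M^-1 by central finite differences.
A = np.array([[2.0, 0.5],
              [0.5, 3.0]])
B = np.array([[0.1, 0.0],
              [0.0, 0.4]])          # dM/dq
q, h = 1.0, 1e-6
Minv = lambda t: np.linalg.inv(A + t * B)
analytic = -Minv(q) @ B @ Minv(q)
numeric = (Minv(q + h) - Minv(q - h)) / (2.0 * h)
```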

Figure 1. Schematic depiction of solution process. (a) The GLS solution ${m}^{est}$ (red curve) is considered a function of the covariance parameters $q$ and its derivative $\partial {m}^{est}/\partial {q}_{n}$ (blue line) at a point ${q}^{\left(0\right)}$ is computed by analytic differentiation of GLS equation (1); (b) The objective function Ψ (colors) is considered a function of $q$. The results of (a) are used to compute its gradient ${\nabla}_{q}\Psi $ at the point ${q}^{\left(0\right)}$. The gradient descent method is used to iteratively perturb this point anti-parallel to the gradient until it reaches the minimum ${\Psi}^{\mathrm{min}}$ of the objective function, resulting in the best-estimate ${q}^{est}$. This value is then used to determine a best-estimate of the model parameters ${m}^{est}$, as depicted in (a).

Similarly, differentiating ${M}^{-1/2}{M}^{-1/2}={M}^{-1}$ and applying (10) yields the Sylvester equation:

$\frac{\partial {M}^{-1/2}}{\partial {q}_{m}}{M}^{-1/2}+{M}^{-1/2}\frac{\partial {M}^{-1/2}}{\partial {q}_{m}}=\frac{\partial {M}^{-1}}{\partial {q}_{m}}=-{M}^{-1}\left[\frac{\partial M}{\partial {q}_{m}}\right]{M}^{-1}$ (11)

We have not been able to determine a source for this equation, but in all likelihood, it has been derived previously. In practice, (11) is not significantly harder to compute than (10), because efficient algorithms for solving Sylvester equations [26] and for computing a symmetric (principal) square root [27], are widely available and implemented in many software environments, including MATLAB® and PYTHON/linalg. The derivative of $\mathrm{ln}\left(\mathrm{det}{C}_{d}\right)$ is derived starting with Jacobi’s formula [12]:
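A sketch of solving (11) with `scipy.linalg.solve_sylvester` (which solves $AX+XB=Q$) and `scipy.linalg.sqrtm`, again for an assumed family $M\left(q\right)=A+qB$ with toy values of our own:

```python
import numpy as np
from scipy.linalg import solve_sylvester, sqrtm

# d(M^-1/2)/dq from the Sylvester equation (11), for M(q) = A + q*B.
A = np.array([[2.0, 0.3],
              [0.3, 1.5]])
B = np.array([[0.5, 0.1],
              [0.1, 0.2]])                    # dM/dq
q = 0.7
Mq = A + q * B
Minv = np.linalg.inv(Mq)
Minv_half = np.linalg.inv(sqrtm(Mq)).real     # principal M^{-1/2}
rhs = -Minv @ B @ Minv                        # d(M^-1)/dq, from (10)
# Solve X M^{-1/2} + M^{-1/2} X = rhs for X = d(M^{-1/2})/dq:
X = solve_sylvester(Minv_half, Minv_half, rhs)
```

A central finite difference of ${M}^{-1/2}\left(q\right)$ reproduces `X` to high accuracy.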

$\frac{\partial \mathrm{det}\left(M\right)}{\partial q}=\text{tr}\left(\text{adj}\left(M\right)\frac{\partial M}{\partial q}\right)=\text{tr}\left(\mathrm{det}\left(M\right){M}^{-1}\frac{\partial M}{\partial q}\right)=\mathrm{det}\left(M\right)\text{tr}\left({M}^{-1}\frac{\partial M}{\partial q}\right)$ (12)

where $\text{adj}(.)$ is the adjugate and $\text{tr}(.)$ is the trace, applying Laplace’s identity [28] $\text{adj}\left(M\right)=\mathrm{det}\left(M\right){M}^{-1}$ and the rule $\text{tr}\left(cM\right)=c\text{tr}\left(M\right)$ (where *c* is a scalar and $M$ is a matrix) [29]. Finally, the determinant is moved to the left-hand side and the well-known relationship $\partial \mathrm{ln}\left(f\right)/\partial q={f}^{-1}\left(\partial f/\partial q\right)$, for a differentiable function $f\left(q\right)$, is applied, yielding ( [25], their (38)):

$\frac{\partial \mathrm{ln}\left(\mathrm{det}M\right)}{\partial q}=\frac{1}{\mathrm{det}\left(M\right)}\frac{\partial \mathrm{det}\left(M\right)}{\partial q}=\text{tr}\left({M}^{-1}\frac{\partial M}{\partial q}\right)$ (13)
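Equation (13) can likewise be checked numerically on an assumed $M\left(q\right)=A+qB$ with toy values of our own:

```python
import numpy as np

# Check d ln(det M)/dq = tr(M^-1 dM/dq) by central finite differences.
A = np.diag([3.0, 2.0, 1.0])
B = np.array([[0.2, 0.1, 0.0],
              [0.1, 0.3, 0.1],
              [0.0, 0.1, 0.4]])         # dM/dq
q, h = 0.5, 1e-6
Mq = A + q * B
analytic = np.trace(np.linalg.inv(Mq) @ B)
ldet = lambda t: np.linalg.slogdet(A + t * B)[1]
numeric = (ldet(q + h) - ldet(q - h)) / (2.0 * h)
```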

We begin the main derivation by considering the case in which the data covariance ${C}_{d}\left(q\right)$ depends on a parameter vector $q$, and the information covariance ${C}_{h}$ is constant. The derivative of the GLS solution can be found by applying the chain rule to (1):

$\begin{array}{l}\frac{\partial {m}^{est}}{\partial {q}_{m}}=\frac{\partial {Z}^{-1}}{\partial {q}_{m}}{G}^{\text{T}}{C}_{d}^{-1}{d}^{obs}+{Z}^{-1}{G}^{\text{T}}\frac{\partial {C}_{d}^{-1}}{\partial {q}_{m}}{d}^{obs}+\frac{\partial {Z}^{-1}}{\partial {q}_{m}}{H}^{\text{T}}{C}_{h}^{-1}{h}^{pri}\\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.05em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}={Z}^{-1}\left({G}^{\text{T}}\frac{\partial {C}_{d}^{-1}}{\partial {q}_{m}}{d}^{obs}-\frac{\partial Z}{\partial {q}_{m}}{m}^{est}\right)\\ \text{with}\text{\hspace{0.17em}}\frac{\partial Z}{\partial {q}_{m}}={G}^{\text{T}}\frac{\partial {C}_{d}^{-1}}{\partial {q}_{m}}G\text{\hspace{0.17em}}\text{\hspace{0.05em}}\text{and}\text{\hspace{0.17em}}\text{\hspace{0.05em}}\frac{\partial {C}_{d}^{-1}}{\partial {q}_{m}}=-{C}_{d}^{-1}\frac{\partial {C}_{d}}{\partial {q}_{m}}{C}_{d}^{-1}\end{array}$ (14)

Note that we have used (10). The derivatives of the normalized prediction error $\stackrel{\u02dc}{e}\equiv {C}_{d}^{-1/2}\left({d}^{obs}-G{m}^{est}\right)$ and of the total error $E\equiv {\stackrel{\u02dc}{e}}^{\text{T}}\stackrel{\u02dc}{e}$ are:

$\begin{array}{l}\frac{\partial \stackrel{\u02dc}{e}}{\partial {q}_{m}}=-{C}_{d}^{-1/2}G\frac{\partial {m}^{est}}{\partial {q}_{m}}+\frac{\partial {C}_{d}^{-1/2}}{\partial {q}_{m}}\left({d}^{obs}-G{m}^{est}\right)\text{\hspace{0.17em}}\text{\hspace{0.05em}}\text{and}\text{\hspace{0.17em}}\text{\hspace{0.05em}}\frac{\partial E}{\partial {q}_{m}}=2{\stackrel{\u02dc}{e}}^{\text{T}}\frac{\partial \stackrel{\u02dc}{e}}{\partial {q}_{m}}\\ \text{with}\text{\hspace{0.17em}}\text{\hspace{0.05em}}\frac{\partial {C}_{d}^{-1/2}}{\partial {q}_{m}}{C}_{d}^{-1/2}+{C}_{d}^{-1/2}\frac{\partial {C}_{d}^{-1/2}}{\partial {q}_{m}}=-{C}_{d}^{-1}\frac{\partial {C}_{d}}{\partial {q}_{m}}{C}_{d}^{-1}\end{array}$ (15)

Here, the Sylvester equation arises from (11). An alternate way of differentiating *E* that does not require solving a Sylvester equation is:

$\frac{\partial E}{\partial {q}_{m}}=\frac{\partial}{\partial {q}_{m}}\left({e}^{\text{T}}{C}_{d}^{-1}e\right)=-{\left(\frac{\partial {m}^{est}}{\partial {q}_{m}}\right)}^{\text{T}}{G}^{\text{T}}{C}_{d}^{-1}e+{e}^{\text{T}}\frac{\partial {C}_{d}^{-1}}{\partial {q}_{m}}e-{e}^{\text{T}}{C}_{d}^{-1}G\frac{\partial {m}^{est}}{\partial {q}_{m}}$ (16)

The derivatives of the normalized prior information error $\stackrel{\u02dc}{\mathcal{l}}={C}_{h}^{-1/2}\left(h-H{m}^{est}\right)$ and of the total error $L\equiv {\stackrel{\u02dc}{\mathcal{l}}}^{\text{T}}\stackrel{\u02dc}{\mathcal{l}}$ are:

$\frac{\partial \stackrel{\u02dc}{\mathcal{l}}}{\partial {q}_{m}}=-{C}_{h}^{-1/2}H\frac{\partial {m}^{est}}{\partial {q}_{m}}\text{\hspace{0.17em}}\text{\hspace{0.05em}}\text{and}\text{\hspace{0.17em}}\text{\hspace{0.05em}}\frac{\partial L}{\partial {q}_{m}}=2{\stackrel{\u02dc}{\mathcal{l}}}^{\text{T}}\frac{\partial \stackrel{\u02dc}{\mathcal{l}}}{\partial {q}_{m}}$ (17)

Finally, since $\Psi =\mathrm{ln}\left(\mathrm{det}{C}_{d}\right)+\mathrm{ln}\left(\mathrm{det}{C}_{h}\right)+E+L$, we have:

$\frac{\partial \Psi}{\partial {q}_{m}}=\frac{\partial \mathrm{ln}\left(\mathrm{det}{C}_{d}\right)}{\partial {q}_{m}}+\frac{\partial E}{\partial {q}_{m}}+\frac{\partial L}{\partial {q}_{m}}=\text{tr}\left({C}_{d}^{-1}\frac{\partial {C}_{d}}{\partial {q}_{m}}\right)+\frac{\partial E}{\partial {q}_{m}}+\frac{\partial L}{\partial {q}_{m}}$ (18)

Note that we have applied (13).

Finally, we consider the case in which the information covariance ${C}_{h}\left(q\right)$ depends on parameters $q$, and ${C}_{d}$ is constant. Since the data and prior information play completely symmetric roles in (1), the derivatives can be obtained by interchanging the roles of ${C}_{d}$ and ${C}_{h}$, $G$ and $H$, ${d}^{obs}$ and ${h}^{pri}$, $\stackrel{\u02dc}{e}$ and $\stackrel{\u02dc}{\mathcal{l}}$, and *E* and *L*, in the equations above, yielding:

$\begin{array}{l}\frac{\partial {m}^{est}}{\partial {q}_{m}}={Z}^{-1}\left({H}^{\text{T}}\frac{\partial {C}_{h}^{-1}}{\partial {q}_{m}}{h}^{pri}-\frac{\partial Z}{\partial {q}_{m}}{m}^{est}\right)\\ \text{with}\text{\hspace{0.17em}}\text{\hspace{0.05em}}\frac{\partial Z}{\partial {q}_{m}}={H}^{\text{T}}\frac{\partial {C}_{h}^{-1}}{\partial {q}_{m}}H\text{\hspace{0.17em}}\text{\hspace{0.05em}}\text{and}\text{\hspace{0.17em}}\text{\hspace{0.05em}}\frac{\partial {C}_{h}^{-1}}{\partial {q}_{m}}=-{C}_{h}^{-1}\frac{\partial {C}_{h}}{\partial {q}_{m}}{C}_{h}^{-1}\end{array}$

$\frac{\partial \stackrel{\u02dc}{e}}{\partial {q}_{m}}=-{C}_{d}^{-1/2}G\frac{\partial {m}^{est}}{\partial {q}_{m}}\text{\hspace{0.17em}}\text{\hspace{0.05em}}\text{and}\text{\hspace{0.17em}}\text{\hspace{0.05em}}\frac{\partial E}{\partial {q}_{m}}=2{\stackrel{\u02dc}{e}}^{\text{T}}\frac{\partial \stackrel{\u02dc}{e}}{\partial {q}_{m}}$

$\frac{\partial \stackrel{\u02dc}{\mathcal{l}}}{\partial {q}_{m}}=-{C}_{h}^{-1/2}H\frac{\partial {m}^{est}}{\partial {q}_{m}}+\frac{\partial {C}_{h}^{-1/2}}{\partial {q}_{m}}\left({h}^{pri}-H{m}^{est}\right)$

$\frac{\partial L}{\partial {q}_{m}}=2{\stackrel{\u02dc}{\mathcal{l}}}^{\text{T}}\frac{\partial \stackrel{\u02dc}{\mathcal{l}}}{\partial {q}_{m}}=-{\left(\frac{\partial {m}^{est}}{\partial {q}_{m}}\right)}^{\text{T}}{H}^{\text{T}}{C}_{h}^{-1}\mathcal{l}+{\mathcal{l}}^{\text{T}}\frac{\partial {C}_{h}^{-1}}{\partial {q}_{m}}\mathcal{l}-{\mathcal{l}}^{\text{T}}{C}_{h}^{-1}H\frac{\partial {m}^{est}}{\partial {q}_{m}}$

$\frac{\partial {C}_{h}^{-1/2}}{\partial {q}_{m}}{C}_{h}^{-1/2}+{C}_{h}^{-1/2}\frac{\partial {C}_{h}^{-1/2}}{\partial {q}_{m}}=-{C}_{h}^{-1}\frac{\partial {C}_{h}}{\partial {q}_{m}}{C}_{h}^{-1}$

$\frac{\partial \mathrm{ln}\left(\mathrm{det}{C}_{h}\right)}{\partial {q}_{m}}=\text{tr}\left({C}_{h}^{-1}\frac{\partial {C}_{h}}{\partial {q}_{m}}\right)$

$\frac{\partial \Psi}{\partial {q}_{m}}=\text{tr}\left({C}_{h}^{-1}\frac{\partial {C}_{h}}{\partial {q}_{m}}\right)+\frac{\partial E}{\partial {q}_{m}}+\frac{\partial L}{\partial {q}_{m}}$ (19)

These formulas have been checked numerically.
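Such a check can be sketched as follows for the case in which only ${C}_{d}$ depends on *q* (eqs. (14)-(18)); the problem sizes, the diagonal parameterization of ${C}_{d}\left(q\right)$, and the random seed are all assumptions of our own:

```python
import numpy as np

# Finite-difference check of dPsi/dq, eqs. (14)-(18), with Cd(q) diagonal, Ch fixed.
rng = np.random.default_rng(1)
N, M = 8, 2
G = rng.standard_normal((N, M))
d = rng.standard_normal(N)
H = np.eye(M); h = np.zeros(M)
Ch = 4.0 * np.eye(M)
Ch_inv = np.linalg.inv(Ch)
w = np.linspace(0.0, 1.0, N)                            # dCd/dq = diag(w)

def solve(q):
    Cd_inv = np.diag(1.0 / (1.0 + q * w))               # Cd = diag(1 + q*w)
    Z = G.T @ Cd_inv @ G + H.T @ Ch_inv @ H
    m = np.linalg.solve(Z, G.T @ Cd_inv @ d + H.T @ Ch_inv @ h)
    return Cd_inv, Z, m

def psi(q):                                             # objective (9)
    Cd_inv, _, m = solve(q)
    e = d - G @ m; l = h - H @ m
    return (-np.linalg.slogdet(Cd_inv)[1] + np.linalg.slogdet(Ch)[1]
            + e @ Cd_inv @ e + l @ Ch_inv @ l)

def dpsi(q):                                            # analytic gradient (18)
    Cd_inv, Z, m = solve(q)
    dCd = np.diag(w)
    dCd_inv = -Cd_inv @ dCd @ Cd_inv                    # (10)
    dZ = G.T @ dCd_inv @ G
    dm = np.linalg.solve(Z, G.T @ dCd_inv @ d - dZ @ m)  # (14)
    e = d - G @ m; l = h - H @ m
    dE = -dm @ G.T @ Cd_inv @ e + e @ dCd_inv @ e - e @ Cd_inv @ G @ dm  # (16)
    dL = -2.0 * (l @ Ch_inv @ (H @ dm))                 # (17), Ch constant
    return np.trace(Cd_inv @ dCd) + dE + dL             # (18)
```

The analytic gradient agrees with a central finite difference of `psi` to high accuracy.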

4. Examples with Discussion

In the first example, we examine the simplistic case in which the parameter *q* represents an overall scaling of the variances; that is, ${C}_{d}\left(q\right)=q{C}_{d}^{\left(0\right)}$ and ${C}_{h}\left(q\right)=q{C}_{h}^{\left(0\right)}$, with specified ${C}_{d}^{\left(0\right)}$ and ${C}_{h}^{\left(0\right)}$. The solution ${m}^{est}$ is independent of *q*, as can be verified by substitution into (1). The parameter *q* can then be found by direct minimization of (9), which simplifies to:

$\Psi =\mathrm{ln}\left({q}^{N}\mathrm{det}{C}_{d}^{\left(0\right)}\right)+\mathrm{ln}\left({q}^{K}\mathrm{det}{C}_{h}^{\left(0\right)}\right)+{q}^{-1}{E}_{0}+{q}^{-1}{L}_{0}$ (20)

Here, we have used the rule $\mathrm{det}\left(qM\right)={q}^{N}\mathrm{det}\left(M\right)$ [25], valid for any $N\times N$ matrix $M$, and have defined ${E}_{0}\equiv E\left(q=1\right)$ and ${L}_{0}\equiv L\left(q=1\right)$. The minimum occurs when:

$\frac{\partial \Psi}{\partial q}=0=\left(N+K\right){q}^{-1}-\left({E}_{0}+{L}_{0}\right){q}^{-2}\text{\hspace{0.17em}}\text{\hspace{0.05em}}\text{or}\text{\hspace{0.17em}}\text{\hspace{0.05em}}q=\frac{{E}_{0}+{L}_{0}}{N+K}$ (21)

This is a generalization of the well-known maximum likelihood estimate of the sample variance [30]. As long as $\left({E}_{0}+{L}_{0}\right)$ exists, the minimization in (21) is well-behaved and the overall scaling *q* is uniquely determined.
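The stationary point (21) can be checked against a brute-force scan of the *q*-dependent terms of (20); the values of *N*, *K*, ${E}_{0}$ and ${L}_{0}$ below are arbitrary assumptions of our own:

```python
import numpy as np

# q-dependent part of (20): (N+K) ln q + (E0+L0)/q, minimized at q = (E0+L0)/(N+K).
N, K = 10, 5
E0, L0 = 8.0, 4.0
qs = np.linspace(0.1, 3.0, 29001)
psi_q = (N + K) * np.log(qs) + (E0 + L0) / qs
q_scan = qs[np.argmin(psi_q)]
q_formula = (E0 + L0) / (N + K)
```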

In the second example, we examine another simplistic case, in which a parameter *q* represents the relative weighting of the variances; that is, ${C}_{d}^{-1}\left(q\right)=qI$ and ${C}_{h}^{-1}\left(q\right)=\left(1-q\right)I$. We consider the problem of estimating the mean ${m}_{1}$ of data given observations $d=1$ and prior information $h=0$ (where $0$ and $1$ are vectors of zeros and ones, respectively), when $N=K$, $M=1$ and $G=H=1$. Applying (1), we find that ${m}^{est}=q$. Then, the *q*-dependent part of the objective function is $\Psi =-\mathrm{ln}\left({q}^{N}\right)-\mathrm{ln}\left({\left(1-q\right)}^{N}\right)+Nq\left(1-q\right)$ (note that $\mathrm{ln}\left(\mathrm{det}{C}_{d}\right)=-N\mathrm{ln}\left(q\right)$, since ${C}_{d}={q}^{-1}I$ ) and its derivative is $\partial \Psi /\partial q=N\left[-{q}^{-1}+{\left(1-q\right)}^{-1}+\left(1-q\right)-q\right]$. The solution to $\partial \Psi /\partial q=0$ is ${q}^{est}=1/2$, as can be verified by direct substitution. Thus, the solution splits the difference between the observations and the prior values, and yields prior variances ${C}_{d}$ and ${C}_{h}$ that are equal. While simplistic, this problem illustrates that, at least in some cases, GLS is capable of uniquely determining the relative sizes of ${C}_{d}$ and ${C}_{h}$. Because trade-off curves, as defined in the Introduction, are based on the behavior of *E* and *L*, and not on the complete objective function Ψ, the weighting parameter ${q}_{0}$ estimated from them will in general be different from ${q}^{est}$. Consequently, the trade-off curve procedure is not consistent with the Bayesian framework upon which GLS rests.
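The root of the derivative in this second example can be confirmed by bisection; since the common factor *N* cancels, we work with the bracketed factor of $\partial \Psi /\partial q$:

```python
# Bracketed factor of dPsi/dq from example 2: f(q) = -1/q + 1/(1-q) + (1-q) - q.
def f(q):
    return -1.0 / q + 1.0 / (1.0 - q) + (1.0 - q) - q

# f is strictly increasing on (0, 1), so bisection on [0.1, 0.9] finds the unique root.
lo, hi = 0.1, 0.9
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
q_root = 0.5 * (lo + hi)
```

Monotonicity follows from ${f}^{\prime}\left(q\right)={q}^{-2}+{\left(1-q\right)}^{-2}-2\ge 6>0$, so the root $q=1/2$ is unique on the open interval.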

Our third example demonstrates the tuning of the data covariance ${C}_{d}\left(q\right)$. In many cases, observational error increases during the course of an experiment, due to degradation of equipment or to worsening environmental conditions. The example demonstrates that the method is capable of accurately quantifying the fractional rate of increase of the variance ${\sigma}_{{d}_{n}}^{2}$, which is assumed to vary with position ${x}_{n}$. In our simulation, we consider $N=201$ synthetic data, evenly-spaced on the interval $0\le {x}_{i}\le 1$, which scatter around the curve ${d}_{i}={m}_{1}+{m}_{2}{x}_{i}^{1/2}$ (Figure 2). The covariance of the data is modeled as ${\left[{C}_{d}\right]}_{mn}={\sigma}_{{d}_{n}}^{2}{\delta}_{mn}$, where ${\sigma}_{{d}_{n}}^{2}={\left(1\right)}^{2}\left(1+q\left(2{x}_{n}-1\right)\right)$ and ${\delta}_{mn}$ is the Kronecker delta; that is, the data are uncorrelated and their variance increases linearly with *x*. The derivative of the covariance is $\left(\partial /\partial q\right){\left[{C}_{d}\right]}_{mn}={\left(1\right)}^{2}\left(2{x}_{n}-1\right){\delta}_{mn}$. We have included prior information with $H=I$ and ${h}^{pri}=0$, which implements the notion that the model parameters are small. The corresponding covariance is chosen to be large, ${C}_{h}={\left(1000\right)}^{2}I$, indicating that this information is weak. The goal is to tune the rate of increase of the variance and to arrive at a best estimate of the two model parameters. The starting value is taken to be ${q}_{0}=0$, which corresponds to uniform variance. It is successively improved by a gradient descent method that minimizes Ψ, yielding an estimated value ${q}^{est}\approx 0.709$. This estimate differs from the true value ${q}^{true}=0.700$ by about 1%. The estimated solution ${m}^{est}$ differs from $m\left(q=0\right)$ by a few tenths of a percent, which may be significant in some applications.
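A compact re-implementation sketch of this example (our own Python version, with an assumed random seed, and with a bounded scalar minimizer standing in for the paper's gradient descent):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Example 3 sketch: variance grows linearly in x; tune q by minimizing Psi(q).
rng = np.random.default_rng(2)
N = 201
x = np.linspace(0.0, 1.0, N)
m_true = np.array([1.0, 2.0])
q_true = 0.7
var = 1.0 + q_true * (2.0 * x - 1.0)            # sigma_dn^2
d = m_true[0] + m_true[1] * np.sqrt(x) + rng.standard_normal(N) * np.sqrt(var)
G = np.column_stack([np.ones(N), np.sqrt(x)])
H = np.eye(2); h = np.zeros(2)
Ch_inv = (1.0 / 1000.0**2) * np.eye(2)          # weak prior, Ch = 1000^2 I

def psi(q):
    s2 = 1.0 + q * (2.0 * x - 1.0)
    if np.any(s2 <= 0.0):
        return np.inf                            # covariance must stay positive
    Cd_inv = np.diag(1.0 / s2)
    Z = G.T @ Cd_inv @ G + H.T @ Ch_inv @ H
    m = np.linalg.solve(Z, G.T @ Cd_inv @ d)     # h = 0 drops its term in (1)
    e = d - G @ m
    # ln det Cd + E + L; ln det Ch is constant in q and omitted
    return np.sum(np.log(s2)) + e @ Cd_inv @ e + m @ Ch_inv @ m

res = minimize_scalar(psi, bounds=(0.0, 0.99), method="bounded")
q_est = res.x
```

With a different seed the recovered `q_est` will differ somewhat from ours, but it reliably improves on the starting value ${q}_{0}=0$.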

Figure 2. Example of tuning ${C}_{d}\left(q\right)$. (a) Plot of synthetic data (red dots) and predicted data (green curve); (b) The starting value ${q}_{0}=0$ corresponds to uniform variance (black curve). The estimate ${q}^{est}$ corresponds to increasing variance (green curve); (c) Objective function $\Psi \left(q\right)$ (black curve). The starting value ${q}_{0}$ (black circle) is successively improved (red circles) by a gradient descent method, yielding an estimate ${q}^{est}$ (green circle); (d) The gradient $\partial \Psi /\partial q$, computed using the formulas developed in the text; (e) The first model parameter ${m}_{1}\left(q\right)$, highlighting the initial value (black circle) and estimated value (green circle); (f) Same as (e), except for the second model parameter ${m}_{2}\left(q\right)$.

The fourth example demonstrates tuning of the information covariance ${C}_{h}\left(q\right)$. In many instances, one may need to “reconstruct” or “interpolate” a function on the basis of unevenly and sparsely sampled data. In this case, prior information on the autocovariance of the function can enable a smooth interpolation. Furthermore, it can enforce a covariance structure that may be required, say, by the underlying physics of the problem. In our example, we suppose that the function is known on physical grounds to be oscillatory, but that the wavenumber of those oscillations is known only imprecisely. The goal is to tune prior knowledge of the wavenumber to arrive at a best estimate of the reconstructed function. In our simulation, a total of $M=101$ model parameters ${m}_{j}$ are uniformly spaced on the interval $0\le x\le 100$, representing a sampled version of a continuous, sinusoidal function $m\left(x\right)$ with wavenumber ${q}^{true}=0.1571$ (Figure 3). Synthetic data ${d}_{i}^{obs}$, with uncorrelated error of variance ${\sigma }_{d}^{2}={\left(0.01\right)}^{2}$, are available at $N=40$ randomly-chosen points ${x}_{j\left(i\right)}$, where the index function $j\left(i\right)$ matches each observation to the model parameter at the same position *x*. The data kernel is ${G}_{ij}={\delta }_{i,j\left(i\right)}$. The prior information is given in (4), with autocovariance ${\left[{C}_{h}\right]}_{nm}={\sigma }_{h}^{2}\mathrm{cos}\left(q\left|{x}_{n}-{x}_{m}\right|\right)$ and ${\sigma }_{h}^{2}={\left(10\right)}^{2}$. The derivative is $\left(\partial /\partial q\right){\left[{C}_{h}\right]}_{nm}=-{\sigma }_{h}^{2}\left|{x}_{n}-{x}_{m}\right|\mathrm{sin}\left(q\left|{x}_{n}-{x}_{m}\right|\right)$. An initial guess ${q}_{0}=0.95{q}^{true}$ is improved using a gradient descent method, yielding an estimated value of ${q}^{est}=0.1571$ that differs from ${q}^{true}$ by less than 0.01%. The reconstructed function is smooth and sinusoidal, and the fit to the data is much improved.
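The setup of this example can be sketched compactly. The following is a minimal Python/NumPy illustration (the original examples were implemented in MATLAB); the random seed, the unit amplitude of the synthetic sinusoid, and the omission of the tuning loop itself are simplifications for illustration:

```python
import numpy as np

M, N = 101, 40                     # model parameters and observations
x = np.linspace(0.0, 100.0, M)     # uniform grid on 0 <= x <= 100
sigma_d, sigma_h = 0.01, 10.0
q_true = 0.1571                    # true wavenumber

rng = np.random.default_rng(1)     # illustrative seed
j = np.sort(rng.choice(M, size=N, replace=False))  # index function j(i)

# Data kernel G_ij = delta_{i, j(i)}: each row samples one model parameter
G = np.zeros((N, M))
G[np.arange(N), j] = 1.0

# Oscillatory prior covariance [C_h]_nm = sigma_h^2 cos(q |x_n - x_m|)
# and its analytic derivative with respect to q
def C_h(q):
    D = np.abs(x[:, None] - x[None, :])
    return sigma_h**2 * np.cos(q * D)

def dC_h_dq(q):
    D = np.abs(x[:, None] - x[None, :])
    return -sigma_h**2 * D * np.sin(q * D)

# Synthetic data: a sinusoid (unit amplitude assumed here) sampled at the
# chosen points, plus uncorrelated noise of variance sigma_d^2
m_true = np.sin(q_true * x)
d_obs = G @ m_true + sigma_d * rng.standard_normal(N)
```

Checking the analytic derivative $\partial {C}_{h}/\partial q$ against a finite difference before using it in the descent loop is a cheap safeguard against sign and scaling errors.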

Examples 3 and 4 were implemented in MATLAB® and executed in under 5 s on a notebook computer. They confirm the flexibility, speed and effectiveness of the method. An ability to tune prior information on autocovariance may be of special utility in seismic exploration applications, where three-dimensional waveform datasets are routinely interpolated.

A limitation of this overall “parametric” approach is that the solution depends on the choice of parameterization, which must be guided by prior knowledge of the general properties of the covariance matrices in the particular problem being solved. In Example 3, we were able to recognize (say, by visually examining the data plotted in Figure 2(a)) that the observational error increases with *x*, and chose a parameterization ${\left[{C}_{d}\right]}_{nm}={\sigma }_{d}^{2}\left(1+q\left({x}_{n}-{x}_{1}\right)\right){\delta }_{nm}$ that matched this scenario. If, instead, the degree of correlation between successive data increased with *x*, this pattern might be less expected, more difficult to detect, and require a different parameterization, say ${\left[{C}_{d}\left(q\right)\right]}_{nm}={\sigma }_{d}^{2}\mathrm{exp}\left[-\frac{1}{2}q\left({x}_{n}+{x}_{m}\right)\left|{x}_{n}-{x}_{m}\right|\right]$.

Figure 3. Example of tuning ${C}_{h}\left(q\right)$. Sparsely-sampled synthetic data ${d}_{i}^{obs}$ (red dots) are oscillatory. (a) A regularly-sampled version ${m}_{j}^{est}$ is created by imposing the oscillatory covariance ${\left[{C}_{h}\right]}_{nm}={\sigma }_{h}^{2}\mathrm{cos}\left(q\left|{x}_{n}-{x}_{m}\right|\right)$. With the starting value ${q}_{0}=0.9500{q}^{true}$, the reconstruction poorly fits the data (black curve). Tuning leads to a better fit (green curve with dots), as well as a precise estimate of wavenumber ${q}^{est}\approx 0.9999{q}^{true}$; (b) Decrease in ${\Psi }_{n}$ with iteration number during the gradient descent process.
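Either parameterization, together with the derivative with respect to $q$ that the gradient descent requires, is straightforward to construct. A Python/NumPy sketch, with illustrative values of ${\sigma }_{d}$, $q$, and the sampling positions:

```python
import numpy as np

x = np.linspace(0.0, 100.0, 40)   # observation positions (illustrative)
sigma_d, q = 0.01, 0.02           # illustrative parameter values

# Diagonal parameterization: variance grows linearly with x,
# [C_d]_nm = sigma_d^2 (1 + q (x_n - x_1)) delta_nm
def Cd_diag(q):
    return np.diag(sigma_d**2 * (1.0 + q * (x - x[0])))

def dCd_diag_dq(q):
    return np.diag(sigma_d**2 * (x - x[0]))

# Correlated parameterization:
# [C_d]_nm = sigma_d^2 exp(-(1/2) q (x_n + x_m) |x_n - x_m|)
S = x[:, None] + x[None, :]
D = np.abs(x[:, None] - x[None, :])

def Cd_corr(q):
    return sigma_d**2 * np.exp(-0.5 * q * S * D)

def dCd_corr_dq(q):
    return -0.5 * S * D * Cd_corr(q)

# Both choices remain positive definite for q > 0 on this grid,
# as required of a covariance matrix.
```

Verifying positive definiteness (for example, by checking that all eigenvalues are positive) before running the descent guards against the singular-determinant hazard discussed next.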

Not every parameterization of ${C}_{d}$ (or ${C}_{h}$ ) is necessarily well-behaved. To avoid poor behavior, the parameterization must be chosen so that its determinant does not vanish between the starting value and ${q}^{est}$, since such zeros can prevent the steepest-descent process from converging to the global minimum. That this choice can be problematic is illustrated by the simple Toeplitz version of ${C}_{d}$ (with $N=10$, $J=9$ ):

${C}_{d}=\left[\begin{array}{cccccc}1& {q}_{1}& {q}_{2}& {q}_{3}& \cdots & {q}_{9}\\ {q}_{1}& 1& {q}_{1}& {q}_{2}& \cdots & {q}_{8}\\ {q}_{2}& {q}_{1}& 1& {q}_{1}& \cdots & {q}_{7}\\ {q}_{3}& {q}_{2}& {q}_{1}& 1& \cdots & {q}_{6}\\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ {q}_{9}& {q}_{8}& {q}_{7}& {q}_{6}& \cdots & 1\end{array}\right]$ (22)

with $\left|{q}_{i}\right|<1$. This form is useful for quantifying correlations within a stationary sequence of data [31]. Yet, as illustrated in Figure 4, the ${\mathbb{R}}^{J}$ volume is crossed by many $\mathrm{det}{C}_{d}=0$ surfaces, which correspond to surfaces on which the objective function Ψ is singular. Their presence suggests that the steepest descent path between a starting value ${q}^{\left(0\right)}$ and the global minimum at ${q}^{est}$ may be very convoluted (if, indeed, such a path exists) unless ${q}^{\left(0\right)}$ is very close to ${q}^{est}$.

Figure 4. The surfaces defined by $\mathrm{det}{C}_{d}\left(q\right)=0$ for the case given by (22). (a) The $\left({q}_{1},{q}_{2}\right)$ surface for ${q}_{3}=-0.95$ and the other *q*s randomly assigned; (b) Same as (a), but with ${q}_{3}=0.00$; (c) Same as (a), but with ${q}_{3}=0.95$; (d) Perspective view of the surfaces in the ${q}_{1},{q}_{2},{q}_{3}$ volume. The positions of the three slices in (a), (b) and (c) are noted on the ${q}_{3}$-axis (green arrows). A question posed in the text is whether, given an arbitrary point ${q}^{\left(0\right)}$ and the global minimum of the objective function, say at ${q}^{est}$ (and with both points satisfying $\mathrm{det}{C}_{d}>0$ ), a steepest-descent path necessarily exists between them.
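This hazard is easy to reproduce numerically. In the Python/NumPy sketch below (the particular values of $q$ are illustrative), the determinant of (22) is positive at $q=0$ (where ${C}_{d}=I$ ) but negative when ${q}_{1}=0.99$ and all other ${q}_{i}=0$, so $\mathrm{det}{C}_{d}$ must vanish somewhere on the segment between these two points:

```python
import numpy as np

N = 10

def C_d(q):
    """Toeplitz covariance of (22): [C_d]_nm = q_{|n-m|}, with q_0 = 1."""
    col = np.concatenate(([1.0], q))          # first column: 1, q_1, ..., q_9
    idx = np.abs(np.arange(N)[:, None] - np.arange(N)[None, :])
    return col[idx]

q_a = np.zeros(N - 1)          # C_d = identity, det = 1 > 0
q_b = np.zeros(N - 1)
q_b[0] = 0.99                  # three eigenvalues are negative, det < 0

# Sample det C_d along the straight line from q_a to q_b; because the
# endpoint determinants have opposite signs, a det C_d = 0 surface is
# crossed somewhere in between.
ts = np.linspace(0.0, 1.0, 201)
dets = np.array([np.linalg.det(C_d((1 - t) * q_a + t * q_b)) for t in ts])
```

Crossings of this kind make the objective function Ψ singular along the path, independently of the data.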

5. Conclusion

Generalized Least Squares requires the assignment of two prior covariance matrices: the prior covariance of the data and the prior covariance of the prior information. Making these assignments is often a very subjective process. However, in cases in which the forms of these matrices can be anticipated up to a set of poorly-known parameters, information contained within the data and prior information can be used to improve knowledge of them, a process we call “tuning”. Tuning can be achieved by minimizing an objective function that depends on both the generalized error and the determinants of the covariance matrices, to arrive at a best estimate of the parameters. Analytic and computationally-tractable formulas are derived for the derivatives needed to implement the minimization via a gradient descent method. Furthermore, the problem is organized so that the minimization need be performed only over the space of covariance parameters, and not over the typically much larger joint space of model and covariance parameters. Although some care needs to be exercised when the covariance matrices are parametrized, the minimization is tractable and can lead to better estimates of the model parameters. An important outcome of this study is the recognition that the use of trade-off curves to determine the relative weighting of covariances, a practice ubiquitous in geophysical imaging, is not consistent with the underlying Bayesian framework of Generalized Least Squares. The strategy outlined here provides a consistent solution.

Acknowledgements

The author thanks Roger Creel for helpful discussion.

References

[1] Tarantola, A. and Valette, B. (1982) Generalized Non-Linear Inverse Problems Solved Using the Least Squares Criterion. Reviews of Geophysics and Space Physics, 20, 219-232.

https://doi.org/10.1029/RG020i002p00219

[2] Tarantola, A. and Valette, B. (1982) Inverse Problems = Quest for Information. Journal of Geophysics, 50, 159-170.

https://n2t.net/ark:/88439/y048722

[3] Menke, W. (2018) Geophysical Data Analysis: Discrete Inverse Theory. 4th Edition, Elsevier, 350 p.

[4] Menke, W. and Menke, J. (2016) Environmental Data Analysis with MATLAB. 2nd Edition, Elsevier, 342 p.

https://doi.org/10.1016/B978-0-12-804488-9.00001-X

[5] Tarantola, A. (2005) Inverse Problem Theory and Methods for Model Parameter Estimation. SIAM: Society for Industrial and Applied Mathematics, 342 p.

https://doi.org/10.1137/1.9780898717921

[6] Menke, W. (2014) Review of the Generalized Least Squares Method. Surveys in Geophysics, 36, 1-25.

https://doi.org/10.1007/s10712-014-9303-1

[7] Abers, G. (1994) Three-Dimensional Inversion of Regional P and S Arrival Times in the East Aleutians and Sources of Subduction Zone Gravity Highs. Journal of Geophysical Research, 99, 4395-4412.

https://doi.org/10.1029/93JB03107

[8] Schmandt, B. and Lin, F.-C. (2014) P and S Wave Tomography of the Mantle beneath the United States. Geophysical Research Letters, 41, 6342-6349.

https://doi.org/10.1002/2014GL061231

[9] Menke, W. (2005) Case Studies of Seismic Tomography and Earthquake Location in a Regional Context. Geophysical Monograph 157. American Geophysical Union, Washington DC.

https://doi.org/10.1029/157GM02

[10] Nettles, M. and Dziewonski, A.M. (2008) Radially Anisotropic Shear Velocity Structure of the Upper Mantle Globally and Beneath North America. Journal of Geophysical Research, 113, B02303.

https://doi.org/10.1029/2006JB004819

[11] Chen, W. and Ritzwoller, M.H. (2016) Crustal and Uppermost Mantle Structure Beneath the United States. Journal of Geophysical Research, 121, 4306-4342.

https://doi.org/10.1002/2016JB012887

[12] Humphreys, E.D., Dueker, K.G., Schutt, D.L. and Smith, R.B. (2000) Beneath Yellowstone: Evaluating Plume and Nonplume Models Using Teleseismic Images of the Upper Mantle. GSA Today, 10, 1-7.

https://www.geosociety.org/gsatoday/archive/10/12/

[13] Gillet, N., Schaeffer, N. and Jault, D. (2011) Rationale and Geophysical Evidence for Quasi-Geostrophic Rapid Dynamics within the Earth’s Outer Core. Physics of the Earth and Planetary Interiors, 187, 380-390.

https://doi.org/10.1016/j.pepi.2011.01.005

[14] Zhao, S. (2013) Lithosphere Thickness and Mantle Viscosity Estimated from Joint Inversion of GPS and GRACE-Derived Radial Deformation and Gravity Rates in North America. Geophysical Journal International, 194, 1455-1472.

https://doi.org/10.1093/gji/ggt212

[15] Menke, W. and Eilon, Z. (2015) Relationship between Data Smoothing and the Regularization of Inverse Problems. Pure and Applied Geophysics, 172, 2711-2726.

https://doi.org/10.1007/s00024-015-1059-0

[16] Voorhies, C.F. (1986) Steady Flows at the Top of Earth’s Core Derived from Geomagnetic Field Models. Journal of Geophysical Research, 91, 12444-12466.

https://doi.org/10.1029/JB091iB12p12444

[17] Yao, Z.S. and Roberts, R.G. (1999) A Practical Regularization for Seismic Tomography. Geophysical Journal International, 138, 293-299.

https://doi.org/10.1046/j.1365-246X.1999.00849.x

[18] Snyman, J.A. and Wilke, D.N. (2018) Practical Mathematical Optimization—Basic Optimization Theory and Gradient-Based Algorithms. Springer Optimization and Its Applications, 2nd Edition, Springer, New York, 340 p.

[19] Hildebrand, F.B. (1987) Introduction to Numerical Analysis. 2nd Edition, Dover Publications, New York.

[20] Zaroli, C., Sambridge, M., Lévêque, J.-J., Debayle, E. and Nolet, G. (2013) An Objective Rationale for the Choice of Regularization Parameter with Application to Global Multiple-Frequency S-Wave Tomography. Solid Earth, 4, 357-371.

https://doi.org/10.5194/se-4-357-2013

[21] Malinverno, A. and Parker, R.L. (2006) Two Ways to Quantify Uncertainty in Geophysical Inverse Problems. Geophysics, 71, W15-W27.

https://doi.org/10.1190/1.2194516

[22] Malinverno, A. and Briggs, V.A. (2004) Expanded Uncertainty Quantification in Inverse Problems: Hierarchical Bayes and Empirical Bayes. Geophysics, 69, 877-1103.

https://doi.org/10.1190/1.1778243

[23] Box, G.E.P. and Tiao, G.C. (1992) Bayesian Inference in Statistical Analysis. Wiley, New York, 589 p.

https://doi.org/10.1002/9781118033197

[24] Schmidt, E. (1973) Cholesky Factorization and Matrix Inversion, National Oceanic and Atmospheric Administration Technical Report NOS-56. US Government Printing Office, Washington DC.

https://books.google.com/books?id=MiRHAQAAIAAJ

[25] Petersen, K.B. and Pedersen, M.S. (2008) The Matrix Cookbook, 71 p.

https://archive.org/details/imm3274

[26] Bartels, R.H. and Stewart, G.W. (1972) Solution of the Matrix Equation AX + XB = C. Communications of the ACM, 15, 820-826.

https://doi.org/10.1145/361573.361582

[27] Higham, N.J. (1987) Computing Real Square Roots of a Real Matrix. Linear Algebra and its Applications, 88-89, 405-430.

https://doi.org/10.1016/0024-3795(87)90118-2

[28] Magnus, J.R. and Neudecker, H. (1999) Matrix Differential Calculus with Applications in Statistics and Econometrics, Revised Edition. John Wiley and Sons, New York, 424 p.

[29] Gantmacher, F.R. (1960) The Theory of Matrices, Volume 1. Chelsea Publishing, New York, 374 p.

[30] Fisher, R.A. (1925) Theory of Statistical Estimation. Mathematical Proceedings of the Cambridge Philosophical Society, 22, 700-725.

https://doi.org/10.1017/S0305004100009580

[31] Claerbout, J.F. (1985) Fundamentals of Geophysical Data Processing with Applications to Petroleum Prospecting. Blackwell Scientific Publishing, Oxford, UK, 267 p.