Tucker tensor analysis of Matérn functions in spatial statistics

by Alexander Litvinenko, David Keyes, Venera Khoromskaia, Boris N. Khoromskij, Hermann G. Matthies
Year: 2018


Alexander Litvinenko, David Keyes, Venera Khoromskaia, Boris N. Khoromskij and Hermann G. Matthies, Tucker tensor analysis of Matérn functions in spatial statistics, accepted to J. Comput. Methods Appl. Math. (CMAM), 2018

Extra Information


The research reported in this publication was supported by funding from King
Abdullah University of Science and Technology (KAUST).


@ARTICLE{2017arXiv171106874L,
   author = {{Litvinenko}, A. and {Keyes}, D. and {Khoromskaia}, V. and {Khoromskij}, B.~N. and
 {Matthies}, H.~G.},
    title = "{Tucker Tensor analysis of Matern functions in spatial statistics}",
  journal = {ArXiv e-prints},
archivePrefix = "arXiv",
   eprint = {1711.06874},
 primaryClass = "math.NA",
 keywords = {Mathematics - Numerical Analysis, 62F99, 62P12, 65F30, 65F40, G.3, G.4, J.2},
     year = 2017,
    month = nov,
   adsurl = {http://adsabs.harvard.edu/abs/2017arXiv171106874L},
  adsnote = {Provided by the SAO/NASA Astrophysics Data System}
}


In this work, we describe advanced numerical tools for working with multivariate functions and for the
analysis of large data sets. These tools will drastically reduce the required computing time and the
storage cost, and, therefore, will allow us to consider much larger data sets or finer meshes.
Covariance matrices are crucial in spatio-temporal statistical tasks, but are often very expensive to
compute and store, especially in 3D. Therefore, we approximate covariance functions by cheap surrogates
in a low-rank tensor format. We apply the Tucker and canonical tensor decompositions to a family of
Mat\'ern- and Slater-type functions with varying parameters and demonstrate numerically
that their approximations exhibit exponentially fast convergence.
We prove the exponential convergence of the Tucker and canonical approximations in tensor
rank parameters.
Several statistical operations are performed in this low-rank tensor format, including evaluating the
conditional covariance matrix and the spatially averaged estimation variance, and computing a
quadratic form, the determinant, trace, log-likelihood, inverse,
and Cholesky decomposition of a large covariance matrix.
Low-rank tensor approximations substantially reduce the computing and storage costs.
For example, the storage cost
is reduced from exponential scaling $\mathcal{O}(n^d)$ to linear scaling $\mathcal{O}(drn)$,
where $d$ is the spatial dimension, $n$ is the number of mesh points in one direction,
and $r$ is the tensor rank.
Prerequisites for the applicability of the proposed techniques are that the data locations
and measurements lie on a tensor (axis-parallel) grid and that the covariance
function depends only on the distance $\| x-y\|$.
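The storage reduction and the fast rank convergence can be illustrated with a minimal NumPy sketch (not the authors' code): we sample a Slater-type function $e^{-\|x\|}$ on an axis-parallel 3D grid, compute a truncated higher-order SVD (one standard way to obtain a Tucker approximation), and compare the relative error and storage for a few ranks. The grid size, interval, and ranks below are illustrative choices, not values from the paper.

```python
import numpy as np

# Sample a Slater-type function e^{-||x||} on an axis-parallel 3D grid.
n = 32
grid = np.linspace(-1.0, 1.0, n)
X, Y, Z = np.meshgrid(grid, grid, grid, indexing="ij")
A = np.exp(-np.sqrt(X**2 + Y**2 + Z**2))   # full n x n x n tensor

def unfold(T, mode):
    """Mode-k unfolding of a 3-way tensor into a matrix."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, r):
    """Truncated HOSVD: a rank-(r, r, r) Tucker approximation."""
    U = []
    for mode in range(3):
        u, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        U.append(u[:, :r])                 # leading r left singular vectors
    # Core tensor: G = T x_1 U1^T x_2 U2^T x_3 U3^T
    G = np.einsum("ijk,ia,jb,kc->abc", T, U[0], U[1], U[2])
    return G, U

for r in (2, 4, 8):
    G, U = hosvd(A, r)
    A_r = np.einsum("abc,ia,jb,kc->ijk", G, U[0], U[1], U[2])
    err = np.linalg.norm(A - A_r) / np.linalg.norm(A)
    full_cost = A.size                     # O(n^3) entries
    tucker_cost = G.size + sum(u.size for u in U)  # O(r^3 + 3rn)
    print(f"rank {r}: rel. error {err:.1e}, storage {tucker_cost} vs {full_cost}")
```

The relative error drops rapidly as the rank grows, in line with the exponential convergence proved in the paper, while the Tucker storage $r^3 + 3rn$ stays far below the full $n^3$.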


Accepted to Computational Methods in Applied Mathematics (CMAM), De Gruyter.


Keywords: Fourier transform, low-rank tensor approximation, geostatistical optimal design, kriging, Matérn covariance, Hilbert tensor, Kalman filter, Bayesian update, log-likelihood surrogate