Inverse problems lend themselves naturally to a Bayesian formulation, in
which the quantity of interest is a posterior distribution of state
and/or parameters given some uncertain observations. In the common case
in which the forward operator is smoothing, the inverse problem is
ill-posed. Well-posedness is restored via regularisation in the form of a
prior, which is often Gaussian. Under quite general conditions, it can be
shown that the posterior is absolutely continuous with respect to the
prior, and hence is well-defined on function space through its density
with respect to the prior.
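As a concrete illustration (the notation here is standard in the Bayesian
inverse problems literature, and is assumed rather than taken from this
text): writing $\mu_0$ for the prior, $y$ for the data, and $\Phi(u;y)$
for the negative log-likelihood, the posterior $\mu^y$ is characterised by
\[
\frac{\mathrm{d}\mu^y}{\mathrm{d}\mu_0}(u)
= \frac{1}{Z(y)} \exp\bigl(-\Phi(u;y)\bigr),
\qquad
Z(y) = \int \exp\bigl(-\Phi(u;y)\bigr)\,\mu_0(\mathrm{d}u).
\]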
In this case, by constructing a proposal for which the prior is
invariant, one can define Metropolis-Hastings MCMC schemes which are
well-defined on function space, and which hence do not degenerate as the
dimension of the underlying quantity of interest increases to infinity,
e.g. under mesh refinement when approximating a PDE in finite dimensions.
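A well-known example of such a prior-reversible proposal is the
preconditioned Crank-Nicolson (pCN) move; as a minimal sketch, assuming a
Gaussian prior $N(0,\mathcal{C})$ and a step parameter $\beta \in (0,1]$,
the proposal from the current state $u$ is
\[
v = \sqrt{1-\beta^2}\, u + \beta\, \xi, \qquad \xi \sim N(0,\mathcal{C}),
\]
which preserves the prior exactly, so the Metropolis-Hastings acceptance
probability reduces to $\min\{1, \exp(\Phi(u;y)-\Phi(v;y))\}$,
independently of the discretisation dimension.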
However, in practice, despite their attractive theoretical properties,
the currently available schemes may still suffer from long correlation
times, particularly if the data is very informative about some of the
unknown parameters. Indeed, in this case it may be precisely the
directions in which the posterior coincides with the (already known)
prior that decorrelate most slowly.
The information incorporated into the posterior through the data is
often contained, in an appropriate basis, within some finite-dimensional
subspace, perhaps even one defined by eigenfunctions of the prior
covariance.
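For instance, in the standard construction (notation again assumed), a
Gaussian prior $N(0,\mathcal{C})$ with covariance eigenpairs
$(\lambda_k, \varphi_k)$ admits the Karhunen-Loève expansion
\[
u = \sum_{k \ge 1} \sqrt{\lambda_k}\, \xi_k\, \varphi_k,
\qquad \xi_k \sim N(0,1) \ \text{i.i.d.},
\]
and the data may then constrain only a finite set of the coefficients
$\xi_k$, the remainder being governed essentially by the prior.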
We aim to exploit this fact and improve the mixing time of
function-space MCMC by careful rescaling of the proposal. To this end,
we introduce two new basic methods of increasing complexity, involving
(i) characteristic function truncation of high frequencies and (ii)
Hessian information to interpolate between low and high frequencies.
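To make the idea of frequency-dependent rescaling concrete, the following
is a minimal sketch of a mode-wise pCN sampler in which the step size is
modulated by an indicator (characteristic) function of the low-frequency
modes; the toy likelihood phi, the cutoff K, and the eigenvalue decay are
illustrative assumptions, not the methods developed here.

    import numpy as np

    rng = np.random.default_rng(0)

    N = 200   # discretisation dimension of the unknown
    K = 10    # low-frequency modes the data is assumed to inform
    lam = 1.0 / np.arange(1, N + 1) ** 2  # assumed prior eigenvalue decay

    def phi(u):
        # Toy negative log-likelihood acting only on the first K modes
        # (an illustrative assumption, not the model of this work).
        return 0.5 * np.sum((u[:K] - 1.0) ** 2) / 0.01

    # Mode-wise step sizes: small beta on the data-informed low
    # frequencies, beta = 1 (an independent prior redraw) on the rest.
    beta = np.where(np.arange(N) < K, 0.1, 1.0)

    u = np.sqrt(lam) * rng.standard_normal(N)  # initialise from the prior
    for _ in range(10000):
        xi = np.sqrt(lam) * rng.standard_normal(N)
        v = np.sqrt(1.0 - beta ** 2) * u + beta * xi  # prior-reversible
        # pCN acceptance: prior terms cancel, only the likelihood remains.
        if np.log(rng.uniform()) < phi(u) - phi(v):
            u = v

Because each mode is updated by its own prior-reversible pCN move, the
usual acceptance probability is unchanged; the indicator structure of
beta is one plausible reading of method (i), and method (ii) would
presumably replace this hard cutoff with a Hessian-informed smooth
interpolation between the two regimes.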