Let (Ω, F, P) be a probability space on which all random objects will be defined. A filtration {F_{t} : t ≥ 0} of σ-algebras is fixed and defines the information available at each time t.

Random field: A real-valued random field is a family of random variables Z(x) indexed by x ∈ R^{d} together with a collection of distribution functions of the form F_{x_1,…,x_n} which satisfy

F_{x_1,…,x_n}(b_{1},…,b_{n}) = P[Z(x_{1}) ≤ b_{1},…,Z(x_{n}) ≤ b_{n}], b_{1},…,b_{n} ∈ R.

The mean function of Z is m(x) = E[Z(x)] whereas the covariance function and the correlation function are respectively defined as

R(x, y) = E[Z(x)Z(y)] − m(x)m(y)

c(x, y) = R(x, y)/√(R(x, x)R(y, y))

Notice that the covariance function of a random field Z is a non-negative definite function on R^{d} × R^{d}; that is, if x_{1}, …, x_{k} is any collection of points in R^{d} and ξ_{1}, …, ξ_{k} are arbitrary real constants, then

∑_{l=1}^{k}∑_{j=1}^{k} ξ_{l}ξ_{j} R(x_{l}, x_{j}) = ∑_{l=1}^{k}∑_{j=1}^{k} ξ_{l}ξ_{j} E(Z(x_{l}) Z(x_{j})) = E (∑_{j=1}^{k} ξ_{j} Z(x_{j}))^{2} ≥ 0

Here, without loss of generality, we have assumed m = 0. The property of non-negative definiteness characterizes covariance functions. Hence, given any function m : R^{d} → R and a non-negative definite function R : R^{d} × R^{d} → R, it is always possible to construct a random field for which m and R are the mean and covariance function, respectively.
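As a numerical sanity check (a sketch not from the text, using an arbitrarily chosen kernel), the squared-exponential function R(x, y) = exp(−∥x − y∥²) is non-negative definite, so the matrix it generates on any finite collection of points has no eigenvalue below zero, up to round-off:

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary points x_1, ..., x_k in R^d (here d = 3, k = 40).
X = rng.normal(size=(40, 3))

# Squared-exponential covariance R(x, y) = exp(-||x - y||^2),
# a standard example of a non-negative definite function.
diff = X[:, None, :] - X[None, :, :]
R = np.exp(-np.sum(diff**2, axis=-1))

# Non-negative definiteness: sum_{l,j} xi_l xi_j R(x_l, x_j) >= 0
# for arbitrary real xi_1, ..., xi_k, i.e. every eigenvalue of the
# matrix (R(x_l, x_j))_{l,j} is >= 0.
eigvals = np.linalg.eigvalsh(R)
print(eigvals.min() >= -1e-8)  # True
```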

Bochner’s Theorem: A continuous function R from R^{d} to the complex plane is non-negative definite if and only if it is the Fourier-Stieltjes transform of a measure F on R^{d}, that is the representation

R(x) = ∫_{R^{d}} e^{ix·λ} dF(λ)

holds for x ∈ R^{d}. Here, x·λ denotes the scalar product ∑_{k=1}^{d} x_{k}λ_{k}, and F is a bounded, non-negative measure on R^{d}, that is, ∫_{A} dF(λ) ≥ 0 for every measurable A ⊂ R^{d}.
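A one-dimensional numerical illustration (a sketch not from the text): the covariance R(x) = exp(−x²/2) is the Fourier-Stieltjes transform of the standard Gaussian measure, dF(λ) = φ(λ) dλ, which can be checked by quadrature:

```python
import numpy as np

# Grid on which the spectral measure F is discretized; the Gaussian
# density is negligible outside [-10, 10].
lam = np.linspace(-10, 10, 4001)
phi = np.exp(-lam**2 / 2) / np.sqrt(2 * np.pi)  # density of F

def R(x):
    # R(x) = integral over R of e^{i x lambda} dF(lambda),
    # approximated by a Riemann sum on the grid.
    f = np.exp(1j * x * lam) * phi
    return np.real(np.sum(f) * (lam[1] - lam[0]))

# Bochner representation recovers R(x) = exp(-x^2 / 2):
errs = [abs(R(x) - np.exp(-x**2 / 2)) for x in (0.0, 0.7, 2.3)]
print(max(errs) < 1e-6)  # True
```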

The cross covariance function of two random fields Z_{1}, Z_{2} is defined as

R_{12}(x, y) = E[Z_{1}(x)Z_{2}(y)] − m_{1}(x)m_{2}(y),

where m_{1} and m_{2} are the respective mean functions. Obviously, R_{12}(x, y) = R_{21}(y, x). A family of processes Z_{ι} with ι belonging to some index set I can be considered as a process on the product space R^{d} × I.
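In finite dimensions the identity R_{12}(x, y) = R_{21}(y, x) is just a transpose relation, which the following sketch (with an assumed linear construction of the two fields) makes concrete:

```python
import numpy as np

rng = np.random.default_rng(1)

# Sketch: represent two zero-mean fields observed at k points as
# Z1 = A W and Z2 = B W for a common white-noise vector W with
# E[W W^T] = I, so that R12[l, j] = E[Z1(x_l) Z2(x_j)] = (A B^T)[l, j].
k, m = 5, 8
A = rng.normal(size=(k, m))
B = rng.normal(size=(k, m))

R12 = A @ B.T  # cross covariance R12(x_l, x_j)
R21 = B @ A.T  # cross covariance R21(x_l, x_j)

# R12(x, y) = R21(y, x) is the matrix transpose relation:
print(np.allclose(R12, R21.T))  # True
```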

A central concept in the study of random fields is that of homogeneity or stationarity. A random field is homogeneous or (second-order) stationary if E[Z(x)^{2}] is finite for all x ∈ R^{d} and

• m(x) ≡ m is independent of x ∈ R^{d}

• R(x, y) solely depends on the difference x − y

Thus we may consider R(h) = Cov(Z(x), Z(x+h)) = E[Z(x) Z(x+h)] − m^{2}, h ∈ R^{d},

and call R the covariance function of Z. In this case, the covariance and correlation functions are related by

c(h) = R(h)/R(0)

i.e. c(h) ∝ R(h). For this reason, attention is confined to either c or R. Two stationary random fields Z_{1}, Z_{2} are stationarily correlated if their cross covariance function R_{12}(x, y) depends only on the difference x − y. The two random fields are uncorrelated if R_{12} vanishes identically.
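A concrete stationary example (a sketch with assumed parameters, not from the text): the AR(1) recursion Z_{t} = a Z_{t−1} + ε_{t} with unit innovation variance has covariance R(h) = a^{|h|}/(1 − a²), which depends on the lag h only, as a Monte Carlo estimate at two different base points confirms:

```python
import numpy as np

rng = np.random.default_rng(2)

# AR(1) with assumed coefficient a = 0.6, started in its stationary
# distribution, so the sequence is second-order stationary with
# R(h) = a^{|h|} / (1 - a^2).
a, T, N = 0.6, 30, 100_000
Z = np.empty((N, T))
Z[:, 0] = rng.normal(scale=1 / np.sqrt(1 - a**2), size=N)
for t in range(1, T):
    Z[:, t] = a * Z[:, t - 1] + rng.normal(size=N)

# Empirical Cov(Z(x), Z(x+h)) at two different base points x = 5, 20:
h = 3
c1 = np.mean(Z[:, 5] * Z[:, 5 + h])
c2 = np.mean(Z[:, 20] * Z[:, 20 + h])
theory = a**h / (1 - a**2)

# Both estimates agree with each other and with the lag-only formula:
print(abs(c1 - c2) < 0.05 and abs(c1 - theory) < 0.05)  # True
```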

An interesting special class of homogeneous random fields that often arise in practice is the class of isotropic fields. These are characterized by the property that the covariance function R depends only on the length ∥h∥ of the vector h:

R(h) = R(∥h∥) .
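Since an isotropic covariance depends on h only through ∥h∥, it is invariant under rotations of the lag vector. A small sketch (with an assumed exponential kernel, not from the text):

```python
import numpy as np

rng = np.random.default_rng(3)

# An isotropic covariance: R(h) = exp(-||h||), a standard example.
def R(h):
    return np.exp(-np.linalg.norm(h))

h = np.array([1.0, 2.0, -0.5])

# A random orthogonal matrix Q (rotation/reflection) preserves ||h||,
# hence leaves an isotropic R unchanged.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
print(np.isclose(R(Q @ h), R(h)))  # True
```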

In many applications, random fields are considered as functions of “time” and “space”. In this case, the parameter set is most conveniently written as (t,x) with t ∈ R_{+} and x ∈ R^{d}. Such processes are often homogeneous in (t, x) and isotropic in x in the sense that

E[Z(t, x)Z(t + h, x + y)] = R(h, ∥y∥) ,

where R is a function from R × R_{+} into R. In such a situation, the covariance function can be written as

R(t, ∥x∥) = ∫_{R} ∫_{λ=0}^{∞} e^{itu} H_{d} (λ ∥x∥) dG(u, λ),

where

H_{d}(r) = (2/r)^{(d−2)/2} Γ(d/2) J_{(d−2)/2}(r)

and J_{m} is the Bessel function of the first kind of order m and G is a multiple of a distribution function on the half plane {(λ,u)|λ ≥ 0,u ∈ R}.
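Assuming SciPy is available, the kernel H_{d} can be evaluated directly and checked against two known closed forms, H_{2}(r) = J_{0}(r) and H_{3}(r) = sin(r)/r (the latter because J_{1/2}(r) = √(2/(πr)) sin r):

```python
import numpy as np
from scipy.special import gamma, jv

# H_d(r) = (2/r)^{(d-2)/2} Gamma(d/2) J_{(d-2)/2}(r)
def H(d, r):
    nu = (d - 2) / 2
    return (2 / r) ** nu * gamma(d / 2) * jv(nu, r)

r = np.linspace(0.1, 10, 100)

# Closed-form sanity checks for d = 2 and d = 3:
print(np.allclose(H(2, r), jv(0, r)))       # True
print(np.allclose(H(3, r), np.sin(r) / r))  # True
```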