The tensor directional basis functions are not chosen a priori but tuned to each cross-section we consider. In particular, the number of retained tensor directional basis functions (for a given accuracy) depends on the cross-section and also on the phase direction. Thanks to the Karhunen–Loève decomposition technique, the most important information of each cross-section is extracted, axis by axis, and represented by a few tensor directional basis functions.
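To fix ideas, a discrete analogue of this axis-by-axis extraction is a mode-by-mode matricization followed by a truncated SVD, in the spirit of the higher-order SVD. The sketch below assumes, purely for illustration, that a cross-section has been sampled on a full tensorized grid (which is not how the data acquisition is organized in practice; see section 3), and all names are ours:

```python
import numpy as np

def directional_bases(samples, ranks):
    """Extract discrete directional basis vectors, axis by axis.

    samples -- d-dimensional array of cross-section values on a full grid,
               samples[i1, ..., id] = f(x_{1,i1}, ..., x_{d,id})
    ranks   -- number of basis vectors retained per axis (may differ per axis)
    Returns a list of (n_l, r_l) matrices whose columns are the discrete
    directional basis functions of axis l.
    """
    bases = []
    for axis, r in enumerate(ranks):
        # Matricize: bring the current axis to the front, flatten the rest.
        unfolding = np.moveaxis(samples, axis, 0).reshape(samples.shape[axis], -1)
        # The leading left singular vectors are the discrete analogue of
        # the dominant Karhunen-Loeve modes along this axis.
        u, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        bases.append(u[:, :r])
    return bases
```

Note that the number of columns retained may differ from one axis to another, mirroring the dependence on the phase direction mentioned above.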
The coefficients in the combination of the tensor products are determined by a system of linear equations expressing interpolation equalities at some points. Note that the points are the same whatever the cross-section; this allows us to limit the number of APOLLO2 calculations performed at these points when determining the coefficients. In order to determine the points on which this system depends, we rely on the idea presented in the empirical interpolation method (EIM) [9].
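To illustrate the point-selection idea, the following is a minimal sketch of an EIM-style greedy loop, assuming the candidate points and the already constructed basis functions are available as arrays; the function name and signature are ours, not part of [9] or of APOLLO2:

```python
import numpy as np

def eim_points(basis_values, n_points):
    """Greedy EIM-style selection of interpolation points.

    basis_values -- (n_candidates, n_basis) array; column m holds the m-th
                    basis function evaluated at all candidate points.
    Assumes n_points <= n_basis. Returns the indices of the chosen points.
    """
    # First point: where the first basis function is largest in magnitude.
    indices = [int(np.argmax(np.abs(basis_values[:, 0])))]
    for m in range(1, n_points):
        # Interpolate the next basis function using the points chosen so far.
        system = basis_values[indices][:, :m]              # small (m, m) system
        coeffs = np.linalg.solve(system, basis_values[indices, m])
        residual = basis_values[:, m] - basis_values[:, :m] @ coeffs
        # Next point: where the interpolation error is largest.
        indices.append(int(np.argmax(np.abs(residual))))
    return np.array(indices)
```

Once the points are fixed, the coefficients of any new cross-section are obtained from the same small linear system fed with its values at those points, which is precisely what limits the number of APOLLO2 calculations.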
With these techniques, we can reconstruct the cross-sections with high accuracy while significantly reducing the number of calculation points and the storage in the neutron library.
In the light of what was presented at the beginning of the introduction about the four concepts for approximation, our approach allows us to minimize the number of data acquisitions, each of which involves a run of the APOLLO2 code. Since these data are an important part of the storage, the storage size is therefore significantly reduced. The other part of our approach is the definition of the tensor directional basis functions. The interpolation process, through a tensor-shaped approximation of each cross-section, allows us to minimize the complexity of the function reconstruction for subsequent evaluations at various points.
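To make the evaluation step concrete: a generic Tucker-form approximation $f(\mathbf{x}) \approx \sum_{j_1,\ldots,j_d} a_{j_1 \ldots j_d} \prod_{l=1}^{d} \varphi_{j_l}^{(l)}(x_l)$ can be evaluated at one point by successive contractions of the coefficient tensor. The sketch below is only an illustration with hypothetical names; the actual model and its cost are detailed in sections 3 and 4:

```python
import numpy as np

def eval_tucker(core, bases, point):
    """Evaluate a Tucker-form approximation at one parameter point.

    core  -- d-dimensional coefficient tensor of shape (r_1, ..., r_d)
    bases -- list of d callables; bases[l](t) returns the vector of the
             r_l directional basis functions of axis l evaluated at t
    point -- parameter point (x_1, ..., x_d)
    """
    result = core
    for basis, x_l in zip(bases, point):
        # Contract the leading axis of the remaining tensor with the
        # directional basis values along this axis.
        result = np.tensordot(basis(x_l), result, axes=([0], [0]))
    return float(result)
```

The cost of one evaluation thus scales with the retained ranks $r_1, \ldots, r_d$ rather than with the size of the underlying parameter grid, which is the complexity reduction referred to above.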
The paper is organized in the following manner: Section 2 describes the theoretical background of the Karhunen–Loève decomposition, on which our approach is based in order to construct the tensor directional basis functions. In section 3, we present our proposed methodology for the Tucker decomposition in a general case. Section 4 is reserved for practical applications to our problem: the reconstruction of cross-sections in neutronics. The implementation procedure as well as the cost of the Tucker model will be detailed. In section 5, we show the numerical results of a test case in order to compare the Tucker model with the multilinear model using the following criteria: the number of calculation points, the storage in neutron libraries, and the accuracy. Section 6 is reserved for conclusion and discussion.
2. Theoretical background
2.1. Problem statement
From a mathematical point of view, our problem, the reconstruction of cross-sections, stands as the approximation of about one thousand multivariate functions $\{f_k(\mathbf{x})\}_k$, defined on the domain $\Omega$. Here, $\mathbf{x} = (x_1, \ldots, x_d) \in \Omega \subset \mathbb{R}^d$, $d$ is the number of parameters and $\Omega = \Omega_1 \times \ldots \times \Omega_d$ ($\Omega_i \subset \mathbb{R}$, $1 \leq i \leq d$).
The objective is to acquire information, store the data, and propose a reconstruction of each $f_k$ with a high [accuracy]/[complexity] ratio. Remember that we expect an absolute accuracy of the order of $10^{-5}$ (or pcm).
Our approach is based on a low-rank tensor technique in order to represent each function $f_k$. Low-rank tensor representations (or formats) are widely used to treat large-scale problems and there are many ways to express them, depending on the specific domain of application. We refer to [8,10,11] for a general description of different formats. The Karhunen–Loève decomposition will be at the basis of our approach, so we recall in the following subsection some elements of context.
2.1.1. Karhunen–Loève decomposition
The Karhunen–Loève decomposition, which was introduced in statistics for continuous random processes [12,13], is also known in various communities as, e.g., Proper Orthogonal Decomposition (POD) [14], Principal Component Analysis (PCA) [15,16], or Empirical Orthogonal Functions (EOFs) [17]. This decomposition is available for two-dimensional function spaces but has no direct extension to high-dimensional ones. The interest of this decomposition is that it provides an optimal rank-$r$ approximation for two-variate functions, with respect to the $L^2$-norm.
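In a discrete setting, this optimality is the content of the Eckart–Young theorem: truncating the SVD of the matrix of samples yields the best rank-$r$ approximation in the Frobenius (discrete $L^2$) norm. A minimal numpy illustration on an arbitrary test function of our own choosing:

```python
import numpy as np

# Sample a two-variate function on a tensorized grid.
x = np.linspace(0.0, 1.0, 200)
y = np.linspace(0.0, 1.0, 150)
F = np.exp(-np.outer(x, y)) * np.sin(np.add.outer(x, y))  # F[i, j] = f(x_i, y_j)

# Discrete Karhunen-Loeve decomposition = SVD of the sample matrix:
# the columns of u (resp. rows of vt) play the role of phi_n (resp. psi_n),
# and the singular values s_n correspond to sqrt(lambda_n).
u, s, vt = np.linalg.svd(F, full_matrices=False)

# Rank-r truncation: optimal in the discrete L2 sense.
r = 5
F_r = (u[:, :r] * s[:r]) @ vt[:r, :]
print("rank-%d relative error: %.2e" % (r, np.linalg.norm(F - F_r) / np.linalg.norm(F)))
```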
Let $f = f(x, y)$ be a two-variate function in $L^2(\Omega_x \times \Omega_y)$. Through the Karhunen–Loève decomposition, $f$ can be expressed as follows:

$$
f(x, y) = \sum_{n=1}^{+\infty} \sqrt{\lambda_n}\, \varphi_n(x)\, \psi_n(y), \quad \forall (x, y) \in \Omega_x \times \Omega_y \qquad (2)
$$
where $\{\varphi_n(x)\}_{n=1}^{+\infty}$ (resp. $\{\psi_n(y)\}_{n=1}^{+\infty}$) is an $L^2$-orthonormal basis of $L^2(\Omega_x)$ (resp. an $L^2$-orthonormal basis of $L^2(\Omega_y)$). Furthermore, each $(\lambda_n, \varphi_n(x))$ (resp. $(\lambda_n, \psi_n(y))$) is an eigenvalue–eigenfunction couple of a Hilbert–Schmidt operator $K_f^{(x)}$ (resp. Hilbert–Schmidt operator $K_f^{(y)}$) with the following definitions: