Eur. Phys. J. C (2015) 75:613, Page 6 of 29
those collected using the Initial State Radiation (ISR) mode by KLOE [36–38], BaBar [39,40] or BESIII [41]; furthermore, the normalization uncertainties reported for each of the ISR data samples have a peculiar structure which deserves a specific treatment in each case; this is the subject of the next subsection.
A constant global scale uncertainty, such as those affecting the data samples from CMD2, SND or BESIII, can be written $\beta = 1 + \lambda$, where $\lambda$ is a random variable with range $]{-1}, +\infty[$. As $E(\lambda) = 0$ and $E(\lambda^2) = \sigma^2$ with $\sigma \ll 1$, the Gaussian approximation for $\lambda$ is safe [45,46]. A data sample subject to such a global scale uncertainty provides an individual contribution to an effective global $\chi^2_{\mathrm{glob.}}$ which should a priori be written:

$$\chi^2 = \left[m - M(a) - \lambda A\right]^T V^{-1} \left[m - M(a) - \lambda A\right] + \frac{\lambda^2}{\sigma^2} \qquad (3)$$
where $m$, $M$, $V$, and $a$ have the same definitions as in Sect. 4.1, while $\lambda$ and $\sigma$ have just been defined. As for $A$, even if intuitively one may prefer $A = m$, the choice $A = M(a)$ has been shown to remove any biasing issue$^{13}$ [42,45,70].
Assuming that the unknown scale factor $\lambda$ is solely of experimental origin, and thus independent of the model parameters $a$, the solution to $\partial\chi^2/\partial\lambda = 0$ provides its most probable value $\lambda_0$ [34]. After substitution, Eq. (3) becomes

$$\chi^2 = \left[m - M(a)\right]^T W^{-1} \left[m - M(a)\right] \quad \text{with} \quad W = V + \sigma^2 A A^T, \qquad (4)$$
which exhibits a modified error covariance matrix $W$ and depends only on the (physics) model parameters. More precisely, the only remnant of the scale uncertainty $\lambda$ is the occurrence of its variance $\sigma^2$ in the modified covariance matrix $W$.
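The step from Eq. (3) to Eq. (4) can be made explicit; the following is a worked sketch in the same notation, writing $d \equiv m - M(a)$ ($d$ is a shorthand introduced here, not the paper's notation). Solving $\partial\chi^2/\partial\lambda = 0$ gives

```latex
% Most probable value of lambda from the stationarity condition
\lambda_0 = \frac{\sigma^2\, A^T V^{-1} d}{1 + \sigma^2\, A^T V^{-1} A},
\qquad d \equiv m - M(a).
% Substituting lambda_0 back into Eq. (3) and applying the
% Sherman--Morrison identity
\left(V + \sigma^2 A A^T\right)^{-1}
  = V^{-1} - \frac{\sigma^2\, V^{-1} A\, A^T V^{-1}}{1 + \sigma^2\, A^T V^{-1} A}
% collapses the result to chi^2 = d^T W^{-1} d with W = V + sigma^2 A A^T,
% i.e. Eq. (4).
```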
However, Eq. (4) clearly points toward a difficulty if the model is not numerically known beforehand, as the modified covariance matrix becomes $a$-dependent when adopting the unbiasing choice $A = M$. In this case, the parameter error covariance matrix provided by the $\chi^2$ minimization might not be easy to interpret.
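The equivalence between the profiled form of Eq. (3) and the $W$ form of Eq. (4) can also be checked numerically. A minimal sketch with synthetic placeholder data (the spectrum, model, and covariance below are arbitrary, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
m = rng.normal(10.0, 1.0, n)            # placeholder "measured" spectrum
M = m + rng.normal(0.0, 0.1, n)         # placeholder model prediction M(a)
A = M.copy()                            # unbiasing choice A = M(a)
sigma = 0.02                            # global scale uncertainty
V = np.diag(rng.uniform(0.5, 1.5, n))   # placeholder statistical covariance

d = m - M
Vinv = np.linalg.inv(V)

# Profiled scale: lambda_0 = sigma^2 A^T V^-1 d / (1 + sigma^2 A^T V^-1 A)
lam0 = sigma**2 * (A @ Vinv @ d) / (1.0 + sigma**2 * (A @ Vinv @ A))
chi2_profiled = (d - lam0 * A) @ Vinv @ (d - lam0 * A) + lam0**2 / sigma**2

# Eq. (4): modified covariance W = V + sigma^2 A A^T
W = V + sigma**2 * np.outer(A, A)
chi2_W = d @ np.linalg.solve(W, d)

print(np.isclose(chi2_profiled, chi2_W))
```

The two values agree to rounding, which is the content of the Sherman–Morrison identity behind Eq. (4).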
The way out is to define iterative procedures; this is allusively stated in [42], but explicitly considered in [44] as a solution to the so-called "Peelle's Pertinent Puzzle"$^{14}$ [43], provided a good starting approximate solution is known beforehand; however, defining such a tool might be a delicate task if the underlying model is non-linear, as is quite usual in particle physics. Such a procedure has already been followed and successfully worked out in [47] in order to derive, through a minimization procedure, the parton density functions from several measured spectra. When dealing with samples of form factor and/or cross section data, other appropriate iterative methods should be defined.

$^{13}$ This does not mean that the choice $A = m$ necessarily leads to a significantly biased solution, as shown below.

$^{14}$ Peelle's reference is no longer commonly accessible, but its main content, which closely resembles the D'Agostini issue raised in [42], is reproduced in [44].
The starting step of the iteration implies choosing some initial value for $A$, say $A = A_0$. Without further information, the best approximation one can choose is obviously $A_0 \equiv m$, the experimental spectrum itself. Quite interestingly, this turns out to start the iteration with $\lambda = 0$ ($\sigma = 0$ in Eq. (4)), i.e. $\beta = 1$, a unit scale factor; this makes the connection with the iterative method followed in [47].
Then the minimization of the $\chi^2$ in Eq. (4) with $A = A_0 \equiv m$ is performed using the minuit procedure [71], which yields the (step #0) solution$^{15}$ $M_0$ via the fitted parameter vector value $a_0$. The next step (#1) consists in minimizing Eq. (4) using $A = M_0 \equiv M(a_0)$, which is easily implemented in the procedure and, at convergence, minuit provides the step #1 solution $M(a_1)$. This stepwise procedure$^{16}$ is followed until some convergence criterion is met. As the covariance matrix is constant within each minimization step, the interpretation of the parameter error covariance matrix is canonical.
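The stepwise procedure can be sketched in a few lines. The sketch below is a simplified stand-in, not the paper's fit: it assumes a linear model $M(a) = Xa$ so that each minuit minimization can be replaced by a closed-form generalized least-squares step, and iterates the choice $A = M(a_k)$ starting from $A_0 = m$; all data and parameters are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 20, 2
# Placeholder linear model M(a) = X a (intercept + slope)
X = np.column_stack([np.ones(n), np.linspace(0.0, 1.0, n)])
a_true = np.array([5.0, 2.0])
V = np.diag(np.full(n, 0.04))      # statistical covariance (placeholder)
sigma = 0.05                       # global scale uncertainty
# Pseudo-data with a random global scale factor beta = 1 + sigma*mu
m = (1.0 + sigma * rng.normal()) * (X @ a_true) + rng.normal(0.0, 0.2, n)

def gls_step(A):
    """One full minimization of Eq. (4) with the covariance W held fixed."""
    W = V + sigma**2 * np.outer(A, A)
    Winv = np.linalg.inv(W)
    # Closed-form chi^2 minimum for a linear model (stand-in for minuit)
    return np.linalg.solve(X.T @ Winv @ X, X.T @ Winv @ m)

A = m.copy()                       # step 0: A_0 = m, the experimental spectrum
for k in range(10):
    a_fit = gls_step(A)
    A_new = X @ a_fit              # next step: A = M(a_k)
    if np.allclose(A_new, A, rtol=1e-10):
        break                      # convergence criterion met
    A = A_new

print(a_fit)                       # fitted parameter vector at convergence
```

Because the covariance $W$ is frozen within each step, the error matrix returned by each minimization keeps its canonical interpretation, as stated above.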
The convergence speed of the iterative procedure cannot be guessed ab initio but may be expected to be fast, by reference to the fit of the parton density functions, where convergence is essentially reached at the first iteration [47]. This is confirmed by the Monte Carlo studies reported in Appendix A.
Nevertheless, one may infer that the number of iteration steps is smaller for a starting guess of $A$ close to the actual model than for an arbitrary choice; clearly, as the choice $A = m$ (the experimental spectrum) should be the closest to the actual model, one may expect it to minimize the number of iterations needed to reach convergence. Additionally, this choice does not imply any a priori assumption on the parameter vector to be fitted.
Among the data samples one deals with in the BHLS-based global fit method, most have been collected in scan mode, essentially at Novosibirsk, and carry a constant scale uncertainty merging several effects. This is especially the case for the $e^+e^- \to \pi^+\pi^-$ data samples collected by the CMD2 [52,53] and SND [54] detectors; this also covers the case of the BESIII data sample [41].
In order to simplify and unify the notations in the following discussion, it is convenient to perform the change of random variable $\lambda = \sigma\mu$. Then the statistical properties of $\lambda$ propagate to $E(\mu) = 0$ and $E(\mu^2) = 1$ and, defining in addition $B = \sigma A$, Eq. (3) above becomes
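Carrying out this change of variable is mechanical; as a sketch of the expected form (in the notation just defined, not the paper's numbered equation, which falls outside this excerpt):

```latex
\chi^2 = \left[m - M(a) - \mu B\right]^T V^{-1}
         \left[m - M(a) - \mu B\right] + \mu^2
```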
$^{15}$ The analysis method in [34,35] actually stops there; the present analysis aims at going beyond.

$^{16}$ Each such step is defined as a full (minuit) minimization procedure where the covariance matrix is kept unchanged until convergence is reached.