VFNS SV NLO discrepancy
- comparison in FFNS of the current implementation still has a relevant discrepancy w.r.t. the apfel-pegasus one (i.e. at the per-mille level in the valence)
- check the same claimed solution (`perturbative-exact`, same parameters)
- using the Mellin path only in the upper imaginary plane is sufficient (relative impact: `~ 1e-5`); see the inversion sketch after this list
- check the effect of the Talbot path w.r.t. the pegasus one: relative impact `~ 1e-6` (see `test_operator.test_pegasus_path`)
- check the accuracy of the implemented NLO anomalous dimensions: they match exactly
- A per-cent difference between the eko and pegasus outputs is found when looking at the difference between Nf=3 and Nf=4
- Inspect the effect of the interpolation:
  - Apfel accuracy depends on the grid size (as expected); agreement with both Pegasus and Apfel can only be expected when using an accurate grid, so the size of the discrepancy w.r.t. the apfel-pegasus one depends on the grid size
  - Better accuracy with pegasus is found when interpolating with linear polynomials (relative impact: `~ 1e-4`) in the low-x region; the opposite (`interpolation_polynomial_degree=4`) holds for `x~1`
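To make the Mellin-path point concrete, here is a minimal, self-contained sketch of the upper-half-plane trick on a Talbot-like path. This is not eko's code: the path parametrization and the test pair `f(x) = x(1-x)` with `M(N) = 1/((N+1)(N+2))` are just illustrative choices. Since the inverted function is real, the lower half-plane contributes the complex conjugate of the upper branch, so only the upper branch needs to be integrated:

```python
import numpy as np
from scipy.integrate import quad

# Illustrative Mellin pair (not a PDF): f(x) = x (1 - x)
#   M(N) = int_0^1 x^(N-1) f(x) dx = 1 / ((N + 1) (N + 2)),  poles at N = -1, -2
def mhat(n):
    return 1.0 / ((n + 1.0) * (n + 2.0))

def talbot(t):
    """Upper branch of a Talbot-like path, theta = pi * t with t in (0, 1):
    it starts near N = 1 on the real axis and bends towards Re(N) -> -inf,
    passing above the poles on the negative real axis."""
    theta = np.pi * t
    n = theta * (1.0 / np.tan(theta) + 1j)
    jac = np.pi * (1.0 / np.tan(theta) - theta / np.sin(theta) ** 2 + 1j)
    return n, jac

def inverse_mellin(x, mellin):
    """f(x) = 1/(2 pi i) int x^(-N) M(N) dN; using only the upper half-plane:
    for a real f the lower branch is the complex conjugate, hence
    f(x) = 1/pi int_0^1 Im( x^(-N(t)) M(N(t)) dN/dt ) dt."""
    def integrand(t):
        n, jac = talbot(t)
        return np.imag(x ** (-n) * mellin(n) * jac) / np.pi
    # stay marginally away from the endpoints, where cot(pi t) is ill-defined
    res, _err = quad(integrand, 1e-10, 1.0 - 1e-10)
    return res

for x in (0.01, 0.1, 0.5, 0.9):
    print(f"x={x}: inverted={inverse_mellin(x, mhat):.6f}  exact={x * (1 - x):.6f}")
```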
A few-percent discrepancy is observed @ NLO in some regimes for some flavors when doing Scale Variations (SV) in a Variable Flavor Number Scheme (VFNS).
It is about:
- a 1-2.5% discrepancy in the large-x region for all PDFs
- in the very-large-x region a discrepancy is expected because of the vanishing PDFs and the sparse interpolation x-grid (which makes it harder to quote an upper bound on this error)
- a 1.5% discrepancy in the small-x region for those non-singlet flavors that vanish at small-x
- a 2-3% discrepancy for the gluon and the singlet in the small-x region
The main issue is that the discrepancy has a clear dependence on x, not a scattered one, so there is a systematic difference. This might be expected, since solving the differential equation can turn a numerical difference into a coherent one (or some other integration step can do the same).
- no problem @ LO
a84946172cc9de779c890a437bc4b3b4b8026cbc3b71b1c435fe82ce053ed9e0
- problems @ NLO
- ~5% in all the quarks (depending on the flavor it can be ~2% over a larger or smaller region, and up to ~8% somewhere else, always <10%)
- ~30% in the gluon distribution, ranging from 10% to 60% (no problem was present @ LO, where it was ~0.05%, like the others)
60d31eca6f3530e925d7a865fd88476cdb9a5cf8b8d17283bfe5fef6ff0807f5
- the numbers refer to the difference with LO (so the pure NLO correction)
Notice that a pure NLO correction comparison is not that meaningful, because the NLO part is not generated independently from the LO (unlike in process cross-section calculations): it promotes the differential equation to NLO, and then the solution is generated all at once (it is the equation, not the solution, that is expanded in a series).
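To make this concrete, schematically what is being solved is the DGLAP equation with the kernel truncated at the given order, e.g. in x-space:

```latex
% DGLAP evolution: the perturbative expansion sits in the kernel, not in the
% solution, so the LO and NLO pieces of the evolved PDFs are not additive.
\mu_F^2 \frac{\partial f_i(x,\mu_F^2)}{\partial \mu_F^2}
  = \sum_j \int_x^1 \frac{\mathrm{d}z}{z}\,
    P_{ij}\!\left(z, a_s(\mu_F^2)\right) f_j\!\left(\frac{x}{z},\mu_F^2\right),
\qquad
P_{ij} = a_s P_{ij}^{(0)} + a_s^2 P_{ij}^{(1)} + \mathcal{O}(a_s^3)
```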
The setup and the obtained numbers can be found in the benchmark database generated by ekomark.
`XIR` enters in a few places, through `theory_card`:
- it's propagated through `runner` and `from_dict` methods
  - `ThresholdsAtlas.from_dict` is passed `theory_card` but does not retain `XIR`
  - `StrongCoupling.from_dict` incorporates `XIR` in the thresholds definition, but does not store it directly
  - `OperatorGrid.from_dict` is the only one to store it directly, in `fact_to_ren`, and it is used in the following ways:
    - call `a_s` at `mu_R`
    - pass to the `scipy.integrate.quad()` argument `args`, and so to `quad_ker`
At the end of the day only two functions should be the entry points for scale variations:
- `strong_coupling.a_s()`, through the thresholds and the `mu_R` dependency
- `quad_ker()`, only through `L = np.log(fact_to_ren)`
  - it propagates only to `gamma_singlet_fact` and `gamma_ns_fact` (see the sketch below)
  - and their implementation perfectly agrees with the Pegasus paper
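As an illustration of how `L` can enter the NLO anomalous dimensions, here is a minimal sketch of the standard re-expansion of a_s(mu_F^2) in a_s(mu_R^2). This is not eko's `gamma_ns_fact` verbatim, and the sign of the shift depends on the convention assumed for `fact_to_ren` (taken here to be mu_F^2 / mu_R^2):

```python
import numpy as np


def beta0(nf):
    """Leading beta-function coefficient, beta_0 = 11 - 2/3 nf
    (convention d a_s / d ln mu^2 = -beta_0 a_s^2 + ..., with a_s = alpha_s / (4 pi))."""
    return 11.0 - 2.0 / 3.0 * nf


def gamma_ns_with_sv(gamma_ns, L, nf):
    """Illustrative scale-variation shift of the non-singlet anomalous dimensions.

    gamma_ns : sequence [gamma^(0)(N), gamma^(1)(N)] at a fixed Mellin N
    L        : np.log(fact_to_ren), assumed here to equal ln(mu_F^2 / mu_R^2)
    nf       : number of active flavors

    Re-expanding a_s(mu_F^2) = a_s(mu_R^2) - beta_0 L a_s(mu_R^2)^2 + ...
    inside gamma = a_s gamma^(0) + a_s^2 gamma^(1) gives, order by order in
    a_s(mu_R^2):
        gamma^(0) -> gamma^(0)
        gamma^(1) -> gamma^(1) - beta_0 L gamma^(0)
    """
    g = np.array(gamma_ns, dtype=complex)
    g[1] = g[1] - beta0(nf) * L * g[0]
    return g


# toy usage with made-up values of gamma^(0), gamma^(1) at some Mellin N
print(gamma_ns_with_sv([4.0 + 1.0j, 30.0 - 2.0j], L=np.log(4.0), nf=4))
```

The singlet case is the same relation with the 2x2 anomalous-dimension matrices in place of the scalars, since the shift only comes from re-expanding the scalar a_s.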
All the following comparisons have been performed @ NLO (the four setups are also sketched right after this list):
- FFNS
- FFNS + Scale Variations
- VFNS
- VFNS + Scale Variations
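For concreteness, the four setups only toggle the flavor-number scheme and the renormalization-scale ratio; the dictionary keys below are hypothetical placeholders, not the actual ekomark theory-card schema:

```python
from itertools import product

# Hypothetical sketch of the four NLO benchmark setups: two toggles only.
schemes = ("FFNS", "VFNS")   # flavor-number scheme
xir_values = (1.0, 0.5)      # 1.0 = no scale variation, 0.5 = an example SV point

benchmarks = [
    {"PTO": 1, "scheme": scheme, "XIR": xir}  # PTO = 1 meaning NLO (placeholder key)
    for scheme, xir in product(schemes, xir_values)
]
for card in benchmarks:
    print(card)
```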
It looks like the effect of scale variations is just to enhance discrepancies that were already present in the corresponding non-scale-varied setup, through an unknown mechanism that correlates with the presence of thresholds.
Indeed there is a fairly continuous deterioration when moving towards more complex options, rather than a new discrepancy being introduced when toggling a given feature.
The comparison has been performed against APFEL because:
- it is the software used to evolve NNPDF PDF-sets
- so there is no benefit in comparing against LHAPDF, since that would only introduce more noise because of a further interpolation
- the NLO LHA benchmark paper is only reliable at the percent level (even in the cases in which APFEL and EKO, which are completely independent codes, agree at the ‰ or sub-‰ level, the paper still quotes a percent-level discrepancy)
eko is stable w.r.t. changing the numerical parameters that control the complexity of the numerical integrations.
All in all the conclusion is:
There is no reason to claim an implementation bug in eko,
even though the numbers are large and coherent enough to expect a deterministic source for the discrepancy (some kind of difference in the numerical strategy adopted to integrate the solution).