merge DDalphaAMG_nd branch into etmc/tmLQCD/master #423
Conversation
…umberOfShifts for setting how many shifts have to be done by DDalphaAMG.
…acceptance step saving half of the applications of Z. Fixed the checking of the residual (a note will follow).
…ction term for the rational approximation only.
…al.c for heatbath.
…se in cg_mms_tm and cg_mms_tm_nd.
…on; we scale them with the coefficients which sum to the inverse.
Array of tolerances
Pushed, I think now you can pull.
monomial/cloverdetratio_rwmonomial.c (outdated)
#endif
  mg_update_gauge = 1;
}
Shouldn't the old gauge field be restored here? A sketch of what I mean follows.
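For context, a hedged sketch of the pattern the question points at: put back the gauge field saved before the trial step, then keep the flag set so DDalphaAMG re-imports the restored links. The backup buffer, the helper name, and the contiguous-layout memcpy follow common tmLQCD conventions but are assumptions for illustration, not code from this PR.

#include <string.h>

/* hypothetical sketch: gauge fields are assumed stored as one contiguous
 * block of 4*VOLUME su3 matrices behind gf[0], as elsewhere in tmLQCD */
static void restore_gauge_and_refresh_mg(su3 ** const gf, su3 ** const backup) {
  memcpy(gf[0], backup[0], 4 * VOLUME * sizeof(su3));  /* put back the old links */
  mg_update_gauge = 1;  /* as in the hunk above: force an MG gauge refresh */
}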
…_threaded introduce threading into update_momenta_fg
@@ -54,24 +54,32 @@ void reweighting_factor(const int N, const int nstore) {
  mnl = &monomial_list[j];
@gbergner: I would like to pull this in as soon as possible, does this collide with your work?
#include "su3spinor.h" | ||
#include "square_and_minmax.h" | ||
|
||
void square_and_minmax(double * const sum, double * const min, double * const max, const spinor * const P, const int N) |
@sbacchio are these routines used? They are not threaded...
It seems like they are not used anywhere. Should I remove them? Otherwise they should be threaded. Note that I've implemented some comfortable wrappers for doing multi-threaded Kahan sums.
See kahan_summation.h, omp_accumulator.h and their use in meas/measure_clover_field_strength_observables.c.
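To make the suggestion concrete, here is a minimal sketch of a threaded square_and_minmax combining a per-thread Kahan-compensated sum with OpenMP min/max reductions (OpenMP >= 3.1). A plain double array stands in for the per-site values of a spinor field, and the actual interfaces of kahan_summation.h / omp_accumulator.h are not reproduced here; this is an illustration, not the repository's code.

#include <float.h>

void square_and_minmax_threaded(double * const sum, double * const min,
                                double * const max,
                                const double * const p, const int N) {
  double s = 0.0, mn = DBL_MAX, mx = -DBL_MAX;
#pragma omp parallel reduction(min : mn) reduction(max : mx)
  {
    double ls = 0.0, lc = 0.0;        /* per-thread Kahan sum and compensation */
#pragma omp for nowait
    for (int i = 0; i < N; ++i) {
      const double q = p[i] * p[i];   /* squared contribution of one entry */
      if (q < mn) mn = q;
      if (q > mx) mx = q;
      const double y = q - lc;        /* Kahan: subtract the carried-over error */
      const double t = ls + y;        /* low-order bits of y may be lost here */
      lc = (t - ls) - y;              /* recover the lost bits */
      ls = t;
    }
#pragma omp critical
    s += ls;                          /* combine the per-thread partial sums */
  }
  *sum = s;
  *min = mn;
  *max = mx;
}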
monomial/cloverdetratio_rwmonomial.c (outdated)
double cloverdetratio_rwacc(const int id, hamiltonian_field_t * const hf) {
  monomial * mnl = &monomial_list[id];
  int save_sloppy = g_sloppy_precision_flag;
  double atime, etime;
  atime = gettime();

  if (restoresu3_flag) {
    for(int ix = 0; ix < VOLUME; ix++) {
threading missing
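A hedged sketch of how the site loop could be threaded with a plain OpenMP parallel-for, since each (ix, mu) link is independent; the guard macro and the restore_unitarity() helper are assumptions for illustration, not the actual routine.

#ifdef TM_USE_OMP
#pragma omp parallel for
#endif
for (int ix = 0; ix < VOLUME; ix++) {
  for (int mu = 0; mu < 4; mu++) {
    /* re-unitarize one gauge link; iterations touch disjoint links,
     * so the parallel-for over sites is race-free */
    restore_unitarity(&hf->gaugefield[ix][mu]);  /* hypothetical helper */
  }
}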
     mnl->type == RATCOR || mnl->type == CLOVERRATCOR)
    scale = 1;

  if(scale) {
@sbacchio @urbach Okay, I've checked this now and it seems to work correctly, also when QPhiX is used.
First line: tmLQCD only; second line: tmLQCD+QPhiX, both for sample-hmc-rat.input on an 8^3x4 lattice. I have acceptance on the 4^4 lattice, but that one I can't run with QPhiX. Third line: the current master running with QPhiX solvers.
00000000 0.124045033778 127924.456342349527 0.000000e+00 220 3592 366 3384 583 0 0 5.771806e+01 5.114121e-02
00000000 0.124045033778 127924.455873184837 0.000000e+00 220 3592 363 3334 1384 0 0 3.592745e+01 5.114121e-02
00000000 0.124045033778 127924.455873178696 0.000000e+00 220 3592 363 3334 594 0 0 3.555627e+01 5.114121e-02
…ards" This reverts commit 9a5f4ad.
Okay, pending some final test runs I will merge this in within the next few hours.
Alright, I did some high-statistics runs, especially to test the effect of 2MNFG, and I can't detect any statistically significant deviations for different combinations of MPI tasks / OpenMP threads after O(70k) trajectories on a small lattice. Will merge this now.