There's a small discrepancy between how the validation step is computed and how the final prediction is run. During validation, the input is processed as-is and the output is left-cropped to match the model's receptive field; during the final prediction, the input is zero-padded at the start so that the reported output covers exactly the same samples no matter which model is used (and whatever its receptive field happens to be).
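To make the discrepancy concrete, here's a minimal NumPy sketch. This is not the trainer's actual code: `model` is a stand-in causal FIR model, and all names here are illustrative. It shows that the validation-style output is shorter by `R - 1` samples (where `R` is the receptive field), while the pre-padded final prediction is full-length, with its first `R - 1` samples depending on synthetic zero padding:

```python
import numpy as np

# Hypothetical stand-in for a causal model with receptive field R:
# a "valid" convolution yields R - 1 fewer output samples than input samples.
def model(x, kernel):
    return np.convolve(x, kernel, mode="valid")

rng = np.random.default_rng(0)
kernel = rng.standard_normal(4)  # receptive field R = 4
R = len(kernel)
x = rng.standard_normal(16)

# Validation-style: run on the raw input; the output is shorter, so the
# target must be left-cropped by R - 1 samples before comparing.
y_validation = model(x, kernel)  # length 16 - (R - 1) = 13

# Final-prediction-style: zero-pad the input on the left so the output
# lines up sample-for-sample with the full-length target.
y_final = model(np.concatenate([np.zeros(R - 1), x]), kernel)  # length 16

# The tails agree exactly; the final prediction additionally includes
# R - 1 leading samples that depend on the zero padding.
assert np.allclose(y_final[R - 1:], y_validation)
```

An error metric like ESR averaged over `y_final` therefore includes those extra leading samples, which the validation-time metric never sees, so the two numbers can differ slightly even on the same data.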
I'll leave this Issue open because I may be able to do some refactoring elsewhere that makes these agree (I'm thinking of refactoring some of the data processing code). I'd rather wait and do that than jump in now: with the tools currently in the code, the result would probably come out rather ugly.
To take a step back, this discrepancy won't cause any real problems: if the difference between two models comes down to their ability to predict a bit of leading silence, that's probably not telling you which model is actually better in practice 😉.
*sdatkinson changed the title from "[BUG] Lowest checkpoint ESR is not exported by the trainer in v0.10.0" to "[BUG] Final prediction ESR doesn't match what's computed during the trainer's validation step" on Oct 11, 2024*
Hi Steve,
I've been re-training some models with trainer version v0.10.0 and encountered something I know happened a while back.
In the "checkpoints" folder, the lowest ESR appears to be 0.01003, with two other checkpoints at ESR 0.01004.
However, when the trainer is stopped (I hit Ctrl + C in the CLI), it reports an ESR of 0.01006.