Thanks for open-sourcing this great work!
I would like to know which repository or code you used to compute FID, FVD, etc. in the original paper. For FID in particular, how many real and how many synthetic images did you use?
For example, I used 10k real images and 10k synthetic images (including all synthetic video frames) to compute FID with the pytorch-fid repository. However, the resulting score is around 160, which does not match the value reported in the paper.
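For reference, this is roughly how I computed the score with pytorch-fid; `real_frames/` and `synthetic_frames/` are placeholder directories holding the 10k real images and the 10k extracted synthetic video frames:

```python
import torch
from pytorch_fid.fid_score import calculate_fid_given_paths

# Placeholder paths: real_frames/ holds the 10k real images,
# synthetic_frames/ holds the 10k frames extracted from generated videos.
fid = calculate_fid_given_paths(
    ["real_frames/", "synthetic_frames/"],
    batch_size=50,
    device="cuda" if torch.cuda.is_available() else "cpu",
    dims=2048,  # default InceptionV3 pool3 features
)
print(f"FID: {fid:.2f}")
```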
Any insight would be greatly appreciated and would help me reproduce the evaluation metrics reported in the paper!