
How to reproduce the evaluation metric used in the paper? #33

Open
CyberSculptor96 opened this issue Feb 24, 2025 · 0 comments
Thanks for open-sourcing this great work!
I would like to know which repository or code you used to calculate FID, FVD, etc., in the original paper. For FID specifically, how many real and synthetic images were used, respectively?
For example, I used 10k real images and 10k synthetic images (including all synthetic video frames) to compute the FID score with the pytorch-fid repository. However, the resulting FID is around 160, which does not match the score reported in the paper.
Any insights on this would be greatly appreciated in helping better reproduce the evaluation metric reported in the paper!
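For reference, here is a minimal sketch of the computation I described, assuming pytorch-fid's `calculate_fid_given_paths` API; the directory paths and batch size are placeholders for my setup:

```python
# Sketch of the FID computation described above, using the pytorch-fid
# package (https://github.com/mseitzer/pytorch-fid). The directory paths
# are placeholders for wherever the 10k real images and 10k extracted
# synthetic video frames are stored.
import torch
from pytorch_fid.fid_score import calculate_fid_given_paths

real_dir = "data/real_images"        # ~10k real images
fake_dir = "data/synthetic_frames"   # ~10k frames from generated videos

fid = calculate_fid_given_paths(
    paths=[real_dir, fake_dir],
    batch_size=50,
    device="cuda" if torch.cuda.is_available() else "cpu",
    dims=2048,  # default InceptionV3 pool3 feature dimension
)
print(f"FID: {fid:.2f}")  # this setup gives ~160 for me
```

The CLI equivalent is `python -m pytorch_fid data/real_images data/synthetic_frames`.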
