
Recreating paper results #6

Open
tomk2005 opened this issue May 8, 2021 · 3 comments
tomk2005 commented May 8, 2021

Hello,

I am having difficulty recreating your paper results.
I'm running your code on the data you provided, with the pre-trained models, using these config values:
patch_size = [64, 64]
sample_amount = 1191

Averaging the KLD over all samples, I get a KLD of ~0.5.
Should running test_noise_models.py with these settings recreate the paper's KLD (~0.00159)?

If so, could you upload a more detailed requirements file for your environment? Maybe it's a package mismatch?
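For context, here is roughly how I'm computing the averaged KLD (a minimal numpy sketch; the histogram-based estimate and bin count are my own choices, not necessarily what the paper uses):

```python
import numpy as np

def kld_between(noise_a, noise_b, bins=256):
    """Histogram-based KL divergence KL(a || b) between two noise samples."""
    lo = min(noise_a.min(), noise_b.min())
    hi = max(noise_a.max(), noise_b.max())
    p, _ = np.histogram(noise_a, bins=bins, range=(lo, hi))
    q, _ = np.histogram(noise_b, bins=bins, range=(lo, hi))
    eps = 1e-12  # avoid log(0) and division by zero in empty bins
    p = p.astype(np.float64) + eps
    q = q.astype(np.float64) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

def average_kld(real_patches, fake_patches):
    """Mean per-patch KLD over the whole test set."""
    return float(np.mean([kld_between(r, f)
                          for r, f in zip(real_patches, fake_patches)]))
```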

@ZcsrenlongZ

I have the same problem as tomk2005. Could you upload more detailed requirements?


OrGreenberg commented Mar 9, 2022

Hi!
Same here.

Here are the results I got running the code on the provided test set, with no changes to the config file except data_dir, using the provided pretrained models.

--> Denoiser Test (PSNR)

average psnr ours: 47.841578781906506
average psnr g: 41.00051686254721
average psnr pg: 46.08441194932963
average psnr noiseflow: 47.147988764349506
average psnr ours real: 47.71135122092602
average psnr real: 46.7334418265441

--> Noise Models Test (KLD):

G: 0.6996052614889056 | PG: 0.06394353525639518 | Ours (CA-NoiseGAN): 0.03184607044417602
[** noise-flow results are not reported by the script]

The denoiser results are reasonable, though not fully aligned with the results presented in the paper (Table 7).
The noise-model results, on the other hand, look completely wrong compared to the ones presented in the paper (Table 1).

Any help here? :)
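For reference, the PSNR I'm averaging above is the standard definition (a minimal sketch, assuming float images scaled to [0, 1]; not necessarily the exact evaluation code from the repo):

```python
import numpy as np

def psnr(clean, denoised, data_range=1.0):
    """Peak signal-to-noise ratio in dB between a clean and a denoised image."""
    mse = np.mean((clean.astype(np.float64) - denoised.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return float(10.0 * np.log10(data_range ** 2 / mse))
```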

@happycaoyue

Could you leave an email address for me? I'd like to discuss noise modelling.
