
Performance issue in scripts/global_optimal_proposal_variational.py #21

Open
DLPerf opened this issue Feb 24, 2023 · 1 comment

DLPerf commented Feb 24, 2023

Hello! Our static bug checker has found a performance issue in scripts/global_optimal_proposal_variational.py: get_gradient_descent_function is called repeatedly in a for loop, and inside its body it defines and calls a tf.function-decorated function, gradient_descent.

Because of that, every call to get_gradient_descent_function builds a fresh gradient_descent tf.function, so TensorFlow traces a new graph on each loop iteration, which can trigger the tf.function retracing warning.

The TensorFlow documentation on tf.function retracing supports this.

In short, for better efficiency, it is better to use:

@tf.function   # traced once; the same graph is reused on every call
def inner():
    pass

def outer():
    inner()

than:

def outer():
    @tf.function   # a new tf.function (and a new graph) on every call to outer()
    def inner():
        pass
    inner()
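
Applied to this script, the fix would roughly look like the sketch below. This is only a hypothetical illustration: the real gradient_descent in scripts/global_optimal_proposal_variational.py takes different arguments, and the parameter names here are made up.

import tensorflow as tf

# Hypothetical sketch: decorate gradient_descent once at module level
# so TensorFlow traces its graph a single time.
@tf.function
def gradient_descent(variables, gradients, learning_rate):
    # The same traced graph is reused on every call.
    for var, grad in zip(variables, gradients):
        var.assign_sub(learning_rate * grad)

def get_gradient_descent_function():
    # Return the module-level tf.function instead of defining a new one
    # (and tracing a new graph) on each call.
    return gradient_descent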

Looking forward to your reply. By the way, I would be glad to open a PR to fix this if you are too busy.


DLPerf commented Mar 6, 2023

Hi, simply moving the inner function outside should help. We are investigating this kind of issue, and your answer would be of great help to our work. Could you share your thoughts? @JTT94
