
How to add DiTFastAttn to a custom model? Do I need to write a pipeline, or will it work if I just swap out some attention layers/processors? #420

Open
asahni04 opened this issue Jan 2, 2025 · 4 comments

Comments

asahni04 commented Jan 2, 2025

How do I add DiTFastAttn to our custom model? Do I need to write a pipeline, or will it work if I just swap out some attention layers/processors?

feifeibear (Collaborator) commented

DiTFastAttn is a relatively independent feature: it essentially replaces the model's attention module with a DiTFastAttn module. You can refer to the following PR for details.

We haven't tested this feature for a while, so we'd welcome a PR that improves this functionality.

#297

asahni04 commented Jan 3, 2025

Hi @feifeibear, will it require changing the sampling code, or would swapping the attention processors be enough? The model I have is an nn.Module; how do I go about using xfuser's DiTFastAttn with it? If you can give me an idea, I can come up with a PR to improve the functionality.

feifeibear (Collaborator) commented

We have integrated xFuserFastAttention in diffusers; you can replace the original attention module with this class in your nn.Module, which offers a lot of flexibility.
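
A minimal sketch of what that swap could look like (untested; the commented-out import path and the `attn_cls` / `build_fast_attn` hooks are placeholders, not the actual xfuser API -- see #297 for the real class and its constructor):

```python
import torch.nn as nn

# Hypothetical import -- check #297 / the xfuser source for the actual
# module path and constructor of xFuserFastAttention.
# from xfuser import xFuserFastAttention

def swap_attention(model: nn.Module, attn_cls, build_fast_attn):
    """Recursively replace every submodule of type `attn_cls` with the
    module returned by `build_fast_attn(old_module)`.

    `attn_cls` is your model's attention class; `build_fast_attn` builds
    the replacement (e.g. an xFuserFastAttention configured to match the
    old block). Both are placeholders you fill in for your own model.
    """
    for name, child in list(model.named_children()):
        if isinstance(child, attn_cls):
            setattr(model, name, build_fast_attn(child))
        else:
            swap_attention(child, attn_cls, build_fast_attn)
    return model
```

If the replacement keeps the same forward signature as the original attention block, the sampling loop shouldn't need any changes, which matches the suggestion above.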

asahni04 commented Jan 3, 2025

That's amazing, @feifeibear. Can you please share the PR or some reference code?
