How to add DiTFastAttn to our custom model? Do I need to write a pipeline, or will it work if I just swap out some attention layers/processors?
#420 · Open · asahni04 opened this issue on Jan 2, 2025 · 4 comments
DiTFastAttn is a relatively independent feature; it essentially replaces the attention module with a DiTFastAttn one. You can refer to the following MR for details.
We haven't tested this feature for a while, so you're welcome to submit an MR to help improve this functionality.
Hi @feifeibear, will it require changing the sampling code, or would swapping out the attention processors be enough? The model I have is an nn.Module; how do I go about using xfuser's DiTFastAttn with it? If you can give me an idea, I can come up with an MR to improve the functionality.
We have integrated xFuserFastAttention with diffusers; you can replace the original attention modules in your nn.Module with this class, which offers great flexibility.
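For illustration, here is a minimal sketch of that module swap, assuming `xFuserFastAttention` can be imported from xfuser and constructed from an existing diffusers `Attention` module; both the import path and the constructor signature below are assumptions, so check the xDiT source (or the MR referenced above) for the actual API.

```python
import torch.nn as nn
from diffusers.models.attention_processor import Attention

# Assumed import path; verify against the xDiT repository.
from xfuser import xFuserFastAttention


def swap_in_fast_attention(model: nn.Module) -> nn.Module:
    """Recursively replace every diffusers Attention module in `model`
    with xFuserFastAttention, leaving all other layers untouched."""
    for name, child in model.named_children():
        if isinstance(child, Attention):
            # Assumed constructor: the real class may instead need to be
            # configured from the old module's hyperparameters
            # (heads, dim, processor, etc.).
            setattr(model, name, xFuserFastAttention(child))
        else:
            swap_in_fast_attention(child)  # recurse into submodules
    return model


# Hypothetical usage with a diffusers transformer backbone:
# transformer = swap_in_fast_attention(pipe.transformer)
```

If the class is as self-contained as described above, swapping modules should be enough for the forward pass; whether any additional setup or calibration step is required is worth confirming against the MR mentioned earlier.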