[shortfin-sd] Add exports and support for scheduled unet, batch sizes. #972

Draft: monorimet wants to merge 17 commits into main
Conversation

monorimet (Contributor)

No description provided.

monorimet (Contributor, Author) commented Feb 16, 2025

The turbine_generate action works really nicely, with multiple batch-size export and compile jobs running in parallel!

@@ -147,7 +147,7 @@ def forward(
         # 1b. Aug embedding of text_embeds, time_ids
         time_embeds = self.add_time_proj(time_ids.flatten())
         time_embeds = time_embeds.reshape((text_embeds.shape[0], -1))
-        add_embeds = torch.concat([text_embeds, time_embeds], dim=-1).to(emb.dtype)
+        add_embeds = torch.cat([text_embeds, time_embeds], dim=-1).to(emb.dtype)
Contributor:
Is there any reason for this change?
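For background: torch.concat was added in PyTorch 1.10 as an alias of torch.cat, so the two calls should be interchangeable here; a minimal sketch checking that:

```python
import torch

# torch.concat (PyTorch >= 1.10) is an alias of torch.cat, so the
# changed line should produce identical results before and after.
a = torch.ones(2, 3)
b = torch.zeros(2, 3)

assert torch.equal(
    torch.cat([a, b], dim=-1),
    torch.concat([a, b], dim=-1),
)
```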

    ):
        super().__init__()
        self.torch_dtype = torch.float16 if precision == "fp16" else torch.float32
        config_1 = CLIPTextConfig.from_pretrained(
Contributor:

Are we grabbing the HF implementation for all of these models (except punet)? We should have everything covered in our sharktank models, which we should switch to.
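For reference, a minimal sketch of what loading the two SDXL text-encoder configs from Hugging Face typically looks like (the repo id and subfolder names are assumptions for illustration, not taken from this PR):

```python
import torch
from transformers import CLIPTextConfig

# Assumed SDXL layout: the base repo ships two text encoders in
# separate subfolders, so the exporter loads a config for each.
hf_model_name = "stabilityai/stable-diffusion-xl-base-1.0"

config_1 = CLIPTextConfig.from_pretrained(hf_model_name, subfolder="text_encoder")
config_2 = CLIPTextConfig.from_pretrained(hf_model_name, subfolder="text_encoder_2")

torch_dtype = torch.float16  # the precision == "fp16" branch from the diff above
```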

    ]
    with decompositions.extend_aot_decompositions(
        from_current=True,
        add_ops=decomp_list,
Contributor:

This will decompose attention for all models; we want to keep SDPA for punet, but not (at least for now) for clip or vae.
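One way this could be scoped (a sketch, not the PR's actual fix; it assumes extend_aot_decompositions is a context manager as the snippet above suggests, and the helper and import path are hypothetical):

```python
import contextlib

# Import path is an assumption; the diff only shows `decompositions.…`.
from iree.turbine.aot import decompositions

def run_export(model_name, export_fn, decomp_list):
    # Hypothetical helper: apply the extra decompositions (which lower
    # scaled_dot_product_attention) only for clip/vae, so punet keeps SDPA.
    if model_name in ("clip", "vae"):
        ctx = decompositions.extend_aot_decompositions(
            from_current=True,
            add_ops=decomp_list,
        )
    else:
        ctx = contextlib.nullcontext()
    with ctx:
        return export_fn()
```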

daveliddell (Contributor) left a comment:

LGTM! I can't say that I understand everything, but of what I do understand, I can't find anything wrong with it. :-)
