Don't use GGUF but convert T5 directly from Hugging Face #967

Merged: 2 commits into main from users/sogartar/t5-irpa-directly-from-hf, Feb 20, 2025

Conversation

sogartar (Contributor):

We don't want the stack to depend on the conversion tool from the llama.cpp repo. Also, the conversion to GGUF does not convert all tensors to bf16 but leaves some in f32; we would like to control that ourselves when needed.

This change makes any previously generated IRPA files obsolete.
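
For context, a minimal sketch of what converting directly from Hugging Face enables, assuming torch and transformers; the model name and the per-tensor policy are illustrative, and the actual IRPA export step in sharktank is omitted:

    import torch
    from transformers import T5EncoderModel

    model = T5EncoderModel.from_pretrained("google/t5-v1_1-small")

    def convert(tensor: torch.Tensor) -> torch.Tensor:
        # Unlike the llama.cpp GGUF converter, we decide per tensor what
        # stays in f32: here every floating-point tensor is cast to bf16,
        # but the policy is entirely under our control.
        return tensor.to(torch.bfloat16) if tensor.is_floating_point() else tensor

    state_dict = {name: convert(t) for name, t in model.state_dict().items()}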

@sogartar force-pushed the users/sogartar/t5-irpa-directly-from-hf branch 2 times, most recently from 7559cc5 to 8af8997 (February 15, 2025 02:03)
@sogartar marked this pull request as ready for review (February 15, 2025 02:12)
@sogartar force-pushed the users/sogartar/t5-irpa-directly-from-hf branch from 8af8997 to e39df9b (February 17, 2025 20:35)
@sogartar force-pushed the users/sogartar/t5-irpa-directly-from-hf branch from e39df9b to b0d3637 (February 17, 2025 23:43)

Review thread on:

    gguf_to_optional_config_names_map = {
        "t5.decoder_start_token_id": ["decoder_start_token_id"],
    ...
    def from_hugging_face_config(
Contributor:

Not blocking on this, but can these keys be discovered automatically?

sogartar (Author, Contributor):

I shortened it.
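
(For reference, one way such keys could in principle be discovered automatically — not what this PR does; the map was shortened instead — would be to derive them from the fields of the config dataclass. The class below is a hypothetical stand-in, not sharktank's actual T5 config:)

    from dataclasses import MISSING, dataclass, fields
    from typing import Optional

    @dataclass
    class T5Config:  # hypothetical stand-in for the real config class
        vocab_size: int
        decoder_start_token_id: Optional[int] = None
        pad_token_id: Optional[int] = None

    def optional_config_names(config_cls) -> list[str]:
        # Treat every field that has a default as optional, and look it up
        # under the same name in the Hugging Face config dict.
        return [f.name for f in fields(config_cls) if f.default is not MISSING]

    print(optional_config_names(T5Config))
    # ['decoder_start_token_id', 'pad_token_id']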

Review thread on the diff in testCompareAgainstTransformers:

    @@ -498,19 +493,19 @@ def testCompareAgainstTransformers(
            theta = Theta(
                {
    -               "attn_q.weight": DefaultPrimitiveTensor(
    +               "q.weight": DefaultPrimitiveTensor(
Contributor:

What is the motivation for all the renames? Is it just to use the corresponding names from Hugging Face's layout instead of GGUF's?

sogartar (Author, Contributor):

Yes.
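
That is, the theta keys now mirror Hugging Face's T5 state-dict layout (the q/k/v/o linear layers in transformers' T5Attention) instead of the attn_* names the llama.cpp GGUF converter emits. A rough illustration of the correspondence; the entries beyond attn_q follow the same convention and are assumptions, not lines from this diff:

    # GGUF-converter name -> Hugging Face T5 name
    gguf_to_hf_attn_names = {
        "attn_q.weight": "q.weight",
        "attn_k.weight": "k.weight",
        "attn_v.weight": "v.weight",
        "attn_o.weight": "o.weight",
    }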

@sogartar force-pushed the users/sogartar/t5-irpa-directly-from-hf branch from e0286ac to 26b0ad7 (February 20, 2025 19:14)
@sogartar merged commit 63606a5 into main on Feb 20, 2025 (34 of 36 checks passed)
@sogartar deleted the users/sogartar/t5-irpa-directly-from-hf branch (February 20, 2025 19:40)
IanNod pushed a commit that referenced this pull request Feb 22, 2025