Adapters v1.1.0 #788
calpt announced in Announcements
This version is built for Hugging Face Transformers v4.47.x.
New
Add AdapterPlus adapters (@julian-fong via #746, #775):
AdapterPlus (Steitz & Roth, 2024) is a new bottleneck adapter variant optimized for vision transformers. Check out our notebook for training AdapterPlus adapters for ViT models.
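For background, a bottleneck adapter (the family AdapterPlus belongs to) inserts a small down-projection, a nonlinearity, and an up-projection around a residual connection. A minimal pure-Python sketch of that forward pass follows; the weights and dimensions are illustrative toy values, not the library's actual implementation or AdapterPlus's specific design choices:

```python
import math

def gelu(x):
    # tanh approximation of the GELU nonlinearity
    return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)))

def bottleneck_adapter(hidden, w_down, w_up):
    """Down-project the hidden state to a small bottleneck, apply a
    nonlinearity, project back up, and add the residual connection."""
    down = [sum(h * w for h, w in zip(hidden, col)) for col in w_down]
    mid = [gelu(d) for d in down]
    up = [sum(m * w for m, w in zip(mid, row)) for row in w_up]
    return [h + u for h, u in zip(hidden, up)]

# toy weights: hidden size 2, bottleneck size 1
w_down = [[1.0, 0.0]]   # one bottleneck unit
w_up = [[0.5], [0.5]]   # project back to hidden size 2
out = bottleneck_adapter([1.0, 2.0], w_down, w_up)
```

Because only the small `w_down`/`w_up` matrices are trained, the number of trainable parameters stays tiny relative to the frozen backbone; see the linked notebook for training real AdapterPlus adapters on ViT models.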
Easy saving, loading and pushing of full adapter compositions to the Hub (@calpt via #771):
The newly added save_adapter_setup(), load_adapter_setup() and push_adapter_setup_to_hub() methods allow saving, loading and uploading complex adapter compositions, including AdapterFusion setups, with one line of code. Read our documentation for more.
Enabling full gradient checkpointing support with adapters (@lenglaender via #759):
Gradient checkpointing is a technique for enabling fine-tuning in very memory-limited settings that nicely complements efficient adapters. It is now supported across all integrated adapter methods. Check out our notebook for fine-tuning Llama with gradient checkpointing and adapters.
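To illustrate the trade-off gradient checkpointing makes, here is a minimal pure-Python sketch (not the library's or PyTorch's implementation): instead of storing every intermediate activation for the backward pass, only every k-th activation is kept, and the rest are recomputed from the nearest checkpoint when needed.

```python
def run_chain(x, layers, checkpoint_every=None):
    """Apply `layers` in order; return the output plus the activations
    that would be kept in memory for the backward pass."""
    stored = {}
    for i, layer in enumerate(layers):
        if checkpoint_every is None or i % checkpoint_every == 0:
            stored[i] = x  # activation kept in memory
        x = layer(x)
    return x, stored

def recompute(stored, layers, i):
    """Recover the input activation of layer `i` by replaying the forward
    pass from the nearest stored checkpoint: the extra compute traded for memory."""
    start = max(j for j in stored if j <= i)
    x = stored[start]
    for j in range(start, i):
        x = layers[j](x)
    return x

layers = [lambda x, k=k: x + k for k in range(8)]  # toy "layers"

out_full, full = run_chain(1, layers)                      # stores all 8 activations
out_ckpt, ckpt = run_chain(1, layers, checkpoint_every=4)  # stores only 2
```

Both runs produce the same output, but the checkpointed run holds far fewer activations in memory; this is why checkpointing combines well with adapters, where only a small fraction of parameters needs gradients at all.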
More
Allows distinguishing multiple fusions on the same adapter by name. See details.
Changed
See tests readme for details.
Fixed
This discussion was created from the release Adapters v1.1.0.