fix: utilities to post process checkpoint for LoRA #338

Merged: 39 commits, Sep 25, 2024

Changes from 4 commits

Commits (39)
fa42c73
utilities to post process checkpoint for LoRA
Ssukriti Sep 10, 2024
e5e4c27
Merge branch 'main' into utility_to_post-process_LoRA
Ssukriti Sep 10, 2024
0fa3dac
improve code comments
Ssukriti Sep 10, 2024
fa97871
Add unit test and fix some lint errors
aluu317 Sep 17, 2024
4c9bb95
lint: fix more fmt errors
aluu317 Sep 17, 2024
af191d1
feat: Add post_process_vLLM_adapters_new_tokens function to main
willmj Sep 18, 2024
fb1dcc9
Merge remote-tracking branch 'origin/main' into utility_to_post-proce…
willmj Sep 18, 2024
bcc17b1
fmt
willmj Sep 18, 2024
57cadc3
fix: Add post processing flag so post processing is only done for vLLM
willmj Sep 18, 2024
36a554c
fix: get num_added_tokens from resize function (#344)
Ssukriti Sep 19, 2024
0d34b1f
Merge branch 'main' into utility_to_post-process_LoRA
Ssukriti Sep 19, 2024
4380c5b
Ran fmt and also removed unnecessary files from test artifact
aluu317 Sep 19, 2024
146e9f1
fix: unit tests
Ssukriti Sep 19, 2024
0022da3
fix: Adding tokens in special_tokens_dict
Abhishek-TAMU Sep 20, 2024
e6a2bc8
Merge branch 'main' into utility_to_post-process_LoRA
Ssukriti Sep 20, 2024
0d077ea
fix: Add additional arg to tests to reflect new flag post_process_vllm
willmj Sep 20, 2024
c8d8f98
fmt
willmj Sep 20, 2024
80fae90
feat: Refactor post-processing of adapters (#345)
Ssukriti Sep 23, 2024
fcdfa29
add test for LoRA tuning from main
Ssukriti Sep 23, 2024
e5406e5
fix formatting
Ssukriti Sep 23, 2024
6ae8f36
correcting post processing script
Ssukriti Sep 23, 2024
a93e902
fix:post-process in place
Ssukriti Sep 23, 2024
7f864d0
update documentation for post-processing
Ssukriti Sep 23, 2024
3de588a
fix:formatting
Ssukriti Sep 23, 2024
f38361c
fix:linting
Ssukriti Sep 23, 2024
3966fef
more warnings /exceptions in script
Ssukriti Sep 24, 2024
2b73e63
check for no tokens added
Ssukriti Sep 24, 2024
e4dd9b2
fix:linting
Ssukriti Sep 24, 2024
9caef81
additional unit test
Ssukriti Sep 24, 2024
820222c
add more tests
Ssukriti Sep 25, 2024
5a8aca0
fix:tokenizer test
Ssukriti Sep 25, 2024
8f92b90
fix:linting and docstrings
Ssukriti Sep 25, 2024
48321e3
fix:return type of trainer
Ssukriti Sep 25, 2024
85f623b
test: enable tests and fix copytree
anhuong Sep 25, 2024
7531836
use copy function from build
Ssukriti Sep 25, 2024
3eb0e54
fix:linting and formatting
Ssukriti Sep 25, 2024
f8fd164
make build a module
Ssukriti Sep 25, 2024
3aaae3c
Merge branch 'main' into utility_to_post-process_LoRA
Ssukriti Sep 25, 2024
2b92881
add back old copy function
Ssukriti Sep 25, 2024
202 changes: 202 additions & 0 deletions tests/artifacts/tuned_llama_with_added_tokens/README.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,202 @@
---
base_model: Maykeye/TinyLLama-v0
library_name: peft
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->



## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->



- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]


#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary



## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
### Framework versions

- PEFT 0.12.0
Original file line number Diff line number Diff line change
@@ -0,0 +1,29 @@
Collaborator (Author) commented:

> These files are all just dummy LoRA artifacts needed for unit tests

{
"alpha_pattern": {},
"auto_mapping": null,
"base_model_name_or_path": "Maykeye/TinyLLama-v0",
"bias": "none",
"fan_in_fan_out": false,
"inference_mode": true,
"init_lora_weights": true,
"layer_replication": null,
"layers_pattern": null,
"layers_to_transform": null,
"loftq_config": {},
"lora_alpha": 32,
"lora_dropout": 0.05,
"megatron_config": null,
"megatron_core": "megatron.core",
"modules_to_save": null,
"peft_type": "LORA",
"r": 8,
"rank_pattern": {},
"revision": null,
"target_modules": [
"v_proj",
"q_proj"
],
"task_type": "CAUSAL_LM",
"use_dora": false,
"use_rslora": false
}
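The config above is a standard PEFT LoRA config: rank `r=8`, `lora_alpha=32`, adapters on `q_proj` and `v_proj`. A quick sketch of what those numbers mean for the merged weight: LoRA adds a low-rank update `scaling * B @ A` to each targeted projection, where `scaling = lora_alpha / r`. The matrix sizes below are toy values, not TinyLLama's real dimensions.

```python
import numpy as np

d, r, alpha = 64, 8, 32                  # r and lora_alpha from the config above; d is a toy size
rng = np.random.default_rng(0)
A = rng.normal(scale=0.01, size=(r, d))  # with "init_lora_weights": true, A is gaussian, B is zeros
B = np.zeros((d, r))
W = rng.normal(size=(d, d))              # frozen base weight of a q_proj/v_proj-style layer

scaling = alpha / r                      # 32 / 8 = 4.0 for this config
W_eff = W + scaling * (B @ A)            # effective weight after merging the adapter
print(scaling)                           # 4.0
```

Because `B` starts at zero, the merged weight equals the base weight at initialization; training moves `A` and `B` so the product becomes a meaningful update.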
Binary file not shown.
Original file line number Diff line number Diff line change
@@ -0,0 +1,3 @@
{
"<pad>": 32000
}
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Binary file not shown.
Original file line number Diff line number Diff line change
@@ -0,0 +1,30 @@
{
"bos_token": {
"content": "<s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"eos_token": {
"content": "</s>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"pad_token": {
"content": "<pad>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
},
"unk_token": {
"content": "<unk>",
"lstrip": false,
"normalized": false,
"rstrip": false,
"single_word": false
}
}
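The two JSON files above encode the bookkeeping the post-processing relies on: the base TinyLLama-v0 vocabulary covers ids 0..31999, and the `<pad>` token added for tuning receives the next id, 32000. A small sketch of that invariant (the variable names are illustrative):

```python
base_vocab_size = 32000           # TinyLLama-v0's original vocabulary: ids 0..31999
added_tokens = {"<pad>": 32000}   # contents of the added-tokens file above

new_vocab_size = base_vocab_size + len(added_tokens)
# Every added token must fall in the resized tail of the vocabulary.
assert all(base_vocab_size <= i < new_vocab_size for i in added_tokens.values())
print(new_vocab_size)  # 32001
```

It is exactly these tail ids whose embedding rows the PR's utilities separate out so the adapter can be served against the unresized base model.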