We fine-tuned Salesforce/codet5p-770m on a 35k-example dataset and achieved 74% accuracy on the test data, placing 6th overall. We applied LoRA and post-training quantization to speed up training and inference, and added a post-processing step to fix some common indentation issues in the generated code.
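A minimal sketch of how such a setup could look using the Hugging Face transformers and peft libraries; the hyperparameters (rank, alpha, dropout, target modules) and the choice of dynamic quantization are assumptions for illustration, not the team's actual configuration.

```python
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration
from peft import LoraConfig, TaskType, get_peft_model

model_name = "Salesforce/codet5p-770m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

# LoRA: train small low-rank adapter matrices instead of all 770M weights.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=16,                       # adapter rank (assumed)
    lora_alpha=32,              # scaling factor (assumed)
    lora_dropout=0.05,
    target_modules=["q", "v"],  # T5 attention query/value projections
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # shows the small fraction of trainable weights

# Post-training dynamic quantization of the linear layers for faster CPU
# inference (one common scheme; the README does not specify which method
# was actually used).
quantized = torch.quantization.quantize_dynamic(
    model.cpu(), {torch.nn.Linear}, dtype=torch.qint8
)
```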
About
In a recent competition, we were challenged to fine-tune a model that converts LaTeX expressions into Python code effectively. My team, which I led, secured 6th place overall.
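Continuing from the sketch above, a hedged example of the intended usage: generate Python from a LaTeX expression, then normalize indentation in the output. The prompt format and the fix_indentation helper are illustrative placeholders, not the team's exact pipeline.

```python
import textwrap

def fix_indentation(code: str) -> str:
    # Strip common leading whitespace and normalize tabs to 4 spaces,
    # two typical indentation problems in model-generated code.
    return textwrap.dedent(code).replace("\t", "    ")

latex_expr = r"\sum_{i=1}^{n} i^2"
inputs = tokenizer(latex_expr, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
generated = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(fix_indentation(generated))
```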