How to save an adapter.bin model as a .pt model


I have a Hugging Face openai-whisper model with an adapter.bin that was fine-tuned using LoRA, following this tutorial. The adapter.bin contains only the adapter weights. Now I want to merge these adapter weights into the original model and save a finetuned .pt model that I can load locally. A sample of the code is below:

from transformers import WhisperForConditionalGeneration
from peft import PeftConfig

peft_model_id = "reach-vb/test"
language = "en"
task = "transcribe"

# Load the adapter config to find the base checkpoint, then load the base model
peft_config = PeftConfig.from_pretrained(peft_model_id)
model = WhisperForConditionalGeneration.from_pretrained(
    peft_config.base_model_name_or_path, load_in_8bit=False, device_map="auto"
)
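
From what I understand, the adapter can be attached on top of the base model with PeftModel.from_pretrained (a minimal sketch, assuming the adapter at peft_model_id was trained against this same base checkpoint):

from peft import PeftModel

# Wrap the base model with the LoRA adapter weights from the hub (or a local path)
model = PeftModel.from_pretrained(model, peft_model_id)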

How do I save this model as a .pt torch model that I can load using model = whisper.load_model('model.pt')?
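
For reference, this is roughly the merge-and-save step I have in mind (a sketch only; the output file names are placeholders). As far as I can tell, this produces a checkpoint with Hugging Face parameter names, while whisper.load_model() expects a dict with "dims" and "model_state_dict" keys in OpenAI's own naming, so some key conversion would presumably still be needed:

import torch

merged = model.merge_and_unload()                 # fold the LoRA weights into the base model
merged.save_pretrained("whisper-finetuned")       # Hugging Face format (config + weights)
torch.save(merged.state_dict(), "finetuned.pt")   # plain state dict saved as a .pt file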
