I tried to load the Llama-2-7b-hf LLM with QLoRA using the following code:

from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id, use_auth_token=True)  # I have permissions.
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True, quantization_config=bnb_config, device_map="auto", use_auth_token=True)  # bnb_config is my 4-bit BitsAndBytesConfig
model.gradient_checkpointing_enable()
model = prepare_model_for_kbit_training(model)

config = LoraConfig(
    r=8,
    lora_alpha=32,
    target_modules=[
        "query_key_value",
        "dense",
        "dense_h_to_4h",
        "dense_4h_to_h",
        ],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM"
)

model = get_peft_model(model, config) # got the error here
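
(For context, bnb_config above is a standard 4-bit QLoRA quantization config; a representative definition, not necessarily the exact values I used, looks like this:)

import torch
from transformers import BitsAndBytesConfig

# Representative 4-bit QLoRA quantization config (illustrative values only).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)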

I got this error:

  File "/home/<my_username>/.local/lib/python3.10/site-packages/peft/tuners/lora.py", line 333, in _find_and_replace
    raise ValueError(
ValueError: Target modules ['query_key_value', 'dense', 'dense_h_to_4h', 'dense_4h_to_h'] not found in the base model. Please check the target modules and try again.

How can I solve this? Thank you!


You can set the target modules in LoraConfig as below for Llama-2-7b-hf:

from peft import LoraConfig, get_peft_model

config = LoraConfig(
    r=16,  # dimension of the low-rank update matrices
    lora_alpha=64,  # scaling parameter
    target_modules=[
        "q_proj",
        "k_proj",
        "v_proj",
        "o_proj",
        "gate_proj",
        "up_proj",
        "down_proj",
    ],
    lora_dropout=0.1,  # dropout probability for the LoRA layers
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, config)
model.print_trainable_parameters()  # built-in PEFT helper; reports how many parameters LoRA will train
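
If you want to verify which module names actually exist in your model (and are therefore valid LoRA targets), you can inspect its submodules before calling get_peft_model. A minimal sketch, assuming the base model has been loaded with the quantization config as in the question (bitsandbytes' 4-bit Linear4bit layers subclass torch.nn.Linear, so they are included in this check):

import torch

# Collect the leaf names of all linear layers in the loaded base model;
# LoRA target_modules must match names from this set.
linear_module_names = set()
for name, module in model.named_modules():
    if isinstance(module, torch.nn.Linear):
        linear_module_names.add(name.split(".")[-1])

print(sorted(linear_module_names))
# For Llama-2 this typically includes:
# ['down_proj', 'gate_proj', 'k_proj', 'lm_head', 'o_proj', 'q_proj', 'up_proj', 'v_proj']
# (lm_head is usually left out of target_modules.)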