Why do we use return_tensors="pt" during tokenization?


So I am tokenizing my dataset, and I created this function:

max_length = 1026

def generate_and_tokenize_prompt(prompt):
    result = tokenizer(
        prompt,
        return_tensors="pt",    # return PyTorch tensors instead of Python lists
        truncation=True,        # cut off anything longer than max_length
        max_length=max_length,
        padding="max_length",   # pad shorter prompts up to max_length
    )
    return result

train_dataset = df_train['prompt']
val_dataset = df_test['prompt']
tokenized_train_dataset = train_dataset.map(generate_and_tokenize_prompt)
tokenized_val_dataset = val_dataset.map(generate_and_tokenize_prompt)

Here you can see we are using return_tensors="pt", but I am not sure why we need it, because even without this parameter I am able to tokenize my dataset.

1 Answer

Answered by Dtoc:

"pt" means return pytorch tensor. See documentation https://huggingface.co/docs/transformers/main_classes/tokenizer