Langchain map_reduce can't load gpt2 tokenizer error


I'm trying to summarize a huge context using LangChain's map_reduce chain with a locally stored Llama 2 7B model. My org has blocked Hugging Face, and whenever I run load_summarize_chain it fails, saying it can't access the gpt2 tokenizer from Hugging Face. I'm explicitly passing the llama tokenizer and llama model to a child class derived from the parent class LLM.

something like this -

from typing import Any, Dict, List, Mapping, Optional

# Missing from the original snippet: pipeline comes from transformers.
from transformers import pipeline

from pydantic import Extra, root_validator

from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.llms.base import LLM
from langchain.llms.utils import enforce_stop_tokens

from langchain import PromptTemplate, LLMChain

class HuggingFaceHugs(LLM):
  pipeline: Any
  class Config:
    """Configuration for this pydantic object."""
    extra = Extra.forbid

  def __init__(self, model, tokenizer, task="text-generation"):
    super().__init__()
    self.pipeline = pipeline(task, model=model, tokenizer=tokenizer)

  @property
  def _llm_type(self) -> str:
    """Return type of llm."""
    return "huggingface_hub"

  def _call(self, prompt, stop: Optional[List[str]] = None, run_manager: Optional[CallbackManagerForLLMRun] = None,) -> str:
    # Run the inference.
    text = self.pipeline(prompt, max_length=100)[0]['generated_text']
    
    # @alvas: I've totally no idea what this in langchain does, so I copied it verbatim.
    if stop is not None:
      # This is a bit hacky, but I can't figure out a better way to enforce
      # stop tokens when making calls to huggingface_hub.
      text = enforce_stop_tokens(text, stop)
    print(text)
    return text[len(prompt):]


template = """ Hey llama, you like to eat quinoa. Whatever question I ask you, you reply with "Waffles, waffles, waffles!".
 Question: {input} Answer: """
prompt = PromptTemplate(template=template, input_variables=["input"])


# m and tok are the locally stored llama model and tokenizer (loaded earlier).
hf_model = HuggingFaceHugs(model=m, tokenizer=tok)

I passed this hf_model object to the load_summarize_chain method. References:

https://github.com/langchain-ai/langchain/issues/9273

https://github.com/chatchat-space/Langchain-Chatchat/issues/43

but the error still exists. Please help me if there is anything I can do apart from modifying the library's source code. The traceback ends with:

   1792         f"containing all relevant files for a {cls.__name__} tokenizer."
   1793     )
   1795 for file_id, file_path in vocab_files.items():
   1796     if file_id not in resolved_vocab_files:

OSError: Can't load tokenizer for 'gpt2'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'gpt2' is the correct path to a directory containing all relevant files for a GPT2TokenizerFast tokenizer.