Is anybody able to run langchain with gpt4all successfully?


The following piece of code is from https://python.langchain.com/docs/modules/model_io/models/llms/integrations/gpt4all

from langchain import PromptTemplate, LLMChain
from langchain.llms import GPT4All
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler


template = """Question: {question}

Answer: Let's think step by step."""

prompt = PromptTemplate(template=template, input_variables=["question"])

local_path = (
    "./models/ggml-gpt4all-l13b-snoozy.bin"  # replace with your desired local file path
)

# Callbacks support token-wise streaming
callbacks = [StreamingStdOutCallbackHandler()]
# Verbose is required to pass to the callback manager
llm = GPT4All(model=local_path, callbacks=callbacks, verbose=True)
# If you want to use a custom model add the backend parameter
# Check https://docs.gpt4all.io/gpt4all_python.html for supported backends
llm = GPT4All(model=local_path, backend="gptj", callbacks=callbacks, verbose=True)


llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"

llm_chain.run(question)

Here is the error message:

Found model file at  ./models/ggml-gpt4all-l13b-snoozy.bin
Invalid model file
---------------------------------------------------------------------------
ValidationError                           Traceback (most recent call last)
Cell In[16], line 19
     17 callbacks = [StreamingStdOutCallbackHandler()]
     18 # Verbose is required to pass to the callback manager
---> 19 llm = GPT4All(model=local_path, callbacks=callbacks, verbose=True)
     20 # If you want to use a custom model add the backend parameter
     21 # Check https://docs.gpt4all.io/gpt4all_python.html for supported backends
     22 llm = GPT4All(model=local_path, backend="gptj", callbacks=callbacks, verbose=True)

File ~/anaconda3/lib/python3.10/site-packages/pydantic/main.py:341, in pydantic.main.BaseModel.__init__()

ValidationError: 1 validation error for GPT4All
__root__
  Unable to instantiate model (type=value_error)

There are 2 answers below.

Answer 1:

Without further info (e.g., versions, OS, ...), it is hard to say what the problem is.

The first thing to check is whether ./models/ggml-gpt4all-l13b-snoozy.bin is valid. To do so, compare the checksum of your local file against the official ones listed at https://gpt4all.io/models/models.json.
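
For example, a minimal sketch for computing the local file's MD5 checksum (models.json lists an md5sum per model) might look like this:

    import hashlib

    # Hash the model file in chunks to avoid loading the multi-GB file
    # into memory, then compare the digest with the md5sum in models.json.
    md5 = hashlib.md5()
    with open("./models/ggml-gpt4all-l13b-snoozy.bin", "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            md5.update(chunk)
    print(md5.hexdigest())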

Note that your model is no longer listed in that file and is not officially supported by the current version of gpt4all (1.0.2) anymore, so you might want to download and use GPT4All-13B-snoozy.ggmlv3.q4_0.bin from the official website instead. If the checksum does not match, delete the old file and re-download.
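
A sketch of re-downloading, assuming the models are still hosted under https://gpt4all.io/models/ (an assumption; verify the URL on the official site first):

    import urllib.request

    # Hypothetical download URL based on the gpt4all.io models directory.
    url = "https://gpt4all.io/models/GPT4All-13B-snoozy.ggmlv3.q4_0.bin"
    urllib.request.urlretrieve(url, "./models/GPT4All-13B-snoozy.ggmlv3.q4_0.bin")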

If the problem persists, try loading the model directly via the gpt4all package, to pinpoint whether the problem comes from the file / gpt4all package or from the langchain package.

    from gpt4all import GPT4All

    model = GPT4All("ggml-gpt4all-l13b-snoozy.bin", model_path="./models/")
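
If instantiation succeeds, a quick generation call is a simple smoke test (generate() is part of the gpt4all Python bindings, though its optional parameters vary across versions):

    # Assuming the model loaded, confirm it can actually produce text.
    print(model.generate("Name three colors:"))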

Finally, you are not supposed to call both line 19 and line 22. As the comments state: if you want a predefined model, use line 19; if you have a custom one, use line 22.
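
In other words, a corrected version keeps a single instantiation (a minimal sketch reusing the variables from the question's code):

    # Pick ONE of the two constructor calls; here, the default backend.
    llm = GPT4All(model=local_path, callbacks=callbacks, verbose=True)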

Answer 2:

First you have to install "ggml-gpt4all-l13b-snoozy.bin" on your PC:

from gpt4all import GPT4All

# Instantiating GPT4All downloads the model file if it is not already present.
GPT4All(model_name="ggml-gpt4all-l13b-snoozy.bin")

Then you can use langchain.llms to access your model, as sketched below. WARNING: use gpt4all == 0.3.5.
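
A minimal end-to-end sketch (the model path below is an assumption; point it at wherever gpt4all saved the file, e.g. ~/.cache/gpt4all/ on Linux):

from langchain import PromptTemplate, LLMChain
from langchain.llms import GPT4All

# Adjust to the actual location of the downloaded .bin file.
llm = GPT4All(model="/path/to/ggml-gpt4all-l13b-snoozy.bin", verbose=True)

prompt = PromptTemplate(
    template="Question: {question}\n\nAnswer: Let's think step by step.",
    input_variables=["question"],
)
llm_chain = LLMChain(prompt=prompt, llm=llm)
print(llm_chain.run("What NFL team won the Super Bowl in the year Justin Bieber was born?"))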