I am writing a Python program and want to connect GPT4All so that the program works like a GPT chat, only locally in my development environment. I have already downloaded the GPT4All-13B-snoozy.ggmlv3.q4_0.bin model (download link: https://huggingface.co/TheBloke/GPT4All-13B-snoozy-GGML/resolve/main/GPT4All-13B-snoozy.ggmlv3.q4_0.bin), and it is stored locally at C:\Models\GPT4All-13B-snoozy.ggmlv3.q4_0.bin. As far as I understand, the problem is with the GPT4All installation, but I ran pip install gpt4all in the terminal and it downloads and installs everything without any errors.
The program finds the model, but running the code produces the following error:

Found model file at C:\Models\GPT4All-13B-snoozy.ggmlv3.q4_0.bin
Unable to load the model: 1 validation error for GPT4All
__root__
  Unable to instantiate model (type=value_error) Invalid model file

Process finished with exit code 0
I am writing the code in PyCharm (in case that matters), and the latest version of the langchain library is installed and working. The program finds the model but still gives this error. I downloaded the model from this site: https://gpt4all.io/index.html, and I used the official langchain documentation to set up the connection. I can't figure out what's wrong.
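For reference, my loading code looks roughly like this (reconstructed from the langchain GPT4All example; the prompt text is just a placeholder, and the import is guarded so the sketch also runs where langchain is not installed):

```python
# Path to the locally stored model file.
MODEL_PATH = r"C:\Models\GPT4All-13B-snoozy.ggmlv3.q4_0.bin"

try:
    from langchain.llms import GPT4All

    # This is the call that fails with "Unable to instantiate model".
    llm = GPT4All(model=MODEL_PATH, verbose=True)
    print(llm("Hello! How are you?"))
except Exception as exc:
    # Printed only to surface the underlying error text.
    print(f"Failed to load the model: {exc}")
```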
I tried downloading other models from the same site; they gave the same error. I also tried changing the model format, but that did not work either.
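To rule out a truncated download, I can run a minimal stdlib-only check on the file (the size threshold here is just a rough sanity bound I picked, not an official value; a complete 13B q4_0 file should be several GB):

```python
from pathlib import Path

MODEL_PATH = Path(r"C:\Models\GPT4All-13B-snoozy.ggmlv3.q4_0.bin")

def check_model_file(path: Path) -> str:
    """Return a short diagnosis of whether the file looks loadable."""
    if not path.exists():
        return "missing"
    # A tiny file almost certainly means an interrupted download.
    if path.stat().st_size < 1_000_000_000:
        return "too small (incomplete download?)"
    return "ok"

print(check_model_file(MODEL_PATH))
```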