I am getting this error; could anyone please help me resolve it?
PS C:\Users\name\Desktop\privateGPT-main\privateGPT-main> python privateGPT.py
Found model file at models/ggml-gpt4all-j-v1.3-groovy.bin
gptj_model_load: loading model from 'models/ggml-gpt4all-j-v1.3-groovy.bin' - please wait ...
gptj_model_load: n_vocab = 50400
gptj_model_load: n_ctx = 2048
gptj_model_load: n_embd = 4096
gptj_model_load: n_head = 16
gptj_model_load: n_layer = 28
gptj_model_load: n_rot = 64
gptj_model_load: f16 = 2
gptj_model_load: ggml ctx size = 5401.45 MB
gptj_model_load: kv self size = 896.00 MB
gptj_model_load: ................................... done
gptj_model_load: model size = 3609.38 MB / num tensors = 285
Traceback (most recent call last):
File "C:\Users\name\Desktop\privateGPT-main\privateGPT-main\privateGPT.py", line 83, in
main()
File "C:\Users\name\Desktop\privateGPT-main\privateGPT-main\privateGPT.py", line 38, in main
llm = GPT4All(model=model_path, n_ctx=model_n_ctx, backend='gptj', n_batch=model_n_batch, callbacks=callbacks, verbose=False)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\name\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\load\serializable.py", line 74, in init
super().init(**kwargs)
File "pydantic\main.py", line 341, in pydantic.main.BaseModel.init
pydantic.error_wrappers.ValidationError: 1 validation error for GPT4All
n_ctx
extra fields not permitted (type=value_error.extra)
Here is my .env file:

PERSIST_DIRECTORY=db
MODEL_TYPE=GPT4All
MODEL_PATH=models/ggml-gpt4all-j-v1.3-groovy.bin
EMBEDDINGS_MODEL_NAME=all-MiniLM-L6-v2
MODEL_N_CTX=1000
MODEL_N_BATCH=8
TARGET_SOURCE_CHUNKS=4
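For reference, privateGPT.py picks up these settings roughly like this (a sketch assuming the usual python-dotenv loading; the variable names mirror the .env above):

import os
from dotenv import load_dotenv

load_dotenv()  # pulls the .env values shown above into the environment

model_path = os.environ.get("MODEL_PATH")          # models/ggml-gpt4all-j-v1.3-groovy.bin
model_n_ctx = int(os.environ.get("MODEL_N_CTX"))   # 1000, later passed to GPT4All
model_n_batch = int(os.environ.get("MODEL_N_BATCH", 8))

These values then feed into the GPT4All(...) call shown in the traceback, which is where the n_ctx keyword is rejected.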
While Suresh's initial response was partially helpful, I ran into another error afterwards and had to look for an alternative solution. Thankfully, I found a useful suggestion on GitHub by user adrisalcedo00, advising installing "langchain" and changing the "n_ctx" parameter to "max_tokens" in the "GPT4All" case:
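Roughly, the change in privateGPT.py looks like this (a sketch of the suggested edit; the installed langchain version's GPT4All wrapper no longer accepts n_ctx, so the context size is passed as max_tokens instead):

from langchain.llms import GPT4All
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

callbacks = [StreamingStdOutCallbackHandler()]
# model_path, model_n_ctx and model_n_batch come from the .env values shown above
llm = GPT4All(model=model_path, max_tokens=model_n_ctx, backend='gptj',
              n_batch=model_n_batch, callbacks=callbacks, verbose=False)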
Following this advice, I was able to resolve the issue successfully.