Name: gpt4all, Version: 2.0.2

I am trying to query my PostgreSQL database using the GPT4All package with LangChain's SQLDatabaseChain. Below is the code:
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain_experimental.sql import SQLDatabaseChain
from langchain import SQLDatabase
from langchain.llms import GPT4All
import os
username = "postgres"
password = "password"
host = "127.0.0.1" # internal IP
port = "5432"
mydatabase = "reporting_db"
pg_uri = f"postgresql+psycopg2://{username}:{password}@{host}:{port}/{mydatabase}"
my_db = SQLDatabase.from_uri(pg_uri)
PROMPT = """
Given an input question, first create a syntactically correct postgresql query to run,
then look at the results of the query and return the answer.
The question: {question}
"""
path = "./models/mistral-7b-openorca.Q4_0.gguf"
callbacks = [StreamingStdOutCallbackHandler()]
llm = GPT4All(
    model=path,
    callbacks=callbacks,
    n_threads=3,
    max_tokens=5162,
    verbose=True,
)
db_chain = SQLDatabaseChain.from_llm(
    llm=llm,
    db=my_db,
    verbose=True,
)
question = "Describe the table Sales"
answer = db_chain.run(PROMPT.format(question=question))
print(answer)
but I am getting the following error:
ERROR: sqlalchemy.exc.ProgrammingError: (psycopg2.errors.SyntaxError) syntax error at
or near "ERROR"
LINE 1: ERROR: The prompt size exceeds the context window size and c...
^
[SQL: ERROR: The prompt size exceeds the context window size and cannot be processed.]
(Background on this error at: https://sqlalche.me/e/20/f405)
Is there a parameter I should change in order to overcome this limitation?
Maybe you can look at this link. It seems GPT4All itself cannot adjust the maximum token / context window size of its models.
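Since the model's context window is fixed, a practical workaround is to shrink the prompt that SQLDatabaseChain builds before it reaches the model. Below is a minimal sketch; it assumes your LangChain version supports the include_tables and sample_rows_in_table_info arguments on SQLDatabase and the top_k argument on SQLDatabaseChain, and the "sales" table name is only a placeholder for whatever table you actually need:

# Limit how much schema text LangChain puts into the prompt so the whole
# thing fits inside the model's fixed context window.
my_db = SQLDatabase.from_uri(
    pg_uri,
    include_tables=["sales"],      # expose only the tables you actually query
    sample_rows_in_table_info=0,   # skip the sample rows normally appended to the schema
)

db_chain = SQLDatabaseChain.from_llm(
    llm=llm,
    db=my_db,
    top_k=5,        # cap the number of rows the generated SQL returns
    verbose=True,
)

# Pass the bare question: the chain already wraps it in its own SQL prompt,
# so the extra PROMPT template from the question only makes the input longer.
answer = db_chain.run("Describe the table Sales")

Restricting the table info this way usually brings the combined prompt (instructions + schema + question) back under the model's context limit; if it is still too large, you would need a model loaded with a bigger context window.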