I have set up an LLM locally using a GPT4All model and integrated it with a few-shot prompt template via LLMChain. The few-shot examples are a simple FewShotPromptTemplate. When I run the same template with an OpenAI model it gives the expected results, but with the GPT4All model it just hallucinates, even on these simple examples. (I know the OpenAI models have far more parameters, but why are the GPT4All results so terrible?)
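For context, here is roughly what my few-shot template renders to. The example pairs and instruction wording below are placeholders, not my actual examples; I've sketched the rendering in plain Python so it runs without langchain installed:

```python
# Placeholder few-shot examples (my real template uses different pairs).
examples = [
    {"word": "happy", "antonym": "sad"},
    {"word": "tall", "antonym": "short"},
]
example_template = "Word: {word}\nAntonym: {antonym}"
prefix = "Give the antonym of every input word."
suffix = "Word: {word}\nAntonym:"

def render_few_shot_prompt(word: str) -> str:
    # Join the formatted examples between the prefix and suffix,
    # mirroring how FewShotPromptTemplate assembles the final prompt.
    shots = "\n\n".join(example_template.format(**e) for e in examples)
    return "\n\n".join([prefix, shots, suffix.format(word=word)])

print(render_few_shot_prompt("big"))
```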
OpenAI:
from langchain.llms import OpenAI
from langchain.chains import LLMChain
llm = OpenAI(
    model="text-davinci-003",
    temperature=0,
)
openai_chain = LLMChain(llm=llm, prompt=few_shot_prompt_template)
GPT4All:
from langchain.llms import GPT4All
local_path = "./models/ggml-gpt4all-j-v1.3-groovy.bin"
llm = GPT4All(model=local_path, verbose=True)
gpt4all_chain = LLMChain(llm=llm, prompt=few_shot_prompt_template)
OpenAI model results: [screenshot of the expected output]
GPT4All model results: [screenshot of the hallucinated output]