So I am using GPT4All for a project, and it's very annoying to see GPT4All's model-loading output every time I load a model. For some reason I am also unable to set verbose to False, although that might be an issue with the way I am using LangChain.
I looked through the GPT4All docs and couldn't find anything on how to mute those logs.
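The closest thing I've come up with is capturing Python-level stdout/stderr around the model load with contextlib (this is just a workaround sketch, not something from the docs, and if the native backend writes straight to the OS-level file descriptors it won't catch that output):

import contextlib, io
from langchain.llms import GPT4All

# Capture anything the loader prints to Python's stdout/stderr while the model is constructed
with contextlib.redirect_stdout(io.StringIO()), contextlib.redirect_stderr(io.StringIO()):
    llm = GPT4All(model=PATH, verbose=False)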
I am also unable to disable verbose output; here's the code I used:
from langchain.llms import GPT4All
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

llm = GPT4All(model=PATH, verbose=False)
prompt = PromptTemplate(input_variables=['user', 'prompt', 'info'], template="### System:\nYou are an AI assistant helping {user}\n\n### User:\n{prompt}\n\n### Input:\n{info}\n\n### Response:\n")
prompt.format(user=user_name, prompt=input_text, info=info)  # this formatted string isn't used; LLMChain formats the prompt itself
chain = LLMChain(prompt=prompt, llm=llm, verbose=False)
response = chain.run({'prompt': input_text, 'user': user_name, 'info': info})
I've set verbose to False for both LangChain and GPT4All, yet it's still producing verbose output, and I can't figure out how to disable the logging that appears when GPT4All loads the model (the GPT4All(...) call above).
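For what it's worth, I also tried switching off LangChain's global verbosity/debug flags (this assumes a LangChain version that ships langchain.globals; on older versions the equivalent is the module-level langchain.verbose attribute), but the chain output was unchanged:

from langchain.globals import set_verbose, set_debug

# Globally disable LangChain's verbose and debug output
set_verbose(False)
set_debug(False)

# On older LangChain versions the equivalent is:
# import langchain
# langchain.verbose = False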