I installed GPT4All using the GUI-based installer for Mac.
Then I downloaded the required LLM models and took note of the path they were installed to.
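On my Mac the GUI installer put the models under ~/Library/Application Support/nomic.ai/GPT4All/ (the same path used in the script below). If in doubt, a quick listing confirms the exact filename (adjust the path for your machine):
ls -lh "$HOME/Library/Application Support/nomic.ai/GPT4All/"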
Now I'm trying to load the models from a Python application that uses Streamlit.
Here is my app.py file:
# Import app framework
import streamlit as st
# Import dependencies
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain.llms import GPT4All
# Path to weights
PATH = "/Users/toto/Library/Application Support/nomic.ai/GPT4All/GPT4All-13B-snoozy.ggmlv3.q4_0.bin"
# Instance of llm
llm = GPT4All(model=PATH, verbose=True)
# Prompt template
prompt = PromptTemplate(
    input_variables=['question'],
    template="""
Question: {question}
Answer: Let's think step by step
""")
# LLM chain
chain = LLMChain(prompt=prompt, llm=llm)
# Title
st.title('GPT For Y\'all')
# Prompt text box
prompt = st.text_input('Enter your prompt here!')
# if we hit enter do this
if prompt:
    # Pass the prompt to the LLM chain and display the response
    response = chain.run(prompt)
    st.write(response)
When I run streamlit run app.py, I get the following output:
You can now view your Streamlit app in your browser.
Local URL: http://localhost:8501
Network URL: http://10.81.128.79:8501
llama_model_load: loading model from '/Users/toto/Library/Application Support/nomic.ai/GPT4All/GPT4All-13B-snoozy.ggmlv3.q4_0.bin' - please wait ...
llama_model_load: invalid model file '/Users/toto/Library/Application Support/nomic.ai/GPT4All/GPT4All-13B-snoozy.ggmlv3.q4_0.bin' (unsupported format version 3, expected 1)
llama_init_from_file: failed to load model
[1] 3908 segmentation fault streamlit run app.py
What's weird is that the model loads correctly from the GPT4All desktop app, but not from the Python code.
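For what it's worth, the version mismatch in the error can be confirmed by inspecting the file header directly. This is a minimal sketch, assuming the common ggml header layout (a little-endian uint32 magic followed by a uint32 version); for a ggmlv3 file like this one it should print 0x67676a74 ('ggjt') and 3:
import struct

MODEL_PATH = "/Users/toto/Library/Application Support/nomic.ai/GPT4All/GPT4All-13B-snoozy.ggmlv3.q4_0.bin"

with open(MODEL_PATH, "rb") as f:
    # First 8 bytes: uint32 magic, then uint32 format version (little-endian)
    magic, version = struct.unpack("<II", f.read(8))

print(hex(magic), version)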
I encountered a similar situation yesterday.
We pinned the torch version to torch==2.0.1.
See if this helps you.
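If it's useful, the pin can be applied directly (assuming a pip-based environment):
pip install torch==2.0.1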