I installed the gpt4all Python bindings on my MacBook Pro (M1 chip) according to these instructions: https://github.com/nomic-ai/gpt4all/tree/main/gpt4all-bindings/python.
However, while trying out some models, I ran into an LLModel error that I assume is related to the M1 chip of my MacBook. Is there a solution or workaround for this, or am I doing something wrong?
I imported gpt4all and then tried to load the orca-mini-3b model, but got the following traceback.
>>> from gpt4all import GPT4All
>>> model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.98G/1.98G [02:34<00:00, 12.8MiB/s]
LLModel ERROR: CPU does not support AVX
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.11/site-packages/gpt4all/gpt4all.py", line 101, in __init__
    self.model.load_model(self.config["path"])
  File "/usr/local/lib/python3.11/site-packages/gpt4all/pyllmodel.py", line 260, in load_model
    raise ValueError(f"Unable to instantiate model: code={err.code}, {err.message.decode()}")
ValueError: Unable to instantiate model: code=45, Model format not supported (no matching implementation found)
Downgrading to gpt4all==0.3.0 solved this issue for me. Please refer to https://github.com/nomic-ai/gpt4all/issues/866 for more info.
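For reference, the downgrade is just a version-pinned install (assuming pip maps to the same Python 3.11 shown in the traceback; otherwise use python3 -m pip):

pip install gpt4all==0.3.0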