__call__ fails to function correctly with langchain LLMChain


When executed outside of a class, the code runs correctly; however, when I move the same functionality into a class, it does not produce the same output.

This runs as expected:

from langchain.llms import GPT4All
from langchain import PromptTemplate, LLMChain

MODEL_PATH = "C:/Users/nous-hermes-13b.ggmlv3.q4_0.bin"

template = """
list the {n} most similar words to the word: {word} ,do not include the word itself.
"""
prompt = PromptTemplate(template=template, input_variables=["n", "word"])

# load the llm
gpt4all = GPT4All(model=MODEL_PATH, n_threads=12)

# create the llm chain
llm_chain = LLMChain(prompt=prompt, llm=gpt4all, verbose=False)

# run the chain
llm_chain.run({"word": "Interesting", "n": 10})

gives the output:

'1. Fascinating, 2. Intriguing, 3. Engaging, 4. Compelling, 5. Enthralling, 6. Riveting, 7. Alluring, 8. Gripping, 9. Catchy, 10. Exciting'

But when I put the same code into a class and run the chain through a __call__ method:

from langchain.llms import GPT4All
from langchain import PromptTemplate, LLMChain

class NousHermes:
    def __init__(self, **kwargs):
        self.MODEL_PATH = "C:/Users/weights_of_model.bin"

        for key, value in kwargs.items():
            setattr(self, key, value)

        self.template = """
        list the {n} most similar words to the word: {word} ,do not include the word itself.
        """
        self.prompt = PromptTemplate(template=self.template, input_variables=["n", "word"])

        # load the llm
        self.gpt4all = GPT4All(model=self.MODEL_PATH, n_threads=12)

        # create the llm chain
        self.llm_chain = LLMChain(prompt=self.prompt, llm=self.gpt4all, verbose=False)

    def __call__(self, word, n):
        result = self.llm_chain.run({"word": word, "n": n})
        return result

and invoke __call__:

noushermes = NousHermes(MODEL_PATH="C:/Users/nous-hermes-13b.ggmlv3.q4_0.bin")
noushermes("Interesting", 10)

I get a completely different output:

'\n    """\n    \n    from nltk.corpus import wordnet\n    from nltk.stem import WordNetLemmatizer\n    \n    lemmatizer = WordNetLemmatizer()\n    \n    interesting_synonyms = []\n    for syn in wordnet.SYNONYMS(interesting): \n        if syn != \'Interesting\':\n            interesting_synonyms.append(lemmatizer.lemmatize(syn, get_wordnet_pos(syn)))\n    \n    most_similar = max(set([len(wl.words()) for wl in wordnet.WordNetLemmatizer().lemmatize(\'Interesting\', get_wordnet_pos(interesting))]), key=lambda x:x[1])[0] \n    \n    return interesting_synonyms[:most_similar]'
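One difference I can see between the two versions: the triple-quoted template inside `__init__` keeps the class-body indentation as part of the string, so the rendered prompt is not identical. Since `PromptTemplate` fills in variables with `str.format`-style substitution, the two prompts can be compared in plain Python without loading the model (a quick check on my part, no langchain needed):

```python
# Template as defined at module level
module_template = """
list the {n} most similar words to the word: {word} ,do not include the word itself.
"""

class IndentedTemplate:
    # Same literal as in NousHermes.__init__ -- the leading spaces of the
    # class body become part of the string itself.
    template = """
        list the {n} most similar words to the word: {word} ,do not include the word itself.
        """

print(repr(module_template.format(word="Interesting", n=10)))
print(repr(IndentedTemplate.template.format(word="Interesting", n=10)))
# The second prompt carries leading spaces, so the model receives a
# visibly different (indented) prompt than in the module-level version.
```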
specs:
python=3.11.4
langchain==0.0.239
matplotlib==3.7.2
networkx==3.1
numpy==1.25.1
pandas==2.0.3
pyvis==0.3.2

It seems like the text somehow comes from the nltk package, but I haven't installed nltk in this environment. However, I have another env where I did install nltk and used it to lemmatize words as a preprocessing step before the LLM. This confuses me, since I didn't train the model further, nor is nltk installed in this env. This shouldn't happen as far as I can tell, or what am I missing?

How can I work around this, and where does this output come from? Thanks.
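If the indentation of the in-class template turns out to matter, one workaround sketch (my assumption, not tested against the model) is to dedent the string before handing it to `PromptTemplate`, using the standard library's `textwrap.dedent`:

```python
import textwrap

# Same literal as in NousHermes.__init__, with class-body indentation baked in.
raw = """
        list the {n} most similar words to the word: {word} ,do not include the word itself.
        """

# textwrap.dedent strips the common leading whitespace and normalizes
# whitespace-only lines, recovering the module-level form of the template.
template = textwrap.dedent(raw)
print(repr(template))
# '\nlist the {n} most similar words to the word: {word} ,do not include the word itself.\n'
```

In the class this would become `self.template = textwrap.dedent("""...""")` before constructing the `PromptTemplate`.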
