I have a list of triplets (question, question, answer); in other words, each answer is associated with two questions. I want to retrieve the answer that best matches a user's question by computing the semantic similarity between the user's question and the questions in that list of triplets, using Haystack.
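For illustration, the data looks something like this (the questions and answers are made up):

faq_triplets = [
    # (question variant 1, question variant 2, answer) -- made-up examples
    ("How does the virus spread?", "What are the main transmission routes?", "Mainly through respiratory droplets."),
    ("How long is the incubation period?", "How many days pass before symptoms appear?", "Typically 2 to 14 days."),
]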
The following code, based on a Haystack tutorial, works on a list of (question, answer) pairs. How can I extend it to support (question, question, answer) triplets? Ideally, it would also support any number of questions for a given answer. Below the code I sketched the direction I was considering, but I don't know whether it is the idiomatic way to do this with Haystack.
import pprint
import logging
logging.basicConfig(format="%(levelname)s - %(name)s - %(message)s", level=logging.WARNING)
logging.getLogger("haystack").setLevel(logging.INFO)
from haystack.document_stores import InMemoryDocumentStore
document_store = InMemoryDocumentStore()
from haystack.nodes import EmbeddingRetriever
retriever = EmbeddingRetriever(
document_store=document_store,
embedding_model="sentence-transformers/all-MiniLM-L6-v2",
use_gpu=True,
scale_score=False,
)
import pandas as pd
from haystack.utils import fetch_archive_from_http
# Download
doc_dir = "data/tutorial4"
s3_url = "https://s3.eu-central-1.amazonaws.com/deepset.ai-farm-qa/datasets/documents/small_faq_covid.csv.zip"
fetch_archive_from_http(url=s3_url, output_dir=doc_dir)
# Get dataframe with columns "question", "answer" and some custom metadata
df = pd.read_csv(f"{doc_dir}/small_faq_covid.csv")
# Minimal cleaning
df.fillna(value="", inplace=True)
df["question"] = df["question"].apply(lambda x: x.strip())
print(df.head())
# Create embeddings for our questions from the FAQs
# In contrast to most other search use cases, we don't create the embeddings here from the content of our documents,
# but rather from the additional text field "question" as we want to match "incoming question" <-> "stored question".
questions = list(df["question"].values)
df["embedding"] = retriever.embed_queries(queries=questions).tolist()
df = df.rename(columns={"question": "content"})
# Convert Dataframe to list of dicts and index them in our DocumentStore
docs_to_index = df.to_dict(orient="records")
document_store.write_documents(docs_to_index)
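# Note: after the rename above, each indexed document's "content" is the FAQ question,
# while the remaining columns (e.g. "answer") end up as document metadata, which is
# what FAQPipeline uses to return the answer of the best-matching stored question.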
from haystack.pipelines import FAQPipeline
pipe = FAQPipeline(retriever=retriever)
from haystack.utils import print_answers
# Run any question and change top_k to see more or fewer answers
prediction = pipe.run(query="How is the virus spreading?", params={"Retriever": {"top_k": 3}})
print_answers(prediction, details="medium")
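Here is the direction I was considering for the triplets (a rough, untested sketch; faq_triplets is the illustrative list from the top of the question, not anything from the dataset or the Haystack API): flatten each triplet into one row per question so that every question points to the same answer, then index those rows exactly as above. This would also cover an arbitrary number of questions per answer, but I don't know whether it is the idiomatic way to do it in Haystack or whether there is built-in support for several questions per answer.

# Rough sketch (untested): flatten (question, ..., answer) tuples into one
# (question, answer) row per question, then reuse the indexing code above.
rows = []
for *questions_for_answer, answer in faq_triplets:
    for q in questions_for_answer:
        rows.append({"question": q.strip(), "answer": answer})
df = pd.DataFrame(rows)
# From here on the original code applies unchanged: embed df["question"],
# rename it to "content", and write_documents() into the DocumentStore.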
Requirements:
conda create -y --name haystacktest python==3.9
conda activate haystacktest
pip install --upgrade pip
pip install farm-haystack
conda install pytorch cpuonly -c pytorch
pip install sentence_transformers