How to build my ChatGPT-like model using my knowledge base and some foundation model?

Let's consider OpenAI's GPTs platform. It lets you "build" your own ChatGPT-like model with your knowledge base and your system prompt.

How did they do it so well?

Using their API, or PaLM's, or anyone else's, the results don't come even close.

Suppose you embed your database in a vector store like Chroma or Pinecone. I'm guessing this is how they do it. But how does that improve the results so much?
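For reference, this is the kind of minimal retrieval setup I mean. It's only a sketch using Chroma's default embedding function; the collection name and documents are made up:

```python
import chromadb

# Index a few documents, retrieve the closest chunks for a question,
# and stuff them into a prompt. Content is invented for illustration.
client = chromadb.Client()
collection = client.create_collection(name="knowledge_base")

collection.add(
    documents=[
        "Our refund policy allows returns within 30 days of purchase.",
        "Support is available Monday through Friday, 9am to 5pm.",
    ],
    ids=["doc1", "doc2"],
)

question = "When can I get a refund?"
results = collection.query(query_texts=[question], n_results=2)
context = "\n".join(results["documents"][0])  # hits for the first query

prompt = (
    "Answer using only the context below.\n\n"
    f"Context:\n{context}\n\nQuestion: {question}"
)
```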

There must be something else behind it. I tested storing my knowledge base in a Chroma collection, and that didn't improve the results much: I was getting about 60% accuracy on questions about the content.

I also tried fine-tuning gpt-3.5 on the same data. That didn't improve our results either, and it created other problems such as badly written responses and confused answers.
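For reference, gpt-3.5 fine-tuning expects chat-style JSONL, one conversation per line, roughly like this (the example content here is invented):

```python
import json

# One conversation per JSONL line, in the chat format used for
# gpt-3.5-turbo fine-tuning. Example content is made up.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You answer questions about our docs."},
            {"role": "user", "content": "When can I get a refund?"},
            {"role": "assistant", "content": "Returns are accepted within 30 days of purchase."},
        ]
    },
]

with open("train.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```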

The fine-tune results might improve if we added an RLHF routine, but I don't think that's where I want to invest going forward.

Back to GPTs. I don't think they're fine-tuning a model on our database, since the time to create or update a GPT is incredibly fast. I do believe they're using some kind of vector storage. Does anyone have information, or a guess, about how they've structured the creation of GPTs to get such incredible results?

Or any tips on preparing LLM models for specific tasks would be appreciated.
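To make my guess concrete, here's the kind of retrieve-then-prompt loop I imagine a GPT runs behind the scenes. The model name, prompt wording, and the retrieve() helper are all assumptions on my part, not OpenAI's actual implementation:

```python
from openai import OpenAI

client = OpenAI()

def answer(question: str, retrieve) -> str:
    # retrieve() stands in for a vector-store query like the Chroma
    # example above; everything here is a guess at the mechanism.
    chunks = retrieve(question)
    system = (
        "You are a helpful assistant for our product.\n"
        "Use only the context below; if the answer isn't there, say so.\n\n"
        + "\n---\n".join(chunks)
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content
```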
