UI issue while adding Gemini API using Gradio


I have created a very basic Gradio-based UI that integrates the Gemini API. The following is the code snippet:

from google.cloud import aiplatform
import vertexai
from vertexai.preview.generative_models import GenerativeModel, Part
import gradio as gr

def generate(prompt, chat_history):
    print(prompt)
    chat_history = chat_history or []
    model = GenerativeModel("gemini-pro")
    if prompt:
        response = model.generate_content(
            prompt,
            generation_config={
                "max_output_tokens": 2048,
                "temperature": 0.9,
                "top_p":1
            },
            stream=False,
        )

    output = response.candidates[0].content.parts[0].text
    if not chat_history:
        return output
    return chat_history.append((prompt, output))

iface = gr.ChatInterface(generate)


if __name__ == "__main__":
    iface.launch()

I am using the new ChatInterface functionality provided by Gradio: https://www.gradio.app/docs/chatinterface . The problem is that the chat history shows the user input (the prompt) twice in the chat interface. As per the documentation, I am returning chat_history.append((prompt, output)), since the function is described as follows:

the function to wrap the chat interface around. Should accept two parameters: a string input message and list of two-element lists of the form [[user_message, bot_message], ...] representing the chat history, and return a string response. 

Can you tell me where I am going wrong?


There is 1 answer below


You have to return only the output string here, not the tuple (and not the result of chat_history.append, which is None, since list.append mutates the list in place). Please go through the discussion at https://github.com/gradio-app/gradio/issues/6881 . That resolved the issue.
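A minimal sketch of the corrected callback, with the Vertex AI call stubbed out as a hypothetical call_model helper so the snippet is self-contained (in the real app it would be the GenerativeModel("gemini-pro").generate_content call from the question):

```python
def call_model(prompt):
    # Placeholder for GenerativeModel("gemini-pro").generate_content(...)
    return f"echo: {prompt}"

def generate(prompt, chat_history):
    # gr.ChatInterface manages the history itself, so the callback should
    # return just the bot reply as a string. The original
    # `return chat_history.append((prompt, output))` returned None,
    # because list.append() mutates in place and returns None.
    output = call_model(prompt)
    return output
```

With this change, gr.ChatInterface(generate) appends the (prompt, output) pair to the displayed history on its own, so the prompt no longer appears twice.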