I'm trying to use Streamlit to show the user the final output together with the agent's step-by-step thoughts. For that, I'm using StreamlitCallbackHandler, copied from the MRKL example.
However, it isn't working, and the error messages are unclear to me.
The only other reference I found about the problem is the GitHub issue, but, as pointed out in the comments, the example code there does not use LangGraph, so I'm not sure it is the same problem, even though the error messages are very similar.
This is the relevant part of my code (it is pretty much the multi-agent collaboration example wrapped in the Streamlit scaffolding from the MRKL example):
#...Other Langgraph code from the example
workflow = StateGraph(AgentState)
workflow.add_node("Researcher", research_node)
workflow.add_node("Chart Generator", chart_node)
workflow.add_node("call_tool", tool_node)
workflow.add_conditional_edges(
    "Researcher",
    router,
    {"continue": "Chart Generator", "call_tool": "call_tool", "end": END},
)
workflow.add_conditional_edges(
    "Chart Generator",
    router,
    {"continue": "Researcher", "call_tool": "call_tool", "end": END},
)
workflow.add_conditional_edges(
    "call_tool",
    # Each agent node updates the 'sender' field;
    # the tool-calling node does not, so this edge
    # routes back to the agent that invoked the tool.
    lambda x: x["sender"],
    {
        "Researcher": "Researcher",
        "Chart Generator": "Chart Generator",
    },
)
workflow.set_entry_point("Researcher")
graph = workflow.compile()
#... Other Streamlit configurations
with st.form(key="form"):
    user_input = st.text_input("Define the task")
    submit_clicked = st.form_submit_button("Execute")

output_container = st.empty()
if with_clear_container(submit_clicked):
    output_container = output_container.container()
    output_container.chat_message("user").write(user_input)
    answer_container = output_container.chat_message("assistant", avatar="")

    st_callback = StreamlitCallbackHandler(answer_container)
    cfg = RunnableConfig()
    cfg["callbacks"] = [st_callback]
    cfg["recursion_limit"] = 100

    answer = graph.invoke(
        {"messages": [HumanMessage(content=user_input)]},
        cfg,
    )
    answer_container.write(answer["content"])
Error messages:
2024-02-18 13:30:17.030 Thread 'ThreadPoolExecutor-5_0': missing ScriptRunContext
Error in StreamlitCallbackHandler.on_llm_start callback: NoSessionContext()
Error in StreamlitCallbackHandler.on_llm_end callback: RuntimeError('Current LLMThought is unexpectedly None!')
Error in StreamlitCallbackHandler.on_tool_end callback: RuntimeError('Current LLMThought is unexpectedly None!')
2024-02-18 13:30:18.630 Thread 'ThreadPoolExecutor-5_0': missing ScriptRunContext
Error in StreamlitCallbackHandler.on_llm_start callback: NoSessionContext()
Error in StreamlitCallbackHandler.on_llm_end callback: RuntimeError('Current LLMThought is unexpectedly None!')
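If it helps, my current theory is that LangGraph runs the nodes on a ThreadPoolExecutor, while Streamlit keeps its ScriptRunContext in thread-local storage, so the callbacks fire on worker threads that cannot see it. Here is a minimal sketch of that effect without Streamlit at all, where `get_ctx` is a hypothetical stand-in for Streamlit's per-thread context lookup:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for Streamlit's per-thread ScriptRunContext storage
_local = threading.local()

def get_ctx():
    """Return the context attached to the *current* thread, or None."""
    return getattr(_local, "ctx", None)

# The main (script) thread has a context attached...
_local.ctx = "main-script-context"
print(get_ctx())  # main-script-context

# ...but a worker thread sees nothing, which would mirror the
# "missing ScriptRunContext" warnings coming from executor threads.
with ThreadPoolExecutor() as pool:
    print(pool.submit(get_ctx).result())  # None
```

If that diagnosis is right, the callback handler would need the context propagated onto the worker threads somehow, but I haven't found a supported way to do that for threads LangGraph spawns internally.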
Any tips on how to use StreamlitCallbackHandler with LangGraph?