Prefer zero setup? Use plog run or import provenlog.auto instead. See Auto-Instrumentation.

Setup

from provenlog.integrations.langchain import Trail

trail = Trail(agent_id="my-langchain-agent")
chain.invoke(input, config={"callbacks": [trail]})
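Under the hood, Trail plugs into LangChain's callback protocol: every handler listed in config={"callbacks": [...]} has hook methods such as on_llm_start and on_llm_end invoked as the chain runs. A minimal stand-in (illustrative only; the class and hook bodies below are assumptions, not provenlog's actual implementation) looks roughly like this:

```python
# Illustrative stand-in for Trail (assumed shape, not provenlog source).
# LangChain calls hook methods on every handler passed via
# config={"callbacks": [...]} as the chain executes.
class RecordingTrail:
    def __init__(self, agent_id):
        self.agent_id = agent_id
        self.events = []  # collected (action_type, payload) pairs

    def on_llm_start(self, serialized, prompts, **kwargs):
        self.events.append(("LLM_CALL", prompts))

    def on_llm_end(self, response, **kwargs):
        self.events.append(("LLM_RESPONSE", response))

# Simulate what LangChain would do during a single LLM call:
trail = RecordingTrail(agent_id="demo")
trail.on_llm_start({}, ["What is 2 + 2?"])
trail.on_llm_end("4")
```

The real Trail forwards these events to ProvenLog instead of buffering them in a list, but the handler pattern is the same.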

What gets captured

| Event | Action Type | Details |
|---|---|---|
| LLM call | LLM_CALL | Model name, prompt, parameters |
| LLM response | LLM_RESPONSE | Generated text, token usage |
| Tool call | TOOL_CALL | Tool name, input arguments |
| Tool result | TOOL_RESULT | Tool output, duration |
| Agent action | CUSTOM | Agent decisions, routing |
| Retriever query | TOOL_CALL | Retriever name, query |
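The table above can be summarized as a lookup from LangChain callback hooks to ProvenLog action types. The hook names are LangChain's; expressing the mapping as a flat dict is an illustration, not provenlog's internal representation:

```python
# Illustrative mapping from LangChain callback hooks to ProvenLog action
# types, mirroring the capture table (not provenlog's internal structure).
# Note that tool calls and retriever queries share the TOOL_CALL type.
HOOK_TO_ACTION = {
    "on_llm_start": "LLM_CALL",
    "on_llm_end": "LLM_RESPONSE",
    "on_tool_start": "TOOL_CALL",
    "on_tool_end": "TOOL_RESULT",
    "on_agent_action": "CUSTOM",
    "on_retriever_start": "TOOL_CALL",
}
```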

Usage with different chain types

# Simple chain
chain = prompt | llm | parser
chain.invoke(input, config={"callbacks": [trail]})

# Agent with tools
agent = create_react_agent(llm, tools)
agent_executor = AgentExecutor(agent=agent, tools=tools)
agent_executor.invoke(input, config={"callbacks": [trail]})

# Retrieval chain
chain = retriever | prompt | llm
chain.invoke(input, config={"callbacks": [trail]})

Configuration

# Simple — uses default embedded mode
trail = Trail(agent_id="my-agent")

# With explicit client for custom configuration
from provenlog import ProvenLogClient

client = ProvenLogClient("http://localhost:7600", agent_id="my-agent")
trail = Trail(client=client, agent_id="my-agent")
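For deployments where the collector URL differs per environment, one option is to resolve it from an environment variable and fall back to embedded mode when it is unset. This is a hypothetical helper, not part of provenlog, and the PROVENLOG_URL variable name is an assumption:

```python
import os

# Hypothetical helper (not part of provenlog): resolve the collector URL
# from an environment variable. Returning None signals "use embedded mode".
def resolve_collector_url():
    return os.environ.get("PROVENLOG_URL") or None

# With PROVENLOG_URL unset, construct Trail(agent_id=...) directly (embedded
# mode); with it set, build a ProvenLogClient pointed at that URL first.
url = resolve_collector_url()
```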