LangChain has a callback framework that allows you to handle events from the various stages of a LangChain pipeline.
You can use the `AutoblocksCallbackHandler` from any of the SDKs to send these events to Autoblocks and get a better understanding of the individual steps of your LLM chain.
See the LangChain documentation for details on where you can pass in the callback handler.
To run the examples below, you need:

- the Autoblocks Python SDK and the following packages installed:

```bash
pip install openai langchain
```

- your OpenAI API key set in the `OPENAI_API_KEY` environment variable
- your Autoblocks ingestion key set in the `AUTOBLOCKS_INGESTION_KEY` environment variable
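For example, in your shell (a sketch with placeholder values: `OPENAI_API_KEY` is the variable name the OpenAI client reads, and `AUTOBLOCKS_INGESTION_KEY` is assumed here to be the variable the Autoblocks SDK reads; copy the real keys from your OpenAI and Autoblocks settings pages):

```shell
# Placeholder values -- substitute your real keys.
export OPENAI_API_KEY="sk-..."
export AUTOBLOCKS_INGESTION_KEY="your-ingestion-key"
```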
Then run the following:
```python
from langchain.llms import OpenAI

from autoblocks.vendor.langchain import AutoblocksCallbackHandler

llm = OpenAI()
handler = AutoblocksCallbackHandler()
llm.predict(
    "Hello, world!",
    callbacks=[handler],
)
```
After running the above script, you should eventually see a trace on the explore page with events from the LLM pipeline.
If you want to send custom events, such as user feedback events, in addition to the events sent automatically by the callback handler, you can access the underlying `AutoblocksTracer` instance via the `tracer` property on the handler.
```python
import uuid

from langchain.llms import OpenAI

from autoblocks.vendor.langchain import AutoblocksCallbackHandler

handler = AutoblocksCallbackHandler()

# Set the trace ID on the tracer directly
handler.tracer.set_trace_id(str(uuid.uuid4()))

# Events sent by the callback handler
# will have the trace ID set above
llm = OpenAI()
llm.predict(
    "Hello, world!",
    callbacks=[handler],
)

# This custom feedback event will also
# have the same trace ID set above
handler.tracer.send_event(
    "user.feedback",
    properties=dict(feedback="good"),
)
```
On the explore page you should eventually see a new trace with the same events as in the previous example, plus the custom `user.feedback` event.
See the LangChain examples in our examples repository.