Analyze and Debug LLM Requests

Learn how to analyze and debug LLM requests with Autoblocks.

Send LLM requests to Autoblocks

This guide assumes you are using OpenAI, but you can swap the OpenAI code out for any large language model. The event properties are free-form and adapt to any model; a provider-agnostic sketch follows the example below.

import uuid
import traceback
import time

from openai import OpenAI
from autoblocks.tracer import AutoblocksTracer

openai_client = OpenAI()

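# Assumes your Autoblocks ingestion key is already configured for the SDK,
# e.g. via the AUTOBLOCKS_INGESTION_KEY environment variable.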
tracer = AutoblocksTracer(
  # All events sent below will have this trace ID
  trace_id=str(uuid.uuid4()),
  # All events sent below will include this property
  # alongside any other properties set in the send_event call
  properties=dict(
    provider="openai",
  ),
)

params = dict(
  model="gpt-3.5-turbo",
  messages=[
    {
      "role": "system",
      "content": "You are a helpful assistant. You answer questions about a software product named Acme.",
    },
    {"role": "user", "content": "How do I sign up?"},
  ],
  temperature=0.7,
  top_p=1,
  frequency_penalty=0,
  presence_penalty=0,
  n=1,
)

# Use a span ID to group together the request and response (or error) events
span_id = str(uuid.uuid4())

tracer.send_event(
  "ai.request",
  span_id=span_id,
  properties=params,
)

try:
  start_time = time.time()
  completion = openai_client.chat.completions.create(**params)
  tracer.send_event(
    "ai.response",
    span_id=span_id,
    properties=dict(
      # OpenAI returns pydantic models, so they need
      # to be serialized via model_dump.
      response=completion.model_dump(),
      latency_ms=(time.time() - start_time) * 1000,
    ),
  )
except Exception as error:
  tracer.send_event(
    "ai.error",
    span_id=span_id,
    properties=dict(
      error=dict(
        type=type(error).__name__,
        message=str(error),
        stacktrace=traceback.format_exc(),
      ),
    ),
  )
  raise

# Simulate user feedback
tracer.send_event(
  "user.feedback",
  properties=dict(
    feedback="good",
  ),
)
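
Because the event properties are free-form, the request, response, and error events above can be wrapped in a provider-agnostic helper. The sketch below is hypothetical (traced_llm_call and call_fn are not part of the Autoblocks SDK); it reuses the tracer, event names, and imports from the example above, and assumes the provider's response is either a plain dict or a pydantic model with a model_dump method.

def traced_llm_call(tracer, call_fn, params):
  # Group this call's request and response (or error) under one span
  span_id = str(uuid.uuid4())
  tracer.send_event(
    "ai.request",
    span_id=span_id,
    properties=params,
  )
  try:
    start_time = time.time()
    response = call_fn(**params)
    tracer.send_event(
      "ai.response",
      span_id=span_id,
      properties=dict(
        # Serialize pydantic responses; pass plain dicts through unchanged
        response=response.model_dump() if hasattr(response, "model_dump") else response,
        latency_ms=(time.time() - start_time) * 1000,
      ),
    )
    return response
  except Exception as error:
    tracer.send_event(
      "ai.error",
      span_id=span_id,
      properties=dict(
        error=dict(
          type=type(error).__name__,
          message=str(error),
          stacktrace=traceback.format_exc(),
        ),
      ),
    )
    raise

# The OpenAI call above then becomes:
# completion = traced_llm_call(tracer, openai_client.chat.completions.create, params)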

Search for traces in the Autoblocks UI

Navigate to the explore page and search for sign up in the search bar. If you used the same prompt as in the example above, you should see a list of matching traces.

Visualize average latency

  1. Select Chart Options
  2. Set Chart Type to Line
  3. Set Visualize to average
  4. Set Property to latency_ms (the latency property sent on the ai.response event)

Save as a view

Saving this chart as a view lets you quickly access it in the future.

  1. Click the save button in the header
  2. Name the view Average Latency
  3. Click save

Add view to dashboard

  1. Navigate to the dashboard page
  2. Click the plus button in the header
  3. Select the Average Latency view
  4. Use the drag handles in the bottom right and top right of the chart to resize and move it