Quick Start

This quick start guide will walk you through sending events to Autoblocks and viewing them on the platform.

Set up your Autoblocks account

To get started with Autoblocks, create an account: go to the Autoblocks website and sign up for a free trial. Once you've signed up, you can log in and access the platform.

Get your ingestion key

This key authenticates your application with the Autoblocks ingestion API. Export it as an environment variable so the SDK (or your own HTTP client) can read it:

export AUTOBLOCKS_INGESTION_KEY=<your ingestion key>
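
If you're using Python, you can fail fast before sending any events when the key is missing. A minimal check using only the standard library:

import os

# Fail fast if the ingestion key was not exported; the SDK and the
# examples below all read it from this environment variable.
if not os.environ.get("AUTOBLOCKS_INGESTION_KEY"):
  raise RuntimeError("AUTOBLOCKS_INGESTION_KEY is not set")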

Install an SDK (optional)

We recommend installing one of our SDKs to make sending events to Autoblocks easier. If you prefer to send events manually, you can skip this step and check out our documentation on the event ingestion API (a rough sketch is shown after the install commands below).

# With poetry
poetry add autoblocksai

# With pip
pip install autoblocksai
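
If you do send events manually, each event is an HTTP POST to the ingestion API, authenticated with your ingestion key. Below is a rough sketch using the requests library; the endpoint URL is a placeholder, and the payload fields simply mirror the SDK examples in this guide, so check the event ingestion API documentation for the exact URL and schema:

import os
import time
import uuid

import requests

# Placeholder URL: substitute the endpoint from the event ingestion API docs.
INGESTION_API_URL = "https://<event-ingestion-endpoint>"

response = requests.post(
  INGESTION_API_URL,
  headers={"Authorization": f"Bearer {os.environ['AUTOBLOCKS_INGESTION_KEY']}"},
  json={
    # Field names here are assumptions modeled on the SDK examples below.
    "message": "my.first.event",
    "traceId": str(uuid.uuid4()),
    "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    "properties": {"source": "manual"},
  },
)
response.raise_for_status()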

Send your first trace to Autoblocks

Below is an example of how to send events related to a chat completion request to OpenAI. The events are grouped together via a trace ID, which is useful both for tracking the flow of a request through your application and for associating user feedback with your LLM events. Our SDKs make it easy to group events into traces:

import os
import uuid
import traceback
import time

from openai import OpenAI
from autoblocks.tracer import AutoblocksTracer

openai_client = OpenAI()

tracer = AutoblocksTracer(
  # The ingestion key exported as AUTOBLOCKS_INGESTION_KEY above
  os.environ["AUTOBLOCKS_INGESTION_KEY"],
  # All events sent below will have this trace ID
  trace_id=str(uuid.uuid4()),
  # All events sent below will include this property
  # alongside any other properties set in the send_event call
  properties=dict(
    provider="openai",
  ),
)

params = dict(
  model="gpt-3.5-turbo",
  messages=[
    {
      "role": "system",
      "content": "You are a helpful assistant. You answer questions about a software product named Acme.",
    },
    {"role": "user", "content": "How do I sign up?"},
  ],
  temperature=0.7,
  top_p=1,
  frequency_penalty=0,
  presence_penalty=0,
  n=1,
)

# Use a span ID to group together the request + response events
span_id = str(uuid.uuid4())

tracer.send_event(
  "ai.request",
  span_id=span_id,
  properties=params,
)

try:
  start_time = time.time()
  completion = openai_client.chat.completions.create(**params)
  tracer.send_event(
    "ai.response",
    span_id=span_id,
    properties=dict(
      # OpenAI returns pydantic models, so they need
      # to be serialized via model_dump.
      response=completion.model_dump(),
      latency_ms=(time.time() - start_time) * 1000,
    ),
  )
except Exception as error:
  tracer.send_event(
    "ai.error",
    span_id=span_id,
    properties=dict(
      error=dict(
        type=type(error).__name__,
        message=str(error),
        stacktrace=traceback.format_exc(),
      ),
    ),
  )
  raise

# Simulate user feedback; this event shares the tracer's trace ID,
# so it is linked to the LLM events sent above.
tracer.send_event(
  "user.feedback",
  properties=dict(
    feedback="good",
  ),
)
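
In a real application you would typically create one tracer, and therefore one trace ID, per incoming request, so that all of a request's events (including any later feedback) are grouped together. A minimal sketch of that pattern, where handle_request is a hypothetical entry point in your own app:

import os
import uuid

from autoblocks.tracer import AutoblocksTracer

def handle_request(user_message: str) -> None:
  # Hypothetical per-request entry point: a fresh trace ID ties all
  # of this request's events together in Autoblocks.
  tracer = AutoblocksTracer(
    os.environ["AUTOBLOCKS_INGESTION_KEY"],
    trace_id=str(uuid.uuid4()),
    properties=dict(provider="openai"),
  )
  tracer.send_event("request.received", properties=dict(message=user_message))
  # ... run your chat completion and send ai.request / ai.response
  # events here, as in the example above ...
  tracer.send_event("request.finished")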