TypeScript Prompt SDK Quick Start

Install

npm install @autoblocks/client
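
If you use yarn or pnpm instead of npm, the equivalent commands are:

yarn add @autoblocks/client
pnpm add @autoblocks/client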

Create a prompt

Go to the prompts page and click Create Prompt to create your first prompt. The prompt must contain one or more templates and can optionally contain LLM model parameters.

The code samples below use an example prompt with the ID text-summarization. It has four templates (system, user, util/language, and util/tone) and two model parameters (model and temperature).

Generate types

To generate types, set your Autoblocks API key (found on the settings page) as an environment variable:

export AUTOBLOCKS_API_KEY=...

Then, add the prompts generate command to your package.json scripts:

"scripts": {
  "gen": "prompts generate"
}
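
Then run the script to generate the types:

npm run gen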

You will need to run this script any time you deploy a new major version of your prompt.

Initialize the prompt manager

Create a single instance of the prompt manager for the lifetime of your application. When initializing the prompt manager, the major version must be pinned, while the minor version can either be pinned or set to 'latest':

import { AutoblocksPromptManager } from '@autoblocks/client/prompts';

const mgr = new AutoblocksPromptManager({
  id: 'text-summarization',
  version: {
    major: '1',
    minor: '0',
  },
});
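
If you'd rather pick up new minor versions automatically as they are deployed, pin the major version and set the minor version to 'latest' (a variant of the snippet above):

const mgr = new AutoblocksPromptManager({
  id: 'text-summarization',
  version: {
    major: '1',
    minor: 'latest',
  },
});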

Wait for the manager to be ready

At the entrypoint to your application, wait for the prompt manager to be ready before handling requests.

await mgr.init();
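
For example, in a hypothetical Express server (Express is purely illustrative here; any entrypoint works the same way), you might initialize the manager before listening for requests:

import express from 'express';

const app = express();

async function main() {
  // Block until the prompt manager has fetched and cached the prompt.
  await mgr.init();

  app.listen(3000, () => console.log('Listening on port 3000'));
}

main();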

Execute a prompt

The exec method on the prompt manager starts a new prompt execution context. Your callback receives a PromptExecutionContext instance (the prompt argument below) that gives you fully-typed access to the prompt's templates and parameters:

import crypto from 'crypto';
import OpenAI from 'openai';
import type { ChatCompletionCreateParamsNonStreaming } from 'openai/resources/chat/completions';
import { AutoblocksTracer } from '@autoblocks/client';

// The OpenAI client reads OPENAI_API_KEY from the environment by default.
const openai = new OpenAI();

const response = await mgr.exec(async ({ prompt }) => {
  const tracer = new AutoblocksTracer({
    traceId: crypto.randomUUID(),
  });

  const params: ChatCompletionCreateParamsNonStreaming = {
    model: prompt.params.model,
    temperature: prompt.params.temperature,
    messages: [
      {
        role: 'system',
        content: prompt.render({
          template: 'system',
          params: {
            languageRequirement: prompt.render({
              template: 'util/language',
              params: {
                language: 'Spanish',
              },
            }),
            toneRequirement: prompt.render({
              template: 'util/tone',
              params: {
                tone: 'silly',
              },
            }),
          },
        }),
      },
      {
        role: 'user',
        content: prompt.render({
          template: 'user',
          params: {
            document: 'mock document',
          },
        }),
      },
    ],
  };

  tracer.sendEvent('ai.request', {
    properties: params,
  });

  const response = await openai.chat.completions.create(params);

  tracer.sendEvent('ai.response', {
    properties: {
      response,
    },
    promptTracking: prompt.track(),
  });

  return response;
});
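
Whatever the callback returns is what exec resolves to, so the completion returned above can be used directly, for example:

console.log(response.choices[0].message.content);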

Include prompt information in the LLM response event

Notice that we include prompt tracking information on the LLM response event:

tracer.sendEvent('ai.response', {
  properties: {
    response,
  },
  promptTracking: prompt.track(),
});

This correlates LLM response events with the prompt that was used to generate them. The prompt ID and version are sent as properties on your event, allowing you to track the prompt's performance on the explore page.

Develop locally against a prompt revision that hasn't been deployed

As you create new revisions in the UI, your private revisions (or revisions shared by your teammates) can be pulled down by setting the major version to 'dangerously-use-undeployed':

import { AutoblocksPromptManager } from '@autoblocks/client/prompts';

const mgr = new AutoblocksPromptManager({
  id: 'text-summarization',
  version: {
    major: 'dangerously-use-undeployed',
    minor: 'latest',
  },
});
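
Undeployed revisions are meant for local development, so you may want to choose the version based on the environment. A minimal sketch, assuming NODE_ENV distinguishes development from production:

const mgr =
  process.env.NODE_ENV === 'production'
    ? new AutoblocksPromptManager({
        id: 'text-summarization',
        version: { major: '1', minor: '0' },
      })
    : new AutoblocksPromptManager({
        id: 'text-summarization',
        version: { major: 'dangerously-use-undeployed', minor: 'latest' },
      });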