TypeScript Prompt SDK Quick Start

Install

npm install @autoblocks/client

Create a prompt

Go to the prompts page and click Create Prompt to create your first prompt. The prompt must contain one or more templates and can optionally contain LLM model parameters.

The code samples below use an example prompt with the ID text-summarization. It has four templates (system, user, util/language, and util/tone) and two model parameters (model and temperature).
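
The exact template text isn't reproduced here; as a purely hypothetical sketch, based on how the templates are rendered in the code below, they might look something like this (the wording and placeholder syntax are illustrative, not the actual prompt):

system: Summarize the document provided by the user. {{ languageRequirement }} {{ toneRequirement }}
user: {{ document }}
util/language: Respond only in {{ language }}.
util/tone: Respond in a {{ tone }} tone.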

Generate types

To generate types, first set your Autoblocks API key (found on the settings page) as an environment variable:

export AUTOBLOCKS_API_KEY=...

Then, add the prompts generate command to your package.json scripts:

"scripts": {
  "gen": "prompts generate"
}

You will need to run this script any time you deploy a new major version of your prompt.
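
With the script in place, regenerate the types by running it with your package manager, for example:

npm run gen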

Initialize the prompt manager

Create a single instance of the prompt manager for the lifetime of your application. When initializing the prompt manager, the major version must be pinned while the minor version can either be pinned or set to 'latest':

import { AutoblocksPromptManager } from '@autoblocks/client/prompts';

const mgr = new AutoblocksPromptManager({
  id: 'text-summarization',
  version: {
    major: '1',
    minor: '0',
  },
});
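
If you'd rather pick up new minor versions automatically as they are deployed, set the minor version to 'latest' instead of pinning it:

const mgr = new AutoblocksPromptManager({
  id: 'text-summarization',
  version: {
    major: '1',
    minor: 'latest',
  },
});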

Wait for the manager to be ready

At the entrypoint to your application, wait for the prompt manager to be ready before handling requests.

await mgr.init();
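
For example, if your application is an HTTP server (Express is used here purely as an illustrative, hypothetical setup), wait for init() to resolve before accepting traffic:

import express from 'express';

async function main() {
  // Fetch the pinned prompt version before serving any requests.
  await mgr.init();

  const app = express();
  // ... register routes that call mgr.exec(...) ...
  app.listen(3000);
}

main();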

Execute a prompt

The exec method on the prompt manager starts a new prompt execution context. It creates a PromptExecutionContext instance that gives you fully-typed access to the prompt's templates and parameters:

import crypto from 'crypto';
import OpenAI from 'openai';
// Note: the type import path assumes the openai v4 Node SDK.
import type { ChatCompletionCreateParamsNonStreaming } from 'openai/resources/chat/completions';
import { AutoblocksTracer } from '@autoblocks/client';

// Assumes OPENAI_API_KEY is set in the environment.
const openai = new OpenAI();

const response = await mgr.exec(async ({ prompt }) => {
  const tracer = new AutoblocksTracer({
    traceId: crypto.randomUUID(),
  });

  const params: ChatCompletionCreateParamsNonStreaming = {
    model: prompt.params.model,
    temperature: prompt.params.temperature,
    messages: [
      {
        role: 'system',
        content: prompt.render({
          template: 'system',
          params: {
            languageRequirement: prompt.render({
              template: 'util/language',
              params: {
                language: 'Spanish',
              },
            }),
            toneRequirement: prompt.render({
              template: 'util/tone',
              params: {
                tone: 'silly',
              },
            }),
          },
        }),
      },
      {
        role: 'user',
        content: prompt.render({
          template: 'user',
          params: {
            document: 'mock document',
          },
        }),
      },
    ],
  };

  tracer.sendEvent('ai.request', {
    properties: params,
  });

  const response = await openai.chat.completions.create(params);

  tracer.sendEvent('ai.response', {
    properties: {
      response,
    },
    promptTracking: prompt.track(),
  });

  return response;
});
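
Whatever the callback returns is what exec resolves to, so the chat completion is available outside the execution context. For example (assuming the standard OpenAI chat completion response shape):

const summary = response.choices[0]?.message?.content;
console.log(summary);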

Include prompt information in the LLM response event

Notice that we include prompt tracking information on the LLM response event:

tracer.sendEvent('ai.response', {
  properties: {
    response,
  },
  promptTracking: prompt.track(),
});

This correlates LLM response events with the prompt that was used to generate them. The prompt's ID and version are sent as properties on the event, allowing you to track the prompt's performance on the explore page.

Develop locally against a prompt that hasn't been deployed

You will often want to develop locally against a prompt that you've modified in the UI but haven't yet deployed. As you modify prompts in the UI, you can pull down the undeployed changes at any time by setting the major version to 'dangerously-use-undeployed':

import { AutoblocksPromptManager } from '@autoblocks/client/prompts';

const mgr = new AutoblocksPromptManager({
  id: 'text-summarization',
  version: {
    major: 'dangerously-use-undeployed',
    minor: '',  // can be any string, will be ignored
  },
});