Prompt SDK
The Autoblocks Prompt SDK makes it effortless to organize and track your prompt templates.
Advantages
The Autoblocks Prompt SDK is an auto-generated and type-safe prompt builder for TypeScript applications. It provides several advantages over traditional prompt building techniques:
1. Prompt templates are kept in text files
Prompt templates are kept in text files, not code, so you don't need to awkwardly un-indent multiline strings within objects and functions:
const makePrompt = () => {
  return `First line has to be here so that we don't have a newline at the beginning
and further content needs to be un-indented
to the left so that we don't have extra
whitespace before each line`;
};
I can write things normally in a text file!
I don't have to awkwardly un-indent my text to the left!
2. Prompt building is type-safe
The placeholders in your templates are automatically extracted and turned into TypeScript types, so you don't have to worry about typos or missing placeholder values.
If you have the below template in a file called feature-a/system:
This template expects a {{ name }} and {{ age }} property.
Then your builder will be aware of both its path and its expected placeholder values:
builder.build('feature-a/system', {
  name: 'Alice',
  age: '43',
});
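Under the hood, the generated code gives build a typed signature. Here is a sketch of the idea (the names below are illustrative only, not the SDK's actual generated code):

// A hypothetical sketch of the generated parameter types:
interface TemplateParams {
  'feature-a/system': { name: string; age: string };
}

declare function build<P extends keyof TemplateParams>(
  path: P,
  params: TemplateParams[P],
): string;

// A typo'd path or a missing placeholder value is then a compile-time error:
// build('feature-a/systme', { name: 'Alice', age: '43' }); // unknown path
// build('feature-a/system', { name: 'Alice' });            // missing 'age'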
3. Auto-versioning
The most powerful feature of the Autoblocks Prompt SDK is automated versioning based on which templates were used to build the final, rendered prompt(s) you send to an LLM.
For each component of your application that makes an LLM request, choose a human-readable identifier to represent that task. When you use this identifier to initialize a new builder instance, the SDK keeps track of which templates you used to build the prompt(s) for that request.
This allows you to version the entire prompt building process in addition to the templates themselves. This is an important distinction, since building prompt(s) for an LLM request is done programmatically using a variable combination of templates. For example, assume we have the below templates:
common/language:
Always respond in {{ language }}.

feature-a/system:
You are a helpful assistant.
{{ languageRequirement }}

feature-a/user:
Hello, my name is {{ name }}!
We can use any combination of these to build the prompts for an LLM request:
enum PromptTrackingId {
  FEATURE_A = 'feature-a',
  FEATURE_B = 'feature-b',
}

// Create a new builder any time you're about to build
// prompt(s) for an LLM request. Use an identifier
// that represents the task you're performing. Autoblocks
// will automatically track version changes and how LLM
// performance is changing over time.
const builder = new AutoblocksPromptBuilder(PromptTrackingId.FEATURE_A);

// Use the builder to build prompt(s) programmatically
// using any combination of templates you want.
const messages = [
  {
    role: 'system',
    content: builder.build('feature-a/system', {
      // Replacement value for the {{ languageRequirement }} placeholder
      // is the result of another rendered template.
      languageRequirement: builder.build('common/language', {
        // Replacement value for the {{ language }} placeholder is a constant.
        language: 'Spanish',
      }),
    }),
  },
  {
    role: 'user',
    content: builder.build('feature-a/user', {
      // Replacement value for the {{ name }} placeholder is a constant.
      name: 'Alice',
    }),
  },
];

const response = await openai.chat.completions.create({
  messages,
  model: 'gpt-3.5-turbo',
});

// Send response + template usage to Autoblocks.
await tracer.sendEvent('ai.response', {
  properties: { response },
  promptTracking: builder.usage(),
});
Then, if you make a change to any of the below templates, it will result in a new version of PromptTrackingId.FEATURE_A when that change is deployed:
common/language
feature-a/system
feature-a/user
For example, let's say we change the common/language template to:

common/language:
ALWAYS respond in {{ language }}.
It's a small change, but we want to consider this a new version of PromptTrackingId.FEATURE_A, since it could have an impact on LLM performance.
Once this change is deployed, Autoblocks will automatically detect the new version and you'll be able to compare performance before and after the change.
At the top of a version history page, you'll see your identifier along with all of the templates that were used in the process of building prompt(s) for the currently selected version.
With the old version selected: (screenshot)
With the new version selected: (screenshot)
Note the difference in the common/language template.
Getting Started
Install
npm install @autoblocks/client
Configure your templates directory
Tell us where your templates directory is:
package.json:
"autoblocks": {
  "templatesDirectory": "prompt-templates"
}
Create templates
Add templates to your templates directory:
package.json
prompt-templates/
  common/
    language
    tone
  feature-a/
    system
    user
  feature-b/
    system
    user
These are just plain text files with placeholders. Placeholders should be surrounded by double curly braces:
This is a template with a {{ placeholder }}.
Placeholders can have any amount of whitespace inside the braces, so these are also valid:
My name is {{name}}.
My age is {{ age }}
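Conceptually, the placeholder syntax follows the pattern sketched below (for illustration only; this is not the SDK's actual parser):

// A sketch of the placeholder grammar (not the SDK's actual implementation):
// double curly braces with optional whitespace around the name.
const PLACEHOLDER = /\{\{\s*([A-Za-z_][A-Za-z0-9_]*)\s*\}\}/g;

const values: Record<string, string> = { name: 'Alice', age: '43' };
const rendered = 'My name is {{name}}. My age is {{ age }}'.replace(
  PLACEHOLDER,
  (_, key) => values[key] ?? '',
);
// => 'My name is Alice. My age is 43'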
Placeholders aren't always replaced with constants. They can also be replaced with the value of another rendered template:
const messages = [
  {
    role: 'system',
    content: builder.build('feature-a/system', {
      // Replacement value for the {{ languageRequirement }} placeholder
      // is the result of another rendered template.
      languageRequirement: builder.build('common/language', {
        // Replacement value for the {{ language }} placeholder is a constant.
        language: 'Spanish',
      }),
    }),
  },
];
Run the CLI to compile the prompt templates
The autoblocks prompts generate CLI generates types and also copies your templates from the text files into JavaScript values that your application will need at runtime.
This CLI needs to run every time you change your templates, so we recommend installing nodemon and adding both gen and gen:watch scripts to your package.json.
npm install nodemon --save-dev
Then:
package.json:
"scripts": {
  "gen": "autoblocks prompts generate",
  "gen:watch": "nodemon --quiet --watch prompt-templates --ext \"*\" --exec \"npm run gen\""
}
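During development, keep the watch script running so the generated types stay in sync as you edit your templates:

npm run gen:watch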
You will also need to run the CLI in your CI/CD pipelines. For example, make sure the templates are compiled before doing any type checking or builds:
"scripts": {
  "gen": "autoblocks prompts generate",
  "build": "npm run gen && tsc",
  "serve": "npm run build && node dist/serve.js"
}
Initialize a builder
You should initialize a new builder any time you're making an LLM request, using an identifier that represents the task you're performing.
import { AutoblocksPromptBuilder } from '@autoblocks/client/prompts';

const builder = new AutoblocksPromptBuilder('feature-a');

// Use the builder however you want to compose your prompt(s).
const messages = [
  {
    role: 'system',
    content: builder.build('feature-a/system', {
      languageRequirement: builder.build('common/language', {
        language: 'Spanish',
      }),
      toneRequirement: builder.build('common/tone', {
        tone: 'friendly',
      }),
    }),
  },
  {
    role: 'user',
    content: builder.build('feature-a/user', {
      name: 'Alice',
    }),
  },
];
Send template usage data with the LLM response event
When you send an LLM response event, you should include the usage data from the builder:
const response = await openai.chat.completions.create({
  messages,
  model: 'gpt-3.5-turbo',
});

await tracer.sendEvent('ai.response', {
  properties: { response },
  promptTracking: builder.usage(),
});
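The tracer used here (and in the auto-versioning example above) is an Autoblocks tracer. A minimal setup sketch, assuming your ingestion key is available in an environment variable (see the Autoblocks tracing docs for the full set of options):

import { AutoblocksTracer } from '@autoblocks/client';

// A sketch: assumes AUTOBLOCKS_INGESTION_KEY is set in the environment.
const tracer = new AutoblocksTracer(process.env.AUTOBLOCKS_INGESTION_KEY);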
Autoblocks will show you how your LLM performance changes over time as you update your templates.
Example
Clone our examples repository and follow the instructions in the prompt-sdk example.