TypeScript Prompt SDK Quick Start
Install
npm install @autoblocks/client
Create a prompt
Go to the prompts page and click Create Prompt to create your first prompt. The prompt must contain one or more templates and can optionally contain LLM model parameters.
The code samples below use an example prompt with the ID text-summarization, containing system, user, util/language, and util/tone templates.
Generate types
To generate types, first set your Autoblocks API key (available on the settings page) as an environment variable:
export AUTOBLOCKS_API_KEY=...
Then, add the prompts generate command to your package.json scripts:
"scripts": {
"gen": "prompts generate"
}
You will need to run this script any time you deploy a new major version of your prompt.
Make sure to generate the types in your CI/CD pipeline before running type checking on your application.
"scripts": {
"gen": "prompts generate",
"type-check": "npm run gen && tsc --noEmit"
}
Initialize the prompt manager
Create a single instance of the prompt manager for the lifetime of your application.
When initializing the prompt manager, the major version must be pinned, while the minor version can either be pinned or set to 'latest':
import { AutoblocksPromptManager } from '@autoblocks/client/prompts';
const mgr = new AutoblocksPromptManager({
id: 'text-summarization',
version: {
major: '1',
minor: '0',
},
});
When the minor version is set to 'latest', the prompt manager periodically refreshes the in-memory prompt in the background according to the refreshInterval.
See the AutoblocksPromptManager reference for more information.
Wait for the manager to be ready
At the entrypoint to your application, wait for the prompt manager to be ready before handling requests.
await mgr.init();
Execute a prompt
The exec method on the prompt manager starts a new prompt execution context. It creates a PromptExecutionContext instance that gives you fully-typed access to the prompt's templates and parameters:
import { AutoblocksTracer } from '@autoblocks/client';
import OpenAI from 'openai';
import type { ChatCompletionCreateParamsNonStreaming } from 'openai/resources/chat/completions';

const openai = new OpenAI();

const response = await mgr.exec(async ({ prompt }) => {
const tracer = new AutoblocksTracer({
traceId: crypto.randomUUID(),
});
const params: ChatCompletionCreateParamsNonStreaming = {
model: prompt.params.model,
temperature: prompt.params.temperature,
messages: [
{
role: 'system',
content: prompt.renderTemplate({
template: 'system',
params: {
languageRequirement: prompt.renderTemplate({
template: 'util/language',
params: {
language: 'Spanish',
},
}),
toneRequirement: prompt.renderTemplate({
template: 'util/tone',
params: {
tone: 'silly',
},
}),
},
}),
},
{
role: 'user',
content: prompt.renderTemplate({
template: 'user',
params: {
document: 'mock document',
},
}),
},
],
};
tracer.sendEvent('ai.request', {
properties: params,
});
const response = await openai.chat.completions.create(params);
tracer.sendEvent('ai.response', {
properties: {
response,
},
promptTracking: prompt.track(),
});
return response;
});
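Conceptually, each renderTemplate call substitutes named parameters into a template string, which is how the util/language and util/tone templates above compose into the system template. A minimal self-contained illustration (the {{ placeholder }} syntax and the render function are assumptions for this sketch, not the SDK's implementation):

```typescript
// Toy renderer: replaces {{ name }} placeholders with values from params.
function render(template: string, params: Record<string, string>): string {
  return template.replace(
    /\{\{\s*(\w+)\s*\}\}/g,
    (_match: string, name: string) => params[name] ?? '',
  );
}

// Nested composition, mirroring the nested renderTemplate calls above.
const system = render(
  'Summarize the document. {{ languageRequirement }} {{ toneRequirement }}',
  {
    languageRequirement: render('Respond in {{ language }}.', { language: 'Spanish' }),
    toneRequirement: render('Use a {{ tone }} tone.', { tone: 'silly' }),
  },
);

console.log(system);
```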
Include prompt information in the LLM response event
Notice that we include prompt tracking information on the LLM response event:
tracer.sendEvent('ai.response', {
properties: {
response,
},
promptTracking: prompt.track(),
});
This correlates LLM response events with the prompt that was used to generate them. The prompt ID and version will be sent as properties on your event, allowing you to track its performance on the explore page.
Develop locally against a prompt revision that hasn't been deployed
As you create new revisions in the UI, your private revisions (or revisions that have been shared by your teammates) can be pulled down using 'dangerously-use-undeployed':
import { AutoblocksPromptManager } from '@autoblocks/client/prompts';
const mgr = new AutoblocksPromptManager({
id: 'text-summarization',
version: {
major: 'dangerously-use-undeployed',
minor: 'latest',
},
});
As the name suggests, this should only be used in local development and never in production.
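One simple guard (our suggestion, not an SDK feature) is to derive the major version from the runtime environment, so undeployed revisions can never be selected in production:

```typescript
// Pick the prompt major version from the runtime environment.
// 'dangerously-use-undeployed' is only ever selected outside production.
// The pinned major '1' matches the example manager above.
function promptMajor(nodeEnv: string | undefined): string {
  return nodeEnv === 'production' ? '1' : 'dangerously-use-undeployed';
}

console.log(promptMajor('development')); // → 'dangerously-use-undeployed'
```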
Organizing multiple prompt managers
If you are using many prompt managers, we recommend initializing them in a single file and importing them as a module. For example, in prompts.ts:
import { AutoblocksPromptManager } from '@autoblocks/client/prompts';
const refreshInterval = { seconds: 5 };
const managers = {
textSummarization: new AutoblocksPromptManager({
id: 'text-summarization',
version: {
major: '1',
minor: 'latest',
},
refreshInterval,
}),
flashcardGenerator: new AutoblocksPromptManager({
id: 'flashcard-generator',
version: {
major: '1',
minor: 'latest',
},
refreshInterval,
}),
studyGuideOutline: new AutoblocksPromptManager({
id: 'study-guide-outline',
version: {
major: '1',
minor: 'latest',
},
refreshInterval,
}),
};
async function init() {
await Promise.all(Object.values(managers).map(mgr => mgr.init()));
}
export default {
init,
...managers,
};
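The init helper above uses Promise.all so that all managers initialize concurrently rather than one after another. The same pattern shown self-contained (the fake delayed inits stand in for the managers' network fetches):

```typescript
// Initialize several async resources concurrently and wait for all of them.
async function initAll(inits: Array<() => Promise<void>>): Promise<void> {
  await Promise.all(inits.map((init) => init()));
}

// Usage sketch: three fake "managers" that each resolve after a short delay.
// Each init is invoked immediately, so all three start before any finishes.
const started: string[] = [];
const fake = (name: string) => async () => {
  started.push(name);
  await new Promise((resolve) => setTimeout(resolve, 10));
};

await initAll([fake('a'), fake('b'), fake('c')]);
console.log(started.length); // → 3
```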
Make sure to call init at the entrypoint of your application:
import prompts from '~/prompts';
async function start() {
await prompts.init();
...
}
Then, throughout your application, import the entire prompts module and use the prompt managers as needed:
import prompts from '~/prompts';
prompts.textSummarization.exec(({ prompt }) => {
...
});
prompts.flashcardGenerator.exec(({ prompt }) => {
...
});
prompts.studyGuideOutline.exec(({ prompt }) => {
...
});