# TypeScript Prompt SDK Reference

## AutoblocksPromptManager
Below are the arguments that can be passed when initializing the prompt manager:

| name | required | default | description |
| --- | --- | --- | --- |
| `id` | true | | The ID of the prompt. |
| `version.major` | true | | Must be pinned to a specific major version. |
| `version.minor` | true | | Can be one of: a specific minor version, the string `"latest"`, or a weighted list. If a weighted list, the minor version is chosen randomly at runtime for each `exec` call according to the weights. |
| `apiKey` | false | `AUTOBLOCKS_API_KEY` environment variable | Your Autoblocks API key. |
| `refreshInterval` | false | `{ seconds: 10 }` | How often to refresh the latest prompt. Only relevant if the minor version is set to `"latest"` or `"latest"` is used in the weighted list. |
| `refreshTimeout` | false | `{ seconds: 30 }` | How long to wait for the latest prompt to refresh before timing out. A refresh timeout does not throw an uncaught error; an error is logged and the background refresh process continues to run at its configured interval. |
| `initTimeout` | false | `{ seconds: 30 }` | How long to wait for the prompt manager to be ready (when calling `init()`) before timing out. |

```ts
import { AutoblocksPromptManager } from '@autoblocks/client/prompts';

const mgr = new AutoblocksPromptManager({
  id: 'text-summarization',
  version: {
    major: '1',
    minor: '0',
  },
});
```
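
The `initTimeout` option applies when waiting for `init()` to resolve. A minimal sketch of a typical startup sequence, assuming the manager is initialized once before handling requests:

```ts
// Fetch the prompt and wait for the manager to be ready.
// Respects the configured initTimeout (default: 30 seconds).
await mgr.init();
```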
When using a weighted list, the weights do not need to add up to 100. The weights simply represent the relative probability of choosing a given minor version. For example, the weighted list:

```ts
minor: [
  {
    version: 'latest',
    weight: 10,
  },
  {
    version: '0',
    weight: 90,
  },
]
```
is the same as:

```ts
minor: [
  {
    version: 'latest',
    weight: 1,
  },
  {
    version: '0',
    weight: 9,
  },
]
```
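
Putting it together, a constructor call with a weighted minor version might look like the following sketch (the prompt ID is reused from the earlier example; the weights are illustrative):

```ts
import { AutoblocksPromptManager } from '@autoblocks/client/prompts';

// Roughly 10% of exec calls use the latest minor version;
// the remaining 90% use the pinned minor version '0'.
const mgr = new AutoblocksPromptManager({
  id: 'text-summarization',
  version: {
    major: '1',
    minor: [
      { version: 'latest', weight: 1 },
      { version: '0', weight: 9 },
    ],
  },
});
```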

### exec

Starts a prompt execution context by creating a new `PromptExecutionContext` instance.

```ts
const response = await mgr.exec(async ({ prompt }) => {
  // ...
});
```
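
The value returned from the callback becomes the resolved value of `exec` itself, as the `response` assignments in these examples suggest. A minimal sketch:

```ts
// `summary` receives whatever the callback returns.
const summary = await mgr.exec(async ({ prompt }) => {
  return 'a summary produced with the prompt';
});
```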

## PromptExecutionContext

An instance of this class is created every time a new execution context is started with the `exec` function. It contains a frozen copy of the prompt manager's in-memory prompt at the time `exec` was called. This ensures the prompt is stable for the duration of an execution, even if the in-memory prompt on the manager instance is refreshed mid-execution.

### params
An object with the prompt's parameters.

```ts
// Type from the OpenAI v4 SDK.
import type { ChatCompletionCreateParamsNonStreaming } from 'openai/resources/chat/completions';

const response = await mgr.exec(async ({ prompt }) => {
  const params: ChatCompletionCreateParamsNonStreaming = {
    model: prompt.params.model,
    temperature: prompt.params.temperature,
    // ...
  };
});
```
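
For context, a sketch of how these params might feed a completion request, assuming the OpenAI v4 SDK and a prompt whose params include `model` and `temperature` (the message content is illustrative; see `renderTemplate` below for rendering real message content):

```ts
import OpenAI from 'openai';

const openai = new OpenAI();

const response = await mgr.exec(async ({ prompt }) => {
  return openai.chat.completions.create({
    model: prompt.params.model,
    temperature: prompt.params.temperature,
    messages: [{ role: 'user', content: 'Summarize this article.' }],
  });
});
```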

### renderTemplate

The `renderTemplate` function accepts a template ID and parameters and returns the rendered template as a string.

| name | required | description |
| --- | --- | --- |
| `template` | true | The ID of the template to render. |
| `params` | true | The parameters to pass to the template. These values are used to replace the template parameters wrapped in double curly braces. |

```ts
const response = await mgr.exec(async ({ prompt }) => {
  // Use `prompt.renderTemplate` to render a template
  console.log(
    prompt.renderTemplate({
      template: 'util/language',
      params: {
        // Replaces "{{ language }}" with "Spanish"
        language: 'Spanish',
      },
    }),
  );
  // Logs "Always respond in Spanish."
});
```
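
A rendered template is typically used as message content. A sketch reusing the `openai` client from above and the same `util/language` template:

```ts
const response = await mgr.exec(async ({ prompt }) => {
  return openai.chat.completions.create({
    model: prompt.params.model,
    messages: [
      {
        role: 'system',
        // Renders to "Always respond in Spanish."
        content: prompt.renderTemplate({
          template: 'util/language',
          params: { language: 'Spanish' },
        }),
      },
      { role: 'user', content: 'Summarize this article.' },
    ],
  });
});
```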

### renderTool

The `renderTool` function accepts a tool name and parameters and returns the rendered tool as an object in the JSON schema format that OpenAI expects.

| name | required | description |
| --- | --- | --- |
| `tool` | true | The name of the tool to render. |
| `params` | true | The parameters to pass to the tool. These values are used to replace the tool parameters wrapped in double curly braces. |

```ts
const response = await mgr.exec(async ({ prompt }) => {
  // Use `prompt.renderTool` to render a tool
  console.log(
    prompt.renderTool({
      tool: 'MyTool',
      params: {
        // Replaces "{{ language }}" with "Spanish"
        language: 'Spanish',
      },
    }),
  );
});
```
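
Because the rendered tool is already in the JSON schema format OpenAI expects, it can be passed directly in the `tools` array. A sketch reusing the `openai` client from above:

```ts
const response = await mgr.exec(async ({ prompt }) => {
  return openai.chat.completions.create({
    model: prompt.params.model,
    messages: [{ role: 'user', content: 'Summarize this article.' }],
    tools: [
      prompt.renderTool({
        tool: 'MyTool',
        params: { language: 'Spanish' },
      }),
    ],
  });
});
```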

### track

This function returns tracking information for the current prompt execution context. It is meant to be sent as the `promptTracking` property on an LLM response event.

```ts
tracer.sendEvent('ai.response', {
  properties: {
    response,
  },
  promptTracking: prompt.track(),
});
```
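
Note that `prompt.track()` is only available inside the execution context, since `prompt` is provided by `exec`. A sketch of the full flow, assuming `tracer` is an `AutoblocksTracer` from `@autoblocks/client` configured with your ingestion key, and reusing the `openai` client from above (the event name and properties are illustrative):

```ts
import { AutoblocksTracer } from '@autoblocks/client';

// Assumes the AUTOBLOCKS_INGESTION_KEY environment variable is set.
const tracer = new AutoblocksTracer();

await mgr.exec(async ({ prompt }) => {
  const response = await openai.chat.completions.create({
    model: prompt.params.model,
    messages: [{ role: 'user', content: 'Summarize this article.' }],
  });

  tracer.sendEvent('ai.response', {
    properties: { response },
    // Ties this event to the exact prompt version used in this execution.
    promptTracking: prompt.track(),
  });

  return response;
});
```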