Documentation Index
Fetch the complete documentation index at: https://docs.autoblocks.ai/llms.txt
Use this file to discover all available pages before exploring further.
TypeScript Prompt SDK Reference
AutoblocksPromptManager
Below are the arguments that can be passed when initializing the prompt manager:
| name | required | default | description |
|---|---|---|---|
| appName | true | | The name of the app that contains the prompt. |
| id | true | | The ID of the prompt. |
| version.major | true | | Must be pinned to a specific major version. |
| version.minor | true | | Either a specific minor version or the string "latest". |
| apiKey | false | `AUTOBLOCKS_V2_API_KEY` environment variable | Your Autoblocks API key. |
| refreshInterval | false | `{ seconds: 10 }` | How often to refresh the latest prompt. Only relevant if the minor version is set to "latest". |
| refreshTimeout | false | `{ seconds: 30 }` | How long to wait for the latest prompt to refresh before timing out. A refresh timeout does not throw an uncaught error; an error is logged and the background refresh process continues to run at its configured interval. |
| initTimeout | false | `{ seconds: 30 }` | How long to wait for the prompt manager to be ready (when calling `init()`) before timing out. |
```typescript
import { AutoblocksPromptManager } from '@autoblocks/client/prompts';

const mgr = new AutoblocksPromptManager({
  appName: 'my-app',
  id: 'text-summarization',
  version: {
    major: '1',
    minor: '0',
  },
});
```
exec
Starts a prompt execution context by creating a new PromptExecutionContext instance.
```typescript
const response = await mgr.exec(async ({ prompt }) => {
  // ...
});
```
PromptExecutionContext
An instance of this class is created every time a new execution context is started with the exec function.
It contains a frozen copy of the prompt manager’s in-memory prompt at the time exec was called.
This ensures the prompt is stable for the duration of an execution, even if the in-memory prompt on the manager
instance is refreshed mid-execution.
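The snapshot guarantee described above can be sketched with a simplified manager. All names here are illustrative stand-ins, not the SDK's actual internals:

```typescript
// Sketch of the snapshot guarantee: `exec` hands the callback a frozen
// copy of the prompt, so a background refresh that swaps the manager's
// in-memory prompt does not affect an in-flight execution.
// These types and classes are illustrative, not the SDK's internals.
type Prompt = { version: string; template: string };

class SketchManager {
  private current: Prompt = { version: '1.0', template: 'Summarize: {{ text }}' };

  // Simulates the background refresh replacing the in-memory prompt.
  refresh(next: Prompt): void {
    this.current = next;
  }

  async exec<T>(fn: (ctx: { prompt: Readonly<Prompt> }) => Promise<T>): Promise<T> {
    // Freeze a copy at the moment exec() is called.
    const snapshot = Object.freeze({ ...this.current });
    return fn({ prompt: snapshot });
  }
}

async function demo(): Promise<string> {
  const mgr = new SketchManager();
  return mgr.exec(async ({ prompt }) => {
    // A refresh arriving mid-execution does not change the snapshot.
    mgr.refresh({ version: '1.1', template: 'Changed!' });
    return prompt.version; // still the value captured at exec() time
  });
}
```

The execution still sees version `'1.0'` even though the manager was refreshed to `'1.1'` while it ran.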
params
An object with the prompt’s parameters.
```typescript
import type { ChatCompletionCreateParamsNonStreaming } from 'openai/resources/chat/completions';

const response = await mgr.exec(async ({ prompt }) => {
  const params: ChatCompletionCreateParamsNonStreaming = {
    model: prompt.params.model,
    temperature: prompt.params.temperature,
    // ...
  };
});
```
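Since `prompt.params` is plain data, mapping it onto a request payload is straightforward. The sketch below uses a local stand-in type rather than the real OpenAI import, and the parameter names (`model`, `temperature`) are illustrative:

```typescript
// Sketch: mapping prompt params onto a chat-completion request payload.
// `RequestParams` is a local stand-in for OpenAI's
// ChatCompletionCreateParamsNonStreaming; field names are illustrative.
type RequestParams = {
  model: string;
  temperature: number;
  messages: { role: 'system' | 'user'; content: string }[];
};

// A plain object shaped like what `prompt.params` might contain.
const promptParams = { model: 'gpt-4o', temperature: 0.3 };

function buildRequest(userInput: string): RequestParams {
  return {
    model: promptParams.model,
    temperature: promptParams.temperature,
    messages: [{ role: 'user', content: userInput }],
  };
}
```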
renderTemplate
The renderTemplate function accepts a template ID and parameters and returns the rendered template as a string.
| name | required | description |
|---|---|---|
| template | true | The ID of the template to render. |
| params | true | The parameters to pass to the template. These values are used to replace the template parameters wrapped in double curly braces. |
```typescript
const response = await mgr.exec(async ({ prompt }) => {
  // Use `prompt.renderTemplate` to render a template
  console.log(
    prompt.renderTemplate({
      template: 'util/language',
      params: {
        // Replaces "{{ language }}" with "Spanish"
        language: 'Spanish',
      },
    }),
  );
  // Logs "Always respond in Spanish."
});
```
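The double-curly substitution that `renderTemplate` performs can be sketched as a simple string replacement. This is a simplification; the real implementation may handle whitespace, escaping, and missing parameters differently:

```typescript
// Sketch of {{ param }} substitution as a plain regex replacement.
// Unmatched placeholders are left untouched in this simplified version.
function renderSketch(template: string, params: Record<string, string>): string {
  return template.replace(/\{\{\s*(\w+)\s*\}\}/g, (match, name: string) =>
    name in params ? params[name] : match,
  );
}

// renderSketch('Always respond in {{ language }}.', { language: 'Spanish' })
// -> 'Always respond in Spanish.'
```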
renderTool
The renderTool function accepts a tool name and parameters and returns the rendered tool as an object in the JSON schema format that OpenAI expects.
| name | required | description |
|---|---|---|
| tool | true | The name of the tool to render. |
| params | true | The parameters to pass to the tool. These values are used to replace the tool parameters wrapped in double curly braces. |
```typescript
const response = await mgr.exec(async ({ prompt }) => {
  // Use `prompt.renderTool` to render a tool
  console.log(
    prompt.renderTool({
      tool: 'MyTool',
      params: {
        // Replaces "{{ language }}" with "Spanish"
        language: 'Spanish',
      },
    }),
  );
});
```
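Because a rendered tool is plain JSON, it can be dropped straight into the `tools` array of an OpenAI request. The object below sketches the function-tool shape such a result might take; the description and parameter fields are illustrative, not what `renderTool` actually returns for `'MyTool'`:

```typescript
// Sketch of a rendered tool in the function-tool JSON schema shape that
// OpenAI's chat completions API accepts. The description text illustrates
// where a "{{ language }}" parameter might have been filled in.
const renderedTool = {
  type: 'function' as const,
  function: {
    name: 'MyTool',
    description: 'Translates the input into Spanish.', // "{{ language }}" -> "Spanish"
    parameters: {
      type: 'object',
      properties: {
        text: { type: 'string', description: 'The text to operate on.' },
      },
      required: ['text'],
    },
  },
};

// It would then be passed along as, e.g., { tools: [renderedTool], ... }.
```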