# Python Prompt SDK Reference
## AutoblocksPromptManager
This is the base class the autogenerated prompt manager classes inherit from. Below are the arguments that can be passed when initializing a prompt manager:
| name | required | default | description |
| --- | --- | --- | --- |
| `minor_version` | true | - | Can be one of: a specific minor version or a weighted list. If a weighted list, the minor version is chosen randomly at runtime for each `exec` call according to the weights. |
| `api_key` | false | `AUTOBLOCKS_API_KEY` environment variable | Your Autoblocks API key. |
| `refresh_interval` | false | `timedelta(seconds=10)` | How often to refresh the latest prompt. Only relevant if the minor version is set to `LATEST` or `LATEST` is used in the weighted list. |
| `refresh_timeout` | false | `timedelta(seconds=30)` | How long to wait for the latest prompt to refresh before timing out. A refresh timeout does not raise an uncaught exception; an error is logged and the background refresh process continues to run at its configured interval. |
| `init_timeout` | false | `timedelta(seconds=30)` | How long to wait for the prompt manager to be ready before timing out. |
```python
from my_project.autoblocks_prompts import TextSummarizationPromptManager

mgr = TextSummarizationPromptManager(
    minor_version="0",
)
```
When using a weighted list, the weights do not need to add up to 100. The weights simply represent the relative probability of choosing a given minor version. For example, the weighted list:
```python
[
    WeightedMinorVersion(
        version="latest",
        weight=10,
    ),
    WeightedMinorVersion(
        version="0",
        weight=90,
    ),
]
```
is the same as:
```python
[
    WeightedMinorVersion(
        version="latest",
        weight=1,
    ),
    WeightedMinorVersion(
        version="0",
        weight=9,
    ),
]
```
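The relative-weight behavior can be sketched with the standard library's `random.choices`. This is a simulation only; the SDK's actual selection logic may differ, but relative weights behave the same way:

```python
import random

def choose_minor_version(weighted_versions, rng=random):
    # Simulated weighted selection (not the SDK's implementation):
    # pick one version, with probability proportional to its weight.
    versions = [v for v, _ in weighted_versions]
    weights = [w for _, w in weighted_versions]
    return rng.choices(versions, weights=weights, k=1)[0]

def selection_probabilities(weighted_versions):
    # Normalize the weights so their relative probabilities are explicit.
    total = sum(w for _, w in weighted_versions)
    return {v: w / total for v, w in weighted_versions}

# Weights [10, 90] and [1, 9] yield identical probabilities: 0.1 and 0.9.
assert selection_probabilities([("latest", 10), ("0", 90)]) == \
    selection_probabilities([("latest", 1), ("0", 9)])
```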
### exec

A context manager that starts a prompt execution context by creating a new `PromptExecutionContext` instance.
```python
with mgr.exec() as prompt:
    ...
```
## PromptExecutionContext

An instance of this class is created every time a new execution context is started with the `exec` context manager. It contains a frozen copy of the prompt manager's in-memory prompt at the time `exec` was called. This ensures the prompt is stable for the duration of an execution, even if the in-memory prompt on the manager instance is refreshed mid-execution.
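The snapshot behavior can be illustrated with a toy stand-in (not the real SDK class) that deep-copies the prompt on entry:

```python
import copy
from contextlib import contextmanager

class ToyPromptManager:
    # Toy stand-in for illustration only; it mimics the snapshot-on-enter
    # pattern described above, not the SDK's actual implementation.
    def __init__(self, prompt):
        self._prompt = prompt

    @contextmanager
    def exec(self):
        # Freeze a copy on entry: later refreshes of self._prompt
        # do not affect an execution context already in flight.
        frozen = copy.deepcopy(self._prompt)
        yield frozen

mgr = ToyPromptManager({"template": "v1"})
with mgr.exec() as prompt:
    mgr._prompt["template"] = "v2"      # simulate a background refresh
    assert prompt["template"] == "v1"   # the frozen snapshot is unaffected
```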
### params

A pydantic model instance with the prompt's parameters.
```python
with mgr.exec() as prompt:
    params = dict(
        model=prompt.params.model,
        temperature=prompt.params.temperature,
        ...
    )
```
### render_template

The `render_template` attribute contains an instance of a class that has methods for rendering each of the prompt's templates. The template IDs and template parameters are all converted to snake case so that the method and argument names follow Python naming conventions.
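A minimal sketch of that conversion, using a hypothetical helper (the CLI's actual implementation may handle more edge cases):

```python
import re

def to_snake_case(name):
    # Hypothetical sketch: lower-case camelCase identifiers and
    # turn template-ID path separators into underscores.
    name = name.replace("/", "_")  # e.g. template IDs like "util/language"
    return re.sub(r"(?<=[a-z0-9])([A-Z])", r"_\1", name).lower()

assert to_snake_case("languageRequirement") == "language_requirement"
assert to_snake_case("util/language") == "util_language"
```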
For example, the prompt in the quick start guide contains the below templates:
`system`

```
Objective: You are provided with a document...
{{ languageRequirement }}
{{ toneRequirement }}
```

`user`

```
Document:
'''
{{ document }}
'''
Summary:
```

`util/language`

```
Always respond in {{ language }}.
```

`util/tone`

```
Always respond in a {{ tone }} tone.
```
From this, the CLI autogenerates a class with the following methods:
```python
def system(
    self,
    *,
    language_requirement: str,
    tone_requirement: str,
) -> str:
    ...

def user(self, *, document: str) -> str:
    ...

def util_language(self, *, language: str) -> str:
    ...

def util_tone(self, *, tone: str) -> str:
    ...
```
As a result, you can render your templates with methods that are aware of each template's required parameters:
```python
with mgr.exec() as prompt:
    params = dict(
        model=prompt.params.model,
        temperature=prompt.params.temperature,
        max_tokens=prompt.params.max_tokens,
        messages=[
            dict(
                role="system",
                content=prompt.render_template.system(
                    language_requirement=prompt.render_template.util_language(
                        language="Spanish",
                    ),
                    tone_requirement=prompt.render_template.util_tone(
                        tone="formal",
                    ),
                ),
            ),
            dict(
                role="user",
                content=prompt.render_template.user(
                    document="mock document",
                ),
            ),
        ],
    )
```
### render_tool

The `render_tool` attribute contains an instance of a class that has methods for rendering each of the prompt's tools. The tool names and tool parameters are all converted to snake case so that the method and argument names follow Python naming conventions. Tools are rendered in the JSON schema format that OpenAI expects.
```python
with mgr.exec() as prompt:
    params = dict(
        model=prompt.params.model,
        temperature=prompt.params.temperature,
        max_tokens=prompt.params.max_tokens,
        tools=[
            prompt.render_tool.my_tool(
                description="My description",
            ),
        ],
        # rest of params...
    )
```
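For reference, a rendered tool presumably resembles OpenAI's function-tool JSON schema. The sketch below is an assumption about that shape, not the SDK's actual output:

```python
def render_my_tool(description):
    # Assumed shape only, based on OpenAI's function-tool JSON schema;
    # the real render_tool output may include parameter definitions
    # and other fields from the prompt's tool configuration.
    return {
        "type": "function",
        "function": {
            "name": "my_tool",
            "description": description,
            "parameters": {
                "type": "object",
                "properties": {},
                "required": [],
            },
        },
    }

tool = render_my_tool("My description")
assert tool["function"]["description"] == "My description"
```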