Python Prompt SDK Quick Start
Install
poetry add autoblocksai
The prompt SDK requires pydantic v2 to be installed. You can install it with poetry add pydantic or pip install pydantic.
Create a prompt
Go to the prompts page and click Create Prompt to create your first prompt. The prompt must contain one or more templates and can optionally contain LLM model parameters.
The code samples below use this example prompt:
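The example prompt itself lives in the UI and is not reproduced here. As a rough sketch, inferred from the code samples below rather than from the prompt editor, it is a text-summarization prompt with:
- Templates: system (with language_requirement and tone_requirement placeholders), user (with a document placeholder), util_language (with a language placeholder), and util_tone (with a tone placeholder)
- Parameters: model, temperature, and max_tokens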
Autogenerate prompt classes
The prompt SDK ships with a CLI that generates Python classes with methods and arguments that mirror the structure of your prompt's templates and parameters. This gives you type safety and autocomplete when working with Autoblocks prompts in your codebase.
Create .autoblocks.yml
The .autoblocks.yml file allows you to configure:
- The location of the file that will contain the generated classes
- A list of prompts to generate classes for
This file should be at the root of your project.
autogenerate:
  prompts:
    # The location of the file that will contain the generated classes
    outfile: my_project/autoblocks_prompts.py
    # A list of prompts to generate classes for
    prompts:
      # The ID of the prompt (what you chose when creating the prompt in the UI)
      - id: text-summarization
        # The major version of the `text-summarization` prompt to generate classes for
        major_version: 2
      - id: flashcard-generator
        major_version: 1
Set your API key as an environment variable
Get your Autoblocks API key from the settings page and set it as an environment variable:
export AUTOBLOCKS_API_KEY=...
Run the CLI
Installing the autoblocksai package adds the prompts generate CLI to your path:
poetry run prompts generate
Running the CLI will create a file at the outfile location you have configured.
You will need to run the prompts generate CLI any time you update .autoblocks.yml, or when there is a new minor version of a prompt in your .autoblocks.yml file that you want to use.
When a new major version of a prompt is available and you want to update your codebase to use it, the process will be:
- update the .autoblocks.yml file with the new major version for the given prompt
- run prompts generate
- update any broken code
See an example here.
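For instance, bumping text-summarization to a hypothetical new major version 3 would only require changing the major_version value in .autoblocks.yml (a sketch based on the config format shown above):

autogenerate:
  prompts:
    outfile: my_project/autoblocks_prompts.py
    prompts:
      - id: text-summarization
        # was: major_version: 2
        major_version: 3
      - id: flashcard-generator
        major_version: 1

Then re-run prompts generate and fix any type errors that surface in code using TextSummarizationPromptManager.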
If you're not using poetry, make sure to activate the virtualenv where the autoblocksai package is installed so that the prompts generate CLI can be found.
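For example, with a plain virtualenv (the .venv path below is illustrative; adjust it to your environment):

source .venv/bin/activate
pip install autoblocksai
prompts generate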
Import and use a prompt manager
For each prompt and major version specified in your .autoblocks.yml file, there will be a prompt manager class named after the prompt's ID.
For example, if a prompt's ID is "text-summarization", then the autogenerated file will have a class called TextSummarizationPromptManager.
Initialize the prompt manager
Create a single instance of the prompt manager for the lifetime of your application. The only required argument when initializing a prompt manager is the minor version. To specify the minor version, use the enum that was autogenerated by the CLI:
from my_project.autoblocks_prompts import TextSummarizationPromptManager
mgr = TextSummarizationPromptManager(
    minor_version="0",
)
When the version is set to "latest", the prompt manager periodically refreshes the in-memory prompt in the background according to the refresh_interval.
See the AutoblocksPromptManager reference for more information.
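For example, to track the latest minor version and control how often it is refreshed, something like the following should work. This is a sketch: refresh_interval is assumed here to accept a datetime.timedelta, so confirm the exact parameter type in the AutoblocksPromptManager reference.

from datetime import timedelta

from my_project.autoblocks_prompts import TextSummarizationPromptManager

mgr = TextSummarizationPromptManager(
    # Always serve the newest deployed minor version of this major version
    minor_version="latest",
    # How often to re-fetch the prompt in the background (assumed to be a timedelta)
    refresh_interval=timedelta(minutes=5),
)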
Execute a prompt
The exec method on the prompt manager starts a new prompt execution context.
It is a context manager that creates a PromptExecutionContext instance that gives you fully-typed access to the prompt's templates and parameters:
import uuid

import openai

from autoblocks.tracer import AutoblocksTracer

with mgr.exec() as prompt:
    tracer = AutoblocksTracer(trace_id=str(uuid.uuid4()))

    params = dict(
        model=prompt.params.model,
        temperature=prompt.params.temperature,
        max_tokens=prompt.params.max_tokens,
        messages=[
            dict(
                role="system",
                content=prompt.render_template.system(
                    language_requirement=prompt.render_template.util_language(
                        language="Spanish",
                    ),
                    tone_requirement=prompt.render_template.util_tone(
                        tone="formal",
                    ),
                ),
            ),
            dict(
                role="user",
                content=prompt.render_template.user(
                    document="mock document",
                ),
            ),
        ],
    )

    tracer.send_event(
        "ai.request",
        properties=params,
    )

    response = openai.chat.completions.create(**params)

    tracer.send_event(
        "ai.response",
        prompt_tracking=prompt.track(),
        properties=dict(
            response=response.model_dump(),
        ),
    )
Include prompt information in the LLM response event
Notice that we include prompt tracking information on the LLM response event:
tracer.send_event(
    "ai.response",
    prompt_tracking=prompt.track(),
    properties=dict(
        response=response.model_dump(),
    ),
)
This correlates LLM response events with the prompt that was used to generate them. The prompt ID and version will be sent as properties on your event, allowing you to track its performance on the explore page.
Develop locally against a prompt revision that hasn't been deployed
As you create new revisions in the UI, your private revisions (or revisions that have been shared by your teammates) can be pulled down using dangerously_use_undeployed_revision:
prompts:
  - id: text-summarization
    # major_version: 1
    dangerously_use_undeployed_revision: latest
As the name suggests, this should only be used in local development and never in production.
This will update your existing TextSummarizationPromptManager class using either your latest text-summarization revision if the value is latest, or the given revision if a revision ID was provided.
When using the TextSummarizationPromptManager class while it's configured to use an undeployed revision, you can set the minor_version to either "latest" or a specific revision ID:
text_summarization = TextSummarizationPromptManager(
    minor_version="latest",
)
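Or, pinned to a specific revision (the ID below is a made-up placeholder; use a real revision ID from the UI):

text_summarization = TextSummarizationPromptManager(
    # Hypothetical revision ID, for illustration only
    minor_version="your-revision-id",
)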
Organizing multiple prompt managers
If you are using many prompt managers, we recommend initializing them in a single file and importing them as a module:
prompt_managers.py:
from my_project.autoblocks_prompts import TextSummarizationPromptManager
from my_project.autoblocks_prompts import FlashcardGeneratorPromptManager
from my_project.autoblocks_prompts import StudyGuideOutlinePromptManager
text_summarization = TextSummarizationPromptManager(
    minor_version="0",
)
flashcard_generator = FlashcardGeneratorPromptManager(
    minor_version="0",
)
study_guide_outline = StudyGuideOutlinePromptManager(
    minor_version="0",
)
Then, throughout your application, import the entire prompt_managers module and use the prompt managers as needed:
from my_project import prompt_managers
with prompt_managers.text_summarization.exec() as prompt:
    ...

with prompt_managers.flashcard_generator.exec() as prompt:
    ...

with prompt_managers.study_guide_outline.exec() as prompt:
    ...
This is preferable to importing each prompt manager individually, since the module prefix makes it clear that the variable is a prompt manager. If you import each manager individually, it is hard to tell at a glance what the variable is:
from my_project.prompt_managers import text_summarization
# Somewhere deep in a file, it is not clear
# what the `text_summarization` variable is
with text_summarization.exec() as prompt:
    ...