Monitoring in Production
After you deploy your AI application, how can you tell if it works as expected?
Autoblocks provides tools for monitoring your product's performance in production.
Online evaluations
Testing will only get you so far. The quality of your AI is ultimately judged in production, where your product faces real user scenarios.
To help you evaluate quality in production, Autoblocks lets you attach evaluators to the events you send:
import { AutoblocksTracer } from '@autoblocks/client';
// IsProfessionalTone is a custom evaluator you define; the import path here is illustrative.
import { IsProfessionalTone } from './evaluators';

const tracer = new AutoblocksTracer();

// Attach the evaluator to the event; its result is saved in Autoblocks.
tracer.sendEvent('ai.response', {
  evaluators: [new IsProfessionalTone()],
});
After you send events with evaluators attached, the evaluation results are saved in Autoblocks and available for analysis.
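For reference, here is a minimal sketch of what the IsProfessionalTone evaluator above might look like. It assumes the BaseEventEvaluator base class and the Evaluation and TracerEvent types exported by @autoblocks/client, and the scoring heuristic is a stand-in for your own logic:

import { BaseEventEvaluator, Evaluation, TracerEvent } from '@autoblocks/client';

class IsProfessionalTone extends BaseEventEvaluator {
  id = 'is-professional-tone';

  evaluateEvent(event: TracerEvent): Evaluation {
    // Stand-in heuristic: treat casual language in the response as unprofessional.
    const text = String(event.properties.response ?? '').toLowerCase();
    const casualWords = ['lol', 'omg', 'gonna'];
    const isCasual = casualWords.some((word) => text.includes(word));
    return {
      score: isCasual ? 0 : 1,
      threshold: { gte: 1 },
    };
  }
}

In practice, the evaluator could call an LLM or any other scoring function; the score and optional threshold determine whether the evaluation passes.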
Learn more about online evaluations
Tracking user outcomes
Autoblocks can also serve as a more traditional user analytics tool for tracking user interactions and outcomes. This keeps all of your data in one place and lets you correlate user interactions with the changes you make to your AI-powered product.
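As a sketch of how this might look, assuming illustrative event names and properties, you can send the outcome event with the same traceId as the AI response it relates to so the two can be correlated:

import { randomUUID } from 'crypto';
import { AutoblocksTracer } from '@autoblocks/client';

const tracer = new AutoblocksTracer();
const traceId = randomUUID();

// The AI response and the user's outcome share a traceId,
// so they can be correlated in Autoblocks.
tracer.sendEvent('ai.response', {
  traceId,
  properties: { response: 'Here is a draft of your email...' },
});

tracer.sendEvent('user.accepted-suggestion', {
  traceId,
  properties: { feature: 'email-drafting' },
});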
Learn more about tracking user outcomes
Monitoring performance metrics
Autoblocks also lets you monitor performance metrics for your AI product in production. For example, you might track and compare the latency of different models and prompt versions in a production environment.
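For example, here is a minimal sketch that records latency as an event property, tagged with the model and prompt version (the property names and the callModel helper are illustrative):

import { AutoblocksTracer } from '@autoblocks/client';

const tracer = new AutoblocksTracer();

// Placeholder for your actual LLM call.
async function callModel(prompt: string): Promise<string> {
  return '...';
}

const start = Date.now();
const response = await callModel('Draft a follow-up email');
const latencyMs = Date.now() - start;

// Tag the event with the model and prompt version so latency can be
// filtered and compared across versions in Autoblocks.
tracer.sendEvent('ai.response', {
  properties: { latencyMs, model: 'gpt-4o', promptVersion: '3', response },
});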