Q: How does billing work? A: You are only charged when you run a simulation. Each simulation run counts as one execution. We recommend starting with simple scenarios and gradually scaling up to understand costs.
Q: How can I monitor my usage? A: You can view your usage details, including how many simulation executions you’ve run, in the Settings page.
Q: How can I control costs? A: We recommend starting with simple scenarios, monitoring your usage in the Settings page, and scaling up gradually once you understand how each execution contributes to your costs.
Q: How many personas should I include in a scenario? A: Start with 1-2 personas that represent your most common use cases. Add more personas once you’re comfortable with the results.
Q: What makes a good scenario description? A: The best scenario descriptions are specific and clear about what you’re testing. For example: “Test if the agent can correctly schedule a follow-up appointment while verifying patient information.”
Q: Can I reuse personas across different scenarios? A: Yes! Using the same personas in multiple scenarios keeps your testing consistent across different situations.
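If you keep personas and scenarios as data, here is a minimal sketch of one way to organize them for reuse; the structure and field names are illustrative assumptions, not the product’s actual schema.

```python
# Illustrative sketch only: field names are assumptions, not the product's schema.

# Personas are defined once and referenced by name from any scenario.
personas = {
    "returning_patient": {
        "description": "Existing patient who wants to book a follow-up visit",
    },
    "new_patient": {
        "description": "First-time caller unfamiliar with the clinic's process",
    },
}

# Scenarios reference personas by name, so the same persona can be
# exercised in several different situations.
scenarios = [
    {
        "name": "follow_up_scheduling",
        "description": (
            "Test if the agent can correctly schedule a follow-up "
            "appointment while verifying patient information."
        ),
        "personas": ["returning_patient"],
    },
    {
        "name": "new_patient_intake",
        "description": "Test if the agent collects the required intake details.",
        "personas": ["new_patient"],
    },
]
```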
Q: How many data field variants should I include? A: Start with 2-3 common variants for each field. For example, for dates: “March 15, 2023”, “3/15/23”, and “next Wednesday”.
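As a sketch, you could keep the variants for each field together so every scenario draws from the same list; the format below is an illustration, not a required structure (the phone examples are our own).

```python
# Illustrative only: 2-3 common variants per data field.
field_variants = {
    "date": ["March 15, 2023", "3/15/23", "next Wednesday"],
    "phone": ["(555) 123-4567", "555-123-4567", "5551234567"],  # assumed examples
}

# Enumerate simple test cases: one per (field, variant) pair.
for field, variants in field_variants.items():
    for variant in variants:
        print(f"test case: {field} = {variant!r}")
```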
Q: Which edge cases should I test first? A: Begin with common edge cases like ambiguous or relative dates (e.g., “next Wednesday”), missing or incomplete information, and unexpected input formats.
Q: Do I need an OpenAI API key to use evaluators? A: Yes, you need to configure your OpenAI API key in Settings before using evaluators.
Q: What makes good evaluation criteria? A: Good evaluation criteria should be specific, focused on a single behavior, and easy to verify from the conversation. For example, “The agent verifies the patient’s information before confirming the appointment” is easier to evaluate than “The agent is helpful.”
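To make this concrete, here is an illustrative comparison in code; the evaluator_config shape and model name are assumptions, not the product’s configuration format.

```python
# Illustrative only: specific, checkable criteria versus a vague one.
good_criteria = [
    "The agent verifies the patient's identity before sharing details.",
    "The agent confirms the appointment date back to the caller.",
]

vague_criterion = "The agent is helpful."  # too broad to judge consistently

# Hypothetical evaluator configuration; the real settings live in the
# product's Settings page (including your OpenAI API key).
evaluator_config = {
    "model": "gpt-4o",  # assumed model name for illustration
    "criteria": good_criteria,
}
```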
Q: Why did my evaluation fail? A: Check that your OpenAI API key is configured correctly in Settings and that your evaluation criteria are specific enough for the evaluator to judge against the conversation.
Q: Can I share simulation results with my team? A: Yes, you can share specific executions and test runs with team members who have access to your organization.
Q: How can I search through my simulation results? A: Use our search syntax to find specific results:
- output:text - Find specific output content
- duration>100 - Filter by duration
Q: My simulation isn’t starting. What should I check? A: Verify that your scenario has a description and at least one persona, and that your API key is configured in Settings.
Q: The agent isn’t responding as expected. What can I do? A: Try making your scenario description more specific about what you’re testing, and reducing the scenario to one or two personas so you can isolate the problem.
Q: My evaluator keeps failing. How can I fix it? A: Ensure your OpenAI API key in Settings is valid and that your evaluation criteria are specific and clearly worded, as described above.
Q: How can I get support? A: You can contact our support team for help.