Request – Usage Reports, Alerts & “Human/AI in the Loop” Quality Control

Hello @Alex_MindStudio again :slightly_smiling_face:

We’d like to know the best way to implement three capabilities for our clients using MindStudio:


1. Monthly Usage Report (Per Client / Per Agent)

We need a monthly usage report that shows:

  • Interactions/runs per day, per agent, and per client

  • Monthly totals and, if possible, cost/expenses per agent

  • Delivered automatically by email to:

    • Our internal team (configurable addresses)

    • The client (one or more email addresses)

Questions:

  • Is there a built-in way to generate and email such reports?

  • If not, do you provide an API or export endpoint with this level of usage detail?


2. Usage Alerts (Per Agent / Per Client)

We define a monthly limit of interactions/runs per client/agent and bill per interaction. We need alerts when we are close to those limits.

Requirements:

  • Ability to set a monthly usage limit per agent (and ideally per client)

  • Email alerts when usage reaches thresholds (e.g., 70%, 85%, 95%) and when the limit is exceeded

  • Alert content: client ID, agent name/ID, current usage vs. limit

Questions:

  • Can we get real-time or near real-time usage via API or webhooks so we can implement this ourselves?

3. Observability / Quality Control (“Human/AI in the Loop”)

We want to review and validate agent responses, especially at the start.

Option A – Human in the loop:

  • Access to conversation logs (user question + agent answer).

  • Ability to review randomly or selectively.

  • Ideally, review inside MindStudio or receive selected interactions by email for manual evaluation.

Option B – AI in the loop:

  • A second “evaluator” agent that:

    • Receives the conversation and the main agent’s response.

    • Scores response quality (e.g., 0–10).

    • Triggers an email notification when the score is below 8 or the answer seems dubious.

Questions:

  • Is it possible to configure such reviewer/evaluator workflows natively in MindStudio?

  • Any examples or templates for evaluation/monitoring agents?


Logs / Debugger Data Export

Below there’s a screenshot of the debugger view we see in MindStudio.

Question:

  • Is there a way to export these logs (prompts, responses, metadata) so they can be:

    • Downloaded in bulk, or

    • Sent to another agent/service for automated analysis?

Any documentation or examples covering usage exports, alerts, and reviewer workflows would be very helpful.

Best,

Fernando

Hi @Fjmtrigo,

Let’s go over the points you mentioned one by one.

You can filter and download usage and cost data per Agent, per user, and by date range in the Usage tab:
https://app.mindstudio.ai/usage

While reports can’t be emailed automatically, if you trigger an Agent through the API, you can include the Billing Cost parameter so it’s returned along with the Agent output.
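For reference, a run triggered from your own backend could look roughly like the sketch below. The endpoint path, the `workerId`/`variables` body, the `includeBillingCost` flag, and the response field names are assumptions based on the description above, so please confirm the exact names in the MindStudio API reference before relying on them:

```typescript
// Minimal sketch: trigger an Agent run via the MindStudio API and capture the
// billing cost alongside the output. Endpoint path, request body fields, and
// response field names are illustrative -- check the API docs for exact names.
const API_KEY = process.env.MINDSTUDIO_API_KEY!;

async function runAgent(workerId: string, variables: Record<string, string>) {
  const res = await fetch("https://api.mindstudio.ai/developer/v2/workers/run", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      workerId,                 // the Agent to run
      variables,                // launch variables for the workflow
      includeBillingCost: true, // assumed flag: ask for the cost in the response
    }),
  });
  if (!res.ok) throw new Error(`Run failed: ${res.status}`);

  const data = await res.json();
  // Persist output + cost per client so monthly reports can be built later.
  return { output: data.result, billingCost: data.billingCost };
}
```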

You can set a monthly usage limit per Agent and per user. If you’re triggering the Agent via API, the user isn’t logged into MindStudio, so the Agent can’t identify who triggered the run. In cases like that, having separate Agents per client is the best approach.
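Since per-client identification happens on your side in that setup, the 70% / 85% / 95% / over-limit alerts can be handled in your own backend. A minimal sketch, assuming you increment a run counter each time you trigger a run via the API; the data shape and the `sendAlert` stub are illustrative:

```typescript
// Threshold alerts for per-client/per-agent usage tracked outside MindStudio.
const THRESHOLDS = [0.7, 0.85, 0.95, 1.0];

interface UsageState {
  clientId: string;
  agentName: string;
  runsThisMonth: number;
  monthlyLimit: number;
  alertedAt: number[]; // thresholds already notified this month
}

function checkUsage(state: UsageState, sendAlert: (msg: string) => void): void {
  const ratio = state.runsThisMonth / state.monthlyLimit;
  for (const t of THRESHOLDS) {
    if (ratio >= t && !state.alertedAt.includes(t)) {
      state.alertedAt.push(t);
      sendAlert(
        `Client ${state.clientId} / Agent ${state.agentName}: ` +
          `${state.runsThisMonth}/${state.monthlyLimit} runs ` +
          `(${Math.round(t * 100)}% threshold reached)`
      );
    }
  }
}
```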

Option A - Human in the loop:

  • You can review these logs in the Debugger, or add a Logic block to branch the workflow and send you an email with conversation details when certain conditions are met

Option B - AI in the loop:

  • Your Agent can pass conversation data to another Agent, or store it in, e.g., a Google Sheet. A second Agent can run on a schedule to fetch and analyze the data. A Logic block can then check whether the score falls below a threshold like 8 and send an alert, or end the run if it’s above (see the sketch below)
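If you orchestrate the review from your own backend instead, the same score-and-threshold idea can be sketched as follows; `runEvaluatorAgent` and `sendEmail` are hypothetical stand-ins for your evaluator Agent call and your mail provider:

```typescript
// "AI in the loop" sketch: score each answer with a second evaluator Agent
// and notify a human reviewer when the score falls below 8.
const SCORE_THRESHOLD = 8;

async function reviewInteraction(
  question: string,
  answer: string,
  runEvaluatorAgent: (q: string, a: string) => Promise<number>, // returns 0-10
  sendEmail: (subject: string, body: string) => Promise<void>
): Promise<void> {
  const score = await runEvaluatorAgent(question, answer);
  if (score < SCORE_THRESHOLD) {
    await sendEmail(
      `Low-quality answer flagged (score ${score}/10)`,
      `Question:\n${question}\n\nAnswer:\n${answer}`
    );
  }
}
```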

Bulk export isn’t available yet. Feel free to create a post in the Feature Request category so others can upvote it

Let me know if you have any other questions!


Hi @Alex_MindStudio ,

Regarding this topic:

Do you think it’s possible to build a workflow inside the Agent itself that performs this quality control on the generated output in both ways, “AI validation in the loop” and “human validation in the loop”?

Thanks again,

Fernando

Hi @Fjmtrigo,

You can create an Agent to review individual runs by following the steps in my previous comment. If you want the Agent to analyze logs in bulk, you’d need to store the outputs returned from the API responses in something like a Google Sheet, then have another Agent run on a schedule to process and analyze that data.
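For the Google Sheet step, a minimal sketch using the official `googleapis` Node client is below; the spreadsheet ID, tab name, and column layout are placeholders to adapt to however you structure your log:

```typescript
// Append one run's output to a Google Sheet so a scheduled Agent (or any
// other process) can analyze the accumulated data later.
import { google } from "googleapis";

async function logRun(
  clientId: string,
  agentName: string,
  question: string,
  answer: string
) {
  const auth = new google.auth.GoogleAuth({
    scopes: ["https://www.googleapis.com/auth/spreadsheets"],
  });
  const sheets = google.sheets({ version: "v4", auth });

  await sheets.spreadsheets.values.append({
    spreadsheetId: "YOUR_SPREADSHEET_ID", // placeholder
    range: "RunLog!A:E",                  // placeholder tab and columns
    valueInputOption: "RAW",
    requestBody: {
      values: [[new Date().toISOString(), clientId, agentName, question, answer]],
    },
  });
}
```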

Hope this helps!
