Checkpoint Block Losing Context

When using the Checkpoint block to discuss and improve content, it doesn’t appear to keep adequate context. For example, if I configure it to ask the user a question and the user responds without everything it asked for, so it asks them for more information, it loses all context of its original question/task. Are there any settings that can make this behave more like a chat while still working on the content as intended? Without context it gets very difficult to work with any kind of complex instructions/discussion.

Hi @royden,

Thanks for the post!

Could you share an example of this behavior? A quick Loom and a link to the Debugger log would really help us understand when and how it loses context.

Yes, my apologies for the delay in responding. Here is a Loom example of the behaviour:

The step prior to this takes the input the user has provided, identifies whether they have supplied sufficient information, and returns what the user provided along with a set of questions. The Checkpoint then gives the user the opportunity to respond to those questions and refine the original input; on approval from the user, it overwrites the original information and questions from the previous step with the refined request.

It appears, though, that the Checkpoint doesn’t remember any information the user provides - it only looks at its prompt and the information on the screen on the left, so it can’t constructively work with the user to refine the content there.

Logs in the debugger here: https://app.mindstudio.ai/agents/689d41f4-2118-47ba-80c5-7d36513c3393/edit

@Alex_MindStudio I just realised I didn’t tag you in this, so bringing it back to your attention.

Regards,

Royden

Hi @royden,

Thanks for the loom and the link!

You have an interesting use case, although the Checkpoint is not currently designed for this scenario. Its primary purpose is to let a user adjust content stored in a variable. For example, an Agent generates a LinkedIn post, then the Checkpoint lets the user tweak tone or details before posting. In this setup, the LLM doesn’t have access to previously sent messages.
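To illustrate the difference, here’s a minimal sketch of why a stateless block “forgets.” This is plain Python with a stand-in `llm()` function, not MindStudio internals - it just contrasts a Checkpoint-style call (static prompt plus current content only) with a chat-style call that resends the full running history each turn:

```python
# Stand-in for any chat-completion API: reports how much context it receives.
def llm(messages):
    return f"(model sees {len(messages)} message(s))"

def stateless_block(prompt, content):
    # Checkpoint-style call: only the static prompt and the on-screen content.
    # Earlier turns of the conversation are never included.
    return llm([{"role": "user", "content": f"{prompt}\n\n{content}"}])

def chat_block(history, user_reply):
    # Chat-style call: the entire running transcript is sent every turn,
    # so the model retains its own earlier questions.
    history.append({"role": "user", "content": user_reply})
    reply = llm(history)
    history.append({"role": "assistant", "content": reply})
    return reply

history = [{"role": "user", "content": "Refine this draft and ask me questions."}]
chat_block(history, "Here is a partial answer to your questions.")
print(stateless_block("Refine the draft.", "draft text"))  # model sees 1 message
print(llm(history))  # model sees 3 messages: the chat keeps every turn
```

The Chat block workaround below works because it behaves like `chat_block` here, whereas the Checkpoint behaves like `stateless_block`.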

I’ve submitted your request for review with our engineers. In the meantime, I’d recommend using a Chat block with a Next button transition. Here’s one way to set it up:

  • Replace the Checkpoint block with a Jump block that sends the user to another workflow
  • Start that workflow with a Generate Text block that reviews the data and generates clarifying questions
  • Enable Chat History Behavior in that block so the LLM in the downstream block has context
  • Move your instructions into the System Prompt, since the Chat block relies on it when generating output
  • Set the Chat block to a Next button transition and connect it to a Jump block
  • Create a second workflow for everything that currently follows the Checkpoint block
  • Start the new workflow with a Generate Text block that reviews the History variable from the Chat block and extracts the final prompt

This setup is longer, but it should let you achieve the behavior you’re looking for.
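For the last step, the Generate Text block that reads the Chat block’s History variable essentially needs to pull out the user’s final refined request. A rough sketch of that extraction logic, assuming the history is a list of role/content entries (an assumption for illustration, not MindStudio’s documented schema):

```python
# Hypothetical helper: walk the transcript backwards and return the last
# user message, which holds the refined request after the clarifying Q&A.
def extract_final_prompt(history):
    for message in reversed(history):
        if message["role"] == "user":
            return message["content"]
    return ""  # no user turns found

history = [
    {"role": "assistant", "content": "What audience is this post for?"},
    {"role": "user", "content": "Marketing leads; keep it under 200 words."},
]
print(extract_final_prompt(history))  # Marketing leads; keep it under 200 words.
```

In practice you’d have the Generate Text block do this summarisation itself, but the logic it needs to perform is the same.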

Hi @royden,

Great news: the Checkpoint block has been updated and now has access to previously sent messages.

Hope this helps!

Thank you @Alex_MindStudio - I will test the new functionality first, and if there are still any issues I’ll use the alternative method you’ve outlined.