One of my workflows has gone rogue on JSON array outputs

I’m going through a hierarchy of prompts to build a model. One of the Generate Text outputs from a subworkflow is returning what is supposed to be JSON, but it isn’t being recognized by a function as a valid array.

===

Looking at ‘detail_results_arry’ (the input into my function), I’m getting something that looks like this (just a small snippet; notice it’s wrapped in {"value":"[[…):

{"value":"[[{"Approach":"Checked an easy-to-see digital schedule at my induction station that clearly showed upcoming parcel arrivals and their importance.","Themes":
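
If I run that payload through a parser, it comes back as an object with a single "value" key holding a string, rather than as an array, which is presumably why the function rejects it. A rough sketch of the difference (the literals here are stand-ins, not my actual data):

// Parsing a proper array output yields an actual array.
const good = JSON.parse('[{"Approach": "example"}]');
console.log(Array.isArray(good)); // true

// Parsing the wrapped output yields an object whose "value"
// property is a stringified array, so the array check fails.
const wrapped = JSON.parse('{"value": "[[{\\"Approach\\": \\"example\\"}]]"}');
console.log(Array.isArray(wrapped)); // false
console.log(typeof wrapped.value);   // "string"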

===

Here’s what the formatting part of my instructions looks like (I’m using the same JSON example in the Structured Output example; this was added out of frustration):

3. Output Formatting
Provide the information in a JSON array format, where each top-level element is an array representing a different category of ‘Approaches’. Each ‘Approach’ object within these arrays should contain the keys: Approach (string), Themes (an array of strings), and Examples (an array of strings), matching the provided example structure. The outermost structure must be [ and ], not {"value": "…"}.

Example:

[
  {
    "Approach": "Developed a standardized visual sorting checklist for condition assessment.",
    "Themes": [
      "Process Optimization",
      "Quality Control",
      "Visual Aids"
    ],
    "Examples": [
      "Color-coded damage evaluation guide",
      "Laminated reference sheet for return condition categories"
    ]
  },
  {
    "Approach": "Implemented digital tracking for parcel metadata.",
    "Themes": [
      "Data Management",
      "Process Automation",
      "Efficiency"
    ],
    "Examples": [
      "Handheld scanning devices with condition logging",
      "Real-time digital return processing dashboard"
    ]
  }
]

===

I’ve tried different models and get the same result. I’m not seeing this problem in other subworkflows where I pass an array back to the parent, using the exact same configuration approach.

Is there anything obvious jumping out?

Hi @mikeboysen,

Nothing specific is standing out that would explain this behavior, but it would help to get a clearer picture of the full prompt and the data being passed into the Generate Text block.

You could try reinforcing the prompt, but since all models are behaving the same way, it’s likely something in the prompt or the data being processed that’s triggering this.
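
In the meantime, one defensive workaround is to unwrap the envelope inside your function before validating the array. Here’s a minimal sketch, assuming the function receives the raw string shown in your snippet (parseDetailResults is just an illustrative name, not a built-in):

// Unwrap a possible {"value": "..."} envelope and return the inner
// array, re-parsing once more if the array itself arrived as a string.
function parseDetailResults(raw: string): unknown[] {
  let data: unknown = JSON.parse(raw);
  if (data !== null && typeof data === "object" && "value" in data) {
    data = (data as { value: unknown }).value;
  }
  if (typeof data === "string") {
    data = JSON.parse(data);
  }
  if (!Array.isArray(data)) {
    throw new Error("Expected a JSON array after unwrapping");
  }
  return data;
}

That won’t explain why only this subworkflow is affected, but it should keep the parent workflow moving while you track down the cause.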

Would you be able to share a Loom with an overview, along with the full prompt and some sample test data?