What problem does this feature request solve?
Current deep-research options in MindStudio, such as the full Deep Research agent or block, are powerful but often excessive for simple research queries. They can be slow, costly, and return extremely large, highly detailed datasets that require additional processing. For many users, this adds unnecessary overhead when all they need is a concise, well-sourced research summary directly from a single Generate Text block.
What is the use case for this feature?
Users who want Google- or ChatGPT-style grounded research outputs (e.g., short research reports, structured summaries, or answer-with-citations) without invoking a complex multi-step agent or building a custom workflow.
Please describe the functionality of this feature request.
Add an option directly within the Generate Text block that enables a lightweight “Deep Research Mode” for supported LLMs. This mode would:
- Allow the LLM to perform web-grounded research
- Return a concise, structured result (e.g., summary + citations)
- Avoid backend pipelines that produce massive resource lists
- Run significantly faster and cheaper than the full Deep Research agent
- Give users control over the desired output length or depth (e.g., brief / medium / detailed)
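To make the requested shape concrete, here is a minimal sketch of the option schema and result format this mode could expose. All names (`DeepResearchOptions`, `ResearchResult`, the `depth` values, etc.) are illustrative assumptions, not part of any existing MindStudio API:

```typescript
// Hypothetical sketch of a lightweight Deep Research Mode for the
// Generate Text block. Every identifier here is illustrative only.

type ResearchDepth = "brief" | "medium" | "detailed";

interface DeepResearchOptions {
  enabled: boolean;      // toggle inside the Generate Text block
  depth: ResearchDepth;  // user-controlled output length/depth
  maxCitations?: number; // cap sources to keep the result concise
}

interface Citation {
  title: string;
  url: string;
}

interface ResearchResult {
  summary: string;       // concise, web-grounded answer
  citations: Citation[]; // short, well-sourced reference list
}

// Example of the compact output the mode would return, in contrast to
// the large datasets produced by the full Deep Research agent.
const example: ResearchResult = {
  summary: "Short grounded summary of the query.",
  citations: [{ title: "Example source", url: "https://example.com" }],
};

const options: DeepResearchOptions = { enabled: true, depth: "brief" };
```

The key design point is that the block returns a single small object (summary plus a bounded citation list) rather than a multi-step pipeline's full resource dump.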
Essentially, it would mirror the capabilities offered by services like Google Gemini’s Deep Research or ChatGPT’s “Deep Research” option, but without the overhead of the existing MindStudio Deep Research workflow.
Is there anything else we should know?
MindStudio already has access to advanced models from OpenAI and Google. What’s missing is a streamlined way to leverage their deep research capabilities within the Generate Text block. A lightweight deep-research mode would improve usability, reduce cost, and simplify common research tasks.