Model Switch Retains Incompatible Tool Configuration

Summary
Switching a Generate Text block's model from Gemini 3 Flash (with the Google Search tool enabled) to Sonar results in a runtime error, because the leftover tools configuration from Gemini is incompatible with Sonar.


Steps to Reproduce

  1. Add a Generate Text block
  2. Set the model to Gemini 3 Flash
  3. Enable the Google Search Tool
  4. Change the model to Sonar
  5. Run the block

Actual Result
The following error is returned:

Perplexity Error: ["At body -> tools -> 0: Input should be a dictionary or an instance of ToolSpec"]

Expected Result

  • The block should run successfully with Sonar
    OR
  • The system should automatically remove or adapt incompatible tool configurations when switching models
    OR
  • A clear validation error should be shown before execution

Root Cause (Hypothesis)
The tools configuration from Gemini persists after switching models, even though Sonar does not support the same tool format.

Problematic config:

"modelOverride": {
  "model": "sonar",
  "config": {
    "tools": [
      "googleSearch"
    ]
  },
  "temperature": 0.95,
  "ignorePreamble": false,
  "maxResponseTokens": 32768
}

Sonar likely expects tools to be structured as a ToolSpec object (dictionary), not a string.
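The hypothesized mismatch can be sketched as follows. The field names and the ToolSpec shape below are assumptions for illustration, not taken from the Perplexity API documentation:

```python
# Gemini-style config stores the tool as a bare string, while the Sonar
# validator appears to require a dictionary per tool entry.

gemini_style_tools = ["googleSearch"]  # tool referenced by a bare string

sonar_style_tools = [                  # hypothetical ToolSpec-like dictionary
    {"type": "web_search", "config": {}},
]

def is_toolspec_like(entry) -> bool:
    """Mimic the validator's check: accept dictionaries, reject bare strings."""
    return isinstance(entry, dict)
```

Under this assumption, `is_toolspec_like("googleSearch")` fails exactly where the runtime reports "At body -> tools -> 0: Input should be a dictionary or an instance of ToolSpec".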


Suggested Fixes

  • Clear or reset config.tools when switching models
  • Add schema validation on model change
  • Provide UI feedback when incompatible settings persist
  • Optionally auto-transform supported tool formats across models
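The first three fixes could work roughly like this. This is a minimal sketch; the model identifiers and the `SUPPORTED_TOOLS` table are illustrative assumptions, not actual platform data:

```python
# Hypothetical per-model tool support table (assumed, not documented).
SUPPORTED_TOOLS = {
    "gemini-3-flash": {"googleSearch"},
    "sonar": set(),  # assume Sonar accepts no string-form tools
}

def switch_model(block_config: dict, new_model: str) -> dict:
    """Return a copy of the block config adjusted for the new model,
    dropping any tools the target model does not support."""
    old_tools = block_config.get("config", {}).get("tools", [])
    allowed = SUPPORTED_TOOLS.get(new_model, set())
    kept = [t for t in old_tools if t in allowed]
    dropped = [t for t in old_tools if t not in allowed]

    updated = dict(block_config, model=new_model)
    updated["config"] = dict(block_config.get("config", {}), tools=kept)

    if dropped:
        # Surface UI feedback at switch time instead of failing at run time.
        print(f"Removed tools not supported by {new_model}: {dropped}")
    return updated
```

Applied to the problematic config above, switching to `sonar` would strip `"googleSearch"` and warn the user, rather than sending an invalid request.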

Impact

  • Confusing error message for users
  • Breaks workflow when switching models
  • Hard to diagnose without inspecting raw config

Hi @bkermen,

Appreciate the detailed report.

I’ve reported the issue to our engineers and will get back to you as soon as I have an update.


Hi @bkermen,

The issue has been fixed!

Appreciate you bringing this to our attention.

Thanks! I recently noticed that setting the max response size (say, to 128,000) and then switching to a model with a shorter max response size caused a similar issue. However, I just tried it again now, and that seems to be fixed too! I think the fix applies more broadly to switching between LLMs, taking into account each model’s specific parameters. 🙂
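That broader, parameter-aware behavior could be sketched like this. The token limits below are made-up values for illustration, not documented model limits:

```python
# Hypothetical per-model response-token ceilings (assumed values).
MAX_RESPONSE_TOKENS = {"gemini-3-flash": 65536, "sonar": 8192}

def clamp_for_model(block_config: dict, model: str) -> dict:
    """Return a copy of the config with maxResponseTokens clamped
    to the target model's limit, if one is known."""
    limit = MAX_RESPONSE_TOKENS.get(model)
    out = dict(block_config)
    if limit is not None and out.get("maxResponseTokens", 0) > limit:
        out["maxResponseTokens"] = limit
    return out
```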
