Add Full ElevenLabs API Support for Voice Selection via API Key in External Integrations

:white_check_mark: What problem does this feature request solve?

Current external integrations with ElevenLabs (e.g., in MindStudio workflows) cannot fetch or use custom or subscription-specific voices directly via API, because there’s no supported method to retrieve the user’s available voice list programmatically using their API key.

Additionally, our team encountered technical limitations (e.g., missing fetch/Buffer in certain execution environments like CloudRunner) that prevent us from implementing workarounds. Native support would eliminate these issues entirely.


:white_check_mark: What is the use case for this feature?

We are building dynamic AI workflows in MindStudio that allow users to convert text to speech using their ElevenLabs voices.
Many of these users have custom voices or voices unlocked by subscription tiers — but currently, there’s no way to list or select these voices programmatically inside external environments.

We want to build a dropdown menu of available voices (per user API key) within the MindStudio UI that updates dynamically, exactly as ElevenLabs does in its own dashboard.


:white_check_mark: Please describe the functionality of this feature request.

We request that ElevenLabs provide one of the following (or both):

  1. Voice list endpoint:
    GET /v1/voices, returning the full list of voices available to the API key holder, including custom voices, cloned voices, and subscription-tier voices.

  2. Voice metadata endpoint:
    A way to fetch details (such as name, category, and stability range) for a given voice_id, to validate and enrich the UI.

These endpoints should work securely with the Bearer token (API key) and allow direct integration into platforms like MindStudio without requiring browser-based authentication.
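
For illustration, here is a minimal sketch of how an integration might consume these endpoints. The https://api.elevenlabs.io base URL, the xi-api-key header, and the response fields follow ElevenLabs' published REST conventions, but treat them as assumptions; the exact shape would depend on the final API.

// Minimal sketch (assumptions noted above): fetch the voice list for an
// API key and shape it into { label, value } pairs for a dropdown.
const API_BASE = "https://api.elevenlabs.io";

export async function listVoicesForDropdown(apiKey) {
  const res = await fetch(`${API_BASE}/v1/voices`, {
    headers: { "xi-api-key": apiKey },
  });
  if (!res.ok) {
    throw new Error(`Voice list request failed: ${res.status}`);
  }
  const { voices } = await res.json();
  return voices.map((voice) => ({
    label: voice.name,     // shown in the dropdown
    value: voice.voice_id, // passed to text-to-speech calls
  }));
}

// Metadata for a single voice_id, e.g. to validate a saved selection.
export async function getVoiceDetails(apiKey, voiceId) {
  const res = await fetch(`${API_BASE}/v1/voices/${voiceId}`, {
    headers: { "xi-api-key": apiKey },
  });
  if (!res.ok) {
    throw new Error(`Voice lookup failed: ${res.status}`);
  }
  return res.json(); // name, category, settings, etc.
}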


:white_check_mark: Is there anything else we should know?

  • We attempted to build this ourselves using Cloud Functions but ran into multiple issues (no access to fetch, a missing Buffer, and CORS errors in the browser).

  • You can view our current spec and debug attempts in my agent; I can share them if you like.

  • We believe this would unlock a significant use case for creators and developers using ElevenLabs within no-code / low-code tools.

  • We’re happy to help test the API on behalf of the MindStudio ambassador team.


I can see how this feature would have great value in many use cases for Creators: podcasts, music, instructional materials, and self-help content. It has my vote!

Actually, we now support custom voice IDs in ElevenLabs! If you are using one of your own voices, it does need to be set to public, or else MindStudio can't access it.

Enjoy!


Check out this new update to Custom Functions: Full Virtual Machine Execution | Sean Thielen

Here’s some code that will work for you!

import { ElevenLabsClient } from "@elevenlabs/elevenlabs-js";

export const handler = async () => {
  // Authenticate with the API key from the function's configuration.
  const client = new ElevenLabsClient({
    apiKey: ai.config.apiKey,
  });

  // Convert the configured text with the chosen voice and model.
  // Note: the @elevenlabs/elevenlabs-js SDK uses camelCase request
  // fields, so this is modelId rather than model_id.
  const stream = await client.textToSpeech.convert(ai.config.voiceId, {
    text: ai.config.inputText,
    modelId: ai.config.modelId,
  });

  if (!stream) {
    throw new Error('Audio generation failed');
  }

  // Collect the streamed audio chunks into a single Buffer.
  const chunks = [];
  for await (const chunk of stream) {
    chunks.push(chunk);
  }
  const data = Buffer.concat(chunks);

  if (data.length === 0) {
    throw new Error('Audio generation returned no data');
  }

  // Upload the audio and expose it under the configured output variable.
  const uploadResult = await ai.uploadFile(data, "audio/mpeg");
  ai.vars[ai.config.outputVariableName] = uploadResult;
};
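
Since the VM gives you real fetch and Buffer support, you can also pull the voice list for a key from inside a Custom Function, which covers the dropdown use case from the original post. Here is a rough sketch with the same hedges as above: the /v1/voices endpoint and xi-api-key header follow ElevenLabs' public docs, and voicesVariableName is a hypothetical config key for this example.

export const handler = async () => {
  // List every voice available to this API key; custom, cloned, and
  // subscription-tier voices all come back from the same endpoint.
  const res = await fetch("https://api.elevenlabs.io/v1/voices", {
    headers: { "xi-api-key": ai.config.apiKey },
  });

  if (!res.ok) {
    throw new Error(`Failed to fetch voices: ${res.status}`);
  }

  const { voices } = await res.json();

  // Store name/ID pairs so a downstream step can render a dropdown.
  ai.vars[ai.config.voicesVariableName] = voices.map((voice) => ({
    name: voice.name,
    voiceId: voice.voice_id,
  }));
};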

Thank you very much! We already implemented this, and it works like a charm!


Amazing, glad to hear it! Would love any other feedback as you continue to play with some of these more advanced features.
