Create an ephemeral API token for use in client-side applications with the Realtime API, specifically for realtime transcription. The session can be configured with the same parameters as the `transcription_session.update` client event. The response contains a session object plus a `client_secret` key, whose value is an ephemeral API token that can be used to authenticate browser clients with the Realtime API.
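
For illustration, a minimal server-side sketch of minting a token with this endpoint, written in TypeScript (Node 18+ with built-in fetch). The helper name and session options are illustrative, not part of the API.

async function createTranscriptionToken(): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/realtime/transcription_sessions", {
    method: "POST",
    headers: {
      // Standard API key; keep this server-side only.
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      input_audio_format: "pcm16",
      input_audio_transcription: { model: "gpt-4o-transcribe", language: "en" },
      turn_detection: { type: "server_vad", threshold: 0.5 },
    }),
  });
  if (!res.ok) throw new Error(`Token request failed: ${res.status}`);
  const session = await res.json();
  // Ephemeral tokens currently expire after one minute, so mint one
  // per client connection and hand only this value to the browser.
  return session.client_secret.value;
}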

POST /realtime/transcription_sessions
application/json

Body Required

Create an ephemeral API key with the given session configuration.

  • modalities array[string]

    The set of modalities the model can respond with. To disable audio, set this to ["text"].

    Values are text or audio. Default value is ["text", "audio"].

  • input_audio_format string

    The format of input audio. Options are pcm16, g711_ulaw, or g711_alaw. For pcm16, input audio must be 16-bit PCM at a 24kHz sample rate, single channel (mono), and little-endian byte order.

    Values are pcm16, g711_ulaw, or g711_alaw. Default value is pcm16.

  • input_audio_transcription object

    Configuration for input audio transcription. The client can optionally set the language and prompt for transcription; these offer additional guidance to the transcription service.

    • model string

      The model to use for transcription, current options are gpt-4o-transcribe, gpt-4o-mini-transcribe, and whisper-1.

      Values are gpt-4o-transcribe, gpt-4o-mini-transcribe, or whisper-1.

    • language string

      The language of the input audio. Supplying the input language in ISO-639-1 (e.g. en) format will improve accuracy and latency.

    • prompt string

      An optional text to guide the model's style or continue a previous audio segment. For whisper-1, the prompt is a list of keywords. For gpt-4o-transcribe models, the prompt is a free text string, for example "expect words related to technology".

  • turn_detection object

    Configuration for turn detection, either Server VAD or Semantic VAD. This can be set to null to turn off, in which case the client must manually trigger model response. Server VAD means that the model will detect the start and end of speech based on audio volume and respond at the end of user speech. Semantic VAD is more advanced and uses a turn detection model (in conjunction with VAD) to semantically estimate whether the user has finished speaking, then dynamically sets a timeout based on this probability. For example, if the user's audio trails off with "uhhm", the model will score a low probability of turn end and wait longer for the user to continue speaking. This can be useful for more natural conversations, but may add latency. An example configuration follows the end of this parameter list.

    • type string

      Type of turn detection.

      Values are server_vad or semantic_vad. Default value is server_vad.

    • eagerness string

      Used only for semantic_vad mode. The eagerness of the model to respond. low will wait longer for the user to continue speaking, high will respond more quickly. auto is the default and is equivalent to medium.

      Values are low, medium, high, or auto. Default value is auto.

    • threshold number

      Used only for server_vad mode. Activation threshold for VAD (0.0 to 1.0); defaults to 0.5. A higher threshold will require louder audio to activate the model, and thus might perform better in noisy environments.

    • prefix_padding_ms integer

      Used only for server_vad mode. Amount of audio to include before the VAD detected speech (in milliseconds). Defaults to 300ms.

    • silence_duration_ms integer

      Used only for server_vad mode. Duration of silence to detect speech stop (in milliseconds). Defaults to 500ms. With shorter values the model will respond more quickly, but may jump in on short pauses from the user.

    • create_response boolean

      Whether or not to automatically generate a response when a VAD stop event occurs. Not available for transcription sessions.

      Default value is true.

    • interrupt_response boolean

      Whether or not to automatically interrupt any ongoing response with output to the default conversation (i.e. `conversation` set to `auto`) when a VAD start event occurs. Not available for transcription sessions.

      Default value is true.

  • input_audio_noise_reduction object

    Configuration for input audio noise reduction. This can be set to null to turn off. Noise reduction filters audio added to the input audio buffer before it is sent to VAD and the model. Filtering the audio can improve VAD and turn detection accuracy (reducing false positives) and model performance by improving perception of the input audio.

    • type string

      Type of noise reduction. near_field is for close-talking microphones such as headphones, far_field is for far-field microphones such as laptop or conference room microphones.

      Values are near_field or far_field.

  • include array[string]

    The set of items to include in the transcription. Currently available items are:

    • item.input_audio_transcription.logprobs
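
As an illustration of the options above, here is one way the request body might combine Semantic VAD, noise reduction, and logprobs, written as a TypeScript object literal. All values are drawn from the documented enums and defaults; this is a sketch, not a prescribed configuration.

const transcriptionSessionConfig = {
  input_audio_format: "pcm16",
  input_audio_transcription: {
    model: "gpt-4o-transcribe",
    language: "en", // ISO-639-1 code improves accuracy and latency
    prompt: "expect words related to technology",
  },
  // Semantic VAD waits longer when the user sounds unfinished;
  // eagerness tunes how quickly a turn is treated as over.
  turn_detection: {
    type: "semantic_vad",
    eagerness: "auto",
  },
  // near_field suits close-talking mics such as headsets; use
  // far_field for laptop or conference room microphones.
  input_audio_noise_reduction: { type: "near_field" },
  // Request token log probabilities on transcription items.
  include: ["item.input_audio_transcription.logprobs"],
};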

Responses

  • 200 application/json

    Session created successfully.

    • client_secret object Required

      Ephemeral key returned by the API. Only present when the session is created on the server via REST API.

      • value string Required

        Ephemeral key usable in client environments to authenticate connections to the Realtime API. Use this in client-side environments rather than a standard API token, which should only be used server-side. A browser connection sketch follows the response attributes below.

      • expires_at integer Required

        Timestamp for when the token expires. Currently, all tokens expire after one minute.

    • modalities array[string]

      The set of modalities the model can respond with. To disable audio, set this to ["text"].

      Values are text or audio.

    • input_audio_format string

      The format of input audio. Options are pcm16, g711_ulaw, or g711_alaw.

    • input_audio_transcription object

      Configuration of the transcription model.

      • model string

        The model to use for transcription. Can be gpt-4o-transcribe, gpt-4o-mini-transcribe, or whisper-1.

        Values are gpt-4o-transcribe, gpt-4o-mini-transcribe, or whisper-1.

      • language string

        The language of the input audio. Supplying the input language in ISO-639-1 (e.g. en) format will improve accuracy and latency.

      • prompt string

        An optional text to guide the model's style or continue a previous audio segment. The prompt should match the audio language.

    • turn_detection object

      Configuration for turn detection. Can be set to null to turn off. Server VAD means that the model will detect the start and end of speech based on audio volume and respond at the end of user speech.

      • type string

        Type of turn detection, only server_vad is currently supported.

      • threshold number

        Activation threshold for VAD (0.0 to 1.0); defaults to 0.5. A higher threshold will require louder audio to activate the model, and thus might perform better in noisy environments.

      • prefix_padding_ms integer

        Amount of audio to include before the VAD detected speech (in milliseconds). Defaults to 300ms.

      • silence_duration_ms integer

        Duration of silence to detect speech stop (in milliseconds). Defaults to 500ms. With shorter values the model will respond more quickly, but may jump in on short pauses from the user.
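
As referenced above, the returned client_secret.value can be used directly from the browser. The sketch below assumes the Realtime API's documented browser pattern of passing the ephemeral key via WebSocket subprotocols (browsers cannot set an Authorization header) and an ?intent=transcription query parameter; the event name and helper are illustrative, so verify details against the current Realtime API docs.

function connectForTranscription(ephemeralKey: string): WebSocket {
  const ws = new WebSocket(
    "wss://api.openai.com/v1/realtime?intent=transcription",
    [
      "realtime",
      // Ephemeral key passed as a subprotocol in place of a header.
      `openai-insecure-api-key.${ephemeralKey}`,
      "openai-beta.realtime-v1",
    ]
  );
  ws.onmessage = (event) => {
    const msg = JSON.parse(event.data as string);
    // Completed transcriptions arrive as conversation items.
    if (msg.type === "conversation.item.input_audio_transcription.completed") {
      console.log("transcript:", msg.transcript);
    }
  };
  return ws;
}

// Stream audio by sending input_audio_buffer.append events with
// base64-encoded pcm16 chunks:
//   ws.send(JSON.stringify({ type: "input_audio_buffer.append", audio: b64Chunk }));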

POST /realtime/transcription_sessions
curl \
 --request POST 'https://api.openai.com/v1/realtime/transcription_sessions' \
 --header "Authorization: Bearer $ACCESS_TOKEN" \
 --header "Content-Type: application/json" \
 --data '{"modalities":["text","audio"],"input_audio_format":"pcm16","input_audio_transcription":{"model":"gpt-4o-transcribe","language":"string","prompt":"string"},"turn_detection":{"type":"server_vad","eagerness":"auto","threshold":42.0,"prefix_padding_ms":42,"silence_duration_ms":42,"create_response":true,"interrupt_response":true},"input_audio_noise_reduction":{"type":"near_field"},"include":["string"]}'
Request examples
{
  "modalities": [
    "text",
    "audio"
  ],
  "input_audio_format": "pcm16",
  "input_audio_transcription": {
    "model": "gpt-4o-transcribe",
    "language": "en",
    "prompt": "expect words related to technology"
  },
  "turn_detection": {
    "type": "server_vad",
    "threshold": 0.5,
    "prefix_padding_ms": 300,
    "silence_duration_ms": 500
  },
  "input_audio_noise_reduction": {
    "type": "near_field"
  },
  "include": [
    "item.input_audio_transcription.logprobs"
  ]
}
Response examples (200)
{
  "client_secret": {
    "value": "ek_abc123",
    "expires_at": 1718380800
  },
  "modalities": [
    "text"
  ],
  "input_audio_format": "pcm16",
  "input_audio_transcription": {
    "model": "gpt-4o-transcribe",
    "language": "en",
    "prompt": "expect words related to technology"
  },
  "turn_detection": {
    "type": "server_vad",
    "threshold": 0.5,
    "prefix_padding_ms": 300,
    "silence_duration_ms": 500
  }
}