Create an ephemeral API token for use in client-side applications with the Realtime API. It can be configured with the same session parameters as the `session.update` client event. The response is a session object plus a `client_secret` key containing an ephemeral API token that browser clients can use to authenticate with the Realtime API.
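
For example, a backend can mint the token and hand only the client_secret to the browser. Below is a minimal TypeScript sketch assuming a Node server with Express and a global fetch; the /api/session route name is illustrative, not part of the API.

// server.ts: mint an ephemeral Realtime token for a browser client.
// Sketch only; the route and server wiring are illustrative.
import express from "express";

const app = express();

app.get("/api/session", async (_req, res) => {
  // The standard API key stays server-side; only the ephemeral
  // client_secret in the response body reaches the browser.
  const r = await fetch("https://api.openai.com/v1/realtime/sessions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "gpt-4o-realtime-preview",
      modalities: ["text", "audio"],
      voice: "ash",
    }),
  });
  res.json(await r.json());
});

app.listen(3000);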

POST /realtime/sessions
application/json

Body Required

Create an ephemeral API key with the given session configuration.

  • modalities array[string]

    The set of modalities the model can respond with. To disable audio, set this to ["text"].

    Values are text or audio. Default value is ["text", "audio"].

  • model string

    The Realtime model used for this session.

    Values are gpt-4o-realtime-preview, gpt-4o-realtime-preview-2024-10-01, gpt-4o-realtime-preview-2024-12-17, gpt-4o-mini-realtime-preview, or gpt-4o-mini-realtime-preview-2024-12-17.

  • instructions string

    The default system instructions (i.e. system message) prepended to model calls. This field allows the client to guide the model on desired responses. The model can be instructed on response content and format (e.g. "be extremely succinct", "act friendly", "here are examples of good responses") and on audio behavior (e.g. "talk quickly", "inject emotion into your voice", "laugh frequently"). The model is not guaranteed to follow these instructions, but they provide guidance on the desired behavior.

    Note that the server sets default instructions which will be used if this field is not set and are visible in the session.created event at the start of the session.

  • voice string

    The voice to use when generating the audio. Supported voices are alloy, ash, ballad, coral, echo, fable, onyx, nova, sage, shimmer, and verse. Previews of the voices are available in the Text to speech guide.

  • input_audio_format string

    The format of input audio. Options are pcm16, g711_ulaw, or g711_alaw. For pcm16, input audio must be 16-bit PCM at a 24kHz sample rate, single channel (mono), and little-endian byte order.

    Values are pcm16, g711_ulaw, or g711_alaw. Default value is pcm16.
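
    Audio captured in the browser via the Web Audio API arrives as Float32 samples, so it must be converted before being sent as pcm16. A sketch of the conversion, assuming the audio is already mono and at 24kHz (e.g. from an AudioContext created with sampleRate: 24000):

    // Convert Float32 samples in [-1, 1] to 16-bit little-endian PCM,
    // the layout the pcm16 input format expects.
    function floatTo16BitPCM(float32: Float32Array): ArrayBuffer {
      const buffer = new ArrayBuffer(float32.length * 2);
      const view = new DataView(buffer);
      for (let i = 0; i < float32.length; i++) {
        const s = Math.max(-1, Math.min(1, float32[i]));
        view.setInt16(i * 2, s < 0 ? s * 0x8000 : s * 0x7fff, true); // true = little-endian
      }
      return buffer;
    }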

  • output_audio_format string

    The format of output audio. Options are pcm16, g711_ulaw, or g711_alaw. For pcm16, output audio is sampled at a rate of 24kHz.

    Values are pcm16, g711_ulaw, or g711_alaw. Default value is pcm16.

  • input_audio_transcription object

    Configuration for input audio transcription. Defaults to off; once enabled, it can be set to null to turn it off again. Input audio transcription is not native to the model, since the model consumes audio directly. Transcription runs asynchronously through the /audio/transcriptions endpoint and should be treated as guidance about the input audio content rather than a precise record of what the model heard. The client can optionally set the language and prompt for transcription; these offer additional guidance to the transcription service.

    • model string

      The model to use for transcription, current options are gpt-4o-transcribe, gpt-4o-mini-transcribe, and whisper-1.

    • language string

      The language of the input audio. Supplying the input language in ISO-639-1 (e.g. en) format will improve accuracy and latency.

    • prompt string

      Optional text to guide the model's style or to continue a previous audio segment. For whisper-1, the prompt is a list of keywords. For gpt-4o-transcribe models, the prompt is a free text string, for example "expect words related to technology".
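
    Put together, a transcription block might look like the following sketch (all values illustrative):

    // Illustrative input_audio_transcription configuration. The transcript
    // arrives asynchronously and is guidance, not ground truth.
    const input_audio_transcription = {
      model: "gpt-4o-transcribe",
      language: "en", // ISO-639-1; improves accuracy and latency
      prompt: "expect words related to technology", // free text for gpt-4o-transcribe models
    };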

  • turn_detection object

    Configuration for turn detection, either Server VAD or Semantic VAD. This can be set to null to turn off, in which case the client must manually trigger model responses. Server VAD means that the model detects the start and end of speech based on audio volume and responds at the end of user speech. Semantic VAD is more advanced and uses a turn detection model (in conjunction with VAD) to semantically estimate whether the user has finished speaking, then dynamically sets a timeout based on this probability. For example, if the user's audio trails off with "uhhm", the model scores a low probability of turn end and waits longer for the user to continue speaking. This can make conversations feel more natural, but may add latency. Both variants are sketched after this attribute list.

    • type string

      Type of turn detection.

      Values are server_vad or semantic_vad. Default value is server_vad.

    • eagerness string

      Used only for semantic_vad mode. The eagerness of the model to respond. low will wait longer for the user to continue speaking, high will respond more quickly. auto is the default and is equivalent to medium.

      Values are low, medium, high, or auto. Default value is auto.

    • threshold number

      Used only for server_vad mode. Activation threshold for VAD (0.0 to 1.0); defaults to 0.5. A higher threshold requires louder audio to activate the model, and thus might perform better in noisy environments.

    • prefix_padding_ms integer

      Used only for server_vad mode. Amount of audio to include before the VAD detected speech (in milliseconds). Defaults to 300ms.

    • silence_duration_ms integer

      Used only for server_vad mode. Duration of silence to detect speech stop (in milliseconds). Defaults to 500ms. With shorter values the model will respond more quickly, but may jump in on short pauses from the user.

    • create_response boolean

      Whether or not to automatically generate a response when a VAD stop event occurs.

      Default value is true.

    • interrupt_response boolean

      Whether or not to automatically interrupt any ongoing response with output to the default conversation (i.e. `conversation` set to `auto`) when a VAD start event occurs.

      Default value is true.
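
    As noted above, both variants are sketched here; the numbers are the documented defaults, shown for illustration:

    // Server VAD: volume-based detection, tuned with threshold and timing.
    const serverVad = {
      type: "server_vad",
      threshold: 0.5,           // raise in noisy environments
      prefix_padding_ms: 300,   // audio kept from before detected speech
      silence_duration_ms: 500, // shorter = faster responses, more interruptions
      create_response: true,
      interrupt_response: true,
    };

    // Semantic VAD: model-estimated end of turn, tuned with eagerness instead.
    const semanticVad = {
      type: "semantic_vad",
      eagerness: "low", // wait longer before assuming the user is done
    };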

  • input_audio_noise_reduction object

    Configuration for input audio noise reduction. This can be set to null to turn off. Noise reduction filters audio added to the input audio buffer before it is sent to VAD and the model. Filtering the audio can improve VAD and turn detection accuracy (reducing false positives) and model performance by improving perception of the input audio.

    • type string

      Type of noise reduction. near_field is for close-talking microphones such as headphones, far_field is for far-field microphones such as laptop or conference room microphones.

      Values are near_field or far_field.

  • tools array[object]

    Tools (functions) available to the model.

    • type string

      The type of the tool, i.e. function.

      Value is function.

    • name string

      The name of the function.

    • description string

      The description of the function, including guidance on when and how to call it, and guidance about what to tell the user when calling (if anything).

    • parameters object

      Parameters of the function in JSON Schema.
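
    A complete tool entry might look like this sketch (get_weather and its schema are illustrative, not part of the API):

    // An illustrative function tool the model may call during the session.
    const tools = [
      {
        type: "function",
        name: "get_weather",
        description:
          "Look up current weather for a city. Call this when the user asks " +
          "about the weather, and tell the user you are checking.",
        parameters: {
          type: "object",
          properties: {
            city: { type: "string", description: "City name, e.g. Berlin" },
          },
          required: ["city"],
        },
      },
    ];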

  • tool_choice string

    How the model chooses tools. Options are auto, none, required, or the name of a specific function.

    Default value is auto.

  • temperature number

    Sampling temperature for the model, limited to [0.6, 1.2]. For audio models a temperature of 0.8 is highly recommended for best performance.

    Default value is 0.8.

  • max_response_output_tokens integer | string

    Maximum number of output tokens for a single assistant response, inclusive of tool calls. Provide an integer between 1 and 4096 to limit output tokens, or inf for the maximum available tokens for a given model. Defaults to inf.

Responses

  • 200 application/json

    Session created successfully.

    • client_secret object Required

      Ephemeral key returned by the API.

      • value string Required

        Ephemeral key usable in client environments to authenticate connections to the Realtime API. Use this in client-side environments rather than a standard API token, which should only be used server-side.

      • expires_at integer Required

        Timestamp for when the token expires. Currently, all tokens expire after one minute.
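
      One way a browser can use the ephemeral key is the WebRTC flow described in the Realtime guide. A sketch, reusing the illustrative /api/session route from the server example above:

      // Browser side: fetch the ephemeral key from your own server, then
      // authenticate the SDP exchange with it. The key expires after about
      // a minute, so fetch it right before connecting.
      const { client_secret } = await (await fetch("/api/session")).json();

      const pc = new RTCPeerConnection();
      const mic = await navigator.mediaDevices.getUserMedia({ audio: true });
      pc.addTrack(mic.getTracks()[0]);

      const offer = await pc.createOffer();
      await pc.setLocalDescription(offer);

      const sdpResponse = await fetch(
        "https://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview",
        {
          method: "POST",
          headers: {
            Authorization: `Bearer ${client_secret.value}`,
            "Content-Type": "application/sdp",
          },
          body: offer.sdp,
        },
      );
      await pc.setRemoteDescription({ type: "answer", sdp: await sdpResponse.text() });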

    • modalities array[string]

      The set of modalities the model can respond with. To disable audio, set this to ["text"].

      Values are text or audio.

    • instructions string

      The default system instructions (i.e. system message) prepended to model calls. This field allows the client to guide the model on desired responses. The model can be instructed on response content and format (e.g. "be extremely succinct", "act friendly", "here are examples of good responses") and on audio behavior (e.g. "talk quickly", "inject emotion into your voice", "laugh frequently"). The model is not guaranteed to follow these instructions, but they provide guidance on the desired behavior.

      Note that the server sets default instructions which will be used if this field is not set and are visible in the session.created event at the start of the session.

    • voice string

      The voice to use when generating the audio. Supported voices are alloy, ash, ballad, coral, echo, fable, onyx, nova, sage, shimmer, and verse. Previews of the voices are available in the Text to speech guide.

    • input_audio_format string

      The format of input audio. Options are pcm16, g711_ulaw, or g711_alaw.

    • output_audio_format string

      The format of output audio. Options are pcm16, g711_ulaw, or g711_alaw.

    • input_audio_transcription object

      Configuration for input audio transcription. Defaults to off; once enabled, it can be set to null to turn it off again. Input audio transcription is not native to the model, since the model consumes audio directly. Transcription runs asynchronously through the /audio/transcriptions endpoint and should be treated as rough guidance rather than the representation understood by the model.

      • model string

        The model to use for transcription. Current options are gpt-4o-transcribe, gpt-4o-mini-transcribe, and whisper-1.

    • turn_detection object

      Configuration for turn detection. Can be set to null to turn off. Server VAD means that the model will detect the start and end of speech based on audio volume and respond at the end of user speech.

      • type string

        Type of turn detection. Values are server_vad or semantic_vad.

      • threshold number

        Activation threshold for VAD (0.0 to 1.0); defaults to 0.5. A higher threshold requires louder audio to activate the model, and thus might perform better in noisy environments.

      • prefix_padding_ms integer

        Amount of audio to include before the VAD detected speech (in milliseconds). Defaults to 300ms.

      • silence_duration_ms integer

        Duration of silence to detect speech stop (in milliseconds). Defaults to 500ms. With shorter values the model will respond more quickly, but may jump in on short pauses from the user.

    • tools array[object]

      Tools (functions) available to the model.

      • type string

        The type of the tool, i.e. function.

        Value is function.

      • name string

        The name of the function.

      • description string

        The description of the function, including guidance on when and how to call it, and guidance about what to tell the user when calling (if anything).

      • parameters object

        Parameters of the function in JSON Schema.

    • tool_choice string

      How the model chooses tools. Options are auto, none, required, or the name of a specific function.

    • temperature number

      Sampling temperature for the model, limited to [0.6, 1.2]. Defaults to 0.8.

    • max_response_output_tokens integer | string

      Maximum number of output tokens for a single assistant response, inclusive of tool calls. Provide an integer between 1 and 4096 to limit output tokens, or inf for the maximum available tokens for a given model. Defaults to inf.

POST /realtime/sessions
curl \
 --request POST 'https://api.openai.com/v1/realtime/sessions' \
 --header "Authorization: Bearer $OPENAI_API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"modalities":["text","audio"],"model":"gpt-4o-realtime-preview","instructions":"You are a friendly assistant.","voice":"ash","input_audio_format":"pcm16","output_audio_format":"pcm16","input_audio_transcription":{"model":"gpt-4o-transcribe","language":"en","prompt":"expect words related to technology"},"turn_detection":{"type":"server_vad","threshold":0.5,"prefix_padding_ms":300,"silence_duration_ms":500,"create_response":true,"interrupt_response":true},"input_audio_noise_reduction":{"type":"near_field"},"tools":[{"type":"function","name":"get_weather","description":"Look up current weather for a city.","parameters":{"type":"object","properties":{"city":{"type":"string"}},"required":["city"]}}],"tool_choice":"auto","temperature":0.8,"max_response_output_tokens":4096}'
Request examples
{
  "modalities": [
    "text",
    "audio"
  ],
  "model": "gpt-4o-realtime-preview",
  "instructions": "You are a friendly assistant.",
  "voice": "ash",
  "input_audio_format": "pcm16",
  "output_audio_format": "pcm16",
  "input_audio_transcription": {
    "model": "gpt-4o-transcribe",
    "language": "en",
    "prompt": "expect words related to technology"
  },
  "turn_detection": {
    "type": "server_vad",
    "threshold": 0.5,
    "prefix_padding_ms": 300,
    "silence_duration_ms": 500,
    "create_response": true,
    "interrupt_response": true
  },
  "input_audio_noise_reduction": {
    "type": "near_field"
  },
  "tools": [
    {
      "type": "function",
      "name": "get_weather",
      "description": "Look up current weather for a city.",
      "parameters": {
        "type": "object",
        "properties": {
          "city": {
            "type": "string"
          }
        },
        "required": ["city"]
      }
    }
  ],
  "tool_choice": "auto",
  "temperature": 0.8,
  "max_response_output_tokens": 4096
}
Response examples (200)
{
  "client_secret": {
    "value": "ek_abc123",
    "expires_at": 1717000000
  },
  "modalities": [
    "text",
    "audio"
  ],
  "instructions": "You are a friendly assistant.",
  "voice": "ash",
  "input_audio_format": "pcm16",
  "output_audio_format": "pcm16",
  "input_audio_transcription": {
    "model": "whisper-1"
  },
  "turn_detection": {
    "type": "server_vad",
    "threshold": 0.5,
    "prefix_padding_ms": 300,
    "silence_duration_ms": 500
  },
  "tools": [
    {
      "type": "function",
      "name": "get_weather",
      "description": "Look up current weather for a city.",
      "parameters": {
        "type": "object",
        "properties": {
          "city": {
            "type": "string"
          }
        },
        "required": ["city"]
      }
    }
  ],
  "tool_choice": "auto",
  "temperature": 0.8,
  "max_response_output_tokens": "inf"
}