Create an ephemeral API token for use in client-side applications with the Realtime API, specifically for realtime transcriptions. The session can be configured with the same parameters as the `transcription_session.update` client event. The response is a session object plus a `client_secret` key containing an ephemeral API token that browser clients can use to authenticate with the Realtime API.
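For orientation, here is a minimal server-side sketch in TypeScript of the minting step. The endpoint URL and response shape follow the examples below; the surrounding handler context and the `OPENAI_API_KEY` environment variable are assumptions for illustration, and the standard API key must never be shipped to the browser.

```typescript
// Minimal sketch: mint an ephemeral transcription token server-side.
// Assumes OPENAI_API_KEY is set in the environment (Node 18+ for fetch).
async function mintTranscriptionToken(): Promise<string> {
  const res = await fetch(
    "https://api.openai.com/v1/realtime/transcription_sessions",
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
        "Content-Type": "application/json",
      },
      // Any of the session parameters documented below can go here.
      body: JSON.stringify({
        input_audio_format: "pcm16",
        input_audio_transcription: { model: "gpt-4o-transcribe" },
      }),
    },
  );
  if (!res.ok) throw new Error(`Token request failed: ${res.status}`);
  const session = await res.json();
  // The ephemeral token is returned under client_secret.value and
  // expires at client_secret.expires_at (see the response example below).
  return session.client_secret.value;
}
```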
Body (Required)
Create an ephemeral API key with the given session configuration.
- `modalities`: The set of modalities the model can respond with. To disable audio, set this to `["text"]`. Values are `text` or `audio`. Default value is `["text", "audio"]`.
- `input_audio_format`: The format of input audio. Options are `pcm16`, `g711_ulaw`, or `g711_alaw`. For `pcm16`, input audio must be 16-bit PCM at a 24kHz sample rate, single channel (mono), and little-endian byte order. Default value is `pcm16`.
- `input_audio_transcription`: Configuration for input audio transcription. The client can optionally set the language and prompt for transcription; these offer additional guidance to the transcription service.
- `turn_detection`: Configuration for turn detection, either Server VAD or Semantic VAD (see the sketch after this list). This can be set to `null` to turn off, in which case the client must manually trigger model response. Server VAD means that the model will detect the start and end of speech based on audio volume and respond at the end of user speech. Semantic VAD is more advanced and uses a turn detection model (in conjunction with VAD) to semantically estimate whether the user has finished speaking, then dynamically sets a timeout based on this probability. For example, if user audio trails off with "uhhm", the model will score a low probability of turn end and wait longer for the user to continue speaking. This can be useful for more natural conversations, but may have higher latency.
- `input_audio_noise_reduction`: Configuration for input audio noise reduction. This can be set to `null` to turn off. Noise reduction filters audio added to the input audio buffer before it is sent to VAD and the model. Filtering the audio can improve VAD and turn detection accuracy (reducing false positives) and model performance by improving perception of the input audio.
- `include`: The set of items to include in the transcription. Currently available items are: `item.input_audio_transcription.logprobs`.
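To make the `turn_detection` and `input_audio_noise_reduction` options concrete, here is a TypeScript sketch of two request bodies, one per VAD mode. The field names come from the examples below; the numeric values are illustrative placeholders rather than recommendations, and the `semantic_vad` type string is an assumption inferred from the `server_vad` naming.

```typescript
// Two illustrative session configurations. Numeric values are
// placeholders, not recommended settings.

// Server VAD: turn boundaries are detected from audio volume alone.
const serverVadBody = {
  input_audio_format: "pcm16",
  input_audio_transcription: { model: "gpt-4o-transcribe", language: "en" },
  turn_detection: {
    type: "server_vad",
    threshold: 0.5,           // volume threshold for detecting speech
    prefix_padding_ms: 300,   // audio retained from before speech onset
    silence_duration_ms: 500, // silence required to close the turn
  },
  input_audio_noise_reduction: { type: "near_field" },
  include: ["item.input_audio_transcription.logprobs"],
};

// Semantic VAD: a turn-detection model estimates whether the user has
// finished speaking and sets the end-of-turn timeout dynamically.
// The "semantic_vad" type string is an assumption for illustration.
const semanticVadBody = {
  input_audio_format: "pcm16",
  input_audio_transcription: { model: "gpt-4o-transcribe" },
  turn_detection: {
    type: "semantic_vad",
    eagerness: "auto", // how aggressively to end the user's turn
  },
  // Set turn_detection to null instead to trigger turns manually.
};
```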
curl \
--request POST 'https://api.openai.com/v1/realtime/transcription_sessions' \
--header "Authorization: Bearer $ACCESS_TOKEN" \
--header "Content-Type: application/json" \
--data '{"modalities":["text","audio"],"input_audio_format":"pcm16","input_audio_transcription":{"model":"gpt-4o-transcribe","language":"en","prompt":""},"turn_detection":{"type":"server_vad","threshold":0.5,"prefix_padding_ms":300,"silence_duration_ms":500,"create_response":true,"interrupt_response":true},"input_audio_noise_reduction":{"type":"near_field"},"include":["item.input_audio_transcription.logprobs"]}'
{
  "modalities": [
    "text",
    "audio"
  ],
  "input_audio_format": "pcm16",
  "input_audio_transcription": {
    "model": "gpt-4o-transcribe",
    "language": "en",
    "prompt": ""
  },
  "turn_detection": {
    "type": "server_vad",
    "threshold": 0.5,
    "prefix_padding_ms": 300,
    "silence_duration_ms": 500,
    "create_response": true,
    "interrupt_response": true
  },
  "input_audio_noise_reduction": {
    "type": "near_field"
  },
  "include": [
    "item.input_audio_transcription.logprobs"
  ]
}
{
  "client_secret": {
    "value": "string",
    "expires_at": 42
  },
  "modalities": [
    "text"
  ],
  "input_audio_format": "pcm16",
  "input_audio_transcription": {
    "model": "gpt-4o-transcribe",
    "language": "en",
    "prompt": ""
  },
  "turn_detection": {
    "type": "server_vad",
    "threshold": 0.5,
    "prefix_padding_ms": 300,
    "silence_duration_ms": 500
  }
}
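On the client side, the token in `client_secret.value` stands in for a standard API key when opening a Realtime connection. The following browser sketch is an assumption-heavy illustration: the `?intent=transcription` query parameter and the subprotocol-based auth pattern are not documented in this section, so verify them against the Realtime API connection guide before relying on them.

```typescript
// Browser-side sketch: open a Realtime connection with the ephemeral
// token minted above. The URL query parameter and the subprotocol auth
// pattern are assumptions for illustration; check the Realtime API
// connection docs for the authoritative handshake.
function connectForTranscription(ephemeralToken: string): WebSocket {
  const ws = new WebSocket(
    "wss://api.openai.com/v1/realtime?intent=transcription", // assumed URL
    [
      "realtime",
      // Browsers cannot attach an Authorization header to a WebSocket
      // upgrade, so the token travels in a subprotocol instead.
      `openai-insecure-api-key.${ephemeralToken}`,
      "openai-beta.realtime-v1",
    ],
  );
  ws.addEventListener("message", (event) => {
    // Transcription results arrive as JSON server events on this socket.
    const serverEvent = JSON.parse(event.data);
    console.log(serverEvent.type, serverEvent);
  });
  return ws;
}
```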