Get a stored chat completion. Only chat completions that were created with the `store` parameter set to `true` can be retrieved.

GET /chat/completions/{completion_id}

Path parameters

  • completion_id string Required

    The ID of the chat completion to retrieve.

Responses

  • 200 application/json

    A chat completion

    • id string Required

      A unique identifier for the chat completion.

    • choices array[object] Required

      A list of chat completion choices. Can contain more than one element if n is greater than 1.

      • finish_reason string Required

        The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool, or function_call (deprecated) if the model called a function.

        Values are stop, length, tool_calls, content_filter, or function_call.

      • index integer Required

        The index of the choice in the list of choices.

      • message object Required

        A chat completion message generated by the model.

        • content string | null Required

          The contents of the message.

        • refusal string | null Required

          The refusal message generated by the model.

        • tool_calls array[object]

          The tool calls generated by the model, such as function calls.

          • id string Required

            The ID of the tool call.

          • type string Required

            The type of the tool. Currently, only function is supported.

            Value is function.

          • function object Required

            The function that the model called.

            • name string Required

              The name of the function to call.

            • arguments string Required

              The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function. A minimal validation sketch appears after the response example at the end of this page.

        • annotations array[object]

          Annotations for the message, when applicable, for example when using the web search tool. Each annotation is a URL citation when using web search.

          • type string Required

            The type of the URL citation. Always url_citation.

            Value is url_citation.

          • url_citation object Required

            A URL citation when using web search.

            • end_index integer Required

              The index of the last character of the URL citation in the message.

            • start_index integer Required

              The index of the first character of the URL citation in the message.

            • url string Required

              The URL of the web resource.

            • title string Required

              The title of the web resource.

        • role string Required

          The role of the author of this message.

          Value is assistant.

        • function_call object Deprecated

          Deprecated and replaced by tool_calls. The name and arguments of a function that should be called, as generated by the model.

          • arguments string Required

            The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function.

          • name string Required

            The name of the function to call.

        • audio object | null

          If the audio output modality is requested, this object contains data about the audio response from the model.

          • id string Required

            Unique identifier for this audio response.

          • expires_at integer Required

            The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations.

          • data string Required

            Base64 encoded audio bytes generated by the model, in the format specified in the request.

          • transcript string Required

            Transcript of the audio generated by the model.

      • logprobs object | null Required

        Log probability information for the choice.

        • content array[object] | null Required

          A list of message content tokens with log probability information.

          • token string Required

            The token.

          • logprob number Required

            The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely.

          • bytes array[integer] | null Required

            A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token. A sketch showing how to combine these bytes into text appears after the response example at the end of this page.

          • top_logprobs array[object] Required

            List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned.

            • token string Required

              The token.

            • logprob number Required

              The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely.

            • bytes array[integer] | null Required

              A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token.

        • refusal array[object] | null Required

          A list of message refusal tokens with log probability information.

          • token string Required

            The token.

          • logprob number Required

            The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely.

          • bytes array[integer] | null Required

            A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token.

          • top_logprobs array[object] Required

            List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned.

            • token string Required

              The token.

            • logprob number Required

              The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely.

            • bytes array[integer] | null Required

              A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token.

    • created integer Required

      The Unix timestamp (in seconds) of when the chat completion was created.

    • model string Required

      The model used for the chat completion.

    • service_tier string | null

      The service tier used for processing the request.

      Values are scale or default.

    • system_fingerprint string

      This fingerprint represents the backend configuration that the model runs with.

      Can be used in conjunction with the seed request parameter to understand when backend changes have been made that might impact determinism.

    • object string Required

      The object type, which is always chat.completion.

      Value is chat.completion.

    • usage object

      Usage statistics for the completion request.

      • completion_tokens integer Required

        Number of tokens in the generated completion.

        Default value is 0.

      • prompt_tokens integer Required

        Number of tokens in the prompt.

        Default value is 0.

      • total_tokens integer Required

        Total number of tokens used in the request (prompt + completion).

        Default value is 0.

      • completion_tokens_details object

        Breakdown of tokens used in a completion.

        • accepted_prediction_tokens integer

          When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion.

          Default value is 0.

        • audio_tokens integer

          Audio output tokens generated by the model.

          Default value is 0.

        • reasoning_tokens integer

          Tokens generated by the model for reasoning.

          Default value is 0.

        • rejected_prediction_tokens integer

          When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits.

          Default value is 0.

      • prompt_tokens_details object

        Breakdown of tokens used in the prompt.

        • audio_tokens integer

          Audio input tokens present in the prompt.

          Default value is 0.

        • cached_tokens integer

          Cached tokens present in the prompt.

          Default value is 0.

GET /chat/completions/{completion_id}
curl \
 --request GET 'https://api.openai.com/v1/chat/completions/{completion_id}' \
 --header "Authorization: Bearer $ACCESS_TOKEN"
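
An equivalent request in Python, shown as a minimal sketch using the requests library. The OPENAI_API_KEY environment variable and the example completion ID are placeholders used for illustration, not part of the API specification.

import os
import requests

# Placeholder ID of a chat completion that was created with store=true.
completion_id = "chatcmpl-abc123"

response = requests.get(
    f"https://api.openai.com/v1/chat/completions/{completion_id}",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    timeout=30,
)
response.raise_for_status()

# The parsed body matches the response schema described above.
completion = response.json()
print(completion["model"])
print(completion["choices"][0]["message"]["content"])
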
Response examples (200)
{
  "id": "string",
  "choices": [
    {
      "finish_reason": "stop",
      "index": 42,
      "message": {
        "content": "string",
        "refusal": "string",
        "tool_calls": [
          {
            "id": "string",
            "type": "function",
            "function": {
              "name": "string",
              "arguments": "string"
            }
          }
        ],
        "annotations": [
          {
            "type": "url_citation",
            "url_citation": {
              "end_index": 42,
              "start_index": 42,
              "url": "string",
              "title": "string"
            }
          }
        ],
        "role": "assistant",
        "function_call": {
          "arguments": "string",
          "name": "string"
        },
        "audio": {
          "id": "string",
          "expires_at": 42,
          "data": "string",
          "transcript": "string"
        }
      },
      "logprobs": {
        "content": [
          {
            "token": "string",
            "logprob": 42.0,
            "bytes": [
              42
            ],
            "top_logprobs": [
              {
                "token": "string",
                "logprob": 42.0,
                "bytes": [
                  42
                ]
              }
            ]
          }
        ],
        "refusal": [
          {
            "token": "string",
            "logprob": 42.0,
            "bytes": [
              42
            ],
            "top_logprobs": [
              {
                "token": "string",
                "logprob": 42.0,
                "bytes": [
                  42
                ]
              }
            ]
          }
        ]
      }
    }
  ],
  "created": 42,
  "model": "string",
  "service_tier": "scale",
  "system_fingerprint": "string",
  "object": "chat.completion",
  "usage": {
    "completion_tokens": 0,
    "prompt_tokens": 0,
    "total_tokens": 0,
    "completion_tokens_details": {
      "accepted_prediction_tokens": 0,
      "audio_tokens": 0,
      "reasoning_tokens": 0,
      "rejected_prediction_tokens": 0
    },
    "prompt_tokens_details": {
      "audio_tokens": 0,
      "cached_tokens": 0
    }
  }
}
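
Because the function arguments in tool_calls are model-generated JSON, they may be malformed or include parameters your function does not define. The sketch below illustrates one way to validate them before dispatch; it assumes completion holds the parsed JSON body returned by this endpoint (as in the retrieval sketch above), and the get_weather function and its parameter names are hypothetical.

import json

# Hypothetical allow-list of locally defined functions and their parameters.
EXPECTED_PARAMS = {"get_weather": {"location", "unit"}}

def validate_tool_call(tool_call: dict) -> dict:
    name = tool_call["function"]["name"]
    raw_arguments = tool_call["function"]["arguments"]

    try:
        arguments = json.loads(raw_arguments)  # the model may emit invalid JSON
    except json.JSONDecodeError as exc:
        raise ValueError(f"Tool call {tool_call['id']} has malformed arguments") from exc

    if not isinstance(arguments, dict):
        raise ValueError(f"Arguments for {name} are not a JSON object")

    allowed = EXPECTED_PARAMS.get(name)
    if allowed is None:
        raise ValueError(f"Model called an unknown function: {name}")

    unexpected = set(arguments) - allowed
    if unexpected:
        raise ValueError(f"Unexpected parameters for {name}: {sorted(unexpected)}")

    # Safe to pass to the real implementation, e.g. get_weather(**arguments).
    return arguments

for choice in completion["choices"]:
    for tool_call in choice["message"].get("tool_calls") or []:
        arguments = validate_tool_call(tool_call)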
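
When log probabilities were requested, the bytes arrays under logprobs can be combined to recover text that spans multiple tokens, as described in the response attributes above. A short sketch, again assuming completion is the parsed response body:

# Recover the message text from the UTF-8 byte representation of each token.
logprobs = completion["choices"][0]["logprobs"]
if logprobs is not None and logprobs["content"] is not None:
    all_bytes = []
    for token_info in logprobs["content"]:
        if token_info["bytes"] is not None:  # bytes can be null for some tokens
            all_bytes.extend(token_info["bytes"])
    print(bytes(all_bytes).decode("utf-8"))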