Path parameters
- thread_id (string, required): The ID of the thread that was run.
- run_id (string, required): The ID of the run to retrieve.
Responses
- OK
Response attributes:
- id (string): The identifier, which can be referenced in API endpoints.
- object (string): The object type, which is always thread.run.
- created_at (integer): The Unix timestamp (in seconds) for when the run was created.
- thread_id (string): The ID of the thread that was executed on as a part of this run.
- assistant_id (string): The ID of the assistant used for execution of this run.
- status (string): The status of the run, which can be one of queued, in_progress, requires_action, cancelling, cancelled, failed, completed, incomplete, or expired.
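The status values above split into states a client should keep polling on and terminal states that never change again. A minimal Python sketch of that decision, using only the status names from this list (the helper itself is hypothetical, not part of any SDK):

```python
# Statuses after which the run will not change again (terminal states),
# versus statuses worth polling on. Names come from the list above.
TERMINAL = {"cancelled", "failed", "completed", "incomplete", "expired"}
NON_TERMINAL = {"queued", "in_progress", "cancelling"}
# requires_action is special: the run is paused until tool outputs are submitted.

def next_step(status: str) -> str:
    """Decide what a polling client should do for a given run status."""
    if status in NON_TERMINAL:
        return "poll_again"
    if status == "requires_action":
        return "submit_tool_outputs"
    if status in TERMINAL:
        return "done"
    raise ValueError(f"unknown run status: {status}")

print(next_step("queued"))           # poll_again
print(next_step("requires_action"))  # submit_tool_outputs
print(next_step("completed"))        # done
```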
- required_action (object | null): Details on the action required to continue the run. Will be null if no action is required.
  - type (string): For now, this is always submit_tool_outputs.
  - submit_tool_outputs (object): Details on the tool outputs needed for this run to continue.
    - tool_calls (array of objects): A list of the relevant tool calls.
      - id (string): The ID of the tool call. This ID must be referenced when you submit the tool outputs using the Submit tool outputs to run endpoint.
      - type (string): The type of tool call the output is required for. For now, this is always function.
      - function (object): The function definition.
        - name (string): The name of the function.
        - arguments (string): The arguments that the model expects you to pass to the function.
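When a run reaches requires_action, each tool call id above must be echoed back as a tool_call_id when submitting outputs. A sketch under made-up data (the run payload, function name, and output value are all invented for illustration):

```python
import json

# A made-up run payload in the shape documented above.
run = {
    "status": "requires_action",
    "required_action": {
        "type": "submit_tool_outputs",
        "submit_tool_outputs": {
            "tool_calls": [
                {"id": "call_1", "type": "function",
                 "function": {"name": "get_weather",
                              "arguments": json.dumps({"city": "Paris"})}},
            ]
        },
    },
}

def pending_tool_calls(run: dict) -> list:
    """Return the tool calls that must be answered before the run continues."""
    action = run.get("required_action")
    if not action or action["type"] != "submit_tool_outputs":
        return []
    return action["submit_tool_outputs"]["tool_calls"]

# Build the tool_outputs list expected by the submit endpoint:
# each entry pairs a tool_call_id with that tool's result.
outputs = [
    {"tool_call_id": call["id"], "output": "sunny"}  # result is made up
    for call in pending_tool_calls(run)
]
print(outputs)  # [{'tool_call_id': 'call_1', 'output': 'sunny'}]
```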
- last_error (object | null): The last error associated with this run. Will be null if there are no errors.
- expires_at (integer): The Unix timestamp (in seconds) for when the run will expire.
- started_at (integer): The Unix timestamp (in seconds) for when the run was started.
- cancelled_at (integer): The Unix timestamp (in seconds) for when the run was cancelled.
- failed_at (integer): The Unix timestamp (in seconds) for when the run failed.
- completed_at (integer): The Unix timestamp (in seconds) for when the run was completed.
- incomplete_details (object | null): Details on why the run is incomplete. Will be null if the run is not incomplete.
- model (string): The model that the assistant used for this run.
- instructions (string): The instructions that the assistant used for this run.
- tools (array): The list of tools that the assistant used for this run. Not more than 20 elements. Default value is [] (empty). Each element is one of the following:
  - File search tool:
    - type (string): The type of tool being defined: file_search.
    - file_search (object): Overrides for the file search tool.
      - max_num_results (integer): The maximum number of results the file search tool should output. The default is 20 for gpt-4* models and 5 for gpt-3.5-turbo. This number should be between 1 and 50 inclusive. Note that the file search tool may output fewer than max_num_results results. See the file search tool documentation for more information. Minimum value is 1, maximum value is 50.
      - ranking_options (object): The ranking options for the file search. If not specified, the file search tool will use the auto ranker and a score_threshold of 0. See the file search tool documentation for more information.
        - ranker (string): The ranker to use for the file search. If not specified, the auto ranker is used. Values are auto or default_2024_08_21.
        - score_threshold (number): The score threshold for the file search. Must be a floating point number between 0 and 1. Minimum value is 0, maximum value is 1.
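The numeric bounds above can be checked client-side before sending a file_search tool entry. A hypothetical validator sketch (not part of any SDK):

```python
def validate_file_search(tool: dict) -> None:
    """Check a file_search tool entry against the documented bounds."""
    assert tool["type"] == "file_search"
    fs = tool.get("file_search", {})
    n = fs.get("max_num_results")
    if n is not None:
        assert 1 <= n <= 50, "max_num_results must be between 1 and 50"
    ranking = fs.get("ranking_options", {})
    assert ranking.get("ranker", "auto") in {"auto", "default_2024_08_21"}
    assert 0 <= ranking.get("score_threshold", 0) <= 1, \
        "score_threshold must be between 0 and 1"

tool = {
    "type": "file_search",
    "file_search": {
        "max_num_results": 5,
        "ranking_options": {"ranker": "auto", "score_threshold": 0.5},
    },
}
validate_file_search(tool)  # passes silently
```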
  - Function tool:
    - type (string): The type of tool being defined: function.
    - function (object): The function definition.
      - description (string): A description of what the function does, used by the model to choose when and how to call the function.
      - name (string): The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.
      - parameters (object): The parameters the function accepts, described as a JSON Schema object. See the guide for examples, and the JSON Schema reference for documentation about the format. Omitting parameters defines a function with an empty parameter list. Additional properties are allowed.
      - strict (boolean): Whether to enable strict schema adherence when generating the function call. If set to true, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is true. Learn more about Structured Outputs in the function calling guide. Default value is false.
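The name constraint above (a-z, A-Z, 0-9, underscores and dashes, at most 64 characters) maps to a simple regular expression; a sketch:

```python
import re

# Pattern derived from the documented constraint: letters, digits,
# underscores, and dashes, 1-64 characters.
NAME_RE = re.compile(r"^[a-zA-Z0-9_-]{1,64}$")

def is_valid_function_name(name: str) -> bool:
    return NAME_RE.fullmatch(name) is not None

print(is_valid_function_name("get_weather"))  # True
print(is_valid_function_name("bad name!"))    # False
print(is_valid_function_name("x" * 65))       # False
```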
- metadata (object): Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard. Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
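The metadata limits (at most 16 pairs, 64-character keys, 512-character values) can likewise be enforced locally before a write. A hypothetical helper:

```python
def validate_metadata(metadata: dict) -> None:
    """Enforce the documented metadata limits client-side."""
    assert len(metadata) <= 16, "at most 16 key-value pairs"
    for key, value in metadata.items():
        assert isinstance(key, str) and len(key) <= 64
        assert isinstance(value, str) and len(value) <= 512

validate_metadata({"batch": "nightly", "owner": "team-search"})  # ok
try:
    validate_metadata({f"k{i}": "v" for i in range(17)})  # too many pairs
except AssertionError as err:
    print("rejected:", err)
```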
- usage (object | null): Usage statistics related to the run. This value will be null if the run is not in a terminal state (i.e. in_progress, queued, etc.).
- temperature (number): The sampling temperature used for this run. If not set, defaults to 1.
- top_p (number): The nucleus sampling value used for this run. If not set, defaults to 1.
- max_prompt_tokens (integer): The maximum number of prompt tokens specified to have been used over the course of the run. Minimum value is 256.
- max_completion_tokens (integer): The maximum number of completion tokens specified to have been used over the course of the run. Minimum value is 256.
- truncation_strategy (object): Controls for how a thread will be truncated prior to the run. Use this to control the initial context window of the run.
  - type (string): The truncation strategy to use for the thread. The default is auto. If set to last_messages, the thread will be truncated to the n most recent messages in the thread. When set to auto, messages in the middle of the thread will be dropped to fit the context length of the model, max_prompt_tokens. Values are auto or last_messages.
  - last_messages (integer | null): The number of most recent messages from the thread when constructing the context for the run. Minimum value is 1.
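The last_messages strategy can be simulated locally to see what context survives truncation (the message list is invented; the real truncation happens server-side):

```python
def truncate(messages: list, strategy: dict) -> list:
    """Mimic the documented truncation_strategy on a local message list."""
    if strategy.get("type") == "last_messages":
        n = strategy["last_messages"]
        assert n >= 1, "last_messages has a minimum value of 1"
        return messages[-n:]
    # "auto" drops messages from the middle server-side to fit the model's
    # context length; locally we just return everything unchanged.
    return messages

thread = ["m1", "m2", "m3", "m4", "m5"]
print(truncate(thread, {"type": "last_messages", "last_messages": 2}))  # ['m4', 'm5']
print(truncate(thread, {"type": "auto"}))  # all five messages
```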
- tool_choice (string | object, required): Controls which (if any) tool the model calls. One of:
  - A string: none means the model will not call any tools and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools before responding to the user. Values are none, auto, or required.
  - An object that specifies a tool the model should use. Use this to force the model to call a specific tool.
- parallel_tool_calls (boolean): Whether to enable parallel function calling during tool use. Default value is true.
- response_format (string | null | object, required): Specifies the format that the model must output. Compatible with GPT-4o, GPT-4 Turbo, and all GPT-3.5 Turbo models since gpt-3.5-turbo-1106.
  Setting to { "type": "json_schema", "json_schema": {...} } enables Structured Outputs, which ensures the model will match your supplied JSON schema. Learn more in the Structured Outputs guide.
  Setting to { "type": "json_object" } enables JSON mode, which ensures the message the model generates is valid JSON.
  Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if finish_reason="length", which indicates the generation exceeded max_tokens or the conversation exceeded the max context length.
  One of:
  - auto (string, the default value): Default response format. Used to generate text responses.
  - JSON object response format: An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so.
  - JSON Schema response format: Used to generate structured JSON responses. Learn more about Structured Outputs.
    - type (string): The type of response format being defined. Always json_schema.
    - json_schema (object): Structured Outputs configuration options, including a JSON Schema.
      - description (string): A description of what the response format is for, used by the model to determine how to respond in the format.
      - name (string): The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.
      - schema (object): The schema for the response format, described as a JSON Schema object. Learn how to build JSON schemas here. Additional properties are allowed.
      - strict (boolean): Whether to enable strict schema adherence when generating the output. If set to true, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is true. To learn more, read the Structured Outputs guide. Default value is false.
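Putting the json_schema fields above together, a complete response_format payload might look like this (the city_info schema is invented for illustration; only the surrounding shape comes from this reference):

```python
import json

# "city_info" and its properties are made up for this example; the
# outer {"type": "json_schema", "json_schema": {...}} shape is the
# structure documented above.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "city_info",
        "description": "Basic facts about a city.",
        "schema": {
            "type": "object",
            "properties": {
                "name": {"type": "string"},
                "population": {"type": "integer"},
            },
            "required": ["name", "population"],
            "additionalProperties": False,
        },
        "strict": True,
    },
}

print(json.dumps(response_format, indent=2))
```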
curl \
--request GET 'https://api.openai.com/v1/threads/{thread_id}/runs/{run_id}' \
--header "Authorization: Bearer $ACCESS_TOKEN"
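The same request can be assembled in Python with the standard library (the thread and run IDs and the token are placeholders):

```python
import urllib.request

thread_id = "thread_abc123"  # placeholder IDs for illustration
run_id = "run_abc123"

url = f"https://api.openai.com/v1/threads/{thread_id}/runs/{run_id}"
req = urllib.request.Request(
    url,
    headers={"Authorization": "Bearer sk-example"},  # placeholder token
)
# urllib.request.urlopen(req) would perform the GET; omitted here so the
# snippet runs without network access or a real key.
print(req.full_url)
```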
{
"id": "string",
"object": "thread.run",
"created_at": 42,
"thread_id": "string",
"assistant_id": "string",
"status": "queued",
"required_action": {
"type": "submit_tool_outputs",
"submit_tool_outputs": {
"tool_calls": [
{
"id": "string",
"type": "function",
"function": {
"name": "string",
"arguments": "string"
}
}
]
}
},
"last_error": {
"code": "server_error",
"message": "string"
},
"expires_at": 42,
"started_at": 42,
"cancelled_at": 42,
"failed_at": 42,
"completed_at": 42,
"incomplete_details": {
"reason": "max_completion_tokens"
},
"model": "string",
"instructions": "string",
"tools": [
{
"type": "code_interpreter"
}
],
"metadata": {
"additionalProperty1": "string",
"additionalProperty2": "string"
},
"usage": {
"completion_tokens": 42,
"prompt_tokens": 42,
"total_tokens": 42
},
"temperature": 42.0,
"top_p": 42.0,
"max_prompt_tokens": 42,
"max_completion_tokens": 42,
"truncation_strategy": {
"type": "auto",
"last_messages": 42
},
"tool_choice": "none",
"parallel_tool_calls": true,
"response_format": "auto"
}