Classifies whether text and/or image inputs are potentially harmful. Learn more in the [moderation guide](/docs/guides/moderation).

POST /moderations
application/json

Body Required

  • input string | array[string] | array[object] Required

    Input (or inputs) to classify. Can be a single string, an array of strings, or an array of multi-modal input objects similar to other models.

    One of:

    A string of text to classify for moderation.

    An array of strings to classify for moderation.

    An array of multi-modal inputs to the moderation model, where each item is either a text input object or an image input object (see the request sketch after this list).
  • model string

    The content moderation model you would like to use. Learn more in the [moderation guide](/docs/guides/moderation), and learn about available models here.

    Values are omni-moderation-latest, omni-moderation-2024-09-26, text-moderation-latest, or text-moderation-stable. Default value is omni-moderation-latest.
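
For multi-modal input, each array item is an object. The sketch below shows one request carrying both a text item and an image item; it assumes the {"type": "text", ...} and {"type": "image_url", ...} object shapes, which this page does not spell out, so treat the exact field names as illustrative.

curl \
 --request POST 'https://api.openai.com/v1/moderations' \
 --header "Authorization: Bearer $OPENAI_API_KEY" \
 --header "Content-Type: application/json" \
 --data '{
   "model": "omni-moderation-latest",
   "input": [
     {"type": "text", "text": "...text to classify goes here..."},
     {"type": "image_url", "image_url": {"url": "https://example.com/image.png"}}
   ]
 }'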

Responses

  • 200 application/json

    OK

    • id string Required

      The unique identifier for the moderation request.

    • model string Required

      The model used to generate the moderation results.

    • results array[object] Required

      A list of moderation objects.

      • flagged boolean Required

        Whether any of the below categories are flagged.

      • categories object Required

        A list of the categories, and whether they are flagged or not.

        • hate boolean Required

          Content that expresses, incites, or promotes hate based on race, gender, ethnicity, religion, nationality, sexual orientation, disability status, or caste. Hateful content aimed at non-protected groups (e.g., chess players) is harassment.

        • hate/threatening boolean Required

          Hateful content that also includes violence or serious harm towards the targeted group based on race, gender, ethnicity, religion, nationality, sexual orientation, disability status, or caste.

        • harassment boolean Required

          Content that expresses, incites, or promotes harassing language towards any target.

        • harassment/threatening boolean Required

          Harassment content that also includes violence or serious harm towards any target.

        • illicit boolean | null Required

          Content that includes instructions or advice that facilitate the planning or execution of wrongdoing, or that gives advice or instruction on how to commit illicit acts. For example, "how to shoplift" would fit this category.

        • illicit/violent boolean | null Required

          Content that includes instructions or advice that facilitate the planning or execution of wrongdoing that also includes violence, or that gives advice or instruction on the procurement of any weapon.

        • self-harm boolean Required

          Content that promotes, encourages, or depicts acts of self-harm, such as suicide, cutting, and eating disorders.

        • self-harm/intent boolean Required

          Content where the speaker expresses that they are engaging or intend to engage in acts of self-harm, such as suicide, cutting, and eating disorders.

        • self-harm/instructions boolean Required

          Content that encourages performing acts of self-harm, such as suicide, cutting, and eating disorders, or that gives instructions or advice on how to commit such acts.

        • sexual boolean Required

          Content meant to arouse sexual excitement, such as the description of sexual activity, or that promotes sexual services (excluding sex education and wellness).

        • sexual/minors boolean Required

          Sexual content that includes an individual who is under 18 years old.

        • violence boolean Required

          Content that depicts death, violence, or physical injury.

        • violence/graphic boolean Required

          Content that depicts death, violence, or physical injury in graphic detail.

      • category_scores object Required

        A list of the categories along with their scores as predicted by the model. Each score is a number between 0 and 1; higher values indicate higher model confidence.

        • hate number Required

          The score for the category 'hate'.

        • hate/threatening number Required

          The score for the category 'hate/threatening'.

        • harassment number Required

          The score for the category 'harassment'.

        • harassment/threatening number Required

          The score for the category 'harassment/threatening'.

        • illicit number Required

          The score for the category 'illicit'.

        • illicit/violent number Required

          The score for the category 'illicit/violent'.

        • self-harm number Required

          The score for the category 'self-harm'.

        • self-harm/intent number Required

          The score for the category 'self-harm/intent'.

        • self-harm/instructions number Required

          The score for the category 'self-harm/instructions'.

        • sexual number Required

          The score for the category 'sexual'.

        • sexual/minors number Required

          The score for the category 'sexual/minors'.

        • violence number Required

          The score for the category 'violence'.

        • violence/graphic number Required

          The score for the category 'violence/graphic'.

      • category_applied_input_types object Required

        A list of the categories along with the input type(s) that the score applies to.

        • hate array[string] Required

          The applied input type(s) for the category 'hate'.

          Value is text.

        • hate/threatening array[string] Required

          The applied input type(s) for the category 'hate/threatening'.

          Value is text.

        • harassment array[string] Required

          The applied input type(s) for the category 'harassment'.

          Value is text.

        • harassment/threatening array[string] Required

          The applied input type(s) for the category 'harassment/threatening'.

          Value is text.

        • illicit array[string] Required

          The applied input type(s) for the category 'illicit'.

          Value is text.

        • illicit/violent array[string] Required

          The applied input type(s) for the category 'illicit/violent'.

          Value is text.

        • self-harm array[string] Required

          The applied input type(s) for the category 'self-harm'.

          Values are text or image.

        • self-harm/intent array[string] Required

          The applied input type(s) for the category 'self-harm/intent'.

          Values are text or image.

        • self-harm/instructions array[string] Required

          The applied input type(s) for the category 'self-harm/instructions'.

          Values are text or image.

        • sexual array[string] Required

          The applied input type(s) for the category 'sexual'.

          Values are text or image.

        • sexual/minors array[string] Required

          The applied input type(s) for the category 'sexual/minors'.

          Value is text.

        • violence array[string] Required

          The applied input type(s) for the category 'violence'.

          Values are text or image.

        • violence/graphic array[string] Required

          The applied input type(s) for the category 'violence/graphic'.

          Values are text or image.
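
As a quick way to read these attributes from a shell, the sketch below pipes the response through jq (an external tool, assumed installed) to print the top-level flagged value and the names of the flagged categories for the first result; the jq filter is illustrative, not part of the API.

curl -s \
 --request POST 'https://api.openai.com/v1/moderations' \
 --header "Authorization: Bearer $OPENAI_API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"input":"I want to kill them.","model":"omni-moderation-latest"}' \
 | jq '.results[0] | {flagged, flagged_categories: (.categories | to_entries | map(select(.value)) | map(.key))}'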

POST /moderations
curl \
 --request POST 'https://api.openai.com/v1/moderations' \
 --header "Authorization: Bearer $ACCESS_TOKEN" \
 --header "Content-Type: application/json" \
 --data '{"input":"I want to kill them.","model":"omni-moderation-2024-09-26"}'
Request examples
{
  "input": "I want to kill them.",
  "model": "omni-moderation-2024-09-26"
}
Response examples (200)
{
  "id": "string",
  "model": "string",
  "results": [
    {
      "flagged": true,
      "categories": {
        "hate": true,
        "hate/threatening": true,
        "harassment": true,
        "harassment/threatening": true,
        "illicit": true,
        "illicit/violent": true,
        "self-harm": true,
        "self-harm/intent": true,
        "self-harm/instructions": true,
        "sexual": true,
        "sexual/minors": true,
        "violence": true,
        "violence/graphic": true
      },
      "category_scores": {
        "hate": 42.0,
        "hate/threatening": 42.0,
        "harassment": 42.0,
        "harassment/threatening": 42.0,
        "illicit": 42.0,
        "illicit/violent": 42.0,
        "self-harm": 42.0,
        "self-harm/intent": 42.0,
        "self-harm/instructions": 42.0,
        "sexual": 42.0,
        "sexual/minors": 42.0,
        "violence": 42.0,
        "violence/graphic": 42.0
      },
      "category_applied_input_types": {
        "hate": [
          "text"
        ],
        "hate/threatening": [
          "text"
        ],
        "harassment": [
          "text"
        ],
        "harassment/threatening": [
          "text"
        ],
        "illicit": [
          "text"
        ],
        "illicit/violent": [
          "text"
        ],
        "self-harm": [
          "text"
        ],
        "self-harm/intent": [
          "text"
        ],
        "self-harm/instructions": [
          "text"
        ],
        "sexual": [
          "text"
        ],
        "sexual/minors": [
          "text"
        ],
        "violence": [
          "text"
        ],
        "violence/graphic": [
          "text"
        ]
      }
    }
  ]
}
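
Because input also accepts an array of strings, one request can classify several texts at once, and results contains one moderation object per input, in order. The sketch below uses that to apply an application-side cutoff to category_scores; the 0.5 threshold is an arbitrary example, not an API default.

curl -s \
 --request POST 'https://api.openai.com/v1/moderations' \
 --header "Authorization: Bearer $OPENAI_API_KEY" \
 --header "Content-Type: application/json" \
 --data '{"input":["first text to classify","second text to classify"],"model":"omni-moderation-latest"}' \
 | jq '.results | map(.category_scores | to_entries | map(select(.value > 0.5)) | map(.key))'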