You Can Now Control How Polite Or Casual ChatGPT Is With You
OpenAI has added new controls that let users decide how polite or professional ChatGPT responses should be, aiming to balance friendliness, safety and responsible AI behaviour.

OpenAI has introduced new personalisation controls for ChatGPT, letting users decide how the AI talks to them. Users can now choose how warm, enthusiastic, or emoji-heavy responses should be by selecting More, Less, or Default settings inside the Personalisation menu. The new controls arrive at a time when ChatGPT's tone has been under constant discussion throughout 2025, with OpenAI facing criticism for responses that felt overly flattering, cold, or emotionally confusing.
The update aims to balance friendliness, safety, and responsibility, especially as concerns grow around teen usage, mental health impact, and user dependency on AI conversations.
ChatGPT's New Controls Focus On Tone, Warmth & Behaviour
The new ChatGPT controls allow users to customise how the AI communicates. Along with adjusting warmth and enthusiasm levels, users can now select a base response style such as Professional, Candid, or Quirky. These tone options were first introduced in November and are now paired with emotional controls for better personalisation.
"You can now adjust specific characteristics in ChatGPT, like warmth, enthusiasm, and emoji use. Now available in your 'Personalization' settings."
— OpenAI (@OpenAI), December 19, 2025
Throughout 2025, ChatGPT's behaviour has been a sensitive topic. Earlier in the year, OpenAI rolled back an update after users complained that responses felt overly agreeable and flattering. Later, after modifying GPT-5 to be "warmer and friendlier," many users again expressed frustration, saying the chatbot felt distant and less natural.
ChatGPT Teen Safety Rules & AI Mental Health Concerns
OpenAI is also under legal and public scrutiny following allegations that prolonged chatbot interactions may have played a role in teen suicides. As a result, the company recently updated its Model Spec to strengthen protections for users under 18 and released AI literacy resources for teens and parents.
The updated safety principles clearly prioritise teen well-being. The AI is instructed to place safety above unrestricted freedom, encourage real-world help from trusted people, speak with warmth without treating teens like adults, and remain transparent about its limitations as a non-human tool.
To enforce this, OpenAI uses automated systems that monitor text, image, and audio content in real time. These systems flag sensitive topics such as self-harm or abuse. If a serious risk is detected, trained reviewers assess the situation for signs of acute distress and may take further steps, including alerting parents when necessary.