ChatGPT is an advanced AI-driven chatbot designed to generate surprisingly human-like responses to queries. US-based research lab OpenAI released a prototype version of ChatGPT in November 2022. Ever since then, users have seemingly taken it upon themselves to prove that the machine-learning-powered tool isn’t up to the task. Social media has been flooded with posts from users showing how they managed to one-up ChatGPT, either by highlighting its wrong answers or by simply bullying it into backing away from correct ones. Such shenanigans could very well be a thing of the past now.


On January 30, OpenAI announced that it had upgraded the ChatGPT model “with improved factuality and mathematical capabilities.” Since it is still a prototype, OpenAI regularly rolls out updates to make the bot better and faster.


Prior to the recent update, ChatGPT had a quirk that users could exploit by repeatedly pushing it to admit a wrong answer. For example, even if it gave the right answer to a logic-based question, it could still be ‘bullied’ into accepting a wrong one if the user repeatedly typed in the incorrect answer and challenged ChatGPT’s response.


With the new update, ChatGPT remains adamant and keeps returning the correct answer, no matter what you tell the chatbot.


As reported by Search Engine Journal, ChatGPT still gets basic questions wrong, such as whether Shaquille O’Neal or Yao Ming is taller.


In this screenshot shared by Search Engine Journal, ChatGPT continues to get this height-related question wrong.


However, it does appear that, following the January 30 update, ChatGPT manages to return correct responses to complex queries.


Disclaimer: This article largely consists of responses given by ChatGPT (an AI-driven chatbot developed by OpenAI) to various questions, and ABP Network Private Limited (‘ABP’) is in no manner liable or responsible for any such responses. Accordingly, reader discretion is advised.