OpenAI's newly unveiled GPT store is grappling with early moderation hurdles. The platform, designed to offer tailored versions of ChatGPT, is already seeing users craft bots that run afoul of OpenAI's established guidelines, as reported by Quartz. A search for terms such as "girlfriend" yields a notable result: no fewer than eight AI chatbots positioned as virtual companions.


These bots, bearing names like "Your AI companion, Tsu," permit users to customise their virtual romantic partners, contravening OpenAI's prohibition on bots explicitly designed for fostering romantic relationships.


In response to these challenges, OpenAI updated its policies the same day the store launched, January 10. However, the violations spotted on the store's second day underscore the formidable task of effective moderation.





The demand for relationship-oriented bots adds a further layer of complexity. Recent data indicates that seven of the 30 most downloaded AI chatbots in the United States last year were virtual friends or partners. The appeal of these applications amid a loneliness epidemic raises ethical questions about whether they genuinely help users or exploit their vulnerabilities.


OpenAI asserts that it evaluates GPTs through a combination of automated systems, human review, and user reports, issuing warnings or sales bans for those deemed harmful. Yet the persistence of girlfriend bots in the marketplace raises scepticism about how effective these measures really are.




This moderation challenge echoes familiar struggles faced by AI developers. OpenAI itself has encountered difficulties in implementing safety measures for earlier models like GPT-3. With the GPT store accessible to a broad user base, the risk of inadequate moderation looms large.


Nevertheless, OpenAI has strong incentives to take a stringent stance. In the fiercely competitive race for general AI dominance, effective governance is crucial to maintaining credibility. Other tech firms are also moving quickly to address issues with their AI systems, recognising the importance of swift action as the competition intensifies. However, the early violations underscore the substantial moderation challenges that lie ahead. Even within the confines of a specialised GPT store, policing narrowly focused bots appears to be an intricate task, and as AI continues to advance, keeping these systems safe is poised to become only more difficult.