ChatGPT maker OpenAI has introduced GPT-4o, a multimodal large language model (LLM) that surpasses GPT-4 in speed and improves on its text, vision, and audio capabilities. The company's CTO, Mira Murati, had said the model would soon be accessible to free-tier users, and OpenAI is now extending these features to everyone.


OpenAI is now letting free-tier users browse and use custom bots from the GPT Store, which launched for paid subscribers earlier this year. The store lets users build their own chatbots, called GPTs, for tasks like coding and writing. Only ChatGPT premium subscribers can create and share GPTs, but all users can now access these custom chatbots.


"All ChatGPT Free users can now use browse, vision, data analysis, file uploads, and GPTs," OpenAI posted on X, formerly Twitter.


However, there is a limitation. Free ChatGPT users can access and interact with GPTs, but they cannot create new ones. The GPT-builder feature, which generates a custom chatbot from text prompts and, optionally, a user-supplied knowledge base, is not available on the free tier.


Free-tier ChatGPT users can use the latest features powered by GPT-4o, according to OpenAI.


However, paid subscribers will enjoy "up to five times the capacity limits" of free users. When free users reach their GPT-4o limit, ChatGPT will seamlessly switch to GPT-3.5 so the conversation can continue. OpenAI also highlights GPT-4o's enhanced safety measures, including filtered training data and model behaviour refined through post-training.


It is important to note that GPTs are not general-purpose conversational chatbots like ChatGPT. They are designed with a narrower scope and may not handle as wide a range of queries. Within their specific domains, however, users can expect more precise answers, since each GPT functions like a specialised assistant.


GPT-4o can interpret tone and background noise, and it can even convey emotion through its voice. It also offers multilingual support.
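

For readers who want to experiment with GPT-4o directly, the model is also exposed through OpenAI's developer API under the name "gpt-4o". This is separate from the free ChatGPT rollout described above and requires a paid API key. A minimal text-only sketch, assuming the official openai Python library (v1 or later) and an OPENAI_API_KEY environment variable, might look like this:

# Minimal sketch: sending a text prompt to GPT-4o via OpenAI's developer API.
# Assumes the `openai` Python package (v1+) is installed and OPENAI_API_KEY is
# set; API usage is billed separately from the free ChatGPT tier covered above.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain in one sentence what a multimodal model is."},
    ],
)

print(response.choices[0].message.content)

The same endpoint also accepts image inputs for GPT-4o, in line with the multimodal capabilities described above.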