A Faster GPT-4o AI Model is Available for Free

Updated on 15 May 2024

OpenAI has launched a new iteration of its GPT-4 LLM. The new GPT-4o is much faster and can use text, vision, and audio to interact with users.

OpenAI CTO Mira Murati said in a livestream announcement that the model “is much faster” and improves “capabilities across text, vision, and audio.” She also announced that GPT-4o will be free for all users, with paid users getting up to five times the usage limits of free users.

In a blog post, OpenAI says it will roll out GPT-4o’s capabilities iteratively. For now, the text and image capabilities are rolling out in ChatGPT starting today.

OpenAI CEO Sam Altman tweeted that the new model is “natively multimodal.” A multimodal model is one that can understand commands and generate content across voice, text, and images within a single model.
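For developers, the model is exposed through OpenAI’s API under the name gpt-4o. Below is a minimal sketch, assuming the OpenAI Python SDK and a placeholder image URL, of sending a combined text-and-image prompt to the model:

```python
# Minimal sketch: a mixed text + image request to GPT-4o via the
# OpenAI Python SDK (pip install openai). The image URL below is a
# placeholder, not a real asset.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            # A content list lets one message carry several modalities.
            "content": [
                {"type": "text", "text": "What is shown in this image?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/photo.jpg"},
                },
            ],
        }
    ],
)

# The reply comes back as ordinary text.
print(response.choices[0].message.content)
```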

The multimodal capabilities will also come to ChatGPT’s voice mode as part of the new model, letting the app act as a voice assistant that can observe the world around you.

There were earlier reports that OpenAI would launch an AI search engine to rival Google and Perplexity. That wasn’t the case, but the company has timed this launch well, with Google I/O 2024 just around the corner.

Google’s Gemini team is expected to launch some new AI products to entice users. 

Go ahead and update your ChatGPT app and try out the new faster model from OpenAI.

Alap Naik Desai

Tech Journalist

