OpenAI releases GPT-4o, a faster model that’s free for all ChatGPT users

OpenAI is launching GPT-4o, an iteration of the GPT-4 model that powers its hallmark product, ChatGPT. The updated model “is much faster” and improves “capabilities across text, vision, and audio,” OpenAI CTO Mira Murati said in a livestream announcement on Monday. It’ll be free for all users, and paid users will continue to “have up to five times the capacity limits” of free users, Murati added.

In a blog post from the company, OpenAI says GPT-4o’s capabilities “will be rolled out iteratively,” but its text and image capabilities will start to roll out today in ChatGPT.

OpenAI CEO Sam Altman posted that the model is “natively multimodal,” meaning it can generate content or understand commands in voice, text, or images. Developers who want to tinker with GPT-4o will have access to the API, which is half the price and twice as fast as GPT-4 Turbo, Altman added on X.

New features are coming to ChatGPT’s voice mode as part of the new model. The app will be able to act as a Her-like voice assistant, responding in real time and observing the world around you. The current voice mode is more limited, responding to one prompt at a time and working with only what it can hear.

Altman reflected on OpenAI’s trajectory in a blog post following the livestream event. He said the company’s original vision had been to “create all sorts of benefits for the world,” but acknowledged that the vision has shifted. OpenAI has been criticized for not open-sourcing its advanced AI models, and Altman seems to be saying the company’s focus has changed to making those models available to developers through paid APIs, letting third parties do the creating: “Instead, it now looks like we’ll create AI and then other people will use it to create all sorts of amazing things that we all benefit from.”
