OpenAI Starts Rolling Out ChatGPT Voice to Some Paying Subscribers

| Updated on 2 August 2024

After a major backlash, OpenAI has finally started rolling out its voice mode. The rollout of ChatGPT’s voice mode, however, is happening in phases.

The company has given some paying subscribers access to GPT-4o’s hyperrealistic audio responses. According to OpenAI, the alpha version of the feature is available to a small group of ChatGPT Plus users starting today, and it will gradually roll out to all Plus users in the fall of 2024.

When OpenAI first showcased GPT-4o’s voice in May, the feature stunned audiences with how quickly and naturally it responded. Then came the controversy over the voice resembling a celebrity: shortly after OpenAI’s demo, Scarlett Johansson said she had declined multiple requests from CEO Sam Altman to use her voice, and that after seeing the demo she was weighing legal options to defend her likeness.

The company denied using her voice but later removed the Sky voice shown in the demo. In June, OpenAI announced a delay in the release of the advanced voice mode, saying it wanted to improve safety measures.

The company says that the screen sharing and video capabilities shown during the demo will not be part of this alpha update. These features will be launched at a later date. 

You may have already tried voice mode in ChatGPT, but the company says the advanced mode is a different experience. The current voice mode first converts your audio to text, processes the prompt with the language model, and then converts the response back to audio.

GPT-4o, by contrast, is a multimodal model that handles speech natively, without relying on those auxiliary models. OpenAI is also adding the ability to sense emotional intonation in your voice.
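To make the contrast concrete, here is a minimal sketch of that three-step cascaded pipeline using the OpenAI Python SDK. It is illustrative only, not OpenAI’s internal implementation: the model names ("whisper-1", "gpt-4o", "tts-1") and the helper function are assumptions for the example. The advanced voice mode collapses these hand-offs into a single model, which is what cuts the latency and preserves cues like tone that a plain transcript loses.

```python
# Sketch of the cascaded voice pipeline: speech -> text -> LLM -> speech,
# with each step handled by a separate model. Assumes the OpenAI Python SDK
# (openai >= 1.x) and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

def cascaded_voice_reply(audio_path: str, out_path: str = "reply.mp3") -> str:
    # 1. Transcribe the user's audio to text with a speech-to-text model.
    with open(audio_path, "rb") as f:
        transcript = client.audio.transcriptions.create(model="whisper-1", file=f)

    # 2. Generate a text reply with a chat model.
    chat = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": transcript.text}],
    )
    reply_text = chat.choices[0].message.content

    # 3. Convert the reply back to audio with a text-to-speech model.
    speech = client.audio.speech.create(model="tts-1", voice="alloy", input=reply_text)
    speech.write_to_file(out_path)
    return out_path
```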

Gaurav Pal
