New reports indicate that after building its advanced text and image models, OpenAI is now turning its resources toward audio.
According to a report published by The Information, OpenAI has merged several engineering, product, and research teams over the past two months to fundamentally redesign its audio models. The effort is in preparation for a voice-based personal device expected to arrive in about a year.
OpenAI wants to launch a voice-based artificial intelligence device
The move reflects the direction the broader tech industry is heading: a future where screens are no longer central and voice becomes the primary mode of interaction. Smart speakers have already made voice assistants a fixture in more than a third of American homes.

Meta recently released a feature for its Ray-Ban smart glasses that uses an array of five microphones to help wearers hear conversations in crowded environments, effectively turning the glasses into a directional listening device.
In June, Google began testing a feature called Audio Overviews, which turns search results into conversational audio summaries. Tesla, meanwhile, is integrating the Grok chatbot into its cars to create a voice assistant that handles everything from navigation to climate control through natural conversation.
Reports suggest that OpenAI's new voice model, scheduled for release in early 2026, will speak more naturally, handle interruptions like a real conversation partner, and even speak at the same time as the user, something today's models cannot do. The company is also said to be considering a family of AI-based devices, such as glasses or screenless smart speakers, designed to act less like gadgets and more like companions.