New reports indicate that after developing advanced text and image models, OpenAI is now planning to focus its resources on developing audio models.
According to a new report published by The Information, OpenAI has integrated several engineering, product, and research teams over the past two months to fundamentally redesign its audio models. All of this is in preparation for a voice-based personal device that is expected to arrive in about a year.
OpenAI is set to launch a voice-based artificial intelligence device
This move is indicative of the direction the entire tech industry is headed: a future where screens are no longer central and sound becomes the main mode of interaction. Smart speakers have already made voice assistants a permanent part of more than a third of American homes.


Meta also recently released a feature for its Ray-Ban smart glasses that uses an array of five microphones to help you hear conversations better in crowded environments, a feature that effectively turns your face into a directional listening device.
In June, Google also began testing a feature called Audio Overviews, which turns search results into audio-based conversational summaries. Tesla, meanwhile, is integrating the Grok chatbot into its cars to create a conversational voice assistant that handles everything from navigation to air conditioning control through natural conversation.
Reports suggest that the new OpenAI voice model, scheduled for release in early 2026, will speak more naturally, handle interruptions like a real person in a conversation, and even speak at the same time as you, a capability today's models lack. The company is also said to be considering a family of AI-based devices, such as glasses or screenless smart speakers, that act less like gadgets and more like companions.