OpenAI has unveiled its latest artificial intelligence model, GPT-4o, boasting enhanced speech and vision capabilities along with a deeper understanding of language and context. Initially available to select users, the model currently offers text and web search, with voice and video features not yet enabled.
Access to GPT-4o is tied to the user's OpenAI account; there is no waitlist or expedited-access option. Once access is granted, a notification appears when visiting the ChatGPT website indicating limited access to the model, though refreshing the page dismisses it. The rollout also extends to the Android and iOS apps linked to the same account.
To check whether GPT-4o access has been granted, users can open the collapsible menu on the ChatGPT website. With access, the menu lists only ChatGPT and ChatGPT Plus, without version numbers, and the lightning icon is replaced by a minimalist atom symbol.
Early testing of GPT-4o showed improved response quality, particularly in creative writing tasks with minimal prompts. Compared with GPT-3.5, the model previously available to free users, GPT-4o generates creative text more smoothly, with less robotic language. Its ability to search the web for real-time information is another significant upgrade, reducing concerns about an outdated knowledge cutoff. Web search results now include citations indicating the sources used to retrieve the information.