OpenAI is set to release the next version of its ChatGPT model in the coming months. The model, dubbed GPT-5, could launch by summer, according to a report from Business Insider. Two sources close to the Sam Altman-led AI company told Business Insider that some businesses have already received demos of the improved ChatGPT model.
One of the CEOs who tested GPT-5 praised the model, saying, “It is really good, like materially better.” The CEO added that OpenAI demonstrated the new model with use cases and data unique to his company.
OpenAI is still training GPT-5. Once the internal team finishes the new multimodal large language model, it will go through red teaming. Red teaming is a security evaluation process in which a group of outside testers challenges the system to uncover flaws and weak points its makers may have missed.
One source told Business Insider that since there is no fixed timeframe for completing safety testing, GPT-5’s launch could be delayed, especially if the red teamers find flaws in the system.
The main revenue stream for ChatGPT comes from businesses that pay OpenAI for enhanced or customised versions of the chatbot. With GPT-5, the OpenAI team hopes to impress potential customers and the public alike. ChatGPT launched on November 30, 2022, and has since expanded in many directions, affecting industries ranging from education to customer service.
While OpenAI has not made any announcement about GPT-5, Sam Altman did introduce Sora, an AI-backed tool that generates videos up to one minute long from text prompts. On February 15, Altman asked his X (formerly Twitter) followers to send him video prompts for Sora, and he then shared the resulting videos on his X account.
Despite its impressive results, the OpenAI team admits that Sora still has many weaknesses. In a blog post, the research organisation wrote that Sora “may struggle with accurately simulating the physics of a complex scene, and may not understand specific instances of cause and effect.”
It added that the model may confuse “spatial details of a prompt, for example, mixing up left and right.” OpenAI also said it is working to ensure that Sora rejects prompts that violate its usage policies regarding hateful imagery, sexual content, or IP theft.