Last year, OpenAI hosted its first DevDay, a high-profile event in San Francisco where it unveiled several new products and tools, including the ill-fated GPT Store.
This year, however, OpenAI is taking a different approach. On Monday, the company announced that DevDay will shift from a single marquee conference to a series of on-the-road developer engagement sessions. OpenAI also confirmed that it will not release its next major flagship model during these events, focusing instead on updates to its APIs and developer services.
“We’re not planning to announce our next model at DevDay,” an OpenAI spokesperson told TechCrunch. “We’ll be focused more on educating developers about what’s available and showcasing dev community stories.”
This year’s DevDay events will be held in San Francisco on October 1, London on October 30, and Singapore on November 21. These events will feature workshops, breakout sessions, demos with OpenAI’s product and engineering staff, and developer spotlights. Registration costs $450, with scholarships available for eligible attendees. Applications close on August 15.
Recently, OpenAI has taken a more incremental approach to generative AI, refining and fine-tuning its tools while it trains successors to its current leading models, GPT-4o and GPT-4o mini. The company has improved the overall performance of its models and reduced how often they make errors, but on some benchmarks it appears to have lost its technical lead in the generative AI race.
One contributing factor could be the increasing difficulty in finding high-quality training data.
OpenAI’s models, like most generative AI models, are trained on vast amounts of web data. Many creators, however, are now restricting access to their data over concerns about plagiarism and a lack of credit or compensation. According to Originality.AI, more than 35% of the world’s top 1,000 websites now block OpenAI’s web crawler, GPTBot. And a study by MIT’s Data Provenance Initiative found that roughly 25% of data from “high-quality” sources has been restricted from the major datasets used to train AI models.
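For readers unfamiliar with how this blocking works mechanically: a site publishes a robots.txt file naming the user agents it disallows, and compliant crawlers such as OpenAI’s GPTBot honor those rules. Below is a minimal sketch, using Python’s standard library and a hypothetical example.com, of how such rules are evaluated.

```python
# A compliant crawler checks a site's robots.txt before fetching pages.
# A site blocking OpenAI's crawler would typically include an entry like:
#
#   User-agent: GPTBot
#   Disallow: /
#
# example.com here is a hypothetical stand-in; substitute any site.
from urllib.robotparser import RobotFileParser

parser = RobotFileParser()
parser.set_url("https://example.com/robots.txt")
parser.read()  # fetch and parse the site's robots.txt

# can_fetch() applies the file's User-agent rules to a given crawler name.
for agent in ("GPTBot", "*"):
    verdict = "allowed" if parser.can_fetch(agent, "https://example.com/") else "blocked"
    print(f"{agent}: {verdict}")
```

Services like Originality.AI arrive at their block-rate figures by running essentially this check, at scale, across lists of popular domains.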
If the current trend of access-blocking continues, the research group Epoch AI predicts that developers will run out of data to train generative AI models between 2026 and 2032. That, along with the threat of copyright lawsuits, has led OpenAI to enter into expensive licensing agreements with publishers and data brokers.
OpenAI has reportedly developed a reasoning technique that could improve its models’ answers to certain questions, particularly math questions, and CTO Mira Murati has promised a future model with “Ph.D.-level” intelligence. In a blog post in May, OpenAI revealed that it had begun training its next “frontier” model. Training such a model is a significant commitment, and there is immense pressure to deliver: OpenAI is reportedly spending billions of dollars on training its models and hiring top-tier research staff.
OpenAI continues to face several controversies, including the use of copyrighted data for training, restrictive employee NDAs, and the effective sidelining of safety researchers. However, the slower product cycle might help counter the perception that OpenAI has deprioritized AI safety in favor of developing more powerful generative AI technologies.