Decoding OpenAI’s o1 family of large language models



Further, Fan said that OpenAI must have figured out the inference scaling law a long time ago, something academia is only now discovering. However, he pointed out that productionizing o1 is much harder than nailing academic benchmarks, and he raised several open questions.

“For reasoning problems in the wild, how does (the model) decide when to stop searching? What’s the reward function? Success criterion? When to call tools like code interpreter in the loop? How to factor in the compute cost of those CPU processes? Their research post didn’t share much.”
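None of those design decisions are disclosed in OpenAI’s research post, but the trade-offs Fan describes can be illustrated with a generic inference-time search loop. The Python sketch below is purely hypothetical and does not reflect o1’s actual implementation: the generate_candidate and score_candidate functions, the score threshold, and the time budget are assumed stand-ins for the model, the reward function, the success criterion, and the compute-cost limit, respectively.

```python
import time

# Hypothetical stand-ins for the pieces Fan's questions point at.
# None of this reflects OpenAI's o1 internals, which are not public.

def generate_candidate(problem: str, step: int) -> str:
    """Placeholder generator: a real system would sample a reasoning
    step or a full answer from the model here."""
    return f"candidate answer {step} for: {problem}"

def score_candidate(candidate: str) -> float:
    """Placeholder verifier ("reward function"): a real system might use a
    learned reward model, unit tests, or a tool such as a code interpreter."""
    return min(1.0, 0.2 * (len(candidate) % 6))

def solve(problem: str,
          score_threshold: float = 0.8,   # success criterion: stop once a candidate scores high enough
          max_candidates: int = 16,       # cap on how wide the search goes
          time_budget_s: float = 2.0) -> str:
    """Search until a candidate clears the threshold or the compute budget runs out."""
    start = time.monotonic()
    best, best_score = "", float("-inf")
    for step in range(max_candidates):
        if time.monotonic() - start > time_budget_s:   # compute-cost cutoff
            break
        candidate = generate_candidate(problem, step)
        score = score_candidate(candidate)
        if score > best_score:
            best, best_score = candidate, score
        if score >= score_threshold:                   # success criterion met
            break
    return best

if __name__ == "__main__":
    print(solve("What is 17 * 24?"))
```

The point of the sketch is only that each of Fan’s questions maps to a concrete knob (threshold, search width, time budget) that a production system has to set, and that the published material does not say how OpenAI sets them.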
 

OpenAI, too, has said in one of its blog posts that the new model, which is still in the early stages of development and is expected to undergo significant iteration, doesn’t yet have many of the features that make ChatGPT useful, such as browsing the web for information and uploading files and images.
