According to OpenAI CEO Sam Altman, the size of large language models (LLMs) will matter less in the future. Speaking at MIT’s Imagination in Action event, Altman said we have reached the limit of growing LLMs for size’s sake. He argued that too much emphasis has been placed on parameter count, comparing it to the chip-speed races of the 1990s and 2000s, when everyone competed to quote a bigger number.
Altman believes that size is an inaccurate measure of model quality. He emphasises the importance of focusing on rapidly increasing capability rather than getting hung up on parameter count, and says that if there is a reason for parameter counts to decrease over time, or for multiple models to work together, OpenAI would take that path. What matters is that the world receives the most capable, useful, and safe models.
When asked about the open letter calling for a six-month pause on AI development, Altman defends his company’s approach while agreeing with some of the letter’s points, though he believes it missed the mark in a number of ways. He admits that he and other company representatives sometimes say “dumb stuff” that turns out to be incorrect, but he is willing to take that risk because it is important to have a discussion about this technology.
Altman attributes OpenAI’s success to the fact that the company worked on the problem for a long time, gradually building confidence that it would work. The company has been at it for seven years, and these things take time. Altman emphasises that most people are not willing to work hard and sweat over every detail for that long.
Finally, Altman believes that we are nearing the end of the era of these massive models and that future gains will come from improving them in other ways, returning to his point that capability, usefulness, and safety matter more than sheer size.