Altman is Bluffing
Very few people know what the hell is going on anymore. As much as it pains me to say it, I think Gary Marcus is right that LLMs are approaching a point of diminishing returns, i.e., increases in computing power will not yield significantly more capable models. All the data shows that the gap between smaller and larger models is slowly closing, to the extent that you will soon be able to run a GPT-4-equivalent model locally on your iPhone (is Apple actually the best-placed company to benefit from the AI revolution?). Yet, at the same time, Sam Altman has publicly stated that the current frontier model is mildly embarrassing at best. Is this just a standard CEO tactic to sell dreams to people, or does he really have something?
My hunch is that OpenAI’s next model will be materially better, but after that, progress will be a slow and painful grind. OpenAI can only maintain its moat until text, video, image, and audio generation performance asymptotes, and arguably it already has. It’s no coincidence that Gemini 1.5 Pro, Claude 3 Opus, and GPT-4 perform at roughly the same level. Unfortunately, the average AI enthusiast seems to believe that compute is all you need and that progress is infinitely exponential.