Today marks one year since the release of GPT-4. AI enthusiasts on the internet are ideologically divided. On one side are doomers, such as Eliezer Yudkowsky, who argue that even with safety measures there remains a high probability, often called p(doom) in specialized circles, that AI will lead to humanity’s extinction. On the other side, accelerationists believe that the rapid development of AI is morally justified because it has the potential to solve many of the world’s problems. Placing myself on this spectrum, I tilt towards the accelerationist perspective. This stance isn’t born of a fear that a future superintelligence might torture those who weren’t explicitly supportive of its birth. Rather, I am convinced that AI will be the most positively transformative technology in human history.

The release of Claude 3 has led many people to declare that AGI has arrived. There are a million ways to refute this. We do not have AGI until online video games are ruined. We do not have AGI until 99% of the content on the internet is created by AI agents. We do not have AGI until Level 5 self-driving arrives. Doomers and accelerationists alike believe all of these milestones will be reached someday, though they frequently debate the precise timeline.

I am obsessed with the concept of a knowledge limit: a threshold, beyond artificial superintelligence (ASI), at which all practically useful knowledge has been acquired. While the universe contains an infinite expanse of knowledge, only a finite subset of it is meaningful or applicable. An AI operating near this limit would be constrained only by the laws of physics and would view energy as the primary determinant of whether a problem is solvable. Such an entity would likely harness energy on a cosmic scale, possibly using a structure akin to a Dyson sphere. Doomers often argue that “you don’t know what you don’t know,” suggesting the potential for a system trillions of times more intelligent than Albert Einstein. I find merit in the doomers’ viewpoint: immutable limits to computation and intelligence exist, but they may be far beyond our comprehension. For instance, Landauer’s principle dictates a minimum amount of energy required to erase one bit of information, setting a fundamental limit on computing efficiency. That floor is so low, roughly 3 × 10⁻²¹ joules per bit at room temperature, that physics would permit, for example, generating a 1000 hour 8K video in milliseconds.
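To make that concrete, here is a back-of-the-envelope sketch of the Landauer floor for such a video. The frame rate, bit depth, uncompressed encoding, and one-erasure-per-output-bit accounting are my own illustrative assumptions, not anything precise about how a real system would work:

```python
# Back-of-the-envelope: Landauer-limit energy for a 1000-hour 8K video.
# Assumed (illustrative, not authoritative): 7680x4320 pixels, 24 bits
# per pixel, 60 fps, uncompressed frames, one bit erasure per output bit.

from math import log

BOLTZMANN = 1.380649e-23  # Boltzmann constant, J/K
T = 300                   # room temperature, kelvin

# Landauer's principle: erasing one bit costs at least kT * ln(2).
landauer_joules_per_bit = BOLTZMANN * T * log(2)  # ~2.87e-21 J

pixels_per_frame = 7680 * 4320          # 8K frame
bits_per_frame = pixels_per_frame * 24  # 24-bit color
frames = 60 * 3600 * 1000               # 60 fps for 1000 hours
total_bits = bits_per_frame * frames    # ~1.7e17 bits

energy = total_bits * landauer_joules_per_bit
print(f"{total_bits:.2e} bits -> {energy:.2e} J at the Landauer limit")
# ~1.72e17 bits -> ~4.9e-4 J: about half a millijoule.
```

Under these assumptions the thermodynamic minimum works out to roughly half a millijoule, which is why the physical limit on such a computation is energy delivery and engineering, not thermodynamics.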

I have already accepted that human labor will be economically worthless within my lifetime. Future generations will view our current education system as an assembly line for corporate manpower. These future observers will likely consider our era primitive, our motivations narrowly defined by greed.