Guillaume Verdon, known online as Beff Jezos and a proponent of the effective accelerationist (e/acc) movement, recently debated AI skeptic Connor Leahy. The debate highlighted sharply differing views on whether the development of Artificial General Intelligence (AGI) poses an existential threat.
The conversation was a bit disappointing. I'm much more aligned with the e/acc movement and have criticized AI doomers in the past, but Beff came off as rambling and a bit detached. The conversation was mostly driven by hypothetical situations posed by Leahy in an attempt to bait Beff into admitting AGI is an existential threat and should be controlled. After the conversation, Beff claimed Leahy was subtly threatening him with violence from his supporters (more on that later).
Leahy starts with the question:
Can you imagine there being a technology that should be banned? Like, is it an acceptable thing in your ontology or do you think this does not like this cannot even exist?
Leahy's question comes across as somewhat disingenuous, practically begging Beff to reply "of course." Surely a civilization-ending technology should be banned. By framing the question this way, Leahy sets the stage to argue that AGI is such a technology: given even a minimal probability, the potential downside of AGI could be catastrophic, so why take the risk at all?
Alternatively, the question could be reframed as, "Should certain private speech be banned?" Many technologists have long argued that code amounts to speech, so research utilizing computers amounts to speech as well. In this light, the answer is less obvious.
We have a lot more experience banning commercial goods. We've banned all sorts of things in the US, from certain explosives to menthol cigarettes. How effective those bans are is a different story, but at least we have a framework for thinking about them. And we could probably tell Microsoft or OpenAI to cut it out and they'd oblige. We have had much less success banning private speech, however.
Exploring the idea of banning technology (or speech) raises the question of how proposed regulations on AI development would actually be enforced. While many doomers hand-wave about global cooperation, Eliezer Yudkowsky stands out for his willingness to articulate a more radical (and realistic) approach:
Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs. Put a ceiling on how much computing power anyone is allowed to use in training an AI system, and move it downward over the coming years to compensate for more efficient training algorithms. No exceptions for governments and militaries. Make immediate multinational agreements to prevent the prohibited activities from moving elsewhere. Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue datacenter by airstrike.
Frame nothing as a conflict between national interests, have it clear that anyone talking of arms races is a fool. That we all live or die as one, in this, is not a policy but a fact of nature. Make it explicit in international diplomacy that preventing AI extinction scenarios is considered a priority above preventing a full nuclear exchange, and that allied nuclear countries are willing to run some risk of nuclear exchange if that’s what it takes to reduce the risk of large AI training runs.
There's a speculative risk that AGI could become a threat. However, implementing a system to restrict computational activities would almost certainly lead to an equally dystopian outcome. Such measures would pit the most AI-pessimistic nations, presumably led by the US, against any country with the means to procure and power GPUs. And if you think we can regulate GPU proliferation, a quick study of international arms control efforts will disabuse you of that idea.
There is the peaceful option of cooperation, but we've seen how that plays out. Look at man-made climate change through carbon emissions: some countries get together, fly private jets to meetings, and make promises. What's the end result?
The Western world's emissions have naturally leveled off and declined, arguably through secular forces like greater efficiency and advances in material science. Meanwhile, the rest of the world's energy use is catching up as it grows wealthier.
Which brings us to the death threats. Why wouldn't doomers do anything they can to off someone promoting irresponsible development of AGI? Beff interpreted Leahy's line of questioning as a veiled threat, or a signal to his supporters.
Connor replied that this is ridiculous and morally wrong. But why would it be wrong for an AI doomer to take action to prevent the annihilation of humanity? Sure, there are a lot of hucksters in the industry, but I'm talking about the true believers. Beff isn't baby Hitler. To them, he's 1933 Hitler just before the Reichstag fire! Okay, maybe not Beff, but Sam Altman for sure.
You have to remember, doomers are not normal people. They're sci-fi-obsessed godless fanatics. They think the world is ending. They unironically post things like this:
Ironically, these are the first people who will wirehead and merge with AI.
So let’s get specific about what AI doomers want and stop propping up this sci-fi death cult as some kind of reasonable group of concerned citizens.