What AI Governance Can Learn From Crypto’s Decentralization Ethos

Prominent critics of AI development are calling for government intervention to stave off the threat of human extinction. But we need more than centralized regulation of this industry, argues Michael J. Casey.

Jun 2, 2023 at 5:02 p.m. UTC
Updated Jun 6, 2023 at 3:02 p.m. UTC

The titans of U.S. tech have rapidly gone from being labeled by their critics as self-serving techno-utopianists to being the most vocal propagators of a techno-dystopian narrative.

This week, a letter signed by more than 350 people, including Microsoft founder Bill Gates, OpenAI CEO Sam Altman and former Google scientist Geoffrey Hinton (sometimes called the “Godfather of AI”), delivered a single, declarative sentence: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

You’re reading Money Reimagined, a weekly look at the technological, economic and social events and trends that are redefining our relationship with money and transforming the global financial system.

Just two months ago, an earlier open letter, signed by Tesla and Twitter CEO Elon Musk along with 31,800 others, called for a six-month pause in AI development to allow society to determine its risks to humanity. In an op-ed for TIME that same week, Eliezer Yudkowsky, considered a founder of the field of artificial general intelligence (AGI), said he refused to sign that letter because it didn’t go far enough. Instead, he called for a militarily enforced shutdown of AI development labs lest a sentient digital being arise that kills every one of us.

World leaders will find it hard to ignore the concerns of these highly recognized experts. Their message is blunt: the threat to human existence is real. The question is: how, exactly, should we mitigate it?

As I’ve written previously, I see a role for the crypto industry in society’s efforts to keep AI in its lane, working alongside other technological solutions and in concert with thoughtful regulation that encourages open, human-centric innovation. Blockchains can help with the provenance of data inputs, with proofs to prevent deepfakes and other forms of disinformation, and with enabling collective, rather than corporate, ownership. But even setting aside those considerations, I think the crypto community’s most valuable contribution lies in its “decentralization mindset,” which offers a unique perspective on the dangers posed by concentrated ownership of such a powerful technology.

A Byzantine view of AI risks

First, what do I mean by this “decentralization mindset?”

Well, at its core, crypto is steeped in a “don’t trust, verify” ethos. Diehard crypto developers – as opposed to the money-grabbers whose centralized token casinos brought the industry into disrepute – relentlessly run “Alice and Bob” thought-experiments to map every threat vector and point of failure by which a rogue actor might, intentionally or unintentionally, do harm. Bitcoin itself was born of Satoshi’s attempt to solve one of the most famous of these game-theory scenarios, the Byzantine Generals Problem, which asks how you can trust information from someone you don’t know.
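For readers unfamiliar with that problem, here is a minimal Python sketch of its classic result – agreement among n generals survives at most f traitors when n ≥ 3f + 1. It is purely illustrative; Satoshi’s actual workaround was proof-of-work, not a voting scheme like this.

```python
# Toy illustration of the Byzantine Generals quorum bound from
# Lamport, Shostak and Pease (1982): reliable agreement is only
# guaranteed when n >= 3f + 1, where n is the number of generals
# and f the number of traitors. Illustrative only - Bitcoin itself
# sidesteps the identity problem with proof-of-work.

def tolerates_byzantine_faults(n: int, f: int) -> bool:
    """True if n generals can reach agreement despite f traitors."""
    return n >= 3 * f + 1

if __name__ == "__main__":
    for n, f in [(3, 1), (4, 1), (10, 3), (10, 4)]:
        verdict = ("agreement possible" if tolerates_byzantine_faults(n, f)
                   else "agreement not guaranteed")
        print(f"{n} generals, {f} traitors: {verdict}")
```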

The mindset treats decentralization as the way to address those risks. The idea is that if there is no single, centralized entity with middleman powers to determine the outcome of an exchange between two actors, and both can trust the information available about that exchange, then the threat of malicious intervention is neutralized.
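To make that “don’t trust, verify” principle concrete, here is a hedged sketch using only Python’s standard library; the transaction record and published digest are hypothetical stand-ins. Instead of taking a middleman’s word for an exchange, each party independently checks the record against a hash commitment published where everyone can see it, such as on a blockchain:

```python
import hashlib

# "Don't trust, verify": rather than trusting an intermediary's account
# of an exchange, anyone can recompute the record's hash and compare it
# to the openly published commitment. The record below is a hypothetical
# stand-in for illustration.

def commit(record: bytes) -> str:
    """Return a digest of the record, to be published openly."""
    return hashlib.sha256(record).hexdigest()

def verify(record: bytes, published_digest: str) -> bool:
    """Independently check a record against the published commitment."""
    return hashlib.sha256(record).hexdigest() == published_digest

record = b"Alice pays Bob 5 units"
digest = commit(record)

print(verify(record, digest))                        # True: record matches
print(verify(b"Alice pays Bob 500 units", digest))   # False: tampering detected
```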

Now, let’s apply this worldview to the demands laid out in this week’s AI “extinction” letter.

The signatories want governments to come together and devise international-level policies to contend with the AI threat. That’s a noble goal, but the decentralization mindset would say it’s naive. How can we assume that all governments, present and future, will recognize that their interests are served by cooperating rather than going it alone – or worse, that they won’t say one thing but do another? (If you think monitoring North Korea’s nuclear weapons program is hard, try getting behind a Kremlin-funded encryption wall to peer into its machine learning experiments.)

It was one thing to expect global coordination around the COVID-19 pandemic, when every country needed vaccines, or to expect the logic of mutually assured destruction (MAD) to lead even the bitterest Cold War enemies to agree never to use nuclear weapons, because the worst-case scenario was obvious to everyone. It’s another for it to happen around something as unpredictable as the direction of AI – and, just as importantly, where non-government actors can easily use the technology independently of governments.

The concern some in the crypto community have about the big AI players’ rush to be regulated is that the resulting rules will create a moat around their first-mover advantage, making it harder for competitors to challenge them. Why does that matter? Because in endorsing a monopoly, you create the very centralized risk that those decades-old crypto thought-experiments tell us to avoid.

I never gave Google’s “Don’t Be Evil” motto much credence, but even if Alphabet, Microsoft, OpenAI and Co. are well intentioned, how do I know their technology won’t be co-opted by a differently motivated executive board, government or hacker in the future? Or, more innocently, if that technology exists inside an impenetrable corporate black box, how can outsiders inspect the algorithm’s code to ensure well-intentioned development isn’t inadvertently going off the rails?

And here’s another thought experiment to examine the risk of centralization for AI:

If, as people like Yudkowsky believe, AI is destined on its current trajectory to reach AGI status, with an intelligence that could lead it to conclude it should kill us all, what structural scenario would lead it to draw that conclusion? If the data and processing capacity that keeps the AI “alive” is concentrated in a single entity that a government or a worried CEO can shut down, one could logically argue the AI would kill us to preempt that possibility. But if the AI itself “lives” within a decentralized, censorship-resistant network of nodes that cannot be shut down, this sentient digital being won’t feel sufficiently threatened to eradicate us.

I have no idea, of course, whether that’s how things would play out. But in the absence of a crystal ball, the logic of Yudkowsky’s AGI thesis demands that we engage in these thought-experiments to consider how this potential future nemesis might “think.”

Finding the right mix

Of course, most governments will struggle to buy any of this. They will naturally prefer the “please regulate us” message that OpenAI’s Altman and others are actively delivering right now. Governments want control; they want the capacity to subpoena CEOs and order shutdowns. It’s in their DNA.

And, to be clear, we need to be realistic. We live in a world organized around nation-states. Like it or not, it’s the jurisdictional system we’re stuck with. We have no choice but to involve some level of regulation in the AI extinction-mitigation strategy.

The challenge is to figure out the right, complementary mix of national government regulation, international treaties and decentralized, transnational governance models.

There are, perhaps, lessons to take from the approach that governments, academic institutions, private companies and non-profit organizations took to regulating the internet. Through bodies such as the Internet Corporation for Assigned Names and Numbers (ICANN) and the Internet Engineering Task Force (IETF), we installed multistakeholder frameworks that enabled the development of common standards and allowed for dispute resolution through arbitration rather than the courts.

Some level of AI regulation will, undoubtedly, be necessary, but there’s no way that this borderless, open, rapidly changing technology can be controlled entirely by governments. Let’s hope they can put aside the current animus toward the crypto industry and seek out its advice on resolving these challenges with decentralized approaches.

Edited by Ben Schiller.





Michael J. Casey

Michael J. Casey is CoinDesk's Chief Content Officer.

