AI Boosters Would Sacrifice Humanity for a Simulacrum - as Long as They're in Control

AI boosterism and associated “long-termist” ideas may be a threat to your privacy, property and civil rights.

Apr 14, 2023 at 4:22 p.m. UTC

Humans have always had a strange relationship to their technology. Whether it’s fire or the written word or artificial intelligence (AI), the things we create to empower and extend ourselves often come to seem like strange outside forces, their transformative impact on our lives and minds as profound and unpredictable as that of capricious gods.

In the 21st century, there are many technologies whose impact we still haven’t adjusted to, like social media. Others, particularly artificial intelligence and cryptocurrency, are not yet fully developed but stand to have major impacts on how we live. That includes shaping the way we think about the world: Human groups are often profoundly influenced by the built-in biases of the technologies they rely on. (This relationship between mankind and its technology was the main focus of my academic career before I turned to journalism.)

The cases of AI and cryptocurrency are particularly enthralling. Their respective technological underpinnings each map to much broader ideas about society and even individual morality. The two sets of ideas also seem in many ways diametrically opposed. On the one hand, you have crypto ideologues, often some variety of libertarian; on the other, you have the techno-utopianism of the AI boosters.

The core principle of blockchain tech is universality. The entire point is that everyone, everywhere, can access these tools and services, whether someone in authority is OK with that or not. A corollary of this principle is a very high value placed on privacy. These values lead to a significant amount of chaos, but the underlying ethos includes a belief that such chaos is productive, or at least a necessary trade-off for human flourishing.

Artificial intelligence communities tend toward a very different view of the world, one also closely tied to the structure of the technology. The basic role of an AI is to observe human behavior and then copy it, in all its nuance. This has many implications, but the major ones include the eventual end of human labor, technical centralization and a bias toward total data transparency.

These may seem innocuous or even positive. But to put it bluntly, the implications of AI’s technical biases seem quite grim for humanity. That makes it all the more fascinating to watch AI boosters not just embrace those grim implications, but work tirelessly to make them real.

You aren’t human anymore

The most glaringly malevolent manifestation of the AI worldview might not look like one. Worldcoin is seemingly a cryptocurrency project designed to distribute a “universal basic income” (UBI) in exchange for harvesting uber-sensitive biometric data from vulnerable populations.

But while it’s a “crypto” on the surface, Worldcoin is actually an AI project in substance. It’s the brainchild of Sam Altman, also a co-founder of OpenAI. Its goal of identifying individuals and giving them UBI is premised on a future in which artificial intelligence takes over all human jobs, requiring a centrally administered redistribution of AI-generated wealth.

One often-overlooked implication is that, in this future, people like Sam Altman will be the ones who still have meaningful work and individual freedom, because they’ll be running the machines. The rest of us, it seems, will have to subsist on worldcoin.

If that sounds dehumanizing, try this on for size: human rights for machines. In an essay published in The Hill on March 23, contributor Jacy Reese Anthis declared, “We need an AI rights movement.” As many pointed out, that’s philosophically suspect because there’s no compelling evidence that artificial intelligences will or can ever have any subjective experience, or know suffering in a way that would necessitate protection.

The AI hypesters have gone to great lengths to obscure this question of subjective consciousness. They have floated an array of maliciously misleading arguments that AIs – including, absurdly, currently existing language models – possess some form of consciousness, or the subjective experience of being in the world. Consciousness and its origins are among the all-time great mysteries of existence, but AI advocates often seem to have a superficial understanding of the issue, or even of what the word “consciousness” means.

Anthis, a co-founder of something called the Sentience Institute, repeats in his essay the cardinal logical fallacy of much flawed AI research: uncritically accepting the visible output of AI as a direct sign of some internal subjective experience.

As nuts as the call for AI rights is, we still haven’t touched the furthest fringes of misguided AI hypedom. In late March, no less a figure than aspiring philosopher-king Eliezer Yudkowsky suggested that his followers should be prepared to commit violent attacks on server farms in order to slow or stop the advance of AI.

Some may be surprised that I include Yudkowsky among the pro-AI forces. Yudkowsky is the leader of an “AI Safety” movement that grew out of his “rationalist” blog LessWrong, and he argues AI may eventually take over the world and destroy humanity. The validity of those ideas is up for debate, but the important thing here is that Yudkowsky and his so-called rationalist peers don’t oppose artificial intelligence. The Yudkowskyites believe AI can bring about a utopia (again, a utopia they would be in charge of) but that it carries serious risks. They are fundamentally AI advocates dressed in skeptics’ clothing.

That contrasts with the more genuine and grounded skepticism of figures including Timnit Gebru and Cathy O’Neil. These researchers, who do not have nearly the following of Yudkowsky, aren’t worried about a far-future apocalypse. Their concern is with a clear and present danger: that algorithms trained on human behavior will also reproduce the flaws of that behavior, such as racial and other forms of discrimination. Gebru was notoriously fired from Google for having the temerity to point out problems with AI that threatened Google’s business model.

You may note the similarity here to other ideological movements, particularly the related ideas of “effective altruism” and “longtermism.” Émile P. Torres, a philosopher based at Leibniz University in Germany, is among those who describe the deep alignment and affiliations between Yudkowskyites, “transhumanists,” effective altruists and longtermists, under the acronym TESCREAL. As just one tiny example of this alignment, FTX, the fraudulent crypto exchange staffed substantially by effective altruists and led by a man who pretended to be one, made large donations (using allegedly stolen funds) to AI safety groups, as well as other longtermist causes.

This package of worldviews, Torres argues, boils down to “colonize, subjugate, exploit, and maximize.” In Worldcoin in particular, we see a preview of the implacable authoritarianism of a world where machines endlessly copy human behaviors, then replace the humans themselves – except for the handful with their hands on the tiller.

There is much more to be said here, but for those in the crypto world, one declaration from the TESCREAL axis deserves particular attention. In a very recent essay, Yudkowsky declared that the future threat of dangerous artificial intelligence could require “very strong global monitoring, including breaking all encryption, to prevent hostile AGI behavior.”

In other words, to these folks the privacy and autonomy of humans in the present must be discarded in favor of creating the conditions for safe and continuing expansion of artificial intelligence in the future. I’ll leave it to you to decide whether that’s a future, or present, that anyone should want.



David Z. Morris

David Z. Morris was CoinDesk's Chief Insights Columnist. He holds Bitcoin, Ethereum, and small amounts of other crypto assets.