Is Crypto-AI Really a Match Made in Heaven?

There’s lots of talk these days about how blockchains can make AI safe for human consumption. But is the idea, widely touted in the crypto community, workable in practice? Jeff Wilser takes a critical look.

Sep 21, 2023 at 4:11 p.m. UTC

Trent McConaghy’s father was a pig farmer in rural Canada. There wasn’t much to do for fun. His father bought him a computer when he was a kid, in the 1980s, along with a reference manual. Trent couldn’t play video games at an arcade, so he taught himself to code and whipped up his own version of Pac-Man. When he was around 10 years old, he found a used book at a garage sale.

The book was about AI.

“I’ve been super excited about it ever since, for my whole life,” says McConaghy. In the 1990s, he downloaded neural networks from “bulletin boards” before the internet was even a thing; he pioneered research on using AI for circuit-board design; and he worked on AI-related startups until “getting the blockchain bug in 2013.”

While many in the crypto space have recently “pivoted to AI,” McConaghy has been there since the beginning. “Blockchain has been my focus for several years,” he wrote in 2017. “But AI was always my first love, and I never let go.”

McConaghy’s project, Ocean Protocol, is one of a growing number of crypto-meets-AI startups that are trying to leverage blockchain to improve AI. But this is where things get fuzzy. What does “crypto-meets-AI” even mean, exactly? Ever since the exploding hype of ChatGPT, many in the Web3 space, including me, have written about how crypto and AI could somehow fit together. But often the idea is hazy and abstract and framed in a vague sense of “let’s combine chocolate and peanut butter.”

Perhaps tellingly, you usually see the hopeful “Web3 and AI need each other” takes from the Web3 space, and rarely from the AI camp, which has the feel of a one-sided college rivalry. Even many Web3/AI champions are frustrated by the hopium. “I’m actually quite disappointed with the narrative out there,” says Mrinal Manohar, CEO of Casper Labs, which is working on blockchain-meets-AI solutions. “There’s a sense of, ‘Let’s throw a little blockchain fairy dust on it, and it gets better.’ That’s not really how stuff works.”

So the goal here is to go deeper and explore how, more concretely, Web3 can help in the development of safe and ethical AI. There are dozens of projects working on a wide range of use cases, from spotting deepfakes to making AI safer -- Allison Duettmann gives an excellent overview -- but here we’ll focus on two essential areas: data and compute.

Any AI model is only as good as its data. And training these AI models requires so much computational power -- and energy -- that it makes bitcoin mining look like a pocket calculator.

(Trent McConaghy)

So how could Web3 help?

Let’s start with data. As anyone who has dabbled in ChatGPT knows, the AI has a tendency to “hallucinate,” which is a nice euphemism for “bullshitting.” I like to think of ChatGPT as an employee who is so eager to please you, so hungry for your affection, that it will lie so you’ll think it’s doing a good job.

The problem of hallucinations is complicated. There’s no clear answer. Once the data is hoovered up by the LLM it’s essentially a black box, and even the brightest engineers in the business can’t pinpoint which data inputs produced a given output. (Here I’m imagining a brilliantly written academic white paper that ends in a shrug emoji.) On top of that, as we’ve seen from recent lawsuits from writers like Sarah Silverman and Michael Chabon, there are charges that AI is illegally built from copyrighted work.

“Blockchain can eliminate all of that,” says Manohar. “Blockchain is the world’s strongest copy protection technology.” Blockchain, with its ability to track every chunk of data along the chain, could in theory -- heavy emphasis on “in theory” -- let you audit the data for copyright infringement, bias, or inaccuracies.
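To make that auditing idea concrete, here is a minimal Python sketch of provenance tracking: a toy in-memory ledger of content hashes, not any real blockchain or any actual Ocean or Casper API. Every name here is hypothetical.

```python
import hashlib

# Toy illustration only: record a fingerprint of each training chunk,
# plus its source and license, in an append-only ledger. A real system
# would anchor these records on-chain; here it's just a Python list.
ledger = []

def register(chunk: bytes, source: str, license_tag: str) -> str:
    """Fingerprint a data chunk and append its provenance record."""
    digest = hashlib.sha256(chunk).hexdigest()
    ledger.append({"hash": digest, "source": source, "license": license_tag})
    return digest

def audit(chunk: bytes):
    """Return the provenance record for a chunk, or None if untracked."""
    digest = hashlib.sha256(chunk).hexdigest()
    return next((r for r in ledger if r["hash"] == digest), None)

register(b"some training text", "example.com/essay", "CC-BY-4.0")
print(audit(b"some training text"))    # provenance record found
print(audit(b"mystery scraped data"))  # None -> flag for copyright review
```

The point of the sketch is the shape of the workflow: data enters the corpus only with a recorded fingerprint, so anything that can’t be matched back to a record is, by construction, auditable as suspect.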

An embrace of blockchain could not only vet the data used to feed the models, it could even, potentially, expand the pool of useful data. Think about health care or financial information. Given the obvious concerns over privacy, “these are two data sources that we cannot touch,” says Duettmann, CEO of Foresight Institute. “Large data models cannot go there.”

But what if individuals were incentivized to offer their own personal data, assuming this could be done in a way that preserves privacy? Ali Yahya, a General Partner at blue-chip VC fund Andreessen Horowitz, has called this the “long tail” of data, and argues that Web3 is the only way to harness it. “There’s just data out there in the world that doesn’t exist except for the long tail,” says Yahya.

The way that AI teams have traditionally built data models, explains Yahya, involves a “top-down” framework where you scour the earth and hunt for data, like how Waymo slaps cameras atop cars with a mission of filming every road on the planet. “But maybe a better way to do that is bottoms-up,” says Yahya, “where you just have normal people out there contributing data from their own driving.”

But these normal people wouldn’t do it for free. They need an incentive, and that brings us back to Trent McConaghy. As far back as 2017, he envisioned Ocean Protocol as a data marketplace that could “connect data-haves with data-have-nots,” which in turn would spread out “data-AI network effects, in turn spreading power and wealth.”

In the very next sentence McConaghy added, almost as a reality check, “But the technology is not easy.” The challenges: “It needs to store who owns what data, with tight user control and privacy,” McConaghy explains. “It needs to reconcile with governments and regulators on privacy and data sharing. It needs to be decentralized. It needs to be at scale, not just some shiny toy technology. Decentralized tech at scale is hard.”

Read more: Allison Duettmann - How Crypto Can Help Secure AI

This is why it took five years to build out the infrastructure of Ocean’s marketplace. (Another OG crypto-meets-AI project, SingularityNET, has also been building out decentralized AI marketplaces for the past five years.) At times it didn’t even look like an AI-focused project. A recent blog post, for example, said that the next phase of Ocean is to “drive value-creation loops,” and that “the most promising and mature vertical is decentralized finance.”

The point of all this minutia is that before the chocolate-meets-peanut-butter of crypto-AI becomes a reality, there’s an unfathomable amount of plumbing that needs to be put into place. McConaghy is a lifelong AI addict and he’s been building this for years, but he acknowledges that, “We haven’t been able to build much AI stuff directly yet.”

That could soon change. He says the data marketplace now exists, the incentives are in place, and “Now we get to play with the AI a lot more again.”

Hard to compute, hard to verify

Playing with AI requires computation, and computation requires computer chips -- like Nvidia’s H100 -- and these chips are in short supply. (OpenAI even announced that they are delaying the training of their next models because of a lack of chips.)

While perhaps the shortage is only a short-term pinch, it’s also possible that it will continue -- or even get worse -- as the appetite for data processing grows. What happens when OpenAI is training ChatGPT 7? Could the need for chips outpace the supply?

“We’re kind of running out of places to build enormous data centers,” says Ben Fielding, co-founder of Gensyn, another crypto-meets-AI project. “You hear about people going to Antarctica [to build data centers]. It’s getting a lot harder.” So Gensyn is working on an audacious plan to unlock more global supply. “You don’t just have one big data center,” says Fielding. “Now you have every data center on the planet.”

The idea of Gensyn: Users will be compensated (via some type of token) for loaning their idle computing power to a network, and this will morph into a decentralized system that can train AI. There’s actually precedent for these kinds of crypto projects. Filecoin incentivizes users to contribute unused storage capacity to a decentralized cloud; Helium rewards people for providing wireless coverage. Both projects started as moonshots, and now they both have traction.
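The incentive loop can be pictured with a toy accounting sketch -- hypothetical reward numbers and node names, not Gensyn’s actual token mechanics, which aren’t public in this form:

```python
# Toy sketch of a compute-for-tokens incentive loop (not real Gensyn
# mechanics): contributors complete verified units of training work and
# accrue token rewards in proportion.
REWARD_PER_UNIT = 2  # hypothetical tokens paid per verified compute unit

balances: dict[str, int] = {}

def credit(node: str, verified_units: int) -> None:
    """Pay out tokens for work units that passed verification."""
    balances[node] = balances.get(node, 0) + verified_units * REWARD_PER_UNIT

credit("alice", 5)
credit("bob", 3)
credit("alice", 2)
print(balances)  # {'alice': 14, 'bob': 6}
```

The hard part, as the article goes on to explain, is the word “verified” in that comment -- deciding that a unit of machine-learning work was actually done correctly is the crux of the whole design.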

Read more: Chris Coll-Beswick - AI Is Killing Crypto Venture Capital Interest

Gensyn is up against longer odds.

The team faces one fundamental challenge that complicates everything they do: Verification. In some ways their task is even trickier than securing the bitcoin network. “Bitcoin is hard to compute, easy to verify,” says Harry Grieve, Gensyn’s other co-founder, explaining that while Proof of Work demands enormous computing power, the actual math itself is relatively straightforward. “Machine learning is hard to compute, also hard to verify.”

The challenge involves the old “trustless” issue. If you’re sending me bitcoin, you don’t know me and you can’t trust me, so the genius of the network is to verify both ends of the transaction. But that’s static. If you’re actually computing data that will train AI -- such as images or movies -- that’s a moving target that needs to be verified against other checkpoints in the network, and it all needs to happen on-chain.

Then it gets even harder. The fluid nature of neural networks means the entire system is always in flux. “You can’t just take two pieces and check them separately,” says Fielding. The details get awfully weedy, but the key idea is that each piece of the puzzle needs to be checked against the entire puzzle -- which is in motion -- at all times, and it needs to be done using some kind of Zero-Knowledge Proof to preserve privacy.
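One simplified way to picture what checkpoint verification buys you -- with no zero-knowledge machinery, purely as illustration -- is a plain hash chain over successive training checkpoints: because each link folds in the previous one, tampering with any intermediate state changes every link after it.

```python
import hashlib

# Simplified picture only: chain hashes over successive model checkpoints
# so a verifier can detect whether any intermediate state was altered.
def chain_checkpoints(checkpoints: list[bytes]) -> list[str]:
    hashes, prev = [], b""
    for ckpt in checkpoints:
        digest = hashlib.sha256(prev + ckpt).hexdigest()
        hashes.append(digest)
        prev = digest.encode()
    return hashes

honest = chain_checkpoints([b"w0", b"w1", b"w2"])
tampered = chain_checkpoints([b"w0", b"w1-evil", b"w2"])
# The chains agree up to the tampered checkpoint, then diverge forever.
print(honest[0] == tampered[0])  # True
print(honest[2] != tampered[2])  # True
```

Real designs like Gensyn’s must go much further -- proving the work was done correctly, not just that the checkpoints weren’t swapped, and doing it privately -- but the chain shows the basic commitment idea.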

Then there’s the “distributed system challenge.” Let’s say you, me and Taylor Swift are all offering our computers to help train AI models. Sounds great in theory. But in this kind of decentralized system, who exactly is deciding what chunk of the task is being performed by your idle computer, what part by mine, and what part by Taylor Swift while she’s onstage? This is the type of infrastructure that needs to be built. “It’s a hard problem,” says Yahya (a16z is an investor in Gensyn), “but it’s not an impossible problem.”

Centralized systems, in contrast, are relatively simple. It’s easy to coordinate resources. Verification is a given. You don’t have the headache of latency. This is why centralized solutions from the likes of OpenAI and DeepMind have leapt out to such a head start. The pitch from Gensyn and Ocean and many other projects like them is that, yes, what they are doing is harder and slower (for now) and will take time to build, but in the end the payoff -- AI that’s unshackled from big tech -- will be worth the wait.

Then again… do these decentralized solutions even need blockchain?

Blockchains necessary?

I ask Yahya this point-blank. What if the real secret sauce is cryptography, yes, but not necessarily cryptocurrency?

For him this is simple. “Cryptography by itself cannot create this incentive field,” says Yahya. “It cannot create this decentralized marketplace for resources in the way that a blockchain can.”

Maybe that’s true, but it also might be a stretch to conflate every merit of cryptography with the perks of cryptocurrency. “You don’t need blockchain for everything related to cryptography,” says Duettmann, explaining that a good chunk of the AI field is focused on cryptographic solutions that have nothing to do with Web3. And while it’s true that decentralized AI is a goal embraced by many, it’s also true that plenty of open-source solutions -- such as Meta’s promising Llama 2 models -- seem indifferent to tokens or cryptocurrency.

It’s also possible that Web3 is not only unnecessary for the development of ethical AI, but might even be harmful. Think of the darkest elements of crypto -- rug pulls, scams, phishing attacks -- and now combine that with the worst fears of runaway AIs. Imagine an AI-controlled-DAO that’s on some kind of privacy-preserving chain, and this AI is good at making money and then using it to spread chaos, a la Ultron.

Nightmares aside, even in the best-case scenarios for crypto-infused AI, the experts generally agree that mainstream solutions are nowhere imminent. The infrastructure will take time.

“Timing is always hard to predict,” Yahya says guardedly, when I press him for an estimate. He suspects that the technical solutions might be in place in “one to two years,” perhaps using a model of progressive decentralization. Gensyn’s Fielding says, “Lots of things we need will take multiple years to build.”

It’s also not clear exactly how the data solutions (like Ocean) will ultimately plug into the computing solutions (like Gensyn). And even if these solutions are able to snap together and work as advertised, the UX needs to be clean and simple and easy to use. “The actual user experience has to feel as if you’re using something like the cloud, that’s centralized,” says Yahya. “If it’s much, much harder to use something like Gensyn than Amazon, then I think that’d be difficult, even if the price is lower.”

Clean UX has never been Web3’s strong-suit, but let’s imagine that developers pull it off. Let’s envision a world where all of these solutions work. And now let’s explore an AI-plus-blockchain scenario that’s truly wild.

Nature 2.0

McConaghy has obsessed over AI since his childhood on the pig farm. He has spent years building crypto-AI-solutions. “Can blockchain help with AI alignment? In the near-term, it can help,” he says. But ultimately, he thinks there’s little -- or maybe nothing -- that can be done to curb AGIs in the long-term. And he believes it’s naïve to think AGIs will spare our “meat bags” out of benevolence.

Consider the ants.

McConaghy thinks that AGI is a fait accompli, and the only question is when. So he imagines a metaphor of humans and ants, where the future AGIs are the humans and we are the ants. “Let’s say the ants come to you, and they can talk, and they say, ‘Hey, we’d like you to stop being as smart as humans. Please, please, please only be as smart as ants.’” Humans are 100 times as smart as ants, says McConaghy, and it’s “completely obvious that AI will get to be 100 times as smart as humans and beyond.”

We wouldn’t bow to the ants’ wishes; we’d just stomp them with our boots.

If you accept this premise, logically, there are only two ways to crack this problem. The first is the one that most people are talking about -- regulation, slowing down, or somehow “nerfing” large language models to keep the growth of AI in check, or aligning them to our values.

McConaghy sees this as a losing battle. (Perhaps a battle we should fight nonetheless, but one that we will likely lose.)

So, instead, he’s bullish on the second solution, the one that almost no one’s talking about, and this is where things get spicy: Leveraging AI to let humans become “super intelligent” and add functionality to our brains. He thinks of this as “Nature 2.0,” and as the next phase of human evolution. This is a future of humanity where, as McConaghy puts it, we are “not bound by our blood bodies.”

The exact solution is still a whopping TBD, but maybe our brains are scanned and uploaded to a computer, or perhaps even our entire body is replicated atom-by-atom. Eventually, McConaghy thinks that when you are 90 years old and your body is withering, maybe your new cyborg-ish self realizes that this “meat-bag stuff is getting in the way,” and you “clip it like a finger-nail.” (He unpacked the idea way back in a 2016 essay “The AI Existential Threat: Reflections of a Recovering Bio-Narcissist.”)

Clearly this is all still, at this point, just the stuff of science fiction. McConaghy recognizes that the tech is still years or even decades away. But he then gives a compelling, step-by-step scenario that I find surprisingly plausible.

He starts with something that is very much in our current world: Imagine you’re wearing Apple earbuds and walking around with them all day long. You’re interacting with friends (sending and receiving texts) and chatting with an upgraded version of Siri. This is not far-fetched.

Now imagine you’re doing the same thing, but, instead of AirPods, you’re wearing glasses that have a simple LCD screen -- a souped-up version of Google Glass, the headset that never took off. The glasses let you send and receive texts, interact with an AI chat-bot and query the internet.

Maybe the glasses have “eye-tracking,” so you glance at prompts on the screen like you’re clicking a mouse. Again, not at all far-fetched, as this is precisely how Apple’s new Vision Pro goggles work.

Now imagine a twist. The glasses get an upgrade with “brain-scanning,” meaning you can click a link by thinking instead of pressing a button or even looking at it. This is not as outlandish as it sounds. “That’s pretty practical right now,” says McConaghy. For example, a brain-computer interface was recently used to allow a paralyzed woman -- who had no physical control of her vocal cords -- to speak again through her brain signals. One of Elon Musk’s roughly 347 companies, Neuralink, is quietly working on these kinds of brain-interface solutions.

The next twist: These glasses are taking video of everything around you, always, and yes, this was indeed the plot of a Black Mirror episode. The glasses store the video. It’s now a powerful database that you can access at any time, just by thinking. As McConaghy says, “When you try to remember what you had for breakfast you can do a query, ‘What did I have for breakfast?’” This would give you a literal photographic memory.

And the glasses will also, of course, be equipped with some form of advanced chatbot, giving you instant access to the entirety of human knowledge, and letting you fetch results just by thinking. Let’s say you have a problem to solve. You could have silent and rapid-fire conversations with this chat-bot to help you brainstorm, strategize, map out scenarios, challenge assumptions, check your math, consider what Socrates would have said about your predicament, and then Immanuel Kant and then David Hume and then Jean-Paul Sartre.

You are now super-smart. You have swallowed the “Limitless” intelligence pill. You are now a member of Nature 2.0, and perhaps you are now, finally, a match for the inevitable AGIs. Now it’s a fair fight.

So, does blockchain have a role in this world of AI-powered super-humans?

McConaghy had a vision for this in 2018, an idea that (as he said then) “combines and extends AI [plus] blockchain into a symbiosis of biology and machine, for a future of abundance.” The gist of the idea: Just as today’s Web2 data is owned by the corporate giants, tomorrow’s thought-data could be up for grabs. “How do you make sure your thoughts are not being watched by Google?” McConaghy asks. Some kind of blockchain solution would give you privacy, agency, and ownership over your very thoughts that are now linked to this larger network.

“You need to basically own [your thoughts] in the same way you own bitcoin,” says McConaghy. And he leaves me with one final concept: “Not your keys, not your thoughts.”

Edited by Ben Schiller.



CoinDesk is an award-winning media outlet that covers the cryptocurrency industry. Its journalists abide by a strict set of editorial policies. In November 2023, CoinDesk was acquired by the Bullish group, owner of Bullish, a regulated, digital assets exchange. The Bullish group is majority-owned by Block.one; both companies have interests in a variety of blockchain and digital asset businesses and significant holdings of digital assets, including bitcoin. CoinDesk operates as an independent subsidiary with an editorial committee to protect journalistic independence. CoinDesk employees, including journalists, may receive options in the Bullish group as part of their compensation.

Jeff Wilser

Jeff Wilser is the author of seven books, including Alexander Hamilton's Guide to Life and The Book of Joe: The Life, Wit, and (Sometimes Accidental) Wisdom of Joe Biden, an Amazon Best Book of the Month in both Non-Fiction and Humor.
