
Fear of an A.I. Pundit

Nick Bostrom’s 2014 book, “Superintelligence,” a crucial text for the community of worriers about the risks of artificial intelligence, begins with a fable: A tribe of sparrows, weary of a marginal existence, becomes convinced that everything would be better if they could only have an owl to help them out — to build nests, to assist with care and feeding, to keep an eye out for other predators. Delighted by this idea, the sparrows decide to go hunting for an owl egg or owl chick that they might bring up as their own.

Only Scronkfinkle, “a one-eyed sparrow with a fretful temperament,” points out that maybe they should consider the dangers of living with a full-grown owl and put some thought into owl taming and owl domestication first. But he’s overruled on the grounds that merely getting an owl will be hard enough and there will be time to worry about taming it once it’s been acquired and reared. So while the others fly off to search for eggs and chicks, Scronkfinkle and a few other sparrows try to put their minds to the taming problem — a difficult challenge, lacking an owl to work with, and one shadowed by the fear that at any moment their nest mates might come back with an owlet and put their sketched-out theories to a brutal test.

It’s a neat fable about what A.I. alarmists think is happening right now. The accelerating power of artificial intelligence, manifest publicly so far in chatbots and image generators, is a growing owlet in our nest, one we are still unprepared to tame. And it’s in the spirit of Scronkfinkle that a collection of Silicon Valley notables, including Elon Musk, just signed an open letter urging at least a six-month pause in large-scale A.I. experiments to allow our safety protocols to catch up.

But there’s a crucial difference between the fable and our own situation, which helps explain why the humans urging a pause face an even harder task than Scronkfinkle did. Note that the sparrows, for all their guilelessness, at least know generally what an owl looks like, what it is and what it does. So it shouldn’t be hard for them, and it isn’t hard for the reader, to imagine the powers that an untamed owl would bring to bear — familiar powers of speed and sight and strength, which could tear and gouge and devour the luckless sparrow clan.

With a superintelligence that is notional for now, however, the whole point is that no analogue exists for us to observe, understand and learn to fear. The alarmists don’t have a simple scenario of risk, a clear description of the claws and beak; they have a lot of highly uncertain scenarios based on even more uncertain speculation about what an intelligence somehow greater than ours might be capable of doing.

That doesn’t make their arguments wrong. Indeed, you could argue that the very uncertainty makes superintelligent A.I. that much more worth fearing. But generally, when human beings turn against a technology or move to restrain it, we have a good idea of what we’re afraid of happening, what kind of apocalypse we’re trying to forestall. The nuclear test ban treaties came after Hiroshima and Nagasaki, not before. Or a less existential example: The current debate about limiting kids’ exposure to social media is potent because we’ve lived with the internet and the iPhone for some time; we know a lot about what the downsides of online culture seem to be. Whereas it’s hard to imagine persuading someone to pre-emptively regulate TikTok in the year 1993.

I write this as someone who struggles to understand the specific dooms that might befall us if the A.I. alarmists are correct or even precisely what we mean when we say “superintelligence.”

Some of my uncertainty attaches to the debates about machine consciousness and whether A.I. would need to acquire a sense of self-awareness to become genuinely dangerous. But it’s also possible to distill the uncertainty to narrower questions that don’t require taking a position on the nature of the self or soul.

So let’s walk through one of them: Will supercharged machine intelligence find it significantly easier to predict the future?

I like this question because it’s connected to my own vocation — or at least what other people think my vocation is supposed to be: No matter how many times you disclaim prophetic knowledge, there is no more reliable dinner-party question for a newspaper columnist than “What’s going to happen in Ukraine?” or “Who’s going to win the next primary?”

I don’t think my own intelligence is especially suited to this kind of forecasting. When I look back on my own writing, I do OK at describing large-scale trends that turn out to have a shaping influence on events — the transformation of the Republican Party into a downscale, working-class coalition, say. But when the big trends distill into specific events, I’m just doing guesswork like everybody else: Despite my understanding of the forces that gave rise to Donald Trump, I still consistently predicted that he wouldn’t be the Republican nominee in 2016.

There are forms of intelligence, however, that do better than mine at concrete prediction. If you read the work of Philip Tetlock, who studies superforecasters, it’s clear that certain habits of mind yield better predictions than others, at least when forecasts are expressed as probabilities and scored across a wide range of questions.

Thus (to use an example from Tetlock’s book, “Superforecasting,” written with Dan Gardner) the average pundit, early in the Syrian civil war, might have put the likelihood of President Bashar al-Assad losing power within six months at around 40 percent. But the superforecasters, with a slightly deeper focus on the situation, put the odds at less than 25 percent. Assad’s subsequent survival alone doesn’t prove that the superforecasters had it exactly right — maybe the dictator just beat the odds — but it helps their overall batting average, which across a range of similar predictive scenarios is higher than the pundit baseline.

But not so much higher that a statesman can just rely on their aggregates to go on some kind of geopolitical winning streak. So one imaginable goal for a far superior intelligence would be to radically improve on this kind of merely human prognostication.
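For the curious, here is how that kind of forecasting “batting average” is usually scored: with the Brier score, the accuracy measure used in Tetlock’s research. What follows is a minimal sketch in Python; the probabilities and outcomes are invented for illustration, not drawn from his data.

```python
# Brier score: the mean squared gap between probabilistic forecasts and
# what actually happened (1 = the event occurred, 0 = it did not).
# Lower is better; hedging everything at 50 percent scores 0.25.

def brier_score(forecasts, outcomes):
    """Average squared error of a list of probability forecasts."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Invented numbers: five yes/no geopolitical questions. The "pundit"
# chases vivid scenarios; the "superforecaster" stays closer to base
# rates (embattled dictators usually survive any given six-month window).
outcomes        = [0, 0, 1, 0, 1]  # 1 = event happened
pundit          = [0.40, 0.60, 0.50, 0.70, 0.30]
superforecaster = [0.20, 0.30, 0.60, 0.35, 0.55]

print(f"Pundit:          {brier_score(pundit, outcomes):.3f}")           # 0.350
print(f"Superforecaster: {brier_score(superforecaster, outcomes):.3f}")  # 0.123
```

No single question can separate luck from skill, which is why Assad’s survival alone settles nothing; the edge shows up only as a consistently lower score across many questions.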

We know that artificial intelligence already has powers of pattern recognition that exceed and sometimes mystify its human makers. For instance, A.I. models can predict a person’s sex from a retina photograph alone, a feat beyond human experts, for reasons that remain unclear. And there’s growing evidence that artificial intelligence will be able to do remarkable diagnostic work in medicine.

So imagine some grander scale of pattern recognition being applied to global politics, predicting not just some vague likelihood of a dictator’s fall, but this kind of plot, in this specific month, with these particular conspirators. Or this particular military outcome in this particular province with these events rapidly following.

Superintelligence in this scenario would be functioning as a version of the “psychohistory” imagined by Isaac Asimov in his “Foundation” novels, which enables its architect to guide future generations through the fall of a galactic empire. And a prophetic gift of this sort would have obvious applications beyond politics — to stock market forecasting, for instance, or to the kind of “precrime” prediction engine envisioned by Philip K. Dick in “The Minority Report” and then adapted by Steven Spielberg.

It would also fit neatly into some of the speculation from A.I. pessimists. When the Silicon Valley-adjacent writer Scott Alexander set out to write a vision of a malevolent A.I.’s progress, for instance, he imagined it attaching itself initially to Kim Jong-un and taking over his country through a kind of superforecasting prowess: “Its advice is always excellent — its political stratagems always work out, its military planning is impeccable and its product ideas turn North Korea into an unexpected economic powerhouse.”

But is any intelligence, supercharged or otherwise, capable of such foresight? Or is the world so irreducibly complex that even if you pile pattern recognition upon pattern recognition and let A.I. run endless simulations, you will still end up with probabilities that aren’t all that much more accurate than what can be achieved with human judgment and intelligence?

My assumption is that it’s the latter: that there are diminishing returns to any kind of intelligence as a tool of prophecy, that the world is not fashioned to be predicted in such detailed ways — any more than the trawl-the-internet capacities of ChatGPT have enabled it to resolve present-day mysteries that don’t require prophecy at all. When a chatbot reveals, Sherlock Holmes-style, the detailed evidence that our all-too-human powers missed and solves the Nord Stream pipeline bombing or explains the disappearance of Malaysia Airlines Flight 370, then I’ll start to expect psychohistory from a future iteration. But it seems more likely that the power of real prophecy will escape A.I., and any doomsday scenario requiring perfect Machiavellian foresight from our would-be overlord isn’t terribly credible, no matter how super its forecasting becomes.
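One standard illustration of why raw computing power might hit this wall, offered here as an analogy rather than anything from the column’s sources: in chaotic systems, which geopolitics plausibly resembles, tiny errors in measuring the present compound exponentially, so even a perfect simulator loses detailed foresight beyond a short horizon. The toy example below uses the logistic map, a textbook chaotic system.

```python
# Logistic map at r = 4, a textbook chaotic system:
#     x_next = 4 * x * (1 - x)
# Two starting points that differ by one part in a billion disagree
# completely within a few dozen steps, so a forecaster's tiny
# uncertainty about the present swamps any model of the future.

def trajectory(x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.400000000, 50)
b = trajectory(0.400000001, 50)  # perturbed by one part in a billion

for t in (0, 10, 20, 30, 40, 50):
    print(f"step {t:2d}: {a[t]:.6f} vs {b[t]:.6f}   gap {abs(a[t] - b[t]):.2e}")
```

Whether the real world is chaotic in this precise sense is itself an assumption; the sketch shows only that some systems resist detailed prediction no matter how much intelligence is applied to them.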

Or maybe I’m just a sparrow who’s never seen an owl and can’t imagine how it would see so clearly in the dark.


Breviary

Tyler Cowen and James Pethokoukis against the A.I. pause.

Scott Alexander and Zvi Mowshowitz against Tyler Cowen.

Freddie deBoer on reality itself.

Elliot Kaufman and Damon Linker on Israel’s turmoil.

Dan Drezner argues with me about Vietnam and Iraq.

Can this novel save heterosexuality?


This Week in Decadence

“In the early 1990s, two Russian artists named Vitaly Komar and Alexander Melamid took the unusual step of hiring a market research firm. Their brief was simple. Understand what Americans desire most in a work of art ….

“Komar and Melamid then set about painting a piece that reflected the results. The pair repeated this process in a number of countries including Russia, China, France and Kenya.

“Each piece in the series, titled ‘People’s Choice,’ was intended to be a unique collaboration with the people of a different country and culture.

“But it didn’t quite go to plan.

“Describing the work in his book ‘Playing to the Gallery,’ the artist Grayson Perry said:

“Despite soliciting the opinions of over 11,000 people, from 11 different countries, each of the paintings looked almost exactly the same ….

“Thirty years after People’s Choice, it seems the landscapes which Komar and Melamid painted have become the landscapes in which we live ….

“The interiors of our homes, coffee shops and restaurants all look the same. The buildings where we live and work all look the same. The cars we drive, their colors and their logos all look the same. The way we look and the way we dress all looks the same. Our movies, books and video games all look the same. And the brands we buy, their adverts, identities and taglines all look the same.

“But it doesn’t end there. In the age of average, homogeneity can be found in an almost indefinite number of domains.

“The Instagram pictures we post, the tweets we read, the TV we watch, the app icons we click, the skylines we see, the websites we visit and the illustrations which adorn them all look the same. The list goes on, and on, and on.”

— “The Age of Average,” Alex Murrell (March 20)
