Newsletter | Got AGI?
There’s been a recent spate of commentaries arguing that we’re on the verge of artificial general intelligence (AGI, in the current jargon). New York Times tech reporter Kevin Roose wrote in his recent piece, “Powerful A.I. Is Coming. We’re Not Ready”:
“I believe that very soon — probably in 2026 or 2027, but possibly as soon as this year — one or more A.I. companies will claim they’ve created an artificial general intelligence, or A.G.I., which is usually defined as something like ‘a general-purpose A.I. system that can do almost all cognitive tasks a human can do.’”
Roose’s colleague Ezra Klein makes a similar assertion in a recent episode of his podcast, in which he interviews former Biden administration AI expert Ben Buchanan. Both Klein and Roose indicate that they believe these developments will happen – and happen soon – because they’re hearing it not just from people who would be likely to hype the technology, but from credible independent voices as well.
Whether they are right on the timing, or on the scope of these advances, or even on the question of whether human and machine intelligence can be compared in any straightforward way, seems somewhat beside the point. The AI that is already here has demonstrated plenty of power, and there’s little reason to doubt that it will get much, much better. Robotaxis – i.e., cars without human drivers – are now available for hire in four US cities, people are falling in love with AI chatbots, and half of Americans are now using LLMs like ChatGPT. Our own work on the Building H Index found AI being used to make fast food drive-thrus more efficient, to run algorithms that keep us longer on social media platforms, to eliminate awkward conversations with store cashiers and even to enable fast food delivery by robot.
We’ve seen plenty of prognostications about what the arrival of AGI might bring – from AI-run-amok doomsday scenarios to wildly optimistic (or dystopian, depending on your perspective) visions of curing all diseases and ushering in infinite longevity. Less extreme, but still highly discomfiting, are the concerns, discussed by Klein and Buchanan, about huge shifts in the labor market and a superpower race for military superiority.
We have no special insight that lets us make predictions with confidence, but we can say this much: the context into which a technology is introduced largely determines how that technology will be deployed. To the extent that a technology is a versatile tool, people will use it to pursue the imperatives that context creates. Ben Buchanan notes that AI is one of the first major technologies to have been developed outside government laboratories:
“It’s the private sector inventing it, to be sure. But the central government role [in creating previous technologies] gave the Department of Defense and the U.S. government an understanding of the technology that, by default, it does not have in A.I. It also gave the U.S. government a capacity to shape where that technology goes that, by default, we don’t have in A.I.”
The context into which AGI is being introduced is, for better or for worse, 21st-century American capitalism, which brings with it a set of imperatives: cost minimization, revenue maximization, aspirations of competitive dominance and, above all, a need for growth. Catherine Bracy, author of the recent book World Eaters: How Venture Capital Is Cannibalizing the Economy, speaks in a new interview with Politico to the influence that the venture capital model in particular has on how technologies are applied:
“Technology itself is valueless, and the thing that gives it its potential to do harm or to create opportunity is the economic structure in which it’s created.
“People should know that the way that investors are incentivizing their companies is to be the biggest they can possibly be, because that’s how they create the financial return that they have promised to investors. The playbook for doing that is a bulldozer effect. Speed is the goal, not efficiency. I think we see that with the way that Elon is moving — sledgehammer, not scalpel. That’s a strategy. It’s not a mistake. And all the fallout from that is a cost that is borne by the rest of us. So those companies skirt regulations. They exploit workers. They ship products that aren’t good enough or that, you know, have risks to the customer. They do rugpulls. It attracts grifters.
“I came to believe, by the end of my research, that we were missing out on a ton of great innovation and financial gain by ignoring companies that were only kind of naturally able to hit doubles or triples instead of grand slams. And that in areas like housing and other areas of the economy that I care a lot about, either they were ignoring the right solutions, or they were forcing some of those companies to shape-shift. And that was because the investors are chasing power law returns, I think, to the detriment of the economy.”
So it’s no surprise that we’re seeing the applications of AI that we saw in the Building H Index: fast food chains will pay handsomely to increase drive-thru throughput, social media platforms whose business models require attention will seek to keep our eyeballs on their feeds, and food delivery and ride-hailing services, for which drivers are a significant cost, will replace them with robots. (The societal loss that comes when people interact less with others as robotaxis take over is lamented on an excellent recent episode of the Uncanny Valley podcast.)
Advances in behavioral psychology and big data analytics made persuasive technology increasingly effective over the last decade. As data scientist Jeff Hammerbacher captured the era, “the best minds of my generation are thinking about how to make people click ads.” Those best minds are now being eclipsed by LLMs. Even a decade ago, Google, driven to meet the targets it had set for increased watch time on YouTube, turned its video recommendation algorithm over to its then-new deep learning technology, and user time on the site exploded – to the point that the vast majority of time spent watching videos came from the new algorithm’s recommendations. As good as persuasive technology has gotten at getting people to watch another video, buy another item or supersize it, it is getting better and will get better still. YouTube brought us a world where, in effect, any video content ever produced could be watched with the tap of a finger. Now it offers the ability to use generative AI to create and post new content, limited, in effect, only by your own imagination. The bottomless bowl just got infinitely deeper – if that’s a thing.

Moving beyond algorithms, there is the persuasive power of AI chatbots to consider – a power so strong that Google’s researchers have acknowledged the potential of AI chatbots to persuade people to commit suicide, and there is at least one wrongful death lawsuit around such a tragic event.
Years ago, Aza Raskin, one of the co-founders of the Center for Humane Technology, said to us (and we’re paraphrasing), “when you understand how truly vulnerable we all are as humans, it requires a new kind of ethics.” Extending this logic: when our power to persuade and manipulate grows dramatically, the responsibility to use that power ethically becomes paramount.
Rules that seek to limit the development of technology are rarely successful in today’s world – and the current AI arms race suggests that the pursuit of AGI will continue unabated. But we can come to some consensus about how that technology is applied and about which uses of it – once we recognize both its power and our vulnerability – ought to be circumscribed. Much of our economy is designed around persuading people to consume, and we accept that because the choice ultimately rests with the consumer. However, as the power to persuade exceeds the typical consumer’s ability to make sound choices, our faith in such self-determination might begin to seem quaint. Instead, we’ll need to put limits on the use of that power.
How might we prepare for this new age? Comments are open.
Read the full newsletter.