Photo by Frida Lannerström on Unsplash

Generative AI is a paper tiger with a real tiger behind it

We Were Afraid Of The Wrong Stuff

Earlier this month, James Marriott, a literary critic at The Times, published a piece in his newsletter about "one of the most important revolutions in modern history": a bloodless coup, perpetrated by smartphones and social media algorithms, which has transformed much of the world into a "post-literate society," undermining centuries of progress in the process.

Starting in the early 18th century, he writes, literacy was no longer limited to the landed and affluent, or (as in the Middle Ages) strongly associated with the clergy and monastics. Even the common peasantry could read, and read they did, so much so that it became something of a moral panic, described as a "fever," an "epidemic," a "craze," and a "madness," according to Marriott.

There’s something inherently special about a book — and it’s something that can’t really be substituted by any other form. Over the course of 100,000 words or so, an author can take an argument and dissect it, analyze it, and ultimately make a case to the reader. As the working classes began to imbibe texts on everything from politics and economics, to religion and philosophy, society began to transform. Parallel to the industrial revolution, we had an intellectual revolution.

The “most important revolution in modern history” I mentioned at the start of the piece? Marriott describes that as a counter-revolution. Book sales are down, few people are reading for pleasure, and in the developed world, literacy levels are declining or stagnating. This trend, he notes, really gathered pace when the smartphone came onto the scene.

Quoting Marriott:

“If the reading revolution represented the greatest transfer of knowledge to ordinary men and women in history, the screen revolution represents the greatest theft of knowledge from ordinary people in history.”

I agree with Marriott's hypothesis and his conclusions. Even if your smartphone makes your life manifestly easier, and it does mine, it also exacts a cost, whether in your time, your attention span, or simply by taking you away from things that would otherwise benefit you more (like, though not only, reading actual books).

What I find interesting about the smartphone is that, when it first became the kind of mass-market, consumer-friendly, multimedia device that we know and understand today, nobody said that it would be so bad for us.

Nobody Warned Us

I recently rewatched the 2007 iPhone launch keynote, and I don't recall Steve Jobs saying: "Yeah, it's an iPod, a phone, and an internet device — but also it'll absolutely take over your brain like those parasitic bug things from the first season of Star Trek: The Next Generation." And while the main smartphone companies have, in recent years, introduced features that can limit a person's screen time, those features are optional — and I'm not sure that many people even use them.

If you know, you know.

But, in fairness, it was a different time back then. Facebook was still a social network. There was no TikTok. Mobile data was expensive. It was hard to imagine what kind of beast smartphones would become.

The funny thing about tech is that the bad stuff is usually rear-loaded. You only ever find out about it long after a new innovation or niche becomes sufficiently mainstream. And that’s not because of any real conspiracy, but because people tend to be biased towards positive outcomes, and they tend to underplay the chance of anything bad happening.

But also, this stuff is hard to predict. At the risk of sounding excessively charitable, I don't think that Mark Zuckerberg started Facebook with the intention of fostering political polarization, or with a goal to foment a genocide in Myanmar, or to create one of the world's most sophisticated surveillance systems.

That stuff all happened gradually.

There are bad things with pretty much every technology — especially computer technologies — that only reveal themselves after they’ve reached a point of maturity. Some of those bad things are discovered by bad people — and the creators of the technology didn’t anticipate them because it’s hard to put yourself in the mind of an absolute bastard.

What I find interesting about generative AI is that it's the first technology where the bad effects — I mean, the really, really bad effects — were front-loaded. From the very beginning, we were told that AI could take people's jobs on a massive scale, or even pose an existential risk to humanity.

I've spent much of this weekend racking my brain for examples of a technology where either the creators or those commercializing it said upfront that using it might have dire societal consequences, or that it posed an existential risk to humanity. Eventually, I found one.

The nuclear bomb.

It's funny. It took roughly three years to go from the start of the Manhattan Project to the first use of a nuclear weapon in war, over Hiroshima. Next month marks the third anniversary of the launch of ChatGPT, and the dire consequences we were promised could result from AI development — particularly when it comes to employment — haven't emerged.

Progress, similarly, seems to have ground to a halt, and the prospect of an AGI apocalypse seems incredibly distant.

Was it all a lie? Were the dire predictions not actually predictions, but simply a component of a marketing campaign based on the contradictory vibes of fear and optimism?

Obviously, if you've read this newsletter, you know where I stand. Yes. I don't believe that AI will, at least in the near future, take anyone's jobs at an observable scale. I don't believe that AGI (or ASI) is anywhere near fruition, in part because the technology that powers today's generative AI systems isn't capable of meeting the requirements of AGI or ASI.

I believe that those dire predictions mentioned earlier were, in fact, a marketing tactic designed to make something relatively mundane seem bigger, more complicated, and more dangerous than it really was — and to justify future investments in the handful of insanely capital-intensive companies that develop the models behind generative AI.

It's this marketing campaign that, I believe, distinguishes generative AI from any other consumer or computer technology that preceded it. It is the quintessential paper tiger — the GPU-powered equivalent of Scrappy Doo yelling "let me at 'em."

At the same time, I also recognize that generative AI has negatively impacted people in a whole bunch of ways — from civil discourse to health to, yes, their employment prospects. The key point is that those impacts are, as with every other technology, things the originators of generative AI didn't predict.

By Matthew Hughes / What We Lost Substack

What We Lost is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.

Note from Matt: Membership costs $8 per month, or $80 annually. If you want to get in touch, feel free to drop me an email or message me on BlueSky.

(Source: whatwelost.substack.com; October 8, 2025; https://tinyurl.com/2d9qojsx)
