When people fear AI, what are they really afraid of?

Covering culture, tech, luxury, travel, and media in 5 minutes, 3x a week


People in Paris queued for hours to visit a Shein pop-up (@LouisPisano / Twitter)

The no-deposit, 100% mortgage loan. They're back in Britain for the first time since 2008

MTV News. On the air since the '80s, MTV's news production department was shut down by corporate parent Paramount

I still want to kill The Weeknd. Abel Tesfaye may be done making music

These airlines have the most luxurious economy seats

A Shein pop-up in Paris drew thousands

With its A380s returning to service, Etihad is also bringing back one of the world's most impressive first-class air travel products, The Residence


Fresh off of a debut at Coachella, the Welsh brothers known as Overmono are poised to become the UK's next big dance music act (The Face)

Wendy's and Google are testing drive-thru AI

Spotify removed thousands of AI-generated songs

Brace yourself for Sake Pét-Nat

Why does America have so many thousands of different banks?

A virus is killing a massive number of wild birds. Scientists have never seen anything like it

Overmono are the UK's next massive dance duo

Hyundai's bonkers N Vision 74 concept car will go into production. The retro-futuristic car is one of the more head-turning concepts in recent memory

Elite podcasts are struggling to sell ads, while the podcast masses thrive

Ryanair ordered 300(!!) new Boeing 737 Max-10s, in a deal worth £32b

ChatGPT's machine learning is (partially) powered by human contractors getting paid US$15/hour

2024 may bring the largest iPhone ever. (The newsletter's writer owns shares of Apple.)

In his first album in 7 years, the UK producer SBTRKT embraces chaos: 10 guests, 22 tracks, countless genres

Lab-grown diamond sales are surging. Why?

London's Canary Wharf financial district is getting its own private wind farm

I can't make products just for 41-year-old tech founders. Airbnb's CEO is going back to basics

Google passkeys are a no-brainer. You've turned them on, right?

These 7 cities and towns are the real South of France


When people fear AI, what are they really afraid of?

The humans in the 2008 Pixar film Wall-E have all their lifestyle needs served by robots. Is that where we're headed? (Pixar)


Reading what smart people are writing about AI has reminded me just how valuable a good, old-fashioned analogy can be, as we all try to make sense of the emotions and opinions (aka panic and fear) swirling around the topic.


I recently read a piece that was super bearish on faceless chatbots as a user interface. See, a lot of people think they are the future of the web, and that sites of every kind will actually disappear in the years to come, replaced by a single blinking cursor and a rectangular text box, ready to field whatever you can throw at it.

Some future, say the skeptics. Conversing with chatbots, they argue, is like talking to a faceless sorcerer:

When I go up the mountain to ask the ChatGPT oracle a question, I am met with a blank face. What does this oracle know? How should I ask my question? And when it responds, it is endlessly confident. I can't tell whether or not it actually understands my question or where this information came from.

Amelia Wattenberger, a UI research engineer, on why chatbots are not the future

Not knowing a lot of this info degrades our overall experience, no matter how cool it feels to ask a computer something in (more or less) natural language.


Analogies work because they use simple concepts everyone knows to help explain complex ones many don't.

This one analogy for thinking about AI is so crisp, easy to grasp, and paradigm-expanding that I can't believe it hasn't bubbled up in more articles I've read and conversations I've had.

Most of us think about generative AI in comparison to humans. This makes logical sense. Its language is lifelike. It can rhyme. It can also hallucinate and lie.

It can fool Google employees into thinking that its perceptive, awake, even alive. It can do that because we are consciously or subconsciously comparing AI to ourselves.

This is understandable, by the way. (It has the word intelligence in its name, after all.) It is also wrong. We shouldn't think about AI in the context of humans.

Instead, we should think about AI the same way we think about the difference between machines and tools.


Some quick definitions to start: A machine receives energy and then does something with it, all by itself. Once a human presses a button or flips a switch to turn on the juice, the machine just goes. On its own.

A tool, by lovely contrast, is totally and completely inert. Useless, without the hand of the operator. The human. Us.


That, right there, is the question we should ask ourselves about generative AI.

Is it a machine or a tool?

Is AI (a) going to do all the work, leaving us like the overfed cartoon humans of Wall-E, as the UI researcher Amelia Wattenberger vividly suggested in a recent piece? Or is generative AI (b) going to help us create things we could never make on our own, like the hammer and chisel in Michelangelo's hands?

UH, (B)?

Interesting food for thought, right?

Of course, AI is both. And that is one of the most complicated and confusing things about reading and writing about generative AI: the absolute variance between what it can do today, what it theoretically could do in the future, and what our deepest fears tell us it might do.

Casey Newton, of the excellent Platformer newsletter, writes about the heavy shadow looming in the background of all AI coverage, and how that shadow makes the topic difficult for him to cover.

His point is that the discrepancy between the best AI case and the worst case is paralyzing our response in the here and now.


The scary shadow is the machine theory of AI. What I mean is, if you see generative AI as a machine, one that is able to do stuff on its own, with nothing more than that first, single push from a human finger, then of course it's going to seem super spooky.

If, on the other hand, you see generative AI as a tool, something inert until human hands and creative impulse put it into motion, then what we're really fearing is not AI but rather the bad things humans will do with it.


That notion is one of the reasons why the so-called godfather of AI quit Google last week. He has become disturbed by his own life's work.

tl;dr Google's former AI maestro is nervous about what ill-intentioned humans might do with the technology, warning that chatbots could be exploited by bad actors, and Apple's co-founder agrees.

But look around. Bad actors are all around us. They always have been and, unfortunately, always will be.

A lot of the panic and doom about this tech is really rooted in what people will do with it, not in the tech itself.


Given that, it could be that the solution to all of the fears swirling around generative AI (people's concerns about incorrect info, job losses, and the safety of the nuclear codes) is not going to come from regulating or altering the tech itself.

Rather, the answer to our concerns will come from how we humans choose to implement that tech.

Machines or tools, people?


Why I'm having trouble covering AI

Why chatbots are not the future

Most AI doomers have never trained an ML model in their lives

Godfather of AI Geoffrey Hinton quits Google and warns over dangers of misinformation

Apple cofounder Steve Wozniak says a human needs to be held responsible for AI creations to stop bad actors tricking the public

Written by Jon Kallus. Any feedback? Simply reply. Like this? Share it!
