Covering culture, tech, luxury, travel, and media in 5 minutes, 3x a week
People in Paris queued for hours to visit a Shein pop up (@LouisPisano / Twitter)
🔼 No-deposit, 100% mortgages. They’re back in Britain for the first time since 2008 »»
🔽 MTV News. On the air since the ‘80s, MTV’s news production department was shut down by corporate parent Paramount »»
💬 “I still want to kill The Weeknd.” Abel Tesfaye may be done making music »»
🛫 These airlines have the most luxurious economy seats »»
👗 A Shein pop up in Paris drew thousands »»
💎 With its A380s returning to service, Etihad is also bringing back one of the world’s most impressive first class air travel products, “The Residence” »»
Fresh off of a debut at Coachella, the Welsh brothers known as Overmono are poised to become the UK’s next big dance music act (The Face)
Wendy’s and Google are testing drive-thru AI »»
Spotify removed thousands of AI-generated songs »»
Brace yourself for Sake Pét-Nat »»
Why does America have so many thousands of different banks? »»
A virus is killing a massive number of wild birds. Scientists have never seen anything like it »»
Overmono are the UK’s next massive dance duo »»
Hyundai’s “bonkers” N Vision 74 concept car will go into production. The retro futuristic car is “one of the more head-turning concepts in recent memory” »»
Elite podcasts are struggling to sell ads, while the podcast masses thrive »»
Ryanair ordered 300(!!) new Boeing 737 Max-10s, in a deal worth £32b »»
ChatGPT’s machine learning is (partially) powered by human contractors getting paid US$15/hour »»
2024 may bring the largest iPhone ever »» The newsletter's writer owns shares of Apple
In his first album in 7 years, the UK producer SBTRKT embraces chaos: 10 guests, 22 tracks, countless genres »»
Lab-grown diamond sales are surging. Why? »»
London’s Canary Wharf financial district is getting its own private wind farm »»
“I can’t make products just for 41-year-old tech founders.” Airbnb’s CEO is going back to basics »»
Google passkeys are a no-brainer. You’ve turned them on, right? »»
These 7 cities and towns are the real South of France »»
When people fear AI, what are they really afraid of?
The humans in the 2008 Pixar film “Wall-E” have all their lifestyle needs served by robots. Is that where we’re headed? (Pixar)
Reading what smart people are writing about AI has reminded me just how valuable a good, old-fashioned analogy can be, as we all try to make sense of the emotions and opinions (aka panic and fear) swirling around the topic.
I recently read a piece that was super bearish on “faceless” chatbots as a user interface. See, a lot of people think they are the future of the web, and that sites of every kind will actually disappear in the years to come, replaced by a single blinking cursor and a rectangular text box, ready to field whatever you can throw at it.
Some future, say some. Conversing with chatbots is like talking to a faceless oracle:
“When I go up the mountain to ask the ChatGPT oracle a question, I am met with a blank face. What does this oracle know? How should I ask my question? And when it responds, it is endlessly confident. I can't tell whether or not it actually understands my question or where this information came from.”
Not knowing any of this degrades our overall experience, no matter how cool it feels to ask a computer something in (more or less) natural language.
THE WRONG COMPARISON
Analogies work because they use simple concepts everyone knows to help explain complex ones many don’t.
This one analogy for thinking about AI is so crisp, easy to grasp, and paradigm-expanding, that I can’t believe it hasn’t bubbled up in more articles I’ve read and conversations I’ve had.
Most of us think about generative AI in comparison to humans. This makes logical sense. Its language is lifelike. It can rhyme. It can also hallucinate and lie.
It can fool Google employees into thinking that it’s perceptive, awake, even alive. It can do that because we are —consciously or subconsciously— comparing AI to ourselves.
This is understandable, by the way. (It has the word “intelligence” in its name, after all.) It is also wrong. We shouldn’t think about AI in the context of humans.
Instead, we should think about AI in the same way we think about the difference between machines and tools.
Some quick definitions to start: A “machine” receives energy and then does something with it, all by itself. Once a human presses a button or flips a switch to turn on the juice, the machine just goes. On its own.
A “tool,” by lovely contrast, is totally and completely inert. Useless, without the hand of the operator. The human. Us.
That, right there, 👆 is the question we should ask ourselves about generative AI.
Is it a machine —or a tool?
Is AI (a) going to do all the work, leaving us like the overfed cartoon humans in “Wall-E”, as the UI researcher Amelia Wattenberger vividly suggested in a recent piece? Or, is generative AI (b) going to help us create things we could never do on our own, like the hammer and chisel in Michelangelo's hands?
Interesting food for thought, right?
Of course, AI is both. And that is one of the most complicated and confusing things about reading and writing about generative AI: The absolute variance between what it can do today, what it theoretically could do in the future, and what our deepest fears tell us it might do.
Casey Newton, of the excellent Platformer newsletter, writes about “the heavy shadow that all AI coverage has looming in the background,” and how that shadow makes it difficult for him to cover AI.
His point: the discrepancy between the best AI case and the worst case is paralyzing our response in the here and now.
MACHINES 1, TOOLS 0
The scary shadow is the “machine theory” of AI. What I mean is, if you see generative AI as a machine, one that is able to do stuff on its own, with nothing more than that first, single push from a human finger, then of course it’s going to seem super spooky.
If, on the other hand, you see generative AI as a tool —something inert until human hands and creative impulse put it into motion— then what we're really fearing is not AI but rather the bad things humans will do with it.
THANK YOU, GODFATHER
That notion is one of the reasons why the so-called “godfather of AI” quit his job at Google last week. The man has become disturbed by his own life's work.
tl;dr— Google’s former AI maestro is nervous about what ill-intentioned humans might do with it, warning that chatbots could be exploited by “bad actors,” and Apple’s co-founder agrees.
But look around. Bad actors are all around us. They always have been and, unfortunately, always will be.
A lot of the panic and doom about this tech is really rooted in what people will do with it, and not “it” itself.
OUR CALL (FOR NOW)
Given that, it could be that the solution to all of the fears swirling around generative AI —people’s concerns about incorrect info, job losses, and the safety of the nuclear codes— is not going to come from regulating or altering the tech itself.
Rather, the answer to our concerns will come from how we humans choose to implement that tech.
Machines or tools, people?
Why I'm having trouble covering AI »»
Why chatbots are not the future »»
Most AI doomers have never trained an ML model in their lives »»
‘Godfather of AI’ Geoffrey Hinton quits Google and warns over dangers of misinformation »»
Apple cofounder Steve Wozniak says a human needs to be held responsible for AI creations to stop ‘bad actors’ tricking the public »» 🔐
Written by Jon Kallus. Any feedback? Simply reply. Like this? Share it!👇