RIP CHATGPT (AGAIN)
Google's forthcoming Gemini AI system is expected to blow ChatGPT out of the water.

Culture, tech, luxury, travel, and media news in 5 min. or less


Set on a 60-acre estate, surrounded by more than 3,000 acres of rolling parkland, Estelle Manor is an opulent new (old) addition to Britain's hospitality landscape (Mark Anthony Fox / Wallpaper)
VinFast. The Vietnamese EV maker is now worth more than Ford and GM after its Nasdaq debut »»
Meal delivery. More American diners are collecting their food orders themselves. (Many restaurants charge more for delivered food, to offset the commission they owe delivery apps for every incoming order) »»
"And he's like, 'Bree, stop. This is your product.' He's like: 'Kris Jenner the $#!% out of him.'" Hear Breanna Price and her viral TikTok rapper husband Connor talk about how they went from packing boxes in a warehouse to making more than US$200K per month on Spotify »»
Peek inside Estelle Manor, a new and ultra-opulent 108-key (room) rural escape/luxury lifestyle destination in the English countryside, brought to you by the creative/investor Eiesha Bharti Pasricha and her husband, the founder of Ennismore hotels »»
One of France's coolest tailors is now offering its made-to-order suits online »»
Brooklyn Coachworks builds incredible custom-made Defenders »»


It's nearly time for Aminé's New Balance 610 "The Mooz" to touch down on shelves (Hypebeast)
Tipping point: For the first time ever, broadcast and cable programming made up less than 50% of US TV viewing »»
3 questions determine 99% of the happiness in your life »»
Taco Bell's experimenting with chicken strips (again) »»
More evidence that "extended stay" properties are the future of the hotel industry »»
Thanks to AI, people can now figure out what you're typing just by listening to the sound of your keystrokes. Repeat: what you're typing can be revealed by nothing more than the sound of your keyboard's keys being pressed »»
This might be the best time to become a creator on X »»
Rapper Aminé shared a release date for his New Balance 610s »»
The LED light revolution has only just begun »»
How to tell if shoes are well-made »»
The world's 20 most beautiful bookshops »»
ArcelorMittal, the world's second-largest steelmaker, is thinking of buying U.S. Steel »» Related: a purchase would put U.S. Steel's stock market symbol, X, in play »»
England have reached the Women's World Cup final for the first time, after beating co-hosts Australia. The Lionesses will face Spain »»
Engineers in Korea are developing a humanoid robot that can fly planes, without needing to modify the cockpit »»
Unexpected jobs that can earn you six figures »»
"These jeans aren't dirty, they're Diesel!" »»

Google's forthcoming Gemini AI system is expected to blow ChatGPT out of the water. But don't forget how Google makes its money.

Google's forthcoming "multi-modal" AI, Gemini, is expected to change everything
SENSES ROCK
We humans sometimes take for granted how amazing our senses are. Seriously. Our brains effortlessly register information whether it arrives by sight, sound, touch, taste, or smell.
For AI, not so much. Most AI systems can process only one or two types of data at a time (for instance, text and still images, but not video).
Yeah, those days are numbered.
MEET GEMINI
Google is hard at work on a ChatGPT-ender it calls Gemini. Google's new AI system has reportedly been trained on way, way more data than competing models like GPT-4 have, including YouTube videos. This massive training set is expected to make Gemini far more capable than any other AI system we've seen to date.
Gemini is also expected to be one of the first publicly available multi-modal AI systems.
Awesome! Um, wait, what's "multi-modal" AI?
ALL TOGETHER NOW
Multi-modal AI is artificial intelligence that can handle multiple kinds of input and output at the same time: text, speech, images, video, or even sensory data (from electronic sensors).
It's exciting because, by combining these different "modes" of information, multi-modal AI systems get a far more complete picture of what's going on in the real world and a better grasp of how things connect, which in turn helps the AI respond to us in ever more natural, ever more clever ways.
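To make the "more complete picture" idea concrete, here's a toy sketch in plain Python. Everything in it (the scores, the averaging rule) is invented for illustration, not how any real system works: a sentiment call that's ambiguous from text alone, but resolves once a second modality (voice tone) is fused in.

```python
def classify_sentiment(text_score, tone_score=None):
    """Toy 'fusion': average whichever modality scores are available.
    Scores run from -1.0 (negative) to +1.0 (positive)."""
    scores = [s for s in (text_score, tone_score) if s is not None]
    return "positive" if sum(scores) / len(scores) > 0 else "negative"

# Text alone: "Great, another Monday." scores mildly positive (+0.2)...
print(classify_sentiment(0.2))        # -> positive
# ...but fuse in a flat, sarcastic tone of voice (-0.8) and it flips.
print(classify_sentiment(0.2, -0.8))  # -> negative
```

Same input text, opposite answer, purely because a second mode of information was available.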
EASIER SAID THAN DONE
Sidebar: Teaching AI systems to handle multiple data types at once (text, images, speech, video and more) is hard.
First, you need tons of labeled data. (Photos have to be captioned; videos and their matching audio have to be tagged, too.)
Then, you need some serious engineering talent to build advanced neural nets (fancy AI-speak for computers that process data in a way loosely modeled on how our brains work), and those neural nets need separate sections, one for each data type.
You also need to not only connect those different sections to one another, but also give them a way to talk to each other.
The hardest part? Training the AI to identify relationships across the data types. Like teaching the multi-modal AI to associate the sound of a dog barking with a photo of a dog.
In other words, all you need is endless amounts of labeled and aligned data, special neural architectures, cross-modal training techniques, clever simulations, and a ton of human assistance when the AI doesn't know what it's looking at.
^^ Thatâs what Googleâs been doing.
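That dog-bark-to-dog-photo trick is usually done by encoding every modality into one shared vector space, so that paired examples land close together (the idea behind contrastive systems like CLIP). Here's a minimal sketch in plain Python; the "encoders" are hand-written lookup tables standing in for real neural nets, and every filename and number is invented:

```python
import math

# Toy "encoders": in a real system these are trained neural nets, one per
# modality. Here each item maps to a vector in a shared 3-dim space whose
# axes loosely mean "dog-ness", "car-ness", "bird-ness".
def encode_audio(clip):
    return {
        "bark.wav":   [0.9, 0.1, 0.0],
        "engine.wav": [0.0, 0.9, 0.1],
        "chirp.wav":  [0.1, 0.0, 0.9],
    }[clip]

def encode_image(photo):
    return {
        "dog.jpg":  [1.0, 0.0, 0.1],
        "car.jpg":  [0.1, 1.0, 0.0],
        "bird.jpg": [0.0, 0.1, 1.0],
    }[photo]

def cosine(a, b):
    # Cosine similarity: how close two vectors point in the shared space.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def match_sound_to_photo(clip, photos):
    """Cross-modal retrieval: pick the photo whose embedding sits
    closest to the audio clip's embedding."""
    v = encode_audio(clip)
    return max(photos, key=lambda p: cosine(v, encode_image(p)))

photos = ["dog.jpg", "car.jpg", "bird.jpg"]
print(match_sound_to_photo("bark.wav", photos))  # -> dog.jpg
```

The training step (which this sketch skips entirely) is what produces those embeddings: nudge paired sound/photo vectors together, push unpaired ones apart, millions of times over.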
IT'S GEMINI'S WORLD. WE JUST LIVE IN IT
Overall, Gemini is Google's latest big bet. Gemini's expected to enhance many of the firm's services, and to set new benchmarks for AI itself.
.
.
.
But, um, you all know how Google actually makes money, right?
THE DEAL!
Yep, more than 80% of Google's revenue comes from selling ads, so it's probably worth considering how advertising will look in Google's forthcoming multi-modal AI world.
First, the obvious: ads are likely to become more immersive and even more hyper-personalized.
More immersive: expect ads to be seamlessly incorporated into the applications, devices and even daily environments that Gemini powers.
And more hyper-personalized:
Look for ad networks like Meta, Apple and Google to leverage biometric (nice new Google Pixel Watch you got there!), geographic (Location Settings: on) and behavioral (Screen Time: enabled) data for more precise targeting, and even messaging tailored to your mental/emotional states.
Expect ads to get way more conversational, too.
I mean, RIP "copywriter" as a job. Look for brands and products to automatically anthropomorphize themselves via "intelligent" avatars capable of natural dialogue and adapted to each and every one of us.
These avatars could even get predictive: anticipating our needs and presenting offers before we know we want them.
And those are just the obvious examples. Expect Meta and Google to be hard at work thinking up all sorts of ways they can monetize Gemini and Llama 2 (Metaâs AI) via ads.
OK, WHAT'S THE POINT HERE?
Smart people know that multi-modal AI is about to change the world even more than ChatGPT already has.
Now you know that advertising is going to pay for it.
More:
What to expect from Google's Gemini? »»
Written by Jon Kallus. Any feedback? Simply reply.