
🤖 RIP CHATGPT (AGAIN)

Google's Gemini AI system is expected to blow ChatGPT out of the water

Fv

Culture, tech, luxury, travel, and media news in 5 min. or less

Got 60 seconds?

Set on a 60-acre estate, surrounded by more than 3,000 acres of rolling parkland, Estelle Manor is an opulent new (old) addition to Britain's hospitality landscape (Mark Anthony Fox / Wallpaper)

🔼 VinFast. The Vietnamese EV maker is now worth more than Ford and GM after its Nasdaq debut »»

🔽 Meal delivery. More American diners are collecting their food orders themselves. (Many restaurants charge more for delivered food, to offset the commission they owe delivery apps for every incoming order) »»

💬 "And he's like, 'Bree, stop. This is your product.' He's like: 'Kris Jenner the $#!% out of him.'" Hear Breanna Price and her viral TikTok rapper husband Connor talk about how they went from packing boxes in a warehouse to making more than US$200K per month on Spotify »»

🛫 Peek inside Estelle Manor, a new and ultra-opulent 108-key (room) rural escape/luxury lifestyle destination in the English countryside, brought to you by the creative/investor Eiesha Bharti Pasricha and her husband, the founder of Ennismore hotels »»

👗 One of France's coolest tailors is now offering its made-to-order suits online »»

💎 Brooklyn Coachworks builds incredible custom-made Defenders »»

Got 2 minutes?

It's nearly time for Aminé's New Balance 610 "The Mooz" to touch down on shelves (Hypebeast)

Tipping point: For the first time ever, broadcast and cable programming made up less than 50% of US TV viewing »»

3 questions determine 99% of the happiness in your life »»

Taco Bell's experimenting with chicken strips (again) »»

More evidence that "extended stay" properties are the future of the hotel industry »»

Thanks to AI, people can now figure out what you're typing by just listening to the sound of your keystrokes. Repeat: what you're typing can be revealed by nothing more than the sound of your keyboard's keys getting pressed »»

This might be the best time to become a creator on X »»

Rapper Aminé shared a release date for his New Balance 610s »»

The LED light revolution has only just begun »»

How to tell if shoes are well-made »»

The world's 20 most beautiful bookshops »»

ArcelorMittal, the world's second-largest steelmaker, is thinking of buying U.S. Steel »» Related: a purchase would put U.S. Steel's stock market symbol, X, in play »»

England have reached the Women's World Cup final for the first time, after beating co-hosts Australia. The Lionesses will face Spain »»

Engineers in Korea are developing a humanoid robot that can fly planes, without needing to modify the cockpit »»

Unexpected jobs that can earn you six figures »»

"These jeans aren't dirty, they're Diesel!" »»

Got 5 minutes?

Google's forthcoming Gemini AI system is expected to blow ChatGPT out of the water. But don't forget how Google makes its money

Google's forthcoming "multi-modal" AI Gemini is expected to change everything

SENSES ROCK

We humans sometimes take for granted how amazing our senses are. Seriously. Our brains effortlessly register information whether it arrives by sight, sound, touch, taste, or smell.

For AI, not so much. Most AI systems can process only one or two types of data at a time (for instance, text and still images, but not video).

Yeah, those days are numbered.

MEET GEMINI

Google is hard at work on a ChatGPT-ender it calls Gemini. Google's new AI system has reportedly been trained on way, way more data than competing models like GPT-4, including YouTube videos. This massive training set is expected to make Gemini far more capable than any other AI system we've seen to date.

Gemini is also expected to be one of the first publicly available multi-modal AI systems.

Awesome! Um, wait, what's "multi-modal" AI?

ALL TOGETHER NOW

Multi-modal AI is when artificial intelligence can handle multiple kinds of input and output at the same time, whether it's text, speech, images, video, or even readings from electronic sensors.

It's exciting because, by combining these different "modes" of information, multi-modal AI systems can build a far more complete picture of what's going on in the real world and better understand the connections between things, which in turn helps the AI respond to us in ever more natural, and ever more clever, ways.

EASIER SAID THAN DONE

Sidebar: Teaching AI systems to handle multiple data types at once (text, images, speech, video, and more) is hard.

First, you need tons of labeled data. (Photos have to be captioned; videos and their matching audio have to be tagged too.)

Then, you need some serious engineering talent to build advanced neural nets (fancy AI-speak for software models that process data in a way loosely inspired by how our brains work), and those neural nets need different sections, one for each data type.

You also need to not only connect those different sections to one another, but also give them a way to talk to each other.

The hardest part? Training the AI to identify relationships across data types, like teaching a multi-modal AI to associate the sound of a dog barking with a photo of a dog.
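That dog-bark example can be sketched in a few lines of plain Python. This is a toy illustration only, not Gemini's actual design (which Google hasn't published): each modality gets its own made-up "encoder" that projects its features into one shared embedding space, and a well-trained multi-modal model would place matching concepts close together in that space.

```python
# Toy cross-modal alignment sketch (hypothetical, not Gemini's real architecture):
# separate encoders per modality, projecting into one shared 2-D embedding space,
# where we then compare embeddings with cosine similarity.
import math

def encode_audio(features):
    # Hypothetical "audio encoder": a fixed linear projection into the shared space.
    w = [[0.9, 0.1], [0.1, 0.9]]
    return [sum(w[i][j] * features[j] for j in range(2)) for i in range(2)]

def encode_image(features):
    # Hypothetical "image encoder": a different projection into the SAME space.
    w = [[1.0, 0.0], [0.0, 1.0]]
    return [sum(w[i][j] * features[j] for j in range(2)) for i in range(2)]

def cosine(a, b):
    # Cosine similarity: 1.0 means the embeddings point the same way.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Pretend features: "dog-ness" in the first slot, "car-ness" in the second.
bark_audio = encode_audio([1.0, 0.0])
dog_photo  = encode_image([1.0, 0.0])
car_photo  = encode_image([0.0, 1.0])

# Cross-modal training is what pushes the matching pair to score higher.
print(cosine(bark_audio, dog_photo) > cosine(bark_audio, car_photo))  # True
```

In a real system the encoders are huge neural networks and the projections are learned from millions of aligned pairs, but the shape of the problem (separate towers, one shared space) is the same.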

In other words, all you need is endless amounts of labeled and aligned data, special AI neural architecture, cross-modal training techniques, clever simulations, and a ton of human assistance when the AI doesn't know what it's looking at.

^^ That's what Google's been doing.

IT'S GEMINI'S WORLD. WE JUST LIVE IN IT

Overall, Gemini is Google's latest big bet. Gemini's expected to enhance many of the firm's services, and to set new benchmarks for AI itself.

.

.

.

But, um, you all know how Google actually makes money, right?

THE DEAL!

Yep, more than 80% of Google's revenue comes from selling ads, so it's probably worth considering how advertising will look in Google's forthcoming multi-modal AI world.

First, the obvious: ads are likely to become more integrated and even more hyper-personalized.

More immersive: expect ads to be seamlessly incorporated into applications, devices and even daily environments that are powered by Gemini.

At the same time, expect ads to become even more hyper-personalized.

Look for ad networks like Meta, Apple and Google to leverage biometric (nice new Google Pixel watch you got there!), geographic (Location Settings: on) and behavioral (Screen Time: enabled) data for more precise targeting, and even messaging tailored to your mental/emotional states.

Expect ads to get way more conversational, too.

I mean, RIP "copywriter" as a job. Look for brands and products to automatically anthropomorphize themselves via "intelligent" avatars, ones capable of natural dialogue, and adapted to each and every one of us.

These avatars could even get predictive, anticipating our needs, and even presenting offers contextually before we know we want them.

And those are just the obvious examples. Expect Meta and Google to be hard at work thinking up all sorts of ways they can monetize Gemini and Llama 2 (Meta's AI) via ads.

OK, WHAT'S THE POINT HERE?

Smart people know that multi-modal AI is about to change the world even more than ChatGPT already has.

Now you know that advertising is going to pay for it.

More:

What to expect from Google's Gemini? »»

Written by Jon Kallus. Any feedback? Simply reply.
