
Netflix Adds ChatGPT-Powered AI to Stop You From Scrolling Forever

In a bold move to tackle endless scrolling, one of streaming’s biggest frustrations, Netflix just unveiled a major redesign of its TV and mobile apps featuring a ChatGPT-powered AI chatbot and TikTok-style video reels.

You’ll soon be able to ask Netflix in plain language what you’re in the mood for, such as “funny and fast-paced” or “dark thrillers with strong female leads”, and get instant, tailored recommendations.

Netflix is partnering with OpenAI to power this feature, part of a broader overhaul aimed at making content discovery faster, more intuitive, and (finally) less painful.

What’s changing

Conversational AI Search: Powered by OpenAI, this new tool lets you type or speak what you want to watch like you’re chatting with a friend.

TikTok-style Reels: Vertical, swipeable video clips on mobile will let you preview shows and movies. Like it? Tap to watch, save, or share.

Smarter Design: Netflix is simplifying navigation, boosting real-time recommendations, and surfacing “My List” content faster.

Netflix says it’s used AI for years to personalize artwork and recommendations. Now it’s going deeper.

“Generative AI allows us to take this a step further,” said CTO Elizabeth Stone. “It’s great for our members and for the creators we work with.”

Chief Product Officer Eunice Kim added that this is Netflix’s “biggest leap forward” in homepage design in over a decade.

Rollout details

The updates are launching in beta on iOS first, with a broader release expected in the coming months. Users must opt in to test the AI-powered search.

Netflix is quietly sunsetting its last two interactive titles, “Black Mirror: Bandersnatch” and “Unbreakable Kimmy Schmidt: Kimmy vs. the Reverend”, as it refocuses on AI-driven discovery. Meanwhile, competitors like Amazon are already testing generative AI in Prime Video and Alexa.

This could finally fix one of streaming’s most annoying problems and signal a new wave of AI-infused entertainment experiences.

Still wondering what to watch? Soon, you can just ask.


Murder Victim Speaks from the Grave in Courtroom Through AI

Chris Pelkey was shot and killed in a road rage incident. At his killer’s sentencing, he forgave the man via AI.

In a historic first for Arizona, and possibly the U.S., artificial intelligence was used in court to let a murder victim deliver his own victim impact statement.

What happened

Pelkey, a 37-year-old Army veteran, was gunned down at a red light in 2021. This month, a realistic AI version of him appeared in court to address his killer, Gabriel Horcasitas.

“In another life, we probably could’ve been friends,” said AI Pelkey in the video. “I believe in forgiveness, and a God who forgives.”

Pelkey’s family recreated him using AI trained on personal videos, pictures, and voice recordings. His sister, Stacey Wales, wrote the statement he “delivered.”

“I have to let him speak,” she told AZFamily. “Everyone who knew him said it captured his spirit.”

This marks the first known use of AI for a victim impact statement in Arizona, and possibly the country, raising urgent questions about ethics and authenticity in the courtroom.

Judge Todd Lang praised the effort, saying it reflected genuine forgiveness. He sentenced Horcasitas to 10.5 years in prison, exceeding the state’s request.

The legal gray area

It’s unclear whether the family needed special permission to show the AI video. Experts say courts will now need to grapple with how such tech fits into due process.

“The value outweighed the prejudicial effect in this case,” said Gary Marchant, a law professor at Arizona State. “But how do you draw the line in future cases?”

Arizona’s courts are already experimenting with AI, using it, for example, to summarize Supreme Court rulings. Now the same technology is entering emotional, high-stakes proceedings.

The U.S. Judicial Conference is reviewing AI use in trials, aiming to regulate how AI-generated evidence is evaluated.

AI gave a murder victim a voice and gave the legal system a glimpse into its own future. Now the question is: should it become standard, or stay a rare exception?

Would you trust AI to speak for someone you loved?


China Unveils World’s First AI Hospital: 14 Virtual Doctors Ready to Treat Thousands Daily

China has unveiled the world’s first fully AI-powered hospital, marking a radical shift in the future of healthcare.

Developed by Tsinghua University in Beijing, the “Agent Hospital” features 14 AI doctors and 4 AI nurses that can diagnose, treat, and manage up to 3,000 patients per day, without any human staff.

  • Faster, smarter care: What would take human doctors three years, the AI doctors can get through in a single day.
  • High-IQ bots: These AI agents achieved a 93.06% score on US Medical Licensing Exam questions.
  • Training without risk: The virtual hospital lets medical students practice in a fully simulated, no-risk environment.

How it works

The hospital uses multimodal large language models (MLLMs) to simulate real-time interactions with patients, handle diagnoses, prescribe treatments, and monitor disease progression, all digitally. 

It also includes predictive capabilities that can simulate how diseases spread, potentially helping officials prepare for future pandemics.
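
For the technically curious, here is a minimal Python sketch of the kind of agent-to-agent consultation loop such a simulation implies: an LLM-backed doctor agent interviews a simulated patient agent, then produces a working diagnosis. The query_llm stub, the Agent class, and every prompt below are illustrative assumptions, not Tsinghua’s actual Agent Hospital code.

```python
# Minimal, hypothetical sketch of an LLM-agent consultation loop.
# query_llm(), Agent, and all prompts are illustrative assumptions only;
# they do not reflect Tsinghua's actual Agent Hospital implementation.
from dataclasses import dataclass, field


def query_llm(role_prompt: str, history: list[str]) -> str:
    """Stand-in for a call to any (multimodal) LLM backend."""
    # Swap this stub for your preferred model client.
    return f"[reply {len(history) // 2 + 1} as: {role_prompt}]"


@dataclass
class Agent:
    role_prompt: str                          # e.g. "a triage doctor"
    history: list[str] = field(default_factory=list)

    def respond(self, message: str) -> str:
        self.history.append(f"IN:  {message}")
        reply = query_llm(self.role_prompt, self.history)
        self.history.append(f"OUT: {reply}")
        return reply


def run_consultation(doctor: Agent, patient: Agent, turns: int = 3) -> str:
    """Alternate doctor questions and patient answers, then request a diagnosis."""
    question = "Please describe your symptoms."
    for _ in range(turns):
        answer = patient.respond(question)    # simulated patient replies
        question = doctor.respond(answer)     # doctor asks a follow-up
    return doctor.respond("Summarize the case and give a working diagnosis.")


if __name__ == "__main__":
    doctor = Agent("a virtual doctor interviewing a patient")
    patient = Agent("a simulated patient with flu-like symptoms")
    print(run_consultation(doctor, patient))
```

In a real system, the stub would call a multimodal model, and the resulting transcript would feed the diagnosis, treatment, and disease-progression tracking described above.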

While it’s still in the research phase, Agent Hospital points to a future where AI could alleviate overburdened healthcare systems, provide round-the-clock care in underserved areas, and revolutionize medical education.

The technology must still clear regulatory and ethical hurdles, but the direction is clear: the AI doctor will see you now.


Katy Perry Didn’t Attend the Met Gala, But AI Made Her the Star of the Night

Another year, another viral deepfake of Katy Perry at the Met Gala, and once again, she wasn’t even there.

Photos showing the pop star in a sleek black designer gown circulated widely on social media during Monday night’s event, matching the “Superfine: Tailoring Black Style” theme. But the images were AI-generated. Perry quickly clarified she was not at the Met; she was on tour.

Perry’s reaction

“Couldn’t make it to the MET, I’m on The Lifetimes Tour (see you in Houston tomorrow IRL),” she posted to Instagram alongside the fake images.

She added a jab at AI confusion: “P.s. this year I was actually with my mom so she’s safe from the bots… but I’m praying for the rest of y’all.”

The repeat hoax

This marks the second year in a row Perry has gone viral for an AI-generated Met Gala look. In 2024, a fabricated image of her in a floral ball gown fooled thousands, including her own mother.

These deepfakes are getting harder to spot. A fake post claiming Perry wore a never-before-seen Mugler fabric went viral with over 400K views and was even falsely credited to Getty Images.

The spread of believable AI-generated content is becoming a growing concern, especially as it dupes not just fans, but family.

AI is now dressing celebrities for events they don’t attend, and millions are still falling for it.

Perry continues her “Lifetimes Tour” with her next stop in Houston. Meanwhile, the internet keeps grappling with what’s real and what’s algorithm.

Are deepfakes becoming the new celebrity PR?


Therapists Too Expensive? Why Thousands of Women Are Spilling Their Deepest Secrets to ChatGPT

More women are turning to ChatGPT for emotional support, using the AI chatbot as a stand-in therapist as mental health systems buckle under pressure. With long wait times and soaring costs, AI is filling a growing gap.

Mental health care is harder to access than ever. In the UK, NHS data shows patients are eight times more likely to wait over 18 months for mental health treatment than for physical health care. Private therapy isn’t always an option either, with sessions costing £60 or more.

In that vacuum, ChatGPT has become a surprising outlet.

Real voices, real feelings

Charly, 29, from London, turned to ChatGPT while grappling with her grandmother’s terminal illness:

“It’s been so helpful to ask the crass, the gruesome, the almost cruel questions about death… the things I feel twisted for wanting to understand.”

Ellie, 27, from South Wales, said it helped her feel seen when no one else was around:

“It didn’t have full context to my life like my therapist does, but it was accessible and non-judgmental in times of crisis.”

Julia, 30, in Munich, used it when her therapist was booked up. The responses felt similar to a therapy app:

“I was surprised at how good the answers were… but it was too practical. My therapist challenges me. ChatGPT didn’t do that.”


What AI can and can’t do

ChatGPT offers instant, always-available support. It’s private, non-judgmental, and often comforting. But it lacks emotional nuance, lived context, and the tough questioning that drives real therapeutic growth.

AI isn’t a replacement for trained professionals, but for many women stuck in limbo, it’s become a digital lifeline.

The bigger issue? People are asking robots for empathy because the human systems keep failing them.


WhatsApp Warning: UK Parents Scammed Out of £500K by AI That Pretends to Be Their Kids

A wave of AI-powered scams is sweeping across WhatsApp, costing UK families nearly half a million pounds in 2025 alone, and it’s only May.

Cybercriminals are now combining old tricks with new tech. In the evolving “Hi Mum” scam, fraudsters impersonate a loved one over WhatsApp and ask for emergency cash.

The twist

They’re now using AI-generated voice messages to mimic children’s voices, making the deception frighteningly convincing.

“Scammers are increasingly getting better at manipulating people… cloning any voice is now simple, even in a matter of moments,” says Jake Moore, global cybersecurity advisor at ESET.

By the numbers:

  • 506 WhatsApp scams since Jan 2025
  • Victims lost £490,606 ($651,230)
  • April alone: 135 cases, £127,417 lost

How it works:

  1. You get a WhatsApp message from an unknown number: “Hi Mum, I lost my phone.”
  2. They claim they’re locked out of their bank.
  3. They send a voice note and it sounds like your child.
  4. They ask you to urgently transfer money to a new account.

A screen-grab excerpt of the WhatsApp ‘Hi mum’ text scam. Photograph: Santander

The danger

Scammers scrape social media for voice clips and personal details. Then they use generative AI to clone the voice and craft a believable story.

“I was able to fool my own mother with an AI version of my voice,” Moore admits.

Who’s at risk:

  • Parents with active kids on social media
  • Elderly users less familiar with AI tricks
  • Anyone receiving messages from unfamiliar numbers

What you can do:

  • Always call back using a saved number before sending money
  • Set up family ‘code words’ to verify real emergencies
  • Never send money to a new account without confirmation
  • Report scams to 7726 (UK scam reporting line)
  • If you fall victim, contact your bank immediately

Stay vigilant

AI scams are advancing fast. WhatsApp, though encrypted, can’t stop someone with your number from messaging you.

“These scams are evolving at breakneck speed,” says Chris Ainsley, head of fraud at Santander.

AI has supercharged a common scam. If your child “calls” from a strange number asking for money, think twice. Then call them on the number you know.
