The Best AI Health Apps in 2025: Smart Tools for Better Wellbeing

Health apps have come a long way from just counting steps and tracking calories. Now, AI is making them faster, smarter, and more useful in everyday life.

Today’s best AI-driven apps can analyze patterns, adapt to user needs, and even offer real-time support.

They can scan a mole and flag potential skin cancer risks, create personalized workout plans based on your recovery, or guide you through therapy exercises.

But with hundreds of options flooding the app stores, which ones actually deliver on their promises? 

After extensive reviews and trawling through real user feedback, we found 10 apps that stand out for the features they offer, their thoughtful integration of AI, and their practical usability.

Let’s dive in and explore them in detail:

Skanna: Scanning and Analyzing Ingredients

  • What it is: Skanna for Android and iOS is an AI-powered app that scans product barcodes to analyze ingredients and provide health insights
  • Ideal for: Health-conscious shoppers, people with allergies, sustainability-minded consumers
  • Unique feature: Real-time ingredient analysis and safety alerts

In a world of cryptic ingredient lists and vague health claims, Skanna brings a new level of clarity to your shopping decisions. This clever app works like a traditional barcode scanner but with a vital difference: it tells you what those mysterious ingredients actually mean for your health.

Skanna’s AI processes product information in seconds, drawing on trusted sources like Open Food Facts and OpenBeautyFacts to offer clear insights about ingredients, potential allergens, and environmental impact. The app excels at explaining complex ingredients in plain English – no more Googling “what is tocopherol acetate?”

For those with specific dietary needs, allergies, or lifestyle choices (like halal or vegan), Skanna quickly identifies suitable products. 

It’s particularly helpful when shopping abroad, as it can translate and explain product labels in different languages – amazing for allergy sufferers. 

How it works

  • Scan any product’s barcode to see what’s inside
  • Get each ingredient explained in everyday language
  • Ask the AI chat about any ingredients you don’t understand
  • If you have allergies or restrictions, you’ll get instant alerts about problem ingredients
  • Save products you like or want to avoid for future shopping

The bottom line

Skanna brings welcome transparency to daily shopping choices, making it easier to make informed decisions about the products you buy – whether you’re watching your diet, avoiding allergens, or pursuing a more sustainable lifestyle.

It cuts through confusing ingredient lists and marketing claims, giving you clear, actionable information in seconds.

Ada: Clinically-Backed Health Assistance

  • What it is: Ada is a conversational AI health assistant that helps assess symptoms and guide health decisions
  • Ideal for: Anyone seeking initial health guidance or symptom-checking
  • Unique feature: Doctor-trained AI that speaks your language

Ada walks you through your symptoms in a way that feels like talking to a doctor.

Instead of simply matching symptoms to conditions, it asks follow-up questions that make the assessment more precise – helping you decide whether to seek medical care or manage things at home.

With over 35 million symptom assessments worldwide, Ada has built a reputation for accuracy while maintaining exceptional user privacy.

How it works

  • Type your main symptom or concern into the chat
  • Answer follow-up questions about how you’re feeling and your medical history
  • Get a clear assessment of what might be causing your symptoms
  • If your symptoms need urgent attention, you’ll be advised to seek immediate care
  • Share your symptom history with your doctor if you need an in-person visit

The bottom line

While no AI can replace a doctor’s expertise, Ada is an excellent tool for understanding your health. It’s particularly valuable for helping you decide whether symptoms demand urgent care or just a good night’s sleep.

Wysa: Your AI Mental Health Companion

  • What it is: Wysa is an emotionally intelligent chatbot combining AI therapy with optional human support
  • Type: Mental Health & Wellbeing
  • Ideal for: People seeking emotional support, stress management, or mental wellness tools
  • Unique feature: Evidence-based therapy techniques with FDA recognition

Wysa is an AI-driven mental health app designed to provide structured emotional support. Built around cognitive behavioral therapy (CBT), it guides users through techniques for managing stress, anxiety, and sleep issues.

What sets Wysa apart is its ability to adapt. Instead of offering generic responses, it asks follow-up questions, suggests personalized exercises – like guided breathing or mindfulness techniques – and remembers what works for you over time. 

Recognized by the FDA and used by over 5 million people, Wysa blends AI-driven support with clinically validated strategies, making it a reliable companion for everyday mental wellness.

How it works

  • Start a chat about whatever’s on your mind
  • Get guided through exercises that match your current feelings
  • Track your mood patterns over days and weeks
  • If you’re feeling overwhelmed, you’ll get suggestions for immediate coping strategies
  • Connect with a human therapist if you need more support

The bottom line

Wysa fuses AI and human therapy to make mental health support both accessible and evidence-driven. 

While it can’t totally replace traditional face-to-face therapy, it’s undoubtedly an invaluable tool for daily emotional support and building resilience.

SkinVision: AI-Powered Skin Health Monitor

  • What it is: SkinVision is a smartphone-based skin cancer detection system using AI analysis
  • Type: Medical Screening & Prevention
  • Ideal for: Anyone concerned about skin changes or seeking regular mole monitoring
  • Unique feature: Clinical-grade skin analysis with 90%+ accuracy

SkinVision turns your smartphone into a powerful skin health tool, using AI to analyze moles and skin lesions in seconds. 

Trained on millions of medical images, the app identifies potential warning signs that indicate when a mole or skin lesion is worth getting checked out. 

It’s been clinically tested, with studies showing over 90% accuracy in detecting suspicious skin conditions. 

How it works

  • Take photos of concerning moles or skin spots
  • Receive an AI risk assessment within 30 seconds – backed by dermatologists
  • Track changes in your skin spots over time with detailed imaging
  • Get reminders for regular skin checks – personalized to your risk level
  • Share detailed reports with healthcare providers when needed

The bottom line

While SkinVision isn’t a replacement for dermatologist visits, it’s an invaluable tool for regular skin monitoring and early detection of potential issues. It’s very useful for those with multiple moles or a family history of skin cancer.

Buoy Health: For Understanding Symptoms

  • What it is: Buoy Health is an AI health assistant that helps understand symptoms and find appropriate care
  • Type: Medical Guidance & Triage
  • Ideal for: People seeking to understand symptoms and determine next steps
  • Unique features: Evidence-based care navigation with clear action steps

Googling symptoms often leads to worst-case scenarios!

Buoy Health helps cut through the panic by offering reliable, AI-driven guidance based on real clinical data.

Unlike basic symptom checkers, Buoy adapts as you answer, asking follow-up questions much like a doctor would. It pulls from a vast database of medical research and real-world cases to provide clear, personalized recommendations – but without the confusing jargon.

Whether you’re wondering if a symptom needs urgent care or just some rest, Buoy helps you make informed decisions without the stress of endless internet searches.

How it works

  • Describe your symptoms in everyday language
  • Get asked relevant follow-up questions – just like at a doctor’s visit
  • Receive a clear explanation of possible causes – from mild to serious
  • See recommended care options – from home remedies to emergency care
  • Find appropriate healthcare providers in your area

The bottom line

Buoy Health helps take the uncertainty out of symptoms, offering clear, reliable guidance when you need it. It’s not a replacement for a doctor, but it’s a useful first step in assessing your situation and figuring out what to do next.

Youper: AI Therapy Reimagined

  • What it is: Youper is an AI-powered mental health platform using clinical psychology techniques
  • Type: Mental Health & Personal Development
  • Ideal for: People seeking structured emotional support and mental wellness tools
  • Unique features: Clinically-validated emotional support with personalized insights

Youper blends AI with clinically backed therapy techniques to provide personalized mental health support. It engages in natural, meaningful conversations, guiding users through structured coping strategies. 

Developed with input from Stanford experts, it uses cognitive behavioral therapy (CBT), acceptance and commitment therapy (ACT), and other evidence-based approaches to help users manage their emotions.

A study in the Journal of the American Medical Association found it had the highest engagement rates among mental health apps, suggesting that it makes therapy tools more accessible and effective.

How it works

  • Begin with a quick emotional check-in – rate your mood and energy
  • Chat naturally about what’s affecting your mental state
  • Learn practical techniques matched to your current situation
  • See your emotional patterns tracked and visualized over time
  • Access personalized exercises – from breathing to behavioral strategies

The bottom line

Youper makes mental health support easier to access – without feeling clinical or impersonal. It doesn’t replace therapy, but it helps users manage stress, track emotions, and build better coping strategies in a structured, evidence-based way.

For those who might not have access to regular therapy – or simply want extra support between sessions – it’s a practical tool for long-term emotional well-being.

Noom: AI-Powered Holistic Healthcare 

  • What it is: Noom is a behavioral change platform combining AI coaching with psychology-based wellness
  • Ideal for: People seeking sustainable lifestyle changes and weight management
  • Unique features: AI coach “Welli” and smart food logging

While Noom has been around for a while, it recently added an AI assistant, “Welli,” which provides 24/7 support while complementing the app’s other features.

The app’s newest AI features include photo-based food logging that can identify meals from photos, making tracking both easier and more accurate.

How it works

  • Take a photo of your meal for instant AI analysis of ingredients and nutrition
  • Get real-time answers from Welli (the AI coach) about food, habits, or medications
  • Follow daily psychology-based lessons that adapt to your progress
  • Connect with human coaches when you need deeper guidance
  • If you’re using GLP-1 medications, you’ll get specific support for side effects and diet adjustments

The bottom line

Noom successfully bridges the gap between traditional health coaching and modern AI. It’s very effective for those who want to understand the ‘why’ behind their health habits while making sustainable changes.

Fitbod: Smart Strength Training Guide

  • What it is: Fitbod is an AI-powered workout planner that adapts to your fitness level and goals
  • Ideal for: Gym-goers seeking personalized strength training plans
  • Unique feature: Real-time workout adaptation based on recovery and progress

Fitbod takes the guesswork out of strength training by delivering personalized workout plans that evolve with you. 

The app’s AI analyzes your past workouts, available equipment, and recovery status to create optimized training sessions. It tracks muscle fatigue and recovery, ensuring you’re always training at the right intensity.

With over 120 million workouts logged to date, the platform leverages this vast dataset to refine its recommendations, helping users progress safely while avoiding plateaus and overtraining.

How it works

  • AI analyzes workout history to prevent overtraining specific muscle groups
  • Real-time adaptation based on available gym equipment
  • Tracks recovery status of muscle groups to optimize training splits
  • Built on data from over 120 million logged workouts
  • Progressive overload calculations based on performance data

The bottom line

Fitbod successfully combines exercise science with AI to deliver a truly personalized training experience. 

It’s brilliant for both beginners seeking workout guidance and experienced lifters looking to optimize their routines.

Aaptiv: AI-Guided Fitness Coach

  • What it is: Aaptiv is a personal AI fitness coach with audio-guided workouts
  • Ideal for: People seeking trainer-led workouts with AI personalization
  • Unique feature: AI-curated audio workouts with personalized recommendations

Aaptiv combines expert-led training with AI-driven personalization. Instead of watching workout videos, you get audio-guided coaching, so you can focus on movement, not a screen.

The app’s AI tunes workouts based on your fitness level, goals, and real-time feedback – adjusting difficulty, intensity, and programming as you progress. Whether you’re at home, in the gym, or outdoors, Aaptiv keeps workouts challenging without being overwhelming.

How it works

  • Pick your preferred workout type and time
  • Get matched with AI-selected audio workouts
  • Follow trainer guidance through your headphones
  • If you’re struggling with any exercise, you can modify it
  • Rate your workout to get better recommendations next time

The bottom line

Aaptiv successfully bridges the gap between personal training and AI technology, making professional-quality workouts more accessible and adaptable to individual needs.

With audio-guided sessions and real-time adjustments, it keeps workouts engaging and effective – whether you’re at home, in the gym, or on the go.

Headspace (Ebb): AI Mindfulness Guide

  • What it is: Headspace is an AI-enhanced meditation and mental wellness companion
  • Ideal for: Anyone seeking guided meditation and emotional support
  • Unique features: Personalized mindfulness recommendations with clinical backing

Headspace’s new AI companion, Ebb, helps you choose the right meditation or mindfulness exercise based on what you need in the moment. 

Developed with clinical psychologists, it suggests specific techniques – whether you’re trying to relax, refocus, or manage stress.

It also adjusts over time, remembering what works best for you. All conversations are encrypted, with extra safeguards for those who might need more support.

Key features

  • Share what’s on your mind through Headspace’s Ebb AI
  • Get matched with relevant meditation exercises
  • Practice mindfulness techniques suited to your current state
  • If you’re experiencing a crisis, you can get connected to appropriate support
  • Track your meditation progress and emotional patterns over time

The bottom line

Ebb makes it easier to find the right kind of mindfulness practice when you need it, without sifting through endless options. 

It builds on Headspace’s strong foundations, offering a smarter way to integrate meditation and mental wellness into daily life.

Choosing an AI Health App That Works For You

These 10 apps slot into your routine and make things easier. They help you track symptoms without overloading you with medical jargon, plan workouts without endless guesswork, or give you mental health support when you don’t have time (or energy) for therapy.

That said, no app can replace a doctor, a coach, or real human connection. But they can fill the gaps – whether that’s helping you decide if something needs urgent care, keeping you accountable with fitness goals, or making sure you actually unwind at the end of the day.

If you’re not sure where to start, most of these apps offer free trials, so you can see what works for you without committing long-term. 

Pick one, try it out, and if it makes life easier, keep it. If not, there are other avenues to explore – because the best health tools are the ones you’ll actually use.

The post The Best AI Health Apps in 2025: Smart Tools for Better Wellbeing appeared first on DailyAI.

10 Top Women in AI in 2025

AI is changing our world, but the stories of those who build it often get lost in the noise.

Behind the headlines and hype, a group of women are solving AI’s fundamental challenges – despite working in an industry persistently impacted by gender inequality.

Women make up just 22% of AI professionals worldwide and only 12% of AI researchers. In academic publishing, female researchers account for just 29% of first authors on AI papers, a number that hasn’t increased since the mid-2000s. 

This is a story about ten leaders who have influenced AI despite the odds being stacked against them. 

Their work – from chip architecture to bias detection, from environmental impact to safety systems – is vital to the modern AI industry and its future direction. 

Fei-Fei Li

LinkedIn: https://www.linkedin.com/in/fei-fei-li-4541247/ 

Known as the “godmother of AI,” Fei-Fei Li’s defining influence on AI spans years of commitment to the field.

Born in Beijing, she immigrated to the United States at age 16 and has been instrumental in transitioning AI from a niche technology to something scalable and broadly accessible. 

As co-director of Stanford’s Human-Centered AI Institute, Li has brought attention to ‘human-centered AI,’ which places human values at its core. Many of her works feature on university reading lists; papers with Li as a named author have amassed an extraordinary 285,343 citations in total.

One of Li’s key career milestones was creating ImageNet in 2007, a massive dataset containing over 15 million labeled images across 22,000 categories. 

ImageNet effectively solved one of computer vision’s most fundamental challenges: teaching machines to recognize objects with human-like accuracy. The dataset and the projects that emerged from it catalyzed important advancements in deep learning. 

Simultaneous with her technical achievements, Li’s commitment to diversity in AI led her to co-found AI4ALL, which has provided opportunities for thousands of underrepresented students to enter the field. Many of the organization’s alumni have secured positions at major tech companies and established their own AI startups.

In 2024, Li’s co-founding of World Labs marked a new chapter in her career. The company’s focus on “spatial intelligence” aims to bridge the gap between AI’s understanding of digital and physical spaces. 

With $230 million in funding from leading tech investors, including Andreessen Horowitz and NVIDIA’s NVentures, World Labs rapidly achieved unicorn status.

Joy Buolamwini

LinkedIn: https://www.linkedin.com/in/buolamwini/

Dr. Joy Buolamwini’s journey to becoming a leading voice in AI ethics was triggered by a deeply personal experience. 

While working with facial analysis software as a graduate student at MIT, she discovered these systems struggled to detect her dark-skinned face unless she wore a white mask. This sparked large-scale research into AI bias and prejudice, challenging the discourse around AI decision-making. 

Buolamwini’s “Gender Shades” project, conducted through MIT Media Lab and much-cited across the AI research community and media, including here on Dailyai.com, provided the first comprehensive evidence of racial and gender bias in commercial AI systems. 

The study revealed error rates of up to 34% for darker-skinned females compared to just 0.8% for lighter-skinned males. Leading tech companies, including Microsoft, IBM, and Amazon, subsequently assessed their facial recognition technologies.

As founder of the Algorithmic Justice League (AJL), Buolamwini has created a movement that combines research with advocacy. Combining art and science, AJL has created influential documentaries and projects, building support for AI technologies that truly serve everyone equally.

Buolamwini’s 2023 book “Unmasking AI” became a national bestseller, offering unprecedented insights into the hidden biases within AI systems. The book not only documents technical failures but also provides a blueprint for creating more equitable AI systems.

Timnit Gebru

LinkedIn: https://www.linkedin.com/in/timnit-gebru-7b3b407/

Dr. Timnit Gebru’s impact on AI ethics has accelerated the industry’s critical thinking, forcing it to reflect more deeply on its impacts and consequences.

Born in Addis Ababa, Ethiopia, Gebru’s perspective as both a technical expert and advocate for marginalized communities has been formidable in challenging the industry status quo.

Her co-authored paper “On the Dangers of Stochastic Parrots” was a watershed moment in AI ethics, challenging the fundamental assumptions behind large language models (LLMs). 

Some of the paper’s core points were that today’s frontier AI models fail to reflect diverse cultural norms and values, essentially acting as static monoliths rooted predominantly in the Western world. Rather than understanding language, LLMs primarily manipulate it, and they are tough – if not impossible – to audit for bias due to their ever-spiraling complexity. 

The paper’s controversial reception and her subsequent departure from Google’s Ethical AI team sparked fierce debate about corporate influence on AI research.

The founding of the Distributed AI Research Institute (DAIR) in 2021 represented Gebru’s vision for truly independent AI research. DAIR’s structure advocates for community-rooted research free from corporate influence. 

Gebru’s selection to receive the 2025 Miles Conrad Award recognizes her transformative impact on the field. Her work continues to inspire a new generation of researchers, positioning AI as both a technical frontier and battleground for social justice and equality. 

Daniela Amodei

LinkedIn: https://www.linkedin.com/in/daniela-amodei-790bb22a/

Daniela Amodei’s career bridges politics, global health, and technology, culminating in her role as co-founder and president of Anthropic, one of generative AI’s most influential startups. 

After graduating summa cum laude from the University of California, Santa Cruz, with a degree in English Literature, Amodei worked in political communications, including managing messaging for US Representative Matt Cartwright. 

In 2013, she joined Stripe, starting in recruitment before transitioning into risk program management. Working across machine learning (ML), data science, engineering, legal, and finance teams, Amodei developed a deep understanding of technology’s regulatory and operational challenges.

By 2018, she moved to OpenAI as Vice President of Safety and Policy, overseeing safety, policy, engineering, and human resources. During this period, Amodei played a key role in shaping AI governance strategies. However, in 2020, Amodei and several colleagues, including her brother Dario Amodei, left OpenAI over concerns about the company’s direction in AI safety.

In 2021, the Amodei siblings co-founded Anthropic to build AI systems designed for reliability, transparency, and alignment with human values. 

Under Daniela’s leadership, the company has grown from a small team to more than 800 employees and has attracted massive investment, including $4 billion from Amazon. 

Sasha Luccioni

LinkedIn: https://www.linkedin.com/in/sashaluccioniphd/ 

Dr. Sasha Luccioni, born Alexandra Sasha Vorobyova in Ukraine in 1990, moved to Canada at the age of four and quickly developed an early interest in science.

She later earned a B.A. in Language Science from Université Paris III: Sorbonne Nouvelle in 2010, followed by an M.Sc. in Cognitive Science with a focus on Natural Language Processing from École Normale Supérieure in Paris in 2012. In 2018, Luccioni completed her Ph.D. in Cognitive Computing at Université du Québec à Montréal.

Dr. Luccioni began her professional career at Nuance Communications in 2017, focusing on natural language processing and machine learning to enhance conversational agents. She then joined Morgan Stanley’s AI/ML Center of Excellence in 2018, working on explainable AI decision-making systems. 

In 2019, she became a postdoctoral researcher at Université de Montréal and Mila, collaborating with Yoshua Bengio on the “This Climate Does Not Exist” project, which used generative adversarial networks (GANs) to visualize climate change impacts.

In 2021, Dr. Luccioni joined Hugging Face as a research scientist and Climate Lead, where she focuses on quantifying the carbon footprint of AI systems and promoting sustainable practices in machine learning development. She has been instrumental in developing tools like CodeCarbon for real-time tracking of carbon emissions from computing. 

Her research on the BLOOM language model highlighted its potential to generate over 50 metric tons of CO₂ during its lifecycle, equivalent to 80 transatlantic flights – research we’ve cited at Dailyai.com on a few occasions.

Beyond her research, Dr. Luccioni is a founding member of Climate Change AI and serves on the board of Women in Machine Learning, mentoring underrepresented minorities in the AI community. 

In 2024, her contributions were recognized by TIME Magazine, naming her one of the 100 most influential people in AI, and by Business Insider on its AI Power List. 

Mira Murati

LinkedIn: https://www.linkedin.com/in/mira-murati-4b39a066/ 

Born in Vlorë, Albania, Mira Murati was one of the most influential figures in OpenAI and has since founded her own AI research lab – yet to be named – focusing on artificial general intelligence (AGI).

Within months, Murati assembled an all-star team, including Jonathan Lachman and several key researchers from OpenAI, Character AI, and Google DeepMind.

As OpenAI’s Chief Technology Officer from 2018 to 2024, she orchestrated the development of technologies that have fundamentally altered our relationship with AI.

Under her leadership, OpenAI released ChatGPT, which achieved the fastest user adoption rate in consumer technology history, reaching 100 million users within two months. She also oversaw the development of DALL-E, which revolutionized AI image generation, and GPT-4o, one of 2024’s headline model releases. 

Murati pioneered OpenAI’s “iterative deployment” strategy, which involves releasing AI models gradually to better understand and address potential risks. 

During her brief but pivotal role as interim CEO during OpenAI’s leadership transition in late 2023, Murati effectively governed the company during the crisis while maintaining its momentum. 

Rana el Kaliouby

LinkedIn: https://www.linkedin.com/in/kaliouby/ 

Born in Cairo, Egypt, in 1978, Rana el Kaliouby has spent much of her career bridging technology and human emotion. 

She earned her bachelor’s and master’s degrees from the American University in Cairo before completing a Ph.D. at Cambridge University’s Newnham College, where she developed early methods for automated emotion recognition.

In 2009, she co-founded Affectiva, a spin-off from MIT Media Lab, to bring emotion AI into real-world applications. Under her leadership, the company built systems that analyzed facial expressions and vocal cues to interpret emotions. 

Following Affectiva’s acquisition by Smart Eye in 2021, el Kaliouby served as Deputy CEO before founding Blue Tulip Ventures in 2024.

The firm focuses on “human-centric AI,” investing in technologies designed to prioritize well-being, sustainability, and social impact. It has already backed startups developing AI-driven mental health tools and emotion-aware education technology.

Beyond her work in AI, el Kaliouby is an executive fellow at Harvard Business School, where she teaches about AI and entrepreneurship. 

She also serves as a trustee for the Boston Museum of Science and the American University in Cairo. Her memoir, “Girl Decoded,” published in 2020, recounts her journey from a self-described “nice Egyptian girl” to a leader in technology, advocating for the humanization of AI.

Recognized on Fortune’s 40 Under 40 list and Forbes’ Top 50 Women in Tech, el Kaliouby continues to push for diversity in AI, championing initiatives that support women and underrepresented groups in the field.

Daniela Rus

LinkedIn: https://www.linkedin.com/in/daniela-rus-220b3/ 

Born in Cluj-Napoca, Romania, Daniela Rus moved to the US and earned a Bachelor of Science in computer science and mathematics from the University of Iowa in 1985, followed by a Master of Science and a Ph.D. in computer science from Cornell University in 1990 and 1993, respectively. Her doctoral research focused on fine motion planning for dexterous manipulation.

After completing her Ph.D., Rus began her academic career as a professor in the Computer Science Department at Dartmouth College. In 2004, she joined the Massachusetts Institute of Technology (MIT), and since 2012, she has served as the director of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).

Under Rus’s leadership, CSAIL has become a global hub for robotics innovation, with breakthroughs in autonomous vehicles, drone technology, and human-robot collaboration.

Rus’s research interests operate at the intersection of AI and robotics. She has made massive contributions to soft robotics, developing machines capable of safely interacting with humans and navigating through confined spaces. 

In recognition of her pioneering work, Rus was awarded the 2024 John Scott Award and the 2025 IEEE Edison Medal. She is a member of the National Academy of Engineering, the American Academy of Arts and Sciences, and the National Academy of Sciences, and a fellow of ACM, AAAI, and IEEE. 

Her recent books, “The Heart and the Chip: Our Bright Future with Robots” and “The Mind’s Mirror: Risk and Reward in the Age of AI,” explore the relationship between humans and machines, with ethical discussions surrounding what it might look like in the not-too-distant future. 

Joelle Pineau

LinkedIn: https://www.linkedin.com/in/joelle-pineau-371574141/ 

Joelle Pineau has advanced machine learning and cutting-edge robotics research while improving how AI is developed and shared through open-source technologies.

As Vice President of AI Research at Meta, Pineau leads the company’s Fundamental AI Research (FAIR) lab, where she has driven breakthroughs in reinforcement learning (RL), robotics, and open-access AI models. Pineau has authored and co-authored many influential studies on robotics, with a particular focus on co-bots for elderly care.

Her career began in robotics, earning a Ph.D. from Carnegie Mellon University with research focused on algorithms for complex decision-making. At McGill University, where she remains a professor, she built one of Canada’s leading AI labs and helped develop AI models for medical diagnostics. 

Since joining Meta, Pineau has been a driving force behind efforts to make AI research more open and reproducible. Under her leadership, the FAIR lab released LLaMA, a family of LLMs that have greatly influenced the entire AI industry, challenging orthodox proprietary model releases. 

Beyond research, Pineau has been outspoken about the need for greater transparency and accountability in AI. She has pushed for policies that require AI models, datasets, and benchmarks to be openly shared, ensuring that research can be independently verified. 

Lisa Su

LinkedIn: https://www.linkedin.com/in/lisasu-amd/ 

Lisa Su’s transformation of AMD represents one of technology’s most remarkable turnaround stories. Born in Tainan, Taiwan, in 1969, Su’s journey to becoming one of tech’s most respected CEOs began with a passion for semiconductors and a Ph.D. from MIT. 

When she took the helm of AMD in 2014, the company was struggling with a stock price under $3. Today, under her leadership, AMD has surpassed Intel in market value, with shares trading over $140.

Su’s strategic genius lies in her early recognition of AI’s potential to transform the computing industry. While competitors such as Intel focused on consumer electronics, she directed AMD’s resources toward high-performance computing architectures. This proved prescient as the AI boom created phenomenal demand for high-end AI-centric processors.

In 2024, Su called AI “the most important technology that has come over the last 50 years.” At AMD’s “Advancing AI” event, she showcased the MI300X, the company’s flagship AI accelerator designed to challenge NVIDIA’s dominance. 

Notable Women Pioneers in AI Research and Development

The women in the top 10 list have led some of the most defining advances in AI, but they’re part of a much larger movement. 

Across research, policy, and industry, many others are influencing how AI is built, deployed, and governed. 

Here are just a few more leaders whose contributions continue to push AI forward:

  • Amba Kak – As co-executive director of the AI Now Institute, Amba Kak ensures AI systems are accountable to the public rather than corporate interests. She focuses on policy reform and regulation, advocating for stronger AI governance.
  • Anima Anandkumar – Professor at Caltech and former senior director at NVIDIA, Anima Anandkumar develops AI applications for climate science, robotics, and autonomous technologies. Her work ensures AI contributes to solving global challenges, from climate modeling to advanced robotics. 
  • Chelsea Finn – A professor at Stanford and part of Google Brain, Chelsea Finn researches how AI can improve itself through experience, leading advancements in robotics and machine learning. Her work focuses on meta-learning, allowing AI to learn more efficiently from fewer data points.
  • Claire Delaunay – With a career spanning NVIDIA, Uber, and Google, Claire Delaunay has been at the forefront of AI-powered robotics and autonomous systems. She played a key role in developing scalable AI for industrial and mobility applications, bridging academic research and real-world AI deployment.
  • Cynthia Breazeal – An MIT professor and a pioneer in social robotics, Cynthia Breazeal founded the Personal Robots Group at the MIT Media Lab. She has led research into AI-driven companion robots for education, healthcare, and personal assistance.
  • Cynthia Rudin – A Duke University professor, Cynthia Rudin researches interpretable AI, ensuring models are transparent and reliable. Her work has been particularly impactful in healthcare and criminal justice, where AI decision-making must be explainable and fair. She advocates for AI systems that prioritize accountability and user trust.
  • Daniela Braga – As CEO of Defined.ai, Daniela Braga has been instrumental in developing ethically sourced AI training data. She advocates for reducing bias in AI models by diversifying datasets, ensuring AI systems are more inclusive and representative. Her work emphasizes the need for more accurate and fair language models.
  • Daphne Koller – Co-founder of Coursera and CEO of Insitro, Daphne Koller has pioneered AI applications in drug discovery, accelerating treatment development. Her online learning platform, Coursera, has transformed global access to AI-driven education. She remains at the forefront of leveraging AI for life sciences and learning.
  • Francesca Rossi – As IBM’s Global AI Ethics Lead, Francesca Rossi works on designing AI that aligns with human values. Her focus is on ensuring AI is transparent, accountable, and ethically sound. She plays a key role in shaping global discussions on AI responsibility.
  • Irene Solaiman – A leader in AI policy, Irene Solaiman heads global policy at Hugging Face, shaping responsible AI governance. Previously at OpenAI, she played a key role in the staged release of GPT-2 and pioneered bias testing in large language models. Her work focuses on AI ethics, cultural value alignment, and policy standards for equitable AI deployment.
  • Ivana Bartoletti – Global Data Privacy Officer at Wipro, Ivana Bartoletti specializes in ensuring AI aligns with data protection laws. She is a leader in AI ethics and privacy best practices.
  • Karine Perset – Head of the OECD AI Policy Observatory, Karine Perset shapes AI governance frameworks internationally.
  • Kate Crawford – A Senior Principal Researcher at Microsoft Research, Kate Crawford explores AI’s societal effects. She focuses on bias in AI, the power structures shaping its development, and the ethical implications of large-scale AI deployments. Her research is instrumental in addressing the unintended consequences of AI on marginalized communities.
  • Kay Firth-Butterfield – Appointed the world’s first Chief AI Ethics Officer in 2014, Kay Firth-Butterfield leads global discussions on AI responsibility. She is an advisory board member at Fathom.org.
  • Latanya Sweeney – A Harvard professor, Latanya Sweeney researches how AI influences societal structures and works on preventing bias in AI decision-making.
  • Lina Khan – As Chair of the U.S. Federal Trade Commission (FTC), Lina Khan regulates AI-driven monopolies and ensures fair competition in the AI sector. She has led many AI-centric antitrust discussions.
  • Manuela Veloso – Head of AI Research at JPMorgan Chase, Manuela Veloso integrates AI into financial systems. With a background in robotics and machine learning, she explores how AI can improve automation, risk assessment, and security in finance. 
  • Nina Schick – Founder of Tamang Ventures, Nina Schick specializes in AI’s role in media, deep fakes, and journalism. She advocates for responsible AI in information dissemination and political discourse. 
  • Regina Barzilay – An MIT professor, Regina Barzilay is known for applying AI to healthcare and medical research. She has developed pioneering AI research for early cancer detection and drug discovery.  Beyond her work in oncology, Barzilay has also explored how AI can accelerate drug development, helping to identify promising compounds more efficiently. 
  • Rumman Chowdhury – Chief Executive Officer & co-founder at HumaneIntelligence and member of the Artificial Intelligence Safety and Security Board for US Homeland Security, Rumman Chowdhury focuses on identifying and reducing bias in AI systems. She ensures AI is used in a fair and responsible manner.
  • Stephanie Hare – Author of “Technology Is Not Neutral,” Stephanie Hare pushes for AI transparency. She advocates for AI that benefits the broader public.
  • Sue Turner OBE – As CEO of AI Governance, Sue Turner helps companies integrate AI responsibly. She advises on ethical business strategies to ensure AI is used for social good.
  • Tekedra Mawakana – As co-CEO of Waymo, Tekedra Mawakana leads policy efforts in AI-driven transportation, advocating for ethical and safe deployment of autonomous vehicles. She plays a critical role in regulatory discussions around AI in the transport and mobility industry.
  • Yejin Choi – A professor at the University of Washington and a leading researcher at AI2, Yejin Choi works on improving AI’s reasoning abilities. Her research helps AI systems interpret nuanced language and make fairer, more ethical decisions.

AI is advancing at an astonishing pace, and the brilliant women on this list, together with many others, are driving that momentum.

Whether they’re developing the technology itself, combating ethical challenges, or influencing policies that govern its use, their work is instrumental for the industry and its role in the lives of those it affects. 

While there is much work to be done to secure fair, unbiased representation in both AI and the industry behind it, progress is being made thanks to the innovators in this list and the millions of other women beside them. 

The post 10 Top Women in AI in 2025 appeared first on DailyAI.

Apple pulls AI-generated news from its devices after backlash

Apple has killed its Apple Intelligence AI news feature after it fabricated stories and twisted real headlines into fiction. 

Apple’s AI news was supposed to make life easier by summing up news alerts from multiple sources. Instead, it created chaos by pushing out fake news, often under trusted media brands. 

Here’s where it all went wrong:

  • Using the BBC’s logo, it invented a story claiming tennis star Rafael Nadal had come out as gay, completely misunderstanding a story about a Brazilian player.
  • It jumped the gun by announcing teenage darts player Luke Littler had won the PDC World Championship – before he’d even played in the final.
  • In a more serious blunder, it created a fake BBC alert claiming Luigi Mangione, who’s accused of killing UnitedHealthcare CEO Brian Thompson, had killed himself.
  • The system stamped The New York Times’ name on a completely made-up story about Israeli Prime Minister Benjamin Netanyahu being arrested.
An AI-generated news summary of a BBC article wrongly stated CEO shooting suspect Luigi Mangione shot himself. The BBC’s logo was attached.

The BBC, angered over seeing its name attached to fake stories, eventually filed a formal complaint. Press groups joined in, such as Reporters Without Borders, who warned that letting AI rewrite the news puts the public’s right to accurate information at risk.

The National Union of Journalists also called for the feature to be removed, saying readers shouldn’t have to guess whether what they’re reading is real.

Research has previously shown that even when people learn that AI-created media is fake, it still leaves a psychological ‘mark’ that persists afterwards. 

Apple Intelligence – which offered a range of AI-powered features including AI news – was one of the headline features of the new iPhone 16 range.

Apple prides itself on polished products that ‘just work’, and it’s rare for the company to backtrack – so it evidently had little choice here.

That said, Apple is not alone as far as AI blunders go. Not long ago, Google’s AI-generated search summaries told people they could eat rocks and put glue on pizza.

Apple plans to resurrect the feature with warning labels and special formatting to show when AI creates the summaries. 

Should readers have to decode different fonts and labels just to know if they’re reading real news? Here’s a radical idea: the feature could simply display the original headline itself.

It all goes to show that, as AI continues to seep into every corner of our digital lives, some things – like accurate news – are simply too important to get wrong.

A big U-turn from Apple, but probably not the last we’ll see of its type.

The post Apple pulls AI-generated news from its devices after backlash appeared first on DailyAI.

Woman scammed out of €800k by an AI deep fake of Brad Pitt

What began as a ski holiday Instagram post ended in financial ruin for a French interior designer after scammers used AI to convince her she was in a relationship with Brad Pitt.

The 18-month scam targeted Anne, 53, who received an initial message from someone posing as Jane Etta Pitt, Brad’s mother, claiming her son “needed a woman like you.” 

Not long after, Anne started talking to what she believed was the Hollywood star himself, complete with AI-generated photos and videos.

“We’re talking about Brad Pitt here and I was stunned,” Anne told French media. “At first, I thought it was fake, but I didn’t really understand what was happening to me.” 

The relationship deepened over months of daily contact, with the fake Pitt sending poems, declarations of love, and eventually a marriage proposal.

“There are so few men who write to you like that,” Anne said. “I loved the man I was talking to. He knew how to talk to women and it was always very well put together.”

The scammers’ tactics proved so convincing that Anne eventually divorced her millionaire entrepreneur husband.

After building rapport, the scammers began extracting money with a modest request – €9,000 for supposed customs fees on luxury gifts. It escalated when the impersonator claimed to need cancer treatment while his accounts were frozen due to his divorce from Angelina Jolie. 

A fabricated doctor’s message about Pitt’s condition prompted Anne to transfer €800,000 to a Turkish account.

Scammers requested money for fake Brad Pitt’s cancer treatment

“It cost me to do it, but I thought that I might be saving a man’s life,” she said. When her daughter recognized the scam, Anne refused to believe it: “You’ll see when he’s here in person then you’ll say sorry.”

Her illusions were shattered upon seeing news coverage of the real Brad Pitt with his partner Inés de Ramon in summer 2024. 

Even then, the scammers tried to maintain control, sending fake news alerts dismissing these reports and claiming Pitt was actually dating an unnamed “very special person.” In a final roll of the dice, someone posing as an FBI agent extracted another €5,000 by offering to help her escape the scheme.

The aftermath proved devastating – three suicide attempts led to hospitalization for depression. 

Anne opened up about her experience to French broadcaster TF1, but the interview was later removed after she faced intense cyber-bullying.

Now living with a friend after selling her furniture, she has filed criminal complaints and launched a crowdfunding campaign for legal help.

A tragic situation – though Anne is certainly not alone. Her story parallels a massive surge in AI-powered fraud worldwide. 

Spanish authorities recently arrested five people who stole €325,000 from two women through similar Brad Pitt impersonations. 

Speaking about AI fraud last year, McAfee’s Chief Technology Officer Steve Grobman explains why these scams succeed: “Cybercriminals are able to use generative AI for fake voices and deepfakes in ways that used to require a lot more sophistication.”

It’s not just people who are lined up in the scammers’ crosshairs, but businesses, too. In Hong Kong last year, fraudsters stole $25.6 million from a multinational company using AI-generated executive impersonators in video calls. 

Superintendent Baron Chan Shun-ching described how “the worker was lured into a video conference that was said to have many participants. The realistic appearance of the individuals led the employee to execute 15 transactions to five local bank accounts.”

Would you be able to spot an AI scam?

Most people would fancy their chances of spotting an AI scam, but research says otherwise. 

Studies show humans struggle to distinguish real faces from AI creations, and synthetic voices fool roughly a quarter of listeners. And that evidence came from last year – AI image, voice, and video synthesis have evolved considerably since. 

Synthesia, an AI video platform that generates realistic human avatars speaking multiple languages – now backed by Nvidia – recently doubled its valuation to $2.1 billion. Video and voice synthesis platforms like Synthesia and ElevenLabs are among the tools fraudsters use to launch deep fake scams.

Synthesia acknowledges this itself. It recently demonstrated its commitment to preventing misuse through a rigorous public red-team test, which showed its compliance controls successfully blocking attempts to create non-consensual deepfakes or to use avatars for harmful content such as promoting suicide and gambling.

Whether such measures are effective remains to be seen. As companies and individuals wrestle with compellingly real AI-generated media, the human cost – illustrated by Anne’s devastating experience – is set to rise. 

The post Woman scammed out of €800k by an AI deep fake of Brad Pitt appeared first on DailyAI.

Two hours of AI conversation can create a near-perfect digital twin of anyone

Stanford and Google DeepMind researchers have created AI that can replicate human personalities with uncanny accuracy after just a two-hour conversation. 

By interviewing 1,052 people from diverse backgrounds, they built what they call “simulation agents” – digital copies that could predict their human counterparts’ beliefs, attitudes, and behaviors with remarkable consistency.

To create the digital copies, the team uses data from an “AI interviewer” designed to engage participants in natural conversation. 

The AI interviewer asks questions and generates personalized follow-up questions – an average of 82 per session – exploring everything from childhood memories to political views.

Through these two-hour discussions, each participant generated detailed transcripts averaging 6,500 words.

The above shows the study platform, which includes participant sign-up, avatar creation, and a main interface with modules for consent, avatar creation, interview, surveys/experiments, and a self-consistency retake of surveys/experiments. Modules become available sequentially as previous ones are completed. Source: ArXiv.

For example, when a participant mentions their childhood hometown, the AI might probe deeper, asking about specific memories or experiences. By simulating a natural flow of conversation, the system captures nuanced personal information that standard surveys typically overlook.

Behind the scenes, the study documents what the researchers call “expert reflection” – prompting large language models to analyze each conversation from four distinct professional viewpoints:

  • As a psychologist, it identifies specific personality traits and emotional patterns – for instance, noting how someone values independence based on their descriptions of family relationships.
  • Through a behavioral economist’s lens, it extracts insights about financial decision-making and risk tolerance, like how they approach savings or career choices.
  • The political scientist perspective maps ideological leanings and policy preferences across various issues.
  • A demographic analysis captures socioeconomic factors and life circumstances.
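The four-viewpoint "expert reflection" step can be sketched as a simple prompt-construction routine. Note that the persona wording, function name, and prompt template below are illustrative assumptions rather than the paper's actual prompts; the real system feeds each resulting prompt to a large language model:

```python
# Sketch of "expert reflection": build one analysis prompt per expert persona
# for a given interview transcript. Persona task descriptions paraphrase the
# four viewpoints described above.

EXPERT_PERSONAS = {
    "psychologist": "Identify specific personality traits and emotional patterns.",
    "behavioral economist": "Extract insights about financial decision-making and risk tolerance.",
    "political scientist": "Map ideological leanings and policy preferences.",
    "demographer": "Summarize socioeconomic factors and life circumstances.",
}

def build_reflection_prompts(transcript: str) -> dict[str, str]:
    """Return one reflection prompt per expert viewpoint."""
    return {
        role: (
            f"You are a {role}. {task}\n\n"
            f"Interview transcript:\n{transcript}"
        )
        for role, task in EXPERT_PERSONAS.items()
    }

prompts = build_reflection_prompts("Participant: I grew up in a small town...")
print(len(prompts))  # one reflection prompt per expert persona
```

Each prompt would then be sent to the LLM separately, and the four resulting analyses combined into the simulation agent's profile.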

The researchers concluded that this interview-based technique outperformed comparable methods – such as mining social media data – by a substantial margin.

The above shows the interview interface, which features an AI interviewer represented by a 2-D sprite in a pulsating white circle that matches the audio level. The sprite changes to a microphone when it’s the participant’s turn. A progress bar shows a sprite traveling along a line, and options are available for subtitles and pausing.

Testing the digital copies

The researchers put their AI replicas through a battery of tests. 

First, they used the General Social Survey – a measure of social attitudes that asks questions about everything from political views to religious beliefs. Here, the AI copies matched their human counterparts’ responses 85% of the time.

On the Big Five personality test, which measures traits like openness and conscientiousness through 44 different questions, the AI predictions aligned with human responses about 80% of the time. The system was particularly good at capturing traits like extraversion and neuroticism.

Economic game testing revealed fascinating limitations, however. In the “Dictator Game,” where participants decide how to split money with others, the AI struggled to perfectly predict human generosity. 

In the “Trust Game,” which tests willingness to cooperate with others for mutual benefit, the digital copies only matched human choices about two-thirds of the time. 

This suggests that while AI can grasp our stated values, it still can’t fully capture the nuances of human social decision-making. 

Real-world experiments

The researchers also ran five classic social psychology experiments using their AI copies. 

In one experiment testing how perceived intent affects blame, both humans and their AI copies showed similar patterns of assigning more blame when harmful actions seemed intentional. 

Another experiment examined how fairness influences emotional responses, with AI copies accurately predicting human reactions to fair versus unfair treatment.

The AI replicas successfully reproduced human behavior in four out of five experiments, suggesting they can model not just individual topical responses but broad, complex behavioral patterns.

Easy AI clones: What are the implications?

AI clones are big business, with Meta recently announcing plans to fill Facebook and Instagram with AI profiles that can create content and engage with users.

TikTok has also jumped into the fray with its new “Symphony” suite of AI-powered creative tools, which includes digital avatars that can be used by brands and creators to produce localized content at scale.

With Symphony Digital Avatars, TikTok is enabling new ways for creators and brands to captivate global audiences using generative AI. The avatars can represent real people with a wide range of gestures, expressions, ages, nationalities and languages.

Stanford and DeepMind’s research suggests such digital replicas will become far more sophisticated – and easier to build and deploy at scale. 

“If you can have a bunch of small ‘yous’ running around and actually making the decisions that you would have made — that, I think, is ultimately the future,” lead researcher Joon Sung Park, a Stanford PhD student in computer science, told MIT Technology Review.

Park also points to upsides of such technology: building accurate clones could support scientific research. 

Instead of running expensive or ethically questionable experiments on real people, researchers could test how populations might respond to certain inputs. For example, it could help predict reactions to public health messages or study how communities adapt to major societal shifts.

Ultimately, though, the same features that make these AI replicas valuable for research also make them powerful tools for deception. 

As digital copies become more convincing, distinguishing authentic human interaction from AI-generated content will grow far more complex. 

The research team acknowledges these risks. Their framework requires clear consent from participants and allows them to withdraw their data, treating personality replication with the same privacy concerns as sensitive medical information. 

In any case, we’re entering uncharted territory in human-machine interaction, and the long-term implications remain largely unknown.

The post Two hours of AI conversation can create a near-perfect digital twin of anyone appeared first on DailyAI.

Meta’s AI invasion signals dramatic shift for social media

Meta has announced plans to populate Facebook and Instagram with AI-generated profiles and content. 

Connor Hayes, Meta’s vice-president of product for generative AI, outlined the company’s vision: “We expect these AIs to actually, over time, exist on our platforms, kind of in the same way that accounts do.”

Hayes added that these AI entities will have “bios and profile pictures and be able to generate and share content powered by AI on the platform.”

Meta has already seen hundreds of thousands of AI characters created through its tools since their US launch in July, though the vast majority of users have not released their creations publicly. 

Hayes notes that making Meta’s apps “more entertaining and engaging” is a “priority” for the next two years, with a particular focus on making AI interactions more social.

Meta’s broader AI plans are ambitious. The company is developing tools to help users create AI assistants that can respond to followers’ questions. For 2025, it plans to release text-to-video generation software enabling creators to insert themselves into AI-generated videos. 

Mark Zuckerberg also recently revealed AI avatars capable of conducting live video calls while perfectly mimicking a creator’s persona, from their speaking patterns to their facial expressions.

This forms part of a broader industry push toward AI-generated content. Snapchat released tools that enable creators to design 3D AI characters for augmented reality purposes, reporting a 50% annual increase in users viewing AI-generated content. 

Meanwhile, ByteDance-owned TikTok is piloting “Symphony,” a series of tools and applications that enables brands and creators to use AI for advertising purposes, such as creating AI-generated avatars and automating content translation.

AI bots on social media: The implications

Industry experts are sounding alarms about the psychological and social implications of embedding social media with AI bots. 

Becky Owen, global chief marketing and innovation officer at Billion Dollar Boy and former head of Meta’s creator innovations team, cautions that “without robust safeguards, platforms risk amplifying false narratives through these AI-driven accounts,” according to the FT.

She emphasizes, “Unlike human creators, these AI personas don’t have lived experiences, emotions, or the same capacity for relatability.”

Owen further warns that AI characters could flood platforms with low-quality material that undermines creators and erodes user confidence. 

This takes on added weight given Meta’s history with data manipulation – most notably the Cambridge Analytica scandal, where user data was exploited to influence political opinions. 

Rather than merely harvesting user data to target content, AI entities could actively engage with users, shape conversations, and influence opinions in real time, all while appearing to be authentic human participants in online discourse.

Meta claims to be implementing protective measures, including mandatory labeling of AI-generated content, but critics argue this may not be sufficient to prevent the erosion of authentic human connection.

Bots threaten to take over parts of the internet

According to research from Imperva, nearly half of all internet traffic – 49.6% – now originates from non-human sources. 

Bad bots already account for 32% of web traffic, lending credence to what was once dismissed as a conspiracy theory: the concept of a “dead internet” where human voices become increasingly drowned out by artificial ones.

On a deeper level, this signals yet another progression towards an internet ecosystem shaped by AI systems. 

The philosophical implications are dizzying. We’re moving toward a world where our online social circles may include entities that think and respond at superhuman speeds, yet lack any genuine consciousness or emotional experience. 

AI profiles will share “memories” they never had, express “feelings” they cannot feel, and forge “connections” without any capacity for true empathy or understanding.

Ironically, social media, originally created to help humans connect more easily across vast distances, may become a space where human connection is increasingly mediated and diluted by artificial entities. 

The question isn’t simply whether AI can convincingly mimic human interaction but whether we’re prepared for a world where digital entities become equal participants in our online social spaces.

The post Meta’s AI invasion signals dramatic shift for social media appeared first on DailyAI.