
10 Top Women in AI in 2025

AI is changing our world, but the stories of those who build it often get lost in the noise.

Behind the headlines and hype, a group of women are solving AI’s fundamental challenges – despite working in an industry persistently impacted by gender inequality.

Women make up just 22% of AI professionals worldwide and only 12% of AI researchers. In academic publishing, female researchers account for just 29% of first authors on AI papers, a number that hasn’t increased since the mid-2000s. 

This is a story about ten leaders who have influenced AI despite the odds being stacked against them. 

Their work – from chip architecture to bias detection, from environmental impact to safety systems – is vital to the modern AI industry and its future direction. 

Fei-Fei Li

LinkedIn: https://www.linkedin.com/in/fei-fei-li-4541247/ 

Known as the “godmother of AI,” Fei-Fei Li’s defining influence on AI spans years of commitment to the field.

Born in Beijing, she immigrated to the United States at age 16 and has been instrumental in transitioning AI from a niche technology to something scalable and broadly accessible. 

As co-director of Stanford’s Human-Centered AI Institute, Li has brought attention to ‘human-centered AI,’ which places human values at its core. Many of her works feature on university reading lists; papers with Li as a named author have amassed an extraordinary 285,343 citations in total.

One of Li’s key career milestones was creating ImageNet in 2007, a massive dataset containing over 15 million labeled images across 22,000 categories. 

ImageNet addressed one of computer vision’s most fundamental challenges: teaching machines to recognize objects with human-like accuracy. The dataset and the projects that emerged from it catalyzed important advancements in deep learning. 

Alongside her technical achievements, Li’s commitment to diversity in AI led her to co-found AI4ALL, which has provided opportunities for thousands of underrepresented students to enter the field. Many of the organization’s alumni have secured positions at major tech companies and established their own AI startups.

In 2024, Li’s co-founding of World Labs marked a new chapter in her career. The company’s focus on “spatial intelligence” aims to bridge the gap between AI’s understanding of digital and physical spaces. 

With $230 million in funding from leading tech investors, including Andreessen Horowitz and NVIDIA’s NVentures, World Labs rapidly achieved unicorn status.

Joy Buolamwini

LinkedIn: https://www.linkedin.com/in/buolamwini/

Dr. Joy Buolamwini’s journey to becoming a leading voice in AI ethics was triggered by a deeply personal experience. 

While working with facial analysis software as a graduate student at MIT, she discovered these systems struggled to detect her dark-skinned face unless she wore a white mask. This sparked large-scale research into AI bias and prejudice, challenging the discourse around AI decision-making. 

Buolamwini’s “Gender Shades” project, conducted through the MIT Media Lab and widely cited across the AI research community and media, including here on DailyAI, provided the first comprehensive evidence of racial and gender bias in commercial AI systems. 

The study revealed error rates of up to 34% for darker-skinned females compared to just 0.8% for lighter-skinned males. Leading tech companies, including Microsoft, IBM, and Amazon, subsequently reassessed their facial recognition technologies.
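The audit method behind those numbers is worth spelling out: rather than reporting a single aggregate accuracy score, Gender Shades disaggregated error rates by demographic subgroup. A minimal sketch of that style of analysis in Python – the records below are hypothetical stand-ins, not the study’s data:

```python
from collections import defaultdict

# Hypothetical audit records: (subgroup, predicted_gender, actual_gender).
# Gender Shades crossed skin type with gender to form its subgroups.
records = [
    ("darker_female", "male", "female"),
    ("darker_female", "female", "female"),
    ("lighter_male", "male", "male"),
    ("lighter_male", "male", "male"),
]

totals, errors = defaultdict(int), defaultdict(int)
for group, predicted, actual in records:
    totals[group] += 1
    errors[group] += predicted != actual  # bool counts as 0 or 1

# A single aggregate accuracy can hide wide disparities that only
# per-subgroup error rates reveal.
for group, n in totals.items():
    print(f"{group}: {errors[group] / n:.1%} error rate")
```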

As founder of the Algorithmic Justice League (AJL), Buolamwini has created a movement that combines research with advocacy. Combining art and science, AJL has created influential documentaries and projects, building support for AI technologies that truly serve everyone equally.

Buolamwini’s 2023 book “Unmasking AI” became a national bestseller, offering unprecedented insights into the hidden biases within AI systems. The book not only documents technical failures but also provides a blueprint for creating more equitable AI systems.

Timnit Gebru

LinkedIn: https://www.linkedin.com/in/timnit-gebru-7b3b407/

Dr. Timnit Gebru’s impact on AI ethics has sharpened the industry’s critical thinking, forcing it to be more introspective about its impacts and consequences.

Born in Addis Ababa, Ethiopia, Gebru brings a perspective as both a technical expert and an advocate for marginalized communities that has made her a formidable challenger of the industry status quo.

Her co-authored paper “On the Dangers of Stochastic Parrots” was a watershed moment in AI ethics, challenging the fundamental assumptions behind large language models (LLMs). 

Among the paper’s core arguments: today’s frontier AI models fail to encode diverse cultural norms and values, essentially acting as static monoliths that ‘live’ predominantly in the Western world. Rather than understanding language, LLMs primarily manipulate it, and they are tough – if not impossible – to audit for bias due to their ever-spiraling complexity. 

The paper’s controversial reception and her subsequent departure from Google’s Ethical AI team sparked fierce debate about corporate influence on AI research.

The founding of the Distributed AI Research Institute (DAIR) in 2021 represented Gebru’s vision for truly independent AI research. DAIR’s independent structure is built to support community-rooted research free from corporate influence. 

Gebru’s selection to receive the 2025 Miles Conrad Award recognizes her transformative impact on the field. Her work continues to inspire a new generation of researchers, positioning AI as both a technical frontier and battleground for social justice and equality. 

Daniela Amodei

LinkedIn: https://www.linkedin.com/in/daniela-amodei-790bb22a/

Daniela Amodei’s career bridges politics, global health, and technology, culminating in her role as co-founder and president of Anthropic, one of generative AI’s most influential startups. 

After graduating summa cum laude from the University of California, Santa Cruz, with a degree in English Literature, Amodei worked in political communications, including managing messaging for US Representative Matt Cartwright. 

In 2013, she joined Stripe, starting in recruitment before transitioning into risk program management. Working across machine learning (ML), data science, engineering, legal, and finance teams, Amodei developed a deep understanding of technology’s regulatory and operational challenges.

In 2018, she moved to OpenAI as Vice President of Safety and Policy, overseeing safety, policy, engineering, and human resources. During this period, Amodei played a key role in shaping AI governance strategies. However, in 2020, Amodei and several colleagues, including her brother Dario Amodei, left OpenAI over concerns about the company’s direction on AI safety.

In 2021, the Amodei siblings co-founded Anthropic to build AI systems designed for reliability, transparency, and alignment with human values. 

Under Daniela’s leadership, the company has grown from a small team to more than 800 employees and has attracted massive investment, including $4 billion from Amazon. 

Sasha Luccioni

LinkedIn: https://www.linkedin.com/in/sashaluccioniphd/ 

Dr. Sasha Luccioni, born Alexandra Sasha Vorobyova in Ukraine in 1990, moved to Canada at the age of four and developed an early interest in science.

She later earned a B.A. in Language Science from Université Paris III: Sorbonne Nouvelle in 2010, followed by an M.Sc. in Cognitive Science with a focus on Natural Language Processing from École Normale Supérieure in Paris in 2012. In 2018, Luccioni completed her Ph.D. in Cognitive Computing at Université du Québec à Montréal.

Dr. Luccioni began her professional career at Nuance Communications in 2017, focusing on natural language processing and machine learning to enhance conversational agents. She then joined Morgan Stanley’s AI/ML Center of Excellence in 2018, working on explainable AI decision-making systems. 

In 2019, she became a postdoctoral researcher at Université de Montréal and Mila, collaborating with Yoshua Bengio on the “This Climate Does Not Exist” project, which used generative adversarial networks (GANs) to visualize climate change impacts.

In 2021, Dr. Luccioni joined Hugging Face as a research scientist and Climate Lead, where she focuses on quantifying the carbon footprint of AI systems and promoting sustainable practices in machine learning development. She has been instrumental in developing tools like CodeCarbon for real-time tracking of carbon emissions from computing. 
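CodeCarbon is an open-source Python package, and the basic usage pattern is simple – a minimal sketch, with the project name and workload as illustrative stand-ins:

```python
from codecarbon import EmissionsTracker

# Wrap a compute-heavy job in an emissions tracker; stop() returns the
# estimated emissions in kg of CO2-equivalent, derived from measured
# energy use and the carbon intensity of the local power grid.
tracker = EmissionsTracker(project_name="demo-training-run")
tracker.start()
try:
    total = sum(i * i for i in range(10_000_000))  # stand-in workload
finally:
    emissions_kg = tracker.stop()

print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")
```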

Her research on the BLOOM language model estimated that it generated over 50 metric tons of CO₂ across its lifecycle, equivalent to 80 transatlantic flights – research we’ve cited at DailyAI on a few occasions.

Beyond her research, Dr. Luccioni is a founding member of Climate Change AI and serves on the board of Women in Machine Learning, mentoring underrepresented minorities in the AI community. 

In 2024, her contributions were recognized by TIME Magazine, which named her one of the 100 most influential people in AI, and by Business Insider on its AI Power List. 

Mira Murati

LinkedIn: https://www.linkedin.com/in/mira-murati-4b39a066/ 

Born in Vlorë, Albania, Mira Murati was one of the most influential figures at OpenAI and has since founded her own AI research lab – yet to be named – focusing on artificial general intelligence (AGI). 

Within months, Murati assembled an all-star team, including Jonathan Lachman and several key researchers from OpenAI, Character AI, and Google DeepMind.

As OpenAI’s Chief Technology Officer until her departure in 2024, she orchestrated the development of technologies that have fundamentally altered our relationship with AI.

Under her leadership, OpenAI released ChatGPT, which achieved the fastest user adoption rate in consumer technology history, reaching 100 million users within two months. She also oversaw the development of DALL-E, which revolutionized AI image generation, and GPT-4o, one of 2024’s headline model releases. 

Murati pioneered OpenAI’s “iterative deployment” strategy, which involves releasing AI models gradually to better understand and address potential risks. 

During her brief but pivotal stint as interim CEO during OpenAI’s leadership crisis in November 2023, Murati effectively steered the company through the turmoil while maintaining its momentum. 

Rana el Kaliouby

LinkedIn: https://www.linkedin.com/in/kaliouby/ 

Born in Cairo, Egypt, in 1978, Rana el Kaliouby has spent much of her career bridging technology and human emotion. 

She earned her bachelor’s and master’s degrees from the American University in Cairo before completing a Ph.D. at Cambridge University’s Newnham College, where she developed early methods for automated emotion recognition.

In 2009, she co-founded Affectiva, a spin-off from MIT Media Lab, to bring emotion AI into real-world applications. Under her leadership, the company built systems that analyzed facial expressions and vocal cues to interpret emotions. 

Following Affectiva’s acquisition by Smart Eye in 2021, el Kaliouby served as Deputy CEO before founding Blue Tulip Ventures in 2024.

The firm focuses on “human-centric AI,” investing in technologies designed to prioritize well-being, sustainability, and social impact. It has already backed startups developing AI-driven mental health tools and emotion-aware education technology.

Beyond her work in AI, el Kaliouby is an executive fellow at Harvard Business School, where she teaches about AI and entrepreneurship. 

She also serves as a trustee for the Boston Museum of Science and the American University in Cairo. Her memoir, “Girl Decoded,” published in 2020, recounts her journey from a self-described “nice Egyptian girl” to a leader in technology, advocating for the humanization of AI.

Recognized on Fortune’s 40 Under 40 list and Forbes’ Top 50 Women in Tech, el Kaliouby continues to push for diversity in AI, championing initiatives that support women and underrepresented groups in the field.

Daniela Rus

LinkedIn: https://www.linkedin.com/in/daniela-rus-220b3/ 

Born in Cluj-Napoca, Romania, Daniela Rus moved to the US and earned a Bachelor of Science in computer science and mathematics from the University of Iowa in 1985, followed by a Master of Science and a Ph.D. in computer science from Cornell University in 1990 and 1993, respectively. Her doctoral research focused on fine motion planning for dexterous manipulation.

After completing her Ph.D., Rus began her academic career as a professor in the Computer Science Department at Dartmouth College. In 2004, she joined the Massachusetts Institute of Technology (MIT), and since 2012, she has served as the director of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).

Under Rus’s leadership, CSAIL has become a global hub for robotics innovation, with breakthroughs in autonomous vehicles, drone technology, and human-robot collaboration.

Rus’s research interests sit at the intersection of AI and robotics. She has made significant contributions to soft robotics, developing machines capable of safely interacting with humans and navigating confined spaces. 

In recognition of her pioneering work, Rus was awarded the 2024 John Scott Award and the 2025 IEEE Edison Medal. She is a member of the National Academy of Engineering, the American Academy of Arts and Sciences, and the National Academy of Sciences, and a fellow of ACM, AAAI, and IEEE. 

Her recent books, “The Heart and the Chip: Our Bright Future with Robots” and “The Mind’s Mirror: Risk and Reward in the Age of AI,” explore the relationship between humans and machines, and the ethical questions surrounding what that relationship might look like in the not-too-distant future. 

Joelle Pineau

LinkedIn: https://www.linkedin.com/in/joelle-pineau-371574141/ 

Joelle Pineau has advanced machine learning and cutting-edge robotics research while improving how AI is developed and shared through open-source technologies.

As Vice President of AI Research at Meta, Pineau leads the company’s Fundamental AI Research (FAIR) lab, where she has driven breakthroughs in reinforcement learning (RL), robotics, and open-access AI models. Pineau has authored and co-authored many influential studies on robotics, with a particular focus on co-bots for elderly care.

Her career began in robotics; she earned a Ph.D. from Carnegie Mellon University with research focused on algorithms for complex decision-making. At McGill University, where she remains a professor, she built one of Canada’s leading AI labs and helped develop AI models for medical diagnostics. 

Since joining Meta, Pineau has been a driving force behind efforts to make AI research more open and reproducible. Under her leadership, the FAIR lab released LLaMA, a family of LLMs that has greatly influenced the entire AI industry, challenging the orthodoxy of proprietary model releases. 

Beyond research, Pineau has been outspoken about the need for greater transparency and accountability in AI. She has pushed for policies that require AI models, datasets, and benchmarks to be openly shared, ensuring that research can be independently verified. 

Lisa Su

LinkedIn: https://www.linkedin.com/in/lisasu-amd/ 

Lisa Su’s transformation of AMD represents one of technology’s most remarkable turnaround stories. Born in Tainan, Taiwan, in 1969, Su’s journey to becoming one of tech’s most respected CEOs began with a passion for semiconductors and a Ph.D. from MIT. 

When she took the helm of AMD in 2014, the company was struggling with a stock price under $3. Today, under her leadership, AMD has surpassed Intel in market value, with shares trading over $140.

Su’s strategic genius lies in her early recognition of AI’s potential to transform the computing industry. While competitors such as Intel focused on consumer electronics, she directed AMD’s resources toward high-performance computing architectures. This proved prescient as the AI boom created phenomenal demand for high-end AI-centric processors.

In 2024, Su dubbed AI “the most important technology that has come over the last 50 years.” At AMD’s “Advancing AI” event, she unveiled the MI300X, AMD’s first AI accelerator designed to challenge NVIDIA’s dominance. 

Notable Women Pioneers in AI Research and Development

The women in the top 10 list have led some of the most defining advances in AI, but they’re part of a much larger movement. 

Across research, policy, and industry, many others are influencing how AI is built, deployed, and governed. 

Here are just a few more leaders whose contributions continue to push AI forward:

  • Amba Kak – As co-executive director of the AI Now Institute, Amba Kak works to ensure AI systems are accountable to the public rather than corporate interests. She focuses on policy reform and regulation, advocating for stronger AI governance.
  • Anima Anandkumar – Professor at Caltech and former senior director at NVIDIA, Anima Anandkumar develops AI applications for climate science, robotics, and autonomous technologies. Her work ensures AI contributes to solving global challenges, from climate modeling to advanced robotics. 
  • Chelsea Finn – A professor at Stanford and part of Google Brain, Chelsea Finn researches how AI can improve itself through experience, leading advancements in robotics and machine learning. Her work focuses on meta-learning, allowing AI to learn more efficiently from fewer data points.
  • Claire Delaunay – With a career spanning NVIDIA, Uber, and Google, Claire Delaunay has been at the forefront of AI-powered robotics and autonomous systems. She played a key role in developing scalable AI for industrial and mobility applications, bridging academic research and real-world AI deployment.
  • Cynthia Breazeal – An MIT professor and a pioneer in social robotics, Cynthia Breazeal founded the Personal Robots Group at the MIT Media Lab. She has led research into AI-driven companion robots for education, healthcare, and personal assistance.
  • Cynthia Rudin – A Duke University professor, Cynthia Rudin researches interpretable AI, ensuring models are transparent and reliable. Her work has been particularly impactful in healthcare and criminal justice, where AI decision-making must be explainable and fair. She advocates for AI systems that prioritize accountability and user trust.
  • Daniela Braga – As CEO of Defined.ai, Daniela Braga has been instrumental in developing ethically sourced AI training data. She advocates for reducing bias in AI models by diversifying datasets, ensuring AI systems are more inclusive and representative. Her work emphasizes the need for more accurate and fair language models.
  • Daphne Koller – Co-founder of Coursera and CEO of Insitro, Daphne Koller has pioneered AI applications in drug discovery, accelerating treatment development. Her online learning platform, Coursera, has transformed global access to AI-driven education. She remains at the forefront of leveraging AI for life sciences and learning.
  • Francesca Rossi – As IBM’s Global AI Ethics Lead, Francesca Rossi works on designing AI that aligns with human values. Her focus is on ensuring AI is transparent, accountable, and ethically sound. She plays a key role in shaping global discussions on AI responsibility.
  • Irene Solaiman – A leader in AI policy, Irene Solaiman heads global policy at Hugging Face, shaping responsible AI governance. Previously at OpenAI, she played a key role in the staged release of GPT-2 and pioneered bias testing in large language models. Her work focuses on AI ethics, cultural value alignment, and policy standards for equitable AI deployment.
  • Ivana Bartoletti – Global Data Privacy Officer at Wipro, Ivana Bartoletti specializes in ensuring AI aligns with data protection laws. She is a leader in AI ethics and privacy best practices.
  • Karine Perset – Head of the OECD AI Policy Observatory, Karine Perset shapes AI governance frameworks internationally.
  • Kate Crawford – A Senior Principal Researcher at Microsoft Research, Kate Crawford explores AI’s societal effects. She focuses on bias in AI, the power structures shaping its development, and the ethical implications of large-scale AI deployments. Her research is instrumental in addressing the unintended consequences of AI on marginalized communities.
  • Kay Firth-Butterfield – As the world’s first Chief AI Ethics Officer back in 2014, Kay Firth-Butterfield leads global discussions on AI responsibility. She is an advisory board member at Fathom.org.
  • Latanya Sweeney – A Harvard professor, Latanya Sweeney researches how AI influences societal structures and works on preventing bias in AI decision-making.
  • Lina Khan – As Chair of the U.S. Federal Trade Commission (FTC), Lina Khan regulates AI-driven monopolies and ensures fair competition in the AI sector. She has led many AI-centric antitrust discussions.
  • Manuela Veloso – Head of AI Research at JPMorgan Chase, Manuela Veloso integrates AI into financial systems. With a background in robotics and machine learning, she explores how AI can improve automation, risk assessment, and security in finance. 
  • Nina Schick – Founder of Tamang Ventures, Nina Schick specializes in AI’s role in media, deep fakes, and journalism. She advocates for responsible AI in information dissemination and political discourse. 
  • Regina Barzilay – An MIT professor, Regina Barzilay is known for applying AI to healthcare and medical research, developing pioneering approaches to early cancer detection. Beyond her work in oncology, Barzilay has also explored how AI can accelerate drug development, helping to identify promising compounds more efficiently. 
  • Rumman Chowdhury – Chief Executive Officer and co-founder of Humane Intelligence, and a member of the Artificial Intelligence Safety and Security Board at the US Department of Homeland Security, Rumman Chowdhury focuses on identifying and reducing bias in AI systems, working to ensure AI is used fairly and responsibly.
  • Stephanie Hare – Author of “Technology Is Not Neutral,” Stephanie Hare pushes for AI transparency. She advocates for AI that benefits the broader public.
  • Sue Turner OBE – As CEO of AI Governance, Sue Turner helps companies integrate AI responsibly. She advises on ethical business strategies to ensure AI is used for social good.
  • Tekedra Mawakana – As co-CEO of Waymo, Tekedra Mawakana leads policy efforts in AI-driven transportation, advocating for ethical and safe deployment of autonomous vehicles. She plays a critical role in regulatory discussions around AI in the transport and mobility industry.
  • Yejin Choi – A professor at the University of Washington and a leading researcher at AI2, Yejin Choi works on improving AI’s reasoning abilities. Her research helps AI systems interpret nuanced language and make fairer, more ethical decisions.

AI is advancing at an astonishing pace, and the brilliant women on this list, together with many others, are driving that momentum.

Whether they’re developing the technology itself, confronting its ethical challenges, or influencing the policies that govern its use, their work is instrumental for the industry and its role in the lives of those it affects. 

While there is much work to be done to secure fair, unbiased representation in both AI and the industry behind it, progress is being made thanks to the innovators on this list and the millions of other women alongside them. 


Apple pulls AI-generated news from its devices after backlash

Apple has killed its Apple Intelligence AI news feature after it fabricated stories and twisted real headlines into fiction. 

Apple’s AI news was supposed to make life easier by summing up news alerts from multiple sources. Instead, it created chaos by pushing out fake news, often under trusted media brands. 

Here’s where it all went wrong:

  • Using the BBC’s logo, it invented a story claiming tennis star Rafael Nadal had come out as gay, completely misunderstanding a story about a Brazilian player.
  • It jumped the gun by announcing teenage darts player Luke Littler had won the PDC World Championship – before he’d even played in the final.
  • In a more serious blunder, it created a fake BBC alert claiming Luigi Mangione, who’s accused of killing UnitedHealthcare CEO Brian Thompson, had killed himself.
  • The system stamped The New York Times’ name on a completely made-up story about Israeli Prime Minister Benjamin Netanyahu being arrested.
An AI-generated news summary of a BBC article wrongly stated CEO shooting suspect Luigi Mangione shot himself. The BBC’s logo was attached.

The BBC, angered at seeing its name attached to fake stories, eventually filed a formal complaint. Press groups such as Reporters Without Borders joined in, warning that letting AI rewrite the news puts the public’s right to accurate information at risk.

The National Union of Journalists also called for the feature to be removed, saying readers shouldn’t have to guess whether what they’re reading is real.

Research has previously shown that even when people learn that AI-created media is fake, it still leaves a psychological ‘mark’ that persists afterwards. 

Apple Intelligence – which offered a range of AI-powered features including AI news – was one of the headline features of the new iPhone 16 range.

Apple is a company that prides itself on polished products that ‘just work’ – it’s rare for Apple to backtrack – so it evidently had little choice here.

That said, Apple is not alone as far as AI blunders go. Not long ago, Google’s AI-generated search summaries told people they could eat rocks and put glue on pizza.

Apple plans to resurrect the feature with warning labels and special formatting to show when AI creates the summaries. 

Should readers have to decode different fonts and labels just to know if they’re reading real news? Here’s a radical idea: Apple could just keep displaying the news headline itself.

It all goes to show that, as AI continues to seep into every corner of our digital lives, some things – like accurate news – are simply too important to get wrong.

A big U-turn from Apple, but probably not the last we’ll see of its type.


Woman scammed out of €800k by an AI deep fake of Brad Pitt

What began as a ski holiday Instagram post ended in financial ruin for a French interior designer after scammers used AI to convince her she was in a relationship with Brad Pitt.

The 18-month scam targeted Anne, 53, who received an initial message from someone posing as Jane Etta Pitt, Brad’s mother, claiming her son “needed a woman like you.” 

Not long after, Anne started talking to what she believed was the Hollywood star himself, complete with AI-generated photos and videos.

“We’re talking about Brad Pitt here and I was stunned,” Anne told French media. “At first, I thought it was fake, but I didn’t really understand what was happening to me.” 

The relationship deepened over months of daily contact, with the fake Pitt sending poems, declarations of love, and eventually a marriage proposal.

“There are so few men who write to you like that,” Anne said. “I loved the man I was talking to. He knew how to talk to women and it was always very well put together.”

The scammers’ tactics proved so convincing that Anne eventually divorced her millionaire entrepreneur husband.

After building rapport, the scammers began extracting money with a modest request – €9,000 for supposed customs fees on luxury gifts. It escalated when the impersonator claimed to need cancer treatment while his accounts were frozen due to his divorce from Angelina Jolie. 

A fabricated doctor’s message about Pitt’s condition prompted Anne to transfer €800,000 to a Turkish account.

Scammers requested money for fake Brad Pitt’s cancer treatment

“It cost me to do it, but I thought that I might be saving a man’s life,” she said. When her daughter recognized the scam, Anne refused to believe it: “You’ll see when he’s here in person then you’ll say sorry.”

Her illusions were shattered upon seeing news coverage of the real Brad Pitt with his partner Inés de Ramon in summer 2024. 

Even then, the scammers tried to maintain control, sending fake news alerts dismissing these reports and claiming Pitt was actually dating an unnamed “very special person.” In a final roll of the dice, someone posing as an FBI agent extracted another €5,000 by offering to help her escape the scheme.

The aftermath proved devastating – three suicide attempts led to hospitalization for depression. 

Anne opened up about her experience to French broadcaster TF1, but the interview was later removed after she faced intense cyber-bullying.

Now living with a friend after selling her furniture, she has filed criminal complaints and launched a crowdfunding campaign for legal help.

A tragic situation – though Anne is certainly not alone. Her story parallels a massive surge in AI-powered fraud worldwide. 

Spanish authorities recently arrested five people who stole €325,000 from two women through similar Brad Pitt impersonations. 

Speaking about AI fraud last year, McAfee’s Chief Technology Officer Steve Grobman explained why these scams succeed: “Cybercriminals are able to use generative AI for fake voices and deepfakes in ways that used to require a lot more sophistication.”

It’s not just people who are lined up in the scammers’ crosshairs, but businesses, too. In Hong Kong last year, fraudsters stole $25.6 million from a multinational company using AI-generated executive impersonators in video calls. 

Superintendent Baron Chan Shun-ching described how “the worker was lured into a video conference that was said to have many participants. The realistic appearance of the individuals led the employee to execute 15 transactions to five local bank accounts.”

Would you be able to spot an AI scam?

Most people would fancy their chances of spotting an AI scam, but research says otherwise. 

Studies show humans struggle to distinguish real faces from AI creations, and synthetic voices fool roughly a quarter of listeners. That evidence came from last year – AI image, voice, and video synthesis have evolved considerably since. 

Synthesia, an AI video platform that generates realistic human avatars speaking multiple languages – now backed by Nvidia – recently doubled its valuation to $2.1 billion. Video and voice synthesis platforms like Synthesia and ElevenLabs are among the tools that fraudsters use to launch deepfake scams.

Synthesia admits as much itself, recently demonstrating its commitment to preventing misuse through a rigorous public red-team test, which showed how its compliance controls successfully block attempts to create non-consensual deepfakes or use avatars for harmful content such as promoting suicide or gambling.

Whether such measures are effective, the jury is still out. As companies and individuals wrestle with compellingly real AI-generated media, the human cost – illustrated by Anne’s devastating experience – is set to rise. 


Two hours of AI conversation can create a near-perfect digital twin of anyone

Stanford and Google DeepMind researchers have created AI that can replicate human personalities with uncanny accuracy after just a two-hour conversation. 

By interviewing 1,052 people from diverse backgrounds, they built what they call “simulation agents” – digital copies that could predict their human counterparts’ beliefs, attitudes, and behaviors with remarkable consistency.

To create the digital copies, the team uses data from an “AI interviewer” designed to engage participants in natural conversation. 

The AI interviewer asks questions and generates personalized follow-up questions – an average of 82 per session – exploring everything from childhood memories to political views.

Through these two-hour discussions, each participant generated detailed transcripts averaging 6,500 words.

The above shows the study platform, which includes participant sign-up, avatar creation, and a main interface with modules for consent, avatar creation, interview, surveys/experiments, and a self-consistency retake of surveys/experiments. Modules become available sequentially as previous ones are completed. Source: ArXiv.

For example, when a participant mentions their childhood hometown, the AI might probe deeper, asking about specific memories or experiences. By simulating a natural flow of conversation, the system captures nuanced personal information that standard surveys typically overlook.
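The study’s actual interviewer implementation isn’t reproduced here, but the adaptive loop it describes is easy to sketch with any chat-completion API. Below is a minimal sketch using the OpenAI Python client, where the model name and system prompt are illustrative assumptions rather than the paper’s real configuration:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative instruction; the study's actual prompts are not published here.
SYSTEM_PROMPT = (
    "You are a friendly life-history interviewer. After each answer, ask "
    "exactly one follow-up question that probes a specific detail the "
    "participant just mentioned."
)

def next_question(transcript: list[dict]) -> str:
    """Generate a personalized follow-up from the conversation so far."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model choice
        messages=[{"role": "system", "content": SYSTEM_PROMPT}] + transcript,
    )
    return response.choices[0].message.content

transcript = [{"role": "user", "content": "I grew up in a small coastal town."}]
print(next_question(transcript))  # e.g. asks about a specific memory of the town
```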

Behind the scenes, the study documents what the researchers call “expert reflection” – prompting large language models to analyze each conversation from four distinct professional viewpoints (sketched in code after the list):

  • As a psychologist, it identifies specific personality traits and emotional patterns – for instance, noting how someone values independence based on their descriptions of family relationships.
  • Through a behavioral economist’s lens, it extracts insights about financial decision-making and risk tolerance, like how they approach savings or career choices.
  • The political scientist perspective maps ideological leanings and policy preferences across various issues.
  • A demographic analysis captures socioeconomic factors and life circumstances.
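A rough sketch of that multi-persona prompting pattern follows – the personas and instructions below are illustrative, not the paper’s actual reflection prompts:

```python
from openai import OpenAI

client = OpenAI()

# Illustrative expert personas; the study's real prompts differ.
EXPERTS = {
    "psychologist": "Identify personality traits and emotional patterns.",
    "behavioral economist": "Extract risk tolerance and financial decision style.",
    "political scientist": "Map ideological leanings and policy preferences.",
    "demographer": "Summarize socioeconomic factors and life circumstances.",
}

def expert_reflection(transcript_text: str) -> dict[str, str]:
    """Analyze one interview transcript from each expert viewpoint."""
    notes = {}
    for persona, instruction in EXPERTS.items():
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder model choice
            messages=[
                {"role": "system", "content": f"You are a {persona}. {instruction}"},
                {"role": "user", "content": transcript_text},
            ],
        )
        notes[persona] = response.choices[0].message.content
    return notes
```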

The researchers concluded that this interview-based technique outperformed comparable methods – such as mining social media data – by a substantial margin.

The above shows the interview interface, which features an AI interviewer represented by a 2-D sprite in a pulsating white circle that matches the audio level. The sprite changes to a microphone when it’s the participant’s turn. A progress bar shows a sprite traveling along a line, and options are available for subtitles and pausing.

Testing the digital copies

The researchers put their AI replicas through a battery of tests. 

First, they used the General Social Survey – a measure of social attitudes that asks questions about everything from political views to religious beliefs. Here, the AI copies matched their human counterparts’ responses 85% of the time.

On the Big Five personality test, which measures traits like openness and conscientiousness through 44 different questions, the AI predictions aligned with human responses about 80% of the time. The system was particularly good at capturing traits like extraversion and neuroticism.

Economic game testing revealed fascinating limitations, however. In the “Dictator Game,” where participants decide how to split money with others, the AI struggled to perfectly predict human generosity. 

In the “Trust Game,” which tests willingness to cooperate with others for mutual benefit, the digital copies only matched human choices about two-thirds of the time. 

This suggests that while AI can grasp our stated values, it still can’t fully capture the nuances of human social decision-making. 

Real-world experiments

The researchers also ran five classic social psychology experiments using their AI copies. 

In one experiment testing how perceived intent affects blame, both humans and their AI copies showed similar patterns of assigning more blame when harmful actions seemed intentional. 

Another experiment examined how fairness influences emotional responses, with AI copies accurately predicting human reactions to fair versus unfair treatment.

The AI replicas successfully reproduced human behavior in four out of five experiments, suggesting they can model not just individual topical responses but broad, complex behavioral patterns.

Easy AI clones: What are the implications?

AI clones are big business, with Meta recently announcing plans to fill Facebook and Instagram with AI profiles that can create content and engage with users.

TikTok has also jumped into the fray with its new “Symphony” suite of AI-powered creative tools, which includes digital avatars that can be used by brands and creators to produce localized content at scale.

With Symphony Digital Avatars, TikTok is enabling new ways for creators and brands to captivate global audiences using generative AI. The avatars can represent real people with a wide range of gestures, expressions, ages, nationalities and languages.

Stanford and DeepMind’s research suggests such digital replicas will become far more sophisticated – and easier to build and deploy at scale. 

“If you can have a bunch of small ‘yous’ running around and actually making the decisions that you would have made — that, I think, is ultimately the future,” lead researcher Joon Sung Park, a Stanford PhD student in computer science, told MIT Technology Review.

Park also points to the upsides of such technology: building accurate clones could support scientific research. 

Instead of running expensive or ethically questionable experiments on real people, researchers could test how populations might respond to certain inputs. For example, it could help predict reactions to public health messages or study how communities adapt to major societal shifts.

Ultimately, though, the same features that make these AI replicas valuable for research also make them powerful tools for deception. 

As digital copies become more convincing, distinguishing authentic human interaction from AI-generated content will grow far more complex. 

The research team acknowledges these risks. Their framework requires clear consent from participants and allows them to withdraw their data, treating personality replication with the same privacy concerns as sensitive medical information. 

In any case, we’re entering uncharted territory in human-machine interaction, and the long-term implications remain largely unknown.


Meta’s AI invasion signals dramatic shift for social media

Meta has announced plans to populate Facebook and Instagram with AI-generated profiles and content. 

Connor Hayes, Meta’s vice-president of product for generative AI, outlined the company’s vision: “We expect these AIs to actually, over time, exist on our platforms, kind of in the same way that accounts do.”

Hayes added that these AI entities will have “bios and profile pictures and be able to generate and share content powered by AI on the platform.”

Meta has already seen hundreds of thousands of AI characters created through its tools since their US launch in July, though the vast majority of users have not released their creations publicly. 

Hayes notes that making Meta’s apps “more entertaining and engaging” is a “priority” for the next two years, with a particular focus on making AI interactions more social.

Meta’s broader AI plans are ambitious. The company is developing tools to help users create AI assistants that can respond to followers’ questions. For 2025, it plans to release text-to-video generation software enabling creators to insert themselves into AI-generated videos. 

Mark Zuckerberg also recently revealed AI avatars capable of conducting live video calls while perfectly mimicking a creator’s persona, from their speaking patterns to their facial expressions.

This forms part of a broader industry push toward AI-generated content. Snapchat released tools that enable creators to design 3D AI characters for augmented reality purposes, reporting a 50% annual increase in users viewing AI-generated content. 

Meanwhile, ByteDance-owned TikTok is piloting “Symphony,” a series of tools and applications that enables brands and creators to use AI for advertising purposes, such as creating AI-generated avatars and automating content translation.

AI bots on social media: The implications

Industry experts are sounding alarms about the psychological and social implications of embedding social media with AI bots. 

Becky Owen, global chief marketing and innovation officer at Billion Dollar Boy and former head of Meta’s creator innovations team, cautions that “without robust safeguards, platforms risk amplifying false narratives through these AI-driven accounts,” according to the FT.

She emphasizes, “Unlike human creators, these AI personas don’t have lived experiences, emotions, or the same capacity for relatability.”

Owen further warns that AI characters could flood platforms with low-quality material that undermines creators and erodes user confidence. 

This takes on added weight given Meta’s history with data manipulation – most notably the Cambridge Analytica scandal, where user data was exploited to influence political opinions. 

Rather than merely harvesting user data to target content, AI entities could actively engage with users, shape conversations, and influence opinions in real time, all while appearing to be authentic human participants in online discourse.

Meta claims to be implementing protective measures, including mandatory labeling of AI-generated content, but critics argue this may not be sufficient to prevent the erosion of authentic human connection.

Bots threaten to take over parts of the internet

According to research from Imperva, nearly half of all internet traffic – 49.6% – now originates from non-human sources. 

Bad bots already account for 32% of web traffic, lending credence to what was once dismissed as a conspiracy theory: the concept of a “dead internet” where human voices become increasingly drowned out by artificial ones.

On a deeper level, this signals yet another progression towards an internet ecosystem shaped by AI systems. 

The philosophical implications are dizzying. We’re moving toward a world where our online social circles may include entities that think and respond at superhuman speeds, yet lack any genuine consciousness or emotional experience. 

AI profiles will share “memories” they never had, express “feelings” they cannot feel, and forge “connections” without any capacity for true empathy or understanding.

Ironically, social media, originally created to help humans connect more easily across vast distances, may become a space where human connection is increasingly mediated and diluted by artificial entities. 

The question isn’t simply whether AI can convincingly mimic human interaction but whether we’re prepared for a world where digital entities become equal participants in our online social spaces.


HelloYou unveils Skanna, a barcode scanner with a twist

For many consumers, deciphering the truth about products feels like solving a riddle. Complex ingredient lists, vague claims, and limited transparency make it nearly impossible to know what you’re really buying—or how it aligns with your health, safety, and values.

This growing demand for clarity has inspired HelloYou, a developer known for creating apps that help simplify daily life, to create Skanna—an AI-powered barcode scanner app. Designed to cut through the noise, Skanna reveals the truth behind product labels, offering detailed insights into ingredients, safety, and environmental impact in seconds.

What is Skanna?

Skanna isn’t just another barcode scanner—it’s a game-changer for anyone seeking clarity in a confusing marketplace. Whether it’s deciphering a long list of ingredients, understanding allergens, or evaluating sustainability, Skanna delivers the truth behind every product you use.

“Transparency matters more than ever in a world where consumers demand clear, honest answers about the products they use,” said Rena Cimen, Senior Performance Marketing Manager at HelloYou. “With Skanna, our goal was to strip away the noise and deliver unfiltered, reliable information—quickly and effortlessly.”

Skanna is tailored for everyone from parents safeguarding their families to health-conscious shoppers and eco-conscious individuals making sustainable choices. With its sleek design and straightforward functionality, the app ensures that anyone can access critical product information with just a scan.     

Key Features

I downloaded Skanna to put its features to the test, and while it’s clear the app has incredible potential, it’s not without its quirks. Here’s what I discovered:

  • Ingredient Analysis: This feature is a standout, providing detailed breakdowns of product ingredients. It’s perfect for identifying allergens or unwanted additives. However, I noticed that not all products were in the database, particularly from smaller or niche brands. For a first version, this is understandable, but it’s something that could be improved over time.
  • Sustainability Insights: Skanna’s efforts to offer eco-conscious data are commendable. The app provides information about sourcing and ethical practices, but the depth of this data varies. It’s a great starting point, though users looking for detailed environmental impact analyses might find it a bit surface-level.
  • Scan History: The personal history feature is useful for keeping track of what you’ve scanned. While functional, it could benefit from more advanced organization tools, like categories or tags, to make it easier to revisit specific products.
  • Personalized Recommendations: Skanna really excels here. The app tailors suggestions based on dietary or lifestyle needs, such as gluten-free or vegan alternatives. However, the recommendations are only as strong as the database, so expanding it will make this feature even better.
  • Database and Variety: Out of 25 food items, 5 cosmetic products, 2 books, a hairdryer, and a vape that I scanned, Skanna managed to get everything right—impressively so. Admittedly, one of the books didn’t work at first because an Amazon barcode was covering it, but once I peeled that off, it scanned perfectly. Points for accuracy, even with a little extra effort!

While Skanna isn’t perfect—formatting and database coverage still need work—it’s the first of its kind to offer real-time, live feedback to shoppers. That alone is a game-changer in helping people make healthier and more sustainable choices.

How Skanna Works

Using Skanna is intuitive and straightforward:

  1. Open the app: The barcode scanner is accessible with one tap—no digging through menus.
  2. Scan a product: The scanner works quickly, delivering insights in just seconds for most barcodes. Occasionally, non-standard codes required manual input, but these instances were rare.
  3. Explore insights: The app’s best feature is how it presents information. No fluff, just clear, concise details about whatever you want to know.

While the app isn’t flawless—some barcodes didn’t scan as expected—it’s a strong foundation for what could become an essential tool for shoppers.

Who Can Benefit?

Despite its imperfections, Skanna has something to offer for a wide range of users:

  • Parents: Use Skanna to identify allergen-free, family-safe products with ease.
  • Health Enthusiasts: Avoid hidden additives and toxic ingredients effortlessly.
  • Sustainability Advocates: Gain insights into brands’ ethical and eco-friendly practices.
  • Busy Professionals: Access reliable product data on the go, saving time while shopping smarter.
  • Globe Trotters: Scan globally to decode labels, translate ingredients, and understand what you’re eating.

For a first version, Skanna’s strengths outweigh its limitations. It’s especially helpful for navigating complex ingredient lists in beauty, food, and household products. And while it’s not yet a perfect solution, its potential to evolve into an indispensable shopping companion is undeniable.

Final Thoughts: A Game-Changer in the Making

Skanna may not be a finished masterpiece, but it’s a bold step toward giving consumers real-time, actionable feedback about the products they buy. For anyone frustrated by unclear labels or limited product transparency, this app is a much-needed solution.

It’s refreshing to see an app that doesn’t just promise to help—but actively works to simplify complex decisions. With some refinements to its database and interface, Skanna could genuinely transform the way we shop for the better.

To celebrate Cyber Monday, DailyAI users can unlock a special 7-day free trial of the app’s premium features. Click on the exclusive iOS link below to explore everything the app has to offer—completely free for a week.

Start Your Free Trial on iOS
