
Woman scammed out of €800k by an AI deep fake of Brad Pitt

What began as a ski holiday Instagram post ended in financial ruin for a French interior designer after scammers used AI to convince her she was in a relationship with Brad Pitt.

The 18-month scam targeted Anne, 53, who received an initial message from someone posing as Jane Etta Pitt, Brad’s mother, claiming her son “needed a woman like you.” 

Not long after, Anne started talking to what she believed was the Hollywood star himself, complete with AI-generated photos and videos.

“We’re talking about Brad Pitt here and I was stunned,” Anne told French media. “At first, I thought it was fake, but I didn’t really understand what was happening to me.” 

The relationship deepened over months of daily contact, with the fake Pitt sending poems, declarations of love, and eventually a marriage proposal.

“There are so few men who write to you like that,” Anne said. “I loved the man I was talking to. He knew how to talk to women, and it was always very well put together.”

The scammers’ tactics proved so convincing that Anne eventually divorced her millionaire entrepreneur husband.

After building rapport, the scammers began extracting money with a modest request – €9,000 for supposed customs fees on luxury gifts. It escalated when the impersonator claimed to need cancer treatment while his accounts were frozen due to his divorce from Angelina Jolie. 

A fabricated doctor’s message about Pitt’s condition prompted Anne to transfer €800,000 to a Turkish account.

Scammers requested money for the fake Brad Pitt’s cancer treatment

“It cost me to do it, but I thought that I might be saving a man’s life,” she said. When her daughter recognized the scam, Anne refused to believe it: “You’ll see when he’s here in person then you’ll say sorry.”

Her illusions were shattered upon seeing news coverage of the real Brad Pitt with his partner Inés de Ramon in summer 2024. 

Even then, the scammers tried to maintain control, sending fake news alerts dismissing these reports and claiming Pitt was actually dating an unnamed “very special person.” In a final roll of the dice, someone posing as an FBI agent extracted another €5,000 by offering to help her escape the scheme.

The aftermath proved devastating – three suicide attempts led to hospitalization for depression. 

Anne opened up about her experience to French broadcaster TF1, but the interview was later removed after she faced intense cyber-bullying.

Now living with a friend after selling her furniture, she has filed criminal complaints and launched a crowdfunding campaign for legal help.

A tragic situation – though Anne is certainly not alone. Her story parallels a massive surge in AI-powered fraud worldwide. 

Spanish authorities recently arrested five people who stole €325,000 from two women through similar Brad Pitt impersonations. 

Speaking about AI fraud last year, McAfee’s Chief Technology Officer Steve Grobman explained why these scams succeed: “Cybercriminals are able to use generative AI for fake voices and deepfakes in ways that used to require a lot more sophistication.”

It’s not just people who are lined up in the scammers’ crosshairs, but businesses, too. In Hong Kong last year, fraudsters stole $25.6 million from a multinational company using AI-generated executive impersonators in video calls. 

Superintendent Baron Chan Shun-ching described how “the worker was lured into a video conference that was said to have many participants. The realistic appearance of the individuals led the employee to execute 15 transactions to five local bank accounts.”

Would you be able to spot an AI scam?

Most people would fancy their chances of spotting an AI scam, but research says otherwise. 

Studies show humans struggle to distinguish real faces from AI creations, and synthetic voices fool roughly a quarter of listeners. That evidence dates from last year – AI image, voice, and video synthesis have evolved considerably since.

Synthesia, an AI video platform that generates realistic human avatars speaking multiple languages – now backed by Nvidia – recently doubled its valuation to $2.1 billion. Video and voice synthesis platforms like Synthesia and ElevenLabs are among the tools fraudsters use to launch deepfake scams.

Synthesia acknowledges as much, recently demonstrating its commitment to preventing misuse through a rigorous public red-team test, which showed its compliance controls successfully blocking attempts to create non-consensual deepfakes or to use avatars for harmful content such as promoting suicide and gambling.

Whether such measures are effective, the jury is still out. As companies and individuals wrestle with compellingly real AI-generated media, the human cost – illustrated by Anne’s devastating experience – is set to rise.

The post Woman scammed out of €800k by an AI deep fake of Brad Pitt appeared first on DailyAI.

Two hours of AI conversation can create a near-perfect digital twin of anyone

Stanford and Google DeepMind researchers have created AI that can replicate human personalities with uncanny accuracy after just a two-hour conversation. 

By interviewing 1,052 people from diverse backgrounds, they built what they call “simulation agents” – digital copies that could predict their human counterparts’ beliefs, attitudes, and behaviors with remarkable consistency.

To create the digital copies, the team uses data from an “AI interviewer” designed to engage participants in natural conversation. 

The AI interviewer asks questions and generates personalized follow-up questions – an average of 82 per session – exploring everything from childhood memories to political views.

Through these two-hour discussions, each participant generated detailed transcripts averaging 6,500 words.

The study platform includes participant sign-up, avatar creation, and a main interface with modules for consent, avatar creation, the interview, surveys/experiments, and a self-consistency retake of the surveys/experiments. Modules become available sequentially as previous ones are completed. Source: arXiv.

For example, when a participant mentions their childhood hometown, the AI might probe deeper, asking about specific memories or experiences. By simulating a natural flow of conversation, the system captures nuanced personal information that standard surveys typically overlook.
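The interview loop described above can be sketched in a few lines. This is an illustrative mock-up, not the study’s actual code: `ask_model` and `get_participant_answer` are hypothetical placeholders standing in for a real language-model API call and a real participant-facing chat interface.

```python
# Sketch of the AI interviewer loop: a seed question, then model-generated
# follow-ups that probe each answer. Both helper functions are placeholders.

def ask_model(prompt: str) -> str:
    # Placeholder for a language-model call that drafts the next question.
    return f"Follow-up based on: {prompt[:40]}..."

def get_participant_answer(question: str) -> str:
    # Placeholder for the participant's spoken or typed reply.
    return f"Answer to: {question[:40]}"

def run_interview(seed_question: str, max_questions: int = 82) -> list[tuple[str, str]]:
    """Run one session; the study averaged ~82 questions per two-hour interview."""
    transcript = []
    question = seed_question
    for _ in range(max_questions):
        answer = get_participant_answer(question)
        transcript.append((question, answer))
        # Generate a personalized follow-up grounded in what was just said.
        question = ask_model(f"Previous answer: {answer}\nWrite one probing follow-up.")
    return transcript

session = run_interview("Tell me about where you grew up.", max_questions=3)
print(len(session))  # 3 question-answer pairs
```

In a real pipeline, the transcript accumulated here is what feeds the downstream analysis steps.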

Behind the scenes, the study documents what the researchers call “expert reflection” – prompting large language models to analyze each conversation from four distinct professional viewpoints:

  • As a psychologist, it identifies specific personality traits and emotional patterns – for instance, noting how someone values independence based on their descriptions of family relationships.
  • Through a behavioral economist’s lens, it extracts insights about financial decision-making and risk tolerance, like how they approach savings or career choices.
  • The political scientist perspective maps ideological leanings and policy preferences across various issues.
  • A demographic analysis captures socioeconomic factors and life circumstances.
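The four-viewpoint analysis above amounts to prompting the same model four times with different expert personas. A minimal sketch of that pattern follows – the persona wording and the `call_llm` stub are hypothetical stand-ins, not the researchers’ actual prompts or code:

```python
# "Expert reflection" sketch: analyze one transcript from four professional
# viewpoints by re-prompting a language model with a different persona each time.

EXPERT_PERSONAS = {
    "psychologist": "Identify personality traits and emotional patterns.",
    "behavioral economist": "Extract insights about financial decision-making and risk tolerance.",
    "political scientist": "Map ideological leanings and policy preferences.",
    "demographic analyst": "Capture socioeconomic factors and life circumstances.",
}

def call_llm(prompt: str) -> str:
    # Placeholder standing in for a real language-model API call.
    return f"[analysis derived from a {len(prompt)}-character prompt]"

def expert_reflection(transcript: str) -> dict[str, str]:
    """Collect one analysis of the transcript per expert viewpoint."""
    reflections = {}
    for persona, instruction in EXPERT_PERSONAS.items():
        prompt = (
            f"You are a {persona}. {instruction}\n\n"
            f"Interview transcript:\n{transcript}"
        )
        reflections[persona] = call_llm(prompt)
    return reflections

notes = expert_reflection("I grew up in a small town and learned to rely on myself early...")
print(len(notes))  # 4
```

The resulting per-persona notes then condition the simulation agent when it answers surveys on the participant’s behalf.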

The researchers concluded that this interview-based technique outperformed comparable methods – such as mining social media data – by a substantial margin.

The interview interface features an AI interviewer represented by a 2-D sprite in a pulsating white circle that matches the audio level. The sprite changes to a microphone when it’s the participant’s turn. A progress bar shows a sprite traveling along a line, and options are available for subtitles and pausing.

Testing the digital copies

The researchers put their AI replicas through a battery of tests. 

First, they used the General Social Survey – a measure of social attitudes that asks questions about everything from political views to religious beliefs. Here, the AI copies matched their human counterparts’ responses 85% of the time.

On the Big Five personality test, which measures traits like openness and conscientiousness through 44 different questions, the AI predictions aligned with human responses about 80% of the time. The system was particularly good at capturing traits like extraversion and neuroticism.
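The match rates above are straightforward agreement scores. A minimal sketch with made-up answers (not the study’s data) shows how such a rate is computed:

```python
# Agreement rate between a human's survey answers and their AI replica's.

def agreement_rate(human_answers: list[str], replica_answers: list[str]) -> float:
    """Fraction of survey items where the replica matches the human."""
    assert len(human_answers) == len(replica_answers)
    matches = sum(h == r for h, r in zip(human_answers, replica_answers))
    return matches / len(human_answers)

human   = ["agree", "disagree", "agree", "neutral", "agree"]
replica = ["agree", "disagree", "agree", "agree",   "agree"]
print(agreement_rate(human, replica))  # 0.8
```

Because participants also retook the surveys themselves (the “self-consistency retake” in the study platform described earlier), raw rates like this can additionally be judged against each person’s own retest consistency, since even humans don’t answer identically twice.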

Economic game testing revealed fascinating limitations, however. In the “Dictator Game,” where participants decide how to split money with others, the AI struggled to perfectly predict human generosity. 

In the “Trust Game,” which tests willingness to cooperate with others for mutual benefit, the digital copies only matched human choices about two-thirds of the time. 

This suggests that while AI can grasp our stated values, it still can’t fully capture the nuances of human social decision-making. 

Real-world experiments

The researchers also ran five classic social psychology experiments using their AI copies. 

In one experiment testing how perceived intent affects blame, both humans and their AI copies showed similar patterns of assigning more blame when harmful actions seemed intentional. 

Another experiment examined how fairness influences emotional responses, with AI copies accurately predicting human reactions to fair versus unfair treatment.

The AI replicas successfully reproduced human behavior in four out of five experiments, suggesting they can model not just individual topical responses but broad, complex behavioral patterns.

Easy AI clones: What are the implications?

AI clones are big business, with Meta recently announcing plans to fill Facebook and Instagram with AI profiles that can create content and engage with users.

TikTok has also jumped into the fray with its new “Symphony” suite of AI-powered creative tools, which includes digital avatars that can be used by brands and creators to produce localized content at scale.

With Symphony Digital Avatars, TikTok is enabling new ways for creators and brands to captivate global audiences using generative AI. The avatars can represent real people with a wide range of gestures, expressions, ages, nationalities and languages.

Stanford and DeepMind’s research suggests such digital replicas will become far more sophisticated – and easier to build and deploy at scale. 

“If you can have a bunch of small ‘yous’ running around and actually making the decisions that you would have made — that, I think, is ultimately the future,” lead researcher Joon Sung Park, a Stanford PhD student in computer science, told MIT Technology Review.

Park also points to the upsides of such technology: building accurate clones could support scientific research.

Instead of running expensive or ethically questionable experiments on real people, researchers could test how populations might respond to certain inputs. For example, it could help predict reactions to public health messages or study how communities adapt to major societal shifts.

Ultimately, though, the same features that make these AI replicas valuable for research also make them powerful tools for deception. 

As digital copies become more convincing, distinguishing authentic human interaction from AI-generated content will grow far more complex. 

The research team acknowledges these risks. Their framework requires clear consent from participants and allows them to withdraw their data, treating personality replication with the same privacy concerns as sensitive medical information. 

In any case, we’re entering uncharted territory in human-machine interaction, and the long-term implications remain largely unknown.


Meta’s AI invasion signals dramatic shift for social media

Meta has announced plans to populate Facebook and Instagram with AI-generated profiles and content. 

Connor Hayes, Meta’s vice-president of product for generative AI, outlined the company’s vision: “We expect these AIs to actually, over time, exist on our platforms, kind of in the same way that accounts do.”

Hayes added that these AI entities will have “bios and profile pictures and be able to generate and share content powered by AI on the platform.”

Meta has already seen hundreds of thousands of AI characters created through its tools since their US launch in July, though the vast majority of users have not released their creations publicly. 

Hayes notes that making Meta’s apps “more entertaining and engaging” is a “priority” for the next two years, with a particular focus on making AI interactions more social.

Meta’s broader AI plans are ambitious. The company is developing tools to help users create AI assistants that can respond to followers’ questions. For 2025, it plans to release text-to-video generation software enabling creators to insert themselves into AI-generated videos. 

Mark Zuckerberg also recently revealed AI avatars capable of conducting live video calls while perfectly mimicking a creator’s persona, from their speaking patterns to their facial expressions.

This forms part of a broader industry push toward AI-generated content. Snapchat released tools that enable creators to design 3D AI characters for augmented reality purposes, reporting a 50% annual increase in users viewing AI-generated content. 

Meanwhile, ByteDance-owned TikTok is piloting “Symphony,” a series of tools and applications that enables brands and creators to use AI for advertising purposes, such as creating AI-generated avatars and automating content translation.

AI bots on social media: The implications

Industry experts are sounding alarms about the psychological and social implications of embedding social media with AI bots. 

Becky Owen, global chief marketing and innovation officer at Billion Dollar Boy and former head of Meta’s creator innovations team, cautions that “without robust safeguards, platforms risk amplifying false narratives through these AI-driven accounts,” according to the FT.

She emphasizes, “Unlike human creators, these AI personas don’t have lived experiences, emotions, or the same capacity for relatability.”

Owen further warns that AI characters could flood platforms with low-quality material that undermines creators and erodes user confidence. 

This takes on added weight given Meta’s history with data manipulation – most notably the Cambridge Analytica scandal, where user data was exploited to influence political opinions. 

Rather than merely harvesting user data to target content, AI entities could actively engage with users, shape conversations, and influence opinions in real time, all while appearing to be authentic human participants in online discourse.

Meta claims to be implementing protective measures, including mandatory labeling of AI-generated content, but critics argue this may not be sufficient to prevent the erosion of authentic human connection.

Bots threaten to take over parts of the internet

According to research from Imperva, nearly half of all internet traffic – 49.6% – now originates from non-human sources. 

Bad bots already account for 32% of web traffic, lending credence to what was once dismissed as a conspiracy theory: the concept of a “dead internet” where human voices become increasingly drowned out by artificial ones.

On a deeper level, this signals yet another progression towards an internet ecosystem shaped by AI systems. 

The philosophical implications are dizzying. We’re moving toward a world where our online social circles may include entities that think and respond at superhuman speeds, yet lack any genuine consciousness or emotional experience. 

AI profiles will share “memories” they never had, express “feelings” they cannot feel, and forge “connections” without any capacity for true empathy or understanding.

Ironically, social media, originally created to help humans connect more easily across vast distances, may become a space where human connection is increasingly mediated and diluted by artificial entities. 

The question isn’t simply whether AI can convincingly mimic human interaction but whether we’re prepared for a world where digital entities become equal participants in our online social spaces.


HelloYou unveils Skanna, a barcode scanner with a twist

For many consumers, deciphering the truth about products feels like solving a riddle. Complex ingredient lists, vague claims, and limited transparency make it nearly impossible to know what you’re really buying—or how it aligns with your health, safety, and values.

This growing demand for clarity has inspired HelloYou, a developer known for creating apps that help simplify daily life, to create Skanna—an AI-powered barcode scanner app. Designed to cut through the noise, Skanna reveals the truth behind product labels, offering detailed insights into ingredients, safety, and environmental impact in seconds.

What is Skanna?

Skanna isn’t just another barcode scanner—it’s a game-changer for anyone seeking clarity in a confusing marketplace. Whether it’s deciphering a long list of ingredients, understanding allergens, or evaluating sustainability, Skanna delivers the truth behind every product you use.

“Transparency matters more than ever in a world where consumers demand clear, honest answers about the products they use,” said Rena Cimen, Senior Performance Marketing Manager at HelloYou. “With Skanna, our goal was to strip away the noise and deliver unfiltered, reliable information—quickly and effortlessly.”

Skanna is tailored for everyone from parents safeguarding their families to health-conscious shoppers and eco-conscious individuals making sustainable choices. With its sleek design and straightforward functionality, the app ensures that anyone can access critical product information with just a scan.     

Key Features

I downloaded Skanna to put its features to the test, and while it’s clear the app has incredible potential, it’s not without its quirks. Here’s what I discovered:

  • Ingredient Analysis: This feature is a standout, providing detailed breakdowns of product ingredients. It’s perfect for identifying allergens or unwanted additives. However, I noticed that not all products were in the database, particularly from smaller or niche brands. For a first version, this is understandable, but it’s something that could be improved over time.
  • Sustainability Insights: Skanna’s efforts to offer eco-conscious data are commendable. The app provides information about sourcing and ethical practices, but the depth of this data varies. It’s a great starting point, though users looking for detailed environmental impact analyses might find it a bit surface-level.
  • Scan History: The personal history feature is useful for keeping track of what you’ve scanned. While functional, it could benefit from more advanced organization tools, like categories or tags, to make it easier to revisit specific products.
  • Personalized Recommendations: Skanna really excels here. The app tailors suggestions based on dietary or lifestyle needs, such as gluten-free or vegan alternatives. However, the recommendations are only as strong as the database, so expanding it will make this feature even better.
  • Database and Variety: Out of 25 food items, 5 cosmetic products, 2 books, a hairdryer, and a vape that I scanned, Skanna managed to get everything right—impressively so. Admittedly, one of the books didn’t work at first because an Amazon barcode was covering it, but once I peeled that off, it scanned perfectly. Points for accuracy, even with a little extra effort!

While Skanna isn’t perfect—formatting and database coverage still need work—it’s the first of its kind to offer real-time, live feedback to shoppers. That alone is a game-changer in helping people make healthier and more sustainable choices.

How Skanna Works

Using Skanna is intuitive and straightforward:

  1. Open the app: The barcode scanner is accessible with one tap—no digging through menus.
  2. Scan a product: The scanner works quickly, delivering insights in just seconds for most barcodes. Occasionally, non-standard codes required manual input, but these instances were rare.
  3. Explore insights: The app’s best feature is how it presents information. No fluff, just clear, concise details about whatever you want to know.

While the app isn’t flawless—some barcodes didn’t scan as expected—it’s a strong foundation for what could become an essential tool for shoppers.

Who Can Benefit?

Despite its imperfections, Skanna has something to offer for a wide range of users:

  • Parents: Use Skanna to identify allergen-free, family-safe products with ease.
  • Health Enthusiasts: Avoid hidden additives and toxic ingredients effortlessly.
  • Sustainability Advocates: Gain insights into brands’ ethical and eco-friendly practices.
  • Busy Professionals: Access reliable product data on the go, saving time while shopping smarter.
  • Globe Trotters: Scan globally to decode labels, translate ingredients, and understand what you’re eating.

For a first version, Skanna’s strengths outweigh its limitations. It’s especially helpful for navigating complex ingredient lists in beauty, food, and household products. And while it’s not yet a perfect solution, its potential to evolve into an indispensable shopping companion is undeniable.

Final Thoughts: A Game-Changer in the Making

Skanna may not be a finished masterpiece, but it’s a bold step toward giving consumers real-time, actionable feedback about the products they buy. For anyone frustrated by unclear labels or limited product transparency, this app is a much-needed solution.

It’s refreshing to see an app that doesn’t just promise to help—but actively works to simplify complex decisions. With some refinements to its database and interface, Skanna could genuinely transform the way we shop for the better.

To celebrate Cyber Monday, DailyAI users can unlock a special 7-day free trial of the app’s premium features. Click on the exclusive iOS link below to explore everything the app has to offer—completely free for a week.

Start Your Free Trial on iOS


As AI advances, gaming studios, developers, and players face a new reality

From the rise of 3D graphics to the explosion of mobile gaming, technological progress has always driven the gaming industry forward. 

AI marks the latest chapter in its evolution. Once the masters of their virtual worlds, game developers must confront a fundamental question: What role will human creators play in an industry dominated by AI-driven processes? 

And beyond that, what are the broader ethical challenges of a world where AI, video games, and human lives increasingly intertwine?

Developers are already talking about how AI might transform the industry, but they’re also raising concerns. 

A recent survey by the Game Developers Conference found that 84% of developers are somewhat or very concerned about the ethics of generative AI, from fears of job displacement to issues like copyright infringement and the risk of AI systems scraping game data without consent.

At Hong Kong-based Gala Technology, the sense of urgency has reached a fever pitch. CEO Jia Xiaodong confessed to Bloomberg News, “Basically every week, we feel that we are going to be eliminated.”

The company has entered full crisis mode, freezing non-AI projects, mandating machine learning crash courses for department heads, and even dangling $7,000 bonuses for innovative AI ideas. 

In the US, gaming giants like Electronic Arts and Ubisoft are similarly pouring millions into AI research, even as they weather waves of layoffs and restructuring.

For those on the front lines of game development, AI displacement is already picking up pace. In 2023 alone, over 10,500 game developers lost their jobs across more than 30 studios.

“I’m very aware that I could wake up tomorrow and my job could be gone,” confesses Jess Hyland, a video game artist with 15 years of experience under her belt. Hyland told the BBC that she’s already heard of colleagues losing gigs because of AI.

There’s a sense of inevitability that this trend will only accelerate. Masaaki Fukuda, a veteran of Sony’s PlayStation division who now serves as vice president at Japan’s largest AI startup, explained, “Nothing can reverse, stop, or slow the current AI trend.” 

When machines dream of electric sheep

For decades, video games have been the product of intensely collaborative human effort, melding the skills of artists, writers, designers, and programmers into immersive, interactive experiences. 

Now that AI systems can generate levels, worlds, and even entire games from simple text prompts, the nature of authorship is being questioned.

Take GameNGen, an AI model developed by Google and Tel Aviv University researchers that simulates the classic shooter DOOM in real time, producing playable gameplay nearly indistinguishable from the original engine. 

Or consider DeepMind’s Genie, a foundation model that can generate interactive 2D environments from rough sketches or brief descriptions, blending elements from existing games to create entirely new worlds with distinct logic and aesthetics.

These examples showcase the direction of travel for AI in game development, a glimpse of what we might expect to see in a few years as AI advances.

However, change is already very much in the pipeline. AI tools like Unity’s Muse are actively reshaping game design workflows today, automating asset creation, animation, and environmental building.

This level of AI integration is already making it possible for developers to accomplish in hours what once took days. The intention is to remove the drudgery of repetitive tasks while leaving artistic control primarily in human hands.

For some in the industry, these tools and others herald a new era of democratized creation. “AI is the game changer I’ve been waiting for,” stated Yuta Hanazawa, a 25-year industry veteran who recently founded an AI game art company. 

Hanazawa believes that AI will “revitalize the entire industry” by liberating developers from the drudgery of asset creation, enabling a newfound focus on innovative gameplay and storytelling.

Yosuke Shiokawa, founder of a two-year-old AI gaming startup, similarly predicts that “soon, it will be a matter of your creativity, not your budget, that determines the value of games.” 

However, others fear that the rise of generative AI threatens to reduce human artists to mere machine operators, endlessly fine-tuning and debugging its output. 

“The stuff that AI generates, you become the person whose job is fixing it,” Hyland said. “It’s not why I got into making games.”

The double-edged sword of democratization

For AI evangelists, one of the technology’s most tantalizing promises is the radical democratization of game creation. 

They envision a future in which anyone with a spark of imagination can conjure their dream game with a few simple prompts, where the line between player and creator blurs into irrelevance.

But for each individual intoxicated by the prospect of AI-powered creative freedom, there’s at least one skeptic. 

Chris Knowles, a veteran game developer and founder of the indie studio Sidequest Ninja, points to cloned games that are already plaguing app stores and online marketplaces.

“Anything that makes the clone studios’ business model even cheaper and quicker makes the difficult task of running a financially sustainable indie studio even harder,” Knowles cautions. 

He and many others fear that the advent of AI-assisted game generation will only exacerbate the problem, flooding the market with predominantly derivative, low-effort content.

There’s also the risk of creative homogenization. If every developer is drawing from the same small pool of AI models and associated datasets, will the result be a gaming landscape that feels increasingly generic and interchangeable? 

Will the idiosyncrasies and happy accidents that often define truly memorable games be lost in the pursuit of algorithmic optimization?

AI gaming’s ethical minefields

AI’s role in game development is blurring the line between the virtual and the real – pushing gaming closer to its long-standing goal of creating immersive, lifelike experiences.

Many games already allow players to customize their digital personas. With AI-powered tools capable of generating hyper-realistic, photo-quality images, the potential for players to create avatars that uncannily resemble real individuals – and then use those avatars for exploitative or abusive purposes – is disturbingly high.

The building blocks for these scenarios have already been laid. For example, the recent case of Spanish schoolchildren using AI ‘games’ to generate nude images of their classmates illustrates how easily these tools can be weaponized, especially against vulnerable populations like women and minors.

AI ‘games’ capable of producing explicit or abusive imagery are rife on the Apple App Store and Google Play, and age limits are largely ineffective. 

Transpose this same dynamic into the context of more immersive, detailed gaming environments, and the potential for harm is enormous.

Further, moderating AI’s functionality to prevent this form of abuse or manipulation is exceptionally tricky, if not impossible. All AI models, no matter how sophisticated, are vulnerable to jailbreaking. This involves finding loopholes or weaknesses in moderation systems to generate content that’s supposed to be restricted. 

Filters designed to block explicit content often become the very target for manipulation by players who push AI systems to their limits, creating content that breaks ethical boundaries. 

The challenge is ensuring AI doesn’t undermine the very communities it aims to enhance – both among gamers and across wider society. Developers and studios can’t just push the boundaries of what AI can do; they must also understand where AI stops being a tool and starts taking over our lives, our mental faculties, our culture, our creativity, our very selves. 

In the end, the conversation around AI in gaming isn’t about whether it will happen – it already has. The focus needs to shift towards ensuring that AI complements rather than erases human creativity while preventing forms of harm and misuse. 


The gaming industry is facing a midlife crisis – is AI its future?

For years, the gaming industry seemed like an unstoppable juggernaut, with revenues rising to stratospheric heights on the backs of ever-more-immersive titles and the explosion of mobile gaming. 

However, as we enter the mid-2020s, there are growing signs that the industry is reaching a plateau.

After the pandemic-fueled boom of 2020 and 2021, global gaming revenues dipped in 2022. That contraction gave way to tepid growth of just 0.5% in 2023, bringing the worldwide gaming market to around $184 billion, according to data from Newzoo.

While still an impressive figure, it’s a far cry from the double-digit percentage growth the industry had come to expect.

This slowdown is even more pronounced in mature markets like North America and Europe, where key sectors such as console and mobile gaming are approaching saturation. 

Mobile gaming revenue, once propelling the industry’s continued growth, actually declined in 2022 and is only now beginning to stabilize.

Revenue stagnation is only part of the story, however. Even as growth slows, the cost of developing top-tier AAA games continues to soar. 

Budgets for marquee franchises like Call of Duty and Grand Theft Auto now routinely exceed $300 million. Some titles are nearing combined development and marketing costs of $660 million, a staggering sum that would have been unthinkable a decade ago.

These ballooning budgets are forcing studios to play it safe, leaning heavily on established franchises and proven formulas rather than taking risks. Innovation is taking a backseat to iteration.

There’s also evidence that people aren’t enjoying games as much as they once did, with average player sentiment toward new releases dropping from 3.4/5 in 2014 to 2.9/5 in 2021.

Even the buzz around the latest CoD and FIFA games seems to be waning. And while we’ve witnessed groundbreaking releases – Elden Ring, for example – that title took some five years to create. It’s a once-in-a-generation achievement rather than simply the year’s best release.

The human toll of this financial squeeze is also becoming more prominent. Layoffs are picking up pace, with over 10,500 game developers losing their jobs across over 30 studios in 2023 alone. 

Simultaneously, the industry is grappling with a rising tide of labor activism as workers push back against the notorious “crunch culture” that has long plagued game development. 

The indie incursion

Amid tensions in AAA studios, indie developers are making a greater impact in the industry – a powerful counterpoint to mainstream game development. 

In 2024, indie games are claiming five of the top ten spots on Steam’s highest-grossing list. Titles like Palworld ($6.75m budget, 25 million units sold) and Enshrouded are resonating with players, showcasing the potential for indie games to achieve commercial success on par with AAA releases.

This indie surge is part of a larger trend, with the market share of indie games on Steam growing from 25% in 2018 to 43% in 2024.

Even in years with highly anticipated AAA launches, like 2023’s Baldur’s Gate 3 and Spider-Man 2, indie revenue has held steady, indicating a robust and growing audience for these titles.

The rise of indie games reflects a growing appetite among some players for novel experiences and creative risks. 

While AAA development often focuses on established franchises and proven formulas, indie developers push boundaries and experiment with new ideas.

Meanwhile, tools like Unity and Unreal Engine have made high-quality game development more accessible, while digital marketplaces like Steam provide a platform for indie games to find an audience. 

This brings us to AI. By automating and streamlining virtually every aspect of game development, AI could further level the playing field, allowing small teams to create experiences that rival those of major studios.

The AI paradigm shift

AI’s potential to disrupt gaming has been discussed for decades, but the prospect is no longer just theoretical.

Recent breakthroughs, such as Google’s GameNGen and DeepMind’s Genie, provide a glimpse into a future where AI drives game design. 

GameNGen can generate entirely playable levels of classic games like DOOM in real-time, while Genie can conjure up interactive 2D environments from simple images or text prompts. 

These breakthroughs are part of a longer trend of AI-driven innovation in gaming, though the field is still young.

The journey began with early milestones like IBM’s Deep Blue, which famously defeated world chess champion Garry Kasparov in 1997. Deep Blue’s victory was a landmark moment that demonstrated the potential for AI to excel in rule-based, strategic challenges.

Fast forward to 2016, and we saw another significant leap with Google DeepMind’s AlphaGo. This AI system mastered the ancient Chinese game of Go, known for its immense complexity and reliance on intuition. By defeating world champion Lee Sedol 4-1, AlphaGo showed that AI could tackle challenges once thought to be the exclusive domain of human intelligence.

In 2018, researchers David Ha and Jürgen Schmidhuber published World Models, demonstrating how an AI could learn to play video games by building an internal model of the game world. 

A year later, DeepMind’s AlphaStar showcased the power of reinforcement learning by mastering the complex strategy game StarCraft II, even competing against top human players.

Representing the cutting-edge of this field today, GameNGen was trained on actual DOOM gameplay footage, allowing it to internalize the game’s mechanics, level design, and aesthetics.

It then uses this knowledge to generate novel levels on the fly, complete with coherent layouts and gameplay flow.

By contrast, Genie uses a foundation model to generate interactive environments from more freeform inputs like sketches or descriptions. By training on a diverse range of game genres and visual styles, Genie can adapt to create content across a broad spectrum of aesthetics.

Genie, a generative model, can function as an interactive environment, accepting various prompts such as generated images or hand-drawn sketches. Users can guide the model’s output by providing latent actions at each time step, which Genie then uses to generate the next frame in the sequence. Source: DeepMind via ArXiv (open access).
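To make that loop concrete – a prompt frame goes in, one latent action is supplied per time step, and the model returns the next frame – here is a minimal sketch in Python. Everything in it is a toy stand-in: the frame representation, the function names, and the seeded perturbation that substitutes for a trained neural network are all assumptions for illustration; only the autoregressive control flow mirrors what the Genie paper describes.

```python
import random

FRAME_SIZE = 16 * 16          # toy frame: a flat list of grayscale pixels
NUM_LATENT_ACTIONS = 8        # Genie learns a small discrete action vocabulary

def next_frame(frame, action):
    """Stand-in for the learned dynamics model: given the current frame and
    a discrete latent action, produce the next frame. A real system would run
    a neural network here; a seeded perturbation keeps the sketch runnable."""
    rng = random.Random(action)
    return [min(1.0, max(0.0, p + rng.uniform(-0.05, 0.05))) for p in frame]

def rollout(prompt_frame, actions):
    """Autoregressive loop: start from a prompt image (or encoded sketch)
    and feed one latent action per step; each step conditions on the
    previously generated frame, not on the original prompt."""
    frames = [prompt_frame]
    for action in actions:
        frames.append(next_frame(frames[-1], action))
    return frames

prompt = [0.5] * FRAME_SIZE            # e.g. an encoded hand-drawn sketch
video = rollout(prompt, [3, 1, 1, 5])  # four latent actions
print(len(video))                      # prompt frame + one frame per action
```

The key design point the sketch captures is that the "game" lives entirely inside the model: there is no level file or physics engine, just a function mapping (frame, action) to the next frame, applied repeatedly.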

Under the hood, these AI systems are powered by deep neural networks, which are becoming miniature game engines unto themselves, capable of generating complete, playable experiences from scratch.

Essentially, the game world is created inside the neural network itself, not through traditional programming techniques, but by a deep neural network that has learned game design rules, patterns, and structures.

Moreover, because the game world is generated by a neural network, it has the potential to be far more dynamic and responsive than traditional game environments.

The same network that generates the world itself could also be used to simulate NPC behaviors, adjust difficulty on the fly, or even reshape the environment in real-time based on player actions.

With AI handling the heavy lifting of world-building and level design, the optimistic narrative is that developers will be free to focus on higher-level creative decisions, such as developing art, concepts, and storylines. 

While jobs would be placed at risk, AI may well prove to be the major level-up the gaming industry is looking for.

Empowering players, disrupting business models

The real revolution will begin when these AI tools are placed directly in the hands of players. 

Imagine a world where gamers can conjure up their dream titles with a few simple prompts, then jump in and start playing instantly. 

Want to mash up the neon-soaked cityscape of Cyberpunk 2077 with the frenetic combat of DOOM Eternal? Just describe it to the AI and watch your vision come to life.

The line between developer and player blurs. Games become living, breathing entities, evolving in response to the collective creativity of their communities. 

This democratization of game creation could birth entirely new genres and upend traditional notions of what a game can be. 

We could see a shift toward platforms that provide AI tools for creation and curation, taking a cut of user-generated content sales or charging for access to premium features.

For established studios, it would undoubtedly represent both an existential threat and an unprecedented opportunity. Those who cling to the old ways risk being left behind. Those who embrace the new paradigm stand to capitalize. 

Of course, realizing this vision won’t be without its challenges. Issues of content moderation, intellectual property rights, job displacement, and revenue sharing will all need to be confronted. 

However, the wheels are in motion. As the technology continues to evolve, we can expect to see more and more examples of AI not just assisting in game development, but fundamentally reshaping what games can be.

The post The gaming industry is facing a midlife crisis – is AI its future? appeared first on DailyAI.