Blog

OpenAI shut down the Ghibli craze – now users are turning to open source

When OpenAI released its latest image generator a few days ago, they probably didn’t expect it to bring the internet to its knees.

But that’s more or less what happened, as millions of people rushed to transform their pets, selfies, and favorite memes into something that looked like it came straight out of a Studio Ghibli movie. All you needed was to add a prompt like “in the style of Studio Ghibli.”

For anyone unfamiliar, Studio Ghibli is the legendary Japanese animation studio behind Spirited Away, Kiki’s Delivery Service, and Princess Mononoke.

Its soft, hand-drawn style and magical settings are instantly recognizable – and surprisingly easy to mimic using OpenAI’s new model. Social media is filled with anime versions of people’s cats, family portraits, and inside jokes.

It took many by surprise. OpenAI’s tools normally resist prompts that name a specific artist or studio, since honoring them demonstrates, more or less unequivocally, that copyrighted imagery is rife in training datasets.

For a while, though, that didn’t seem to matter. Even OpenAI CEO Sam Altman changed his own profile photo to a Ghibli-style image and posted on X that, at one point, over a million people had signed up for ChatGPT within a single hour.

Then, quietly, it stopped working for many.

Users started to notice that prompts referencing Ghibli, or even trying to describe the style more indirectly, no longer returned the same results.

Some prompts were rejected altogether. Others just produced generic art that looked nothing like what had been going viral the day before. Many now speculate that the model was updated: OpenAI had rolled out copyright restrictions behind the scenes.

OpenAI later said that, despite spurring on the trend, they were throttling Ghibli-style images by taking a “conservative approach,” refusing any attempt to create images in the likeness of a living artist.

This sort of thing isn’t new. It happened with DALL·E as well. A model launches with stacks of flexibility and loose guardrails, catches fire online, then gets quietly dialed back, often in response to legal concerns or policy updates.

The original version of DALL·E could do things that were later disabled. The same seems to be happening here.

One Reddit commenter explained:

“The problem is it actually goes like this: Closed model releases which is much better than anything we have. Closed model gets heavily nerfed. Open source model comes out that’s getting close to the nerfed version.”

OpenAI’s sudden retreat has left many users looking elsewhere, and some are turning to open-source models such as Flux, developed by Black Forest Labs, a startup founded by former Stability AI researchers.

Unlike OpenAI’s tools, Flux and other open-source text-to-image models don’t apply server-side restrictions (or at least, the restrictions are looser and limited to illicit or profane material). So, they haven’t filtered out prompts referencing Ghibli-style imagery.
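To make the difference concrete, a server-side restriction can be as simple as screening prompts against a blocklist before they ever reach the model. The sketch below is purely illustrative – the blocked terms, function name, and policy are assumptions, not OpenAI’s actual filter:

```python
# Illustrative sketch of a server-side prompt filter.
# The blocklist and policy are hypothetical, not OpenAI's actual rules.

BLOCKED_STYLE_TERMS = {
    "studio ghibli",
    "ghibli",
    "hayao miyazaki",  # example of a living-artist name a host might refuse
}

def is_prompt_allowed(prompt: str) -> bool:
    """Return False if the prompt references a blocked style or artist."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_STYLE_TERMS)

print(is_prompt_allowed("a cat in the style of Studio Ghibli"))      # False
print(is_prompt_allowed("a cat in a soft, hand-drawn anime style"))  # True
```

A locally run model like Flux has no such gatekeeper in front of it, which is exactly why these prompts keep working there.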

That freedom doesn’t mean open-source tools avoid ethical issues, of course. Models like Flux are often trained on the same kind of scraped data that fuels debates around style, consent, and copyright.

The difference is, they aren’t subject to corporate risk management – meaning the creative freedom is wider, but so is the grey area.

The post OpenAI shut down the Ghibli craze – now users are turning to open source appeared first on DailyAI.

Bill Gates: AI will replace most human jobs within a decade

In a series of recent interviews, Microsoft co-founder Bill Gates made a bold prediction: within the next 10 years, AI will render humans largely obsolete in the workplace. 

Gates believes that as AI rapidly advances, it will take over “most things” currently done by people, including key roles in medicine and education.

“With AI, over the next decade, that will become free, commonplace — great medical advice, great tutoring,” Gates said during an appearance on NBC’s “The Tonight Show” with Jimmy Fallon.

While he acknowledged that human expertise in fields like healthcare and teaching remains “rare” today, Gates envisions a near future where AI will democratize access to top-tier knowledge and skills.

Gates also elaborated on this vision of “free intelligence” in a conversation last month with Arthur Brooks, a Harvard professor and happiness expert. 

He described an era where AI permeates daily life, transforming healthcare, education, and beyond. “It’s very profound and even a little bit scary — because it’s happening very quickly, and there is no upper bound,” Gates said. 

The billionaire’s comments have reignited the long-running debate over AI’s impact on the workforce.

Some experts believe AI will primarily augment human labor, boosting efficiency and economic growth. OpenAI CEO Sam Altman has similarly compared AI’s role in education to that of the calculator.

NVIDIA CEO Jensen Huang declared, “While some worry that AI may take their jobs, someone who is expert with AI will.”

But others, like Microsoft AI CEO Mustafa Suleyman, warn of disruption.

In his 2023 book “The Coming Wave,” Suleyman wrote that AI tools “will only temporarily augment human intelligence” before ultimately replacing many jobs. 

“They will make us smarter and more efficient for a time, and will unlock enormous amounts of economic growth, but they are fundamentally labor replacing,” he argued.

Despite the much-discussed downsides of AI, Gates remains confident in AI’s societal benefits, from medical breakthroughs to climate solutions to universal access to quality education. 

He also said, if he were to launch a new venture today, it would be “AI-centric.”

“Today, somebody could raise billions of dollars for a new AI company [that’s just] a few sketch ideas,” he said – a claim borne out by the many generative AI startups that have achieved unicorn status without a public product.

As for which human endeavors might prove irreplaceable in a robot-dominated future, Gates offered a few predictions to Fallon. “There will be some things we reserve for ourselves,” he said, suggesting that humans would still prefer to watch other humans play sports, for instance. 

“But in terms of making things and moving things and growing food, over time those will be basically solved problems.”

Gates’ vision of a world where human expertise is largely obsolete within a decade may seem jarring – even dystopian – to some.

However, for many in the tech industry, it’s simply the inevitable endpoint of a revolution decades in the making.


AI stirs up trouble in the science peer review process

Scientific publishing is confronting an increasingly provocative issue: what do you do about AI in peer review?

Ecologist Timothée Poisot recently received a review that was clearly generated by ChatGPT. The document had the following telltale string of words attached: “Here is a revised version of your review with improved clarity and structure.” 

Poisot was incensed. “I submit a manuscript for review in the hope of getting comments from my peers,” he fumed in a blog post. “If this assumption is not met, the entire social contract of peer review is gone.”

Poisot’s experience is not an isolated incident. A recent study published in Nature found that up to 17% of reviews for AI conference papers in 2023-24 showed signs of substantial modification by language models.

And in a separate Nature survey, nearly one in five researchers admitted to using AI to speed up and ease the peer review process.
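The crudest cases can be caught automatically: the string Poisot received is exactly the kind of leftover chatbot boilerplate a journal could scan submissions for. Here is a minimal sketch – the phrase list is an illustrative assumption, not what the Nature study used, and real detectors rely on statistical signals rather than exact matching:

```python
# Flag peer reviews containing leftover chatbot boilerplate.
# The phrase list is illustrative; real studies use statistical detectors.

TELLTALE_PHRASES = [
    "here is a revised version of your review",
    "as a large language model",
    "i hope this helps",
]

def find_telltales(review_text: str) -> list[str]:
    """Return any telltale phrases found in a review."""
    lowered = review_text.lower()
    return [p for p in TELLTALE_PHRASES if p in lowered]

review = ("Here is a revised version of your review with improved "
          "clarity and structure. The manuscript is sound...")
print(find_telltales(review))
# ['here is a revised version of your review']
```

A filter like this only catches careless copy-pasting; a reviewer who trims the boilerplate slips straight past it, which is why the studies above rely on stylometric analysis instead.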

We’ve also seen a few absurd cases of what happens when AI-generated content slips through the peer review process, which is designed to uphold the quality of research. 

In 2024, a paper published in a Frontiers journal, exploring some highly complex cell signaling pathways, was found to contain bizarre, nonsensical diagrams generated by the AI art tool Midjourney.

One image depicted a deformed rat, while others were just random swirls and squiggles, filled with gibberish text.

This nonsensical AI-generated diagram appeared in a peer-reviewed Frontiers paper in 2024. Source: Frontiers.

Commenters on Twitter were aghast that such obviously flawed figures made it through peer review. “Erm, how did Figure 1 get past a peer reviewer?!” one asked. 

In essence, there are two risks: a) peer reviewers using AI to review content, and b) AI-generated content slipping through the entire peer review process. 

Publishers are responding to the issues. Elsevier has banned generative AI in peer review outright. Wiley and Springer Nature allow “limited use” with disclosure. A few, like the American Institute of Physics, are gingerly piloting AI tools to supplement – but not supplant – human feedback.

However, gen AI’s allure is strong, and some see the benefits if applied judiciously. A Stanford study found that 40% of scientists felt ChatGPT reviews of their work could be as helpful as human ones, and 20% found them more helpful.

Researchers often have positive reactions to AI-generated peer reviews. Source: Nature

Academia has revolved around human input for millennia, though, so the resistance is strong. “Not combating automated reviews means we have given up,” Poisot wrote.

The whole point of peer review, many argue, is considered feedback from fellow experts – not an algorithmic rubber stamp.


US faces crucial decision on AI chip export rules

The US is poised to implement sweeping restrictions on the sale of advanced AI chips overseas. 

If the rules take effect as planned on May 15, American tech companies such as NVIDIA could face major obstacles in the global AI race.

Under the proposed system, called ‘AI diffusion’ – drawn up in the final days of the Biden administration – countries are grouped into three tiers based on their closeness to the US. Top allies like Japan and most of Europe would have relatively smooth access to AI tech.

However, a broad second tier, including nations like India, Brazil, and Saudi Arabia, would face tighter controls. They’d be limited in the computing power they can buy and would have to meet strict security standards.

China and Russia are predictably in the third tier, effectively blocked from importing cutting-edge US AI chips.

The restrictions have raised alarm bells among American chipmakers. NVIDIA, for one, gets almost half its revenue abroad. The company warns the rules could put a large dent in its sales.

But it’s not just about money. The export controls are part of a wider US effort to maintain its AI advantage. Some experts, though, caution that being too restrictive could backfire. They point out that many key AI breakthroughs have come from global collaboration. Cutting too many countries out, they argue, could ultimately hurt American interests.

As the May 15 deadline looms, the Trump administration faces a balancing act. There’s bipartisan support for protecting US tech, but also economic risks in alienating allies. 

The rise of China’s AI industry has only raised the pressure. Beijing has made tech self-sufficiency a top priority. It’s pouring money into homegrown chip development. And it’s getting results.

Just look at DeepSeek. In months, the Chinese startup has gone from obscurity to drawing comparisons with OpenAI. Its rapid progress, fueled by ample government support and unrivaled access to data, is turning heads from Silicon Valley to Washington.

For some, DeepSeek’s ascent is AI’s “Sputnik moment” – a wake-up call that America could be losing its edge. 

As the clock ticks down to May 15, the choices made – to clamp down on AI exports or take a more open approach – could have ripple effects across a tech industry facing uncertainty. 

The chips, as they say, are on the table. The question now is how the US will play its hand.


Cloudflare weaponizes AI against web crawlers

Cloudflare has unleashed a devious new trap for data-hungry AI bots that ignore website permissions – the “AI Labyrinth.”

The AI Labyrinth attempts to actively sabotage AI bots by serving realistic-looking pages filled with irrelevant information and hidden links that lead deeper into a rabbit hole of AI-generated nonsense.

“When we detect unauthorized crawling, rather than blocking the request, we will link to a series of AI-generated pages that are convincing enough to entice a crawler to traverse them,” Cloudflare revealed.

“But while real looking, this content is not actually the content of the site we’re protecting.”

Here’s exactly how the system works:

  1. It generates convincing fake pages with scientifically accurate but irrelevant content
  2. Hidden invisible links within these pages lead to more fake content, creating endless loops
  3. All trap content remains completely invisible to human visitors
  4. Bot interactions with these fake pages help improve detection systems
  5. Content is pre-generated rather than created on demand for better performance
  6. Crawlers waste their own resources rather than Cloudflare’s
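The steps above can be sketched in miniature. This toy version generates a trap page whose links are hidden from humans via CSS but followed by a naive crawler’s HTML parser; the URL scheme, filler text, and markup are illustrative assumptions, not Cloudflare’s implementation:

```python
# Toy sketch of a labyrinth trap page: plausible-looking filler text plus
# links that are invisible to humans (CSS-hidden) but followed by naive
# crawlers. All names and markup are illustrative, not Cloudflare's code.
import random

FILLER_FACTS = [
    "The mean density of basalt is roughly 3.0 g/cm3.",
    "Honey bees communicate direction through a waggle dance.",
    "The boiling point of nitrogen is about 77 K.",
]

def trap_page(depth: int, links_per_page: int = 3) -> str:
    """Render one labyrinth page whose hidden links lead one level deeper."""
    body = " ".join(random.choices(FILLER_FACTS, k=5))
    links = "".join(
        f'<a href="/maze/{depth + 1}/{i}" style="display:none">more</a>'
        for i in range(links_per_page)
    )
    return f"<html><body><p>{body}</p>{links}</body></html>"

page = trap_page(depth=0)
print("/maze/1/" in page, "display:none" in page)  # True True
```

Serving pre-generated pages like these is cheap for the defender, and any client that follows the hidden links effectively fingerprints itself as a bot.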

Such tools are needed because bot internet traffic is growing alarmingly.

According to Imperva’s 2024 Threat Research report, bots generated 49.6% of web traffic last year, with malicious bots accounting for a whopping 32% of the total. 

AI crawlers bombard Cloudflare’s network with more than 50 billion requests daily – nearly 1% of all the web traffic it handles – wasting its resources in the process.

These numbers lend credibility to what many had dismissed as the “dead internet theory,” a conspiracy claim that most online content and interaction is artificially generated.

Cloudflare is attempting to support its customers in the cat-and-mouse game between website owners and AI companies. The trap remains completely invisible to human visitors, so they shouldn’t be able to accidentally stumble into the maze. 

As Cloudflare describes: “No real human would go four links deep into a maze of AI-generated nonsense. Any visitor that does is very likely to be a bot, so this gives us a brand-new tool to identify and fingerprint bad bots, which we add to our list of known bad actors.”


AI-generated art cannot be copyrighted, says US Court of Appeals

The bizarre legal saga of whether an AI system can be granted a copyright took another turn this week. 

In a unanimous ruling, the U.S. Court of Appeals for the D.C. Circuit held that works created autonomously by AI are not eligible for copyright protection under current law. 

The three-judge panel affirmed a lower court’s 2023 decision that only works with human authors can be registered with the U.S. Copyright Office.

The case traces back to computer scientist Stephen Thaler’s failed attempt to copyright “A Recent Entrance to Paradise,” an eerie, dreamlike image conjured up in 2012 by Thaler’s AI ‘Creativity Machine.’ 


Above: Stephen Thaler’s “A Recent Entrance to Paradise” was created in 2012.

When Thaler tried to register the work, the Copyright Office flatly rejected his application, contending it “lacks the human authorship necessary to support a copyright claim.”

Thaler sued, insisting the Copyright Office’s “human authorship” requirement had no basis in law. He argued that granting copyrights to AI creations would further the constitutional goal of promoting “the progress of science and useful arts.” 

In 2023, a federal judge sided decisively with the Copyright Office, calling human authorship “a bedrock requirement of copyright.”

“We are approaching new frontiers in copyright as artists put AI in their toolbox,” the judge wrote at the time. “This case, however, is not nearly so complex.”

The appeals court agreed, finding that “authors are at the center of the Copyright Act” and that the law’s plain meaning limits authorship to humans. Thaler says he strongly disagrees with the ruling and plans to appeal.

As AI-generated content proliferates, courts are grappling with mind-bending questions of ownership and rights. 

While this case provides some clarity on wholly autonomous AI art, many questions around human/AI collaborative works remain largely unsettled.
