
Power-hungry AI will devour Japan-sized energy supply by 2030

AI is already straining power grids around the world, but according to a new report, we’re only just getting started. 

By 2030, AI data centers will devour almost as much electricity as the entire country of Japan consumes today, according to the latest forecasts from the International Energy Agency (IEA).

Today’s data centers already gulp down about 1.5% of the world’s electricity – that’s roughly 415 terawatt-hours (TWh) every year. The IEA expects this to more than double to nearly 950 TWh by 2030, claiming almost 3% of global electricity.
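
For the curious, the arithmetic roughly checks out. Here’s a quick back-of-envelope sketch in Python, using only the figures quoted above (the implied global total is our own inference, not an IEA number):

```python
# Back-of-envelope check of the IEA figures quoted above.
dc_today_twh = 415       # data center demand today (~1.5% of global supply)
dc_2030_twh = 950        # IEA projection for 2030

# Implied global electricity supply today, if 415 TWh is 1.5% of it.
global_today_twh = dc_today_twh / 0.015           # ~27,700 TWh

growth = dc_2030_twh / dc_today_twh               # ~2.3x ("more than double")
share_vs_today = dc_2030_twh / global_today_twh   # ~3.4%

print(f"Implied global supply today: {global_today_twh:,.0f} TWh")
print(f"Growth to 2030: {growth:.1f}x")
print(f"2030 share vs. today's supply: {share_vs_today:.1%}")
```

The small gap between that 3.4% and the IEA’s “almost 3%” is presumably down to global generation itself growing between now and 2030.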

The specialized hardware running AI systems is the real consumer. Electricity demand for these “accelerated servers” will jump by a stunning 30% each year through 2030, while conventional servers grow at a more modest 9% annually.

Some data centers already under construction will consume as much power as 2 million average homes, while others already announced are set to draw as much as 5 million homes’ worth or more. 

Some data centers may consume more power than 4 million homes. Source: IEA.

A very uneven distribution

By 2030, American data centers will consume about 1,200 kilowatt-hours (kWh) per person – which is roughly 10% of what an entire US household uses in a year, and “one order of magnitude higher than any other region in the world,” according to the IEA. Africa, meanwhile, will barely reach 2 kWh per person.
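
A quick sanity check on that per-capita claim (the household figure below is inferred from the “10%” ratio quoted above, not taken from the IEA):

```python
# If 1,200 kWh per person is "roughly 10%" of what a US household uses
# in a year, the implied average household figure is about 12,000 kWh --
# in the same ballpark as the ~10,500-11,000 kWh/year the EIA typically
# reports for US residential customers.
per_capita_kwh = 1_200    # projected US data center use per person, 2030
africa_kwh = 2            # projected African per-capita figure

implied_household_kwh = per_capita_kwh / 0.10
print(f"Implied US household usage: {implied_household_kwh:,.0f} kWh/year")
print(f"US-to-Africa per-capita ratio: {per_capita_kwh / africa_kwh:,.0f}x")
```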

Regionally, some areas are already feeling the squeeze. In Ireland, data centers now gulp down an incredible 20% of the country’s electricity. Six US states devote more than 10% of their power to data centers, with Virginia leading at 25%.

Can clean energy keep up?

Despite fears that AI’s appetite might effectively sabotage climate goals, the IEA believes these concerns are “overstated.” 

Nearly half the additional electricity needed for data centers through 2030 should come from renewable sources, though fossil fuels will still play a leading role.

The energy mix varies dramatically by region. In China, coal powers nearly 70% of data centers today. In the US, natural gas leads at 40%, followed by renewables at 24%.

Renewables will do plenty of heavy lifting for AI. Source: IEA.

Looking ahead, small modular nuclear reactors (SMRs) may become increasingly important for powering AI after 2030. 

Tech companies are already planning to finance more than 20 gigawatts of SMR capacity – a sign they’re thinking about long-term energy security.

Efficiency vs. expansion

The IEA outlines several possible futures for AI’s energy footprint. 

In their “Lift-Off” scenario with accelerated AI adoption, global data center electricity could exceed 1,700 TWh by 2035 – nearly 45% higher than their base projection.

The IEA’s “Lift-Off” scenario signals massive energy consumption. Source: IEA.

Alternatively, their “High Efficiency” scenario suggests that improvements in software, hardware, and infrastructure could cut electricity needs by more than 15% while delivering the same AI capacity and performance.
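
The report’s 2035 base case isn’t quoted directly above, but it can be backed out from those percentages. A minimal sketch, assuming the figures as stated:

```python
# Backing out the implied 2035 base case from the two scenarios above.
lift_off_twh = 1_700         # "Lift-Off" scenario, 2035
lift_off_premium = 0.45      # "nearly 45% higher" than the base projection
efficiency_savings = 0.15    # "High Efficiency" cuts needs by "more than 15%"

base_2035_twh = lift_off_twh / (1 + lift_off_premium)    # ~1,170 TWh
high_eff_twh = base_2035_twh * (1 - efficiency_savings)  # ~1,000 TWh

print(f"Implied base case, 2035: {base_2035_twh:,.0f} TWh")
print(f"High Efficiency, 2035:  {high_eff_twh:,.0f} TWh")
print(f"Lift-Off vs. High Efficiency: {lift_off_twh / high_eff_twh:.1f}x")
```

In other words, the gap between the IEA’s most power-hungry and most frugal futures works out to roughly 700 TWh – about a 1.7x spread.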

In sum, the next decade will test our ability to develop AI that’s both powerful and energy-efficient.

Whether the tech industry can solve this puzzle may impact not just the future of AI, but also its role in addressing, rather than worsening, the global climate crisis.


AI tariff report: Everything you need to know

In a week that saw nearly $10 trillion wiped from global markets, AI finds itself caught up in the most intense economic storm in living memory. 

Since Trump presented his cardboard placard of tariffs last week, the market reaction has been brutal, and tech stocks are bearing the brunt. 

Apple shares have fallen nearly 20% since the tariff announcement, with the company heavily exposed to Chinese manufacturing. Tesla dropped another 5% on Monday alone, as did NVIDIA, which is now trading 25% lower than at the start of the year. 

AI’s spectacular rise has been built on the foundation of a borderless economy. The industry thrives on global supply chains – Taiwanese chips, Chinese assembly, European research centers, and American venture capital – all working in relative harmony. 

Taiwan has been hit particularly hard, slapped with a 32% tariff that sent its stock market into its worst nosedive ever, plunging nearly 10% in days. By Tuesday, Taiwan’s Foreign Minister Lin Chia-lung was scrambling to arrange negotiations with the US, telling reporters they’re “ready for talks at any time.”

It’s not just about semiconductors, which received a temporary reprieve from tariffs. The massive data centers powering ChatGPT and other AI services rely on a global supply network for everything from cooling systems to power equipment to construction materials – nearly all of which is now subject to tariffs.

Non-semiconductor components represent up to one-third of data center costs, Gil Luria of D.A. Davidson & Co. explained to Fortune, adding ominously that the semiconductor exemption “was not meant to be permanent.”

China, meanwhile, has retaliated with its own tariffs while its state media produces AI-generated videos mocking Trump’s economic policies. 

Did ChatGPT design Trump’s tariffs?

Here’s where the story takes a bizarre turn. Shortly after Trump unveiled his tariffs, economist James Surowiecki noticed something peculiar: the formula behind the tariff calculations looked strangely familiar.

As it turns out, if you ask ChatGPT, Claude, Gemini, or Grok for “an easy way to solve trade deficits,” they all recommend essentially the same method: divide a country’s trade deficit with the US by its exports to the US. That’s remarkably similar to what the White House appears to have done.
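
In code, the method amounts to a one-liner. Here’s a minimal sketch of the heuristic as described (the function name and dollar figures are ours, purely for illustration):

```python
def implied_tariff_rate(us_imports: float, us_exports: float) -> float:
    """The 'deficit divided by exports to the US' heuristic described above.

    us_imports: what the US buys from the country
                (i.e., that country's exports to the US)
    us_exports: what the US sells to the country
    """
    deficit = us_imports - us_exports
    return deficit / us_imports

# Hypothetical country: the US imports $100B from it and exports $40B to it.
print(f"Implied 'reciprocal' tariff: {implied_tariff_rate(100e9, 40e9):.0%}")
# -> Implied 'reciprocal' tariff: 60%
```

Note that the ratio says nothing about a country’s actual tariff barriers – which is exactly the economists’ complaint.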

“This is extraordinary nonsense,” Surowiecki noted, with other economists quickly piling on criticism of what appears to be overly primitive calculations. 

Shifting the blame to China

As markets tanked, Treasury Secretary Scott Bessent told Tucker Carlson it wasn’t the tariffs causing the market crash, but China’s DeepSeek AI platform.

“For everyone who thinks these market declines are all based on the President’s economic policies, I can tell you that this market decline started with the Chinese AI announcement of DeepSeek,” Bessent claimed. “It’s more a Mag 7 problem, not a MAGA problem.”

When Bessent refers to the “Mag 7,” he’s talking about the “Magnificent Seven”: Apple, Microsoft, Alphabet, Amazon, Meta, Nvidia, and Tesla – the stocks that have collectively driven much of the market’s recent gains.

The timeline, however, tells a different story. Global markets were relatively stable until Trump’s tariff announcement on Wednesday, after which they immediately plunged across the board. DeepSeek’s latest version was released in January, months before the current crisis began, and markets showed no comparable reaction at that time.

Market figures from other industries also undermine Bessent’s claim. The Dow Jones dropped 1,679 points in a single day following Trump’s tariff announcement – the largest single-day point drop since 2020. The timing and magnitude of the decline leave little doubt about the primary catalyst.

What comes next?

Despite the market meltdown, Trump shows no signs of backing down. When asked about pausing the tariffs, he said bluntly, “We’re not looking at that.”

Not everyone believes AI will suffer long-term damage, with some claiming AI is virtually ‘tariff-proof’ due to its inherently borderless nature and long-term strategic importance. 

However, today’s frontier models run on warehouse-sized data centers filled with specialized chips, cooling systems, and power equipment sourced from all over the world. Plus, despite years of talk about reshoring semiconductor production, the US remains heavily dependent on foreign chip manufacturing. 

The CHIPS Act was supposed to change that, but new domestic fabs are still years away from meaningful production – a reminder that reshoring is famously slow and painful.

Either way, for now, the AI industry – and virtually every other – watches and waits, hoping that market turmoil will be temporary and continuity in some form will eventually prevail.


It’s been a massive week for the AI copyright debate

It’s rare for legal reports, government consultations, and anime-styled selfies to feel part of the same story – but over the last few days, they have. 

On Tuesday, the U.S. Copyright Office released Part Two of its long-awaited report on the copyrightability of AI-generated works.

Its core message? Human creativity remains the foundation of US copyright law – and AI-generated material, on its own, doesn’t qualify.

The Office was unambiguous. Prompts alone, no matter how detailed or imaginative, are not enough. What matters is authorship, and authorship must involve human originality. 

If a person curates, edits, or meaningfully transforms an AI output, that contribution may be protected. But the machine’s output itself? No.

In practice, this means that someone who generates an image using a text prompt likely doesn’t own it in the traditional sense.

The report outlines three narrow scenarios in which copyright might apply: when AI is used assistively, when original human work is perceptibly incorporated, or when a human selects and arranges AI-generated elements in a creative way.

Sounds generous in some ways, but the fact remains that courts have consistently rejected copyright claims over purely machine-made works, and this report affirms that position. 

The Copyright Office likens prompts to giving instructions to a photographer: they might influence the result, but they don’t rise to the level of authorship. 

But just as that line was being redrawn in Washington, OpenAI was urging lawmakers in the UK to take a different path.

On Wednesday, the company submitted its formal response to the UK government’s AI and copyright consultation. 

OpenAI argues for a “broad text and data mining exception” – a legal framework that would allow AI developers to train on publicly available data without first seeking permission from rights holders. 

OpenAI’s proposal to the UK government. Source: OpenAI.

The idea is to create a pro-innovation environment that would attract AI investment and development. In effect, let the machines read everything, unless someone explicitly opts out. It’s a stance that puts OpenAI firmly at odds with many in the creative sector, where alarm bells have been ringing for months.

Artists, authors, and publishers see the proposed exception as a backdoor license to scrape the web, turning years of human work into fuel for algorithmic engines.

Critics argue that even an opt-out model places the burden on creators, not companies, and risks eroding the already fragile economics of professional content.

Chucked into this copyright melting pot was a new study released this week by the AI Disclosures Project, which claims that OpenAI’s newest model, GPT-4o, shows suspiciously high recognition of paywalled content.

And all of this came on the heels of a much more public – and wildly popular – example of AI’s blurred boundaries: the Studio Ghibli trend.

Over the weekend, OpenAI’s image generator, newly improved in ChatGPT, went viral for its ability to transform selfies into Ghibli scenes – despite the studio’s co-founder publicly stating he hated AI back in 2016.

A career distilled into a prompt. Or is AI creativity truly blooming in the public consciousness?

None of this is happening in isolation. Copyright law, historically slow-moving and text-bound, is being forced to change and adapt. 

Governments, regulators, tech companies, and creators are all scrambling to define the rules – or bend them – to get the better of this debate. 


OpenAI shut down the Ghibli craze – now users are turning to open source

When OpenAI released its latest image generator a few days ago, it probably didn’t expect the tool to bring the internet to its knees.

But that’s more or less what happened, as millions of people rushed to transform their pets, selfies, and favorite memes into something that looked like it came straight out of a Studio Ghibli movie. All you needed was to add a prompt like “in the style of Studio Ghibli.”

For anyone unfamiliar, Studio Ghibli is the legendary Japanese animation studio behind Spirited Away, Kiki’s Delivery Service, and Princess Mononoke.

Its soft, hand-drawn style and magical settings are instantly recognizable – and surprisingly easy to mimic using OpenAI’s new model. Social media is filled with anime versions of people’s cats, family portraits, and inside jokes.

It took many by surprise. Normally, OpenAI’s tools resist prompts that name a specific artist or designer, since honoring them demonstrates, more or less unequivocally, that copyrighted imagery is rife in training datasets.

For a while, though, that didn’t seem to matter anymore. OpenAI CEO Sam Altman even changed his own profile photo to a Ghibli-style image and posted on X that, at one point, over a million people had signed up for ChatGPT within a single hour.

Then, quietly, it stopped working for many.

Users started to notice that prompts referencing Ghibli, or even trying to describe the style more indirectly, no longer returned the same results.

Some prompts were rejected altogether. Others just produced generic art that looked nothing like what had been going viral the day before. Many now speculate that the model was updated – that OpenAI had rolled out copyright restrictions behind the scenes.

OpenAI later said that, despite spurring on the trend, they were throttling Ghibli-style images by taking a “conservative approach,” refusing any attempt to create images in the likeness of a living artist.

This sort of thing isn’t new. It happened with DALL·E as well. A model launches with stacks of flexibility and loose guardrails, catches fire online, then gets quietly dialed back, often in response to legal concerns or policy updates.

The original version of DALL·E could do things that were later disabled. The same seems to be happening here.

One Reddit commenter explained:

“The problem is it actually goes like this: Closed model releases which is much better than anything we have. Closed model gets heavily nerfed. Open source model comes out that’s getting close to the nerfed version.”

OpenAI’s sudden retreat has left many users looking elsewhere, and some are turning to open-source models such as Flux, developed by Black Forest Labs, a startup founded by former Stability AI researchers.

Unlike OpenAI’s tools, Flux and other open-source text-to-image models don’t apply server-side restrictions (or at least, theirs are looser and limited to illicit or profane material). So they haven’t filtered out prompts referencing Ghibli-style imagery.

That freedom doesn’t mean open-source tools avoid ethical issues, of course. Models like Flux are often trained on the same kind of scraped data that fuels debates around style, consent, and copyright. 

The difference is, they aren’t subject to corporate risk management – meaning the creative freedom is wider, but so is the grey area.


Bill Gates: AI will replace most human jobs within a decade

In a series of recent interviews, Microsoft co-founder Bill Gates made a bold prediction: within the next 10 years, AI will render humans largely obsolete in the workplace. 

Gates believes that as AI rapidly advances, it will take over “most things” currently done by people, including key roles in medicine and education.

“With AI, over the next decade, that will become free, commonplace — great medical advice, great tutoring,” Gates said during an appearance on NBC’s “The Tonight Show” with Jimmy Fallon.

While he acknowledged that human expertise in fields like healthcare and teaching remains “rare” today, Gates envisions a near future where AI will democratize access to top-tier knowledge and skills.

Gates also elaborated on this vision of “free intelligence” in a conversation last month with Arthur Brooks, a Harvard professor and happiness expert. 

He described an era where AI permeates daily life, transforming healthcare, education, and beyond. “It’s very profound and even a little bit scary — because it’s happening very quickly, and there is no upper bound,” Gates said. 

The billionaire’s comments have reignited the long-running debate over AI’s impact on the workforce. 

Some experts believe AI will primarily augment human labor, boosting efficiency and economic growth. OpenAI CEO Sam Altman has similarly suggested that AI will be to education what the calculator was to math. 

NVIDIA CEO Jensen Huang similarly declared, “While some worry that AI may take their jobs, someone who is expert with AI will.”

But others, like Microsoft AI CEO Mustafa Suleyman, warn of disruption.

In his 2023 book “The Coming Wave,” Suleyman wrote that AI tools “will only temporarily augment human intelligence” before ultimately replacing many jobs. 

“They will make us smarter and more efficient for a time, and will unlock enormous amounts of economic growth, but they are fundamentally labor replacing,” he argued.

Despite the much-discussed downsides of AI, Gates remains confident in AI’s societal benefits, from medical breakthroughs to climate solutions to universal access to quality education. 

He also said, if he were to launch a new venture today, it would be “AI-centric.”

“Today, somebody could raise billions of dollars for a new AI company [that’s just] a few sketch ideas,” he said – which is true, given that many generative AI startups have achieved unicorn status without a public product. 

As for which human endeavors might prove irreplaceable in a robot-dominated future, Gates offered a few predictions to Fallon. “There will be some things we reserve for ourselves,” he said, suggesting that humans would still prefer to watch other humans play sports, for instance. 

“But in terms of making things and moving things and growing food, over time those will be basically solved problems.”

Gates’ vision of a world where human expertise is largely obsolete within a decade may seem jarring – even dystopian – to some.

However, for many in the tech industry, it’s simply the inevitable endpoint of a revolution decades in the making.


AI stirs up trouble in the science peer review process

Scientific publishing is confronting an increasingly provocative issue: what do you do about AI in peer review? 

Ecologist Timothée Poisot recently received a review that was clearly generated by ChatGPT. The document had the following telltale string of words attached: “Here is a revised version of your review with improved clarity and structure.” 

Poisot was incensed. “I submit a manuscript for review in the hope of getting comments from my peers,” he fumed in a blog post. “If this assumption is not met, the entire social contract of peer review is gone.”

Poisot’s experience is not an isolated incident. A recent study published in Nature found that up to 17% of reviews for AI conference papers in 2023-24 showed signs of substantial modification by language models.

And in a separate Nature survey, nearly one in five researchers admitted to using AI to speed up and ease the peer review process.

We’ve also seen a few absurd cases of what happens when AI-generated content slips through the peer review process, which is designed to uphold the quality of research. 

In 2024, a paper published in a Frontiers journal, which explored some highly complex cell signaling pathways, was found to contain bizarre, nonsensical diagrams generated by the AI art tool Midjourney. 

One image depicted a deformed rat, while others were just random swirls and squiggles, filled with gibberish text.

This nonsensical AI-generated diagram appeared in a peer-reviewed Frontiers paper in 2024. Source: Frontiers.

Commenters on Twitter were aghast that such obviously flawed figures made it through peer review. “Erm, how did Figure 1 get past a peer reviewer?!” one asked. 

In essence, there are two risks: a) peer reviewers using AI to review content, and b) AI-generated content slipping through the entire peer review process. 

Publishers are responding to the issues. Elsevier has banned generative AI in peer review outright. Wiley and Springer Nature allow “limited use” with disclosure. A few, like the American Institute of Physics, are gingerly piloting AI tools to supplement – but not supplant – human feedback.

However, gen AI’s allure is strong, and some see benefits if it’s applied judiciously. A Stanford study found that 40% of scientists felt ChatGPT reviews of their work could be as helpful as human ones, while 20% found them more helpful.

Researchers often have positive reactions to AI-generated peer reviews. Source: Nature.

Academia has revolved around human input for millennia, though, so the resistance is strong. “Not combating automated reviews means we have given up,” Poisot wrote.

The whole point of peer review, many argue, is considered feedback from fellow experts – not an algorithmic rubber stamp.
