nikky

ChatGPT now remembers everything you’ve ever told it – Here’s what you need to know

OpenAI has rolled out a major update to ChatGPT’s memory feature that allows the AI to remember everything from your past conversations.

The new feature, announced on X by OpenAI and CEO Sam Altman, enables the chatbot to automatically reference all your previous interactions to deliver more personalized responses without requiring you to repeat information.

Previously, ChatGPT’s memory had to be manually toggled on for specific pieces of information you wanted it to remember. The update expands this function in two key ways:

  1. Saved memories: Details you’ve explicitly asked ChatGPT to remember
  2. Chat history: Insights the AI automatically gathers from all your past conversations

This means ChatGPT can now recall your preferences, interests, recurring topics, and even stylistic choices without being prompted.

For example, if you’ve mentioned being a rock music fan or preferring short, bullet-pointed answers in previous chats, ChatGPT should remember that.

“New conversations naturally build upon what it already knows about you, making interactions feel smoother and uniquely tailored to you,” OpenAI stated in its announcement.

Privacy controls remain available

Unsurprisingly, not everyone wants ChatGPT to know or remember more about them than it already might.

With ChatGPT hoarding a comprehensive record of every conversation you’ve had with it – some of which will inevitably include sensitive information about you, your employer, and more – you’re putting a lot of trust in OpenAI’s ability (and willingness) to keep that data under lock and key.

In the age of data leaks and targeted ads, that’s a leap some may not be ready to take.

Right now, you can opt out of Memory. Altman said: “You can of course opt out of this, or memory all together. and you can use temporary chat if you want to have a conversation that won’t use or affect memory.”

Temporary chat will be useful – ChatGPT’s previous memory feature often got so clogged up that it stopped working properly.

The enhanced memory feature is currently available to ChatGPT Pro subscribers on the mega-costly $200/month tier and will soon roll out to Plus users.

There are exceptions to the roll-out. Altman posted on X: “(Except users in the EEA, UK, Switzerland, Norway, Iceland, and Liechtenstein. Call me, Liechtenstein, let’s work this out!)”

As exciting as ChatGPT’s newfound ability to remember everything we’ve ever told it may be, recent studies suggest that this level of AI personalization could come with serious risks. 

A pair of studies from OpenAI and MIT Media Lab found that frequent, sustained use of ChatGPT may be linked to higher levels of loneliness and emotional dependence among its most loyal users.

The research, which included a randomized controlled trial and an analysis of nearly 40 million ChatGPT interactions, revealed that “power users” who engaged in personal conversations with the chatbot were more likely to view it as a friend and experience greater loneliness compared to those who used it less frequently.

A great feature to some. Another slip towards a Black Mirror episode for others.

The post ChatGPT now remembers everything you’ve ever told it – Here’s what you need to know appeared first on DailyAI.

AI craze mania with AI action figures and turning pets into people

In the ’90s, we collected Pokémon cards; in the 2000s, we all had a weird phase with those rubber charity bracelets; and now, in 2025, we’re turning our pets into strange human beings with AI.

Progress? Possibly.

In the wake of the “Ghibli style” phenomenon that melted OpenAI’s servers and had CEO Sam Altman practically begging people to stop generating images, we’ve hit a new wave of AI crazes taking off on social media. 

Your social media feed has likely already been invaded by action figure versions of your friends and humanized versions of their cats. Welcome to 2025.

Make your own AI action figure

The latest trend sweeping across X, TikTok, and Instagram involves people transforming themselves into action figures complete with packaging. 

Users are generating images of themselves as collectible toys sealed in plastic blister packs with colorful cardboard backings.

“Create a toy of a person based on the photo. Let it be a collectible action figure. The figure should be placed inside a clear blister pack, in front of a backing card,” reads a popular prompt being shared widely.

Some are even creating elaborate backstories for their action figure alter-egos, complete with fictional accessories and “special abilities” listed on the packaging.

 

[Instagram post shared by BoshGram (@boshgram_)]

Some have gone further, using AI tools to animate the figure being removed from the pack.

Pets become people

In probably the weirder of the two trends, people are turning their pets into human form.

One circulated prompt reads: “Transform this animal into a human character, removing all animal features (such as fur, ears, whiskers, muzzle, tail, etc.) while preserving its personality and recognizable traits. Focus on capturing the expression, posture, emotional vibe, and visual identity.”

The results…let’s just say they’re both charming and unsettling.

“Let them be cats… that’s why we love them… they are not people lol,” commented one Instagram user, while another simply wrote, “Hell no.”

According to Creative Bloq’s Natalie Fear, who tried the trend with her own cat, “the results range from the cute to the downright confusing.” 

Fear described her own AI-generated pet-human as a “weird slobby man,” noting that many cat transformations seem to result in “grumpy old men.” Which, to be fair, is probably how most cats would manifest if they suddenly became human.

Despite some disturbing results, the trend has captivated social media users, with one Redditor commenting, “Oh my god this is amazing!” while another lamented, “Oh god, queue the next week of people turning their pets into people posts. SAVE ME.”

What’s next?

Text-to-image has truly entered the public consciousness as AI tools go mobile and become more efficient and easier to use.

Being able to upload photos and ask AI to manipulate them in situ is a key driving force behind these crazes, too – everything you need to build your own action figure is in the palm of your hand. 

So, if you want to give them a try – go ahead. Just don’t blame us when your cat’s human form gives you nightmares.


Power-hungry AI will devour Japan-sized energy supply by 2030

AI is already straining power grids around the world, but according to a new report, we’re only just getting started. 

By 2030, AI data centers will devour almost as much electricity as the entire country of Japan consumes today, according to the latest forecasts from the International Energy Agency (IEA).

Today’s data centers already gulp down about 1.5% of the world’s electricity – that’s roughly 415 terawatt hours every year. The IEA expects this to more than double to nearly 950 TWh by 2030, claiming almost 3% of global electricity.

The specialized hardware running AI systems is the real consumer. Electricity demand for these “accelerated servers” will jump by a stunning 30% each year through 2030, while conventional servers grow at a more modest 9% annually.
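A quick back-of-envelope check of the IEA figures quoted above shows how fast those rates compound (a sketch using only the numbers in this article; real-world values may differ):

```python
# All figures are the IEA numbers quoted in this article.
current_twh = 415      # data center consumption today (TWh/year)
projected_twh = 950    # IEA projection for 2030 (TWh/year)

growth_factor = projected_twh / current_twh
print(f"2030 demand is ~{growth_factor:.1f}x today's")  # ~2.3x: "more than double"

# Compound annual growth over roughly five years to 2030:
accelerated = 1.30 ** 5   # "accelerated servers" growing 30%/year
conventional = 1.09 ** 5  # conventional servers growing 9%/year
print(f"Accelerated servers: ~{accelerated:.1f}x by 2030")   # ~3.7x
print(f"Conventional servers: ~{conventional:.1f}x by 2030") # ~1.5x
```

Five years of 30% growth nearly quadruples demand from AI hardware, while the same period of 9% growth adds only about half again – which is why the accelerated servers dominate the forecast.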

Some data centers already under construction will consume as much power as 2 million average homes, with others already announced set to consume as much as 5 million homes’ worth or more.

Some data centers may consume more power than 4 million homes. Source: IEA.

A very uneven distribution

By 2030, American data centers will consume about 1,200 kilowatt-hours (kWh) per person – which is roughly 10% of what an entire US household uses in a year, and “one order of magnitude higher than any other region in the world,” according to the IEA. Africa, meanwhile, will barely reach 2 kWh per person.
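The household comparison checks out roughly, assuming an average US household uses on the order of 10,800 kWh per year (an assumption based on commonly cited EIA estimates, not a figure from this article):

```python
per_person_kwh = 1_200   # projected US data center use per person (article figure)
household_kwh = 10_800   # assumed average US household annual use (EIA-style estimate)

share = per_person_kwh / household_kwh
print(f"~{share:.0%} of an average household's annual use")  # ~11%, i.e. "roughly 10%"
```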

Regionally, some areas are already feeling the squeeze. In Ireland, data centers now gulp down an incredible 20% of the country’s electricity. Six US states devote more than 10% of their power to data centers, with Virginia leading at 25%.

Can clean energy keep up?

Despite fears that AI’s appetite might effectively sabotage climate goals, the IEA believes these concerns are “overstated.” 

Nearly half the additional electricity needed for data centers through 2030 should come from renewable sources, though fossil fuels will still play a leading role.

The energy mix varies dramatically by region. In China, coal powers nearly 70% of data centers today. In the US, natural gas leads at 40%, followed by renewables at 24%.

Renewables will do plenty of heavy lifting for AI. Source: IEA.

Looking ahead, small modular nuclear reactors (SMRs) may become increasingly important for powering AI after 2030. 

Tech companies are already planning to finance more than 20 gigawatts of SMR capacity – a sign they’re thinking about long-term energy security.

Efficiency vs. expansion

The IEA outlines several possible futures for AI’s energy footprint. 

In their “Lift-Off” scenario with accelerated AI adoption, global data center electricity could exceed 1,700 TWh by 2035 – nearly 45% higher than their base projection.

The IEA’s “Lift-Off” scenario signals massive energy consumption. Source: IEA.

Alternatively, their “High Efficiency” scenario suggests that improvements in software, hardware, and infrastructure could cut electricity needs by more than 15% while delivering the same AI capacity and performance.

In sum, the next decade will test our ability to develop AI that’s both powerful and energy-efficient.

Whether the tech industry can solve this puzzle may impact not just the future of AI, but also its role in addressing, rather than worsening, the global climate crisis.


AI tariff report: Everything you need to know

In a week that saw nearly $10 trillion wiped from global markets, AI finds itself caught up in the most intense economic storm of living memory. 

Since Trump presented his cardboard placard of tariffs last week, the market reaction has been brutal, and tech stocks are bearing the brunt. 

Apple shares have fallen nearly 20% since the tariff announcement, with the company heavily exposed to Chinese manufacturing. Tesla dropped another 5% on Monday alone, and NVIDIA fell similarly, now trading 25% lower than at the start of the year.

AI’s spectacular rise has been built on the foundation of a borderless economy. The industry thrives on global supply chains – Taiwanese chips, Chinese assembly, European research centers, and American venture capital – all working in relative harmony. 

Taiwan has been hit particularly hard, slapped with a 32% tariff that sent its stock market into its worst nosedive ever, plunging nearly 10% in days. By Tuesday, Taiwan’s Foreign Minister Lin Chia-lung was scrambling to arrange negotiations with the US, telling reporters they’re “ready for talks at any time.”

It’s not just about semiconductors, which received a temporary reprieve from tariffs. The massive data centers powering ChatGPT and other AI services rely on a global supply network for everything from cooling systems to power equipment to construction materials – nearly all of which are now subject to tariffs.

Non-semiconductor components represent up to one-third of data center costs, Gil Luria of D.A. Davidson & Co. explained to Fortune, adding ominously that the semiconductor exemption “was not meant to be permanent.”

China, meanwhile, has retaliated with its own tariffs while its state media produces AI-generated videos mocking Trump’s economic policies. 

Did ChatGPT design Trump’s tariffs?

Here’s where the story takes a bizarre turn. Shortly after Trump unveiled his tariffs, economist James Surowiecki noticed something peculiar: the formula behind the tariff calculations looked strangely familiar.

As it turns out, if you ask ChatGPT, Claude, Gemini, or Grok for “an easy way to solve trade deficits,” they all recommend essentially the same method: divide a country’s trade deficit with the US by its exports to the US. That’s remarkably similar to what the White House appears to have done.

“This is extraordinary nonsense,” Surowiecki noted, with other economists quickly piling on criticism of what appears to be overly primitive calculations. 
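The calculation Surowiecki describes can be sketched in a few lines. This is only an illustration of the method as the article states it; the country figures below are hypothetical:

```python
def naive_tariff_rate(trade_deficit: float, imports_from_country: float) -> float:
    """The formula the chatbots reportedly suggest: a country's trade
    deficit with the US divided by its exports to the US (i.e. US
    imports from that country)."""
    return trade_deficit / imports_from_country

# Hypothetical example: a $50B deficit against $100B of imports
rate = naive_tariff_rate(50, 100)
print(f"implied 'reciprocal' tariff: {rate:.0%}")  # 50%
```

Part of the economists’ criticism is visible right in the formula: it treats a bilateral trade deficit as if it were caused entirely by the other country’s trade barriers, which it generally isn’t.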

Shifting the blame to China

As markets tanked, Treasury Secretary Scott Bessent told Tucker Carlson it wasn’t the tariffs causing the market crash, but China’s DeepSeek AI platform.

“For everyone who thinks these market declines are all based on the President’s economic policies, I can tell you that this market decline started with the Chinese AI announcement of DeepSeek,” Bessent claimed. “It’s more a Mag 7 problem, not a MAGA problem.”

When Bessent refers to the “Mag 7,” he means the “Magnificent Seven”: Apple, Microsoft, Alphabet, Amazon, Meta, Nvidia, and Tesla, the stocks that have collectively driven much of the market’s recent gains.

The timeline, however, tells a different story. Global markets were relatively stable until Trump’s tariff announcement on Wednesday, after which they immediately plunged across the board. DeepSeek’s latest version was released in January, months before the current crisis began, and markets showed no comparable reaction at that time.

Market figures from other industries also undermine Bessent’s claim. The Dow Jones dropped 1,679 points in a single day following Trump’s tariff announcement – the largest single-day point drop since 2020. The timing and magnitude of the decline leave little doubt about the primary catalyst.

What comes next?

Despite the market meltdown, Trump shows no signs of backing down. When asked about pausing the tariffs, he said bluntly, “We’re not looking at that.”

Not everyone believes AI will suffer long-term damage, with some claiming AI is virtually ‘tariff-proof’ due to its inherently borderless nature and long-term strategic importance. 

However, today’s frontier models run on warehouse-sized data centers filled with specialized chips, cooling systems, and power equipment sourced from all over the world. Plus, despite years of talk about reshoring semiconductor production, the US remains heavily dependent on foreign chip manufacturing. 

The CHIPS Act was supposed to change that, but new domestic fabs are still years away from meaningful production, a warning sign that re-shoring is famously slow and painful.

Either way, for now, the AI industry – and virtually every other – watches and waits, hoping that market turmoil will be temporary and continuity in some form will eventually prevail.


It’s been a massive week for the AI copyright debate

It’s rare for legal reports, government consultations, and anime-styled selfies to feel like part of the same story – but over the last few days, they have.

On Tuesday, the U.S. Copyright Office released Part Two of its long-awaited report on the copyrightability of AI-generated works.

Its core message? Human creativity remains the foundation of US copyright law – and AI-generated material, on its own, doesn’t qualify.

The Office was unambiguous. Prompts alone, no matter how detailed or imaginative, are not enough. What matters is authorship, and authorship must involve human originality. 

If a person curates, edits, or meaningfully transforms an AI output, that contribution may be protected. But the machine’s output itself? No.

In practice, this means that someone who generates an image using a text prompt likely doesn’t own it in the traditional sense.

The report outlines three narrow scenarios in which copyright might apply: when AI is used assistively, when original human work is perceptibly incorporated, or when a human selects and arranges AI-generated elements in a creative way.

Sounds generous in some ways, but the fact remains that courts have consistently rejected copyright claims over purely machine-made works, and this report affirms that position. 

The Copyright Office likens prompts to giving instructions to a photographer: they might influence the result, but they don’t rise to the level of authorship. 

But just as that line was being redrawn in Washington, OpenAI was urging lawmakers in the UK to take a different path.

On Wednesday, the company submitted its formal response to the UK government’s AI and copyright consultation. 

OpenAI argues for a “broad text and data mining exception” – a legal framework that would allow AI developers to train on publicly available data without first seeking permission from rights holders. 

OpenAI’s proposition to the UK government. Source: OpenAI.

The idea is to create a pro-innovation environment that would attract AI investment and development. In effect, let the machines read everything, unless someone explicitly opts out. It’s a stance that puts OpenAI firmly at odds with many in the creative sector, where alarm bells have been ringing for months.

Artists, authors, and publishers see the proposed exception as a backdoor license to scrape the web, turning years of human work into fuel for algorithmic engines.

Critics argue that even an opt-out model places the burden on creators, not companies, and risks eroding the already fragile economics of professional content.

Chucked into this copyright melting pot was the release of a new study this week from the AI Disclosures Project, which claims that OpenAI’s newest model, GPT-4o, shows a suspiciously high recognition of paywalled content.

And all of this came on the heels of a much more public – and wildly popular – example of AI’s blurred boundaries: the Studio Ghibli trend.

Over the weekend, OpenAI’s image generator, newly improved in ChatGPT, went viral for its ability to transform selfies into Ghibli scenes – despite the studio’s co-founder publicly stating he hated AI back in 2016.

A career distilled into a prompt. Or is AI creativity truly blooming in the public consciousness?

None of this is happening in isolation. Copyright law, historically slow-moving and text-bound, is being forced to change and adapt. 

Governments, regulators, tech companies, and creators are all scrambling to define the rules – or bend them – to get the better of this debate. 


OpenAI shut down the Ghibli craze – now users are turning to open source

When OpenAI released its latest image generator a few days ago, they probably didn’t expect it to bring the internet to its knees.

But that’s more or less what happened, as millions of people rushed to transform their pets, selfies, and favorite memes into something that looked like it came straight out of a Studio Ghibli movie. All you needed was to add a prompt like “in the style of Studio Ghibli.”

For anyone unfamiliar, Studio Ghibli is the legendary Japanese animation studio behind Spirited Away, Kiki’s Delivery Service, and Princess Mononoke.

Its soft, hand-drawn style and magical settings are instantly recognizable – and surprisingly easy to mimic using OpenAI’s new model. Social media is filled with anime versions of people’s cats, family portraits, and inside jokes.

It took many by surprise. Normally, OpenAI’s tools resist prompts that name an artist or designer, since honoring them shows, more or less unequivocally, that copyrighted imagery is rife in training datasets.

For a while, though, that didn’t seem to matter anymore. Even OpenAI CEO Sam Altman changed his own profile photo to a Ghibli-style image and posted on X that, at one point, over a million people had signed up for ChatGPT within an hour.

Then, quietly, it stopped working for many.

Users started to notice that prompts referencing Ghibli, or even trying to describe the style more indirectly, no longer returned the same results.

Some prompts were rejected altogether. Others just produced generic art that looked nothing like what had been going viral the day before. Many now speculate that the model was updated – that OpenAI had rolled out copyright restrictions behind the scenes.

OpenAI later said that, despite spurring on the trend, they were throttling Ghibli-style images by taking a “conservative approach,” refusing any attempt to create images in the likeness of a living artist.

This sort of thing isn’t new. It happened with DALL·E as well. A model launches with stacks of flexibility and loose guardrails, catches fire online, then gets quietly dialed back, often in response to legal concerns or policy updates.

The original version of DALL·E could do things that were later disabled. The same seems to be happening here.

One Reddit commenter explained:

“The problem is it actually goes like this: Closed model releases which is much better than anything we have. Closed model gets heavily nerfed. Open source model comes out that’s getting close to the nerfed version.”

OpenAI’s sudden retreat has left many users looking elsewhere, and some are turning to open-source models such as Flux, developed by Black Forest Labs, a startup founded by former Stability AI researchers.

Unlike OpenAI’s tools, Flux and other open-source text-to-image models don’t apply the same server-side restrictions (or at least, theirs are looser and limited to illicit or profane material), so they haven’t filtered out prompts referencing Ghibli-style imagery.

That freedom doesn’t mean open-source tools avoid ethical issues, of course. Models like Flux are often trained on the same kind of scraped data that fuels debates around style, consent, and copyright. 

The difference is, they aren’t subject to corporate risk management – meaning the creative freedom is wider, but so is the grey area.
