
Apple’s AI Promises Just Got Exposed — Here’s What They’re Not Telling You

Apple has removed the “Available Now” label from its Apple Intelligence webpage after the National Advertising Division (NAD) deemed the claim misleading.

The NAD found that features like Priority Notifications, Genmoji, Image Playground, and ChatGPT integration were not fully available at the iPhone 16 launch, contrary to Apple’s marketing.

Why Apple’s words are under the microscope

This move underscores the importance of accurate advertising in the tech industry, especially as companies race to showcase AI advancements. Misleading claims can erode consumer trust and invite regulatory scrutiny.

Apple’s AI initiatives have faced multiple challenges:

  • Privacy Concerns: The National Legal and Policy Center accused Apple of outsourcing data collection to partners like OpenAI and Meta, potentially compromising user privacy.
  • AI Missteps: Apple’s AI-generated news summaries have been criticized for inaccuracies, leading to temporary suspension of the feature.
  • Legal Challenges: A federal lawsuit alleges that Apple misled consumers about the availability of AI features at the iPhone 16 launch, potentially violating false advertising laws.

Photo by Ralph Olazo on Unsplash


“The Company pretends to be privacy friendly, but the monetization potential of its massive user base is too large to pass up, so it outsources its unethical practices to another company in exchange for large fees.” – Luke Perlot, NLPC

What’s next

Apple is working to balance AI innovation with its longstanding commitment to user privacy.

The company plans to use synthetic data and differential privacy techniques to improve AI models without compromising personal information.
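“Differential privacy” is worth a quick unpacking: the idea is to add calibrated random noise to aggregate statistics so that nothing about any single user can be inferred from the output. Apple hasn’t published the details of this rollout, so what follows is only a minimal sketch of the classic Laplace mechanism – the function name and numbers are illustrative, not Apple’s:

```python
import numpy as np

def private_count(true_count: int, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a count with epsilon-differential privacy (Laplace mechanism).

    Adding or removing one user changes the count by at most `sensitivity`,
    so noise with scale sensitivity/epsilon statistically hides any individual.
    """
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# e.g. report how many users used a feature without exposing any one of them
print(private_count(true_count=10_432, epsilon=0.5))
```

Smaller epsilon means more noise and stronger privacy; the trick is picking a value that keeps the aggregate statistics useful for training.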

The bottom line

As Apple navigates the complexities of AI development, transparency and user trust remain paramount.

The company’s recent challenges highlight the need for clear communication and ethical practices in the rapidly evolving AI landscape.

 


AI etiquette comes with a price tag, says Altman, but is it worth it?

OpenAI CEO Sam Altman has revealed that merely being polite to ChatGPT might be costing “tens of millions of dollars” in extra computing resources.

When asked how much money OpenAI has lost in electricity costs from people saying “please” and “thank you” to its AI models, Altman responded: “tens of millions of dollars well spent. You never know.”

Every word sent to ChatGPT – even common courtesies – requires additional processing power, and seemingly small interactions add up quickly across billions of conversations, driving up both electricity costs and server load.
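For a sense of how that reaches “tens of millions of dollars”, here’s a rough back-of-envelope sketch. Every number in it is an assumption for illustration – OpenAI hasn’t published per-token costs or message volumes:

```python
# Back-of-envelope: what politeness might cost at ChatGPT scale.
# Every figure below is an assumption, not an OpenAI number.
courtesy_exchanges_per_day = 200_000_000  # standalone "thanks!" style messages
tokens_per_exchange = 100                 # even a "thank you" triggers a full reply
cost_per_million_tokens = 2.50            # assumed blended serving cost, $/1M tokens

daily = courtesy_exchanges_per_day * tokens_per_exchange / 1e6 * cost_per_million_tokens
print(f"~${daily:,.0f} per day, ~${daily * 365 / 1e6:.0f}M per year")
# ~$50,000 per day, ~$18M per year -- the right order of magnitude
# for Altman's "tens of millions of dollars".
```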

AI models rely heavily on colossal data centers that already account for about 2% of global electricity consumption, and this is expected to climb, with AI potentially consuming as much energy as an industrialized country such as Japan.

According to a Washington Post investigation conducted in collaboration with University of California researchers, generating a single 100-word email using AI requires 0.14 kilowatt-hours of electricity – enough to power 14 LED lights for an hour. At scale, these small interactions create a massive energy footprint.

The water usage is equally striking. UC Riverside researchers found that using GPT-4 to generate 100 words consumes up to three bottles of water for cooling the servers, and even a simple three-word response like “You are welcome” uses about 1.5 ounces of water.
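Those per-message figures are easy to sanity-check and scale. Assuming a typical 10 W LED bulb, the lighting comparison falls straight out of the units, and scaling the water figure to a billion replies is a single multiplication:

```python
# Sanity-check the reported figures and scale them up.
EMAIL_KWH = 0.14                 # reported energy for one 100-word AI email
LED_WATTS = 10                   # assumed typical LED bulb draw
print(EMAIL_KWH * 1000 / LED_WATTS)   # 14.0 -> "14 LED lights for an hour"

WATER_OZ_PER_SHORT_REPLY = 1.5   # reported for a three-word response
replies = 1_000_000_000
gallons = WATER_OZ_PER_SHORT_REPLY * replies / 128  # 128 fl oz per US gallon
print(f"{gallons / 1e6:.1f}M gallons per billion short replies")  # ~11.7M
```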

Why are users being polite to machines?

Evidence suggests people are genuinely attempting to practice sound AI etiquette. A late 2023 survey found that 67% of US respondents reported being nice to their chatbots.

Of those practicing digital politeness, 55% said they do it “because it’s the right thing to do,” while a more cautious 12% admitted doing it to “appease the algorithm in the case of an AI uprising.” AI has come a long way since 2023, and we’ve seen plenty of doomsday theories make headlines, so that figure might be much higher now!

Rather than AI etiquette being purely a psychological or behavioral matter, a study by researchers at Waseda University found that using polite prompts can actually produce higher-quality responses from large language models (LLMs).

LLMs are trained on human interactions, after all; “LLMs reflect the human desire to be respected to a certain extent,” the researchers explain.
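You can test the effect yourself with a minimal A/B comparison. The sketch below assumes the official openai Python SDK and uses gpt-4o-mini as an example model – the Waseda researchers tested a range of LLMs, not this exact setup:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

prompts = {
    "polite": "Could you please explain photosynthesis in two sentences? Thank you!",
    "blunt": "Explain photosynthesis in two sentences.",
}

for tone, prompt in prompts.items():
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # example model, not the one used in the study
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {tone} ---\n{reply.choices[0].message.content}\n")
```

One pair of runs proves nothing, of course; the study’s finding is statistical, so you’d need many prompts and a scoring rubric to see the gap.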

The human element

Beyond technical performance, some experts argue there’s value in maintaining politeness toward AI for our own sake. Using disrespectful language with AI might normalize rudeness in our human interactions.

This phenomenon has already been observed with earlier voice assistants. Some parents reported their children becoming less respectful after growing accustomed to barking commands at Siri or Alexa, leading Google to introduce its “Pretty Please” feature back in 2018 to encourage politeness among children.

So, whether you’re nice to your AI for performance reasons, etiquette best practice, or to stay on the right side of AI in case of a Matrix-esque takeover, just be aware that every interaction comes with a cost.


Stolen faces, stolen lives: The disturbing trend of AI-powered exploitation

Most social media users will have come across an influencer who looks a little… off.

Maybe their facial features are a bit too symmetrical, and their poses are a bit too rigid. Chances are, you’re not looking at a human at all – but an AI-generated forgery.

In some cases, these AI influencers are reasonably benign – just digital versions of their real counterparts, not overtly trying to deceive or manipulate anyone.

However, this isn’t always the case. Disturbingly, there’s a network of Instagram accounts using artificial intelligence to create fake influencers with Down syndrome. 

These bad actors steal content from real creators, then leverage AI to swap in computer-generated faces of people with Down syndrome. The goal? To exploit a vulnerable community for likes, shares, and ultimately, cash. 

But the deception doesn’t end there. Many of these accounts link out to shady adult websites, where the AI-generated content is monetized. 

Sadly, this is just the latest evolution of the “AI pimping” trend, where unscrupulous operators use machine learning to create counterfeit influencers for monetary gain. It isn’t limited to Down syndrome: the same networks churn out fake amputee models, burn victims, and other forms of AI-generated pornography.

AI image and video models are now approaching a level of realism that makes them viable substitutes for real humans. It’s already affecting the fashion industry, where real models face replacement by AI clones.

Even household names like H&M are wading into these murky waters. The fast fashion giant recently announced a campaign featuring AI-generated “digital twins” of real models. Back in 2023, a company called lalaland.ai released tools for creating AI models for a subscription fee. 

While H&M insists the models maintain control over their digital likenesses, many in the industry are skeptical. After all, in an era of cost-cutting and consolidation, why hire human talent when you can license a cheap, infinitely replicable digital avatar?

The latest, most insidious twist here concerns the fundamental dignity and humanity of marginalized communities. 

People with Down syndrome – or any disability – are not props to be manipulated for profit. 

Moreover, the proliferation of AI-generated content threatens to undermine public trust in media altogether. If we cannot trust the images we see online, the very foundation of digital discourse starts to erode.

So next time you’re scrolling through your feed and an influencer seems too good to be true, trust your gut. 


Meta resumes AI training using EU user data

After nearly a year’s pause due to regulatory concerns, Meta has begun harvesting public content from its European users to train its AI models, just as EU officials prepare to issue the first-ever fines under the bloc’s Digital Markets Act (DMA).

Meta announced Monday it will start using public posts, comments, and AI interactions from adult users across Facebook, Instagram, and WhatsApp in the EU to improve its generative AI systems. The European Data Protection Board (EDPB) approved the rollout.

“This training will better support millions of people and businesses in Europe, by teaching our generative AI models to better understand and reflect their cultures, languages, and history,” Meta said in its official announcement.

Meta was previously barred from using EU data for training. In 2024, the company argued that without it, “we’d only be able to offer people a second-rate experience. This means we aren’t able to launch Meta AI in Europe at the moment.”

Meta is offering opt-out options 

European users will begin receiving notifications this week, both in their apps and via email, explaining exactly what data will be collected and how it will be used. These notifications will include a link to an objection form where users can opt out.

“We have made this objection form easy to find, read, and use, and we’ll honor all objection forms we have already received, as well as newly submitted ones,” Meta’s press release stated.

The company emphasized that “we do not use people’s private messages with friends and family to train our generative AI models.” 

Additionally, “public data from the accounts of people in the EU under the age of 18 is not being used for training purposes.”

Meta also pointed out it’s “following the example set by others including Google and OpenAI,” noting both companies have already used data from European users to train their AI models.

DMA fines on the horizon

The timing of Meta’s announcement is noteworthy, coming just as the European Commission prepares to issue what are expected to be substantial fines against both Meta and Apple for alleged violations of the new Digital Markets Act.

Competition Commissioner Teresa Ribera reinforced the Commission’s enforcement intentions Tuesday, telling the European Parliament: “If we do not see willingness to cooperate we will not shy away from imposing the fines identified by the law.”

Companies found in breach of the DMA can be fined up to 10% of their total worldwide turnover, increasing to 20% for repeated infractions.

While the EU is willing to enforce fines against Big Tech, preventing model training on citizens’ data has proved a far harder task.

European users who don’t want their data harvested by Meta should keep an eye out for the notifications and submit the objection form.


ChatGPT now remembers everything you’ve ever told it – Here’s what you need to know

OpenAI has rolled out a major update to ChatGPT’s memory feature that allows the AI to remember everything from your past conversations.

The new feature, announced on X by OpenAI and CEO Sam Altman, enables the chatbot to automatically reference all your previous interactions to deliver more personalized responses without requiring you to repeat information.

Previously, ChatGPT’s memory had to be manually toggled on for specific pieces of information you wanted it to remember. The update expands this function in two key ways:

  1. Saved memories: Details you’ve explicitly asked ChatGPT to remember
  2. Chat history: Insights the AI automatically gathers from all your past conversations

This means ChatGPT can now recall your preferences, interests, recurring topics, and even stylistic choices without being prompted.

For example, if you’ve mentioned being a rock music fan or preferring short, bullet-pointed answers in previous chats, ChatGPT should remember. 

“New conversations naturally build upon what it already knows about you, making interactions feel smoother and uniquely tailored to you,” OpenAI stated in its announcement.
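OpenAI hasn’t said how memory works under the hood, but you can approximate the behavior by layering a preference store onto an otherwise stateless chat API. This is a toy sketch of that pattern – the code, model name, and stored facts are illustrative, not OpenAI’s implementation:

```python
from openai import OpenAI

client = OpenAI()

# In ChatGPT these would be distilled automatically from past chats;
# here they are hand-written to illustrate the pattern.
saved_memories = [
    "User is a rock music fan.",
    "User prefers short, bullet-pointed answers.",
]

def ask(question: str) -> str:
    # Inject remembered facts as context so each new chat "knows" the user
    system = "Known facts about the user:\n" + "\n".join(saved_memories)
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # example model
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": question},
        ],
    )
    return reply.choices[0].message.content

print(ask("Recommend something for my commute."))  # should lean rock, and be brief
```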

Privacy controls remain available

No surprise: not everyone wants ChatGPT to know or remember more about them than it already might.

With ChatGPT hoarding a comprehensive record of every conversation you’ve had with it – some of which will inevitably include sensitive information about you, your employer, and more – you’re putting a lot of trust in OpenAI’s ability (and willingness) to keep that data under lock and key.

In the age of data leaks and targeted ads, that’s a leap some may not be ready to take.

Right now, you can opt out of Memory. Altman said: “You can of course opt out of this, or memory all together. and you can use temporary chat if you want to have a conversation that won’t use or affect memory.”

Temporary chat will be useful – ChatGPT’s previous memory feature used to get so clogged up that it wouldn’t work properly.

The enhanced memory feature is currently available to ChatGPT Pro subscribers on the mega-costly $200/month tier and will soon roll out to Plus users.

There are exceptions to the roll-out. Altman posted on X “(Except users in the EEA, UK, Switzerland, Norway, Iceland, and Liechtenstein. Call me, Liechtenstein, let’s work this out!)” 

As exciting as ChatGPT’s newfound ability to remember everything we’ve ever told it may be, recent studies suggest that this level of AI personalization could come with serious risks. 

A pair of studies from OpenAI and MIT Media Lab found that frequent, sustained use of ChatGPT may be linked to higher levels of loneliness and emotional dependence among its most loyal users.

The research, which included a randomized controlled trial and an analysis of nearly 40 million ChatGPT interactions, revealed that “power users” who engaged in personal conversations with the chatbot were more likely to view it as a friend and experience greater loneliness compared to those who used it less frequently.

A great feature to some. Another slip towards a Black Mirror episode for others.


AI craze mania with AI action figures and turning pets into people

In the ’90s, we collected Pokémon cards; in the 2000s, we all had a weird phase with those rubber charity bracelets; and now, in 2025, we’re turning our pets into strange human beings with AI.

Progress? Possibly.

In the wake of the “Ghibli style” phenomenon that melted OpenAI’s servers and had CEO Sam Altman practically begging people to stop generating images, we’ve hit a new wave of AI crazes taking off on social media. 

Your social media feed has likely already been invaded by action figure versions of your friends and humanized versions of their cats. Welcome to 2025.

Make your own AI action figure

The latest trend sweeping across X, TikTok, and Instagram involves people transforming themselves into action figures complete with packaging. 

Users are generating images of themselves as collectible toys sealed in plastic blister packs with colorful cardboard backings.

“Create a toy of a person based on the photo. Let it be a collectible action figure. The figure should be placed inside a clear blister pack, in front of a backing card,” reads a popular prompt being shared widely.

Some are even creating elaborate backstories for their action figure alter-egos, complete with fictional accessories and “special abilities” listed on the packaging.
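These images are made inside chat apps, but you can generate something similar programmatically. Here’s a rough sketch using OpenAI’s image-editing endpoint with the gpt-image-1 model – the filenames are placeholders, and this is one possible route rather than the tool behind the trend:

```python
import base64
from openai import OpenAI

client = OpenAI()

prompt = (
    "Create a toy of a person based on the photo. Let it be a collectible "
    "action figure placed inside a clear blister pack, in front of a backing card."
)

# gpt-image-1 can restyle an uploaded photo via the image edits endpoint
result = client.images.edit(
    model="gpt-image-1",
    image=open("me.jpg", "rb"),  # placeholder filename for your own photo
    prompt=prompt,
)

# gpt-image-1 returns base64-encoded image data
with open("action_figure.png", "wb") as f:
    f.write(base64.b64decode(result.data[0].b64_json))
```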

 

A post shared on Instagram by BoshGram (@boshgram_)

Some have gone further, using AI tools to animate the figure being removed from the pack.

Pets become people

In probably the weirder of the two trends, people are turning their pets into human form.

One circulated prompt reads: “Transform this animal into a human character, removing all animal features (such as fur, ears, whiskers, muzzle, tail, etc.) while preserving its personality and recognizable traits. Focus on capturing the expression, posture, emotional vibe, and visual identity.”

The results… let’s just say they’re both charming and unsettling.

“Let them be cats… that’s why we love them… they are not people lol,” commented one Instagram user, while another simply wrote, “Hell no.”

According to Creative Bloq’s Natalie Fear, who tried the trend with her own cat, “the results range from the cute to the downright confusing.” 

Fear described her own AI-generated pet-human as a “weird slobby man,” noting that many cat transformations seem to result in “grumpy old men.” Which, to be fair, is probably how most cats would manifest if they suddenly became human.

Despite some disturbing results, the trend has captivated social media users, with one Redditor commenting, “Oh my god this is amazing!” while another lamented, “Oh god, queue the next week of people turning their pets into people posts. SAVE ME.”

What’s next?

Text-to-image has truly entered the public consciousness as AI tools go mobile and become more efficient and easier to use.

Being able to upload photos and ask AI to manipulate them in situ is a key driving force behind these crazes, too – everything you need to build your own action figure is in the palm of your hand. 

So, if you want to give them a try – go ahead. Just don’t blame us when your cat’s human form gives you nightmares.
