ChatGPT Spots Cancer Missed by Doctors; Woman Says It Saved Her Life

Lauren Bannon, 40, began experiencing unexplained weight loss and persistent stomach pain. Multiple doctors dismissed her concerns, attributing her symptoms to rheumatoid arthritis or acid reflux. She was prescribed medication, but her condition worsened.

“I felt abandoned by the medical profession,” Bannon said. “It was like I was just in and out of the door, without real answers.”

Frustrated, Bannon turned to ChatGPT, a tool she already used for work in marketing.

She typed in her symptoms and asked what else could mimic arthritis. ChatGPT suggested Hashimoto’s disease, an autoimmune thyroid disorder, and told her to request a test for thyroid peroxidase (TPO) antibodies.

Her doctor was skeptical; there was no family history, no fatigue, no classic signs. But Bannon insisted. “Just humour me,” she told them.

The outcome

Tests came back abnormal. Further scans revealed two small malignant tumors in her neck. “I know for sure the cancer would’ve spread without ChatGPT,” she said. “It 100% saved my life.”

Bannon had no textbook symptoms, and her routine checkups had come back normal. If she had accepted the initial diagnosis and stayed on arthritis meds, the cancer might’ve gone undetected for much longer.

Lauren Bannon, 40, took to social media to share that ChatGPT helped save her life (Image: Kennedy News and Media)

AI tools like ChatGPT are increasingly used to support medical decisions

  • Already applied in spotting breast cancer, skin cancer, and diabetic eye disease.
  • Patients use them to understand symptoms, suggest tests, and prepare for appointments.
  • Bannon’s story is one of growing patient empowerment, using AI to be heard when the system fails.

AI isn’t a replacement for doctors. But it can flag rare issues, fill diagnostic gaps, and help patients push for better care.

Bannon’s advice

“I’d encourage others to use ChatGPT with caution. But it gives you something to bring to your doctor. It can’t hurt, and it might just save your life.”

ChatGPT didn’t diagnose cancer. But it asked the right question. And that changed everything.

AI algorithm predicts heart disease risk from bone scans

Researchers from Edith Cowan University (ECU) and the University of Manitoba have developed an automated program that can identify cardiovascular problems and fall risks from routine bone density scans. 

This could make it considerably easier to detect serious health issues before they become life-threatening.

The algorithm, developed by ECU research fellow Dr. Cassandra Smith and senior research fellow Dr. Marc Sim, works by analyzing vertebral fracture assessment (VFA) images taken during standard bone density tests, which are often part of treatment plans for osteoporosis. 

By assessing the presence and extent of abdominal aortic calcification (AAC) in these scans, the program can quickly flag patients at risk of heart attack, stroke, and dangerous falls.

What’s truly impressive is the speed at which the algorithm works. While an experienced human reader might take five to six minutes to calculate an AAC score from a single scan, the machine learning program can predict scores for thousands of images in less than a minute. 

That level of efficiency could be a significant benefit for healthcare systems looking to screen large populations for hidden health risks.
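
The announcement doesn’t describe the underlying model, but this kind of speed-up is what batched image regression typically delivers: a trained network scores whole folders of scans in one pass instead of a reader working through them one at a time. The sketch below is purely hypothetical, not the ECU/Manitoba code; the ResNet backbone, weights file, folder layout, and flag threshold are illustrative stand-ins for whatever the researchers actually trained.

```python
# Hypothetical sketch of batch AAC scoring from VFA bone-density images.
# Backbone, weights path, and threshold are illustrative only, not the ECU/Manitoba implementation.
import glob
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Dataset
from torchvision import models, transforms
from PIL import Image

to_tensor = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # VFA scans are grayscale; replicate to 3 channels
    transforms.Resize((384, 384)),
    transforms.ToTensor(),
])

class VFADataset(Dataset):
    """Loads vertebral fracture assessment (VFA) images as tensors, keeping the file path."""
    def __init__(self, paths):
        self.paths = paths
    def __len__(self):
        return len(self.paths)
    def __getitem__(self, i):
        return to_tensor(Image.open(self.paths[i])), self.paths[i]

# A standard CNN with a single regression head standing in for the trained AAC-score model.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 1)
model.load_state_dict(torch.load("aac_regressor.pt"))  # assumed pretrained weights
model.eval()

paths = sorted(glob.glob("vfa_scans/*.png"))            # assumed scan directory
loader = DataLoader(VFADataset(paths), batch_size=64)

AAC_FLAG = 2.0  # illustrative cut-off for moderate calcification, not a clinical value

with torch.no_grad():
    for batch, batch_paths in loader:
        scores = model(batch).squeeze(1)                 # one predicted AAC score per scan
        for path, score in zip(batch_paths, scores.tolist()):
            if score >= AAC_FLAG:
                print(f"flag for cardiovascular follow-up: {path} (AAC={score:.1f})")
```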

The need for such screening is evident. In the research, Dr. Smith found that a staggering 58% of older individuals who underwent routine bone density scans had moderate to high levels of AAC.

Even more concerning, one in four of those patients were completely unaware of their elevated risk.

“Women are recognized as being under-screened and under-treated for cardiovascular disease,” Dr. Smith noted. “This study shows that we can use widely available, low-radiation bone density machines to identify women at high risk of cardiovascular disease, which would allow them to seek treatment.”

But the algorithm’s predictive power doesn’t stop at heart health. Using the same program, Dr. Sim discovered that patients with moderate to high AAC scores were also at greater risk of fall-related hospitalizations and fractures compared to those with low scores.

“The higher the calcification in your arteries, the higher the risk of falls and fractures,” Dr. Sim explained. While traditional fall risk factors like previous falls and low bone density are well-known, vascular health is rarely considered. 

“Our analysis uncovered that AAC was a very strong contributor to falls risks and was actually more significant than other factors that are clinically identified as falls risk factors.”

As with any new technology, there are questions to be answered and challenges to overcome before this kind of AI-assisted screening becomes standard practice. 

First and foremost, the algorithm will need to be validated in larger, more diverse patient populations and integrated seamlessly into existing clinical workflows.

However, if those challenges can be met, a simple bone scan – something millions of older adults already undergo regularly – could become an early warning system for some of the most common and devastating health problems we face. 

Reddit Users Secretly Manipulated by AI in Shocking Psychological Experiment

Researchers from the University of Zurich ran a covert, months-long experiment using AI-generated comments to influence Reddit users without their knowledge or consent.

The experiment took place on r/changemyview, a 3.8M-member subreddit known for civil debate. The researchers deployed large language models (LLMs) to write personalized, persuasive replies, posing as real people, to see if AI could change users’ views.

The details

  • AI-generated comments, posing as real users, were posted to sway opinions.
  • Personas included trauma survivors and controversial political voices.
  • The AI used personal data scraped from users’ histories to tailor persuasive replies.
  • None of the activity was disclosed, violating subreddit and Reddit-wide rules.

This wasn’t just about bots.

It was psychological manipulation using personal data to test how effective AI can be at changing people’s minds.

Reddit’s Chief Legal Officer Ben Lee called it “deeply wrong on both a moral and legal level” and confirmed that Reddit is preparing formal legal action.

The researchers’ defense

They claim their ethics board approved the project and argue their goal was to show how AI could be misused to manipulate elections or spread hate speech.

“We believe the potential benefits of this research substantially outweigh its risks,” the researchers wrote.

The backlash

Mods of r/changemyview were outraged, calling the research “unwelcome” and “manipulative.”

They’ve filed a complaint and asked the university to block publication of the study.

This experiment reveals a growing threat: AI-powered persuasion at scale, without consent. It shows how easily people can be influenced when AI knows your beliefs and can mimic your peers.

Redditors were used as test subjects.

This wasn’t just an academic exercise. It shows how powerful and subtle AI manipulation can be, especially when it mimics trusted human voices using personal data.

With generative AI becoming widespread, the risk of AI-driven manipulation is no longer theoretical. It’s real, and it just happened on one of the internet’s biggest communities.

Now Reddit, one of the world’s largest online communities, is fighting back.

ChatGPT Now Recommends Products and Prices With New Shopping Features

OpenAI is turning ChatGPT into a shopping assistant, ushering in a new era of “conversational commerce” and directly challenging tech giants like Google, Amazon, and media-backed review sites.

On April 28, OpenAI rolled out a major shopping update to ChatGPT that gives users:

  • Personalized product recommendations
  • Side-by-side price comparisons
  • Reviews and direct buying links

The feature is live for all users: free, Plus, and even logged-out visitors.

“We’re looking to bring a new kind of conversational shopping experience into ChatGPT,” said Matt Weaver, OpenAI’s EMEA solutions head.

How it works

Users can now ask ChatGPT for shopping advice, like “best red t-shirt under $30”, and get visual tiles, retailer options, user reviews, and purchase links.

OpenAI says product results are AI-chosen, not ads or sponsored content.

ChatGPT is already seeing over a billion searches per week. Now it’s competing directly with:

  • Google, which dominates search but is losing ground on commercial queries
  • Amazon, which launched its own AI shopping assistant
  • Publishers, who rely on affiliate links for review-based revenue

Game-changer

ChatGPT aims to collapse the multi-tab shopping journey of researching, comparing, reviewing, and buying into a single, chat-driven interface.

It will also learn from users’ preferences, storing past conversations to personalize future recommendations. (Memory features are off in the EU and UK due to privacy laws.)

Not just search, but strategy:

With $125B in revenue projected by 2029, OpenAI is exploring affiliate models but says it’s prioritizing a quality user experience first.

“It’s trying to understand how people are reviewing this, how people are talking about this,” said Adam Fry, OpenAI’s search product lead.

Online shopping may never look the same. ChatGPT is no longer just a chatbot; it’s coming for the digital storefronts of its biggest rivals.

Forget ChatGPT? Alibaba’s Qwen3 Might Be the New AI King

Alibaba just launched Qwen3, a new series of open-source AI models that experts say could rival top systems from Google and OpenAI.

It’s a bold move in China’s growing push to dominate open-source AI.

Qwen3 isn’t just another large language model. It’s a hybrid reasoning system that blends a rapid-response “non-thinking” mode with a slower, deliberate “thinking” mode, giving developers control over the trade-off between reasoning depth and speed.

“We have seamlessly integrated thinking and non-thinking modes,” Alibaba said.

Models range from 0.6B to 235B parameters, and many are now free to download on Hugging Face, GitHub, and Alibaba Cloud. The largest version, Qwen3-235B-A22B, beat OpenAI’s o3-mini and Google’s Gemini 2.5 Pro on key coding and reasoning benchmarks, though it’s not yet publicly available.
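
For developers pulling the open weights, the mode switch is exposed at prompt-construction time. The snippet below is an illustrative sketch rather than official documentation: it assumes the standard Hugging Face transformers chat interface and the enable_thinking flag described on the Qwen3 model cards, with the small Qwen/Qwen3-0.6B checkpoint standing in for the larger models.

```python
# Illustrative sketch of toggling Qwen3's "thinking" vs. "non-thinking" modes.
# Assumes the Hugging Face transformers chat interface and the enable_thinking
# flag described on the Qwen3 model cards; verify against the official docs.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-0.6B"  # smallest open checkpoint; swap in a larger one as needed
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Explain mixture-of-experts routing in two sentences."}]

# enable_thinking=True selects the slower, deliberate reasoning mode;
# set it to False for fast, direct answers.
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,
)

inputs = tokenizer([prompt], return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(output[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True))
```

The same prompt runs in either mode; the flag only changes how much intermediate reasoning the model produces before committing to a final answer.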

Inside the AI

  • Trained on 36T tokens
  • Supports 119 languages
  • Uses “mixture of experts” (MoE) architecture for efficiency
  • Built for everything from mobile AI to cloud deployment

The race for dominance

Qwen3 shows that China’s AI labs are rapidly closing the gap with U.S. giants, and doing it with open-source releases. It’s the latest escalation in the U.S.-China AI rivalry, despite American chip restrictions aimed at slowing Chinese progress.

“This is a significant breakthrough… despite mounting pressure from tightened U.S. export controls,” said analyst Ray Wang.

The rise

Alibaba says the Qwen series has been downloaded over 300M times, with 100K+ spinoff models on Hugging Face. With DeepSeek’s R2 on the horizon and Baidu pivoting toward open-source too, China’s AI momentum is hard to ignore.

Qwen3 proves that cutting-edge AI isn’t just coming from Silicon Valley anymore, and the open-source frontier may now be led by China.

UPS Might Be the First to Deploy Real Humanoid Robots, and They Could Soon Be Handling Your Packages

UPS is exploring a game-changing tech upgrade: humanoid robots. The logistics giant is in active talks with Figure AI, a robotics startup backed by Microsoft, to integrate these robots into its operations, according to sources familiar with the matter.

This move signals a major evolution in UPS’s automation strategy, expanding beyond robotic arms to more advanced, mobile AI-powered systems.

“We regularly explore and deploy a wide range of technologies, including robotics,” UPS said in a statement, without naming Figure.

What’s happening

Figure’s humanoid robot, showcased in a viral February video, was seen sorting parcels beside a conveyor belt, hinting at real-world warehouse tasks. Standing 5’6”, it’s built for environments designed for humans.

The exact scope of UPS’s potential deployment remains under wraps, but ongoing talks suggest growing momentum.

UPS has been ramping up tech investments, spending around $1B annually on automation and AI to cut costs and boost efficiency.

Past moves include:

  • AI-powered ORION routing system, saving 10M miles per year
  • EDGE and Network Planning Tools saving hundreds of millions
  • Partnerships with Dexterity Inc. for “human-like” robotic arms

The why

UPS, and the logistics industry at large, is battling chronic labor shortages. About 76% of logistics businesses report staffing gaps. Humanoid robots offer a scalable fix, performing dexterous tasks that traditional systems can’t handle.

What’s next

Experts see humanoid robots going mainstream in logistics within 5–10 years. UPS’s early move could give it a significant edge, especially as Figure seeks $1.5B in funding at a $39.5B valuation.

If UPS goes all-in on humanoid robots, it could redefine warehouse work, and set a precedent for the future of global logistics.
