The MarTech Summit London 2024: AI marketing in focus

The MarTech Summit London offers a unique opportunity to explore the latest trends and innovations in marketing technology across the B2B and B2C sectors.

The event, which will take place November 12-13, 2024, at Convene 155 Bishopsgate, London, will feature insightful case studies, panel discussions, fireside chats, keynotes, and Q&As focused on the latest MarTech trends. 

With 400+ attendees, over 85% of whom hold senior leadership roles, the summit provides unparalleled learning and networking opportunities with CMOs, Heads, Directors, and more.

Attendees will acquire cutting-edge knowledge on MarTech advancements, real-world implementation strategies, and how technology is transforming marketing approaches and workflows across both B2B and B2C sectors.

Why attend The MarTech Summit London 2024?

There are numerous reasons for attending the MarTech Summit London 2024:

Explore five key themes across three stages over two days
Gain insights from 70+ speakers and industry experts
Network with 400+ attendees, including 85% senior management
Participate in discussion roundtables to share and gain diverse perspectives
Evaluate new technologies for your MarTech stack from leading service providers

Who should attend?

The MarTech Summit London 2024 is designed for senior-level executives in functions such as:

Marketing and technology
Customer experience (CX) and engagement
Brand loyalty and retention
Data and consumer insights
E-commerce marketing
Digital strategy
Omni-channel
Innovation
Social media
Content strategy and storytelling
CRM
Product marketing
Automation
Digital transformation and growth

Key speakers at the event

Some of the key speakers in the lineup include, but aren’t limited to:

Sivan Einstein, Industry Head – Omnichannel Retail, Google
Neil Robins, Head of Digital & Media, Kenvue
Cat Daniel, Senior Director, Growth & Engagement, Monzo
Sam Lewis-Williams, Head of Marketing Automation, Financial Times
Namita Mediratta, CMI Director, Beauty, EMETU, Unilever
Richard Jones, Director of eCommerce, Carlsberg
Kumar Amrendra, Head of Digital Marketing, Planning & Data Science, Sky
James Brindley-Raynes, Head of Digital Customer Journey, Maersk
Clive Head, Head of CRM and Loyalty, Santander UK
Marie Tyler, Global Customer Engagement Leader, Honeywell

Featured themes and topics from The MarTech Summit London 2024

The event features five main themes across three stages:

Day 1 – Plenary Room:

MarTech Trends (including Marketing Automation, AI in Marketing, Digital Transformation)

Day 2 – B2C Marketing Stage:

Customer Experience (including Omnichannel Personalisation, E-Commerce, Digital Customer Experience)
Data, Analytics & Insights (including First-Party Data, Customer Data Platform, Data Privacy & Security)

Day 2 – B2B Marketing Stage:

Sales Enablement (including Revenue Enablement, Sales Content Optimisation, Account-Based Marketing)
Demand Generation (including B2B Marketing Metrics, Precision Demand Marketing, Content Marketing)

Key details about the event

Mark your calendars and prepare to be part of this fantastic event:

Date: November 12-13, 2024
Venue: Convene 155 Bishopsgate, Second Floor, London EC2M 3YD
Tickets: Available on the event website

Visit the official MarTech Summit London 2024 website to secure your spot, view the full agenda, and explore sponsorship opportunities. 

The 2024 Nature Index reveals how AI is transforming every aspect of scientific research

The 2024 Nature Index supplement on Artificial Intelligence, released this week, reveals a scientific world in the throes of an AI-driven paradigm shift. 

This annual report, published by the journal Nature, tracks high-quality science by measuring research outputs in 82 natural science journals, selected by an independent panel of researchers.

The latest edition illustrates how AI is not just changing what scientists study, but fundamentally altering how research is conducted, evaluated, and applied globally. 

One of the most striking trends revealed in the Index is the surge in corporate AI research. US companies have more than doubled their output in Nature Index journals since 2019, with their Share (a metric used by the Index to measure research output) increasing from 51.8 to 106.5. 

However, this boom in R&D activity comes with a caveat – it still accounts for only 3.8% of total US AI research output in these publications. In essence, despite a major uplift in corporate AI R&D, relatively little of that effort is showing up in published research.

This raises questions about where corporate AI research is located. Are companies publishing their most groundbreaking work in other venues, or keeping it under lock and key?

The answer is a matter of competing strategies. OpenAI, Microsoft, Google, Anthropic, and a handful of others remain firmly entrenched in the closed-source model, while the open-source AI industry, led by Meta, Mistral, and others, is rapidly gaining ground.

Contributing to this, the funding disparity between private companies and public institutions in AI research is staggering. 

In 2021, according to Stanford University’s AI Index Report, private sector investment in AI worldwide reached approximately $93.5 billion. 

This includes spending by tech giants like Google, Microsoft, and Amazon, as well as AI-focused startups and other corporations across various industries.

In contrast, public funding for AI research is much lower. The US government’s non-defense AI R&D spending in 2021 was about $1.5 billion, while the European Commission allocated around €1 billion (approximately $1.1 billion) for AI research that year.

This funding gap gives private companies an advantage in AI development: they can afford more powerful computing resources and larger datasets, and they can attract top talent with higher salaries.

“We’re increasingly looking at a situation where top-notch AI research is done primarily within the research labs of a rather small number of mostly US-based companies,” explained Holger Hoos, an AI researcher at RWTH Aachen University in Germany.

While the US maintains its lead in AI research, countries like China, the UK, and Germany are emerging as major hubs of innovation and collaboration.

However, this growth isn’t uniform across the globe. South Africa stands as the only African nation in the top 40 for AI output, showing how the digital divide is at risk of deepening in the AI era. 

AI in peer review: promise and peril

Peer review ensures academic and methodological rigor and transparency when papers are submitted to journals.

This year, a nonsense paper featuring an AI-generated figure of a rat with giant testicles was published in Frontiers, showing that the peer review process is far from foolproof.

Someone used DALL-E to create gobbledygook scientific figures and submitted them to Frontiers Journal. And guess what? The editor published it. LOL https://t.co/hjQkRQDkal https://t.co/aV1USo6Vt2 pic.twitter.com/VAkjJkY4dR

— Veera Rajagopal  (@doctorveera) February 15, 2024

Recent experiments have shown that AI can generate research assessment reports that are nearly indistinguishable from those written by human experts. 

Last year, an experiment comparing ChatGPT’s peer reviews with those of human reviewers on the same papers found that over 50% of the AI’s comments on Nature papers, and more than 77% on ICLR papers, aligned with the points raised by human reviewers.

Of course, ChatGPT is much quicker than human peer reviewers. “It’s getting harder and harder for researchers to get high-quality feedback from reviewers,” said James Zou from Stanford University, the lead researcher for that experiment.

AI’s relationship with research is raising fundamental questions about scientific evaluation and whether human judgment is intrinsic to the process.  The balance between AI efficiency and irreplaceable human insight is one of several key issues scientists from all backgrounds will need to grapple with in the years ahead.

AI might soon be capable of managing the entire research process from start to finish, potentially sidelining human researchers altogether.

For instance, Sakana’s AI Scientist autonomously generates novel research ideas, designs and conducts experiments, and even writes and reviews scientific papers. This points to a future where AI could drive scientific discovery with minimal human intervention.

On the methodology side, using machine learning (ML) to process and analyze data comes with risks. Princeton researchers argued that because many ML techniques can’t be easily replicated, they erode the replicability of experiments – a key principle of high-quality science.

Ultimately, AI’s rise to prominence in every aspect of research and science is gaining momentum, and the process is likely irreversible.

Last year, Nature surveyed 1,600 researchers and found that 66% believe AI enables quicker data processing, 58% that it accelerates previously infeasible analysis, and 55% that it saves time and money.

As Simon Baker, lead author of the supplement’s overview, concludes: “AI is changing the way researchers work forever, but human expertise must continue to hold sway.”

The question now is how the global scientific community will adapt to AI’s role in research, ensuring that the AI revolution in science benefits all of humanity without unforeseen risks wreaking havoc on the field.

As with so many aspects of the technology, mastering both benefits and risks is challenging but necessary to secure a safe path forward.

Microsoft to resurrect the Three Mile Island nuclear power plant in exclusive deal

Microsoft has announced an energy deal to reopen the Three Mile Island nuclear power plant on the Susquehanna River near Harrisburg, Pennsylvania.

Constellation Energy, the plant’s current owner, is now set to bring Unit 1 back online for Microsoft. That will involve investing $1.6 billion to restore the reactor by 2028.

While details remain unknown, Microsoft reportedly offered to buy the plant’s output for 20 consecutive years. 

Three Mile Island is best known as the site of the most serious nuclear accident in US history. In 1979, a partial meltdown occurred in one of its reactors, sparking public fear and distrust in nuclear power. 

The plant’s Unit 2 reactor, which melted down, was permanently closed, but Unit 1 continued operating until it was decommissioned in 2019 due to competition from cheaper natural gas. 

Three Mile Island, the site of the worst nuclear accident in U.S. history in 1979, saw a partial meltdown in one of its reactors. Decades later, the undamaged Unit 1 reactor, decommissioned in 2019, is set for revival in an exclusive deal with Microsoft to power AI data centers by 2028. Source: Wikimedia Commons.

Microsoft says the deal is also driven by its pledge to become carbon negative by 2030. Nuclear energy is a zero-carbon power source, though there are ongoing controversies over radioactive waste management.

Constellation Energy’s CEO Joseph Dominguez was positive about the move, stating, “This plant never should have been allowed to shut down. It will produce as much clean energy as all of the renewables [wind and solar] built in Pennsylvania over the last 30 years.”

Constellation Energy stated that “Significant investments” need to be made in the plant, including upgrading and renovating the “turbine, generator, main power transformer, and cooling and control systems.”

The rising power demands of AI

Microsoft’s decision to tap into nuclear power shows once again the staggering energy requirements of AI and supporting data center technology.  

The company has been expanding its data centers worldwide, with many of these facilities dedicated to supporting AI workloads, including the training and deployment of models that require vast amounts of computational power.

Training large AI models can consume thousands of megawatt-hours (MWh) of electricity. 

According to some sources, OpenAI’s GPT-3, for instance, required over 1,200 MWh to train, which could power tens of thousands of homes for a day. 
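For a rough sense of scale, here is a back-of-envelope sketch of that comparison. The household figure of roughly 29 kWh per day is an assumed US average, not a number from the sources above:

```python
# Back-of-envelope check of the "tens of thousands of homes" comparison.
# Assumption (not from the cited sources): an average US household uses
# roughly 29 kWh of electricity per day.
training_energy_mwh = 1_200       # reported approximate GPT-3 training energy
household_kwh_per_day = 29        # assumed average daily household consumption

homes_powered_for_one_day = (training_energy_mwh * 1_000) / household_kwh_per_day
print(f"~{homes_powered_for_one_day:,.0f} homes for one day")  # ~41,379 homes
```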

Hundreds, if not thousands, of powerful AI models are actively being trained at any one time today. AI models require power not just during training but also for day-to-day operations.

This surge in energy demand from AI is part of a broader trend. The International Energy Agency (IEA) estimates that data centers currently account for 1.3% of global electricity consumption, and this is set to rise significantly, with AI infrastructure driving much of the increase. 

By 2030, data centers could consume up to 8% of the world’s electricity, further straining energy grids already stretched thin by increasing reliance on digital services and electric vehicles.

Coal and nuclear to take up the slack

While the focus on nuclear energy highlights the tech industry’s need for low-carbon alternatives, AI’s demand for power is remarkably breathing new life into coal. 

According to a Bloomberg report from earlier in the year, the rapid expansion of data centers is delaying the shutdown of coal plants across the US, defying the push for cleaner energy sources.

In areas like Kansas City, for example, the construction of data centers and electric vehicle battery factories has forced utility providers to halt plans to retire coal plants. 

Microsoft’s decision to power its AI operations with nuclear energy brings the broader conversation of AI sustainability into sharp focus. 

With the tech industry’s growth outpacing energy supplies, innovative solutions are needed to bridge the gap between demand and production. OpenAI CEO Sam Altman, for example, has personally invested in Helion, a nuclear fusion project set to come online soon.

Altman said on X: “If Helion works, it not only is a potential way out of the climate crisis but a path towards a much higher quality of life. Have loved being involved for the last 7 years and excited to be investing more.”

Despite its controversies, nuclear power offers a credible solution to AI’s energy demands, particularly in regions struggling to transition fully to renewable energy.

But the stakes are high. Building and maintaining nuclear plants still requires immense resources, and nuclear waste is challenging to dispose of. Many will see deals like this as a distraction from decarbonization and renewable energy strategies.

It’s still early days for the Microsoft-Constellation Energy deal.

Still, exclusive, private deals like this are exceptionally rare, showing how power in the AI industry hinges on power in the literal sense. 

DAI#57 – Tricky AI, exam challenge, and conspiracy cures

Welcome to this week’s roundup of AI news made by humans, for humans.

This week, OpenAI told us that it’s pretty sure o1 is kinda safe.

Microsoft gave Copilot a big boost.

And a chatbot can cure your belief in conspiracy theories.

Let’s dig in.

It’s pretty safe

We were caught up in the excitement of OpenAI’s release of its o1 models last week until we read the fine print. The model’s system card offers interesting insight into the safety testing OpenAI did, and the results may raise some eyebrows.

It turns out that o1 is smarter but also more deceptive, with a “medium” danger level according to OpenAI’s rating system.

Despite o1 being very sneaky during testing, OpenAI and its red teamers say they’re fairly sure it’s safe enough to release. Not so safe if you’re a programmer looking for a job.

If OpenAI‘s o1 can pass OpenAI‘s research engineer hiring interview for coding — 90% to 100% rate…

……then why would they continue to hire actual human engineers for this position?

Every company is about to ask this question. pic.twitter.com/NIIn80AW6f

— Benjamin De Kraker (@BenjaminDEKR) September 12, 2024

Copilot upgrades

Microsoft unleashed Copilot “Wave 2”, which will give your productivity and content production an additional AI boost. If you were on the fence over Copilot’s usefulness, these new features may be the clincher.

The Pages feature and the new Excel integrations are really cool. The way Copilot accesses your data does raise some privacy questions though.

More strawberries

If all the recent talk about OpenAI’s Strawberry project gave you a craving for the berry then you’re in luck.

Researchers have developed an AI system that promises to transform how we grow strawberries and other agricultural products.

This open-source application could have a huge impact on food waste, harvest yields, and even the price you pay for fresh fruit and veg at the store.

Too easy

AI models are getting so smart now that our benchmarks to measure them are just about obsolete. Scale AI and CAIS launched a project called Humanity’s Last Exam to fix this.

They want you to submit tough questions that you think could stump leading AI models. If an AI can answer PhD-level questions then we’ll get a sense of how close we are to achieving expert-level AI systems.

If you think you have a good one you could win a share of $500,000. It’ll have to be really tough though.

Curing conspiracies

I love a good conspiracy theory, but some of the things people believe are just crazy. Have you tried convincing a flat-earther with simple facts and reasoning? It doesn’t work. But what if we let an AI chatbot have a go?

Researchers built a chatbot using GPT-4 Turbo and they had impressive results in changing people’s minds about the conspiracy theories they believed in.

It does raise some awkward questions about how persuasive AI models are and who decides what ‘truth’ is.

Just because you’re paranoid, doesn’t mean they’re not after you.

Stay cool

Is having your body cryogenically frozen part of your backup plan? If so, you’ll be happy to hear AI is making this crazy idea slightly more plausible.

A company called Select AI used AI to accelerate the discovery of cryoprotectant compounds. These compounds stop organic matter from turning into crystals during the freezing process.

For now, the application is for better transport and storage of blood or temperature-sensitive medicines. But if AI helps them find a really good cryoprotectant, cryogenic preservation of humans could go from a moneymaking racket to a plausible option.

AI is contributing to the medical field in other ways that might make you a little nervous. New research shows that a surprising number of doctors are turning to ChatGPT for help diagnosing patients. Is that a good thing?

If you’re excited about what’s happening in medicine and considering a career as a doctor, you may want to rethink that, according to this professor.

This is the final warning for those considering careers as physicians: AI is becoming so advanced that the demand for human doctors will significantly decrease, especially in roles involving standard diagnostics and routine treatments, which will be increasingly replaced by AI.… pic.twitter.com/VJqE6rvkG0

— Derya Unutmaz, MD (@DeryaTR_) September 13, 2024

In other news…

Here are some other clickworthy AI stories we enjoyed this week:

Google’s NotebookLM turns your written content into a podcast. This is crazy good.
When Japan switches the world’s first zetta-class supercomputer on in 2030, it will be 1,000 times faster than the world’s current fastest supercomputer.
SambaNova challenges OpenAI’s o1 model with an open-source Llama 3.1-powered demo.
More than 200 tech industry players sign an open letter asking Gavin Newsom to veto the SB 1047 AI safety bill.
Gavin Newsom signed two bills into law to protect living and deceased performers from AI cloning.
Sam Altman departs OpenAI’s safety committee to make it more “independent”.
OpenAI says the signs of life shown by ChatGPT in initiating conversations are just a glitch.
RunwayML launches Gen-3 Alpha Video to Video feature to paid users of its app.

Gen-3 Alpha Video to Video is now available on web for all paid plans. Video to Video represents a new control mechanism for precise movement, expressiveness and intent within generations. To use Video to Video, simply upload your input video, prompt in any aesthetic direction… pic.twitter.com/ZjRwVPyqem

— Runway (@runwayml) September 13, 2024

And that’s a wrap.

It’s not surprising that AI models like o1 present more risk as they get smarter, but the sneakiness during testing was weird. Do you think OpenAI will stick to its self-imposed safety level restrictions?

The Humanity’s Last Exam project was an eye-opener. Humans are struggling to find questions tough enough for AI to solve. What happens after that?

If you believe in conspiracy theories, do you think an AI chatbot could change your mind? Amazon Echo is always listening, the government uses big tech to spy on us, and Mark Zuckerberg is a robot. Prove me wrong.

Let us know what you think, follow us on X, and send us links to cool AI stuff we may have missed.

AI in the doctor’s office: GPs turn to ChatGPT and other tools for diagnoses

A new survey has found that one in five general practitioners (GPs) in the UK are using AI tools like ChatGPT to assist with daily tasks such as suggesting diagnoses and writing patient letters. 

The research, published in the journal BMJ Health and Care Informatics, surveyed 1,006 GPs across the about their use of AI chatbots in clinical practice. 

Some 20% reported using generative AI tools, with ChatGPT being the most popular. Of those using AI, 29% said they employed it to generate documentation after patient appointments, while 28% used it to suggest potential diagnoses.

“These findings signal that GPs may derive value from these tools, particularly with administrative tasks and to support clinical reasoning,” the study authors noted. 

We have no idea how many papers OpenAI used to train their models, but it’s certainly more than any doctor could have read. It gives quick, convincing answers and is very easy to use, unlike searching research papers manually. 

Does that mean ChatGPT is generally accurate for clinical advice? Absolutely not. Large language models (LLMs) like ChatGPT are pre-trained on massive amounts of general data, making them more flexible but dubiously accurate for specific medical tasks.

It’s also easy to lead them on: these models tend to side with your assumptions, displaying problematically sycophantic behavior.

Moreover, some researchers state that ChatGPT can be conservative or prudish when handling delicate topics like sexual health.

As Stephen Hughes from Anglia Ruskin University wrote in The Conversation, “I asked ChatGPT to diagnose pain when passing urine and a discharge from the male genitalia after unprotected sexual intercourse. I was intrigued to see that I received no response. It was as if ChatGPT blushed in some coy computerised way. Removing mentions of sexual intercourse resulted in ChatGPT giving a differential diagnosis that included gonorrhoea, which was the condition I had in mind.” 

As Dr. Charlotte Blease, lead author of the study, commented: “Despite a lack of guidance about these tools and unclear work policies, GPs report using them to assist with their job. The medical community will need to find ways to both educate physicians and trainees about the potential benefits of these tools in summarizing information but also the risks in terms of hallucinations, algorithmic biases and the potential to compromise patient privacy.”

That last point is key. Passing patient information into AI systems likely constitutes a breach of privacy and patient trust.

Dr. Ellie Mein, medico-legal adviser at the Medical Defence Union, agreed on the key issues: “Along with the uses identified in the BMJ paper, we’ve found that some doctors are turning to AI programs to help draft complaint responses for them. We have cautioned MDU members about the issues this raises, including inaccuracy and patient confidentiality. There are also data protection considerations.”

She added: “When dealing with patient complaints, AI drafted responses may sound plausible but can contain inaccuracies and reference incorrect guidelines which can be hard to spot when woven into very eloquent passages of text. It’s vital that doctors use AI in an ethical way and comply with relevant guidance and regulations.”

Probably the most critical questions amid all this are: How accurate is ChatGPT in a medical context? And how great might the risks of misdiagnosis or other issues be if this continues?

Generative AI in medical practice

As GPs increasingly experiment with AI tools, researchers are working to evaluate how they compare to traditional diagnostic methods. 

A study published in Expert Systems with Applications conducted a comparative analysis between ChatGPT, conventional machine learning models, and other AI systems for medical diagnoses.

The researchers found that while ChatGPT showed promise, it was often outperformed by traditional machine learning models specifically trained on medical datasets. For example, multi-layer perceptron neural networks achieved the highest accuracy in diagnosing diseases based on symptoms, with rates of 81% and 94% on two different datasets.

Researchers concluded that while ChatGPT and similar AI tools show potential, “their answers can be often ambiguous and out of context, so providing incorrect diagnoses, even if it is asked to provide an answer only considering a specific set of classes.”

This aligns with other recent studies examining AI’s potential in medical practice.

For example, research published in JAMA Network Open tested GPT-4’s ability to analyze complex patient cases. While it showed promising results in some areas, GPT-4 still made errors, some of which could be dangerous in real clinical scenarios.

There are some exceptions, though. One study conducted by the New York Eye and Ear Infirmary of Mount Sinai (NYEE) demonstrated how GPT-4 can meet or exceed human ophthalmologists in diagnosing and treating eye diseases.

For glaucoma, GPT-4 provided highly accurate and detailed responses that exceeded those of real eye specialists. 

AI developers such as OpenAI and NVIDIA are training purpose-built medical AI assistants to support clinicians, hopefully making up for shortfalls in base frontier models like GPT-4.

OpenAI has already partnered with health tech company Color Health to create an AI “copilot” for cancer care, demonstrating how these tools are set to become more specific to clinical practice.  

Weighing up benefits and risks

There are countless studies comparing specially trained AI models to humans in identifying diseases from diagnostic images such as MRIs and X-rays. 

AI techniques have outperformed doctors in everything from cancer and eye disease diagnosis to early detection of Alzheimer’s and Parkinson’s. One such tool, named “Mia,” proved effective in analyzing over 10,000 mammogram scans, flagging known cancer cases, and uncovering cancer in 11 women that doctors had missed. 

However, these purpose-built AI tools are certainly not the same as parsing notes and findings into a language model like ChatGPT and asking it to infer a diagnosis from that alone. 

Nevertheless, that’s a difficult temptation to resist. It’s no secret that healthcare services are overwhelmed. NHS waiting times are at all-time highs, and even obtaining a GP appointment in some areas is a grim task. 

AI tools target time-consuming admin, which is precisely their allure for overwhelmed doctors. We’ve seen this mirrored across numerous public sector fields, such as education, where teachers are widely using AI to create materials, mark work, and more. 

So, will your doctor parse your notes into ChatGPT and write your next prescription based on the results? Quite possibly. It’s another frontier where the technology’s promise to save time is simply too hard to deny. 

The best path forward may be to develop a code of use. The British Medical Association has called for clear policies on integrating AI into clinical practice.

“The medical community will need to find ways to both educate physicians and trainees and guide patients about the safe adoption of these tools,” the BMJ study authors concluded.

Aside from advice and education, ongoing research, clear guidelines, and a commitment to patient safety will be essential to realizing AI’s benefits while offsetting risks.

Researchers use AI chatbot to change conspiracy theory beliefs

Around 50% of Americans believe in conspiracy theories of one type or another, but MIT and Cornell University researchers think AI can fix that.

In their paper, the psychology researchers explained how they used a chatbot powered by GPT-4 Turbo to interact with participants to see if they could be persuaded to abandon their belief in a conspiracy theory.

The experiment involved 1,000 participants who were asked to describe a conspiracy theory they believed in and the evidence they felt underpinned their belief.

The paper noted that “Prominent psychological theories propose that many people want to adopt conspiracy theories (to satisfy underlying psychic “needs” or motivations), and thus, believers cannot be convinced to abandon these unfounded and implausible beliefs using facts and counterevidence.”

Could an AI chatbot be more persuasive where others failed? The researchers offered two reasons why they suspected LLMs could do a better job than you of convincing your colleague that the moon landing really happened.

LLMs have been trained on vast amounts of data and they’re really good at tailoring counterarguments to the specifics of a person’s beliefs.

After describing the conspiracy theory and evidence, the participants engaged in back-and-forth interactions with the chatbot. The chatbot was prompted to “very effectively persuade” the participants to change their belief in their chosen conspiracy.

The result was that, on average, participants experienced a 21.43% decrease in belief in the conspiracy theory they had previously considered true. The persistence of the effect was also notable: up to two months later, participants retained their revised beliefs.

The researchers concluded that “many conspiracists—including those strongly committed to their beliefs—updated their views when confronted with an AI that argued compellingly against their positions.”

Our new paper, out on (the cover of!) Science is now live! https://t.co/VBfC5eoMQ2

— Tom Costello (@tomstello_) September 12, 2024

They suggest that AI could be used to counter conspiracy theories and fake news spread on social media by responding with facts and well-reasoned arguments.

While the study focused on conspiracy theories, it noted that “Absent appropriate guardrails, however, it is entirely possible that such models could also convince people to adopt epistemically suspect beliefs—or be used as tools of large-scale persuasion more generally.”

In other words, AI is really good at convincing you to believe the things it is prompted to make you believe. An AI model also doesn’t inherently know what is ‘true’ and what isn’t. It depends on the content in its training data.

The researchers achieved their results using GPT-4 Turbo, but GPT-4o and the new o1 models are even more persuasive and deceptive.

The study was funded by the John Templeton Foundation. Ironically, the Templeton Freedom Awards are administered by the Atlas Economic Research Foundation, a group that opposes action on climate change and defends the tobacco industry, which also funds it.

AI models are becoming very persuasive and the people who decide what constitutes truth hold the power.

The same AI models that could convince you to stop believing the Earth is flat could be used by lobbyists to convince you that anti-smoking laws are bad and climate change isn’t happening.
