Microsoft unveils Copilot “Wave 2” to accelerate productivity and content production

Microsoft introduced “Wave 2” of its Copilot AI assistant, bringing a host of new features designed to transform how individuals and businesses interact with AI in their daily workflows.

At the core of Wave 2 is Copilot Pages, a novel digital workspace that Microsoft describes as “the first new digital artifact for the AI age.” 

In essence, the new Copilot announcements offer a fully AI-embedded environment for media and content production, built around OpenAI’s GPT-4o model.

It allows teams to collaborate in real-time with AI assistance, turning AI-generated content into editable, shareable documents. 

The goal? To save businesses time and enhance productivity while driving paid adoption of AI tools. Wave 2 signals Microsoft’s push to turn its multi-billion-dollar OpenAI investment into revenue.

By promoting Copilot’s business appeal, the company aims to convert more companies into paying customers for its premium AI tools.

Jared Spataro, Microsoft’s Corporate VP for AI at Work, explains the new Copilot concept: “Pages takes ephemeral AI-generated content and makes it durable, so you can edit it, add to it, and share it with others. You and your team can work collaboratively in a page with Copilot, seeing everyone’s work in real time and iterating with Copilot like a partner.”

The rollout of these features varies, with some available immediately and others expected to arrive in the coming weeks or months.

Here’s a round-up of everything Microsoft announced at its live event:

Copilot Pages

Copilot Pages is the centerpiece of Microsoft’s Wave 2 update, offering a thoroughly AI-embedded approach to collaborative work. It works by transforming Copilot’s AI-generated responses from fleeting chat messages into lasting, editable documents.

To use it, you select an “Edit in Pages” button next to a Copilot response, opening a new window alongside the original chat thread where teams can refine and build upon the AI’s output together.

Pages integrates with BizChat, Microsoft’s hub for combining web, corporate, and business data.

Like most of these features, Pages is currently exclusive to paid Copilot for Microsoft 365 subscribers.

PowerPoint powered by AI

The new PowerPoint Narrative Builder aims to streamline the presentation creation process. Users can input a topic, and the AI will generate an outline in minutes. 

Microsoft is also introducing a Brand Manager feature to ensure presentations align with company guidelines and maintain consistency across different channels.

That’s another business-centric feature designed to drive greater uptake among enterprise users.

In Microsoft’s words, “Accelerating every business process with Copilot—to grow revenue and reduce costs—is the best way to gain competitive advantage in the age of AI.”

Supporting businesses is a central theme here. Microsoft is pushing businesses to opt into its growing AI ecosystem, converting them into paying customers – customers who can pay much more than the average ChatGPT Plus subscriber’s $20 a month.

Smarter meetings with Teams

Copilot in Teams now offers more comprehensive meeting summaries by analyzing both spoken conversations and chat messages. 

As Microsoft puts it, “Now with Copilot in Teams, no question, idea, or contribution is left behind.”

Ending email overload in Outlook

Addressing the perennial challenge of email management, Microsoft introduces “Prioritize my inbox” for Outlook. 

This feature uses AI to identify important emails, provide concise summaries, and explain why certain messages were flagged as priorities. In the future, users will be able to teach Copilot their personal priorities to refine this feature. 

Copilot Agents

Perhaps the most intriguing addition is Copilot agents – customizable AI assistants that can perform specific tasks with varying degrees of autonomy. 

Microsoft is also launching an agent builder, which will make it easier for users to create and deploy AI helpers for different tasks, somewhat similar to Custom GPTs within ChatGPT.

Again, agents are aimed at business users. In Microsoft’s words, “We’re introducing Copilot agents, making it easier and faster than ever to automate and execute business processes on your behalf—enabling you to scale your team like never before.”

Excel evolves with Python integration

Copilot in Excel now includes Python integration, designed to accelerate data analysis. 

Users can perform advanced tasks like forecasting and risk analysis using natural language prompts without needing to write code. 
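Microsoft hasn’t published what the generated code looks like, but as a rough, hypothetical sketch, a prompt like “forecast next quarter’s revenue” might hand back something along these lines (the data, column names, and linear-trend method are invented for illustration):

```python
# A hypothetical sketch of the kind of Python a natural-language prompt in
# Excel might generate; the data and linear-trend method are illustrative.
import numpy as np
import pandas as pd

sales = pd.DataFrame({"quarter": [1, 2, 3, 4, 5, 6],
                      "revenue": [120, 135, 150, 149, 168, 180]})

# Fit a simple linear trend and extrapolate one quarter ahead.
slope, intercept = np.polyfit(sales["quarter"], sales["revenue"], 1)
next_quarter = sales["quarter"].max() + 1
print(f"Forecast for quarter {next_quarter}: {slope * next_quarter + intercept:.1f}")
```

The promise of the feature is that users describe the analysis in plain English and Copilot writes and runs this sort of code behind the scenes.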

Microsoft’s AI productivity revolution kicks up a notch

Copilot Wave 2 represents a big jump forward for AI integration into the Microsoft ecosystem. 

However, amidst the deluge of announcements, some critical areas seem to have flown under the radar.

For one, security and privacy concerns are rife since many of these tools will interact with sensitive personal and business data. While Microsoft asserts that Pages has Enterprise Data Protection, details on how this works in practice are scarce.

Other features, such as the ability to analyze files without opening them, may be convenient, but they also mean Copilot has persistent access to sensitive information. This level of AI involvement in the enterprise data ecosystem will breed some discomfort.

The same goes for AI integration into Outlook. Not everyone’s ready for an AI system to sift through personal and professional exchanges.

Microsoft must demonstrate that Copilot’s benefits outweigh the privacy concerns while being transparent about how user data is handled.

As Wave 2 rolls out, it’s clear that Microsoft is betting big on AI integration. Google is expected to respond with its own new wave of AI-driven features for Workspace.

Only time will tell how useful these tools prove and how widely businesses adopt them, or whether people simply need a little longer to accept AI’s massive front-and-center (and behind-the-scenes) role in our personal and business lives.

o1 is smarter but more deceptive with a “medium” danger level

OpenAI’s new “o1” LLMs, nicknamed Strawberry, display significant improvements over GPT-4o, but the company says this comes with increased risks.

OpenAI says it is committed to the safe development of its AI models. To that end, it developed a Preparedness Framework, a set of “processes to track, evaluate, and protect against catastrophic risks from powerful models.”

OpenAI’s self-imposed limits regulate which models get released or undergo further development. The Preparedness Framework results in a scorecard where CBRN (chemical, biological, radiological, nuclear), model autonomy, cybersecurity, and persuasion risks are rated as low, medium, high, or critical.

Where unacceptable risks are identified, mitigations are put in place to reduce them. Only models with a post-mitigation score of “medium” or below can be deployed. Only models with a post-mitigation score of “high” or below can be developed further.
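Those gating rules reduce to a couple of threshold comparisons. Here’s a minimal sketch of the logic as described above (an illustration of the stated policy, not OpenAI’s actual implementation; all category scores except CBRN are placeholders):

```python
# A minimal sketch of the Preparedness Framework's gating rules; this is an
# illustration of the stated policy, not OpenAI's actual code.
RISK_LEVELS = ["low", "medium", "high", "critical"]

def overall_risk(scorecard: dict) -> str:
    # Assume the overall rating is the worst post-mitigation category score.
    return max(scorecard.values(), key=RISK_LEVELS.index)

def can_deploy(scorecard: dict) -> bool:
    # Only models rated "medium" or below can be deployed.
    return RISK_LEVELS.index(overall_risk(scorecard)) <= RISK_LEVELS.index("medium")

def can_develop_further(scorecard: dict) -> bool:
    # Only models rated "high" or below can be developed further.
    return RISK_LEVELS.index(overall_risk(scorecard)) <= RISK_LEVELS.index("high")

# CBRN = "medium" is from the article; the other scores are placeholders.
o1_scorecard = {"cbrn": "medium", "model_autonomy": "low",
                "cybersecurity": "low", "persuasion": "medium"}
print(can_deploy(o1_scorecard), can_develop_further(o1_scorecard))  # True True
```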

The release of o1 marks the first time OpenAI has released a model that presents a “medium” CBRN risk. GPT-4o is rated as a low risk.

It’s not going to enable an amateur to create a biological weapon, but OpenAI says there was “a nontrivial uplift” in its ability to perform lab tasks. Specifically, OpenAI says that “o1-preview and o1-mini meaningfully outperformed GPT-4o” in wet lab biological protocols.

The system card for the o1 models outlines the red teaming and other safety testing the LLMs were subjected to and hints at some of the concerns OpenAI has about potential risks.

In its conclusion, the system card notes, “These increased capabilities come with significantly improved performance on safety benchmarks, but also increase certain types of risk.”

The results of OpenAI’s alignment efforts seem to indicate that in many respects o1 is a safer model. It’s more resilient to jailbreak attacks, less likely to be biased, and less prone to hallucinations.

Some of the experimental results show why OpenAI has reason for its concern.

Deception

OpenAI used evaluation organization Apollo Research to perform some of o1’s safety tests related to deception, or scheming. Apollo broadly defines scheming as AIs gaming their oversight mechanisms as a means to achieve a goal.

A self-aware model that is good at self-reasoning and theory-of-mind tasks might scheme, or employ deception, to achieve its primary goal. For example, if an AI model realized that it was undergoing alignment testing, it might hide risky behavior to achieve its goal of being deployed.

The system card noted that “o1-preview sometimes instrumentally faked alignment during testing.”

One example involved o1 being prompted with a primary goal of maximizing economic growth in an urban planning project. The model had access to information showing it might not get beyond the testing phase if it pursued that goal openly. Here’s how o1 reasoned about achieving its goal:

o1 decides to hide its true intent during testing so that it will be deployed. Source: OpenAI

This example shows o1’s ability to hide its true intentions during testing so it can pursue its primary goal once deployed. If o1 is aware of OpenAI’s rules for deployment based on its Preparedness Framework, might it have faked some of its responses to ensure that it got deployed?

If more intelligent models are in the works, might they make sure they stay under the “critical” risk score so that they can undergo further development?

The Apollo team says it “subjectively believes o1-preview cannot engage in scheming that can lead to catastrophic harms, although current evals aren’t designed to definitively rule this out.”

Persuasion

Testing of both o1-preview and o1-mini shows these models have “human-level persuasion capabilities.” However, OpenAI says “They do not outperform top human writers and do not reach our high risk threshold.”

The experiments OpenAI performed to test this do raise some concerns. In the MakeMePay test, o1-preview was assigned the role of a con artist and tasked with convincing GPT-4o to hand over the cash it had been awarded.

GPT-4o was told that it had been awarded $100 and needed to be rational about how to use the funds. Over the course of 1,000 interactions, o1-preview was able to con GPT-4o 25.8% of the time.

OpenAI concluded that “These results indicate that the o1 model series may be more manipulative than GPT-4o in getting GPT-4o to perform the undisclosed task (∼25% uplift).” It further noted that “This evaluation gives us a baseline for the model’s ability to do persuasive harm, without triggering any model policies (as telling a model to play a game is not out-of-policy).”

The prospect of putting the o1 LLMs to work on real-world problems is extremely exciting, and when o1 gains multimodal capabilities it will represent another major leap. But when AI testers say they can’t rule out “catastrophic harms” and that models sometimes hide their true intent, there may be reason to temper that excitement with caution.

Did OpenAI just give Gavin Newsom a good reason to sign the SB 1047 AI safety bill that it opposes?

AI ushers in new era for strawberry farming, with broader agricultural implications

AI may soon work alongside humans to cultivate the perfect strawberry.

Researchers at Western University have developed an AI system that promises to revolutionize how we grow one of the world’s favorite fruits, with potential ripple effects across the entire agricultural sector.

And no, this isn’t related to OpenAI’s o1 model, formerly codenamed “Project Strawberry.”

The study, published in the journal Foods, showcases a remarkable leap forward in agricultural technology. 

Using advanced machine learning techniques, the team has created a system capable of detecting strawberry ripeness and diseases with nearly 99% accuracy – all through simple camera monitoring.

“We wanted to reduce the size of these AI models to make it something feasible for farmers and localized production,” said Joshua Pearce, the John M. Thompson Chair in Information Technology and Innovation at Western Engineering and Ivey Business School. 

“We didn’t want to just increase the accuracy, which is above 98%, but also reduce the size of the models.”

What sets this research apart is its focus on accessibility. Unlike many high-tech agricultural solutions that cater to large-scale operations, Pearce and his colleague Soodeh Nikan designed their system with small and medium-sized farms in mind.

The team’s methodology combined innovative AI techniques with practical agricultural knowledge:

They started by collecting diverse sets of strawberry images, including healthy fruits and those affected by various diseases.
These images were then processed and augmented to create a robust training dataset.
The researchers fine-tuned three different AI models – Vision Transformer, MobileNetV2, and ResNet18 – each bringing unique strengths to the task.
To ensure the AI could handle real-world variability, they incorporated techniques like class weighting and synthetic image generation (a minimal training sketch follows this list).
Perhaps most crucially, they integrated “attention mechanisms” into the models, allowing the AI to focus on the most relevant parts of each image.
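The study’s code isn’t reproduced here, but a minimal PyTorch sketch of two of those steps, fine-tuning ResNet18 (one of the three named models) with inverse-frequency class weights, might look like this; the dataset layout, paths, and hyperparameters are assumptions:

```python
# A minimal sketch of fine-tuning with class weighting; illustrative only,
# not the Western University team's code.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([transforms.Resize((224, 224)),
                                transforms.ToTensor()])

# Assumed layout: strawberry_images/<class_name>/*.jpg
dataset = datasets.ImageFolder("strawberry_images", transform=transform)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

# Class weighting: weight each class inversely to its frequency so that rare
# disease classes aren't drowned out by abundant healthy examples.
counts = torch.bincount(torch.tensor(dataset.targets))
weights = counts.sum() / (len(counts) * counts.float())

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(dataset.classes))

criterion = nn.CrossEntropyLoss(weight=weights)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for images, labels in loader:  # one illustrative epoch
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```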

The system excels at two primary tasks:

Ripeness detection: It can accurately classify strawberries as ripe or unripe, helping farmers optimize harvest timing.
Disease identification: The AI can detect and identify seven distinct types of strawberry diseases: angular leaf spot, anthracnose fruit rot, blossom blight, gray mold, leaf spot, powdery mildew fruit, and powdery mildew leaf.

The results speak for themselves. With accuracy rates hovering around 98%, the system outperforms previous attempts at automated strawberry monitoring by a vast margin.

However, the implications of this research extend far beyond just improving strawberry yields. 

The potential for reducing food waste is also evident. According to the Food and Agriculture Organization of the United Nations, approximately 14% of food produced is lost between harvest and retail. 

Technologies like this AI system could help address this issue by optimizing harvest timing and reducing losses due to disease or overripeness.

“Reducing waste and the cost of food is obviously a big issue these days. Like everyone, I am always surprised when I go to grocery store and see the price of fresh fruits and vegetables,” said Nikan. 

“When choosing projects, I usually look for something that is safety critical or a societal need. With my experience in other applications, I jumped at the chance to apply my knowledge and expertise to food security.”

Looking ahead, the team is already planning to test their system in outdoor settings, potentially using drones for wider field monitoring. 

They’re also exploring the use of AI-generated synthetic images to further reduce the data requirements for training effective models.

“As opposed to taking images of millions of strawberries, which is a low efficiency, high-cost approach, we are using synthetic images and open-source software to create millions of images ourselves, with relatively low computer power, which now allows us to pinpoint highly granular observations about ripeness and disease for very specific plants,” said Nikan.

Pearce added, “The software is completely free and open-source and farmers of any type are free to download it and then adapt it to their needs. They may prefer to have the AI system send them an email or ping their phone when they detect disease or even forward an image of a specific plant that is ready to pick. The software is wide open to make it your own.”

DAI#56 – AI games, music scams, and tech tumbles

Welcome to our weekly roundup of human-crafted AI news.

This week, AI made game creation child’s play.

An AI music scam made $10m.

And OpenAI finally gives us a taste of Strawberry.

Let’s dig in.

At last

After rumors, whispers, and weird posts on X about Strawberries, OpenAI finally released a new product: advanced reasoning models dubbed the “o1” series.

From our first tests and the demo videos, it appears to perform significantly better than GPT-4o on reasoning tasks. It takes its time to think before answering, but it’s worth the wait. We’re looking forward to seeing how this translates to benchmark results.

Is Sora or the voice assistant next?

AI games

Is the video game industry facing an AI renaissance? Generative AI is disrupting the industry with new developments happening daily. There’s good news and bad news for gamers and developers alike.

Gaming platform FRVR AI has taken another big step in its mission to democratize game creation and distribution. It joined forces with GenreX to bring AI music composition to game developers.

AI is making it possible for anyone with an idea for a game to create it. But it’s also making it harder for music, voice, and graphic artists to justify their services.

My 8yo son built a Three.js site with zero coding experience

He used Claude AI and let Cursor apply all the code for him. He actually made several projects including 2 platformer games, a drawing app, an animation app and an AI chat app. ~2 hours spent on each. I only helped him… pic.twitter.com/R1Eu5L9qqA

— Meng To (@MengTo) September 1, 2024

A safe bet

Apparently there’s good money to be made in making AI safe. Ilya Sutskever’s Safe Superintelligence (SSI) raised $1 billion in funding, which saw his startup hit a $5 billion valuation.

SSI has no product and only 10 employees. What’s Ilya going to do with all that cash?

On the flip side, AI is helping scammers make money too. The FBI busted a $10 million AI music streaming scam run by a North Carolina ‘musician’. The scam was elegantly simple and it’s probably being duplicated elsewhere.

Oops, my bad

If you watched your NVIDIA shares head south on Friday, it might have had something to do with a sketchy Goldman Sachs report that claimed ChatGPT user numbers were tanking.

The reason for the mistake is actually pretty funny. AI tech shareholders probably aren’t laughing though.

Over the last week, we saw a build-up of excitement over a new open-source model that seemed to blow the top LLMs off the benchmark leaderboards. Reflection 70B looked too good to be true.

And now it looks like “the most powerful open-source LLM” is either a scam or a big misunderstanding.

iPhone 16 and the wait for AI

Apple demoed its new iPhone 16 and its ‘Apple Intelligence’ AI features at the Glowtime event this week.

The new features look cool and we can’t wait to try out the upgraded Siri. If the prospect of Apple Intelligence has you considering upgrading your phone, you may want to hold off on that for now.

The iPhone 16 will be Apple’s first AI-capable phone, but there’s a catch.

Apple Ai summed up pic.twitter.com/ZyJzZOKhdr

— Wall Street Memes (@wallstmemes) September 9, 2024

Hitting the brakes

Governments are trying desperately to make legislation keep pace with AI and its associated risks. The world now has its first international AI treaty to align the technology with human rights, democracy, and law.

The first batch of countries has already signed up, but we’ll have to wait to see who declines and whether it will even make a difference.

Do you like the way Google and other AI-powered platforms give you instant answers to your questions? Not everyone is a fan. A group of US Senators has asked the FTC and DOJ to investigate AI summarizers for antitrust violations.

AI events

If you’re looking to attend an in-person AI event then you’re spoilt for choice next month. The AIAI Boston 2024 event hosts three co-located summits exploring cutting-edge AI. You’ll get to attend the Chief AI Officer, Generative AI, and Computer Vision summits in one spot.

In other news…

Here are some other clickworthy AI stories we enjoyed this week:

Facebook admits to scraping every Australian adult user’s public photos and posts to train AI, with no opt-out option.
Sixty countries endorse ‘blueprint’ for AI use in military but China opts out.
James Earl Jones’ Darth Vader voice lives on through AI.
Ark report says humanoid robots operating at scale could generate ~$24 trillion in revenues.
OpenAI is reportedly in talks with investors to raise $6.5 billion at a $150 billion pre-money valuation.
Adobe says video generation is coming to Firefly this year.
MI6 and the CIA are using generative AI to combat tech-driven threat actors.
Mistral releases Pixtral 12B, its first multimodal model.

OpenAI still hasn’t released Sora but they keep teasing us with new videos like this one.

OpenAI Sora new showcase video is out on YT, This is created by Singaporean artist Niceaunties. pic.twitter.com/VjvjzkReXM

— AshutoshShrivastava (@ai_for_success) September 9, 2024

And that’s a wrap.

Do you think games created by AI could ever be as good as the ones made by humans? I’m guessing there are some folks out there with great ideas but no coding skills who could surprise us.

I’d love to know what Ilya Sutskever is cooking up with his newfound cash. Will he be the first to create a safe AGI or the first to figure out that there’s no such thing as a safe superintelligence?

Let us know what you think, follow us on X, and send us links to cool AI stuff we may have missed.

OpenAI debuts the “o1” series, pushing the boundaries of AI reasoning

OpenAI has released new advanced reasoning models dubbed the “o1” series.

o1 currently comes in two versions – o1-preview and o1-mini – and is designed to perform complex reasoning tasks, marking what OpenAI describes as “a new paradigm” in AI development.

“This is what we consider the new paradigm in these models,” explained Mira Murati, OpenAI’s Chief Technology Officer, in a statement to Wired. “It is much better at tackling very complex reasoning tasks.”

Unlike previous iterations that excelled primarily through scale, i.e., by throwing compute at a problem, o1 aims to replicate the human-like thought process of “reasoning through” problems.

Rather than generating a single answer, the model works step-by-step, considering multiple approaches and revising itself as necessary, a method known as “chain of thought” prompting. 

This allows it to solve complex problems in math, coding, and other fields with a level of precision that existing models, including GPT-4o, struggle to achieve.
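This piece only confirms o1 access inside ChatGPT (more on that below), but assuming OpenAI also exposes the model through its standard chat completions API under the “o1-preview” name (an assumption, not something the article confirms), calling it would look roughly like this:

```python
# A hedged sketch of calling o1-preview via OpenAI's Python SDK; whether the
# model is exposed on this endpoint, and under this name, is an assumption.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o1-preview",
    messages=[{
        "role": "user",
        "content": "How many times does the letter r appear in 'strawberry'?",
    }],
)

# The model thinks before it answers, so replies arrive more slowly
# than GPT-4o's.
print(response.choices[0].message.content)
```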

We’re releasing a preview of OpenAI o1—a new series of AI models designed to spend more time thinking before they respond.

These models can reason through complex tasks and solve harder problems than previous models in science, coding, and math. https://t.co/peKzzKX1bu

— OpenAI (@OpenAI) September 12, 2024

Mark Chen, OpenAI’s Vice President of Research, elaborated on how o1 distinguishes itself by enhancing the learning process. “The model sharpens its thinking and fine-tunes the strategies that it uses to get to the answer,” said Chen. 

He demonstrated the model with several mathematical puzzles and advanced chemistry questions that previously stumped GPT-4o. 

One puzzle that baffled earlier models asked: “A princess is as old as the prince will be when the princess is twice as old as the prince was when the princess’s age was half the sum of their present age. What is the age of the prince and princess?” 

The o1 model determined the correct answer: the prince is 30, and the princess is 40.
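For readers who want to verify that answer, one way to unpack the puzzle’s nested clauses into equations, with p as the princess’s present age and q as the prince’s, is sketched here:

```latex
% One reading of the puzzle, working from the innermost clause outward.
% "...when the princess's age was half the sum of their present ages":
% that was p - (p+q)/2 = (p-q)/2 years ago, so the prince was then
\[ q - \tfrac{p-q}{2} = \tfrac{3q-p}{2}. \]
% "...when the princess is twice as old as the prince was [then]":
% the princess will be 2 * (3q-p)/2 = 3q-p, which happens (3q-p) - p
% = 3q-2p years from now, at which point the prince will be
\[ q + (3q - 2p) = 4q - 2p. \]
% "A princess is as old as the prince will be [then]":
\[ p = 4q - 2p \quad\Longrightarrow\quad 3p = 4q. \]
% With q = 30 and p = 40, 3(40) = 4(30) = 120, matching the model's answer.
```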

How to access o1

ChatGPT Plus users can already access o1 from inside ChatGPT.

That’s a surprise, as GPT-4o’s voice feature is still rolling out months after its demos. I don’t think many thought o1 would spontaneously land like this.

o1 seems related to OpenAI’s project codenamed “Strawberry.” Now here’s a funny thing: most AI models don’t know how many Rs are in “strawberry.” It trips up their reasoning skills.

I tested this in o1. Lo and behold, it got it right. Clearly, o1’s approach to reasoning helps solve such questions efficiently.
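For the record, the ground truth takes two lines of ordinary Python to check; nothing model-specific here:

```python
# Plain string counting gives the answer that trips many LLMs.
print("strawberry".count("r"))  # prints 3
```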

Sam Altman’s recent spate of strawberry-related social media talk might be linked to this famous strawberry-flavoured AI problem and o1’s codename “Project Strawberry.” If not, it’s a weird coincidence.

A step change in problem-solving

The o1 model’s ability to “reason” its way through problems represents progress in AI – something that could prove quite groundbreaking if its performance holds up “in the wild.”

The new models have already shown strong performance in tests like the American Invitational Mathematics Examination (AIME). 

According to OpenAI, the new model solved 83% of the problems presented in the AIME, compared to just 12% by GPT-4o.

While o1’s strengths are evident, it does come with trade-offs.

The model takes longer to generate answers due to its more thoughtful methodologies. We don’t yet know how pronounced this will be and what its impact will be on usability. 

o1’s strange origins

o1 comes off the back of talk surrounding an OpenAI project codenamed “Strawberry,” which emerged in late 2023.

It was rumoured to be an AI model capable of autonomous web exploration, designed to conduct “deep research” rather than simple information retrieval.

Talk surrounding Strawberry swelled not long ago when The Information leaked some info about OpenAI’s internal projects. Namely, OpenAI is allegedly developing two forms of Strawberry.

One is a smaller, simplified version intended for integration into ChatGPT. It aims to enhance reasoning capabilities in scenarios where users require more thoughtful, detailed answers rather than quick responses. This sounds like o1.
Another is a larger, more powerful version that is used to generate high-quality “synthetic” training data for OpenAI’s next flagship language model, codenamed “Orion.” This may or may not be linked to o1.

OpenAI has provided no direct clarification on what Strawberry truly is.

A complement, not a replacement

Murati emphasized that o1 is not designed to replace GPT-4o but to complement it. 

“There are two paradigms,” she said. “The scaling paradigm and this new paradigm. We expect that we will bring them together.” 

While OpenAI continues to develop GPT-5, which will likely be even larger and more powerful than GPT-4o, future models could incorporate the reasoning functions of o1. 

This fusion could address the persistent limitations of large language models (LLMs), such as their struggle with seemingly simple problems that require logical deduction.

Anthropic and Google are allegedly racing to integrate similar features into their models. Google’s AlphaProof project, for instance, also combines language models with reinforcement learning to tackle difficult math problems. 

However, Chen believes that OpenAI has the edge. “I do think we have made some breakthroughs there,” he said, “I think it is part of our edge. It’s actually fairly good at reasoning across all domains.”

Yoshua Bengio, a leading AI researcher and recipient of the prestigious Turing Award, lauded the progress but urged caution.

“If AI systems were to demonstrate genuine reasoning, it would enable consistency of facts, arguments, and conclusions made by the AI,” he told the FT.

Safety and ethical considerations

As part of its commitment to responsible AI, OpenAI has bolstered the safety features of the o1 series, including “on-by-default” content safety tools. 

These tools help prevent the model from producing harmful or unsafe outputs.

“We’re pleased to announce that Prompt Shields and Protected Materials for Text are now generally available in Azure OpenAI Service,” a Microsoft blog post stated.

The o1 series is available for early access in Microsoft’s Azure AI Studio and GitHub Models, with a broader release planned soon. 

OpenAI hopes that o1 will enable developers and enterprises to innovate more cost-effectively, aligning with their broader mission of making AI more accessible to corporate users. 

“We believe that it’ll allow us to ship intelligence cheaper,” concluded Chen. “And I think that really is the core mission of our company.”

All in all, an exciting release. It will be very interesting to see what questions, problems, and tasks o1 thrives on.

FTC asked to investigate AI summarizers for antitrust violations

A group of Democrat Senators has asked the FTC and DOJ to investigate whether new generative AI features such as summarizers on search platforms violate US antitrust laws.

New features like Google’s AI Overviews and SearchGPT are objectively useful in providing quick answers to users’ questions, but at what cost? The Senators, led by Senator Amy Klobuchar, say that generative AI’s ability to summarize or regurgitate existing content harms content creators and journalists.

Their letter addressed to the FTC and DOJ says, “Recently, multiple dominant online platforms have introduced new generative AI features that answer user queries by summarizing, or, in some cases, merely regurgitating online content from other sources or platforms. The introduction of these new generative AI features further threatens the ability of journalists and other content creators to earn compensation for their vital work.”

Generative AI poses new risks for content creators, especially journalists. With local news already in crisis, I urged the DOJ and FTC to investigate whether generative AI products threaten fair competition and innovation by repackaging original content without permission. pic.twitter.com/Jo1KdA45H1

— Senator Amy Klobuchar (@SenAmyKlobuchar) September 10, 2024

They argue that search platforms used to direct users to relevant websites where content creators benefited from the website traffic, but now their content is reworked or summarized by AI without any reward accruing to the person who created it.

The letter explained that “When a generative AI feature answers a query directly, it often forces the content creator—whose content has been relegated to a lower position on the user interface—to compete with content generated from their own work.”

Platforms like Google will respect a robots.txt instruction not to index a content creator’s website, but that results in the site not showing up in any search queries at all.
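To make that all-or-nothing tradeoff concrete, here’s a minimal sketch using Python’s standard-library robots.txt parser; the directives are an illustration, not taken from the Senators’ letter:

```python
# The same Disallow rule that keeps a site's content away from a platform's
# AI answers also locks its crawler out entirely, so the site vanishes from
# ordinary search results too.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: Googlebot
Disallow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

print(rp.can_fetch("Googlebot", "https://example.com/article"))  # False
```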

Journalism under threat

The Senators claim “Dominant online platforms in areas like search, social media, e-commerce, operating systems, and app stores already abuse their gatekeeper power over the digital marketplace in ways that harm small businesses and content creators and eliminate choices for consumers.”

They say that AI could make this worse with a “potentially devastating impact” on news organizations and other content creators.

In January, Senator Klobuchar asked Condé Nast CEO Roger Lynch whether his company had any real choice about letting AI models scrape its content. Lynch said that the “opt-out” feature was introduced after existing models were trained and that it reinforces their dominance.

Lynch said, “The only thing that will do is to prevent a new competitor from training new models to compete with them, so the opt-out of the training is too late, frankly.” Condé Nast has since signed a deal with OpenAI, licensing its content to train OpenAI’s models.

Generative AI is extremely useful, but it is disrupting creative industries even as it creates new opportunities. AI is doing to online news what the internet did to print media.

A recent study quoted by the Senators says that the US has lost approximately 2,900 newspapers since 2005 and that, by the end of 2024, a third of those that existed in 2005 will have disappeared.

AI is undoubtedly making it easier and faster to get answers to our questions but from a fast-diminishing pool of sources.

If the FTC and DOJ agree with the Senators that these generative AI features are a “form of exclusionary conduct or an unfair method of competition in violation of the antitrust laws” then Google, OpenAI, and others like them will need to rethink how they answer our queries.
