
DAI#56 – AI games, music scams, and tech tumbles

Welcome to our weekly roundup of human-crafted AI news.

This week, AI made game creation child’s play.

An AI music scam made $10m.

And OpenAI finally gives us a taste of Strawberry.

Let’s dig in.

At last

After rumors, whispers, and weird posts on X about strawberries, OpenAI finally released a new product: advanced reasoning models dubbed the “o1” series.

From our first tests and the demo videos, it appears to perform significantly better than GPT-4o on reasoning tasks. It takes its time to think before answering, but it’s worth the wait. We’re looking forward to seeing how this translates to the benchmarks.

Is Sora or the voice assistant next?

AI games

Is the video game industry facing an AI renaissance? Generative AI is disrupting the industry with new developments happening daily. There’s good news and bad news for gamers and developers alike.

Gaming platform FRVR AI has taken another big step in its mission to democratize game creation and distribution. It joined forces with GenreX to bring AI music composition to game developers.

AI is making it possible for anyone with an idea for a game to create it. But it’s also making it harder for music, voice, and graphic artists to justify their services.

My 8yo son built a Three.js site with zero coding experience

He used Claude AI and let Cursor apply all the code for him. He actually made several projects including 2 platformer games, a drawing app, an animation app and an AI chat app. ~2 hours spent on each. I only helped him… pic.twitter.com/R1Eu5L9qqA

— Meng To (@MengTo) September 1, 2024

A safe bet

Apparently there’s good money to be made in making AI safe. Ilya Sutskever’s Safe Superintelligence (SSI) raised $1 billion in funding, which saw the startup hit a $5 billion valuation.

SSI has no product and only 10 employees. What’s Ilya going to do with all that cash?

On the flip side, AI is helping scammers make money too. The FBI busted a $10 million AI music streaming scam run by a North Carolina ‘musician’. The scam was elegantly simple and it’s probably being duplicated elsewhere.

Oops, my bad

If you watched your NVIDIA shares head south on Friday, it might have had something to do with a sketchy Goldman Sachs report that claimed ChatGPT user numbers were tanking.

The reason for the mistake is actually pretty funny. AI tech shareholders probably aren’t laughing though.

Over the last week, we saw a build-up of excitement over a new open-source model that seemed to blow the top LLMs off the benchmark leaderboards. Reflection 70B looked too good to be true.

And now it looks like “the most powerful open-source LLM” is either a scam or a big misunderstanding.

iPhone 16 and the wait for AI

Apple demoed its new iPhone 16 and its ‘Apple Intelligence’ AI features at the Glowtime event this week.

The new features look cool and we can’t wait to try out the upgraded Siri. If the prospect of Apple Intelligence has you considering upgrading your phone, you may want to hold off on that for now.

The iPhone 16 will be Apple’s first AI-capable phone, but there’s a catch.

Apple Ai summed up pic.twitter.com/ZyJzZOKhdr

— Wall Street Memes (@wallstmemes) September 9, 2024

Hitting the brakes

Governments are scrambling to keep legislation up to speed with AI and its associated risks. The world now has its first international AI treaty to align the technology with human rights, democracy, and the rule of law.

The first batch of countries has already signed up, but we’ll have to wait to see who declines and whether it will even make a difference.

Do you like the way Google and other AI-powered platforms give you instant answers to your questions? Not everyone is a fan. A group of US Senators has asked the FTC and DOJ to investigate AI summarizers for antitrust violations.

AI events

If you’re looking to attend an in-person AI event then you’re spoilt for choice next month. The AIAI Boston 2024 event hosts three co-located summits exploring cutting-edge AI. You’ll get to attend the Chief AI Officer, Generative AI, and Computer Vision summits in one spot.

In other news…

Here are some other clickworthy AI stories we enjoyed this week:

Facebook admits to scraping every Australian adult user’s public photos and posts to train AI, with no opt-out option.
Sixty countries endorse ‘blueprint’ for AI use in military but China opts out.
James Earl Jones’ Darth Vader voice lives on through AI.
Ark report says humanoid robots operating at scale could generate ~$24 trillion in revenues.
OpenAI is reportedly in talks with investors to raise $6.5 billion at a $150 billion pre-money valuation.
Adobe says video generation is coming to Firefly this year.
MI6 and the CIA are using generative AI to combat tech-driven threat actors.
Mistral releases Pixtral 12B, its first multimodal model.

OpenAI still hasn’t released Sora but they keep teasing us with new videos like this one.

OpenAI Sora new showcase video is out on YT, This is created by Singaporean artist Niceaunties. pic.twitter.com/VjvjzkReXM

— AshutoshShrivastava (@ai_for_success) September 9, 2024

And that’s a wrap.

Do you think games created by AI could ever be as good as the ones made by humans? I’m guessing there are some folks out there with great ideas but no coding skills who could surprise us.

I’d love to know what Ilya Sutskever is cooking up with his newfound cash. Will he be the first to create a safe AGI or the first to figure out that there’s no such thing as a safe superintelligence?

Let us know what you think, follow us on X, and send us links to cool AI stuff we may have missed.


OpenAI debuts the “o1” series, pushing the boundaries of AI reasoning

OpenAI has released new advanced reasoning models dubbed the “o1” series.

o1 currently comes in two versions – o1-preview and o1-mini – and is designed to perform complex reasoning tasks, marking what OpenAI describes as “a new paradigm” in AI development.

“This is what we consider the new paradigm in these models,” explained Mira Murati, OpenAI’s Chief Technology Officer, in a statement to Wired. “It is much better at tackling very complex reasoning tasks.”

Unlike previous iterations, which excelled primarily through scale (essentially throwing more compute at a problem), o1 aims to replicate the human-like thought process of “reasoning through” problems.

Rather than generating a single answer, the model works step-by-step, considering multiple approaches and revising itself as necessary, a method known as “chain of thought” prompting. 

This allows it to solve complex problems in math, coding, and other fields with a level of precision that existing models, including GPT-4o, struggle to achieve.
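For developers who want to poke at this themselves, here’s a minimal sketch of sending o1-preview a multi-step problem through the OpenAI Python SDK. It’s illustrative only: it assumes you have the openai package installed and that your account has API access to the model, and the example question is our own.

```python
# Minimal sketch (not official sample code) of querying o1-preview via the
# OpenAI Python SDK. Assumes the `openai` package is installed and your
# account has API access to the model.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="o1-preview",
    messages=[
        {
            "role": "user",
            "content": (
                "In how many ways can 7 people sit around a round table "
                "if two of them refuse to sit next to each other?"
            ),
        }
    ],
)

# o1 reasons before replying, so expect a noticeably longer wait than with
# GPT-4o; only the final answer is returned to the caller.
print(response.choices[0].message.content)
```

Note how bare the request is: the model spends its extra time thinking before it responds, so there’s no special prompt engineering here beyond stating the problem.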

We’re releasing a preview of OpenAI o1—a new series of AI models designed to spend more time thinking before they respond.

These models can reason through complex tasks and solve harder problems than previous models in science, coding, and math. https://t.co/peKzzKX1bu

— OpenAI (@OpenAI) September 12, 2024

Mark Chen, OpenAI’s Vice President of Research, elaborated on how o1 distinguishes itself by enhancing the learning process. “The model sharpens its thinking and fine-tunes the strategies that it uses to get to the answer,” said Chen. 

He demonstrated the model with several mathematical puzzles and advanced chemistry questions that previously stumped GPT-4o. 

One puzzle that baffled earlier models asked: “A princess is as old as the prince will be when the princess is twice as old as the prince was when the princess’s age was half the sum of their present age. What is the age of the prince and princess?” 

The o1 model determined the correct answer: the prince is 30, and the princess is 40.
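The arithmetic checks out: the riddle’s wording pins the two ages to a 4:3 ratio, and 40 and 30 is the natural whole-number pair. Here’s a quick sanity check of the answer (our own sketch, not part of OpenAI’s demo):

```python
# Quick sanity check of the age riddle (our sketch, not OpenAI's demo code).
def satisfies(princess, prince):
    # ...when the princess's age was half the sum of their present ages...
    years_ago = princess - (princess + prince) / 2
    prince_then = prince - years_ago
    # ...when the princess is twice as old as the prince was then...
    years_ahead = 2 * prince_then - princess
    prince_will_be = prince + years_ahead
    # The princess is as old as the prince will be at that point.
    return princess == prince_will_be

print(satisfies(40, 30))  # True
print(satisfies(40, 28))  # False
```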

How to access o1

ChatGPT Plus users can already access o1 from inside ChatGPT.

That’s a surprise, as GPT-4o’s voice feature is still rolling out months after its demos. I don’t think many thought o1 would spontaneously land like this.

o1 seems related to OpenAI’s project codenamed “Strawberry.” Now here’s a funny thing: most AI models don’t know how many Rs are in “strawberry.” It trips up their reasoning skills.

I tested this in o1. Lo and behold, it got it right. Clearly, o1’s approach to reasoning helps solve such questions efficiently.

Sam Altman’s recent spate of strawberry-related social media talk might be linked to this famous strawberry-flavoured AI problem and o1’s codename “Project Strawberry.” If not, it’s a weird coincidence.

A step change in problem-solving

The o1 model’s ability to “reason” its way through problems represents real progress in AI, and it could prove quite groundbreaking if its benchmark performance holds up in the wild.

The new models have already shown strong performance in tests like the American Invitational Mathematics Examination (AIME). 

According to OpenAI, the new model solved 83% of the problems presented in the AIME, compared to just 12% by GPT-4o.

While o1’s strengths are evident, it does come with trade-offs.

The model takes longer to generate answers due to its more thoughtful methodologies. We don’t yet know how pronounced this will be and what its impact will be on usability. 

o1’s strange origins

o1 comes off the back of talk surrounding an OpenAI project codenamed “Strawberry,” which emerged in late 2023.

It was rumoured to be an AI model capable of autonomous web exploration, designed to conduct “deep research” rather than simple information retrieval.

Talk surrounding Strawberry swelled not long ago when The Information leaked some info about OpenAI’s internal projects. Namely, OpenAI is allegedly developing two forms of Strawberry.

One is a smaller, simplified version intended for integration into ChatGPT. It aims to enhance reasoning capabilities in scenarios where users require more thoughtful, detailed answers rather than quick responses. This sounds like o1.
Another is a larger, more powerful version that is used to generate high-quality “synthetic” training data for OpenAI’s next flagship language model, codenamed “Orion.” This may or may not be linked to o1.

OpenAI has provided no direct clarification on what Strawberry truly is.

A complement, not a replacement

Murati emphasized that o1 is not designed to replace GPT-4o but to complement it. 

“There are two paradigms,” she said. “The scaling paradigm and this new paradigm. We expect that we will bring them together.” 

While OpenAI continues to develop GPT-5, which will likely be even larger and more powerful than GPT-4o, future models could incorporate the reasoning functions of o1. 

This fusion could address the persistent limitations of large language models (LLMs), such as their struggle with seemingly simple problems that require logical deduction.

Anthropic and Google are allegedly racing to integrate similar features into their models. Google’s AlphaProof project, for instance, also combines language models with reinforcement learning to tackle difficult math problems. 

However, Chen believes that OpenAI has the edge. “I do think we have made some breakthroughs there,” he said, “I think it is part of our edge. It’s actually fairly good at reasoning across all domains.”

Yoshua Bengio, a leading AI researcher and recipient of the prestigious Turing Award, lauded the progress but urged caution.

 “If AI systems were to demonstrate genuine reasoning, it would enable consistency of facts, arguments, and conclusions made by the AI,” he told the FT.

Safety and ethical considerations

As part of its commitment to responsible AI, OpenAI has bolstered the safety features of the o1 series, including “on-by-default” content safety tools. 

These tools help prevent the model from producing harmful or unsafe outputs.

“We’re pleased to announce that Prompt Shields and Protected Materials for Text are now generally available in Azure OpenAI Service,” reads the announcement in a Microsoft blog post.

The o1 series is available for early access in Microsoft’s Azure AI Studio and GitHub Models, with a broader release planned soon. 

OpenAI hopes that o1 will enable developers and enterprises to innovate more cost-effectively, aligning with their broader mission of making AI more accessible to corporate users. 

“We believe that it’ll allow us to ship intelligence cheaper,” concluded Chen. “And I think that really is the core mission of our company.”

All in all, an exciting release. It will be very interesting to see what questions, problems, and tasks o1 thrives on.


FTC asked to investigate AI summarizers for antitrust violations

A group of Democrat Senators has asked the FTC and DOJ to investigate whether new generative AI features such as summarizers on search platforms violate US antitrust laws.

New features like Google’s AI Overviews and SearchGPT are objectively useful in providing quick answers to users’ questions, but at what cost? The Senators, led by Senator Amy Klobuchar, say that the ability of generative AI to summarize or regurgitate existing content harms content creators and journalists.

Their letter addressed to the FTC and DOJ says, “Recently, multiple dominant online platforms have introduced new generative AI features that answer user queries by summarizing, or, in some cases, merely regurgitating online content from other sources or platforms. The introduction of these new generative AI features further threatens the ability of journalists and other content creators to earn compensation for their vital work.”

Generative AI poses new risks for content creators, especially journalists. With local news already in crisis, I urged the DOJ and FTC to investigate whether generative AI products threaten fair competition and innovation by repackaging original content without permission. pic.twitter.com/Jo1KdA45H1

— Senator Amy Klobuchar (@SenAmyKlobuchar) September 10, 2024

They argue that search platforms used to direct users to relevant websites where content creators benefited from the website traffic, but now their content is reworked or summarized by AI without any reward accruing to the person who created it.

The letter explained that “When a generative AI feature answers a query directly, it often forces the content creator—whose content has been relegated to a lower position on the user interface—to compete with content generated from their own work.”

Platforms like Google will respect a robots.txt instruction not to index a content creator’s website, but that results in the site not showing up in search results at all.
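For anyone unfamiliar with how those opt-outs work mechanically, here’s a small sketch using Python’s standard-library robots.txt parser. The crawler names are real (GPTBot is OpenAI’s crawler), but the rules and URL are hypothetical: a site can block an AI crawler on a per-bot basis, while blocking the crawler that feeds a platform’s search index is exactly what makes it vanish from results.

```python
# Sketch: checking hypothetical robots.txt rules with Python's built-in parser.
# Here GPTBot (OpenAI's crawler) is blocked, while everyone else, including
# Googlebot, is still allowed to crawl the site.
from urllib import robotparser

rules = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(rules)

print(rp.can_fetch("GPTBot", "https://example.com/article"))     # False
print(rp.can_fetch("Googlebot", "https://example.com/article"))  # True
```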

Journalism under threat

The Senators claim “Dominant online platforms in areas like search, social media, e-commerce, operating systems, and app stores already abuse their gatekeeper power over the digital marketplace in ways that harm small businesses and content creators and eliminate choices for consumers.”

They say that AI could make this worse with a “potentially devastating impact” on news organizations and other content creators.

In January, Senator Klobuchar asked Condé Nast CEO Roger Lynch whether his company had any choice about letting AI models scrape its content. Lynch said that the “opt-out” feature was introduced after existing models were trained and that it reinforces their dominance.

Lynch said, “The only thing that will do is to prevent a new competitor from training new models to compete with them, so the opt-out of the training is too late, frankly.” Condé Nast has since signed a deal with OpenAI to license its data to train its models.

Generative AI is extremely useful, but it is disrupting creative industries even as it creates new opportunities. AI is doing to online news what the internet did to print media.

A recent study quoted by the Senators says that the US has lost approximately 2,900 newspapers and that a third of those that existed in 2005 will have disappeared by the end of 2024.

AI is undoubtedly making it easier and faster to get answers to our questions but from a fast-diminishing pool of sources.

If the FTC and DOJ agree with the Senators that these generative AI features are a “form of exclusionary conduct or an unfair method of competition in violation of the antitrust laws” then Google, OpenAI, and others like them will need to rethink how they answer our queries.


FRVR AI and GenreX join forces to bring AI music composition to game developers

FRVR AI, a leader in AI-powered game development tools, has announced a groundbreaking partnership with GenreX, a cutting-edge music generation service.

The partnership will directly integrate GenreX’s AI music technology into FRVR AI’s game development platform.

It allows creators to seamlessly incorporate high-quality, adaptive music into their games without musical production expertise.

Creators simply input a prompt to generate soundtracks and effects, which are then integrated directly into their projects.

Chris Benjaminsen, Founder of FRVR AI, stated, “We are excited to partner with Genrex and bring this innovative technology to the gaming industry. Our goal is to empower game creators with the tools they need to create exceptional gaming experiences, and GenreX’s AI-generated music is a game-changer in this regard.”

You can describe audio using natural language within the FRVR editor interface. Source: FRVR.

Music can make or break a game’s atmosphere, but getting it right isn’t always easy or cheap. FRVR’s new feature will relieve some of that pressure, enabling designers to create more fun, immersive games.

The integration of GenreX’s technology into FRVR AI’s platform will allow developers to:

Enhance game atmosphere and immersion with adaptive, high-quality music
Reduce production time and costs associated with traditional music composition
Focus on core game design while AI handles the musical elements

Yihao Chen, Co-Founder at GenreX, added, “We are thrilled to collaborate with FRVR, a company that shares our passion for innovation and elevating the game developer experience. Our generative AI music technology is perfectly suited for the dynamic nature of games, and we are excited to see how it will enhance FRVR’s already impressive game development offerings.”

Audio clips can be edited within FRVR. Source: FRVR.

More About FRVR

FRVR, co-founded in 2014 by industry veterans Chris Benjaminsen and Brian Meidell, has been on a mission to revolutionize how people access and enjoy games.

The company’s AI-powered platform allows anyone to create games simply by interacting with AI using natural language.

In a recent interview with DailyAI, Benjaminsen shared his vision for democratizing game creation: “Rather than having very few people decide what games people should be allowed to play, we want to allow anyone to create whatever they want and then let the users figure out what is fun.”

FRVR’s impact has been substantial, with games created on its platform accessed by 1.5 billion players worldwide.

The company’s cloud-based editor enables users to iterate on games by playing them and providing further instructions to refine gameplay. It’s intuitive to use, not to mention super-fun. We got to try it ourselves earlier in the year.

The platform also vastly simplifies the process of publishing and sharing across more than 30 channels, including web, mobile app stores, and social media platforms.

Those interested in experiencing FRVR AI’s games design platform, including the new music generation features, can join the public beta here.


Goldman Sachs ChatGPT mistake causes AI market panic

A flawed report by Goldman Sachs analyst Peter Oppenheimer may have been behind the significant negative sentiment for AI shares over the last few days.

There have been rumors of a potential AI bubble as tech stock prices continue to rise, and Oppenheimer’s report indicated that the tide was about to turn. The report relied on a graph that seemed to show that the number of ChatGPT users was declining.

Here’s the graph that made Oppenheimer believe that ChatGPT was losing users.

Graph erroneously showing a decline in ChatGPT users. Source: Similarweb, data compiled by Goldman Sachs Global Investment Research

In his analysis of the graph, Oppenheimer said, “Furthermore, the original ‘excitement’ about chat-GPT is fading in terms of monthly users (Exhibit 11). This does not mean, of course, that the growth rates in the industry will not be strong, but it does suggest that the next wave of beneficiaries may come from the new products and services that can be created on the back of these foundation models.”

Investors wondering if it was time to take profits or delay investing in OpenAI and related stocks were spooked when they read the report featured in the Financial Times. NVIDIA, which is heavily dependent on the future of AI, saw its shares fall 4% on Friday, hitting their lowest point in weeks.

The problem though is that the graph Oppenheimer used in his report didn’t capture the reality of what was happening. The decline in visitors to chat.openai.com was not because users were leaving ChatGPT. It was because OpenAI was migrating the service to its new URL at chatgpt.com.

Similarweb, which tracks website traffic, noted that even though there was a slight dip in ChatGPT traffic in July, the trend in ChatGPT users continues to grow.

For the first time since December 2023, ChatGPT showed a drop in month-over-month traffic during July 2024. However, as a testament to the significant progress of the OpenAI tool, July’s traffic still increased by 74% year-over-year.https://t.co/dL9iz4jLT6 pic.twitter.com/ODcXb1BF6w

— Similarweb (@Similarweb) August 11, 2024

Not realizing that OpenAI was using a new domain for ChatGPT, Oppenheimer assumed that the good times could be over and his report’s effect on share prices was evident.

It’s an example of how volatile the AI investor market is and how prone it is to announcements, rumors, and disinformation, albeit unintentional in this case.

OpenAI is eyeing new investors with a $100 billion valuation in its sights and expects revenue of between $3.5 billion and $4.5 billion this year. If it releases Strawberry in the fall, it could see a continuation of positive AI sentiment, which could be good news for tech stocks like NVIDIA.

California’s proposed SB 1047 AI safety bill is on Governor Newsom’s desk waiting for him to either sign it into law or veto it. Employees at OpenAI, Anthropic, Google, Meta, and others came out in support of the bill in an open letter published yesterday.

Newsom’s decision may have an even bigger effect on tech stock prices than a misread ChatGPT user graph.
