Uncategorized

HelloYou unveils Skanna, a barcode scanner with a twist

For many consumers, deciphering the truth about products feels like solving a riddle. Complex ingredient lists, vague claims, and limited transparency make it nearly impossible to know what you’re really buying—or how it aligns with your health, safety, and values.

This growing demand for clarity has inspired HelloYou, a developer known for creating apps that help simplify daily life, to create Skanna—an AI-powered barcode scanner app. Designed to cut through the noise, Skanna reveals the truth behind product labels, offering detailed insights into ingredients, safety, and environmental impact in seconds.

What is Skanna?

Skanna isn’t just another barcode scanner—it’s a game-changer for anyone seeking clarity in a confusing marketplace. Whether it’s deciphering a long list of ingredients, understanding allergens, or evaluating sustainability, Skanna delivers the truth behind every product you use.

“Transparency matters more than ever in a world where consumers demand clear, honest answers about the products they use,” said Rena Cimen, Senior Performance Marketing Manager at HelloYou. “With Skanna, our goal was to strip away the noise and deliver unfiltered, reliable information—quickly and effortlessly.”

Skanna is tailored for everyone from parents safeguarding their families to health-conscious shoppers and eco-conscious individuals making sustainable choices. With its sleek design and straightforward functionality, the app ensures that anyone can access critical product information with just a scan.     

Key Features

I downloaded Skanna to put its features to the test, and while it’s clear the app has incredible potential, it’s not without its quirks. Here’s what I discovered:

  • Ingredient Analysis: This feature is a standout, providing detailed breakdowns of product ingredients. It’s perfect for identifying allergens or unwanted additives. However, I noticed that not all products were in the database, particularly from smaller or niche brands. For a first version, this is understandable, but it’s something that could be improved over time.
  • Sustainability Insights: Skanna’s efforts to offer eco-conscious data are commendable. The app provides information about sourcing and ethical practices, but the depth of this data varies. It’s a great starting point, though users looking for detailed environmental impact analyses might find it a bit surface-level.
  • Scan History: The personal history feature is useful for keeping track of what you’ve scanned. While functional, it could benefit from more advanced organization tools, like categories or tags, to make it easier to revisit specific products.
  • Personalized Recommendations: Skanna really excels here. The app tailors suggestions based on dietary or lifestyle needs, such as gluten-free or vegan alternatives. However, the recommendations are only as strong as the database, so expanding it will make this feature even better.
  • Database and Variety: Out of 25 food items, 5 cosmetic products, 2 books, a hairdryer, and a vape that I scanned, Skanna managed to get everything right—impressively so. Admittedly, one of the books didn’t work at first because an Amazon barcode was covering it, but once I peeled that off, it scanned perfectly. Points for accuracy, even with a little extra effort!

While Skanna isn’t perfect—formatting and database coverage still need work—it’s the first of its kind to offer real-time, live feedback to shoppers. That alone is a game-changer in helping people make healthier and more sustainable choices.

How Skanna Works

Using Skanna is intuitive and straightforward:

  1. Open the app: The barcode scanner is accessible with one tap—no digging through menus.
  2. Scan a product: The scanner works quickly, delivering insights in just seconds for most barcodes. Occasionally, non-standard codes required manual input, but these instances were rare.
  3. Explore insights: The app’s best feature is how it presents information. No fluff, just clear, concise details about whatever you want to know.

While the app isn’t flawless—some barcodes didn’t scan as expected—it’s a strong foundation for what could become an essential tool for shoppers.

Who Can Benefit?

Despite its imperfections, Skanna has something to offer for a wide range of users:

  • Parents: Use Skanna to identify allergen-free, family-safe products with ease.
  • Health Enthusiasts: Avoid hidden additives and toxic ingredients effortlessly.
  • Sustainability Advocates: Gain insights into brands’ ethical and eco-friendly practices.
  • Busy Professionals: Access reliable product data on the go, saving time while shopping smarter.
  • Globe Trotters: Scan globally to decode labels, translate ingredients, and understand what you’re eating.

For a first version, Skanna’s strengths outweigh its limitations. It’s especially helpful for navigating complex ingredient lists in beauty, food, and household products. And while it’s not yet a perfect solution, its potential to evolve into an indispensable shopping companion is undeniable.

Final Thoughts: A Gamechanger in the Making

Skanna may not be a finished masterpiece, but it’s a bold step toward giving consumers real-time, actionable feedback about the products they buy. For anyone frustrated by unclear labels or limited product transparency, this app is a much-needed solution.

It’s refreshing to see an app that doesn’t just promise to help—but actively works to simplify complex decisions. With some refinements to its database and interface, Skanna could genuinely transform the way we shop for the better.

To celebrate Cyber Monday, DailyAI users can unlock a special 7-day free trial of the app’s premium features. Click on the exclusive iOS link below to explore everything the app has to offer—completely free for a week.

Start Your Free Trial on iOS

The post HelloYou unveils Skanna, a barcode scanner with a twist appeared first on DailyAI.

As AI advances, gaming studios, developers, and players face a new reality

From the rise of 3D graphics to the explosion of mobile gaming, technological progress has always driven the gaming industry forward. 

AI marks the latest chapter in its evolution. Once the masters of their virtual worlds, game developers must confront a fundamental question: What role will human creators play in an industry dominated by AI-driven processes? 

And beyond that, what are the broader ethical challenges of a world where AI, video games, and human lives increasingly intertwine?

Developers are already talking about how AI might transform the industry, but they’re also raising concerns. 

A recent survey by the Game Developers Conference found that 84% of developers are somewhat or very concerned about the ethics of generative AI, from fears of job displacement to issues like copyright infringement and the risk of AI systems scraping game data without consent.

At Hong Kong-based Gala Technology, the sense of urgency has reached a fever pitch. CEO Jia Xiaodong confessed to Bloomberg News, “Basically every week, we feel that we are going to be eliminated.”

The company has entered full crisis mode, freezing non-AI projects, mandating machine learning crash courses for department heads, and even tempting $7,000 bonuses for innovative AI ideas. 

In the US, gaming giants like Electronic Arts and Ubisoft are similarly pouring millions into AI research, even as they weather waves of layoffs and restructuring.

For those on the front lines of game development, AI displacement is already picking up pace. In 2023, more than 10,500 game developers lost their jobs across over 30 studios.

“I’m very aware that I could wake up tomorrow and my job could be gone,” confesses Jess Hyland, a video game artist with 15 years of experience under her belt. Hyland told the BBC that she’s already heard of colleagues losing gigs because of AI.

There’s a sense of inevitability that this trend will only accelerate. Masaaki Fukuda, a veteran of Sony’s PlayStation division who now serves as vice president at Japan’s largest AI startup, explained, “Nothing can reverse, stop, or slow the current AI trend.”

When machines dream of electric sheep

For decades, video games have been the product of intensely collaborative human effort, melding the skills of artists, writers, designers, and programmers into immersive, interactive experiences. 

Now that AI systems can generate levels, worlds, and even entire games from simple text prompts, the nature of authorship is being questioned.

Take GameNGen, an AI model developed by Google and Tel Aviv University that generates fully playable levels for first-person shooters in real-time, making them nearly indistinguishable from those crafted by human designers. 

Or consider DeepMind’s Genie, a foundation model that can generate interactive 2D environments from rough sketches or brief descriptions, blending elements from existing games to create entirely new worlds with distinct logic and aesthetics.

These examples showcase the direction of travel for AI in game development, a glimpse of what we might expect to see in a few years as AI advances.

However, change is already very much in the pipeline. AI tools like Unity’s Muse are actively reshaping game design workflows today, automating asset creation, animation, and environmental building.

This level of AI integration is already making it possible for developers to accomplish in hours what once took days. The intention is to remove the drudgery of repetitive tasks while leaving artistic control primarily in human hands.

For some in the industry, these tools and others herald a new era of democratized creation. “AI is the game changer I’ve been waiting for,” stated Yuta Hanazawa, a 25-year industry veteran who recently founded an AI game art company. 

Hanazawa believes that AI will “revitalize the entire industry” by liberating developers from the drudgery of asset creation, enabling a newfound focus on innovative gameplay and storytelling.

Yosuke Shiokawa, founder of a two-year-old AI gaming startup, similarly predicts that “soon, it will be a matter of your creativity, not your budget, that determines the value of games.” 

However, others fear that the rise of generative AI threatens to reduce human artists to mere machine operators, endlessly fine-tuning and debugging its output. 

“The stuff that AI generates, you become the person whose job is fixing it,” Hyland said. “It’s not why I got into making games.”

The double-edged sword of democratization

For AI evangelists, one of the technology’s most tantalizing promises is the radical democratization of game creation. 

They envision a future in which anyone with a spark of imagination can conjure their dream game with a few simple prompts, where the line between player and creator blurs into irrelevance.

But for each individual intoxicated by the prospect of AI-powered creative freedom, there’s at least one skeptic. 

Chris Knowles, a veteran game developer and founder of the indie studio Sidequest Ninja, points to cloned games that are already plaguing app stores and online marketplaces.

“Anything that makes the clone studios’ business model even cheaper and quicker makes the difficult task of running a financially sustainable indie studio even harder,” Knowles cautions. 

He and many others fear that the advent of AI-assisted game generation will only exacerbate the problem, flooding the market with predominantly derivative, low-effort content.

There’s also the risk of creative homogenization. If every developer is drawing from the same small pool of AI models and associated datasets, will the result be a gaming landscape that feels increasingly generic and interchangeable? 

Will the idiosyncrasies and happy accidents that often define truly memorable games be lost in the pursuit of algorithmic optimization?

AI gaming’s ethical minefields

AI’s role in game development is blurring the line between the virtual and the real – pushing gaming closer to its long-standing goal of creating immersive, lifelike experiences.

Many games already allow players to customize their digital personas. With AI-powered tools capable of generating hyper-realistic, photo-quality images, the potential for players to create avatars that uncannily resemble real individuals – and then use those avatars for exploitative or abusive purposes – is disturbingly high.

The building blocks for these scenarios have already been laid. For example, the recent case of Spanish schoolchildren using AI ‘games’ to generate nude images of their classmates illustrates how easily these tools can be weaponized, especially against vulnerable populations like women and minors.

AI ‘games’ capable of producing explicit or abusive imagery are rife on the Apple App Store and Google Play, and age limits are largely ineffective. 

Transpose this same dynamic into the context of more immersive, detailed gaming environments, and the potential for harm is enormous.

Further, moderating AI’s functionality to prevent this form of abuse or manipulation is exceptionally tricky, if not impossible. All AI models, no matter how sophisticated, are vulnerable to jailbreaking. This involves finding loopholes or weaknesses in moderation systems to generate content that’s supposed to be restricted. 

Filters designed to block explicit content often become the very target for manipulation by players who push AI systems to their limits, creating content that breaks ethical boundaries. 

The challenge is ensuring AI doesn’t undermine the very communities it aims to enhance – both among gamers and across wider society. Developers and studios can’t just push the boundaries of what AI can do; they must also understand where AI stops being a tool and starts taking over our lives, our mental faculties, our culture, our creativity, our selves. 

In the end, the conversation around AI in gaming isn’t about whether it will happen – it already has. The focus needs to shift towards ensuring that AI complements rather than erases human creativity while preventing forms of harm and misuse. 

The post As AI advances, gaming studios, developers, and players face a new reality appeared first on DailyAI.

The gaming industry is facing a midlife crisis – is AI its future?

For years, the gaming industry seemed like an unstoppable juggernaut, with revenues rising to stratospheric heights on the backs of ever-more-immersive titles and the explosion of mobile gaming. 

However, as we enter the mid-2020s, there are growing signs that the industry is reaching a plateau.

After the pandemic-fueled boom of 2020 and 2021, global gaming revenues dipped in 2022. That contraction gave way to tepid growth of just 0.5% in 2023, bringing the worldwide gaming market to around $184 billion, according to data from Newzoo.

While still an impressive figure, it’s a far cry from the double-digit percentage growth the industry had come to expect.

This slowdown is even more pronounced in mature markets like North America and Europe, where key sectors such as console and mobile gaming are approaching saturation. 

Mobile gaming revenue, once propelling the industry’s continued growth, actually declined in 2022 and is only now beginning to stabilize.

Revenue stagnation is only part of the story, however. Even as growth slows, the cost of developing top-tier AAA games continues to soar. 

Budgets for marquee franchises like Call of Duty and Grand Theft Auto now routinely exceed $300 million. Some titles are nearing combined development and marketing costs of $660 million, a staggering sum that would have been unthinkable a decade ago.

These ballooning budgets are forcing studios to play it safe, leaning heavily on established franchises and proven formulas rather than taking risks. Innovation is taking a backseat to iteration.

There’s also evidence that people aren’t enjoying games as much as they once did, with sentiment toward new releases dropping from 3.4/5 in 2014 to 2.9/5 in 2021.

Even the buzz of the latest CoD and FIFA games seems to be waning. Although we’ve witnessed groundbreaking releases like Elden Ring, that title took some five years to create. It’s a once-in-a-generation title rather than a yearly event.

The human toll of this financial squeeze is also becoming more prominent. Layoffs are picking up pace, with over 10,500 game developers losing their jobs across over 30 studios in 2023 alone. 

Simultaneously, the industry is grappling with a rising tide of labor activism as workers push back against the notorious “crunch culture” that has long plagued game development. 

The indie incursion

Amid tensions in AAA studios, indie developers are making a greater impact in the industry – a powerful counterpoint to mainstream game development. 

In 2024, indie games are claiming five of the top ten spots on Steam’s highest-grossing list. Titles like Palworld ($6.75m budget, 25 million units sold) and Enshrouded are resonating with players, showcasing the potential for indie games to achieve commercial success on par with AAA releases.

This indie surge is part of a larger trend, with the market share of indie games on Steam growing from 25% in 2018 to 43% in 2024.

Even in years with highly anticipated AAA launches, like 2023’s Baldur’s Gate 3 and Spider-Man 2, indie revenue has held steady, indicating a robust and growing audience for these titles.

The rise of indie games reflects a growing appetite among some players for novel experiences and creative risks. 

While AAA development often focuses on established franchises and proven formulas, indie developers push boundaries and experiment with new ideas.

Meanwhile, tools like Unity and Unreal Engine have made high-quality game development more accessible, while digital marketplaces like Steam provide a platform for indie games to find an audience. 

This brings us to AI. By automating and streamlining virtually any aspect of game development, AI could further level the playing field, allowing small teams to create experiences that rival those of major studios.

The AI paradigm shift

AI’s potential to disrupt gaming has been discussed for decades, but the prospect is no longer just theoretical.

Recent breakthroughs, such as Google’s GameNGen and DeepMind’s Genie, provide a glimpse into a future where AI drives game design. 

GameNGen can generate entirely playable levels of classic games like DOOM in real-time, while Genie can conjure up interactive 2D environments from simple images or text prompts. 

These breakthroughs are part of a longer trend of AI-driven innovation in gaming, though the field is still very young.

The journey began with early milestones like IBM’s Deep Blue, which famously defeated world chess champion Garry Kasparov in 1997. Deep Blue’s victory was a landmark moment that demonstrated the potential for AI to excel in rule-based, strategic challenges.

Fast forward to 2016, and we saw another significant leap with Google DeepMind’s AlphaGo. This AI system mastered the ancient Chinese game of Go, known for its immense complexity and reliance on intuition. By defeating world champion Lee Sedol 4-1, AlphaGo showed that AI could tackle domains that were once thought to be the exclusive domain of human intelligence.

It was in 2018 that researchers David Ha and Jürgen Schmidhuber published World Models, demonstrating how an AI could learn to play video games by building an internal model of the game world. 

A year later, DeepMind’s AlphaStar showcased the power of reinforcement learning by mastering the complex strategy game StarCraft II, even competing against top human players.

Representing the cutting-edge of this field today, GameNGen was trained on actual DOOM gameplay footage, allowing it to internalize the game’s mechanics, level design, and aesthetics.

It then uses this knowledge to generate novel levels on the fly, complete with coherent layouts and gameplay flow.

Conversely, Genie uses a foundation model to generate interactive environments from more freeform inputs like sketches or descriptions. By training on a diverse range of game genres and visual styles, Genie can adapt to create content across a broad spectrum of aesthetics.

Genie, a generative model, can function as an interactive environment, accepting various prompts such as generated images or hand-drawn sketches. Users can guide the model’s output by providing latent actions at each time step, which Genie then uses to generate the next frame in the sequence. Source: DeepMind via ArXiv (open access).

Under the hood, these AI systems are powered by deep neural networks, which are becoming miniature game engines unto themselves, capable of generating complete, playable experiences from scratch.

Essentially, the game world is created inside the neural network itself, not through traditional programming techniques, but by a deep neural network that has learned game design rules, patterns, and structures.

Moreover, because the game world is generated by a neural network, it has the potential to be far more dynamic and responsive than traditional game environments.

The same network that generates the world itself could also be used to simulate NPC behaviors, adjust difficulty on the fly, or even reshape the environment in real-time based on player actions.

With AI handling the heavy lifting of world-building and level design, the optimistic narrative is that developers will be free to focus on higher-level creative decisions, such as developing art, concepts, and storylines. 

While jobs would be placed at risk, AI could well be the major level-up the gaming industry is looking for.

Empowering players, disrupting business models

The real revolution will kickstart when these AI tools are placed directly in the hands of players. 

Imagine a world where gamers can conjure up their dream titles with a few simple prompts, then jump in and start playing instantly. 

Want to mash-up the neon-soaked cityscape of Cyberpunk 2077 with the frenetic combat of DOOM Eternal? Just describe it to the AI and watch your vision come to life.

The line between developer and player blurs. Games become living, breathing entities, evolving in response to the collective creativity of their communities. 

This democratization of game creation could birth entirely new genres and upend traditional notions of what a game can be. 

We could see a shift toward platforms that provide AI tools for creation and curation, taking a cut of user-generated content sales or charging for access to premium features.

For established studios, it would undoubtedly represent both an existential threat and an unprecedented opportunity. Those who cling to the old ways risk being left behind. Those who embrace the new paradigm stand to capitalize. 

Of course, realizing this vision won’t be without its challenges. Issues of content moderation, intellectual property rights, job displacement, and revenue sharing will all need to be confronted. 

However, the wheels are in motion. As the technology continues to evolve, we can expect to see more and more examples of AI not just assisting in game development, but fundamentally reshaping what games can be.

The post The gaming industry is facing a midlife crisis – is AI its future? appeared first on DailyAI.

DAI#59 – APIs, dead bills, and NVIDIA opens up

Welcome to our weekly roundup of human-crafted AI news.

This week OpenAI handed out API goodies.

California’s AI safety bill got killed.

And NVIDIA surprised us with a powerful open model.

Let’s dig in.

Here come the agents

OpenAI didn’t announce any new models (or Sora) at its Dev Day event, but developers were excited over new API features. The Realtime API will be a game-changer for making smarter applications that speak with users and even act as agents.

The demo was really cool.

There have been rumors about OpenAI going the “for-profit” route and awarding Sam Altman billions of dollars in equity, but Altman dismissed these. Even so, the company is on a drive for more investment, and backers will expect some return for their cash.

Apple has integrated OpenAI’s models into its devices but dropped out of the latest funding round for OpenAI, which is expected to raise approximately $6.5 billion.

We’re not sure why Apple doesn’t want a piece of the OpenAI pie but it might have something to do with new developments with its Apple Intelligence. Or maybe it’s related to Sam Altman’s demand for exclusivity.

Tell me you have no moat without telling me you have no moat. pic.twitter.com/3I18MosvOg

— Pedro Domingos (@pmddomingos) October 3, 2024

Kill bill

Gavin Newsom had to decide between putting a safety rev limiter on AI developers or letting them go full steam ahead. In the end, he decided to veto California’s SB 1047 AI safety bill and offered some interesting reasons why.

Are we really at a point where we face genuine AI risks yet?

Well that escalated quickly https://t.co/xhZCITRJjE pic.twitter.com/aLZn4blS8G

— AI Notkilleveryoneism Memes (@AISafetyMemes) September 30, 2024

Newsom has signed a range of AI bills over the last month related to deepfakes, AI watermarking, child safety, performers’ AI rights, and election misinformation. Last week he signed AB 2013 which will really shake things up for LLM creators.

The bill says that on or before January 1, 2026, developers will need to provide a high-level summary of the training dataset of any models made from January 1, 2022 onward if the model is made available in California. Some of these requirements could air some dirty secrets.

More EU AI regs

The EU is clearly a lot more concerned about AI safety than the rest of the world. Either that or they just enjoy writing and passing legislation. This week they kickstarted a project to write an AI code of practice to attempt to balance innovation & safety.

When you see who heads up the safety technical group you’ll have a good idea of which way they’ll be leaning.

Liquid foundation models

​Transformer models are what gave us ChatGPT but there’s been a lot of debate recently about whether they will be up to delivering the next leap in AI. A company called Liquid AI is shaking things up with its Liquid Foundation Models (LFMs).

These aren’t your typical generative AIs. LFMs are specifically optimized to manage longer-context data, making them ideal for tasks that have to handle sequential data like text, audio, or video.

The LFMs achieve impressive performance with a much smaller model, less memory, and less compute.

NVIDIA opens up

Nvidia just dropped a game-changer: an open-source AI model that goes head-to-head with big players like OpenAI and Google. Their new NVLM 1.0 lineup, led by the flagship 72B parameter NVLM-D-72B, shines in both vision and language tasks while also leveling up text-only capabilities.

With open weights and NVIDIA’s promise to release the code, it’s getting harder to justify paying for proprietary models for a lot of use cases.

NVLM-D benchmarks. Source: arXiv

Just say know

A new study found that the latest large language models (LLMs) are less likely to admit when they don’t know the answer to a user’s question. Instead, they’re more likely to make something up than concede ignorance.

The study highlights the need for a fundamental shift in the design and development of general-purpose AI, especially when it’s used in high-stakes areas. Researchers are still trying to understand why AI models are so keen to please us instead of saying: ‘Sorry, I don’t know the answer.’

AI inside

It seems like everyone is slapping an “AI” label on their products to pull in customers. Here are a few AI-powered tools that are actually worth checking out.

Bluedot: Record, transcribe, and summarize your meetings with AI-generated notes without a bot.

Guidde: Guidde magically turns your workflows into step-by-step video guides, complete with AI-generated voiceovers and pro-level visuals, all in just a few clicks.

In other news…

Here are some other clickworthy AI stories we enjoyed this week:

Meta’s AI-powered smart glasses raise concerns about privacy and user data.
AI tools are spilling the beans on awkward company secrets.
MIT makes a breakthrough in robotics to handle real-life chaos.
LinkedIn faces legal complications over its AI integration.
Anthropic hires OpenAI co-founder Durk Kingma.
Y Combinator is being criticized after it backed an AI startup that admits it basically cloned another AI startup.

And that’s a wrap.

If you live in California we’d love to know how you feel about SB 1047 getting vetoed. Is it a missed opportunity for AI safety or a positive step that will see us get AGI soon? With powerful open-source models like NVIDIA’s new bombshell, it’s going to be harder to regulate LLMs anyway.

OpenAI’s Realtime API was the highlight this week. Even if you’re not a developer, the prospect of interacting with smarter customer service bots that talk to you is pretty cool. Unless you work as a customer service agent and you’d like to keep your job that is.

Let us know what you think, follow us on X, and send us links to cool AI stuff we may have missed.

The post DAI#59 – APIs, dead bills, and NVIDIA opens up appeared first on DailyAI.

OpenAI unveils Realtime API and other features for developers

OpenAI didn’t release any new models at its Dev Day event but new API features will excite developers who want to use their models to build powerful apps.

OpenAI has had a tough few weeks with its CTO, Mira Murati, and other head researchers joining the ever-growing list of former employees. The company is under increasing pressure from other flagship models, including open-source models that offer developers cheaper, highly capable options.

The new features OpenAI unveiled were the Realtime API (in beta), vision fine-tuning, and efficiency-boosting tools like prompt caching and model distillation.

Realtime API

The Realtime API is the most exciting new feature, albeit in beta. It enables developers to build low-latency, speech-to-speech experiences in their apps without using separate models for speech recognition and text-to-speech conversion.

With this API, developers can now create apps that allow for real-time conversations with AI, such as voice assistants or language learning tools, all through a single API call. It’s not quite the seamless experience that GPT-4o’s Advanced Voice Mode offers, but it’s close.

It’s not cheap though, at approximately $0.06 per minute of audio input and $0.24 per minute of audio output.
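To put those per-minute rates in perspective, here’s a back-of-the-envelope cost estimator. It uses only the approximate prices quoted above (~$0.06/min audio in, ~$0.24/min audio out); actual Realtime API billing is token-based, so treat this as a rough sketch, not an invoice.

```python
INPUT_RATE_PER_MIN = 0.06   # approx. USD per minute of audio input (from the article)
OUTPUT_RATE_PER_MIN = 0.24  # approx. USD per minute of audio output (from the article)

def session_cost(input_minutes: float, output_minutes: float) -> float:
    """Estimate the USD cost of one voice session at the quoted rates."""
    return (input_minutes * INPUT_RATE_PER_MIN
            + output_minutes * OUTPUT_RATE_PER_MIN)

# A 10-minute support call where the caller speaks for 6 minutes
# and the assistant responds for 4:
print(round(session_cost(6, 4), 2))  # 6*0.06 + 4*0.24 = 0.36 + 0.96 = 1.32
```

At roughly $1.32 for a ten-minute call, a high-volume voice assistant adds up quickly, which is why the article calls it “not cheap.”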

The new Realtime API from OpenAI is incredible…

Watch it order 400 strawberries by actually CALLING the store with twillio. All with voice. pic.twitter.com/J2BBoL9yFv

— Ty (@FieroTy) October 1, 2024

Vision fine-tuning

Vision fine-tuning within the API allows developers to enhance their models’ ability to understand and interact with images. By fine-tuning GPT-4o using images, developers can create applications that excel in tasks like visual search or object detection.

This feature is already being leveraged by companies like Grab, which improved the accuracy of its mapping service by fine-tuning the model to recognize traffic signs from street-level images.

OpenAI also gave an example of how GPT-4o could generate additional content for a website after being fine-tuned to stylistically match the site’s existing content.

Prompt caching

To improve cost efficiency, OpenAI introduced prompt caching, a tool that reduces the cost and latency of frequently used API calls. By reusing recently processed inputs, developers can cut costs by 50% and reduce response times. This feature is especially useful for applications requiring long conversations or repeated context, like chatbots and customer service tools.

Using cached inputs could save up to 50% on input token costs.

Price comparison of cached and uncached input tokens for OpenAI’s API. Source: OpenAI
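The discount applies only to the cached portion of the input, so the real-world saving depends on how much of each request is a repeated prefix. A back-of-the-envelope sketch (the per-token prices are illustrative placeholders; the only assumption carried over from the announcement is the 50% discount on cached input tokens):

```python
# Cost of a chatbot turn where a long shared prefix (system prompt +
# conversation history) is served from the prompt cache.
# Prices are illustrative placeholders; cached input costs half.

PRICE_PER_1M = 2.50                      # uncached input, USD per 1M tokens
CACHED_PRICE_PER_1M = PRICE_PER_1M / 2   # cached input: 50% discount

def input_cost(cached_tokens, fresh_tokens):
    """Blended input cost in USD for one request."""
    return (cached_tokens * CACHED_PRICE_PER_1M
            + fresh_tokens * PRICE_PER_1M) / 1_000_000

# A 9,000-token cached prefix plus 1,000 new tokens, vs. everything uncached:
with_cache = input_cost(9_000, 1_000)
without_cache = input_cost(0, 10_000)
savings = 1 - with_cache / without_cache
print(f"{savings:.0%} cheaper")  # 45% cheaper: the discount covers the cached 90%
```

The longer and more stable the shared prefix, the closer the saving gets to the 50% ceiling.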

Model distillation

Model distillation allows developers to fine-tune smaller, more cost-efficient models, using the outputs of larger, more capable models. This is a game-changer because, previously, distillation required multiple disconnected steps and tools, making it a time-consuming and error-prone process.

Before OpenAI’s integrated Model Distillation feature, developers had to manually orchestrate different parts of the process, like generating data from larger models, preparing fine-tuning datasets, and measuring performance with various tools.

Developers can now automatically store output pairs from larger models like GPT-4o and use those pairs to fine-tune smaller models like GPT-4o-mini. The whole process of dataset creation, fine-tuning, and evaluation can be done in a more structured, automated, and efficient way.
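The core of that distillation step is mechanically simple: each stored (prompt, teacher output) pair becomes one chat-format training line for the smaller model. A sketch, with placeholder pairs standing in for completions stored from GPT-4o via the API:

```python
import json

# Sketch of assembling a distillation dataset: stored (prompt, GPT-4o output)
# pairs become chat-format JSONL lines for fine-tuning a smaller model such
# as GPT-4o-mini. The pairs below are placeholders, not real stored outputs.
stored_pairs = [
    ("Summarize: the cat sat on the mat.", "A cat sat on a mat."),
    ("Translate to French: good morning", "Bonjour"),
]

def to_training_line(prompt, teacher_output):
    """One JSONL line: the teacher's answer becomes the training target."""
    return json.dumps({
        "messages": [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": teacher_output},
        ]
    })

jsonl = "\n".join(to_training_line(p, o) for p, o in stored_pairs)
```

What OpenAI's integrated feature adds on top of this is the plumbing: storing the teacher outputs automatically and wiring the resulting dataset into fine-tuning and evaluation.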

The streamlined developer workflow, lower latency, and reduced costs will make OpenAI’s GPT-4o model an attractive prospect for developers looking to deploy powerful apps quickly. It will be interesting to see which applications the multimodal features make possible.

The post OpenAI unveils Realtime API and other features for developers appeared first on DailyAI.

EU kickstarts AI code of practice to balance innovation & safety

The European Commission has kicked off its project to develop the first-ever General-Purpose AI Code of Practice, and it’s tied closely to the recently passed EU AI Act.

The Code is aimed at setting some clear ground rules for AI models like ChatGPT and Google Gemini, especially when it comes to things like transparency, copyright, and managing the risks these powerful systems pose.

At a recent online plenary, nearly 1,000 experts from academia, industry, and civil society gathered to help shape what this Code will look like.

The process is being led by a group of 13 international experts, including Yoshua Bengio, one of the ‘godfathers’ of AI, who’s taking charge of the group focusing on technical risks. Bengio shared the Turing Award, often called the Nobel Prize of computing, so his opinions carry deserved weight.

Bengio’s pessimistic views on the catastrophic risk that powerful AI poses to humanity hint at the direction the team he heads will take.

These working groups will meet regularly to draft the Code with the final version expected by April 2025. Once finalized, the Code will have a big impact on any company looking to deploy its AI products in the EU.

The EU AI Act lays out a strict regulatory framework for AI providers, but the Code of Practice will be the practical guide companies will have to follow. The Code will deal with issues like making AI systems more transparent, ensuring they comply with copyright laws, and setting up measures to manage the risks associated with AI.

The teams drafting the Code will need to balance responsible, safe AI development against the risk of stifling innovation, something the EU is already being criticized for. The latest AI models and features from Meta, Apple, and OpenAI are not being fully deployed in the EU due to already strict GDPR privacy laws.

The implications are huge. If done right, this Code could set global standards for AI safety and ethics, giving the EU a leadership role in how AI is regulated. But if the Code is too restrictive or unclear, it could slow down AI development in Europe, pushing innovators elsewhere.

While the EU would no doubt welcome global adoption of its Code, this is unlikely as China and the US appear to be more pro-development than risk-averse. The veto of California’s SB 1047 AI safety bill is a good example of the differing approaches to AI regulation.

AGI is unlikely to emerge from the EU tech industry, but the EU is also less likely to be ground zero for any potential AI-powered catastrophe.

The post EU kickstarts AI code of practice to balance innovation & safety appeared first on DailyAI.