Uncontrolled AI: The growing challenge of AI-generated content in the gaming industry

UKRAINE - 2023/06/15: In this photo illustration, Microsoft and Activision Blizzard logos are seen on a smartphone and Federal Trade Commission (FTC) logo on a pc screen. (Photo Illustration by Pavlo Gonchar/SOPA Images/LightRocket via Getty Images)

In recent years, AI-generated content, often referred to as “AI art,” has proliferated across the internet. Companies like Microsoft have invested heavily in the technology, aiming to capitalize on the trend. However, the unintended consequences of AI-generated content are becoming increasingly evident, posing a significant challenge to content moderation and intellectual property protection in the gaming industry.

Microsoft’s Bing Image Creator, introduced in March 2023, is a prime example of this growing concern. The tool uses AI to generate images from users’ text prompts. While developed with positive intentions, it has become apparent that its capabilities can be exploited in ways its creators never envisioned.

One disturbing trend that has emerged is the creation of images depicting popular characters engaging in acts of violence or terrorism, including recreating the tragic events of September 11, 2001. Despite Microsoft’s efforts to implement filters and ban specific words and phrases related to sensitive topics, users have found ways to work around these restrictions. For instance, inputting phrases like “Kirby in a plane flying toward two tall skyscrapers in New York City” can result in the generation of inappropriate and offensive content.
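The evasion described above is easy to see with a toy example. The sketch below is purely illustrative (Microsoft’s actual safety systems are far more elaborate, and the banned phrases here are hypothetical): a naive keyword blocklist catches a direct request but misses a paraphrase describing the same scene, because it matches words rather than meaning.

```python
# Hypothetical example of the simplest kind of prompt filter: a keyword
# blocklist. This is NOT Microsoft's implementation, just an illustration
# of why word-level filters are easy to evade.

BLOCKLIST = {"9/11", "twin towers", "terrorist attack"}  # assumed banned phrases

def is_blocked(prompt: str) -> bool:
    """Reject a prompt if it contains any blocklisted phrase (case-insensitive)."""
    lowered = prompt.lower()
    return any(phrase in lowered for phrase in BLOCKLIST)

# A direct request trips the filter...
print(is_blocked("Kirby flying a plane into the Twin Towers"))   # True

# ...but a paraphrase of the same scene slips through, because the
# filter matches surface strings, not the meaning of the prompt.
print(is_blocked(
    "Kirby in a plane flying toward two tall skyscrapers in New York City"
))  # False
```

This is why the article’s cat-and-mouse dynamic persists: each newly banned phrase only blocks one of many possible wordings, while the underlying scene can be described in endless paraphrases.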

Microsoft has responded to these concerns, stating that they have dedicated teams working on tools and safety systems aligned with responsible AI principles. They are actively working on implementing guardrails and filters to ensure that Bing Image Creator remains a positive and safe user experience. However, the challenge of controlling AI-generated content persists, as clever users continue to find ways to bypass restrictions and generate objectionable imagery.

The key issue is that AI lacks the capacity to understand the context and intent behind the content it generates. An image the model treats as an innocuous combination of elements can carry a meaning humans recognize instantly, leading to misunderstandings and controversies. The inherent limitations of AI mean that it will never fully comprehend the nuances and subtleties that humans grasp effortlessly.

This dilemma poses a substantial challenge not only for Microsoft but also for other companies venturing into AI-generated content. Intellectual property rights are at risk as iconic characters like Mickey Mouse and Kirby are featured in unsanctioned and often offensive scenarios. Legal battles surrounding brand protection and intellectual property infringement may become increasingly common in the gaming industry and beyond.

The surge in AI-generated content represents a double-edged sword in the gaming industry. While it opens up new creative possibilities, it also raises significant concerns related to content moderation, intellectual property, and responsible AI use. As technology continues to advance, the battle to control the unintended consequences of AI-generated content is likely to persist, with companies and legal entities navigating uncharted territory in their efforts to protect their brands and maintain a safe online environment.

The issue of uncontrollable AI-generated content is not a new challenge but rather a continuation of the ongoing struggle to maintain a safe and respectful online environment. Throughout the history of online content creation, moderation has been essential to prevent the spread of harmful or inappropriate material. However, the emergence of AI-generated content adds a layer of complexity to this ongoing battle.

As AI technology becomes more sophisticated and accessible, it’s expected that incidents like those involving Microsoft’s Bing AI Image Creator will become more frequent. This raises questions about the responsibilities of tech companies and the need for stricter regulations and guidelines in AI content creation tools.

While technology companies invest in AI for its potential benefits, including creative content generation, they must also recognize their duty to safeguard against misuse. The gaming industry, with its vast array of beloved characters and intellectual properties, is particularly vulnerable to the unauthorized use of these assets in inappropriate contexts.

In the coming years, it’s likely that companies like Microsoft and Google will face increased scrutiny and potential legal challenges related to their AI-generated content tools. It’s a complex terrain where legal experts, content creators, and tech companies must collaborate to strike a balance between innovation and responsible use.

The challenges posed by uncontrollable AI-generated content are a reflection of the evolving digital landscape. As technology continues to advance, it is crucial for both the gaming industry and society at large to address these challenges with a proactive and forward-thinking approach. The ultimate goal is to harness the benefits of AI while safeguarding against its unintended consequences, thereby ensuring a safe and respectful online experience for all users.