Stop blaming the avatar-generating AI for unnecessarily sexualized images – blame the creators instead

In December 2022, the internet was abuzz with a new app. For £1.79, Lensa AI would generate 50 artistic portraits based on uploaded photos, and it quickly topped download charts as users shared the images on social media. When some people complained about sexualized and disturbing body modifications, the app’s creators noted that they couldn’t guarantee non-offensive content. But when artificial intelligence (AI) blunders, this kind of disclaimer isn’t enough.

When I tried Lensa AI’s magic avatar feature for myself, I selected my gender and uploaded 10-20 portraits. It quickly returned flower fairies, fantasy warriors, and other creative figures, all with my recognizable features. Magical, indeed, except that two of my images were nude and, oddly, sported giant breasts. Other users identifying as women also said they were depicted nude, even though they had uploaded only professional portraits.

In addition to stripping women, the app also appears to “beautify” their faces and slim their bodies. Other users reported that their dark skin was lightened, and an Asian journalist found that her images were far more sexualized than those of her white colleagues. From a technical point of view, it is unfortunately not surprising that these AI portraits incorporate harmful stereotypes, including the fetishization of Asian women.

The reason is “garbage in, garbage out”, a saying that applies to most AI systems today. The output is not magic; it depends largely on what we feed in. Lensa AI uses Stable Diffusion, a model that was trained on 5.85 billion images scraped from the internet. Scrape material from the web indiscriminately, and you invariably end up with an app that likes to draw giant breasts on my perfectly slight chest.

Generative AI models require such massive amounts of training data that it is difficult to curate it all. And while it is possible to add certain safeguards, it is impossible to anticipate everything an AI will create. So it makes sense that, in order to release these tools at all, companies want users to engage with them at their own risk. OpenAI’s ChatGPT website, for example, warns users that the chat tool may generate incorrect information, harmful instructions, or biased content.

But these companies also benefit from our willingness to blame AI systems. Because these systems generate their own content and appear to make their own decisions, people project a lot of agency onto them. The smarter a system seems, the more willing we are to consider it an actor in its own right. As a result, companies can post a disclaimer, and many users will accept that it’s the AI’s fault when a tool creates offensive or harmful output.

The problem goes far beyond “magical” body modifications. Chatbots, for example, have improved since Microsoft’s infamous Tay started spewing racist replies within hours of launch, but they still surprise users with toxic language and dangerous suggestions. We know that image generators and hiring algorithms suffer from gender bias, and that the AI used in facial recognition and the criminal justice system is racist. In short, algorithms can cause real harm to people.

Imagine if a zoo let a tiger loose in the city and said, “We’ve done our best to train it, but we can’t guarantee the tiger won’t do anything offensive.” We wouldn’t let them off the hook. And even more so than the tiger, an AI system does not make autonomous decisions in a vacuum. Humans decide how and for what purpose to design it, select its training data and parameters, and choose when to unleash it on an unsuspecting populace.

Companies may not be able to anticipate every outcome. But their claims that the output is merely a reflection of reality are a cop-out. The creators of Lensa AI say that “unfiltered data from the internet introduced the model to the existing biases of humanity”. Essentially, AI holds up a mirror to our society. But is the app a reflection of society, or is it a reflection of historical bias and injustice that a company chooses to entrench and amplify?

The persistent assertion that AI is neutral is not only incorrect, it obscures the fact that the choices described above are not neutral either. It’s nice to get new profile pictures, and there are many other valuable and important applications for generative AI. But we don’t need to absolve corporations of their moral or legal responsibility to achieve this. In fact, it would be easier for society to lean into the potential of AI if its creators were held responsible. So let’s stop pointing the finger at AI and start talking about who is really driving the results of our technological future.
