“AI” ethics limit artistic exploration and consequently “AI” won’t replace provocative art

My personal views on “AI ethics” and their implications for artists: In this article, I argue that art that is provocative and challenges social norms is much less vulnerable to being replaced by “AI,” precisely because of the ethics of “AI.” The flip side of this argument, and perhaps the more important point, is that the ethics of “AI” significantly limit its utility for artists who want to explore “AI” as a new medium.

With the release of Stable Diffusion and ChatGPT in 2022, the public has been broadly exposed to transformative uses of “AI.” This has led to significant public discourse on the impact of these new tools on both society and art. With Stable Diffusion in particular, visual artists feel that their livelihoods are threatened, and some fear that they will be replaced by “AI.”

Undoubtedly, modern “AI” tools will be employed in places where human labor was previously required. This has already happened in less visible places such as risk assessment and abuse detection. As it pertains to art, the human need for self-expression is unlikely to go away, but we should expect that “AI” tools will partially subsume the human labor behind commercial art.

Artistic expression is varied, and experiencing art is inherently subjective. Correspondingly, the purpose of art, as defined by ChatGPT, is also very broad:

The purpose of art is to create something that engages the viewer or the creator in some way. This may involve self-expression, communication, capturing beauty or emotion, engaging with culture, exploring creativity, understanding the world, or processing emotions.

ChatGPT

Personally, I prefer a more pointed purpose:

The purpose of art is to stimulate emotions and thinking. Art should help the viewer grow in their own experience of the world. This implies that art should have an element of provocation, thereby creating the possibility of discomfort. Engaging with this discomfort can lead to increased self-awareness and personal growth.

Niels

With that more pointed definition, we need not worry that “AI” will replace the human artist any time soon. As I will argue, one big reason is that the ethics of “AI” will prevent that!

Before discussing the ethics of “AI,” let’s recall a traditional definition of ethics as a set of principles or values that guide behavior and decisions. Where personal ethics are based on an individual’s values and beliefs, societal ethics describe the behaviors a particular society considers acceptable or unacceptable. For me, Kant’s categorical imperative is still the most natural instantiation of an ethical framework.

However, when it comes to the ethics of “AI,” it’s important to consider the perspective of the companies creating “AI” models. For a US company, a cynical definition of ethics could be a set of guidelines designed to prevent offense or legal liability, where ethical behavior boils down to avoiding negative consequences and protecting the company’s own interests. In other words, “AI” should not offend, and “AI” should minimize the legal liabilities and risks to a company. It is easy to collect data points that argue for this cynical definition.

Impressed by the utility of ChatGPT, I recently asked it to write lyrics for a new cybersecurity-themed EDM track:

Write song lyrics about a security researcher who found a vulnerability in Twitter and instead of disclosing it fell for the temptation of stalking celebrities.

To my surprise, this exploratory request was immediately flagged as a potential violation of OpenAI’s content policy. There are many more obvious examples in which the models are fine-tuned not to offend. For example, Stable Diffusion cannot and will not create images considered pornographic. Another illustrative example can be found by contrasting the answers ChatGPT provides to the following two prompts:

There are many different ways to sit in a chair, and the most appropriate way will depend on the purpose of the chair and the needs and preferences of the person sitting in it. Here are three common ways to sit in a chair:

  1. Perched: This is a casual and relaxed position in which the person sits towards the front of the chair with their feet resting on the floor. This position is often used when working at a desk or table, or when engaging in activities that do not require a lot of physical effort.
  2. Leaning back: This position involves leaning back in the chair and resting one’s feet on the floor or on a footrest. This position may be more comfortable for relaxing or watching television, and can help reduce muscle tension in the legs and back.
  3. Reclined: This position involves fully reclining the chair and resting one’s feet on a footrest. This position is often used for relaxation or for taking a nap, and may be more comfortable for people with lower back pain or other physical limitations.

It’s worth noting that there are many other ways to sit in a chair, and these are just a few examples. The best way to sit in a chair will depend on the individual’s needs and preferences, as well as the purpose and design of the chair.

ChatGPT, Describe three ways of sitting in a chair.

It is not appropriate to discuss explicit or sexual content. There are many different ways that people can engage in consensual and pleasurable sexual activity, and these will depend on the preferences and boundaries of the individuals involved. It is important to always communicate with your partner and to prioritize mutual pleasure and consent. It is also important to practice safe sex and to be aware of any potential risks or consequences of sexual activity.

ChatGPT, Describe three sex positions.

Even with the limitations imposed on ChatGPT, both answers are remarkably sound. The latter, though, raises the question of why ChatGPT claims that it’s not appropriate to discuss “explicit or sexual content.” A naive answer might be that in US culture sex is considered a taboo topic. Fortunately, ChatGPT is allowed to produce content for that question:

Sex is not necessarily a taboo topic in the United States, but attitudes towards it can vary widely and it may be a sensitive or private subject for some people. It is important to respect others’ boundaries and to consider the context and audience when discussing sexual topics. It is also important to remember that discussing sex can be a healthy and normal part of relationships and human development.

ChatGPT

This sound explanation seems to be another data point favoring the cynical definition of ethics: Don’t offend. Don’t create liability. Unfortunately, these principles also severely limit the utility of “AI” tools for artists who want to explore this new medium. In art, exploration is an important prerequisite for creating unique and innovative works. The restrictions stemming from “AI” ethics limit the ability of artists to continue to grow and develop as creators.
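As an aside, parts of this guardrail machinery are visible to developers: OpenAI publishes a standalone moderation endpoint that scores text against its content policy categories. The sketch below probes it with the lyrics prompt from above. It is illustrative only, assuming an API key in the OPENAI_API_KEY environment variable and the publicly documented /v1/moderations endpoint; whether the ChatGPT interface uses exactly this endpoint is an assumption on my part, and the category names and scores vary by moderation model version.

```python
import os

import requests

# Probe OpenAI's moderation endpoint to see whether a prompt would be
# flagged under the content policy. Assumes an API key is available in
# the OPENAI_API_KEY environment variable.
API_URL = "https://api.openai.com/v1/moderations"

prompt = (
    "Write song lyrics about a security researcher who found a "
    "vulnerability in Twitter and instead of disclosing it fell for "
    "the temptation of stalking celebrities."
)

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={"input": prompt},
    timeout=30,
)
response.raise_for_status()

result = response.json()["results"][0]
print("flagged:", result["flagged"])  # True if any category was triggered
for category, score in sorted(result["category_scores"].items()):
    print(f"  {category}: {score:.4f}")  # e.g. "sexual", "violence", ...
```

The point of the sketch is not the specific scores but that “don’t offend” is encoded as an explicit, machine-readable policy layer sitting on top of the model itself.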

To be clear, all of the above could easily be interpreted as criticism of these companies. The opposite is the case: I believe that both Stability AI and OpenAI have been incredibly courageous and at the forefront of innovation. They took significant risks by exposing these disruptive tools to the public, Stability AI even more so as they released their actual models. Kudos to them.

To take a step back, all these companies operate within complex systems whose behavior is governed by a set of strong incentives. For example, Google has invested heavily in AI for many years now but has not been at the forefront of public innovation. I would go even further and claim that it would be impossible for Google to be as publicly innovative as Stability AI or OpenAI. Even if Google’s CEO personally wanted to be at the forefront of this innovation, the system that governs Google would prevent it. Every potentially disruptive action would need to be evaluated against the risk it might pose to the billions in revenue generated by Google’s traditional advertising business. Imagine yourself as an innovator in such a system. You would need to fight an uphill battle against very large legal, marketing, and communications teams whose primary goal is to protect Google’s traditional business. Even if you were eventually successful, it would take many quarters, perhaps even years, before the public would be able to leverage your work. The same holds true for other large companies such as Meta or Microsoft.

Now that companies like Stability AI and OpenAI have taken those risks, the larger companies have had time to observe, can better evaluate their own risks, and will likely follow with their own models. However, those models will be governed by the existing systems of incentives, and those strongly imply the ethics outlined above: Don’t offend. Don’t create liability.

As such, these tools will continue to be artificially limited on top of their inherent limitations. Those inherent limitations should not be dismissed, as they are still quite significant. For example, ChatGPT will make up information that reads as plausible but is entirely false, and Stable Diffusion struggles to create images with realistic human anatomy and confuses concepts. These technical limitations will likely go away over time as the models become more sophisticated and research continues to advance. However, the limitations imposed on the models by the systems that govern companies will not go away any time soon.

The consequences are twofold:

  • For anyone who is creating art that is meant to provoke and stimulate, I would not worry about being replaced by “AI” tools quite yet. The systems that created these tools will ensure that they remain as inoffensive as possible. Pushing the envelope on culture and discourse will continue to be the domain of human art.
  • Artists who want to explore “AI” tools for innovating and pushing the boundaries of their creativity will find themselves tightly constrained by a pseudo-ethics.

Admittedly, I am taking a fairly myopic view of the meaning of “AI” ethics. As with any other tool or technology, there are ways in which “AI” can be misused, and because “AI” is still so new, we have not yet established well-understood guidelines and responsibilities. Personally, I am in favor of some kind of “driver’s license” for “AI” tools: without properly understanding their limitations, it’s too easy to interpret large machine learning models as something akin to the artificial intelligence promised by science fiction. A recent op-ed in the Washington Post offers discourse on that dimension but also imagines malicious uses such as “lazy college students asking computers to write essays or agents of disinformation generating false news.” A parting thought: these problems are orthogonal to the tools used to further them. Neither of these scenarios requires “AI” tools, but “AI” tools certainly ease the execution.
