
"Roy Batty Was Wrong: Why It Will Always Be Valuable to Have Humans Make Art"

Ryan M. Sero

For science-fiction fans, the scene at the end of the first Blade Runner film is etched into memory. On a rain-drenched rooftop –

Oh. Spoilers.

On a rain-drenched rooftop, Roy Batty (played by Rutger Hauer) crouches over Deckard (Harrison Ford), and it seems as though the demise of Ford’s neo-noir detective is as close as the street below him. But Batty doesn’t kill Deckard. He lifts him up and laments that all of his memories will be lost, “…like tears in rain,” before dying himself. The moment hits home, showing that the replicant (essentially an android) has all of the thoughts, feelings, and emotional resonance of a human being.

Of course, the profound point about what makes a person “human” is undercut a bit if you think about it for a second: Roy Batty isn’t actually a real person, and not just in the sense that he’s a replicant; it’s because he’s a character in a movie played by a human actor. The real-world truth is that no machine could have done what Rutger Hauer did: delivering that performance, and whittling his final monologue down to a few bone-clean sentences with greater impact than what was originally written for him to say.

But the real world has changed, and A.I. is on the minds and consciences of more people than ever. We are confronting the possibility of falling from a precipice of our own design into a world where machines can think, learn, and perform artistic tasks in ways that mimic human artists. I almost wrote “real” artists, but is that accurate? Could an A.I. not be just as “real” an artist as a human? Is the message of Blade Runner directly applicable to our future?

One year ago, this would have been unimaginable for most of the general populace. Tech gurus and the athletes of the cyber world would have been aware of the rapid advancements being made in thinking technology, but for most people, ignorance reigned, and the concept of a computer program writing screenplays or novels would have seemed like sci-fi nonsense. Well, assuming sci-fi is nonsense. The worlds of cyborgs and aliens often have a strange prescience about our own, which shouldn’t be too surprising. After all, the writers of such stories live here. They know what it’s like. If Tom Clancy could mimic political realities, why couldn’t Ray Bradbury know what we’d be getting up to years after he wrote Fahrenheit 451? (If only he hadn’t been so darned accurate with that one…)

Even present-day A.I. can’t turn out good writing. I’ve read a treatment one did for “Annie Hall 2,” and it was filled with cliches, basic misunderstandings about character and motivation, and plot holes. Nevertheless, we are all aware of how much better A.I. has become at aping artistic accomplishment; A.I.-generated images are improving all the time, and the mimicry of human voices is, to say the least, uncanny and unsettling. Where will we be next year? Or ten years from now? It seems almost inevitable that, at some point, a computer program will be able to write a novel just as well as a human being can. At that point, will we want human artists, or care about them at all? Even if we never actually get there, let’s enter a thought experiment where we do have a computer program that will write novels according to a user’s specifications.


Is there any value in having humans make our art?


For the purposes of this experiment, let’s use writing as an example of art. I’m more familiar with it than with many other art forms, and it would become tedious to run the experiment over and over for every different form of artistic expression. So, while I might touch on other forms from time to time, let’s just go with “writing.”

To determine the value, or lack thereof, of specifically human writers, let’s first ask ourselves whether writing has value at all. Why do we find it valuable?

To my thinking, there are three main reasons to set words to a page or screen:

  1. You want to communicate something to somebody else, or to yourself at a later date. (Broadly, “nonfiction.”)

  2. You want to express yourself, producing an emotional reaction in a reader. (This is mostly the function of fiction.)

  3. You want to give a reader information. (Journalism.)

If we assume that any of these three main functions can be performed by a computer just as well as by a human, to the point that its writing is indistinguishable from any given human writer’s, does that matter?

I say, “Yes.”

But why?


Let’s consider the first two forms of writing: nonfiction and fiction. When a human sits down to turn thought into language, the point of the process is to connect with a reader. We write poetry to move other people, based on experiences and feelings that we ourselves have had. But, more importantly to the topic, we read those poems to feel what that person felt. C.S. Lewis writes in his book Mere Christianity about his idea that there is a fundamental moral law to the universe.

“Quarreling means trying to show that the other man is in the wrong. And there would be no sense in trying to do that unless you and he had some sort of agreement as to what Right and Wrong are; just as there would be no sense in saying that a footballer had committed a foul unless there was some agreement about the rules of football.” (excerpted from Mere Christianity)

He is arguing against subjective morality. What if a machine made that argument? What if the machine made the argument just as well as a human could?

Who cares?

A moral machine is still far over the horizon, even with our current technology. In other words, just because a machine can put an argument onto paper doesn’t mean it has any stake in whether morality exists. Lewis’s words mean something not just because they are built to persuade a person of natural law, but because we, the readers, know that Lewis is a person who wants to put forward a worthwhile idea. A program trying to persuade a user to adopt a new course of moral action seems almost comical.

This logical disinterest in the thoughts of machines becomes even more apparent when applied to poetry. If an artificial intelligence has no real consciousness, and therefore no meaningful concept of a summer’s day, does it matter to whom it compares one? Tennyson’s understanding of nobility, sacrifice, and honor makes The Charge of the Light Brigade a moving, stirring piece of verse. Would “Theirs but to do and die” have value if put forward by a machine, which has no choice but to do, and (given the capabilities of memory banks) might not be able to die?

The thought experiment demands that we imagine reading a piece of poetry and only then learning that an A.I. generated it. We read it, we are moved by it, and then we discover its origins. Is that not enough? This is the counter-stroke: if a reader cannot tell the difference, they are still moved. They feel the emotion of the piece whether it came from a human author or a machine. But if you found out that an algorithm produced those verses or that screenplay, would you feel cheated?

Perhaps not, but I would. I would be disappointed that what I was looking at was not “true.” The essence would have shifted. One of my favorite songs, by the highly underrated artist Warren Zevon, is titled Keep Me in Your Heart. Lyrics like “Shadows are falling and I’m running out of breath/ keep me in your heart for a while/ If I leave it doesn’t mean I love you any less” strike me every time I hear them. It is an elegant elegy, evoking a sense of loss and bittersweet memory. When I first heard that song, it was beautiful.

Then I heard that Zevon had composed the song while dying. He wasn’t just fiddling around with some sentiments: he was sending a message to the people he loved about what he was going through and telling them to keep him around after he was gone. If you re-read those lyrics, do you feel differently about them? I did. When I heard the song again, it was much more potent.

In the same way, if I found out that a song like that was just a computer-generated cluster of words put together by a clever algorithm, I think the poetry would lose some of its potency.


A.I. is just another tool, they say. It’s like a word processor. Why not employ it? But even here, A.I. art isn’t like word processing or Photoshop (for visual artists). Those tools help artists lay their work down more quickly; the art itself isn’t affected, and the process is the same. If I want to conjure a script or a short story, I have to whang my head off a keyboard seventeen times a minute until ideas bleed into the buttons. But with an A.I., I would just have to have a concept, and the computer would do the rest. I might spell-check the result, but I wouldn’t actually have made anything.

Ideas cannot be copyrighted; concepts are not covered by copyright law. The reason is that a premise can support multiple executions. “Hero gets wizard mentor and magic sword, sets out to destroy evil empire” applies equally to Lord of the Rings and Star Wars, and the reason Lucasfilm doesn’t owe the Tolkien estate money is that a basic premise alone doesn’t make a complete work, a piece of intellectual property, or a work of art.

If one uses an A.I. to generate the actual story, characters, and dialogue after plugging in a few basic concepts, who wrote it? You can claim that you just “used a tool,” but you’re robbing yourself of the opportunity to learn and grow as a writer. The process is valuable in and of itself. In the short term, A.I. might mean cranking out a lot more “work,” but you are never getting any better. Imagine setting up a chess board and then using Deep Blue to pick your moves for you. Are you becoming a better chess player? Not really. Maybe you’ll observe some gambit or other, but you won’t get progressively better. It is practice that makes a great writer.


The underlying problem is that we live in a culture of endless consumption. We have reduced art to “content,” and we want to gulp it down as fast as we are able. The general public might not be easily persuaded that A.I. art is a bad thing when they just want to gluttonously slurp it up and spit out the bones. If an A.I. has made a painting and it shows up in a social media feed, we view it for two seconds and go back to doomscrolling. Why not contemplate the picture? Because we want everything so fast, and that is a terrible way to live.

But whether or not good ol’ John Q. Public cares about A.I. art, I think we have to continue to pursue the human-generated, truly worthwhile stuff. We have to resist the easy path and the disposable junk, and get into a mindset of connection and communication. If you are trying to learn another person’s thoughts, connect with their ideas, or resonate on an empathetic level, that experience will be much more valuable if you are in harmony with another human being, not just a thinking machine. That is the intrinsic value of human-generated art. If somebody on the other end cares about it, it isn’t just content.

The surprise of the title is that Roy Batty was actually right all along. The moments and memories that mean something to each of us are valuable because of that emotional resonance. Our humanity is linked to our feelings, thoughts, and memories. But that is not a connection we can have with an L.L.M. or any other algorithmic program. And, unless you meet a real, actual replicant, that connection only comes through a human intelligence, not an artificial one.

