With the growing popularity of AI image-generating apps this year, I was curious to introduce an AI project into the drawing class I’m teaching this term at UNR@LakeTahoe. I approached it in an exploratory spirit, without a prescriptive outcome for the lesson—I wanted to introduce the technology to the class and see how the students responded. The following article collects some of my impressions. Readers hoping for a verdict on the “validity” of AI-generated art will be disappointed—but for those who are interested in some of the possibilities it presents and the questions it poses (both pedagogically and artistically), I hope you find something of value here.

I introduced the students to a couple of AI image-generating apps, but will focus on Wombo’s Dream app for the sake of this article. It’s very easy to use. You enter a text prompt, and the app uses two artificial neural networks (one that analyzes how well text descriptions correspond to images, and another that generates images that look similar to the images it has been trained on) to create a unique picture derived from the prompt. As the neural networks work in tandem, the app shows the image in progress, coming into more detailed “focus” through its iterations, with a clever haptic jolt given to the phone at stages, as if the machine had to undergo peristalsis to properly digest the informational input.
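The division of labor between those two networks can be caricatured in a few lines of code. This is a toy sketch, not Dream’s actual pipeline: here the “image” is just a vector of numbers, the “prompt” is a target vector, and the scoring network is replaced by plain cosine similarity. But the loop of generating, scoring, and nudging is the same iterative coming-into-focus the app displays.

```python
import numpy as np

def cosine_similarity(a, b):
    """Stand-in for the scoring network: how well does the image match the prompt?"""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def generate(prompt_embedding, steps=200, lr=0.05, seed=0):
    """Stand-in for the generator: start from noise, nudge toward the prompt."""
    rng = np.random.default_rng(seed)
    image = rng.normal(size=prompt_embedding.shape)  # pure noise at first
    for _ in range(steps):
        # Gradient of cosine similarity with respect to the image
        # (up to a positive scale factor): step the image in whatever
        # direction the scorer rates as a better match.
        cos = cosine_similarity(image, prompt_embedding)
        grad = (prompt_embedding / np.linalg.norm(prompt_embedding)
                - cos * image / np.linalg.norm(image))
        image = image + lr * grad
    return image

prompt = np.array([1.0, -2.0, 0.5, 3.0])  # pretend this vector encodes "dragon"
result = generate(prompt)
# the similarity score climbs toward 1.0 as the "image" comes into focus
print(cosine_similarity(result, prompt))
```

Real systems of this kind pair a trained text–image scorer (CLIP is the best-known) with a trained image generator, and the nudging happens in the generator’s latent space rather than on a bare vector; the intermediate images Dream shows are, presumably, snapshots of a loop like this one.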

Some early prompts I tried, feeling out what Dream knew about iconic figures—prompting “Mona Lisa,” or “Marilyn Monroe reading.”

That process of “coming into focus” is a useful echo of the process of drawing from the imagination by traditional means. My drawing class has mostly non-Art majors, and I find there are many misconceptions about drawing among those who aren’t in the habit of doing it. A major misconception is that, when artists draw from the imagination, the finished image is fully realized in the mind, and the hand is like a photocopy machine, transferring it to paper. In fact, it is the physical act of drawing that realizes the image in all its details.

I ask my students to take five seconds to imagine a dragon, which they are all able to do without effort. I then ask how many toes the dragon has on its hind legs, and how many ribs it has on its wings. These answers aren’t immediately available for most of them, and it’s only in the process of drawing (first sketching the basics of the full form, then tightening up the various elements of its anatomy) that those details become resolved. This is surprising because, when the image is first held in the mind, it has the feeling of an image that is already present in all its details. I leave it to the reader to determine whether it’s reassuring or unnerving that the artificial neural networks seem to bring an image into perception in an analogous way.

Dream resolving the prompt “dragon.”

The task I set for the students had two parts. First, they had to come up with a text prompt and generate an image they liked. Second, they had to create a traditional drawing on paper, using the prompt-generated AI image as a reference, or a jumping-off point. They were free to make an accurate copy of the AI image in their own hand—or to adapt it very freely. I’ve strewn a few student examples below, among the concluding paragraphs.

Artwork by Emma Montgomery

The students found this a fun, engaging task. For one thing, it solved, at a stroke, the problem of what to draw. This is actually a perennial problem for artists, which social forces try to solve for them—rewarding artists for creating images that give form to our collective psychological or socio-historical desires. In this assignment, image-making could instead be turned into a game, without the burden of a social or aesthetic justification. Rather, the students were taking language by the shoulders and giving it a shake to see what sorts of images might fall out—and feeling some personal ownership of the end result, because they had set the game in motion. They were the ones who’d rolled the dice.

Artwork by Cameron Duprey

This AI assignment came on the heels of an observational drawing assignment, and in contrast to that exercise, it seemed the prompt-generated images didn’t exert any of the pressures of reality, or “realism,” that the subjects of the observational drawings had. The AI images themselves are fundamentally unreal, existing in a mediated zone of virtuality. In the same way one might feel obliged to re-tell a true story as accurately as possible, but not feel the same obligation to re-tell a made-up story in all its original details, the AI images were weightless—free of the gravity of fidelity. They invited the students to approach the traditionally drawn adaptation in a spirit of invention.

Artwork by Breanna Harris

What AI-generated art misses is what has preoccupied art during our lifetimes—the idea that it is a process by which human beings encode their experiences, through a medium that fixes them in a form, and then other human beings decode that experience and obtain some vicarious understanding of it. With an AI image, you can unpack the fixed form as much as you like, but there’s never another human being (alive or dead—and the truly remarkable thing about art is how vivid the encounter can be with artists who are dead) on the other side of the decoding. Or, more properly said, the other human beings have been so far mediated out of the experience—as writers of code, or originators of the images that are being abstracted into raw material—that their human presence has been relegated to an encrypted footnote.

Artwork by Sarah Murray

Of course, art has not always been preoccupied by this sense of personal communion. For long stretches its primary social use has been to glorify the powerful. And here it’s like we’re shedding (with each tap of the “generate” button) fungible icons in tribute to Big Tech, translating the digitized history of human image-making into Big Tech’s language of disposability and reproducibility—creating data that can be used to further target and monetize us. Tribute may not be our intention, but it’s ingrained in the platform, the way interactivity is engineered as a tool to make us proffer profitable statistics.

Art has also been used, in many contexts, to glorify God/Nature, to fix its profligate beauty and preserve a little bit of its mystery. And perhaps we’re in the act of imputing, to AI, a certain Godliness—knocking on the door of language, in a denatured idea of “searching” (which has its religious connotations)—and releasing, or actualizing, a seemingly infinite capacity for beautiful images. In Dream’s unresolved “almost-ness,” looking like something while not quite arriving at it, the images still perhaps retain a scrap of impervious mystery.

Posted by Chris Lanier

Chris Lanier is an artist and critic who generally likes to mix things up – words and pictures, video and performance, design and art. He’s had work shown and published in the U.S., Mexico, England, Japan, France, Canada, and Serbia – and has written for The Believer, HiLobrow, Furtherfield, Rhizome, the San Francisco Chronicle, and the Comics Journal. He is a Professor of Digital Art at the University of Nevada, Reno at Lake Tahoe (formerly Sierra Nevada College). More at chrislanierart.wordpress.com.