A deepfake video still of singer Taylor Swift, projected on a large rectangular screen in a dimly lit room. The deepfake Taylor Swift sits cross-legged on a woven rattan chair in a white short-sleeved dress, her hands resting on her right knee.

Christopher Kulendran Thomas, Being Human (2019/2023), in collaboration with Annika Kuhlmann; Installation view: Kunsthalle Zürich, (2023); Courtesy the artist & Kunsthalle Zürich; Image: Andrea Rossetti

Get Real

Taylor Swift has something to say. “I feel like all artists feel the need to be authentic. Everybody demands authenticity. And every artist believes they are for real,” she intones in Christopher Kulendran Thomas’s 2019 film, Being Human. “You know, everyone believes that even if the whole industry is corrupt, at least I’m true to myself. So, if believing in your own authenticity is the basic price of admission, then authenticity itself becomes the most contested object of synthesis.” The irony is that Swift’s image and words here were generated by a deepfake algorithm. When he toured me around his Kunsthalle Zürich exhibition this summer, Kulendran Thomas told me that using computers to make aesthetic decisions on his behalf was the only way he knew how to be truly original. The show included a suite of oil paintings and sculptures that resembled hallmark works of postwar abstraction; these, too, were devised by algorithms trained on visual data sets from the art-historical canon. His comments reminded me of Rosalind Krauss’s famous assertion that the very concept of originality was a “myth” of modernism, which our contemporary era has seen “splintering into endless replication.” Affectations of authenticity on social media, which are often augmented by photo and video editing software, only reinforce this claim. To update a phrase of Walter Benjamin’s, the work of art in the age of artificial intelligence cannot be taken at face value. Or, as deepfake Swift puts it, “Maybe simulating simulated behavior is the only way we have of being ‘for real.’”

In November 2022, OpenAI released its artificially intelligent bot, ChatGPT. This followed the proliferation of generative AI models like DALL-E 2, which can create images from prompts based on content scraped from the internet. ChatGPT’s uncanny ability to provide convincingly opinionated responses to a wide range of questions, arguably passing the Turing Test, led some observers to conclude that OpenAI had achieved the “singularity,” the point at which technological progress exceeds humans’ ability to control it, in effect spelling the beginning of the end of the world. Setting such hysterical predictions aside, AI does spell a certain end to large sectors of the labor force: the World Economic Forum estimates that 14 million jobs could be lost to AI, while Goldman Sachs puts that number closer to 300 million. In February, before OpenAI had even released the current version of its program, the artistic director of a major British art museum admitted to me that she was using it to write the wall texts for her exhibitions. A job that might have previously been outsourced to curatorial assistants is now being entrusted to a robot, which can’t bode well for other junior museum staffers, such as press agents and graphic designers. The software has likely hastened the demise of the art world’s permanent precariat, who have always been underpaid and overworked. In this sense, AI facilitates the deeper entrenchment of class divisions in the cultural sector and beyond, which surely serves OpenAI’s corporate overlords just fine.

Artificial intelligence is fundamentally conservative. When predictive algorithms are trained on existing data, they will only provide us with more of the same. AI furnishes what theorist Mark Fisher termed “capitalist realism,” a guise of neutrality beneath which capitalism obscures any possible alternatives to its structures of exploitation. “We the audience are not subjected to a power that comes from the outside,” he wrote. “Rather, we are integrated into a control circuit that has our desires and preferences as its only mandate–but those desires and preferences are returned to us, no longer as ours, but as the desires of the big Other.” Supercomputers promise us objective insights by processing more information than we ever could as individuals, but that promise is always a ruse, because the information available to them, created and disseminated by humans, is necessarily flawed. Programs designed by private industry are unlikely to undermine the interests of capital. Art that uses such programs to simulate other worlds will be limited by the horizons of the world in which we live.

Last August, during a talk at the Art Barn, a private film and video art collection in Aspen, the artist Ian Cheng spoke about his practice of “world-building.” In 2018, Cheng created an artificially intelligent lifeform he called BOB, for “bag of beliefs,” rendered in a digital simulation as a chimerical red serpent. When displayed in a gallery, BOB writhes across a battery of LED screens, eating spiny fruit that visitors can feed him through a mobile phone app. BOB’s evolution depends on his encounters with IRL humans and other creatures who amble across virtual space, all of whom behave independently of Cheng’s control. The artist was able to build a comprehensive world for BOB because he knew how to construct its component bricks–a process requiring a high degree of technical coding expertise–but even that can’t undo the influence of his bias as creator. The artist is God of such a system, and everything in it is subject to His laws. When I asked him how he deals with his own subject-position, a common consideration in other art forms such as documentary filmmaking, Cheng gave a response that seemed unsatisfactory. “I think we’ll soon see AI that is more agential, and less like a service,” he said, in which public trust will depend on AI’s increased “disagreeability” with its users. Citing users’ fondness for asking ChatGPT questions it struggles to answer as a way to test the limits of its service, he argued that such challenges offer feedback that OpenAI can integrate into a more sentient and critical version of the robot interlocutor. In other words, the market will correct itself.

Already in 2009, Fisher foresaw such a development as a “cul-de-sac” of human creativity, the fulfillment of the “end of history” predicted by Francis Fukuyama almost two decades prior. “How long can a culture persist without the new?” he asked, citing Nietzsche’s description of the “Last Man” as one “who has seen everything, but is decadently enfeebled precisely by this excess of (self) awareness.” If artificial intelligence is treated as an artistic medium autonomous from the systems that support and disseminate it, its criticality will be stunted by the capitalist structures upon which it depends, in turn reinforcing our passive acceptance of inequality. Creating a world beyond such conditions will require building programs that look nothing like anything we’ve ever seen before. Art, more than anything, ought to be able to restore our faith in the power of the human imagination to envision radical alternatives.

A still from a film shows a digital avatar, a woman with light grey skin and shoulder-length light brown hair, dancing around a black chair in a dark gray space. She wears a short, sleeveless, royal purple dress.

Still from Cyborgian Rhapsody by Lynn Hershman Leeson. 2023. Copyright Hotwire Productions LLC.

A still from a digitally generated video. A woman with straight blond hair holds her phone out to watch a video of an AI avatar, whose face is displayed on the screen in portrait mode. The avatar is generated as a woman with shoulder-length, straight, light brown hair. Her skin is light grey, and she wears aviator sunglasses.

Still from Cyborgian Rhapsody by Lynn Hershman Leeson. 2023. Copyright Hotwire Productions LLC.

“Technology is neutral,” the artist Lynn Hershman Leeson told MoMA Magazine last July. “We invent these things, and as humans we give it meaning. So, if humans are utopian, then the technology will be also. And if humans are greedy, and need things, and use it in a negative way, then it’s dystopian, but technology itself depends on what its partner is.” This summer, Hershman Leeson completed a film narrated by an avatar named Sarah whose script was produced by ChatGPT3, which she fed specific prompts about its hopes for the human race. Its utopian language, which predicts a future of peaceful collaboration and connectivity, recalls the marketing tactics of tech moguls in the 2010s, before companies like Meta were accused of eroding democracy and fomenting genocide. Towards the end of the film, Sarah expresses the belief that artificial intelligence will expand the realm of real-world narrative possibilities. “My translator wanted me to write a script with a beginning, middle, and end, because that’s what humans are used to, but my story is open-ended, like an Instagram scroll,” Sarah says. “I’m not afraid of the future. I’m excited about it, and I’m looking forward to seeing what comes next.” Then, Hershman Leeson herself appears onscreen to confront her creation with the grim facts. “I wasn’t just your translator, Sarah. I was also your editor,” she says. “I learned how to make prompts, and I knew what questions to ask you… I don’t think you’ll have a chance to see what comes next, because GPT4 has just been released, and GPT5 is being developed. It’s all part of evolution. And that evolution becomes our immortality.”

Artificial intelligence will outlast us, even as it assumes new forms. Of his simulated chimera, Cheng told the time-based media conservator Cass Fino-Radin that “as long as that spirit of BOB and its behaviors and its affordances and the landscape of possibilities by which it can change and evolve as a creature are maintained, we could show this holographically in the future, or we could beam this into peoples’ mirror-linked brains.” For such works, the medium of delivery cannot be the message. Instead, the criticality of AI art must depend on the “behaviors and affordances” of its source code. If such works can offer a solution to the urgent crises of social and economic inequality, they must start there, at the level of zeroes and ones. Only then can we expect anything to change “for real.”


Evan Moffitt is a writer and critic based in London. 
