Taylor Swift has something to say. “I feel like all artists feel the need to be authentic. Everybody demands authenticity. And every artist believes they are for real,” she intones in Christopher Kulendran Thomas’s 2019 film, Being Human. “You know, everyone believes that even if the whole industry is corrupt, at least I’m true to myself. So, if believing in your own authenticity is the basic price of admission, then authenticity itself becomes the most contested object of synthesis.” The irony is that Swift’s image and words here were generated by a deepfake algorithm. When he toured me around his Kunsthalle Zürich exhibition this summer, Kulendran Thomas told me that using computers to make aesthetic decisions on his behalf was the only way he knew how to be truly original. The show included a suite of oil paintings and sculptures that resembled hallmark works of postwar abstraction; these, too, were devised by algorithms trained on visual data sets from the art-historical canon. His comments reminded me of Rosalind Krauss’s famous assertion that the very concept of originality was a “myth” of modernism, which our contemporary era has seen “splintering into endless replication.” Affectations of authenticity on social media, which are often augmented by photo and video editing software, only reinforce this claim. To update a phrase of Walter Benjamin’s, the work of art in the age of artificial intelligence cannot be taken at face value. Or, as deepfake Swift puts it, “Maybe simulating simulated behavior is the only way we have of being ‘for real.’”
In November 2022, OpenAI released its artificially intelligent chatbot, ChatGPT. This followed the proliferation of generative AI models like DALL-E 2, which can create images from prompts based on content scraped from the internet. ChatGPT’s uncanny ability to provide convincingly opinionated responses to a wide range of questions, arguably passing the Turing Test, led some observers to conclude that OpenAI had achieved the “singularity,” the point at which technological progress exceeds humans’ ability to control it, in effect spelling the beginning of the end of the world. Setting such hysterical predictions aside, AI does portend a certain end for large sectors of the labor force: the World Economic Forum estimates that 14 million jobs could be lost to AI, while Goldman Sachs puts that number closer to 300 million. In February, before OpenAI had even released the current version of its program, the artistic director of a major British art museum admitted to me that she was using it to write the wall texts for her exhibitions. A job that might previously have been outsourced to curatorial assistants is now being entrusted to a robot, which can’t bode well for other junior museum staffers, such as press agents and graphic designers. The software has likely hastened the demise of the art world’s permanent precariat, who have always been underpaid and overworked. In this sense, AI facilitates the deeper entrenchment of class divisions in the cultural sector and beyond, which surely serves OpenAI’s corporate overlords just fine.
Artificial intelligence is fundamentally conservative. When predictive algorithms are trained on existing data, they will only provide us with more of the same. AI furnishes what the theorist Mark Fisher termed “capitalist realism,” a guise of neutrality beneath which capitalism obscures any possible alternative to its structures of exploitation. “We the audience are not subjected to a power that comes from the outside,” he wrote. “Rather, we are integrated into a control circuit that has our desires and preferences as its only mandate – but those desires and preferences are returned to us, no longer as ours, but as the desires of the big Other.” Supercomputers promise us objective insights by processing more information than we ever could as individuals, but that promise is always a ruse, because the information available to them, created and disseminated by humans, is necessarily flawed. Programs designed by private industry are unlikely to undermine the interests of capital. Art that uses such programs to simulate other worlds will be limited by the horizons of the world in which we live.
Last August, during a talk at the Art Barn, a private film and video art collection in Aspen, the artist Ian Cheng spoke about his practice of “world-building.” In 2018, Cheng created an artificially intelligent lifeform he called BOB, for “Bag of Beliefs,” rendered in a digital simulation as a chimerical red serpent. When displayed in a gallery, BOB writhes across a battery of LED screens, eating the spiny fruit that visitors can feed him through a mobile phone app. BOB’s evolution depends on his encounters with IRL humans and with the other creatures who amble across his virtual space, all of whom behave independently of Cheng’s control. The artist was able to build a comprehensive world for BOB because he knew how to construct its component bricks, a process requiring a high degree of technical coding expertise, but even that can’t undo the influence of his bias as creator. The artist is God of such a system, and everything in it is subject to His laws. When I asked him how he deals with his own subject-position, a common consideration in other art forms such as documentary filmmaking, Cheng gave a response that struck me as unsatisfying. “I think we’ll soon see AI that is more agential, and less like a service,” he said, adding that public trust will depend on AI’s increased “disagreeability” with its users. Citing users’ fondness for asking ChatGPT questions it struggles to answer as a way of testing the limits of its service, he argued that such challenges offer feedback that OpenAI can integrate into a more sentient and critical version of its robot interlocutor. In other words, the market will correct itself.
Already in 2009, Fisher diagnosed such a development as a “cul-de-sac” of human creativity, the fulfillment of the “end of history” that Francis Fukuyama had proclaimed almost two decades earlier. “How long can a culture persist without the new?” he asked, citing Nietzsche’s description of the “Last Man” as one “who has seen everything, but is decadently enfeebled precisely by this excess of (self) awareness.” If artificial intelligence is treated as an artistic medium autonomous from the systems that support and disseminate it, its criticality will be stunted by the capitalist structures upon which it depends, in turn reinforcing our passive acceptance of inequality. Creating a world beyond such conditions will require building programs unlike anything we have seen before. Art, more than anything, ought to be able to restore our faith in the power of the human imagination to envision radical alternatives.