In a haunting scene from Amanda Kim’s new documentary devoted to the life and work of Nam June Paik, a former Whitney Museum director recalls receiving a call from Paik in the dead of night. The perspicacious video artist, who coined the term “electronic superhighway” in 1974, wanted to amend his characterization of electronic communications: they were not a highway, but a vast ocean. “We’re in a boat in the ocean, and we don’t know where the shore is,” Paik said.[i] In 2016, a decade after Paik’s death, another video artist, Hito Steyerl, published an e-flux article in which she described analysts drowning in an unintelligible “sea of data” and questioned the machine vision tasked with separating signal from noise in that sea.[ii] The flow of data through the undersea cables that crisscross the globe has afforded a previously unimaginable degree of interconnectivity and access to information; it has also made us vulnerable to scalable disciplinary and capitalist agendas, and given fodder to increasingly sophisticated artificial intelligence with a track record of perpetuating misinformation through generated text and images. We may be past the point of shores.
Describing “digital natives” in her 2020 book on the relationship between art and artificial intelligence, artist and media theorist Joanna Zylinska references a famous commencement speech that David Foster Wallace gave in 2005.[iii] Wallace opens with a parable in which two fish are baffled when a third fish asks them, “How’s the water?” Wallace goes on to underscore the importance of “awareness of what is so real and essential, so hidden in plain sight all around us, all the time, that we have to keep reminding ourselves over and over: ‘This is water.’”[iv] In a sense, artists thinking with and about technology can be that third fish, drawing our attention to technology’s pervasive material, social, and political effects. Some artists address the prejudices and prejudgments that are part of the fabric of our networks, perpetuated by purportedly neutral algorithms and data structures. In her long-term project Conversations with Bina48 (2014–ongoing), Stephanie Dinkins engages in philosophical dialogues with a social robot whose ignorance of many facets of Black female identity—despite the AI’s superficial resemblance to a Black woman—reveals not only a lack of embodied knowledge but also the racial biases in its coding. Meanwhile, Mimi Ọnụọha’s installation The Library of Missing Datasets (2016) highlights the biases that cause swaths of data not to be collected in the first place, using empty folders to illuminate omissions around topics like “Cause of June 2015 black church fires” and “LGBT older adults discriminated against in housing.” Other artists interrogate the resource extraction or precarious human labor on which slick technologies are built and sustained.
Made during the NFT boom amid growing concerns about the environmental impact of NFTs, Kyle McDonald’s web-based piece Ethereum Emissions (2021) estimates the energy expenditures and emissions associated with the Ethereum blockchain, which used an energy-inefficient proof-of-work consensus mechanism until September 2022.
The tools, tactics, and spaces that artists proffer can enable us to be more intentional or subversive in our own engagements with technology. Irritating the surveillance infrastructures in which we are mired, Harris Kornstein’s photographic project Screen Queen Face Fail (2014–ongoing) explores a form of queer resistance wherein drag makeup thwarts facial recognition algorithms whose static notions of gender don’t account for fluidity, performance, or play. As part of Facial Weaponization Suite (2012–2014), Zach Blas has held workshops in which surveillance-scrambling masks—later used in performances—are produced from the collective facial data of workshop participants. In addition to acting as functional camouflage, Blas’s masks have addressed violences such as efforts to establish a physiognomy of queerness via facial recognition technology, algorithmic incompetence at recognizing Black faces, and forced visibility imposed by hijab legislation in France. Through their radically imaginative propositions as to how we might approach technology differently, artists make dreams of a more just, equitable, and livable future that much more possible. Xin Xin’s TogetherNet (2021), a platform for discussion and archive-building in small communities, is one such proposition. The open-source software is designed around principles of Consentful Tech, or consent practices regarding one’s digital body—in stark contrast to the normalized belligerence of surveillance capitalism, wherein personal data is forcibly taken or necessarily bartered. And Dinkins, after embarking on Conversations with Bina48, realized a different, expanded vision of artificial intelligence by creating a chatbot of her own: Not The Only One (N’TOO) (2019–ongoing), a “conversant archive”[v] trained on oral histories from three generations of Black women from one family. 
Between the release of popular AI image generators like DALL-E (2021), DALL-E 2 (2022), Stable Diffusion (2022), and Midjourney (2022), and institutional and market interest in artwork made using Generative Adversarial Networks (GANs), the art world’s eye is currently trained on artificial neural network art—which, in the legacy of DeepDream (2015), can often have a dreamy or hallucinatory effect, as if simulating a technological unconscious. A number of artists, particularly those who have built their own training datasets to specific ends, have made compelling and meaningful artwork using these tools. But at this point in the proverbial hype cycle, it’s worth remembering that there are many ways to dream with machines.
Cassie Packard is an art historian and art writer based in New York.