Trust and Believe
Trust and Believe is a symposium organized around writer – and current Eyebeam resident – Nora Khan’s continuing research on the self-modifying hermeneutics of cognitive capitalism, a paradigm in which algorithmic language buries and conceals its own extractive intent. Khan will detail her work mapping how technocapital, artificial intelligence, and global-scale computation fortify one another along several dimensions: narrative engineering, the proliferation of lexical interfaces equipping commercial smartbots to optimize our trust and belief; experiential simulation, where the limits and possibilities of our personal reality can be tested; and the corporate workplace, incubating creative self-pitching as a mode of survival which envelops consumers in its evacuated corporate poetics. Traversing individual, institutional, and state technologies of algorithmic control, Khan will paint a stark portrait of technoculture inside the capitalist megastructure which pervades all language, sight, and cognition.
This event will focus on a set of provocations drawn from Nora’s latest research, interviews, and conversations with narrative engineers from AI startups and established companies seeking to create the next virtual assistant. One of the most accessible venues for examining cognitive capital in full effect is the conversational interface, namely, virtual assistants and chatbots. Stripped of all visual signifiers, chatbots are left with language alone, a medium in which humans’ essential connection affords unique and powerful opportunities. Designing the software of thought – what Khan terms narrative engineering – is a fundamental organizing principle for brand-personalities which now drives an astounding mobilization of resources and investment.
The nascent art of narrative engineering designs for the user’s “good feeling,” their comfort. And while exceedingly abstract, the engineering process also demands granularity and context, employing scholars from cognitive linguistics who map syntax down to the phoneme to institute trust in conversation. And the result of this cognitive labor? An ever-tightening feedback loop between collective empathy and artificial intelligence.
Through discussion, lectures, and a hands-on workshop, Trust and Believe will present the nuanced considerations involved in building the ideal AI conversationalist—one not too human, not too mechanical, but humanoid—sweet, soft, compliant. An automaton which precisely molds itself to the rhythm and favor of our conversation.
Throughout the day, Trust and Believe poses the following questions:
How does the language of increasingly invisible interfaces bury intent?
How are trust and belief constructed through imperceptible inflections in tone, word choice, and attitude?
What makes a convincing, bounded AI personality, which ones do we choose to tolerate in our intimate spaces, and why?
How does the logic of artificial language replicate only the labor that is already unseen?
How might our own language change in response to continual engagement with consumer chatbots?
How do our interpersonal relationships change in a culture of constant interaction with designed-for affect and feeling?
Why might companies want to design for a different kind of chatbot, and what ethical concerns might they need to explore in the creation of such systems?
This event marks the start of a series of essays produced in partnership with publisher Avant.org, with editor and co-curator Sam Hart.
Image: Shapes associated with the graphic identity of Cortana, Microsoft’s virtual assistant.