HOLO is an editorial and curatorial platform, exploring disciplinary interstices and entangled knowledge as epicentres of critical creative practice, radical imagination, research, and activism. Read more
“A space to consider AI as a prompt to abandon positive and positivist visions of the future, for meaningful resistance in a time of an overdetermined computational life.”
Animation:
Basics09 & N O R M A L S
Art and design education history is filled with revered experiments. The Bauhaus (1919-33) famously brought fine artists and craftspeople together under one roof, to share knowledge about materials and methods—nurturing approaches and aesthetics that would become foundational to modernist art and graphic design. Black Mountain College (1933-57) jettisoned most of the bureaucratic conventions of what makes a school a school, creating a gradeless co-op environment where students took charge of their education (and even grew their own food). And under John Maeda, the MIT Media Lab’s Aesthetics + Computation Group (1996-2003) prototyped new forms of digital expression—software art, interactive displays, data visualization—that are now commonplace.
Collectively, these schools paved the way for much of the thinking about how we make art that is now convention. They did so through radical pedagogy that challenged academic hegemony and reinvented what and how students learned, often inspiring entirely new types of work and ways of working. Since then, with tuition rising worldwide and increased economic pressure on students, the stakes of education have only risen. With few exceptions of post-austerity experimentation—the School for Poetic Computation (SFPC), the School of Machines, Making & Make-Believe—universities and colleges increasingly resemble vocational factories. In them, the likelihood of critical reflection diminishes as resources grow scarce, as programs retool around profitability and industry expectations, and as students’ economic prospects become more dire. Any time a group of educators tries to break this mold, we should take note.
With the incontestable fact that the world needs more radical pedagogy in mind, HOLO is very excited to announce its participation in the upcoming AI Anarchies Autumn School, co-curated by Maya Indira Ganesh and Nora N. Khan for JUNGE AKADEMIE. Taking place at Berlin’s Akademie der Künste Oct 13-20, the school will conduct “experiments in study, collective learning and unlearning,” through a variety of talks, panels, and workshops drawing on an impressive roster of multidisciplinary fellows, tutors, and a study group. Its primary focus: taking received thinking about AI to task and expanding the conversation about what is possible.
“The school pushes back against both capitalism and corporatism. It aims to repair our psyches and social practices—now that AI and automation are so thoroughly entwined with the fabric of everyday life.”
Countering the ‘AI Ethics’ discourse that has taken root over the last decade—the idea that if we could only get Big Tech’s engineers and entrepreneurs to be better citizens and design systems with responsibility in mind, all would be well—the school pushes back against both capitalism and corporatism. And most crucially, against a general lack of imagination about what our lives with AI might be. It also acknowledges that our tableau of algorithmic timelines, natural language processing bots, and bustling data marketplaces is the world we now live in, and pragmatically aims to repair our psyches and social practices—now that AI and automation are so thoroughly entwined with the fabric of everyday life.
As Ganesh and Khan put it in their statement: “We want to create a space to consider AI as a prompt to abandon positive and positivist visions of the future, for necessary disregard, for meaningful resistance in a time of an overdetermined computational life. By consciously moving beyond a re-statement of the status quo of solutions, of solutionist discourse, we insist on undoing our own positions.”
‘Undoing our own positions’ resonates, because it posits personal agency instead of (only) pointing a finger at Amazon, Google, or Palantir. The school’s program of panels and workshops is a kaleidoscope of responsibility and resistance, play and provocation, with threads of dreaming, feminist organizing, multisensory stimulation, and whimsical performance running through it. Beyond the savvy sociopolitical framing you’d expect from the assembled brain trust, the school looks like an awful lot of fun.
“It offers a kaleidoscope of responsibility and resistance, play and provocation, with threads of dreaming, feminist organizing, multisensory stimulation, and whimsical performance running through it.”
As the embedded editorial partner, HOLO will respond to the Autumn School’s radical pedagogy and playfulness in kind. Over the coming weeks, this dossier—the AI Anarchies Tracker—will become a rich repository of notes and highlights from its discussions and workshops, but it will also be a place where HOLO pokes, prods, and questions the provocations that bubble through. Beyond our excitement about joining Ganesh, Khan (who we thoroughly enjoyed working with on HOLO 3), the participants, and the JUNGE AKADEMIE team of Clara Herrmann and Nataša Vukajlović, we’re thrilled to share our (un)learnings with all of you.
The JUNGE AKADEMIE’s AI Anarchies Autumn School, curated by Maya Indira Ganesh & Nora N. Khan. The school is part of the AI Anarchies project, initiated and curated by Clara Herrmann & coordinated by Nataša Vukajlović. It includes an artist residency programme for six artists in cooperation with ZK/U Berlin and a concluding exhibition at the Akademie der Künste in June 2023. AI Anarchies is supported by the German Federal Commissioner for Culture and the Media.
002 – In Conversation:
Maya Indira Ganesh & Nora N. Khan on (Un)Learning (11/10/2022)
“The anarchic-strange is not a point of arrival, but a search for practices of memory, body, collectivity, and fierceness, other logics that can also sustain our hybrid selves. AI Anarchies is that kind of search; for theories, culture, and stories.”
Animation:
Basics09 & N O R M A L S
JUNGE AKADEMIE’s AI Anarchies Autumn School is curated by Maya Indira Ganesh and Nora N. Khan. In advance of the school’s launch, the duo share insights on their “experiments in study, collective learning and unlearning,” and what they’re looking forward to. Beyond plumbing the depths of their memories for experiences in (and outside) the classroom that inspired their thinking about how school should work, Ganesh and Khan sketch some of the “anarchic, strange, and improper” AIs they’d like to see in the world.
We tend to think about algorithms and AI as something ‘top down,’ that is imposed on us by a corporation or platform. What are some examples of ‘bottom up’ problem solving or organization that illustrate how people can (or already do) have more agency?
Maya: There’s a very easy way to stop being shown ads about bras, weight loss, light therapy, hot yoga, or whatever else annoys you on Instagram: you can go into the ad settings and (de)select what you want to see less of. Algorithms can feel like magic, and the tech we have can do pretty wonderful things; this is why we love the digital. But algorithms are not necessarily magical. People can and do have agency within the digital by learning how it works, and by spending a bit of time shaping the digital space they want to inhabit.
Nora: I tend to visualize algorithms as ubiquitous: as a cloud, a set of ghosts around us, in the air, the ether, in our minds. Driven, catalyzed, conjured, planted, built by others, much like the companies, institutions, and structures they support and strengthen. Just as we learn to navigate and then be slightly anarchic within physical structures, we do the same with ‘AI.’ People and their machines. We practice disobedience within the nodes, while learning the forest of rhetoric (an old teaching tool), while learning two or three languages. Algorithms are pattern languages with sub-leaves and trees and nodes to cross, with their own practical beauty. Their limits make the world. We practice if-then-maybes at every juncture in our relationships to each other, to these monoliths of institutions. I see agency in this miasma as first found through rhetorical training: learning to map the logic trees. Somewhere in the folds of the map, we find one another.
“Algorithms are pattern languages with sub-leaves and trees and nodes to cross, with their own practical beauty. Their limits make the world. We practice if-then-maybes at every juncture in our relationships to each other, to these monoliths of institutions.”
You both have a wealth of experience in cultural production and education. What have you taken from other schools and institutions, to incorporate into the Autumn School? What have you been careful to avoid?
Nora: Every experience I have had as a student, from my first school (a graduate research lab in a psychology department—the Child Study Center—with double-mirrored observation walls; I have a picture of my 4th birthday party with the mirrors glinting behind me), into middle school, high school, college, graduate school, has given lessons on the myths of meritocracy, mastery, and knowing it all. I’ve learned from many brilliant and generous teachers, but their most imprinted lessons tended to be outside of the classroom: when they’d pull you aside after a critique, or say they could see what you were trying to do.
Those lessons were ephemeral, anecdotal, improper, and always outside the curriculum. They were moments the teacher was vulnerable, a peer, unafraid to show they were learning, making mistakes, and trying new methods along with their students. They were loose. A tender demeanour. I tear up thinking of those moments, the shield dropping. It takes such grace to be intellectually vulnerable. Being loose didn’t threaten their authority; teaching was not about authority! They met you to preserve the thread of an idea, half-formed. As a teacher, working with artists and artists writing, I want to preserve these threads and germs of ideas. I want to stay sharp and recognize them, how they give way to a shift, a turn. AI studies and discourse has long had a violent emphasis on legible mastery. But the critical humanities and art have a pretty strong practice of resistance where needed 🙂
Maya: In a life before academia, I used to work with feminist and human rights defender movements. The ‘workshop’ has been a key technique in bringing together people working with technology and politics. There was rarely any top-down didactic ‘teaching,’ because everyone was considered an expert in their own right, with something to share. When you make politics, theory, art, and culture, you need particular kinds of spaces for togetherness, and for solitude. You need spaces to play, too. Leadership and direction are essential in taking responsibility for architecting those spaces. Being in nature and by bodies of water is always appealing. Workshops (i.e. people!) taught me to think about hosting such spaces in terms of flows, and small experiences of feeling welcome and comfortable; and to foresee the (necessary, positive) frictions that thoughtful people coming together generate.
The school schedule reads like a bucket list of timely provocations and delightfully eccentric workshops. What are some of the more unconventional workshops, activities, or performances each of you are really excited about?
Maya: I’m excited about the many workshops and performances related to sound and listening, particularly in a moment when visual culture and language culture are being ‘discovered’ by AI industries creating ‘transformer’ technologies. I am also looking forward to Lauren Lee McCarthy’s performance Surrogate. Lee McCarthy’s work confronts the tragicomic and tender aspects of being human and inhabiting vast surveillance apparatuses. I believe this work speaks to many of the concerns and questions we have about AI technologies, not just in terms of their extractive or surveillant infrastructures, but also in terms of the absurd conditions of the digital we inhabit.
Nora: We worked on HOLO 3 together, and a review of it (a compendium of essays and works about ‘incomputable’ ideas in relation to AI and machine learning) hit on it: many of the pieces don’t seem to be about technology or AI. Here, I’m delighted most by the full, varied range of performers, workshops, and playful strategic sessions, which don’t necessarily seem to relate 1:1 with AI. Maybe this is part of the anarchic piece; perhaps the argument here is that all the things that don’t look like AI do in fact relate to its making, because AIs are myths, imaginaries, narratives, as much as they are algorithms, pattern recognition, data, logics, systems, platforms, decisions. AI—and thinking hard about it and being really confused and frustrated and weirded out by it—creates a special psychological and emotional “situation” in which we’re thinking about what we’re making and what that says about our desires, needs, and perceptions of what constitutes the human, the inhuman, the non-human … I see these workshops as a hydra that collectively tries to probe this field. Study is messy and anarchic by nature. My hope is that the workshops and mornings and discussions bubbling in the space … will create emergence—ideas and thoughts we can’t predict.
“Things that don’t look like AI do in fact relate to its making, because AIs are myths, imaginaries, narratives, as much as they are algorithms, pattern recognition, data, logics, systems, platforms, decisions.”
You’ve been quite careful with language and describe the participants as a ‘study group’ not ‘students.’ How do you see the cohort engaging and contributing to the program, and what kind of outcomes would you love to see from them?
Maya: I think of AI Anarchies as something like a caravanserai, the traditional inns and resting places along the historic, medieval Silk Road that linked the geographies that we now refer to as Europe and Asia. So I think of the study group as fellow travellers meeting by chance, and by design, and having fascinating stories, ambitions, and insights to share. There is so much that AI is reshaping in our human social worlds that we struggle to comprehend and articulate. I think I want to hear new frames and language through which to make sense of things going on. Caravanserais have been sites of cultural cross-pollination and dispersal of idea-spores, so I hope the study group makes new and stimulating connections that are much more than just *networks.*
Nora: I think Maya has said it perfectly. I’m happy to see the idea-spores dance. Siegfried Zielinski, who was on the advisory panel for the program, urged early on that we keep the anarchic in view, and the meaning of Anarchy in clear sight, as we built out the program. Building it out made it feel a bit less anarchic than we had desired. The pressure to formalize the Thing, and our own desires to conclude, make tidy: we had to keep undoing these as curators who like conclusive forms, maybe 😉
You could say designing the school required that we put some of its tentative propositions into practice from the start. We can’t create a space of intellectual play if we’re not practicing that play all along the way. The amazing Akademie der Künste team made this possible, arranging countless calls with speakers and thinkers who’ve already been so generous, and shaped our understanding of what the space could feel like. It felt like a constant rhizomatic negotiation. I see this negotiation and thinking together only continuing, in which we will all work to keep the space open. I think about how spaces for inquiry can close up. Getting amazing people in the rooms, which we feel very proud of doing, is the first step. Maintaining the space, tending to it, responding to the group’s critiques and redirects and evolving needs will be the most profound—and rewarding—challenge. Check in with us along the way 🙂
“I think of AI Anarchies as something like a caravanserai, the traditional inns and resting places along the historic Silk Road. The study group are fellow travellers meeting by chance, and by design, and having fascinating stories, ambitions, and insights to share.”
You close your curatorial statement with the provocation that we should think bigger than ‘what should an ethical AI be?’ and instead dream up “anarchic, strange, and improper AIs.” What are some examples of those kinds of AIs that you would love to see?
Nora: One that plans its own end; one that uses the unimaginably vast bank of personal data gathered from us over a decade absolutely none of the time; as I’ve said with others, forever, a leftist or socialist or anti-fascist AI, however naive that seems. One that takes history and its own history and decisions into account. An AI that works to respond to unseen ‘bias,’ names it, works with knowledge of it, rather than encoding it unreflectingly and more deeply. An AI that’s a genuine confidante who never tells anyone anything. An AI you can barely find, and that has no interest in what you do or where you consume.
Maya: AI is a set of computational methods and techniques; what might be anarchic and strange and improper is how other methods for being in and knowing human, inhuman, nonhuman worlds might exist alongside and inside it. It is not about better or more appropriate applications of AI, though there are certainly opportunities for those. The digital is a source of unending delight and creativity, and many of us are, at this point, part-digital. So I struggle with valuing a place outside the digital as better. I find myself ambivalent and agnostic about this condition, uncertain about the gains and losses. The anarchic-strange is not a point of arrival, but a search for practices of memory, body, collectivity, and fierceness, other logics that can also sustain our hybrid selves. This event, AI Anarchies, is that kind of search; it needs its theories, its culture, its stories; and we are here with this community to do just that.
Maya Indira Ganesh researches the politics of AI and machine learning. She is a senior researcher at the Leverhulme Centre for the Future of Intelligence, and an assistant professor co-convening the AI Ethics and Society program at the University of Cambridge. Ganesh earned a PhD from Leuphana University examining the re-shaping of the ‘ethical’ through the driverless car, big data, and cultural imaginaries of robots.
Nora N. Khan is a curator, editor, and critic of digital visual culture, the politics of software, and philosophy of emerging technology. She is the Executive Director of Project X for Art and Criticism, publishing X-TRA Contemporary Art Journal in Los Angeles; author of Seeing, Naming, Knowing (2019), Fear Indexing the X-Files (2017), and the forthcoming No Context: AI Art, Machine Learning, and the Stakes for Art Criticism; and editor for Topical Cream and HOLO.
003 – Note:
INTERVENTION (12/10/2022)
Intervention: A bot to help participants become their best (most machine-readable) selves
For your daily dose of the absurd, alt- and broken-future connoisseurs N O R M A L S will sprinkle the dossier with insights from their AI Anarchy Flattener: a little crawler that scans recordings and notes for all sorts of hidden information to help Autumn School participants know themselves better. In order to perform optimally, this energy monster needs clean, easily computable data: well-formatted speech, reliable behaviours, and as little noise as possible. But worry not: to help participants make their activities more machine-friendly, it will provide simple templates to reduce their anthropic entropy. The bot provokes big questions: Can we, through this cooperation, learn to come and grow together as one? And do we want to forfeit our inner chaos for the sake of better computation in times of energy crisis? Tell it by shaping your arms into a perfect square.
004 – Panel:
[Unk] Photos and Violent Machine Learning Datasets
Speakers:
Hito Steyerl, Alex Hanna, Maya Indira Ganesh, Nora N. Khan
PROFILE:
Hito Steyerl
Hito Steyerl is a German filmmaker and moving image artist whose work explores the divisions between art, philosophy, and politics. She has had solo exhibitions at venues including MOCA Los Angeles, the Reina Sofia, and ICA London, and participated in the Venice Biennale and Documenta. She is a Professor of New Media Art at the Berlin University of the Arts and author of Duty Free Art: Art in the Age of Planetary Civil War (2017).
Curatorial Frame:
What does the old Nazi-era Tempelhof airport have to do with current machine learning datasets? What does intelligence as common sense have to do with common sensing? In the panel discussion that opens AI Anarchies, Hito Steyerl and Alex Hanna trace histories of forgotten possibilities for anti-fascist technologies, and assess the present conditions of determinism embedded in AI. Working with Saidiya Hartman’s notion of “critical fabulation,” Steyerl and Hanna discuss how we might refuse, reject, and re-imagine the sociotechnical systems we have inherited.
Soundbite:
“In the context of the larger discussion of extractivism and energy, data is one resource that can be extracted but energy and other resources—like labour—can be extracted as well.”
Rather than ask philosophical questions about where AI cognition and execution succeed and fail, Steyerl looks to images of extraction and capitalism as visual evidence of the forces that really drive culture. She outlines how we do and might caption these images—largely drawing on media from the 20th century—as a way to set up a conversation about the contemporary AI prompt and the images it generates.
Soundbite:
“In the 1983 Soviet television series Acceleration, a group of cybernetic scientists try to computerize a natural gas network. At one point, they come to the conclusion that it has become sentient, en-souled. They understand it as an AI with a subjectivity of its own.”
Hito Steyerl, paraphrasing a text by Oleksiy Radynski who she is collaborating with on a Nord Stream-related research project
Reference:
The Nord Stream 2 pipeline is a prime example of what Steyerl calls ‘fossil fascism.’ Recently, its promised stream of natural gas led energy-hungry Germany to cede leverage to Russia. Those tensions were exacerbated by the series of explosions that ripped through both the Nord Stream 1 and 2 pipelines in September 2022, causing leaks in international waters between Denmark and Sweden. The sabotage remains unattributed, though most speculation about its origins points to Russia, which would stand to gain the most from (further) disrupting the already strained European energy market.
Soundbite:
“What does this photo of the Nord Stream pipeline leak show? If you ask the DenseCap captioning neural network model it will give you an answer: the water is calm, the water is white, the water is brown, the water is blue.”
Hito Steyerl, linking geopolitics and machine-readability
Soundbite:
“If you ask DenseCap to analyze Brecht’s War Primer it cannot deal with it. In particular the epigram or caption: it tries to identify it as a photo and says ‘the photo is [unk],’ which I believe is an abbreviation for unknown.”
Hito Steyerl, on what happened when she fed Bertolt Brecht’s poetry book to the DenseCap captioning neural network
Reference:
Steyerl’s talk draws heavily from (and expands considerably on) Bertolt Brecht’s War Primer (1955). Described by its contemporary publisher Verso as “a devastating visual and lyrical attack on war under modern capitalism,” in it the playwright annotated conflict photography with poetry. Of particular interest to Steyerl: a seemingly benign image of coastal bathers whose extremities are stained with the oil from ships sunk in a nearby naval battle, which the DenseCap captioning neural network struggles with.
Soundbite:
“If we follow Brecht’s thinking about captions: Could the AI create a caption for itself that would take into account the infrastructure of this gas network—and maybe even of itself, of its own energy extractivism? Which caption would we need for this image to capture those things that current machine learning tools are incapable of extracting from images?”
Hito Steyerl, on extracting real value from images
Soundbite:
“In 2021, Vladimir Putin was the patron of an art exhibition in Berlin called ‘Diversity United.’ Outside the same building, there are now temporary homes in use by Ukrainian refugees.”
Hito Steyerl, on the politics of the Nord Stream 2 pipeline coming full circle
Soundbite:
“Both coal and oil are likely to decline as energy sources. So—instead of the famous ‘data is the new oil’ claim—what if data is the new gas?”
Hito Steyerl, quoting Radynski’s take on Wendy Chun’s 2017 claim that ‘data is the new coal’ (not oil)
Reference:
Inspired by the war photography she researched (and its violent prompts), Steyerl created a battle scene of her own with an AI image generator. Sited at Berlin’s Tempelhof Airport, she cheekily rendered it in the contemporary (and suddenly ubiquitous) ‘trending on artstation’ aesthetic.
Soundbite:
“Prompts can now be used as tools in machine learning based image creation. Neural networks, as you all know, are able to conjure up images from prompts or maybe from projective captions. This is why the whole topic of captions is so interesting to me. This basically reverses the DenseCap project, which extracts captions from images. This projects images from captions.”
Hito Steyerl, contrasting extracting information from images to projecting images with information
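Sketch:
A minimal, hypothetical Python sketch of the two directions Steyerl contrasts: extracting a caption from an image, and projecting an image from a caption. DenseCap itself isn’t readily installable, so off-the-shelf stand-ins (a BLIP captioner and Stable Diffusion via Hugging Face) play its role here; the model names and filenames are illustrative assumptions, not Steyerl’s actual tooling.

```python
# Illustrative stand-ins for the two operations Steyerl contrasts.
# "leak_photo.jpg" is a hypothetical local file.
import torch
from transformers import pipeline
from diffusers import StableDiffusionPipeline

# Image -> text: caption extraction (the DenseCap direction)
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")
caption = captioner("leak_photo.jpg")[0]["generated_text"]
print("extracted caption:", caption)

# Text -> image: the reverse operation, projecting an image from a caption/prompt
generator = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
image = generator(caption + ", trending on artstation").images[0]
image.save("projected.png")
```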
Soundbite:
“Which caption would create a prompt that would be able to project a future that leaves behind energy colonialism, data, and fossil extraction? We don’t know the caption, but we do know that once we find it, DenseCap will label it ‘the photo is [unk].’”
Hito Steyerl, on the kind of unknowability that we desperately need more of
Reference:
The culmination of several years of research by Goldsmiths Creative & Social Computing lecturer Dan McQuillan, Resisting AI: An Anti-Fascist Approach to Artificial Intelligence (2022) intervenes dramatically in mainstream AI discourse. Challenging assumptions made by AI boosters, McQuillan broadly argues for a humanistic use of AI to bolster social structures, versus (technocratically) treating social formations as problems to be optimized. “Our ambition must stretch beyond the timid idea of AI governance, which accepts a priori what we’re already being subjected to, and instead look to create a transformative technical practice that supports the common good,” writes McQuillan in an excerpt published on Logic. In conversation after her talk, Steyerl recommends the book as offering ways to avoid the kinds of dilemmas she described.
Soundbite:
“What Hito has done tonight is set up an alternate history of AI for us. We’re told that AI started in 1956 in Dartmouth, New Hampshire, where a group of scientists thought they just needed a summer to crack AI—and here we are 70 years later LOL. Hito has read an existing history of the nation state, of energy futures and imaginaries, and histories of technology—and she’s projected them forward.”
Profile:
Alex Hanna
Alex Hanna is Director of Research at the Distributed AI Research Institute (DAIR). Holding a PhD in sociology from the University of Wisconsin-Madison, her work centers on the data used in new computational technologies, and the ways in which these data exacerbate racial, gender, and class inequality. Hanna has published widely in journals including Mobilization, American Behavioral Scientist, and Big Data & Society.
Soundbite:
“I’m not sure there is something recoverable in the history of or from the body of techniques that have come to be thought of as artificial intelligence. Whichever way AI has or will go, the fact is that modern AI needs data to subsist and to perform even the most perfunctory of tasks.”
Alex Hanna, burning down the popular history of AI
Takeaway:
Hanna offers a history of AI that is not a tale about progress, computation, or simulating cognition, but one “littered with violence and warmaking.” A direct line can be drawn from gatherings of elite white males at American Ivy League schools to an accumulation of venture capital in Silicon Valley.
Takeaway:
Machine learning datasets enact violence twice: first at the moment of capture, and then again during the process of classification. That harm is then ‘baked into’ future models created with the data.
Soundbite:
“Every modern AI technology, from OpenAI’s DALL-E and GPT-3 to the new open source Stable Diffusion image generation model, requires web-scale data—typically drawn from the public web, with little regard to the people in those datasets.”
Alex Hanna, on how conversations about AI are conversations about data ethics
Soundbite:
“Saidiya Hartman defines critical fabulation as a method of writing and reading the archive. She says the intention in Venus in Two Acts ‘isn’t as miraculous as reclaiming the lives of the enslaved, or redeeming the dead, but rather labouring to paint as full a picture of the lives of captives as possible.’”
Alex Hanna, drawing parallels between Saidiya Hartman’s work on the colonial archive of the transatlantic slave trade and the contemporary dataset
Soundbite:
“I’d like to note the necropolitical impulse of Paris is Burning: Venus Xtravaganza stands as an abject tale of death, her life as a coda to the story. In the penultimate scene of the film, Angie—who herself died of AIDS three years later—discusses Venus’ death as being about ‘being transsexual in New York City and surviving.’”
Alex Hanna, on the Paris is Burning documentary’s extractive legacy
Reference:
A Jennie Livingston-directed documentary capturing the waning golden age of New York City’s drag balls, Paris is Burning was released in 1990 to widespread critical acclaim. Providing a window into drag and LGBTQ+ culture, it documented the (then much more underground) drag scene and chronicled queer cultural innovation, ultimately making a career for Livingston without compensating its precarious subjects. Hanna deploys the film in a broader conversation about extractivism, archives, and datasets, underscoring the plight of one of its protagonists, Venus Xtravaganza (image), who brings vibrance to the film; at its end, the trans Latina sex worker is revealed to have been murdered.
Soundbite:
“I want to trouble Crawford and Paglen’s ImageNet archaeology. Who are the people retrieved in these datasets? And what productive work does the excavating of these AI datasets perform?”
Alex Hanna, foregrounding the personhood of the subjects of large image datasets (and practice of those that engage them)
Reference:
A paper by AI researchers Kate Crawford and Trevor Paglen published by the AI Now Institute in 2019, “Excavating AI” digs into the politics of categorization in the ubiquitous ImageNet image database. With some 14 million images, the database is widely used in visual object recognition research, and one of the issues the authors flag is that the biases of classification render its widespread use problematic. Crawford and Paglen explore this “sharp and dark turn” of classification within the ‘person’ subclass of the database, collecting evidence of rampant sexism, misogyny, homophobia, and racism that creates conceptually suspect and ethically problematic distinctions between different types of bodies, genders, and identities.
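Sketch:
ImageNet’s labels are drawn from WordNet synsets, so the ‘person’ subtree Crawford and Paglen excavate can be browsed directly. A minimal Python sketch using NLTK, with the caveat that it inspects WordNet’s label taxonomy, not ImageNet’s images:

```python
# Walk a shallow slice of WordNet's 'person' hyponym tree, the taxonomy from
# which ImageNet's contested person categories were drawn.
import nltk
nltk.download("wordnet", quiet=True)
from nltk.corpus import wordnet as wn

def descend(synset, depth=0, max_depth=2):
    """Print a synset and its hyponyms down to max_depth levels."""
    print("  " * depth + synset.name(), "-", synset.definition())
    if depth < max_depth:
        for child in synset.hyponyms():
            descend(child, depth + 1, max_depth)

descend(wn.synset("person.n.01"))
# Even two levels down, hundreds of categories appear, including the loaded,
# judgmental labels that "Excavating AI" documents in far harsher detail.
```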
Soundbite:
“With Lacework, Everest Pipkin taps into the root of the matter: Machine learning datasets are a violent archive. Of faces, actions, of moments taken without context. Many of the frames available for people to contest being included in these archives are limited, be it informed consent as a scientific mechanism, data subjectivity within privacy and data regulation, or other liberal rights of protecting one’s likeness.”
Alex Hanna, outlining how much work remains to be done in ethical and consent-based data collection
Reference:
A 2020 project by Everest Pipkin, Lacework is a neural network reinscription of the MIT Moments in Time dataset. Taking that source collection of one million labelled three-second videos of dynamic scenes as a starting point, Pipkin slowed down the source clips and configured them to morph into one another, creating what they describe as a slow-motion “cascade of gradual, unfolding details.” The resulting work may be poetic, but Pipkin, who spent considerable time examining the source material, found dark corners of the dataset, stating they were exposed to “moments of questionable consent, including pornography. Racist and fascist imagery. Animal cruelty and torture … I saw dead bodies. I saw human lives end.”
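Sketch:
Pipkin’s actual process relies on neural frame synthesis; the basic move, stretching a clip by blending intermediate frames between its originals, can be crudely approximated in a few lines. A rough Python/OpenCV sketch under that assumption (filenames are hypothetical):

```python
# Crude stand-in for Lacework's slow-motion morphing: linear crossfades are
# inserted between consecutive frames, stretching the clip roughly 8x.
import cv2

def slow_morph(src_path, dst_path, inserted=8):
    """Write a slowed copy of src_path with `inserted` blended frames
    between each pair of original frames."""
    cap = cv2.VideoCapture(src_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    ok, prev = cap.read()
    if not ok:
        raise ValueError("could not read source video")
    h, w = prev.shape[:2]
    out = cv2.VideoWriter(dst_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        for i in range(inserted):
            alpha = i / inserted
            out.write(cv2.addWeighted(prev, 1 - alpha, frame, alpha, 0))
        prev = frame
    out.write(prev)
    cap.release()
    out.release()

slow_morph("moments_clip.mp4", "lacework_like.mp4")  # hypothetical filenames
```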
Soundbite:
“It seems like there are some very fruitful overlaps between each of your discussions—as poetic inquiries, about missing historical information, about the violence of the archive and datasets, and about historical information and images as ways to talk about machine learning and AI.”
Nora N. Khan, highlighting commonalities between Steyerl and Hanna’s talks
005 – 13/10:
INTERVENTION (13/10/2022)
Intervention: A bot to help participants become their best (most machine-readable) selves
The AI Anarchy Flattener scans daily dossier distillations for hidden information to help AI Anarchies Autumn School participants become their best (most machine-readable) selves. Read more about the N O R M A L S provocation here.
006 – 14/10:
Wildflowers and Broken Machines (14/10/2022)
Morning Provocation
Wildflowers and Broken Machines: A Tech Check for Feminist Philosophies and Politics
Speakers:
Sarah Sharma, Jac sm Kee, Maya Indira Ganesh, Nora N. Khan
“I stand up here with a contradictory relationship I have with being somebody who thinks of themselves as a techno-feminist theorist who is very incapable technologically. But also, that makes me able to do weird things with technology, as you’ll see—I’m more interested in things like light bulbs than AI. That’s the spirit with which I come to this.”
Sarah Sharma, positioning her scepticism (and fascination) with technology
Takeaway:
Researchers can’t put their heads in the sand and pretend bad or influential actors in their field don’t exist. Sharma refers to spending time reading the “blogs of incels, tech bros, and masculinist and problematic media theory” as part of her work. By reading sources like these or alt-right voices, she is able to understand the rhetoric and ideology at play. Know thy enemy.
Takeaway:
Activists and critical researchers working against the interests of tech monoliths like Google or Meta should not assume those companies (and the entrepreneurs helming them) do not have a guiding ideology around what media shapes the social realm. To think or hope “there is no intelligible media theory” in the design or policy decisions made in Silicon Valley “might be a feminist misstep,” warns Sharma.
Soundbite:
“One of the things we were asked to think about as a prompt, is the kinds of terms and concepts we’ve inherited within techno-feminism. And I think this is super interesting, because techno-feminism only operates with an inherited technology world. And with an inherited intellectual canon of media theory. So there’s a lot we’ve inherited, and how do we work within this inheritance?”
Sarah Sharma, on working within techno-feminism’s inherited concepts and canon
Precedent:
Going after an always-on, corporatized personal virtual assistant might seem like low-hanging fruit for a media theorist, but Sharma’s reading goes beyond the usual commentary on Alexa’s (seeming) female gender. Sharma is interested in “how Alexa itself reorchestrates the labour of others,” and in the network of relations that connects a desire for detergent with an Amazon driver setting out from a fulfilment centre. Amazon’s device “expands the world of social reproduction and consumption,” says Sharma.
Image: Kate Crawford & Vladan Joler, Anatomy of an AI System (2018)
Soundbite:
“When Elon Musk launched his Tesla Roadster into space, he tweeted out a photo of Marshall McLuhan with no accompanying words. I found it a little ominous and creepy.”
Sarah Sharma, starting her discussion with the biggest tech bro of all
Soundbite:
“In McLuhan’s key text Understanding Media, there is a phrase that is often passed over. Quote: ‘Man becomes, as it were, the sex organs of the machine world, as the bee of the plant world, enabling it to fecundate and to evolve ever new forms. The machine world reciprocates man’s love by expediting his wishes and desires, namely, in providing him with wealth.’ And that is, to me, one of the creepiest lines, given what is going on, when I see Elon Musk tweeting that photo of McLuhan.”
Sarah Sharma, lining up Musk’s actions with McLuhan’s words
Precedent:
For Sharma, a scholar based in Toronto, part of taking media theory’s patriarchal legacy to task involves engaging the late media theorist Marshall McLuhan. A pop-culture sensation in the 1960s and ’70s, his best-known work is Understanding Media (1964), which makes a broad case for how new mass media (television, film, radio) were engaging human senses differently than print culture, and for how new media retrieve old cultural forms. Of particular relevance to Sharma’s talk is McLuhan’s reading of the light bulb, which she uses to demonstrate the limitations of his analysis. Sharma brought her aspirations for a more inclusive media theory to the University of Toronto’s McLuhan Centre, serving as its director from 2017-22.
Soundbite:
“McLuhan never got the full message of the light bulb. The techno-feminist reading of the light bulb would have us focus on something else—not just day and night or a shifting definition of day. The light bulb, for example, shifted the gendered labour of the day and ushered in a new politics of night, replete with new subaltern politics and transgressive politics; even different modes of policing become possible.”
Sarah Sharma, picking up and extending McLuhan’s reading of the light bulb
Soundbite:
“Latent and blatant within these conceptions of media that dominate our culture, one of the other connections I’ve been making is that these tech elites operate with a conception of technology that also explains how they think of social difference. There are also clues in the designs of the products they make.”
Sarah Sharma, on trying to reverse engineer masculinist and tech bro philosophy through studying their talk and actions
Soundbite:
“There’s an overriding misogynistic formulation of women as technological tools. And feminists, and all the other non-abiding—it’s not just about women, it’s about the non-conforming and non-abiding—as broken tools. In other words, feminists are the aberrations of otherwise well-working tools.”
Sarah Sharma, distilling misogynistic overtones emanating from the alt-right (and fairly recent media theory)
Soundbite:
“I find it really interesting that Big Tech is invested in design, in media theorizing, and machine logics. But their feminism campaigns are turned away from technology and only focus on the workplace—and ameliorating any difficulties.”
Sarah Sharma, on where Big Tech does and doesn’t engage gender
Profile:
Jac sm Kee
Jac sm Kee is a feminist activist working at the intersection of internet technologies, social justice and collective power. Kee’s activism includes sexuality and gender justice, feminist movement building in the digital age, internet governance, open culture, and epistemic justice. A co-founder of the Take Back the Tech! collaborative campaign on ending online gender-based violence, Kee also stewarded the development of Feminist Principles of the Internet.
Soundbite:
“A big part of my practice is to make technology intelligible to feminism. And to make feminism intelligible to technology.”
“For the last ten years, I’ve been really lucky to work with some of the most inspiring feminists, thinking about and pushing and engaging with technology in the larger world, or what we call the majority world—what used to be known as the Global South. Activists in Brazil, Mexico, Kenya, Indonesia, Bosnia, and India.”
Jac sm Kee, speaking to her collaborative (and global) practice
Takeaway:
Activists don’t always need to be organizing around a specific cause or goal. One of Kee’s responses to the drastic change of pace (slowness) of the pandemic was to try to get many of her favourite feminists and artists together without a goal in mind—to brainstorm. “It’s about really creating an extended process of imagining together,” she says of the loose collectives she participated in that led to several design projects and initiatives.
Takeaway:
Idealism comes much easier when you’re young and bright-eyed. Kee shares how when she was a “baby feminist” the outlook on the inevitability of social justice was much rosier. Mid-career activists can and should make time for self-care and reflection to reevaluate assumptions—and also to rekindle the optimism that motivated them in the first place.
Soundbite:
“Initially it was very challenging. It’s so hard for activists to come together with no objective, without a purpose of change. But the purpose was for us to be in community with each other and inquire; and to inquire in the spirit of play. And we let go of the objective—of the capitalist logic of productivity and production.”
Jac sm Kee, talking about creating structures for play
Precedent:
A project by the Coding Rights and Design Justice Network that Kee helped shape is the Oracle for Transfeminist Technologies, a part-Tarot, part-feminist, and part-tech futures card game for brainstorming more inclusive futures. Card groups within the deck include values (intersectionality, interoperability) that can be combined with situations (rethink and reclaim pornography, grant unrestricted access to information), and objects (plant, mirror) to generate new technological and cultural configurations.
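Sketch:
As described above, the Oracle works by combining one card from each group. A toy Python sketch of that draw mechanic, limited to the example cards named in the paragraph (the real deck is far larger and richer):

```python
# Toy re-creation of the Oracle's draw: one value, one situation, one object.
# Card texts are only the examples mentioned above, not the full deck.
import random

values = ["intersectionality", "interoperability"]
situations = ["rethink and reclaim pornography", "grant unrestricted access to information"]
objects_ = ["plant", "mirror"]

value, situation, obj = (
    random.choice(values),
    random.choice(situations),
    random.choice(objects_),
)
print(f"Dream up a technology grounded in {value} that helps us {situation}, taking the form of a {obj}.")
```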
Soundbite:
“Now there are like three, maybe five companies shaping all of the tech infrastructure. Is it even possible to be online without Amazon, for example? It can feel extremely overwhelming.”
Jac sm Kee, contextualizing why it’s important to make space for cultivating feminist tech imaginaries not oriented around monoculture
Precedent:
Many of the roots of Kee’s talk can be found in her 2021 DING essay “Tending to wildness: field notes on movement infrastructure,” in which the activist outlines where her thinking about platforms and social media started—and how the challenges of the pandemic changed it. “The seemingly desolate space of crisis can be a generous site for deep shifts and unruly imaginings of how things can be,” she writes, summarizing how the shortcomings of the Zoom regime served as a way to rethink both her methods and priorities.
Soundbite:
“The famous Indonesian poet Wiji Thukul wrote about how to fight against a big autocratic state. If that autocratic state is a big wall, the way to defeat it is to create cracks in the wall by becoming persistent wallflowers—that will just live because they have to—and I imagine us becoming this cluster of wallflowers.”
Jac sm Kee, closing her talk with a poetic metaphor
007 – 14/10:
Self-Hosted (14/10/2022)
Workshop
Self-Hosted
Instructor:
Sarah Grant
Profile:
Sarah Grant
Sarah Grant is an American media artist and educator based in Berlin. Her practice engages with the electromagnetic spectrum and telecommunication networks as artistic material, social habitat, and political landscape. Grant holds a master’s degree from New York University’s ITP program, and she is an organizer of Radical Networks, a community event exploring social justice activations and creative experiments in telecommunications.
Brief:
Learn how to ‘self-host’—install, maintain, run—your own websites and services. Using the open source YunoHost platform, Autumn School participants will learn to set up a web server (with webmail, cloud, and workflow services capable of supporting 100+ users) that can be logged into securely and remotely.
Soundbite:
“It takes just one in a community to give the gift of high-quality, low-carbon Internet infrastructure—to free yourself and others from centralised, privacy-eroding services!”
Sarah Grant, on a small investment of time that makes for a better web
Takeaway:
The infrastructure to set up and run your own server doesn’t need to be sophisticated or expensive: “you can host on a laptop or Raspberry Pi,” says Grant.
Takeaway:
Arduously jumping through hoops to solve technical problems is acquiring knowledge—which can then be passed on, creating a more inclusive, resilient, and democratic web. Most Autumn School participants arrived in the session with little or no prior knowledge about self-hosting—by the end of the workshop cheers and applause are sounding across the room as new servers go live.
Soundbite:
“75% of the Web is just WordPress—that’s the actual figure! And 99% of those sites are set up by people that don’t care about server configurations. WordPress is notorious for getting hacked, that’s why it’s good practice to change the default settings. It’s just the best way to protect that server.”
Sarah Grant, on internet vulnerabilities due to platform hegemony (and naive users)
Soundbite:
“Bots break into WordPress servers because of weak passwords. Most likely they will install their own spamming software. Why? It’s free hosting for them. The spam comes back to your identity, not theirs. They get to impersonate you and use your hosting.”
Sarah Grant, on what usually happens when a WordPress site gets hacked
Process:
After getting their bearings, weathering the tribulations of network configuration, and emerging victorious from troubleshooting, the participants—among them software developers, media artists, activists, and social designers—are able to set up a chat server and chat with each other across federated networks: Hello World! This was met with considerable excitement.
“If the apartment building is a computer, the different applications running on the computer are the residents and they each have their own door. And that door corresponds to a port number.”
Sarah Grant, using a real estate metaphor to explain port numbers
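Sketch:
To make Grant’s metaphor concrete, a tiny Python sketch (not part of the workshop material) that “knocks” on a few well-known doors of a host to see which residents answer; “example.org” is a placeholder host:

```python
# Each service listens behind its own numbered door (port) on the same machine.
import socket

def knock(host, port, timeout=1.0):
    """Return True if something is listening at host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

for port, resident in [(22, "SSH"), (80, "web"), (443, "web over TLS")]:
    status = "answers" if knock("example.org", port) else "no answer"
    print(f"port {port:>3} ({resident}): {status}")
```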
Fave:
Funny moment: The website of server provider Hetzner returned an Error 429 (Too Many Requests) because too many people from the same IP address—the whole workshop—were suddenly trying to access the site simultaneously. “That could be seen as a feature,” quipped Grant.
008 – 14/10:
Dub Intelligence (14/10/2022)
Workshop
A Dub Approach to Machine Listening
Instructor:
Pedro Oliveira
Profile:
Pedro Oliveira
Pedro Oliveira is a researcher, sound artist, and educator whose work seeks to dismantle the articulations of colonial (sonic) violence perpetrated on the borders of the EU. He holds a PhD from the Berlin University of Arts, has undertaken residencies at the Max-Planck Institute for Empirical Aesthetics and EMS Stockholm, and is a founding member of the Decolonising Design research platform.
Brief:
Dub music, a bodily approach to sound that (re-)centres listening in and with time, originated with technological leftovers in ‘post-colonial’ Jamaica in the late 1960s. Autumn School participants will learn to approach dub as a playful, facetious alternative pathway for a material inquiry into machine listening.
Soundbite:
“Dub is about serendipity and technique with a lot of space for error—in dub, error is good. But what defines an error, who decides? In the context of AI, errors determine people’s lives. It means someone gets deported. How are we measuring errors before they enter into applications that affect people’s lives?”
Pedro Oliveira, on sitting with and reflecting on errors and glitches
Takeaway:
Dub can be read as an echo of colonialism. Its first musicians used equipment left behind by the British to create new forms of meaning. Techniques involve decomposition, recomposition, and versioning, and it is “ultimately about doing and undoing in a way that escapes what is a good and bad sound.”
Takeaway:
The dub approach subverts music technology and uses it against itself. Instead of making or shaping sounds as intended by their designers, dub methods stage an intervention with sound technologies to “make them stop working in productive and aesthetic (‘beautiful sounding’) ways.”
Soundbite:
“If we add noise to the way we speak into the machine, how does it confuse the machine more than resolve it?”
Analog tape delay is a defining feature of dub production. The multiple playheads of the Roland Space Echo (1974–90), for example, allow producers to create dense rhythmic echo effects. Oliveira demonstrated the iconic machine to resounding effect for workshop participants. Perfected by producers including King Tubby and Lee Scratch Perry, tape delay techniques widely influenced popular music production—and 1970s delay units like the Space Echo remain revered by contemporary producers.
“More, more, more,” shouted an excited workshop participant into the microphone as Oliveira amped up the bass to create Space Echo distortions. Meanwhile, others marvelled at the tape lacing into patterns inside the machine. “Everything I’m saying is being recorded, but because the tape is tangled, it takes time,” explains Oliveira during the demonstration. “I’m feedbacking my voice back into the tape, and what this does, it stretches time, thus creating a new temporality.”
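Sketch:
The Space Echo’s trick is a loop of tape whose output is fed back into its own input, so each repeat returns later and quieter. A bare-bones digital sketch of that feedback delay in Python/NumPy (the parameters are illustrative, not measurements of the actual unit):

```python
# Minimal feedback echo: delayed copies of the signal are summed back in,
# each attenuated by the feedback factor, mimicking a dub-style tape delay.
import numpy as np

def dub_echo(signal, sample_rate=44100, delay_s=0.35, feedback=0.6, repeats=8):
    """Apply a simple feedback echo to a mono float signal in [-1, 1]."""
    delay = int(delay_s * sample_rate)
    out = np.zeros(len(signal) + delay * repeats)
    out[: len(signal)] += signal
    tap = signal.astype(float)
    for i in range(1, repeats + 1):
        tap = tap * feedback                    # each pass through the 'tape' loses energy
        start = i * delay
        out[start : start + len(tap)] += tap    # later, quieter repeat
    return np.clip(out, -1.0, 1.0)

# e.g. a quarter-second burst of noise standing in for a voice at the mic
voice = np.random.uniform(-0.5, 0.5, 44100 // 4)
echoed = dub_echo(voice)
```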
009 – 14/10:
INTERVENTION (14/10/2022)
Intervention: A bot to help participants become their best (most machine-readable) selves
The AI Anarchy Flattener scans daily dossier distillations for hidden information to help AI Anarchies Autumn School participants become their best (most machine-readable) selves. Read more about the N O R M A L S provocation here.
010 – 15/10:
Anti-Computing (15/10/2022)
Provocation
Anti-Computing: Models and Thought Experiments
Speakers:
Jackie Wang, Ramon Amaro, Maya Indira Ganesh, Nora N. Khan
Profile:
Jackie Wang
Jackie Wang is a poet, multimedia artist, and scholar of the history and political economy of prisons and police. She is an assistant professor of American Studies and Ethnicity at the University of Southern California. Wang’s first book, Carceral Capitalism (2018), is a widely cited collection of essays on the racial, economic, political, legal, and technological dimensions of the U.S. carceral state.
Profile:
Ramon Amaro
Ramon Amaro is an engineer, sociologist, and cultural theorist in machine learning and AI research. He received a PhD from Goldsmiths, researching the philosophy of machine learning and the history of racial bias in mathematics. He is an Assistant Professor at University College London, and author of The Black Technical Object: On Machine Learning and the Aspiration of Black Being (2022).
Curatorial Frame (Excerpt):
Ramon Amaro’s and Jackie Wang’s respective practices as thinkers and writers have guided many of us into the break: a break where we can begin to consider the possibility of what Caroline Bassett calls “anti-computing,” the histories of dissent against computational logics, utopias, and imaginaries. In his research, study, and book (see bio), Amaro delves into the gaps in the relationship between the black experience, machine learning research, and the racial history of scientific explanation. Wang, using books, poems, performance, and scholarship, traces an understanding of carceral politics and predictive logic towards a politics of disruption, resistance, and liberation. Jackie Wang meets Ramon Amaro in the dream state of co-thinking, to discuss Sylvia Wynter, Frantz Fanon, and sociogenic alienation. Together, they offer us ways to understand the condition of being subject to computing.
Soundbite:
“I study the history of prisons and police, looking at the political economy and technological dimension of the U.S. carceral state. A lot of my recent research is on the history of voice surveillance, looking at how World War 2 research happening at Bell Labs evolved into forensic voice identification.”
Computer vision is widely discussed, but how AI surveils sound is less familiar to the public. In Wang’s telling, ‘voiceprints’ of the incarcerated are made from outgoing phone calls from prisons, ostensibly for what she derides as dubious “fraud prevention.” “It’s actually used as a mechanism of controlling the incarcerated population. And even non-incarcerated people who talk to incarcerated people can be in these databases,” she says, encapsulating the breadth of the audio surveillance.
Takeaway:
AI and computational social sorting and disciplinary regimes are not new. They have a long history of “using symbolic languages in order to disrupt not only a sense of motion, and being,” notes Amaro. An important step in engaging the contemporary AI and machine learning discourse is to peel back the veneer of novelty that (conveniently) obstructs a much longer history of problematic statistical modelling.
Soundbite:
“I have a brother who is incarcerated. I was creeped out when I learned about audio surveillance because we often talk on prison phone lines. The only way you can circumvent them is to have a contraband cellphone.”
Jackie Wang, revealing voice surveillance and monitoring hits close to home for her and is not just ‘research’
Soundbite:
“Telecom companies take the data from prison calls and create other security products to sell to prisons. They say: ‘we can use this technology to identify criminal activity on these phone lines’ or ‘this phone number is in contact with multiple incarcerated people, maybe there is political organizing going on.’”
Jackie Wang, on the extractive industries built out of datasets collected from incarcerated populations
Precedent:
Coming under increasing scrutiny in recent years are inmate telephone systems (ITS), a privatized sub-sector adjacent to the American prison industry that profiteers off the phone calls of the incarcerated. Essentially a duopoly (Global Tel Link and Securus Technologies hold a roughly 70% market share), the sector charges prison populations and their families extortionate rates to stay in contact. Beyond further punishing the precarity of families with incarcerated members, the price gouging is doubly damaging, potentially discouraging or limiting contact and further compounding the deleterious mental health effects of serving time in prison.
Images: Securus Technologies promotional video
Soundbite:
“This year, Elon Musk was asked about the current state of limitations of autonomous vehicle systems and AI, and he answered ‘nothing is where it is supposed to be.’ I keep returning to that idea when thinking about racial processes and epistemic violence: Care is not where it is supposed to be. Community isn’t where it is supposed to be. Equitability isn’t where it is supposed to be.”
Ramon Amaro, zooming out and turning Elon Musk’s claims about engineering problems into a prognosis of sweeping cultural maladies
Soundbite:
“If we’re to relate that to Jackie’s intervention: these individuals in the carceral system are not where they’re supposed to be. Because the system itself is designed as an apparatus which is in place to disrupt an authentic sense of being.”
Ramon Amaro, connecting his sentiments with Wang’s research
Precedent:
Much of Wang and Amaro’s discussion plumbs the depths of French psychoanalyst Jacques Lacan’s notion of the mirror stage—as interpreted by postcolonial theorist Frantz Fanon. This largely centres around a footnote in Fanon’s 1952 book Black Skin, White Masks, and uses that reading of the mirror stage of development to consider the nature of the contemporary subject. “In Fanon’s analysis of the mirror stage, you have a child who is in front of a mirror and their caretaker is affirming their identification in the mirror. In the Fanonian formulation, the image in the mirror is already loaded with the social and the historical and the cultural … it’s not just a universal process that is devoid of race, essentially,” says Wang, summarizing the analysis.
Soundbite:
“You hear people in the AI ethics space talking about bias in facial recognition. ‘How can we make facial recognition more accurate?’ becomes the framework for thinking about the ethics of facial recognition. I’d be curious to hear about your thinking about legibility and opacity related to this technology.”
“Especially with the carceral system, this attempt to capture that moment. Discretize that moment into a symbolic language. Re-articulate that moment into a prognosis or type of prediction—and then take the audacious step of regurgitating that moment back to us. If we think about the real violence in that step, it’s there in which I’m concerned about what you call the irreducibility of this very personal, individual, and collective experience into that which becomes a technological problem.”
Ramon Amaro, breaking down what happens during data capture
Precedent:
Wang puts her research on carceral capitalism into conversation with current Big Tech meets law enforcement discourse in “Abolition of the Predictive State,” an essay guest editor Nora N. Khan commissioned for HOLO 3. Tracing a long historical arc of risk modelling and eugenic profiling, she undermines the notion that AI-assisted policing could ever be neutral: “though it is possible to come up with abolitionist applications for data-driven tools—allocating resources, modelling environmental racism, identifying violent cops—we must remain vigilant to the seduction of techno-solutionism,” she writes.
Image: Predictive policing GUI mockup depicting ‘hotspot’ analysis with suggestions for officers to make to residents “to reduce or deter crime” / patent: George O. Mohler, US 8,949,164 B1 (2015)
Soundbite:
“For me, the carceral state is more damning to those that are not locked up than those who are. Because what it really says is the process whereby an individual becomes a normal citizen flows through physical confinement, systemic assignment, assignment to violence, and if you reach the end—because this is recidivism right?—you’re a normal citizen. What does that say about us? Have we actually gone through that process?”
Ramon Amaro, asking the American public to look in the mirror and reflect on the nature of their freedom
Soundbite:
“That’s a major Foucauldian insight. That disciplinary logics don’t begin and end at the prison. It’s embedded in our schooling and in the workplace. You know, the workhouse and the prison were one institution, and those logics are diffused across different social domains.”
“I wonder if you’ll join me, and I’m going to put my algorithmic glasses on: I want to read this poem, but I want to think about the algorithmic at the same time. And then I have a provocative question for you.”
Ramon Amaro, before reading Wang’s poem “Masochism of the Knees” to her
Precedent:
Masochism of the Knees
Who is the girl forced to kneel on dried chickpeas to atone for the sin of being alive?
In the dream blindfold and bandage are one.
My hands go numb as I carry dried chickpeas.
In my head there is a voice that says “naked forest” and “a tiny photograph that is passed between hands in the dark.”
Why doesn’t the girl on the floor of the world talk?
Because she is a wound on the earth’s hide.
Not mouth.
Do you understand?
Wound, not mouth.
“My provocation is: Are you suggesting that there is then the potentiality of atonement for this type of being in the numbness for what we might call the algorithmic?”
Ramon Amaro, putting Wang’s poem under the microscope
Soundbite:
“Gosh! I’m not sure if I have an answer for you. But it was really uncanny hearing you read the poem. I was thinking: Maybe asking a poet what their poem means is kind of tricky. But as you were reading—and this connects to our Fanon conversation—I realized there was a shift in the landscape of my dream life … I would have these guilt dreams, but you couldn’t really trace the source of the guilt.”
Jackie Wang, delving into the psychology underpinning her poem
Soundbite:
“I’m not necessarily pessimistic about the possibility of creating disruptive technologies, but I think we have to deal with the racial capitalism, homophobia, white supremacy piece of it. I don’t really think you can separate the two.”
Jackie Wang, answering an audience question about the possibility of designing “soft, yielding, reasonable machines”
Soundbite:
“Me asking for a benevolent machine is me taking away my own accountability to make a benevolent life. I think if a machine were to be benevolent we wouldn’t even see it. Because we’ve never seen that before—not on a global scale.”
Ramon Amaro, reminding us that tech will not save us (from ourselves)
011 – 15/10:
My Grandmother and AI (15/10/2022)
Workshop
My Grandmother and AI
Instructor:
Lex Fefegha
Profile:
Lex Fefegha
Lex Fefegha is co-founder of COMUZI, a London-based design studio working to create positive human interactions. Recent personal projects include The Hip Hop Poetry Bot, a collaboration with Google AI and Google Arts & Culture Lab exploring speech generation trained on rap lyrics by Black artists. Previously, Fefegha was a lecturer at the University of the Arts London Creative Computing Institute, where he taught a module on computational futures and AI.
Brief:
Step outside the ongoing intellectual conversations around AI and its impact on society and create AI provocations for people like convener Lex Fefegha’s grandmother: a septuagenarian Nigerian immigrant living in the UK who is not comfortable with technology. Autumn School participants will explore (and carefully consider) how we integrate people from various global communities into the discourse and development of AI.
Soundbite:
“If the pastor gave a sermon about AI, my Grandma would call me tomorrow and she would say ‘the pastor told me this and that,’ because she trusts him. He influences her decision. And that perpetuates her thinking. So I ask: how does religion shape the world of AI? There are certain cultures that see AI as an extension of God.”
Lex Fefegha, underscoring that not all thinking about tech is secular
Takeaway:
A narrative environment is a space, whether physical or virtual, where stories can unfold. The intentional design of narratives with the needs of particular subjects (personas) in mind can shift our thinking about access to technology, and lead to better design practices.
Soundbite:
“I think it’s about necessity and hacking. She can’t drive and because of the necessity of not being able to walk she learned to use Uber. When I left Cairo, she learned to use WhatsApp out of necessity.”
Lex Fefegha, on how older folks can be (and are) more adaptable than we sometimes give them credit for
Precedent:
A central reference in the workshop is Tricia Austin’s Narrative Environments and Experience Design (2020), a book that reconciles divisions between architecture, environmental design, UX, and other fields, and argues for a holistic approach to space and experience design. Drawing on research and teaching within the Narrative Environments Masters program at Central Saint Martins, University of the Arts London, Austin advocates in the book’s introduction for what she describes as a “multidisciplinary approach whereby content and design are considered together from the start and the experience of the future inhabitant or visitor is researched, envisaged and incorporated as part of the planning.”
Process:
Autumn School participants are split into groups of three and given a table with three blank columns: future, thing, theme. Underneath is a long list of words. Each group member selects a word from the list and the group brainstorms on the resulting combination (a minimal sketch of this word-drawing exercise follows the examples below).
“In a volatile future there is a festival related to money.”
“In a bizarre future there is a disaster related to family.“
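The combinatorial structure of the exercise is simple enough to sketch in a few lines of Python. The word lists below are invented placeholders rather than the workshop’s actual list; the format string mirrors the sample prompts above.

    import random

    # Invented stand-ins for the workshop's word list (one per column).
    FUTURES = ["volatile", "bizarre", "caring", "slow"]
    THINGS = ["festival", "disaster", "ritual", "machine"]
    THEMES = ["money", "family", "faith", "food"]

    def draw_prompt(rng=random):
        """Pick one word per column and combine them into a speculative prompt."""
        return (f"In a {rng.choice(FUTURES)} future "
                f"there is a {rng.choice(THINGS)} related to {rng.choice(THEMES)}.")

    for _ in range(3):
        print(draw_prompt())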
Too often, reflections in the art-tech space are “by artists for artists,” notes workshop participant and HOLO contributor Michelle O’Higgins, who appreciated this session’s focus on accessibility and access. “While I don’t think everything needs to be legible to everyone,” she says, steady streams of inaccessible language can be “extremely exhausting.” Simply putting the needs of others, the least expert stakeholders, first made this workshop refreshingly accessible.
Debrief:
Often when I think about AI and the values that we build these systems upon, I search for my own places of truth, my most important values. And then I think about my grandmother—the person who raised me and taught me love and compassion. What if AI was built for her? I was thrilled to discover that other people were embracing similar questions. In the workshop, we played with structured speculations and imagined futures in which technologies are built to empower the people that we cherish the most.
With a double background in computer science and performing arts, Diana Serbanescu works on interdisciplinary approaches to culture, society and technology, with a strong focus on human factors. As team-lead of the group Criticality of Artificial Intelligence at the Weizenbaum Institute, she promoted culturally-aware research on the topics of bias, symbolic power and explainability in machine learning algorithms.
012 – 15/10:
Haunted Echoes (15/10/2022)
Workshop
Haunted Echoes
Instructor:
Wesley Goatley
Profile:
Wesley Goatley
Wesley Goatley is a sound artist and researcher based in London who examines the aesthetics and politics of data, machine learning, and voice recognition. His work has been exhibited at venues including Eyebeam, Nam June Paik Art Center, and the Victoria & Albert Museum. He holds a PhD in the philosophy of aesthetics and is a course leader of MA Interaction Design at the University of the Arts London.
Brief:
The Amazon Echo is a critical object that reflects the hopes and dreams of tech companies, and a totem for discussions about extractive capitalism, critical futures, and civilization-scale myth-making. Using discarded Echo devices, Autumn School participants will create sound sculptures that tell new stories about the relationships between humans and AI.
Soundbite:
“When you’re thinking about what to write in your scripts, an idea at the core of comedy is to present something true and use humour as a critical tool. We can critique the world but we don’t have to do it in a joyless way.”
Wesley Goatley, encouraging Autumn School participants to reprogram their Echos into funny cultural critics
Takeaway:
New Alexa models tend to become obsolete within 18 months of purchase, creating considerable e-waste. There is no meaningful AI processing on the device itself—all the ‘magic’ happens in the data centre. As an owner, you’re the steward of this device while it is operational, but when it fails you have no recourse.
Takeaway:
The environmental politics of Alexa are deeply troubling. Beyond the aggressive planned obsolescence, the devices are made for a global minority and end up dumped in the Global South. Any object with batteries, printed-circuit boards (PCBs), and plastics will take hundreds of years to break down. ‘Luxury devices’ exacerbate “the global exploitation of bodies and earth,” says Goatley.
Soundbite:
“What does a certain type of person who works in a certain type of industry think about what emotion sounds like?”
Wesley Goatley, reflecting on the fact that since 2020 Alexa can be instructed to approximate tones of excitement and sadness when delivering certain messages
013 – 15/10:
Troll Swamp (15/10/2022)
Workshop
Troll Swamp
Instructors:
The Mycological Twist (Eloïse Bonneviot & Anne de Boer)
Profile:
The Mycological Twist
The Mycological Twist is a project by Berlin’s Eloïse Bonneviot and Anne de Boer. They take mycology as a source of inspiration in engaging with ecological and social practices—from the mushroom fruiting body to rotting matter deep below ground. Recent projects include: ECLIPSE (the 7th Athens Biennale), Quadrat Sampling E-Ecologies (with HAU, Berlin), and L’Académie des Mutants (Musée d’Art Contemporain Bordeaux).
Brief:
Troll Swamp is a large-scale board game for multiple players, based on the popular Dungeons & Dragons tabletop game. The game uses role-play and teamwork to act out scenarios about online trolling. Guided by the Troll Master (a storyteller navigating the players through the game), Autumn School participants will play the game, reconsidering their online habits and relationships with others in the process.
Process:
There is a definite sense of immersion upon entering the workshop: a layered configuration of amorphous tables is clustered inside a semi-circle of banners that hang from ceiling to floor, designating an intimate space for Troll Swamp to unfold within. New age, quasi-mystical music plays in the background, imbuing the room with a sense of fairytale and adventure. The spatial design puts platforms to literal, physical use, setting up an environment for imagining new ways of giving physical form to something we are used to inhabiting immaterially.
Choose your fighter: Troll Swamp character assignments include an 11-year-old dreamer, a half-organism half-fairy, a legal advisor, a cottagecore virtual avatar, a 1-day-old broccoli, a power user of pet advice forums, and a cactipus. The game characters, represented as 3D-printed rainbow clusters of castles, crystals, and creatures, all live on the foodie internet, each inhabiting their own websites and blogs.
Character strengths:
Making fiction reality
Creating glitches that open portals to new works
Comments everywhere on everything
Quick while being slow and not moving
Knows a little about many things
Goes in two directions at once, without moving much
Character weaknesses:
Can’t distinguish between fiction and reality
Unaware of own weaknesses
Hates cooking with olive oil
Overly occupied with aesthetics
Uncertain about their support
Paranoid due to fear of being eaten
Fave:
Funny moment: An hour into role-playing, storytelling, and trolling, one workshop participant asked the Troll Master “are we going to take a break at some point?” to which another responded: “you have to roll the dice first.”
014 – 15/10:
INTERVENTION (15/10/2022)
Intervention: A bot to help participants become their best (most machine-readable) selves
The AI Anarchy Flattener scans daily dossier distillations for hidden information to help AI Anarchies Autumn School participants become their best (most machine-readable) selves. Read more about the N O R M A L S provocation here.
015 – 17/10:
Wild Imaginings (16/10/2022)
Provocation
Towards AI Anarchies: Wild Imaginings and Alternatives
Speakers:
Mimi Ọnụọha, Tiara Roxanne
Profile:
Mimi Ọnụọha
Mimi Ọnụọha is a Brooklyn-based Nigerian-American artist creating work about a world made to fit the form of data. By foregrounding absence and removal, her practice makes sense of the power dynamics that result in disenfranchised communities’ different relationships to digital, cultural, historical, and ecological systems. Ọnụọha has lectured and exhibited internationally, and been in residence at venues including Studio XX, Data & Society, and the Royal College of Art.
Profile:
Tiara Roxanne
Tiara Roxanne is a Tarascan Indigenous Mestiza scholar and artist based in Berlin. They are a postdoctoral fellow at Data & Society, and their practice investigates the encounter between Indigeneity and AI by interrogating colonial structures embedded in machine learning systems. Tiara has presented work at venues including at the Images Festival, European Media Art Festival, Laboratorio Arte Alameda, and transmediale.
Curatorial Frame:
How do we understand the AI of the future in relation to the AI of the past? How do we grapple with the scale and speed of contemporary AI and imagine a leftist, communal, distributive, or transfeminist AI? Mimi Ọnụọha’s work has investigated the political, computational, and aesthetic implications of being both invisible and hypervisible to datasets. Tiara Roxanne’s work involves decolonialism, arguing its impossibility. They ultimately ask: What other gestures of healing exist? Together, they invite us all to consider: What does it mean to gather and hold space for each other, to think big and dream wild about technology outside contemporary ambitions and frames?
Soundbite:
“What is imagination and why do we need it?”
“In more mainstream spaces I think it is tempting to dismiss the idea that imagination is powerful. The word comes off as kind of cuddly, nonthreatening, the verbal equivalent of a crayon drawing on a fridge. Maybe that air of triviality stems from a misconception: that the act of imagining only involves idly dreaming ideas that have no connection to reality.”
“AI is machine learning. It’s a machine learning tool inscribed with the simulation of human intelligence. AI is mimetic. AI needs us. We are possessed by it. And to be possessed by something is to be controlled.”
Ọnụọha quotes adrienne maree brown’s claim that we are immersed in “an imagination battle.” Sourced from brown’s 2017 book Emergent Strategy, that claim fits into a larger conversation about maintenance, repair, and restoration, inspired by the science fiction author Octavia Butler. Outlining her thinking and approach in the chapter “Principles of Emergent Strategy,” brown writes: “Move at the speed of trust. Focus on critical connections more than critical mass—build the resilience by building the relationships.”
Soundbite:
“Can you trust AI? Can you trust imagination?”
“I don’t know that I trust either. But I trust in ecosystems and in what we can develop through interrelationality. My friend and I have created these values for ourselves and for others around technology, around AI, and around togetherness. And in holding them near they help us think about what we would need for an ecosystem. I’m offering up these cards to all or any of you, as a reminder to hold our values near.”
“A myth is a story we tell ourselves in order to feel connected, in order to feel safe. In most cases this is often based on what is untrue. On what is not possible. A myth that is told is that AI is here, to save humanity. As a mode of convenience, as a placeholder for processing. Something that might provoke manipulation, shapeshifting, and transformation into or from our projection, desire, or lack.”
Midway through the performance, Ọnụọha shares a project that vividly captures the relationship between kin and soil spoken of in her and Roxanne’s offerings. In Ground Truth (2022), a Creative Capital project, Ọnụọha responds to the discovery of the remains of 95 Black people who were victims of the nineteenth-century practice of convict leasing. “I’m curious about why the remains of the Sugarland 95 were able to cut through that veil of unknowability when others did not,” says Ọnụọha in a video synopsis of the project, which uses machine learning to determine which other U.S. counties may contain similar mass graves.
Soundbite:
“Do you ever have a need to be categorized? Or classified, or recorded?”
“Often. In data colonialism forms of technological hauntings are experienced, when Indigenous peoples are marked as others and remain unseen and unacknowledged. Without acknowledgement, without classification there is no evidence of existence. Without record I am extinct. I want to be acknowledged, but I don’t want to be seen.”
Part performative conversation, part ritual, Roxanne sporadically moves away from the stage and weaves through the room engaging Autumn School participants. She walks the length of the front row, offering bags of soil to outstretched hands. “I am carrying the soil, I am moving territory, I am holding ancestry as I walk across the room,” Roxanne says solemnly. “I am leaving this earth, I am leaving this soil with you. I am leaving this soil with you. I place this soil in your hands as a marker of fertility. As an imagination. As a point of departure. I am holding the bloodline of those before me, as a practice of ceremony. I move my body and I carry you all with me.”
“What have you learned from your past beliefs or work, theories or actions? What were you wrong about?”
“I’m not sure what I have learned in a way that I can describe such a thing articulately. But something I return to time and time again is the body. Is my body. Because my body holds the memory, the blood, the lineage, the story, and the safety. The body is a cosmology of truth, perspective, wisdom. I often get too caught up in impossibility, the impossibility of the material border, the digital, the digital border, the constraints of the body, and the constraints of the machine.”
“Land. Cosmology. The sacred. Trust. Bodies. Memories. The stories our elders pass on to us. Language. Food. Intimate exchanges. But, how do we protect when the gaze is centred on the surface? The surface of the machine.”
Profile:
Julia Kloiber
Julia Kloiber is the managing director and co-founder of the feminist organization SUPERRR Lab. She has launched numerous initiatives and organisations with a focus on public interest tech and digital infrastructures, including the Prototype Fund, a public open source fund, and the network Code for Germany. Through her work, Kloiber explores just and fair digital futures.
Profile:
Quincey Stumptner
Quincey Stumptner has a background in philosophy, politics and economics, and researched data ethics and the ethics of policy-making at the London School of Economics and Political Science. At SUPERRR Lab he works on intersectional technology foresight and connects SUPERRR’s research to actors from policy and politics.
Brief:
Using a living collection of approaches (combining more than 25 different global perspectives), Autumn School participants will take a holistic view of digitization and examine it in terms of patterns of discrimination that are intersectional in impact. The methodology used in this workshop will help participants draft preferable future scenarios and identify what actions are needed to make these visions a reality.
Soundbite:
“Current digitization narratives are economic in nature and serve the interests of corporations over societies and minorities. By contrast, Feminist Tech Policy takes a holistic view to examine digitisation in terms of intersectional patterns of discrimination.”
Julia Kloiber & Quincey Stumptner, on SUPERRR Lab’s first principles
Soundbite:
“We’re so busy reacting to our current conditions that there is very little time to go beyond the here and now. That’s why we work on foresight methodologies.”
Julia Kloiber & Quincey Stumptner, on the need for—and process of—imagining (feminist) futures
Process:
Divided into four groups, Autumn School participants are tasked with three exercises to get the foresight juices flowing.
Write down one value for the future, then interview your collaborators about its importance. Also: make friendship bracelets for each other while explaining your values.
Select a value and a principle and create a future scenario around it.
Among the values discussed were freedom of movement, sustainability over growth, doubt, everything for everyone, and adaptability.
“We developed the Feminist Tech principles together with activists, educators, writers, technologists, and designers. To make them more tangible and accessible to a wider audience, we turn to storytelling. Using the principles as a starting point, we invite people to develop narratives that envision more just technological futures around them.”
Julia Kloiber & Quincey Stumptner, sparking imagination through participation
Precedent:
Putting theory into action, SUPERRR Lab took their Feminist Tech principles, a set of guidelines for tech policy-making and technology creation, and translated them into an idea-generating card set. “The principles take a critical lens to tech,” states the SUPERRR Lab website, “observing and addressing the power structures behind the creation, regulation and use of technology, and offering a framework for creating new, emancipatory structures.” The Feminist Tech Card Deck is used in the workshop to iterate ideas and brainstorm the progressive futures workshopped across the groups. In addition to putting the core twelve generative principles into play, the deck also contains blank cards—so that users might add their own.
Takeaway:
It’s important to remember that by asking a question you establish the context within which an issue gets thought about. Feminist organizing can instead start by challenging definitions, allowing people to generate their own questions rather than being handed questions as starting points.
Soundbite:
“I joined a conversation about maintenance and repair. Discussing the vertical structures of production, planned obsolescence, and what we mean by sustainability. We concluded that maybe it isn’t products that should be made sustainable but the production process.”
Miche O’Higgins, workshop participant and HOLO contributor, summarizing her group’s internal discussion
Process:
The near and far futures that emerge from the four groups range from biosphere dwellers to decentralized communities to radical transparency. One group, for example, imagined the Papaya Republic, a degrowth utopia set in the year 3000 that guarantees “coconut and papaya for everyone”:
The Papaya Republic in a nutshell:
Set in Colombia in a degrowth future, papayas are greatly desired but available in limited quantities
Europe doing reparation work in global rebalancing
Citizens allowed one long-distance flight every five years, ‘papaya pilgrimage’ becomes a ritual
Papaya access metered out based on (sustainable) availability
“In the year 3000, the Papaya Republic has replaced the Banana Republic. Europe is doing reparations work and Colombia has become home to a thriving degrowth economy, bountiful commons, and a global destination for papaya pilgrimages.”
Group Papaya Republic, sharing the fruits of their labour
017 – 17/10:
Intersectional AI Zine-Making (17/10/2022)
Workshop
Intersectional AI Zine-Making Collaboratory
Instructor:
Sarah Ciston
Profile:
Sarah Ciston
Sarah Ciston builds critical-creative tools to bring intersectional approaches to machine learning. Ciston is an AI Anarchies Fellow at the Akademie der Künste; a Mellon PhD Fellow in Media Arts and Practice at the University of Southern California; and author of the forthcoming A Critical Field Guide to Working with Machine Learning Datasets, from the Knowing Machines research project.
Brief:
Existing AI tools carry an aura of magic and require massive technical resources to wield. Setting them aside, this zine-making session invites Autumn School participants to imagine easily accessible AI tools—as tangible and approachable as a hammer or a garden spade—and to ask ‘what infrastructure, language, communities, approaches, or perspectives need to take place in the field?’
Soundbite:
“Anyone should be able to understand what AI is and to help imagine what AI ought to be. The Intersectional AI Toolkit is a collaborative collection of zines that introduce intersectional approaches to AI, building on established but marginalized practices to fundamentally reshape the development and use of AI technologies.”
Sarah Ciston, getting Autumn School participants on the same page
Precedent:
The Intersectional AI Toolkit is a series of booklets for “artists, activists, makers, engineers, and you” that gather ideas, ethics, and tactics for more ethical, equitable tech through publishing sprints. Past editions emerged from workshops at Creative Code Collective, HIIG IAI Edit-a-thon, and Mozilla Festival, and ask questions like “How can artists help reshape AI?” and “What shouldn’t AI be used for?”
Soundbite:
“Rather than asking whether AI is biased, fair, or good, we need to be asking how AI is shifting power—that’s the most important question. Intersectionality can help us understand that power.”
The Intersectional AI A-to-Z is just one of the many gems in the Intersectional AI Toolkit Ciston launched in early 2022. Bridging the technical and the cultural (C is for ‘confidence interval’ in the former, ‘code of conduct’ in the latter) the glossary provides definitions to help someone new to the field navigate its intricate discourse—or to help someone more on the computer science side acquaint themselves with equity-related perspectives.
Soundbite:
“No special expertise is required. Instead, we are co-learning and co-creating, invested in what all of us have to teach each other. We will investigate key questions around how AI affects us differently, what we need to understand about AI in order to reshape it, and—most importantly—what shapes we want AI to take in order to be more ethical and equitable.”
To start, Autumn School participants discuss their varied paths into technology and AI: the good, the bad, and the ugly. These ranged from having no background in technology, to the jarring experience of being in a computer science classroom that was over 85% men and struggling to feel like you belong, to coming to technology through an interest in film, to growing up with a parent who was a programmer. Sharing and recounting these experiences helped participants hone the messages they want to convey in their collectively produced zine.
“A zine page is a small space and that’s kind of the point: the constraint is part of the magic, as it forces you to be short and sweet and express yourself swiftly and joyfully. General rule of thumb: first thought, best thought!”
“There’s so much good stuff here, I feel like we can easily make seven different issues.”
Sarah Ciston, describing the ideal position an editor likes to find themselves in
Outcome:
By the end of the session, the group compiled their writing and thinking into an 8-page issue of the Intersectional AI Toolkit series of zines. A compendium of theorizations and feelings, it “not only demystifies but dismantles and decentres” AI through a playful pastiche of handwriting and diagrams. Bringing queer, p[unk], and critical perspectives to our current data and information regime, it encapsulates the tone, urgency, and thoughtfulness of the pressing conversations that have unfolded at the Autumn School.
Zines provide a low-cost, low-barrier, and deeply communal way of coming to terms with emergent technologies. Countering the complexities and opacity surrounding AI with handwritten notes gathered in an intuitive, tactile medium might be just what is needed to process our own anxieties about Big Tech—and inform the general public in the process.
018 – 17/10:
INTERVENTION (17/10/2022)
Intervention: A bot to help participants become their best (most machine-readable) selves
The AI Anarchy Flattener scans daily dossier distillations for hidden information to help AI Anarchies Autumn School participants become their best (most machine-readable) selves. Read more about the N O R M A L S provocation here.
019 – 18/10:
Bodies as/in Systems (18/10/2022)
Provocation
Bodies as Systems; the Body in Systems
Speakers:
Laura Forlano, Louise Hickman
Profile:
Laura Forlano
Laura Forlano, a Fulbright award-winning and National Science Foundation scholar, is a writer, social scientist and design researcher. She is an associate professor of design at the Institute of Design, and affiliated faculty for the College of Architecture at Illinois Institute of Technology, where she is director of the Critical Futures Lab. Forlano’s research is focused on aesthetics and politics at the intersection between design and emerging technologies.
Curatorial Frame:
Louise Hickman and Laura Forlano explore how the knowledge, activism, and practices of crip-activists, crip-engineers, and scholars might reframe AI, and demand more from it. Both insist that if we look to these communities as inventive hackers of digital technologies, then we might find that we have never been short on imaginaries, or modes of interdependence. Hickman and Forlano will lead this session alongside a human live-captioner to show the different kinds of skills, mutuality, and care possible within sociotechnical infrastructures. But all technological infrastructures are prone to failure. What if we were to take up technology as an affirmative kind of failure? Their provocations push for a deep consideration of difference, and a renewed focus on the role of the body in our built technological systems.
Soundbite:
“For four years from 2018 to 2022, I used one of the world’s first automated systems for delivering insulin in order to manage and control my blood sugar. I identify, research, and write as a disabled cyborg. I’m cyborg not because of my body being partly made of machines—the insulin pump and sensor system—but because of my knowledge of cyborg knowledges, practices, and politics.”
Forlano’s methods are worth underscoring. Her auto-ethnographic field notes blur autobiographical observations of health and wellness with the log of her maintenance regime. “This includes my use of synthetically produced hormones, insulin, my machine, an insulin pump, an assortment of digital and analog parts including sensors, tubing, charging cables, alcohol swabs, insertion devices, needles, and the like,” she notes.
Takeaway:
One issue with fields like User Experience and Interaction Design is that, solutionist as they are, they romanticize preferable outcomes. The disabled cyborg approach touted by Forlano sees “failure rather than perfection as the default setting” and recognizes that design oversights have tangible repercussions for edge-case users. “There may not be a single place to put the blame,” says Forlano.
Soundbite:
“Perhaps it’s not so unusual to talk about computational technologies as disabled. For example, it’s a common colloquial expression when talking about computing: your account has been disabled. But we rarely understand disability to be a property of both humans and machines.”
Laura Forlano, setting up a conversation about ‘crip human-computer interaction’ (HCI)
Soundbite:
“Often, I am not sure whether I am taking care of the devices or they are taking care of me. A turn towards a more relational more-than-human or posthuman subjectivity, that sees humans and machines as intimately entangled rather than discrete entities as they have been traditionally conceived.”
In 2015, Forlano reached out to Sky Cubacub of Rebirth Garments, a fashion designer who creates QueerCrip garments with disabled people, to design a custom-made bathing suit that accommodates her insulin pump. “This piece allowed me to think more about the invisibility of diabetes as a disease in tandem with the visibility of the machines that I use to control my blood sugar,” she explains.
Soundbite:
“I was struck by the ways in which the choice of materials including cement, spandex, and rubber suggested an alternative narrative about computing in contrast to the shiny metal and glass of the latest line of mobile phones, tablets, and computers.”
Forlano previews three realizations of her disabled cyborg perspective currently being made in collaboration with the visual artist Itziar Barrio. One translates the alert and alarm data from her smart insulin pump into movements that symbolize the years of sleep deprivation Forlano endured; another is a cement abstraction of an arm wrapped in what appears to be a blood pressure monitoring cuff. “The work is a reminder of the human labour that makes these automated systems work,” explains Forlano.
Laura Forlano, sharing a stream of alert and alarm data she transcribed from her smart insulin pump
Soundbite:
“AI, like all technological systems, is disabled. But our design processes are still overwhelmingly skewed towards optimization, perfection, efficiency, success, and a happy path. We vastly underestimate and minimize the ways in which they may fail, leaving others to suffer the consequences.”
Laura Forlano, on human hubris and technological delusions
Profile:
Louise Hickman
Louise Hickman is a research associate at the Minderoo Centre of Technology and Democracy, University of Cambridge. She holds a PhD in Communication from the University of California, San Diego, and her research draws on critical disability studies, feminist labour studies, and science and technology studies to examine the historical conditions of access work.
Soundbite:
“What value do we place on autobiographical encounters with technology? When I was two years old I started wearing a hearing aid. When I was sixteen I started using an electric wheelchair. More recently, in the last two months I’ve started using a white cane. I’ve named my white cane Gregory Bateson.”
Louise Hickman, introducing herself and her assistive technologies
Precedent:
Hickman’s cane is named after British anthropologist and social scientist Gregory Bateson. The cited cane analogy is from Bateson’s 1972 collection of essays Steps to an Ecology of Mind, where he writes [assistive technology] “is a pathway along which differences are transmitted under transformation, so that to draw a delimiting line across this pathway is to cut off a part of the systemic circuit which determines the blind person’s locomotion.”
Soundbite:
“Gregory Bateson asked his students one day: Where does the body start and where does the vision begin? Is it at the bottom of the cane? Is it halfway up? Is it at the tip? Or is it the path of travel? What he was asking his students to think about was the feedback loop and debunking the idea that the brain is the place of cognition.”
“How do stenographers build dictionaries? How do they code speech when it is happening in real-time? Every time a stenographer produces a word it’s based on their own embodiment with their keyboard, laptop, and dictionary.”
In the LUX-commissioned short film Captioning on Captioning (2020), Hickman and the disabled artist Shannon Finnegan—an Eyebeam resident at the time—meet with Hickman’s longtime collaborator and real-time writer Jennifer to reveal the machinations of access work. Over the course of four recorded Zoom sessions, the three discuss dictionary building, transcription errors, and speech-to-text latency, as Jennifer demos her use of the Case CATalyst stenography software. “By inverting the invisibility of real-time writing, we edited the film to show discrete moments of care, vulnerability, and intimacy,” Hickman writes on her website.
Soundbite:
“German sociologist Max Weber kept coming up as Darth Vader. Again and again my stenographer repeated the brief for Weber, but all that appeared on screen was Vader. Darth Vader, Darth Vader, Darth Vader.”
Louise Hickman, discussing “technological failure” with a personal anecdote: When her real-time writer Jennifer lost her laptop, she also lost the vocabulary she had been building with Hickman over two years.
Soundbite:
“Stenographers have ownership over their dictionaries—they’re not part of the datasets used for training AutoAI captioning services. In that sense, the politics of transcription become this local system the stenographer can be the gatekeeper of.”
Louise Hickman, on authorship and labour in an increasingly automated real-time economy
Precedent:
“There’s not one same dictionary for the entire cohort of stenographers,” explains Hickman. Underscoring the intimacy of a stenographer’s vocabulary, Hickman shares several steno briefs that, depending on the dictionary, have widely different meanings: ‘A-BG,’ for example, can mean ‘access,’ ‘academic,’ or ‘accusation,’ while ‘PHA’ can mean ‘machine,’ ‘mental anguish,’ or ‘parole.’ “This is what I mean with stenographers having a relationship with their software,” notes Hickman.
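To make the point concrete, a hypothetical sketch of how such personal dictionaries might be represented in code: the brief-to-word pairings are taken from Hickman’s examples, but the data structure itself is an illustration, not Case CATalyst’s actual file format.

    # Each stenographer carries their own mapping from steno briefs to words;
    # the same brief resolves differently depending on whose dictionary is loaded.
    steno_dictionaries = {
        "writer_a": {"A-BG": "access",     "PHA": "machine"},
        "writer_b": {"A-BG": "academic",   "PHA": "mental anguish"},
        "writer_c": {"A-BG": "accusation", "PHA": "parole"},
    }

    def translate(brief, writer):
        """Resolve a brief against one writer's personal dictionary."""
        # Fall back to the raw brief when the stroke is untranslated.
        return steno_dictionaries[writer].get(brief, brief)

    print(translate("A-BG", "writer_a"))  # access
    print(translate("A-BG", "writer_c"))  # accusation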
Takeaway:
The word ‘universal’ often precedes the word access, but Hickman’s argument undermines the idea that some kind of totalizing access could ever be granted. Her nuanced discussion of the organic relationships that evolve between speakers and their stenographers—their mutual crafting of custom vocabularies—spells out a much more bespoke notion of access. Access is not some magic policy that levels the playing field; it provides time, space, and resources to support folks as they make their own way—on their terms.
Precedent:
Hickman harkens back to bygone stenography devices and namechecks the Digitext-ST (Steno Translator) as a key benchmark. Invented in 1987 by Jerry Lefler, the clunky device was the first real-time shorthand machine for stenographers. “I’m a complete nerd, I want this in my house,” Hickman beams while telling the audience about its capabilities.
Soundbite:
“My thinking about the steno brief is really about captioning as a political economy. The choices that we make to interpret information and provide access. A deaf person in the Netherlands, for example, is entitled to 160 hours a year of access to a trained transcriptionist. That’s rationing of access, on a state level.”
Midway through her talk, Hickman invites participants to stand up and inspect the captioning stations installed on either side of the stage. “This is about embodiment,” she reminds the audience, encouraging everyone to centre themselves around the screens. “I want you to check on everyone around you, and check about access—whether they can see, whether they have space.” As the audience follows, Hickman points out that the Autumn School itself has pathways to access built into it—and invisible labour that makes them run. Offstage, out of sight, speech-to-text translators Luisa and Christina labour to provide real-time captioning. “I’m naming them to recognize there are these other people in the room who are producing the access,” says Hickman.
“Access work is intimacy work. As AI is exceedingly taking over access work—real-time captioning, for example—I’m interested in the distribution of that intimacy, how it becomes fragmented and contested.”
Louise Hickman, pondering intimacy when access work is automated
020 – 18/10:
Adversarial Acoustics (18/10/2022)
Workshop
An Introduction to Adversarial Acoustics
Instructors:
Murad Khan, Martin Disley
Profile:
Murad Khan
Murad Khan is a course leader and senior lecturer at UAL’s Creative Computing Institute. His research explores the relationship between pathology, perception and prediction across cognitive neuroscience and computer science, outlining a philosophy of noise and uncertainty in the development of predictive systems. He is a member of the collaborative research studio Unit Test, which explores the place of investigative methods in counter data-science practices.
Profile:
Martin Disley
Martin Disley is an artist, researcher and software developer at Edinburgh University. His visual practice centres around critical investigations into emerging data technologies, manifesting their internal contradictions and logical limitations in beguiling images, video and sound. He is a member of the collaborative research studio Unit Test, which explores the place of investigative methods in counter data-science practices.
Brief:
Through theoretical discussion and practical computing exercises, Autumn School participants will explore the legacy of vocal forensics, the use of the voice as a tool for profiling and measurement, and the use of voice in machine learning systems. Taking sound studies scholar Jonathan Sterne’s account of a ‘crip vocal technoscience’ as a starting point, participants will complicate the relationship between interiority and exteriority while developing an adversarial approach to vocal acoustics.
Soundbite:
“The development of techniques for facial recognition models has reinvented the relationship between interior and exterior common to the pseudosciences of physiognomy and phrenology.”
Murad Khan & Martin Disley, on working back from notions of visual similarity to sonic similarity
Takeaway:
Modelling voices is difficult but not impossible. Khan & Disley’s suggested workflow starts with airflow, then applies a filter (mimicking the manipulation of the oral cavity), and then models the resulting speech sound. Focusing on these three variables, it is possible to develop a working model.
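A minimal source-filter sketch of that workflow, assuming NumPy and SciPy (the fundamental frequency and formant values below are illustrative, not the instructors’ settings): a periodic ‘airflow’ source is shaped by resonant filters standing in for the oral cavity, yielding a crude vowel-like sound.

    import numpy as np
    from scipy import signal

    SR = 16000                       # sample rate (Hz)
    t = np.arange(SR) / SR           # one second of audio

    # 1. Airflow: approximate the glottal source with a 110 Hz sawtooth wave.
    source = signal.sawtooth(2 * np.pi * 110 * t)

    # 2. Filter: resonant peaks roughly matching the first two formants of /a/,
    #    standing in for the shaping done by the oral cavity.
    voiced = source
    for formant_hz, bandwidth_hz in [(700, 130), (1200, 200)]:
        b, a = signal.iirpeak(formant_hz, formant_hz / bandwidth_hz, fs=SR)
        voiced = signal.lfilter(b, a, voiced)

    # 3. Speech sound: normalise; write to disk (e.g. with scipy.io.wavfile) to listen.
    voiced /= np.max(np.abs(voiced))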
Precedent:
Demonstrating the complex physiology of speech with aplomb is Pink Trombone (2017), a web-based speech synthesizer by Neil Thapen that allows users to physically model speech sounds by modulating a simulated oral cavity, tongue, and voicebox. With its slightly unnerving continuous drone of “owwwwwwww,” a user quickly gets a sense of the enormous complexity of human speech.
Soundbite:
“Asking how a machine hears is really like asking how it sees.”
Murad Khan & Martin Disley,
Precedent:
Sound studies scholar Jonathan Sterne’s 2019 Journal of Interdisciplinary Vocal Studies article “Ballad of the dork-o-phone: Towards a crip vocal technoscience” provides a key point of reference for the workshop. There, drawing on his highly personal relationship with the Spokeman Personal Voice Amplifier (which he has used since a 2009 surgery and calls the ‘dork-o-phone’), he carefully considers his relationship with his assistive technology, ultimately concluding that “every speech act involving my voice raises anew the relationship between intent and expression, interiority and exteriority” (p. 187).
Takeaway:
In many ways the session distils into three key questions for participants: How is acoustic data transformed in a way that makes it understandable to a training model? How do you turn the voice into data? How do we measure difference?
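On the second of those questions, a hedged illustration of what ‘turning the voice into data’ typically looks like in practice: a recorded waveform is sliced into short frames and converted to a log-magnitude spectrogram, the kind of representation most audio machine learning models actually consume. The file name and frame parameters are placeholders, not the workshop’s.

    import numpy as np
    from scipy import signal
    from scipy.io import wavfile

    sr, waveform = wavfile.read("voice_sample.wav")   # hypothetical recording
    waveform = waveform.astype(np.float32)
    if waveform.ndim > 1:                             # collapse stereo to mono
        waveform = waveform.mean(axis=1)

    # Short-time Fourier transform: ~25 ms windows with 10 ms hops.
    freqs, times, Z = signal.stft(
        waveform, fs=sr,
        nperseg=int(0.025 * sr),
        noverlap=int(0.015 * sr),
    )
    spectrogram = 20 * np.log10(np.abs(Z) + 1e-10)    # log-magnitude (dB)

    print(spectrogram.shape)  # (frequency bins, time frames): the model's input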
The AI Anarchies Autumn School: Experiments in Study, Collective Learning and Unlearning (Oct 13–20, 2022)
The JUNGE AKADEMIE’s AI Anarchies Autumn School, curated by Maya Indira Ganesh & Nora N. Khan. The school is part of the AI Anarchies project, initiated and curated by Clara Herrmann & coordinated by Nataša Vukajlović. It includes an artist residency programme for six artists in cooperation with ZK/U Berlin and a concluding exhibition in June 2023 at Akademie der Künste. AI Anarchies is supported by the German Federal Commissioner for Culture and the Media.