Last week the Australian Anthropological Society held its annual conference, Moral Horizons, at Melbourne University. My paper, “The Ambience of Automation: Big Data, A.I. and Drone Culture”, discussed the moral ambiguities at play in the development of autonomous systems of war and mass surveillance, and how, as a society, we are challenging the orthodoxy of the machine through art and culture.
The notion of the vision machine is embedded in our popular cultural fictions and scientific explorations. It operates at the foundation of our interpretation of the farthermost reaches of space and the innermost structures of matter. The machine sees, the machine knows, but the mechanics are invisible. Concepts such as military futurism, metadata, kill lists, terraforming and drones are riddled with ethical conundrums that are rarely discussed in mainstream media discourse, yet haunt the background atmosphere of contemporary technoculture. How do artists, designers and filmmakers working at the epicentre of the Hollywood dream machine and at the furthermost extremities of media arts practice depict notions of A.I. and machine ambience? What meaningful opportunities exist for informed, open debate about their moral implications in the crowded vision streams of contemporary screen culture?
Technological visions of the future: political ontologies and ethics
The panel was organised by Jonathan Marshall (University of Technology, Sydney) and Rebekah Cupitt (KTH Royal Institute of Technology) and was designed to explore the complex interrelations of technology, ethics, politics, conflict, uncertainty, unintended consequences and visions of the future.
The future cannot be predicted in detail and is radically uncertain. Consequently, visions of the future represent ontologically based understandings of what it could be, ought to be and ought not to be. Technology is often important in imagining these futures and can be framed as empowering, alienating, transformative or destructive. Persuasive technological visions of the future draw upon ‘underlying’ moralities and ethics which express conflicting social and political ‘realities’. Visions of the future proposed by one group can appear unwanted, or destructive, to another. Technologies and other future-making projects can also have unintended consequences which compound moral and visionary complexities.
We aim to explore contrasting views of technology, its ontologies and moralities, by understanding morality as fundamentally driven by disagreement, uncertainty and difference. We are interested in how moralities are strategically employed to reinforce particular technologically driven visions of the future. Relevant questions include: a) How does technological intervention in the name of higher moral goals, such as ‘saving the planet’ from the ecological crisis or enhancing equality for those suffering discrimination or disempowerment, actually function in its political complexity and deal with unintended consequences? b) How does technologically mediated communication affect moral and political discourse, activism and our ability to handle futures? c) Does technology, or technological research, implicitly carry a gendered ethics? d) What kinds of ontologies and ethics are implied, or implemented, by particular technologies?