Algorithmic mediations of a media in crisis

One of my key research interests is the philosophical and political implications of new developments in artificial intelligence, and, in particular, how these methods are now used to control the “dangers” of the so-called “dark side” of digital cultures, such as online extremism and hate speech. While this work is still in development, I will explore these ideas further in a talk at the University of Melbourne in February 2018. The abstract reads as follows:

Algorithmic mediations of a media in crisis

Where there was once widespread optimism about the future of digital technologies, the debate has now shifted to one of a media in permanent crisis. Digital technology is haunted by its “dark” side: fake news, information disorder and violent political extremism run rampant. Alongside this discursive shift, critical scholars have warned that a new “dispositif of risk” has emerged, whereby these imagined dangers must be contained and controlled through political, legal and technological mechanisms such as surveillance and censorship, predictive policing, and cooperation with internet intermediaries to monitor and remove extremist and hateful content online.

What is interesting about these debates is that digital technologies are now articulated in terms of the dangers they pose: in the vulnerable youths potentially recruited by terrorist propaganda; in finding ways to preemptively stop attacks by detecting signals in online behaviour; in mitigating the social and political fallout of fake news, disinformation, and racist, xenophobic and misogynist hate speech. These attempts to pre-mediate the imagined future also happen through technological innovation: state-of-the-art predictive algorithms in artificial intelligence are used to identify those kinds of online practices that could potentially lead to an unwanted outcome.

This paper looks at research on violent online political extremism around such “algorithmic mediations of a media in crisis”. Through an auto-ethnography of the methods and approaches that have been developed to identify dangerous behaviour online, it examines, in particular, the philosophical presuppositions behind the predictive epistemes used especially in big data research. What kinds of philosophical assumptions are embedded in these probabilistic methods, through which the unpredictability of the future is reified? What do they say, more fundamentally, about the changing notions of temporality and human agency underlying contemporary digital culture?
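
To make concrete what such a predictive episteme looks like in practice, here is a minimal, hypothetical sketch of a probabilistic text classifier of the general kind gestured at above. It is not the method of any system or study discussed in the paper; the toy corpus, labels, threshold and library choice (scikit-learn) are illustrative assumptions only.

```python
# A minimal, hypothetical sketch of a probabilistic content classifier:
# it maps past examples of "risky" and "benign" text to a probability
# score for unseen posts. All data and labels below are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled corpus: 1 = flagged as "extremist", 0 = benign (invented).
posts = [
    "join our fight against the traitors",
    "the cleansing is coming, be ready",
    "lovely weather for a picnic today",
    "new recipe for lentil soup",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# The open-ended future of a post's author is collapsed into a single
# probability, which a downstream system can threshold for removal,
# referral or surveillance.
new_post = ["be ready to fight"]
risk = model.predict_proba(new_post)[0][1]
print(f"estimated 'risk' score: {risk:.2f}")
```

Even this toy makes the abstract's question visible: the unpredictability of future behaviour is reified as a single number, and intervention then follows from where an operator chooses to set the cut-off.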