In March 2016, Google’s AI program AlphaGo defeated one of the world’s top Go players; later that month, a short novel co-authored by humans and an AI program made it through the first round of judging for a Japanese literary prize. All the while, “automated reporting” bots have been producing narrative content from data, covering everything from corporate earnings to earthquakes and sports statistics—in fact, can you be entirely sure that a human has written this course description?
Researchers, too, now mine large textual corpora for grammatical and semantic patterns, quantifying or otherwise measuring matters such as word usage, punctuation, and character relationships (see also: forensic stylistics and copyright-infringement detection). Artists experiment with machine reading as well, as in Ben Fry’s Valence reading Mark Twain’s The Innocents Abroad; Daniel C. Howe and John Cayley’s The Readers Project; or Shakespeare Machine, a data analysis and visualization of Shakespeare’s plays.
These examples outline the conceptual terrain of this course. What do machine readers, text generators, and “robot novelists” (more precisely, neural networks) mean for literature and textual analysis? In what sense do our notions of authorship, style, and voice need to be reconsidered? How do we know who or what is writing, and does it matter? What is the status and function of the “human” vis-à-vis language in our contemporary socio-technical milieu? What new reading practices have emerged, and what new ones do we need now? How are writers and artists using, for example, QR codes, bots, and Google Translate for expressive purposes, and to what end? What is at stake in the rise of machine readers and writers—aesthetically, socially, and politically?
Class discussions will also touch upon algorithmic cultures, computer vision, sentiment analysis, conversation bots, spam, and CAPTCHA.
// This [insert] was created by an algorithm written by the author.