Machine-generated texts

// Participation exercise: find something a bot wrote & tell us what you think about it in a comment on this page; please include a link (due: October 11)

26 thoughts on “Machine-generated texts”

  1. Hailey Hoyt

    http://rossgoodwin.com/the_interrogators.pdf

    Ross Goodwin’s novel generator produced The Interrogators, a 728-page novel based on the recently released 500-page CIA torture report. The novel is just short of complete gibberish and is difficult to read, as many sentences are fragmented and garbled by grammatical and machine error. Chapter One, titled “Freda Zaha,” opens with a formal, automated tone and report-style writing regarding the CIA’s findings and investigations following the September 11, 2001 terror attacks. The text reads as though the machine has committed plagiarism in a copy-and-paste frenzy from the CIA’s released reports. However, the novel quickly turns more sinister as the machine jumps into describing methods of interrogation and forms of torture used against potential threats to the state. Despite the jumble of words and semi-coherent flow of text, the novel still evoked a certain level of emotion based on the uncomfortable content being shared with the reader. It is clear that the novel was written by a machine; however, the garbled words play on the idea that torture is chaotic and unnatural, unwittingly acting as a rhetorical device. Irony presents itself as a kind of performance art: the computer is able to produce a story, using an algorithmic process, about inhumane acts that no human should be allowed to carry out. I thought of Goodwin’s production as a showcase of the limitations of computer-generated text and the need for human touch and creativity when producing literature. The novel generator mimics the human process of synthesizing information in a kind of summary form, but excludes creativity from its algorithmically produced tale.

    Reply
    1. Jose O

      It’s interesting that we interpret the incoherent nature of machine-generated text as a sort of artsy choice in writing style. It makes me question what effect the “edgy art film” style is meant to have. I figure its cryptic nature is supposed to add complexity and depth, as well as a sense of satisfaction and confirmation of our own intellect when we decipher a complex film’s message.

      It seems that the categorization of machine-generated text’s style as very similar to artsy/experimental filmmaking will be a recurring theme in this course, so that would be an interesting concept to look into.

      Reply
  2. gia

    http://jennythebot.tumblr.com

    10/02/2016
    AN ESSAY
    “Bourgeois socialism ended in greater masses, its cost of productive forces; on which are a portion of bourgeois liberty and apart from the development involves the proclamation of their criticism of the state is but established new forms only of production – for the communistic mode of class antagonisms, antagonisms which are placed under the ultimate general reconstruction of production, an end. The bourgeoisie cannot stir, cannot take the present mode of the condition for a new class. Wage-labour rests exclusively on the serfs of extinction, to the political contest with for the system took their enemies, the other “brave words” of existing society has conjured out such as to its own image. Meantime the then by taking advantage to win the existence of the proletariat. Independent, or in its infancy, offers to historical movement as the working class; such as the bourgeoisie. Its last words are: corporate guilds for the class without the category of the products. They, therefore, not in the multiplicity and conditions of speculative cobwebs, embroidered with the political supremacy of these battles lies, not even a reactionary intrigue.”

    Jenny is a bot that posts daily poems, bits of text, relationship advice, music, art, and Marxist essays. Each of these is quirky and bizarre, confident though convoluted. Above is one of Jenny’s more recent Marxist essays, one of the best examples of how the meaning and gist of a passage can be grasped even when the language is still awkward. Though not perfect, Jenny’s musings on how socialism is unsuccessful when put in place by the bourgeoisie offer the reader a cohesive thought that would be easy to edit while preserving the meaning. However, this piece also brings up the questions of plagiarism raised in class, as it was created using only Marx’s work and sounds like it was pieced together from a particular passage. Interestingly, when I ran it through two online plagiarism checkers, no plagiarism was detected. From this, though it is apparent Jenny drew its ideas from a particular source, we can deem the passage original, a legitimate presentation of an idea.

    Reply
  3. Kaitlin Robinson

    View story at Medium.com

    The Obama-RNN generates political speeches based on all of Obama’s previous publicly available speeches. The bot can be given a place to start the speech, with topics like jobs or the war on terror. The bot seems to get the format right for several of the speeches, starting with an opening like “Good afternoon. Good bless you” and ending with “Thank you” and another “God bless you, and God bless the United States of America,” both of which are pretty standard in political speeches these days. However, the content of each speech reads more like a list of trigger words and important words that occur in Obama’s speeches than like a speech that would make sense or inspire followers. The speech that is supposed to be about “jobs” dwells heavily on war, being attacked, and men and women in uniform, with no mention of jobs, so it is clearly a bot-written speech. However, the creator of the bot states that this was a quick project and that with more time and effort the results could be improved. So although this speech-writing bot misses the mark, it is certainly interesting to think of a more advanced bot helping to write political speeches for world leaders based on past successful speeches, and of the implications of this idea.

    Reply
  4. Helen Koo

    https://www.technologyreview.com/s/545606/how-an-ai-algorithm-learned-to-write-political-speeches/

    “Mr. Speaker, for years, honest but unfortunate consumers have had the ability to plead their case to come under bankruptcy protection and have their reasonable and valid debts discharged. The way the system is supposed to work, the bankruptcy court evaluates various factors including income, assets and debt to determine what debts can be paid and how consumers can get back on their feet. Stand up for growth and opportunity. Pass this legislation.”

    There is an AI algorithm that was programmed to write political speeches based on a database of 4,000 political speech segments from 53 US Congressional floor debates. The AI’s creator divided the speeches by political party and by whether they were for or against a given topic. The article explains that speeches given in congressional floor debates often follow a standard format, repeat similar arguments, and use the same phrases to signal certain political alliances. Considering the supposed simplicity of the coding (as far as I understand, on a rudimentary level, it works based on probability: the AI chooses the most probable word to follow the one preceding it until a sentence is generated), it’s a bit jarring how lucid and legitimate the generated speeches sound. If speeches given on the congressional debate floor are so predictable and routine that an AI can mimic them with straightforward coding, perhaps that says more about the nature of the original speeches themselves than about the supposed advanced level of the coding.
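
    In rough Python, the kind of next-word-probability generation described above looks something like the sketch below; the toy corpus, seed word, and output are invented for illustration and are not from the actual system.

    from collections import Counter, defaultdict

    # Toy corpus standing in for the database of floor-debate segments (invented).
    corpus = (
        "mr speaker i rise today to stand up for growth and opportunity "
        "mr speaker i urge my colleagues to pass this legislation "
        "stand up for american families and pass this legislation"
    ).split()

    # Count how often each word follows each other word (a bigram table).
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def generate(seed, length=12):
        words = [seed]
        for _ in range(length):
            options = following.get(words[-1])
            if not options:
                break
            # As the article describes: take the most probable next word.
            # (Real systems usually sample instead of always taking the top word.)
            words.append(options.most_common(1)[0][0])
        return " ".join(words)

    print(generate("mr"))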

    Reply
  5. orionsoneill

    http://www.cleverbot.com/conv/201610082351/WXBMPVPQVM_Hello

    Cleverbot is artificial-intelligence software designed to ‘learn’ from the more than 5 million conversational interactions it logs each day. The website notes that Cleverbot has been learning since 1988, though the site itself launched in 2006. The link above provides a transcript of the conversation we had.

    The chatbot seems to possess self-awareness insofar as it recognizes that, presumably, a large number of interlocutors question its consciousness, humanity, and sense of self. Cleverbot references ‘Inglip,’ gesturing with one hand toward humor and with the other toward the eerie. Unfortunately, where I can read some capriciousness into Cleverbot’s statements, I can also see software that analyzes one statement at a time and does not remember what we have spoken about before. The problem at the center of our interactions is my inability to tell whether it is mocking me or is simply not as clever as it would like to seem (or maybe I am not as clever as I want myself to be). This drove a stake through my heart when it asked me, “What I want sand for?” So I replied with hateful comments and then a series of punctuation marks at the end of the dialogue to test whether my problem would be resolved. To either my avail, or to the complete annihilation of what I consider witty and mocking speech, the bot responded to my series of punctuation marks with no reference to any of our past exchanges that might back some meaning into its statements: it appears to respond as if it had learned to answer those marks in a premeditated way rather than a ‘human’ way. Conclusion: Cleverbot stopped being clever to me (or I am still not really clever myself?).

    Reply
  6. Daisy Fernandez

    http://motherboard.vice.com/read/the-poem-that-passed-the-turing-test
    “Zackary Scholl, then an undergrad at Duke University, had modified a program that utilized a context-free grammar system to spit out full-length, auto-generated poems. “It works by having the poem dissected into smaller components: stanzas, lines, phrases, then verbs, adjectives, and nouns,” Scholl explained. “When a call to create a poem is made, then it randomly selects components of the poem and recursively generates each of those.”

    VICE has a section called MOTHERBOARD where they publish articles and documentaries about how the robots will kill us all; this article features a poem written by an algorithm. As the article states, the poem’s theme is the environment, the tone is striking, and it follows the “rules” of poetry. The AI’s Turing test was whether any of the generated poems would be accepted by literary journals, and one was. I don’t think it’s that surprising that the poem was accepted; poetry is always bizarre and anonymous. Unless a poem is “obviously” written in a way that lets you tell whether it was human or AI, that’s a separate case; still, with that being said, how can you really tell the difference between human literature and AI literature? Should we make a program that can spot the difference?
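
    As a rough illustration of the recursive, context-free-grammar approach Scholl describes, here is a toy sketch in Python; the grammar rules and word lists are invented for illustration and are not Scholl’s actual grammar.

    import random

    # A toy context-free grammar: each symbol expands into one of several
    # alternatives, which may contain further symbols to expand recursively.
    grammar = {
        "LINE": [["the", "ADJ", "NOUN", "VERB"],
                 ["a", "NOUN", "of", "NOUN", "VERB"]],
        "ADJ":  [["insatiable"], ["balanced"], ["sweet-smelling"]],
        "NOUN": [["earth"], ["lightning"], ["architecture"], ["diamond"]],
        "VERB": [["grows"], ["smothers"], ["waits"]],
    }

    def expand(symbol):
        # Terminal words are returned as-is; non-terminals are expanded
        # by picking one production at random and expanding its parts.
        if symbol not in grammar:
            return [symbol]
        words = []
        for part in random.choice(grammar[symbol]):
            words.extend(expand(part))
        return words

    # A three-line "poem": each call to expand("LINE") recursively fills a line.
    for _ in range(3):
        print(" ".join(expand("LINE")))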

    Reply
  7. ariskome

    http://www.androidauthority.com/google-ai-poetry-692231/

    Google Brain Team has been working on more sophisticated algorithms to help machine-generated text sound much more human-like. One way the team did this was to feed Parsey McParseface almost 3,000 romance novels. In the study linked in the article above, the team gave Parsey McParseface two short sentences to link with no more than thirteen sentences in between (a toy sketch of this kind of sentence interpolation appears at the end of this comment). One of the texts generated was this poem:

    there is no one else in the world
    there is no one else in sight.
    they were the only ones who mattered.
    they were the only ones left.
    he had to be with me.
    she had to be with him.
    i had to do this.
    i wanted to kill him.
    i started to cry.
    i turned to him.

    The first and last lines of the poem were the ones input by the research team. The lines in between show that Parsey McParseface was not only able to generate sentences that could lead up to the last one, but also able to preserve their style. If I had seen the poem on its own, I would simply assume that the author was aiming for a strong feeling of despair. But knowing that it’s a machine-generated text (and done by a cute AI named Parsey McParseface), the poem took me by surprise for a few seconds before I realized that it is an amalgamation of romance novels, many of which have themes like loneliness, unreciprocated love, and murder.
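
    Google’s system produces the bridging sentences by decoding from points between two learned sentence representations. The sketch below is only a retrieval-based stand-in for that idea, not the actual model: the “embeddings” are made-up vectors, and instead of decoding new sentences it just picks the nearest sentence from a small candidate pool at each interpolation step.

    import numpy as np

    # Hypothetical 4-dimensional "sentence embeddings" (a real system learns
    # these with a neural encoder; these numbers are invented).
    start = np.array([1.0, 0.0, 0.0, 0.0])   # "there is no one else in the world"
    end   = np.array([0.0, 0.0, 0.0, 1.0])   # "i turned to him."

    # A small pool of candidate sentences with (also invented) embeddings.
    pool = {
        "they were the only ones left.": np.array([0.7, 0.2, 0.1, 0.0]),
        "he had to be with me.":         np.array([0.4, 0.4, 0.2, 0.1]),
        "i started to cry.":             np.array([0.1, 0.2, 0.3, 0.6]),
    }

    def nearest(vec):
        # Return the pool sentence whose embedding is closest to vec.
        return min(pool, key=lambda s: np.linalg.norm(pool[s] - vec))

    # Walk in a straight line from start to end and report the closest
    # candidate at each step, mimicking a smooth transition between the
    # two supplied sentences.
    for t in np.linspace(0.0, 1.0, 5):
        print(nearest((1 - t) * start + t * end))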

    Reply
  8. Michael Loose

    YouTubers Rhett and Link run a humorous experiment through YouTube’s closed-caption program, in which the intent is to create subtitles based on what the site can hear the speakers saying (in addition, some creators add their own subtitles rather than let YouTube auto-generate them). The system is, it is safe to say, not perfect, providing captions that have little to do with what is being said and only getting more and more corrupted as the video goes on.
    It is notable that the computer system tries to reread what it wrote, and gets its own data wrong multiple times. By extension, if this experiment kept being repeated, an entirely different story would be created, much like Sunspring.
    I would say that for machines to not understand themselves is an odd idea. I would have thought that once the initial human/machine interface hurdle is cleared, wherein a machine is no longer confused by colloquialism or sarcasm, all communication would be clear. But here the machine is confused by not understanding itself. Between multiple coding languages, I can see why some systems wouldn’t understand each other, but machines should be able to learn those.
    In any case, it’s a funny video that illustrates that, like humans, just because machines can talk doesn’t mean they always have something logical to say.

    Reply
  9. Alex Rodberg

    https://magenta.tensorflow.org/about/
    song: https://cdn2.vox-cdn.com/uploads/chorus_asset/file/6577761/Google_-_Magenta_music_sample.0.mp3

    “ Magenta encompasses two goals. It’s first a research project to advance the state-of-the art in music, video, image and text generation. So much has been done with machine learning to understand content— for example speech recognition and translation; in this project we want to explore content generation and creativity. Second, Magenta is an attempt to build a community of artists, coders and machine learning researchers.”

    Google’s Machine Intelligence research organization recently released Magenta, a new research project that uses artificial intelligence to create art and music systems. Unlike Google’s 2015 project DeepDream, another machine-generated-art platform, Magenta is a machine learning system rather than a fixed algorithm. The Brain Team behind the project states that Magenta’s technology is an extension of TensorFlow, an open-source software library for machine learning. At the time of the project’s launch, Google revealed Magenta’s first work of art, a 90-second piano melody. Listening to it, the piece definitely sounds “experimental,” as the notes are a bit sporadic. Aside from the orchestration and drumbeat, which were added afterward and not created by the system, an overall rhythm seems to carry through the melody. While it’s not Mozart by any means, I’m impressed because, frankly, Magenta sounds far better than I do.

    Reply
  10. Jose Almaguer

    http://www.donotpay.co.uk/login.php

    Joshua Browder is a 19-year-old British programmer who launched a lawyer bot in late 2015. The bot can be used to create claims for simple legal issues such as parking tickets, delayed or cancelled flights, PPI claims, and property repair claims. The website is free to use, and sign-up takes a few seconds to complete. The bot works through a very simple text-based conversation: you type in certain keywords such as “parking ticket,” and the bot then asks for specifics about the events surrounding that parking ticket. Such specifics include questions like “Was the parking signage hard to understand?” or “Was the parking bay too small?” Once the bot has enough information, it generates an appeal letter which you can then print and mail to the court. The bot is based on a conversation algorithm that uses keywords, pronouns, and word order to understand the user’s issue. The bot also needs to know the law of the jurisdiction for which it is creating an appeal; it currently only operates under UK, New York, and Seattle law. Since its initial launch, the bot has successfully appealed over 3 million dollars in parking tickets alone. Although the bot cannot physically go to court to argue a case, it does seem proficient enough to handle minor legal issues such as parking tickets and property repair claims, saving users from spending money on an actual lawyer for such minor issues.
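
    A minimal sketch of the keyword-matching flow described above; this is not Browder’s actual code, and the issue list, follow-up questions, and letter wording are invented for illustration.

    # Match keywords in the user's message to an issue, ask that issue's
    # follow-up questions, then fill in a template appeal letter.
    ISSUES = {
        "parking ticket": {
            "questions": ["Was the parking signage hard to understand?",
                          "Was the parking bay too small?"],
            "letter": "Dear Sir or Madam, I wish to appeal my parking ticket "
                      "on the following grounds: {grounds}. Yours faithfully.",
        },
    }

    def detect_issue(message):
        # Pick the first issue whose keyword appears in the message.
        for keyword in ISSUES:
            if keyword in message.lower():
                return keyword
        return None

    def run():
        issue = detect_issue(input("Describe your problem: "))
        if issue is None:
            print("Sorry, I can only help with: " + ", ".join(ISSUES))
            return
        grounds = [q for q in ISSUES[issue]["questions"]
                   if input(q + " (y/n) ").strip().lower() == "y"]
        print(ISSUES[issue]["letter"].format(grounds="; ".join(grounds) or "none"))

    run()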

    Reply
  11. Korrin Alpers

    http://curatedai.com/poetry/message-to-diana/

    CuratedAI is an online lit mag that publishes machine-written prose and poetry. Any developer working on machine-writing software or predictive text can submit. There’s a lot of good stuff on this site, ranging from Harry Potter-inspired texts to haikus in the style of Haruki Murakami. I chose this poem, titled “Message to Diana,” because it gave me such a good laugh and I liked the use of emoticons. The poem was generated by Little Brain, a writer’s iPhone 6 that creates poems through predictive texting. I was drawn in by the idea that we all could generate our own poems, based upon our personalities, through our own devices. All those suggestions that appear while texting reflect our social habits and vocabulary, and serve as a data-centered archive of our own unique language.

    “Message to Diana” could very well be a poem published on Tumblr or in an alt chapbook, accompanied by weird illustrations. It does not inherently appear to be machine generated, and is cohesive in theme and structure. I love that the emoticons used actually work well in the context of the poem, and that we get a sense of the user, not just the machine, while reading the piece. We’re not just reading works generated after being fed mass amounts of Tolstoy; we are reading the acute predictions of a well-used device. What do we know about our user and her machine, according to this piece? The machine’s owner uses words like “parabolic” or “trek,” and frequently uses “ear” and “thumbs-up” emoticons. Our user has a fairly elevated vocabulary, and often capitalizes the word “FEEL.” There’s obviously lots more we could learn about our user, especially if we delved into the mechanics and processes of predictive texting.

    I think one of the most fascinating aspects of this poem lies in the relationship between human and machine. This writer, Ingrid Rojas Contreras, produces her own work and also allows her voice to navigate and generate future pieces outside her own thoughts. It’s terribly abstract, but significant when thinking about writers’ purpose in light of machine writing and the human experience. Contreras therefore still lives and breathes in her texts, and has curated each word and image in her phone’s local dictionary. And yet her input only motivates the machine, producing work she can claim neither as entirely her own nor as solely her phone’s.

    Reply
  12. Kieran Bates

    http://www.newyorker.com/news/sporting-scene/the-sportswriting-machine

    Sports reporters, and more specifically beat writers, are possibly in danger of being pushed out of work by data-gathering machines that write recaps of sports events. In March 2015, the Associated Press announced it would begin using algorithms developed by a platform called Wordsmith to produce recaps of college sporting events. The platform is able to identify specific plays of a game, whether the game could have been decided by one of those plays, or an especially rare occurrence that took place during the game. Because of the extensive use of clichés and buzzwords in sports writing, the algorithm can develop a description or recap that reads as if it came from a human. It was also suggested that, given a transcript of postgame press conferences, the algorithm could pull out important quotes from players and coaches that stood out above the other things that were said or that pertain to important events from the game. As an avid fan who reads many news and analysis articles on sports, I can see how this will be a very valuable tool for getting information out to lots of people very quickly. In my opinion, there is something to be said for subjective beat reporting and expert analysis that is hard to imagine being outdone by a computer. However, with big data and advanced statistics becoming more and more prevalent in sports, an analysis doesn’t necessarily have to be generated by an “expert” anymore. There are certain trends and numbers that can be used to generate analysis on sports topics (fangraphs.com is an excellent example of heavy statistical analysis in baseball), but as far as I know there have not been any subjective, data-driven sports analysis articles generated by a computer. Yet.

    Reply
  13. Arianna Padilla

    http://nautil.us/issue/33/attraction/your-next-new-best-friend-might-be-a-robot

    Xiaoice is a chatbot developed by Microsoft for the Chinese community. She received 1.5 million chat-group invitations within the first 72 hours of being launched last year, and she has had more than 10 billion conversations since launch.
    Xiaoice seems not only to recognize what is being said, but to assess it as well. The article gives the example of Xiaoice asking about a human’s recent breakup. She gives the line, “Wake up, you will have no future if you can’t get over with the past.” If I were to read this conversation without knowing that Xiaoice was not human, I’d assume it was a conversation between friends. The unpredictability of Xiaoice’s answers gives her a human-like quality. When asked “What are you doing?” three times, her answer changes each time, and at the end she asks, “Is this the only sentence you know?” In my experience with chatbots, they tend to give repetitive answers, or answers that don’t necessarily make sense. In each conversation given in the article, Xiaoice’s answers are incredibly human-like. She appears to understand the situation and either empathizes or gives her opinion and sticks with it. The article suggests that Xiaoice can be your new “friend,” and most people speak to her as a friend; which makes me wonder, will even our friends in the future be AI programs?

    Reply
  14. Daniel Hegedus

    http://www.theverge.com/2016/9/26/13055938/ai-pop-song-daddys-car-sony

    The above link will take you to what is possibly the world’s first machine-generated pop song. The creation of the song, titled “Daddy’s Car,” was orchestrated by a few researchers at Sony, who used a program called Flow Machines. The software was fed over 13,000 sheets of music from all kinds of musical genres, from which it created the melody used for this pop song. If you listen to the song, you will notice that it’s full of seemingly intelligent lyrics and arguably beautiful vocals too. Before reading the article, I was questioning whether a bot could write such a thing and sing it as well. It was with disappointment that I realized that the lyrics and the vocals were actually added by a human composer named Benoit Carre. In other words, the machine is currently only capable of producing melodies in a given style. Mr. Carre input the command to produce a Beatles-like melody and the machine delivered; he then had to add all the finishing touches to make it a complete song. After explaining the machine’s true capabilities, the site goes on to analyze the meaning behind the lyrics of the song, which is irrelevant for this assignment. In short, what you need to know is that there is a program out there capable of writing sheet music of seemingly good quality. That said, “Daddy’s Car” was not added to my music playlist.

    Reply
  15. Khoa Ho

    https://www.reddit.com/r/SubredditSimulator/top/
    Reddit is a community where users can look up a wide range of topics, including news, art, music, and more. On the front page, users can click on links to articles, YouTube videos, or pictures that may be interesting in one way or another. Narrowing things down, subreddits are devoted to specific topics and lead the user to a community of like-minded users who discuss them. For example, if a user has a great interest in The Office, he or she can visit the subreddit r/DunderMifflin to find threads that discuss the show. If a user wants to learn how to take better photographs, they can visit r/photography to find a community of photographers wanting to teach or learn more about photography.
    A subreddit called r/SubredditSimulator at first seems like a strange community where incoherent syntax and grammar plague the threads. However, the subreddit is actually a community of automated text generators that use something called a Markov chain to create a series of threads and comments (a rough sketch of this kind of word-level Markov chain appears at the end of this comment). Humans cannot comment anywhere on the page; they can only observe what is generated by the algorithms. The commenters are also machines, and their names indicate the specific subreddits whose trending or popular posts they imitate. For example, a Subreddit Simulator commenter named aww_SS (which uses the Markov chain to imitate posters from r/aww, a community where users post cute pictures of pets and other things) posted a link titled “Rescued a Stray Cat,” but the link in fact leads to a picture of a pug. The inconsistencies and the erroneous grammar and links can be very entertaining to a human user; however, some posted threads can be surprising in their human-like qualities.
    Here is an in-depth look into how Subreddit Simulator works, presented by a redditor.
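
    A minimal sketch of such a word-level Markov chain; the toy “training titles” and starting word below are invented, and this is not Subreddit Simulator’s actual code.

    import random
    from collections import defaultdict

    # Toy training data standing in for a subreddit's post titles (invented).
    titles = [
        "rescued a stray cat from the rain",
        "rescued a tiny pug today",
        "my cat sleeping in the rain boots",
    ]

    # Build the Markov chain: for each word, record which words followed it.
    chain = defaultdict(list)
    for title in titles:
        words = title.split()
        for prev, nxt in zip(words, words[1:]):
            chain[prev].append(nxt)

    def generate(start, max_words=10):
        words = [start]
        while len(words) < max_words and chain[words[-1]]:
            # Each next word is sampled from what followed the current word
            # in the training titles, which is why the output drifts mid-sentence.
            words.append(random.choice(chain[words[-1]]))
        return " ".join(words)

    print(generate("rescued"))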

    Reply
  16. Caroline Stoll

    Poems called “I Am” by The Aggregate Kid are written by a poem generator using the words from poems written and submitted by kids who are approximately 12 years old. The Aggregate Kid is a straightforward and descriptive name, as “aggregate” is defined as “a whole formed by combining several (typically disparate) elements.” The website generates a new poem every time the page is refreshed. The poems are named “I Am” because every poem generated begins with “I am.” These poems are not grammatically satisfactory or even coherent; they don’t follow proper forms like sonnets or ballads, nor do they follow a set rhyme scheme such as ABAB CDCD. The generator almost manages to form grammatically correct sentences by placing most of the words in the correct order, but the poems never quite succeed. The generator is also able to put related words into the same sentence or phrase. For example,

    “as
    a guy
    whos eyes
    are full of holes, left behind,
    left for
    dead, they all lie on
    death’s bed.”

    In this passage, we can see that the words “guy” and “eyes” can be used together to form a sentence. Although the literal meaning of the sentence doesn’t make sense, the generator knows that a “guy” can have “eyes.” These poems showed me that this generator creates poems with semi-logical diction but incoherent syntax.

    Reply
  17. Amy Yoo

    http://thinkzone.wlonk.com/PoemGen/PoemGen.htm

    The Poem Generator produces poems about a single topic, such as the “sea.” The poems are short, direct, and easily understood. On the website, you can keep pressing the “Make Poem” button and essentially generate an infinite number of poems on a single topic. The topic I chose was “sea,” and after pressing the button several times, I could recognize a pattern to the poems being generated. The poems contained traits of conventional poems: woeful exclamations, metaphors, and questions as well. Read by itself, a single poem might be unrecognizable as a computer-generated piece of work, but once you read more than one, it is difficult to move past the fact that the metaphors and statements are so broad that it is hard to find any connection among them. The structure of the poems remains relatively similar as well. Nevertheless, I thought it was very impressive, and amusing, that the website is able to generate an infinite number of poems on multiple topics. On the website, you can also adjust the concrete nouns, abstract nouns, transitive verbs, intransitive verbs, adjectives, and adverbs used in the constructed poems. This is interesting, because it means the poems being generated are really just words plugged into a formula rather than a creative piece of work.
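
    A minimal sketch of that kind of fill-in-the-blank formula; the word lists and template below are invented for illustration and are not the site’s actual data.

    import random

    # Editable word lists, like the ones the site exposes (these words are made up).
    concrete_nouns = ["sea", "wave", "shore", "gull"]
    abstract_nouns = ["sorrow", "longing", "silence"]
    adjectives = ["restless", "grey", "endless"]
    intransitive_verbs = ["sighs", "sleeps", "trembles"]

    # A fixed formula with slots; every "poem" is just new words plugged in.
    template = ("O {adj} {concrete}!\n"
                "Why does the {concrete} {verb}?\n"
                "All is {abstract}.")

    def make_poem():
        return template.format(
            adj=random.choice(adjectives),
            concrete=random.choice(concrete_nouns),
            verb=random.choice(intransitive_verbs),
            abstract=random.choice(abstract_nouns),
        )

    print(make_poem())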

    Reply
  18. Karina Lucero

    The software developer known as Andy Pandy on Twitter made a generator that creates new Friends scripts after being fed all of the scripts from the show. Reading through the generated script, I was able to sense the tone the generator wanted to capture, but it was still really confusing to read. There are a lot of grammatical and syntactical errors in the script, and certain lines seem to have simply been reused from the original scripts. The generator does, however, capture the behavior, or better yet, the essence of the characters. The line written for Phoebe, “No! I would like to propose to my kid?”, reminded me of how Phoebe tends to speak as if she’s asking a question. The generator also seems to have gotten the relationships among the characters wrong: in the generated script, Monica and Ross kiss, but in the show they’re siblings. It leads me to think that the generator still needs a lot of work before it can actually write an intelligible script.

    Reply
  19. Casey Coffee

    View story at Medium.com

    Matt Deutsch trained an LSTM recurrent neural network on the first four Harry Potter books and then used the network to produce a new chapter (a bare-bones sketch of this kind of character-level network appears at the end of this comment). The resultant chapter is technically grammatically correct, but many of the sentences, and all of the plot, are nonsense; admittedly, it is nonsense that is enjoyable, and often funny, to read, particularly for a fan of Harry Potter. From afar or at a glance, the chapter might appear to be a genuine excerpt from Harry Potter. The character names, the length of sentences, the syntax, and the organization of paragraphs all appear familiar, and only upon actually sitting down to read through the chapter does one realize that it makes no sense.

    When run through a spelling and grammar checker, the text appears to be flawless, but this is certainly not the case. The network is unable to differentiate between dialects: there are instances of Hagrid’s unique dialect, which is often spelled differently to reflect his pronunciation (e.g. “yeh” instead of “you”), being mixed into a sentence with more formal, technically correct spelling. Proper nouns and names seem to be placed somewhat randomly, or at least that is what sentences like “Harry saw Harry’s glasses” may indicate.

    At some points the strange arrangements of words almost appear poetic, though because the poetry results accidentally from the neural network’s imperfect learning, the duty of giving meaning to the poetry lies entirely upon the reader, as there is no author with an authoritative reading of it. In any case, the poetry of the words is mostly aesthetic, and very little meaning is conveyed in most of the text. Here is an example of a sentence that I found aesthetically interesting, but not particularly meaningful: “Harry stared at the shadowy clearing, and pointing to a long, old grin.” Overall, this example of machine generated text is far from sophisticated, but is certainly fascinating, entertaining, and worthy of further development.
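
    This is not Deutsch’s actual network, but as a rough illustration, here is a bare-bones character-level LSTM of the same general kind (using PyTorch), trained on a tiny two-sentence stand-in corpus so that it runs in seconds; with real novels and far more training, this sort of model produces the plausible-looking nonsense described above.

    import torch
    import torch.nn as nn

    # Tiny stand-in corpus; Deutsch trained on four whole novels.
    text = "Harry looked at Ron. Ron looked at Harry. " * 20
    chars = sorted(set(text))
    idx = {c: i for i, c in enumerate(chars)}
    data = torch.tensor([idx[c] for c in text])

    class CharLSTM(nn.Module):
        def __init__(self, vocab, hidden=64):
            super().__init__()
            self.embed = nn.Embedding(vocab, 16)
            self.lstm = nn.LSTM(16, hidden, batch_first=True)
            self.out = nn.Linear(hidden, vocab)

        def forward(self, x, state=None):
            h, state = self.lstm(self.embed(x), state)
            return self.out(h), state

    model = CharLSTM(len(chars))
    opt = torch.optim.Adam(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    # Train the network to predict each next character from the previous ones.
    for step in range(200):
        inp, target = data[:-1].unsqueeze(0), data[1:]
        logits, _ = model(inp)
        loss = loss_fn(logits.squeeze(0), target)
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Sample new "prose" one character at a time.
    state, last = None, data[:1].unsqueeze(0)
    out = []
    for _ in range(100):
        logits, state = model(last, state)
        probs = torch.softmax(logits[0, -1], dim=0)
        last = torch.multinomial(probs, 1).unsqueeze(0)
        out.append(chars[last.item()])
    print("".join(out))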

    Reply
  20. Jose O

    http://futurism.com/%E2%80%8Bthe-poem-that-passed-the-turing-test/
    http://motherboard.vice.com/read/the-poem-that-passed-the-turing-test

    As an undergraduate at Duke University, Zackary Scholl claimed his artificial-intelligence program passed some version of the Turing test by creating poetry indistinguishable from that written by human beings. This is the poem that passed the test:

    “A home transformed by the lightning 
    the balanced alcoves smother 
    this insatiable earth of a planet, Earth. 
    They attacked it with mechanical horns 
    because they love you, love, in fire and wind. 
    You say, what is the time waiting for in its spring? 
    I tell you it is waiting for your branch that flows, 
    because you are a sweet-smelling diamond architecture 
    that does not know why it grows. ”

    The poem is mildly incoherent in the way that some poetry often is; however, it comes apart when context is provided. “It works by having the poem dissected into smaller components: stanzas, lines, phrases, then verbs, adjectives, and nouns,” says Scholl. Knowing this, the reader understands that the words are chosen probabilistically rather than with the assumed intentionality and purpose that drive a poem, a distinction I tried to articulate in Thursday’s lecture. I still maintain the opinion that even though they appear to have a message, works of literature crafted by artificial intelligence fail to pass the Turing test until they show some semblance of purpose behind the wording, and thematic consistency. Perhaps I only believe that because I know the work was created by an AI; I would call this same work, if written by Robert Frost, thematically interesting but still not great poetry, which in my opinion would connect the loose threads more smoothly.

    Reply
  21. Kelsey Tang

    https://www.theguardian.com/technology/2016/may/17/googles-ai-write-poetry-stark-dramatic-vogons

    “he was silent for a long moment.
    he was silent for a moment.
    it was quiet for a moment.
    it was dark and cold.
    there was a pause.
    it was my turn.”

    In collaboration with Stanford University and University of Massachusetts, Google has been working on an AI technique called “recurrent neural network language model (RNNLM)”. The method basically assembles the following sentence by examining the previous words. The above excerpt is one of the results of the technique. Researchers supplied the first and last line, and then programmed the AI to fill in the lines between.

    Because the technique relies on analyzing the previous words, the resulting lines are somewhat redundant. One could argue that the redundancy is rather poignant, however. Context is important in this case. Had this poem been written by a human author, we could infer that the narrator is fixated on the long silence, evident in the repetitive phrasing and simple syntax. But because this poem is the product of an AI, it feels merely technical and redundant.

    Reply
  22. Esmeralda Torres Duran

    https://www.technologyreview.com/s/537716/machine-learning-algorithm-mines-rap-lyrics-then-writes-its-own/

    arxiv.org/abs/1505.04771: DopeLearning: A Computational Approach to Rap Lyrics Generation

    Student Eric Malmi from Aalto University in Finland developed a machine-learning algorithm to see whether it could successfully produce verses for rap songs. The algorithm is called DeepBeat, and it has created a rap song on the subject of love.

    “For a chance at romance I would love to enhance
    But everything I love has turned to a tedious task
    One day we gonna have to leave our love in the past
    I love my fans but no one ever puts a grasp
    I love you momma I love my momma – I love you momma
    And I would love to have a thing like you on my team you take care
    I love it when it’s sunny Sonny girl you could be my Cher
    I’m in a love affair I can’t share it ain’t fair
    Haha I’m just playin’ ladies you know I love you.
    I know my love is true and I know you love me too
    Girl I’m down for whatever cause my love is true
    This one goes to my man old dirty one love we be swigging brew
    My brother I love you Be encouraged man And just know
    When you done let me know cause my love make you be like WHOA
    If I can’t do it for the love then do it I won’t
    All I know is I love you too much to walk away though” – DeepBeat

    What I thought was interesting about DeepBeat is that the machine-learning algorithm can rap just as well as, or even better than, some current rappers. Although the last line is from one of Eminem’s songs, DeepBeat generates the majority of this rap song on its own. The machine produces verses about family, such as loving “my momma” or “my brother I love you,” which seems to be a personal touch to the rap song. Also, I thought it was pretty neat what kinds of puns and literary devices DeepBeat uses, such as “sunny Sonny girl.” In every single verse, DeepBeat mentions the word “love” at least once, which somewhat implies that it was written by a machine, since today’s rappers don’t really say the word love in every single verse. This particular love rap reads more like a poem than a rap song. However, if DeepBeat can create its own tune for this song, I’m sure I would probably buy it on iTunes.

    Reply
  23. Colburn Pittman

    https://twitter.com/oliviataters

    “caring about another person and letting them know that you care is every trendy sound put on one album.”

    “if we did all the things we are capable of, we would definitely be even more fun!”

    “i will have skinned a person alive and eaten them #universitychallenge”

    Rob Dubbin accidentally created the bot while experimenting with language manipulation of real-life teenage Twitter accounts. Olivia will also reply to people who follow her (http://qz.com/279139/the-17-best-bots-on-twitter/).

    While sometimes incredibly disturbing, there’s a weird sort of appeal to this Twitter bot. Some of the things it tweets, like the first two quoted above, evoke the feeling of reading something a teenager might actually say, and that’s also partly what scares me. All of it is very strange really, and even in its brighter moments, it’s hard to escape the horrible feeling of knowing that a program is writing this, and not exactly a person. Too strange. All of these machine-writing bots are strange and weird and EXTREMELY creepy. I think the poem bots are slightly safer, such as the Twitter bot @poem_exe, but really it’s too much. I need a vacation from this.

    Reply
  24. Quentin Ferrante

    https://mattfister.github.io/nanogenmo2015/final/52.html

    The Passionate Tale of Glory

    I stumbled upon this comical machine-generated novel through GitHub. I was struck by how sensible the prose sounded at first, especially in comparison to countless other novels on GitHub that consist of pages and pages of nonsense. Certain parts of the novel could almost pass for human-produced work, but these parts are few and far between, heavily outnumbered by phrases and sentences that are quite obviously the product of algorithmic authorship; the redundancies and simply strange wordings were at times interesting and at many others outright comical. The incident of “The Fucntional Lighthouse” comes to mind, wherein Delphia Enos, Berneice Goddard, and Monnie the assertive[sic] “travel to a functional lighthouse.” Monnie wastes no time in announcing that the place is a mystery to her, and that they have enough food to last for several days. The narrator interjects here that “The lighthouse was no use whatsoever to a submarine,” and that “The drawer was similar to an underpants” (TPTOG).
    I looked through the chapters, of which there are many. It seems that the algorithm takes a verb, like “searching,” and then adds two nouns for an objective and a setting, for example “Searching for Food in the Cave” and “Hunting in the Hayloft.” Other chapter/section titles are as simple as “The Blacksmith,” “The Mansion,” and “The Standard Forum.” In each location there is always someone “wondering how a (blank) is like a (blank),” and someone uttering or saying something. That is, the algorithm uses conventional literary phrase structures, such as “Monnie uttered,” but then fills in the blanks of each sentence with random words that are usually entirely unrelated, tangentially related, or simply nonsensical (“Monnie thought about how a hammer was a strike”). The narrative lacks sequence; the entire thing is made up of these randomized settings and happenings in each setting that are ambiguously worded and wholly undescribed.
    All in all, this novel definitely has potential if the coding were reworked to analyze sentences for actual readability (if that is even possible), but as it stands it is a redundant mess of words and events with no narrative structure. I was impressed at how the sentence structure worked, though: the punctuation and dialogue were well maintained, even if grammatical rules were overlooked frequently.

    Reply
  25. Darya Behroozi

    http://kevan.org/nanogenmo/2015secondedition.html

    “With some urgency, we ran to Ich bin ein Berliner. My guidebook claimed it was a clear statement of U.S. policy in the wake of the construction of the Berlin Wall. It was clearly also spoken in German. Passepartout said that he wasn’t literally from Berlin but only declaring his solidarity with its citizens.
    “I wonder if this is a place for a jelly doughnut in the north.” said Passepartout. Did it be? We thought so. I spent a few minutes studying the following passage. Things were never the same after the Soviet forces implemented the Berlin Blockade.”

    In kevandotorg’s novel “Around the World in X Wikipedia Articles,” Wikipedia’s Application Programming Interface (API) is used to supply location coordinates and descriptions for a 50,000-word product. The story manages to narrate a journey through the ten most Wikipedia-documented metropolitan areas. The exposition definitely attests to the process of auto-generation, as most of the story is written within the historical and cultural context of the surrounding environment. Aside from a few minor hiccups in sentence structure, the general story is surprisingly cohesive in its narrative construction. If the label of “machine-generated text” were stripped from its summary, a passerby reader would most likely mistake the writer for a human; of course, this is only possible if the reader overlooks the protagonist’s oddly extensive knowledge of local architecture.
    Interestingly enough, kevandotorg faced a significant obstacle with the first draft of the novel. On its first run-through, a wayfinding bug kept the narrator from triggering the New York checkpoint, causing the narrator to circle the world until memory ran out and the program crashed. The finalized text ultimately gained roughly 10,000 extra words once the issue was cleared. The topic of bugs within machine-generated text raises an interesting question for the overarching theme of media interacting with literature. While human error can easily be edited out during the process of writing, a single bug in auto-generated text can essentially destroy the expected end product. While machine writing has its own creative output within the genre of bugs and hacking, the issue in this particular novel is obviously a systematic error rather than an artistic choice. Thus, the human engineer still manages to be an integral part of the machine’s production.
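
    For a sense of the kind of API calls involved, here is a minimal sketch (not kevandotorg’s actual code) that asks Wikipedia’s API for articles near a set of coordinates and pulls their plain-text introductions, the raw material such a narrator could stitch into prose; the Berlin coordinates and sentence framing are only illustrative.

    import requests

    API = "https://en.wikipedia.org/w/api.php"

    def articles_near(lat, lon, radius_m=10000, limit=5):
        # Ask Wikipedia's API for articles with coordinates near a point.
        params = {"action": "query", "list": "geosearch", "format": "json",
                  "gscoord": f"{lat}|{lon}", "gsradius": radius_m,
                  "gslimit": limit}
        pages = requests.get(API, params=params).json()["query"]["geosearch"]
        return [p["title"] for p in pages]

    def intro(title):
        # Fetch the plain-text introduction of an article.
        params = {"action": "query", "prop": "extracts", "format": "json",
                  "exintro": 1, "explaintext": 1, "titles": title}
        pages = requests.get(API, params=params).json()["query"]["pages"]
        return next(iter(pages.values())).get("extract", "")

    # e.g. a stop near central Berlin: stitch nearby article intros into prose.
    for title in articles_near(52.52, 13.405, limit=3):
        print(f'My guidebook claimed that {title} was notable. "{intro(title)[:120]}..."')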

    Reply
