From Olivia Solon, “Facial recognition database used by FBI is out of control, House committee hears,” Guardian (March 27, 2017):
“Approximately half of adult Americans’ photographs are stored in facial recognition databases that can be accessed by the FBI, without their knowledge or consent, in the hunt for suspected criminals. About 80% of photos in the FBI’s network are non-criminal entries, including pictures from driver’s licenses and passports. The algorithms used to identify matches are inaccurate about 15% of the time, and are more likely to misidentify black people than white people. These are just some of the damning facts presented at last week’s House oversight committee hearing, where politicians and privacy campaigners criticized the FBI and called for stricter regulation of facial recognition technology at a time when it is creeping into law enforcement and business.”
Bence Kollanyi, Philip N. Howard, and Samuel C. Woolley on Twitter bots and the 2016 election:
“We find that political bot activity reached an all-time high for the 2016 campaign. (1) Not only did the pace of highly automated pro-Trump activity increase over time, but the gap between highly automated pro-Trump and pro-Clinton activity widened from 4:1 during the first debate to 5:1 by election day. (2) The use of automated accounts was deliberate and strategic throughout the election, most clearly with pro-Trump campaigners and programmers who carefully adjusted the timing of content production during the debates, strategically colonized pro-Clinton hashtags, and then disabled activities after Election Day.”
Project on Algorithms, Computational Propaganda, and Digital Politics:
“Political bots are manipulating public opinion over major social networking applications. This project enables a new team of social and information scientists to investigate the impact of automated scripts, commonly called bots, on social media. We study both the bot scripts and the people making such bots, and then work with computer scientists to improve the way we catch and stop such bots. Experience suggests that political bots are most likely to appear during an international crisis, and are usually designed to promote the interests of a government in trouble. Political actors have used bots to manipulate conversations, demobilize opposition, and generate false support on popular sites like Twitter and Facebook from the U.S. as well as Sina Weibo from China.”
Tim Johnson for McClatchy (November 4, 2016): “A stream of recent sneaky tweets and social media posts tells people they can ‘vote from home’ by simply sending a text message, a devious tactic to suppress votes.
The U.S. election is not ‘American Idol,’ and voters cannot – repeat CANNOT! – cast ballots by texting from their cellphones. Twitter says it is taking the tweets down.
The last-ditch appeals underscore a broader issue of concern in an especially contentious political year: the increasing usage of robotic networks, or botnets, to flood the internet in an attempt to influence the election, squelch public debate, spread lies and manipulate voters….”
From The Verge (October 7, 2016):
“For the last month, a Twitter bot named “Liz,” with the handle @arguetron, has been quietly engaging with the internet’s seediest subculture.
Well-versed in internet bigotry of all stripes, the bot makes simple statements (five or six per hour) designed to rile up 4chan commenters, Breitbart disciples, Trump supporters, anti-vaxxers, “censorship” whiners, Gamergaters, anti-feminists, transphobic Reddit boys, and the rest of the alt-right. The tweets aren’t particularly florid or aggressive, just calculated and crisp. The perfect bait….”
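The mechanics described above are simple to reproduce: a bot of this kind needs only a list of canned statements and a timer that spaces posts out to five or six per hour. The sketch below is a hypothetical illustration, not @arguetron’s actual code; the `post` callable stands in for a real Twitter API call.

```python
import random
import time

# Placeholder statements, illustrative only -- the real bot drew on a
# much larger pool of "calculated and crisp" one-liners.
STATEMENTS = [
    "Vaccines save lives.",
    "Climate change is real.",
    "Everyone deserves equal rights.",
]

def next_interval(posts_per_hour=(5, 6)):
    """Seconds to wait so the bot averages five to six posts per hour."""
    rate = random.uniform(*posts_per_hour)
    return 3600.0 / rate

def run(post, rounds=3, sleep=time.sleep):
    """Post a random statement, then wait out the interval.

    `post` is injected so the loop can be tested without touching any
    real social-media API; `sleep` is injectable for the same reason.
    """
    for _ in range(rounds):
        post(random.choice(STATEMENTS))
        sleep(next_interval())
```

At a uniform rate between 5 and 6 posts per hour, `next_interval` always returns between 600 and 720 seconds, which matches the cadence the article reports.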
Representative thread <here>