Machine vision

// Post as a comment on this page your commentary, along with a link, on a particular instance of machine vision. Topics: surveillance, drones, facial recognition, policing, population management.


25 thoughts on “Machine vision”

  1. gia

    Have you checked for blobs lately? Probably not, but luckily blob detection is being taken care of by expert computer vision programs. Blob detection is a computer vision method aimed at detecting regions of a digital image that differ in properties, such as brightness or color, from their surroundings. A blob is a region of an image in which some properties are constant or approximately constant; all the points in a blob can be considered, in some sense, similar to each other. Automatic detection of blobs in image datasets is an important step in the analysis of large-scale scientific data. In astronomy, blob detection is especially necessary for picking out potentially interesting regions in the vastness of space.
    A team of astronomers using the Atacama Large Millimeter/submillimeter Array (ALMA) and the European Southern Observatory’s (ESO) Very Large Telescope discovered a rare object in the distant universe, and it holds many secrets. ALMA’s “unparalleled” ability to study, observe, and analyze light coming from cool dust clouds in distant regions was used by a team of astronomers led by Jim Geach of the Centre for Astrophysics Research at the University of Hertfordshire to analyze the far reaches of deep space. ALMA detected several rare, extremely bright gas cloud formations. By successfully identifying these blobs in space, astronomers were finally able to identify certain sources of emission coming from the Lyman-alpha Blobs (LABs) and understand them as gigantic gas clouds. With this new data, it was discovered that the sources of emission are located at the heart of the blob, where stars form over 100 times faster than in the Milky Way.
    Without machine vision programs like ALMA’s blob detection, discoveries like the Lyman-alpha Blobs would be much harder to find. Space is vast, and contains a large amount of information that is as yet unseen. Machine vision allows scientists and astronomers to narrow their searches to regions of space with something to study, rather than spending hours combing the skies fruitlessly for gas clouds like those above.
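
    To make the method concrete, here is a minimal sketch of Laplacian-of-Gaussian blob detection using the open-source scikit-image library; the survey image filename is hypothetical, and this illustrates the general technique rather than the ALMA team’s actual pipeline.

    ```python
    # A sketch of Laplacian-of-Gaussian blob detection with scikit-image;
    # the image filename is a hypothetical stand-in for a real survey tile.
    from skimage.feature import blob_log
    from skimage.io import imread

    image = imread("sky_survey_tile.png", as_gray=True)  # hypothetical image

    # blob_log returns one (row, col, sigma) triple per detected bright region;
    # the blob's radius is roughly sigma * sqrt(2)
    blobs = blob_log(image, min_sigma=2, max_sigma=30, num_sigma=10, threshold=0.1)
    for row, col, sigma in blobs:
        print(f"bright region at ({col:.0f}, {row:.0f}), radius ~{sigma * 2**0.5:.1f}px")
    ```
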
    http://www.discoversdk.com/blog/blob-detection
    http://www.natureworldnews.com/articles/29017/20160921/secrets-of-giant-space-blobs-discovered-by-alma.htm

  2. Orion_0

    Bank Leumi, based in Israel, recently moved to incorporate SecuredTouch’s touch-screen method of confirming a user’s identity in its Leumi Card mobile application. SecuredTouch’s behavioral biometric process runs as a background application that ‘watches’ and analyzes a host of a cell phone user’s unique habits and signature data points: navigation across and within applications, finger pressure, swipe speed, and so on. Through this, fraudulent banking activity and bot attacks on credit card holders’ accounts may be detected and mitigated significantly, while the user enjoys the perks of no longer being denied access for forgetting a long password, being present in another country, or using another device. The program establishes a stronger connection between a unique user and their device(s), supporting the idea that machinic identity confirmation is a beneficial security utility insofar as it confirms a user’s identity for that user’s own good. Yet it also points to the insecurity users experience when considering their privacy in relation to their devices. If the program runs in the background while a user cursorily glances at illicit or even mundane material, or, by the same token, deeply considers the content of any number of web pages, is the user being [passively] asked to share the essence of who they are (or the facade of who they appear to be) with their device? This also calls into question whether the application routes any or all of this information to a corporate network, or even back to SecuredTouch itself, for ‘advancement purposes.’ While the technology appeals to the security-conscious element of who we can be, it undermines the self-aware and private element of who we are and ought to be, i.e., social animals that retain some privacy.
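
    To make this concrete, here is a minimal sketch of the anomaly-detection idea behind behavioral biometrics, using scikit-learn’s IsolationForest; the feature names and values are hypothetical stand-ins, not SecuredTouch’s actual model.

    ```python
    # A sketch of behavioral-biometric anomaly detection with scikit-learn;
    # the features (pressure, swipe speed, touch duration) and their values
    # are hypothetical stand-ins, not SecuredTouch's actual model.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    # one row per session: [mean_pressure, swipe_speed_px_s, touch_duration_ms]
    owner_sessions = np.array([
        [0.42, 810, 95], [0.45, 790, 102], [0.40, 845, 90],
        [0.44, 800, 98], [0.43, 825, 94], [0.41, 815, 99],
    ])
    model = IsolationForest(contamination=0.01, random_state=0).fit(owner_sessions)

    new_session = [[0.71, 1420, 40]]      # markedly different touch habits
    print(model.predict(new_session))     # -1 flags a likely different user
    ```
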

    http://www.biometricupdate.com/201611/leumi-card-integrates-mobile-identity-verification-solution-based-on-behavioral-biometrics

    http://securedtouch.com/solutions/

  3. Kaitlin Robinson

    A Chinese tech company, Baidu, claims its facial recognition technology has industry-leading accuracy. Baidu is selling use of this technology to a Chinese tourist destination, the historic town of Wuzhen, which is preserved as a historic park; this will be the software’s first use in a tangible physical setting. In lieu of passes, ID or passport, or fingerprinting, guests will have their picture taken upon arrival, and facial recognition will be used to regulate visitors’ comings and goings, speeding up what the park previously described as a lengthy entry process. Baidu claims to have talks in the works with other theme parks as well. Technology of this kind seems suited to larger-scale operations, too: a theme park like Disneyland could use facial recognition to eliminate tickets and replace processes like hand stamps, while saving its customers time waiting in lines. As a replacement for tickets, the technology seems useful and innovative.
    However, this technology raises questions about the database it creates. Although Baidu owns the facial recognition technology, and could sell it to other companies, it will not operate the system in Wuzhen park; instead, Baidu will license the product to the park, distancing itself from responsibility for the data created. This means that the database of biometric data created by the software belongs to Wuzhen park, and it is at the park owner’s discretion what to do with it. And while a public park having access to your photo might not seem like a huge deal, the implications of normalizing this technology span beyond this situation. Imagine a more widespread implementation of this technology, with a large number of privately owned enterprises each holding a database of information that they alone decide how to use. This technology could easily translate to surveillance over a larger area, rather than scanning at certain known locations, where the boundaries are not as clear-cut as they are at the park. Used for security, rather than as a substitute for tickets, it leads to more troubling questions about boundaries and the intended use of the database.
    http://www.theverge.com/2016/11/17/13663084/baidu-facial-recognition-park-wuzhen

  4. Alice He

    Whenever people talk about “facial recognition,” the usual reaction is fear over the loss of privacy. But what most people don’t realize is that, besides being used to find your perfect soulmate on dating services, facial recognition software can be used for identity protection. At the beginning of 2015, the Arizona Department of Transportation (ADOT) adopted facial recognition software to detect identity thieves. The software, called NeoFace, was developed by the NEC Corporation of America. Because it’s not difficult to get an identification card without already having a photo ID (you’re getting an ID for that purpose!), it’s easy for identity thieves to use your name to get themselves an ID. To combat identity theft, the software scans ADOT’s database of photographs taken for IDs and flags any instance of similar facial features, which is then reviewed by human staff for further investigation. In the past month, ADOT caught two thieves who were trying to avoid criminal charges. This shows that facial recognition can be used to protect one’s identity.
    The adoption of facial recognition software for identity protection shows that society is relying less and less on physical forms of identification. Instead, society is moving toward using the human body itself as identification. The government already uses fingerprints to verify identity because they are unique to each person, rather than relying solely on facial recognition by another human. After a trip abroad, the officer in charge of logging your entry to the country is supposed to match your face to the photograph in your passport or ID—which can be easy to fake—and scan your fingerprints—which are almost impossible to fake—and match them against a database. Facial recognition software is just another means of identification, but one that wastes fewer resources.
    The biggest advantage, however, is that with this software, workers won’t have to manually scan each photograph, and resources can be reallocated toward speeding up the dreaded long wait at the DMV.
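
    To illustrate the flagging step, here is a minimal sketch of duplicate detection over face embeddings; the license numbers, vectors, and threshold are all hypothetical, since NEC has not published NeoFace’s internals.

    ```python
    # A sketch of the duplicate-flagging idea: compare face embeddings pairwise
    # and send high-similarity pairs to a human. IDs, vectors, and the threshold
    # are hypothetical; NEC has not published NeoFace's internals.
    from itertools import combinations
    import numpy as np

    def cosine_similarity(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    rng = np.random.default_rng(0)
    # license number -> face embedding (random stand-ins for real vectors)
    embeddings = {f"AZ-{i:06d}": rng.normal(size=128) for i in range(5)}
    # simulate the same face enrolled under a second license
    embeddings["AZ-000099"] = embeddings["AZ-000001"] + rng.normal(scale=0.01, size=128)

    THRESHOLD = 0.95  # pairs above this go to detectives for review
    for (id_a, vec_a), (id_b, vec_b) in combinations(embeddings.items(), 2):
        if cosine_similarity(vec_a, vec_b) > THRESHOLD:
            print(f"flag for human review: {id_a} vs {id_b}")
    ```
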

    http://www.abc15.com/news/region-west-valley/glendale/adot-using-facial-recognition-software-to-combat-license-forgery (video might autoplay)
    http://www.abc15.com/news/region-phoenix-metro/central-phoenix/adot-using-facial-recognition-software-to-catch-thieves (video might autoplay)

  5. Hailey Hoyt

    Machine Vision Participation Post

    Cameras, drones, and other forms of surveillance lurk around every corner, and it seems impossible to escape the modern-day Big Brother. Even more disturbing is the fact that surveillance technology and machine vision also have a hand in contemporary warfare, as U.S. drone strikes are common occurrences in places such as Pakistan and Yemen. While it can be argued that self-flying drones programmed to make kill shots possess some agency, the humans and governments that produce these machines are ultimately accountable for their actions. Drone use makes war and violence easy: since the drone does not possess a conscious, human mind, empathy is removed from the equation. Unsurprisingly, American drones have killed several thousand people in recent years, and in response to these tragedies, many artists are creating pieces that address these senseless human rights violations. Artist Tomas van Houtryve created a photo series entitled “Blue Sky Days” as a memorial to these drone killings, the title a nod to the fact that drones can only fly when the sky is clear. Van Houtryve uses his own personal drone to capture images over the United States that replicate locations where drone strikes have occurred. Some of the locations in his photo series include a wedding, a schoolyard, and a public park.
    Van Houtryve’s series aims to draw attention to the “nature of personal privacy, surveillance, and contemporary warfare.” The audience is provided with an eerie example of how the government views the world: looking down, acting as a god. While the perspective is particularly moving, the most significant portions of the series are the descriptions attached to the images. Despite the geographic separation from the actual strike sites, Van Houtryve captions his image replicas with brief descriptions of drone killings that occurred in similar settings. For instance, the image of a wedding in Pennsylvania is paired with the caption “December 2013, a U.S. drone reportedly struck a wedding in Radda, in central Yemen, killing 12 people and injuring 14.” While exceptionally morbid, the series conveys his message about thoughtless killing and the repercussions of modern surveillance technology and warfare. His “Blue Sky Days” collection leaves the viewer pondering whether grey, stormy skies are better suited to our modern world.

    https://www.opensocietyfoundations.org/moving-walls/22/blue-sky-days

  6. Daniel Hegedus

    The Arizona Department of Transportation (ADOT), the agency that issues state ID cards to its citizens, has recently adopted a facial recognition program that helps stop identity theft. When your driver’s license is scanned, the program checks whether your facial features match any other license in the U.S. If a match comes up, that means there is more than one driver’s license belonging to the same face, meaning one of those licenses most likely belongs to an identity thief. When the program finds a match, it sends the two identification cards to detectives for further investigation. According to the article, with the help of this program, over a dozen identity thieves have been caught this year alone. Among them was a man with multiple felonies, who was using someone else’s ID to avoid his warrants.
    Facial recognition software such as ADOT’s can make the world a much safer place. It is software that works together with detectives, which means that instead of taking jobs away from humans, it merely makes their jobs easier. As long as humans are also involved in ADOT’s process, the machine poses no threat to humanity. It is a program that could greatly reduce crime if it were implemented in other states, and all over the world as well. If it were implemented on such a large scale, the practice of identity theft would slowly have to die, since thieves could be caught much more easily than before.
    There is, however, also the possibility of abusing this program. If, for example, some employee wanted to find out everything about a person (say, a celebrity), all he would have to do is input a picture, and that person’s driver’s license, address, and so forth would be put forward. This plays with people’s privacy rights, and could even put people in danger. The concept of facial recognition is an intriguing one that could be used both for security and against it. If ADOT’s program is going to be used nationwide, it must be completely secure, so that it can perform its function without putting anyone in danger.

    http://www.abc15.com/news/region-phoenix-metro/central-phoenix/adot-using-facial-recognition-software-to-catch-thieves

  7. Arianna Padilla

    Self-driving cars appear to be the new trend among car companies; Tesla’s “autopilot” feature stirred up considerable interest. Just recently, General Motors (GM) revealed plans for a semi-autonomous driving system, referred to as “Super Cruise,” that uses facial recognition. If the system detects twists and turns in the road, or senses that the driver is not paying attention, a series of alerts will notify the driver. If the driver still does not respond, the car will slow down and automatically turn on the hazard lights. The system uses facial recognition software to determine whether the driver is falling asleep or inattentive.
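
    For a feel of what “detecting a sleepy driver” can mean in code, here is a minimal sketch of the eye aspect ratio (EAR), a widely used drowsiness cue from facial-landmark research; the landmark coordinates are toy values, and GM has not disclosed how Super Cruise’s monitor actually works.

    ```python
    # A sketch of one published attention cue, the eye aspect ratio (EAR):
    # it falls sharply when the eyelids close. Landmarks here are toy values;
    # GM has not published how Super Cruise's driver monitor actually works.
    import numpy as np

    def eye_aspect_ratio(eye):
        """eye: six (x, y) landmarks around one eye, in outline order."""
        v1 = np.linalg.norm(eye[1] - eye[5])   # vertical eyelid distances
        v2 = np.linalg.norm(eye[2] - eye[4])
        h = np.linalg.norm(eye[0] - eye[3])    # horizontal eye width
        return (v1 + v2) / (2.0 * h)

    EAR_THRESHOLD = 0.2   # EAR below this for many frames suggests closed eyes

    open_eye = np.array([[0, 2], [2, 4], [4, 4], [6, 2], [4, 0], [2, 0]], float)
    shut_eye = np.array([[0, 2], [2, 2.3], [4, 2.3], [6, 2], [4, 1.7], [2, 1.7]], float)
    for name, eye in [("open", open_eye), ("shut", shut_eye)]:
        ear = eye_aspect_ratio(eye)
        print(name, round(ear, 2), "ALERT" if ear < EAR_THRESHOLD else "ok")
    ```
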
    Self-driving features are already controversial; the death of Joshua Brown occurred while Tesla’s autopilot feature was engaged. Will using facial recognition with the self-driving feature make it safer? In order for this feature to work correctly, the facial recognition program would have to be advanced enough to recognize an array of faces. Theoretically, the idea sounds effective. Because the self-driving feature is not intended to be completely independent, initiating a facial recognition feature would force the driver to stay focused in order to avoid hazardous situations.
    However, most software comes with faults. Testing a program as big as this could be extremely dangerous. Tesla’s autopilot feature needed improvements, and this was not recognized until a fatal accident occurred. The same issue could exist with this program. The system is also required to detect dangerous turns in the road; what happens if it fails and the car does not slow down as it’s supposed to? The use of artificial intelligence sounds efficient on paper, but there are still many risks in using it, especially in something already so dangerous.
    GM has not revealed any more details regarding “Super Cruise,” but the company plans to do so in 2017.

    http://www.reuters.com/article/us-gm-selfdriving-idUSKBN13N2CY

  8. Esmeralda Torres Duran

    As if facial recognition and image recognition couldn’t get any more advanced: Amazon has come out with a program called Amazon Rekognition, which can “[recognize] human faces, identify their emotions, and label objects” (The Verge). This type of program is kind of like a game, since it learns to identify what people look like and then categorizes them. For example, the demonstration linked here shows how Amazon Rekognition can analyze a photo and report whether it contains a face, whether the face appears female, and whether or not the figure is smiling. Facial recognition, it seems, is advancing to the point of reading emotions!
    Not only is this an interesting tool, it can also help businesses add a “layer of security” (The Verge). Amazon Rekognition learns to “tag” photos and categorize them so the owner does not have to do so individually. It can be a great stepping stone in teaching machines to identify images and faces in a more advanced way. The program can even identify a dog, for example, and name its breed. It’s almost like a Google search, but specifically for image recognition!
    Amazon Rekognition is a good fit for anyone looking to build a “smart” marketing billboard, as The Verge suggests. It stands out from the ordinary: I have never really heard of a program that can look at an image and state directly what breed and gender the subject is. It may seem like complete randomness to try to identify an animal, but Amazon Rekognition does seem to remove much of the complexity of computer vision for its users (Amazon Web Services). Also, one does not need a computer to use this program, since Amazon Rekognition can be accessed from a smartphone as well in order to analyze and sort images for the consumer.
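
    For the curious, the service is exposed as a web API; here is a minimal sketch of calling it through the boto3 Python SDK, assuming configured AWS credentials and a hypothetical local photo.

    ```python
    # A sketch of calling the Amazon Rekognition API through boto3; it assumes
    # AWS credentials are configured, and photo.jpg is a hypothetical file.
    import boto3

    rekognition = boto3.client("rekognition", region_name="us-east-1")
    with open("photo.jpg", "rb") as f:
        image = {"Bytes": f.read()}

    # object and scene labels, e.g. a dog photo can yield breed-level labels
    for label in rekognition.detect_labels(Image=image, MaxLabels=5)["Labels"]:
        print(label["Name"], round(label["Confidence"], 1))

    # per-face attributes, including the smile and gender guesses described above
    for face in rekognition.detect_faces(Image=image, Attributes=["ALL"])["FaceDetails"]:
        print("smiling:", face["Smile"]["Value"], "gender:", face["Gender"]["Value"])
    ```
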

    http://www.theverge.com/2016/11/30/13799582/amazon-rekognition-machine-learning-image-processing
    https://aws.amazon.com/rekognition/

  9. Casey Coffee

    https://www-03.ibm.com/press/us/en/pressrelease/50688.wss

    Aerialtronics, a maker of unmanned commercial aircraft, has paired with the IBM Watson Internet of Things (IoT) Platform. This pairing allows drones to carry computing capabilities that can analyze images intelligently and relay information back to the drones’ users in real time. These drones have a variety of positive potential uses. Aerialtronics hopes to use them with American businesses to analyze traffic patterns or monitor and inspect cell phone towers, wind turbines, and oil rigs. They also have the potential to work with governments and law enforcement. While the tone of the IBM article on this business pairing was positive, detailing the potential uses of Aerialtronics and Watson IoT for crowd safety, the idea of governments and police with access to intelligent surveillance technology may have some more daunting connotations.

    These Watson-enabled drones will undoubtedly be incredibly useful and contribute to the effectiveness of regulations on business and the increased efficiency of inspections of business operations. They also have the potential to be used for more sinister purposes. Given the examples in the past few years of local law enforcement equipped with military-grade weapons and crowd dispersal strategies that are often more violent than would appear necessary, drones add another level of power to these already unequal encounters. Intelligent drones designed for crowd safety (i.e. identifying exits that are overcrowded and should be cleared) could be used to locate crowds of protestors and analyze points of attack. This view is admittedly dystopian, but it is not too far from our reality.

    I do not, however, think that this possibility should discourage the development of intelligent machine vision. These Aerialtronics drones can, after all, be used to give government organizations like the EPA more power to do inspections than they would have with cut budgets and insufficient numbers of employees. On the other hand, drones with cameras have already been used to patrol the border between the United States and Mexico. While they have proved incredibly ineffective so far, drones with intelligence might be used for profiling, encoding our biases into technology with unforeseeable consequences. So the use of these intelligent drones by governments and law enforcement should be monitored closely and warily, lest they become tools of oppression, rather than progress.

  10. Helen Koo

    https://www.theguardian.com/technology/2016/oct/25/airport-body-scanner-artificial-intelligence

    The TSA’s introduction of full-body scanners – machines that use backscatter X-ray and, more recently, millimeter-wave scans to produce images of the individual undergoing inspection – has been a source of controversy over the last decade, as issues of privacy and consent arise when machines can render in-depth images of the bodies being scanned. Now, a Bill Gates-backed startup, Evolv Technology, is conducting the first public trials of its high-speed body scanners; these AI-powered machines can do the work of the TSA’s body scanners in a fraction of a second by employing computer vision and machine learning.

    This would allow for security procedures that survey and scan individuals for concealed items and safety hazards without stopping, or even slowing down, any of the subjects. Such technology would permit mass implementation in a much wider range of venues than the airports that have been its primary users: from the train stations (Los Angeles’s Union Station and Washington DC’s Union Station) where Evolv will conduct its public trials to, eventually, structures as small-scale as shopping malls or even individual stores.

    More pressing here is the matter of privacy and consent. Arguably, post-9/11 America and the War on Terror campaign brought about a mentality in which citizens are prompted to accept, as an automatic and assumed sort of understanding, that they are to sacrifice certain, and at times significant, claims to privacy in exchange for the improved safety of the nation as a whole. The revelation that the backscatter X-ray machines previously implemented by the TSA rendered detailed images of the scanned body provoked a backlash, as individuals deemed it an explicit and uncomfortable invasion of their privacy. What this new technology – scanners so rapid that individuals do not have to be stopped in their normal paths for the surveillance to be conducted – entails is the same question of privacy, but with larger consequences and more at stake.

    With the TSA’s body scanners, individuals are fully cognizant that such searches are being conducted, and are actively aware of the very precise moment in time and space in which the surveillance occurs. When scanners eliminate the necessity for such stops and individuals are allowed to go about their typical routines without an explicit alert when they are being scanned, it’s not hard to imagine a security routine in which individuals are scanned, surveyed, and policed without their active knowledge. And while, to some degree, the TSA’s security processes eliminated some freedom by presenting a “choice” in which individuals could either submit themselves to these checks or not be able to fly at all, technology like the one Evolv Technology is hoping to put into play could eliminate whatever small agency the individual previously held in being able to actively place himself under surveillance at a moment, and under terms, of his choosing.

    If such technology were to become available in mass quantities, any sort of public domain – concert venues, bookstores, and perhaps even classrooms – could become a space of constant and silent virtual surveillance. It’s important to examine the acceptance that is so easily granted to the notion of such willing privacy sacrifices in the name of security; already, governmental omnipresence and oversight of what individuals deem private correspondence, both online and over the phone, have been cause for alarm for some who consider such surveillance an invasion of their rights. If – and, as it seems now, when – technology develops toward a practice in which the kind of surveillance that produces full-body, in-detail images of individuals is implemented without the need for subjects’ explicit submission, it raises the question of agency, and the lack thereof. More specifically, it raises the idea that this agency is not simply given up by the subjects but quietly and unassumingly taken by institutions and systems; and with how easy this technology would be to implement, there exists also the threat of its mass adoption and how deeply it could pervade the everyday spaces of daily life.

  11. Kieran Bates

    Self-driving cars are the almost universally undisputed future of personal transportation.
    Once widespread use of self-driving cars occurs, many lives will be saved and transportation will be much more efficient. Tesla and Google already have self-driving cars out on the road, with companies like Volvo, BMW, Nissan, and Toyota promising autonomous cars as early as 2020. So far there has been only one fatal crash in a self-driving car: a Tesla model that wasn’t able to distinguish the side of a truck from the sky. By the time the first big wave of affordable cars hits the market, the safety technology will probably be many times better than it is right now.
    Although autonomous vehicles will drastically decrease automobile deaths, it will be impossible to completely eliminate them. In fact, as humans hand control of their lives and others’ over to computers, ethical and moral questions must be addressed. For instance: what should the car do in the case of an unavoidable crash? The seemingly obvious answer would be to perform the maneuver with the least chance of death to its occupants and others, but not all of these situations are so cut-and-dried. What happens when the car is left with a choice of either killing its two occupants or two pedestrians? Will the car be able to analyze the societal impact of each decision and choose the path that has the least effect on the rest of the world? Or will the car always choose to protect its occupants?
    This is a fascinating catch-22 dilemma that surfaced in MIT’s technology magazine in 2015. The best solution drawn by the article was that the car should simply choose the path with the best chance of resulting in the fewest deaths, even if it meant the death of the occupants. While this solution would likely leave the clearest conscience ethically, it would be hard for people to hand their lives over to a machine, no matter how much safer it actually makes them. I think that the number of lives saved by self-driving cars would drastically outweigh the ethical implications of an autonomous driver, although I equally understand the qualms of those who would oppose them. As self-driving cars become more prevalent in society, it will be interesting to see the legal and social responses to the unfortunately inevitable accidents that will occur.

    https://innovately.wordpress.com/2015/10/27/this-is-the-solution-to-the-infamous-self-driving-car-ethics-paradox/

  12. Daisy Fernandez

    If we thought jobs for English majors were screwed, mafias and police officers are next; they will soon be replaced by robot hitmen, thanks to advances in facial recognition and human apathy about a job well done. We don’t seem to care that harmless black men are brutally killed, yet we do care when we give jobs to machines.
    The University of Montreal’s Yoshua Bengio believes that such technology, and such doom, is near; using deep learning (machine learning based on a set of algorithms that attempt to model high-level abstractions in data) and high-tech facial recognition, these killing machines will be able to take out one single victim in a crowd of thousands, and perhaps pull the trigger for no apparent reason (just like the officer who shot Michael Brown). The machine doesn’t necessarily have to look human, either; it will be modeled and programmed specially for killing. Once it receives a bounty, the bot could stroll/roll/fly through the streets, scroll through images in the database, and execute.
    Luckily (or not), this type of technology is still theoretical, though it is not farfetched. Officers in Dallas used a “bomb robot” after a sniper killed five policemen; the robot was controlled by a human officer. However, it’s important to note that machines can help get the job done, especially when it’s very dangerous and when people are recording on their phones. Currently, there are no laws or regulations for lethal autonomous weapons, yet there are advocacy groups who argue that it’s far too easy for a machine to kill (no emotion).
    Facial recognition is the star of this development, and recently there have been advances in programs that map a “person’s face, including distances between the eyes, nose, and mouth.” Bengio’s deep learning research also helps with facial recognition: if a computer is given only half an image of a person’s face, it can guess the other half well.
    A similar comparison can be made with the US military’s use of drones. Scientists such as Stephen Hawking have also urged a ban on autonomous weapons. If robots created for our entertainment are scary (Westworld, Her, Ex Machina), ones built for killing are scarier still; reading this article left me in a cold sweat.

    http://motherboard.vice.com/read/facial-recognition-robot-hitmen

  13. Jose Almaguer

    The emergence of drones and facial recognition in our society seems to be gradually progressing in its technological advancement. Each of these technologies alone poses several questions about the benefits, as well as the ethical issues, that come with its development. But what if these two technologies were to combine? A tech company known as Zero Zero has done just that with the creation of Hover Camera. Zero Zero’s Hover Camera is a small, portable drone that is able to lock onto a specific person using visual sensors in conjunction with face and body recognition algorithms. At the date of the article’s release, the company stated that the drone was capable of following only a person that was initially selected, but that by the time of the product’s availability for purchase, Hover Camera would be able to scan an entire area for faces. The product is now available for purchase, but the website does not seem to specify whether it can scan an entire area for faces.
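
    As a rough illustration of how lock-and-follow might work, here is a minimal sketch pairing OpenCV’s stock face detector with a hypothetical flight command; this is an illustration, not Zero Zero’s actual algorithm.

    ```python
    # A sketch of the lock-and-follow idea with OpenCV's stock face detector;
    # send_drone_command is a hypothetical stand-in for a real flight API.
    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(0)  # stand-in for the drone's onboard camera feed

    def send_drone_command(yaw, climb):
        print(f"yaw {yaw:+.2f}, climb {climb:+.2f}")  # hypothetical flight call

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) > 0:
            x, y, w, h = faces[0]              # follow the first face found
            frame_h, frame_w = frame.shape[:2]
            # steer so the tracked face stays centered in the frame
            send_drone_command(yaw=(x + w / 2 - frame_w / 2) / frame_w,
                               climb=(frame_h / 2 - (y + h / 2)) / frame_h)
    ```
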

    Yet, there are still multiple layers of conversation to address here. Hover Camera is marketed as a drone for recording personal vacations or adventures with a hands-free aerial camera. This in itself is an exciting advancement for the leisurely use of drones, but what I’m more interested in are the possible ethical issues it presents. Privacy issues come with a drone like Hover Camera if the technology develops to the point of being capable of filming a specific person without their knowledge. This takes the meaning of “stalker” to a whole new level, as the stalker is able to follow someone without even having to operate the drone itself. Furthermore, if the facial recognition component were to develop further, it leads to the question of which governmental or security agencies are pushing this technology forward and keeping your personal facial features on file. We know now that the NSA constantly monitors emails, phone calls, and internet use. So what’s to stop agencies like those from taking it a step further with the widespread use of drones?

    Hover Camera does not seem to be at a point where it can recognize a specific face miles away from the user’s location and follow it undetected. So as far as the public’s misuse of drones and facial recognition goes, I believe we are safe for the time being. With that said, there is no telling how far government and security agencies have developed their drone technology. The overall point is that technology like Zero Zero’s Hover Camera is exciting and capable of being used for positive activities. However, there are always people who see technology like this as an opportunity for malicious purposes, and that, I believe, is the area where we as a whole must have a discussion about the necessary rules and regulations for drones and facial recognition.

  14. Darya Behroozi

    If the concept of racial profiling wasn’t already concerning enough, companies such as Cognitec have the perfect solution for the flawed facial recognition systems within law enforcement. Since its inception in 1995, Cognitec has been a leading company in the development of algorithms for its FaceVACS facial recognition technology. While the company states that it provides its services to the general market, the bulk of its facial recognition software is aimed at government customers.

    FaceVACS-VideoScan manages to detect and find a person of interest in real time while computing other elements of data such as group demographics and behavior. Beyond its obvious applicability in government departments, the video scan is suggested to also function in industrial settings such as marketing and operations management. The software’s ability to compute a people count could be useful in this alternate setting, where foot traffic and general demographic data can be used for business development purposes.

    Interestingly enough, an extensive amount of Cognitec’s research has been completed under a university-led research team. Dresden’s University of Applied Sciences and Technical University have played a role in developing both a 2D and a 3D facial recognition system that performs without limitations on the subject’s positioning. This essentially means that, if the research were to produce a successful surveillance system, the location or viewpoint of any person in a crowd would not hinder the quality of their recognition within the system. The University of Applied Sciences is specifically focused on using its artificial intelligence department to build humanoid robots that react organically to the human user. The incorporation of AI within surveillance raises the question of whether robotics will play a role in the future of law enforcement. Looking at the research developed by the two universities, the corporation’s stated goal looks eerily similar to Omni Consumer Products’ privatization of the police force in the dystopian RoboCop.

    The incorporation of the millennial generation into surveillance research also piqued my interest. The mere fact that the university has an entire department dedicated to artificial intelligence sets its school system apart from what I have personally observed in America’s higher education system. While UCSB itself is a research school, there has not been a lot of talk about branching the Computer Science major further into an AI department.

    Furthermore, as seen with Macau casinos’ deployment of facial recognition systems, it appears that surveillance equipment is not in demand only from government bases. The joint partnership with Cognitec is meant to create a safer casino environment, offering a live feed of foot traffic and tapping into databases of banned persons to identify certain people entering the premises. Whether Cognitec has access to the casinos’ personal listings of banned people or a government-created one was not disclosed.

    While still small in its staffing, Cognitec appears to be at the forefront of the future of machine surveillance. Its use of resources such as university research speaks to the company’s grasp of the growing millennial population and its accountability for machine development. However, with the inclusion of surveillance technology also comes the question of privacy. The use of government databases for facial recognition and the evolution of humanoid robots as surveillance devices create a concerning question of AI responsibility and AI’s future role in society. The topic of AI within law enforcement raises the question of whether we can trust robots, since the success of AI is measurably dependent on its ability to mirror human qualities. Given the history of police brutality in the U.S. alone, one has to wonder whether AI would correct the previous mistakes of police officers or whether its surveillance capabilities will create an unstoppable force of racial profiling.

    http://www.cognitec.com/research.html
    http://www.planetbiometrics.com/article-details/i/4120/desc/macau-casinos-deploy-cognitec-face-recognition-system/

  15. Caroline Stoll

    When people travel internationally, they arrive at a large airport and can check in by themselves using a simple machine. This machine asks for your name and flight information and scans your passport. These machines are called eGates. Considering the increasing population, the eGate system is efficient and allows hundreds of thousands of people, maybe more, to fly around the world every day. But could this system be made even easier with new machine vision?

    In her article, Jessica Rowbury explores the possibility of “ticketless travel,” which would hypothetically allow humans to travel without showing identification or tickets; they would just have to be present in the airport to check in. Advanced imaging and machine vision technology would make this possible, but government regulations, along with its very high cost, would be the technology’s biggest obstacles.

    The technology is “MFlow Track v3.0 from Human Recognition Systems, which uses iris recognition to identify individuals from a distance, from the moment they check-in, up until they board their flight.” The iris is the most distinctive human biometric, which is why they chose it.
    This technology has the potential to be very easy to use, because the camera would find the eye, and not the other way around.

    Once the computer has the image of the iris, a specific algorithm computes it into a template. The traveler then has their iris scanned again at the airport so the two templates can be compared. In fact, the article states that “The goal is that passengers will no longer need to carry travel documents because their identities will be verified by machine vision from the moment the ticket is bought, to when the passenger boards the aircraft.”
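
    Human Recognition Systems has not published its matching algorithm, but the classic approach to the comparison step (Daugman’s) encodes each iris as a binary code and measures the Hamming distance between codes; here is a minimal sketch with toy templates.

    ```python
    # A sketch of the comparison step: iris templates are commonly binary codes
    # compared by Hamming distance (Daugman's classic approach). The codes below
    # are toy stand-ins, not MFlow Track's actual templates.
    import numpy as np

    rng = np.random.default_rng(1)
    enrolled = rng.integers(0, 2, size=2048)           # template stored at check-in
    probe = enrolled.copy()                            # same iris, rescanned
    noisy_bits = rng.choice(2048, size=60, replace=False)
    probe[noisy_bits] ^= 1                             # sensor noise flips some bits

    hamming = np.mean(enrolled != probe)               # fraction of disagreeing bits
    print(f"Hamming distance: {hamming:.3f}")          # ~0.029 here
    print("same iris" if hamming < 0.32 else "different iris")
    ```
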

  16. Michael "Fresh to Death" Loose

    http://unmanned.molleindustria.org/
    This is a game in which you play as a man who pilots a drone, serving the goals of the American military, whatever those may be. The game, made by Molleindustria, is less about the drone and more about how it is operated by a man who must make the call to drop bombs or not. As we mentioned briefly in class, the actions of machines can be attributed to more than just the drone or the pilot: something like a conglomerate force, a corporation where many contribute.
    I chose this game to break up the articles, which, while fun to read, most of us are probably already tired of; we could use a bit of fun. As we make the transition to truly autonomous machine action and vision, we deal with a cooperative world of humans “manning” a drone and dropping bombs. The player can actually choose whether or not to drop the bombs, and there is no clear fault for not doing so. The player can also choose what to say to their partner, play a game within the game with their son, and receive medals for reaching certain milestones, essentially achievements.
    In this way the player is rewarded, but at the end of the game, you are asked to step back and realize this is not structured like the notable video games in popular media: it frames the story as the average life of a drone pilot. There is no glamour, just his life, with its own problems and struggles. It removes the glory that war is often given, and strips away the layer of separation that flying a drone gives, of being removed from the action.
    Because it is a game, killing becomes a goal, a task to be fulfilled, putting us in the same situation as the soldier: wondering how responsible we are for the actions of the technology we use. By extension, we are killing people through this game. As machine vision evolves, we must also consider what will happen when machines develop a conscience, and their own aversion to killing.

  17. Khoa Ho

    During Cyber Monday, I browsed Amazon looking for great deals on tech toys, such as brand-new computer speakers and camera lenses, that I could recklessly spend my hard-earned money on. Stumbling on a Nikon 35mm camera lens for less than $200, I added it to my digital cart and went to the checkout menu to see shipping costs and information. However, having second thoughts about the purchase, I never entered my credit card number; I exited Amazon and started browsing social media. As I looked through my Facebook feed, a sponsored ad appeared from Amazon showing the deals I could get on the camera lens I had looked at not five minutes before. I looked past it and picked up my phone to check my Instagram feed. And there it was again: a sponsored ad from Amazon reminding me to purchase the camera lens. I felt as if someone from Amazon had been tracking my web browsing and reminding me to buy that Nikon camera lens.
    By tracking user browsing history, agencies are able to insidiously and effortlessly gather information about their subjects, or leads. In an advertising technique called “remarketing,” advertisers focus on a lead that was close to buying a product. In the past, remarketing came in the form of a salesman calling his leads and reminding them to purchase a product that had previously been advertised. Now, in the digital age, remarketing has become automated. An Amazon algorithm tracked my cookies, and in an effort to remind me to buy the camera lens, its ads reached me on my social media newsfeeds. The history of media has shown that advertising agencies use each new technology for mass advertising. Since the invention of broadcast media such as radio and television, distributors of media products have been able to market directly to viewers and listeners. Indeed, with social networking sites such as Facebook, Instagram, and Twitter, user privacy seems difficult to protect, since information is distributed across the World Wide Web in the blink of an eye.
    In a study by Princeton University and Stanford University researchers, it was shown that the NSA used cookies as a tracking device. The researchers state that the NSA “passively [identified] Tor users by associating cookies with non-Tor sessions,” using cookies as hints to what users were browsing on the dark web. By exploiting cookies and other tracking methods, agencies such as the NSA put Americans’ privacy at risk, leading us to become paranoid about technological advancement.
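
    The underlying mechanism is mundane: the site sets a cookie carrying a visitor ID, and that ID follows the browser into ad slots elsewhere. Here is a minimal sketch in Flask, with hypothetical routes, cookie name, and in-memory store.

    ```python
    # A sketch of the mechanism behind a retargeting cookie, using Flask; the
    # cookie name, routes, and in-memory store are hypothetical simplifications.
    import uuid
    from flask import Flask, request, make_response

    app = Flask(__name__)
    viewed = {}  # visitor_id -> product IDs (a real ad network persists this)

    @app.route("/product/<product_id>")
    def product(product_id):
        visitor = request.cookies.get("visitor_id") or str(uuid.uuid4())
        viewed.setdefault(visitor, []).append(product_id)
        resp = make_response(f"viewing {product_id}")
        resp.set_cookie("visitor_id", visitor)  # follows the browser to ad slots
        return resp

    @app.route("/ad")
    def ad():
        items = viewed.get(request.cookies.get("visitor_id"), [])
        return f"reminder: buy {items[-1]}" if items else "generic ad"
    ```
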

    Source used: http://senglehardt.com/papers/www15_cookie_surveil.pdf

  18. kelseytang

    https://www.fbi.gov/services/cjis/fingerprints-and-other-biometrics/ngi

    https://news.vice.com/article/your-face-voice-and-tattoos-are-the-fbis-business-now

    To further advance the FBI’s Integrated Automated Fingerprint Identification System (IAFIS), the Next Generation Identification (NGI) project was created in an attempt to identify individuals and their criminal histories by biometric recognition. The system encompasses a number of recognition processes, including a database of millions of fingerprints and “faceprints” along with “iris scan details.” While the NGI dedicates itself to cultivating an expansive collection of information about criminals, those who haven’t been arrested are still subject to the system’s data gathering: “You could become a suspect in a criminal case merely because you applied for a job that required you to submit a photo with your background check.” The “Rap Back” service also monitors individuals in “positions of trust,” such as school teachers or daycare workers, eliminating the need for repeated background checks on an individual.

    So the question of privacy emerges. At what point, if ever, will these programs be deemed too invasive of our privacy? Consistently keeping a record of an individual’s criminal background does provide the advantages of safety for others and overall convenience. But 18,000 law enforcement agencies will be able to access the NGI databases, and that number may come off as something of a violation of our personal privacy.

  19. Amy Yoo

    https://www.technologyreview.com/s/603019/apple-wants-to-use-drones-to-give-its-maps-app-a-lift/?utm_campaign=internal&utm_medium=homepage&utm_source=top-stories_1

    Apple has recently been granted special permission by the Federal Aviation Administration to fly drones. It reportedly wants to use images from the drones to “examine street signs, track changes to roads, and monitor if areas are under construction” in order to improve and update Apple Maps. However, according to the article on Technology Review, this drone-based updating will take a long time: the FAA’s commercial drone regulations do not allow drones to leave their operator’s line of sight, which makes quick data acquisition difficult.

    It seems that Apple wants this data to create a more detailed compilation of information for its users. However, the biggest concern with this new use of drones seems to be security: consumer security and non-consumer security. It seems natural that the Federal Aviation Administration is heading the regulation of drones, but it is also concerning that other federal organizations are not involved. Drones are not simply small planes; they are also video cameras and data mining tools. Physical safety is important (drones shouldn’t weigh more than 55 pounds or fly above 400 feet), but their mechanics are not their most dangerous aspect. Apple says now that it wants to use drones to improve the quality of its navigation system, but in five years, what will it use that information for? Theoretically, the information it collects is eternal, and the same image can be used to cheat the consumer or to manipulate the consumer. If an image is worth a thousand words, it seems more than plausible that the information Apple collects will be used beyond navigation alone.

    By comparison, Google has been collecting information about its global consumers for years now. Anyone with a computer and internet access can look up someone’s home; they just need an address. When did people give their consent to have their homes, their private property, photographed and released to the world? It seems an unfair advantage that corporations have assumed because they developed faster than the law. But even more, the law doesn’t necessarily work for the individual (as seen with the Patriot Act and the data mining that the NSA has been heading for the past several years); the law works for the order of the masses. Security, wireless security, has now become an individual’s responsibility.

  20. Karina Lucero

    http://www.wsj.com/articles/chinas-new-tool-for-social-control-a-credit-rating-for-everything-1480351590

    China’s planned Social Credit System is supposed to use data from all aspects of life to track individual behavior and then rate the citizen; it will draw on government departments, financial institutions, and internet behavior to police the population of Beijing. The system plans to reward good behavior, such as visiting one’s parents or volunteering to pick up trash, by improving a person’s credit score. Similarly, it will drop a person’s score for doing something bad, such as violating traffic laws or posting something undesirable on the internet. If a person’s rating drops too low, they can be prevented from traveling abroad or taking out loans, and their child can even be barred from attending a good school.

    Beijing’s plan for a credit system based on all aspects of a person’s behavior is an unprecedented form of 1984-like surveillance that will control a population that has already had many of its freedoms limited. The system will also infringe upon the liberties of children: if their parents have bad ratings, the kids will suffer too, unable to attend good schools and unlikely to receive higher education because of their parents’ bad ratings. This kind of high surveillance will apply not only to people but to businesses too. Restaurants will have ratings, as well as monitors surveying their every move in the kitchen. Like the monitors in the restaurants, there will be monitors of all kinds throughout the city, as well as people watching, even your own parents.

    This article reminded me of an episode of Netflix’s original show Black Mirror, which explores a very similar system in which the characters begin to police themselves and others based on a social media rating. With their phones in hand, the characters feel pressure to be on their best behavior to receive the best ratings from other people. In the show, people go to the lengths of meeting with social media advisors in order to increase their ratings.

    Machine vision: I know that my piece is not necessarily about machine vision, but I think it covers important issues regarding policing and surveillance. With this new rating system, there will be cameras, and people, watching each other’s every move.

  21. Colburn Pittman

    http://www.adweek.com/adfreak/bruised-woman-billboard-heals-faster-more-passersby-look-her-163297

    Advertising is poised to become a significantly larger influence in our lives with the evolution of facial recognition technology. The advertising agency WCRS partnered with the Women’s Aid charity to create interactive billboards that incorporate facial recognition. Each billboard displays the bruised and beaten face of a woman, suggestive of domestic violence. As passersby on the street begin to look at the billboard, the bruises slowly disappear, subtly communicating the message that domestic violence can be stopped if we would only stop and pay attention to it. To further hammer this message home, the individual faces of those who stopped to look at the billboard are posted near the bottom of it in a live motion feed, rewarding them for their attention and awareness of its cause.

    On the one hand, the interactive potential of advertising with facial recognition seems overwhelmingly positive, exciting, and fun. An example: a separate yet equally tantalizing billboard – https://www.youtube.com/watch?v=GGC6EY0AXPk – used facial recognition to force people involuntarily into a game of “What’s the time, Mr. Wolf,” and the reactions are priceless. But that our eyes and our awareness of advertising can be instrumental in the play of advertising is also quite frightening. We do not always want to be sold a product or an idea, and in fact, most of the time we would rather have nothing to do with advertising (except perhaps if one happens to be in the advertising business). It gets old, and it can distract us from the real issues at hand and from more permanent solutions. Instead, for example, of being sold quick five-minute workout fixes to “get our bodies healthier and into shape,” maybe we would be better off actually going to the gym and working out, going for a run or bike ride, or playing a sport. The point is that we need to maintain some sort of choice, to preserve the possibility of these alternate solutions, and that may be harder and harder to find in an age when you not only see ads, but ads also see you. Especially when not looking at a particular ad punishes you with the shame of a woman’s face growing more bruised, or a scary-looking wolf jumping out at you.

  22. Alex Rodberg

    Fellow church-skippers and party-dippers beware: Churchix is out there and checking your attendance. In 2015, the company Face-Six released Churchix, a facial recognition software program, originally created for church administrators to keep an eye on member attendance, that allows hosts to check attendance at their events. After downloading the desktop application, users import high-definition pictures of their members into the system’s database. Each facial image is analyzed until the system has created a unique template for each person to be identified. From there, the user attaches a live video camera or uploads recorded video to be analyzed. Once up and running, through the use of this biometric technology, the software “relies on criteria like the distance between your eyes, the measurements of your nose, lips and other facial features and matches them against [the] existing database” of members (Fortune). In comparison to Google’s or Microsoft’s recognition technology, Churchix sits on the more primitive side of the spectrum; that being said, there is still something to take from Churchix on a more general level.
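
    The template-then-match pipeline described above can be sketched with the open-source face_recognition library; the member names and filenames below are hypothetical, and this is far simpler than Churchix’s actual product.

    ```python
    # A sketch of template-then-match attendance using the open-source
    # face_recognition library; filenames are hypothetical stand-ins.
    import face_recognition

    # enrollment: one encoding (template) per known member photo
    members = {"Ada": "ada.jpg", "Grace": "grace.jpg"}
    templates = {
        name: face_recognition.face_encodings(
            face_recognition.load_image_file(path))[0]
        for name, path in members.items()
    }

    # attendance: find every face in an event photo, match against templates
    event = face_recognition.load_image_file("sunday_service.jpg")
    for encoding in face_recognition.face_encodings(event):
        hits = face_recognition.compare_faces(list(templates.values()), encoding)
        present = [name for name, hit in zip(templates, hits) if hit]
        print("recognized:", present if present else "unknown face")
    ```
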

    With all the noise surrounding facial recognition software going to government agencies, Fortune 500 companies, or media platforms, we can hardly fathom the idea that such technology could exist for the everyday consumer. This is because such technology is viewed not only as too advanced, but as too powerful. As a society, we have a difficult time grasping the idea that our own government can use such programs (and that’s with our knowledge); the idea that other civilians can manipulate or abuse the same technology is therefore terrifying. It’s with products like Churchix that we see the torch of higher technology get handed to the civilian domain. While the creators had good intentions for the program, stripped down, Churchix is still software that offers users a private platform to collect identification data through live video streams and photos. Call me a pessimist, but I don’t see the majority of buyers using the product because their congregation is getting lazy.

    http://churchix.com/
    http://fortune.com/2015/06/23/facial-recognition-freak-out/

  23. Jose Ochoa

    In 2011, self-declared problem gamblers were able to submit photos of their own faces in order to bar themselves from casinos. This preventative measure was implemented at a casino in Ottawa, but “The same technology has been installed at 19 of 27 OLG sites across the province, with the remaining eight sites to come online by the end of the year,” according to a local Ottawa news website. Canada seems to have done well to keep itself accountable, with gamblers preemptively submitting their own photos rather than waiting until their gambling habit becomes problematic. This is a surprising show of self-restraint, or at least a surprising expression of their awareness that they have none.
    According to the same news site, the database system is capable of “[granting] access to the province-wide database, which would stop a Rideau Carleton regular even if he or she travelled to Woodbine’s slots in Toronto.” The ubiquity of the software is actually much more concerning than it lets on. Though this statement might have been written in an optimistic manner, it is much grimmer than it suggests at first glance. The simplicity with which the database can organize and share gamblers’ face data has the potential for massive exploitation, perhaps by police or other investigative authorities. While well-meaning (the technology is, after all, intended to help self-declared problem gamblers curb their habits), the potential for sinister misuse of this sort of technology is major.

    This particular instance of machine vision is interesting because it is put in place by the public’s own preference. As opposed to facial recognition software used by authorities against the will of criminals, as is being done with “MORIS” technology in Florida, this gambler facial recognition program is used at the gambler’s own decision. Personally, I find the decision to willingly resign your personal information to some unknown entity, whose potentially exploitative ambitions could easily be concealed, entirely ridiculous: the equivalent of strapping slabs of meat to yourself while walking through a den of lions. When individuals willingly resign their right to privacy, the normalization of this sort of technology becomes even more widespread, creating the potential for even more exploitation. And once placed in a situation that might require your face to be recognized, the police even claim that they no longer need an individual’s consent to log their face in a database. “Deputies are required to ask people for permission to take a photo and use the facial-recognition technology,” says a Wall Street Journal report. However, a systems analyst for a Florida sheriff’s office claims that “Legally, we don’t have to, but we have a policy in place to ask for consent.” It is terrifying to know that police acknowledge a power they merely adorn with the illusion of public consent.

    And while no immediate abuses of facial recognition come to mind, the concept that privacy may be willfully threatened by authority, and that some may even openly submit themselves to this of their own accord, is quite ominous and concerning. While the value of privacy may at this moment be ambiguous and undefined, if we continue to consent to clear instances of its deterioration, we may reach a point at which we are unable to reclaim a right that we have so enthusiastically allowed to be stripped from us.

    Source:
    http://blogs.wsj.com/digits/2011/07/13/how-a-new-police-tool-for-face-recognition-works/
    http://www.ottawacommunitynews.com/news-story/3797588-raceway-uses-facial-recognition-to-help-problem-gamblers/

  24. Korrin Alpers

    Blippar is an app that utilizes facial recognition and augmented reality as a sort of “Shazam for faces.” More precisely, you can scan an actor’s face and see the past and future projects he or she is a part of. To that end, Blippar solves all riddles concerning “where do I know that guy from?” More sinisterly, the app provides a breeding ground for stalking and profiling. For example, imagine if you could secretly scan the face of your local barista, or of an attractive person passing by. If desired, the app could easily pinpoint their name, their location, and simple social networking profiles. The app could perhaps even be used to determine people’s criminal histories or past indiscretions.
    With that in mind, how would profiling become more of a public issue when people are constantly raising their phones to scan certain groups of people? How would situations escalate when someone’s identity, and even their private address, can be found and noted? Though a simple Google search can offer the same information, this app makes it easier to identify people, to disrupt privacy and anonymity. Though this post comes off as a bit more paranoid than I would like, it’s important to understand that technology like this is never created for purposes as silly as finding out that “one guy’s” name from a TV show. It is almost always used as a way to store data in order to regulate, police, and profit from categorizing people.
    When apps such as Blippar can reveal people’s histories and personalities through scanning, there’s bound to be unmanageable fallout. For example, if someone consistently scanned black men’s faces and could easily access their criminal histories, their understanding of black men would be linked to appearance and facial structure rather than to judicial and racial systems. More plainly, we have to be careful when we create technology that allows people to find patterns and derive meaning from those patterns. It’s human instinct to do so, to categorize and segment groups in order to better understand and survive within our own communities. However, it’s important to be cognizant of how technology can perpetuate stereotypes or incorrect data and assumptions. This goes beyond simple scanning and apps with trendy names. We have to think of the implications of authorities using such technologies. How would police, the TSA, or parole officers be hindered or bettered by technology like this? How are we to direct the future of technology: to better bridge people’s understanding of the “Other,” or to validate their actions in othering?

    https://blippar.com/en/

