The bombings in Boston and the subsequent investigation of the suspects will likely revive the debate about pervasive security camera surveillance in the U.S. Stationary security cameras near the bombing location evidently gave police a key lead in identifying the suspects, as the past day of breaking news reports has revealed.
Such technology has long been envisioned in Hollywood films, ranging from comic romps like The Truman Show to paranoid dystopias like Minority Report. The idea was popularized in George Orwell's novel 1984, about life in a totalitarian state. With the rise of terrorist attacks and improved technology, security cameras have become part of everyday life for many.
As soon as the bombs went off and the investigation was under way, law enforcement agencies such as the FBI asked the public for their photo evidence. In the first four days of the investigation, more than 2,000 tips of all kinds came in. It appears that photos from a security camera at a Lord & Taylor store provided key evidence in this case.
Civil libertarians are likely to be on the defensive in their argument that pervasive cameras represent an invasion of privacy and an unwarranted search. In the United Kingdom, surveillance cameras have been in use for more than a decade, and they number in the millions. China has also installed large numbers of security cameras, but those countries clearly have different positions on privacy and freedom.
Organizations such as the Surveillance Studies Network have studied such monitoring for years. The network defines a “surveillance society” as one that engages in the “extensive collection, recording, storage, analysis and application of information on individuals and groups in those societies as they go about their lives.” The American Civil Liberties Union has set up a site dubbed youarebeingwatched.us, and Observing Surveillance has also attempted to raise awareness about surveillance cameras.
The ACLU site carries a story that notes that one crime has been solved for every 1,000 cameras in the U.K. Another story, citing a study by New York University, questions whether cameras actually result in reduced crime.
“We already live in a state of Big Brother and Little Brother,” said Dave Maass, the media relations coordinator for the Electronic Frontier Foundation in an interview with VentureBeat. “Little Brother takes the form of people using their own cameras. We are sort of monitoring ourselves. That is an indicator of where we are as a society.”
Considering that every smartphone has a camera, he has a point. He also notes that more surveillance cameras, operated by the government in stationary locations, might not have prevented the bombings. It is often too much work to scan a crowd for faces, so the cameras become more useful after the fact. In that respect, people should understand the limitations of general, compared to targeted, surveillance, Maass said.
Boston law enforcers focused on activity around the Forum Restaurant on Boylston Street. Those video cameras captured the faces of the suspects, but they were blurry. The police asked if anyone recognized the suspects, hoping that crowdsourcing the investigation would turn up names. That suggests that they (or other law enforcement agencies) didn’t possess their own accurate face-recognition software that could come up with the answers automatically (or that it didn’t work on what images they had).
Photos from the public emerged that gave sharper images of one suspect at the scene of the crime. Of course, security cameras are only part of the investigation effort, and they can lead the public astray, particularly if subjects are misidentified or wrongly accused. Faster communication technology helped spread information quickly, but, in one case, it came up with the wrong suspect. Another person was reportedly afraid to leave his house because a newspaper misidentified him as a suspect.
Face recognition is notoriously difficult. One tech executive told me that his company tried to reduce the data-processing problem by reducing the facial imagery to 3D graphical representations. But the problem was that, on every plane, the face-recognition software would come up with one person who looked like a known terrorist. Most likely, those were false positives. Face recognition also only works on people who are already known to law enforcement, not just anyone walking down the street. It is a huge “big data” challenge.
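The matching step described above can be sketched in miniature. This is a hypothetical illustration, not any vendor's actual system: real face recognizers compare learned embedding vectors of a probe face against a watchlist of known identities, and the similarity threshold is what trades false positives (innocent look-alikes flagged as terrorists) against missed matches. The names, vectors, and threshold below are all invented for the example.

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity of two embedding vectors, in [-1, 1] for nonzero inputs."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_face(probe, watchlist, threshold=0.9):
    """Return the best-matching watchlist identity, or None if no match
    clears the threshold. A higher threshold means fewer false positives
    but more missed matches."""
    best_name, best_score = None, -1.0
    for name, embedding in watchlist.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

# Toy 4-dimensional "embeddings" standing in for real learned features.
watchlist = {
    "known_person_a": np.array([1.0, 0.0, 0.0, 0.0]),
    "known_person_b": np.array([0.0, 1.0, 0.0, 0.0]),
}

print(match_face(np.array([0.99, 0.05, 0.0, 0.0]), watchlist))  # known_person_a
print(match_face(np.array([0.5, 0.5, 0.5, 0.5]), watchlist))    # None: ambiguous
```

Note that the function can only ever answer with a name already in the watchlist, which is why such systems cannot identify a stranger walking down the street.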
Surveillance technology has been evolving for a long time, with a lot more dollars going into it since 9/11. Imaging chip companies such as Pixim in Silicon Valley have been working on high-dynamic-range imaging for years. Cameras equipped with such chips can adjust for any kind of lighting situation. So if there are dark and bright areas in the same scene, Pixim's chips can adjust the image so you can make out the details in both the dark and light sections. That helps improve the recognition of faces.
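The effect those chips aim for can be approximated in software. The sketch below is not Pixim's algorithm, just the simplest illustration of the idea: applying a gamma curve with exponent below 1 lifts detail out of shadows while keeping bright regions from clipping, so both ends of the scene stay readable.

```python
import numpy as np

def tone_map(image, gamma=0.5):
    """Crude tone mapping: raise pixel values in [0, 1] to a power < 1.
    Dark pixels are brightened much more than bright ones, compressing
    the scene's dynamic range toward the visible middle."""
    return np.clip(image, 0.0, 1.0) ** gamma

# Pixels from deep shadow to full brightness in one scene.
scene = np.array([0.01, 0.04, 0.25, 0.81, 1.0])
mapped = tone_map(scene)
print(mapped)  # [0.1 0.2 0.5 0.9 1. ]
```

The shadow pixel at 0.01 is lifted tenfold to 0.1, while the brightest pixel stays at 1.0, which is why a face half in shadow remains recognizable after such processing.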
Based on news about the investigation, the blurry photos were still evidently clear enough for people who knew the suspects to recognize them.
Here is the video camera imagery posted by the FBI on Thursday.