U.S. Customs and Border Protection (CBP) this week announced the expansion of its Simplified Arrival program, which uses facial recognition to verify the identity of airline travelers arriving in the U.S. According to a press release, Simplified Arrival is now in use at San Francisco International Airport and Norman Y. Mineta San Jose International Airport following recent installations in Detroit and Houston.
As early as 2016, CBP began laying the groundwork for the program of which Simplified Arrival is a part: the $1 billion Biometric Entry-Exit Program. Through partnerships with airlines like Delta and JetBlue, CBP has access to manifests that it uses to build facial recognition databases incorporating photos from entry inspections, U.S. visas, and other U.S. Department of Homeland Security corpora. Camera kiosks at airports capture live photos and compare them with photos in the database, attempting to identify matches. When there’s no existing photo available for matching, the system compares the live photos to photos from physical IDs including passports and travel documents.
As CBP explains, with Simplified Arrival, travelers on international flights pause for photos at primary inspection points after deplaning. If the photo-matching process fails, they undergo the traditional inspection process instead.
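The match-then-fallback flow described above can be sketched in a few lines. This is a hypothetical illustration, not CBP's actual system: the function names, the threshold, and the toy similarity measure are all assumptions standing in for a real face-embedding comparison.

```python
# Hypothetical sketch of the match-then-fallback verification flow.
# All names and thresholds are illustrative, not CBP's implementation.

def similarity(photo_a, photo_b):
    """Toy stand-in for a face-embedding comparison: the fraction of
    matching elements between two equal-length feature vectors."""
    matches = sum(1 for a, b in zip(photo_a, photo_b) if a == b)
    return matches / len(photo_a)

def verify_traveler(live_photo, gallery, document_photo=None, threshold=0.9):
    """Mirror the flow in the article:
    1) compare the live capture against the gallery built from the manifest;
    2) if no gallery photo matches, fall back to the travel-document photo;
    3) otherwise route the traveler to traditional manual inspection."""
    if any(similarity(live_photo, g) >= threshold for g in gallery):
        return "gallery_match"
    if document_photo is not None and similarity(live_photo, document_photo) >= threshold:
        return "document_match"
    return "manual_inspection"

live = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
gallery = [[1, 1, 0, 1, 0, 1, 1, 0, 1, 0]]  # 9 of 10 elements agree
passport = [0, 0, 1, 0, 1, 0, 0, 1, 0, 0]   # poor match

print(verify_traveler(live, gallery, passport))  # gallery_match
```

The fallback ordering matters: the document photo is consulted only when no gallery photo is available or matches, and a failure at both stages hands the traveler back to a human officer rather than rejecting them outright.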
CBP says that to date, more than 53 million travelers have participated in the Biometric Entry-Exit Program (up from 23 million as of March 2020) and that nearly 300 imposters have been prevented from illegally entering the U.S. since September 2018. But Simplified Arrival and other pilots under the umbrella of the Biometric Entry-Exit Program remain inconsistent, opaque, and potentially discriminatory.
While travelers can opt out of Simplified Arrival by notifying CBP officers at inspection points, a Government Accountability Office audit found that CBP resources regarding the Biometric Entry-Exit Program provide limited information and aren't always complete. At least one CBP call center operator the GAO reached in November 2019 wasn't aware of which locations had deployed the technology, and some airport gate signage is outdated, missing, or obscured.
Moreover, CBP has a poor track record of securing biometric data like facial images once it's been collected. (New photos of U.S. citizens captured through Simplified Arrival are deleted after 12 hours, while photos of most foreign nationals are stored in a Department of Homeland Security system.) Last June, a subcontractor breach exposed over 184,000 photos of people collected as part of the Vehicle Face System, a CBP facial recognition program tested at selected ports of entry. While CBP initially declined to say whether any of that data made its way onto the dark web, a September inspector general report from the U.S. Department of Homeland Security found that at least 19 images were published online due to lapses in security protocols by Perceptics, the third party responsible for securing the images.
It's also unclear to what extent CBP's facial recognition might exhibit bias against certain demographic groups. In a CBP test conducted from May to June 2019, the agency found that 0.0092% of passengers leaving the U.S. were incorrectly identified, a small fraction that nonetheless translates to tens of thousands of misidentifications a year, given that CBP inspects an estimated 2 million-plus international travelers every day. More damningly, photos of departing passengers were successfully captured only 80% of the time due to camera outages, incorrectly configured systems, and other problems. The match failure rate at one airport was 25%.
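A rough back-of-envelope calculation shows the scale these rates imply, assuming the article's estimate of roughly 2 million daily inspections holds uniformly across a year:

```python
# Back-of-envelope scale of the reported error rates; the daily traveler
# count is the article's estimate, applied uniformly across a year.

daily_travelers = 2_000_000
annual_travelers = daily_travelers * 365          # ~730 million inspections/year

misidentification_rate = 0.000092                 # 0.0092% from the CBP test
misidentified_per_year = annual_travelers * misidentification_rate

capture_rate = 0.80                               # photos captured only 80% of the time
missed_captures_per_year = annual_travelers * (1 - capture_rate)

print(f"{misidentified_per_year:,.0f} misidentified per year")
print(f"{missed_captures_per_year:,.0f} photos never captured per year")
```

Even a rate that looks negligible on paper produces on the order of 67,000 misidentifications annually at this volume, while the 20% capture failure rate would leave well over 100 million travelers unphotographed.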
Despite the controversy surrounding CBP's ongoing efforts, the U.S. Transportation Security Administration recently announced that it, too, would begin piloting airport checkpoints that use facial scans to match travelers against their ID photos. The White House has mandated that facial recognition technology be in use at the 20 busiest U.S. airports for "100 percent of all international passengers" entering and exiting the country by 2021.
In the wake of the Black Lives Matter movement, an increasing number of cities and states have expressed concerns about facial recognition technology and its applications. California’s Oakland and San Francisco and Massachusetts’ Somerville are among the metros where law enforcement is prohibited from using facial recognition. In Illinois, companies must get consent before collecting biometric information of any kind, including face images. New York recently passed a moratorium on the use of biometric identification in schools until 2022. Lawmakers in Massachusetts are considering a suspension of government use of any biometric surveillance system within the commonwealth. And in Portland, the use of facial recognition is prohibited in places of “public accommodation” (excepting airports).