The AI Now Institute, People Power Media, and the Anti-Eviction Mapping Project today launched Landlord Tech Watch, a crowdsourced map of where landlords are deploying surveillance and AI technologies that can disempower tenants and community members. The site invites tenants to self-report the types of tech being installed in their residences and neighborhoods and aims to educate people about the widespread use and potential harms of these technologies.

There’s currently scant legislation governing the collection and use of data in the context of real estate. Owners and landlords typically purchase and install tech products and platforms without notifying tenants in advance or discussing potential harms with them — and sometimes without ever letting them know.

In New York City, for example, rent-stabilized tenants at the Atlantic Plaza Towers in Brownsville were subjected to a facial recognition security system from a third-party vendor. Elsewhere in the city, a tenant in Hell’s Kitchen sued his landlord on the grounds that a keyless system was too complicated and he feared his movements would be tracked through the technology.

Residents and local elected officials have been quick to rail against these types of systems. Last October, the New York City Council proposed legislation that would require landlords to provide tenants with traditional metal keys for entering their buildings and apartments. In May, the Hell's Kitchen tenant's lawsuit secured the right to physical keys for himself and his neighbors.

Landlord Tech Watch aims to offer tenants and researchers a better sense of the scope and scale of technology currently in use — like cameras, payment platforms, and screening tools. The site includes examples of different types of tech and lists specific harms associated with each type, along with a deployment map that indicates where such tech is being used and a survey that encourages people to share their experience with the installation and use of such technology.


Residents at 406 West 129th Street in Manhattan have already used Landlord Tech Watch to report that intercoms from GateGuard have been installed at buildings without their permission. (CNET recently reported that GateGuard has been pitching its technology to landlords in New York as a way to sidestep rent-control regulations.) At 61 Wyckoff Ave in Brooklyn, a tenant claims the landlord recently replaced traditional buzzers with camera-equipped electronic buzzers.

“Facial and movement recognition cameras made by the Israeli-based FST21 [have been installed in our building],” a resident of New York’s 10 Monroe Street wrote. “This came after Hurricane Sandy inflicted damage on the building. The landlord then installed this without our consent … We don’t know what happens with the data being collected about us. It also doesn’t work well, and we all have to do humiliating dances to be recognized by it.”

The Landlord Tech Watch website notes that tech can be used to perform potentially prejudicial background, income, and credit checks on prospective tenants. While there’s no comprehensive registry of tenant screening companies, there are estimated to be over 2,000 in the U.S. (Last year, the U.S. Department of Housing and Urban Development began circulating rules that would make it harder for tenants to sue landlords when algorithms disproportionately deny housing to people of color.) Virtual property management platforms might also prevent tenants from communicating with their actual landlords or property managers, resulting in neglect. And flawed or biased AI security systems could target and potentially endanger tenants.

Consider facial recognition technology, which numerous studies have shown to be susceptible to bias. A study last fall by University of Colorado Boulder researchers found that AI from Amazon, Clarifai, Microsoft, and others maintained accuracy rates above 95% for cisgender men and women but misidentified trans men as women 38% of the time. Separate benchmarks of major vendors' systems by the Gender Shades project and the National Institute of Standards and Technology (NIST) suggest that facial recognition exhibits racial and gender bias and can be wildly inaccurate, misclassifying people upwards of 96% of the time. Installing facial recognition as part of a screening or entry system could put individuals at risk of being misidentified by police or other authorities.