The Montreal AI Ethics Institute, a nonprofit research organization dedicated to defining humanity’s place in an algorithm-driven world, today published its inaugural State of AI Ethics report. The 128-page multidisciplinary paper, which covers a set of areas spanning agency and responsibility, security and risk, and jobs and labor, aims to bring attention to key developments in the field of AI this past quarter.
The State of AI Ethics first addresses the problem of bias in ranking and recommendation algorithms, like those used by Amazon to match customers with products they’re likely to purchase. The authors note that while there are efforts to apply the notion of diversity to these systems, they usually consider the problem from an algorithmic perspective and strip it of cultural and contextual social meanings.
“Demographic parity and equalized odds are some examples of this approach that apply the notion of social choice to score the diversity of data,” the report reads. “Yet, increasing the diversity, say along gender lines, falls into the challenge of getting the question of representation right, especially trying to reduce gender and race into discrete categories that are one-dimensional, third-party, and algorithmically ascribed.”
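To make these two fairness criteria concrete, here is a minimal sketch of what they measure, assuming binary predictions and a binary group attribute. The function names and data are illustrative, not drawn from the report:

```python
import numpy as np

def demographic_parity_diff(y_pred, group):
    """Gap in positive-prediction rates between two groups.

    Demographic parity asks that each group receive positive
    predictions at the same rate, regardless of true labels.
    """
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equalized_odds_diff(y_true, y_pred, group):
    """Largest gap in true-positive or false-positive rates across groups.

    Equalized odds asks that, among people with the same true label,
    both groups are predicted positive at the same rate.
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    gaps = []
    for label in (0, 1):  # label 0 compares FPRs, label 1 compares TPRs
        mask = y_true == label
        rate_a = y_pred[mask & (group == 0)].mean()
        rate_b = y_pred[mask & (group == 1)].mean()
        gaps.append(abs(rate_a - rate_b))
    return max(gaps)
```

A model can satisfy one criterion while violating the other, which is part of why the authors caution against treating any single score as a complete account of diversity or fairness.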
The authors advocate a solution in the form of a framework that does away with rigid, ascribed categories and instead uses subjective ones derived from a pool of “diverse” individuals: the determinantal point process (DPP). Put simply, a DPP is a probabilistic model of repulsion: operating in embedding spaces (the spaces containing representations of words, images, and other inputs from which AI models learn to make predictions), it selects sets of items that a person feels represent them while keeping those items distinct from one another.
In a paper published in 2018, researchers at Hulu and video sharing startup Kuaishou used DPPs to create a recommendation algorithm enabling users to discover videos with a better relevance-diversity trade-off than previous work. Similarly, Google researchers tested a YouTube recommender system that statistically modeled diversity based on DPPs, which led to a “substantial” increase in user satisfaction.
The State of AI Ethics authors acknowledge that DPP leaves open the question of sourcing ratings from people about what represents them well and encoding these in a way that’s amenable to “teaching” an algorithmic model. Nonetheless, they argue DPP provides an interesting research direction that might lead to more representation and inclusion in AI systems across domains.
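To give a flavor of the relevance-diversity trade-off these papers describe, here is a hedged sketch of greedy item selection under a DPP-style kernel, a common approximation in recommender research. The kernel construction and function names are illustrative assumptions, not the exact method of the report or the papers cited above:

```python
import numpy as np

def dpp_greedy(relevance, embeddings, k):
    """Greedily pick k items trading off relevance against diversity.

    Builds a DPP kernel L = diag(q) @ S @ diag(q), where q holds item
    relevance scores and S is the cosine similarity between item
    embeddings, then repeatedly adds the item that most increases
    log det of the kernel restricted to the selected set. Because
    near-duplicate items shrink that determinant, the selection is
    "repulsed" away from redundancy.
    """
    unit = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    S = unit @ unit.T                      # cosine similarity matrix
    q = np.asarray(relevance, dtype=float)
    L = q[:, None] * S * q[None, :]        # quality-weighted kernel

    selected, candidates = [], set(range(len(q)))
    for _ in range(k):
        best, best_logdet = None, -np.inf
        for i in candidates:
            idx = selected + [i]
            logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])[1]
            if logdet > best_logdet:
                best, best_logdet = i, logdet
        selected.append(best)
        candidates.remove(best)
    return selected
```

Given two near-duplicate high-relevance items and one distinct lower-relevance item, a pure relevance ranking would return the duplicates, while this selection keeps the top item and swaps the second duplicate for the distinct one, which is the trade-off the Hulu/Kuaishou and YouTube experiments exploit.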
“Humans have a history of making product design decisions that are not in line with the needs of everyone,” the authors write. “Products and services shouldn’t be designed such that they perform poorly for people due to aspects of themselves that they can’t change … Biases can enter at any stage of the [machine learning] development pipeline and solutions need to address them at different stages to get the desired results. Additionally, the teams working on these solutions need to come from a diversity of backgrounds including [user interface] design, [machine learning], public policy, social sciences, and more.”
The report examines Google’s Quick Draw — an AI system that attempts to guess users’ doodles of items — as a case study. The goal of Quick Draw, which launched in November 2016, was to collect drawing data from groups of users by gamifying the process and making the resulting dataset freely available online. But over time, the system became exclusionary toward objects like women’s apparel because the majority of people drew unisex accessories.
“Users don’t use systems exactly in the way we intend them to, so [engineers should] reflect on who [they’re] able to reach and not reach with [their] system and how [they] can check for blind spots, ensure that there is some monitoring for how data changes, over time and use these insights to build automated tests for fairness in data,” the report’s authors write. “From a design perspective, [they should] think about fairness in a more holistic sense and build communication lines between the user and the product.”
The authors also recommend ways to rectify the private sector’s ethical “race to the bottom” in pursuit of profit. Market incentives harm morality, they assert, and recent developments bear that out. While companies like IBM, Amazon, and Microsoft have promised, to varying degrees, not to sell their facial recognition technology to law enforcement, drone manufacturers including DJI and Parrot don’t bar police from purchasing their products for surveillance purposes. And it took a lawsuit from the U.S. Department of Housing and Urban Development before Facebook stopped allowing advertisers to target ads by race, gender, and religion.
“Whenever there is a discrepancy between ethical and economic incentives, we have the opportunity to steer progress in the right direction,” the authors write. “Often the impacts are unknown prior to the deployment of the technology at which point we need to have a multi-stakeholder process that allows us to combat harms in a dynamic manner. Political and regulatory entities typically lag technological innovation and can’t be relied upon solely to take on this mantle.”
The State of AI Ethics makes the powerful, if obvious, assertion that progress doesn’t happen on its own. It’s driven by conscious human choices influenced by surrounding social and economic institutions — institutions for which we’re responsible. It’s imperative, then, that both the users and designers of AI systems play an active role in shaping those systems’ most consequential pieces.
“Given the pervasiveness of AI and by virtue of it being a general-purpose technology, the entrepreneurs and others powering innovation need to take into account that their work is going to shape larger societal changes,” the authors write. “Pure market-driven innovation will ignore societal benefits in the interest of generating economic value … Economic market forces shape society significantly, whether we like it or not.”