This week at NeurIPS, the biggest AI research conference in the world, attendees of the Queer in AI workshop could choose multiple stickers for their badges. There were stickers for preferred gender pronouns, and next to the nonprofit’s rainbow brain logo stickers was a stack of red and blue stickers indicating whether a member is comfortable being photographed: blue for photos allowed, red for no photos. For some the choice is about privacy; for others, it’s about feeling safe.
Most Queer in AI speakers’ names were publicly available in the NeurIPS diversity workshop brochure, a livestream, and a poster session held together with groups like Black in AI, Latinx in AI, and Women in Machine Learning. But unlike virtually every other workshop colocated at NeurIPS, poster presenters and attendees who didn’t want to be photographed were given the choice to keep their identity private.
Far more people in attendance appeared to choose blue stickers over red, but the latest results of a community survey conducted by the Queer in AI group through its website highlight why organizers extended community members more privacy.
Queer in AI is a nonprofit for the LGBTQIA+ community and allies in the field of AI. More than 100 people responded to the demographic survey, a portion of the approximately 350 who have attended Queer in AI since it began as an informal meetup at NeurIPS in 2016. Raphael Gontijo Lopes, an AI resident at Google Brain and a founding organizer of Queer in AI, presented the survey results at the start of the workshop Monday.
Of Queer in AI’s community, 42% of respondents said they’re out of the closet, up from 33% in 2018. Yet the survey also found some regression from 2018 to 2019. Lopes did not share hard numbers in every case, but a pie chart showed that whereas in 2018 approximately half of respondents responded positively to the statement “The law affirms my rights; people are almost always supportive of my identity,” this year it was closer to one third. According to the pie chart, the percentage of respondents who agreed that “I would be in danger if my identity were exposed” is still small, but it grew significantly between 2018 and 2019.
“Many people reported that they live or are from places where being their full selves is not safe, which also shows the importance of coming here and creating a safe space for those people to be themselves,” Lopes said.
Another survey question asked respondents if they feel welcome, as a queer person, at AI conferences. On a scale of 0 to 5, with 0 being “not welcome” and 5 being “welcome,” the mean was 3. Respondents indicated that obstacles to feeling welcome included a lack of community and role models.
Sixty percent of respondents said they have been the target of derogatory jokes, while 20% said they have feared for their safety. And 20% said they’re treated as the resident authority on queer issues at their workplaces, an indication of tokenism.
The survey also asks about people’s position at their companies to determine the number of junior versus senior employees attending the group’s events. Although the survey didn’t break out specific numbers, a “majority” of Queer in AI respondents working in academia and industry consider themselves “junior” employees.
The survey results were shared and presented but not published, although Lopes said the group may decide in the future to release the full demographic survey at its annual NeurIPS workshop.
Now in its second year, the Queer in AI workshop at NeurIPS introduced the photo/no photo option for the first time. Aside from Queer in AI, workshop organizer Natalia Bilenko said groups like Code for America sometimes ask people to avoid taking pictures in community meetings to respect people’s privacy.
Research at the Queer in AI workshop explored topics like how gender is represented in natural language models, inequality enshrined in algorithms, deconstruction of gender predictions in NLP, and how machine learning practitioners can design systems that shift power to the powerless. Panelists also discussed facial analysis systems’ poor accuracy in recognizing the faces of people who do not conform to a single gender identity.
NeurIPS workshops will continue Saturday for reinforcement learning and federated learning, as well as AI for the developing world, social good, and climate change. NeurIPS organizers announced top paper and Test of Time honors in a Medium post Sunday.