

One belief Yakaira Núñez, VP of research and insights at Salesforce, holds firmly is that in product development, it’s just as crucial to identify your non-target audience as your target one, and to consider both in your roadmap.

“Communicate with unintended targets to see whether or not there might be any opportunities for new development, products, or features,” she told VentureBeat. “But also to help manage any risk.”

This approach for addressing the bias and lack of representation in data and AI algorithms stems from many facets of Núñez’s lived experience: growing up communist, the technology divide between her and her cousins in the Dominican Republic, and even watching NBC’s The Good Place. And it’s what she’s worked to instill at Salesforce, where she not only leads a cross-discipline team curating data for product decisions, but also acts as a safeguard for Salesforce’s own AI algorithms and products.

To learn more about Núñez’s perspective and approach to wrangling data and rooting out bias, we chatted about her background, her work at Salesforce, and her advice for creating products more thoughtfully.

This interview has been edited for brevity and clarity. 

VentureBeat: Tell me a little bit about your background and what you do at Salesforce.

Yakaira Núñez: I support the platform organization at Salesforce — that’s our security, privacy, developer and admin tools, and what we call “trusted services.” Basically, all of the things that people build on using Salesforce to make their implementations go and continue to optimize and improve their Salesforce experiences. I work with researchers from multiple disciplines, mostly anthropology, sociology, and human-computer interaction (HCI). I have a background in HCI, and at Salesforce I’ve been focusing on research and insights.

One thing I pull through to my work here is a background in civic responsibility and fairly socialist leanings. My parents were card-carrying communists, and I’m considered a red diaper baby. That afforded me the opportunity to learn what community means and, from a very young age, to think about the impacts of everything happening on the larger body of humanity. So I’ve been very active in anti-racist and sustainability initiatives, because I find those things are closely tethered. And I’ve woven that through the ethos of what I bring to my team and how we do research within Salesforce.

VentureBeat: I want to jump deeper into that research in a moment. But one thing I find really interesting about what you just said is regarding the backgrounds in sociology, anthropology, and such. A lot of people think of AI only in terms of computer scientists. So can you just talk a little bit about how those other disciplines integrate into and are important in the context of data and AI?

Núñez: Anthropologists, researchers, and sociologists all create data, but in different ways. It’s not with an Excel spreadsheet; rather, it’s derived from really good questions they’re asking of their environment and the people they engage with. And so, to your question, they are yet another input into the way that we build products, because they’re bringing in what humans need and how they might engage with our products. And they’re crunching that information so it can be consumed by our product organizations or by the algorithms that we build, so that we can build better products going forward. So in effect, we become sort of datamongers — like a fishmonger. And I would say we’re just as important as any other data collection service within the product development lifecycle. I should also mention the alternative version of collecting data, which is collecting data on the way people are using your product, right? That could be clicks or views, but all of that is the “what,” and what the anthropologists and sociologists uncover is the “why.” Balancing those two together then informs what might be the best possible future-state solution.

VentureBeat: And when we’re thinking about how people are interacting with products and technologies, what are some of the consequences of bad data and poorly trained AI algorithms that are already impacting people’s lives or may in the future? And how do these consequences disproportionately affect historically discriminated-against populations?

Núñez: The bias is inherent. Those with access to the internet have more opportunities and more access in general than those that don’t. Just from a class perspective, because I was in the United States and my parents were academics and we had a little bit more money, data that represents me exists, whereas data for my cousins — who lived in a shanty town in the Dominican Republic and didn’t get access until years after me — didn’t. And you can’t create algorithms that represent individuals who don’t have data in the system. Period.

The consequence of that lack of representation is that individuals get othered. They’re not even being considered, and so neither are their needs. If there’s an algorithm for insurance coverage, why would these individuals who have no access to the internet and who are not represented in the data ever be considered as a variable to inform whether or not they should get insurance? Having lived in New Orleans, I witnessed that certain individuals had no problem getting their FEMA money to rebuild their homes, while others had a lot of difficulties. And why is that? Because they were not represented in the data being collected by the insurance companies. Bias has been top of mind for me as of late, but I also think about those individuals who are not being represented on so many different levels.

VentureBeat: And of course, it’s pervasive in the field. So I’m interested to know what you think we could do about it? What steps could those who are using AI — especially enterprises — take to address and mitigate these problems and risks?

Núñez: When you’re building a product, we all recognize that there are these target markets that you’re trying to sell to. But it’s fundamental to also identify those individuals you’re not targeting, ensure they’re a part of your research plan, and consider them, too. Communicate with unintended targets to see whether or not there might be any opportunities for new development, products, or features, but also to help manage any risk. The easiest way to think about it is if you built a product that wasn’t intended for kids to use, but now kids are using it. Oh, no! We should’ve just interviewed kids to find out where there might have been a challenge. That would’ve been a risk, but it also could have been an opportunity. Managing risk is two sides of the coin, but it also opens the doors for thoughtfully built opportunities because you’ve considered all of the target markets, unintended target markets, and the risks associated with those.

VentureBeat: I understand that at Salesforce, you and your work act as a safeguard of sorts for the company’s own AI and products. What does that look like for a given product? And how do you balance the goal to create better AI with the business’ interests and the technical difficulties of curating the data?

Núñez: Well, you just described the product development lifecycle, which is always a balancing act between all of these things. And so what I’ve seen work is weaving conversations through the lifecycle of product development so that product owners, designers, and researchers can feel like they’re part of the narrative. Putting the onus on one person to be the gatekeeper just creates a psychological barrier and a feeling that you can’t move quickly because of this one person. Everyone should have some measure of responsibility, and that also helps build a better product in the long run. We should each have our own checks and balances associated with our functions. Now, of course, that’s sort of aspirational — for everyone to understand ethics and AI. And I do recognize that we’re not there. But we, as humans who are building products, should be responsible and informed in the basics of what it means to be ethical.

VentureBeat: It’s really interesting to hear how you’re thinking about and approaching this. Could you share an example of a time at Salesforce where you were working through one of these challenges or looking to prevent some of those harmful consequences?

Núñez: Recently, we had a research intern who was focusing on the exploration of sensitive fields and identifying the generalized value of ethics. Like, what is the value of ethics? Are our customers talking about it? And if they aren’t, what should we explore and what could we provide to our customers so that they do consider it top of mind? There were explorations around whether offering tools to help manage and mitigate risk would make someone more inclined to purchase Salesforce, as well as very specific explorations around features we’re going to ship and whether they’re going to be perceived as positive or negative. No one’s going to balk at your product if you provide ethical features, but this, of course, provokes the next question: What are they going to pay for it? And I don’t have an answer for you on that one.

VentureBeat: From your experience doing this work, do you have any other advice or takeaways to share? Is there anything you wish you knew earlier on?

Núñez: I wish I had believed people earlier on when they told me The Good Place is actually a great way to learn about ethics. The show is genuinely useful for teaching ethical stances, and people in ethics circles often cite it and speak to it because it really covers the foundations. If you’re going to build products, do it for yourself or for others, but also make sure that you’re doing it for the common good. And yes, making money should be a part of that story, but the course should be to do good for others and build awesome things.
