Today algorithms can shape what you buy, where you live, whether you get a job or a bank loan, and many other aspects of your life. Autocomplete now predicts your words in text messages, Gmail, and search terms. Even Tinder is controlled by algorithms — did you pick your love or did Tinder?
Do you pick what you watch or buy if more than 80 percent of what you watch on Netflix and 30 percent of purchases on Amazon are the result of an algorithm? Those statistics come from Kartik Hosanagar, an entrepreneur, professor, and researcher who has spent more than a decade studying and teaching courses about algorithms. In his work at the Wharton School of the University of Pennsylvania, Hosanagar has explored topics like filter bubbles and whether or not algorithms expose us to new points of view (by and large they do not).
As an entrepreneur and cofounder he built core algorithms and constructed data science practices for multiple ventures.
In his new book out today, A Human’s Guide to Machine Intelligence: How Algorithms Are Shaping Our Lives and How We Can Stay in Control, Hosanagar spells out the pitfalls of algorithmic control through a combination of personal narratives, stats, and historical analysis.
He also lays out ways people can recognize the influence of algorithms used by tech companies and the things the average person who feels overwhelmed by these forces can do in the face of massive multinational companies.
In an interview with VentureBeat, Hosanagar discussed what individuals and lawmakers or regulators can do to wrestle control back from tech giants like Amazon, Facebook, and Google. While he agrees that each of these companies controls a monopoly in its domain, Hosanagar also shares why he thinks U.S. Senator Elizabeth Warren is wrong to suggest they be broken up by regulators.
(This interview has been edited for brevity and clarity.)
VentureBeat: What sort of steps do you feel like individuals can take in their personal lives to wrestle control from these sorts of algorithms?
Hosanagar: A lot of people express this viewpoint that we are as individuals somewhat helpless against powerful technology and the algorithms unleashed on us. But I’m of the view that while individual effort alone will not solve this problem, we actually do have some amount of power here, and that power is in the form of our knowledge, our votes, and our dollars.
In terms of knowledge, the idea is somewhat straightforward, but I think it's underappreciated: becoming aware of the technologies we're using and what's happening behind the scenes with them. Instead of being very passive users of technologies and algorithms, we should make more deliberate choices. We have to ask ourselves how algorithms change the decisions we're making or that others are making about us.
If you look at what Facebook announced this past week in terms of changes to its products, how it's going to support encryption of messages and respond to the privacy needs that people have, I think that's a direct outcome of pushback from users.
The other is our votes: backing representatives who understand the nuances here and who take consumer protection seriously. In just the last year or two, a number of U.S. senators and representatives have proposed bills related to privacy, algorithmic bias, and so on, and factoring who's doing what into our voting decisions is, I think, pretty important.
And finally, with dollars, the idea is to vote with your wallet. We ultimately have the option to walk away from these tools. So if we feel like a company is using our data and we don't find it acceptable, for some people that might be where they draw the line and walk away. For somebody else maybe that's okay, but they draw a line somewhere else. It might be how that data is shared, or how these systems are listening to us, but ultimately we have to draw the line somewhere and say, 'I'm willing to walk away from the technology if certain things are violated.'
VentureBeat: When you say walk away, it reminds me of a conversation I had with somebody after one of these Facebook controversies. Can’t remember which one, there’s one every week, but basically they were saying something along the lines of somebody should start something new that would be able to take some of the market share from them as a result of these controversies, and I was like, yeah I agree, but what are you going to do? It’s a monopoly.
You can say you’re going to walk away, but you can be drawn back in by the fact that the rest of the network or ecosystem is there.
Do you have any thoughts about the monopolies at play here? Because it can seem sort of inescapable in that regard.
Hosanagar: These companies are all monopolies, you know, Facebook or Google or Amazon. Within their domains they're monopolies, and it's natural with technology for them to be monopolies, because there are such strong network effects in these markets that it's very hard to sustain multiple players; everyone will gravitate to the dominant platform, or whichever platform has more users.
So it's unavoidable, which is why we're ending up in this situation. But I go back to this idea that we do have some power, and we should use it wisely. Again, people have uninstalled Facebook when they're unhappy with it. Even with Uber, there was the delete-Uber movement, and all of those ultimately do have an impact on decision making in these companies. Yes, eventually the users come back, but at least it sends a message saying, 'Hey, I want you to take action.' At the same time, I think individual action alone will not solve this problem. That's where regulation needs to come in as well.
VentureBeat: Yeah, in places where you might have governments that would actually do that.
Hosanagar: Right, right.
VentureBeat: Can you talk a bit more about the Algorithmic Bill of Rights mentioned in your book? Is that central to the idea of the regulation you have in mind? That some of these tenets should be enshrined in law?
Hosanagar: So the Algorithmic Bill of Rights addresses some key protections consumers can and should expect here. The first couple of pillars I have there are around transparency, starting with transparency around the data companies are using when they make decisions. For example, you might apply for a job and not even know that the company has access to your social media and is analyzing your tweets. Just knowing what data was used to make a decision, at least in socially important settings, I think, is important.
The second is transparency with regard to the actual decisions. GDPR, for example, has a clause providing for explanations of algorithmic decisions, so that consumers can ask and companies should provide answers. For example, if credit was denied, what were the three or four most important factors that led to it being denied? That's how we might discover that one of the factors was your address, which doesn't feel right because address is correlated with race, and that's why there's a race bias here; or maybe we find out these are all very reasonable criteria the algorithm is using. But explanation regarding the decision is the second pillar.
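To make the "explanation" pillar concrete, here is a toy sketch (my illustration, not from the book or from GDPR's actual requirements): for a simple linear scoring model, a decision explanation can list the factors that contributed most to pulling an applicant's score below the cutoff. The model weights, feature names, and cutoff are all invented for the example.

```python
# Hypothetical linear credit-scoring model: score = sum(weight * feature).
# Weights and features are made up purely for illustration.
WEIGHTS = {"income": 0.5, "debt": -0.8, "years_at_address": 0.2}

def explain_denial(applicant, cutoff=1.0):
    """Return the score and, if denied, the factors ranked by how
    strongly they pulled the score down (lowest contribution first)."""
    contributions = {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}
    score = sum(contributions.values())
    if score >= cutoff:
        return score, []  # approved: no denial explanation needed
    reasons = sorted(contributions, key=contributions.get)
    return score, reasons[:3]

score, reasons = explain_denial({"income": 1.0, "debt": 1.0, "years_at_address": 0.5})
print(reasons)  # ['debt', 'years_at_address', 'income']
```

Real models are rarely this transparent, which is exactly why explanation clauses like GDPR's matter: for complex learned models, producing a faithful ranking of decision factors is an open technical problem rather than a three-line function.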
And the third I have is a little bit of user control. Users at the very least should have some ability to turn on or turn off some of these systems, for example, to be able to tell a smart speaker ‘Don’t listen to me right now’ or ‘Don’t listen until I say I’m ready for you to listen.’ Or like with the Facebook fake news example: Initially Facebook had no user feedback and no way for users to give feedback to the algorithm, but now two years later, they have this feature where with two clicks I can let Facebook’s news feed algorithm know that I think a post is false news or offensive content. So that third pillar is essentially around some feedback loop where users can have some impact on algorithmic choice.
So those are some of the main pillars. Another one that I've been pushing for is formal audits for large companies, where before they deploy these algorithms they actually do some sort of an audit.
Again, not every algorithm, but in certain socially important spheres, algorithms should be audited before they're rolled out. After 2008, banks were required to audit some of their models; I think we can apply related ideas in this setting as well.
VentureBeat: To this question of monopolies, last week, Senator Elizabeth Warren posed the idea that tech giants like Amazon, Facebook, and Google should be broken up. Do you have any opinions on the subject?
Hosanagar: I think her concerns are valid, but I'm not sure I agree with her proposal, on a couple of counts. The first is that breaking up these companies is going to be quite costly.
The tech sector is a growth engine of the economy, creating so many jobs, and it may be a risk to break up companies that are growing so well. I think you can say let's raise the bar and be careful about future M&A, and that's good. Let's put more conditions in place, like I mentioned with the Bill of Rights and other legislation, so we control their actions, but breaking up is costly. Also, if you look at the Microsoft antitrust case, it was a case about breaking up Microsoft, and it eventually worked out fine: Microsoft was not broken up, but lots of constraints were placed on the company, lots of new regulation that Microsoft agreed to, and that worked just fine. So my overall take is that you don't need to break up these companies. We just need tighter regulation, and we need to look at it more carefully.
Another Warren proposal says let's separate platform from services; she says, for example, that Amazon Marketplace cannot also have Amazon Basics.
I think it's going in the right direction, but a clear separation of platform and service is not feasible, because platforms, when they get started, often need to offer some of the core services themselves. You can't just open up a platform with no users and say, 'Hey people, come offer services on our platform.' You have to build some of the core services yourself, bring in the users, and then ask others to put their services on top. So the proposal to separate platforms from services is interesting and in the right direction, but the clean separation she wants is infeasible and somewhat impractical.
VentureBeat: Can you talk a little bit about the nature versus nurture question that you bring up in the book?
Hosanagar: Yeah, so I think the idea there is that it used to be the case not too long ago, maybe 10-15 years back, that almost all algorithms around us were manually programmed by an engineer. The entire logic was determined end to end by an engineer.
And so these were highly predictable as a result. But they couldn't perform super well, because if you ask somebody, for example, to come up with all the rules for how to drive a car, it's very hard. You can come up with an algorithm that may be reasonable at driving a car, but it will eventually fail. Or similarly with diagnosing a disease: if I program every rule in there, it may work reasonably well much of the time, but it's going to fail at some point.
Where we've gone now is the machine learning direction, where we say, okay, we don't have to hard-code all the rules; let the system learn the relevant rules from data.
The implication is that these systems are highly robust: they can make a mistake, learn from it, and keep improving over time, and that's great, but they also become more unpredictable. The analogy I give is human behavior, where we attribute characteristics to nature and nurture.
Nature is our genetic code that we have inherited and nurture is our environment from which we learn.
If you look at algorithm behavior, it also comes down to nature and nurture. Nature is the human-written code, the code that's given to the algorithm, the equivalent of genetic code. And nurture is the data from which it learns.
So a lot of the issues we see are basically nurture issues, issues with the data. All these biases and problems are in the datasets these systems learn from, which is why I want us to focus more on the data and have transparency of data and so on.
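The nature-versus-nurture contrast can be sketched with a toy example (mine, not the book's): a hand-coded rule is fully specified by its author, while a rule learned from historical data inherits whatever is in that data, including its biases. The loan scenario, thresholds, and "learning" procedure below are all invented for illustration.

```python
# "Nature": a hand-coded rule, fully determined by the engineer.
def approve_loan_rule(income):
    return income >= 50_000  # explicit, predictable threshold

# "Nurture": the same decision learned from past (income, approved?) examples.
# The learned threshold is simply the midpoint between the highest rejected
# income and the lowest approved income in the history.
def learn_threshold(history):
    approved = [inc for inc, ok in history if ok]
    rejected = [inc for inc, ok in history if not ok]
    return (max(rejected) + min(approved)) / 2

# If past decisions were biased (say, qualified applicants around 45-50k
# were rejected), the learned rule silently inherits that bias.
biased_history = [(30_000, False), (45_000, False), (70_000, True), (80_000, True)]
print(learn_threshold(biased_history))  # 57500.0, stricter than the coded rule
```

The point is that nothing in the learned rule's "code" is wrong; the skew lives entirely in the training data, which is why Hosanagar's argument focuses on data transparency rather than on code alone.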
VentureBeat: One thing that comes to mind, though — you mentioned regulators. I recall during Facebook's testimony before Congress last spring, most people I spoke to in tech afterward were less concerned with Mark Zuckerberg's answers to questions than with lawmakers' apparent lack of understanding of how the tech works. People were very afraid after watching those hearings that these people would be in charge of regulating the use of algorithms or artificial intelligence.
What sort of things do you think lawmakers need to do to stay informed on the sort of cutting edge in this area?
Hosanagar: Yeah, I think at the end of the day lawmakers need to educate themselves super fast, and we're in a situation where it's a constant cat-and-mouse game: by the time the lawmakers arrive at some level of understanding, the technology has morphed. It's now something new.
So it’s a tough battle for them.
I think the other thing is they need to set up certain advisory boards and boards of experts that can help them think this through. One of the things I suggested in the book is a need for an Algorithmic Safety Board.
And so the Algorithmic Safety Board would be an independent agency, like the CFTC, the Commodity Futures Trading Commission, which is an independent body overseeing futures and commodities trading. The Federal Reserve is another nice example of an independent board.
But I think there is a need for an independent board of experts that can help set the mandate a little bit and also educate the lawmakers on the relevant issues, because I think we can’t keep waiting for them to catch up.
VentureBeat: For the average person overwhelmed by the abundance of algorithms in their lives, where would you suggest somebody begin to, I guess, understand when an algorithm is in play in the decision-making processes in their lives?
Hosanagar: I felt that kind of understanding was limited among laypeople, and there wasn't a whole lot of material or resources to help them. That's what led me to write this book, the primary motivation being to deconstruct how this works and where it shows up.
Where are we using it without realizing there’s an algorithm behind the scenes that’s driving these choices and recommendations?
I'd say what's perhaps even needed is a basic understanding of technology, and that algorithms become part of school curricula going forward, because it's going to be very hard to expect individuals to always keep up and stay alert and knowledgeable as this evolves.
We're teaching kids programming, and that's a good thing, but just as we talk about computer literacy, this whole idea of algorithm literacy, and overall technology literacy, is needed, and it should go into school curricula.