Tim Hwang: Governments probably don’t need a Minister of AI

Tim Hwang, director of the Ethics + Governance of AI Initiative

As machine learning-fueled tech touches every industry, new job titles are becoming more common. IPsoft has a chief cognitive officer, and Autodesk hires conversational engineers. But Tim Hwang has those beat: He’s director of the Ethics and Governance of Artificial Intelligence Fund, an initiative based at Harvard and MIT that doles out money to AI projects for the public good. The $27 million fund was created in January and is backed by LinkedIn cofounder Reid Hoffman.

For two years prior to that, he led AI and ML efforts for Google’s public policy arm. He’s worked with groups like the Electronic Frontier Foundation, Data & Society, and the Institute for the Future. In 2011, The Globe and Mail named Hwang one of a dozen people changing philanthropy, alongside Bill Gates and George Soros.

In his free time, Hwang has fun writing about a range of topics, from the impact of automation to dank memes. Last week, he released his latest creation: a collection of essays by academics analyzing images of Mark Zuckerberg, from an awkward hug with Indian prime minister Narendra Modi to some of his old profile photos.

VentureBeat spoke with Hwang about his many job titles, the portions of AI he thinks are underappreciated, and whether governments should consider the appointment of a Minister of Artificial Intelligence (TL;DR: Probably not).

This interview has been edited for brevity and clarity.

VentureBeat: So as director of the Ethics and Governance of Artificial Intelligence Fund, what do you do?

Tim Hwang: Basically, the fund got started earlier this year, and my responsibility is to direct everything from logistically getting the fund rolling — paperwork and so on — to the actual strategy work of figuring out where we want to deploy the fund and how what we fund links into the anchor institutions, Harvard and MIT.

VentureBeat: What are some areas the fund is looking at that don’t get much attention today in AI but you think are pretty exciting and potentially impactful?

Hwang: Right now AI is deep in the hype cycle, essentially, so there’s a lot of investment flowing toward it. And I think the big question from the fund’s point of view is: What’s going to get systematically underinvested in, to the public’s detriment? That question guides a lot of what we’re thinking about funding and a lot of the grants we’ve given out so far.

To name some of the spaces that are interesting and relevant: there’s a lot of work and interest in pushing the fund and both anchor institutions, MIT and Harvard, toward the criminal justice side of things — how these technologies are being sold and deployed to do things like risk assessment and recidivism scoring. That means thinking both about where these products may not be as good as they’re marketed to be, and about what tools we can build to practically audit these systems and determine whether they’re behaving in a fair or unfair way.
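To make that auditing idea concrete, here is a minimal sketch of one basic check an auditor might run: comparing a risk score’s false positive rates across demographic groups. This is an illustrative example only — the data, threshold, and function names are invented for demonstration and aren’t drawn from any tool the fund or its grantees have built.

```python
import numpy as np

def false_positive_rate(scores, labels, threshold=0.5):
    """Share of truly negative cases the model flags as high-risk."""
    flagged = scores >= threshold
    negatives = labels == 0
    return flagged[negatives].mean()

# Synthetic population: two groups, random outcomes, and scores that
# are deliberately skewed upward for group 1 to make the gap visible.
rng = np.random.default_rng(1)
n = 1000
group = rng.integers(0, 2, size=n)   # group membership, 0 or 1
labels = rng.integers(0, 2, size=n)  # ground-truth outcome (1 = reoffended)
scores = rng.random(n) + 0.2 * group # hypothetical risk scores, biased by group

for g in (0, 1):
    mask = group == g
    fpr = false_positive_rate(scores[mask], labels[mask])
    print(f"group {g}: false positive rate = {fpr:.2f}")
# A persistent gap between groups is one signal an auditor would flag.
```

Real audits use more metrics than this one (equalized odds, calibration, and so on), but the core move — disaggregating error rates by group — is the same.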

There’s also a number of really interesting things happening in the privacy space. As you may know, the way machine learning is done nowadays, you need huge amounts of data; it’s often done with a single large dataset and centralized computing power, and there are some really interesting alternative models emerging around how you do machine learning. So there’s a cool technology called federated learning. Basically, the notion is that you keep a lot of the personal data on the devices themselves, with a lot less personal data going up into the cloud.
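For readers curious about the mechanics, here is a minimal sketch of federated averaging (FedAvg), the canonical federated learning algorithm introduced by McMahan et al. The linear model, synthetic device data, and hyperparameters below are simplified stand-ins chosen for illustration, not anything from a production system.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One device's training pass: plain gradient descent on a linear model."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_round(global_weights, devices):
    """Server step: aggregate device updates, weighted by local dataset size."""
    updates, sizes = [], []
    for X, y in devices:  # raw (X, y) never leaves the device
        updates.append(local_update(global_weights, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=sizes)

# Simulated devices, each holding its own private data for the same task.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(10):
    X = rng.normal(size=(50, 2))
    devices.append((X, X @ true_w + rng.normal(scale=0.1, size=50)))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, devices)
print(w)  # approaches true_w without any raw data ever being pooled
```

The privacy payoff is in `federated_round`: only model weights cross the network, while the raw examples stay on each device.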

So the commonality between these things: one is a topic area, criminal justice, and the other is a technical area, privacy. But in both cases, our assessment is that even in the midst of all this investment, these types of topics and tools won’t get the kind of investment they need.

VentureBeat: For businesses with a dual desire to have purpose and turn a profit, are there any verticals you see right now that are getting less attention in research or funding but have the potential to be relatively profitable businesses?

Hwang: Oh, sure. Some of the opportunities people are paying a little less attention to, which I think could nonetheless be really big businesses, are, for example, the use of machine learning in logistics — warehouse management, manufacturing, and supply chain stuff.

I think it’s really going to change a lot of those businesses, and because it’s kind of boring, it’s talked about less than some of the more high-profile cases you’re seeing, like autonomous vehicles.

I also think machine learning’s application in financial markets is interesting. High-frequency trading has been done for some time now, but it’s worth thinking about what happens when these algorithms actually evolve on their own in the marketplace. That’s a really interesting possibility, and it raises a number of interesting questions and risks — and again, it’s a little less talked about because it’s a bit wonky and technical. So I think there’s a lot of stuff hiding in plain sight, if you will, that will be big businesses.

VentureBeat: Did you see AI Now’s recent report?

Hwang: Yeah, they’re a grantee; we’re supporting AI Now.

VentureBeat: The report suggests it may be essential for an AI company to bring in professionals from the industry it impacts — a medical AI startup would have doctors on staff, for example. What do you think about that idea? Should engineers using datasets from specific verticals work directly with people from the impacted industries, or even have them on staff?

Hwang: Yeah, I’m totally on board with that recommendation. That’s an interesting thing about AI: it’s often talked about as its own freestanding thing, but a lot of these systems only make sense in context. If you’re going to apply automation, it depends very much on the task you’re trying to do. You can use AI to sort cat photos or you can use it for targeting autonomous weapons, and some of the same technology might be used across both of those things. So yeah, I’m totally on board with domain expertise informing how we deploy some of these systems. Now, I would say the big question is how, right? We can all agree that knowing what you’re doing before you deploy some kind of automation is definitely good practice, but then the question is, substantively, what does that look like in terms of workflow, profit, and decision making?

VentureBeat: You were on the public policy team at Google, focusing on AI and ML. What exactly does that mean?

Hwang: Yeah, my role doing public policy at Google was pretty straightforward. Public policy is basically the team responsible for keeping the company abreast of what’s happening in government and regulation. At the time I joined, it didn’t really have defined positions on a whole range of issues around AI — questions like “What does Google think the impact of AI on jobs and the economy is going to be?” or “What’s Google’s position on what fairness should look like in these systems?”

And so I was brought on board to do two things. One was a fair amount of work talking to policy makers, regulators, and civil society groups about the technology itself, to make sure there’s a good common understanding of what’s being talked about — particularly because AI can be a very confusing space in terms of what you’re referring to at any given moment. The second part was helping Google develop its positions, which meant coordinating across legal, research, and a whole different set of actors to figure out what we should say — what the company’s position on these issues is. I did that for about two years, from 2014 to 2016.

VentureBeat: Did you see that the United Arab Emirates (UAE) has appointed a minister of AI? I’m not really sure what that entails for the UAE’s purposes, but what do you think of the idea? Is it something other governments should duplicate?

Hwang: I’m skeptical of the notion of having a cabinet-level minister focused specifically on AI. We’re talking about a technology with very broad potential use cases, and I think the question here is what the right way of dividing government responsibility around it is. By and large, it’s most effective for domain experts to think about what rules should apply in their space; when we think about autonomous vehicles on the road, for instance, we want people who have thought about highway safety for a really long time to help.

VentureBeat: Yeah, and there are two sides of this in my head: one says this is something a CTO-equivalent in government can handle, but then you think about the idea of AI being pervasive, and nation-states now talking about the need to win the AI race.

Hwang: I mean, we’ll see how this evolves, but it does really feel like a number of countries are trying to make AI a cornerstone of economic development, and you could imagine these sort of czar-type positions being created to think about how to accelerate a whole range of things using AI. But I think this is going to depend a lot on local politics and specifics.
