Can artificial intelligence play nice?
One data scientist believes that AI can be fair and ethical, if humans make it so.

The media stereotype about robots stealing our jobs and taking over the world might have captured the public’s imagination, but we’re not there yet. The problem is that even the narrow-scope artificial intelligence we do have now behaves unpredictably, says data scientist and AI ethicist Dr. Rumman Chowdhury. And that is what keeps Chowdhury, also the global responsible artificial intelligence lead at Accenture, up at night.

Famously, in March 2016, Google’s AlphaGo AI defeated a top champion at the strategy game of Go. At a pivotal point in the match, the AI made a move that no trained veteran would execute. The move confounded everyone, to the point where even the champion wondered whether the AI knew what it was doing. But in the end, the AI came out on top. The key takeaway: the AI followed a logic we could not understand. “That is the kind of thing that people worry about with AI: What if it’s taking some step or thinking of some action that doesn’t make sense to us, and what if that action is harmful?” asks Chowdhury.
We’re already seeing instances of AI going awry. Case in point: a controversial ad-targeting algorithm that showed women fewer advertisements for high-paying jobs.
Why ethics in AI matters
Why do we make so much noise about ethics in AI in particular when all technology should be viewed through a similar lens? The challenge with AI is that it is different from other technology in three fundamental ways — ways that make the application of ethics more pressing.
First, says Chowdhury, AI is immediate: any small change in a Facebook algorithm, for example, has an instant impact. Second, it’s global, with the potential to affect millions. Third, it’s invisible. As the example of women being shown fewer ads for high-paying jobs demonstrates, AI algorithms can perpetuate existing societal biases in insidious ways.
And while the general AI that will drive the robotics of the future is not here yet, it will arrive eventually, and we need to lay a robust ethical foundation now so that we can recognize when AI is doing harm, Chowdhury says.
The statistics-ethics dance
Understanding AI and ethics means understanding what data can (and cannot) do. “Scientists who come from a math or science background view data as objective. It is actually not,” Chowdhury says. “It is subject to someone collecting it, somebody making decisions about which data to measure, how the data is picked up.”
Since AI algorithms are built on data, they are riddled with the subjectivity that data presents.
Unfortunately, people view the output of these AI algorithms as absolute truth. The public often treats a probabilistic model such as a poll as deterministic, even though polls only estimate the likelihood of outcomes; they don’t guarantee them. This gap between what can be accurately predicted and what cannot is a gray area where biases can mold social behavior.
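Chowdhury’s point that models absorb the subjectivity of their training data can be made concrete with a toy sketch. Everything below is synthetic and illustrative — the dataset, the rates, and the `biased_history` helper are invented for this example, not drawn from any real system the article describes. A learner that simply estimates hiring rates from historically biased records reproduces the bias as if it were signal:

```python
import random

random.seed(0)

# Hypothetical, synthetic "historical hiring" records. Past decisions
# were biased: equally qualified women were hired less often than men.
def biased_history(n=10_000):
    rows = []
    for _ in range(n):
        gender = random.choice(["man", "woman"])
        qualified = random.random() < 0.5
        base = 0.7 if qualified else 0.2
        penalty = 0.2 if gender == "woman" else 0.0  # the historical bias
        hired = random.random() < (base - penalty)
        rows.append((gender, qualified, hired))
    return rows

data = biased_history()

# A "model" that learns hiring rates from the data -- as any statistical
# learner would -- absorbs the bias right along with the real signal.
def hire_rate(rows, gender):
    outcomes = [hired for g, q, hired in rows if g == gender and q]
    return sum(outcomes) / len(outcomes)

print(f"qualified men hired:   {hire_rate(data, 'man'):.2f}")
print(f"qualified women hired: {hire_rate(data, 'woman'):.2f}")
```

The learned rates differ even for equally qualified candidates, which is exactly the insidious, invisible failure mode the article describes: the model’s output looks like an objective truth but is really a faithful echo of subjective, biased data collection.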
In AI too, this gray area invites bias — and it’s where Chowdhury saw her career take shape. By bringing a data scientist’s point of view to AI, the MIT, Columbia, and University of California, San Diego-educated Chowdhury hopes to weave ethics into our collective discussion about the technology and how we can use AI to shape the future we want.
The diversity equation
Diversity in the workforce, both among the programmers developing the AI algorithms and the ones interpreting the results, is a good start to developing responsible AI. This includes ensuring women have a seat at the table, says Chowdhury, a passionate advocate for women in tech.
“The questions we choose to answer and what we choose to prioritize is a function of who is in power and who is making these decisions. When we solve problems, we solve the problems that we think are the most salient. So it is absolutely critical to have women in these decision-making roles so that there’s a wider range of voices prioritizing what should and shouldn’t be created or fixed,” Chowdhury says. Pay equity, the retention and promotion of women in tech, and creating environments for women to flourish are just a few of the lenses through which to parse this important discussion, she adds.
Although diversity is important, ethics in AI shouldn’t begin and end there, Chowdhury says. “It is also important to have the technical understanding to figure out what issues may exist in the existing data and also understand how to interrogate the models you’re building,” she says.
The human in the loop
Equally important: when you’re implementing an AI solution, what does it mean for the end user receiving the information? It’s not enough to have a human in the loop; that human needs the expertise to recognize problems and the authority to do something about them. “You really have to consider the structure of the organization, whether it empowers people to speak up [when they detect anomalies],” Chowdhury says. If a bank moves to automated tellers, for example, which subset of the population is disproportionately affected? Who gets to speak up about these issues, and when?
In her role at Accenture, Chowdhury helps organizations align their technology practices with their stated values. When a bank says it values diversity in its workforce, it needs to ensure that the AI it uses in hiring and promotions is not riddled with biases that would negate that mission.
AI is part of a sociotechnical system, Chowdhury says, so there is no single node where things can go wrong; there are multiple points of potential failure. “What we need to do now is figure out what are people’s responsibilities, how do they manifest themselves, and how can we empower people to make the right decisions,” she says.
Chowdhury believes that part of her responsibility is to change people’s thinking about AI: Instead of viewing it as a mere tool, people should look at it as something tangible that affects people’s lives. She expects the discussion around AI to evolve over the next few years to intersect with other societal considerations. “Change is the norm in technology,” Chowdhury says. “Technology is [always] a function of what’s happening in the world.” Climate change, anyone?