Meet the Bay Area Women in Tech Fighting Bias in AI

Artificial Intelligence: The Technology That’s Changing Everything on the Planet

By Karen Gullo

Artificial intelligence systems are shaping the lives and futures of millions of Americans, whether they realize it or not. We’re not talking about the AI that helps Netflix predict what movies you want to watch. This is something much bigger. AI is being used by corporations, government agencies, and law enforcement to decide who gets a loan, a job, a spot at the local school for their kids, entry into the country, and jail time if they’re arrested. When AI systems make biased, unjust decisions, the consequences are real, and they fall hardest on women and people of color. That’s because AI systems, for all their promise to transform fields like health care and education, can and often do perpetuate the biases we see all around us, from gender inequity to racial discrimination.

“There’s so much potential in AI to be used for good, but if these systems have bias it could not only mirror inequities but also exacerbate them,” said Tess Posner, CEO at AI4ALL, an Oakland nonprofit whose mission is to increase diversity and inclusion in AI.

Tess Posner - CEO of AI4ALL. Photo by Tumay Aslay

Posner is among a group of innovative female creators in the Bay Area who are at the forefront of the battle against bias in AI. They include social activists, data scientists, and academics from diverse backgrounds. Some have been coding since grade school; others have run city-wide data operations. What they share is a commitment to raising awareness and finding solutions to end bias in AI. They are spearheading programs that provide tools for spotting and mitigating bias in algorithms, creating initiatives that give women a seat at the AI table, and urging companies to focus on ethics and diversity in their AI work.

“I think people didn’t understand how big of a problem it is,” said Ayori Selassie, a San Francisco-based software engineer, applied AI expert, and CEO of Selfpreneur, which provides consulting and workshops about methodologies she has created for personal development and the ethical use of technology. “When AI is the gatekeeper, when it grants permission or makes a prediction about whether or not you get a life-saving drug, that’s real world.”

Ayori Selassie – applied AI expert and CEO of Selfpreneur. Photo by Tumay Aslay

AI enables computers to make decisions that normally require human expertise by analyzing data, recognizing patterns and trends, and using what they learn to predict outcomes. A simple example: Amazon suggests products you might like based on the company’s analysis of what you’ve purchased in the past. But what if the data in AI-based loan application programs is biased, incomplete, or discriminatory? What if the creators of AI algorithms are biased?
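
To make the pattern-matching idea concrete, here is a minimal sketch (not Amazon’s actual system) of prediction by co-purchase counting: items that appear together in past orders are suggested to the next buyer. The product names and purchase histories are invented for illustration.

```python
from collections import Counter
from itertools import combinations

# Invented purchase histories, one list of items per past order.
purchase_histories = [
    ["laptop", "mouse", "keyboard"],
    ["laptop", "mouse", "usb hub"],
    ["headphones", "laptop", "mouse"],
    ["keyboard", "usb hub"],
]

def co_purchase_counts(histories):
    """Count how often each pair of items appears in the same order."""
    counts = Counter()
    for items in histories:
        for a, b in combinations(sorted(set(items)), 2):
            counts[(a, b)] += 1
    return counts

def recommend(item, histories, top_n=3):
    """Suggest the items most often bought alongside `item`."""
    related = Counter()
    for (a, b), n in co_purchase_counts(histories).items():
        if a == item:
            related[b] += n
        elif b == item:
            related[a] += n
    return [name for name, _ in related.most_common(top_n)]

# "mouse" comes back first: it co-occurs with "laptop" most often.
print(recommend("laptop", purchase_histories))
```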

Researchers have found that AI systems will spit out biased decisions when they’ve “learned” how to solve problems using data that’s exclusive and homogeneous—and those mistakes disproportionately affect women, people of color, and low-income communities. The AI field is littered with examples of AI systems that discriminate.

Amazon scrapped secret AI recruitment software its engineers created around 2015 that was supposed to simplify searches for new hires. Turns out the software was biased against female applicants. It had been trained to find good candidates based on patterns from resumes submitted to Amazon over a 10-year period. Since the majority of Amazon’s applicants were male, the system “learned” a preference for men and downgraded female candidates.
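
The mechanism is easy to reproduce in miniature. The sketch below is not Amazon’s recruiting tool; it is a toy word-scoring model trained on an invented, male-dominated hiring history, showing how a word like “women’s” can end up penalized simply because it rarely appeared on résumés the past process labeled as hires.

```python
from collections import defaultdict

# Invented training data: (resume words, was the candidate hired?).
# The history is male-dominated, so words like "women's" rarely
# appear on resumes labeled as hires.
history = [
    ({"software", "chess club", "captain"}, True),
    ({"software", "robotics"}, True),
    ({"software", "chess club"}, True),
    ({"software", "women's", "robotics"}, False),
    ({"software", "women's", "chess club"}, False),
    ({"software", "captain"}, True),
]

def word_scores(history):
    """Score each word by how much its presence shifts the hire rate
    relative to the overall hire rate in the training data."""
    overall = sum(hired for _, hired in history) / len(history)
    seen, hires = defaultdict(int), defaultdict(int)
    for words, hired in history:
        for w in words:
            seen[w] += 1
            hires[w] += hired
    return {w: hires[w] / seen[w] - overall for w in seen}

def score_resume(words, scores):
    """Sum the learned word scores for a new resume."""
    return sum(scores.get(w, 0.0) for w in words)

scores = word_scores(history)
print(round(scores["women's"], 2))   # negative: the word is penalized
print(score_resume({"software", "robotics", "women's"}, scores))
print(score_resume({"software", "robotics", "captain"}, scores))
```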

MIT Media Lab researcher Joy Buolamwini studied the accuracy of commercially available AI-powered facial recognition software. Face recognition systems use algorithms to pick out specific details in a photo of a person’s face, such as chin shape, and convert them into mathematical representations that can be compared to other faces. In a study published last year, Buolamwini, founder of the Algorithmic Justice League at MIT, found that the systems were far more likely to misidentify the gender of dark-skinned women than of white men. One system misclassified gender for 35 percent of darker-skinned women.
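
As a rough illustration of the matching step described above, the sketch below compares tiny, made-up feature vectors by distance and accepts a match only under a threshold. Real systems derive much larger representations from the photo itself; the names and numbers here are invented.

```python
import math

# Hypothetical face "templates": tiny made-up feature vectors standing in
# for the mathematical representations described above. Real systems use
# hundreds of learned features, not a handful of hand-picked measurements.
enrolled = {
    "person_a": [0.42, 0.91, 0.33],   # e.g. chin shape, eye spacing, ...
    "person_b": [0.10, 0.55, 0.80],
}

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(probe, gallery, threshold=0.25):
    """Return the closest enrolled face, or None if nothing is close enough."""
    name, best = min(gallery.items(), key=lambda item: distance(probe, item[1]))
    return name if distance(probe, best) <= threshold else None

print(identify([0.40, 0.93, 0.30], enrolled))  # matches person_a
print(identify([0.90, 0.10, 0.10], enrolled))  # no confident match -> None
```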

Algorithms used by courts and parole boards to assess the risk that defendants will commit further crimes were found to show bias against black defendants in a study conducted by ProPublica, a nonprofit news organization. The news group looked at the risk scores of thousands of arrestees in Florida and checked how many were charged with new crimes in the two years after their arrest. The 2016 study found that the algorithms were more likely to wrongly flag black defendants as future criminals: black defendants were 45 percent more likely to receive higher risk scores than white defendants, while white defendants were mistakenly rated as lower risk more often than black defendants. These risk assessments are used to determine which defendants should be set free and which should be sent to jail.
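
The disparity this kind of audit measures comes down to a simple calculation: among people who were not rearrested, how many had nevertheless been labeled high risk? The records below are invented for illustration; only the arithmetic of the audit is the point.

```python
# Invented follow-up records, loosely modeled on the kind of audit described
# above: (group, labeled high risk?, rearrested within two years?)
records = [
    ("black", True, False), ("black", True, True), ("black", True, False),
    ("black", False, False), ("black", False, True),
    ("white", True, True), ("white", False, False),
    ("white", False, True), ("white", False, False), ("white", True, False),
]

def false_positive_rate(records, group):
    """Among people in `group` who were NOT rearrested, the share who had
    nevertheless been labeled high risk."""
    not_rearrested = [high for g, high, rearrested in records
                      if g == group and not rearrested]
    return sum(not_rearrested) / len(not_rearrested)

for group in ("black", "white"):
    print(f"{group}: {false_positive_rate(records, group):.0%} "
          "wrongly labeled high risk among those not rearrested")
```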

“We know bias exists in every data set, but our society hasn’t come to grips with that,” says Selassie. “We need to admit we have a problem and start working together so that we have some standards that work.”

Ayori Selassie at Oakland Impact Hub

Raised in poverty in West Oakland and homeschooled, along with her seven siblings, by her single mother, Selassie got into computers when she was 11. Her mother gave her a book on BASIC programming and had her go through the lessons one by one. Selassie taught herself how to code, and at 16 she was running her own tech startup. She worked as a web designer early in her professional career and founded a pre-incubator in Oakland that connected local entrepreneurs from diverse backgrounds with funders. Today she’s a manager of product marketing at a major software company and an activist for gender and racial equity in tech.

Selassie got involved in AI by chance more than a decade ago while working as an analyst calculating rates for utility clients. She saw the potential for AI to solve big problems like reducing industrial carbon emissions, but had major concerns about AI applications being developed at tech companies by small, non-inclusive groups of data scientists and advanced developers.

“I call it AI happy feet—you find really cool applications for this innovative technology and it seems really helpful until you identify that it doesn’t work for all segments of your population,” said Selassie. “If it doesn’t work for women your tool is sexist. If it doesn’t work for black people, Asians, your tool is racist. The amount of bias in these systems is severe and it can really hurt people.”

The solution, said Selassie, is what she calls social solution design, a methodology for ethical decision-making. The idea is to involve inclusive groups of stakeholders—customers, policy advisors, community members, diversity experts—in every step of product development and validation to ensure that bias is detected and fixed at the outset. Companies test for bugs and vulnerabilities before releasing new software. Selassie maintains that they should also have a multi-stakeholder process for detecting racial and gender bias in AI systems. To that end, she consults with companies about how to implement ethical decision-making processes and runs AI workshops for nontechnical business people so they can learn about the technology and collaborate with developers and other stakeholders in the design of AI systems.

Identifying and mitigating bias in AI is critical for governments, which have a duty to be transparent and accountable about how they use technology. As San Francisco began looking at developing algorithm-based tools for big data projects, concerns about bias in AI were paramount. Every day seemed to bring a new story of an AI system somewhere in the country gone wrong. Joy Bonaguro, who was the city’s first chief data officer from 2014 until last fall, sought a solution but didn’t find much in the way of practical guidance for assessing the ethical implications of using algorithms.

“We saw ethics pledges and policy papers, but we needed something very hands on and practical,” said Bonaguro. “I proposed that we adopt a municipal standard, a code of practice as opposed to a code of conduct to move the idea forward.”

The city partnered with Johns Hopkins University and Harvard University to develop a first-of-its-kind Ethics & Algorithms Toolkit for governments, launched last year. It’s essentially a process-based risk management approach to using AI responsibly, says Bonaguro. The toolkit, which is available online, walks users through a series of questions to help governments understand the ethical risks of using algorithms and identify what can be done to mitigate them. Users are asked, among other things, to identify who will be impacted by the technology and what risks come with the data used to “train” the algorithms, and the answers produce a risk score of low, medium, or high. Mitigation strategies are recommended for each level of risk.
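
The toolkit itself is a document and worksheet rather than software, but its question-and-score approach can be sketched in a few lines. Everything below (the questions, weights, thresholds, and mitigation text) is hypothetical and only meant to illustrate how yes/no answers roll up into a low, medium, or high rating.

```python
# A hypothetical sketch of a questionnaire-to-risk-level mapping. The
# questions, weights, and mitigation advice are invented for illustration;
# they are not the actual Ethics & Algorithms Toolkit content.

QUESTIONS = {
    "affects_individual_rights": 3,   # e.g. benefits, policing, housing
    "uses_sensitive_or_proxy_data": 2,
    "decision_is_fully_automated": 2,
    "training_data_known_to_be_skewed": 3,
}

MITIGATIONS = {
    "low": "Document the model and monitor outcomes periodically.",
    "medium": "Add human review of decisions and audit error rates by group.",
    "high": "Pause deployment; run a bias audit with outside stakeholders.",
}

def risk_level(answers):
    """Turn yes/no answers into a low/medium/high risk rating."""
    score = sum(weight for q, weight in QUESTIONS.items() if answers.get(q))
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

answers = {
    "affects_individual_rights": True,
    "uses_sensitive_or_proxy_data": True,
    "decision_is_fully_automated": False,
    "training_data_known_to_be_skewed": False,
}
level = risk_level(answers)
print(level, "->", MITIGATIONS[level])   # medium -> add human review ...
```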

“There’s a lot of what I call hand-wringing about the problem,” said Bonaguro, who’s now head of people, operations and data at Corelight, a cybersecurity company. “I personally just love turning that into something practical.”

Women hold only 26 percent of tech jobs, and the stats are even worse in AI: only 23 percent of U.S. professionals with AI skills are women.

Tess Posner and her 10-person team at AI4ALL hope to change that. The nonprofit’s mission is to increase diversity in AI by giving young people opportunities to take classes and work on research projects in AI at universities around North America. The program aims to widen the pipeline of underrepresented people, including women, who will go on to jobs and leadership roles in AI and tech.

Tess Posner in downtown Oakland. Photo by Tumay Aslay

Based in downtown Oakland, AI4ALL partners with major universities like Stanford and Princeton, with funding from tech companies, to offer summer residential programs in AI studies to ninth, tenth, and eleventh graders from diverse and underrepresented populations. No programming experience is necessary, and financial aid is available at universities that charge tuition for AI4ALL camp (not all of them do). The students spend two to three weeks in university AI labs working with professors and graduate student instructors on research projects, attending lectures and field trips to tech companies, and learning to apply AI to real-world problems. Classes in computer science, Python programming, neural networks, and social bias are offered, as well as mentoring and career counseling. Two hundred and fifty young people, the majority of them young women, have attended AI4ALL camps since they began four years ago.

“It’s taking people who are usually totally left out and setting them up with cutting edge technology,” said Posner, former managing director of TechHire, a White House initiative to help underrepresented Americans start careers in tech.

“We need to address the bias issues to reach the full potential of AI.”

AI4ALL was founded in 2015 by Stanford AI Lab Director Dr. Fei-Fei Li, Dr. Olga Russakovsky, assistant professor in computer science at Princeton University, and Dr. Rick Sommer, executive director of Stanford University’s pre-collegiate studies program.

Today ten universities, including UC Berkeley, UCSF, Boston University, Arizona State, and Carnegie Mellon, offer AI4ALL summer camps serving 300 students, who have worked on projects including natural language processing to aid disaster relief and algorithms to detect cancers in the human genome. Posner says 61 percent of the program’s alumni have gone on to start their own AI projects.

Later this year, the organization is launching the AI4ALL Open Learning Program, a free online curriculum about the basics of AI and how it can work in daily life. The project is funded by a grant from Google.org. The goal is to teach AI to young people and encourage them to use their skills in their communities. In a pilot project, middle school and high school students with no prior exposure to AI developed computer vision projects that used neural networks (sets of algorithms modeled loosely on the brain’s network of neurons) to recognize images. The students learned how this technology is used in digital tools that help blind and partially sighted people identify objects, text, or people in front of them. Posner says the goal is to have 1 million users of the Open Learning Program within five years.
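
For readers curious what “learning” means here, the sketch below (not AI4ALL’s curriculum) trains a single artificial neuron, the simplest building block of a neural network, to separate invented bright and dark three-pixel “images” by nudging its weights after each mistake.

```python
# A minimal sketch of the idea behind a neural network: one artificial
# "neuron" adjusts its weights from examples. Here it learns to tell bright
# 3-pixel images from dark ones; real computer-vision networks stack
# thousands of such units. All data below is invented.

# Training data: (pixel brightness values, 1 = "bright", 0 = "dark")
examples = [
    ([0.9, 0.8, 0.7], 1), ([0.8, 0.9, 0.9], 1),
    ([0.1, 0.2, 0.1], 0), ([0.2, 0.1, 0.3], 0),
]

weights, bias, rate = [0.0, 0.0, 0.0], 0.0, 0.1

def predict(pixels):
    """Fire (1) if the weighted sum of the pixels crosses the threshold."""
    total = sum(w * p for w, p in zip(weights, pixels)) + bias
    return 1 if total > 0 else 0

# Perceptron learning rule: nudge the weights toward each mistake's correction.
for _ in range(20):
    for pixels, label in examples:
        error = label - predict(pixels)
        weights = [w + rate * error * p for w, p in zip(weights, pixels)]
        bias += rate * error

print(predict([0.95, 0.85, 0.9]))  # 1: classified as bright
print(predict([0.05, 0.1, 0.2]))   # 0: classified as dark
```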

Rebekah Agwunobi was 13 and a high school freshman when she attended the Stanford AI4ALL camp in Palo Alto. A native of Washington state, Agwunobi had been coding since she was in the third grade after her mother put her in a JavaScript class. So she was no stranger to computers or programming. But the AI camp opened her eyes to concepts she hadn’t considered.

“In terms of being exposed to new technologies, I had never really thought about social advocacy in tech until I entered the program,” she said. “It was one of the most transformative experiences I’ve ever had—all the mentors were about supporting diversity in tech.”

AI was new to Agwunobi, and despite years of working with computers and coding, and an interest in AI that began in middle school, it was something she never thought she could do. She remembers being the only African American and the only female student in her computer science classes, feeling isolated and not knowing where she stood. She didn’t know where to begin learning about AI; there had been no classes available to her in elementary school, and she thought it was too complex to take on by herself. She applied to AI4ALL camp and got in. The experience demystified the technology, and the support and mentorship of her campmates, the graduate students, and the faculty made her realize she was capable of taking AI on.

“No freshman is confident, but the program empowers you and then you think, this is something I can do,” she said. “We really supported each other. It’s not just ‘girl power,’ it’s that we’re working together and learning.”

She was introduced to various AI-based projects during her two weeks at camp, including computer vision, self-driving cars, and applying natural language processing to disaster relief. She came away with a keen interest in AI research. The following semester at her high school, she created a directed-study class in machine learning, a branch of AI, where she applied techniques she had learned at Stanford and explored applications in areas like art generation and music.

She says the camp cemented her belief in advocating for diversity and gender equity in tech and STEM. She now participates in hackathons and is helping to organize the annual MAHacks, which is open to high schoolers from diverse backgrounds. She teaches all-girl coding classes and landed an internship at the Massachusetts Institute of Technology’s Media Lab, working on an AI project to gather data on courts’ pretrial processes to evaluate how judges behave when setting bail for criminal defendants. The data can be used in efforts to reform the bail system and fight mass incarceration.

Agwunobi is now applying to colleges and deciding her next move (she’s leaning toward law). Her takeaways from AI4ALL camp are both complex and insightful.

“Yes, we need to increase diversity in tech and other fields,” said Agwunobi. “But I can also work in environments that are homogeneous and bring a different perspective.”


Karen Gullo is a freelance writer and former Associated Press and Bloomberg News reporter covering technology, law, and public policy. She is currently an analyst and senior media relations specialist at Electronic Frontier Foundation (EFF) in San Francisco.