Akshita Jha’s primary research focuses on how to prevent automated machine learning models from exacerbating existing biases.
“As an example,” Jha said, “the commercial algorithm, COMPAS, used by judges and officers across the United States to assess a defendant’s likelihood to re-offend has been shown to discriminate unfairly against African American defendants.”
Another example, she said, is a nationwide healthcare risk-score algorithm that guides healthcare decisions for over 70 million patients in the United States and has been shown to recommend better health resources for some demographic groups than for others.
“Models like these unfairly discriminate against already marginalized social groups and the goal of my research is to build models providing fairness, accountability, and transparency that can help prevent such discrimination,” she said.
How did Jha become interested in this area of research?
“A couple of years ago I was presenting my work in an open-source conference and noticed the abysmal number of women in a conference attended by hundreds of people from across the world,” she said. “This left a jarring impression on me and I became extremely aware of the pervasiveness of sexism. Being a computer science major, I started thinking of ways in which machine learning and natural language processing could be used to analyze social data and mitigate this issue. That motivated me to delve deeper into this field.”
Jha, who holds both a bachelor’s degree in computer science and a master’s degree by research in natural language processing from IIIT-Hyderabad, India, is a Ph.D. student in computer science advised by Chandan Reddy.
What Jha likes best about being a student at the Discovery Analytics Center is the opportunity for discussions with other Ph.D. students on different research topics.
“Not only are they interesting, but they also provide varied perspectives on the problem at hand,” she said.
This past summer, Jha interned with the InterDigital AI Lab in Palo Alto, California, where her work involved building a human-interpretable model for comparing long documents.
She expects to graduate in 2023.