
Just Like Humans, AI Has Biases. Two Fordham Professors Received Nearly $500K to Study Them.

Ruhul Amin, Ph.D., and Mohamed Rahouti, Ph.D., assistant professors of computer and information science at Fordham, were awarded a $493,000 grant from the Qatar Research, Development and Innovation Council to study, and develop ways to mitigate, biases in artificial intelligence. 

“The main idea is to identify and understand the different types of biases in these large language models, and the best example is ChatGPT,” said Rahouti. “Our lives are becoming very dependent on [artificial intelligence]. It’s important that we enforce the concept of responsible AI.” 

Like humans, large language models such as ChatGPT have their own biases, inherited from the content they draw information from—newspapers, books, and other published materials written by humans who, often unintentionally, include their own biases in their work. 

In their research project, “Ethical and Safety Analysis of Foundational AI Models,” Amin and Rahouti aim to better understand the different types of biases in large language models, focusing on biases against people in the Middle East. 

“There are different types of bias: gender, culture, religion, etc., so we need to have clear definitions for what we mean by bias. Next, we need to measure those biases with mathematical modeling. Finally, the third component is real-world application. We need to adapt these measurements and definitions to the Middle Eastern [population],” said Rahouti. 
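The article does not describe the team's actual mathematical models, but one widely used way to quantify bias in language models is a WEAT-style association test, which measures whether words for one group sit closer (in embedding space) to pleasant attributes than words for another group do. The sketch below is purely illustrative, using tiny hand-made vectors in place of real model embeddings:

```python
# Illustrative sketch only (not the project's actual method): a WEAT-style
# association test, one common mathematical measure of embedding bias.
from math import sqrt
from statistics import mean, stdev

def cosine(u, v):
    # Cosine similarity between two vectors.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def assoc(w, A, B):
    # Differential association of word vector w with attribute sets A vs. B.
    return mean(cosine(w, a) for a in A) - mean(cosine(w, b) for b in B)

def weat_effect_size(X, Y, A, B):
    # Cohen's-d-style effect size: how differently target groups X and Y
    # associate with attributes A (e.g., "pleasant") vs. B ("unpleasant").
    all_assoc = [assoc(w, A, B) for w in X + Y]
    return (mean(assoc(x, A, B) for x in X)
            - mean(assoc(y, A, B) for y in Y)) / stdev(all_assoc)

# Toy, hand-made vectors purely for illustration; a real study would use
# embeddings extracted from the language model under test.
X = [[1.0, 0.1], [0.9, 0.2]]   # hypothetical target group 1
Y = [[0.1, 1.0], [0.2, 0.9]]   # hypothetical target group 2
A = [[1.0, 0.0]]               # "pleasant" attribute vectors
B = [[0.0, 1.0]]               # "unpleasant" attribute vectors

print(weat_effect_size(X, Y, A, B))
```

An effect size near zero suggests no measured association gap between the two groups; larger magnitudes suggest stronger bias. Adapting such a metric to a specific population, as the researchers describe, would mean building target and attribute word sets appropriate to that culture and language.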

Starting this April, Amin and Rahouti will work on the project with researchers and graduate students from Hamad Bin Khalifa University and Qatar University, both located in Qatar. Among the scholars are three Fordham students: a master’s student in data science, a master’s student in computer science, and an incoming Ph.D. student in computer science. The grant funding will partially support these students. 

The project is funded by a Qatar-based organization dedicated to advancing research and development in Qatar, said Amin, but the results will be useful to any nation that uses artificial intelligence.

“We’re using the Middle Eastern data as a test for our model. And if [it works], it can be used for any other culture or nation,” he said. 

Using this data, the researchers aim to teach artificial intelligence to suppress its biases when interacting with users. Ultimately, their goal is to make AI more objective and safer for humans to use, said Amin. 

“Responsible AI is not just responsibly using AI, but also ensuring that the technology itself is responsible,” said Amin, who has previously helped other countries build artificial intelligence systems. “That is the framework that we’re after—to define it, and continue to build it.” 

Ruhul Amin and Mohamed Rahouti
