As the integration of artificial intelligence (AI) into everyday processes increasingly becomes the norm, issues of privacy, bias, and accessibility must all be taken into account when considering the ethical use of AI, a National Institutes of Health (NIH) official said May 10.
Issues of privacy and bias are often, and rightfully, mentioned prominently in discussions of ethical AI, but the goal should also be to ensure the technologies have broad benefit, said Laura Biven, the data science technical lead in the NIH’s Office of Data Science Strategy, at a May 10 FedInsider webinar.
“I think [AI ethics] is a really broad space actually and it’s incredibly important to make sure that we think about all aspects of this,” Biven said on the webinar. “Privacy, bias, these are all really important areas of focus within this space of ethics, but I think also thinking about how we can make sure that these technologies really benefit the broadest number of people.”
For AI to have the broadest impact, there needs to be broad engagement and representation within the datasets, Biven said. That means, in part, having a testing framework that addresses issues of bias as they arise, and ensuring broad engagement in determining what types of questions people can ask of the datasets and models.
“I think engagement and having access to these technologies [are important], so that we have the broadest possible engagement but also so that people can influence the trajectory of how these technologies evolve, especially within the context,” Biven said.
Biven sees AI as a valuable tool for data scientists and said she does not expect it to go away anytime soon. She anticipates advances in explainable AI and more robust transfer learning over the next five years, along with a robust conversation around ethics to accompany whatever technical advances arise.