A panel held at the Darla Moore School of Business on Thursday discussed the positive and negative impacts artificial intelligence (AI) could have on the future of fields such as medicine, education and criminal justice.
The four-person panel included two USC associate professors, an assistant professor and the associate dean for natural sciences.
AI can be broken down into two subfields, according to panelist Forest Agostinelli, an assistant professor of computer science and engineering.
"One is concerned with biological intelligence and how we can emulate that in computers, and the other one is concerned with how you can behave rationally using computing," Agostinelli said.
Panelists specifically discussed how to distinguish problems that should be solved using AI from those that should not.
"I do think there are some tasks uniquely suited and those that are very poorly suited to AI," Jane Roberts, associate dean for natural sciences, said. "Objective, strong data sets that are mostly associated with predictions are going to be some of the best tasks for AI, and some of the tasks that are going to be least effective using those techniques are going to be the complicated tasks like trying to detect maybe psychological disorders."
The question of when artificial intelligence should be used to solve medical questions was also brought up to the panel. Roberts said a potential benefit is AI's ability to identify medical issues faster.
"AI methods can be applied to multiple different scans, (it) just outpaces human ability, and that's going to allow us to detect conditions earlier and with more precision," Roberts said.
The discussion also centered around artificial intelligence's role in education and its potential to replace educators. Orgul Ozturk, associate professor of economics and chair of the economics department, spoke about how she believes AI will act as an aid to educators.
"It's going to help us, hopefully, be better teachers," Ozturk said. "It will help us be more, in our teaching, be more personalized, be connected to target our students and their learning."
Panelists also discussed the future of image generation, a tool that generates images based on text prompts. Associate professor of law Bryant Walker-Smith spoke about how such developments can demonstrate racial bias within artificial intelligence, which is prevalent across the field, according to the American Civil Liberties Union.
"I ran a number of runs in one of these programs on the prompt 'photo of a criminal,' and the 16 images that I got back were almost entirely of a male of color wearing a hoodie. That was what the algorithm gave me for a criminal," Walker-Smith said.
However, Walker-Smith attributes these biases not solely to AI, but also to the world the image generators reflect.
"A lot of these systems are based on data sets, and those data sets are us," Walker-Smith said. "The phone call is coming from within the house. So a lot of these tools are like funhouse mirrors, they reflect us back to ourselves, sometimes it's distorted."
Ozturk also highlighted human bias during the panel. Though some judges use AI to help make legal decisions, Ozturk said some judges disregard its predictions in order to give harsher sentences to Black defendants.
"We see some increased disparity, but not because of the poor job by AI," Ozturk said. "Better job by AI not being used properly by humans."
Panelists said AI is advanced enough to create significant change, whether positive or negative.
"Certain applications of AI ... are nonetheless sufficient to incredibly change our lives for better or for worse," Walker-Smith said.