In the digital age, people rely more and more on advice — driving directions, weather reports, dining suggestions and fitness tips — generated by a computational process, or algorithm.
Researchers at Texas A&M are examining these computations to explain why algorithms reach the conclusions they do, as part of a four-year, $1.6 million project funded by the Defense Advanced Research Projects Agency (DARPA), a division of the U.S. Department of Defense that explores new technologies.
An algorithm performs data processing and automated reasoning by referencing large datasets in a wide range of applications, improving its performance as new data is received — a process called “machine learning.”
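The idea of a model improving as new data arrives can be sketched in a few lines. The following is a toy illustration, not the project's actual methods: a one-parameter model y ≈ w·x whose estimate of w is nudged closer to the truth with each incoming observation. The data stream, learning rate, and target value are all invented for the example.

```python
# Toy sketch of machine learning as iterative improvement:
# the model's single weight w gets more accurate as each
# new (x, y) observation is processed.

def update(w, x, y, lr=0.01):
    """One stochastic-gradient step on squared error (y - w*x)^2."""
    error = y - w * x
    return w + lr * error * x

true_w = 3.0   # the relationship hidden in the data
w = 0.0        # the model's initial (uninformed) guess

for i in range(1, 101):        # stream of incoming observations
    x_val = (i % 10) + 1       # keep inputs in a small range
    y_val = true_w * x_val     # noiseless target for simplicity
    w = update(w, x_val, y_val)

print(round(w, 2))             # estimate has converged toward 3.0
```

Each update moves the weight a small step in the direction that reduces the prediction error, so performance improves as data accumulates, which is the behavior the term "machine learning" describes.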
Many health care systems, for example, are using evidence-based, machine-learning algorithms to make recommendations regarding patient diagnosis, treatment, chronic disease management, and more.
To have confidence in an algorithm’s recommendations, end users — in this case, physicians and patients — need to know why an algorithm is advising them to take a particular action, said Eric Ragan, assistant professor of visualization and co-principal investigator for the study.
“An end user who understands the ‘why’ of an algorithm’s recommendation is better able to trust its results and use them to confidently make decisions,” said Ragan, who also has a courtesy appointment to the university's Department of Computer Science and Engineering. “People don’t want to blindly accept a computer’s recommendations if they don’t understand where they came from.”
In the project, Ragan and fellow researchers are modeling the steps that result in an algorithm’s outcomes and creating visualizations of the models in an effort to make the “why” easily understood by end users. The study is referencing data from online health discussion forums and a database of millions of images.
“Our goal is to make simple visual designs like bar charts,” said Ragan. “Tentative plans are to test the effectiveness of the charts to show the most important information in the model. Then we’ll create graphics to represent data in more detail. Ultimately, we’re going to test a variety of representations.”
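A minimal sketch of the kind of "simple visual design" Ragan describes is a bar chart ranking a model's features by importance, so the most influential factors appear first. The feature names and importance scores below are hypothetical, chosen to echo the health care example; the text-based bars stand in for a real chart.

```python
# Hypothetical feature-importance scores for an illustrative
# health-care model (invented values, not project data).
importances = {
    "blood_pressure": 0.42,
    "age": 0.27,
    "bmi": 0.18,
    "exercise_freq": 0.13,
}

# Sort so the most important information in the model comes first,
# then render each score as a proportional text bar.
ranked = sorted(importances.items(), key=lambda kv: -kv[1])
for name, score in ranked:
    bar = "#" * int(score * 50)  # scale score to bar length
    print(f"{name:>15} | {bar} {score:.2f}")
```

Even this crude chart conveys the "why" of a recommendation at a glance: an end user can see which inputs the model weighted most heavily without inspecting the model itself.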
The project is one of 13 DARPA-funded studies to develop artificial intelligence techniques that include explanations of algorithmic reasoning.
The Defense Department is interested in this topic because it is developing numerous autonomous systems that rely on machine learning or similar techniques, said David Gunning, a DARPA program manager. Military end users, he said, will need explanations of algorithm-derived recommendations in critical applications.
Ragan is performing the study with principal investigator Xia Hu, Texas A&M assistant professor of computer science and engineering, and co-principal investigator Shuiwang Ji, associate professor of electrical engineering and computer science at Washington State University.