Kate Ehrlich, Susanna Kirk, John Patterson, Jamie Rasmussen, Steven Ross, and Daniel Gruen represent IBM Research in this paper.
This paper was presented at IUI 2011.
Summary
Hypothesis
The hypothesis of this research paper was that when advice from an intelligent system is available to a user, the user tends to make different decisions than they would have made without it.
Methods
In order to test their hypothesis, the researchers built an intelligent system that could classify various types of cybersecurity incidents. This system was known as NIMBLE (Network Intrusion Management Benefitting from Learned Expertise). Based on what it had learned from previous incidents, NIMBLE would attempt to classify new cybersecurity events. NIMBLE was also able to offer explanations as to why it made the classification that it did.
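The paper does not spell out NIMBLE's internals, but to make the idea concrete, here is a minimal sketch of an incident classifier that also produces a simple justification: it trains on labeled historical incidents and reports which features pushed it toward its chosen category. The feature names, categories, and use of scikit-learn are my own assumptions for illustration, not details from the paper.

```python
# Hypothetical sketch of an incident classifier that reports a simple
# "justification" (the features that most favored its chosen category).
# The categories, feature names, and use of scikit-learn are my own
# assumptions for illustration; they are not details from the NIMBLE paper.
import numpy as np
from sklearn.feature_extraction import DictVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy historical incidents: alert features -> analyst-assigned category.
history = [
    ({"port_scan_count": 40, "failed_logins": 2,  "outbound_mb": 1},   "reconnaissance"),
    ({"port_scan_count": 0,  "failed_logins": 55, "outbound_mb": 0},   "brute_force"),
    ({"port_scan_count": 1,  "failed_logins": 0,  "outbound_mb": 900}, "data_exfiltration"),
]

vec = DictVectorizer(sparse=False)
X = vec.fit_transform([features for features, _ in history])
y = [label for _, label in history]

model = MultinomialNB()
model.fit(X, y)

def classify_with_justification(event, top_k=2):
    """Return the predicted category and the features that most favored it."""
    x = vec.transform([event])[0]
    label = model.predict([x])[0]
    class_idx = list(model.classes_).index(label)
    # How much more likely each feature is under the chosen class than on average.
    delta = model.feature_log_prob_[class_idx] - model.feature_log_prob_.mean(axis=0)
    contrib = delta * x
    top = np.argsort(contrib)[::-1][:top_k]
    reasons = [vec.feature_names_[i] for i in top if contrib[i] > 0]
    return label, reasons

new_event = {"port_scan_count": 35, "failed_logins": 1, "outbound_mb": 2}
label, reasons = classify_with_justification(new_event)
print(f"Suggested category: {label}; driven mainly by: {', '.join(reasons)}")
```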
To complete the study, the researchers recruited participants who were highly trained in the field of cybersecurity. All of these participants had a minimum of three years of experience in the cybersecurity and network management field.
The researchers had these professionals complete 24 timed trials. In each trial, the participant had two minutes to determine what type of event had occurred and categorize it. NIMBLE was available to assist the participants.
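To make the trial structure concrete, one trial can be thought of as: show the event, optionally show NIMBLE's suggestion (with or without a justification), and record the analyst's choice within the two-minute window. The sketch below is my own reconstruction of that structure; the field names and scoring rule are assumptions, not code from the study.

```python
# Rough reconstruction of a single timed trial as described above.
# Field names and the scoring rule are my assumptions, not from the paper.
from dataclasses import dataclass
from typing import Optional

TRIAL_TIME_LIMIT_S = 120  # two minutes per trial

@dataclass
class Trial:
    event_id: int
    true_category: str
    nimble_suggestion: Optional[str]     # None if no suggestion was shown
    nimble_justification: Optional[str]  # None if the suggestion came without one

@dataclass
class Response:
    chosen_category: Optional[str]       # None if the analyst ran out of time
    seconds_taken: float

def is_correct(trial: Trial, response: Response) -> bool:
    """A response counts as correct if it matches the true category within the time limit."""
    return (
        response.chosen_category == trial.true_category
        and response.seconds_taken <= TRIAL_TIME_LIMIT_S
    )

# Example: NIMBLE suggests the wrong category but gives a plausible-sounding justification.
trial = Trial(1, "brute_force", "reconnaissance", "Repeated probes across many ports")
response = Response(chosen_category="reconnaissance", seconds_taken=85.0)
print(is_correct(trial, response))  # False: the analyst followed an incorrect suggestion
```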
Results
Researchers found that in cases where NIMBLE suggested an answer, if the correct answer was among the available choices, response accuracy was high. It was even higher when NIMBLE offered a justification for its suggestion.
However, the interesting part of this study was what happened when the correct choice was not among those listed by NIMBLE. If NIMBLE offered a justification (even though its suggestion was incorrect), participants went with NIMBLE's suggestion most of the time.
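One way to see this pattern is to break accuracy down by whether NIMBLE's suggestion was correct and whether a justification was shown. The sketch below only illustrates how such a breakdown could be computed; the records in it are invented purely to exercise the code and do not reflect the study's actual data.

```python
# Illustration of tabulating accuracy by condition. The toy records are
# invented to exercise the code; they are NOT the study's data.
from collections import defaultdict

# Each record: (suggestion_correct, justification_shown, analyst_correct)
records = [
    (True,  True,  True),  (True,  False, True),
    (False, True,  False), (False, False, True),
]

totals = defaultdict(lambda: [0, 0])  # (suggestion_correct, justified) -> [correct, total]
for suggestion_correct, justified, analyst_correct in records:
    key = (suggestion_correct, justified)
    totals[key][1] += 1
    totals[key][0] += int(analyst_correct)

for (suggestion_correct, justified), (correct, total) in sorted(totals.items()):
    print(f"suggestion correct: {suggestion_correct!s:5}  "
          f"justification shown: {justified!s:5}  accuracy: {correct / total:.0%}")
```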
Discussion
The results of this study are very interesting. They show, in my opinion, that if an intelligent system presents information as though it knows what it's talking about, we humans will tend to follow what it says.
This can be very dangerous. For example, suppose we have an intelligent system diagnose patients. Much of the time, the intelligent system will be correct. But this study shows we still need doctors to think critically. If a machine presents a confident explanation for a diagnosis that is actually incorrect, the doctor still needs to be able to form their own opinion.