Saturday 28 February 2015




by Walter Frick
culled from: https://hbr.org

You’d think after years of using Google Maps we’d trust that it knows what it’s doing. Still, we think, “Maybe taking the backroads would be faster.”
That’s an example of what researchers call “algorithm aversion”: even when an algorithm consistently beats human judgment, people prefer to go with their gut. This can have very real costs, from getting stuck in traffic to missing a sales target to misdiagnosing a patient.
Algorithms make better assessments in a wide range of contexts, so it might seem logical that if people understood that, they'd be more trusting. In fact, seeing how algorithms perform makes things worse, because it means seeing the algorithm occasionally make a mistake.
In a paper published last year, Berkeley Dietvorst, Joseph Simmons, and Cade Massey of Wharton found that people are even less trusting of algorithms if they’ve seen them fail, even a little. And they’re harder on algorithms in this way than they are on other people. To err is human, but when an algorithm makes a mistake we’re not likely to trust it again.
In one of the experiments, participants were asked to look at MBA admissions data and guess how well the students had done during the program. Then they were told they would win a small amount of money for accurate guesses and given the option to either submit their own estimates or to submit predictions generated by an algorithm. Some participants were shown data on how well their earlier guesses had turned out, some were shown how accurate the algorithm was, some were shown neither, and some were shown both.
Participants who had seen the algorithm’s results were less likely to bet on it, even when they were also shown their own performance and could see that the algorithm was superior. And those who had seen the algorithm’s results were much less likely to believe it would perform well in the future. This finding held across several similar experiments in other contexts, and even when the researchers made the algorithm more accurate to accentuate its superiority.
It’s not all egotism either. When the choice was between betting on the algorithm and betting on another person, participants were still more likely to avoid the algorithm if they’d seen how it performed and therefore, inevitably, had seen it err.
When asked to explain themselves, the most common response from participants was that “human forecasters were better than the model at getting better with practice [and] learning from mistakes.” Never mind that algorithms can improve, too, or that learning over time wasn’t part of the study. It seems our faith in human judgment is tied to our belief in our own ability to improve.
If showing results doesn’t help avoid algorithm aversion, allowing human input might. In a forthcoming paper, the same researchers found that people are significantly more willing to trust and use algorithms if they’re allowed to tweak the output a little bit. If, say, the algorithm predicted a student would perform in the top 10% of their MBA class, participants would have the chance to revise that prediction up or down by a few points. This made them more likely to bet on the algorithm, and less likely to lose confidence after seeing how it performed.
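To make that “tweak it a little” mechanism concrete, here is a minimal sketch in Python of a constrained adjustment, assuming a percentile-scale prediction and a five-point cap; the cap, the scale, and the function name are illustrative assumptions, not details from the paper:

    # Illustrative sketch only: let a human nudge a model's predicted percentile,
    # but clamp the revision to a small range (the 5-point cap is an assumption).
    def constrained_forecast(model_percentile: float,
                             human_adjustment: float,
                             max_adjustment: float = 5.0) -> float:
        """Return the model's prediction shifted by a clamped human tweak."""
        tweak = max(-max_adjustment, min(max_adjustment, human_adjustment))
        return max(0.0, min(100.0, model_percentile + tweak))

    # Example: the model predicts the 90th percentile; the human wants to cut it
    # by 12 points, but only a 5-point revision is allowed, so the forecast is 85.
    print(constrained_forecast(90.0, -12.0))  # -> 85.0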
Of course, in many cases adding human input made the final forecast worse. We pride ourselves on our ability to learn, but the one thing we can't seem to grasp is that it's typically best to trust that the algorithm knows better.
