October 5, 2020

Do explanations for data-based predictions actually increase users’ trust in AI?

By Tech Online Things

In recent years, many artificial intelligence (AI) and robotics researchers have been trying to develop systems that can explain their actions or predictions. The idea behind their work is that as AI systems become more widespread, explaining why they act in particular ways or make certain predictions could increase transparency and, consequently, users' trust in them.