
Nudge Users to Catch Generative AI Errors

Having a human in the loop is critical to mitigating the risks of generative AI errors and biases. But humans are also vulnerable to errors and biases and may trust artificial intelligence either too much or not enough. Findings from a field experiment by MIT and Accenture suggest that targeted friction in the form of labels that flag potential errors and omissions can direct users’ attention to content that should be given closer inspection — without sacrificing efficiency.
