Artificial Intelligence (AI) is increasingly becoming a trusted advisor in people's lives. A new concern arises if AI persuades people to break ethical rules for profit. Employing a large-scale behavioural experiment (N = 1,572), we test whether AI-generated advice can corrupt people. We further test whether transparency about AI presence, a commonly proposed policy, mitigates the potential harm of such advice. Results reveal that AI's corrupting force is as strong as humans', even when people know the source of the advice.

Author(s) : Margarita Leib, Nils C. Köbis, Rainer Michael Rilke, Marloes Hagens, Bernd Irlenbusch

Links : PDF - Abstract


Keywords : ai - advice - force - people - test
