Interaction bias. Experiments dataset

Artificial Intelligence (AI) is increasingly used to build Decision Support Systems (DSS) across many domains. In our work, we conducted a series of experiments designed to observe human response to different characteristics of a DSS, such as its accuracy and bias, and in particular the extent to which participants rely on the DSS and the performance they achieve. In our experiments, participants play a simple online game inspired by so-called "wildcat" (i.e., exploratory) drilling for oil. The landscape has two layers: a visible layer describing the costs (terrain), and a hidden layer describing the reward (oil yield). Participants in the control group play the game without receiving any assistance, while in treatment groups they are assisted by a DSS suggesting places to drill. For certain treatments, the DSS does not consider costs, but only rewards, which introduces a bias that is observable by users. Between subjects, we vary the accuracy and bias of the DSS, and observe the participants' total score, time to completion, and the extent to which they follow or ignore suggestions. We also measure the acceptability of the DSS in an exit survey. Our results show that participants tend to score better with the DSS, that the increase in score is due to users following the DSS advice, and that it is related to the difficulty of the game and the accuracy of the DSS. We observe that this setting elicits mostly rational behavior from participants, who place a moderate amount of trust in the DSS and show neither algorithmic aversion (under-reliance) nor automation bias (over-reliance). However, their stated willingness to accept the DSS in the exit survey seems less sensitive to the accuracy of the DSS than their behavior, suggesting that users are only partially aware of the (lack of) accuracy of the DSS.
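As a rough illustration of how the behavioral measures described above could be derived from the released data, the sketch below computes a per-game reliance rate (the fraction of drilling actions that match a DSS recommendation). All field names (game_id, x, y) are assumptions for illustration only and may not match the actual schema of the dataset.

```python
from collections import defaultdict

def reliance_per_game(plays, recommendations):
    """Fraction of drilling actions that match a DSS suggestion, per game.

    Field names (game_id, x, y) are assumed; substitute the real ones.
    """
    # Collect the cells suggested by the DSS for each game.
    suggested = defaultdict(set)
    for rec in recommendations:
        suggested[rec["game_id"]].add((rec["x"], rec["y"]))

    # Count how many plays landed on a suggested cell.
    followed = defaultdict(int)
    total = defaultdict(int)
    for play in plays:
        gid = play["game_id"]
        total[gid] += 1
        if (play["x"], play["y"]) in suggested[gid]:
            followed[gid] += 1

    return {gid: followed[gid] / total[gid] for gid in total}
```

With the real field names substituted in, this yields a per-game measure comparable in spirit to the reliance analyzed in the study.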

Data and Resources
  • Dataset (JSON)

Personal Data Attributes

Description: Personal Data related Information

Anonymisation Methodology: No PII has been collected. Private identifiers have been replaced with alternative values to prevent re-identification.
Anonymised: Pseudo-anonymised
Children Data: No
Ethics Committee Approval: Yes
General Data: Yes
Informed Consent Template: Yes
Non Personal Data Explanation: Data from more than 2,000 games, organised into four collections (see the sketch after this table):
  • Games: information about each of the played games
  • Plays: information about the plays made while playing the games
  • Recommendations: recommendations obtained from the DSS (if present in the game)
  • Surveys: responses to after-game surveys
Personal Data: Yes
Personal data was manifestly made public by the data subject: No
Sensitive Data: No
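To give a sense of how the four collections listed above might be accessed once the resource is downloaded, here is a minimal sketch assuming a single JSON file with top-level keys named after those collections; the file name and key names are assumptions, not the documented layout.

```python
import json

# Assumed file name and top-level keys; the actual resource layout may differ.
with open("interaction_bias_experiments.json", encoding="utf-8") as f:
    data = json.load(f)

# Print the size of each collection described in the dataset documentation.
for collection in ("Games", "Plays", "Recommendations", "Surveys"):
    records = data.get(collection, [])
    print(f"{collection}: {len(records)} records")
```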
Additional Info
Accessibility: Both
Accessibility Mode: Download
Availability: On-Line
Basic rights: Download
Creation Date: 2022-08-08 00:00
Creator: Solans, David, [email protected], orcid.org/0000-0001-6979-9330
Dataset Citation: Solans, D., Beretta, A., Portela, M., Castillo, C., & Monreale, A. (2022). Human Response to an AI-Based Decision Support System: A User Study on the Effects of Accuracy and Bias.
Dataset Re-Use Safeguards: Research-only purposes; no re-publication allowed.
Field/Scope of use: Non-commercial research only
Format: JSON
Group: Social Impact of AI and explainable ML
License term: 2022-08-08 00:00 / 2024-08-31 23:59
Manifestation Type: Virtual
Processing Degree: Primary
Retention Period: 2022-08-08 00:00 / 2023-08-31 23:59
Size: 9 MB
Sublicense rights: No
Territory of use: World Wide
Thematic Cluster: Social Data [SD]
system:type: Dataset
Management Info
Author: Solans David
Maintainer: Solans David
Version: 1
Last Updated: 16 September 2023, 10:13 (CEST)
Created: 31 March 2023, 21:24 (CEST)