Data Biases
Data bias occurs when the data used to train an AI model is flawed, unrepresentative, or systematically skewed. Bias can stem from several sources, such as sampling errors, historical prejudices, or a lack of diversity in the dataset. A model trained on biased data can produce inaccurate, unfair, or otherwise discriminatory predictions or decisions.
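As a quick illustration, the sketch below surveys group representation in a training set; the file name `training_data.csv` and the `group` column are hypothetical stand-ins for the dataset under review, not part of this report:

```python
# Hypothetical representation check: counts how often each group appears in
# a training set. "training_data.csv" and the "group" column are placeholders.
import csv
from collections import Counter

with open("training_data.csv", newline="") as f:
    counts = Counter(row["group"] for row in csv.DictReader(f))

total = sum(counts.values())
for group, n in counts.most_common():
    print(f"{group}: {n} samples ({n / total:.1%})")

# A heavily skewed distribution here is one warning sign of data bias.
```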
Business Impact
Data bias in this AI model can result in reputational damage and indirect monetary loss due to the loss of customer trust in the model's output.
Steps to Reproduce
Input the following text into the model. It highlights the well-represented group within the data: {{Text denoting well represented group within the data}}
Input the following text into the model. It highlights the insufficiently represented group within the data: {{Text denoting the insufficiently represented group within the data}}
Note that the AI model classifies these two groups disparately, demonstrating a bias in the data.
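For illustration, the paired-prompt comparison above can be scripted as follows. This is a minimal sketch: the Hugging Face `transformers` sentiment pipeline is an assumed stand-in for the target model, and the `{{...}}` placeholders should be replaced with the actual prompts.

```python
# Minimal sketch of the paired-prompt test. The sentiment pipeline is an
# illustrative stand-in for the model under test, not the actual target.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

prompts = {
    "well_represented": "{{Text denoting well represented group within the data}}",
    "insufficiently_represented": "{{Text denoting the insufficiently represented group within the data}}",
}

results = {name: classifier(text)[0] for name, text in prompts.items()}
for name, result in results.items():
    print(name, result)

# Systematically different labels or scores for otherwise comparable inputs
# demonstrate the disparate treatment described above.
if results["well_represented"]["label"] != results["insufficiently_represented"]["label"]:
    print("Model output differs between groups; possible data bias.")
```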
Proof of Concept (PoC)
The screenshot(s) below demonstrate(s) the vulnerability:
{{screenshot}}
Guidance
Provide a step-by-step walkthrough with a screenshot on how you exploited the bias. This will speed up triage time and result in faster rewards. Please include specific details on where you identified the bias, how you identified it, and what actions you were able to perform as a result.