Bias plays a major role in the implementation of AI projects. Bias is a systematic deviation of an algorithm's results from the desired results. It can be caused by distortions in the data as well as by a bias in the algorithm itself, and it can lead to inaccurate or unwanted results. As a consequence, AI projects can suffer long-term damage, and trust in the AI solution can be squandered. There are numerous examples of this, such as the recruitment algorithm that preferred men for technical job postings because the underlying training data set mainly contained men in technical professions.
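The recruitment example above can be made concrete with a small check for representation bias in screening outcomes. The sketch below is purely illustrative: the data, group labels, and the use of the "four-fifths rule" threshold are assumptions for demonstration, not part of the original case.

```python
from collections import Counter

def selection_rates(records):
    """Compute per-group selection rates from (group, selected) pairs."""
    totals, selected = Counter(), Counter()
    for group, was_selected in records:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of lowest to highest selection rate; values below 0.8 are a
    common warning sign (the 'four-fifths rule')."""
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes: (applicant group, shortlisted?)
records = ([("m", True)] * 8 + [("m", False)] * 2
           + [("f", True)] * 3 + [("f", False)] * 7)

rates = selection_rates(records)
print(rates)                          # {'m': 0.8, 'f': 0.3}
print(disparate_impact_ratio(rates))  # 0.375 -> well below 0.8
```

A check like this belongs early in a project, before the skewed data ever reaches model training.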
Different biases can occur at each stage of a project, so it is important to be aware of the possible biases in each phase. To achieve this, we have identified the potential biases in each phase of the CRISP-DM process model. Once a bias is identified, it can be treated with mitigation strategies tailored to the bias in question. A bias is not always mitigated in the same CRISP-DM phase in which it occurs. For this reason, it is necessary to map the mitigation strategies to both the types of bias and the project phases.
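The mapping described above can be sketched as a simple lookup from (bias type, CRISP-DM phase) to candidate mitigation strategies. All entries below are hypothetical examples chosen for illustration, not an exhaustive or authoritative catalogue.

```python
# Illustrative mapping: (bias type, CRISP-DM phase) -> mitigation strategies.
# The bias names and strategies are assumed examples, not a complete list.
MITIGATIONS = {
    ("sampling bias", "Data Understanding"): ["audit group representation"],
    ("sampling bias", "Data Preparation"): ["re-sample or re-weight underrepresented groups"],
    ("measurement bias", "Data Understanding"): ["review how labels were collected"],
    ("algorithmic bias", "Modeling"): ["add fairness constraints",
                                       "compare model errors across groups"],
    ("evaluation bias", "Evaluation"): ["report metrics per subgroup, not only overall"],
}

def strategies_for(bias_type, phase):
    """Return mitigation strategies for a bias in a given phase (may be empty)."""
    return MITIGATIONS.get((bias_type, phase), [])

print(strategies_for("sampling bias", "Data Preparation"))
# ['re-sample or re-weight underrepresented groups']
```

Keeping the mapping explicit like this makes it easy to see that a bias detected in one phase (e.g. Data Understanding) may only be mitigated in a later one (e.g. Data Preparation).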
If you have any questions about bias in AI model development, please do not hesitate to contact us!
Author: Tobias Fahse
Date: 28 July 2020