Why EU regulators are pushing for more explainable AI

EU regulators are seeking to establish stricter rules around the use of artificial intelligence (AI) in areas such as crime prediction, credit scoring, employee performance management and border control systems. In particular, the bloc wants to mitigate the undesirable outcomes and risks that can arise from AI-generated decisions. Recently published draft legislation proposes that AI systems meet specific “transparency obligations” allowing a human reviewing a decision made by an AI to establish how that decision was reached and which data points were used in the process. This is what AI developers call “explainability”, achieved by designing systems along “white box” principles. (These…
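To make the "white box" idea concrete, here is a minimal sketch of an explainable decision: a linear credit-scoring model whose per-feature contributions can be shown to a human reviewer. The feature names, weights and threshold are illustrative assumptions, not a real scorecard or anything drawn from the draft legislation.

```python
# Hypothetical white-box credit-scoring sketch. The weights and
# features below are invented for illustration only.
WEIGHTS = {"income": 0.4, "debt_ratio": -0.6, "years_employed": 0.2}
BIAS = 0.1
THRESHOLD = 0.0

def score(applicant):
    # Each contribution is simply weight * value, so a reviewer can see
    # exactly which data points drove the decision and by how much.
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    total = BIAS + sum(contributions.values())
    return total, contributions

applicant = {"income": 1.2, "debt_ratio": 0.9, "years_employed": 0.5}
total, contributions = score(applicant)
decision = "approve" if total > THRESHOLD else "decline"

# Print an explanation: features sorted by the size of their influence.
for feature, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{feature}: {c:+.2f}")
print(f"total={total:+.2f} -> {decision}")
```

A "black box" model, by contrast, might reach the same decision through millions of opaque parameters, leaving a reviewer no comparable breakdown to audit.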