As summer reading, I am looking into ways of understanding automated decision-making and its impact.
What were formerly known as Expert Systems are now applied to a much broader range of tasks and fields of expertise, thanks to the versatility of modern AI models.
Letting algorithms do complex jobs that normally require a human's ability to spot situations where a decision-making schema produces erroneous and undesired results increases the risk that exactly those results occur.
Automated decision-making is currently hailed as the saviour of many human-facing activities, such as customer service, help desks and public services, where humans are deemed too expensive for the job.
The Dutch authorities had a spectacular failure, which led scientists at Utrecht University to create DEDA (the Data Ethics Decision Aid), a tool that helps assess and understand the context, possible outcomes and consequences of such an automation project.
Coincidentally, I stumbled upon an article in The Register about such a failure in Australia.
The Royal Commission’s final report (linked in the article cited above) is a good example of how to clean up such a mess after it has actually happened.
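That Australian failure, known as Robodebt, rested on one false assumption: that annual income reported to the tax office, averaged evenly across fortnights, matches what a welfare recipient actually earned in any given fortnight. Here is a minimal sketch of how that assumption manufactures debts out of thin air; the thresholds, taper rate and figures are all hypothetical, only the averaging logic reflects the documented flaw:

```python
FORTNIGHTS_PER_YEAR = 26
INCOME_FREE_AREA = 150.0  # hypothetical fortnightly income threshold
TAPER_RATE = 0.5          # hypothetical: 50 cents of benefit withheld per dollar above it

def benefit_withheld(fortnightly_income: float) -> float:
    """Benefit withheld in one fortnight, given the income earned in that fortnight."""
    return max(0.0, fortnightly_income - INCOME_FREE_AREA) * TAPER_RATE

def alleged_overpayment(imputed: float, declared: float) -> float:
    """Debt raised for one fortnight: benefit that should have been withheld
    under the imputed income, minus what was withheld under the declared one."""
    return max(0.0, benefit_withheld(imputed) - benefit_withheld(declared))

# A casual worker who reported honestly: 13 fortnights earning $800, 13 earning $0.
declared = [800.0] * 13 + [0.0] * 13
annual_income = sum(declared)                  # $10,400 on the annual tax record
average = annual_income / FORTNIGHTS_PER_YEAR  # $400 imputed for every fortnight

true_debt = sum(alleged_overpayment(d, d) for d in declared)       # declared == actual
raised_debt = sum(alleged_overpayment(average, d) for d in declared)

print(f"true debt:   ${true_debt:,.2f}")    # $0.00     -- the person reported correctly
print(f"raised debt: ${raised_debt:,.2f}")  # $1,625.00 -- a phantom debt from averaging
```

For anyone with irregular income, which is exactly the population most likely to rely on benefits, the averaged figure diverges wildly from reality, and the scheme turns honest reporting into an apparent debt.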
I wonder how many algorithms (be they AI or something else) working from false assumptions are actually out in the wild.