The more people who are involved in AI processes, the better the outcome, thanks to a diversity of skills and points of view.
Going from producing one machine learning model a year to thousands is well within the average company's reach, and operationalization has made it possible for a single model to impact millions of decisions (and people). Of course, no business sets out to practice irresponsible AI; but most aren't doing anything to explicitly ensure they are responsible, either, and therein lies the problem.
Why Does Responsible AI Matter?
In practical terms, responsible AI matters because in some industries (financial services, healthcare, human resources, etc.), it's a legal requirement under growing scrutiny from regulators. And even where regulations don't yet demand white-box solutions, interpretability, or demonstrated efforts to eliminate bias, responsible AI is good business for anyone because it lowers risk.
“I believe that when it comes to AI technology, software vendors have a responsibility here too. AI technologies should make it costly to not see bias or other problems in AI systems. Human responsibility should be explicit, and software systems should prompt this change.”
– Florian Douetteau, Dataiku CEO, What It Will Take to Make AI Likeable?