Session: eXplainable Predictive Decisioning: combining ML and Decision Management to promote trust in automated decision-making
The demand for transparent, explainable decision-making that is accurate, consistent and effective has never been greater. Legislation such as the GDPR is just one result of growing concerns about privacy, safety and transparency in general. While AI/ML solutions are great at making sense of high volumes of data, the reasoning process behind most of the generated analytic models is usually quite opaque. Decision Management, on the other hand, is a discipline that aims to provide full transparency over the decision process, but it requires formalizing knowledge into decisions and rules through some form of knowledge engineering (automated or not).
During this presentation, attendees will learn about a standards-based, pragmatic approach to achieving the goals of eXplainable AI (XAI) by combining decision models and analytic models. The approach offers an effective way to increase transparency in automated decision-making without losing effectiveness.
In particular, the presenters will demonstrate how PMML (Predictive Model Markup Language), a well-established standard for representing predictive models generated through Machine Learning, can be transparently combined with DMN (Decision Model and Notation), a decision-modeling standard that defines a high-level language for decision automation. Attendees will have the opportunity to learn how combining these two standards yields an effective, high-level AI solution that can be explained and trusted.
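The idea can be sketched informally: an opaque predictive model (the kind exchanged via PMML) produces a score, and a transparent decision layer (the kind modeled in DMN) turns that score into a final decision with a human-readable justification. The snippet below is a minimal, hypothetical Python illustration only, not the presenters' actual stack: `risk_score` stands in for any PMML-encoded model, and the rule thresholds and field names are invented for the example.

```python
# Hypothetical sketch: an opaque ML model (as would be exchanged via PMML)
# feeding a transparent, rule-based decision layer (as would be modeled in DMN).

def risk_score(applicant: dict) -> float:
    """Stand-in for a PMML-encoded predictive model (opaque to the caller)."""
    # Pretend this is a trained model; here, a toy linear formula.
    return 0.6 * applicant["debt_ratio"] + 0.4 * (1 - applicant["payment_history"])

def loan_decision(applicant: dict) -> tuple[str, str]:
    """DMN-style decision logic: every outcome carries an explainable reason."""
    # Transparent eligibility rule, evaluated before the model is consulted.
    if applicant["age"] < 18:
        return "REJECT", "Applicant is a minor (eligibility rule)"
    # The opaque score enters the decision only through explicit thresholds,
    # so each outcome can be traced back to a named rule.
    score = risk_score(applicant)
    if score < 0.3:
        return "APPROVE", f"Low predicted risk (score={score:.2f} < 0.30)"
    if score < 0.6:
        return "REFER", f"Medium predicted risk (score={score:.2f}), manual review"
    return "REJECT", f"High predicted risk (score={score:.2f} >= 0.60)"

decision, reason = loan_decision(
    {"age": 35, "debt_ratio": 0.2, "payment_history": 0.9}
)
print(decision, "-", reason)
```

The decision layer never hides the model's influence; it merely constrains it to named, auditable rules, which is the transparency gain the session describes.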