Please use this identifier to cite or link to this item:
https://research.matf.bg.ac.rs/handle/123456789/488
Title: MoËT: Mixture of Expert Trees and its application to verifiable reinforcement learning
Authors: Vasić, Marko; Petrović, Andrija; Wang, Kaiyuan; Nikolić, Mladen; Singh, Rishabh; Khurshid, Sarfraz
Affiliations: Informatics and Computer Science
Keywords: Deep learning; Explainability; Mixture of Experts; Reinforcement learning; Verification
Issue Date: 2022
Journal: Neural Networks: the official journal of the International Neural Network Society
Abstract: Rapid advancements in deep learning have led to many recent breakthroughs. While deep learning models achieve superior performance, often statistically better than humans, their adoption into safety-critical settings, such as healthcare or self-driving cars, is hindered by their inability to provide safety guarantees or to expose the inner workings of the model in a human-understandable form. We present MoËT, a novel model based on Mixture of Experts, consisting of decision tree experts and a generalized linear model gating function. Thanks to such a gating function, the model is more expressive than a standard decision tree. To support non-differentiable decision trees as experts, we formulate a novel training procedure. In addition, we introduce a hard-thresholding version, MoËTh, in which predictions are made solely by a single expert chosen via the gating function. This property allows each MoËTh prediction to be decomposed into a set of logical rules in a form that can be easily verified. While MoËT is a general-purpose model, we illustrate its power in the reinforcement learning setting. By training MoËT models with an imitation learning procedure on deep RL agents, we outperform the previous state-of-the-art technique based on decision trees while preserving the verifiability of the models. Moreover, we show that MoËT can also be used on real-world supervised problems, on which it outperforms other verifiable machine learning models.
URI: https://research.matf.bg.ac.rs/handle/123456789/488
ISSN: 0893-6080
DOI: 10.1016/j.neunet.2022.03.022
Appears in Collections: Research outputs
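The abstract above describes the MoËTh prediction rule only in prose. Below is a minimal, hypothetical sketch of the hard-gating idea, assuming a linear gating function (argmax over linear scores, which is equivalent to argmax over a softmax gate) and scikit-learn decision trees as experts. The class name MoEThSketch, the placeholder random-gating fit step, and all signatures are illustrative assumptions; the paper's actual joint training procedure for non-differentiable tree experts is not reproduced here.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

class MoEThSketch:
    """Hard-gated mixture of tree experts: a linear gating function routes
    each input to exactly one decision tree, so a prediction can be read
    off as gating half-space rules plus one root-to-leaf tree path."""

    def __init__(self, n_experts=4, max_depth=3, seed=0):
        self.rng = np.random.default_rng(seed)
        self.experts = [DecisionTreeClassifier(max_depth=max_depth)
                        for _ in range(n_experts)]
        self.W = None  # gating weights, shape (n_experts, n_features)

    def fit(self, X, y):
        # Placeholder training: draw random linear gating weights, then fit
        # each tree on the points routed to it. The paper instead trains the
        # gate and the non-differentiable tree experts jointly with a
        # dedicated procedure, which this sketch does not attempt.
        self.W = self.rng.normal(size=(len(self.experts), X.shape[1]))
        assign = np.argmax(X @ self.W.T, axis=1)  # hard expert assignment
        for e, tree in enumerate(self.experts):
            mask = assign == e
            # Fall back to the full data if an expert receives no points.
            tree.fit(X[mask] if mask.any() else X,
                     y[mask] if mask.any() else y)
        return self

    def predict(self, X):
        # Hard thresholding: each input goes to exactly one expert.
        assign = np.argmax(X @ self.W.T, axis=1)
        out = np.empty(X.shape[0], dtype=object)
        for e, tree in enumerate(self.experts):
            mask = assign == e
            if mask.any():
                out[mask] = tree.predict(X[mask])
        return out
```

The hard argmax gate is what makes the verifiability claim in the abstract plausible: each prediction is governed by a conjunction of linear constraints (the gating region) and axis-aligned tree splits, both of which are directly expressible as logical rules.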
Scopus citations: 17 (checked on Dec 18, 2024)