Please use this identifier to cite or link to this item: https://research.matf.bg.ac.rs/handle/123456789/488
DC Field | Value | Language
dc.contributor.author | Vasić, Marko | en_US
dc.contributor.author | Petrović, Andrija | en_US
dc.contributor.author | Wang, Kaiyuan | en_US
dc.contributor.author | Nikolić, Mladen | en_US
dc.contributor.author | Singh, Rishabh | en_US
dc.contributor.author | Khurshid, Sarfraz | en_US
dc.date.accessioned | 2022-08-13T09:51:53Z | -
dc.date.available | 2022-08-13T09:51:53Z | -
dc.date.issued | 2022 | -
dc.identifier.issn | 08936080 | en
dc.identifier.uri | https://research.matf.bg.ac.rs/handle/123456789/488 | -
dc.description.abstract | Rapid advancements in deep learning have led to many recent breakthroughs. While deep learning models achieve superior performance, often statistically better than humans, their adoption into safety-critical settings, such as healthcare or self-driving cars, is hindered by their inability to provide safety guarantees or to expose the inner workings of the model in a human-understandable form. We present MoËT, a novel model based on Mixture of Experts, consisting of decision tree experts and a generalized linear model gating function. Thanks to such a gating function, the model is more expressive than a standard decision tree. To support non-differentiable decision trees as experts, we formulate a novel training procedure. In addition, we introduce a hard-thresholding version, MoËTh, in which predictions are made solely by a single expert chosen via the gating function. Thanks to that property, MoËTh allows each prediction to be easily decomposed into a set of logical rules in a form that can be easily verified. While MoËT is a general-use model, we illustrate its power in the reinforcement learning setting. By training MoËT models using an imitation learning procedure on deep RL agents, we outperform the previous state-of-the-art technique based on decision trees while preserving the verifiability of the models. Moreover, we show that MoËT can also be used in real-world supervised problems, on which it outperforms other verifiable machine learning models. | en
dc.language.iso | en | en
dc.relation.ispartof | Neural networks : the official journal of the International Neural Network Society | en
dc.subject | Deep learning | en
dc.subject | Explainability | en
dc.subject | Mixture of Experts | en
dc.subject | Reinforcement learning | en
dc.subject | Verification | en
dc.subject.mesh | Machine Learning | en
dc.subject.mesh | Reinforcement, Psychology | en
dc.title | MoËT: Mixture of Expert Trees and its application to verifiable reinforcement learning | en_US
dc.type | Article | en_US
dc.identifier.doi | 10.1016/j.neunet.2022.03.022 | -
dc.identifier.pmid | 35381441 | -
dc.identifier.scopus | 2-s2.0-85127353743 | -
dc.identifier.url | https://api.elsevier.com/content/abstract/scopus_id/85127353743 | -
dc.contributor.affiliation | Informatics and Computer Science | en_US
dc.relation.firstpage | 34 | en
dc.relation.lastpage | 47 | en
dc.relation.volume | 151 | en
item.fulltext | No Fulltext | -
item.languageiso639-1 | en | -
item.openairecristype | http://purl.org/coar/resource_type/c_18cf | -
item.cerifentitytype | Publications | -
item.grantfulltext | none | -
item.openairetype | Article | -
crisitem.author.dept | Informatics and Computer Science | -
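
The abstract describes MoËTh as a hard-thresholding mixture: a generalized linear gating function scores the experts, and the single highest-scoring decision tree expert makes the prediction alone, which is what keeps each prediction decomposable into verifiable rules. A minimal sketch of that routing idea is given below; the parameter names and the single-threshold stub "trees" are hypothetical illustrations, not the authors' implementation.

```python
# Sketch of hard-gated Mixture of Expert Trees (MoETh-style) routing.
import math

def linear_gate(x, W, b):
    """Generalized linear gating: softmax over linear scores of the input."""
    scores = [sum(wi * xi for wi, xi in zip(w, x)) + bi for w, bi in zip(W, b)]
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

# Two stub "decision tree" experts: each is a single threshold rule,
# standing in for full decision trees.
def expert0(x):
    return 1 if x[0] > 0.5 else 0

def expert1(x):
    return 1 if x[1] > 0.5 else 0

EXPERTS = [expert0, expert1]

def predict_hard(x, W, b):
    """Hard-thresholded prediction: only the expert with the highest gate
    probability predicts, so the output is explained by that one tree's
    rules together with the linear gate region that selected it."""
    g = linear_gate(x, W, b)
    k = max(range(len(g)), key=lambda i: g[i])
    return EXPERTS[k](x), k

# Hypothetical gate parameters: route on which coordinate dominates.
W = [[1.0, -1.0], [-1.0, 1.0]]
b = [0.0, 0.0]

y, chosen = predict_hard([0.9, 0.1], W, b)  # gate picks expert 0
```

Because the chosen expert is a decision tree and the gate region is a linear constraint on the input, each prediction in this scheme reduces to a conjunction of linear inequalities and tree-split rules, which is the property the abstract credits with making MoËTh verifiable.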
Appears in Collections:Research outputs

SCOPUS™ Citations: 17 (checked on Dec 18, 2024)
Page view(s): 13 (checked on Dec 24, 2024)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.