Show simple item record

dc.contributor.editor  Holzinger, Andreas
dc.contributor.editor  Goebel, Randy
dc.contributor.editor  Fong, Ruth
dc.contributor.editor  Moon, Taesup
dc.contributor.editor  Müller, Klaus-Robert
dc.contributor.editor  Samek, Wojciech
dc.date.accessioned  2022-05-14T04:03:03Z
dc.date.available  2022-05-14T04:03:03Z
dc.date.issued  2022
dc.date.submitted  2022-05-13T12:19:29Z
dc.identifier  ONIX_20220513_9783031040832_35
dc.identifier  https://library.oapen.org/handle/20.500.12657/54443
dc.identifier.uri  https://directory.doabooks.org/handle/20.500.12854/81682
dc.description.abstract  This is an open access book. Statistical machine learning (ML) has triggered a renaissance of artificial intelligence (AI). While the most successful ML models, including Deep Neural Networks (DNNs), have achieved better predictive performance, they have become increasingly complex at the expense of human interpretability (correlation vs. causality). The field of explainable AI (xAI) has emerged with the goal of creating tools and models that are both predictive and interpretable and understandable to humans. Explainable AI is receiving huge interest in the machine learning and AI research communities, across academia, industry, and government, and there is now an excellent opportunity to push towards successful explainable AI applications. This volume will help the research community to accelerate this process, to promote a more systematic use of explainable AI to improve models in diverse applications, and ultimately to better understand how current explainable AI methods need to be improved and what kind of theory of explainable AI is needed. After overviews of current methods and challenges, the editors include chapters that describe new developments in explainable AI. The contributions are from leading researchers in the field, drawn from both academia and industry, and many of the chapters take a clear interdisciplinary approach to problem-solving. The concepts discussed include explainability, causability, and AI interfaces with humans, and the applications include image processing, natural language, law, fairness, and climate science.
dc.language  English
dc.relation.ispartofseries  Lecture Notes in Computer Science; Lecture Notes in Artificial Intelligence
dc.rights  open access
dc.subject.classification  thema EDItEUR::U Computing and Information Technology::UY Computer science::UYQ Artificial intelligence  en_US
dc.subject.classification  thema EDItEUR::U Computing and Information Technology::UY Computer science::UYQ Artificial intelligence::UYQM Machine learning  en_US
dc.subject.other  Computer Science
dc.subject.other  Informatics
dc.subject.other  Conference Proceedings
dc.subject.other  Research
dc.subject.other  Applications
dc.title  xxAI - Beyond Explainable AI
dc.title.alternative  International Workshop, Held in Conjunction with ICML 2020, July 18, 2020, Vienna, Austria, Revised and Extended Papers
dc.type  book
oapen.identifier.doi  10.1007/978-3-031-04083-2
oapen.relation.isPublishedBy  9fa3421d-f917-4153-b9ab-fc337c396b5a
oapen.relation.isbn  9783031040832
oapen.imprint  Springer International Publishing
oapen.pages  397
oapen.place.publication  Cham
dc.seriesnumber  13200

