Explainable Artificial Intelligence Recommendation System by Leveraging the Semantics of Adverse Childhood Experiences: Proof-of-Concept Prototype Development
Saved in:

| Journal Title: | JMIR Medical Informatics |
|---|---|
| Persons and Corporate Bodies: | Ammar, Nariman; Shaban-Nejad, Arash |
| In: | JMIR Medical Informatics, 8, 2020, 11, p. e18752 |
| Format: | E-Article |
| Language: | English |
| Published: | JMIR Publications Inc. |
| Subjects: | Health Information Management; Health Informatics |
author_facet |
Ammar, Nariman; Shaban-Nejad, Arash |
author |
Ammar, Nariman; Shaban-Nejad, Arash |
author_sort |
ammar, nariman |
doi_str_mv |
10.2196/18752 |
facet_avail |
Online, Free |
finc_class_facet |
Medizin, Informatik |
format |
ElectronicArticle |
fullrecord |
blob:ai-49-aHR0cDovL2R4LmRvaS5vcmcvMTAuMjE5Ni8xODc1Mg |
id |
ai-49-aHR0cDovL2R4LmRvaS5vcmcvMTAuMjE5Ni8xODc1Mg |
institution |
DE-L229, DE-D275, DE-Bn3, DE-Brt1, DE-Zwi2, DE-D161, DE-Gla1, DE-Zi4, DE-15, DE-Pl11, DE-Rs1, DE-105, DE-14, DE-Ch1 |
imprint |
JMIR Publications Inc., 2020 |
imprint_str_mv |
JMIR Publications Inc., 2020 |
issn |
2291-9694 |
issn_str_mv |
2291-9694 |
language |
English |
mega_collection |
JMIR Publications Inc. (CrossRef) |
match_str |
ammar2020explainableartificialintelligencerecommendationsystembyleveragingthesemanticsofadversechildhoodexperiencesproofofconceptprototypedevelopment |
publishDateSort |
2020 |
publisher |
JMIR Publications Inc. |
recordtype |
ai |
record_format |
ai |
series |
JMIR Medical Informatics |
source_id |
49 |
title |
Explainable Artificial Intelligence Recommendation System by Leveraging the Semantics of Adverse Childhood Experiences: Proof-of-Concept Prototype Development |
title_unstemmed |
Explainable Artificial Intelligence Recommendation System by Leveraging the Semantics of Adverse Childhood Experiences: Proof-of-Concept Prototype Development |
title_full |
Explainable Artificial Intelligence Recommendation System by Leveraging the Semantics of Adverse Childhood Experiences: Proof-of-Concept Prototype Development |
title_fullStr |
Explainable Artificial Intelligence Recommendation System by Leveraging the Semantics of Adverse Childhood Experiences: Proof-of-Concept Prototype Development |
title_full_unstemmed |
Explainable Artificial Intelligence Recommendation System by Leveraging the Semantics of Adverse Childhood Experiences: Proof-of-Concept Prototype Development |
title_short |
Explainable Artificial Intelligence Recommendation System by Leveraging the Semantics of Adverse Childhood Experiences: Proof-of-Concept Prototype Development |
title_sort |
explainable artificial intelligence recommendation system by leveraging the semantics of adverse childhood experiences: proof-of-concept prototype development |
topic |
Health Information Management; Health Informatics |
url |
http://dx.doi.org/10.2196/18752 |
publishDate |
2020 |
physical |
e18752 |
description |
<jats:sec>
<jats:title>Background</jats:title>
<jats:p>The study of adverse childhood experiences and their consequences has emerged over the past 20 years. Although the conclusions from these studies are available, the underlying data are not. Accordingly, building a training set and developing machine-learning models from these studies is a complex problem. Classic machine learning and artificial intelligence techniques cannot provide a full scientific understanding of the inner workings of the underlying models. This raises credibility issues due to the lack of transparency and generalizability. Explainable artificial intelligence is an emerging approach that promotes credibility, accountability, and trust in mission-critical areas such as medicine by combining machine-learning approaches with explanatory techniques that explicitly show what the decision criteria are and why (or how) they have been established. Hence, a potential solution to this problem is to combine machine learning with knowledge graphs that incorporate “common sense” knowledge together with semantic reasoning and causality models.</jats:p>
</jats:sec>
<jats:sec>
<jats:title>Objective</jats:title>
<jats:p>In this study, we aimed to leverage explainable artificial intelligence and propose a proof-of-concept prototype for a knowledge-driven, evidence-based recommendation system to improve mental health surveillance.</jats:p>
</jats:sec>
<jats:sec>
<jats:title>Methods</jats:title>
<jats:p>We used concepts from an ontology that we have developed to build and train a question-answering agent using the Google DialogFlow engine. In addition to the question-answering agent, the initial prototype includes knowledge graph generation and recommendation components that leverage third-party graph technology.</jats:p>
</jats:sec>
<jats:sec>
<jats:title>Results</jats:title>
<jats:p>To showcase the framework’s functionalities, we present a prototype design and demonstrate its main features through four use case scenarios motivated by an initiative currently implemented at a children’s hospital in Memphis, Tennessee. Ongoing development of the prototype requires implementing an optimization algorithm for the recommendations, incorporating a privacy layer through a personal health library, and conducting a clinical trial to assess both the usability and usefulness of the implementation.</jats:p>
</jats:sec>
<jats:sec>
<jats:title>Conclusions</jats:title>
<jats:p>This semantic-driven explainable artificial intelligence prototype can enhance health care practitioners’ ability to provide explanations for the decisions they make.</jats:p>
</jats:sec> |
container_issue |
11 |
container_start_page |
0 |
container_title |
JMIR Medical Informatics |
container_volume |
8 |
format_de105 |
Article, E-Article |
format_de14 |
Article, E-Article |
format_de15 |
Article, E-Article |
format_de520 |
Article, E-Article |
format_de540 |
Article, E-Article |
format_dech1 |
Article, E-Article |
format_ded117 |
Article, E-Article |
format_degla1 |
E-Article |
format_del152 |
Buch |
format_del189 |
Article, E-Article |
format_dezi4 |
Article |
format_dezwi2 |
Article, E-Article |
format_finc |
Article, E-Article |
format_nrw |
Article, E-Article |
_version_ |
1792347958289629192 |
geogr_code |
not assigned |
last_indexed |
2024-03-01T18:03:17.722Z |
geogr_code_person |
not assigned |
openURL |
url_ver=Z39.88-2004&ctx_ver=Z39.88-2004&ctx_enc=info%3Aofi%2Fenc%3AUTF-8&rfr_id=info%3Asid%2Fvufind.svn.sourceforge.net%3Agenerator&rft.title=Explainable+Artificial+Intelligence+Recommendation+System+by+Leveraging+the+Semantics+of+Adverse+Childhood+Experiences%3A+Proof-of-Concept+Prototype+Development&rft.date=2020-11-04&genre=article&issn=2291-9694&volume=8&issue=11&pages=e18752&jtitle=JMIR+Medical+Informatics&atitle=Explainable+Artificial+Intelligence+Recommendation+System+by+Leveraging+the+Semantics+of+Adverse+Childhood+Experiences%3A+Proof-of-Concept+Prototype+Development&aulast=Shaban-Nejad&aufirst=Arash&rft_id=info%3Adoi%2F10.2196%2F18752&rft.language%5B0%5D=eng |