Explainable AI: Machine Learning Interpretation in Blackcurrant Powders
cris.virtual.author-orcid | 0000-0002-2535-8370 | |
cris.virtualsource.author-orcid | 898dc715-0fc1-42af-a4d7-0bc909752fee | |
dc.abstract.en | Recently, explainability in machine and deep learning has become an important area of research and interest, both because of the increasing use of artificial intelligence (AI) methods and because of the need to understand the decisions made by models. Explainable artificial intelligence (XAI) stems from growing awareness of, among other things, data mining, error elimination, and the learning performance of various AI algorithms. Moreover, XAI makes the decisions made by models more transparent as well as more effective. In this study, models from the ‘glass box’ group (including Decision Tree) and the ‘black box’ group (including Random Forest) were proposed to interpret the identification of selected types of currant powders. The learning process of these models was evaluated using performance indicators such as accuracy, precision, recall, and F1-score. It was visualized using Local Interpretable Model-Agnostic Explanations (LIME) to assess the effectiveness of identifying specific types of blackcurrant powders based on texture descriptors such as entropy, contrast, correlation, dissimilarity, and homogeneity. Bagging (Bagging_100), Decision Tree (DT0), and Random Forest (RF7_gini) proved to be the most effective models for currant powder interpretability. The classifier performance measures of accuracy, precision, recall, and F1-score for Bagging_100 all reached approximately 0.979. In comparison, DT0 reached values of 0.968, 0.972, 0.968, and 0.969, and RF7_gini reached 0.963, 0.964, 0.963, and 0.963. All of these models achieved classifier performance measures greater than 96%. In the future, XAI using model-agnostic methods can be an additional important tool for analyzing data, including food products, even online. | |
dc.affiliation | Wydział Nauk o Żywności i Żywieniu | |
dc.affiliation.institute | Katedra Mleczarstwa i Inżynierii Procesowej | |
dc.contributor.author | Przybył, Krzysztof | |
dc.date.access | 2024-07-04 | |
dc.date.accessioned | 2024-07-08T08:54:11Z | |
dc.date.available | 2024-07-08T08:54:11Z | |
dc.date.copyright | 2024-05-17 | |
dc.date.issued | 2024 | |
dc.description.abstract | <jats:p>Recently, explainability in machine and deep learning has become an important area of research and interest, both because of the increasing use of artificial intelligence (AI) methods and because of the need to understand the decisions made by models. Explainable artificial intelligence (XAI) stems from growing awareness of, among other things, data mining, error elimination, and the learning performance of various AI algorithms. Moreover, XAI makes the decisions made by models more transparent as well as more effective. In this study, models from the ‘glass box’ group (including Decision Tree) and the ‘black box’ group (including Random Forest) were proposed to interpret the identification of selected types of currant powders. The learning process of these models was evaluated using performance indicators such as accuracy, precision, recall, and F1-score. It was visualized using Local Interpretable Model-Agnostic Explanations (LIME) to assess the effectiveness of identifying specific types of blackcurrant powders based on texture descriptors such as entropy, contrast, correlation, dissimilarity, and homogeneity. Bagging (Bagging_100), Decision Tree (DT0), and Random Forest (RF7_gini) proved to be the most effective models for currant powder interpretability. The classifier performance measures of accuracy, precision, recall, and F1-score for Bagging_100 all reached approximately 0.979. In comparison, DT0 reached values of 0.968, 0.972, 0.968, and 0.969, and RF7_gini reached 0.963, 0.964, 0.963, and 0.963. All of these models achieved classifier performance measures greater than 96%. In the future, XAI using model-agnostic methods can be an additional important tool for analyzing data, including food products, even online.</jats:p> | |
dc.description.accesstime | at_publication | |
dc.description.bibliography | il., bibliogr. | |
dc.description.finance | publication_nocost | |
dc.description.financecost | 0,00 | |
dc.description.if | 3.4 | |
dc.description.number | 10 | |
dc.description.points | 100 | |
dc.description.review | review | |
dc.description.version | final_published | |
dc.description.volume | 24 | |
dc.identifier.doi | 10.3390/s24103198 | |
dc.identifier.issn | 1424-8220 | |
dc.identifier.uri | https://sciencerep.up.poznan.pl/handle/item/1575 | |
dc.identifier.weblink | https://www.mdpi.com/1424-8220/24/10/3198 | |
dc.language | en | |
dc.relation.ispartof | Sensors | |
dc.relation.pages | art. 3198 | |
dc.rights | CC-BY | |
dc.sciencecloud | send | |
dc.share.type | OPEN_JOURNAL | |
dc.subject.en | explainable artificial intelligence (XAI) | |
dc.subject.en | Local Interpretable Model-Agnostic Explanations (LIME) | |
dc.subject.en | machine learning | |
dc.subject.en | classifier ensembles | |
dc.subject.en | gray-level co-occurrence matrix (GLCM) | |
dc.subject.en | Random Forest (RF) | |
dc.subject.en | blackcurrant powders | |
dc.title | Explainable AI: Machine Learning Interpretation in Blackcurrant Powders | |
dc.type | JournalArticle | |
dspace.entity.type | Publication | |
oaire.citation.issue | 10 | |
oaire.citation.volume | 24 |