Explainable AI: Machine Learning Interpretation in Blackcurrant Powders

cris.virtual.author-orcid: 0000-0002-2535-8370
cris.virtualsource.author-orcid: 898dc715-0fc1-42af-a4d7-0bc909752fee
dc.abstract.en: Recently, explainability in machine and deep learning has become an important area of research and interest, owing both to the increasing use of artificial intelligence (AI) methods and to the need to understand the decisions that models make. Explainable artificial intelligence (XAI) responds to a growing awareness of, among other things, data mining, error elimination, and the learning performance of various AI algorithms. Moreover, XAI makes the decisions that models reach more transparent as well as more effective. In this study, models from the ‘glass box’ group (among others, Decision Tree) and the ‘black box’ group (among others, Random Forest) were proposed to support the identification of selected types of currant powders. These models were trained and evaluated with performance indicators, namely accuracy, precision, recall, and F1-score, and their predictions were visualized using Local Interpretable Model-Agnostic Explanations (LIME) to show how effectively specific types of blackcurrant powders are identified from texture descriptors such as entropy, contrast, correlation, dissimilarity, and homogeneity. Bagging (Bagging_100), Decision Tree (DT0), and Random Forest (RF7_gini) proved to be the most effective models for the interpretable identification of currant powders. For Bagging_100, accuracy, precision, recall, and F1-score all reached approximately 0.979; in comparison, DT0 reached 0.968, 0.972, 0.968, and 0.969, and RF7_gini reached 0.963, 0.964, 0.963, and 0.963, respectively. All of these models therefore exceeded 96% on every performance measure. In the future, XAI based on model-agnostic methods can become an additional, important tool for analyzing data, including data on food products, even online.
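The abstract describes a pipeline of training glass-box and black-box classifiers on GLCM texture descriptors, scoring them with accuracy, precision, recall, and F1-score, and explaining individual predictions with LIME. Below is a minimal sketch of such a pipeline in Python with scikit-learn and the lime package; the placeholder data, the hypothetical class names, and the hyperparameters guessed from the model labels (DT0, RF7_gini, Bagging_100) are illustrative assumptions, not the authors' code.

```python
# Minimal sketch (assumed, not the authors' code): train glass-box and
# black-box classifiers on GLCM texture descriptors and explain one
# prediction with LIME.
import numpy as np
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.metrics import (accuracy_score, f1_score,
                             precision_score, recall_score)
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from lime.lime_tabular import LimeTabularExplainer

feature_names = ["entropy", "contrast", "correlation",
                 "dissimilarity", "homogeneity"]
class_names = ["powder_A", "powder_B", "powder_C"]  # hypothetical powder types

# Placeholder data standing in for GLCM descriptors of powder images.
rng = np.random.default_rng(0)
X = rng.random((300, len(feature_names)))
y = rng.integers(0, len(class_names), 300)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "DT0": DecisionTreeClassifier(random_state=0),                       # glass box
    "RF7_gini": RandomForestClassifier(max_depth=7, criterion="gini",    # black box
                                       random_state=0),
    "Bagging_100": BaggingClassifier(n_estimators=100, random_state=0),  # black box
}
for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    print(name,
          accuracy_score(y_test, pred),
          precision_score(y_test, pred, average="weighted"),
          recall_score(y_test, pred, average="weighted"),
          f1_score(y_test, pred, average="weighted"))

# LIME: local, model-agnostic explanation of a single test instance.
explainer = LimeTabularExplainer(X_train, feature_names=feature_names,
                                 class_names=class_names, mode="classification")
explanation = explainer.explain_instance(X_test[0],
                                         models["RF7_gini"].predict_proba)
print(explanation.as_list())
```

The weighted averages above are one common way to aggregate per-class precision, recall, and F1 into the single values quoted in the abstract; the paper may use a different averaging scheme.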
dc.affiliation: Faculty of Food Science and Nutrition (Wydział Nauk o Żywności i Żywieniu)
dc.affiliation.institute: Department of Dairy and Process Engineering (Katedra Mleczarstwa i Inżynierii Procesowej)
dc.contributor.author: Przybył, Krzysztof
dc.date.access: 2024-07-04
dc.date.accessioned: 2024-07-08T08:54:11Z
dc.date.available: 2024-07-08T08:54:11Z
dc.date.copyright: 2024-05-17
dc.date.issued: 2024
dc.description.accesstime: at_publication
dc.description.bibliography: illustrations, bibliography
dc.description.finance: publication_nocost
dc.description.financecost: 0,00
dc.description.if: 3.4
dc.description.number: 10
dc.description.points: 100
dc.description.review: review
dc.description.version: final_published
dc.description.volume: 24
dc.identifier.doi: 10.3390/s24103198
dc.identifier.issn: 1424-8220
dc.identifier.uri: https://sciencerep.up.poznan.pl/handle/item/1575
dc.identifier.weblink: https://www.mdpi.com/1424-8220/24/10/3198
dc.language: en
dc.relation.ispartof: Sensors
dc.relation.pages: art. 3198
dc.rights: CC-BY
dc.sciencecloud: send
dc.share.type: OPEN_JOURNAL
dc.subject.en: explainable artificial intelligence (XAI)
dc.subject.en: Local Interpretable Model-Agnostic Explanations (LIME)
dc.subject.en: machine learning
dc.subject.en: classifier ensembles
dc.subject.en: gray-level co-occurrence matrix (GLCM)
dc.subject.en: Random Forest (RF)
dc.subject.en: blackcurrant powders
dc.title: Explainable AI: Machine Learning Interpretation in Blackcurrant Powders
dc.type: JournalArticle
dspace.entity.type: Publication
oaire.citation.issue: 10
oaire.citation.volume: 24
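As a complement to the gray-level co-occurrence matrix (GLCM) keyword above, the following brief sketch shows one common way to compute the texture descriptors named in the abstract (entropy, contrast, correlation, dissimilarity, and homogeneity) with scikit-image; the placeholder image and the distance/angle parameters are illustrative assumptions, not values from the paper.

```python
# Minimal, assumed sketch of GLCM texture-descriptor extraction with
# scikit-image; not taken from the paper.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Placeholder 8-bit grayscale image standing in for a powder micrograph.
image = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)

# Normalized, symmetric GLCM for pixel pairs at distance 1, angle 0.
glcm = graycomatrix(image, distances=[1], angles=[0], levels=256,
                    symmetric=True, normed=True)

features = {prop: graycoprops(glcm, prop)[0, 0]
            for prop in ("contrast", "correlation",
                         "dissimilarity", "homogeneity")}

# Entropy computed directly from the normalized co-occurrence matrix.
p = glcm[:, :, 0, 0]
features["entropy"] = -np.sum(p[p > 0] * np.log2(p[p > 0]))
print(features)
```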