Explainable AI: Machine Learning Interpretation in Blackcurrant Powders

Type
Journal article
Language
English
Date issued
2024
Author
Przybył, Krzysztof 
Faculty
Wydział Nauk o Żywności i Żywieniu (Faculty of Food Science and Nutrition)
Journal
Sensors
ISSN
1424-8220
DOI
10.3390/s24103198
Web address
https://www.mdpi.com/1424-8220/24/10/3198
Volume
24
Number
10
Pages from-to
art. 3198
Abstract (EN)
Explainability in machine and deep learning has recently become an important area of research and interest, driven both by the increasing use of artificial intelligence (AI) methods and by the need to understand the decisions that models make. Explainable artificial intelligence (XAI) reflects growing awareness of, among other things, data mining, error elimination, and the learning performance of various AI algorithms. Moreover, XAI makes the decisions reached by models more transparent as well as more effective. In this study, models from the 'glass box' group (including Decision Tree) and the 'black box' group (including Random Forest) were proposed for identifying selected types of currant powders. These models were trained and evaluated with performance indicators such as accuracy, precision, recall, and F1-score, and their predictions were visualized using Local Interpretable Model-Agnostic Explanations (LIME) to assess how effectively specific types of blackcurrant powders are identified from texture descriptors such as entropy, contrast, correlation, dissimilarity, and homogeneity. Bagging (Bagging_100), Decision Tree (DT0), and Random Forest (RF7_gini) proved to be the most effective models for currant powder interpretability. For Bagging_100, accuracy, precision, recall, and F1-score each reached approximately 0.979; DT0 reached 0.968, 0.972, 0.968, and 0.969, and RF7_gini reached 0.963, 0.964, 0.963, and 0.963. All of these models exceeded 96% on every performance measure. In the future, XAI based on model-agnostic methods can serve as an additional tool for analyzing data, including food products, even online.
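The pipeline described in the abstract (training glass-box and black-box classifiers on GLCM texture descriptors, then scoring them with accuracy, precision, recall, and F1) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the real dataset and all hyperparameters are assumptions, and synthetic data stands in for the blackcurrant powder measurements.

```python
# Hedged sketch of the abstract's evaluation workflow. The model labels
# (DT0, RF7_gini, Bagging_100) come from the abstract; their exact settings
# here are assumptions, and make_classification stands in for the real
# GLCM descriptors (entropy, contrast, correlation, dissimilarity, homogeneity).
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.metrics import (accuracy_score, f1_score,
                             precision_score, recall_score)
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Five features, mirroring the five texture descriptors named in the abstract.
X, y = make_classification(n_samples=600, n_features=5, n_informative=4,
                           n_redundant=0, n_classes=4, n_clusters_per_class=1,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "DT0": DecisionTreeClassifier(random_state=0),          # 'glass box'
    "RF7_gini": RandomForestClassifier(criterion="gini",    # 'black box'
                                       random_state=7),
    "Bagging_100": BaggingClassifier(n_estimators=100, random_state=0),
}

scores = {}
for name, model in models.items():
    y_pred = model.fit(X_tr, y_tr).predict(X_te)
    scores[name] = {
        "accuracy": accuracy_score(y_te, y_pred),
        "precision": precision_score(y_te, y_pred, average="weighted"),
        "recall": recall_score(y_te, y_pred, average="weighted"),
        "f1": f1_score(y_te, y_pred, average="weighted"),
    }
```

The LIME step reported in the abstract would then wrap each fitted model's `predict_proba` with a tabular explainer (e.g. the `lime` package's `LimeTabularExplainer`) to attribute individual predictions to the five texture descriptors.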
Keywords (EN)
  • explainable artificial intelligence (XAI)
  • Local Interpretable Model-Agnostic Explanations (LIME)
  • machine learning
  • classifier ensembles
  • gray-level co-occurrence matrix (GLCM)
  • Random Forest (RF)
  • blackcurrant powders

License
CC-BY - Attribution
Open access date
May 17, 2024