Show simple item record

dc.contributor.advisor             Κόλλια, Ηλιάννα
dc.contributor.author              Karvonidis, Vasileios
dc.contributor.other               Καρβωνίδης, Βασίλειος
dc.coverage.spatial                Cyprus  el_GR
dc.date.accessioned                2023-02-08T09:57:12Z
dc.date.available                  2023-02-08T09:57:12Z
dc.date.copyright                  2023-02-03
dc.date.issued                     2022-12
dc.identifier.other                COS/2022/00017  el_GR
dc.identifier.uri                  http://hdl.handle.net/11128/5489
dc.description                     Includes bibliographical references.  el_GR
dc.description.abstract            Deep learning models represent the cutting edge of Artificial Intelligence (AI). However, they lack one key element: explainability. Even though such models perform very accurately, they act as black boxes, giving us no insight into how they made a prediction or what drove them to reach it. This problem has concerned researchers for many years, as explainability is of utmost importance if deep learning models are to be used in critical applications, such as those in the medical, finance and automotive sectors. Explainable AI consists of a set of methods and processes that allow users to understand and trust the results of an AI model, because the decision making is clear and we can readily characterise the accuracy, transparency and fairness of the model, even in complex situations. This will make AI more ‘responsible’ for its decisions and more trustworthy, and will help more sectors and industries to adopt it. In recent years, researchers have developed various techniques that can identify the reasons behind a deep learning model's decision, and these techniques inspire and pave the way for new methods. For example, Class Activation Mapping produces heatmaps that highlight the region of the image the model focused on in order to classify it. This visualisation of where the model is looking helps to identify whether the model is trustworthy or not. A model could, for instance, classify a train image by looking at the train tracks rather than the actual train: despite the correct classification, the model takes into account the wrong parts of the image, which could be a consequence of poor training. A more evolved technique based on Class Activation Mapping is Grad-CAM. This technique is class-specific, meaning that for the same image it can produce a separate visualisation for each class present in it. Another interesting approach is Structured Attention Graphs (SAGs). This method is inspired by attention maps, which are popular tools for explaining the decisions of deep learning models. The researchers argue that a single attention map is not enough; with SAGs, we build a set of image patches as attention maps and record the model's confidence on each one in order to evaluate how the model is affected. This thesis focuses mainly on Grad-Class Activation Mapping (Grad-CAM) and Structured Attention Graphs. We explain the procedures behind image classification, benchmark the techniques and see how they apply to various datasets. We also analyse their role in the general structure of explainable artificial intelligence.  el_GR
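
As an illustration of the Grad-CAM procedure summarised in the abstract, the following minimal sketch shows one common way to compute a class-specific heatmap with PyTorch. The pretrained ResNet-50, the choice of its last convolutional block as the target layer, and the input preprocessing are assumptions made for this example only; they are not taken from the thesis, which may use different models and datasets.

# Minimal Grad-CAM sketch (PyTorch). Illustrative only: the model, the target
# layer and the preprocessing are assumptions, not details from the thesis.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
target_layer = model.layer4[-1]          # last convolutional block (assumed choice)

activations, gradients = {}, {}

def fwd_hook(module, inputs, output):
    activations["value"] = output        # feature maps A_k, shape (1, K, H, W)

def bwd_hook(module, grad_input, grad_output):
    gradients["value"] = grad_output[0]  # dY_c / dA_k, same shape as A_k

target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

def grad_cam(image, class_idx=None):
    """Return a heatmap in [0, 1] showing where the network looked
    when scoring class_idx (the top-1 class if None)."""
    scores = model(image)                # image: (1, 3, 224, 224), ImageNet-normalised
    if class_idx is None:
        class_idx = scores.argmax(dim=1).item()
    model.zero_grad()
    scores[0, class_idx].backward()      # fills gradients["value"] via the hook

    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)   # alpha_k: global-average-pooled gradients
    cam = F.relu((weights * activations["value"]).sum(dim=1))     # weighted sum of feature maps, then ReLU
    cam = F.interpolate(cam.unsqueeze(1), size=image.shape[2:],   # upsample to the input resolution
                        mode="bilinear", align_corners=False)[0, 0]
    cam = cam - cam.min()
    return (cam / cam.max().clamp(min=1e-8)).detach()

For an input tensor x of shape (1, 3, 224, 224) normalised with the standard ImageNet statistics, grad_cam(x) returns a heatmap that can be overlaid on the original image, producing the kind of class-specific visualisation the abstract describes, e.g. checking whether a ‘train’ prediction relies on the train itself or on the tracks.
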
dc.format.extent                   273 p. ; 30 cm.  el_GR
dc.language                        en  el_GR
dc.language.iso                    en  el_GR
dc.publisher                       Open University of Cyprus  el_GR
dc.rights                          info:eu-repo/semantics/closedAccess  el_GR
dc.subject                         Learning system  el_GR
dc.title                           Creation and analysis of an explainable deep learning system  el_GR
dc.type                            Master's Thesis  el_GR
dc.description.translatedabstract  -------  el_GR
dc.format.type                     pdf  el_GR

