Mentorship

Master's theses supervised

  • 2024-03-25
    Evaluating the Quality of Graph Neural Network-Based Embeddings Over a Procedural Knowledge Graph
    • Baalakrishnan Aiyer Manikandan
    • Otto von Guericke University Magdeburg, Germany
    • The thesis evaluates the performance of Graph Neural Networks (GNNs) in embedding procedural knowledge graphs within the manufacturing domain. It introduces a novel evaluation metric, Matches@k, and compares GNN models against traditional baseline models and various embedding techniques. The study finds that GNNs, especially Relational Graph Convolutional Networks (RGCNs), outperform baseline models in capturing the semantic meaning of procedural knowledge graphs. It highlights the influence of model complexity and graph representation on performance, with multi-relational models showing superior results. The thesis also identifies limitations in handling complex graph structures and suggests future research directions, including the exploration of more flexible architectures and synthetic data generation.
  • 2023-11-10
    Explainable Artificial Intelligence Applied to Long Short-Term Memory Neural Networks for Estimating the Capacity of Lithium-Ion Batteries
    • Leonardo Dal Ronco
    • Università degli Studi Guglielmo Marconi, Rome, Italy
    • This thesis focuses on developing a Long Short-Term Memory (LSTM) neural network to accurately estimate lithium-ion battery capacity within energy storage systems. By simplifying the architecture, the model achieves accuracy comparable to the existing literature with fewer training parameters, making it suitable for deployment on Battery Management Systems (BMS) with limited computational resources. Additionally, the thesis explores explainability techniques to validate the model's reliability and develops a web application for the interactive exploration of the dataset and model outputs. This work provides a foundation for further research on explainability techniques applied to LSTM networks for lithium-ion battery capacity estimation.
  • 2023-07-27
    GNNxEval: Design and Implementation of a Framework for Graph Neural Network Explainers Evaluation
    • Affan Ahmed
    • Otto von Guericke University Magdeburg, Germany
    • This thesis focuses on the importance of explainability in Graph Neural Networks (GNNs) and proposes a framework called GNNxEval. The research aims to address the problem of interpretability in GNNs by exploring different datasets, tasks, models, and explainers, and by evaluating their effectiveness. The implementation of GNNxEval allows users to select different configurations of GNNs, datasets, explainers, and explainer evaluation metrics.
  • 2023-02-06
    Design and development of a standardized framework for fairness analysis and mitigation for Graph Neural Network-based user profiling models
    • Mohamed Abdelrazek
    • Otto von Guericke University Magdeburg, Germany
    • The student presented the implementation of a framework for analysing fairness and mitigating bias in state-of-the-art Graph Neural Networks (GNNs) for user profiling. In particular, the student developed a component that standardizes the input of every GNN analysed, allowing a user to provide a single graph and obtain the output of all of them. The assessment of fairness and performance covered three GNN architectures, four public datasets, four fairness metrics and four debiasing approaches.
  • 2022-07-28
    FACADE: Fake Articles Classification And Decision Explanation
    • Saijal Shahania
    • Otto von Guericke University Magdeburg, Germany
    • The student presented a two-level cascading system for distinguishing fake from real news by exploiting several types of features, both low-level and high-level descriptors. Furthermore, the system provides an explainable user interface that enables end users to understand which features most influenced the system's outcome.
  • 2022-07-20
    Generating Plausible Counterfactual Images Using Generative Networks
    • Mahantesh Vishvanath Pattadkal
    • Co-supervised with Soumick Chatterjee
    • Otto von Guericke University Magdeburg, Germany
    • The student presented a novel Generative Adversarial Network (GAN) architecture for generating plausible counterfactual images, applied in the medical domain to a dataset of brain MRI images. As a first contribution of this thesis work, the high-quality images generated by the novel GAN architecture are compared with the results of Deep Convolutional GAN (DCGAN) and Wasserstein GAN (WGAN) models. As a second contribution, the thesis analyses the use of a plausibility regulator in counterfactual image generation to obtain plausible images, in comparison with baseline methods on both simple and complex datasets.
  • 2022-05-09
    Leveraging explainability to improve ranking as part of information retrieval
    • Shivani Jadhav
    • Otto von Guericke University Magdeburg, Germany
    • The student presented Learning to Rank (LTR) approaches, supported by explainability, to solve information retrieval tasks in the supply chain industry. Three LTR approaches, based on traditional machine learning algorithms and neural networks, are compared to find a ranking model that ranks the relevant suppliers at the top, given a specific demand.
  • 2022-04-25
    Iconography in Christian Historic Artwork
    • Syed Abdullah Rizvi
    • Otto von Guericke University Magdeburg, Germany
    • The student presented an iconographic analysis of Christian artworks depicting Holy Mary. Exploiting object detection techniques, he aimed to detect the presence of Holy Mary in paintings based on specific and peculiar properties of the character, such as a blue robe and the typical halo. Two different state-of-the-art object detection models, namely YOLOv5 and Faster R-CNN, have been employed in the experiments, along with an explainability technique, Class Activation Mapping (CAM), used to explain the models' predictions.
  • 2021-10-11
    Fairness and Biases in Arabic Language: A Case Study on Named Entity Recognition Models
    • Khaled Seddik Tawfik
    • Otto von Guericke University Magdeburg, Germany
    • The student presented a study on fairness in the Arabic language and proposed a method to analyse, quantify and mitigate existing textual biases in Arabic Natural Language Processing (ANLP) resources. The thesis focuses on three different forms of bias (i.e. gender, stereotype and religion) and exploits five state-of-the-art named entity recognition (NER) models to investigate their capability to correctly detect male and female names, or Coptic and Muslim names, given a specific context or pattern. The pre-trained models have been tested on a manually generated dataset containing 70 male and female names, 35 unisex names and 31 Coptic and Muslim names. After the first evaluation confirmed the biased behaviour, a bias mitigation method based on data augmentation has been proposed.
  • 2019-12-16
    Techniques for trustworthy artificial intelligence systems in the context of a loan approval process
    • Flavio Lorenzo
    • Polytechnic University of Turin, Italy
    • Work realized during the student's internship at Blue Reply.
    • This thesis focuses on two key technical aspects of trustworthy AI: interpretability of the underlying machine learning model and fairness in the decisions taken by the system. The topics of interpretability and fairness are applied to the use case of a loan approval process. The presented algorithms and frameworks are exploited to build a web-based application that allows the user to manage the whole life cycle of a machine learning model, provide an interpretation of the model's output, and monitor the model's decisions in order to detect and react to unfair behaviours.