XLI Cycle
Up to 12 students can enroll in the XLI Cycle of the PhD Program in Computer and Data Science for Technological and Social Innovation.
● 7 positions are funded with a scholarship:
– 3 scholarships are not tied to a specific topic (the topic can be chosen from those in list “a” below);
– 4 scholarships are tied to a specific topic (one topic per scholarship, shown in list “b”, which follows list “a”).
● 2 positions are funded through a higher-education apprenticeship contract (contratto di apprendistato di alta formazione) with the funding company. Each position is tied to a specific topic, shown in list “c” at the bottom.
● 3 positions are not funded (senza borsa, i.e., without a scholarship).
Themes for the 3 scholarships without a specific topic (and for the 3 positions without a scholarship)
a.1 – Expressive Runtime Monitoring for Safe and Reliable Reinforcement Learning

Keywords: Reinforcement Learning, Runtime Verification, Reward Machines, Non-Markovian Tasks, Safety-Critical Systems, Artificial Intelligence, Autonomous Systems

Research Motivation
Reinforcement Learning (RL) has demonstrated remarkable potential in enabling autonomous systems to learn complex tasks through interactions with their environment. However, the safety and reliability of RL systems in safety-critical domains – such as autonomous driving, robotics, and healthcare – remain significant concerns due to the risk of reward misspecification and unintended behaviors. Traditional RL reward structures are limited in expressiveness, often incapable of adequately capturing complex, temporal, or non-Markovian tasks, resulting in unintended and potentially harmful outcomes. Runtime monitoring languages and Reward Machine-based approaches have recently emerged as promising frameworks to overcome these limitations. By specifying sophisticated reward conditions at runtime, such approaches enable safer, more reliable, and interpretable RL systems capable of handling complex temporal dependencies and context-sensitive tasks.

Research Objectives
This PhD project aims to advance the theory and practice of runtime verification for reinforcement learning systems. The primary objectives include:
● Expressive Reward Specification: Developing expressive runtime verification techniques to specify and evaluate complex non-Markovian reward structures that are beyond regular-language specifications.
● Safety Assurance in RL: Integrating runtime monitors with RL algorithms to provide continuous assurance of safety-critical properties and to mitigate risks associated with reward misspecification.
● Formal and Empirical Validation: Validating the developed methods through rigorous theoretical analysis and extensive experimental studies in realistic and safety-critical environments.

Expected Outcomes and Impact
● Enhanced safety and reliability for reinforcement learning systems through expressive, runtime-verifiable reward specifications.
● New theoretical foundations and practical tools for integrating runtime verification within RL, significantly mitigating the risks of reward misspecification.
● Demonstrated applicability in real-world scenarios, improving trust and enabling broader adoption of RL technologies in safety-critical domains.
This research will advance safe and reliable reinforcement learning, bridging theoretical expressiveness and real-world applicability.

Supervisor: Angelo Ferrando
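As a purely illustrative aid to the description above, the following minimal Python sketch shows how a reward machine can encode a non-Markovian reward: a small finite-state machine tracks task progress over the propositions observed at each environment step and rewards only the completed sequence “reach A, then B, never touching C”. The class and the task are hypothetical examples, not part of any specific framework cited in the proposal.

```python
# Minimal reward-machine sketch (illustrative, not tied to any specific tool):
# states track task progress; transitions fire on the set of propositions
# observed at each environment step and emit a scalar reward.

class RewardMachine:
    def __init__(self, transitions, initial, terminal):
        # transitions: {(state, frozenset(labels)): (next_state, reward)}
        self.transitions = transitions
        self.state = initial
        self.terminal = terminal

    def step(self, labels):
        """Advance on the labels true in the current env state; return reward."""
        key = (self.state, frozenset(labels))
        self.state, reward = self.transitions.get(key, (self.state, 0.0))
        return reward

    def done(self):
        return self.state in self.terminal


# Hypothetical task: reach A, then reach B, never touching C.
rm = RewardMachine(
    transitions={
        ("u0", frozenset({"A"})): ("u1", 0.0),
        ("u0", frozenset({"C"})): ("fail", -1.0),
        ("u1", frozenset({"B"})): ("goal", 1.0),
        ("u1", frozenset({"C"})): ("fail", -1.0),
    },
    initial="u0",
    terminal={"goal", "fail"},
)

print(rm.step({"A"}), rm.step(set()), rm.step({"B"}), rm.done())  # 0.0 0.0 1.0 True
```

In an RL loop, the reward returned by such a monitor would replace (or augment) the environment reward, making the temporal task explicit and checkable at runtime.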
a.2 – Neuroscience-Inspired Cognitive Architectures for Socially Intelligent Multi-Agent Systems

Keywords: Multi-Agent Systems, Neuroscience, Cognitive Architectures, Theory of Mind, Emotions, Human-Agent Interaction, Belief-Desire-Intention (BDI), Social Robotics, Artificial Intelligence

Research Motivation
Integrating insights from neuroscience into cognitive multi-agent systems (MAS) offers significant potential to enhance human-agent interactions across various fields, including social robotics, healthcare, education, and virtual environments. Despite extensive work on cognitive architectures, especially those based on Belief-Desire-Intention (BDI) models, explicit incorporation of neuroscientific findings – such as Theory of Mind, emotional processing, and memory structures – remains limited. Addressing this gap can lead to more adaptive, empathetic, and socially intelligent artificial agents.

Research Objectives
This PhD project aims to design, develop, and validate neuroscience-inspired cognitive architectures for socially intelligent MAS by:
● Neuroscience-based Cognitive Modeling: Integrating neuroscientific concepts (e.g., Theory of Mind, emotional regulation, episodic/semantic memory) into computational agent models.
● Emotion-driven Decision Making: Developing cognitive architectures that simulate human-like emotions to enhance decision-making and interaction believability.
● Advanced Theory of Mind: Improving agents’ abilities to understand and respond to human intentions, emotions, and mental states.
● Adaptive Communication Models: Designing computational frameworks distinguishing casual (“small talk”) and meaningful (“deep talk”) communication to dynamically enhance interactions.
● Experimental Validation: Conducting human-agent studies to evaluate the effectiveness, usability, and acceptance of these architectures in realistic scenarios.

Expected Outcomes and Impact
● Novel neuroscience-inspired cognitive models improving social intelligence in MAS.
● Enhanced naturalness and empathy in interactions, fostering greater human acceptance.
● Broader applicability in critical domains such as healthcare, education, social robotics, and interactive digital environments.
This research will significantly advance understanding of how neuroscience can inform the development of socially intelligent and human-compatible artificial agents.

Supervisor: Angelo Ferrando
a.3 – Formal Verification of AI-Based Autonomous Systems

Keywords: Formal Methods, Formal Verification, Model Checking, Runtime Verification, Multi-Agent Systems, Robotics, Artificial Intelligence, Reliability, Safety

Research Motivation
The rapid adoption of autonomous systems – particularly multi-agent systems (MAS) – across sectors such as robotics, transportation, and smart infrastructures has transformed modern technological landscapes. However, the increasing complexity, adaptive learning mechanisms, and inherent unpredictability of AI-based autonomous systems pose significant challenges to ensuring their reliability and safety, especially when these systems directly influence human lives and critical infrastructure. Formal methods offer rigorous, mathematically grounded approaches for system verification, yet their application to autonomous, AI-driven systems is still limited by scalability, adaptability, and real-time operational constraints. Addressing these limitations is critical for building confidence in autonomous systems deployed in safety-critical applications.

Research Objectives
This PhD project aims to advance formal verification techniques specifically tailored to autonomous AI-based systems, with a strong focus on multi-agent systems (MAS). Key objectives include:
● Systematic Evaluation: Conducting an extensive survey and critical analysis of existing formal verification methods for AI-driven systems, pinpointing gaps and challenges related to scalability, adaptability, and robustness.
● Development of Novel Methods: Creating innovative verification algorithms and techniques capable of addressing the dynamic, adaptive, and high-dimensional characteristics of MAS, particularly those incorporating machine learning and AI components.
● Practical Validation: Implementing and validating these new methods through case studies involving realistic scenarios in robotics, transportation, or smart infrastructures, demonstrating their effectiveness and robustness.
● Generalizability and Impact: Ensuring that the developed verification methods can be generalized beyond MAS, providing practical tools and methodologies applicable across diverse AI-driven autonomous systems.

Expected Outcomes and Impact
The outcomes of this research are anticipated to significantly enhance the reliability, safety, and trustworthiness of autonomous AI systems. By bridging the existing gap between formal verification theory and practical AI implementations, this research will contribute essential tools to the scientific community and industry practitioners, ultimately facilitating safer deployment of intelligent, autonomous technologies.

Supervisor: Angelo Ferrando (Unimore)
Co-Supervisor: Vadim Malvone (Telecom Paris)
a.4 – Natural Language Interfaces for Intelligent Agent-Based Virtual Environments

Keywords: Natural Language Processing, Multi-Agent Systems, Virtual Reality, Human-AI Interaction, Simulation, Cognitive Agents, Digital Twins

Research Motivation
The integration of artificial intelligence and virtual reality is reshaping how humans interact with complex systems. Traditional simulation tools for environments like smart factories or digital twins often demand advanced programming skills, limiting accessibility for domain experts without a technical background. The VEsNA framework (Virtual Environments via Natural Language Agents) [https://github.com/VEsNA-ToolKit] addresses this gap by enabling users to construct and manage virtual environments through natural language commands. By combining agent-based reasoning, natural language processing, and immersive simulations, VEsNA offers a user-friendly platform for designing and interacting with intelligent virtual spaces.

Research Objectives
This PhD project aims to advance the VEsNA framework by enhancing its natural language interfaces and agent-based reasoning capabilities. The key objectives are:
● Enhance Natural Language Understanding: Develop advanced NLP models tailored for interpreting user commands within virtual environments, ensuring accurate and context-aware interactions.
● Integrate Cognitive Agents: Implement cognitive agents capable of reasoning about user intents, environmental constraints, and potential outcomes to facilitate dynamic and intelligent responses.
● Expand Virtual Environment Capabilities: Augment the virtual environments with richer simulations, including physics-based interactions and real-time feedback mechanisms, to provide more immersive experiences.
● Evaluate User Interaction: Conduct user studies to assess the effectiveness, usability, and accessibility of the enhanced VEsNA framework for individuals with varying technical backgrounds.

Expected Outcomes and Impact
The project is expected to deliver a robust, user-centric platform that democratizes the creation and management of intelligent virtual environments. By enabling natural language interactions, the enhanced VEsNA framework will empower a broader range of users to design, simulate, and analyze complex systems without requiring extensive programming knowledge. This advancement has the potential to accelerate innovation in fields such as manufacturing, education, and urban planning, where virtual simulations play a critical role.

Supervisor: Angelo Ferrando
a.5 – Formal Methods for Certification of Autonomous Vehicle Systems under ISO 26262

Keywords: Formal Verification, Runtime Verification, ISO 26262, Automotive Safety Integrity Level (ASIL), Autonomous Vehicles, Certification, Model Checking, Safety-Critical Systems

Research Motivation
The automotive industry is rapidly advancing towards higher levels of vehicle autonomy, introducing complex software-driven functionalities. Ensuring the safety and reliability of these autonomous systems is paramount, especially as they operate in unpredictable real-world environments. The ISO 26262 standard provides a framework for functional safety in road vehicles, emphasizing the need for rigorous verification and validation processes to achieve Automotive Safety Integrity Levels (ASIL). Traditional testing methods often fall short in covering the vast state space and dynamic behaviors of autonomous systems. Formal methods, offering mathematically rigorous techniques, present a promising avenue to enhance the certification process by providing stronger guarantees of system correctness and safety.

Research Objectives
This PhD project aims to integrate formal methods into the certification process of autonomous vehicle systems, aligning with ISO 26262 requirements. The primary objectives include:
● Framework Development: Design a comprehensive framework that incorporates formal verification techniques, such as model checking and theorem proving, into the safety lifecycle defined by ISO 26262.
● Runtime Verification Integration: Explore the application of runtime verification to monitor system behaviors during operation, providing continuous assurance and facilitating adaptive responses to unforeseen scenarios.
● Toolchain Evaluation: Assess existing formal verification tools for their applicability in the automotive domain, identifying gaps and proposing enhancements to meet industry-specific requirements.
● Case Studies: Apply the developed methodologies to real-world autonomous vehicle subsystems, evaluating their effectiveness in achieving ASIL compliance and improving overall system safety.

Expected Outcomes and Impact
The research is expected to yield a validated methodology for integrating formal methods into the certification process of autonomous vehicles, enhancing the rigor and efficiency of achieving ISO 26262 compliance. By bridging the gap between formal verification techniques and industry certification standards, this work aims to contribute to the development of safer autonomous systems and foster greater trust in their deployment.

Supervisor: Angelo Ferrando
a.6 – Autonomic computing for collective self-adaptive systems

Keywords: Autonomic computing, distributed systems, adaptive systems, IoT, autonomous vehicles

Research objectives:
Collective adaptive systems are increasingly widespread in everyday life. Ensembles of mobile phones or IoT devices are already a reality and can be exploited to support human activities, but they require appropriate management approaches; in the future we envision sets of autonomous vehicles that must be coordinated. Autonomic computing is a very good candidate paradigm to address these scenarios. The objective of the research is to define a framework for the development of collective self-adaptive systems based on autonomic computing. The framework will be composed of a methodology that guides developers in addressing the self-* properties, and of tools supporting the developers. Some case studies will be proposed to test the framework.

Proposed research activity:
• State of the art in autonomic computing
• State of the art in collective adaptive systems
• Definition of a framework for autonomic computing in collective adaptive systems
• Definition of a methodology
• Definition of case studies
• Test of the framework
• Participation in relevant international schools

Supporting research projects (and Department): H2020 – FIRST (FIM)

Possible connections with research groups, companies, universities:
Dr. Antonio Bucchiarone, FBK Trento (I)
Dr. Lai Xu, Bournemouth University (UK)
Prof. Emma Hart, Edinburgh Napier University (UK)
Prof. Marco Aiello, Stuttgart University (D)

Supervisor: Prof. Giacomo Cabri
a.7 – Software engineering for autonomous vehicles

Keywords: Software engineering, autonomous vehicles, distributed systems, adaptive systems, IoT

Research objectives:
Autonomous vehicles are spreading, and more and more research is needed to support their engineering. In particular, both the single vehicle and the coordination of sets of vehicles rely on software components that must be designed, implemented and verified; current methods and methodologies may not be suitable for this new scenario, and appropriate approaches are needed. The objective of the research is to study the (meta)requirements of the development of software components and systems for autonomous vehicles, in order to define one or more approaches that are suitable for this scenario.

Proposed research activity:
• State of the art in software engineering
• State of the art in autonomous vehicles
• Definition of approaches to engineer the development of software components and systems for autonomous vehicles
• Definition of a methodology
• Definition of case studies
• Test of the proposed approaches
• Participation in relevant international schools

Supporting research projects (and Department): WASABI 2023 (FIM)

Possible connections with research groups, companies, universities:
Dr. Antonio Bucchiarone, FBK Trento (I)
Prof. Emma Hart, Edinburgh Napier University (UK)
Prof. Marco Aiello, Stuttgart University (D)

Supervisor: Prof. Giacomo Cabri
a.8 – Real-time Collaborative 3D Editing

Keywords:

Research objectives:
Collaborative editing à la Google Docs is still not widespread in the 3D world. The goal of this thesis is to explore real-time collaborative models for 3D. We have obtained encouraging first results by extending distributed version control, à la git and GitHub, to 3D content. In this thesis, we plan to explore the design space of real-time collaborative 3D editing, focusing on local-first models such as CRDTs and differential synchronization. The thesis work will be fully focused on developing new algorithms and data structures for graphics, disregarding all other aspects of real-world collaborative systems, such as security, networking, authentication, etc. No prior knowledge of distributed systems is required.

Our prior work on version control for 3D includes:
• MeshGit
• MeshHisto
• cSculpt
• SceneGit
• others under review

Supervisor: Prof. Fabio Pellacini
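As a purely illustrative aid, the toy Python sketch below shows the convergence behavior that local-first models such as CRDTs provide, using a last-writer-wins register per vertex position. It is a didactic example only and does not reflect the algorithms of MeshGit, MeshHisto, cSculpt or SceneGit.

```python
# Toy local-first sketch: a last-writer-wins (LWW) register per vertex position.
# Two replicas edit concurrently and converge after exchanging their states.

class LWWVertex:
    def __init__(self, pos, stamp=(0, "")):
        self.pos, self.stamp = pos, stamp      # stamp = (logical_time, replica_id)

    def set(self, pos, time, replica):
        stamp = (time, replica)
        if stamp > self.stamp:                 # total order: time first, then replica id
            self.pos, self.stamp = pos, stamp

    def merge(self, other):
        if other.stamp > self.stamp:           # merge keeps the "latest" write
            self.pos, self.stamp = other.pos, other.stamp


a = LWWVertex((0.0, 0.0, 0.0))
b = LWWVertex((0.0, 0.0, 0.0))
a.set((1.0, 0.0, 0.0), time=1, replica="alice")   # concurrent edits on two replicas
b.set((0.0, 2.0, 0.0), time=1, replica="bob")
a.merge(b); b.merge(a)                             # exchange states in any order
assert a.pos == b.pos == (0.0, 2.0, 0.0)           # both converge to the same value
```

Real 3D collaboration would need richer CRDTs (for topology, hierarchies, and attributes) or differential synchronization; the point here is only the merge-anywhere, converge-everywhere property.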
a.9 – AI-assisted Computer Graphics: AI-Assisted Editing of Procedural Programs

Keywords: Computer Graphics, Artificial Intelligence, Neural Networks, Program Synthesis, Automatic Differentiation

Research objectives:
Assets used to describe 3D scenes are either measured, hand-painted, or synthesized by programs. The latter category, sometimes called procedural models, is the most scalable in the production of large amounts of content. But procedural models are hard to author. The goal of this project is to explore new algorithms for authoring procedural models using recent results in artificial intelligence. In particular, we will explore the idea of writing procedural programs using LLMs, building on recent results in LLMs for code generation. The main novelty of this work lies in linking final appearance with program synthesis using multimodal models for the synthesis.

Supervisor: Prof. Fabio Pellacini
a.10 – AI-assisted Computer Graphics: Procedural Proxies for Parameter Estimation and Program Synthesis

Keywords: Computer Graphics, Artificial Intelligence, Neural Networks, Automatic Differentiation

Research objectives:
The textures and patterns used to define the appearance of 3D objects are either measured, hand-painted, or synthesized by programs. The latter category, sometimes called procedural models, is the most scalable in the production of large amounts of content. But procedural models are hard to control. The goal of this project is to explore new algorithms for controlling procedural appearance models using recent results in artificial intelligence. In particular, we will explore the idea of building neural network proxies that, for each model, produce a simplified program that is easier to control. The proxy will be used to determine the program parameters via automatic differentiation, and possibly to change the program via program synthesis. We expect the work to focus on the AI aspects of the synthesis, rather than its graphics counterparts.

Supervisor: Prof. Fabio Pellacini
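The short sketch below illustrates the parameter-estimation-by-automatic-differentiation idea mentioned above, under simplifying assumptions: a tiny differentiable “procedural texture” (sinusoidal stripes) stands in for the neural proxy, and PyTorch autograd recovers its parameters so that the output matches a target. All names and values are hypothetical.

```python
# Illustrative parameter estimation by automatic differentiation: fit the
# amplitude and phase of a toy differentiable procedural texture to a target.

import math
import torch

def stripes(amp, phase, freq=5.0, size=64):
    # A minimal differentiable "procedural model": sinusoidal stripes.
    x = torch.linspace(0, 1, size)
    return amp * torch.sin(2 * math.pi * freq * x + phase).repeat(size, 1)

# Target appearance produced with parameters unknown to the optimizer.
target = stripes(torch.tensor(0.8), torch.tensor(0.7)).detach()

# Learnable parameter estimates, optimized to match the target appearance.
amp = torch.tensor(0.2, requires_grad=True)
phase = torch.tensor(0.0, requires_grad=True)
opt = torch.optim.Adam([amp, phase], lr=0.05)

for _ in range(400):
    opt.zero_grad()
    loss = torch.mean((stripes(amp, phase) - target) ** 2)  # appearance loss
    loss.backward()
    opt.step()

print(float(amp), float(phase))  # close to 0.8 and 0.7 (up to an equivalent sign/phase shift)
```

Real procedural models have far less benign loss landscapes, which is precisely the motivation for the simplified neural proxies proposed in this topic.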
a.11 – Multi-level organization of complex systems

Keywords: Information theory, evolutionary computation, nonlinear dynamics, criticality, adaptation

Research objectives:
The detection of emerging intermediate structures in complex systems is not always a trivial task, whereas their characterization can lead to a meaningful description of the overall properties of the system, and in this way to its understanding. A large part of these structures is characterized by groups of variables (genes, chemical species, individuals, agents, robots…) that appear to be well coordinated among themselves and have a relatively weaker interaction with the remainder of the system. Notable examples are functional neuronal regions in the brain, autocatalytic systems in chemistry, or communities in socio-technological systems. We are therefore interested in identifying and studying such structures, proposing algorithms for their understanding and analyzing data from the world of biology, of socio-technological systems and of artificial systems.

Supervisor: Prof. Marco Villani
a.12 – Evolving artificial systems

Keywords: Information theory, evolutionary computation, nonlinear dynamics, criticality, adaptation

Research objectives:
Finding general properties of evolving systems has proven extremely difficult. A particularly interesting proposal is that evolution (either natural or artificial) drives complex systems towards “dynamically critical” states, which may have relevant advantages with respect to systems whose dynamics is ordered or disordered. According to this hypothesis, these critical systems can provide an optimal balance between stability and responsiveness. In this thesis we aim at determining under which conditions this turns out to be the case, by using abstract models. The systems described by these models will interact in an abstract “environment”, and the conditions under which critical systems have an edge will be analyzed. To achieve this ambitious goal, we will exploit the synergy among dynamical systems methods, information theory and evolutionary computation.

Supervisor: Prof. Marco Villani
a.13 – Innovation ecosystems, industrial districts, and global value chains: a network approach

Keywords: social network analysis, industrial districts, global value chain, sustainability, innovation

Research objectives:
Globalization has increased the speed of competition, continuously generating new opportunities and threats, where flexibility and innovation play a fundamental role. Industrial districts and global value chains are particularly affected by international dynamics and processes, and innovation is key for companies – located in industrial districts and embedded in global value chains – aiming to be successful in global markets. As part of their strategy, these companies rely on business networks to become more innovative and improve their performance. Social Network Analysis (SNA) is a powerful tool to assess the importance of (local and global) business networks and their impact on companies’ performance. The proposed research will focus on the analysis of:
• inter-organizational networks;
• intra-organizational networks;
• dynamic business networks;
• novel approaches for mapping business networks.

Supervisor: Prof. Stefano Ghinoi

Possible connections with other international universities:
• Prof. Bodo Steiner, University of Helsinki (FI)
• Prof. Riccardo De Vita, Manchester Metropolitan University (UK)
• Prof. Guido Conaldi, University of Greenwich (UK)
a.14 – Latency Sensitive and Safety Critical GPU-accelerated real-time computing

Keywords: GP-GPU, Massively parallel computing, Real-Time, Compute Architecture, Programming Models

Research Objectives:
Nowadays, cyber-physical systems are characterized by data-hungry algorithms in a wide variety of applications. This poses notable challenges for reaching the desired performance; hence, the hardware deployed in domains such as automotive, robotics, telecommunications and industrial automation is implemented as heterogeneous systems in which multi-core CPU hosts work in concert with massively parallel accelerators. In this context, a widely known accelerator is the Graphics Processing Unit (GPU), a hardware device designed to maximize compute throughput for general-purpose computations (GP-GPU). It is not trivial, however, to exploit the full potential of the GPU processing power due to the notable architectural differences between GPUs and more traditional multi-core CPUs. Significant effort is therefore required, for instance, to exploit the recently released architectural features of modern GPUs, such as specialized cores for tensor processing and traversal of bounding volume hierarchies. Moreover, GPUs are designed to maximize throughput, hence inherently sacrificing latencies. This research aims at understanding how programming models, APIs and compilers could be enhanced in order to facilitate the work of the system engineer when implementing GPU-accelerated applications, while also accounting for the stringent latency and safety requirements imposed by modern applications in the autonomous systems domain.

Proposed research activity:
● State of the art on GP-GPU computing: from programming models to applications.
● Design and implementation of mechanisms that act at the level of APIs/programming models to enable real-time/safety-critical GPU computing.
● Enhancing current compilers/source-to-source translators to simplify programmer access to the GPU’s specialized cores (e.g. for tensor operations).
● Participation in relevant international schools and conferences.

Supervisor: Nicola Capodieci
Co-supervisor: Andrea Marongiu
a.15 – Enhancing Performance and Efficiency of Deep Learning Models for Human-machine Interaction Applications

Keywords: neural machine translation, image captioning, language detection, performance, deep learning

Research Objectives:
The human-machine interaction domain comprises important applications such as neural machine translation, image captioning, and language detection. However, these models often suffer from issues such as poor performance, high computational complexity, and limited scalability. The proposed research project aims to investigate and develop novel techniques to improve the performance of these models. The project also aims to evaluate the effectiveness of these techniques on various benchmark datasets and compare them with existing state-of-the-art techniques.

Proposed Research Activity:
● Literature review and state-of-the-art analysis of neural machine translation, image captioning, and language detection models
● Development of novel techniques to improve the performance of these models, such as attention mechanisms, transfer learning, and model compression
● Evaluation of the proposed techniques on various benchmark datasets, such as WMT, COCO, and LDC
● Comparison of the proposed techniques with existing state-of-the-art techniques
● Participation in relevant international conferences and workshops

Supervisor: Roberto Cavicchioli
Co-supervisor: Alessandro Capotondi
a.16 – Digital Intelligent Assistants for Industry 5.0

Keywords: voice assistant, human-in-the-loop, digital factory, data analytics

Research objectives:
Voice assistants, also referred to as conversational agents or Digital Intelligent Assistants (DIA), allow users to interact intuitively using natural language. In the industrial sector, the adoption of conversational agents has the potential to drive the digital transformation of organizations, improve both customer and user experience, and make internal processes more efficient. This PhD project proposal aims to address the main research challenges that emerge in the design and development of DIAs in industrial contexts where shopfloor operators need to interact with the available physical assets and data sources to solve data analytics requests. Topics of interest include, for instance, the integration of LLMs, approaches for DIA evaluation in realistic contexts, and their continuous improvement.

Proposed research activity:
● Investigate platforms and technology stacks for DIA development
● Address DIA benchmarking and evaluation issues in Industry 5.0
● Explore tool-augmented LLM solutions
● Explore continual learning solutions in the context of DIA models

Supporting research projects (and Department):
The PhD student will be hosted at the Department of Physics, Informatics and Mathematics, where she/he will be a member of the ISGroup (www.isgroup.unimore.it) led by Prof. Federica Mandreoli. The group has worked on several projects on digital factories and is currently involved in the Horizon Europe project WASABI (https://wasabiproject.eu/).

Possible connections with research groups, companies, universities:
On the topics of the proposal, the ISGroup has connections with BIBA – Bremer Institut für Produktion und Logistik (Germany), ICCS (Greece), and Sapienza University of Rome (Italy). Moreover, connections with the companies involved in the WASABI project are currently active.

Supervisor: Prof. Federica Mandreoli
a.17 – Data-centric machine learning

Keywords: Model fairness, model robustness, data drift, real-world data

Research objectives:
Artificial Intelligence (AI) has traditionally relied on two key components: data and algorithms. However, the dominant model-centric paradigm has primarily focused on refining algorithms, often treating data as static and secondary. This has led to increasingly complex and opaque models, which tend to be unreliable and unfair when applied to real-world scenarios. A new paradigm – data-centric AI – has recently emerged, placing data at the center of the AI development process and extending its role beyond the pre-processing phase. This PhD project aims to investigate the main research challenges within this paradigm, including topics such as data drift, learning from real-world data, model interpretability, robustness, and fairness.

Proposed research activity:
● Investigate key principles and challenges in data-centric machine learning
● Propose novel approaches to address these challenges
● Develop prototypes of the proposed solutions
● Evaluate the prototypes using international benchmarks
● Compare the results with state-of-the-art methods

Supporting research projects (and Department):
The PhD student will be hosted at the Department of Physics, Informatics and Mathematics, where she/he will be a member of the ISGroup (www.isgroup.unimore.it) led by Prof. Federica Mandreoli.

Possible connections with research groups, companies, universities:
On the topics of the proposal, the ISGroup has connections with Prof. Paolo Missier (University of Birmingham, UK) and Prof. Paolo Ciaccia (University of Bologna).

Supervisor: Prof. Federica Mandreoli
a.18 – Environmental information and communication: fake news, bias and distortion, and data analysis. Possible impacts on policy and public opinion.

Keywords: Environmental information, Information and communication theory, social media and conversation analysis, Cognitive and perceptual biases and climate change, Qualitative and quantitative methods in data analysis

Research objectives:
The purpose of this research area is to analyze the processes of dissemination and production of fake news and cognitive biases related to information on the issues of the environment, climate crisis and climate change. The intention is to work on textual corpora and conversations regarding environmental communication and information issues, both online (including social media) and offline, by: (a) acquiring skills regarding the relationship between quantitative and qualitative data analysis; and (b) using tools such as NVivo and ATLAS.ti and qualitative methodologies related to thematic and narrative analysis and different forms of “netnography” of data.

Supervisor: Prof. Federico Montanari, DCE
a.19 – AI, social discourses, metaphors and agency

Keywords: AI, responsibility, social discourse, sociosemiotics, agency, images

Research objectives:
This research project aims to explore and critically analyze the social and sociosemiotic implications of artificial intelligence technologies, with a particular focus on two main lines of inquiry. First, we intend to examine the metaphors, rhetoric, and public discourse that develop around AI. This aspect concerns the analysis of the languages, narratives, and interpretative frameworks through which AI is represented, discussed, and made socially understandable in the media, popular culture, political and institutional discourse, in images, as well as in scientific and technological contexts. The goal is to understand how these discursive constructions contribute to the production of meaning, influence collective perceptions, and shape social expectations and fears regarding artificial intelligence. Second, the project aims to investigate how it is possible to conceptualize, analyze, and – from a design perspective – imagine and develop forms of non-human agency and social agents. This involves theoretical and critical work on the very idea of agency, questioning established categories and proposing alternatives that take into account the specificity of artificial intelligence. This area of research questions how AI can be understood not only as a technical tool, but as a social subject (or quasi-subject) endowed with a form of presence, influence, and participation in communicative, cultural, and relational dynamics. Overall, the project aims to contribute to an in-depth and interdisciplinary reflection on the growing role of artificial intelligence in the contemporary social fabric, with the goal of proposing interpretative models and critical perspectives that help to understand – and perhaps rethink – the relationships between humans and non-humans in the age of cognitive automation.

Supervisor: Prof. Federico Montanari, DCE, Unimore

Possible connections with research groups, companies, universities: University of California, San Diego; Univ. Liège; Univ. Paris Cité.
a.20 – Sustainable Edge Computing Architectures for Green IoT Applications

Keywords: green computing, edge computing, sustainable IoT, energy-aware systems, load balancing, network simulators, microservices, performance evaluation

Research Objectives:
In the last few years, edge computing has emerged as a novel approach to support modern IoT applications based on microservices. These applications typically involve sensors located in multiple geographic locations producing large amounts of data. While edge computing addresses the limitations of cloud-only models by bringing computation closer to data sources, it also introduces new challenges in terms of energy consumption, carbon footprint, and resource efficiency. The main goal of this research is to design and develop sustainable, energy-efficient distributed computing systems – spanning both edge and cloud infrastructures – that support modern IoT applications while minimizing environmental impact. Central to this goal is the development of an adaptive decision-making model for workload distribution that incorporates carbon footprint as a core weighting factor, alongside traditional metrics. The model will estimate the optimal placement of microservice-based workloads across heterogeneous computing nodes (edge and cloud), based on real-time conditions and environmental metrics. The study will leverage realistic smart mobility and IoT application traces, evaluate the trade-offs between energy efficiency and performance, and validate the effectiveness of the proposed model through simulation and experimentation.

Proposed Research Activities:
● State of the art on energy-aware and carbon-aware computing models in edge and cloud environments, with a focus on IoT applications
● Collection and analysis of real-world IoT and smart mobility traces
● Study and modeling of carbon emissions associated with various edge and cloud deployment strategies
● Design of a carbon-aware workload distribution model
● Development of simulation and/or emulation tools to test the proposed model in realistic edge-cloud IoT scenarios
● Test and validation of the proposed solutions
● Participation in relevant international schools and conferences

Supervisor: Prof. Claudia Canali
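To fix ideas, the following minimal Python sketch shows the kind of carbon-aware placement decision described above: each candidate node is scored by a weighted combination of expected latency, grid carbon intensity and utilization, and the microservice is placed on the best-scoring node. The weights, metrics and node data are illustrative assumptions, not the model to be developed in the project.

```python
# Toy carbon-aware placement: score candidate nodes and pick the lowest score.

def placement_score(node, w_latency=0.4, w_carbon=0.4, w_util=0.2):
    # Normalize each metric to [0, 1] against illustrative maxima.
    latency = min(node["latency_ms"] / 200.0, 1.0)        # network + processing delay
    carbon = min(node["carbon_gco2_kwh"] / 500.0, 1.0)    # grid carbon intensity
    util = node["cpu_util"]                               # already in [0, 1]
    return w_latency * latency + w_carbon * carbon + w_util * util

nodes = [
    {"name": "edge-1",  "latency_ms": 12, "carbon_gco2_kwh": 420, "cpu_util": 0.70},
    {"name": "edge-2",  "latency_ms": 18, "carbon_gco2_kwh": 120, "cpu_util": 0.55},
    {"name": "cloud-1", "latency_ms": 90, "carbon_gco2_kwh": 60,  "cpu_util": 0.30},
]

best = min(nodes, key=placement_score)   # lower score = better placement
print(best["name"])                      # "edge-2" with these example numbers
```

An adaptive version of such a policy would refresh the metrics in real time (e.g., from monitoring and carbon-intensity feeds) and could be evaluated inside the simulation and emulation tools planned in this topic.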
a.21 – Virtualized collaborative platforms for STEM and computer science education

Keywords: collaborative platforms, data science, virtualization, performance evaluation

Research objectives:
The European job market is suffering from an increasing mismatch due to a severe shortage of workers with STEM and computer science skills. Several factors contribute to this issue, such as the need to train and support educators to include computer science in teaching activities and the lack of tools allowing trainers to collaborate. The main goal of this research is to design and develop a collaborative platform that supports educators in sharing projects, methodologies and experiences related to computer and data science education, including evaluation methodologies focused on continuous monitoring of students’ self-efficacy. The aim of the platform is twofold: a) provide educators with the possibility to share projects and innovative teaching approaches; b) allow educators to find resources and communities helpful to improve their experience. The underlying technology will allow remote execution of Notebooks combined with storage support and containerization environments, to offer advantages such as storage and performance scalability, collaboration, privacy, and security thanks to self-hosted solutions.

Proposed research activity:
● Analysis of the state of the art on educational collaborative platforms and tools
● Design and development of a collaborative platform based on Python Notebooks and an online storage tool (e.g. NextCloud)
● Test and validation of the developed platform, in active collaboration with school educators
● Participation in relevant international schools and conferences

Supervisor: Prof. Claudia Canali
a.22 – Efficient CNN and LLM deployment on edge devices

Keywords: Efficient AI; TinyML; Embedded systems; Edge devices; HW/SW co-design

Research Objectives:
The rapid advancements in AI and machine learning have led to the development of state-of-the-art models, such as Convolutional Neural Networks (CNNs) and Large Language Models (LLMs). However, deploying these models on resource-constrained edge devices remains a significant challenge due to their computational and memory requirements. This research aims to explore efficient deployment strategies and on-device training mechanisms for AI models on emerging edge devices (e.g., FPGAs and ASIC accelerators such as Axelera and Hailo). The project will focus on optimizing inference and enabling continual learning and domain adaptation by exploiting hardware accelerators for edge computing. The outcomes will contribute to the development of robust, adaptive, and efficient AI systems for real-world edge applications.

Supporting research projects (and Department): dAIEDGE (https://daiedge.eu/)

Possible connections with research groups, companies, universities:
SUPSI Lugano (Switzerland)
ETH Zurich (Switzerland)
KU Leuven (Belgium)
Northeastern University (USA)

Supervisor: Prof. Andrea Marongiu
Co-supervisor: Dott. Alessandro Capotondi, UNIMORE
Co-supervisor: Prof. Francesco Restuccia, Northeastern University (USA)
a.23 – Compiler-aided parallel programming model for next generation high performance predictable heterogeneous platforms

Keywords: Compilers; Parallel Programming Models; Runtime; Heterogeneous Systems

Research Objectives:
The primary focus of this project is to address the programming challenges associated with emerging high-performance heterogeneous systems. The project aims to develop compiler and runtime support for heterogeneous and parallel programming models, explicitly targeting the cyber-physical systems domain (robotics, automation, manufacturing). The objective is to enhance the adoption of these systems by improving performance and timing predictability, while maintaining a simple programming interface.

Proposed Research Activity:
● In-depth study of the challenges involved
● Design and development of compiler and runtime system extensions specifically tailored for Commercial-off-the-Shelf (COTS) platforms and open-source hardware architectures like RISC-V
● Validation of the proposed solutions on real-life problems from the targeted application domains
● Participation in relevant international conferences and workshops

Prior work on the topic:
● HEPREM: https://ieeexplore.ieee.org/document/9035630
● HERO: https://dl.acm.org/doi/10.1145/3295816.3295821
● PULP: https://pulp-platform.org/

Possible connections with research groups, companies, universities:
● ETH Zurich, Switzerland
● Barcelona Supercomputing Center, Spain
● RI.SE., Sweden

Supervisor: Prof. Andrea Marongiu
Co-supervisor: Dott. Alessandro Capotondi
a.24 – Hardware-software Co-Design for Resilient Adaptive AI/ML in FPGA Hardware

Keywords: Efficient AI; FPGA; Embedded systems; HW/SW co-design

Research objectives:
The overall objective is to design and evaluate new hardware-software co-design strategies that will enable real-time reconfiguration of artificial intelligence (AI) algorithms implemented in field-programmable gate arrays (FPGAs), to ensure resilience against intentional and unintentional perturbations and to support fast and effective AI reconfigurability in case of changes in operational objectives and/or system constraints. The ideal outcome of the project would be reconfigurable AI software/hardware architectures that withstand, by design, extremely dynamic scenarios. The research will be prototyped on testbeds composed of System-on-Chip (SoC) platforms connected to wireless transceivers and sensors for multi-modal data acquisition. Applications of interest include spectrum sensing and computer vision-based classification and control.

Possible connections with research groups, companies, universities: Northeastern University (USA)

Supervisor: Prof. Andrea Marongiu
Co-supervisor: Dott. Alessandro Capotondi, UNIMORE
Co-supervisor: Prof. Francesco Restuccia, Northeastern University (USA)
a.25 – AI-based extraction of deep language data from reduced textual data

Goal: Developing a computational tool capable of automatically extracting the linguistic evidence necessary to set a predefined list of abstract syntactic parameters, using a small, carefully selected corpus of texts.

Premises: The project builds on the parametric model developed by the Parametric Comparison Method (PCM, www.parametriccomparison.unimore.it) and particularly on the parameter-setting algorithm proposed by Crisma et al. (2020). This framework is grounded in the widespread assumption that grammar acquisition is based on processing a limited amount of empirical evidence – known as Primary Linguistic Data (PLD) – within a relatively short time span. In Crisma et al.’s (2020) model, each syntactic parameter corresponds to a list of empirical manifestations: structural configurations that are observable when the parameter is active in a given grammar.

Research goals and methodology: This project aims to create a system capable of identifying, from a reduced textual dataset, the structural evidence needed to set values for a list of syntactic parameters. To achieve this, the following steps are required:
● Reformulating parameter manifestations so that they are machine-processable and robust against ambiguity or misclassification;
● Designing a tool that can perform formal syntactic analysis – i.e., detect abstract structural features that go beyond surface-level lexical patterns;
● Training and adapting the system to handle multiple languages, including typologically diverse and underrepresented ones, where existing large language models (LLMs) may offer limited performance;
● Designing dedicated extraction queries to retrieve the relevant syntactic structures from textual data;
● Validating the extracted results against the available descriptive and theoretical literature on the languages and structures involved.
The central challenge is that such syntactic information is not trivially retrievable from linear sequences of words. It often requires parsing mechanisms that approximate or replicate formal syntactic analysis, particularly when lexical cues are insufficient or misleading.

Who can apply: (a) Students with a background in computer science, especially those with experience in artificial intelligence, natural language processing, and the development or application of large language models (LLMs); (b) Students with a background in theoretical linguistics, particularly those familiar with generative syntax, parameter theory, and formal approaches to grammatical structure. Interdisciplinary profiles that combine computational and linguistic expertise are especially encouraged.

Possible connections with research groups, companies, universities:
Project “Parameter theory on historical corpora: Measuring the power of parameter setting theory on historical corpora” (MUR PRIN 2022 20224XEE9P – PARTHICO).
Center for Language History and Diversity, York
Department of Linguistics, York
Scuola Universitaria Superiore IUSS, Pavia

Supervisor: Prof. Cristina Guardiano
a.26 – Modeling structural interdependencies and their role in constraining language diversity

This project aims to develop a computational model that captures the multilayered deductive structure of natural language grammars and accounts for the complexity of their empirical outputs. The focus is on the structural interdependencies among abstract grammatical features – syntactic parameters – and their role in shaping crosslinguistic diversity.

Premises: Formal approaches to language assume that structural variation is governed by a finite set of abstract syntactic parameters (see www.parametriccomparison.unimore.it). These parameters are not independent: their mutual interactions and logical dependencies give rise to the observed variation in surface syntax. According to this view, once the space of parameters and their interrelations is formally defined, the full range of crosslinguistic diversity becomes derivable and ultimately predictable (cf. Bortolussi et al. 2011).

Goals and activities: The project investigates interdependencies among syntactic parameters and their implications for language acquisition, variation, and change. It aims to leverage machine learning techniques and data-driven methods to:
● Identify structural dependencies that are not easily accessible through manual inspection;
● Formalize implicational rules among parameters and compare them with linguists’ expert judgments;
● Build computational representations of the logical architecture underlying grammatical variation.
Specific objectives include: (a) Discovering parameters that are redundant because they are logically deducible from others; (b) Detecting partial or probabilistic dependencies among parameters; (c) Mapping the distribution of parameter values and their interrelations within and across language families. These investigations will contribute to the formal understanding of the internal structure of human grammars and help refine models of grammatical acquisition and historical change.

Who can apply: (a) Students with a background in computer science, especially in network modeling, statistical learning, or AI; (b) Students with a background in theoretical linguistics, with a focus on parameter theory, grammatical architecture, or formal syntax. Interdisciplinary profiles combining formal and computational skills are especially encouraged.

Possible connections with research groups, companies, universities:
Project “Parameter theory on historical corpora: Measuring the power of parameter setting theory on historical corpora” (MUR PRIN 2022 20224XEE9P – PARTHICO).
Center for Language History and Diversity, York
Department of Linguistics, York
Scuola Universitaria Superiore IUSS, Pavia

Supervisor: Prof. Cristina Guardiano
a.27 – Quantitative modeling of cultural diversity through language

This project focuses on the analysis and quantitative modeling of deep linguistic data as a window into human diversity. By leveraging computational tools and algorithmic approaches, the aim is to investigate how syntactic parameters – abstract features of grammatical structure – can reveal historical signals and patterns of variation across human languages.

Goals: The project pursues the following objectives: (a) Design and refine algorithms to extract historical information from syntactic parameters, as structured by the Parametric Comparison Method (www.parametriccomparison.unimore.it); (b) Investigate the internal structure of syntactic parametric variation and its diachronic evolution using quantitative and computer-assisted techniques; (c) Develop data-driven computational models that capture the structure and variation of human grammars from a cross-linguistic perspective.

Research activities: Research tasks will include: (a) Applying tools and methodologies from the quantitative sciences to the study of human linguistic diversity and its historical development; (b) Exploring and implementing algorithmic strategies to explain language variation and syntactic change; (c) Investigating the use of machine learning and data-driven techniques to model the structure of human language through the analysis of deep language data – abstract syntactic information not reducible to surface-level lexical features.

Who can apply: (a) Students with a background in computer science (e.g., machine learning, algorithm design, deep learning, artificial intelligence) interested in novel applications of computational techniques to formal and historical linguistics; (b) Students with a background in linguistics (e.g., computational linguistics, formal syntax, historical linguistics, psycholinguistics, neurolinguistics, language acquisition) who wish to apply quantitative and computational methods to the analysis of language structure and change. Interdisciplinary profiles combining computational and linguistic skills are especially encouraged.

Possible connections with research groups, companies, universities:
Project “Parameter theory on historical corpora: Measuring the power of parameter setting theory on historical corpora” (MUR PRIN 2022 20224XEE9P – PARTHICO).
Center for Language History and Diversity, York
Department of Linguistics, York
Scuola Universitaria Superiore IUSS, Pavia

Supervisor: Prof. Cristina Guardiano
a.28 – Phylogenetic analyses of language families through syntactic parameters

This project explores the phylogenetic structure of selected language families or subfamilies through the application of the Parametric Comparison Method (PCM, www.parametriccomparison.unimore.it). The goal is to reconstruct historical relationships between languages by modeling their deep syntactic structures and tracing patterns of diachronic change.

Goal: The primary objective is to implement the PCM in the analysis of a specific language family, investigating both its internal diversification and its historical connections to other families. The study will involve the analysis of contemporary and/or historical linguistic varieties to extract and compare syntactic parameter settings.

Research tools: The research will be based on the following resources and procedures (which are already partially available, as developed by the PCM):
● A set of binary syntactic parameters describing variation in nominal structures across languages;
● A system of implicational formulas encoding interdependencies among parameters;
● A list of surface manifestations associated with each parameter;
● A formal parameter-setting procedure, which translates observable surface patterns into strings of binary values representing the deep syntactic structure of each language;
● A suite of computational tools to extract historical signals from parameter settings and automatically generate phylogenetic trees.
For currently spoken languages, data may be gathered directly from native speakers. For historical varieties, analysis will be based on annotated or curated textual corpora representative of earlier stages of the language. The project will test and refine the PCM tools to accommodate both types of data sources and trace patterns of syntactic change over time and across geographic space.

Who can apply: This project is designed for students with a background in historical linguistics and/or theoretical syntax, and with specific expertise in one or more language families or subgroups. Familiarity with language documentation, corpus analysis, or typological databases is an asset.

Possible connections with research groups, companies, universities:
Project “Parameter theory on historical corpora: Measuring the power of parameter setting theory on historical corpora” (MUR PRIN 2022 20224XEE9P – PARTHICO).
Center for Language History and Diversity, York
Department of Linguistics, York
Scuola Universitaria Superiore IUSS, Pavia

Supervisor: Prof. Cristina Guardiano
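Purely as an illustration of the kind of computation such tools perform, the toy Python sketch below derives pairwise distances between languages from strings of parameter values ('+', '-', or '0' for undefined), restricted to jointly defined parameters; a tree-building algorithm would then operate on such a distance matrix. The data and the plain normalized-Hamming distance are assumptions for illustration, not the PCM’s actual metric or datasets.

```python
# Toy parameter-based distances between languages (illustrative only).

def parameter_distance(lang_a, lang_b):
    # Compare only parameters defined ('+'/'-') in both languages.
    defined = [(a, b) for a, b in zip(lang_a, lang_b) if a != "0" and b != "0"]
    if not defined:
        return 1.0
    return sum(a != b for a, b in defined) / len(defined)

# Hypothetical parameter strings for three languages.
languages = {
    "L1": "++-+-0+",
    "L2": "++-+--+",
    "L3": "-+0--+-",
}

for x in languages:
    for y in languages:
        if x < y:
            print(x, y, round(parameter_distance(languages[x], languages[y]), 2))
# L1-L2 come out closest, so a distance-based tree would group them first.
```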
a.29 – Automated Cyber Operations

Keywords: cyber security, graph theory, planning, automation, artificial intelligence

Research objectives:
The volume and complexity of modern attacks and defenses (as testified by the size of the CVE, CWE, CAPEC and MITRE ATT&CK catalogues) are making manual exploitation and hardening of systems increasingly unfeasible. In the next few years we foresee the adoption of increasingly automated offensive and defensive tools and decision makers. This research thesis is motivated by the need for frameworks, algorithms and tools that help the human operator carry out offensive operations (vulnerability assessments, penetration testing) and defensive operations (system hardening, source code auditing) in a semi-automated fashion. The main goal is to explore the potential for automation and artificial intelligence in the activities involved in classical security assessments (representation of security-related knowledge, decision making, task planning and execution, creating digital twins, monitoring of progress, reporting) in order to improve efficiency and efficacy.

Supervisor: Mauro Andreolini
a.30 – Scoring Systems for Cyber Security

Keywords: cyber security, graph theory, algorithms, cyber ranges, monitoring

Research objectives:
To increase the resilience of their infrastructures, both military and civilian organizations have started to train security personnel on cyber ranges: pre-arranged virtual environments through which it is possible to effectively simulate realistic security scenarios on a system architecture closely resembling the original one. Training goals are many and diverse in nature: to discover vulnerabilities in existing systems, to harden existing systems, to evaluate the security of a soon-to-be-deployed component, to teach secure programming practices, to perform incident response on a compromised system. Unfortunately, current strategies for evaluating student performance in an exercise share a severe limitation: they focus exclusively on goal achievement (yes or no), and not on the specific path followed by the student. Therefore, giving a more precise assessment and, more generally, understanding the reasons behind success or failure is impossible. This research is motivated by the need for techniques, algorithms and tools to model user activities in an exercise, compare the path carried out by the student with an “optimal” path devised by an instructor, and suggest avenues for improvement. The goal is to propose novel cyber scores that can be used to capture the abilities of a student (speed, precision, ability to discover new vulnerabilities), highlight potential weaknesses, and compare different students in the same scenario.

Supervisor: Mauro Andreolini
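As an illustration of a path-aware score of the kind discussed above, the toy Python sketch below compares the student’s observed action sequence with the instructor’s optimal one via their longest common subsequence, combining coverage of the optimal path with a penalty for superfluous actions. The action names and weights are hypothetical examples, not a proposed scoring model.

```python
# Toy path-aware scoring: compare observed vs. optimal action sequences.

def lcs_length(a, b):
    # Classic dynamic-programming longest common subsequence.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if x == y else max(dp[i-1][j], dp[i][j-1])
    return dp[-1][-1]

def path_score(optimal, observed, w_coverage=0.7, w_precision=0.3):
    common = lcs_length(optimal, observed)
    coverage = common / len(optimal)     # how much of the optimal path was followed
    precision = common / len(observed)   # how many observed actions were actually useful
    return w_coverage * coverage + w_precision * precision

optimal = ["scan", "enumerate", "exploit", "escalate", "report"]
observed = ["scan", "bruteforce", "enumerate", "exploit", "report"]
print(round(path_score(optimal, observed), 2))   # 0.8 for this example
```

A real cyber score would additionally weigh timing, detected-but-unexploited vulnerabilities, and alternative valid paths; this sketch only shows why modeling the path gives more information than a binary goal check.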
a.31 – Attacking and defending cryptographic protocol implementations

Keywords: Cyber security, applied cryptography, secure development, side channel attacks

Research objectives:
Implementing cryptographic schemes and protocols is a hard task that requires interdisciplinary knowledge of theoretical and applied cryptography, secure code development, and real-world attacks on code and system architectures. Moreover, many dedicated attack and defense techniques specifically related to cryptographic implementations have been designed throughout years of research, and novel techniques are still emerging, in particular related to implementations of post-quantum cryptography and to defenses against advanced attack surfaces encompassing gray- and white-box security. This research thesis involves the study of such techniques, their applicability to existing and emerging protocols and systems, and the design of novel tools to detect insecure implementations.

Proposed research activity:
• State of the art in cryptographic engineering attacks and defenses
• Analyzing emerging security threats due to implementations of post-quantum cryptography
• Design and implementation of known and novel strategies for attacking and defending cryptographic implementations

Advisor: Luca Ferretti
Co-advisor: Mauro Andreolini
a.32 | Efficient confidential and verifiable data management strategies |
Keywords: Cyber security, Applied cryptography, Databases, Data structures Research objectives: Emerging security solutions try to minimize the attack surface of systems and to mitigate information leakage even in the presence of partially compromised systems. Among them, promising advanced cryptographic strategies are being studied to enable efficient query execution on encrypted databases and to verify the correct execution of queries on data. These security solutions help mitigate data breaches perpetrated by external adversaries or, even worse, by legitimate insiders, and enable strong auditing strategies for outsourced data managed by external parties. The research activity focuses on studying the state of the art in database security and encryption, analyzing practical techniques for achieving practical performance and acceptable security, and designing engineering strategies to embed these techniques within real-world database systems. Proposed research activity: • State of the art on database encryption and verification • Analyzing trade-offs in terms of security and performance • Designing novel strategies for improving the security and performance of encrypted data • Evaluating the security of existing and novel techniques • Studying solutions for proper integration with existing database management systems • Implementation of proofs of concept for performance evaluation Supervisor: Luca Ferretti | |
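One well-known building block in this space is the use of deterministic keyed hashes ("blind indexes") to support equality queries over encrypted records. The sketch below, which assumes the third-party cryptography package and uses invented keys and data, is only a simplified illustration of the trade-off between query functionality and information leakage.

    import hmac, hashlib
    from cryptography.fernet import Fernet  # pip install cryptography

    enc_key = Fernet.generate_key()
    idx_key = b"separate-key-for-blind-indexes"   # illustrative key material
    f = Fernet(enc_key)

    def blind_index(value: str) -> str:
        # Deterministic keyed hash: supports equality lookups, reveals only equality.
        return hmac.new(idx_key, value.encode(), hashlib.sha256).hexdigest()

    # "Server-side" store: ciphertexts plus blind indexes, no plaintext.
    rows = []
    for name in ["alice", "bob"]:
        rows.append({"name_idx": blind_index(name), "payload": f.encrypt(name.encode())})

    # Equality query WHERE name = 'alice', executed over encrypted rows only.
    wanted = blind_index("alice")
    matches = [f.decrypt(r["payload"]).decode() for r in rows if r["name_idx"] == wanted]
    print(matches)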
a.33 | Privacy, Security and Resiliency of Authentication and Authorization |
Keywords: Cyber security, Applied cryptography, Authentication, Authorization, Distributed systems, Anonymity, Tracking Research objectives: The emergence of novel computing systems is showing the limitations of existing security solutions for authentication and authorization procedures, due to the limited capabilities of constrained devices and networks, the limited usability and scalability within systems consisting of a huge number of devices, and attack surfaces that include physical access to devices. Moreover, novel security paradigms require designing augmented security guarantees, including increased privacy of users' identities and reduced trust in identity providers. This research involves studying state-of-the-art authentication and authorization protocols, applied cryptography, and network security, and acquiring expertise in analyzing and designing threat models for novel secure computer systems. Proposed research activity: • State of the art on Web authentication and authorization, privacy-preserving protocols, and distributed architectures • Analyzing and measuring the trade-offs of privacy-preserving authentication • Analyzing deployment in real-world systems • Designing secure and practical authentication systems • Implementation of proofs of concept for heterogeneous platforms Supervisor: Luca Ferretti | |
a.34 | Multi-user activity recognition |
Keywords: activity recognition, inertial sensors, machine learning Research objectives: Human Activity Recognition (HAR) is a set of techniques that identify the activity a user is performing by analyzing data from a set of sensors that capture the user's actions and movements. This makes it possible to configure a computing system according to the user's current scenario and to provide a more tailored experience. More recently, classic HAR systems have also been extended to detect and classify group activities, in which it is important not only to classify signals obtained from a single person, but also to share data among a set of devices carried by different people; this is the main topic of this thesis work. Proposed research activity: • State of the art on HAR for groups of people • Design and development of a data-gathering platform • Analysis and study of techniques to realize HAR for groups of people • Participation in relevant international schools and conferences Possible connections with research groups, companies, universities: University of Stuttgart (Germany), University of Bamberg (Germany), University of New South Wales (Australia) Supervisor: Prof. Luca Bedogni | |
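A typical single-user HAR pipeline, which the multi-user setting would extend with data sharing across devices, can be sketched as sliding-window feature extraction followed by a classifier. The example below uses synthetic accelerometer-like signals and scikit-learn purely for illustration; real deployments use multi-axis sensors and far richer features.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    def synthetic_windows(n, label):
        # 2-second windows at 50 Hz from one accelerometer axis (synthetic data).
        t = np.linspace(0, 2, 100)
        freq = 1.0 if label == 0 else 2.5            # "walking" vs "running"
        sig = np.sin(2 * np.pi * freq * t) + 0.3 * rng.standard_normal((n, 100))
        return sig, np.full(n, label)

    def features(windows):
        # Simple per-window statistics commonly used in HAR pipelines.
        return np.column_stack([windows.mean(1), windows.std(1),
                                np.abs(np.diff(windows, axis=1)).mean(1)])

    X0, y0 = synthetic_windows(200, 0)
    X1, y1 = synthetic_windows(200, 1)
    X, y = features(np.vstack([X0, X1])), np.concatenate([y0, y1])
    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(Xtr, ytr)
    print("accuracy:", clf.score(Xte, yte))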
a.35 | Digital Twins on Mobile devices |
Keywords: digital twins, mobile development, simulation Research objectives: In recent years, the concept of a Digital Twin, a digital replica of physical systems used for monitoring, simulation, and optimization, has gained significant traction across various industries. This thesis investigates the technological advancements enabling this transition, including the integration of mobile sensing, real-time data processing, and edge computing. It focuses specifically on mobile devices owned by the user, in order to improve the privacy of the data such devices produce, without the need to offload it completely to an edge or cloud server. Several case studies are examined to demonstrate the practical applications of mobile-based Digital Twins across different sectors such as healthcare, manufacturing, and smart cities. In healthcare, for instance, the implementation of Digital Twins on smartphones can enhance personalized health monitoring and predictive analytics. In manufacturing, mobile Digital Twins enable real-time monitoring and maintenance of equipment even in remote locations. For smart cities, they provide dynamic management and optimization of urban infrastructure. Proposed research activity: • State of the art on DTs and mobile devices • Design and development of mobile DTs • Implementation of shadowing • User acceptance Possible connections with research groups, companies, universities: UCI, UNSW, Bonfiglioli consulting Supervisor: Prof. Luca Bedogni | |
a.36 | Efficient, scalable and flexible device to edge offloading |
Keywords: edge computing, machine learning, smart systems, context-aware computing Research objectives: Edge computing refers to computing units placed at the edge of the network, close to the end devices. They typically serve as a lower-latency computing resource that processes information the devices cannot process on their own, due to battery or computational constraints. However, the problem of defining how, when, and what data has to be offloaded from a device to an edge server is yet to be solved, as it encompasses a series of different parameters to take into account, such as the operational characteristics of both the device and the edge server, the network, the time constraints on the results, and so on. In this research the PhD candidate will work towards a generalized model for edge computing offloading from smart devices, which must take into account the heterogeneity of the data, the operational constraints of the device and of the edge server, and the service quality to meet. Proposed research activity: • State of the art on edge computing offloading • Design and implementation of a general-purpose testbed for device-to-edge offloading • Design, implementation and evaluation of device-to-edge offloading protocols and techniques • Participation in relevant international schools and conferences Possible connections with research groups, companies, universities: University of California, Irvine (USA) Supervisor: Prof. Luca Bedogni | |
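A minimal sketch of such an offloading decision is shown below: it compares estimated local execution against transfer plus edge execution under a deadline and an energy criterion. All device, network, and energy parameters are illustrative placeholders that a real system would profile at runtime.

    from dataclasses import dataclass

    @dataclass
    class Task:
        input_bytes: int      # data to transfer if offloaded
        cycles: float         # CPU cycles required

    # Illustrative parameters only; real values must be profiled per device/network.
    DEVICE_HZ, EDGE_HZ = 1.5e9, 12e9
    UPLINK_BPS = 20e6
    TX_J_PER_BYTE, LOCAL_J_PER_CYCLE = 5e-7, 1e-9

    def should_offload(task: Task, deadline_s: float) -> bool:
        local_t = task.cycles / DEVICE_HZ
        edge_t = task.input_bytes * 8 / UPLINK_BPS + task.cycles / EDGE_HZ
        local_e = task.cycles * LOCAL_J_PER_CYCLE
        edge_e = task.input_bytes * TX_J_PER_BYTE
        # Offload only if the edge meets the deadline and either saves energy
        # or the device alone cannot meet the deadline.
        return edge_t <= deadline_s and (edge_e < local_e or local_t > deadline_s)

    print(should_offload(Task(input_bytes=200_000, cycles=3e9), deadline_s=1.0))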
a.37 | Adaptive Orchestration of Workloads in Local, Edge, and Split Computing Environments |
Keywords: Research objectives: This PhD thesis investigates the design, deployment, and coordination of computational tasks across local, edge, and split computing infrastructures. It proposes adaptive strategies for dynamically partitioning and migrating workloads to optimize latency, energy efficiency, and resource utilization. The research introduces a unified framework that enables context-aware decision-making, ensuring seamless application performance across heterogeneous and distributed computing nodes. Supervisor: Prof. Luca Bedogni | |
a.38 | Context-Aware Planning and Localization for Edge-Case Autonomous Driving Pianificazione e Localizzazione Adattiva per Scenari Estremi nella Guida Autonoma |
Keywords: autonomous vehicles, high-level planning, offroad autonomy, racing AI, SLAM, Kalman/particle filters, hybrid localization, decision under uncertainty, resilient autonomy. Research Objectives: This PhD project addresses the challenge of enabling robust, high-level planning for autonomous vehicles operating in non-standard environments, such as offroad terrains or high-speed racing tracks, where traditional assumptions of structured environments no longer hold. The focus is on designing planning systems tightly integrated with localization modules, ensuring consistent performance even when GPS is unreliable, maps are incomplete, or sensor observations are ambiguous. A key research direction will be the fusion of classical SLAM and filtering approaches with adaptive, learning-based planning, to handle uncertainty and dynamics in real-time. The project will investigate: – The coupling of planning algorithms with real-time SLAM systems (e.g., visual-inertial, LiDAR-based) for mutual feedback and correction; – The development of resilient planning strategies capable of reasoning over localization confidence and dynamically adjusting behavior accordingly; – The use of Bayesian filtering (e.g., EKF, particle filters) in tandem with semantic and topological maps to support high-speed, high-risk decision-making. The ultimate goal is to enable safe, fast, and context-aware autonomous behavior in edge-case scenarios where traditional pipelines fail or degrade. Proposed Research Activities: – Comparative analysis of SLAM and filtering strategies under offroad and high-speed racing constraints – Design of planning algorithms aware of localization uncertainty and SLAM confidence metrics – Development of integrated localization-planning modules for dynamic environments – Testing of planning behaviors using uncertainty-aware cost functions and adaptive safety margins – Validation in simulated and semi-structured physical environments (e.g., offroad testbeds, closed racing circuits) Supervisor: Prof. Marko Bertogna Co-supervisor: Dott. Ayoub Raji |
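As a toy illustration of the filtering techniques mentioned above, the following one-dimensional Kalman filter with a random-walk motion model fuses noisy GNSS-like position fixes. The noise levels and trajectory are synthetic; a real pipeline would use multi-dimensional EKFs or particle filters coupled with SLAM and semantic maps.

    import numpy as np

    def kalman_1d(z, q=0.2, r=4.0):
        """1D Kalman filter with a random-walk model: fuse noisy position fixes."""
        x, p = 0.0, 1.0                 # state estimate and its variance
        estimates = []
        for meas in z:
            p += q                      # predict: process noise inflates uncertainty
            k = p / (p + r)             # Kalman gain from predicted vs measurement variance
            x += k * (meas - x)         # update with the innovation
            p *= (1 - k)
            estimates.append(x)
        return np.array(estimates)

    rng = np.random.default_rng(1)
    truth = np.linspace(0, 10, 50)                      # vehicle moving along a line
    noisy_gps = truth + rng.normal(0, 2.0, truth.size)  # degraded GNSS fixes
    est = kalman_1d(noisy_gps)
    print("raw RMSE:", round(np.sqrt(((noisy_gps - truth) ** 2).mean()), 2),
          "filtered RMSE:", round(np.sqrt(((est - truth) ** 2).mean()), 2))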
Themes for the 4 scholarships with specific topic
Scholarships funded by Companies or University Departments
Theme | Funding entity | |
b.1 | Efficient 3D Scene Compression and Rendering for Real-Time Applications | HipeRT S.r.L |
Keywords: GPU Architecture, 3D Mesh, Point Cloud, Simulation, Neural Compression Research Motivation Representing large-scale 3D scenes such as cities, buildings, or rooms with high fidelity poses significant challenges in memory usage and real-time performance, which may be addressed by employing efficient compression and loading techniques to generate memory-friendly representations suitable for GPU-accelerated simulation and rendering. Research Objectives The main research question is whether neural approaches, such as those recently used for texture compression (Neural Texture Compression, NTC), can be extended to the representation of geometric models. The goal is to combine both traditional and neural strategies for compressing 3D meshes, textures, and point clouds from large-scale industrial datasets. This will involve defining and implementing a complete pipeline that performs offline data compression and encoding, followed by real-time decoding, rendering, and simulation of the compressed scene data. Proposed Research Activities • Understand the current state of the art in scene compression, focusing on neurally compressed texture data and more traditional approaches to 3D mesh and point cloud compression; • Investigate how neural compression techniques can be applied to the geometric representation of large scenes; • Study recent developments in GPU software/hardware stacks (e.g. CUDA Cooperative Vectors, Tensor Cores) to support real-time inference on compressed data; • Integrate the proposed algorithms and pipelines into established software ecosystems (e.g. game engines, simulation platforms). Supervisor: Prof. Nicola Capodieci Co-supervisor: Prof. Marko Bertogna | |
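As a baseline for the "traditional" side of such a pipeline, the sketch below shows uniform quantization of a point cloud to 16-bit integers within its bounding box, a simple non-neural compression step. The data and parameters are synthetic; neural approaches in the spirit of NTC would replace or complement this stage.

    import numpy as np

    def quantize_point_cloud(points: np.ndarray, bits: int = 16):
        """Uniformly quantize float32 XYZ points to integers inside their bounding box."""
        lo, hi = points.min(0), points.max(0)
        scale = (2 ** bits - 1) / np.maximum(hi - lo, 1e-9)
        q = np.round((points - lo) * scale).astype(np.uint16)
        return q, lo, scale

    def dequantize(q, lo, scale):
        return q.astype(np.float32) / scale + lo

    pts = np.random.default_rng(0).uniform(-50, 50, (100_000, 3)).astype(np.float32)
    q, lo, scale = quantize_point_cloud(pts)
    rec = dequantize(q, lo, scale)
    ratio = pts.nbytes / q.nbytes
    print(f"compression {ratio:.1f}x, max error {np.abs(rec - pts).max():.4f} m")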
b.2 | Autonomous driving algorithms for urban and racing vehicles Algoritmi di guida autonoma per veicoli urbani e racing | HipeRT S.r.L |
Keywords: state estimation, sensor fusion, localization, SLAM, probabilistic robotics, embedded systems, autonomous vehicles. Research Objectives: This PhD project focuses on the development of high-performance and robust state estimation and localization algorithms for autonomous vehicles operating in diverse scenarios, ranging from urban environments to extreme racing conditions. The key challenge lies in achieving accurate and reliable vehicle state awareness across varying dynamics, sensor noise levels, and environmental complexities. The research will aim to: – Design multi-sensor fusion techniques that integrate data from LiDAR, IMU, GNSS, and cameras to ensure robust state estimation under uncertainty; – Develop scalable localization pipelines, including SLAM and map-based techniques, capable of operating in both structured (urban) and unstructured (racing) settings; – Optimize algorithmic performance for real-time operation on edge and embedded platforms, considering constraints typical of automotive applications; – Enable plug-and-play portability of localization systems across different vehicle types and operating conditions. Motion planning and control will be considered at a high level to evaluate the impact of estimation accuracy on decision-making, especially in high-speed or low-visibility scenarios. Proposed Research Activities: – Survey and comparison of state-of-the-art estimation and localization methods – Development of novel sensor fusion architectures tailored for dynamic and noisy environments – Testing in simulation and on physical platforms including urban autonomous vehicles and racing prototypes – Optimization of computational pipelines for real-time execution on embedded systems Supervisor: Prof. Marko Bertogna Co-supervisor: Dott. Ayoub Raji | ||
b.3 | Smart perception and control for robotic picking in industrial systems Percezione e controllo intelligente per sistemi robotici industriali | FIM Department |
Keywords: robotic manipulation, computer vision, deep learning, industrial automation, adaptive control, edge AI. Research Objectives This PhD project investigates the integration of smart perception and control techniques in robotic systems aimed at object picking tasks in industrial environments. As manufacturing evolves toward flexibility, customization, and autonomy, robotic systems must adapt to unstructured scenarios and variable object types without relying on extensive manual programming. The project will focus on enabling robotic arms to autonomously perceive, reason, and act in dynamic environments through a combination of advanced sensing, AI-based perception, and learning-based control. It will explore key techniques such as: • Real-time object detection and pose estimation using computer vision and deep learning; • Integration of perception and control within robotic software stacks (e.g., ROS2); • Deployment of trained models on edge computing platforms for low-latency inference and decision-making. The ultimate goal is to enhance the agility, autonomy, and robustness of robotic picking systems in industrial settings, making them capable of responding to environmental changes, uncertainties, and production variability. Proposed Research Activities • Study of current methods in robotic vision, grasping, and control • Design and training of deep learning models for object detection and recognition • Implementation and testing of end-to-end pipelines in simulation and on real robotic platforms • Optimization and deployment of AI models on embedded and edge devices • Experimental evaluation in representative industrial use cases Supervisor: Prof. Marko Bertogna Co-supervisor: Dott. Davide Sapienza | ||
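As a minimal example of the perception-to-action link described above, the sketch below deprojects the centroid of a segmented object from a depth image into camera coordinates as a naive grasp target. The camera intrinsics, segmentation mask, and depth map are invented; a real pipeline would rely on learned detectors, pose estimation, and grasp planning within a stack such as ROS2.

    import numpy as np

    # Hypothetical pinhole intrinsics (fx, fy, cx, cy) of an RGB-D camera.
    FX, FY, CX, CY = 600.0, 600.0, 320.0, 240.0

    def grasp_point_from_mask(depth_m: np.ndarray, mask: np.ndarray):
        """Deproject the centroid of a segmented object into camera coordinates."""
        v, u = np.nonzero(mask)                      # pixel rows/cols of the object
        cu, cv = u.mean(), v.mean()
        z = float(np.median(depth_m[v, u]))          # robust depth at the object
        x = (cu - CX) * z / FX
        y = (cv - CY) * z / FY
        return np.array([x, y, z])                   # gripper target, in metres

    depth = np.full((480, 640), 1.2, dtype=np.float32)
    mask = np.zeros((480, 640), dtype=bool)
    mask[200:260, 300:360] = True                    # synthetic segmented object
    depth[mask] = 0.8
    print(grasp_point_from_mask(depth, mask))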
b.4 | Complexity balance for Safe, Robust and Fast Autonomous Driving Algorithms Bilanciare la Complessità Algoritmica per una Guida Autonoma Sicura, Robusta e Veloce | Emilia Romagna Region |
Keywords: autonomous driving, hybrid AI, reinforcement learning, classical planning, complexity-aware control, safety guarantees, adaptive algorithms. Research Objectives: This PhD project explores the design of autonomous driving algorithms that adapt their internal complexity to balance safety, robustness, and computational efficiency. The focus is on developing hybrid decision-making architectures that combine classical rule-based methods with learning-based techniques such as reinforcement learning (RL) to operate reliably in dynamic and uncertain environments. A central objective is to define strategies that dynamically modulate the algorithmic load—selecting between fast conservative actions and more complex, high-performance behaviors—based on contextual awareness, computational constraints, and real-time safety metrics. The project will investigate: – The integration of RL-based policies and classical planning/control to leverage the strengths of both paradigms; – Runtime complexity management systems that monitor performance and adjust algorithmic pathways accordingly; – Safety-aware design through formal verification techniques and online validation modules. The resulting framework will support fast, robust, and safe autonomous behavior, adaptable to a wide range of use cases—from low-speed urban driving to more aggressive maneuvers in controlled environments. Proposed Research Activities: – Study of hybrid AI/classical architectures in autonomous driving – Development of complexity-monitoring and adaptive decision modules – Training and benchmarking of RL policies with built-in safety constraints – Integration of runtime verification and fail-safe fallback mechanisms – Validation through simulation and pilot projects aligned with regional mobility strategies Supervisor: Prof. Marko Bertogna Co-supervisor: Dott. Ayoub Raji |
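A minimal sketch of the runtime modulation idea is shown below: a learned policy is used only when it answers within a latency budget and with sufficient confidence, otherwise a cheap conservative controller takes over. All thresholds, the dummy policy, and its confidence signal are placeholders for illustration, not part of the project's actual design.

    import random, time

    def conservative_controller(obs):
        # Rule-based fallback: slow down and keep the lane; always cheap and verifiable.
        return {"steer": 0.0, "throttle": 0.1}

    def learned_policy(obs):
        # Stand-in for an RL policy; a real one would also report its own confidence.
        time.sleep(0.002)  # simulated inference cost
        return {"steer": 0.05, "throttle": 0.6}, random.uniform(0.0, 1.0)

    def select_action(obs, latency_budget_s=0.005, min_confidence=0.7):
        start = time.perf_counter()
        action, confidence = learned_policy(obs)
        elapsed = time.perf_counter() - start
        # Fall back whenever the learned policy is too slow or not confident enough.
        if elapsed > latency_budget_s or confidence < min_confidence:
            return conservative_controller(obs)
        return action

    print(select_action(obs={"speed": 12.0}))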
Positions funded by an employment contract (Apprendistato di Alta Formazione)
Theme | Funding entity | |
c.1 | Perception Algorithms for Multi-Sensor Systems Including Cameras, LiDAR, and RADAR for Navigation in Unstructured Environments with a Focus on Off-road Perception Algoritmi di percezione per diversi sensori quali camere, LiDAR e RADAR per navigazione in ambienti non strutturati con particolare focus su percezione off-road | HipeRT S.r.L |
Keywords: Autonomous Vehicles, Off-road Navigation, Perception Algorithms, Sensor Fusion, Domain Generalization, End-to-End Architectures, Compound AI, Unstructured Environments, Mining Automation, Forest Robotics Research Motivation: The advancement of autonomous vehicles has primarily focused on structured environments such as urban and highway settings. However, critical applications—including mining operations, forestry logistics, and disaster response—take place in unstructured and highly variable off-road environments. These scenarios are characterized by irregular terrain, vegetation, dust, varying illumination, and the absence of traffic rules or lane markings. In such conditions, existing perception algorithms—often tailored to structured domains—fail to deliver reliable and robust performance without extensive re-training and data collection. Moreover, traditional autonomous navigation stacks based on sequential “sense-plan-act” models struggle to meet the real-time constraints and dynamic complexity of off-road environments. To address these gaps, this research aims to enhance perception systems for off-road autonomy by: • improving generalization capabilities across domains without requiring retraining; • advancing sensor fusion strategies that leverage multi-modal data; • evaluating new parallelized, end-to-end AV architectures such as PARA-Drive to overcome the limitations of classic navigation stacks. Research Objectives: • Develop perception algorithms capable of generalizing across diverse unstructured environments (e.g., forests, quarries, agricultural fields) without retraining; • Investigate novel sensor fusion techniques, including early, mid, and late fusion strategies, combining modalities such as LiDAR, radar, RGB and thermal cameras, and IMUs; • Explore and evaluate parallel end-to-end architectures, inspired by PARA-Drive and compound AI approaches, that integrate perception, planning, and control in real time; • Compare traditional and compound architectures to determine performance trade-offs and identify best-fit configurations for off-road conditions; • Validate proposed systems in real-world scenarios, leveraging simulated and physical testbeds that reflect the complexity of mining and forestry operations. Proposed Research Activities: The research will begin with a review of the current state of the art in off-road perception, sensor fusion, and autonomous vehicle (AV) architectures, with a focus on identifying key limitations in domain generalization and integration. Building on this, the work will focus on designing perception algorithms capable of adapting to varying unstructured environments—such as forests, quarries, and agricultural fields—without requiring retraining, leveraging approaches like domain adaptation, self-supervised learning, and data augmentation. In parallel, the study will explore advanced sensor fusion techniques by developing and comparing early, mid, and late fusion strategies that integrate LiDAR, radar, RGB and thermal cameras to improve robustness in complex scenarios. An important part of the research will involve implementing and analyzing novel parallel and end-to-end AV architectures, such as PARA-Drive, to assess how these frameworks can outperform traditional modular pipelines in terms of latency, efficiency, and reliability.
Finally, all developed algorithms and architectures will be validated through a combination of high-fidelity simulation and real-world experiments in off-road testbeds to ensure applicability and performance in operational conditions. Supervisor: Prof. Marko Bertogna Co-supervisor: Dott. Micaela Verucchi (Hipert S.r.L.) Co-supervisor: Dott. Ignacio Sanudo (Hipert S.r.L.) | ||
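As a toy example of late fusion, the sketch below combines per-sensor detection confidences for the same obstacle candidate using condition-dependent weights (for instance, down-weighting the camera in dust). The scores and weights are invented; early and mid fusion strategies would instead combine raw or intermediate features before a detection is produced.

    def late_fusion(detections, weights):
        """Weighted average of per-sensor confidences for the same candidate obstacle."""
        total_w = sum(weights[s] for s in detections)
        return sum(weights[s] * score for s, score in detections.items()) / total_w

    # Hypothetical scores for one obstacle candidate from three sensing modalities.
    detections = {"camera": 0.35, "lidar": 0.80, "radar": 0.75}

    # In dust or low light the camera is down-weighted relative to lidar/radar.
    weights_clear = {"camera": 1.0, "lidar": 1.0, "radar": 0.8}
    weights_dusty = {"camera": 0.2, "lidar": 1.0, "radar": 1.0}

    print("clear:", round(late_fusion(detections, weights_clear), 2))
    print("dusty:", round(late_fusion(detections, weights_dusty), 2))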
c.2 | Time-Sensitive Networking and Operating Systems | Minerva Systems S.r.L. |
Keywords: communication networks, real-time, operating system, virtualization, mixed criticality, network configuration Research Objectives: The high-bandwidth demand and the increase of mixed-criticality data within the communication networks of modern real-time systems—such as autonomous driving or avionics—have led to the need for communication standards that help ensure low-latency, predictable networking. New standards extending the Ethernet communication protocol are being developed and adopted to address this challenge; for example, the Time-Sensitive Networking (TSN) family of standards is increasingly being utilized and further refined. However, this class of standards still presents several challenges for widespread and effective adoption. Among these is the sheer number of available protocols, which not only complicates the selection of the most appropriate solution for specific application contexts, but can also lead to overlapping functionalities or conflicts between protocols, potentially undermining overall system efficiency. Additionally, the complexity and cost associated with network configuration remain significant barriers to adoption. The aim of this research project is to investigate and tackle the challenges posed by these deterministic network standards, and to propose potential solutions. Proposed research activity: This PhD studentship will focus on the analysis of deterministic networking technologies across diverse application domains, including automotive, avionics, and industrial automation. The research involves mapping existing networking standards to specific application requirements to understand their suitability. A key activity will be the definition of classification criteria for protocol selection based on context, allowing for informed choices in various scenarios. Furthermore, the project will delve into the analysis of reconfiguration needs in adaptive and mixed-criticality scenarios, addressing how systems can dynamically adapt while maintaining critical performance. To validate these concepts, a prototype for dynamic configuration will be implemented in a real-time testbed. The studentship will culminate in the evaluation of the impact of reconfiguration on system determinism and performance, ensuring that adaptability does not compromise critical timing and efficiency. Supporting research projects (and Department): Minerva Systems SRL Possible connections with research groups, companies, universities: Chair of Cyber-Physical Systems at the Technical University of Munich (Germany) Research and Development unit at Sqrrl (Italy) Supervisor: Prof. Andrea Marongiu Co-Supervisor: Dr. Andrea Bastoni (Minerva Systems) Co-Supervisor: Dr. Marco Solieri (Minerva Systems) |
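As a simplified illustration of the configuration problem, the Python sketch below checks whether a set of traffic classes fits the time-aware gate windows of an IEEE 802.1Qbv-style schedule within one cycle. Frame sizes, window lengths, and the link rate are invented, and a real TSN configuration would also have to account for guard bands, propagation delays, and per-hop scheduling.

    def fits_gate_schedule(streams, cycle_us, link_mbps=1000):
        """Check whether each traffic class's frames fit in its gate window per cycle."""
        if sum(cfg["window_us"] for cfg in streams.values()) > cycle_us:
            return False, "gate windows exceed the cycle time"
        tx_time = lambda size_b: size_b * 8 / link_mbps   # microseconds per frame
        for cls, cfg in streams.items():
            needed = sum(tx_time(f) for f in cfg["frames_per_cycle"])
            if needed > cfg["window_us"]:
                return False, f"class {cls} needs {needed:.1f} us > {cfg['window_us']} us"
        return True, "schedulable"

    # Hypothetical 500 us cycle with two time-aware gate windows (values illustrative).
    streams = {
        "control": {"window_us": 100, "frames_per_cycle": [128, 128]},
        "camera":  {"window_us": 300, "frames_per_cycle": [1500] * 20},
    }
    print(fits_gate_schedule(streams, cycle_us=500))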
Previous Cycles
Research Theses of the XL Cycle