I'm a Research Scientist in Computer Vision and Machine Learning focusing on continual learning, machine unlearning, and transfer learning. My work aims to develop algorithms that learn continuously from streams of data while retaining performance on previously learned tasks, addressing the key challenge known as catastrophic forgetting. Previously, I conducted research at the Computer Vision Center (CVC) in Barcelona with Joost van de Weijer's team, working on continual learning for Vision Transformers. My doctoral research at Ca' Foscari University of Venice with Professor Alessandro Torsello focused on rehearsal-based continual learning and on more efficient ways to store and reuse past knowledge.

My current work explores Dynamic Label Injection for handling imbalanced data, parameter isolation methods for incremental learning (MIND), and distance-based machine unlearning (DUCK). I'm particularly interested in the fundamental trade-offs in continual learning systems, such as the relationship between memory capacity and performance, and in developing simple yet effective baselines that can serve as strong foundations for the field.
I have published in venues including CVPR, AAAI, and Pattern Recognition. My research aims to advance how AI systems learn and adapt over time while remaining computationally efficient.
[This was generated automatically, but I agree with almost everything.]