Continual Learning


Project Description

Continual learning aims to learn new tasks without forgetting previously learned ones. This is especially challenging when one cannot access data from previous tasks and when the model has a fixed capacity. In this project, the goal is to develop and improve the capability of machine learning methods not to forget older concepts as time passes.

References

[1] Arslan Chaudhry, Marc'Aurelio Ranzato, Marcus Rohrbach, Mohamed Elhoseiny, Efficient Lifelong Learning with A-GEM, ICLR 2019
[2] Mohamed Elhoseiny, Francesca Babiloni, Rahaf Aljundi, Manohar Paluri, Marcus Rohrbach, Tinne Tuytelaars, Exploring the Challenges towards Lifelong Fact Learning, ACCV 2018
[3] Rahaf Aljundi, Francesca Babiloni, Mohamed Elhoseiny, Marcus Rohrbach, Tinne Tuytelaars, Memory Aware Synapses: Learning what (not) to forget, ECCV 2018, https://arxiv.org/abs/1711.09601
[4] Sayna Ebrahimi, Mohamed Elhoseiny, Trevor Darrell, Marcus Rohrbach, Uncertainty-guided Continual Learning with Bayesian Neural Networks, https://arxiv.org/abs/1906.02425

For more references, you may visit https://nips.cc/Conferences/2018/Schedule?showEvent=10910 and https://icml.cc/Conferences/2019/Schedule?showEvent=3528
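As a concrete illustration of the kind of method this project builds on, A-GEM (reference [1]) keeps a small episodic memory of past-task examples and, whenever the current task's gradient conflicts with the average gradient on that memory, projects it so the update does not increase the loss on stored examples. The sketch below implements only that projection rule; the function name `agem_project` and the toy gradients are illustrative, not from the paper's code.

```python
import numpy as np

def agem_project(g, g_ref):
    """A-GEM gradient projection (Chaudhry et al., ICLR 2019).

    g     : gradient of the loss on the current task (flattened vector).
    g_ref : gradient of the loss on a minibatch drawn from episodic memory.

    If g and g_ref agree (non-negative dot product), g is used unchanged.
    Otherwise, the component of g along g_ref is removed, so the resulting
    update is orthogonal to the memory gradient and does not increase the
    loss on stored past-task examples (to first order).
    """
    dot = g @ g_ref
    if dot >= 0:
        return g  # no interference with past tasks
    return g - (dot / (g_ref @ g_ref)) * g_ref

# Toy example: g conflicts with g_ref (negative dot product),
# so the projected gradient becomes orthogonal to g_ref.
g = np.array([1.0, -2.0])
g_ref = np.array([1.0, 1.0])
g_tilde = agem_project(g, g_ref)
```

In a full training loop, `g` and `g_ref` would be the flattened parameter gradients from backpropagation on the current batch and on a replayed memory batch, respectively, and `g_tilde` would be handed to the optimizer in place of `g`.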
Program - Computer Science
Division - Computer, Electrical and Mathematical Sciences and Engineering
Center Affiliation - Visual Computing Center
Field of Study - Computer Vision and Machine Learning

About the Researcher

Mohamed Elhoseiny

Assistant Professor, Computer Science


Dr. Mohamed Elhoseiny is an Assistant Professor of Computer Science at the Visual Computing Center at KAUST (King Abdullah University of Science and Technology). Dr. Elhoseiny has collaborated with several researchers at Facebook AI Research, including Marcus Rohrbach, Yann LeCun, Devi Parikh, Dhruv Batra, Manohar Paluri, Marc'Aurelio Ranzato, and Camille Couprie. He has also fruitfully teamed up with academic institutions, including KU Leuven (with Rahaf Aljundi and Tinne Tuytelaars), UC Berkeley (with Sayna Ebrahimi and Trevor Darrell), the University of Oxford (with Arslan Chaudhry and Philip Torr), and the Technical University of Munich (with Shadi AlBarqouni and Nassir Navab). His primary research interests are computer vision, the intersection between natural language and vision, and computational creativity. Dr. Elhoseiny received his Ph.D. from Rutgers University, New Brunswick, in October 2016 under Prof. Ahmed Elgammal. His work has been widely recognized. In 2018, he received the best paper award at an ECCV workshop for his work on creative fashion generation, presented by Tamara Berg of UNC Chapel Hill and sponsored by IBM Research and JD AI Research. The work was also featured in New Scientist magazine, and he co-presented it at the Facebook F8 annual conference with Camille Couprie. His earlier work on creative art generation was featured by New Scientist magazine and MIT Technology Review in 2017, and by the HBO series Silicon Valley (season 5, episode 3) in 2018. His creative AI artwork was featured or presented at the Best of AI meeting 2017 at Disney (6,000+ attendees), at Facebook's booth at NeurIPS 2017, and in the official FAIR video in June 2018. His work on lifelong learning was covered by MIT Technology Review in 2018. In November 2018, building on his five years of work on zero-shot learning, Dr. Elhoseiny participated in the United Nations Biodiversity Conference (~10,000 attendees from more than 192 countries and many important organizations) on how AI may benefit biodiversity, with implications for both disease management and climate change. Dr. Elhoseiny received the Doctoral Consortium award at CVPR 2016 and an NSF fellowship for his Write-a-Classifier project in 2014.

Desired Project Deliverables

Develop a working research prototype for a continual learning approach.
1) Students should learn about machine learning, deep learning, and the respective target application chosen for the internship.
2) Students are expected to show the capability to go from an idea to a working prototype, pushing the limits of what the state of the art can do.