08-13, 11:15–11:55 (Asia/Yerevan), 113W PAB
Representation learning is progressing rapidly. Unsupervised techniques have been shown to match, and sometimes surpass, fully supervised ones on benchmarks such as image classification, while improving label efficiency by orders of magnitude. In this sense, representation learning now addresses some of the major challenges in deep learning. It remains essential, however, to understand systematically the nature of the learnt representations and how they relate to the learning objectives.
In this talk, we will present a comprehensive overview of representation learning, from its origins to modern models, place these methods in context, and discuss the pros and cons of current evaluation methods. Through this evolution, we will introduce a new era of deep learning methods that can understand many different kinds of data simultaneously.
You should be familiar with how neural networks and deep learning work, but no expert knowledge of complicated models is required. The material of this talk will be available online and shared with the audience.
- Chen, Ting, et al. "A simple framework for contrastive learning of visual representations." International conference on machine learning. PMLR, 2020.
- Cohen, Taco S., Mario Geiger, and Maurice Weiler. "A general theory of equivariant CNNs on homogeneous spaces." Advances in neural information processing systems 32 (2019).
- Mildenhall, Ben, et al. "NeRF: representing scenes as neural radiance fields for view synthesis." Communications of the ACM 65.1 (2021): 99-106.
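As a concrete taste of the contrastive methods in the reading list above (Chen et al.'s SimCLR), here is a minimal NumPy sketch of the NT-Xent loss at its core. The batch layout (rows 2k and 2k+1 being two augmented views of the same image) and the temperature value are illustrative assumptions, not details fixed by the paper's abstract:

```python
import numpy as np

def nt_xent_loss(z, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) loss,
    in the style of SimCLR (Chen et al., 2020).

    z: array of shape (2N, d); rows 2k and 2k+1 are assumed to be
       embeddings of two augmented views of the same image.
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit-normalize rows
    sim = z @ z.T / temperature                        # scaled cosine similarities
    n = z.shape[0]
    # Mask self-similarity so an example never acts as its own negative.
    sim[np.arange(n), np.arange(n)] = -np.inf
    # Index of each row's positive partner: 0<->1, 2<->3, ...
    pos = np.arange(n) ^ 1
    # Cross-entropy on the softmax over all other examples.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(n), pos].mean()

# Toy usage: 4 embeddings = 2 images x 2 augmented views each.
rng = np.random.default_rng(0)
loss = nt_xent_loss(rng.normal(size=(4, 8)))
```

The loss is low when the two views of each image land close together in embedding space while staying far from all other examples in the batch.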
Previous knowledge expected
Hadi leads a software team as chief engineer in the R&D department of TELIGHT (Czechia/France) and is a lecturer at the Institute for Advanced Studies in Basic Sciences (IASBS), Iran. He is a former researcher at the Institute of Formal and Applied Linguistics (ÚFAL) at Charles University, Prague, and has participated in several international projects in collaboration with experts in CV/NLP/HLT/CL/ML/DL. His research focuses on multimodal learning with neural models that are both linguistically motivated and tailored to language and vision, visual reasoning, and deep learning. His main research interests are machine learning, deep learning, computer vision, multimodal learning, and visual reasoning, and he has experience in a wide variety of international projects on cutting-edge technologies. Currently, his team is developing a new generation of a patented holographic microscope that uses live-cell, label-free imaging to turn invisible live cells into visible ones.