PyData Yerevan 2022

Hadi Abdi Khojasteh

Hadi leads a software team as chief engineer in the R&D department of TELIGHT (Czechia and France) and is a lecturer at the Institute for Advanced Studies in Basic Sciences (IASBS), Iran. He is a former researcher at the Institute of Formal and Applied Linguistics (ÚFAL) at Charles University, Prague, and has participated in several international projects alongside experts in CV/NLP/HLT/CL/ML/DL. His research focuses on multimodal learning with neural models that are linguistically motivated and tailored to language and vision, visual reasoning, and deep learning. His main research interests are machine learning, deep learning, computer vision, multimodal learning, and visual reasoning, and he has experience in a wide variety of international projects on cutting-edge technologies. His team is currently developing a new generation of a patented holographic microscope that uses live-cell, label-free imaging to make invisible live cells visible.


Sessions

08-12, 11:15, 90min
Sequential Attention-Based Neural Machine Translation
Hadi Abdi Khojasteh

Sequential models are widely useful in natural language understanding, for everything from machine translation to speech recognition. In machine translation, the encoder-decoder architecture, and especially the Transformer, is one of the most prominent branches. The attention mechanism has been one of the most influential ideas in deep learning: it lets a model take a long sequence of data (for example, the words of a long sentence to be translated), process it in small parts, and look at all the other positions simultaneously when generating each output. (A minimal sketch of the mechanism follows the session details below.)

Room: 214W PAB
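
The core of this mechanism is easy to sketch. Below is a minimal NumPy implementation of scaled dot-product attention, the building block of the Transformer; the function name, shapes, and toy data are illustrative assumptions, not material from the talk.

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.

        Q: (n_queries, d_k), K: (n_keys, d_k), V: (n_keys, d_v).
        Returns the attended values and the attention weights.
        """
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)               # query-key similarities
        scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
        return weights @ V, weights

    # Toy translation-like setup: 3 encoder states, 2 decoder queries.
    rng = np.random.default_rng(0)
    K = V = rng.normal(size=(3, 4))  # encoder outputs act as keys and values
    Q = rng.normal(size=(2, 4))      # decoder states act as queries
    context, weights = scaled_dot_product_attention(Q, K, V)
    print(weights.round(3))          # each row sums to 1 over the source positions

Each output position is a weighted mix of all source positions, which is why attention can handle long sequences without squeezing them through a fixed-size bottleneck.
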
08-13, 11:15, 40min
Large Scale Representation Learning In-the-wild
Hadi Abdi Khojasteh

Significant progress is being made in representation learning today. Unsupervised techniques have been shown to perform as well as, if not better than, fully supervised ones on benchmarks such as image classification, while also improving label efficiency by multiple orders of magnitude. In this sense, representation learning is now addressing some of the major challenges in deep learning. It is imperative, however, to systematically understand the nature of the learnt representations and how they relate to the learning objectives. (A toy contrastive-learning sketch follows the session details below.)

Room: 113W PAB
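
The abstract does not name a particular method; as one common instance of unsupervised representation learning, here is a minimal NumPy sketch of a simplified, one-directional variant of the InfoNCE/NT-Xent contrastive objective used by SimCLR-like methods. The function name, batch size, dimensions, and temperature are illustrative assumptions.

    import numpy as np

    def info_nce_loss(z1, z2, temperature=0.1):
        """Contrastive (InfoNCE-style) loss over two batches of embeddings.

        z1, z2: (batch, dim) embeddings of two augmented views of the same
        inputs; row i of z1 and row i of z2 form a positive pair, and all
        other rows serve as negatives.
        """
        z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)  # unit vectors, so
        z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)  # dots = cosine sim
        logits = z1 @ z2.T / temperature                # (batch, batch) similarities
        logits -= logits.max(axis=1, keepdims=True)     # numerical stability
        log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))             # positives on the diagonal

    rng = np.random.default_rng(0)
    z1 = rng.normal(size=(8, 16))
    z2 = z1 + 0.1 * rng.normal(size=(8, 16))  # a "view" close to its partner
    print(info_nce_loss(z1, z2))  # small when matching pairs are most similar

Minimising this loss pulls embeddings of the two views of the same input together while pushing apart embeddings of different inputs, without using any labels.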