Luka Chkhetiani is a Deep Learning Research & Technology Lead with extensive experience in research, management, deployment, and optimization of end-to-end deep learning services in cloud and edge ecosystems.
He leads unsupervised and semi-supervised multilingual ASR research, deployment, and optimization at AssemblyAI.
Self-supervised pretraining has been wildly successful lately, covering almost every domain: speech, NLP, and vision. Networks such as Wav2Vec2, HuBERT, and JUST have enabled rapid development of speech-related products. In this talk we'll walk through the end-to-end research and engineering process of building production-grade self-supervised ASR in a multilingual setting. Covered topics include compute, data, scalability, and engineering for pretraining and downstream tuning.
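To give a concrete flavor of the downstream-tuning story, the sketch below shows how a self-supervised pretrained model such as Wav2Vec2 can be reused for ASR inference with a CTC head. This is a minimal illustration, not material from the talk itself: it assumes the Hugging Face `transformers` and `torch` packages, and the checkpoint name `facebook/wav2vec2-base-960h` is just one publicly available example.

```python
# Minimal sketch: greedy-decoded ASR with a pretrained Wav2Vec2 + CTC head.
# Assumes `transformers`, `torch`, and `numpy` are installed; the checkpoint
# name is an illustrative public example, not the model from the talk.
import numpy as np
import torch
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

# 16 kHz mono audio as float samples; one second of silence as a stand-in.
audio = np.zeros(16000, dtype=np.float32)
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: pick the most likely token per frame, then collapse.
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
print(transcription)
```

The same pattern extends to fine-tuning: the pretrained encoder is kept and only the CTC head (plus, optionally, the upper encoder layers) is trained on labeled data for the target language.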