08-12, 13:45–14:25 (Asia/Yerevan), 213W PAB
Since multimodality became popular, many engineers have been trying to build domain-universal search: engines that can find an image by a textual query, an HTML file by a piece of audio, and so on. Here is our (Unum) approach, with a bias towards GPU-accelerated inference for the underlying models and a passion for distributing everything.
During the presentation, we will discuss the following questions:
- How to build a fast and precise Semantic Textual Similarity engine.
- Multilingual sentence encoders.
- What CLIP is and how it lets us find an image with a textual query.
- Building indices: Approximate Nearest Neighbor algorithms and toolkits.
- The future of semantic cross-domain search.
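The core idea behind the CLIP and ANN topics above can be sketched in a few lines: both images and texts are embedded into one shared vector space, and retrieval is a nearest-neighbor lookup by cosine similarity. The sketch below uses random vectors as stand-ins for real CLIP embeddings (the gallery size, dimensionality, and query are all hypothetical), and does an exact search that ANN libraries would approximate at scale.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(v):
    # Project vectors onto the unit sphere so that cosine similarity
    # reduces to a plain dot product.
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Stand-in for CLIP image embeddings of a small gallery (hypothetical data).
image_embeddings = normalize(rng.standard_normal((1000, 512)))

# Stand-in for the CLIP text embedding of a query like "a photo of a cat".
query = normalize(rng.standard_normal(512))

# Exact nearest-neighbor search: one matrix-vector product, then sort.
# ANN toolkits (HNSW-based indices, FAISS, etc.) approximate this step
# to keep latency low as the gallery grows to billions of vectors.
scores = image_embeddings @ query
top_k = np.argsort(-scores)[:5]
print(top_k)
```

The same loop works in the other direction (query with an image embedding, search over text embeddings), which is what makes the search cross-domain.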
No previous knowledge is expected.
Vladimir Orshulevich is an NLP research engineer at Unum. He previously worked as a Research Engineer at SberDevices.