08-12, 13:45–15:15 (Asia/Yerevan), 214W PAB
This hands-on tutorial will teach you how to accelerate every component of a machine learning system and improve your team's productivity at every stage of the ML workflow. You'll learn how to get started with RAPIDS and the NVIDIA Forest Inference Library, and how to go beyond the basics to get the most out of your accelerated infrastructure. We'll do all of this in the context of a real-world application that models financial payments fraud and detects it in real time. We'll show you how:

[1] RAPIDS enables you to find better insights into your data more quickly, through accelerated visualization techniques
[2] RAPIDS machine learning models can outperform rules-based approaches to detecting payments fraud
[3] The NVIDIA Forest Inference Library enables you to accelerate inference of tree models, scoring incoming transactions with high throughput and low latency

Data scientists will experience the high-velocity exploratory workflows enabled by NVIDIA RAPIDS and learn how best to take advantage of GPUs when porting CPU-based pandas and scikit-learn code to run on RAPIDS (a minimal sketch of such a port follows this paragraph). Application developers and IT ops professionals will learn more about data science workflows, see how real-world ML systems work, and learn about the many benefits of GPU acceleration for these systems and the teams who build them.

The tutorial can be delivered either remotely or onsite. Attendees will need a laptop and a stable internet connection. Attendees will be provided with a URL to access the lab environment, so that they can run the tutorial with no prior set-up required. Familiarity with standard Python code is desirable.
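To give a flavour of what such a port looks like, here is a minimal sketch; the file name and column names are hypothetical rather than taken from the tutorial materials. The pandas-style code runs largely unchanged once the import is switched to cuDF.

import cudf  # RAPIDS GPU DataFrame library; the CPU version would use pandas.read_csv

# Load transactions onto the GPU (hypothetical file and column names)
df = cudf.read_csv("transactions.csv")

# Familiar pandas-style groupby/merge feature engineering, executed on the GPU
per_card = (
    df.groupby("card_id")["amount"]
      .mean()
      .reset_index()
      .rename(columns={"amount": "card_mean_amount"})
)
df = df.merge(per_card, on="card_id", how="left")
print(df.head())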
This tutorial illustrates the benefits of the open-source RAPIDS library for work at all stages of the machine learning workflow, whether exploring and visualizing data, or running inference at scale. The tutorial will be split into the following sections:
[10 minutes] Introduction. We will begin the session with slides, level-setting on the machine learning workflow and introducing the use case of detecting payments fraud.
[5 minutes] Accessing the lab environment. Attendees will be guided to their lab environment, which is pre-populated with Jupyter notebooks to work through. We give an overview of what they'll be working through for the duration of the lab.
[20 minutes] EDA and visualization notebooks. Working through these notebooks gives an understanding of the pseudo-generated payments fraud data which we will use to train models in later notebooks. Attendees will start to think about features which may be useful for identifying fraudulent transactions.
[10 minutes] Rules engine notebook. This notebook shows how you can use a rules engine to encode human beliefs about fraudulent transactions into a model. Rules engines are currently used by many financial institutions to identify fraud. At the end of this notebook, attendees will have a rules-based model which can be used as a baseline when we go on to develop ML models (a minimal sketch follows this outline).
[20 minutes] GPU pipelines notebook. In this notebook, attendees train a tree-based model using XGBoost, running on the GPU. The notebook illustrates how to transform raw data into feature vectors and then train the model, all while keeping the data on the GPU (see the GPU training sketch after this outline).
[10 minutes] Explaining predictions notebook. This notebook shows how predictions made by the XGBoost tree-ensemble model can be explained using SHAP values (see the SHAP sketch after this outline).
[10 minutes] Accelerating inference notebook. This notebook considers the importance of making predictions on new data quickly when trying to detect fraud. It shows how the RAPIDS Forest Inference Library accelerates predictions to a speed which is acceptable in real-world fraud-detection systems (see the FIL sketch after this outline).
[5 minutes] Wrap-up. We discuss what we've seen today and point attendees to resources where they can find out more about the techniques, tools and infrastructure they've used in the tutorial.
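As a taste of the rules engine notebook, a rules-based baseline can be as simple as combining hand-written thresholds. The thresholds and column names below are hypothetical and are not the ones used in the tutorial.

import cudf

tx = cudf.read_csv("transactions.csv")  # hypothetical dataset

# Encode simple human beliefs about fraud as boolean rules
is_large = tx["amount"] > 5000                     # unusually large payment
is_foreign = tx["country"] != tx["card_country"]   # cross-border transaction
is_night = tx["hour"] < 5                          # unusual time of day

# Flag a transaction if any rule fires; this score is the baseline model
tx["rule_flag"] = (is_large | is_foreign | is_night).astype("int8")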
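The GPU pipelines notebook trains an XGBoost model on GPU-resident data; a minimal sketch of that pattern is below. The feature and label column names are assumptions for illustration.

import cudf
import xgboost as xgb

tx = cudf.read_csv("transactions.csv")                # hypothetical dataset
features = ["amount", "card_mean_amount", "hour"]     # illustrative feature columns

# DMatrix accepts cuDF objects directly, so the data stays on the GPU
dtrain = xgb.DMatrix(tx[features], label=tx["is_fraud"])

params = {
    "objective": "binary:logistic",
    "eval_metric": "aucpr",
    "max_depth": 6,
    "tree_method": "hist",
    "device": "cuda",   # on older XGBoost releases, use tree_method="gpu_hist" instead
}
booster = xgb.train(params, dtrain, num_boost_round=200)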
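SHAP values for an XGBoost booster can be obtained through the booster's own predict call, without extra libraries; this sketch reuses the booster and dtrain names from the GPU training sketch.

# Each row gets one contribution per feature plus a bias term; positive values
# push the prediction towards "fraud", negative values push it away
shap_values = booster.predict(dtrain, pred_contribs=True)
print(shap_values.shape)   # (n_transactions, n_features + 1)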
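Finally, a sketch of loading the trained model into the Forest Inference Library for accelerated scoring; the exact load parameters vary across cuML releases, so treat this as illustrative rather than definitive.

from cuml import ForestInference

# Export the trained XGBoost model, then load it into FIL for fast batch scoring
booster.save_model("fraud_xgb.json")                 # hypothetical file name
fil_model = ForestInference.load(
    "fraud_xgb.json",
    model_type="xgboost_json",
    output_class=True,
)

# Score a batch of incoming transactions held in a cuDF DataFrame
fraud_probability = fil_model.predict_proba(tx[features])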
The focus of the interactive notebooks will be on quality-of-life improvements for experimental data science, but we'll also show attendees how accelerated computing can increase the velocity of entire ML teams, supercharge the throughput of machine learning systems, and improve the business outcomes of intelligent applications in production.
The notebooks are built so that a beginner can execute them without needing to change any code. However, we've called out places in all of the notebooks where more confident attendees may wish to amend the code to try out different parameter values, or add extra functionality to the workflow.
Throughout the tutorial we will be on hand to answer any questions that may arise, and we will frequently check in with the group to ensure everyone is getting the most out of the tutorial.
The tutorial is capped at 30 attendees. All participants are expected to bring fully charged laptops.
No previous knowledge expected
Dmitry Mironov is an AI Solutions Architect at NVIDIA. He helps customers use GPUs efficiently and accelerate pipelines in CV, NLP, Conversational AI, and Data Science. Before NVIDIA, Dmitry was CTO and co-founder of a startup, where he integrated Computer Vision into gold mining, transportation, energy, and other industries.