
Intel oneAPI Workshop – Heterogeneous SW Programming and AI Analytics for CPUs and discrete GPUs

November 2


Alfasoft and Intel welcome you to a virtual technical workshop to see the growing momentum of oneAPI and learn how the community is using oneAPI on various platforms such as ARM, NVIDIA, Intel, and more for HPC and AI workloads. This workshop offers a full day of hands-on tutorials, tech talks, and presentations spanning all things high-performance computing and AI: hardware, oneAPI software tools, best-practice techniques, and more to advance and deploy next-generation innovations that scale across platforms. The training is free of charge and will be held on November 2, 2022.

The morning session will focus on heterogeneous software programming for CPUs and GPUs, including oneAPI's part in this hardware evolution and why the new LLVM-based compilers play a critical role in the transition. The afternoon will focus on hardware acceleration for AI using the Intel oneAPI AI Analytics Toolkit, with an emphasis on optimizing deep learning, including demos of how to code once and deploy anywhere. See below for more details about each session.

 

Morning Session | Heterogeneous Software Programming for CPUs and dGPUs with oneAPI

Purpose and Intentions

The oneAPI initiative is a cross-industry, open, standards-based unified programming model that delivers a common developer experience across multiple, multi-vendor CPU and accelerator architectures.

It is designed for faster application performance, more productivity, and greater innovation. The oneAPI industry initiative encourages collaboration on the oneAPI specification and compatible oneAPI implementations across the ecosystem.

Intel released the Intel® oneAPI Toolkits, implementing its own programming languages, models, libraries, and tools that are built to the above-mentioned oneAPI specification and target Intel® CPUs and accelerators (GPUs and FPGAs).

The Intel® oneAPI Base & HPC Toolkit solution – the successor to the Intel® Parallel Studio XE tool suites – provides:

  1. High-performance LLVM-based compilers for C/C++ and Fortran
  2. OpenMP for offloading purposes
  3. Performance-optimized libraries such as the Intel® oneAPI Math Kernel Library and the Intel® MPI Library
  4. Analysis tools (VTune Profiler and Advisor) enhanced to support heterogeneous development

The presenters will show you how to get on a standards-based path for heterogeneous programming through the oneAPI initiative and how to use the tools for shared and distributed computing on heterogeneous hardware platforms, including Intel CPUs, Intel hardware accelerators, and Intel discrete graphics solutions.

After the event the attendees should understand:

  • The transition from the Intel Parallel Studio XE development tools to the Intel oneAPI Toolkits
  • How to use the Intel oneAPI Toolkits to develop heterogeneous applications running on CPUs and hardware accelerators
  • How to make use of the offered development environments, such as the Intel Dev Cloud
  • How to migrate non-oneAPI heterogeneous code (CUDA) to oneAPI programming models (SYCL)
  • How to start developing and optimizing performance with the oneAPI development environment

Pre-Requisite

It is assumed that participants have basic knowledge of (HPC) application/software programming and are able to program in C/C++ or Fortran.

 

Afternoon Session | Accelerated AI Machine and Deep Learning with Intel

Purpose and Intentions

The Intel AI Analytics Toolkit gives data scientists, AI developers, and researchers familiar Python* tools and frameworks to accelerate end-to-end data science and analytics pipelines on Intel® architecture. The components are built using oneAPI libraries for low-level compute optimizations. This toolkit maximizes performance from pre-processing through machine learning and provides interoperability for efficient model development.

Using this toolkit, you can:

  • Deliver high-performance, deep learning training on Intel® XPUs and integrate fast inference into your AI development workflow with Intel®-optimized, deep learning frameworks for TensorFlow* and PyTorch*, pretrained models, and low-precision tools.
  • Achieve drop-in acceleration for data pre-processing and machine-learning workflows with compute-intensive Python* packages (Modin*, Scikit-learn*, and XGBoost) optimized for Intel architectures.
  • Gain direct access to analytics and AI optimizations from Intel to ensure that your software works together seamlessly.
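The drop-in acceleration described above rests on a simple idea: partition the data and process the partitions in parallel. The following stdlib-only Python toy illustrates that row-partitioning concept; it is a conceptual sketch with made-up names, not Modin's actual API (Modin distributes real Pandas partitions over backends such as Ray or Dask):

```python
from concurrent.futures import ThreadPoolExecutor

def partitioned_apply(rows, func, n_partitions=4):
    """Apply func to every row by splitting rows into partitions
    that are processed in parallel -- the core idea behind
    partitioned, drop-in dataframe acceleration."""
    if not rows:
        return []
    size = -(-len(rows) // n_partitions)  # ceiling division
    chunks = [rows[i:i + size] for i in range(0, len(rows), size)]
    with ThreadPoolExecutor(max_workers=n_partitions) as pool:
        # map() preserves chunk order, so results come back in row order
        mapped = pool.map(lambda chunk: [func(r) for r in chunk], chunks)
    return [r for chunk in mapped for r in chunk]

# Example: a per-row transformation over a toy "column" of numbers
print(partitioned_apply(list(range(8)), lambda x: x * x))
# [0, 1, 4, 9, 16, 25, 36, 49]
```

Because the partitions are independent, the same user-facing call can fan out across cores or nodes, which is why such libraries can act as drop-in replacements.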

The presenters will show you how to use the Intel AI Analytics Toolkit and how to integrate its various optimized components into both classical machine learning and deep learning workflows. We will also cover adjacent topics, including:

  • Accelerating your model development by enhancing your experimentation with SigOpt
  • Federated learning using the flexible and extensible OpenFL framework
  • AI-driven multiphysics HPC applications on Intel architecture, addressing the convergence of AI and HPC
  • Speeding up deep learning inference by writing your code once and deploying it on any supported Intel hardware with the OpenVINO toolkit

After the event the attendees should understand:

  • How to accelerate end-to-end AI and data-science pipelines and achieve drop-in acceleration with optimized Python tools built using oneAPI libraries
  • How to achieve high-performance deep learning training and inference with Intel-optimized TensorFlow and PyTorch, plus low-precision optimization with support for fp16, int8, and bfloat16
  • How to seamlessly scale Pandas workflows across multi-node dataframes with the Intel® Distribution of Modin
  • How to increase machine-learning model throughput with algorithms in Scikit-learn and XGBoost optimized for Intel architectures
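To make the low-precision formats above concrete: bfloat16 keeps float32's sign bit and full 8-bit exponent but only the top 7 mantissa bits, trading precision for compactness while preserving range. A minimal, framework-free Python sketch of the conversion for finite values (the function names are illustrative, not part of any Intel toolkit):

```python
import struct

def f32_to_bf16_bits(x: float) -> int:
    """Bfloat16 bit pattern of a finite float, using
    round-to-nearest-even, the rounding mode hardware typically uses."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))  # float32 bits
    rounding_bias = 0x7FFF + ((bits >> 16) & 1)          # round half to even
    return ((bits + rounding_bias) >> 16) & 0xFFFF

def bf16_bits_to_f32(b: int) -> float:
    """Widen a bfloat16 bit pattern back to float32 (exact: the low
    16 mantissa bits are simply zero)."""
    (x,) = struct.unpack("<f", struct.pack("<I", (b & 0xFFFF) << 16))
    return x

print(bf16_bits_to_f32(f32_to_bf16_bits(3.14159265)))  # 3.140625
```

With only 7 mantissa bits the relative error is at most about 2^-8, which training often tolerates; that is why bfloat16 can roughly halve memory traffic with little accuracy loss.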

Pre-Requisite

It is assumed that participants have basic knowledge of machine learning and deep learning and are able to do basic programming in Python.

Make sure to sign up for this free full-day virtual workshop on November 2. The workshop starts at 09:00 CET.

 

AGENDA – November 2, 2022

MORNING THEME: Heterogeneous Software Programming for CPUs and dGPUs

09:00 – Welcome and Introduction to the Intel workshop

09:10 – oneAPI – Introduction to a new heterogenous Development Environment

– Hardware Evolution: from CPUs to heterogeneous HW (GPUs, FPGAs) programming

– Concept and purpose for the oneAPI Standardization initiative

– Intel’s oneAPI Solutions – Toolkits with Compilers, libs, analysis and migration tools

– Transition from Intel Parallel Studio XE to Intel oneAPI toolkits

– Dev Cloud, a publicly available development sandbox

09:30 –  Direct programming with oneAPI and LLVM-based Compilers (Part 1) – with Demos

– Intro to the heterogeneous programming model with SYCL 2020

– SYCL features and examples

• “Hello World” Example

• Device Selection

• Execution Model

10:40 –  Direct programming with oneAPI and LLVM-based Compilers (Part 2) – with Demos

• Compilation and Execution Flow

• Memory Model: Buffers and Unified Shared Memory (USM)

• Performance optimizations with SYCL features

12:00 – Intel OpenMP for Offloading – with Demos

– Parallelizing heterogeneous applications with OpenMP 5.1 (e.g. for Fortran)

– Mixing of OpenMP and SYCL

 

AFTERNOON THEME: Accelerated AI Machine and Deep Learning with Intel

13:45 –  Hardware acceleration for AI and Intel® oneAPI AI Analytics Toolkit

– Hardware features that are powering AI on Intel CPUs and dGPUs

14:15 –  How to accelerate Classical Machine Learning on Intel Architecture – with Demos

– Intel® Distribution for Python and its optimizations

– DataFrame acceleration for ML with Modin (a Pandas replacement)

– Intel® Extension for Scikit-learn and XGBoost

15:25 –  Optimize Deep Learning on Intel – Same code, just faster! – with Demos

– Deep Learning with the highly-optimized Intel® oneDNN library

– Intel® oneDNN in action in DL frameworks

– Intel-optimized TensorFlow

– Intel-optimized PyTorch and the Intel® Extension for PyTorch (IPEX)

16:05 –  Deep Learning quantization benefits for inference speed-up

– Showcase Intel tools to quantize your model (such as the Intel® Neural Compressor)
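To sketch what quantization does, independently of the Intel® Neural Compressor's actual API: real values are mapped to int8 via a scale and zero point. The simple min/max calibration and function names below are illustrative assumptions; real tools add calibration datasets, per-channel scales, and accuracy-aware tuning.

```python
def quantize_int8(values):
    """Affine int8 quantization: real ~= scale * (q - zero_point).
    The range is widened to include 0 so that 0.0 maps exactly."""
    lo = min(min(values), 0.0)
    hi = max(max(values), 0.0)
    scale = (hi - lo) / 255.0 or 1.0          # 255 steps between -128 and 127
    zero_point = round(-lo / scale) - 128     # the int8 value representing 0.0
    q = [max(-128, min(127, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize_int8(q, scale, zero_point):
    """Map int8 values back to approximate reals."""
    return [scale * (qi - zero_point) for qi in q]

weights = [-1.0, 0.0, 2.0]
q, scale, zp = quantize_int8(weights)
print(q)                        # [-128, -43, 127]
print(dequantize_int8(q, scale, zp))
```

Each weight now occupies one byte instead of four, and the round trip stays within half a quantization step, which is the trade-off behind int8 inference speed-ups.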

16:25 –  Easily speed up Deep Learning inference on multiple Intel hardware platforms – Write once, deploy anywhere!

– OpenVINO Toolkit for high performance, deep learning inference

– Optimizes models trained with TensorFlow* or PyTorch for high-performance inference

– Enables deep learning inference from the edge to cloud

17:10 –  Wrap up


 

Information

When: November 2, 2022

Cost: The event is free of charge

Where: Virtual, more info to come later

Learn more about the Intel® oneAPI Base & HPC Toolkit and the Intel® oneAPI AI Analytics Toolkit.

 

Details

Date:
November 2
Venue

Online

Organizer

Alfasoft / Intel

Event Conditions

Course registration is binding. Upon cancellation more than 8 working days before the course start date, we invoice 50% of the course fee. Upon cancellation less than 8 working days before the course start date, we invoice the full course fee.
