Events

Big Data Processing on HPC Training

by Apurv Deepak Kulkarni (ScaDS.AI), Mr Pramod Baddam (TU Dresden), Ms Wenyu Zhang (TU Dresden)

Europe/Berlin
online (TU Dresden)

Description

Apache Spark and Apache Flink are two widely used Big Data analytics frameworks. Their APIs allow an application to be developed and tested on a local workstation and later, without changing its source code, to distribute the work across many computers once the workstation's resources are no longer sufficient.
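
For illustration only (this snippet is not taken from the course material), a minimal PySpark word count might look like the sketch below; the application name and the input/output paths are placeholders, and on a cluster only the master setting would change, typically supplied at submit time rather than in the code.

```python
# Minimal PySpark sketch (assumed example): the application logic is identical
# for local and distributed execution; only the master URL differs.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .master("local[*]")            # local workstation: use all available cores
    .appName("wordcount-example")  # hypothetical application name
    .getOrCreate()
)

# On a cluster the master would instead be set at submit time
# (e.g. spark-submit --master <cluster-url>), leaving the code below unchanged.
lines = spark.read.text("input.txt")                      # hypothetical input path
words = lines.rdd.flatMap(lambda row: row.value.split())
counts = words.map(lambda w: (w, 1)).reduceByKey(lambda a, b: a + b)
counts.saveAsTextFile("word_counts")                      # hypothetical output path
spark.stop()
```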

The course focuses on the step from a local workstation to an HPC environment and presents how a typical Big Data analysis workflow can be organized there. Participants will be introduced to running a data pipeline and processing data with Apache Flink and Apache Spark, and to managing the corresponding configurations in an HPC environment.
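
As a rough sketch of the kind of configuration the course addresses (the property values below are illustrative assumptions, not the course's recommended settings), Spark resources such as executor memory, cores, and default parallelism can be set when the session is created:

```python
# Hedged sketch: configuring Spark resources from the application side.
# The concrete values (memory, cores, parallelism) are placeholders and would
# normally be matched to the resources requested from the HPC batch system.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("hpc-data-pipeline")                   # hypothetical name
    .config("spark.executor.memory", "4g")          # memory per executor
    .config("spark.executor.cores", "4")            # cores per executor
    .config("spark.default.parallelism", "64")      # default number of partitions
    .getOrCreate()
)

# Hypothetical pipeline step: read a CSV file and write an aggregated result.
df = spark.read.csv("data.csv", header=True)                    # hypothetical input
df.groupBy("category").count().write.parquet("counts.parquet")  # hypothetical column and output
spark.stop()
```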

Agenda

  1. Introduction
  2. Distributed Computing with Big Data
  3. HPC Considerations
    1. Data Space
    2. Software
    3. Hardware
  4. Big Data Framework Configuration
    1. Master/Worker
    2. Parallelism
    3. Memory
  5. Hands On Session
  6. Conclusion/Supplementary

Pre-knowledge

  • Basic knowledge of Big Data frameworks (e.g., Apache Flink, Apache Spark) is recommended (but not required)
  • Basic HPC knowledge is recommended (but not required)

Course language

English

Organized by

Trainings ScaDS.AI