Course Outline
Introduction to Scala for Government
- A concise overview of the Scala programming language
- Laboratory exercises: Familiarization with Scala
Fundamentals of Spark for Government
- Historical context and development of Spark
- Integration of Spark with Hadoop
- Key concepts and architecture of Spark
- Components of the Spark ecosystem (core, SQL, MLlib, streaming)
- Laboratory exercises: Installation and execution of Spark
Initial Exploration of Spark for Government
- Executing Spark in local mode
- Navigating the Spark web user interface
- Utilizing the Spark shell
- Analyzing datasets: Part 1
- Examining Resilient Distributed Datasets (RDDs)
- Laboratory exercises: Exploration using the Spark shell
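As a reference point for the shell exercises, Spark ships two interactive shells; the local-mode invocations below are standard, though the thread count in `local[2]` is an arbitrary choice:

```shell
# Scala REPL with a preconfigured SparkContext (sc) and SparkSession (spark)
spark-shell --master local[2]
# Python counterpart
pyspark --master local[2]
```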
RDDs for Government
- Fundamentals of RDDs
- Partitioning strategies
- Operations and transformations on RDDs
- Types of RDDs
- Key-Value pair RDDs
- MapReduce operations with RDDs
- Caching and persistence techniques
- Laboratory exercises: Creating, inspecting, and caching RDDs
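The MapReduce-with-RDDs topic above can be previewed without a cluster. This pure-Python sketch (sample lines invented) mirrors what the RDD operations `flatMap`, `map`, and `reduceByKey` each contribute to the classic word count:

```python
from collections import defaultdict

# Input "partition" of text lines (illustrative data, not from the course)
lines = ["spark makes big data simple", "big data needs spark"]

# flatMap: split every line into words
words = [w for line in lines for w in line.split()]
# map: emit a (word, 1) pair per word
pairs = [(w, 1) for w in words]
# reduceByKey: sum the values for each key
counts = defaultdict(int)
for word, n in pairs:
    counts[word] += n
```

In Spark the same three steps run in parallel across partitions, with `reduceByKey` triggering a shuffle to bring equal keys together.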
Spark API Programming for Government
- Introduction to the Spark API and RDD API
- Submitting the first program to Spark
- Debugging and logging practices
- Configuration properties and settings
- Laboratory exercises: Programming with the Spark API, submitting jobs
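Submitting a program and setting configuration properties, as covered above, typically happens through `spark-submit`. The flags below are standard; the class name, jar path, and input path are hypothetical placeholders:

```shell
# Submit a packaged application to a local master with 4 threads;
# gov.example.WordCount and the jar/input paths are illustrative only.
spark-submit \
  --class gov.example.WordCount \
  --master local[4] \
  --conf spark.executor.memory=2g \
  --conf spark.eventLog.enabled=true \
  target/wordcount_2.12-1.0.jar hdfs:///data/input.txt
```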
Spark SQL for Government
- SQL support within Spark
- DataFrames in Spark
- Defining tables and importing datasets
- Querying DataFrames using SQL
- Storage formats: JSON, Parquet
- Laboratory exercises: Creating and querying DataFrames, evaluating data formats
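The Spark SQL workflow above — register a dataset as a table, then query it with SQL — can be previewed without a cluster. The sketch below uses Python's stdlib `sqlite3` purely as a stand-in for a SparkSession (in Spark you would call `df.createOrReplaceTempView("people")` and `spark.sql(...)` instead); the table and rows are invented:

```python
import sqlite3

# Stand-in for Spark SQL's register-then-query pattern
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (name TEXT, age INTEGER)")
conn.executemany("INSERT INTO people VALUES (?, ?)",
                 [("Ana", 34), ("Ben", 41), ("Carla", 29)])
# Declarative query over the registered table
rows = conn.execute(
    "SELECT name FROM people WHERE age > 30 ORDER BY name").fetchall()
```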
MLlib for Government
- Introduction to MLlib
- Overview of MLlib algorithms
- Laboratory exercises: Writing MLlib applications
GraphX for Government
- Overview of the GraphX library
- GraphX APIs and functionalities
- Laboratory exercises: Processing graph data using Spark
Spark Streaming for Government
- Overview of streaming capabilities in Spark
- Evaluating different streaming platforms
- Performing streaming operations
- Sliding window operations in Spark
- Laboratory exercises: Writing Spark streaming applications
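The sliding-window topic above has simple semantics that can be shown in plain Python before touching the streaming API: aggregate over the last `window` batches, advancing by `slide` batches each time. The function name and sample counts are invented for illustration:

```python
def sliding_sums(values, window, slide):
    """Sum each window of `window` items, advancing by `slide` items."""
    return [sum(values[i:i + window])
            for i in range(0, len(values) - window + 1, slide)]

# Per-batch event counts (illustrative data)
batch_counts = [3, 1, 4, 1, 5, 9, 2]
totals = sliding_sums(batch_counts, window=3, slide=2)  # [8, 10, 16]
```

Spark Streaming's `window(windowLength, slideInterval)` applies the same idea to DStream batches arriving over time.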
Spark and Hadoop for Government
- Introduction to Hadoop (HDFS, YARN)
- Architecture of Hadoop and Spark integration
- Running Spark on Hadoop YARN
- Processing HDFS files using Spark
Spark Performance and Tuning for Government
- Broadcast variables in Spark
- Accumulators for data aggregation
- Memory management and caching strategies
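The first two tuning topics above can be sketched in plain Python (all data invented). A broadcast variable keeps one shared read-only copy of a small lookup table per executor instead of shipping it inside every task's closure; an accumulator is a write-only counter that tasks increment as a side channel:

```python
# "Broadcast" side: small read-only lookup table shared by all tasks
# (in Spark: sc.broadcast(country_names))
country_names = {"DE": "Germany", "FR": "France"}

# "Accumulator": a counter the tasks bump while mapping
# (in Spark: sc.longAccumulator("unknown_codes"))
unknown_codes = 0

records = [("alice", "DE"), ("bob", "FR"), ("carol", "XX")]
joined = []
for name, code in records:
    if code not in country_names:
        unknown_codes += 1
    joined.append((name, country_names.get(code, "unknown")))
```

This map-side join against a broadcast table avoids a shuffle when one side of the join is small.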
Spark Operations for Government
- Deploying Spark in a production environment
- Sample deployment templates and configurations
- Configuration best practices
- Monitoring tools and techniques
- Troubleshooting common issues
Requirements
Familiarity with Java, Scala, or Python (our labs are conducted in Scala and Python)
A basic understanding of a Linux development environment, including command-line navigation and editing files with tools such as vi or nano
Testimonials (6)
Doing similar exercises in different ways really helps in understanding what each component (Hadoop/Spark, standalone/cluster) can do on its own and together. It gave me ideas on how I should test my application on my local machine when I develop, versus when it is deployed on a cluster.
Thomas Carcaud - IT Frankfurt GmbH
Course - Spark for Developers
Ajay was very friendly, helpful, and knowledgeable about the topic he was discussing.
Biniam Guulay - ICE International Copyright Enterprise Germany GmbH
Course - Spark for Developers
Ernesto did a great job explaining the high level concepts of using Spark and its various modules.
Michael Nemerouf
Course - Spark for Developers
The trainer made the class interesting and entertaining which helps quite a bit with all day training.
Ryan Speelman
Course - Spark for Developers
We know a lot more about the whole environment.
John Kidd
Course - Spark for Developers
Richard is very calm and methodical, with an analytic insight - exactly the qualities needed to present this sort of course.