Course Outline

  1. Introduction to Scala for Government

    • A concise overview of the Scala programming language
    • Laboratory exercises: Getting familiar with Scala
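
    As a brief preview of the Scala constructs used in the labs, the following minimal sketch shows a case class, higher-order functions, and pattern matching; the names and values are purely illustrative.

      // A small, self-contained Scala example: immutable values, a case class,
      // collection transformations, and pattern matching.
      object ScalaPreview {
        // Case classes provide constructors, equality, and pattern matching for free.
        case class Reading(station: String, temperature: Double)

        def main(args: Array[String]): Unit = {
          val readings = List(
            Reading("A", 18.5),
            Reading("B", 21.0),
            Reading("A", 19.2)
          )

          // Higher-order functions: filter and map over an immutable list.
          val warm = readings.filter(_.temperature > 19.0).map(_.station)

          // Pattern matching on the result.
          warm match {
            case Nil      => println("no warm stations")
            case stations => println(s"warm stations: ${stations.mkString(", ")}")
          }
        }
      }
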
  2. Fundamentals of Spark for Government

    • Historical context and development of Spark
    • Integration of Spark with Hadoop
    • Key concepts and architecture of Spark
    • Components of the Spark ecosystem (Core, SQL, MLlib, Streaming)
    • Laboratory exercises: Installation and execution of Spark
  3. Initial Exploration of Spark for Government

    • Executing Spark in local mode
    • Navigating the Spark web user interface
    • Utilizing the Spark shell
    • Analyzing datasets: Part 1
    • Examining Resilient Distributed Datasets (RDDs)
    • Laboratory exercises: Exploration using the Spark shell
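
    A minimal sketch of the interactive exploration covered in this module, assuming a local spark-shell session; the input path is a placeholder and sc is the SparkContext provided by the shell.

      // Typed at the spark-shell prompt (started with: spark-shell --master local[*]).
      // "data/sample.txt" is a placeholder path.
      val lines = sc.textFile("data/sample.txt")      // creates an RDD[String]

      lines.count()                                   // action: number of lines
      lines.first()                                   // action: first line

      // Transformations are lazy; nothing runs until an action is called.
      val errors = lines.filter(_.contains("ERROR"))
      errors.take(5).foreach(println)
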
  4. RDDs for Government

    • Fundamentals of RDDs
    • Partitioning strategies
    • Operations and transformations on RDDs
    • Types of RDDs
    • Key-Value pair RDDs
    • MapReduce operations with RDDs
    • Caching and persistence techniques
    • Laboratory exercises: Creating, inspecting, and caching RDDs
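
    The word count below is a minimal sketch of the RDD topics in this module: key-value pairs, a MapReduce-style aggregation, and caching. The input path and the local master are placeholders.

      import org.apache.spark.{SparkConf, SparkContext}
      import org.apache.spark.storage.StorageLevel

      object WordCount {
        def main(args: Array[String]): Unit = {
          val conf = new SparkConf().setAppName("WordCount").setMaster("local[*]")
          val sc   = new SparkContext(conf)

          val lines = sc.textFile("data/sample.txt")          // RDD[String]

          val counts = lines
            .flatMap(_.split("\\s+"))                         // map phase: words
            .map(word => (word, 1))                           // key-value pairs
            .reduceByKey(_ + _)                               // reduce phase: sum per key
            .persist(StorageLevel.MEMORY_ONLY)                // cache for reuse

          println(s"distinct words: ${counts.count()}")       // first action materializes the RDD
          counts.sortBy(_._2, ascending = false)              // reuses the cached result
            .take(10)
            .foreach(println)

          sc.stop()
        }
      }
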
  5. Spark API Programming for Government

    • Introduction to the Spark API and RDD API
    • Submitting the first program to Spark
    • Debugging and logging practices
    • Configuration properties and settings
    • Laboratory exercises: Programming with the Spark API, submitting jobs
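
    A minimal standalone application of the kind submitted in this module's lab; the class name, jar path, and submit options shown are illustrative assumptions.

      import org.apache.spark.sql.SparkSession

      object FirstApp {
        def main(args: Array[String]): Unit = {
          val spark = SparkSession.builder()
            .appName("FirstApp")
            .getOrCreate()                       // the master is supplied by spark-submit

          val sc = spark.sparkContext
          sc.setLogLevel("WARN")                 // reduce console logging noise

          val data = sc.parallelize(1 to 1000)
          println(s"sum = ${data.sum()}")

          spark.stop()
        }
      }

      // Submitted from the command line, for example:
      //   spark-submit --class FirstApp --master local[*] target/first-app.jar
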
  6. Spark SQL for Government

    • SQL support within Spark
    • DataFrames in Spark
    • Defining tables and importing datasets
    • Querying DataFrames using SQL
    • Storage formats: JSON, Parquet
    • Laboratory exercises: Creating and querying DataFrames, evaluating data formats
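
    A short Spark SQL sketch: reading JSON into a DataFrame, querying it through a temporary view, and writing Parquet. The file paths, view name, and column names are assumptions made for illustration.

      import org.apache.spark.sql.SparkSession

      object SqlExample {
        def main(args: Array[String]): Unit = {
          val spark = SparkSession.builder()
            .appName("SqlExample")
            .master("local[*]")
            .getOrCreate()

          // One JSON object per line, e.g. {"agency":"...", "amount":...}
          val df = spark.read.json("data/records.json")
          df.printSchema()

          // Register the DataFrame as a temporary view and query it with SQL.
          df.createOrReplaceTempView("records")
          val totals = spark.sql(
            "SELECT agency, SUM(amount) AS total FROM records GROUP BY agency")
          totals.show()

          // Persist the result in a columnar format.
          totals.write.mode("overwrite").parquet("output/totals.parquet")

          spark.stop()
        }
      }
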
  7. MLlib for Government

    • Introduction to MLlib
    • Overview of MLlib algorithms
    • Laboratory exercises: Writing MLlib applications
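
    A small sketch using MLlib's DataFrame-based API: fitting a logistic regression on a toy dataset. The data and parameter values are illustrative only.

      import org.apache.spark.ml.classification.LogisticRegression
      import org.apache.spark.ml.linalg.Vectors
      import org.apache.spark.sql.SparkSession

      object MLlibExample {
        def main(args: Array[String]): Unit = {
          val spark = SparkSession.builder()
            .appName("MLlibExample")
            .master("local[*]")
            .getOrCreate()
          import spark.implicits._

          // Toy training data: (label, features).
          val training = Seq(
            (1.0, Vectors.dense(0.0, 1.1, 0.1)),
            (0.0, Vectors.dense(2.0, 1.0, -1.0)),
            (0.0, Vectors.dense(2.0, 1.3, 1.0)),
            (1.0, Vectors.dense(0.0, 1.2, -0.5))
          ).toDF("label", "features")

          val lr    = new LogisticRegression().setMaxIter(10).setRegParam(0.01)
          val model = lr.fit(training)

          model.transform(training)
            .select("label", "probability", "prediction")
            .show(false)

          spark.stop()
        }
      }
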
  8. GraphX for Government

    • Overview of the GraphX library
    • GraphX APIs and functionalities
    • Laboratory exercises: Processing graph data using Spark
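
    A brief GraphX sketch: building a small property graph and ranking its vertices with PageRank. The vertex names and edge labels are illustrative.

      import org.apache.spark.graphx.{Edge, Graph}
      import org.apache.spark.sql.SparkSession

      object GraphXExample {
        def main(args: Array[String]): Unit = {
          val spark = SparkSession.builder()
            .appName("GraphXExample")
            .master("local[*]")
            .getOrCreate()
          val sc = spark.sparkContext

          // Vertices: (id, attribute)
          val vertices = sc.parallelize(Seq(
            (1L, "Alice"), (2L, "Bob"), (3L, "Carol"), (4L, "Dave")))

          // Directed edges with an attribute (here, a relationship label).
          val edges = sc.parallelize(Seq(
            Edge(1L, 2L, "follows"),
            Edge(2L, 3L, "follows"),
            Edge(3L, 1L, "follows"),
            Edge(4L, 1L, "follows")))

          val graph = Graph(vertices, edges)

          // Rank vertices with PageRank and pair each rank with its vertex name.
          val ranks = graph.pageRank(0.001).vertices
          ranks.join(vertices)
            .map { case (_, (rank, name)) => s"$name: $rank" }
            .collect()
            .foreach(println)

          spark.stop()
        }
      }
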
  9. Spark Streaming for Government

    • Overview of streaming capabilities in Spark
    • Evaluating different streaming platforms
    • Performing streaming operations
    • Sliding window operations in Spark
    • Laboratory exercises: Writing Spark streaming applications
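
    A minimal DStream sketch showing word counts over a sliding window; the socket source, host and port, and window sizes are placeholders.

      import org.apache.spark.SparkConf
      import org.apache.spark.streaming.{Seconds, StreamingContext}

      object StreamingExample {
        def main(args: Array[String]): Unit = {
          // At least two local threads: one for the receiver, one for processing.
          val conf = new SparkConf().setAppName("StreamingExample").setMaster("local[2]")
          val ssc  = new StreamingContext(conf, Seconds(5))   // 5-second batches

          // Text lines arriving on a socket, e.g. fed by: nc -lk 9999
          val lines = ssc.socketTextStream("localhost", 9999)

          // Counts over the last 30 seconds, recomputed every 10 seconds.
          val windowedCounts = lines
            .flatMap(_.split("\\s+"))
            .map(word => (word, 1))
            .reduceByKeyAndWindow((a: Int, b: Int) => a + b, Seconds(30), Seconds(10))

          windowedCounts.print()

          ssc.start()
          ssc.awaitTermination()
        }
      }
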
  10. Spark and Hadoop for Government

    • Introduction to Hadoop (HDFS, YARN)
    • Architecture of Hadoop and Spark integration
    • Running Spark on Hadoop YARN
    • Processing HDFS files using Spark
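
    A short sketch of processing an HDFS file from a Spark application submitted to YARN; the HDFS path, class name, and jar name are placeholders.

      import org.apache.spark.sql.SparkSession

      object HdfsExample {
        def main(args: Array[String]): Unit = {
          // No master is hard-coded: it is supplied at submit time (e.g. YARN).
          val spark = SparkSession.builder().appName("HdfsExample").getOrCreate()
          val sc = spark.sparkContext

          val logs = sc.textFile("hdfs:///data/logs/access.log")
          val errorCount = logs.filter(_.contains("ERROR")).count()
          println(s"error lines: $errorCount")

          spark.stop()
        }
      }

      // Submitted to a Hadoop cluster, for example:
      //   spark-submit --class HdfsExample --master yarn --deploy-mode cluster hdfs-example.jar
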
  11. Spark Performance and Tuning for Government

    • Broadcast variables in Spark
    • Accumulators for data aggregation
    • Memory management and caching strategies
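
    A compact sketch of the tuning constructs above: a broadcast lookup table, an accumulator, and explicit persistence. The lookup data and storage level are illustrative.

      import org.apache.spark.sql.SparkSession
      import org.apache.spark.storage.StorageLevel

      object TuningExample {
        def main(args: Array[String]): Unit = {
          val spark = SparkSession.builder()
            .appName("TuningExample")
            .master("local[*]")
            .getOrCreate()
          val sc = spark.sparkContext

          // Broadcast: ship a read-only lookup table to every executor once.
          val codes = sc.broadcast(Map("01" -> "Treasury", "02" -> "Defense"))

          // Accumulator: counts records whose code is not in the lookup table.
          val unknown = sc.longAccumulator("unknown codes")

          val records = sc.parallelize(Seq("01", "02", "99", "01"))
          val named = records.map { code =>
            codes.value.getOrElse(code, { unknown.add(1); "Unknown" })
          }.persist(StorageLevel.MEMORY_AND_DISK)   // keep the result around for reuse

          named.count()                             // action triggers the computation
          println(s"unknown codes seen: ${unknown.value}")
          named.unpersist()

          spark.stop()
        }
      }
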
  12. Spark Operations for Government

    • Deploying Spark in a production environment
    • Sample deployment templates and configurations
    • Configuration best practices
    • Monitoring tools and techniques
    • Troubleshooting common issues
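
    Configuration is typically managed in spark-defaults.conf or passed to spark-submit with --conf; the sketch below sets a few commonly tuned properties programmatically, with placeholder values rather than recommendations.

      import org.apache.spark.sql.SparkSession

      object ConfiguredApp {
        def main(args: Array[String]): Unit = {
          val spark = SparkSession.builder()
            .appName("ConfiguredApp")
            .config("spark.executor.memory", "4g")
            .config("spark.executor.cores", "2")
            .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
            .config("spark.eventLog.enabled", "true")            // enables history server monitoring
            .config("spark.eventLog.dir", "hdfs:///spark-logs")  // placeholder path
            .getOrCreate()

          // Inspect the effective configuration at runtime.
          spark.conf.getAll.toSeq.sortBy(_._1).foreach { case (k, v) => println(s"$k = $v") }

          spark.stop()
        }
      }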

Requirements

PRE-REQUISITES

Familiarity with Java, Scala, or Python (the laboratory exercises are conducted in Scala and Python).
A basic understanding of a Linux development environment, including command-line navigation and file editing with tools such as vi or nano.

Duration: 21 hours
