Course Outline
1: HDFS (17%)
- Explain the functions of HDFS daemons.
- Describe the typical operation of an Apache Hadoop cluster, including data storage and processing.
- Identify contemporary computing system features that necessitate a solution like Apache Hadoop.
- Categorize the primary objectives of HDFS design.
- Given a scenario, determine the appropriate use case for HDFS Federation.
- Identify the components and daemons in an HDFS High Availability (HA) Quorum cluster.
- Analyze the role of HDFS security using Kerberos.
- Determine the most suitable data serialization method for a given scenario.
- Describe the file read and write processes in HDFS (see the sketch after this list).
- Identify the commands to manage files in the Hadoop File System Shell.
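To make the HDFS read and write paths concrete, here is a minimal sketch using the Hadoop Java FileSystem API. It assumes a reachable cluster whose core-site.xml is on the classpath; the path /tmp/demo.txt is purely illustrative.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.nio.charset.StandardCharsets;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsReadWriteDemo {
        public static void main(String[] args) throws Exception {
            // Picks up fs.defaultFS from core-site.xml on the classpath.
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            Path file = new Path("/tmp/demo.txt"); // illustrative path
            // Write: the client asks the NameNode for target DataNodes,
            // then streams packets down the replication pipeline.
            try (FSDataOutputStream out = fs.create(file, true)) {
                out.write("hello hdfs\n".getBytes(StandardCharsets.UTF_8));
            }
            // Read: the client fetches block locations from the NameNode,
            // then reads block data directly from DataNodes.
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(fs.open(file), StandardCharsets.UTF_8))) {
                System.out.println(in.readLine());
            }
        }
    }

The File System Shell equivalents of these two steps are hdfs dfs -put and hdfs dfs -cat.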
2: YARN and MapReduce version 2 (MRv2) (17%)
- Understand how upgrading a cluster from Hadoop 1 to Hadoop 2 impacts cluster settings.
- Understand the deployment of MapReduce v2 (MRv2 / YARN), including all YARN daemons.
- Comprehend the basic design strategy for MapReduce v2 (MRv2).
- Determine how YARN manages resource allocations.
- Identify the workflow of a MapReduce job running on YARN.
- Determine which configuration files must be modified, and how, to migrate a cluster from MapReduce version 1 (MRv1) to MapReduce version 2 (MRv2) running on YARN (see the configuration sketch after this list).
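As a rough illustration of the settings involved, the sketch below loads the client-side configuration and prints the standard Hadoop 2 property keys that come up in this migration; whether they are set, and to what values, depends entirely on the cluster at hand.

    import org.apache.hadoop.conf.Configuration;

    public class YarnConfigCheck {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // Pull the YARN and MapReduce settings from the classpath.
            conf.addResource("yarn-site.xml");
            conf.addResource("mapred-site.xml");

            // "yarn" means jobs go to the ResourceManager rather than
            // an MRv1 JobTracker.
            System.out.println("mapreduce.framework.name = "
                    + conf.get("mapreduce.framework.name"));
            // Memory the NodeManager offers to containers on each node.
            System.out.println("yarn.nodemanager.resource.memory-mb = "
                    + conf.get("yarn.nodemanager.resource.memory-mb"));
            // Bounds on the size of any single container allocation.
            System.out.println("yarn.scheduler.minimum-allocation-mb = "
                    + conf.get("yarn.scheduler.minimum-allocation-mb"));
            System.out.println("yarn.scheduler.maximum-allocation-mb = "
                    + conf.get("yarn.scheduler.maximum-allocation-mb"));
        }
    }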
3: Hadoop Cluster Planning (16%)
- Identify key considerations in selecting hardware and operating systems for hosting an Apache Hadoop cluster.
- Analyze the factors involved in choosing an operating system.
- Understand kernel tuning and disk swapping configurations.
- Given a scenario and workload pattern, identify an appropriate hardware configuration for the scenario.
- Given a scenario, determine the ecosystem components required to meet service level agreements (SLAs).
- Cluster Sizing: given a scenario and frequency of execution, specify the workload requirements, including CPU, memory, storage, and disk I/O (see the worked example after this list).
- Disk Sizing and Configuration: understand JBOD versus RAID, SANs, virtualization, and disk sizing requirements in a cluster.
- Network Topologies: comprehend network usage in Hadoop (for both HDFS and MapReduce) and propose key network design components for a given scenario.
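As a back-of-the-envelope illustration of the sizing arithmetic, the sketch below estimates a worker-node count from a raw data volume. The replication factor of 3 is the HDFS default; the 25% headroom for intermediate data and the 48 TB of usable disk per worker are assumed figures, not recommendations.

    public class ClusterSizing {
        public static void main(String[] args) {
            double rawDataTb = 100.0;    // data to be stored (illustrative)
            double replication = 3.0;    // HDFS default replication factor
            double overhead = 1.25;      // headroom for shuffle/temp data (assumed)
            double diskPerNodeTb = 48.0; // usable disk per worker (assumed)

            double totalTb = rawDataTb * replication * overhead;  // 375 TB
            int nodes = (int) Math.ceil(totalTb / diskPerNodeTb); // 8 workers
            System.out.printf("~%.0f TB of raw storage -> %d worker nodes%n",
                    totalTb, nodes);
        }
    }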
4: Hadoop Cluster Installation and Administration (25%)
- Given a scenario, identify how the cluster will manage disk and machine failures.
- Analyze logging configurations and logging configuration file formats.
- Understand the basics of Hadoop metrics and cluster health monitoring.
- Identify the functions and purposes of available tools for cluster monitoring.
- Install all ecosystem components in CDH 5, including but not limited to: Impala, Flume, Oozie, Hue, Cloudera Manager, Sqoop, Hive, and Pig.
- Identify the functions and purposes of available tools for managing the Apache Hadoop file system (see the sketch after this list).
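As one small example of the kind of check such tools build on, the sketch below pulls a capacity summary from the NameNode through the FileSystem API; it assumes a reachable HDFS and the Hadoop client libraries on the classpath.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.FsStatus;

    public class HdfsCapacityReport {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            FsStatus status = fs.getStatus(); // totals reported by the NameNode
            double usedPct = 100.0 * status.getUsed() / status.getCapacity();
            System.out.printf("capacity=%d bytes, used=%d bytes (%.1f%%), remaining=%d bytes%n",
                    status.getCapacity(), status.getUsed(), usedPct,
                    status.getRemaining());
        }
    }

The command-line equivalent is hdfs dfsadmin -report.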
5: Resource Management (10%)
- Understand the overall design goals of each Hadoop scheduler.
- Given a scenario, determine how the FIFO Scheduler allocates cluster resources.
- Given a scenario, determine how the Fair Scheduler allocates cluster resources under YARN (see the sketch after this list).
- Given a scenario, determine how the Capacity Scheduler allocates cluster resources.
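The Fair Scheduler’s core idea, max-min fair sharing, can be shown with a small self-contained computation. This is a toy model of the allocation logic rather than Hadoop code; the 12 GB pool and the per-queue demands are assumed figures.

    import java.util.Arrays;

    public class FairShareDemo {
        // Max-min fairness: repeatedly grant each unsatisfied queue an equal
        // slice of what remains; queues that need less return the surplus.
        static double[] fairShares(double capacity, double[] demands) {
            double[] alloc = new double[demands.length];
            double remaining = capacity;
            while (remaining > 1e-9) {
                int unsatisfied = 0;
                for (int i = 0; i < demands.length; i++)
                    if (alloc[i] < demands[i]) unsatisfied++;
                if (unsatisfied == 0) break; // every demand already met
                double slice = remaining / unsatisfied;
                for (int i = 0; i < demands.length; i++) {
                    if (alloc[i] < demands[i]) {
                        double grant = Math.min(slice, demands[i] - alloc[i]);
                        alloc[i] += grant;
                        remaining -= grant;
                    }
                }
            }
            return alloc;
        }

        public static void main(String[] args) {
            double[] demands = {2, 5, 10}; // GB requested per queue (assumed)
            System.out.println(Arrays.toString(fairShares(12, demands)));
            // Prints [2.0, 5.0, 5.0]: the small queue is fully served and
            // its surplus is split evenly among the still-hungry queues.
        }
    }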
6: Monitoring and Logging (15%)
- Understand the functions and features of Hadoop’s metric collection capabilities.
- Analyze the NameNode and JobTracker Web UIs.
- Understand how to monitor cluster daemons.
- Identify and monitor CPU usage on master nodes.
- Describe how to monitor swap and memory allocation across all nodes.
- Identify how to view and manage Hadoop’s log files.
- Interpret a log file (see the parsing sketch after this list).
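As a concrete instance of interpretation: Hadoop daemons typically log through Log4j in a date, level, class, message layout. The sketch below pulls those fields out of one line; the sample line itself is invented for illustration.

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class LogLineParser {
        // Matches the common layout "2014-01-01 12:00:00,123 LEVEL class: message".
        private static final Pattern LINE = Pattern.compile(
                "^(\\d{4}-\\d{2}-\\d{2} \\d{2}:\\d{2}:\\d{2},\\d{3}) (\\w+) (\\S+): (.*)$");

        public static void main(String[] args) {
            String sample = "2014-01-01 12:00:00,123 WARN "
                    + "org.apache.hadoop.hdfs.server.namenode.NameNode: sample message";
            Matcher m = LINE.matcher(sample);
            if (m.matches()) {
                System.out.println("time    = " + m.group(1));
                System.out.println("level   = " + m.group(2)); // WARN/ERROR deserve attention
                System.out.println("source  = " + m.group(3)); // the daemon class that logged it
                System.out.println("message = " + m.group(4));
            }
        }
    }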
Requirements
- Fundamental Linux administration skills
- Essential programming skills
35 Hours
Testimonials (3)
I genuinely enjoyed the many hands-on sessions.
Jacek Pieczatka
Course - Administrator Training for Apache Hadoop
I genuinely appreciated the trainer’s deep expertise.
Grzegorz Gorski
Course - Administrator Training for Apache Hadoop
I mostly liked the trainer giving real-life examples.