Hadoop Training in California

Apache Hadoop is an open-source implementation of two core Google Big Data technologies: GFS (the Google File System) and the MapReduce programming paradigm. It is a complete framework for storing and processing large data sets. Hadoop is used by large-scale web companies, including leaders such as Yahoo, Facebook and LinkedIn.
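
For readers new to the MapReduce paradigm, here is the canonical word-count job written against Hadoop's Java MapReduce API; a minimal, self-contained sketch, with input and output paths taken from the command line:

    import java.io.IOException;
    import java.util.StringTokenizer;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCount {

      // Mapper: emits (word, 1) for every token in its input split.
      public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
            throws IOException, InterruptedException {
          StringTokenizer itr = new StringTokenizer(value.toString());
          while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, ONE);
          }
        }
      }

      // Reducer: sums the counts collected for each word.
      public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
          int sum = 0;
          for (IntWritable val : values) {
            sum += val.get();
          }
          result.set(sum);
          context.write(key, result);
        }
      }

      public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class); // combiner pre-aggregates map output locally
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // e.g. an HDFS input directory
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // must not already exist
        System.exit(job.waitForCompletion(true) ? 0 : 1);
      }
    }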

CA, San Francisco - Golden Gate - 75 Broadway

75 Broadway Suite 202
San Francisco, CA 94111
United States
CA, San Francisco - Golden Gate - 75 Broadway
With its fabulous arched windows, the 75 Broadway office, located in the heart of Jackson Square, is designed to impress. Situated at Golden Gateway Commons on...

Client Testimonials

Hadoop Administration

What did you like the most about the training?:

The detail on each tool.

Amber Mehrotra - NIIT Limited

Hadoop for Developers and Administrators

The fact that all the data and software were ready to use on an already prepared VM, provided by the trainer on external disks.

- vyzVoice

Administrator Training for Apache Hadoop

The trainer gave real-life examples.

Simon Hahn - OPITZ CONSULTING Deutschland GmbH

Administrator Training for Apache Hadoop

The trainer's great competence.

Grzegorz Gorski - OPITZ CONSULTING Deutschland GmbH

Administrator Training for Apache Hadoop

Many hands-on sessions.

Jacek Pieczątka - OPITZ CONSULTING Deutschland GmbH

A practical introduction to Data Analysis and Big Data

Willingness to share more

Balaram Chandra Paul - MOL Information Technology Asia Limited

Data Analysis with Hive/HiveQL

I very much liked the interactive way of learning.

Luigi Loiacono - Proximus

Data Analysis with Hive/HiveQL

It was a very practical training, I liked the hands-on exercises.

Proximus

Data Analysis with Hive/HiveQL

Good overview, good balance between theory and exercises.

Proximus

Data Analysis with Hive/HiveQL

Dynamic interaction and "hands on" the subject, thanks to the Virtual Machine, very stimulating!

Philippe Job - Proximus

Data Analysis with Hive/HiveQL

The competence and knowledge of the trainer

Jonathan Puvilland - Proximus

Hadoop Course Events - California

Code Name Venue Duration Course Date Course Price [Remote / Classroom]
hadoopadm Hadoop Administration CA, San Francisco - Golden Gate - 75 Broadway 21 hours Mon, Dec 4 2017, 9:30 am $5700 / $7840
apacheh Administrator Training for Apache Hadoop CA, San Diego - Stonecrest IV 35 hours Mon, Dec 4 2017, 9:30 am $10250 / $13450
mdlmrah Model MapReduce and Apache Hadoop CA, Sacramento - Promenade Circle 14 hours Tue, Dec 5 2017, 9:30 am $3580 / $5590
hadoopmapr Hadoop Administration on MapR CA, Sunnyvale - Downtown Sunnyvale 28 hours Tue, Dec 5 2017, 9:30 am $7685 / $10485
mdlmrah Model MapReduce and Apache Hadoop CA, Sunnyvale - Downtown Sunnyvale 14 hours Mon, Dec 11 2017, 9:30 am $3580 / $5280
68736 Hadoop for Developers (2 days) CA, Sunnyvale - Downtown Sunnyvale 14 hours Thu, Dec 14 2017, 9:30 am $3350 / $5050
hadoopadm Hadoop Administration CA, San Diego - Stonecrest IV 21 hours Mon, Dec 18 2017, 9:30 am $5700 / $7940
68736 Hadoop for Developers (2 days) CA, San Diego - Stonecrest IV 14 hours Mon, Dec 18 2017, 9:30 am $3350 / $5110
hadoopadm Hadoop Administration CA, Sacramento - Promenade Circle 21 hours Tue, Jan 2 2018, 9:30 am $5700 / $8290
hadoopmapr Hadoop Administration on MapR CA, San Francisco - Golden Gate - 75 Broadway 28 hours Tue, Jan 2 2018, 9:30 am $7685 / $10405
apacheh Administrator Training for Apache Hadoop CA, Sacramento - Promenade Circle 35 hours Mon, Jan 8 2018, 9:30 am $10250 / $14000
hadoopmapr Hadoop Administration on MapR CA, San Diego - Stonecrest IV 28 hours Mon, Jan 8 2018, 9:30 am $7685 / $10405
hadoopadm Hadoop Administration CA, Sunnyvale - Downtown Sunnyvale 21 hours Mon, Jan 8 2018, 9:30 am $5700 / $7950
mdlmrah Model MapReduce and Apache Hadoop CA, San Diego - Stonecrest IV 14 hours Mon, Jan 15 2018, 9:30 am $3580 / $5340
68736 Hadoop for Developers (2 days) CA, San Francisco - Golden Gate - 75 Broadway 14 hours Tue, Jan 16 2018, 9:30 am $3350 / $4910
apacheh Administrator Training for Apache Hadoop CA, San Francisco - Golden Gate - 75 Broadway 35 hours Mon, Jan 22 2018, 9:30 am $10250 / $13550
apacheh Administrator Training for Apache Hadoop CA, Sunnyvale - Downtown Sunnyvale 35 hours Mon, Jan 22 2018, 9:30 am $10250 / $13600
hadoopmapr Hadoop Administration on MapR CA, Sacramento - Promenade Circle 28 hours Tue, Jan 23 2018, 9:30 am $7685 / $10855
68736 Hadoop for Developers (2 days) CA, Sacramento - Promenade Circle 14 hours Wed, Jan 24 2018, 9:30 am $3350 / $5360
mdlmrah Model MapReduce and Apache Hadoop CA, San Francisco - Golden Gate - 75 Broadway 14 hours Thu, Jan 25 2018, 9:30 am $3580 / $5140
mdlmrah Model MapReduce and Apache Hadoop CA, Sacramento - Promenade Circle 14 hours Thu, Jan 25 2018, 9:30 am $3580 / $5590
apacheh Administrator Training for Apache Hadoop CA, San Diego - Stonecrest IV 35 hours Mon, Jan 29 2018, 9:30 am $10250 / $13450
hadoopadm Hadoop Administration CA, San Francisco - Golden Gate - 75 Broadway 21 hours Mon, Jan 29 2018, 9:30 am $5700 / $7840
hadoopmapr Hadoop Administration on MapR CA, Sunnyvale - Downtown Sunnyvale 28 hours Mon, Feb 5 2018, 9:30 am $7685 / $10485
mdlmrah Model MapReduce and Apache Hadoop CA, Sunnyvale - Downtown Sunnyvale 14 hours Tue, Feb 6 2018, 9:30 am $3580 / $5280
hadoopadm Hadoop Administration CA, San Diego - Stonecrest IV 21 hours Wed, Feb 7 2018, 9:30 am $5700 / $7940
68736 Hadoop for Developers (2 days) CA, San Diego - Stonecrest IV 14 hours Wed, Feb 7 2018, 9:30 am $3350 / $5110
68736 Hadoop for Developers (2 days) CA, Sunnyvale - Downtown Sunnyvale 14 hours Thu, Feb 8 2018, 9:30 am $3350 / $5050
hadoopmapr Hadoop Administration on MapR CA, San Francisco - Golden Gate - 75 Broadway 28 hours Mon, Feb 26 2018, 9:30 am $7685 / $10405
hadoopadm Hadoop Administration CA, Sacramento - Promenade Circle 21 hours Tue, Feb 27 2018, 9:30 am $5700 / $8290
hadoopadm Hadoop Administration CA, Sunnyvale - Downtown Sunnyvale 21 hours Wed, Feb 28 2018, 9:30 am $5700 / $7950
mdlmrah Model MapReduce and Apache Hadoop CA, San Diego - Stonecrest IV 14 hours Tue, Mar 6 2018, 9:30 am $3580 / $5340
hadoopmapr Hadoop Administration on MapR CA, San Diego - Stonecrest IV 28 hours Tue, Mar 6 2018, 9:30 am $7685 / $10405
68736 Hadoop for Developers (2 days) CA, San Francisco - Golden Gate - 75 Broadway 14 hours Thu, Mar 8 2018, 9:30 am $3350 / $4910
apacheh Administrator Training for Apache Hadoop CA, Sacramento - Promenade Circle 35 hours Mon, Mar 12 2018, 9:30 am $10250 / $14000
68736 Hadoop for Developers (2 days) CA, Sacramento - Promenade Circle 14 hours Thu, Mar 15 2018, 9:30 am $3350 / $5360
hadoopmapr Hadoop Administration on MapR CA, Sacramento - Promenade Circle 28 hours Mon, Mar 19 2018, 9:30 am $7685 / $10855
apacheh Administrator Training for Apache Hadoop CA, Sunnyvale - Downtown Sunnyvale 35 hours Mon, Mar 19 2018, 9:30 am $10250 / $13600
apacheh Administrator Training for Apache Hadoop CA, San Francisco - Golden Gate - 75 Broadway 35 hours Mon, Mar 19 2018, 9:30 am $10250 / $13550
mdlmrah Model MapReduce and Apache Hadoop CA, San Francisco - Golden Gate - 75 Broadway 14 hours Mon, Mar 19 2018, 9:30 am $3580 / $5140
mdlmrah Model MapReduce and Apache Hadoop CA, Sacramento - Promenade Circle 14 hours Mon, Mar 19 2018, 9:30 am $3580 / $5590
hadoopadm Hadoop Administration CA, San Francisco - Golden Gate - 75 Broadway 21 hours Wed, Mar 21 2018, 9:30 am $5700 / $7840
apacheh Administrator Training for Apache Hadoop CA, San Diego - Stonecrest IV 35 hours Mon, Mar 26 2018, 9:30 am $10250 / $13450
mdlmrah Model MapReduce and Apache Hadoop CA, Sunnyvale - Downtown Sunnyvale 14 hours Wed, Mar 28 2018, 9:30 am $3580 / $5280
68736 Hadoop for Developers (2 days) CA, San Diego - Stonecrest IV 14 hours Thu, Mar 29 2018, 9:30 am $3350 / $5110
68736 Hadoop for Developers (2 days) CA, Sunnyvale - Downtown Sunnyvale 14 hours Mon, Apr 2 2018, 9:30 am $3350 / $5050
hadoopmapr Hadoop Administration on MapR CA, Sunnyvale - Downtown Sunnyvale 28 hours Tue, Apr 3 2018, 9:30 am $7685 / $10485
hadoopadm Hadoop Administration CA, San Diego - Stonecrest IV 21 hours Tue, Apr 3 2018, 9:30 am $5700 / $7940
hadoopadm Hadoop Administration CA, Sunnyvale - Downtown Sunnyvale 21 hours Wed, Apr 25 2018, 9:30 am $5700 / $7950
hadoopadm Hadoop Administration CA, Sacramento - Promenade Circle 21 hours Wed, Apr 25 2018, 9:30 am $5700 / $8290
mdlmrah Model MapReduce and Apache Hadoop CA, San Diego - Stonecrest IV 14 hours Thu, Apr 26 2018, 9:30 am $3580 / $5340
hadoopmapr Hadoop Administration on MapR CA, San Francisco - Golden Gate - 75 Broadway 28 hours Mon, Apr 30 2018, 9:30 am $7685 / $10405
68736 Hadoop for Developers (2 days) CA, San Francisco - Golden Gate - 75 Broadway 14 hours Mon, Apr 30 2018, 9:30 am $3350 / $4910
hadoopmapr Hadoop Administration on MapR CA, San Diego - Stonecrest IV 28 hours Mon, May 7 2018, 9:30 am $7685 / $10405
68736 Hadoop for Developers (2 days) CA, Sacramento - Promenade Circle 14 hours Mon, May 7 2018, 9:30 am $3350 / $5360
apacheh Administrator Training for Apache Hadoop CA, Sacramento - Promenade Circle 35 hours Mon, May 7 2018, 9:30 am $10250 / $14000
mdlmrah Model MapReduce and Apache Hadoop CA, San Francisco - Golden Gate - 75 Broadway 14 hours Tue, May 8 2018, 9:30 am $3580 / $5140
mdlmrah Model MapReduce and Apache Hadoop CA, Sacramento - Promenade Circle 14 hours Wed, May 9 2018, 9:30 am $3580 / $5590
hadoopmapr Hadoop Administration on MapR CA, Sacramento - Promenade Circle 28 hours Mon, May 14 2018, 9:30 am $7685 / $10855
apacheh Administrator Training for Apache Hadoop CA, Sunnyvale - Downtown Sunnyvale 35 hours Mon, May 14 2018, 9:30 am $10250 / $13600
apacheh Administrator Training for Apache Hadoop CA, San Diego - Stonecrest IV 35 hours Mon, May 21 2018, 9:30 am $10250 / $13450
apacheh Administrator Training for Apache Hadoop CA, San Francisco - Golden Gate - 75 Broadway 35 hours Mon, May 21 2018, 9:30 am $10250 / $13550
mdlmrah Model MapReduce and Apache Hadoop CA, Sunnyvale - Downtown Sunnyvale 14 hours Mon, May 21 2018, 9:30 am $3580 / $5280
68736 Hadoop for Developers (2 days) CA, San Diego - Stonecrest IV 14 hours Tue, May 22 2018, 9:30 am $3350 / $5110
hadoopadm Hadoop Administration CA, San Francisco - Golden Gate - 75 Broadway 21 hours Tue, May 22 2018, 9:30 am $5700 / $7840
68736 Hadoop for Developers (2 days) CA, Sunnyvale - Downtown Sunnyvale 14 hours Wed, May 23 2018, 9:30 am $3350 / $5050
hadoopmapr Hadoop Administration on MapR CA, Sunnyvale - Downtown Sunnyvale 28 hours Mon, May 28 2018, 9:30 am $7685 / $10485
hadoopadm Hadoop Administration CA, San Diego - Stonecrest IV 21 hours Tue, Jun 12 2018, 9:30 am $5700 / $7940

Course Outlines

Code Name Duration Outline
druid Druid: Build a fast, real-time data analysis system 21 hours

Druid is an open-source, column-oriented, distributed data store written in Java. It was designed to quickly ingest massive quantities of event data and execute low-latency OLAP queries on that data. Druid is commonly used in business intelligence applications to analyze high volumes of real-time and historical data. It is also well suited for powering fast, interactive, analytic dashboards for end-users. Druid is used by companies such as Alibaba, Airbnb, Cisco, eBay, Netflix, Paypal, and Yahoo.

In this course, we explore some of the limitations of data warehouse solutions and discuss how Druid can complement those technologies to form a flexible and scalable streaming analytics stack. We walk through many examples, offering participants the chance to implement and test Druid-based solutions in a lab environment.
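
For orientation, the sketch below POSTs a SQL query to Druid's HTTP SQL endpoint using the JDK's built-in HttpClient. The host, port (8888 is the quickstart router) and the wikipedia datasource are assumptions for illustration, not part of this outline.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class DruidSqlQuery {
      public static void main(String[] args) throws Exception {
        // Assumed quickstart endpoint and datasource; adjust to your cluster.
        String body = "{\"query\": \"SELECT channel, COUNT(*) AS edits "
            + "FROM wikipedia GROUP BY channel\"}";
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://localhost:8888/druid/v2/sql/"))
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(body))
            .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
            .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // JSON array of result rows
      }
    }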

Audience
    Application developers
    Software engineers
    Technical consultants
    DevOps professionals
    Architecture engineers

Format of the course
    Part lecture, part discussion, heavy hands-on practice, occasional tests to gauge understanding

Introduction

Installing and starting Druid

Druid architecture and design

Real-time ingestion of event data

Sharding and indexing

Loading data

Querying data

Visualizing data

Running a distributed cluster

Druid + Apache Hive

Druid + Apache Kafka

Druid + others

Troubleshooting

Administrative tasks

voldemort Voldemort: Setting up a key-value distributed data store 14 hours

Voldemort is an open-source distributed data store that is designed as a key-value store.  It is used at LinkedIn by numerous critical services powering a large portion of the site.

This course will introduce the architecture and capabilities of Voldemort and walk participants through the setup and application of a key-value distributed data store.
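
For a first taste of the client API, here is a minimal put/get round trip in the style of the example in Voldemort's documentation; the bootstrap URL and the store name "test" are placeholders for a locally running single-node server.

    import voldemort.client.ClientConfig;
    import voldemort.client.SocketStoreClientFactory;
    import voldemort.client.StoreClient;
    import voldemort.client.StoreClientFactory;
    import voldemort.versioning.Versioned;

    public class VoldemortHello {
      public static void main(String[] args) {
        // Bootstrap against a running Voldemort server (placeholder URL).
        StoreClientFactory factory = new SocketStoreClientFactory(
            new ClientConfig().setBootstrapUrls("tcp://localhost:6666"));
        StoreClient<String, String> client = factory.getStoreClient("test"); // store from stores.xml

        client.put("hello", "world");                  // versioned write
        Versioned<String> value = client.get("hello"); // read back with version metadata
        System.out.println(value.getValue());
      }
    }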

Audience
    Software developers
    System administrators
    DevOps engineers

Format of the course
    Part lecture, part discussion, heavy hands-on practice, occasional tests to gauge understanding

Introduction

Understanding distributed key-value storage systems

Voldemort data model and architecture

Downloading and configuration

Command line operations

Clients and servers

Working with Hadoop

Configuring build and push jobs

Rebalancing a Voldemort instance

Serving Large-scale Batch Computed Data

Using the Admin Tool

Performance tuning

BigData_ A practical introduction to Data Analysis and Big Data 35 hours

Participants who complete this training will gain a practical, real-world understanding of Big Data and its related technologies, methodologies and tools.

Participants will have the opportunity to put this knowledge into practice through hands-on exercises. Group interaction and instructor feedback make up an important component of the class.

The course starts with an introduction to elemental concepts of Big Data, then progresses into the programming languages and methodologies used to perform Data Analysis. Finally, we discuss the tools and infrastructure that enable Big Data storage, Distributed Processing, and Scalability.

Audience

  • Developers / programmers
  • IT consultants

Format of the course

  • Part lecture, part discussion, hands-on practice and implementation, occasional quizzing to measure progress.

Introduction to Data Analysis and Big Data

  • What makes Big Data "big"?
    • Velocity, Volume, Variety, Veracity (VVVV)
  • Limits to traditional Data Processing
  • Distributed Processing
  • Statistical Analysis
  • Types of Machine Learning Analysis
  • Data Visualization

Languages used for Data Analysis

  • R language
    • Why R for Data Analysis?
    • Data manipulation, calculation and graphical display
  • Python
    • Why Python for Data Analysis?
    • Manipulating, processing, cleaning, and crunching data

Approaches to Data Analysis

  • Statistical Analysis
    • Time Series analysis
    • Forecasting with Correlation and Regression models
    • Inferential Statistics (estimating)
    • Descriptive Statistics in Big Data sets (e.g. calculating mean)
  • Machine Learning
    • Supervised vs unsupervised learning
    • Classification and clustering
    • Estimating cost of specific methods
    • Filtering
  • Natural Language Processing
    • Processing text
    • Understanding the meaning of text
    • Automatic text generation
    • Sentiment analysis / Topic analysis
  • Computer Vision
    • Acquiring, processing, analyzing, and understanding images
    • Reconstructing, interpreting and understanding 3D scenes
    • Using image data to make decisions

Big Data infrastructure

  • Data Storage
    • Relational databases (SQL)
      • MySQL
      • Postgres
      • Oracle
    • Non-relational databases (NoSQL)
      • Cassandra
      • MongoDB
      • Neo4j
    • Understanding the nuances
      • Hierarchical databases
      • Object-oriented databases
      • Document-oriented databases
      • Graph-oriented databases
      • Other
  • Distributed Processing
    • Hadoop
      • HDFS as a distributed filesystem
      • MapReduce for distributed processing
    • Spark (see the sketch after this outline)
      • All-in-one in-memory cluster computing framework for large-scale data processing
      • Structured streaming
      • Spark SQL
      • Machine Learning libraries: MLlib
      • Graph processing with GraphX
  • Scalability
    • Public cloud
      • AWS, Google, Aliyun, etc.
    • Private cloud
      • OpenStack, Cloud Foundry, etc.
    • Auto-scalability
  • Choosing the right solution for the problem
  • The future of Big Data
  • Closing remarks
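
To make the Spark item in the outline above concrete, here is a minimal word count on Spark's Java RDD API (a sketch assuming local mode and a placeholder input file):

    import java.util.Arrays;

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import scala.Tuple2;

    public class SparkWordCount {
      public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("word-count").setMaster("local[*]");
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
          JavaRDD<String> lines = sc.textFile("input.txt"); // placeholder path
          JavaPairRDD<String, Integer> counts = lines
              .flatMap(line -> Arrays.asList(line.split("\\s+")).iterator())
              .mapToPair(word -> new Tuple2<>(word, 1))
              .reduceByKey(Integer::sum); // in-memory aggregation across partitions
          counts.collect().forEach(t -> System.out.println(t._1 + "\t" + t._2));
        }
      }
    }
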
mdlmrah Model MapReduce and Apache Hadoop 14 hours

The course is intended for IT specialists who work with distributed processing of large data sets across clusters of computers.

Data Mining and Business Intelligence

  • Introduction
  • Area of application
  • Capabilities
  • Basics of data exploration

Big data

  • What does Big data stand for?
  • Big data and Data mining

MapReduce

  • Model basics
  • Example application
  • Stats
  • Cluster model

Hadoop

  • What is Hadoop
  • Installation
  • Configuration
  • Cluster settings
  • Architecture and configuration of Hadoop Distributed File System
  • Console tools
  • DistCp tool
  • MapReduce and Hadoop
  • Streaming
  • Administration and configuration of Hadoop On Demand
  • Alternatives
ApHadm1 Apache Hadoop: Manipulation and Transformation of Data Performance 21 hours


This course is intended for developers, architects, data scientists or any profile that requires access to data either intensively or on a regular basis.

The major focus of the course is data manipulation and transformation.

Among the tools in the Hadoop ecosystem this course includes the use of Pig and Hive both of which are heavily used for data transformation and manipulation.

This training also addresses performance metrics and performance optimisation.

The course is entirely hands-on and is punctuated by presentations of the theoretical aspects.

1.1 Hadoop Concepts

1.1.1 HDFS

  • The Design of HDFS
  • Command line interface
  • Hadoop File System

1.1.2 Clusters

  • Anatomy of a cluster
  • Master Node / Slave node
  • Name Node / Data Node

1.2 Data Manipulation

1.2.1 MapReduce detailed

  • Map phase
  • Reduce phase
  • Shuffle

1.2.2 Analytics with MapReduce

  • Group-By with MapReduce
  • Frequency distributions and sorting with MapReduce
  • Plotting results (GNU Plot)
  • Histograms with MapReduce
  • Scatter plots with MapReduce
  • Parsing complex datasets
  • Counting with MapReduce and Combiners
  • Build reports

1.2.3 Data Cleansing

  • Document Cleaning
  • Fuzzy string search
  • Record linkage / data deduplication
  • Transform and sort event dates
  • Validate source reliability
  • Trim Outliers

1.2.4 Extracting and Transforming Data

  • Transforming logs
  • Using Apache Pig to filter
  • Using Apache Pig to sort
  • Using Apache Pig to sessionize

1.2.5 Advanced Joins

  • Joining data in the Mapper using MapReduce
  • Joining data using Apache Pig replicated join
  • Joining sorted data using Apache Pig merge join
  • Joining skewed data using Apache Pig skewed join
  • Using a map-side join in Apache Hive
  • Using optimized full outer joins in Apache Hive
  • Joining data using an external key value store

1.3 Performance Diagnosis and Optimization Techniques

  • Map
    • Investigating spikes in input data
    • Identifying map-side data skew problems
    • Map task throughput
    • Small files
    • Unsplittable files
  • Reduce
    • Too few or too many reducers
    • Reduce-side data skew problems
    • Reduce tasks throughput
    • Slow shuffle and sort
  • Competing jobs and scheduler throttling
  • Stack dumps & unoptimized code
  • Hardware failures
  • CPU contention
  • Tasks
    • Extracting and visualizing task execution times
    • Profiling your map and reduce tasks
  • Avoid the reducer
  • Filter and project
  • Using the combiner
  • Fast sorting with comparators
  • Collecting skewed data
  • Reduce skew mitigation
apacheh Administrator Training for Apache Hadoop 35 hours

Audience:

The course is intended for IT specialists looking for a solution to store and process large data sets in a distributed system environment.

Goal:

To gain deep knowledge of Hadoop cluster administration.

1: HDFS (17%)

  • Describe the function of HDFS Daemons
  • Describe the normal operation of an Apache Hadoop cluster, both in data storage and in data processing.
  • Identify current features of computing systems that motivate a system like Apache Hadoop.
  • Classify major goals of HDFS Design
  • Given a scenario, identify appropriate use case for HDFS Federation
  • Identify components and daemon of an HDFS HA-Quorum cluster
  • Analyze the role of HDFS security (Kerberos)
  • Determine the best data serialization choice for a given scenario
  • Describe file read and write paths
  • Identify the commands to manipulate files in the Hadoop File System Shell
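
The shell commands above also have programmatic equivalents; a small sketch using the org.apache.hadoop.fs.FileSystem API, with illustrative paths:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsTour {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration(); // reads core-site.xml from the classpath
        FileSystem fs = FileSystem.get(conf);

        Path dir = new Path("/user/demo");        // illustrative paths
        fs.mkdirs(dir);                           // like: hdfs dfs -mkdir -p /user/demo
        fs.copyFromLocalFile(new Path("data.csv"), new Path(dir, "data.csv")); // like -put

        for (FileStatus status : fs.listStatus(dir)) { // like: hdfs dfs -ls /user/demo
          System.out.println(status.getPath() + "\t" + status.getLen() + " bytes");
        }
        fs.close();
      }
    }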

2: YARN and MapReduce version 2 (MRv2) (17%)

  • Understand how upgrading a cluster from Hadoop 1 to Hadoop 2 affects cluster settings
  • Understand how to deploy MapReduce v2 (MRv2 / YARN), including all YARN daemons
  • Understand basic design strategy for MapReduce v2 (MRv2)
  • Determine how YARN handles resource allocations
  • Identify the workflow of MapReduce job running on YARN
  • Determine which files you must change and how in order to migrate a cluster from MapReduce version 1 (MRv1) to MapReduce version 2 (MRv2) running on YARN.

3: Hadoop Cluster Planning (16%)

  • Principal points to consider in choosing the hardware and operating systems to host an Apache Hadoop cluster.
  • Analyze the choices in selecting an OS
  • Understand kernel tuning and disk swapping
  • Given a scenario and workload pattern, identify a hardware configuration appropriate to the scenario
  • Given a scenario, determine the ecosystem components your cluster needs to run in order to fulfill the SLA
  • Cluster sizing: given a scenario and frequency of execution, identify the specifics for the workload, including CPU, memory, storage, disk I/O
  • Disk Sizing and Configuration, including JBOD versus RAID, SANs, virtualization, and disk sizing requirements in a cluster
  • Network Topologies: understand network usage in Hadoop (for both HDFS and MapReduce) and propose or identify key network design components for a given scenario

4: Hadoop Cluster Installation and Administration (25%)

  • Given a scenario, identify how the cluster will handle disk and machine failures
  • Analyze a logging configuration and logging configuration file format
  • Understand the basics of Hadoop metrics and cluster health monitoring
  • Identify the function and purpose of available tools for cluster monitoring
  • Be able to install all the ecosystem components in CDH 5, including (but not limited to): Impala, Flume, Oozie, Hue, Cloudera Manager, Sqoop, Hive, and Pig
  • Identify the function and purpose of available tools for managing the Apache Hadoop file system

5: Resource Management (10%)

  • Understand the overall design goals of each of Hadoop's schedulers
  • Given a scenario, determine how the FIFO Scheduler allocates cluster resources
  • Given a scenario, determine how the Fair Scheduler allocates cluster resources under YARN
  • Given a scenario, determine how the Capacity Scheduler allocates cluster resources

6: Monitoring and Logging (15%)

  • Understand the functions and features of Hadoop’s metric collection abilities
  • Analyze the NameNode and JobTracker Web UIs
  • Understand how to monitor cluster Daemons
  • Identify and monitor CPU usage on master nodes
  • Describe how to monitor swap and memory allocation on all nodes
  • Identify how to view and manage Hadoop’s log files
  • Interpret a log file
hadoopforprojectmgrs Hadoop for Project Managers 14 hours

As more and more software and IT projects migrate from local processing and data management to distributed processing and big data storage, Project Managers are finding the need to upgrade their knowledge and skills to grasp the concepts and practices relevant to Big Data projects and opportunities.

This course introduces Project Managers to the most popular Big Data processing framework: Hadoop.  

In this instructor-led training, participants will learn the core components of the Hadoop ecosystem and how these technologies can be used to solve large-scale problems. In learning these foundations, participants will also improve their ability to communicate with the developers and implementers of these systems as well as the data scientists and analysts that many IT projects involve.

Audience

  • Project Managers wishing to implement Hadoop into their existing development or IT infrastructure
  • Project Managers needing to communicate with cross-functional teams that include big data engineers, data scientists and business analysts

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice

Introduction
    Why and how project teams adopt Hadoop.
    How it all started
    The Project Manager's role in Hadoop projects

Understanding Hadoop's architecture and key concepts
    HDFS
    MapReduce
    Other pieces of the Hadoop ecosystem

What constitutes Big Data?

Different approaches to storing Big Data

HDFS (Hadoop Distributed File System) as the foundation

How Big Data is processed
    The power of distributed processing

Processing data with Map Reduce
    How data is picked apart step by step

The role of clustering in large-scale distributed processing
    Architectural overview
    Clustering approaches

Clustering your data and processes with YARN

The role of non-relational database in Big Data storage

Working with Hadoop's non-relational database: HBase

Data warehousing architectural overview

Managing your data warehouse with Hive

Running Hadoop from shell-scripts

Working with Hadoop Streaming

Other Hadoop tools and utilities

Getting started on a Hadoop project
    Demystifying complexity

Migrating an existing project to Hadoop
    Infrastructure considerations
    Scaling beyond your allocated resources

Hadoop project stakeholders and their toolkits
    Developers, data scientists, business analysts and project managers

Hadoop as a foundation for new technologies and approaches

Closing remarks

hadoopadm Hadoop Administration 21 hours

The course is intended for IT specialists looking for a solution to store and process large data sets in a distributed system environment.

Course goal:

To gain working knowledge of Hadoop cluster administration.

  • Introduction to Cloud Computing and Big Data solutions

  • Apache Hadoop evolution: HDFS, MapReduce, YARN

  • Installation and configuration of Hadoop in Pseudo-distributed mode

  • Running MapReduce jobs on Hadoop cluster

  • Hadoop cluster planning, installation and configuration

  • Hadoop ecosystem: Pig, Hive, Sqoop, HBase

  • Big Data future: Impala, Cassandra
storm Apache Storm 28 hours

Apache Storm is a distributed, real-time computation engine used for enabling real-time business intelligence. It does so by enabling applications to reliably process unbounded streams of data (a.k.a. stream processing).

"Storm is for real-time processing what Hadoop is for batch processing!"

In this instructor-led live training, participants will learn how to install and configure Apache Storm, then develop and deploy an Apache Storm application for processing big data in real-time.

Topics covered in this training include:

  • Apache Storm in the context of Hadoop
  • Working with unbounded data
  • Continuous computation
  • Real-time analytics
  • Distributed RPC and ETL processing

Request this course now!

Audience

  • Software and ETL developers
  • Mainframe professionals
  • Data scientists
  • Big data analysts
  • Hadoop professionals

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice

Request a customized course outline for this training!

68736 Hadoop for Developers (2 days) 14 hours

Introduction

  • What is Hadoop?
  • What does it do?
  • How does it do it?

The Motivation for Hadoop

  • Problems with Traditional Large-Scale Systems
  • Introducing Hadoop
  • Hadoopable Problems

Hadoop: Basic Concepts and HDFS

  • The Hadoop Project and Hadoop Components
  • The Hadoop Distributed File System

Introduction to MapReduce

  • MapReduce Overview
  • Example: WordCount
  • Mappers
  • Reducers

Hadoop Clusters and the Hadoop Ecosystem

  • Hadoop Cluster Overview
  • Hadoop Jobs and Tasks
  • Other Hadoop Ecosystem Components

Writing a MapReduce Program in Java

  • Basic MapReduce API Concepts
  • Writing MapReduce Drivers, Mappers, and Reducers in Java
  • Speeding Up Hadoop Development by Using Eclipse
  • Differences Between the Old and New MapReduce APIs

Writing a MapReduce Program Using Streaming

  • Writing Mappers and Reducers with the Streaming API

Unit Testing MapReduce Programs

  • Unit Testing
  • The JUnit and MRUnit Testing Frameworks
  • Writing Unit Tests with MRUnit
  • Running Unit Tests
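
As an illustration of MRUnit in action, the test below drives the TokenizerMapper from the WordCount sketch near the top of this page (a minimal sketch; assumes JUnit 4 and MRUnit on the classpath):

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mrunit.mapreduce.MapDriver;
    import org.junit.Before;
    import org.junit.Test;

    public class TokenizerMapperTest {
      private MapDriver<Object, Text, Text, IntWritable> mapDriver;

      @Before
      public void setUp() {
        mapDriver = MapDriver.newMapDriver(new WordCount.TokenizerMapper());
      }

      @Test
      public void emitsOneCountPerToken() throws Exception {
        // Expected output is checked in order against the mapper's actual output.
        mapDriver.withInput(new LongWritable(0), new Text("cat cat dog"))
            .withOutput(new Text("cat"), new IntWritable(1))
            .withOutput(new Text("cat"), new IntWritable(1))
            .withOutput(new Text("dog"), new IntWritable(1))
            .runTest(); // fails the test if any (key, value) pair differs
      }
    }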

Delving Deeper into the Hadoop API

  • Using the ToolRunner Class (see the skeleton after this list)
  • Setting Up and Tearing Down Mappers and Reducers
  • Decreasing the Amount of Intermediate Data with Combiners
  • Accessing HDFS Programmatically
  • Using The Distributed Cache
  • Using the Hadoop API’s Library of Mappers, Reducers, and Partitioners
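
The ToolRunner skeleton referenced in the list above (a minimal sketch; the job wiring itself is elided):

    import org.apache.hadoop.conf.Configured;
    import org.apache.hadoop.util.Tool;
    import org.apache.hadoop.util.ToolRunner;

    // Extending Configured and implementing Tool lets ToolRunner parse the
    // generic options (-D key=value, -files, -libjars) before run() is called.
    public class MyDriver extends Configured implements Tool {

      @Override
      public int run(String[] args) throws Exception {
        // getConf() already contains any -D overrides from the command line.
        System.out.println("mapreduce.job.reduces = "
            + getConf().get("mapreduce.job.reduces", "(default)"));
        // ... build and submit a Job here using getConf() ...
        return 0;
      }

      public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new MyDriver(), args));
      }
    }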

Practical Development Tips and Techniques

  • Strategies for Debugging MapReduce Code
  • Testing MapReduce Code Locally by Using LocalJobRunner
  • Writing and Viewing Log Files
  • Retrieving Job Information with Counters
  • Reusing Objects
  • Creating Map-Only MapReduce Jobs

Partitioners and Reducers

  • How Partitioners and Reducers Work Together
  • Determining the Optimal Number of Reducers for a Job
  • Writing Custom Partitioners

Data Input and Output

  • Creating Custom Writable and Writable-Comparable Implementations
  • Saving Binary Data Using SequenceFile and Avro Data Files
  • Issues to Consider When Using File Compression
  • Implementing Custom InputFormats and OutputFormats
  • Common MapReduce Algorithms
    • Sorting and Searching Large Data Sets
    • Indexing Data
    • Computing Term Frequency — Inverse Document Frequency
    • Calculating Word Co-Occurrence
    • Performing Secondary Sort
  • Joining Data Sets in MapReduce Jobs
    • Writing a Map-Side Join
    • Writing a Reduce-Side Join
  • Integrating Hadoop into the Enterprise Workflow
    • Integrating Hadoop into an Existing Enterprise
    • Loading Data from an RDBMS into HDFS by Using Sqoop
    • Managing Real-Time Data Using Flume
    • Accessing HDFS from Legacy Systems with FuseDFS and HttpFS
  • An Introduction to Hive, Impala, and Pig
    • The Motivation for Hive, Impala, and Pig
    • Hive Overview
    • Impala Overview
    • Pig Overview
    • Choosing Between Hive, Impala, and Pig
  • An Introduction to Oozie
    • Introduction to Oozie
    • Creating Oozie Workflows
ambari Apache Ambari: Efficiently manage Hadoop clusters 21 hours

Apache Ambari is an open-source management platform for provisioning, managing, monitoring and securing Apache Hadoop clusters.

In this instructor-led live training participants will learn the management tools and practices provided by Ambari to successfully manage Hadoop clusters.

By the end of this training, participants will be able to:

  • Set up a live Big Data cluster using Ambari
  • Apply Ambari's advanced features and functionalities to various use cases
  • Seamlessly add and remove nodes as needed
  • Improve a Hadoop cluster's performance through tuning and tweaking

Audience

  • DevOps
  • System Administrators
  • DBAs
  • Hadoop testing professionals

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice

To request a customized course outline for this training, please contact us.

hadoopmapr Hadoop Administration on MapR 28 hours

Audience:

This course is intended to demystify Big Data / Hadoop technology and to show that it is not difficult to understand.

Big Data Overview:

  • What is Big Data
  • Why Big Data is gaining popularity
  • Big Data Case Studies
  • Big Data Characteristics
  • Solutions to work on Big Data.

Hadoop & Its components:

  • What is Hadoop and what are its components?
  • Hadoop architecture and the characteristics of the data it can handle/process.
  • A brief history of Hadoop, the companies using it and why they started using it.
  • The Hadoop framework and its components, explained in detail.
  • What is HDFS, and reads/writes to the Hadoop Distributed File System.
  • How to set up a Hadoop cluster in different modes: standalone, pseudo-distributed, and multi-node cluster.

(This includes setting up a Hadoop cluster in VirtualBox/KVM/VMware, network configurations that need to be carefully considered, running Hadoop daemons and testing the cluster.)

  • What the MapReduce framework is and how it works.
  • Running MapReduce jobs on a Hadoop cluster.
  • Understanding replication, mirroring and rack awareness in the context of Hadoop clusters.

Hadoop Cluster Planning:

  • How to plan your Hadoop cluster.
  • Understanding hardware and software requirements when planning your Hadoop cluster.
  • Understanding workloads and planning a cluster to avoid failures and perform optimally.

What is MapR and why MapR:

  • Overview of MapR and its architecture.
  • Understanding and working with the MapR Control System, MapR volumes, snapshots and mirrors.
  • Planning a cluster in context of MapR.
  • Comparison of MapR with other distributions and Apache Hadoop.
  • MapR installation and cluster deployment.

Cluster Setup & Administration:

  • Managing services, nodes, snapshots, mirror volumes and remote clusters.
  • Understanding and managing Nodes.
  • Understanding of Hadoop components, Installing Hadoop components alongside MapR Services.
  • Accessing data on the cluster, including via NFS; managing services and nodes.
  • Managing data using volumes; managing users and groups; managing and assigning roles to nodes; commissioning and decommissioning nodes; cluster administration and performance monitoring; configuring, analyzing and monitoring metrics; and configuring and administering MapR security.
  • Understanding and working with M7, native storage for MapR tables.
  • Cluster configuration and tuning for optimum performance.

Cluster upgrade and integration with other setups:

  • Upgrading the MapR software version, and types of upgrades.
  • Configuring a MapR cluster to access an HDFS cluster.
  • Setting up a MapR cluster on Amazon Elastic MapReduce.

All the above topics include demonstrations and practice sessions so that learners gain hands-on experience with the technology.

kylin Apache Kylin: From classic OLAP to real-time data warehouse 14 hours

Apache Kylin is an extreme-scale, distributed analytics engine for big data.

In this instructor-led live training, participants will learn how to use Apache Kylin to set up a real-time data warehouse.

By the end of this training, participants will be able to:

  • Consume real-time streaming data using Kylin
  • Utilize Apache Kylin's powerful features, including snowflake schema support, a rich SQL interface, spark cubing and subsecond query latency

Note

  • We use the latest version of Kylin (as of this writing, Apache Kylin v2.0)

Audience

  • Big data engineers
  • Big Data analysts

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice

To request a customized course outline for this training, please contact us.

hadoopdeva Advanced Hadoop for Developers 21 hours

Apache Hadoop is one of the most popular frameworks for processing Big Data on clusters of servers. This course delves into data management in HDFS, advanced Pig, Hive, and HBase.  These advanced programming techniques will be beneficial to experienced Hadoop developers.

Audience: developers

Duration: three days

Format: lectures (50%) and hands-on labs (50%).

 

Section 1: Data Management in HDFS

  • Various Data Formats (JSON / Avro / Parquet)
  • Compression Schemes
  • Data Masking
  • Labs : Analyzing different data formats;  enabling compression

Section 2: Advanced Pig

  • User-defined Functions
  • Introduction to Pig Libraries (ElephantBird / Data-Fu)
  • Loading Complex Structured Data using Pig
  • Pig Tuning
  • Labs : advanced pig scripting, parsing complex data types

Section 3 : Advanced Hive

  • User-defined Functions
  • Compressed Tables
  • Hive Performance Tuning
  • Labs : creating compressed tables, evaluating table formats and configuration

Section 4 : Advanced HBase

  • Advanced Schema Modelling
  • Compression
  • Bulk Data Ingest
  • Wide-table / Tall-table comparison
  • HBase and Pig
  • HBase and Hive
  • HBase Performance Tuning
  • Labs : tuning HBase; accessing HBase data from Pig & Hive; Using Phoenix for data modeling
hdp Hortonworks Data Platform (HDP) for administrators 21 hours

Hortonworks Data Platform is an open-source Apache Hadoop support platform that provides a stable foundation for developing big data solutions on the Apache Hadoop ecosystem.

This instructor-led live training introduces Hortonworks and walks participants through the deployment of a Spark + Hadoop solution.

By the end of this training, participants will be able to:

  • Use Hortonworks to reliably run Hadoop at a large scale
  • Unify Hadoop's security, governance, and operations capabilities with Spark's agile analytic workflows.
  • Use Hortonworks to investigate, validate, certify and support each of the components in a Spark project
  • Process different types of data, including structured, unstructured, in-motion, and at-rest.

Audience

  • Hadoop administrators

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice

To request a customized course outline for this training, please contact us.

 

hadoopdev Hadoop for Developers (4 days) 28 hours

Apache Hadoop is the most popular framework for processing Big Data on clusters of servers. This course will introduce developers to the various components (HDFS, MapReduce, Pig, Hive and HBase) of the Hadoop ecosystem.

 

Section 1: Introduction to Hadoop

  • hadoop history, concepts
  • eco system
  • distributions
  • high level architecture
  • hadoop myths
  • hadoop challenges
  • hardware / software
  • lab : first look at Hadoop

Section 2: HDFS

  • Design and architecture
  • concepts (horizontal scaling, replication, data locality, rack awareness)
  • Daemons : Namenode, Secondary namenode, Data node
  • communications / heart-beats
  • data integrity
  • read / write path
  • Namenode High Availability (HA), Federation
  • labs : Interacting with HDFS

Section 3 : Map Reduce

  • concepts and architecture
  • daemons (MRV1) : jobtracker / tasktracker
  • phases : driver, mapper, shuffle/sort, reducer
  • Map Reduce Version 1 and Version 2 (YARN)
  • Internals of Map Reduce
  • Introduction to Java Map Reduce program
  • labs : Running a sample MapReduce program

Section 4 : Pig

  • pig vs java map reduce
  • pig job flow
  • pig latin language
  • ETL with Pig
  • Transformations & Joins
  • User defined functions (UDF)
  • labs : writing Pig scripts to analyze data

Section 5: Hive

  • architecture and design
  • data types
  • SQL support in Hive
  • Creating Hive tables and querying
  • partitions
  • joins
  • text processing
  • labs : various labs on processing data with Hive

Section 6: HBase

  • concepts and architecture
  • hbase vs RDBMS vs cassandra
  • HBase Java API
  • Time series data on HBase
  • schema design
  • labs : Interacting with HBase using shell;   programming in HBase Java API ; Schema design exercise
alluxio Alluxio: Unifying disparate storage systems 7 hours

Alluxio is an open-source virtual distributed storage system that unifies disparate storage systems and enables applications to interact with data at memory speed. It is used by companies such as Intel, Baidu and Alibaba.

In this instructor-led, live training, participants will learn how to use Alluxio to bridge different computation frameworks with storage systems and efficiently manage multi-petabyte scale data as they step through the creation of an application with Alluxio.

By the end of this training, participants will be able to:

  • Develop an application with Alluxio
  • Connect big data systems and applications while preserving one namespace
  • Efficiently extract value from big data in any storage format
  • Improve workload performance
  • Deploy and manage Alluxio standalone or clustered
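
For a feel of the programming model, below is a minimal write/read round trip sketched against the Alluxio 1.x-style Java file API; the path is a placeholder and a running local Alluxio master is assumed.

    import alluxio.AlluxioURI;
    import alluxio.client.file.FileInStream;
    import alluxio.client.file.FileOutStream;
    import alluxio.client.file.FileSystem;

    public class AlluxioHello {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.Factory.get();          // configured via alluxio-site.properties
        AlluxioURI path = new AlluxioURI("/greeting.txt"); // placeholder path

        try (FileOutStream out = fs.createFile(path)) {
          out.write("hello from Alluxio".getBytes());      // lands in the memory tier first
        }
        try (FileInStream in = fs.openFile(path)) {
          byte[] buf = new byte[64];
          int n = in.read(buf);
          System.out.println(new String(buf, 0, n));
        }
      }
    }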

Audience

  • Data scientist
  • Developer
  • System administrator

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice

To request a customized course outline for this training, please contact us.

 

hadoopba Hadoop for Business Analysts 21 hours

Apache Hadoop is the most popular framework for processing Big Data. Hadoop provides rich and deep analytics capabilities, and it is making inroads into the traditional BI analytics world. This course will introduce analysts to the core components of the Hadoop ecosystem and its analytics.

Audience

Business Analysts

Duration

three days

Format

Lectures and hands on labs.

  • Section 1: Introduction to Hadoop
    • hadoop history, concepts
    • eco system
    • distributions
    • high level architecture
    • hadoop myths
    • hadoop challenges
    • hardware / software
    • Labs : first look at Hadoop
  • Section 2: HDFS Overview
    • concepts (horizontal scaling, replication, data locality, rack awareness)
    • architecture (Namenode, Secondary namenode, Data node)
    • data integrity
    • future of HDFS : Namenode HA, Federation
    • labs : Interacting with HDFS
  • Section 3 : Map Reduce Overview
    • mapreduce concepts
    • daemons : jobtracker / tasktracker
    • phases : driver, mapper, shuffle/sort, reducer
    • Thinking in map reduce
    • Future of mapreduce (yarn)
    • labs : Running a Map Reduce program
  • Section 4 : Pig
    • pig vs java map reduce
    • pig latin language
    • user defined functions
    • understanding pig job flow
    • basic data analysis with Pig
    • complex data analysis with Pig
    • multi datasets with Pig
    • advanced concepts
    • lab : writing pig scripts to analyze / transform data
  • Section 5: Hive
    • hive concepts
    • architecture
    • SQL support in Hive
    • data types
    • table creation and queries
    • Hive data management
    • partitions & joins
    • text analytics
    • labs (multiple) : creating Hive tables and running queries, joins , using partitions, using text analytics functions
  • Section 6: BI Tools for Hadoop
    • BI tools and Hadoop
    • Overview of current BI tools landscape
    • Choosing the best tool for the job
tigon Tigon: Real-time streaming for the real world 14 hours

Tigon is an open-source, real-time, low-latency, high-throughput, native YARN, stream processing framework that sits on top of HDFS and HBase for persistence. Tigon applications address use cases such as network intrusion detection and analytics, social media market analysis, location analytics, and real-time recommendations to users.

This instructor-led, live training introduces Tigon's approach to blending real-time and batch processing as it walks participants through the creation of a sample application.

By the end of this training, participants will be able to:

  • Create powerful, stream processing applications for handling large volumes of data
  • Process stream sources such as Twitter and Webserver Logs
  • Use Tigon for rapid joining, filtering, and aggregating of streams

Audience

  • Developers

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice

To request a customized course outline for this training, please contact us.

hadoopadm1 Hadoop For Administrators 21 hours

Apache Hadoop is the most popular framework for processing Big Data on clusters of servers. In this three-day (optionally four-day) course, attendees will learn about the business benefits and use cases for Hadoop and its ecosystem, how to plan cluster deployment and growth, and how to install, maintain, monitor, troubleshoot and optimize Hadoop. They will also practice cluster bulk data loads, get familiar with various Hadoop distributions, and practice installing and managing Hadoop ecosystem tools. The course finishes off with a discussion of securing the cluster with Kerberos.

“…The materials were very well prepared and covered thoroughly. The Lab was very helpful and well organized”
— Andrew Nguyen, Principal Integration DW Engineer, Microsoft Online Advertising

Audience

Hadoop administrators

Format

Lectures and hands-on labs, approximate balance 60% lectures, 40% labs.

  • Introduction
    • Hadoop history, concepts
    • Ecosystem
    • Distributions
    • High level architecture
    • Hadoop myths
    • Hadoop challenges (hardware / software)
    • Labs: discuss your Big Data projects and problems
  • Planning and installation
    • Selecting software, Hadoop distributions
    • Sizing the cluster, planning for growth
    • Selecting hardware and network
    • Rack topology
    • Installation
    • Multi-tenancy
    • Directory structure, logs
    • Benchmarking
    • Labs: cluster install, run performance benchmarks
  • HDFS operations
    • Concepts (horizontal scaling, replication, data locality, rack awareness)
    • Nodes and daemons (NameNode, Secondary NameNode, HA Standby NameNode, DataNode)
    • Health monitoring
    • Command-line and browser-based administration
    • Adding storage, replacing defective drives
    • Labs: getting familiar with HDFS command lines
  • Data ingestion
    • Flume for logs and other data ingestion into HDFS
    • Sqoop for importing from SQL databases to HDFS, as well as exporting back to SQL
    • Hadoop data warehousing with Hive
    • Copying data between clusters (distcp)
    • Using S3 as a complement to HDFS
    • Data ingestion best practices and architectures
    • Labs: setting up and using Flume, the same for Sqoop
  • MapReduce operations and administration
    • Parallel computing before mapreduce: compare HPC vs Hadoop administration
    • MapReduce cluster loads
    • Nodes and Daemons (JobTracker, TaskTracker)
    • MapReduce UI walk through
    • Mapreduce configuration
    • Job config
    • Optimizing MapReduce
    • Fool-proofing MR: what to tell your programmers
    • Labs: running MapReduce examples
  • YARN: new architecture and new capabilities
    • YARN design goals and implementation architecture
    • New actors: ResourceManager, NodeManager, Application Master
    • Installing YARN
    • Job scheduling under YARN
    • Labs: investigate job scheduling
  • Advanced topics
    • Hardware monitoring
    • Cluster monitoring
    • Adding and removing servers, upgrading Hadoop
    • Backup, recovery and business continuity planning
    • Oozie job workflows
    • Hadoop high availability (HA)
    • Hadoop Federation
    • Securing your cluster with Kerberos
    • Labs: set up monitoring
  • Optional tracks
    • Cloudera Manager for cluster administration, monitoring, and routine tasks; installation, use. In this track, all exercises and labs are performed within the Cloudera distribution environment (CDH5)
    • Ambari for cluster administration, monitoring, and routine tasks; installation, use. In this track, all exercises and labs are performed within the Ambari cluster manager and Hortonworks Data Platform (HDP 2.0)
datameer Datameer for Data Analysts 14 hours

Datameer is a business intelligence and analytics platform built on Hadoop. It allows end-users to access, explore and correlate large-scale, structured, semi-structured and unstructured data in an easy-to-use fashion.

In this instructor-led, live training, participants will learn how to use Datameer to overcome Hadoop's steep learning curve as they step through the setup and analysis of a series of big data sources.

By the end of this training, participants will be able to:

  • Create, curate, and interactively explore an enterprise data lake
  • Access business intelligence data warehouses, transactional databases and other analytic stores
  • Use a spreadsheet user-interface to design end-to-end data processing pipelines
  • Access pre-built functions to explore complex data relationships
  • Use drag-and-drop wizards to visualize data and create dashboards
  • Use tables, charts, graphs, and maps to analyze query results

Audience

  • Data analysts

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice

To request a customized course outline for this training, please contact us.

hbasedev HBase for Developers 21 hours

This course introduces HBase – a NoSQL store on top of Hadoop.  The course is intended for developers who will be using HBase to develop applications,  and administrators who will manage HBase clusters.

We will walk developers through HBase architecture, data modelling and application development on HBase. The course also discusses using MapReduce with HBase and some administration topics related to performance optimization. The course is very hands-on, with lots of lab exercises.


Duration : 3 days

Audience : Developers  & Administrators

  • Section 1: Introduction to Big Data & NoSQL
    • Big Data ecosystem
    • NoSQL overview
    • CAP theorem
    • When is NoSQL appropriate
    • Columnar storage
    • HBase and NoSQL
  • Section 2 : HBase Intro
    • Concepts and Design
    • Architecture (HMaster and Region Server)
    • Data integrity
    • HBase ecosystem
    • Lab : Exploring HBase
  • Section 3 : HBase Data model
    • Namespaces, Tables and Regions
    • Rows, columns, column families, versions
    • HBase Shell and Admin commands
    • Lab : HBase Shell
  • Section 4 : Accessing HBase using Java API (a sketch follows this outline)
    • Introduction to Java API
    • Read / Write path
    • Time Series data
    • Scans
    • Map Reduce
    • Filters
    • Counters
    • Co-processors
    • Labs (multiple) : Using HBase Java API to implement  time series , Map Reduce, Filters and counters.
  • Section 5 : HBase Schema Design : Group session
    • students are presented with real world use cases
    • students work in groups to come up with design solutions
    • discuss / critique and learn from multiple designs
    • Labs : implement a scenario in HBase
  • Section 6 : HBase Internals
    • Understanding HBase under the hood
    • Memfile / HFile / WAL
    • HDFS storage
    • Compactions
    • Splits
    • Bloom Filters
    • Caches
    • Diagnostics
  • Section 7 : HBase installation and configuration
    • hardware selection
    • install methods
    • common configurations
    • Lab : installing HBase
  • Section 8 : HBase eco-system
    • developing applications using HBase
    • interacting with other Hadoop stack (MapReduce, Pig, Hive)
    • frameworks around HBase
    • advanced concepts (co-processors)
    • Labs : writing HBase applications
  • Section 9 : Monitoring And Best Practices
    • monitoring tools and practices
    • optimizing HBase
    • HBase in the cloud
    • real world use cases of HBase
    • Labs : checking HBase vitals
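
As a preview of the Java API material above, here is a minimal put/get round trip against the standard org.apache.hadoop.hbase.client API; the table, column family and row key are placeholders.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class HBaseHello {
      public static void main(String[] args) throws Exception {
        Configuration config = HBaseConfiguration.create(); // reads hbase-site.xml
        try (Connection conn = ConnectionFactory.createConnection(config);
             Table table = conn.getTable(TableName.valueOf("sensor_readings"))) { // placeholder

          // Write one cell: row key, column family, qualifier, value.
          Put put = new Put(Bytes.toBytes("device-42#2017-12-04"));
          put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("temp"), Bytes.toBytes("21.5"));
          table.put(put);

          // Read it back.
          Result result = table.get(new Get(Bytes.toBytes("device-42#2017-12-04")));
          byte[] value = result.getValue(Bytes.toBytes("d"), Bytes.toBytes("temp"));
          System.out.println(Bytes.toString(value));
        }
      }
    }
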
nifi Apache NiFi for Administrators 21 hours

Apache NiFi (Hortonworks DataFlow) is a real-time integrated data logistics and simple event processing platform that enables the moving, tracking and automation of data between systems. It is written using flow-based programming and provides a web-based user interface to manage dataflows in real time.

In this instructor-led, live training, participants will learn how to deploy and manage Apache NiFi in a live lab environment.

By the end of this training, participants will be able to:

  • Install and configure Apache NiFi
  • Source, transform and manage data from disparate, distributed data sources, including databases and big data lakes
  • Automate dataflows
  • Enable streaming analytics
  • Apply various approaches for data ingestion
  • Transform Big Data into business insights

Audience

  • System administrators
  • Data engineers
  • Developers
  • DevOps

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice

Introduction to Apache NiFi   
    Data at rest vs data in motion

Overview of big data and Apache Hadoop
    HDFS and MapReduce architecture

Installing and configuring NiFi

Cluster integration

NiFi FlowFile Processor

NiFi Flow Controller

Database aggregating, splitting and transforming

Troubleshooting

Closing remarks

hivehiveql Data Analysis with Hive/HiveQL 7 hours

This course covers how to use the Hive SQL language (a.k.a. Hive HQL, SQL on Hive, HiveQL) for people who extract data from Hive.

Hive Overview

  • Architecture and design
  • Data types
  • SQL support in Hive
  • Creating Hive tables and querying
  • Partitions
  • Joins
  • Text processing
  • labs : various labs on processing data with Hive

DQL (Data Query Language) in Detail

  • SELECT clause
  • Column aliases
  • Table aliases
  • Date types and Date functions
  • Group function
  • Table joins
  • JOIN clause
  • UNION operator
  • Nested queries
  • Correlated subqueries
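
To run the HiveQL constructs above from a program, Hive ships a JDBC driver for HiveServer2. A minimal sketch, assuming a local HiveServer2 on the default port; the tables, columns and credentials are placeholders.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class HiveQuery {
      public static void main(String[] args) throws Exception {
        // Requires the hive-jdbc driver on the classpath; HiveServer2 defaults to port 10000.
        String url = "jdbc:hive2://localhost:10000/default";
        try (Connection conn = DriverManager.getConnection(url, "hive", "");
             Statement stmt = conn.createStatement()) {

          // A join with column aliases and a GROUP BY, as covered above.
          ResultSet rs = stmt.executeQuery(
              "SELECT o.customer_id AS cust, COUNT(*) AS orders " +
              "FROM orders o JOIN customers c ON o.customer_id = c.id " +
              "GROUP BY o.customer_id");
          while (rs.next()) {
            System.out.println(rs.getString("cust") + "\t" + rs.getLong("orders"));
          }
        }
      }
    }
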
nifidev Apache NiFi for Developers 7 hours

Apache NiFi (Hortonworks DataFlow) is a real-time integrated data logistics and simple event processing platform that enables the moving, tracking and automation of data between systems. It is written using flow-based programming and provides a web-based user interface to manage dataflows in real time.

In this instructor-led, live training, participants will learn the fundamentals of flow-based programming as they develop a number of demo extensions, components and processors using Apache NiFi.

By the end of this training, participants will be able to:

  • Understand NiFi's architecture and dataflow concepts
  • Develop extensions using NiFi and third-party APIs
  • Develop their own custom Apache NiFi processor (a skeleton follows this list)
  • Ingest and process real-time data from disparate and uncommon file formats and data sources
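
To give a flavor of the custom-processor topic, here is a stripped-down processor skeleton against the org.apache.nifi.processor API; the class name, attribute and single success relationship are illustrative, and a real processor would also declare property descriptors.

    import java.util.Collections;
    import java.util.Set;

    import org.apache.nifi.flowfile.FlowFile;
    import org.apache.nifi.processor.AbstractProcessor;
    import org.apache.nifi.processor.ProcessContext;
    import org.apache.nifi.processor.ProcessSession;
    import org.apache.nifi.processor.Relationship;
    import org.apache.nifi.processor.exception.ProcessException;

    public class TagFlowFileProcessor extends AbstractProcessor {

      static final Relationship REL_SUCCESS = new Relationship.Builder()
          .name("success")
          .description("FlowFiles that were tagged")
          .build();

      @Override
      public Set<Relationship> getRelationships() {
        return Collections.singleton(REL_SUCCESS);
      }

      @Override
      public void onTrigger(ProcessContext context, ProcessSession session)
          throws ProcessException {
        FlowFile flowFile = session.get();
        if (flowFile == null) {
          return; // nothing queued on the incoming connection
        }
        // Add an attribute, then route the FlowFile downstream.
        flowFile = session.putAttribute(flowFile, "tagged.by", "TagFlowFileProcessor");
        session.transfer(flowFile, REL_SUCCESS);
      }
    }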

Audience

  • Developers
  • Data engineers

Format of the course

  • Part lecture, part discussion, exercises and heavy hands-on practice

Introduction
    Data at rest vs data in motion

Overview of big data tools and technologies
    Hadoop (HDFS and MapReduce) and Spark

Installing and configuring NiFi

Overview of NiFi architecture

Development approaches
    Application development tools and mindset
    Extract, Transform, and Load (ETL) tools and mindset

Design considerations

Components, events, and processor patterns

Exercise: Streaming data feeds into HDFS

Error Handling

Controller Services

Exercise: Ingesting data from IoT devices using web-based APIs

Exercise: Developing a custom Apache NiFi processor using JSON

Testing and troubleshooting

Contributing to Apache NiFi

Closing remarks

HadoopDevAd Hadoop for Developers and Administrators 21 hours

Module 1. Introduction to Hadoop

  • The Hadoop Distributed File System (HDFS)
  • The Read Path and The Write Path
  • Managing Filesystem Metadata
  • The Namenode and the Datanode
  • The Namenode High Availability
  • Namenode Federation
  • The Command-Line Tools
  • Understanding REST Support

Module 2. Introduction to MapReduce

  • Analyzing the Data with Hadoop
  • Map and Reduce Pattern
  • Java MapReduce
  • Scaling Out
  • Data Flow
  • Developing Combiner Functions
  • Running a Distributed MapReduce Job

Module 3. Planning a Hadoop Cluster

  • Picking a Distribution and Version of Hadoop
  • Versions and Features
  • Hardware Selection
  • Master and Worker Hardware Selection
  • Cluster Sizing
  • Operating System Selection and Preparation
  • Deployment Layout
  • Setting up Users, Groups, and Privileges
  • Disk Configuration
  • Network Design

Module 4. Installation and Configuration

  • Installing Hadoop
  • Configuration: An Overview
  • The Hadoop XML Configuration Files
  • Environment Variables and Shell Scripts
  • Logging Configuration
  • Managing HDFS
  • Optimization and Tuning
  • Formatting the Namenode
  • Creating a /tmp Directory
  • Thinking about Namenode High Availability
  • The Fencing Options
  • Automatic Failover Configuration
  • Format and Bootstrap the Namenodes
  • Namenode Federation

Module 5. Understanding Hadoop I/O

  • Data Integrity in HDFS  
  • Understanding Codecs
  • Compression and Input Splits
  • Using Compression in MapReduce
  • The Serialization mechanism
  • File-Based Data Structures
  • The SequenceFile format (a sketch follows this list)
  • Other File Formats and Column-Oriented Formats
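
A short sketch of the SequenceFile format from the list above in use, writing and re-reading key/value pairs; the file path is illustrative.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.Text;

    public class SequenceFileDemo {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path path = new Path("pairs.seq"); // illustrative path (local or HDFS)

        // Write a few (Text, IntWritable) records.
        try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
            SequenceFile.Writer.file(path),
            SequenceFile.Writer.keyClass(Text.class),
            SequenceFile.Writer.valueClass(IntWritable.class))) {
          writer.append(new Text("alpha"), new IntWritable(1));
          writer.append(new Text("beta"), new IntWritable(2));
        }

        // Read them back in order.
        try (SequenceFile.Reader reader = new SequenceFile.Reader(conf,
            SequenceFile.Reader.file(path))) {
          Text key = new Text();
          IntWritable value = new IntWritable();
          while (reader.next(key, value)) {
            System.out.println(key + "\t" + value);
          }
        }
      }
    }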

Module 6. Developing a MapReduce Application

  • The Configuration API 
  • Setting Up the Development Environment
  • Managing Configuration
  • GenericOptionsParser, Tool, and ToolRunner
  • Writing a Unit Test with MRUnit
  • The Mapper and Reducer
  • Running Locally on Test Data 
  • Testing the Driver
  • Running on a Cluster
  • Packaging and Launching a Job
  • The MapReduce Web UI
  • Tuning a Job

Module 7. Identity, Authentication, and Authorization

  • Managing Identity
  • Kerberos and Hadoop
  • Understanding Authorization

Module 8. Resource Management

  • What Is Resource Management?
  • HDFS Quotas
  • MapReduce Schedulers
  • Anatomy of a YARN Application Run
  • Resource Requests
  • Application Lifespan
  • YARN Compared to MapReduce 1
  • Scheduling in YARN
  • Scheduler Options
  • Capacity Scheduler Configuration
  • Fair Scheduler Configuration
  • Delay Scheduling
  • Dominant Resource Fairness

Module 9. MapReduce Types and Formats

  • MapReduce Types
  • The Default MapReduce Job
  • Defining the Input Formats
  • Managing Input Splits and Records
  • Text Input and Binary Input
  • Managing Multiple Inputs
  • Database Input (and Output)
  • Output Formats
  • Text Output and Binary Output
  • Managing Multiple Outputs
  • The Database Output

Module 10. Using MapReduce Features

  • Using Counters
  • Reading Built-in Counters
  • User-Defined Java Counters
  • Understanding Sorting
  • Using the Distributed Cache

Module 11. Cluster Maintenance and Troubleshooting

  • Managing Hadoop Processes
  • Starting and Stopping Processes with Init Scripts
  • Starting and Stopping Processes Manually
  • HDFS Maintenance Tasks
  • Adding a Datanode
  • Decommissioning a Datanode
  • Checking Filesystem Integrity with fsck
  • Balancing HDFS Block Data
  • Dealing with a Failed Disk
  • MapReduce Maintenance Tasks 
  • Killing a MapReduce Job
  • Killing a MapReduce Task
  • Managing Resource Exhaustion

Module 12. Monitoring

  • The available Hadoop Metrics
  • The role of SNMP
  • Health Monitoring
  • Host-Level Checks
  • HDFS Checks
  • MapReduce Checks

Module 13. Backup and Recovery

  • Data Backup
  • Distributed Copy (distcp)
  • Parallel Data Ingestion
  • Namenode Metadata
IntroToAvro Apache Avro: Data serialization for distributed applications 14 hours

This course is intended for

  • Developers

Format of the course

  • Lectures, hands-on practice, small tests along the way to gauge understanding

Principles of distributed computing

  • Apache Spark
  • Hadoop

Principles of data serialization

  • How data objects are passed over the network
  • Serialization of objects
  • Serialization approaches
    • Thrift
    • Protocol Buffers
    • Apache Avro
      • data structure
      • size, speed, format characteristics
      • persistent data storage
      • integration with dynamic languages
      • dynamic typing
      • schemas
        • untagged data
        • change management
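
To ground the points above about schemas and untagged data, here is a write/read round trip using Avro's generic API; the User schema is a made-up example.

    import java.io.File;

    import org.apache.avro.Schema;
    import org.apache.avro.file.DataFileReader;
    import org.apache.avro.file.DataFileWriter;
    import org.apache.avro.generic.GenericData;
    import org.apache.avro.generic.GenericDatumReader;
    import org.apache.avro.generic.GenericDatumWriter;
    import org.apache.avro.generic.GenericRecord;

    public class AvroRoundTrip {
      public static void main(String[] args) throws Exception {
        // The schema travels with the file, so the binary records themselves are untagged.
        Schema schema = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"User\",\"fields\":[" +
            "{\"name\":\"name\",\"type\":\"string\"}," +
            "{\"name\":\"age\",\"type\":\"int\"}]}");

        GenericRecord user = new GenericData.Record(schema);
        user.put("name", "Ada");
        user.put("age", 36);

        File file = new File("users.avro");
        try (DataFileWriter<GenericRecord> writer =
                 new DataFileWriter<>(new GenericDatumWriter<GenericRecord>(schema))) {
          writer.create(schema, file);
          writer.append(user);
        }

        try (DataFileReader<GenericRecord> reader =
                 new DataFileReader<>(file, new GenericDatumReader<GenericRecord>())) {
          for (GenericRecord rec : reader) {
            System.out.println(rec.get("name") + " is " + rec.get("age"));
          }
        }
      }
    }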

Data serialization and distributed computing

  • Avro as a subproject of Hadoop
    • Java serialization
    • Hadoop serialization
    • Avro serialization

Using Avro with

  • Hive (AvroSerDe)
  • Pig (AvroStorage)

Porting Existing RPC Frameworks
