Online or onsite, instructor-led live Stream Processing training courses demonstrate through interactive discussion and hands-on practice the fundamentals and advanced topics of Stream Processing.
Stream Processing training is available as "online live training" or "onsite live training". Online live training (aka "remote live training") is carried out by way of an interactive, remote desktop. Onsite live Stream Processing training can be carried out locally on customer premises in Boston or in NobleProg corporate training centers in Boston.
NobleProg -- Your Local Training Provider
MA, Boston - Federal Street
101 Federal Street, Suite 1900, Boston, MA 02110, United States
The office in Boston's Federal Street is a recently renovated office complex set in the heart of the financial district, just one block from Congress Street. Located on the 16th and 19th floors, the office offers spectacular views of the city. Finance and tourism are some of the prime businesses in the state capital, which is the largest city in New England. With its many colleges and universities, Boston is also regarded as an international center of higher education, with students contributing more than $4 billion to the economy annually. It has a strong reputation for medicine, biotechnology and research. It is ranked in the top 20 of global financial centers and number one for innovation. Boston also has an historic seaport which supports tourism, industry and fishing.
This instructor-led, live training (online or onsite) is aimed at engineers who wish to use Confluent (a distribution of Kafka) to build and manage a real-time data processing platform for their applications.
By the end of this training, participants will be able to:
Install and configure Confluent Platform.
Use Confluent's management tools and services to run Kafka more easily.
Store and process incoming stream data.
Optimize and manage Kafka clusters.
Secure data streams.
Format of the Course
Interactive lecture and discussion.
Lots of exercises and practice.
Hands-on implementation in a live-lab environment.
Course Customization Options
This course is based on the open source version of Confluent: Confluent Open Source.
To request a customized training for this course, please contact us to arrange.
In this instructor-led, live training in Boston (onsite or remote), participants will learn how to set up and integrate different Stream Processing frameworks with existing big data storage systems and related software applications and microservices.
By the end of this training, participants will be able to:
Install and configure different Stream Processing frameworks, such as Spark Streaming and Kafka Streaming.
Understand and select the most appropriate framework for the job.
Process data continuously, concurrently, and in a record-by-record fashion.
Integrate Stream Processing solutions with existing databases, data warehouses, data lakes, etc.
Integrate the most appropriate stream processing library with enterprise applications and microservices.
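The "record-by-record" processing model named above can be illustrated framework-free with a plain Python generator pipeline; the function names (`parse`, `filter_above`) and the CSV record shape are invented for illustration, not part of any framework's API:

```python
def parse(lines):
    # Turn raw text records into (sensor, reading) tuples, one at a time.
    for line in lines:
        sensor, value = line.split(",")
        yield sensor, float(value)

def filter_above(records, threshold):
    # Pass through only readings above the threshold, record by record.
    for sensor, value in records:
        if value > threshold:
            yield sensor, value

# Simulated unbounded input; in Spark Streaming or Kafka Streams this
# would be a live topic or socket source instead of a list.
raw = ["s1,10.5", "s2,99.9", "s1,3.2"]
alerts = list(filter_above(parse(raw), threshold=50.0))
print(alerts)  # [('s2', 99.9)]
```

Because each stage pulls one record at a time, nothing is buffered in full — the same property the stream processing frameworks above provide, plus distribution and fault tolerance.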
This instructor-led, live training in Boston (online or onsite) is aimed at data engineers, data scientists, and programmers who wish to use Apache Kafka features in data streaming with Python.
By the end of this training, participants will be able to use Apache Kafka to monitor and manage conditions in continuous data streams using Python programming.
Kafka Streams is a client-side library for building applications and microservices whose data is passed to and from a Kafka messaging system. Traditionally, Apache Kafka has relied on Apache Spark or Apache Storm to process data between message producers and consumers. By calling the Kafka Streams API from within an application, data can be processed directly within Kafka, bypassing the need for sending the data to a separate cluster for processing.
In this instructor-led, live training, participants will learn how to integrate Kafka Streams into a set of sample Java applications that pass data to and from Apache Kafka for stream processing.
By the end of this training, participants will be able to:
Understand Kafka Streams features and advantages over other stream processing frameworks
Process stream data directly within a Kafka cluster
Write a Java or Scala application or microservice that integrates with Kafka and Kafka Streams
Write concise code that transforms input Kafka topics into output Kafka topics
Build, package and deploy the application
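Kafka Streams itself is a Java library, but the word-count topology it is best known for — transforming an input topic into an output topic of running counts — can be sketched language-neutrally. This is a broker-free Python stand-in for that topology, not the Kafka Streams API:

```python
from collections import Counter

def word_count_topology(input_topic):
    """Mimic a Kafka Streams flatMap -> groupBy -> count topology.

    input_topic: iterable of string records (an in-memory stand-in
    for a real Kafka topic). Returns the changelog of count updates
    that Kafka Streams would emit to its output topic.
    """
    counts = Counter()   # stand-in for a Kafka Streams state store
    output_topic = []
    for record in input_topic:
        for word in record.lower().split():        # flatMap
            counts[word] += 1                      # groupBy + count
            output_topic.append((word, counts[word]))  # emit update
    return output_topic

updates = word_count_topology(["hello kafka", "hello streams"])
print(updates)  # [('hello', 1), ('kafka', 1), ('hello', 2), ('streams', 1)]
```

In the real library the state store and changelog topic live inside the Kafka cluster, which is what lets the processing happen "directly within Kafka".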
Audience
Developers
Format of the course
Part lecture, part discussion, exercises and heavy hands-on practice
Notes
To request a customized training for this course, please contact us to arrange.
Apache Samza is an open-source near-realtime, asynchronous computational framework for stream processing. It uses Apache Kafka for messaging, and Apache Hadoop YARN for fault tolerance, processor isolation, security, and resource management.
This instructor-led, live training introduces the principles behind messaging systems and distributed stream processing, while walking participants through the creation of a sample Samza-based project and job execution.
By the end of this training, participants will be able to:
Use Samza to simplify the code needed to produce and consume messages.
Decouple the handling of messages from an application.
Use Samza to implement near-realtime asynchronous computation.
Use stream processing to provide a higher level of abstraction over messaging systems.
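The "decouple the handling of messages from an application" point can be illustrated with a stdlib queue standing in for a Kafka topic and a handler loop standing in for a Samza task; all names here are illustrative, not Samza's API:

```python
import queue
import threading

messages = queue.Queue()   # stand-in for a Kafka topic partition
results = []

def process_task():
    # Stand-in for a Samza StreamTask: consumes and handles messages
    # independently of whoever produced them.
    while True:
        msg = messages.get()
        if msg is None:        # shutdown sentinel
            break
        results.append(msg.upper())

worker = threading.Thread(target=process_task)
worker.start()

# The producing side knows nothing about the handler.
for msg in ["order-placed", "order-shipped"]:
    messages.put(msg)
messages.put(None)
worker.join()
print(results)  # ['ORDER-PLACED', 'ORDER-SHIPPED']
```

Samza adds what this sketch lacks: durable messaging via Kafka and restartable, YARN-managed tasks, so the handler survives process failures.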
Audience
Developers
Format of the course
Part lecture, part discussion, exercises and heavy hands-on practice
This instructor-led, live training in Boston (online or onsite) is aimed at data engineers, data scientists, and programmers who wish to use Spark Streaming features in processing and analyzing real-time data.
By the end of this training, participants will be able to use Spark Streaming to process live data streams for use in databases, filesystems, and live dashboards.
This instructor-led, live training in Boston (online or onsite) is aimed at developers who wish to implement Apache Kafka stream processing without writing code.
By the end of this training, participants will be able to:
Install and configure Confluent KSQL.
Set up a stream processing pipeline using only SQL commands (no Java or Python coding).
Carry out data filtering, transformations, aggregations, joins, windowing, and sessionization entirely in SQL.
Design and deploy interactive, continuous queries for streaming ETL and real-time analytics.
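A pipeline of the kind described — declaring a stream over a Kafka topic and aggregating it with SQL alone — looks roughly like this in KSQL (the stream, topic, and column names are invented for illustration):

```sql
-- Declare a stream over an existing Kafka topic
CREATE STREAM pageviews (user_id VARCHAR, page VARCHAR)
  WITH (KAFKA_TOPIC='pageviews', VALUE_FORMAT='JSON');

-- Continuous query: count views per user in 1-minute tumbling windows
CREATE TABLE views_per_minute AS
  SELECT user_id, COUNT(*) AS views
  FROM pageviews
  WINDOW TUMBLING (SIZE 1 MINUTE)
  GROUP BY user_id;
```

The second statement is a persistent query: KSQL keeps it running against the live topic and maintains the result table continuously, with no Java or Python code involved.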
This instructor-led, live training in Boston (online or onsite) is aimed at developers who wish to learn the principles behind persistent and pure in-memory storage as they step through the creation of a sample in-memory computing project.
By the end of this training, participants will be able to:
Use Ignite for in-memory, on-disk persistence as well as a purely distributed in-memory database.
Achieve persistence without syncing data back to a relational database.
Use Ignite to carry out SQL and distributed joins.
Improve performance by moving data closer to the CPU, using RAM as storage.
Spread data sets across a cluster to achieve horizontal scalability.
Integrate Ignite with RDBMS, NoSQL, Hadoop and machine learning processors.
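The "spread data sets across a cluster" point rests on partitioning by key, which data grids like Ignite do under the hood. A minimal stand-alone sketch of the idea (the node names and modulo scheme are invented; Ignite actually uses a rendezvous-hashing affinity function):

```python
NODES = ["node-a", "node-b", "node-c"]

def owner(key, nodes=NODES):
    # Simple modulo partitioning by key hash: each key maps
    # deterministically to exactly one owning node.
    return nodes[hash(key) % len(nodes)]

cluster = {node: {} for node in NODES}  # each node's in-memory partition

def put(key, value):
    cluster[owner(key)][key] = value

def get(key):
    return cluster[owner(key)].get(key)

for i in range(100):
    put(f"k{i}", i)

print(get("k42"))  # 42 -- looked up only on the owning node
print(sum(len(part) for part in cluster.values()))  # 100 keys across the cluster
```

Adding nodes grows total RAM and spreads the keys further — the horizontal scalability the bullet list refers to.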
Apache Beam is an open source, unified programming model for defining and executing parallel data processing pipelines. Its power lies in its ability to run both batch and streaming pipelines, with execution carried out by one of Beam's supported distributed processing back-ends: Apache Apex, Apache Flink, Apache Spark, and Google Cloud Dataflow. Apache Beam is useful for ETL (Extract, Transform, and Load) tasks such as moving data between different storage media and data sources, transforming data into a more desirable format, and loading data onto a new system.
In this instructor-led, live training (onsite or remote), participants will learn how to implement the Apache Beam SDKs in a Java or Python application that defines a data processing pipeline for decomposing a big data set into smaller chunks for independent, parallel processing.
By the end of this training, participants will be able to:
Install and configure Apache Beam.
Use a single programming model to carry out both batch and stream processing from within their Java or Python application.
Execute pipelines across multiple environments.
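Beam's core idea — decomposing a data set into chunks that are processed independently and in parallel — can be sketched with the standard library alone. The chunk size and the squaring step are illustrative; a real pipeline would express this with `apache_beam` transforms and a runner:

```python
from concurrent.futures import ThreadPoolExecutor

def chunks(data, size):
    # Split the data set into independent chunks (Beam: bundle splitting).
    for i in range(0, len(data), size):
        yield data[i:i + size]

def transform(chunk):
    # Per-chunk work; in Beam this would be a ParDo / Map transform.
    return [x * x for x in chunk]

data = list(range(10))
with ThreadPoolExecutor() as pool:
    processed = pool.map(transform, chunks(data, size=3))

# Flatten the chunk results back into one output collection.
result = [x for chunk in processed for x in chunk]
print(result)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

The payoff of Beam's model is that the same pipeline definition runs unchanged on any supported back-end, from a local runner to Dataflow.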
Format of the Course
Part lecture, part discussion, exercises and heavy hands-on practice
Note
This course will be available in Scala in the future. Please contact us to arrange.
Apache Apex is a YARN-native platform that unifies stream and batch processing. It processes big data in motion in a way that is scalable, performant, fault-tolerant, stateful, secure, distributed, and easily operable.
This instructor-led, live training introduces Apache Apex's unified stream processing architecture and walks participants through the creation of a distributed application using Apex on Hadoop.
By the end of this training, participants will be able to:
Understand data processing pipeline concepts such as connectors for sources and sinks, common data transformations, etc.
Build, scale and optimize an Apex application
Process real-time data streams reliably and with minimum latency
Use Apex Core and the Apex Malhar library to enable rapid application development
Use the Apex API to write and re-use existing Java code
Integrate Apex into other applications as a processing engine
Tune, test and scale Apex applications
Format of the Course
Interactive lecture and discussion.
Lots of exercises and practice.
Hands-on implementation in a live-lab environment.
Course Customization Options
To request a customized training for this course, please contact us to arrange.
Apache Storm is a distributed, real-time computation engine used for enabling real-time business intelligence. It does so by enabling applications to reliably process unbounded streams of data (a.k.a. stream processing). "Storm is for real-time processing what Hadoop is for batch processing!"
In this instructor-led, live training, participants will learn how to install and configure Apache Storm, then develop and deploy an Apache Storm application for processing big data in real time.
Some of the topics covered in this training include:
Apache Storm in the context of Hadoop
Working with unbounded data
Continuous computation
Real-time analytics
Distributed RPC and ETL processing
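Storm structures a continuous computation as a topology of spouts (sources) and bolts (processing steps). A broker-free sketch of that shape — the class names are illustrative, not Storm's Java API:

```python
class SentenceSpout:
    # Emits a stream of tuples; here a finite stand-in for an
    # unbounded source such as a message queue.
    def stream(self):
        yield from ["storm processes streams", "hadoop processes batches"]

class SplitBolt:
    # Splits each sentence tuple into individual word tuples.
    def process(self, sentence):
        yield from sentence.split()

class CountBolt:
    # Maintains running counts, like a fields-grouped counting bolt.
    def __init__(self):
        self.counts = {}
    def process(self, word):
        self.counts[word] = self.counts.get(word, 0) + 1

# Wire the topology: spout -> split bolt -> count bolt.
spout, split, count = SentenceSpout(), SplitBolt(), CountBolt()
for sentence in spout.stream():
    for word in split.process(sentence):
        count.process(word)

print(count.counts["processes"])  # 2
```

In Storm proper, each spout and bolt runs as many parallel tasks across the cluster, and tuple acknowledgement gives the reliability guarantees mentioned above.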
Request this course now!
Audience
Software and ETL developers
Mainframe professionals
Data scientists
Big data analysts
Hadoop professionals
Format of the course
Part lecture, part discussion, exercises and heavy hands-on practice
In this instructor-led, live training in Boston (onsite or remote), participants will learn how to deploy and manage Apache NiFi in a live lab environment.
By the end of this training, participants will be able to:
Install and configure Apache NiFi.
Source, transform and manage data from disparate, distributed data sources, including databases and big data lakes.
In this instructor-led, live training in Boston, participants will learn the fundamentals of flow-based programming as they develop a number of demo extensions, components and processors using Apache NiFi.
By the end of this training, participants will be able to:
Understand NiFi's architecture and dataflow concepts.
Develop extensions using NiFi and third-party APIs.
Develop their own custom Apache NiFi processors.
Ingest and process real-time data from disparate and uncommon file formats and data sources.
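Flow-based programming, the model NiFi implements, connects independent processors through buffered queues so each can be developed and tested in isolation. A minimal stdlib sketch of that wiring — the processor steps here are invented examples, not NiFi's extension API:

```python
from collections import deque

class Processor:
    # A NiFi-style processor: reads flowfiles from an input queue,
    # applies one transform, and writes to its output queue.
    def __init__(self, transform):
        self.transform = transform
        self.outbox = deque()
    def run(self, inbox):
        while inbox:
            self.outbox.append(self.transform(inbox.popleft()))

# Two processors connected in a flow: extract, then enrich.
extract = Processor(lambda raw: raw.strip())
enrich = Processor(lambda text: {"content": text, "length": len(text)})

source = deque(["  alpha  ", " beta "])
extract.run(source)
enrich.run(extract.outbox)
print(list(enrich.outbox))
# [{'content': 'alpha', 'length': 5}, {'content': 'beta', 'length': 4}]
```

A real NiFi processor extends `AbstractProcessor` in Java and gains back-pressure, provenance tracking and a visual flow editor on top of this basic queue-and-transform shape.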
This instructor-led, live training in Boston (online or onsite) introduces the principles and approaches behind distributed stream and batch data processing, and walks participants through the creation of a real-time, data streaming application in Apache Flink.
By the end of this training, participants will be able to:
Set up an environment for developing data analysis applications.
Understand how Apache Flink's graph-processing library (Gelly) works.
Package, execute, and monitor Flink-based, fault-tolerant, data streaming applications.
Manage diverse workloads.
Perform advanced analytics.
Set up a multi-node Flink cluster.
Measure and optimize performance.
Integrate Flink with different Big Data systems.
Compare Flink capabilities with those of other big data processing frameworks.
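Flink's signature operation — keyed aggregation over event-time windows — can be sketched without a cluster. The sensor keys, timestamps and 10-unit window below are illustrative; real Flink adds watermarks, lateness handling and distributed state on top of this logic:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_size):
    """Group (key, event_time) pairs into tumbling windows per key.

    Mimics Flink's keyBy + window(TumblingEventTimeWindows) + count,
    ignoring watermarks and late data for simplicity.
    """
    counts = defaultdict(int)
    for key, ts in events:
        window_start = (ts // window_size) * window_size
        counts[(key, window_start)] += 1
    return dict(counts)

events = [("sensor-1", 3), ("sensor-1", 7), ("sensor-2", 8), ("sensor-1", 12)]
print(tumbling_window_counts(events, window_size=10))
# {('sensor-1', 0): 2, ('sensor-2', 0): 1, ('sensor-1', 10): 1}
```

Using the event's own timestamp (rather than arrival time) to pick the window is what lets Flink produce correct results even when records arrive out of order.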
Testimonials (4)
Sufficient hands-on, trainer is knowledgeable
Chris Tan
Course - A Practical Introduction to Stream Processing
During the exercises, James explained every step to me in more detail wherever I was getting stuck. I was completely new to NiFi. He explained the actual purpose of NiFi, even the basics such as open source. He covered every concept of NiFi starting from beginner level to developer level.
Firdous Hashim Ali - MOD A BLOCK
Course - Apache NiFi for Administrators
That I had it in the first place.
Peter Scales - CACI Ltd
Course - Apache NiFi for Developers
Recalling/reviewing keypoints of the topics discussed.
Paolo Angelo Gaton - SMS Global Technologies Inc.
Course - Building Stream Processing Applications with Kafka Streams