Statistics Training Courses

Practical Applied Statistics courses

Client Testimonials

Hadoop for Developers

The trainer clearly understood the subject matter very well. He articulated the subject areas well and demonstrated, through practical exercises, how to apply that knowledge.

Matthew Tindall - Knowledgepool

Apache Solr - Full-Text Search Server

This was the first time I had ever done remote training. It went well, better than I expected.

The pace was just right.

Greg Noseworthy - Employment and Social Development Canada.

Apache Solr - Full-Text Search Server

He (the trainer) is very flexible and works along with our questions.

Bokhara Bun - Employment and Social Development Canada.

Minitab for Statistical Data Analysis

Had a good mix of interactions and examples for all skill ranges.

The course was exactly what I was looking for in an introduction to Minitab. In addition, I got a refresher in statistics theory as well, which was a bonus.

Desmond Erickson - EVRAZ Inc. NA.

Applied Machine Learning

The reference material to use later was very good.

Paul Beales - Seagate Technology.

Introduction to R

He (the trainer) was excellent.

I liked the pace and the focus on principles at the beginning, not skipping over the detail.

Geoff Copps - Mediabrands

Statistics Course Outlines

Data Mining with R (ID 287823, 14 hours)

Sources of methods: artificial intelligence, machine learning, statistics. Sources of data. Pre-processing of data: data import/export, data exploration and visualization, dimensionality reduction, dealing with missing values, R packages. Main data mining tasks: automatic or semi-automatic analysis of large quantities of data; extracting previously unknown, interesting patterns such as groups of data records (cluster analysis), unusual records (anomaly detection) and dependencies (association rule mining). Data mining techniques: anomaly detection (outlier/change/deviation detection), association rule learning (dependency modeling), clustering, classification, regression, summarization, frequent pattern mining, text mining, decision trees, neural networks, sequence mining. Data dredging, data fishing, data snooping.
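As a minimal sketch of the cluster analysis task listed above, k-means in base R on the built-in iris measurements might look like this (the choice of three clusters is an illustrative assumption):

```r
# k-means clustering sketch on the built-in iris measurements
set.seed(1)
km <- kmeans(scale(iris[, 1:4]), centers = 3)  # three clusters, assumed for illustration
table(km$cluster, iris$Species)                # compare clusters against known species
```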
Semantic Web Overview (ID 2143, 7 hours)

The Semantic Web is a collaborative movement led by the World Wide Web Consortium (W3C) that promotes common formats for data on the World Wide Web. It provides a common framework that allows data to be shared and reused across application, enterprise, and community boundaries.

Outline: Semantic Web overview (introduction, purpose, standards, ontology, projects); Resource Description Framework (RDF): introduction, motivation and goals, RDF concepts, RDF vocabulary, URI and namespace (normative), datatypes (normative), abstract syntax (normative), fragment identifiers.
Statistical Analysis in Market Research (ID 2501, 28 hours)

Goal: to sharpen the skills of researchers studying consumer behaviour towards products and services.

Addressees: researchers, market analysts, managers and employees of marketing and sales departments (primarily pharmaceutical and FMCG), students of socio-economic programmes, and everyone interested in market research.

Module 1, Quantitative research: pre-treatment of results (checking the accuracy of the database, control of missing data, weighting observations); statistical models (multiple regression, conjoint analysis, classification trees); automating procedures in tracking studies; analysis of data from a marketing experiment; reporting and drawing conclusions.

Module 2, Qualitative research: transforming qualitative data into quantitative data; statistical models for qualitative data.
Advanced R Programming (ID 287843, 7 hours)

This course is for data scientists and statisticians who already have basic R and C++ coding skills and need advanced R coding skills. The purpose is to give a practical advanced R programming course to participants interested in applying the methods at work. Sector-specific examples are used to make the training relevant to the audience.

Outline: R's environment; object-oriented programming in R (S3, S4, reference classes); performance profiling; exception handling; debugging R code; creating R packages; unit testing; C/C++ coding in R (SEXPs, calling dynamically loaded libraries from R, writing and compiling C/C++ code from R); improving R's performance with a C++ linear algebra library.
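A minimal sketch of the "C/C++ coding in R" topic, assuming the Rcpp package and a working C++ compiler are available:

```r
# Compile and call a small C++ function from R via Rcpp (assumes Rcpp is installed)
library(Rcpp)

cppFunction('
double sumSquares(NumericVector x) {
  double total = 0;
  for (int i = 0; i < x.size(); ++i) total += x[i] * x[i];
  return total;
}')

sumSquares(c(1, 2, 3))  # returns 14
```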
Survey Research, Sampling Techniques & Estimation (ID 287889, 14 hours)

Survey research: principles of sample survey design and implementation; survey preliminaries; sampling methods (probability and non-probability); population and sampling frames; survey data collection methods; questionnaire design; design and writing of questionnaires; pre-tests and piloting; planning and organisation of surveys; minimising errors, bias and non-response at the design stage; survey data processing; commissioning surveys/research.

Sampling techniques & estimation: sampling techniques and their strengths/weaknesses (may overlap with the sampling methods above): simple random sampling, unequal probability sampling, stratified sampling (with proportional-to-size and disproportional selection), systematic sampling, cluster sampling, multi-stage sampling, quota sampling. Estimation: methods of estimating sample sizes; estimating population parameters using sample estimates; variance and confidence interval estimation; estimating bias/precision; methods of correcting bias; methods of handling missing data; non-response analysis.
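As a hedged base-R sketch of proportional stratified sampling (the population and the 10% sampling fraction are illustrative assumptions):

```r
# Proportional stratified sample from a synthetic population
set.seed(1)
pop <- data.frame(id = 1:1000,
                  region = sample(c("North", "South", "East"), 1000, replace = TRUE))
frac <- 0.1                                # assumed sampling fraction
strata <- split(pop, pop$region)           # one stratum per region
samp <- do.call(rbind, lapply(strata, function(s)
  s[sample(nrow(s), ceiling(frac * nrow(s))), ]))
table(samp$region)                         # sample sizes track stratum sizes
```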
Advanced Statistics Using SPSS Predictive Analytics Software (ID 2502, 28 hours)

Goal: mastering the skills needed to work independently with SPSS at an advanced level, using dialog boxes and command language syntax for selected analytical techniques.

Addressees: analysts, researchers, scientists, students and all those who want to acquire the ability to use the SPSS package at an advanced level and learn selected statistical models. The training covers universal analysis problems and can be dedicated to a specific industry.

Outline: preparation of a database for analysis (management of the data collection, operations on variables, transforming variables, selected functions such as logarithmic and exponential); parametric and nonparametric statistics, or how to fit a model to the data (measurement scale, distribution type, outliers and influential observations, sample size, central limit theorem); studying differences between characteristics (statistical tests based on the mean and the median); analysis of correlation and similarities (correlations, principal component analysis, cluster analysis); prediction (single and multivariate regression analysis, method of least squares, linear model, instrumental variables, regression models with dummy, effect and orthogonal coding); statistical inference.
Model MapReduce and Apache Hadoop (ID 2625, 14 hours)

The course is intended for IT specialists who work with distributed processing of large data sets across clusters of computers.

Outline: data mining and business intelligence (introduction, areas of application, capabilities, basics of data exploration); big data (what "big data" stands for; big data and data mining); MapReduce (model basics, example application, stats, cluster model); Hadoop (what Hadoop is, installation, configuration, cluster settings, architecture and configuration of the Hadoop Distributed File System, console tools, the DistCp tool, MapReduce and Hadoop Streaming, administration and configuration of Hadoop On Demand, alternatives).
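To illustrate the MapReduce model itself (not the Hadoop runtime), here is a plain-R word-count sketch in which the map phase emits (word, 1) pairs and the reduce phase sums them per key; a real job would run via Hadoop Streaming or Java:

```r
# Conceptual word count in the MapReduce style (illustration only)
docs <- c("big data and data mining", "data mining with hadoop")

# Map phase: emit a (word, 1) pair for every word in every document
pairs <- unlist(lapply(strsplit(docs, " "), function(ws)
  setNames(rep(1, length(ws)), ws)))

# Shuffle + reduce phase: group pairs by key and sum the counts
counts <- tapply(pairs, names(pairs), sum)
counts
```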
Data Shrinkage for Government (ID 287890, 14 hours)

Why shrink data. Relational databases: introduction; aggregation and disaggregation; normalisation and denormalisation; null values and zeroes; joining data; complex joins. Cluster analysis: applications; strengths and weaknesses; measuring distance; hierarchical clustering; k-means and derivatives; applications in government. Factor analysis: concepts; exploratory factor analysis; confirmatory factor analysis; principal component analysis; correspondence analysis; software; applications in government. Predictive analytics: timelines and naming conventions; holdout samples; weights of evidence; information value; scorecard-building demonstration using a spreadsheet; regression in predictive analytics; logistic regression in predictive analytics; decision trees in predictive analytics; neural networks; measuring accuracy; applications in government.
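A minimal base-R sketch of the hierarchical clustering topic above, using the built-in USArrests data (Ward linkage and four clusters are illustrative choices):

```r
# Hierarchical clustering on R's built-in USArrests data
d  <- dist(scale(USArrests))         # Euclidean distances on standardised data
hc <- hclust(d, method = "ward.D2")  # Ward linkage, assumed for illustration
plot(hc)                             # dendrogram
cutree(hc, k = 4)                    # cut into four clusters
```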
Excel For Statistical Data Analysis (ID 42, 14 hours)

Audience: analysts, researchers, scientists, graduates and students, and anyone interested in learning how to facilitate statistical analysis in Microsoft Excel.

Course objectives: this course will help improve your familiarity with Excel and statistics and, as a result, increase the effectiveness and efficiency of your work or research. It describes how to use the Analysis ToolPak in Microsoft Excel and its statistical functions, and how to perform basic statistical procedures. It explains what Excel's limitations are and how to overcome them.

Outline: aggregating data in Excel (statistical functions, outlines, subtotals, pivot tables); data relation analysis (normal distribution, descriptive statistics, linear correlation, regression analysis, covariance); analysing data in time (trends/regression line; linear, logarithmic, polynomial, power, exponential and moving-average smoothing; seasonal fluctuation analysis); comparing populations (confidence interval for the mean; tests of hypotheses concerning the population mean; difference between the means of two populations; ANOVA: analysis of variance; goodness-of-fit tests for discrete random variables; tests of independence: contingency tables; tests concerning the variance of two populations); forecasting (extrapolation).
Statistics with SPSS Predictive Analytics Software (ID 2503, 14 hours)

Goal: learning to work with SPSS independently.

Addressees: analysts, researchers, scientists, students and all those who want to acquire the ability to use the SPSS package and learn popular data mining techniques.

Outline: using the program (dialog boxes; entering and loading data; the concept of a variable and measurement scales; preparing a database; generating tables and graphs; formatting a report); command language syntax (automated analysis; storing and modifying procedures; creating your own analytical procedures); data analysis (descriptive statistics; key terms such as variable, hypothesis and statistical significance; measures of central tendency; measures of dispersion; standardization; introduction to researching relationships between variables; correlational and experimental methods); summary (case study and discussion).
Apache Solr - Full-Text Search Server (ID 2624, 14 hours)

The course is intended for IT specialists who want to implement a solution that allows for elastic and efficient searching of big data sources.

Outline: introduction (Apache Lucene; what Solr is; installation); schema and text analysis (schema modeling; schema.xml; configuration; text analysis); working with the index (importing data from other resources; indexing documents); querying (Solr API; searching; basics of querying; sorting and filtering; using scoring; functions); request handling (formatting the Solr response; faceting); advanced topics (configuring and deploying Solr; integrating Solr with other libraries/technologies; search components; Solr and scaling issues).
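Since the outline covers querying through the Solr API, here is a hedged sketch of hitting Solr's HTTP select endpoint from R; it assumes a local Solr instance at http://localhost:8983 with a core named "techproducts" (both assumptions) and the jsonlite package installed:

```r
# Query a Solr core over its REST API and inspect the JSON response
library(jsonlite)

# Core name "techproducts" and host are illustrative assumptions
url <- "http://localhost:8983/solr/techproducts/select?q=memory&rows=5&wt=json"
res <- fromJSON(url)           # jsonlite can read JSON straight from a URL
res$response$numFound          # total number of matching documents
res$response$docs              # the first five hits as a data frame
```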
Statistical and Econometric Modelling (ID 287891, 21 hours)

The nature of econometrics and economic data: econometrics and models; steps in econometric modelling; types of economic data (time series, cross-sectional, panel); causality in econometric analysis. Specification and data issues: functional form; proxy variables; measurement error in variables; missing data, outliers, influential observations. Regression analysis: estimation (ordinary least squares (OLS) estimators; classical OLS assumptions; the Gauss-Markov theorem; best linear unbiased estimators); inference (testing statistical significance of parameters: t-test, single and group; confidence intervals; testing multiple linear restrictions: F-test; goodness of fit; testing functional form; missing variables; binary variables); testing for violation of assumptions and their implications (heteroscedasticity, autocorrelation, multicollinearity, endogeneity). Other estimation techniques: instrumental variables estimation; generalised least squares; maximum likelihood; generalised method of moments. Models for binary response variables: linear probability model; probit model; logit model; estimation; interpretation of parameters and marginal effects; goodness of fit. Limited dependent variables: Tobit model; truncated normal distribution; interpretation of the Tobit model; specification and estimation issues. Time series models: characteristics of time series; decomposition of time series; exponential smoothing; stationarity; ARIMA models; co-integration; the ECM model. Predictive analysis: forecasting, planning and goals; steps in forecasting; evaluating forecast accuracy; residual diagnostics; prediction intervals.
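A minimal base-R sketch of OLS estimation with the inference steps named above (the built-in mtcars data is used purely for illustration):

```r
# Ordinary least squares with t-tests, F-test and confidence intervals
fit <- lm(mpg ~ wt + hp, data = mtcars)  # OLS fit
summary(fit)                             # coefficient t-tests, R-squared, F-test
confint(fit)                             # 95% confidence intervals for parameters
```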
Statistics Level 2 (ID 43, 28 hours)

This training course covers advanced statistics. It explains most of the tools commonly used in research, analysis and forecasting, and provides short explanations of the theory behind the formulas. The course does not relate to any specific field of knowledge, but can be tailored if all the delegates have the same background and goals. Some basic computer tools are used during this course (notably Excel and OpenOffice).

Describing bivariate data: introduction to bivariate data; values of the Pearson correlation; guessing correlations simulation; properties of Pearson's r; computing Pearson's r; restriction of range demo; variance sum law II; exercises.
Probability: introduction; basic concepts; conditional probability demo; gambler's fallacy simulation; birthday demonstration; binomial distribution; binomial demonstration; base rates; Bayes' theorem demonstration; Monty Hall problem demonstration; exercises.
Normal distributions: introduction; history; areas of normal distributions; varieties of normal distribution demo; standard normal; normal approximation to the binomial; normal approximation demo; exercises.
Sampling distributions: introduction; basic demo; sample size demo; central limit theorem demo; sampling distribution of the mean; sampling distribution of the difference between means; sampling distribution of Pearson's r; sampling distribution of a proportion; exercises.
Estimation: introduction; degrees of freedom; characteristics of estimators; bias and variability simulation; confidence intervals; exercises.
Logic of hypothesis testing: introduction; significance testing; Type I and Type II errors; one- and two-tailed tests; interpreting significant and non-significant results; steps in hypothesis testing; significance testing and confidence intervals; misconceptions; exercises.
Testing means: single mean; t distribution demo; difference between two means (independent groups); robustness simulation; all pairwise comparisons among means; specific comparisons; difference between two means (correlated pairs); correlated t simulation; specific and pairwise comparisons (correlated observations); exercises.
Power: introduction; factors affecting power; why power matters; exercises.
Prediction: introduction to simple linear regression; linear fit demo; partitioning sums of squares; standard error of the estimate; prediction line demo; inferential statistics for b and r; exercises.
ANOVA: introduction; ANOVA designs; one-factor ANOVA (between-subjects); one-way demo; multi-factor ANOVA (between-subjects); unequal sample sizes; tests supplementing ANOVA; within-subjects ANOVA; power of within-subjects designs demo; exercises.
Chi square: chi square distribution; one-way tables; testing distributions demo; contingency tables; 2 x 2 table simulation; exercises.
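As a small sketch of the Pearson correlation and hypothesis testing topics above, base R computes r and tests it in one call (the simulated data are illustrative):

```r
# Pearson's r with a significance test on simulated data
set.seed(42)
x <- rnorm(50)
y <- 0.6 * x + rnorm(50, sd = 0.8)
cor(x, y)       # sample Pearson correlation
cor.test(x, y)  # t-test of H0: rho = 0, with a 95% confidence interval
```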
Big Data Business Intelligence for Telecom and Communication Service Providers (ID 85064, 35 hours)

Overview: Communications service providers (CSPs) are facing pressure to reduce costs and maximize average revenue per user (ARPU) while ensuring an excellent customer experience, but data volumes keep growing. Global mobile data traffic will grow at a compound annual growth rate (CAGR) of 78 percent to 2016, reaching 10.8 exabytes per month. Meanwhile, CSPs are generating large volumes of data, including call detail records (CDR), network data and customer data. Companies that fully exploit this data gain a competitive edge. According to a recent survey by The Economist Intelligence Unit, companies that use data-directed decision-making enjoy a 5-6% boost in productivity. Yet 53% of companies leverage only half of their valuable data, and one-fourth of respondents noted that vast quantities of useful data go untapped. The data volumes are so high that manual analysis is impossible, and most legacy software systems can't keep up, resulting in valuable data being discarded or ignored.

With high-speed, scalable big data software, CSPs can mine all their data for better decision-making in less time. Different big data products and techniques provide an end-to-end software platform for collecting, preparing, analyzing and presenting insights from big data. Application areas include network performance monitoring, fraud detection, customer churn detection and credit risk analysis. Big data and analytics products scale to handle terabytes of data, but implementing such tools requires a new kind of cloud-based database system, such as Hadoop, or massive-scale parallel computing processors (KPU, etc.). This course on Big Data BI for Telco covers all the emerging areas in which CSPs are investing for productivity gains and new business revenue streams. It provides a complete 360-degree overview of Big Data BI in Telco, so that decision makers and managers can gain a wide and comprehensive view of its possibilities for productivity and revenue gain.

Course objectives: the main objective is to introduce new big data business intelligence techniques in four sectors of the telecom business (marketing/sales, network operation, financial operation and customer relationship management). Students will be introduced to the following: what the 4Vs (volume, velocity, variety and veracity) of big data are, and generation, extraction and management from a Telco perspective; how big data analytics differs from legacy data analytics; in-house justification of big data from a Telco perspective; introduction to the Hadoop ecosystem and familiarity with tools like Hive, Pig and Spark, and when and how they are used to solve big data problems; how big data is extracted for analytics tools, and how business analysts can reduce the pain of collecting and analysing data through an integrated Hadoop dashboard approach; basic introduction to insight analytics, visualization analytics and predictive analytics for Telco; customer churn analytics and how big data analytics can reduce customer churn and dissatisfaction in Telco, with case studies; network failure and service failure analytics from network metadata and IPDR; financial analysis: fraud, wastage and ROI estimation from sales and operational data; customer acquisition problems: target marketing, customer segmentation and cross-selling from sales data; introduction and summary of big data analytics products and where they fit into the Telco analytics space; conclusion: how to take a step-by-step approach to introducing Big Data Business Intelligence in your organization.

Target audience: network operations, financial managers, CRM managers and top IT managers in the Telco CIO office; business analysts in Telco; CFO office managers/analysts; operational managers; QA managers.

Breakdown of topics on a daily basis (each session is 2 hours):

Day 1, Session 1: Business overview of why Big Data BI matters in Telco. Case studies from T-Mobile, Verizon, etc.; big data adaptation rates in North American Telcos and how they are aligning their future business models and operations around Big Data BI; broad-scale application areas: network and service management, customer churn management, data integration and dashboard visualization, fraud management, business rule generation, customer profiling, localized ad pushing.

Day 1, Session 2: Introduction to big data (1). Main characteristics of big data: volume, variety, velocity and veracity; MPP architecture for volume; data warehouses: static schema, slowly evolving datasets; MPP databases like Greenplum, Exadata, Teradata, Netezza, Vertica, etc.; Hadoop-based solutions: no conditions on the structure of the dataset, typical pattern of HDFS, MapReduce (crunch), retrieve from HDFS, batch processing suited for analytical/non-interactive workloads; volume: CEP streaming data, with typical choices being CEP products (e.g. Infostreams, Apama, MarkLogic, etc.) and less production-ready options such as Storm/S4; NoSQL databases (columnar and key-value): best suited as an analytical adjunct to a data warehouse/database.

Day 1, Session 3: Introduction to big data (2). NoSQL solutions: KV stores (Keyspace, Flare, SchemaFree, RAMCloud, Oracle NoSQL Database (OnDB)); KV stores (Dynamo, Voldemort, Dynomite, SubRecord, Mo8onDb, DovetailDB); hierarchical KV stores (GT.m, Cache); ordered KV stores (TokyoTyrant, Lightcloud, NMDB, Luxio, MemcacheDB, Actord); KV caches (Memcached, Repcached, Coherence, Infinispan, EXtremeScale, JBossCache, Velocity, Terracoqua); tuple stores (Gigaspaces, Coord, Apache River); object databases (ZopeDB, DB40, Shoal); document stores (CouchDB, Cloudant, Couchbase, MongoDB, Jackrabbit, XML databases, ThruDB, CloudKit, Prsevere, Riak-Basho, Scalaris); wide columnar stores (BigTable, HBase, Apache Cassandra, Hypertable, KAI, OpenNeptune, Qbase, KDI). Varieties of data and introduction to data cleaning issues in big data: an RDBMS has a static structure/schema and does not promote an agile, exploratory environment; NoSQL is semi-structured, with enough structure to store data without an exact schema before storing it; data cleaning issues.

Day 1, Session 4: Introduction to big data (3): Hadoop. When to select Hadoop: structured enterprise data warehouses/databases can store massive data (at a cost) but impose structure, which is not good for active exploration; semi-structured data is tough to handle with traditional solutions (DW/DB); warehousing data is a huge effort and remains static even after implementation. For variety and volume of data crunched on commodity hardware: Hadoop; commodity hardware is needed to create a Hadoop cluster. Introduction to MapReduce/HDFS: MapReduce distributes computing over multiple servers; HDFS makes data available locally to the computing process (with redundancy); data can be unstructured/schema-less (unlike an RDBMS), so it is the developer's responsibility to make sense of the data; programming MapReduce means working with Java (pros and cons) and manually loading data into HDFS.

Day 2, Session 1.1: Spark, the in-memory distributed database. What "in memory" processing is; Spark SQL; the Spark SDK; the Spark API; RDDs; Spark Lib; Hanna; how to migrate an existing Hadoop system to Spark.

Day 2, Session 1.2: Storm, real-time processing in big data: streams, spouts, bolts, topologies.

Day 2, Session 2: Big data management systems. Moving parts and compute nodes that start/fail: ZooKeeper for configuration, coordination and naming services; complex pipelines/workflows: Oozie to manage workflows, dependencies and daisy-chaining; deployment, configuration, cluster management, upgrades, etc. (sys admin): Ambari; in the cloud: Whirr; evolving big data platform tools for tracking ETL-layer application issues.

Day 2, Session 3: Predictive analytics in business intelligence (1): fundamental techniques and machine-learning-based BI. Introduction to machine learning; learning classification techniques; Bayesian prediction and preparing a training file; Markov random fields; supervised and unsupervised learning; feature extraction; support vector machines; neural networks; reinforcement learning; the big data large-variable problem: random forests (RF); representation learning; deep learning; the big data automation problem: multi-model ensemble RF; automation through Soft10-M; LDA and topic modeling; agile learning; agent-based learning and distributed learning, with examples from Telco operations; introduction to open-source tools for predictive analytics (R, RapidMiner, Mahout); more scalable analytics: Apache Hama, Spark and CMU GraphLab.

Day 2, Session 4: Predictive analytics ecosystem (2): common predictive analytics problems in telecom. Insight analytics; visualization analytics; structured and unstructured predictive analytics; customer profiling; recommendation engines; pattern detection; rule/scenario discovery (failure, fraud, optimization); root cause discovery; sentiment analysis; CRM analytics; network analytics; text analytics; technology-assisted review; fraud analytics; real-time analytics.

Day 3, Session 1: Network operation analytics: root cause analysis of network failures and service interruption from metadata, IPDR and CRM. CPU usage; memory usage; QoS queue usage; device temperature; interface errors; IOS versions; routing events; latency variations; syslog analytics; packet loss; load simulation; topology inference; performance thresholds; device traps; IPDR (IP detailed record) collection and processing; use of IPDR data for subscriber bandwidth consumption, network interface utilization, modem status and diagnostics, and HFC information.

Day 3, Session 2: Tools for network service failure analysis. Network summary dashboard: monitor overall network deployments and track your organization's key performance indicators. Peak period analysis dashboard: understand the application and subscriber trends driving peak utilization, with location-specific granularity. Routing efficiency dashboard: control network costs and build business cases for capital projects with a complete understanding of interconnect and transit relationships. Real-time entertainment dashboard: access metrics that matter, including video views, duration, and video quality of experience (QoE). IPv6 transition dashboard: investigate the ongoing adoption of IPv6 on your network and gain insight into the applications and devices driving trends. Case study 1: the Alcatel-Lucent Big Network Analytics (BNA) Data Miner; multi-dimensional mobile intelligence (m.IQ6).

Day 3, Session 3: Big Data BI for marketing/sales: understanding sales and marketing from sales data (all shown with a live predictive analytics demo). Identifying highest-velocity clients; identifying clients for a given product; identifying the right set of products for a client (recommendation engine); market segmentation techniques; cross-sell and upsell techniques; client segmentation techniques; sales revenue forecasting techniques.

Day 3, Session 4: BI needed for the Telco CFO office. Overview of the business analytics work needed in a CFO office; risk analysis on new investments; revenue and profit forecasting; new client acquisition forecasting; loss forecasting; fraud analytics on finances (details in the next session).

Day 4, Session 1: Fraud prevention BI from big data in Telco: fraud analytics. Bandwidth leakage/bandwidth fraud; vendor fraud/overcharging for projects; customer refund/claims fraud; travel reimbursement fraud.

Day 4, Session 2: From churn prediction to churn prevention. Three types of churn: active/deliberate, rotational/incidental, passive involuntary; three classifications of churned customers: total, hidden, partial; understanding CRM variables for churn; customer behavior data collection; customer perception data collection; customer demographics data collection; cleaning CRM data; unstructured CRM data (customer calls, tickets, emails) and its conversion to structured data for churn analysis; social media CRM as a new way to extract a customer satisfaction index. Case study 1: T-Mobile USA, churn reduction by 50%.

Day 4, Session 3: How to use predictive analysis for root cause analysis of customer dissatisfaction. Case study 1: linking dissatisfaction to issues, such as accounting and engineering failures like service interruption or poor bandwidth service. Case study 2: a big data QA dashboard to track the customer satisfaction index from various parameters such as call escalations, criticality of issues, and pending service interruption events.

Day 4, Session 4: A big data dashboard for quick accessibility and display of diverse data. Integration of the existing application platform with a big data dashboard; big data management; case studies of big data dashboards: Tableau and Pentaho; using a big data app to push location-based advertisements; tracking systems and management.

Day 5, Session 1: How to justify a Big Data BI implementation within an organization. Defining the ROI of a big data implementation; case studies of saving analyst time in collecting and preparing data (productivity gain); case studies of revenue gain from reduced customer churn; revenue gain from location-based and other targeted ads; an integrated spreadsheet approach to estimating approximate expense vs. revenue gain/savings from a big data implementation.

Day 5, Session 2: A step-by-step procedure for replacing a legacy data system with a big data system. Understanding a practical big data migration roadmap; what important information is needed before architecting a big data implementation; the different ways of calculating the volume, velocity, variety and veracity of data; how to estimate data growth; case studies from two Telcos.

Day 5, Sessions 3 & 4: Review of big data vendors and their products, followed by a Q&A session: Accenture, Alcatel-Lucent, Amazon (A9), APTEAN (formerly CDC Software), Cisco Systems, Cloudera, Dell, EMC, GoodData Corporation, Guavus, Hitachi Data Systems, Hortonworks, Huawei, HP, IBM, Informatica, Intel, Jaspersoft, Microsoft, MongoDB (formerly 10Gen), MU Sigma, Netapp, Opera Solutions, Oracle, Pentaho, Platfora, Qliktech, Quantum, Rackspace, Revolution Analytics, Salesforce, SAP, SAS Institute, Sisense, Software AG/Terracotta, Soft10 Automation, Splunk, Sqrrl, Supermicro, Tableau Software, Teradata, Think Big Analytics, Tidemark Systems, VMware (part of EMC).
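As a hedged sketch of the churn prediction topic covered on Day 4, a logistic regression in base R might look like this; the data, variable names and coefficients are all synthetic, illustrative assumptions:

```r
# Logistic-regression churn sketch on synthetic Telco-style data
set.seed(7)
n <- 500
telco <- data.frame(
  monthly_bill  = rnorm(n, 50, 15),   # illustrative predictors
  support_calls = rpois(n, 2),
  tenure_months = rpois(n, 24)
)
# Simulate churn with an assumed relationship, for demonstration only
p <- plogis(-2 + 0.02 * telco$monthly_bill +
            0.4 * telco$support_calls - 0.05 * telco$tenure_months)
telco$churned <- rbinom(n, 1, p)

fit <- glm(churned ~ monthly_bill + support_calls + tenure_months,
           family = binomial, data = telco)
summary(fit)                              # which variables drive churn
head(predict(fit, type = "response"))     # predicted churn probabilities
```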
Statistics Level 1 (ID 44, 14 hours)

This course has been created for people who require general statistics skills. It can be tailored to a specific area of expertise, such as market research, biology, manufacturing or public sector research.

Introduction: descriptive statistics; inferential statistics; sampling demonstration; variables; percentiles. Measurement: levels of measurement; measurement demonstration; basics of data collection. Distributions: summation notation; linear transformations; exercises. Graphing distributions: qualitative variables; quantitative variables; stem-and-leaf displays; histograms; frequency polygons; box plots; box plot demonstration; bar charts; line graphs; exercises. Summarizing distributions: central tendency (what central tendency is; measures of central tendency; balance scale, absolute difference and squared differences simulations; median and mean; mean and median simulation; additional measures; comparing measures); variability (measures of variability; estimating variance simulation; shape; comparing distributions demo; effects of transformations; variance sum law I); exercises. Normal distributions: history; areas of normal distributions; varieties of normal distribution demo; standard normal; normal approximation to the binomial; normal approximation demo; exercises.
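As a small illustration of the descriptive statistics and histogram topics above, base R can summarise one of its built-in datasets in three lines:

```r
# Descriptive statistics sketch on the built-in faithful data
summary(faithful$eruptions)   # min, quartiles, median, mean, max
sd(faithful$eruptions)        # standard deviation
hist(faithful$eruptions)      # histogram of the distribution
```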
Statistical Thinking for Decision Makers (ID 263582, 7 hours)

This course has been created for decision makers whose primary goal is not to do the calculations and the analysis themselves, but to understand them and to be able to choose which statistical methods are relevant in the strategic planning of an organization. For example, a prospective participant may need to decide how many samples must be collected before deciding whether a product is going to be launched. If you need a longer course covering the very basics of statistical thinking, have a look at the 5-day "Statistics for Managers" training.

What statistics can offer to decision makers. Descriptive statistics: basic statistics: which statistics (e.g. median, average, percentiles) are more relevant to different distributions; graphs: the significance of getting them right (e.g. how the way a graph is created reflects the decision); variable types: which variables are easier to deal with; ceteris paribus: things are always in motion; the third-variable problem: how to find the real influencer. Inferential statistics: probability value: the meaning of a p-value; repeated experiments: how to interpret repeated experiment results; data collection: you can minimize bias, but not get rid of it; understanding confidence levels. Statistical thinking: decision making with limited information: how to check how much information is enough; prioritizing goals based on probability and potential return (benefit/cost ratio, decision trees); how errors add up; the butterfly effect; black swans; what Schrödinger's cat and Newton's apple are in business; the Cassandra problem: how to measure a forecast if the course of action has changed; Google Flu Trends: how it went wrong; how decisions make forecasts outdated. Forecasting: methods and practicality; ARIMA; why naive forecasts are usually more responsive; how far a forecast should look into the past; why more data can mean a worse forecast. Statistical methods useful for decision makers: describing bivariate data (univariate and bivariate data); probability (why things differ each time we measure them); normal distributions and normally distributed errors; estimation (independent sources of information and degrees of freedom); the logic of hypothesis testing (what can be proven, and why it is always the opposite of what we want: falsification); interpreting the results of hypothesis testing; testing means; power (how to determine a good and cheap sample size); false positives and false negatives, and why the two are always a trade-off.
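To make the p-value topic concrete, here is a one-line base-R sketch comparing two conversion rates in an A/B-style decision; the counts are illustrative assumptions:

```r
# Compare two conversion rates (120/1000 vs 150/1000, assumed counts)
# and read off the p-value and confidence interval for the difference
prop.test(x = c(120, 150), n = c(1000, 1000))
```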
IoT (Internet of Things) for Entrepreneurs, Managers and Investors (ID 85066, 21 hours)

Estimates for the Internet of Things (IoT) market value are massive, since by definition the IoT is an integrated and diffused layer of devices, sensors and computing power that overlays entire consumer, business-to-business and government industries. The IoT will account for an increasingly huge number of connections: 1.9 billion devices today, and 9 billion by 2018. That year, it will be roughly equal to the number of smartphones, smart TVs, tablets, wearable computers and PCs combined. In the consumer space, many products and services have already crossed over into the IoT, including kitchen and home appliances, parking, RFID, lighting and heating products, and a number of applications in the Industrial Internet.

However, the underlying technologies of the IoT are nothing new, as M2M communication has existed since the birth of the Internet. What has changed in the last couple of years is the emergence of a number of inexpensive wireless technologies, combined with the overwhelming adoption of smartphones and tablets in every home. The explosive growth of mobile devices led to the present demand for IoT. Due to the unbounded opportunities in IoT business, a large number of small and medium-sized entrepreneurs have jumped on the bandwagon of the IoT gold rush. Also, with the emergence of open-source electronics and IoT platforms, the cost of developing an IoT system and managing its sizable production is increasingly affordable. Existing electronic product owners are experiencing pressure to integrate their devices with the Internet or mobile apps. This training is intended as a technology and business review of an emerging industry, so that IoT enthusiasts and entrepreneurs can grasp the basics of IoT technology and business.

Course objectives: the main objective of the course is to introduce emerging technological options, platforms and case studies of IoT implementation in home and city automation (smart homes and cities), the Industrial Internet, healthcare, government, mobile cellular and other areas. Topics include: a basic introduction to all the elements of IoT (mechanical, electronics/sensor platforms, wireless and wireline protocols, mobile-to-electronics integration, mobile-to-enterprise integration, data analytics and the total control plane); M2M wireless protocols for IoT (WiFi, Zigbee/Zwave, Bluetooth, ANT+): when and where to use which one; mobile/desktop/web apps for registration, data acquisition and control; available M2M data acquisition platforms for IoT (Xively, Omega, NovoTech, etc.); security issues and security solutions for IoT; open-source and commercial electronics platforms for IoT (Raspberry Pi, Arduino, ArmMbed LPC, etc.); open-source and commercial enterprise cloud platforms for IoT (Ayla, iO Bridge, Libellium, Axeda, Cisco fog cloud); studies of the business and technology of some common IoT devices (home automation, smoke alarms, vehicles, military, home health, etc.).

Target audience: investors and IoT entrepreneurs; managers and engineers whose company is venturing into the IoT space; business analysts and investors.

Prerequisites: basic knowledge of business operations, devices, electronics systems and data systems; basic understanding of software and systems; basic understanding of statistics (at Excel level).

1. Day 1, Session 1: Business overview of why IoT is so important. Case studies from Nest, Cisco and top industries; IoT adaptation rates in North America and how companies are aligning their future business models and operations around IoT; broad-scale application areas: smart house and smart city, Industrial Internet, smart cars, wearables, home healthcare; business rule generation for IoT; the 3-layered architecture of big data: physical (sensors), communication, and data intelligence.

2. Day 1, Session 2: Introduction to IoT: all about sensors and electronics. Basic function and architecture of a sensor: sensor body, sensor mechanism, sensor calibration, sensor maintenance, cost and pricing structure, legacy and modern sensor networks; development of sensor electronics: IoT vs. legacy, and open-source vs. traditional PCB design styles; development of sensor communication protocols, from history to modern days: legacy protocols like Modbus, relay and HART, to modern-day Zigbee, Zwave, X10, Bluetooth, ANT, etc.; business drivers for sensor deployment: FDA/EPA regulation, fraud/tampering detection, supervision, quality control and process management; different kinds of calibration techniques (manual, automated, in-field, primary and secondary) and their implications for IoT; powering options for sensors: battery, solar, Witricity, mobile and PoE; hands-on training with single silicon and other sensors such as temperature, pressure, vibration, magnetic field and power factor.

3. Day 1, Session 3: Fundamentals of M2M communication: sensor networks and wireless protocols. What a sensor network is; what an ad-hoc network is; wireless vs. wireline networks; the WiFi 802.11 families (N to S): application of standards and common vendors; Zigbee and Zwave: the advantage of low-power mesh networking; long-distance Zigbee and introduction to different Zigbee chips; Bluetooth/BLE: low power vs. high power, speed of detection, classes of BLE; introduction to Bluetooth vendors and their review; creating networks with wireless protocols, such as a piconet with BLE; protocol stacks and packet structures for BLE and Zigbee; other long-distance RF communication links; LOS vs. NLOS links; capacity and throughput calculation; application issues in wireless protocols: power consumption, reliability, PER, QoS, LOS; hands-on training with sensor networks: a BLE-based piconet; Zigbee master/slave communication; data hubs: microcontroller- and single-computer-based (like BeagleBone) data hubs.

4. Day 1, Session 4: Review of electronics platforms, production and cost projection. PCB vs. FPGA vs. ASIC design, and how to take the decision; prototyping electronics vs. production electronics; QA certification for IoT (CE/CSA/UL/IEC/RoHS/IP65): what these are and when they are needed; basic introduction to multi-layer PCB design and its workflow; electronics reliability: the basic concepts of FIT and early mortality rate; environmental and reliability testing: basic concepts; basic open-source platforms (Arduino, Raspberry Pi, BeagleBone) and when they are needed; RedBack, Diamond Back.

5. Day 2, Session 1: Conceiving a new IoT product: the product requirement document for IoT. State of the present art and review of existing technology in the marketplace; suggestions for new features and technologies based on market analysis and patent issues; detailed technical specs for new products: system, software, hardware, mechanical, installation, etc.; packaging and documentation requirements; servicing and customer support requirements; high-level design (HLD) for understanding the product concept; a release plan for phase-wise introduction of new features; skill sets for the development team and a proposed project plan (cost and duration); target manufacturing price.

6. Day 2, Session 2: Introduction to mobile app platforms for IoT. The protocol stack of a mobile app for IoT; mobile-to-server integration and what factors to look out for; the intelligent layers that can be introduced at the mobile app level; iBeacon in iOS; Windows Azure; the Linkafy mobile platform for IoT; Axeda; Xively.

7. Day 2, Session 3: Machine learning for intelligent IoT. Introduction to machine learning; learning classification techniques; Bayesian prediction and preparing a training file; support vector machines; image and video analytics for IoT; fraud and alert analytics through IoT; biometric ID integration with IoT; real-time analytics/stream analytics; scalability issues of IoT and machine learning; architectural implementations of machine learning for IoT.

8. Day 2, Session 4: Analytics engines for IoT. Insight analytics; visualization analytics; structured and unstructured predictive analytics; recommendation engines; pattern detection; rule/scenario discovery: failure, fraud, optimization; root cause discovery.

9. Day 3, Session 1: Security in IoT implementation. Why security is absolutely essential for IoT; mechanisms of security breaches in the IoT layer; privacy-enhancing technologies; fundamentals of network security; encryption and cryptography implementation for IoT data; security standards for available platforms; European legislation for security in IoT platforms; secure booting; device authentication; firewalling and IPS; updates and patches.

10. Day 3, Session 2: Database implementation for IoT: cloud-based IoT platforms. SQL vs. NoSQL: which one is good for your IoT application; open-source vs. licensed databases; available M2M cloud platforms: Axeda, Xively, Omega, NovoTech, Ayla, Libellium, the Cisco M2M platform, the AT&T M2M platform, the Google M2M platform.

11. Day 3, Session 3: A few common IoT systems. Home automation; energy optimization in the home; automotive OBD; IoT locks; smart smoke alarms; BAC (blood alcohol content) monitoring for drug abusers under probation; pet cams for pet lovers; wearable IoT; mobile parking ticketing systems; indoor location tracking in retail stores; home health care; smart sports watches.

12. Day 3, Session 4: Big data for IoT. The 4Vs (volume, velocity, variety and veracity) of big data; why big data is important in IoT; big data vs. legacy data in IoT; Hadoop for IoT: when and why; storage techniques for image, geospatial and video data; distributed databases; parallel computing basics for IoT.
Minitab for Statistical Data Analysis (ID 138, 14 hours)

The course is aimed at anyone interested in statistical analysis. It provides familiarity with Minitab, will increase the effectiveness and efficiency of your data analysis, and will improve your knowledge of statistics.

Outline: descriptive statistics; normal distribution; correlation; regression; trend analysis and forecasting; confidence intervals; t-tests; proportion tests; variance tests; ANOVA; chi-squared tests.
Applied Machine Learning (ID 284990, 14 hours)

This training course is for people who would like to apply machine learning in practical applications.

Audience: data scientists and statisticians who have some familiarity with statistics and know how to program in R. The emphasis of this course is on the practical aspects of data/model preparation, execution, post hoc analysis and visualization. The purpose is to give a practical introduction to applying machine learning to participants interested in using the methods at work. Sector-specific examples are used to make the training relevant to the audience.

Outline: naive Bayes; multinomial models; Bayesian categorical data analysis; discriminant analysis; linear regression; logistic regression; GLM; the EM algorithm; mixed models; additive models; classification; KNN; Bayesian graphical models; factor analysis (FA); principal component analysis (PCA); independent component analysis (ICA); support vector machines (SVM) for regression and classification; boosting; ensemble models; neural networks; hidden Markov models (HMM); state space models; clustering.
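As a minimal sketch of the naive Bayes topic above, assuming the e1071 package is installed (one common R implementation among several):

```r
# Naive Bayes classification on the built-in iris data
library(e1071)
fit  <- naiveBayes(Species ~ ., data = iris)
pred <- predict(fit, iris[, -5])   # predict from the four measurements
table(pred, iris$Species)          # confusion matrix
```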
From Data to Decision with Big Data and Predictive Analytics (ID 284980, 21 hours)

Audience: if you are trying to make sense of the data you have access to, or want to analyse unstructured data available on the net (like Twitter, LinkedIn, etc.), this course is for you. It is mostly aimed at decision makers and people who need to choose which data is worth collecting and which is worth analyzing. It is not aimed at people configuring the solution, though those people will benefit from the big picture.

Delivery mode: during the course, delegates will be presented with working examples of mostly open-source technologies. Short lectures will be followed by presentations and simple exercises by the participants.

Content and software used: all software used is updated each time the course is run, so we check the newest versions possible. The course covers the process from obtaining, formatting, processing and analysing data, to explaining how to automate the decision-making process with machine learning.

Outline: quick overview (data sources; minding data; recommender systems; target marketing); data types (structured vs. unstructured; static vs. streamed; attitudinal, behavioural and demographic data; data-driven vs. user-driven analytics; data validity; volume, velocity and variety of data); models (building models; statistical models; machine learning); data classification (clustering; k-groups, k-means, nearest neighbours; ant colonies, bird flocking); predictive models (decision trees; support vector machines; naive Bayes classification; neural networks; Markov models; regression; ensemble methods); ROI (benefit/cost ratio; cost of software; cost of development; potential benefits); building models (data preparation (MapReduce); data cleansing; choosing methods; developing the model; testing the model; model evaluation; model deployment and integration); overview of open-source and commercial software (a selection of R packages; Python libraries; Hadoop and Mahout; selected Apache projects related to big data and analytics; a selected commercial solution; integration with existing software and data sources).
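A small sketch of the decision tree topic above, assuming the rpart package (shipped with base R distributions) is available:

```r
# Classification tree on the built-in iris data
library(rpart)
fit <- rpart(Species ~ ., data = iris, method = "class")
print(fit)                                   # the fitted splitting rules
predict(fit, head(iris), type = "class")     # predicted classes for six rows
```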
Analysing Financial Data in Excel (ID 1277, 14 hours)

Audience: financial or market analysts, managers, accountants.

Course objectives: facilitate and automate all kinds of financial analysis with Microsoft Excel.

Outline: advanced functions (logical functions; math and statistical functions; financial functions); lookups and data tables (using lookup functions; using MATCH and INDEX); advanced list management (validating cell entries; exploring database functions); PivotTables and PivotCharts (creating pivot tables; calculated items and calculated fields); working with external data (exporting and importing; exporting and importing XML data; querying external databases; linking to a database; linking to an XML data source; analysing online data with web queries); analytical options (Goal Seek; Solver; the Analysis ToolPak; scenarios); macros and custom functions (running and recording a macro; working with VBA code; creating functions); conditional formatting and SmartArt (conditional formatting with graphics; SmartArt graphics).
Introduction to Machine Learning (ID 284991, 7 hours)

This training course is for people who would like to apply basic machine learning techniques in practical applications.

Audience: data scientists and statisticians who have some familiarity with machine learning and know how to program in R. The emphasis is on the practical aspects of data/model preparation, execution, post hoc analysis and visualization. The purpose is to give a practical introduction to machine learning to participants interested in applying the methods at work. Sector-specific examples are used to make the training relevant to the audience.

Outline: naive Bayes; multinomial models; Bayesian categorical data analysis; discriminant analysis; linear regression; logistic regression; GLM; the EM algorithm; mixed models; additive models; classification; KNN; ridge regression; clustering.
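As a minimal sketch of the KNN topic above, assuming the class package (shipped with base R distributions); the train/test split and k = 5 are illustrative choices:

```r
# k-nearest-neighbours classification on the built-in iris data
library(class)
set.seed(1)
idx   <- sample(nrow(iris), 100)        # assumed 100-row training split
train <- iris[idx, 1:4]
test  <- iris[-idx, 1:4]
pred  <- knn(train, test, cl = iris$Species[idx], k = 5)
table(pred, iris$Species[-idx])          # confusion matrix on held-out rows
```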
Introduction to Recommendation Systems (ID 287768, 7 hours)

Audience: marketing department employees, IT strategists and other people involved in decisions related to the design and implementation of recommender systems.

Format: short theoretical background followed by analysis of working examples and short, simple exercises.

Outline: challenges related to data collection (information overload; data types such as video, text and structured data; the potential of the data now and in the near future; basics of data mining); recommendation and searching (searching and filtering; sorting; determining weights of the search results; using synonyms; full-text search); the Long Tail (Chris Anderson's idea; drawbacks of the Long Tail); determining similarities (products; users; documents and web sites); content-based recommendation and measurement of similarities (cosine distance; the Euclidean distance of vectors; TF-IDF and term frequency); collaborative filtering (community rating); graphs (applications of graphs; determining the similarity of graphs; similarity between users); neural networks (basic concepts; training data and validation data; neural network examples in recommender systems); how to encourage users to share their data (making systems more comfortable; navigation; functionality and UX); case studies (the popularity of recommender systems and their problems; examples).
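To make the cosine distance idea concrete, here is a base-R sketch comparing two users' rating vectors (the ratings are illustrative assumptions; 1 means identical direction, 0 means no overlap):

```r
# Cosine similarity between two user rating vectors
cosine <- function(a, b) sum(a * b) / (sqrt(sum(a^2)) * sqrt(sum(b^2)))
u1 <- c(5, 3, 0, 1)   # assumed ratings of four items by user 1
u2 <- c(4, 0, 0, 1)   # assumed ratings of the same items by user 2
cosine(u1, u2)        # about 0.94: the users' tastes point the same way
```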
Market Forecasting (ID 1366, 14 hours)

Audience: this course has been created for analysts and forecasters wanting to introduce or improve forecasting, which can be related to sales forecasting, economic forecasting, technology forecasting, supply chain management, and demand or supply forecasting.

Description: this course guides delegates through a series of methodologies, frameworks and algorithms which are useful when choosing how to predict the future based on historical data. It uses standard tools like Microsoft Excel or some open-source programs (notably the R project). The principles covered in this course can be implemented with any software (e.g. SAS, SPSS, Statistica, MINITAB, ...).

Outline: problems facing forecasters (customer demand planning; investor uncertainty; economic planning; seasonal changes in demand/utilization; the roles of risk and uncertainty); time series methods (moving average; exponential smoothing; extrapolation; linear prediction; trend estimation; growth curves); econometric (causal) methods (regression analysis using linear or non-linear regression; autoregressive moving average (ARMA); autoregressive integrated moving average (ARIMA); econometrics); judgemental methods (surveys; the Delphi method; scenario building; technology forecasting; forecast by analogy); simulation and other methods (simulation; prediction markets; probabilistic forecasting and ensemble forecasting; reference class forecasting).
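As a hedged base-R sketch of the exponential smoothing and ARIMA methods above, on the built-in AirPassengers series (the ARIMA orders are illustrative choices, not a recommended specification):

```r
# Exponential smoothing (Holt-Winters) and a seasonal ARIMA forecast
fit_hw <- HoltWinters(AirPassengers)      # triple exponential smoothing
predict(fit_hw, n.ahead = 12)             # 12-month-ahead forecast

fit_ar <- arima(AirPassengers, order = c(1, 1, 1),
                seasonal = list(order = c(0, 1, 1), period = 12))
predict(fit_ar, n.ahead = 12)$pred        # ARIMA point forecasts
```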
287842 Numerical Methods 14 hours This course is for data scientists and statisticians who have some familiarity with numerical methods and know at least one programming language among R, Python, Octave and C++. The emphasis of this course is on the practical aspects of data/model preparation, execution, post hoc analysis and visualization. The purpose of this course is to give a practical introduction to numerical methods to participants interested in applying the methods at work. Sector-specific examples are used to make the training relevant to the audience. Topics Covered: curve fitting regression robust regression linear algebra: matrix operations eigenvalues/eigenvectors matrix decompositions ordinary & partial differential equations Fourier analysis interpolation & splines
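Curve fitting and spline interpolation, both listed above, can be tried in a few lines of base R. A minimal sketch on simulated data:

# Least-squares curve fitting and cubic spline interpolation (base R)
set.seed(1)
x <- seq(0, 10, by = 0.5)
y <- sin(x) + rnorm(length(x), sd = 0.1)   # noisy observations of a curve
fit <- lm(y ~ poly(x, 5))                  # degree-5 polynomial fit
sp <- splinefun(x, y)                      # cubic spline through the points
plot(x, y)
lines(x, fitted(fit), col = "red")         # fitted polynomial
xx <- seq(0, 10, by = 0.05)
lines(xx, sp(xx), col = "blue")            # interpolating spline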
287782 Apache Spark 14 hours Why Spark? Problems with Traditional Large-Scale Systems Introducing Spark Spark Basics What is Apache Spark? Using the Spark Shell Resilient Distributed Datasets (RDDs) Functional Programming with Spark Working with RDDs RDD Operations Key-Value Pair RDDs MapReduce and Pair RDD Operations The Hadoop Distributed File System Why HDFS? HDFS Architecture Using HDFS Running Spark on a Cluster Overview A Spark Standalone Cluster The Spark Standalone Web UI Parallel Programming with Spark RDD Partitions and HDFS Data Locality Working With Partitions Executing Parallel Operations Caching and Persistence RDD Lineage Caching Overview Distributed Persistence Writing Spark Applications Spark Applications vs. Spark Shell Creating the SparkContext Configuring Spark Properties Building and Running a Spark Application Logging Spark, Hadoop, and the Enterprise Data Center Overview Spark and the Hadoop Ecosystem Spark and MapReduce Spark Streaming Spark Streaming Overview Example: Streaming Word Count Other Streaming Operations Sliding Window Operations Developing Spark Streaming Applications Common Spark Algorithms Iterative Algorithms Graph Analysis Machine Learning Improving Spark Performance Shared Variables: Broadcast Variables Shared Variables: Accumulators Common Performance Issues
1818 Statistics for Researchers 35 hours This course aims to give researchers an understanding of the principles of statistical design and analysis and their relevance to research in a range of scientific disciplines. It covers some probability and statistical methods, mainly through examples. This training contains around 30% lectures and 70% guided quizzes and labs. In the case of a closed course we can tailor the examples and materials to a specific branch (like psychology tests, public sector, biology, genetics, etc.). In the case of public courses, mixed examples are used. Though various software is used during this course (from Microsoft Excel to SPSS, Statgraphics, etc.), its main focus is on understanding the principles and processes guiding research, reasoning and conclusions. This course can be delivered as a blended course, i.e. with homework and assignments. Scientific Method, Probability & Statistics Very short history of statistics Why we can be "confident" about the conclusions Probability and decision making Preparation for research (deciding "what" and "how") The big picture: research is a part of a process with inputs and outputs Gathering data Questionnaires and measurement What to measure Observational Studies Design of Experiments Analysis of Data and Graphical Methods Research Skills and Techniques Research Management Describing Bivariate Data Introduction to Bivariate Data Values of the Pearson Correlation Guessing Correlations Simulation Properties of Pearson's r Computing Pearson's r Restriction of Range Demo Variance Sum Law II Exercises Probability Introduction Basic Concepts Conditional Probability Demo Gambler's Fallacy Simulation Birthday Demonstration Binomial Distribution Binomial Demonstration Base Rates Bayes' Theorem Demonstration Monty Hall Problem Demonstration Exercises Normal Distributions Introduction History Areas of Normal Distributions Varieties of Normal Distribution Demo Standard Normal Normal Approximation to the Binomial Normal Approximation Demo Exercises Sampling Distributions Introduction Basic Demo Sample Size Demo Central Limit Theorem Demo Sampling Distribution of the Mean Sampling Distribution of Difference Between Means Sampling Distribution of Pearson's r Sampling Distribution of a Proportion Exercises Estimation Introduction Degrees of Freedom Characteristics of Estimators Bias and Variability Simulation Confidence Intervals Exercises Logic of Hypothesis Testing Introduction Significance Testing Type I and Type II Errors One- and Two-Tailed Tests Interpreting Significant Results Interpreting Non-Significant Results Steps in Hypothesis Testing Significance Testing and Confidence Intervals Misconceptions Exercises Testing Means Single Mean t Distribution Demo Difference between Two Means (Independent Groups) Robustness Simulation All Pairwise Comparisons Among Means Specific Comparisons Difference between Two Means (Correlated Pairs) Correlated t Simulation Specific Comparisons (Correlated Observations) Pairwise Comparisons (Correlated Observations) Exercises Power Introduction Example Calculations Factors Affecting Power Exercises Prediction Introduction to Simple Linear Regression Linear Fit Demo Partitioning Sums of Squares Standard Error of the Estimate Prediction Line Demo Inferential Statistics for b and r Exercises ANOVA Introduction ANOVA Designs One-Factor ANOVA (Between-Subjects) One-Way Demo Multi-Factor ANOVA (Between-Subjects) Unequal Sample Sizes Tests Supplementing ANOVA Within-Subjects ANOVA Power of Within-Subjects Designs Demo
Exercises Chi Square Chi Square Distribution One-Way Tables Testing Distributions Demo Contingency Tables 2 x 2 Table Simulation Exercises Case Studies Analysis of selected case studies
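The chi-square test above can be tried in any of the packages mentioned; in R, a minimal sketch with a hypothetical 2 x 2 contingency table (base R only):

# Test of independence on a made-up 2 x 2 contingency table
tab <- matrix(c(30, 10, 20, 40), nrow = 2,
              dimnames = list(group = c("A", "B"), outcome = c("yes", "no")))
chisq.test(tab)  # reports the chi-square statistic and p-value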
287849 Administrator Training for Apache Hadoop 35 hours Audience: The course is intended for IT specialists looking for a solution to store and process large data sets in a distributed system environment Goal: Deep knowledge on Hadoop cluster administration. 1: HDFS (17%) Describe the function of HDFS Daemons Describe the normal operation of an Apache Hadoop cluster, both in data storage and in data processing. Identify current features of computing systems that motivate a system like Apache Hadoop. Classify major goals of HDFS Design Given a scenario, identify an appropriate use case for HDFS Federation Identify components and daemons of an HDFS HA-Quorum cluster Analyze the role of HDFS security (Kerberos) Determine the best data serialization choice for a given scenario Describe file read and write paths Identify the commands to manipulate files in the Hadoop File System Shell 2: YARN and MapReduce version 2 (MRv2) (17%) Understand how upgrading a cluster from Hadoop 1 to Hadoop 2 affects cluster settings Understand how to deploy MapReduce v2 (MRv2 / YARN), including all YARN daemons Understand basic design strategy for MapReduce v2 (MRv2) Determine how YARN handles resource allocations Identify the workflow of a MapReduce job running on YARN Determine which files you must change, and how, in order to migrate a cluster from MapReduce version 1 (MRv1) to MapReduce version 2 (MRv2) running on YARN. 3: Hadoop Cluster Planning (16%) Principal points to consider in choosing the hardware and operating systems to host an Apache Hadoop cluster. Analyze the choices in selecting an OS Understand kernel tuning and disk swapping Given a scenario and workload pattern, identify a hardware configuration appropriate to the scenario Given a scenario, determine the ecosystem components your cluster needs to run in order to fulfill the SLA Cluster sizing: given a scenario and frequency of execution, identify the specifics for the workload, including CPU, memory, storage, disk I/O Disk Sizing and Configuration, including JBOD versus RAID, SANs, virtualization, and disk sizing requirements in a cluster Network Topologies: understand network usage in Hadoop (for both HDFS and MapReduce) and propose or identify key network design components for a given scenario 4: Hadoop Cluster Installation and Administration (25%) Given a scenario, identify how the cluster will handle disk and machine failures Analyze a logging configuration and logging configuration file format Understand the basics of Hadoop metrics and cluster health monitoring Identify the function and purpose of available tools for cluster monitoring Be able to install all the ecosystem components in CDH 5, including (but not limited to): Impala, Flume, Oozie, Hue, Manager, Sqoop, Hive, and Pig Identify the function and purpose of available tools for managing the Apache Hadoop file system 5: Resource Management (10%) Understand the overall design goals of each of Hadoop’s schedulers Given a scenario, determine how the FIFO Scheduler allocates cluster resources Given a scenario, determine how the Fair Scheduler allocates cluster resources under YARN Given a scenario, determine how the Capacity Scheduler allocates cluster resources 6: Monitoring and Logging (15%) Understand the functions and features of Hadoop’s metric collection abilities Analyze the NameNode and JobTracker Web UIs Understand how to monitor cluster Daemons Identify and monitor CPU usage on master nodes Describe how to monitor swap and memory allocation on all nodes Identify how to view and manage Hadoop’s log files Interpret a log file
287792 Hadoop Administration on MapR 28 hours Audience: IT professionals who aspire to get involved in the 'Big Data' world or require knowledge of open-source NoSQL solutions. This course is intended to demystify big data/Hadoop technology and to show it is not difficult to understand. Big Data Overview: What is Big Data Why Big Data is gaining popularity Big Data Case Studies Big Data Characteristics Solutions to work on Big Data. Hadoop & Its components: What is Hadoop and what are its components. Hadoop architecture and the characteristics of data it can handle/process. Brief on Hadoop history, companies using it and why they have started using it. Hadoop framework & its components - explained in detail. What is HDFS and reads/writes to the Hadoop Distributed File System. How to set up a Hadoop cluster in different modes - Stand-alone/Pseudo/Multi-node cluster. (This includes setting up a Hadoop cluster in VirtualBox/VMware, the network configuration that needs to be looked at carefully, running Hadoop daemons and testing the cluster.) What the MapReduce framework is and how it works. Running MapReduce jobs on a Hadoop cluster. Understanding Replication, Mirroring and Rack awareness in the context of Hadoop clusters. Hadoop Cluster Planning: How to plan your Hadoop cluster. Understanding hardware-software to plan your Hadoop cluster. Understanding workloads and planning a cluster to avoid failures and perform optimally. What is MapR and why MapR: Overview of MapR and its architecture. Understanding & working of the MapR Control System, MapR Volumes, Snapshots & Mirrors. Planning a cluster in the context of MapR. Comparison of MapR with other distributions and Apache Hadoop. MapR installation and cluster deployment. Cluster Setup & Administration: Managing services, nodes, snapshots, mirror volumes and remote clusters. Understanding and managing nodes. Understanding of Hadoop components, installing Hadoop components alongside MapR services. Accessing data on the cluster, including via NFS. Managing services & nodes. Managing data by using volumes, managing users and groups, managing & assigning roles to nodes, commissioning/decommissioning of nodes, cluster administration and performance monitoring, configuring/analyzing and monitoring metrics to monitor performance, configuring and administering MapR security. Understanding and working with M7 - native storage for MapR tables. Cluster configuration and tuning for optimum performance. Cluster upgrade and integration with other setups: Upgrading the software version of MapR and types of upgrade. Configuring a MapR cluster to access an HDFS cluster. Setting up a MapR cluster on Amazon Elastic MapReduce. All the above topics include demonstrations and practice sessions for learners to have hands-on experience of the technology.
287850 Hadoop Administration 21 hours The course is dedicated to IT specialists who are looking for a solution to store and process large data sets in a distributed system environment Course goal: Getting knowledge regarding Hadoop cluster administration Introduction to Cloud Computing and Big Data solutions Apache Hadoop evolution: HDFS, MapReduce, YARN Installation and configuration of Hadoop in Pseudo-distributed mode Running MapReduce jobs on a Hadoop cluster Hadoop cluster planning, installation and configuration Hadoop ecosystem: Pig, Hive, Sqoop, HBase Big Data future: Impala, Cassandra
287794 Hadoop Administration 32 hours A basic knowledge of Linux/UNIX would be helpful. Basic knowledge of Java or databases will be helpful. Note: even without it, students can progress from scratch to a professional level. Audience: IT or non-IT professionals who are interested in growing their career by gaining knowledge about BIG DATA and frameworks/solutions such as Hadoop & its components. Format: The course has theoretical discussions followed by environment setup and tasks to work on for hands-on experience. Every student gets 24/7 support for 15 days during and after course completion, course material, and knowledge of real-time case studies. 40% theory, 55% hands-on experience through instructor-led live demos and then assignments, 5% tests and mock interviews. Topics Covered: Big Data Overview: - What is Big Data - Why Big Data is gaining popularity - Big Data Case Studies - Big Data Characteristics - Solutions to work on Big Data. Hadoop & Its components: - What is Hadoop and what are its components. - Hadoop architecture and the characteristics of data it can handle/process. - Brief on Hadoop history, companies using it and why they have started using it. Hadoop framework & its components - explained in detail. - What is HDFS and reads/writes to the Hadoop Distributed File System. - How to set up a Hadoop cluster in different modes - Pseudo/Multi-node cluster. This includes setting up a Hadoop cluster in VirtualBox/VMware or on individual machines, the network configuration that needs to be looked at carefully, running Hadoop daemons and testing the cluster. - What the MapReduce framework is and how it works. - Running MapReduce jobs on a Hadoop cluster. - Understanding Replication, Mirroring and Rack awareness in the context of Hadoop. All the above topics include demos and practice sessions for learners to have hands-on experience of the technology. Hadoop Cluster Planning: - How to plan your Hadoop cluster. - Understanding hardware-software to plan your Hadoop cluster. - Understanding workloads and planning a cluster to avoid failures and perform optimally. Working with a Hadoop cluster - Hadoop Administration - Understanding the functionality of the JobTracker: resource management and job scheduling. - Understanding schedulers: Fair | FIFO | Capacity scheduler - Hadoop Administration: setting parameters to set up Trash | Schedulers & pools | Metadata & data storage at specific locations | replication | Hadoop client | commissioning and decommissioning of data nodes and many more. - Hadoop administration commands to work on Hadoop clusters: Balancer | Job list, status, setting priority | Save namespace | Metasave | DFSadmin commands | FS commands | distcp | fsck | setting space quotas | write/read access to HDFS | securing a Hadoop cluster and many more. - Backup and recovery - Analyzing problems and resolving them: some examples from live real-time environments: Hadoop daemons not starting up | namespace IDs out of sync | connectivity issues between slave and master nodes | data being under-replicated | browsing through respective UIs | job failures | etc. Hadoop cluster with latest features: - Hadoop 1.x and 2.x differences - Hadoop 2.x new features - What are YARN, Federation and High Availability? - Hadoop daemons and what has changed. Working on a Hadoop 2.x cluster: - Upgrading old Hadoop versions (0.22.x or 1.x.x) to Hadoop 2.x in different modes - Setting up Hadoop 2.x clusters in different modes and verifying the setup. - Running MapReduce jobs on Hadoop YARN.
- A revisit of Hadoop configuration files, deprecated parameters, add-ons to existing config files and miscellaneous. - Understanding QJM (Quorum Journal Manager) Understanding and working with Hadoop components: - What are Oozie, Flume, Hive and HBase, and why they are used. - How to set up Hive/Pig/HBase on Hadoop clusters. - Setting up and working with Hive, HBase and Pig on Hadoop 1.x or Hadoop 2.x - Some real-time case studies. - What is Cloudera Manager and how it is used. - How to run MapReduce jobs from the Java API. Summarizing and brief revision. Questions and answers. Depending on the client's/attendees' requirements, the topics below can be included in the course agenda: HBase administration in detail. Hive administration in detail. Hadoop administration on AWS. Additional sessions on Pig/Perl/Python scripting, and usage of Java APIs with a Hadoop cluster.
2004 Six Sigma Yellow Belt 21 hours Yellow Belt covers the basics of the Six Sigma Define Measure Analyse Improve Control (DMAIC) approach, enabling delegates to take part in and lead team-based waste and defect reduction projects and initiatives. In addition, emphasis is placed on applying the problem-solving tools in daily roles. At the end of the course you will be equipped to look at your immediate team and role, determine what can be improved, and create a business improvement project on a selected opportunity that is aligned to customer requirements. You will be able to analyse the process using visualization tools, identify the waste (non-value-adding) components, and work to eliminate these from the process. You will apply root cause analysis techniques to identify the underlying causes of defects in the process. The course uses simulations, case study exercises and work-based projects to enable delegates to 'learn through doing'. Notes: This course has a minimum class size of 4. If requested, this course can be delivered in 2 days, with some reductions to the course content and level of detail in some areas, notably Customer needs, Graphical analysis and Process handover. An overview of project selection and scoping Understanding customer needs and how they impact project aims Discovering processes using visualisation techniques Understanding the causes of work and how to simplify Finding and removing process waste Graphical analysis to understand process performance Problem solving tools to determine root cause Basic solution creation Piloting & implementation Process handover
287854 Octave for Data Analysis 14 hours Audience: This course is for data scientists and statisticians who have some familiarity with statistical methods and would like to use the Octave programming language at work. The purpose of this course is to give a practical introduction to Octave programming to participants interested in using this programming language at work. Environment Data types: numeric, string, arrays, matrices Variables Expressions Control flow Functions Exception handling Debugging Input/output Linear algebra Optimization Statistical distributions Regression Plotting
287809 Fundamentals of Cassandra DB 21 hours This course introduces the basics of Cassandra 2.0, including its installation & configuration, internal architecture, tools, the Cassandra Query Language, and administration. Audience Administrators and developers seeking to use Cassandra. This course serves as a foundation and prerequisite for other advanced Cassandra courses. Introduction to Cassandra Big Data Common use cases of Cassandra Cassandra architecture Installation and Configuration Running and Stopping a Cassandra instance Cassandra Data Model Cassandra Query Language Configuring the Cassandra nodes and clusters using CCM cqlsh shell commands nodetool Using cassandra-stress to populate and test the Cassandra nodes Coordinating the Cassandra requests Replication Consistency Tuning Cassandra Nodes Communication Writing and Reading data to/from the storage engine Data directories Anti-entropy operations Cassandra Compaction Choosing and Implementing compaction strategies Best practices in hardware planning Troubleshooting resources
2005 Six Sigma Green Belt 70 hours Green Belts participate in and lead Lean and Six Sigma projects from within their regular job function. They can tackle projects as part of a cross-functional team or projects scoped within their normal job. Sessions of Green Belt training are separated by 3 or 4 weeks, during which the Green Belts apply their training to their improvement projects. We recommend supporting the Green Belts on their projects in between training sessions and holding stage gate reviews along with leadership and Lean Six Sigma Champions to ensure the DMAIC methodology is being rigorously applied. Week 1 Foundation: covers the fundamentals of the Lean Six Sigma Define Measure Analyse Improve Control (DMAIC) approach, enabling participants to take part in and lead waste and defect reduction projects and initiatives. Week 2 Practitioner: provides additional data analysis and lean tools for participants to lead well-scoped process improvement projects related to their regular job function. Block 1 Day 1 Introduction to Six Sigma Project Chartering & VOC Process Mapping Stakeholder analysis Day 2 Team Start Up Prioritisation Matrix Lean Thinking Value Stream Mapping Day 3 Data Collection Minitab and Graphical Analysis Descriptive Statistics Day 4 Measurement System Evaluation Process Capability Cp, CpK Six Sigma Metrics Day 5 5 Why FMEA Block 2 Day 1 Review of Block 1 Multivari Inferential Statistics Intro to Hypothesis Testing Day 2 2 sample t-tests F tests Hypothesis Testing – Chi Sq Day 3 Hypothesis Testing - Anova Day 4 Correlation and Regression Multiple Regression Introduction to Design Of Experiments Day 5 Mistake Proofing Control Plans Control Charts
2623 Marketing Analytics using R 21 hours Audience: Business owners (marketing managers, product managers, customer base managers) and their teams; customer insights professionals. Overview: The course follows the customer life cycle from acquiring new customers, managing the existing customers for profitability, retaining good customers, and finally understanding which customers are leaving us and why. We will be working with real (if anonymous) data from a variety of industries including telecommunications, insurance, media, and high tech. Format: Instructor-led training over the course of five half-day sessions with in-class exercises as well as homework. It can be delivered as a classroom or distance (online) course. Part 1: Inflow - acquiring new customers Our focus is direct marketing, so we will not look at advertising campaigns but instead focus on understanding marketing campaigns (e.g. direct mail). This is the foundation for almost everything else in the course. We look at measuring and improving campaign effectiveness, including: The importance of test and control groups. Universal control group. Techniques: lift curves, AUC. Return on investment. Optimizing marketing spend. Part 2: Base Management - managing existing customers Considering the cost of acquiring new customers, for many businesses there are probably few assets more valuable than their existing customer base, though few think of it in this way. Topics include: 1. Cross-selling and up-selling: offering the right product or service to the customer at the right time. Techniques: RFM models, multinomial regression. Value of lifetime purchases. 2. Customer segmentation: understanding the types of customers that you have. Classification models using first simple decision trees, and then random forests and other, newer techniques. Part 3: Retention - keeping your good customers Understanding which customers are likely to leave, and what you can do about it, is key to profitability in many industries, especially where there are repeat purchases or subscriptions. We look at propensity-to-churn models, including logistic regression: glm (package stats) and newer techniques (especially gbm as a general tool), tuning models (caret) and an introduction to ensemble models. Part 4: Outflow - understanding who is leaving and why Customers will leave you – that is a fact of life. What is important is to understand who is leaving and why. Is it low-value customers who are leaving, or is it your best customers? Are they leaving for competitors, or because they no longer need your products and services? Topics include: Customer lifetime value models: combining the value of purchases with propensity to churn and the cost of servicing and retaining the customer. Analysing survey data. (Generally useful, but we will do a brief introduction here in the context of exit surveys.)
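As a flavour of the Part 3 material, a minimal propensity-to-churn sketch using glm from the stats package (named above); the customers data frame is simulated purely for illustration:

# Simulate a toy customer base; shorter tenure drives churn here
set.seed(1)
customers <- data.frame(
  tenure = rexp(500, rate = 1 / 24),            # months as a customer
  monthly_spend = rnorm(500, mean = 50, sd = 15)
)
customers$churned <- rbinom(500, 1, plogis(1 - 0.08 * customers$tenure))
# Logistic regression, then score every customer with a churn propensity
fit <- glm(churned ~ tenure + monthly_spend, data = customers, family = binomial)
customers$p_churn <- predict(fit, type = "response")
head(customers[order(-customers$p_churn), ])    # highest-risk customers first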
287818 Hadoop for Developers 14 hours Introduction What is Hadoop? What does it do? How does it do it? The Motivation for Hadoop Problems with Traditional Large-Scale Systems Introducing Hadoop Hadoopable Problems Hadoop: Basic Concepts and HDFS The Hadoop Project and Hadoop Components The Hadoop Distributed File System Introduction to MapReduce MapReduce Overview Example: WordCount Mappers Reducers Hadoop Clusters and the Hadoop Ecosystem Hadoop Cluster Overview Hadoop Jobs and Tasks Other Hadoop Ecosystem Components Writing a MapReduce Program in Java Basic MapReduce API Concepts Writing MapReduce Drivers, Mappers, and Reducers in Java Speeding Up Hadoop Development by Using Eclipse Differences Between the Old and New MapReduce APIs Writing a MapReduce Program Using Streaming Writing Mappers and Reducers with the Streaming API Unit Testing MapReduce Programs Unit Testing The JUnit and MRUnit Testing Frameworks Writing Unit Tests with MRUnit Running Unit Tests Delving Deeper into the Hadoop API Using the ToolRunner Class Setting Up and Tearing Down Mappers and Reducers Decreasing the Amount of Intermediate Data with Combiners Accessing HDFS Programmatically Using The Distributed Cache Using the Hadoop API’s Library of Mappers, Reducers, and Partitioners Practical Development Tips and Techniques Strategies for Debugging MapReduce Code Testing MapReduce Code Locally by Using LocalJobRunner Writing and Viewing Log Files Retrieving Job Information with Counters Reusing Objects Creating Map-Only MapReduce Jobs Partitioners and Reducers How Partitioners and Reducers Work Together Determining the Optimal Number of Reducers for a Job Writing Custom Partitioners Data Input and Output Creating Custom Writable and Writable-Comparable Implementations Saving Binary Data Using SequenceFile and Avro Data Files Issues to Consider When Using File Compression Implementing Custom InputFormats and OutputFormats Common MapReduce Algorithms Sorting and Searching Large Data Sets Indexing Data Computing Term Frequency — Inverse Document Frequency Calculating Word Co-Occurrence Performing Secondary Sort Joining Data Sets in MapReduce Jobs Writing a Map-Side Join Writing a Reduce-Side Join Integrating Hadoop into the Enterprise Workflow Integrating Hadoop into an Existing Enterprise Loading Data from an RDBMS into HDFS by Using Sqoop Managing Real-Time Data Using Flume Accessing HDFS from Legacy Systems with FuseDFS and HttpFS An Introduction to Hive, Impala, and Pig The Motivation for Hive, Impala, and Pig Hive Overview Impala Overview Pig Overview Choosing Between Hive, Impala, and Pig An Introduction to Oozie Introduction to Oozie Creating Oozie Workflows
2006 Six Sigma Black Belt 84 hours Six Sigma is a data-driven approach that tackles variation to improve the performance of products, services and processes, combining practical problem solving and the best scientific approaches found in experimentation and optimisation of systems. The approach has been widely and successfully applied in industry, notably by Motorola, AlliedSignal & General Electric. Black Belt is a qualification for improvement managers in a Six Sigma organisation. You will learn the tools and techniques to take an improvement project through the Define, Measure, Analyse, Improve and Control phases (DMAIC). These techniques include Process Mapping, Measurement System Evaluation, Regression Analysis, Design of Experiments, Statistical Tolerancing, Monte Carlo Simulation and Lean Thinking. The content of the course takes the participants through the DMAIC phases, as well as introducing subjects such as Lean Thinking and Design for Six Sigma, and discussing important leadership issues and experiences in deploying a Six Sigma programme. Week 1 Foundation: covers the fundamentals of the Lean Six Sigma Define Measure Analyse Improve Control (DMAIC) approach, enabling participants to take part in and lead waste and defect reduction projects and initiatives. Week 2 Practitioner: provides additional data analysis and lean tools for participants to lead well-scoped process improvement projects related to their regular job function. Week 3 Expert: provides regression, design of experiments and data analysis techniques to enable participants to tackle complex problem-solving projects that require understanding of the relationships between multiple variables. The trainer has 16 years' experience with Six Sigma; as well as leading the deployment of Six Sigma at a number of businesses, he has trained and coached over 300 Black Belts. Here are a few comments from previous participants: “Probably the most valuable course I will ever pass” “The content was very well delivered. The examples very relevant. Thank you” “The course was excellent and I am able to use part of it to coach my lean teams here” (Company supervisor who attended with KTP associate) Block 1 Day 1 Introduction to Six Sigma Project Chartering & VOC Process Mapping Stakeholder analysis Day 2 Team Start Up Prioritisation Matrix Lean Thinking Value Stream Mapping Day 3 Data Collection Minitab and Graphical Analysis Descriptive Statistics Day 4 Measurement System Evaluation Process Capability Cp, CpK Six Sigma Metrics Day 5 5 Why FMEA Block 2 Day 1 Review of Block 1 Multivari Inferential Statistics Intro to Hypothesis Testing Day 2 2 sample t-tests F tests Hypothesis Testing – Chi Sq Day 3 Hypothesis Testing - Anova Day 4 Correlation and Regression Multiple Regression Introduction to Design Of Experiments Day 5 Mistake Proofing Control Plans Control Charts Block 3 Day 1 Review of Block 2 2K Factorial Experiments Box Cox Transformations Hypothesis Testing – Non Parametric Day 2 2K Factorial Experiments Fractional Factorial Experiments Day 3 Noise Blocking Robustness Centre Points General Full Factorial Experiments Day 4 Response Surface Experiments Implementing Improvements Creative Solutions Day 5 Intro to Design for Six Sigma Statistical Tolerancing Monte Carlo Simulation Certification Six Sigma is a practical qualification; to demonstrate knowledge of what has been learnt on the course you will need to undertake 2 coursework projects.
There is no report to produce, but you will be required to present a PowerPoint presentation to the trainer and examiner showing results and method. The projects can cover work you would complete in your normal job; however, you will need to show use of the DMAIC problem-solving approach and application of Six Sigma and Lean tools. This provides a good balance between the practical approach and more rigorous analysis, which together lead to robust solutions. You will be able to contact the trainer to discuss how Six Sigma tools could benefit you in your project. Examples of projects from previous participants include: Formulating cream texture for seasonality in dairy feeds. Housing Association complaints reduction. Multi-variable (cost, efficiency, size) optimisation of a fuel cell. Job scheduling improvement in a factory. Ambulance waiting time reduction. Reduction in resin thickness variation in glass manufacture. NobleProg & Redlands provide Black Belt certification. For delegates who require independent accreditation, NobleProg & Redlands have partnered with the British Quality Foundation (BQF) to provide Lean Six Sigma Black Belt certification. Certification requires passing an exam at the end of the course and completing and presenting two improvement projects that demonstrate understanding and application of the Six Sigma approach and techniques. An additional charge of £600 plus VAT is levied for BQF independent accreditation.
85063 Training Neural Network in R 14 hours This course is an introduction to applying neural networks to real-world problems using the R-project software. Introduction to Neural Networks What neural networks are Current status of applying neural networks Neural networks vs regression models Supervised and unsupervised learning Overview of available packages nnet, neuralnet and others Differences between packages and their limitations Visualizing neural networks Applying Neural Networks Concept of neurons and neural networks A simplified model of the brain Capabilities of a neuron The XOR problem and the nature of the distribution of values The polymorphic nature of the sigmoid Other activation functions Construction of Neural Networks How neurons are connected A neural network as nodes Building a network Neurons Layers Weights Input and output data Range 0 to 1 Normalization Learning Neural Networks Backpropagation Propagation steps Network training Algorithms and their range of application Estimation problems and the possibility of approximation Examples OCR and image pattern recognition Other applications Implementing a neural network: a modeling job predicting stock prices of listed companies
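A minimal sketch of the XOR problem discussed above, using the nnet package named in the outline (illustrative only):

# Train a small single-hidden-layer network on an XOR-style problem
library(nnet)
set.seed(7)
x <- matrix(runif(400), ncol = 2)                 # 200 points in the unit square
y <- as.integer(xor(x[, 1] > 0.5, x[, 2] > 0.5))  # XOR-style labels
net <- nnet(x, y, size = 4, maxit = 500)          # 4 hidden units, sigmoid output
mean(round(predict(net)) == y)                    # training accuracy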
287841 Apache Mahout for Developers 14 hours Audience Developers involved in projects that use machine learning with Apache Mahout. Format Hands-on introduction to machine learning. The course is delivered in a lab format based on real-world practical use cases. Implementing Recommendation Systems with Mahout Introduction to recommender systems Representing recommender data Making recommendations Optimizing recommendations Clustering Basics of clustering Data representation Clustering algorithms Clustering quality improvements Optimizing clustering implementations Application of clustering in the real world Classification Basics of classification Classifier training Classifier quality improvements
2012 Statistics for Managers 35 hours This course has been created for decision makers whose primary goal is not to do the calculations and the analysis, but to understand them. The course uses a lot of pictures, diagrams, computer simulations, anecdotes and a sense of humour to explain the concepts and pitfalls of statistics. Introduction to Statistics What are Statistics? Importance of Statistics Descriptive Statistics Inferential Statistics Variables Percentiles Measurement Levels of Measurement Basics of Data Collection Distributions Summation Notation Linear Transformations Common Pitfalls Biased samples Average, mean or median? Misleading graphs Semi-attached figures Third variable problem Ceteris paribus Errors in reasoning Understanding confidence level Understanding Results Describing Bivariate Data Probability Normal Distributions Sampling Distributions Estimation Logic of Hypothesis Testing Testing Means Power Prediction ANOVA Chi Square Case Studies Discussion of case studies chosen by the delegates.
287766 Programming with Big Data in R 21 hours Introduction to Programming Big Data with R (pbdR) Setting up your environment to use pbdR Scope and tools available in pbdR Packages commonly used with Big Data alongside pbdR Message Passing Interface (MPI) Using MPI from pbdR Parallel processing Point-to-point communication Send Matrices Summing Matrices Collective communication Summing Matrices with Reduce Scatter / Gather Other MPI communications Distributed Matrices Creating a distributed diagonal matrix SVD of a distributed matrix Building a distributed matrix in parallel Statistics Applications Monte Carlo Integration Reading Datasets Reading on all processes Broadcasting from one process Reading partitioned data Distributed Regression Distributed Bootstrap
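The collective-communication pattern above (summing with Reduce) looks like this in practice. A minimal sketch, assuming the pbdMPI package and an MPI runtime are installed; run it with, e.g., mpiexec -np 4 Rscript sum.r:

# Each rank contributes one value; reduce() sums them across ranks
library(pbdMPI)
init()
local_part <- comm.rank() + 1            # rank-specific contribution
total <- reduce(local_part, op = "sum")  # collective sum, delivered to rank 0
comm.print(total)                        # printed by rank 0 only
finalize()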
287847 Data Mining 21 hours The course can be provided with any tools, including free open-source data mining software and applications. Introduction Data mining as the analysis step of the KDD process ("Knowledge Discovery in Databases") Subfield of computer science Discovering patterns in large data sets Sources of methods Artificial intelligence Machine learning Statistics Database systems What is involved? Database and data management aspects Data pre-processing Model and inference considerations Interestingness metrics Complexity considerations Post-processing of discovered structures Visualization Online updating Data mining main tasks Automatic or semi-automatic analysis of large quantities of data Extracting previously unknown interesting patterns groups of data records (cluster analysis) unusual records (anomaly detection) dependencies (association rule mining) Data mining Anomaly detection (Outlier/change/deviation detection) Association rule learning (Dependency modeling) Clustering Classification Regression Summarization Use and applications Able Danger Behavioral analytics Business analytics Cross Industry Standard Process for Data Mining Customer analytics Data mining in agriculture Data mining in meteorology Educational data mining Human genetic clustering Inference attack Java Data Mining Open-source intelligence Path analysis (computing) Police-enforced ANPR in the UK Reactive business intelligence SEMMA Stellar Wind Talx Zapaday Data dredging, data fishing, data snooping
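As one concrete instance of the cluster-analysis task listed above, a base-R sketch on a built-in data set:

# k-means cluster analysis on the iris measurements (base R only)
km <- kmeans(iris[, 1:4], centers = 3, nstart = 20)
table(cluster = km$cluster, species = iris$Species)  # compare clusters to known labels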
2013 The Practitioner’s Guide to Multivariate Techniques 14 hours The introduction of the digital computer, and now the widespread availability of computer packages, has opened up a hitherto difficult area of statistics: multivariate analysis. Previously the formidable computing effort associated with these procedures presented a real barrier. That barrier has now disappeared and the analyst can therefore concentrate on an appreciation and an interpretation of the findings. Multivariate Analysis of Variance (MANOVA) Whereas the Analysis of Variance technique (ANOVA) investigates possible systematic differences between prescribed groups of individuals on a single variable, the technique of Multivariate Analysis of Variance is simply an extension of that procedure to numerous variates viewed collectively. These variates could be distinct in nature, for example Height, Weight, etc., or repeated measures of a single variate over time or over space. When the variates are repeated measures over time or space, the analyses may often be reduced to a succession of univariate analyses, with easier interpretation. This procedure is often referred to as Repeated Measures Analysis. Principal Component Analysis If only two variates are recorded for a number of individuals, the data may conveniently be represented on a two-dimensional plot. If there are ‘p’ variates then one could imagine a plot of the data in ‘p’-dimensional space. The technique of Principal Component Analysis corresponds to a rotation of the axes so that the maximum amounts of variation are progressively represented along the new axes. It has been described as ‘peering into multidimensional space, from every conceivable angle, and selecting as the viewing angle that which contains the maximum amount of variation’. The aim therefore is a reduction of the dimensionality of multivariate data. If, for example, a very high percentage (say 90%) of the variability is contained in the first two principal components, a plot of these components would be a virtually complete pictorial representation of the variability. Discriminant Analysis Suppose that several variates are observed on individuals from two identified groups. The technique of discriminant analysis involves calculating the linear function of the variates that best separates the groups. The linear function may therefore be used to identify group membership simply from the pattern of variates. Various methods are available to estimate the success in general of this identification procedure. Canonical Variate Analysis Canonical Variate Analysis is in essence an extension of Discriminant Analysis to accommodate the situation where there are more than two groups of individuals. Cluster Analysis Cluster Analysis, as the name suggests, involves identifying groupings (or clusters) of individuals in multidimensional space. Since here there is no ‘a priori’ grouping of individuals, the identification of so-called clusters is a subjective process subject to various assumptions. Most computer packages offer several clustering procedures that may often give differing results. However, the pictorial representation of the so-called ‘clusters’, in diagrams called dendrograms, provides a very useful diagnostic. Factor Analysis If ‘p’ variates are observed on each of ‘n’ individuals, the technique of factor analysis attempts to identify, say, ‘r’ (< p) so-called factors which determine to a large extent the variate values.
The implicit assumption here therefore is that the entire array of ‘p’ variates is controlled by ‘r’ factors. For example, the ‘p’ variates could represent the performance of students in numerous examination subjects, and we wish to determine whether a few attributes such as numerical ability and linguistic ability could account for much of the variability. The difficulties here stem from the fact that the so-called factors are not directly observable, and indeed may not really exist. Factor analysis has been viewed very suspiciously over the years because of the measure of speculation involved in the identification of factors. One popular numerical procedure starts with the rotation of axes using principal components (described above), followed by a rotation of the factors identified.
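The Principal Component Analysis described above can be tried directly in R. A minimal base-R sketch:

# PCA: rotate the axes so variation is progressively captured along new axes
pc <- prcomp(iris[, 1:4], scale. = TRUE)
summary(pc)                             # proportion of variance per component
plot(pc$x[, 1:2], col = iris$Species)   # the data viewed along the first two axes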
287807 Machine Learning Fundamentals with R 14 hours The aim of this course is to provide basic proficiency in applying Machine Learning methods in practice. Through the use of the R programming platform and its various libraries, and based on a multitude of practical examples, this course teaches how to use the most important building blocks of Machine Learning, how to make data modeling decisions, how to interpret the outputs of the algorithms, and how to validate the results. Our goal is to give you the skills to understand and use the most fundamental tools from the Machine Learning toolbox confidently and to avoid the common pitfalls of Data Science applications. Introduction to Applied Machine Learning Statistical learning vs. Machine learning Iteration and evaluation Bias-Variance trade-off Regression Linear regression Generalizations and Nonlinearity Exercises Classification Bayesian refresher Naive Bayes Logistic regression K-Nearest neighbors Exercises Cross-validation and Resampling Cross-validation approaches Bootstrap Exercises Unsupervised Learning K-means clustering Examples Challenges of unsupervised learning and beyond K-means
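A minimal cross-validation sketch in the spirit of the resampling section above, using cv.glm from the boot package (boot ships with R; its use here is an assumption, not the course's prescribed tool):

# 5-fold cross-validation of a linear model on a built-in data set
library(boot)
fit <- glm(mpg ~ wt + hp, data = mtcars)  # gaussian glm = linear regression
cv <- cv.glm(mtcars, fit, K = 5)          # 5-fold CV
cv$delta[1]                               # cross-validated prediction error (MSE)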
287977 MATLAB Fundamental 21 hours This three-day course provides a comprehensive introduction to the MATLAB technical computing environment. The course is intended for beginning users and those looking for a review. No prior programming experience or knowledge of MATLAB is assumed. Themes of data analysis, visualization, modeling, and programming are explored throughout the course. Topics include: Working with the MATLAB user interface Entering commands and creating variables Analyzing vectors and matrices Visualizing vector and matrix data Working with data files Working with data types Automating commands with scripts Writing programs with logic and flow control Writing functions Part 1 A Brief Introduction to MATLAB Objectives: Offer an overview of what MATLAB is, what it consists of, and what it can do for you An Example: C vs. MATLAB MATLAB Product Overview MATLAB Application Fields What can MATLAB do for you? The Course Outline Working with the MATLAB User Interface Objective: Get an introduction to the main features of the MATLAB integrated design environment and its user interfaces. Get an overview of course themes. The MATLAB Interface Reading data from a file Saving and loading variables Plotting data Customizing plots Calculating statistics and best-fit line Exporting graphics for use in other applications Variables and Expressions Objective: Enter MATLAB commands, with an emphasis on creating and accessing data in variables. Entering commands Creating variables Getting help Accessing and modifying values in variables Creating character variables Analysis and Visualization with Vectors Objective: Perform mathematical and statistical calculations with vectors, and create basic visualizations. See how MATLAB syntax enables calculations on whole data sets with a single command. Calculations with vectors Plotting vectors Basic plot options Annotating plots Analysis and Visualization with Matrices Objective: Use matrices as mathematical objects or as collections of (vector) data. Understand the appropriate use of MATLAB syntax to distinguish between these applications. Size and dimensionality Calculations with matrices Statistics with matrix data Plotting multiple columns Reshaping and linear indexing Multidimensional arrays Part 2 Automating Commands with Scripts Objective: Collect MATLAB commands into scripts for ease of reproduction and experimentation. As the complexity of your tasks increases, entering long sequences of commands in the Command Window becomes impractical. A Modelling Example The Command History Creating script files Running scripts Comments and Code Cells Publishing scripts Working with Data Files Objective: Bring data into MATLAB from formatted files. Because imported data can be of a wide variety of types and formats, emphasis is given to working with cell arrays and date formats. Importing data Mixed data types Cell arrays Conversions amongst numerals, strings, and cells Exporting data Multiple Vector Plots Objective: Make more complex vector plots, such as multiple plots, and use color and string manipulation techniques to produce eye-catching visual representations of data. Graphics structure Multiple figures, axes, and plots Plotting equations Using color Customizing plots Logic and Flow Control Objective: Use logical operations, variables, and indexing techniques to create flexible code that can make decisions and adapt to different situations.
Explore other programming constructs for repeating sections of code, and constructs that allow interaction with the user. Logical operations and variables Logical indexing Programming constructs Flow control Loops Matrix and Image Visualization Objective: Visualize images and matrix data in two or three dimensions. Explore the difference in displaying images and visualizing matrix data using images. Scattered interpolation using vector and matrix data 3-D matrix visualization 2-D matrix visualization Indexed images and colormaps True color images Part 3 Data Analysis Objective: Perform typical data analysis tasks in MATLAB, including developing and fitting theoretical models to real-life data. This leads naturally to one of the most powerful features of MATLAB: solving linear systems of equations with a single command. Dealing with missing data Correlation Smoothing Spectral analysis and FFTs Solving linear systems of equations Writing Functions Objective: Increase automation by encapsulating modular tasks as user-defined functions. Understand how MATLAB resolves references to files and variables. Why functions? Creating functions Adding comments Calling subfunctions Workspaces Subfunctions Path and precedence Data Types Objective: Explore data types, focusing on the syntax for creating variables and accessing array elements, and discuss methods for converting among data types. Data types differ in the kind of data they may contain and the way the data is organized. MATLAB data types Integers Structures Converting types File I/O Objective: Explore the low-level data import and export functions in MATLAB that allow precise control over text and binary file I/O. These functions include textscan, which provides precise control of reading text files. Opening and closing files Reading and writing text files Reading and writing binary files Conclusion Objectives: Summarise what we have learnt A summary of the course Other upcoming courses on MATLAB Note that the actual delivery might be subject to minor discrepancies from the outline above without prior notification.
1841 Introduction to R 21 hours R is a free, open-source programming language for statistical computing, data analysis, and graphics. R is used by a growing number of managers and data analysts inside corporations and academia. R has also found followers among statisticians, engineers and scientists without computer programming skills who find it easy to use. Its popularity is due to the increasing use of data mining for various goals such as setting ad prices, finding new drugs more quickly, or fine-tuning financial models. R has a wide variety of packages for data mining. This course covers the manipulation of objects in R including reading data, accessing R packages, writing R functions, and making informative graphs. It includes analyzing data using common statistical models. The course teaches how to use the R software (http://www.r-project.org) both on a command line and in a graphical user interface (GUI). Introduction and preliminaries Making R more friendly, R and available GUIs The R environment Related software and documentation R and statistics Using R interactively An introductory session Getting help with functions and features R commands, case sensitivity, etc. Recall and correction of previous commands Executing commands from or diverting output to a file Data permanency and removing objects Simple manipulations; numbers and vectors Vectors and assignment Vector arithmetic Generating regular sequences Logical vectors Missing values Character vectors Index vectors; selecting and modifying subsets of a data set Other types of objects Objects, their modes and attributes Intrinsic attributes: mode and length Changing the length of an object Getting and setting attributes The class of an object Ordered and unordered factors A specific example The function tapply() and ragged arrays Ordered factors Arrays and matrices Arrays Array indexing. Subsections of an array Index matrices The array() function Mixed vector and array arithmetic. The recycling rule The outer product of two arrays Generalized transpose of an array Matrix facilities Matrix multiplication Linear equations and inversion Eigenvalues and eigenvectors Singular value decomposition and determinants Least squares fitting and the QR decomposition Forming partitioned matrices, cbind() and rbind() The concatenation function, c(), with arrays Frequency tables from factors Lists and data frames Lists Constructing and modifying lists Concatenating lists Data frames Making data frames attach() and detach() Working with data frames Attaching arbitrary lists Managing the search path Reading data from files The read.table() function The scan() function Accessing builtin datasets Loading data from other R packages Editing data Probability distributions R as a set of statistical tables Examining the distribution of a set of data One- and two-sample tests Grouping, loops and conditional execution Grouped expressions Control statements Conditional execution: if statements Repetitive execution: for loops, repeat and while Writing your own functions Simple examples Defining new binary operators Named arguments and defaults The '...' argument Assignments within functions More advanced examples Efficiency factors in block designs Dropping all names in a printed array Recursive numerical integration Scope Customizing the environment Classes, generic functions and object orientation Statistical models in R Defining statistical models; formulae Contrasts Linear models Generic functions for extracting model information Analysis of variance and model comparison ANOVA tables Updating fitted models Generalized linear models Families The glm() function Nonlinear least squares and maximum likelihood models Least squares Maximum likelihood Some non-standard models Graphical procedures High-level plotting commands The plot() function Displaying multivariate data Display graphics Arguments to high-level plotting functions Low-level plotting commands Mathematical annotation Hershey vector fonts Interacting with graphics Using graphics parameters Permanent changes: The par() function Temporary changes: Arguments to graphics functions Graphics parameters list Graphical elements Axes and tick marks Figure margins Multiple figure environment Device drivers PostScript diagrams for typeset documents Multiple graphics devices Dynamic graphics Packages Standard packages Contributed packages and CRAN Namespaces
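In the spirit of the introductory session listed above, a short base-R taster covering vectors, a linear model and a plot:

# Vectors and assignment, simple statistics
x <- c(4.2, 3.9, 5.1, 4.8)
mean(x)
sd(x)
# A data frame, a linear model via formula notation, and a plot
set.seed(1)
df <- data.frame(x = 1:10, y = (1:10) + rnorm(10))
fit <- lm(y ~ x, data = df)
summary(fit)              # generic function extracting model information
plot(df$x, df$y)          # high-level plotting command
abline(fit)               # low-level annotation: add the fitted line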
2202 R for Data Analysis and Research 7 hours Audience managers developers scientists students Format of the course: on-line instruction and discussion OR face-to-face workshops The list below gives an idea of the topics that will be covered in the workshop. The number of topics that will be covered depends on the duration of the workshop (i.e. one, two or three days). In a one- or two-day workshop it may not be possible to cover all topics, and so the workshop will be tailored to suit the specific needs of the learners. A first R session Syntax for analysing one-dimensional data arrays Syntax for analysing two-dimensional data arrays Reading and writing data files Sub-setting data, sorting, ranking and ordering data Merging arrays Set membership The main statistical functions in R The Normal Distribution (correlation, probabilities, tests for normality and confidence intervals) Ordinary Least Squares Regression T-tests, Analysis of Variance and Multivariable Analysis of Variance Chi-square tests for categorical variables Writing functions in R Writing software (scripts) in R Control structures (e.g. Loops) Graphical methods (including scatterplots, bar charts, pie charts, histograms, box plots and dot charts) Graphical User Interfaces for R
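Sub-setting, sorting and ordering, listed above, look like this in base R; the data frame here is made up for illustration:

# Subset rows by a condition, then order by a column
d <- data.frame(name = c("a", "b", "c"), score = c(7, 2, 5))
d[d$score > 3, ]                        # logical sub-setting
d[order(d$score, decreasing = TRUE), ]  # rows ordered by score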
2642 Forecasting with R 14 hours This course allows delegates to fully automate the process of forecasting with R Forecasting with R Introduction to Forecasting Exponential Smoothing ARIMA models The forecast package Package 'forecast' accuracy Acf arfima Arima arima.errors auto.arima bats BoxCox BoxCox.lambda croston CV dm.test dshw ets fitted.Arima forecast forecast.Arima forecast.bats forecast.ets forecast.HoltWinters forecast.lm forecast.stl forecast.StructTS gas gold logLik.ets ma meanf monthdays msts na.interp naive ndiffs nnetar plot.bats plot.ets plot.forecast rwf seasadj seasonaldummy seasonplot ses simulate.ets sindexf splinef subset.ts taylor tbats thetaf tsdisplay tslm wineind woolyrnq
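Several of the forecast-package functions listed above combine naturally. A minimal exponential-smoothing sketch, assuming the forecast package is installed:

# Automated exponential smoothing (ets) with accuracy measures
library(forecast)
fit <- ets(AirPassengers)    # state-space exponential smoothing model
fc <- forecast(fit, h = 24)  # two years ahead
accuracy(fit)                # in-sample accuracy measures
plot(fc)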
