Established in 1984, UNICOM is an events and training company specialising in the areas of business, IT and Quantitative Finance. The company's products include conferences, public and in-house training courses (including certified training) and networking events.

Last updated: 07/12/2016

Big Data Training

Event Date          Country          City     Days   Price
Mon, 25 Sep 2017    United Kingdom   London   3      £1895.00 + VAT
Mon, 11 Dec 2017    United Kingdom   London   3      £1895.00 + VAT

Course Background

This 3-day Hadoop & Storm Big Data training course combines the Big Data with Apache Hadoop training course and the Real-Time Big Data Processing with Spark & Storm training course.

During this workshop you will gain valuable hands-on experience using platforms including Apache Hadoop, Spark, and Elastic Search to process, analyse and manipulate large data sets, using real-world scenarios.

This Big Data with Hadoop training course is designed to show Software Developers, DBAs, Business Intelligence Analysts, Software Architects and other interested stakeholders how to use key Open Source technologies to derive significant value from extremely large data sets.

The course is delivered by an industry expert with extensive experience of implementing cutting-edge Big Data platforms and processes in large-scale retail, marketing and scientific projects.

Key Learning Points from the course:

  • Big Data Patterns and Anti-Patterns
  • Hadoop, HDFS, MapReduce with examples
  • NoSQL Databases with demonstrations in Cassandra, HBase and others
  • Building Data Warehouses with Hive
  • Integration with SQL Databases
  • Parallel Programming with Pig
  • Machine Learning & Pattern Matching with Apache Mahout
  • Utilise Amazon Web Services
  • Spark & Storm architecture
  • How to use Spark with Java
  • How to integrate Spark with NoSQL and other Big Data technologies
  • How to scale calculations to a cluster of servers
  • How to deploy Spark projects to the Cloud

Who Should Attend?

Data Warehouse Managers & Business Intelligence Specialists, Software Developers, Database Developers, Software Architects, IT Managers & IT Directors

Course Syllabus


Hadoop Architecture

  • History of Hadoop – Facebook, Dynamo, Yahoo, Google
  • Hadoop Core
  • YARN architecture, Hadoop 2.0

Hadoop Distributed File System (HDFS)

  • HDFS Clusters – NameNodes, DataNodes & Clients
  • Metadata
  • Web-based Administration

MapReduce

  • Processing & Generating large data sets
  • Map functions
  • Programming MapReduce using SQL / Bash / Python
  • Parallel Processing
  • Failover
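
As a minimal illustration of the MapReduce topics above, a word-count mapper and reducer written in Python for the Hadoop Streaming style of job; the script name and invocation are assumptions, not course material:

  #!/usr/bin/env python3
  # wordcount.py -- run as the mapper with "map" and as the reducer with "reduce",
  # e.g. via Hadoop Streaming: -mapper "wordcount.py map" -reducer "wordcount.py reduce"
  import sys

  def mapper():
      # Emit "<word>\t1" for every word read from standard input.
      for line in sys.stdin:
          for word in line.strip().split():
              print(f"{word}\t1")

  def reducer():
      # Streaming sorts mapper output by key, so identical words arrive together.
      current, count = None, 0
      for line in sys.stdin:
          word, value = line.rstrip("\n").split("\t")
          if word != current:
              if current is not None:
                  print(f"{current}\t{count}")
              current, count = word, 0
          count += int(value)
      if current is not None:
          print(f"{current}\t{count}")

  if __name__ == "__main__":
      mapper() if sys.argv[1] == "map" else reducer()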

Data warehousing with Hive

  • Data Summarisation
  • Ad-hoc queries
  • Analysing large datasets
  • HiveQL (SQL-like Query Language)
  • Integration with SQL databases
  • n-grams analysis
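
As a minimal illustration of the Hive topics above, a sketch that runs an ad-hoc HiveQL query from Python, assuming a reachable HiveServer2 instance and the third-party PyHive client; the table and column names are hypothetical:

  from pyhive import hive  # third-party PyHive client for HiveServer2

  # Ad-hoc aggregation over a hypothetical web_logs table.
  conn = hive.Connection(host="localhost", port=10000, username="hadoop")
  cursor = conn.cursor()
  cursor.execute("""
      SELECT country, COUNT(*) AS visits
      FROM web_logs
      GROUP BY country
      ORDER BY visits DESC
      LIMIT 10
  """)
  for country, visits in cursor.fetchall():
      print(country, visits)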

Parallel Processing with Pig

  • Parallel evaluation
  • Query language interface
  • Relational Algebra

Data Mining with Mahout

  • Clustering
  • Classification
  • Batch-based collaborative filtering

Searching with Elastic Search

  • Elastic search concepts
  • Installation, import of the data
  • Demonstration of API, sample queries
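
As a minimal illustration of the Elastic Search topics above, a sketch using the official Python client; the endpoint, index and field names are assumptions, and the exact keyword arguments vary slightly between client versions:

  from elasticsearch import Elasticsearch  # official Python client

  es = Elasticsearch("http://localhost:9200")

  # Index one document, make it searchable, then run a full-text match query.
  es.index(index="articles", id=1, document={"title": "Big Data with Hadoop", "views": 10})
  es.indices.refresh(index="articles")

  result = es.search(index="articles", query={"match": {"title": "hadoop"}})
  for hit in result["hits"]["hits"]:
      print(hit["_id"], hit["_source"]["title"])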

Structured Data Storage with HBase

  • Big Data: How big is big?
  • Optimised Real-time read/write access

Cassandra multi-master database

  • The Cassandra Data Model
  • Eventual Consistency
  • When to use Cassandra
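
As a minimal illustration of the Cassandra topics above, a sketch using the DataStax Python driver against a single local node; the keyspace and table names are hypothetical:

  from cassandra.cluster import Cluster  # DataStax cassandra-driver

  cluster = Cluster(["127.0.0.1"])
  session = cluster.connect()

  # A throwaway keyspace and table; replication_factor 1 only makes sense for a demo.
  session.execute("""
      CREATE KEYSPACE IF NOT EXISTS demo
      WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
  """)
  session.execute("CREATE TABLE IF NOT EXISTS demo.users (user_id int PRIMARY KEY, name text)")
  session.execute("INSERT INTO demo.users (user_id, name) VALUES (%s, %s)", (1, "alice"))

  for row in session.execute("SELECT user_id, name FROM demo.users"):
      print(row.user_id, row.name)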

Redis

  • Redis Data Model
  • When to use Redis
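
As a minimal illustration of the Redis data model listed above, a sketch using the third-party redis-py client against a local server; the key names are placeholders:

  import redis  # third-party redis-py client

  r = redis.Redis(host="localhost", port=6379, decode_responses=True)

  r.set("session:42", "alice")                        # plain key/value
  r.incr("page_views")                                # atomic counter
  r.zadd("leaderboard", {"alice": 120, "bob": 95})    # sorted set

  print(r.get("session:42"))
  print(r.zrevrange("leaderboard", 0, 2, withscores=True))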

MongoDB

  • MongoDB data model
  • Installation of MongoDB
  • When to use MongoDB
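
As a minimal illustration of the MongoDB data model listed above, a sketch using the PyMongo client against a local instance; the database, collection and field names are hypothetical:

  from pymongo import MongoClient  # third-party PyMongo client

  client = MongoClient("mongodb://localhost:27017/")
  orders = client["training"]["orders"]

  # Documents are schemaless, JSON-like dictionaries.
  orders.insert_one({"customer": "alice", "total": 42.5, "items": ["book", "pen"]})
  for doc in orders.find({"total": {"$gt": 10}}):
      print(doc["customer"], doc["total"])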

Kafka

  • Kafka architecture
  • Installation
  • Example usage
  • When to use Kafka
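
As a minimal illustration of the Kafka topics above, a produce/consume round trip using the third-party kafka-python client; the broker address and topic name are placeholders:

  from kafka import KafkaProducer, KafkaConsumer  # third-party kafka-python client

  producer = KafkaProducer(bootstrap_servers="localhost:9092")
  for i in range(3):
      producer.send("events", f"message {i}".encode("utf-8"))
  producer.flush()

  consumer = KafkaConsumer(
      "events",
      bootstrap_servers="localhost:9092",
      auto_offset_reset="earliest",
      consumer_timeout_ms=5000,   # stop iterating once no new messages arrive
  )
  for record in consumer:
      print(record.topic, record.offset, record.value.decode("utf-8"))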

Lambda Architecture

  • Concept
  • Hadoop + Stream processing integration
  • Architecture examples

Big Data in the Cloud

  • Amazon Web Services
  • Concepts: pay-per-use model
  • Amazon S3, EC2, EMR
  • Google Cloud Platform
  • Google BigQuery
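
As a minimal illustration of the pay-per-use storage model above, a sketch using the boto3 client for Amazon S3; the bucket name and file paths are placeholders, and valid AWS credentials are assumed:

  import boto3  # AWS SDK for Python

  s3 = boto3.client("s3")
  s3.upload_file("local_data.csv", "my-example-bucket", "raw/local_data.csv")

  response = s3.list_objects_v2(Bucket="my-example-bucket", Prefix="raw/")
  for obj in response.get("Contents", []):
      print(obj["Key"], obj["Size"])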

Real-time Analytics with Spark

  • Spark architecture
  • Installation and running Spark in the Cloud
  • Programming with Spark
  • Streaming data with Spark
  • Integrating Storm with NoSQL and other Big Data technologies
  • Spark demonstration with Avro, Pig and Hive
  • Spark and Kafka integration
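
As a minimal illustration of programming with Spark as listed above, a PySpark word count; the input path is a placeholder, and a local session works just as well as a cluster:

  from pyspark.sql import SparkSession

  spark = SparkSession.builder.appName("wordcount-demo").getOrCreate()

  # Classic RDD word count; replace the path with any text file on HDFS or locally.
  lines = spark.sparkContext.textFile("hdfs:///data/articles.txt")
  counts = (
      lines.flatMap(lambda line: line.split())
           .map(lambda word: (word, 1))
           .reduceByKey(lambda a, b: a + b)
  )
  for word, count in counts.take(10):
      print(word, count)

  spark.stop()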

Streaming algorithms

  • Dynamic sampling
  • Distinct count, cardinality estimation
  • HyperLogLog
  • Moving average
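
As a minimal illustration of two of the streaming techniques above, dynamic (reservoir) sampling and a windowed moving average, a self-contained Python sketch:

  import random
  from collections import deque

  def reservoir_sample(stream, k):
      # Keep a uniform random sample of k items from a stream of unknown length.
      sample = []
      for i, item in enumerate(stream):
          if i < k:
              sample.append(item)
          else:
              j = random.randint(0, i)   # replace an existing entry with decreasing probability
              if j < k:
                  sample[j] = item
      return sample

  def moving_average(stream, window):
      # Yield the average of the most recent 'window' values seen so far.
      recent, total = deque(maxlen=window), 0.0
      for value in stream:
          if len(recent) == window:
              total -= recent[0]         # the oldest value is about to be evicted
          recent.append(value)
          total += value
          yield total / len(recent)

  values = [3, 5, 7, 4, 9, 2, 8]
  print(reservoir_sample(values, 3))
  print(list(moving_average(values, 3)))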

Integration with third-party applications and languages

  • Python
  • R – examples for beta distribution
  • Hadoop
  • Lambda architecture

Prerequisites for Taking the Course

No prior experience of working with Big Data tools or platforms is required. Delegates should ideally have an understanding of Enterprise application development, Business Systems Integration and/or database development. This course incorporates hands-on exercises using cloud-based environments. If you wish to participate in the hands-on exercises, please sign up for an Amazon AWS account (http://aws.amazon.com/) before the course and bring your login details.

Bring your own device

For this course it is necessary to bring your own device. Laptops can be provided if requested beforehand.

Course fees: £1895 + VAT 

Further Details:

If you would like to register for this course or have any further questions, please contact: UNICOM Seminars Ltd, OptiRisk R&D House, One Oxford Road, Uxbridge, UB9 4DA.

Email: info@unicom.co.uk   OR   Tel: +44 (0) 1895 256 484
