Wednesday, 5 March 2014

Big data - Hadoop training Course Outline

This course has been designed to cover all types of audiences, spanning from architects and administrators to developers.

In case of any questions regarding duration/fees/schedule, do call me @ 9840014739.


Module 1
Big Data: Getting Started
What is Big Data?
What is Big Data Analytics?
What is Apache Hadoop?
History of Hadoop
Understanding distributed file systems and Hadoop
Hadoop ecosystem components
Hadoop use cases
Ubuntu Installation
JDK Installation
Module 2
Hadoop Distributed File System (HDFS)

Eclipse Installation
Overview of HDFS
Communication Protocols
Rack Awareness
Hadoop cluster Topology
Setting up SSH for Hadoop Cluster
Running Hadoop in pseudo-distributed mode
Linux basic commands
HDFS file commands
Reading and writing to HDFS programmatically
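
A minimal sketch of reading and writing HDFS files with the Java FileSystem API, assuming a pseudo-distributed cluster with fs.defaultFS at hdfs://localhost:9000 (adjust for your setup); the path /user/demo/hello.txt is only an example:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsReadWrite {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Assumed NameNode address for a pseudo-distributed setup
            conf.set("fs.defaultFS", "hdfs://localhost:9000");
            FileSystem fs = FileSystem.get(conf);

            Path file = new Path("/user/demo/hello.txt"); // example path
            // Write a short record to HDFS
            try (FSDataOutputStream out = fs.create(file, true)) {
                out.writeUTF("Hello HDFS");
            }
            // Read it back
            try (FSDataInputStream in = fs.open(file)) {
                System.out.println(in.readUTF());
            }
            fs.close();
        }
    }
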
Module 3
MapReduce Framework

Java Basics
Anatomy of a MapReduce Program (see the WordCount sketch after this module)
Writables
InputFormat
OutputFormat
Streaming API
Inherent failure handling
Reading and writing data with MapReduce
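
To make the anatomy concrete, here is a minimal WordCount sketch against the org.apache.hadoop.mapreduce API: the Mapper emits (word, 1) pairs as Writables, the Reducer sums them, and the InputFormat/OutputFormat classes are set explicitly. Class and path names are illustrative, and Job.getInstance assumes Hadoop 2.x (on 1.x, use new Job(conf, ...) instead):

    import java.io.IOException;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
    import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

    public class WordCount {
        // Mapper: (byte offset, line) -> (word, 1)
        public static class TokenMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();
            @Override
            protected void map(LongWritable key, Text value, Context ctx)
                    throws IOException, InterruptedException {
                for (String token : value.toString().split("\\s+")) {
                    if (token.isEmpty()) continue;
                    word.set(token);
                    ctx.write(word, ONE);
                }
            }
        }
        // Reducer: (word, [1, 1, ...]) -> (word, count)
        public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            @Override
            protected void reduce(Text key, Iterable<IntWritable> values, Context ctx)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable v : values) sum += v.get();
                ctx.write(key, new IntWritable(sum));
            }
        }
        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "word count");
            job.setJarByClass(WordCount.class);
            job.setMapperClass(TokenMapper.class);
            job.setCombinerClass(SumReducer.class);
            job.setReducerClass(SumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            job.setInputFormatClass(TextInputFormat.class);   // how input is split into records
            job.setOutputFormatClass(TextOutputFormat.class); // how (key, value) pairs are written
            FileInputFormat.addInputPath(job, new Path(args[0]));   // input dir
            FileOutputFormat.setOutputPath(job, new Path(args[1])); // output dir (must not exist)
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }
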
Module 4
Advanced MapReduce Programming
InputSplit, RecordReader, Mapper, Partition & Shuffle, Reducer, OutputFormat
Writing MapReduce program
Streaming in Hadoop
Counters (see the sketch after this module)
Performance Tuning
Joins
Sorting
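
Counters, listed above, let a job report custom statistics back to the client alongside the built-in ones. A minimal sketch that counts malformed records inside a mapper (the enum and the comma-separated record layout are assumptions for illustration):

    import java.io.IOException;

    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.NullWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class QualityMapper extends Mapper<LongWritable, Text, Text, NullWritable> {
        // Hypothetical counter group for record quality
        enum RecordQuality { GOOD, MALFORMED }

        @Override
        protected void map(LongWritable key, Text value, Context ctx)
                throws IOException, InterruptedException {
            String[] fields = value.toString().split(",");
            if (fields.length < 3) {                          // assumed record layout
                ctx.getCounter(RecordQuality.MALFORMED).increment(1);
                return;                                       // skip the bad record
            }
            ctx.getCounter(RecordQuality.GOOD).increment(1);
            ctx.write(value, NullWritable.get());
        }
    }

After the job finishes, the counter totals show up in the client console output and the job web UI, which makes counters handy for sanity-checking joins and sorts at scale.
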
Module 5
Apache Hadoop Administration
Best Practices for Hadoop setup and infrastructure

Hadoop cluster installation preparation
- Cluster network design
- Installation of Linux operating system
- Configuring SSH
- Walkthrough on rack topology and setup

Managing Hadoop cluster
- HDFS cluster management
- Secondary NameNode configuration
- TaskTracker management
- Configuring the HDFS quota
- Configuring the Fair Scheduler
- Upgrading Hadoop
- Deploying and managing Hadoop clusters with Ambari

Monitoring Hadoop cluster
- Monitoring Hadoop cluster with Ganglia
- Monitoring Hadoop cluster with Ambari
- Monitoring Hadoop cluster with Nagios

Hadoop Cluster Performance Tuning
- Benchmarking and profiling
- Using compression for input and output
- Configuring optimal map and reduce slots for the TaskTracker
- Fine-tuning JobTracker configuration
- Fine-tuning TaskTracker configuration
- Tuning shuffle, merge, and sort parameters

Security Implementation
- Kerberos security implementation
Job Scheduling
- Capacity Scheduler
- Fair Scheduler

dfsadmin & mradmin commands

Administration of HCatalog and Hive

Backup and Recovery
Scenario-based exercises
- DataNode failure & recovery
- NameNode failure & recovery
- JobTracker & TaskTracker failure & recovery
- Removing DataNodes
- Adding DataNodes


Module 6
Pig and Pig Latin
Installation and configuration
Running Pig Latin through the Grunt shell
Writing programs
- Filter, Load & Store functions
Writing user-defined functions (see the UDF sketch after this module)

Working with Scripts
Lab Exercises
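
For the UDF topic above, a Pig user-defined function in Java is a class extending EvalFunc; Pig calls exec() once per input tuple. A minimal sketch (the class name and behavior are illustrative):

    import java.io.IOException;

    import org.apache.pig.EvalFunc;
    import org.apache.pig.data.Tuple;

    // Hypothetical UDF: upper-cases its single string argument
    public class ToUpper extends EvalFunc<String> {
        @Override
        public String exec(Tuple input) throws IOException {
            if (input == null || input.size() == 0 || input.get(0) == null) {
                return null; // a null return is treated as missing data
            }
            return input.get(0).toString().toUpperCase();
        }
    }

Once packaged into a jar, the function is registered from Pig Latin with REGISTER and then used like any built-in, roughly: REGISTER myudfs.jar; B = FOREACH A GENERATE ToUpper(name); (jar and field names assumed).
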
Module 7
HBase and ZooKeeper
NoSQL vs. SQL
CAP theorem
Architecture
Installation
Configuration
Java API (see the client sketch after this module)
MapReduce integration
Performance Tuning
Lab Exercises
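
To give a feel for the Java API topic above, here is a minimal HBase client sketch using the HTable API shipped with the HBase releases current at the time of writing (newer releases replace it with Connection/Table); the table 'users' and column family 'info' are assumed to exist:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;

    public class HBaseHello {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create(); // reads hbase-site.xml
            // Assumes a table 'users' with column family 'info' already exists
            HTable table = new HTable(conf, "users");
            try {
                // Put one cell: row 'u1', column info:name
                Put put = new Put(Bytes.toBytes("u1"));
                put.add(Bytes.toBytes("info"), Bytes.toBytes("name"), Bytes.toBytes("Alice"));
                table.put(put);

                // Get it back
                Get get = new Get(Bytes.toBytes("u1"));
                Result result = table.get(get);
                byte[] name = result.getValue(Bytes.toBytes("info"), Bytes.toBytes("name"));
                System.out.println(Bytes.toString(name));
            } finally {
                table.close();
            }
        }
    }
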
Module 8
Hive
Features of Hive
Architecture
Installation and configuration
HiveQL (see the JDBC sketch after this module)

Lab Exercises
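
Besides the Hive shell, HiveQL can be run from Java over JDBC via HiveServer2. A minimal sketch, assuming HiveServer2 is listening on its default port 10000 on localhost and that a table named employees exists (both are assumptions):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class HiveQuery {
        public static void main(String[] args) throws Exception {
            // HiveServer2 JDBC driver (hive-jdbc must be on the classpath)
            Class.forName("org.apache.hive.jdbc.HiveDriver");
            Connection conn = DriverManager.getConnection(
                    "jdbc:hive2://localhost:10000/default", "", "");
            try (Statement stmt = conn.createStatement()) {
                // Hypothetical table, for illustration only
                ResultSet rs = stmt.executeQuery("SELECT name, salary FROM employees LIMIT 10");
                while (rs.next()) {
                    System.out.println(rs.getString(1) + "\t" + rs.getDouble(2));
                }
            } finally {
                conn.close();
            }
        }
    }
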
Module 9
Other Hadoop eco system components
Overview of Ambari, Oozie, and Mahout
Installing & configuring Sqoop and mysql-server
Installing & configuring Flume

Lab Exercises


http://big-data-training-in-chennai.blogspot.in/

Friday, 28 June 2013

Integrating the map with QlikView

If you need any help integrating the map with QlikView, I can help you out. Do you want to create a compelling dashboard like the one below? I can help with that as well. Please call me @ 9840014739; I live in Chennai.

For training-related queries, please contact 9790974910.
The DCOM interface made my life easier when integrating R with QlikView. You can perform linear regression, multiple regression analysis, logistic regression, etc. I can give my customers a cost-effective solution: whatever statistical analysis I could do with SAS Enterprise Miner, I can do at zero cost in R.

QlikView fan
Sasken

Saturday, 9 March 2013

QlikView Business Discovery Platform components



The QlikView Business Discovery platform consists of three major components: QlikView Server, QlikView Publisher, and QlikView Desktop. Each plays an important part in designing, developing, and implementing almost every QlikView deployment, and each is used primarily by either an IT professional, a business analyst/developer, or a business user.


QLIKVIEW DESKTOP

The QlikView Desktop is a Windows-based desktop tool used by business analysts and developers to create a data model and to lay out the graphical user interface (GUI, or presentation layer) for QlikView apps. It is within this environment that a developer uses a SQL-like scripting environment (augmented by wizards) to create the linkages (connection strings) to the source data and to transform that data (e.g. rename fields, apply expressions) so that it can be analyzed and used within the UI, as well as re-used by other QlikView files. The QlikView Desktop is also the environment where all user interface design and user experience are developed in a drag-and-drop paradigm: everything from graphs and tables containing slices of data, to multi-tab architectures, to the application of color scheme templates and company logos is done here.