HADOOP AND MAP REDUCE – A SMALL INTRODUCTION WITH PYTHON : PART II

Yayy!!

In this article I would like to introduce the usage of Hadoop on your local machine. In a future post on a similar topic I will also give a hint about how to use it on a cluster, provided you have access to one. First, you need to log in to a Unix or Linux machine. If you have one, great. Otherwise, you can use an Amazon Linux server for free if you choose its free-tier machine; I will write another tutorial on accessing and using an AWS Linux server in a future post. The following are the steps to follow to SET UP your system.

  • Install the latest Java Development Kit (Java 8) by first downloading its tar file
  • tar zxvf jdk-8.xx-linux-x64.tar.gz (or whatever YOUR tar file is named)
  • wget http://www.trieuvan.com/apache/hadoop/common/hadoop-2.6.0/hadoop-2.6.0.tar.gz
  • tar zxvf hadoop-2.6.0.tar.gz
  • cd ~
  • vi .bashrc
  • Paste the following into the .bashrc file
    export JAVA_HOME=/usr/lib/jvm/java-1.8.0
    export HADOOP_CLASSPATH=$JAVA_HOME/lib/tools.jar
    export HADOOP_INSTALL=$HOME/hadoop-2.6.0   # the directory you extracted Hadoop into
    export PATH=$PATH:$HADOOP_INSTALL/bin
    export HADOOP_USER_NAME=any_name_you_like
  • Go to any terminal window and type hadoop
  • If you don’t get any error, you are good with the installation!!! Congrats!

Local Usage

When you are using Hadoop with something other than Java, a good way is to use Streaming mode. It generally takes input in the form of standard input and produces standard output, which can be provided indirectly; what that exactly means will be explained soon. Just keep in mind that the Hadoop Streaming process uses two distinct programs holding your Map and Reduce functions. Unlike mincemeat’s Map function, here a dedicated program actually performs the Map task (you can check mincemeat’s MapReduce implementation here). Similarly, a dedicated file performs the Reduce task.

Please note that in the real world, with multiple machines in clusters performing your task, you can also use one Map and more than one Reduce implementing file.

So, now you’re ready with your Hadoop. What next? Yup, you gotta write your Map implementer and your Reduce implementer as well. In this case we will assume that we need only one Reduce implementer, and the problem to solve will be: given a list with one number per line, print the sum of the numbers and also print their count. Let’s break the solution into easy verbal steps as follows:

  1. Write a Map function (program) that prints “1 <number>” for every number it encounters on every line – not just distinct numbers but all occurrences.
  2. Write a Reduce function (program) that reads every line of the Map function’s (program’s) output in the form “Key Value”, where Key will be 1 and Value will be the number.
  3. The Reduce function (program), as the next step, aggregates the pairs by adding up the Keys (the 1s) and, separately, the Values.
  4. Once all the pairs are aggregated, all you need to do is display the SUM of the Keys and the SUM of the Values.

While the former gives you the count of all the numbers, the latter gives you their total. (You could also group the occurrences per distinct number first and derive the sum and count from those pairs, but the direct approach above is simpler.) This is so simple, right? So, let’s get our hands dirty with the code.
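Before the real programs, here is a tiny pure-Python sketch of the four steps above, with no Hadoop involved (the list of numbers is made up for illustration):

# Pure-Python sketch of steps 1-4 above (no Hadoop involved).
nums = [4, 8, 15, 16]

# Step 1 (Map): emit a ("1", number) pair for every occurrence.
pairs = [("1", n) for n in nums]

# Steps 2-4 (Reduce): read the pairs and aggregate.
count = sum(int(k) for k, v in pairs)   # SUM of the Keys   -> how many numbers
total = sum(v for k, v in pairs)        # SUM of the Values -> total of the numbers
print "count = %d, sum = %d" % (count, total)   # prints: count = 4, sum = 43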


Map Program:

#!/usr/bin/env python
import sys

# Map: emit the pair "1<TAB>number" for every number read from standard input.
for number in sys.stdin:
    print "1\t%s" % number.strip()

Reduce Program:

#!/usr/bin/env python
import sys

count = 0
sumNum = 0

# Reduce: each input line is a "1<TAB>number" pair from the Map program.
for line in sys.stdin:
    (key, val) = line.strip().split("\t", 1)
    if int(key) == 1:
        count += int(key)   # every Key is 1, so this counts the numbers
        sumNum += int(val)  # accumulate the numbers themselves

print "count\t\t\t%s\nsum\t\t\t%s" % (count, sumNum)
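A quick sanity check before involving Hadoop: assuming the two scripts are saved as map.py and reduce.py in the current directory and made executable (chmod 755 map.py reduce.py), piping a few numbers through them, e.g. printf "3\n5\n2\n" | ./map.py | ./reduce.py, should print a count of 3 and a sum of 10.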

The programming part is over. What next? Running the programs!!! How? Not directly with console input and console output but THROUGH HADOOP. This can be done by writing a small command in bash, or by writing a script for it instead. Let’s see what it is!

  • hadoop jar $HADOOP_INSTALL/share/hadoop/tools/lib/hadoop-streaming-2.6.0.jar \
    -input YOUR_INPUT_FILE/DIRECTORY -output YOUR_OUTPUT_DIRECTORY -mapper map.py -reducer reduce.py
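One note: if Hadoop complains that it cannot find your two scripts, Hadoop Streaming also accepts -file map.py -file reduce.py, which ships the scripts along with the job; adding those two options is a common fix.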

Copy this command (Ctrl+C) and paste it into a file with the extension .sh, then change the file’s permissions, with the following commands:

  • touch myFirstHadoopScript.sh

Paste your code into the file by:

  • vi myFirstHadoopScript.sh – press i, paste (Ctrl+V), then Esc, :wq, Return
  • chmod 755 myFirstHadoopScript.sh
  • ./myFirstHadoopScript.sh

Your program should run properly, with output like:
count – your count
sum – your sum

Wow!!!! You are a Hadoop Rookie now 😀


Tips:

To keep testing your program during the development phase, you can check its correctness with:

  • cat YOUR_TEXT_FILE | YOUR MAP PROGRAM | YOUR REDUCE PROGRAM
  • You ALSO need to remove the OUTPUT_DIRECTORY before every execution, or use a new one each time; Hadoop refuses to run if the output directory already exists, and you will get bad errors!
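  • For jobs whose Map output contains more than one distinct key, put a sort between the two stages to mimic Hadoop’s shuffle phase: cat YOUR_TEXT_FILE | YOUR MAP PROGRAM | sort | YOUR REDUCE PROGRAM. (Not needed here, since every key is 1.)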


Hadoop and Map Reduce – A Small Introduction with Python : Part I

So, you want to know what Hadoop is and how Python can be used to perform a simple task using Hadoop Streaming?

HADOOP is a software framework that is mainly used to leverage distributed systems in an intelligent manner and to perform operations efficiently on big datasets, without letting the user worry about nodal failures (failure of one among the ‘n’ machines performing your task). Hadoop has various components:

  • Hadoop Common – contains libraries and utilities needed by other Hadoop modules
  • Hadoop Distributed File System (HDFS) – a distributed file-system that stores data on commodity machines, providing very high aggregate bandwidth across the cluster
  • Hadoop YARN – a resource-management platform responsible for managing compute resources in clusters and using them for scheduling of users’ applications
  • Hadoop MapReduce – a programming model for large scale data processing.

I will walk you through the Hadoop MapReduce component. For further information on MapReduce you can Google it; for now I will present a brief introduction.



What is MapReduce

To know it truly, you have to understand your problem first. So, what kind of problems can MapReduce solve?

  • Counting occurrences of digits in a list of numbers
  • Counting prime numbers in a list
  • Counting the number of sentences in a text
  • Computing the average of 10 million numbers in a database
  • Listing the names of all people belonging to a particular sales region

Do you think these are trivial problems? Yes, they appear to be, but what if you have millions of records and the time taken to process the results is very important to you? Thinking again? You’ll get your answer.
Not just time: a task has multiple dimensions, and MapReduce, if implemented efficiently, can help you overcome the risks associated with processing that much data.

Okay, enough of what and why! Now ask me how!!!


A MapReduce ‘system’ consists of two major kinds of functions, namely the Map function and the Reduce function (not necessarily with those exact names, but with the pre-decided intention of acting as the Map and Reduce functions). So, how do we solve the simple problem of quickly counting a million numbers from a list and displaying their sum? This is, let me tell you, a very long operation though a simple one. (For a complex MapReduce program using not Hadoop but the mincemeat module, please go through this.)

In this particular example the Map function(s) will go through the list of numbers and create list(s) of key-value pairs of the format {m,1} for every number m that occurs during the scan. The Reduce function takes the list from the Map function(s) and forms a list of key-value pairs as {m,list(1+)}, where 1+ means one or more occurrences of 1.

That complicated expression is nothing but the number m encountered in the scan(s) by the Map function(s), and the 1’s in the value of the Reduce task appear as many times as the number was encountered in the Map function(s). So it basically means {m, number of times m was encountered in the Map phase}.

The next step is to aggregate the 1’s in the value for every m. This means {m,sum(1's)}. The task is almost done now: all we have to do is display each number and the corresponding sum of its 1’s as the count of that number.

But wait, you still don’t understand why this is a big deal, right? Anybody can do this. But hey! The Map functions aren’t just there to take all your load and process all of it alone. Nope! There are in fact many instances of your Map function working in parallel on different machines in your cluster (if one exists; else just multithread your program to create multiple instances of Map, but why should you when you have distributed systems?). Every Map function running simultaneously works on a different chunk of your big list, hence sharing the task and reducing processing time. See! What if you have a big cluster and many machines running multiple instances of your Map functions to process your list? It’s simple: your work gets done in no time!!!

Similarly, the Reduce functions can also run on multiple machines, but generally after sorting (your mincemeat or hadoop programs will first sort the, say, m‘s and distribute distinct m‘s to different Reduce functions on different machines). So even the aggregation task gets quicker, and you are ready with your output to impress your boss!

A brief outline of what happened to the list of numbers is as follows (a small pure-Python sketch follows the list):

  1. Map functions counted every occurrence of every number m
  2. Map functions stored every number m in the form {m,1} - as many pairs for any number m
  3. Reduce functions collected all such {m,1} pairs
  4. Reduce functions converted all such pairs into {m,sum(1's)} - only one pair per number m
  5. Reduce functions finally displayed the pairs or passed them to the main function to display or process
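Here is that outline as a minimal pure-Python sketch, with plain lists and dictionaries standing in for the Map and Reduce machinery (the input list is made up for illustration):

# Minimal sketch of the {m,1} -> {m,sum(1's)} flow outlined above.
numbers = [5, 3, 5, 2, 3, 5]

# Map phase: one (m, 1) pair per occurrence of each number m.
pairs = [(m, 1) for m in numbers]

# Shuffle/sort: group the 1's by key, giving {m: [1, 1, ...]}.
grouped = {}
for m, one in pairs:
    grouped.setdefault(m, []).append(one)

# Reduce phase: collapse each list of 1's into {m: sum(1's)}.
counts = dict((m, sum(ones)) for m, ones in grouped.items())
print counts   # e.g. {2: 1, 3: 2, 5: 3}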

In part two of the tutorial I will explain how to install Hadoop and write the same program in Python using the Hadoop framework.


For a similar program in mincemeat, please go through:

import mincemeat
import sys

# The data source can be any dictionary-like object
file = open(sys.argv[1], "r")
data = list(file)
file.close()
datasource = dict(enumerate(data))

def mapfn(k, v):
    # Emit one pair per numeric token: its value, its square, and a count of 1
    for num in v.split():
        if num.isdigit():
            yield 'sum', int(num)
            yield 'sumsquares', int(num)**2
            yield 'count', 1

def reducefn(k, vs):
    return sum(vs)

s = mincemeat.Server()
s.datasource = datasource
s.mapfn = mapfn
s.reducefn = reducefn

results = s.run_server(password="changeme")
sumn = results["sum"]
sumn2 = results["sumsquares"]
n = results["count"]
variance = (n*sumn2 - sumn**2)/float(n**2)
stdev = variance**0.5
print "Count is : %d" % n
print "Sum is : %s" % sumn
print "Stdev : %0.2f" % stdev
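To actually run this, save it as, say, example.py, start the server with an input file (python example.py numbers.txt, where numbers.txt is whatever text file you want to process), and then attach one or more workers from other terminals or machines with python mincemeat.py -p changeme localhost, replacing localhost with the server’s address if needed. The results are printed once the workers have finished the job.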