Programmatic Deployment to Elastic Mapreduce with Boto and Bootstrap Action

A while back I wrote about How to combine Elastic Mapreduce/Hadoop with other Amazon Web Services. This posting is a small update to that, showing how to deploy extra packages with Boto for Python. Note that Boto can deploy mappers and reducers written in any language supported by Elastic Mapreduce. The example below can also be found on github – http://github.com/atbrox/atbroxexamples (check out with git clone git@github.com:atbrox/atbroxexamples.git).

Imports and connection to elastic mapreduce on AWS

 
#!/usr/bin/env python
import os
import boto
import boto.emr
from boto.emr.step import StreamingStep
from boto.emr.bootstrap_action import BootstrapAction
import time

# set your aws keys and S3 bucket, e.g. from environment or .boto
AWSKEY = os.environ["AWS_ACCESS_KEY_ID"]
SECRETKEY = os.environ["AWS_SECRET_ACCESS_KEY"]
S3_BUCKET = os.environ["S3_BUCKET"]  # bucket name only, without the s3:// prefix
NUM_INSTANCES = 1

conn = boto.connect_emr(AWSKEY, SECRETKEY)

Create the bootstrap step
In this case the bootstrap action is a shell script fetched from S3. Note that such a script can contain sudo commands in order to do apt-get installs, e.g. to install classic programming language packages like gfortran or open-cobol, or more modern languages like ghc6 (Haskell). It can also run any other code, e.g. check out the latest version of a programming language interpreter/compiler (e.g. Google Go with hg clone -r release https://go.googlecode.com/hg/ $GOROOT) and compile it before using it in your mappers or reducers.

bootstrap_step = BootstrapAction("download.tst", "s3://elasticmapreduce/bootstrap-actions/download.sh", None)
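
For illustration, a bootstrap action that installs extra compilers via apt-get could be declared the same way (the script path and package list below are hypothetical – you would upload your own script to your bucket):

# hypothetical: points at your own install script in S3, which could run e.g.
# "sudo apt-get -y install gfortran open-cobol ghc6"
install_compilers_step = BootstrapAction(
    "install-compilers",                            # display name, arbitrary
    "s3://" + S3_BUCKET + "/bootstrap/install.sh",  # assumed location of your script
    None)                                           # optional list of arguments to the script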

Create map and reduce processing step
Using cache_files also makes a Python library available for import (an alternative would be to do sudo easy_install boto in the bootstrap step, which would be easier since the boto module wouldn't have to be unpacked manually in the Python code; see my previous posting for details about unpacking). Note that the mapper and reducer can be written in any language as long as you have either compiled it or installed an interpreter for it in the bootstrap step.

step = StreamingStep(
  name='Wordcount',
  mapper='s3n://elasticmapreduce/samples/wordcount/wordSplitter.py',
  cache_files = ["s3n://" + S3_BUCKET + "/boto.mod#boto.mod"],
  reducer='aggregate',
  input='s3n://elasticmapreduce/samples/wordcount/input',
  output='s3n://' + S3_BUCKET + '/output/wordcount_output')
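
To make the streaming contract concrete, a custom mapper in Python would look roughly like the sketch below (an approximation of what Amazon's sample wordSplitter.py does; the LongValueSum: prefix is what the built-in aggregate reducer expects). A mapper in any other language just has to follow the same stdin-to-tab-separated-stdout convention:

#!/usr/bin/env python
# minimal streaming wordcount mapper sketch (approximation of wordSplitter.py)
import sys

for line in sys.stdin:
    for word in line.split():
        # the aggregate reducer sums values emitted with the LongValueSum: prefix
        print "LongValueSum:" + word + "\t" + "1"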

jobid = conn.run_jobflow(
    name="testbootstrap", 
    log_uri="s3://" + S3_BUCKET + "/logs", 
    steps = [step],
    bootstrap_actions=[bootstrap_step],
    num_instances=NUM_INSTANCES)

Wait for the job to complete
This waits for the Elastic Mapreduce job to finish and prints out its state along the way; one of the states between STARTING and RUNNING is BOOTSTRAPPING, which is when the bootstrap action runs. (Note that the loop below only checks for COMPLETED; a more robust version would also stop on FAILED or TERMINATED.)

state = conn.describe_jobflow(jobid).state
print "job state = ", state
print "job id = ", jobid
while state != u'COMPLETED':
    print time.localtime()
    time.sleep(30)
    state = conn.describe_jobflow(jobid).state
    print "job state = ", state
    print "job id = ", jobid

print "final output can be found in s3://" + S3_BUCKET + "/output" + TIMESTAMP
print "try: $ s3cmd sync s3://" + S3_BUCKET + "/output" + TIMESTAMP + " ."

Validation of what really happened
One way to validate is to run mappers and reducers written in a language for which you installed the compiler or interpreter with the bootstrap action, e.g. the classic mapreduce word count written in classic languages like Cobol or Fortran 95. The other way is to check the S3 logs; the log directory for an Elastic Mapreduce job has the following subdirectories:

daemons  jobs  node  steps  task-attempts

In the node directory, each EC2 instance used in the job has a directory, and underneath each of them there is a bootstrap_actions directory with the master.log and the stderr, stdout and controller logs. For the case presented above, the bootstrap output is shown underneath.
stderr output

--2010-10-01 17:38:38--  http://elasticmapreduce.s3.amazonaws.com/bootstrap-actions/file.tar.gz
Resolving elasticmapreduce.s3.amazonaws.com... 72.21.214.39
Connecting to elasticmapreduce.s3.amazonaws.com|72.21.214.39|:80... connected.
HTTP request sent, awaiting response... 
  HTTP/1.1 200 OK
  x-amz-id-2: NezTUU9MIzPwo72lJWPYIMo2wwlbDGi1IpDbV/mO07Nca4VarSV8l7j/2ArmclCB
  x-amz-request-id: 3E71CC3323EC1189
  Date: Fri, 01 Oct 2010 17:38:39 GMT
  Last-Modified: Thu, 03 Jun 2010 01:57:13 GMT
  ETag: "47a007dae0ff192c166764259246388c"
  Content-Type: application/octet-stream
  Content-Length: 153
  Connection: keep-alive
  Server: AmazonS3
Length: 153 [application/octet-stream]
Saving to: `file.tar.gz'

     0K                                                       100% 24.3M=0s

2010-10-01 17:38:38 (24.3 MB/s) - `file.tar.gz' saved [153/153]

Controller

2010-10-01T17:38:35.141Z INFO Fetching file 's3://elasticmapreduce/bootstrap-actions/download.sh'
2010-10-01T17:38:38.411Z INFO Working dir /mnt/var/lib/bootstrap-actions/1
2010-10-01T17:38:38.411Z INFO Executing /mnt/var/lib/bootstrap-actions/1/download.sh
2010-10-01T17:38:38.936Z INFO Execution ended with ret val 0
2010-10-01T17:38:38.938Z INFO Execution succeeded

Conclusion
This posting has shown how to programmatically install packages (e.g. programming languages) on the EC2 nodes running Elastic Mapreduce. Since Elastic Mapreduce in streaming mode supports any programming language, this can make it easier to deploy and test mappers and reducers written in your favorite language, and even to automate it. (It also opens a few doors for parallelizing legacy code.)

Atbrox on LinkedIn

Best regards,
Amund Tveit, co-founder of Atbrox


Predicting Startup Performance with Syllables

1. How to name your startup?
It seems to me that names with few syllables (typically 2) usually perform much better than others (with several exceptions, though). My personal theory is that it comes down to rhythm and pronunciation energy.

2. Company names examples with:
1 syllable: Bing, Ask, Xing, Ning, Dell, Skype, Slide, Yelp, Ford
2 syllables: Google, Yahoo, Ebay, Paypal, Facebook, Quora, Zynga, Youtube, Baidu, Blogger, WordPress, Twitter, LinkedIn, Craigslist, Flickr, Apple, CNet, Tumblr, TwitPic, Reddit, Netflix, SourceForge, Techcrunch, Hulu, bit.ly, Scribd*, Tesla, Samsung, DropBox,  AdGrok, Brushes, FanVibe, Gantto, GazeHawk, HipMunk, OhLife, TeeVox
3 syllables: Amazon, LiveJournal, GoDaddy, Mozilla, Mashable, Toyota, Microsoft
4 syllables: Hewlett Packard, Mitsubishi, StumbleUpon
5 syllables: Wikipedia

3. Impact on naming for investors?
I believe naming is so important that even (super) angels, venture capitalists and other investors should weigh it heavily when investing in startups (just take a look at your existing portfolio with syllable glasses); this belief is backed by the Alexa top 500 list [1]. See also section 5.

4. How to find a name?
My recommendation is to run a mapreduce job where each mapper creates a huge amount of random words or permutations of characters, and where the reducer has a scoring function that scores up words with few syllables, high vowel density and good pronounceability, and scores down existing names or domain names (seed it with a list of brand names) and other unwanted words – a rough sketch of the idea follows below.
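
As a toy illustration (run locally here rather than as an actual mapreduce job; the scoring weights and heuristics are made up for the example), the mapper/reducer pair could look something like this in Python:

import random, string

VOWELS = set("aeiouy")

def count_syllables(word):
    # crude heuristic: count groups of consecutive vowels
    count, in_vowel_group = 0, False
    for c in word:
        if c in VOWELS:
            if not in_vowel_group:
                count += 1
            in_vowel_group = True
        else:
            in_vowel_group = False
    return max(count, 1)

def mapper(num_candidates):
    # emit random candidate names (in a real job this would be a streaming mapper)
    for _ in range(num_candidates):
        length = random.randint(4, 8)
        yield "".join(random.choice(string.ascii_lowercase) for _ in range(length))

def score(word, existing_names):
    vowel_density = sum(c in VOWELS for c in word) / float(len(word))
    s = vowel_density - abs(count_syllables(word) - 2)  # prefer roughly 2 syllables
    if word in existing_names:
        s -= 10.0  # score down already-taken names
    return s

def reducer(candidates, existing_names, top_n=10):
    # keep the highest-scoring candidates
    return sorted(candidates, key=lambda w: score(w, existing_names), reverse=True)[:top_n]

existing = set(["google", "yahoo", "ebay"])  # seed with known brand names
print reducer(mapper(100000), existing)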

5. Predictions about last batch of Ycombinator companies[2]
a) Predicted 8 best investments from the last yc demo day (i.e. those with 2 syllables):
AdGrok – online marketing for the  masses
Brushes – premiere illustrations with iPad
FanVibe – sports social network
Gantto – project management service
GazeHawk – eye tracking for everyone
Hipmunk – flight search
OhLife – personal journal
Teevox – turns mobile devices into remotes for the Internet

6. Conclusion
It will be interesting to see how the 8 ycombinator startups above – viewed as a fund – perform e.g. relative to the famous new angel funds[3,4]. Perhaps creating a syllable-based index fund could be a thought?

[1] http://www.alexa.com/topsites
[2] http://techcrunch.com/2010/08/24/y-combinator-demo-day-2/
[3] 500 Startups
[4] Felicis Ventures
(* unsure about pronunciation of Scribd)

Atbrox on LinkedIn

Best regards,

Amund Tveit, co-founder of Atbrox


Recommended Mapreduce Workshop

If you are interested in Hadoop or Mapreduce, I would like to recommend participating in or submitting your paper to the First International Workshop on Theory and Practice of Mapreduce (MAPRED'2010) (held in conjunction with the 2nd IEEE International Conference on Cloud Computing Technology and Science).

(I just joined the workshop as a program committee member)

Best regards,

Amund Tveit (co-founder of Atbrox)


Word Count with MapReduce on a GPU – A Python Example

Atbrox is a startup company providing technology and services for Search and Mapreduce/Hadoop. Our background is from Google, IBM and research.

A GPU – a Graphics Processing Unit such as the NVIDIA Tesla – is fascinating hardware, in particular regarding its extreme parallelism (hundreds of cores) and memory bandwidth (tens of gigabytes/second). The main languages for programming GPUs are the C-based OpenCL and Nvidia's Cuda; in addition there are wrappers for those in many languages. For the following example we use Andreas Klöckner's PyCuda for Python.

Word Count with PyCuda and MapReduce

One of the classic mapreduce examples is word frequency count (i.e. individual word frequencies), but let us start with an even simpler example – word count, i.e. how many words are there in a (potentially big) string?

In Python the default approach would perhaps be:

wordcount = len(bigstring.split())

But assuming that you didn’t have split() or that split() was too slow, what would you do?

How to calculate word count?
If you have the string mystring = "this is a string" you could iterate through it and count the number of spaces, e.g. with

sum([1 for c in mystring if c == ' '])

(notice the off-by-one error), and perhaps split it up and parallelize it somehow. However, if there are several spaces in a row in the string this algorithm will give the wrong count, and it doesn't use the GPU horsepower.

The MapReduce approach
Assuming you still have mystring = "this is a string", try to align the string with itself: take one string with all characters in mystring except the last – "this is a strin" == mystring[:-1] (called prefix from here) – and another string with all characters in mystring except the first – "his is a string" == mystring[1:] (called suffix from here) – and align those two like this:

this is a strin # prefix
his is a string # suffix

you can see that counting all occurrences where the character in the upper string (prefix) is whitespace and the corresponding character in the lower string (suffix) is non-whitespace gives the correct count of words (with the same off-by-one error as above, which can be fixed by checking that the first character is non-whitespace). This way of counting also deals with multiple spaces in a row (which the approach above does not). This can be expressed in Python with map() and reduce() as:

mystring = "this is a string"
prefix = mystring[:-1]
suffix = mystring[1:]
mapoutput = map(lambda x,y: (x == ' ')*(y != ' '), prefix, suffix)
reduceoutput = reduce(lambda x,y: x+y, mapoutput)
wordcount = reduceoutput + (mystring[0] != ' ') # fix the off-by-one error
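
For mystring = "this is a string" this gives a word count of 4, as expected.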

Mapreduce with PyCuda

PyCuda supports using Python and the numpy library with Cuda, and it also has a library to support mapreduce-type calls on data structures loaded onto the GPU (typically arrays). Below is my complete code for calculating word count with PyCuda. I used the complete works of Shakespeare as a test dataset (downloaded as plain text) and replicated it a hundred times, in total 493820800 bytes (~1/2 gigabyte), which I uploaded to our Nvidia Tesla C1060 GPU and ran word count on (the results were compared with the unix command line wc and len(dataset.split()) for smaller datasets).

import pycuda.autoinit
import numpy
from pycuda import gpuarray, reduction
import time

def createCudaWordCountKernel():
    initvalue = "0"
    mapper = "(a[i] == 32)*(b[i] != 32)" # 32 is ascii code for whitespace
    reducer = "a+b"
    cudafunctionarguments = "char* a, char* b"
    wordcountkernel = reduction.ReductionKernel(numpy.float32, neutral = initvalue, 
                                            reduce_expr=reducer, map_expr = mapper,
                                            arguments = cudafunctionarguments)
    return wordcountkernel

def createBigDataset(filename):
    print "reading data"
    dataset = file(filename).read()
    print "creating a big dataset"
    words = " ".join(dataset.split()) # in order to get rid of \t and \n
    chars = [ord(x) for x in words]
    bigdataset = []
    for k in range(100):
        bigdataset += chars
    print "dataset size = ", len(bigdataset)
    print "creating numpy array of dataset"
    bignumpyarray = numpy.array( bigdataset, dtype=numpy.uint8)
    return bignumpyarray

def wordCount(wordcountkernel, bignumpyarray):
    print "uploading array to gpu"
    gpudataset = gpuarray.to_gpu(bignumpyarray)
    datasetsize = len(bignumpyarray)
    start = time.time()
    wordcount = wordcountkernel(gpudataset[:-1],gpudataset[1:]).get()
    stop = time.time()
    seconds = (stop-start)
    estimatepersecond = (datasetsize/seconds)/(1024*1024*1024)
    print "word count took ", seconds*1000, " milliseconds"
    print "estimated throughput ", estimatepersecond, " Gigabytes/s"
    return wordcount

if __name__ == "__main__":
    bignumpyarray = createBigDataset("dataset.txt")
    wordcountkernel = createCudaWordCountKernel()
    wordcount = wordCount(wordcountkernel, bignumpyarray)

Results

python wordcount_pycuda.py 
reading data
creating a big dataset, about 1/2 GB of Shakespeare text
dataset size =  493820800
creating numpy array of dataset
uploading array to gpu
word count took  38.4578704834  milliseconds
estimated throughput  11.9587084015  Gigabytes/s (95.67 Gigabit/s)
word count =  89988104.0

Improvement Opportunities?
There are plenty of improvement opportunities, in particular fixing the creation of the numpy array – bignumpyarray = numpy.array( bigdataset, dtype=numpy.uint8) – which took almost all of the total time; a possible fix is sketched below.
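
A minimal sketch of such a fix (assuming the same file layout as above, and not benchmarked here) would be to build the big array directly in numpy instead of via a huge Python list:

import numpy

def createBigDatasetFast(filename, copies=100):
    # normalize whitespace as before, then go straight to a uint8 numpy array
    words = " ".join(open(filename).read().split())
    onecopy = numpy.frombuffer(words, dtype=numpy.uint8)  # Python 2 str -> byte array
    return numpy.tile(onecopy, copies)                    # replicate without Python-level loops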

It is also interesting to notice that this approach doesn't gain from using combiners as in Hadoop/Mapreduce (a combiner is basically a reducer that sits on the tail of the mapper and creates partial results when the reduce function is associative and commutative; for all practical purposes it can be compared to an afterburner on a jet engine).
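
For readers unfamiliar with combiners, here is a toy Python illustration (separate from the GPU code above) of a combiner pre-aggregating word frequencies per mapper before the final reduce:

from collections import Counter

def mapper_with_combiner(lines):
    # the "combiner": pre-aggregate counts locally on each mapper
    return Counter(word for line in lines for word in line.split())

def reducer(partial_counts):
    # merge the partial, pre-aggregated counts
    total = Counter()
    for partial in partial_counts:
        total.update(partial)
    return total

print reducer([mapper_with_combiner(["to be or not to be"]),
               mapper_with_combiner(["that is the question"])])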

Atbrox on LinkedIn

Best regards,

Amund Tveit (Atbrox co-founder)


Statistics about Hadoop and Mapreduce Algorithm Papers

Underneath are statistics about which 20 papers (of about 80 papers) were the most read in our 3 previous postings about mapreduce and hadoop algorithms (the postings have been read approximately 5000 times). The list is ordered by decreasing reading frequency, i.e. the most popular is at spot 1.

  1. MapReduce-Based Pattern Finding Algorithm Applied in Motif Detection for Prescription Compatibility Network
    authors: Yang Liu, Xiaohong Jiang, Huajun Chen , Jun Ma and Xiangyu Zhang – Zhejiang University

  2. Data-intensive text processing with Mapreduce
    authors: Jimmy Lin and Chris Dyer – University of Maryland

  3. Large-Scale Behavioral Targeting
    authors: Ye Chen (eBay), Dmitry Pavlov (Yandex Labs) and John F. Canny (University of California, Berkeley)

  4. Improving Ad Relevance in Sponsored Search
    authors: Dustin Hillard, Stefan Schroedl, Eren Manavoglu, Hema Raghavan and Chris Leggetter (Yahoo Labs)

  5. Experiences on Processing Spatial Data with MapReduce
    authors: Ariel Cary, Zhengguo Sun, Vagelis Hristidis and Naphtali Rishe – Florida International University

  6. Extracting user profiles from large scale data
    authors: Michal Shmueli-Scheuer, Haggai Roitman, David Carmel, Yosi Mass and David Konopnicki – IBM Research, Haifa

  7. Predicting the Click-Through Rate for Rare/New Ads
    authors: Kushal Dave and Vasudeva Varma – IIIT Hyderabad

  8. Parallel K-Means Clustering Based on MapReduce
    authors: Weizhong Zhao, Huifang Ma and Qing He – Chinese Academy of Sciences

  9. Storage and Retrieval of Large RDF Graph Using Hadoop and MapReduce
    authors: Mohammad Farhan Husain, Pankil Doshi, Latifur Khan and Bhavani Thuraisingham – University of Texas at Dallas

  10. Map-Reduce Meets Wider Varieties of Applications
    authors: Shimin Chen and Steven W. Schlosser – Intel Research

  11. LogMaster: Mining Event Correlations in Logs of Large-scale Cluster Systems
    authors: Wei Zhou, Jianfeng Zhan, Dan Meng (Chinese Academy of Sciences), Dongyan Xu (Purdue University) and Zhihong Zhang (China Mobile Research)

  12. Efficient Clustering of Web-Derived Data Sets
    authors: Luıs Sarmento, Eugenio Oliveira (University of Porto), Alexander P. Kehlenbeck (Google), Lyle Ungar (University of Pennsylvania)

  13. A novel approach to multiple sequence alignment using hadoop data grids
    authors: G. Sudha Sadasivam and G. Baktavatchalam – PSG College of Technology

  14. Web-Scale Distributional Similarity and Entity Set Expansion
    authors: Patrick Pantel, Eric Crestan, Ana-Maria Popescu, Vishnu Vyas (Yahoo Labs) and Arkady Borkovsky (Yandex Labs)

  15. Grammar based statistical MT on Hadoop
    authors: Ashish Venugopal and Andreas Zollmann (Carnegie Mellon University)

  16. Distributed Algorithms for Topic Models
    authors: David Newman, Arthur Asuncion, Padhraic Smyth and Max Welling – University of California, Irvine

  17. Parallel algorithms for mining large-scale rich-media data
    authors: Edward Y. Chang, Hongjie Bai and Kaihua Zhu – Google Research

  18. Learning Influence Probabilities In Social Networks
    authors: Amit Goyal, Laks V. S. Lakshmanan (University of British Columbia) and Francesco Bonchi (Yahoo! Research)

  19. MrsRF: an efficient MapReduce algorithm for analyzing large collections of evolutionary trees
    authors: Suzanne J Matthews and Tiffani L Williams – Texas A&M University

  20. User-Based Collaborative-Filtering Recommendation Algorithms on Hadoop
    authors: Zhi-Dan Zhao and Ming-sheng Shang

Atbrox on LinkedIn

Best regards,

Amund Tveit (Atbrox co-founder)
