
Hadoop Ecosystem Quick Start: 5 Key Components


By now we all know that Hadoop is a central part of many big data projects, but how do we integrate this technology with some of the more traditional approaches to data handling? Is there a way to make sure our Hadoop cluster is interacting with and enriching the rest of our analytics environment? Luckily, there’s a whole suite of utilities that interact with Hadoop to address these questions, and it’s important to know what they are and how to take advantage of them. In this article, we’re going to take a quick look at five commonly used utilities in the Hadoop Ecosystem to help you understand how they can be used to integrate the Hadoop framework with more traditional relational databases and leverage the data for analytical purposes.

What Is Hadoop?

To get started, let’s look at a simple definition of the tool these utilities support. Apache Hadoop is a framework that allows for the distributed processing of large data sets across clusters of commodity computers using a simple programming model. It is an open-source data management system with scale-out storage and distributed processing, designed from the ground up for big data and very large volumes of information. If you’d like some more in-depth details on Hadoop and its deployment, check out Susan Ma’s article from earlier this year.

The Hadoop Ecosystem

The Hadoop Ecosystem consists of tools for data analysis, moving large amounts of unstructured and structured data, data processing, querying data, storing data, and other similar data-oriented processes. Each of these utilities serves a unique purpose and is geared toward a different task or user role within Hadoop.

Hadoop Ecosystem Diagram

Some of the most common utilities used with Hadoop are the following five tools:

    • HDFS
    • Pig
    • Hive
    • Sqoop
    • HBase

Hadoop Ecosystem Tools

Let’s briefly go through these tools one by one without going into too much detail. Then, we can link all of them together to see how we can use the Hadoop Ecosystem with traditional databases.

HDFS: Hadoop Distributed File System

The Hadoop Distributed File System (HDFS) is a distributed file system designed to run on commodity hardware. An HDFS instance may consist of hundreds or thousands of server machines, each storing part of the file system’s data.

Key HDFS Facts:

  1. HDFS has a master/slave architecture.
  2. The master is the high-end server while the slaves are commodity servers.
  3. Each file or incoming piece of data is stored as a sequence of blocks spread across the commodity servers.
  4. All blocks in a file except the last block are of the same size. The blocks of a file are replicated for fault tolerance amongst the commodity servers.
  5. The block size and replication factor are configurable per file. An application can specify the number of replicas of a file; the replication factor can be set at file creation time and changed later (see the Java sketch after this list).
  6. By distributing storage and computation across many servers, the combined storage resource can grow linearly with demand while remaining economical at every storage level, large or small.
  7. HDFS is a scalable, fault-tolerant, distributed storage system that works closely with a wide variety of concurrent data access applications.
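
To make the replication point in item 5 concrete, here is a minimal Java sketch using the standard Hadoop FileSystem API to write a file into HDFS with a per-file replication factor and block size. The NameNode address, file path, and sizes are illustrative assumptions, not requirements.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsWriteExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://namenode:8020");   // assumed NameNode address

        try (FileSystem fs = FileSystem.get(conf)) {
            Path file = new Path("/data/reviews/reviews.xml"); // hypothetical path

            // Replication factor and block size are set per file at creation time.
            short replication = 3;                 // three copies across the cluster
            long blockSize = 128L * 1024 * 1024;   // 128 MB blocks
            try (FSDataOutputStream out =
                     fs.create(file, true, 4096, replication, blockSize)) {
                out.writeUTF("<reviews>...</reviews>");
            }

            // The replication factor can also be changed after the file exists.
            fs.setReplication(file, (short) 2);
        }
    }
}
```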

Hadoop Features Diagram

Pig

Pig is an open-source, high-level dataflow system that sits on top of the Hadoop framework and can read data from the HDFS for analysis.

Key Pig Facts:

  1. Pig processes large volumes of data quickly and efficiently from the underlying HDFS.
  2. It provides a simple language for queries and data manipulation, Pig Latin, that is compiled into MapReduce jobs that run on Hadoop (see the sketch after this list).
  3. Java knowledge is not required to learn Pig.
  4. Pig can work with the forms of data listed below:
    • Structured data
    • Semistructured data
    • Unstructured data
  5. Pig is easy to learn and is similar to writing SQL, with built-in operations like JOIN, GROUP, FILTER, and SORT.
  6. It can read through a series of instructions and perform them.
  7. Companies like Yahoo, Microsoft, and Google use Pig to collect enormous quantities of data in the form of click streams, search logs, and web crawls.
  8. Because the data collected can be massive in size, it needs proper analysis to look for common industry trends.
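
As a rough illustration of item 2, the sketch below drives a few Pig Latin statements from Java through the PigServer class. The reviews file, its schema, and the output path are hypothetical, and a real cluster would use ExecType.MAPREDUCE instead of local mode.

```java
import org.apache.pig.ExecType;
import org.apache.pig.PigServer;

public class PigReviewsExample {
    public static void main(String[] args) throws Exception {
        // Local mode for testing; ExecType.MAPREDUCE compiles the script into jobs on a cluster.
        PigServer pig = new PigServer(ExecType.LOCAL);

        // Pig Latin: load, filter, group, and count good reviews per product category.
        pig.registerQuery(
            "reviews = LOAD '/data/reviews.tsv' USING PigStorage('\\t') "
          + "AS (product:chararray, category:chararray, rating:int);");
        pig.registerQuery("good = FILTER reviews BY rating >= 4;");
        pig.registerQuery("by_category = GROUP good BY category;");
        pig.registerQuery(
            "counts = FOREACH by_category GENERATE group AS category, COUNT(good) AS n;");

        // Compiles the dataflow into MapReduce jobs and writes the result.
        pig.store("counts", "/output/good_review_counts");
    }
}
```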

Hive

Hive is a data warehousing package built on top of Hadoop that is used for complex data analysis and exploration.

Key Hive Facts:

  1. Hive is targeted toward users comfortable with writing SQL.
  2. It queries data using an SQL-based language called HiveQL (a small JDBC example follows this list).
  3. It was developed by Facebook and contributed to the Apache community.
  4. Facebook analyzed several terabytes of data using Hive.
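
A small, hedged example of item 2: querying Hive over its JDBC driver with HiveQL. The HiveServer2 address, credentials, and the reviews table are assumed for illustration.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class HiveQueryExample {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");   // Hive JDBC driver
        String url = "jdbc:hive2://hiveserver:10000/default"; // assumed HiveServer2 address

        try (Connection conn = DriverManager.getConnection(url, "analyst", "");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                 "SELECT category, AVG(rating) AS avg_rating "
               + "FROM reviews GROUP BY category ORDER BY avg_rating DESC")) {
            while (rs.next()) {
                System.out.println(rs.getString("category") + "\t" + rs.getDouble("avg_rating"));
            }
        }
    }
}
```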

Note: At this point, you might be thinking, “If we have Pig doing similar data analysis to Hive, then why do we need Hive anyway?” Well, there are some important distinctions between the two to consider:

Pig vs Hive

The key thing to remember is that Pig is typically used by researchers and programmers, while Hive is used by analysts generating reports.

Sqoop: SQL-To-Hadoop

Apache Sqoop efficiently transfers bulk data between Apache Hadoop and structured data stores, such as relational databases. This enables much smoother data exchange between traditional and big data-oriented data sources.

Key Sqoop Facts:

  1. Sqoop imports individual tables or complete databases into HDFS (see the sketch after this list).
  2. Sqoop can also be used to extract data from Hadoop and export it into external structured data stores.
  3. Sqoop works with relational databases such as Teradata, Netezza, Oracle, MySQL, Postgres, and HSQLDB.
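
As a sketch of item 1, the example below launches the documented sqoop import command from Java with ProcessBuilder. The JDBC connection string, table, and HDFS paths are assumptions, and in practice the same command is often run straight from a shell.

```java
import java.util.Arrays;
import java.util.List;

public class SqoopImportExample {
    public static void main(String[] args) throws Exception {
        List<String> command = Arrays.asList(
            "sqoop", "import",
            "--connect", "jdbc:mysql://dbhost:3306/shop",   // source RDBMS
            "--username", "etl_user",
            "--password-file", "/user/etl/.sqoop_password", // avoid passwords on the command line
            "--table", "reviews",                           // source table to import
            "--target-dir", "/data/reviews",                // destination directory in HDFS
            "--num-mappers", "4");                          // parallel map tasks

        Process process = new ProcessBuilder(command)
            .inheritIO()                                    // stream Sqoop's output to our console
            .start();
        int exitCode = process.waitFor();
        System.out.println("Sqoop import finished with exit code " + exitCode);
    }
}
```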

HBase

Apache HBase is a column-oriented, NoSQL database built on top of Hadoop.

Key HBase Facts:

  1. HBase stores data in key-value pairs (see the sketch after this list).
  2. HBase provides random access and strong consistency for large amounts of unstructured and semi-structured data in a schema-less database organized by column families.
  3. It puts data types traditionally excluded from standard RDBMS systems into an accessible, recognizable format.
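
To illustrate the key-value model in item 1, here is a minimal sketch using the HBase Java client to write and then randomly read a single row. The table, column family, and row key are hypothetical, and the table is assumed to already exist with an "r" column family.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseReviewExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create(); // reads hbase-site.xml from the classpath

        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("reviews"))) {

            // Write: row key + column family:qualifier -> value.
            Put put = new Put(Bytes.toBytes("product42#review7"));
            put.addColumn(Bytes.toBytes("r"), Bytes.toBytes("rating"), Bytes.toBytes("5"));
            put.addColumn(Bytes.toBytes("r"), Bytes.toBytes("text"), Bytes.toBytes("Great product"));
            table.put(put);

            // Random read of a single row by key.
            Result result = table.get(new Get(Bytes.toBytes("product42#review7")));
            byte[] rating = result.getValue(Bytes.toBytes("r"), Bytes.toBytes("rating"));
            System.out.println("rating = " + Bytes.toString(rating));
        }
    }
}
```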

HBase Usage Diagram

How HBase Differs from a Traditional RDBMS

HBase vs RDBMS

Putting It All Together

Now that we have a brief overview of the common Hadoop Ecosystem tools under our belts, let’s connect the dots in a way that lets users of traditional databases leverage data coming from the Hadoop framework.

Let’s say we have a huge XML file of unstructured data containing reviews for different products from a website, and we want to analyze that data. Since relational databases don’t handle unstructured data well, we opt for the Hadoop framework. To connect the Hadoop-based information to our analytics platform so it can be analyzed properly, we put it through a workflow that involves all the tools we’ve just discussed.

Hadoop Ecosystem Workflow

Workflow:

  1. The XML file is read into HDFS.
  2. MapReduce jobs read the product categories and review text and decide whether each review is good or bad (a simplified mapper sketch follows this list).
  3. The output of step 2 is structured data.
  4. Pig scripts are run on that output to separate out the different product categories and their reviews.
  5. Hive queries are written to get the top-rated reviews per product category.
  6. The output of step 5 is exported into the relational database using Sqoop.
  7. Once the data is in the RDBMS, we can use it as required, for example to report the top-rated reviews for different product categories.
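
To ground step 2, here is a deliberately simplified mapper sketch. It assumes the XML has already been flattened into tab-separated (category, review text) lines and uses a naive keyword rule in place of real sentiment logic; every name in it is hypothetical.

```java
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class ReviewSentimentMapper extends Mapper<LongWritable, Text, Text, Text> {
    private final Text category = new Text();
    private final Text label = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        // Expected input line: "<category>\t<review text>"
        String[] fields = value.toString().split("\t", 2);
        if (fields.length < 2) {
            return; // skip malformed lines
        }
        String review = fields[1].toLowerCase();
        // Naive keyword rule standing in for real sentiment analysis.
        boolean good = review.contains("great") || review.contains("excellent");

        category.set(fields[0]);
        label.set(good ? "GOOD" : "BAD");
        context.write(category, label);
    }
}
```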

Hadoop Ecosystem Expanded Workflow

Conclusion

The scenario above is just one example of how you can combine Hadoop with relational data sources for analysis. If you’d like to get more information on Hadoop and other big data-related concepts, keep an eye out for future articles on the subject and take a look at the resources below:


Article References:

  • edureka.co/big-data-and-hadoop
  • hadoop.apache.org