An Overview of Apache Hadoop


What is Apache Hadoop?

Apache Hadoop is an open-source framework for the distributed storage and large-scale processing of data sets using simple programming models. It is designed to scale up from a single server to thousands of machines, each offering local computation and storage.

Apache Hadoop detects and handles failures at the application layer, so it can deliver a highly available service on top of a cluster of computers. It consists of the Hadoop Common package, which provides file system and OS-level abstractions, a MapReduce engine, and the Hadoop Distributed File System (HDFS).

Apache Hadoop consists of the following modules:

  • Hadoop Distributed File System (HDFS): a distributed file system that provides high-throughput access to application data.
  • Hadoop YARN: a framework for job scheduling and cluster resource management.
  • Hadoop MapReduce: a YARN-based system for the parallel processing of large data sets.
  • Hadoop Common: the libraries and utilities needed by the other Hadoop modules.
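To make the MapReduce module concrete, here is a minimal, single-process sketch of the map → shuffle → reduce model applied to word counting. This is an illustration of the programming model only, not the actual Hadoop Java API; on a real cluster, the map and reduce phases run in parallel across many nodes.

```python
from collections import defaultdict

# Map phase: emit a (word, 1) pair for every word in the input.
def map_phase(lines):
    for line in lines:
        for word in line.split():
            yield (word.lower(), 1)

# Shuffle phase: group all emitted values by key (the word).
def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

# Reduce phase: sum the counts for each word.
def reduce_phase(groups):
    return {word: sum(counts) for word, counts in groups.items()}

lines = ["the quick brown fox", "the lazy dog"]
counts = reduce_phase(shuffle(map_phase(lines)))
print(counts["the"])  # 2
```

Because each map call depends only on its own input line, and each reduce call depends only on one key's group, Hadoop can distribute both phases across the cluster and rerun any failed task independently.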

A fundamental assumption across all Hadoop modules is that hardware failures are common and should therefore be handled automatically in software by the framework.

Instead of relying on expensive, specialized hardware to store big data, Hadoop enables the distributed parallel processing of large amounts of data across clusters of inexpensive servers that both store and process the data.

Normally, Hadoop is deployed as a distribution that breaks data into blocks and stores them across the compute nodes. The distributed file system is written in Java and runs on many operating systems.
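The idea of breaking data into blocks and replicating them across nodes can be sketched as follows. This is a simplified illustration, not HDFS itself: the block size and node names are hypothetical, placement here is round-robin rather than rack-aware, and real HDFS defaults to 128 MB blocks with a replication factor of 3.

```python
# Split a byte string into fixed-size blocks (HDFS default is 128 MB;
# a tiny block size is used here purely for illustration).
def split_into_blocks(data, block_size):
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]

# Assign each block to `replication` distinct nodes. Real HDFS placement
# is rack-aware; simple round-robin is used here to show the concept.
def place_replicas(num_blocks, nodes, replication=3):
    placement = {}
    for block_id in range(num_blocks):
        placement[block_id] = [nodes[(block_id + r) % len(nodes)]
                               for r in range(replication)]
    return placement

data = b"x" * 1000
blocks = split_into_blocks(data, 256)          # 4 blocks: 256+256+256+232 bytes
nodes = ["node1", "node2", "node3", "node4", "node5"]
placement = place_replicas(len(blocks), nodes)
print(placement[0])  # ['node1', 'node2', 'node3']
```

Because every block lives on several nodes, the loss of any single machine costs no data, and the framework can schedule computation on whichever replica is closest to free capacity.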

How Netgains can help on Apache Hadoop

  • Analytic SQL on Hadoop.
  • Search tools that let users query and browse data in Hadoop.
  • Fast, in-memory analytics.
  • Real-time stream processing for Hadoop.

We deliver a flexible, integrated, secure, scalable, extensible, highly available, open, and compatible solution.
