Apache Pig Introduction:
(This amazing introduction is from the Apache site.)
Apache Pig is a platform for analyzing large data sets that consists of a high-level language for expressing data analysis programs, coupled with infrastructure for evaluating these programs.
The salient property of Pig programs (Pig data analysis programs) is that their structure is amenable to substantial parallelization, which in turn enables them to handle very large data sets.
@ Pig's infrastructure layer (at present) consists of a compiler that produces sequences of Map-Reduce programs, for which large-scale parallel implementations already exist (e.g., the Hadoop subproject).
@ Pig's language layer currently consists of a textual language called Pig Latin, which has the following key properties:
- Ease of programming. It is trivial to achieve parallel execution of simple, "embarrassingly parallel" data analysis tasks. Complex tasks comprised of multiple interrelated data transformations are explicitly encoded as data flow sequences, making them easy to write, understand, and maintain.
- Optimization opportunities. The way in which tasks are encoded permits the system to optimize their execution automatically, allowing the user to focus on semantics rather than efficiency.
- Extensibility. Users can create their own functions to do special-purpose processing.
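To make "data flow sequences" concrete, here is a minimal Pig Latin sketch (the file name, fields, and filter logic are invented for illustration, not taken from the Apache docs). Each statement names the output of one transformation, and Pig compiles the whole chain into a sequence of Map-Reduce jobs:

-- Load a hypothetical tab-separated log file and declare a schema.
raw_logs = LOAD 'access_log.txt' USING PigStorage('\t')
           AS (user:chararray, url:chararray, time:long);
-- Each statement below is one step in the data flow.
errors   = FILTER raw_logs BY url MATCHES '.*error.*';
by_user  = GROUP errors BY user;
counts   = FOREACH by_user GENERATE group AS user, COUNT(errors) AS hits;
STORE counts INTO 'error_counts';

Because each relation (raw_logs, errors, by_user, counts) is just a named step in the flow, Pig is free to rearrange and combine steps when it generates Map-Reduce jobs, which is what the "optimization opportunities" bullet refers to. For "special-purpose processing", the usual route is to REGISTER a jar of user-defined functions and call them inside a FOREACH ... GENERATE, but that is beyond this sketch.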
Now Questions!!!
What are data analysis programs?
What is the infrastructure for evaluating these programs?
What does it mean for a program's structure to be amenable to parallelization?
What is the Hadoop subproject?
What are multiple interrelated data transformations?
What is special-purpose processing?
Answers Coming Soon!! --->
x-----------------------------------------x