Cascading 3.3 User Guide - Apache Hadoop MapReduce Platform


Apache Hadoop MapReduce Platform

By default, Apache Hadoop provides an API called MapReduce for performing computation at scale.

The following documentation covers details about using Cascading on the MapReduce platform that are not covered in the Apache Hadoop chapters of this guide.

Configuring Applications

At runtime, Hadoop must be told which application JAR file should be pushed to the cluster. Historically, this has been done via the Hadoop JobConf object, as seen in the example below.

In order to remain platform-independent, use the AppProps class as described in Configuring Applications.

If you must use an existing JobConf instance, consider the example below:

Example 1. Configuring the application JAR with a JobConf
import java.util.Properties;

import org.apache.hadoop.mapred.JobConf;

import cascading.flow.FlowConnector;
import cascading.flow.hadoop2.Hadoop2MR1FlowConnector;
import cascading.property.AppProps;

JobConf jobConf = new JobConf();

// pass in a class from your application
// Hadoop will find the JAR containing this class at runtime
jobConf.setJarByClass( Main.class );

// ALTERNATIVELY ...

// pass in the path to the application JAR
jobConf.setJar( pathToJar );

// build the properties object using jobConf as defaults
Properties properties = AppProps.appProps()
  .setName( "sample-app" )
  .setVersion( "1.2.3" )
  .addTags( "deploy:prod", "team:engineering" )
  .buildProperties( jobConf );

// pass properties to the connector
FlowConnector flowConnector = new Hadoop2MR1FlowConnector( properties );

The example above shows two methods that set the same underlying property:

setJarByClass()

This method takes a Class object; at runtime Hadoop locates and pushes the JAR that contains it. In the example, the named class owns the "main" function of the application. The assumption here is that Main.class is not located in a JAR stored in the lib folder of the application JAR. If it is, that dependent lib-folder JAR will be pushed to the cluster instead of the parent application JAR (the JAR containing the lib folder).

setJar()

This method takes a literal path to the application JAR.

In your application, call only one of these methods to properly configure Hadoop.

Creating Flows from a JobConf

If a MapReduce job already exists, then the cascading.flow.hadoop.MapReduceFlow class should be used. To do this, create a Hadoop JobConf instance and simply pass it into the MapReduceFlow constructor. The resulting Flow instance can be used like any other Flow.
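For illustration, a minimal sketch of wrapping an existing job might look like the following; the flow name and the JobConf configuration details are assumptions standing in for your own:

import org.apache.hadoop.mapred.JobConf;

import cascading.flow.Flow;
import cascading.flow.hadoop.MapReduceFlow;

// an existing raw MapReduce job, configured as usual
JobConf jobConf = new JobConf();
jobConf.setJarByClass( Main.class );
// ... mapper, reducer, and input/output paths set elsewhere ...

// wrap the JobConf so the job can be used like any other Flow
Flow<JobConf> flow = new MapReduceFlow( "wrapped-mr-job", jobConf );

flow.complete(); // runs the underlying MapReduce job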

Note that multiple MapReduceFlow instances, along with other Flow instances, can be passed to a CascadeConnector to produce a Cascade.
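A hedged sketch of wiring a wrapped job together with an ordinary Flow follows; mrFlow and otherFlow are assumed to be Flow instances created elsewhere:

import cascading.cascade.Cascade;
import cascading.cascade.CascadeConnector;

// schedule both flows together; the connector orders them
// according to their data dependencies
Cascade cascade = new CascadeConnector().connect( mrFlow, otherFlow );

cascade.complete();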

Building

Cascading ships with several JARs and dependencies in the download archive.

Alternatively, Cascading is available via Maven and Ivy through the Conjars repository, along with a number of other Cascading-related projects. See http://conjars.org for more information.

The Cascading Hadoop artifacts include the following:

cascading-core-3.x.y.jar

This JAR contains the Cascading Core class files. It should be packaged in the lib directory of your Hadoop job JAR.

cascading-hadoop-3.x.y.jar

This JAR contains the Cascading Hadoop 1 specific dependencies. It should be packaged in the lib directory of your Hadoop job JAR.

cascading-hadoop2-mr1-3.x.y.jar

This JAR contains the Cascading Hadoop 2 specific dependencies. It should be packaged in the lib directory of your Hadoop job JAR.

Do not package both cascading-hadoop-3.x.y.jar and cascading-hadoop2-mr1-3.x.y.jar into your application. Choose the artifact that matches your Hadoop distribution version.
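For example, a Gradle build might resolve the Hadoop 2 artifacts from Conjars along these lines; this is a sketch, with 3.x.y standing in for the actual release number:

repositories {
  mavenCentral()
  // the Conjars repository hosting the Cascading artifacts
  maven { url 'http://conjars.org/repo' }
}

dependencies {
  compile 'cascading:cascading-core:3.x.y'
  compile 'cascading:cascading-hadoop2-mr1:3.x.y'
}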

Cascading works with either of the Hadoop processing modes: the default local stand-alone mode and the distributed cluster mode. As specified in the Hadoop documentation, running in cluster mode requires the creation of a Hadoop job JAR that includes the Cascading JARs, plus any needed third-party JARs, in its lib directory. This is true whether the job is a Cascading Hadoop-mode application or a raw Hadoop MapReduce application.
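As a sketch, a cluster-mode job JAR assembled this way might be laid out as follows (application and class names are illustrative):

sample-app.jar
├── META-INF/MANIFEST.MF
├── com/example/Main.class
└── lib/
    ├── cascading-core-3.x.y.jar
    ├── cascading-hadoop2-mr1-3.x.y.jar
    └── (any required third-party JARs)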