Features/Hadoop

From FedoraProject

Revision as of 19:54, 30 May 2013 by Tstclair (Talk | contribs)


Apache Hadoop 2.0

Summary

Bring Apache Hadoop, the hottest open source big data platform, to Fedora, the hottest open source distribution. Fedora should be the best distribution for using Apache Hadoop.

This and other big data activities are coordinated through the Big Data SIG.

Owner

People involved

Name IRC Focus Additional
Matthew Farrellee mattf keeping track, integration testing UTC-5
Peter MacKinnon pmackinn packaging UTC-5
Rob Rati rsquared packaging UTC-5
Timothy St. Clair tstclair setup and configuration UTC-6
Sam Kottler skottler packaging UTC-5
Gil Cattaneo gil packaging UTC+1
Christopher Meng cicku packaging, testing UTC+8

Current status

  • Targeted release: Fedora 20
  • Last updated: 20 May 2013
  • Percentage of completion
    • Dependencies available in Fedora (missing since project initiation): 31%
    • Adaptation of Hadoop 2.0.2a source via patches: 100%
    • Hadoop spec completion: 60%
  • Test suite
    • hadoop-common Tests run: 1719, Failures: 12, Errors: 4, Skipped: 16
    • hadoop-hdfs Tests run: 1443, Failures: 5, Errors: 1, Skipped: 3
    • hadoop-yarn Tests run: 126, Failures: 0, Errors: 0, Skipped: 0
    • hadoop-mapreduce Tests run: 59, Failures: 0, Errors: 0, Skipped: 0
    • Disabled: 40 classes (see log below)

Detailed Description

Apache Hadoop is a widely used, increasingly complete big data platform, with a strong open source community and growing ecosystem. The goal is to package and integrate the core of the Hadoop ecosystem for Fedora, allowing for immediate use and creating a base for the rest of the ecosystem.


Benefit to Fedora

The Apache Hadoop software will be packaged and integrated with Fedora. The core of the Hadoop ecosystem will be available with Fedora and provide a base for additional packages.


Scope

  • Package the Apache Hadoop 2.0.2 software
  • Package all dependencies needed for Apache Hadoop 2.0.2
  • Skip package dependencies required for unit testing, record them in a dependency backlog for later cleanup

Approach

We are taking an iterative, depth-first approach to packaging. We do not have all the dependencies mapped out ahead of time. Dependencies are being tabulated into two groups:

  1. missing - the dependency requested by a hadoop-common pom has not yet been packaged, reviewed, or built into the Fedora repos
  2. broken - the dependency requested is out of date relative to the current Fedora version; patches addressing any build, API, or source code deltas must be developed for inclusion in a hadoop RPM build

Note that a dependency may show up in both of these tables.

Anyone who wants to help should find an available dependency below and edit the table, changing its state to Active and the packager to yourself.

While packaging a dependency, test dependencies can be skipped. Testing will be done via integration testing periodically during packaging and then after packaging completes. Test dependencies that are skipped must be added to the Skipped dependencies table below.

If you are lucky enough to pick a dependency that itself has unpackaged dependencies, identify the sub-dependencies, add them to the bottom of the Dependencies table below, change your current dependency to Blocked, and repeat.

If your dependency is already packaged but the version is incompatible, contact the package owner and resolve the incompatibility in a mutually satisfactory way. For instance:

  • If the version available in Fedora is older, explore updating the package. If that is not possible, explore creating a package that includes a version in its name, e.g. pkgnameXY. Ultimately, the most recent version in Fedora should have the name pkgname while older versions have pkgnameXY. It may take a full Fedora release to rationalize package names. Make a note in the Dependencies table.
  • If the version you need is older than the packaged version, consider creating a patch to use the newer version. If a patch is not viable, proceed by packaging the dependency with a version in its name, e.g. pkgnameXY. Make a note in the Dependencies table.
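As a concrete illustration of the pkgnameXY convention above, the suffix can be derived from the major.minor version the compat package pins. A minimal sketch, where the package name and version are purely hypothetical:

```shell
# Derive a versioned compat package name from a dependency name and
# the major.minor version it pins, e.g. pkgname + 6.1.x -> pkgname61.
name="pkgname"
version="6.1.26"                                      # hypothetical pinned version
suffix=$(echo "$version" | cut -d. -f1-2 | tr -d .)   # "6.1" -> "61"
compat="${name}${suffix}"
echo "$compat"                                        # pkgname61
```

The unsuffixed pkgname is then free to track the most recent version in Fedora, per the guidance above.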

Dependencies

Missing dependency legend
State Notes
Available free for someone to take
Active dependency is actively being packaged if missing, or patch is being developed or tested for inclusion in hadoop-common build
Blocked pending packages for dependencies
Review under review, include link to review BZ
Complete woohoo!

Missing Dependencies

Project State Review BZ Packager Notes
hadoop Active rrati,pmackinn
bookkeeper Review RHBZ #948589 gil Version 4.0 requested. packaged 4.2.1. Patch: BOOKKEEPER-598
glassfish-gmbal Complete RHBZ #859112 gil F18 build
glassfish-management-api Complete RHBZ #859110 gil F18 build
grizzly Complete RHBZ #859114 gil Only for F20 for now. Cause: missing glassfish-servlet-api on F18 and F19.
groovy Review RHBZ #858127 gil 1.5 requested but 1.8 packaged in fedora. Possible moving forward 1.8 series will be known as groovy18 and groovy will be 2.x.
jersey Complete RHBZ #825347 gil F18 build Should be rebuilt with grizzly2 support enabled.
jets3t Review RHBZ #847109 gil
jspc-compiler Review RHBZ #960720 pmackinn Passes preliminary overall hadoop-common compilation/testing.
kfs Review RHBZ #960728 pmackinn kfs has become Quantcast qfs. No longer a dependency in 2.0.5-beta and later.
maven-native Review RHBZ #864084 gil Needs patch to build with java7. NOTE: javac target/source is already set by mojo.java.target option
zookeeper Review RHBZ #823122 gil requires jtoaster

Broken Dependencies

Project Packager Notes
ant Version 1.6 requested, 1.8 currently packaged in Fedora. Needs to be inspected for API/functional incompatibilities(?)
apache-commons-collections pmackinn Java import compilation error with existing package. Patches for hadoop-common being tracked at https://github.com/fedora-bigdata/hadoop-common/tree/fedora-patch-collections
apache-commons-math pmackinn Current apache-commons-math uses math3 in pom instead of math, and API changes in code. Patches for hadoop-common being tracked at https://github.com/fedora-bigdata/hadoop-common/tree/fedora-patch-math
cglib pmackinn Missing an explicit dep which the old dep chain didn't need. Patches for hadoop-common being tracked at https://github.com/fedora-bigdata/hadoop-common/tree/fedora-patch-cglib
ecj rrati Need ecj version ecj-4.2.1-6 or later to resolve a dependency lookup issue
gmaven gil Version 1.0 requested, available 1.4 (but has broken deps) RHBZ #914056
hadoop-hdfs pmackinn glibc link error in hdfs native build. Patch for hadoop-common being tracked at https://github.com/fedora-bigdata/hadoop-common/tree/fedora-patch-cmake-hdfs
hsqldb tradej 1.8 in Fedora; update to 2.2.9 in progress. API compatibility to be checked.
jersey pmackinn Needs jersey-servlet and version. Tracked at https://github.com/fedora-bigdata/hadoop-common/tree/fedora-patch-jersey
jets3t pmackinn Requires 0.6.1. With 0.9.x: hadoop-common Jets3tNativeFileSystemStore.java error: incompatible types S3ObjectsChunk chunk = s3Service.listObjectsChunked(bucket.getName(). Patches for hadoop-common being tracked at https://github.com/fedora-bigdata/hadoop-common/tree/fedora-patch-jets3t
jetty rrati jetty8 packaged in Fedora, but 6.x requested. 6 and 8 are incompatible. Patches tracked at https://github.com/fedora-bigdata/hadoop-common/tree/fedora-patch-jetty
slf4j pmackinn Package in Fedora fails to match in dependency resolution. jcl104-over-slf4j dep in hadoop-common moved to jcl-over-slf4j as part of jspc/tomcat dep. Patch being tracked at https://github.com/fedora-bigdata/hadoop-common/tree/fedora-patch-jasper
tomcat-jasper pmackinn Version 5.5.x requested. Adaptations made for incumbent Tomcat 7 via patches at https://github.com/fedora-bigdata/hadoop-common/tree/fedora-patch-jasper. Reviewing fit as part of overall hadoop-common compilation/testing.

Test Suite

Unit Test Log (for 2.0.2-alpha)

Module Name Baseline Fedora Tester Notes
hadoop-common TestDoAsEffectiveUser Pass Fail pmackinn Fails in hadoop-common test suite but frequently succeeds as standalone. testRealUserSetup
hadoop-common TestRPCCompatibility Pass Fail pmackinn Fails in hadoop-common test suite but frequently succeeds as standalone. testVersion2ClientVersion1Server: "expected:<3> but was:<-3>"
hadoop-common TestSSLHttpServer Pass Fail pmackinn Does internal keystore setup but then seems to get tripped up later looking for default keystore file. Possible config issue.
hadoop-hdfs TestWebHdfsFileSystemContract Pass Fail pmackinn OOM, possibly inside jetty container, leading to apparent hang. Unclear if related to datastream IO failures or vice-versa.
hadoop-hdfs TestDelegationTokenForProxyUser Pass Fail pmackinn testWebHdfsDoAs: "expected:<200> but was:<401>". BasicAuth config?
hadoop-hdfs TestEditLogRace Pass Fail pmackinn testSaveRightBeforeSync
hadoop-hdfs TestHftpURLTimeouts Pass Fail pmackinn testHsftpSocketTimeout -> intermittent test failure
hadoop-hdfs TestNameNodeMetrics Fail Fail pmackinn testCorruptBlock: Bad value for metric PendingReplicationBlocks expected:<0> but was:<1> in test suite. Sporadic.
hadoop-hdfs-httpfs TestCheckpoint Fail Fail pmackinn testSecondaryHasVeryOutOfDateImage: Test resulted in an unexpected exit in test suite. Standalone OK.
hadoop-hdfs/src/contrib/bkjournal TestBookKeeperJournalManager Pass Fail pmackinn testOneBookieFailure, testAllBookieFailure: bookies not starting?
hadoop-hdfs/src/contrib/bkjournal TestBookKeeperAsHASharedDir Pass Fail pmackinn more bookies not starting?
hadoop-hdfs/src/contrib/bkjournal TestBookKeeperHACheckpoints Pass Fail pmackinn testCheckpointWhenNoNewTransactionsHappened: Port in use: localhost:10001 (but not really)
hadoop-yarn-server-nodemanager TestNMWebServices* Pass Fail pmackinn All tests in error with java.lang.InstantiationException: something with the adaptation of GuiceServlet
hadoop-yarn-server-resourcemanager TestRMWebServices* Pass Fail pmackinn All tests in error with java.lang.InstantiationException: same as above
hadoop-yarn-server-resourcemanager TestDelegationTokenRenewer Pass Fail pmackinn testDTRenewalWithNoCancel: renew wasn't called as many times as expected expected:<1> but was:<2>
hadoop-yarn-server-resourcemanager TestApplicationTokens Pass Fail pmackinn testTokenExpiry: sporadic NPE
hadoop-yarn-server-resourcemanager TestAppManager Pass Fail pmackinn testRMAppSubmit,testRMAppSubmitWithQueueAndName: app event type is wrong before expected:<KILL> but was:<APP_REJECTED>; sporadic
hadoop-yarn-client TestYarnClient Fail Fail pmackinn testClientStop: Can only configure with YarnConfiguration
hadoop-yarn-applications TestUnmanagedAMLauncher Fail Fail pmackinn Seems designed to execute once by successfully contacting an RM but repeatedly retries with: yarnAppState=FAILED, distributedFinalState=FAILED
hadoop-mapreduce-client-app TestAMWebServices* Pass Fail pmackinn All tests in error with java.lang.InstantiationException: same guice-servlet problem
hadoop-mapreduce-client-hs TestHsWebServices* Pass Fail pmackinn All tests in error with java.lang.InstantiationException: same guice-servlet problem
hadoop-mapreduce-client-jobclient TestMiniMRProxyUser Pass Fail pmackinn testValidProxyUser: assert fail
hadoop-mapreduce-client-jobclient TestMRJobs Pass Fail pmackinn testDistributedCache: assert fail
hadoop-mapreduce-client-jobclient TestMapReduceLazyOutput Pass Fail pmackinn testLazyOutput: assert fail
hadoop-mapreduce-client-jobclient TestEncryptedShuffle Pass Fail pmackinn encryptedShuffleWithClientCerts,encryptedShuffleWithoutClientCerts: assert fail
hadoop-mapreduce-client-jobclient TestJobName Pass Fail pmackinn testComplexName,testComplexNameWithRegex: Job failed!
hadoop-mapreduce-client-jobclient TestJobSysDirWithDFS Pass Fail pmackinn testWithDFS: Job failed!
hadoop-mapreduce-client-jobclient TestClusterMapReduceTestCase Pass Fail pmackinn testMapReduce,testMapReduceRestarting: Job failed!
hadoop-mapreduce-client-jobclient TestLazyOutput Pass Fail pmackinn testLazyOutput: Job failed!
hadoop-mapreduce-client-jobclient TestMiniMRWithDFSWithDistinctUsers Pass Fail pmackinn testDistinctUsers,testMultipleSpills: Job failed!
hadoop-mapreduce-client-jobclient TestRMNMInfo Pass Fail pmackinn testRMNMInfoMissmatch:
hadoop-distcp TestCopyCommitter Pass Fail pmackinn testNoCommitAction: Commit failed

Tests are listed in order of execution.
Baseline: F18, maven 3.0.5, Oracle JDK 1.6u45

Upstream Patch Tracking

Currently tracking against branch-2 @ https://github.com/timothysc/hadoop-common

Modification Tracking

Branch Committer JIRA Target Status
fedora-patch-math pmackinn https://issues.apache.org/jira/browse/HADOOP-9594 2.0.5-beta PENDING REVIEW
fedora-patch-junit pmackinn https://issues.apache.org/jira/browse/HADOOP-9605 2.0.5-beta PENDING REVIEW
fedora-patch-javadocs rsquared https://issues.apache.org/jira/browse/HADOOP-9607 2.0.5-beta COMMITTED
fedora-patch-collections pmackinn https://issues.apache.org/jira/browse/HADOOP-9610 2.0.5-beta PENDING REVIEW
fedora-patch-cmake-hdfs pmackinn N/A 2.0.5-beta Already Modified Upstream

Packager Resources

Packager tips

  • mvn-rpmbuild utility will ONLY resolve from system repo
  • mvn-local will resolve from the system repo first, then fall back to maven if unresolved
    • can be used to find the delta between system repo packages available and missing dependencies that can be viewed in the .m2 local maven repo (find ~/.m2/repository -name '*.jar')
  • -Dmaven.local.debug=true
    • reveals how JPP lookups are executed per dependency: useful for finding groupId,artifactId mismatches
  • -Dmaven.test.skip=true
    • tells maven to skip test runs AND compilation
    • useful for unblocking end-to-end build
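Putting the tips above together, an end-to-end build invocation can be composed from those options. A sketch only, assuming the mvn-rpmbuild tooling is installed:

```shell
# Sketch: compose the build command from the options described above.
# -Dmaven.local.debug=true prints each JPP dependency lookup;
# -Dmaven.test.skip=true skips both test compilation and test runs.
build_cmd="mvn-rpmbuild -Dmaven.local.debug=true -Dmaven.test.skip=true install"
echo "$build_cmd"
```

Afterwards, `find ~/.m2/repository -name '*.jar'` shows which artifacts stock maven fetched from the network, i.e. the delta against the system repo.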

An alternative to gmaven:

  • apply a patch with the following content where required
  • test support is not guaranteed and may not work.

     <plugin>
       <groupId>org.apache.maven.plugins</groupId>
       <artifactId>maven-antrun-plugin</artifactId>
       <version>1.7</version>
       <dependencies>
         <dependency>
           <groupId>org.codehaus.groovy</groupId>
           <artifactId>groovy</artifactId>
           <version>any</version>
         </dependency>
         <dependency>
           <groupId>antlr</groupId>
           <artifactId>antlr</artifactId>
           <version>any</version>
         </dependency>
         <dependency>
           <groupId>commons-cli</groupId>
           <artifactId>commons-cli</artifactId>
           <version>any</version>
         </dependency>
         <dependency>
           <groupId>asm</groupId>
           <artifactId>asm-all</artifactId>
           <version>any</version>
         </dependency>
         <dependency>
           <groupId>org.slf4j</groupId>
           <artifactId>slf4j-nop</artifactId>
           <version>any</version>
         </dependency>
       </dependencies>
       <executions>
         <execution>
           <id>compile</id>
           <phase>process-sources</phase>
           <configuration>
             <target>
               <mkdir dir="${basedir}/target/classes"/>
               <taskdef name="groovyc" classname="org.codehaus.groovy.ant.Groovyc">
                 <classpath refid="maven.plugin.classpath"/>
               </taskdef>
               <groovyc destdir="${project.build.outputDirectory}" srcdir="${basedir}/src/main" classpathref="maven.compile.classpath">
                 <javac source="1.5" target="1.5" debug="on"/>
               </groovyc>
             </target>
           </configuration>
           <goals>
             <goal>run</goal>
           </goals>
         </execution>
       </executions>
     </plugin>

Repositories

An RPM repository of dependencies already packaged and in, or heading towards, review state can be found here:

http://repos.fedorapeople.org/repos/rrati/hadoop/

Currently, only Fedora 18 x86_64 packages are available.
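A sketch of a yum repo definition that would consume it; the file name and repo id here are illustrative, and you should verify the baseurl path against the actual repository layout:

```ini
# /etc/yum.repos.d/rrati-hadoop.repo -- illustrative only
[rrati-hadoop]
name=Hadoop packaging dependencies (rrati)
baseurl=http://repos.fedorapeople.org/repos/rrati/hadoop/
enabled=1
gpgcheck=0
```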


Source repositories:

https://github.com/fedora-bigdata/hadoop-common Fork of Apache Hadoop for changes required to support compilation on Fedora

https://github.com/fedora-bigdata/hadoop-rpm Spec and supporting files for generating an RPM for Fedora


Workflow

The Apache Hadoop project uses a number of old, or obsolete, dependencies in its build and test environment, and this presents a challenge for including Apache Hadoop in Fedora. Any change to the Apache Hadoop source or build files that is required in order to use a newer version of a dependency is a candidate for a patch to send upstream. Any changes that are required to conform to Fedora's packaging guidelines, or to deal with a package naming issue, should be contained in the hadoop spec file.

The intention of this process is to isolate changes to a single dependency so that patches can be created that can be consumed upstream. It is important that source changes be isolated to a single dependency and be self-contained. A dependency is not necessarily a single jar file: changes for a dependency should entail everything needed to use the jar files from a later release of that dependency.


Dependency Branches

All code/build changes to Apache Hadoop should be performed on a branch in the hadoop-common repo that should be based off the

branch-2.0.2-alpha

branch and should follow this naming convention:

fedora-patch-<dependency>

Where <dependency> is the name of the dependency being worked on. Changes on this branch should ONLY relate to that dependency. Do not include the dependency version in the branch name. These branches will be updated as needed, due to Fedora or Hadoop updates, until they are accepted upstream by Apache Hadoop. Omitting the version allows a branch to move from version 1->2->3 without confusion if an update is required before the change is accepted upstream.
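A minimal sketch of composing the branch name for a hypothetical dependency; the branch itself would then be created off branch-2.0.2-alpha with ordinary git commands (`git checkout branch-2.0.2-alpha && git checkout -b "$branch"`):

```shell
# Compose a dependency branch name per the convention above;
# "jets3t" is just an example dependency.
dep="jets3t"
branch="fedora-patch-${dep}"   # note: no version in the branch name
echo "$branch"                 # fedora-patch-jets3t
```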

Integration Branch

An integration branch should be created in the hadoop-common repository that corresponds to the release version being packaged, using the following naming convention:

fedora-<version>-integration

where <version> is the hadoop version being packaged. All branches containing changes that have not yet been accepted upstream should be merged into the integration branch, and the result should pass the build and all tests. Once this is complete, a patch should be generated and pushed to the hadoop-rpm repository.
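The integration branch name follows the same compositional pattern; a sketch for the 2.0.2-alpha packaging effort:

```shell
# Compose the integration branch name for the hadoop version being packaged.
version="2.0.2-alpha"
branch="fedora-${version}-integration"
echo "$branch"                 # fedora-2.0.2-alpha-integration
```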

Testing Changes

In order for a set of changes to be considered complete, it must be able to compile and pass all tests in 2 separate ways:

  1. On Fedora using Fedora packages (mvn-rpmbuild)
  2. On Fedora using maven retrieved packages (mvn)

The changes should compile and the build process should run through all tests without error. To verify a set of changes, use the following options:

<mvn-build> -Pnative install

Where <mvn-build> is either mvn-rpmbuild or mvn.

NOTE: This pirates' code is more what you'd call guidelines than actual rules. There are places where incompatible changes exist (at least for now): for example, the zookeeper test jar.

How To Test

  1. TODO: NEEDS MORE DEFINITION
  2. yum install X Y Z across one or more nodes
  3. Setup a simple cluster by following TBD
  4. Run http://hadoop.apache.org/docs/stable/gridmix.html


User Experience

Users interested in running Apache Hadoop on Fedora will find it available from the Fedora Project yum repositories.

TODO: SPECIFICALLY PACKAGES X Y Z


Dependencies

No other packages currently depend on Apache Hadoop.

Completion of this feature will involve packaging numerous dependencies; see the Dependencies table. Some of the dependencies are already being packaged by others in the Fedora community. Where dependency overlap is found, a negotiation must occur to ensure a satisfactory version and package is available to all parties.

TODO: Is https://fedoraproject.org/wiki/Hypertable ?


Contingency Plan

With no packages depending on Apache Hadoop, no contingency plan for dependent packages is necessary. The biggest risk is not completing packages for all dependencies. In that case, the feature can be removed from the release notes, the already-packaged dependencies can remain in the distribution, and the feature can be pushed to the next Fedora release.


Documentation

Release Notes

  • TODO


Comments and Discussion