Robert Metzger created FLINK-17910:
--------------------------------------

             Summary: Kerberized YARN on Docker e2e test: "Final Application State: FAILED"
                 Key: FLINK-17910
                 URL: https://issues.apache.org/jira/browse/FLINK-17910
             Project: Flink
          Issue Type: Improvement
          Components: Deployment / YARN, Tests
    Affects Versions: 1.12.0
            Reporter: Robert Metzger

CI: https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=2061&view=logs&j=91bf6583-3fb2-592f-e4d4-d79d79c3230a&t=94459a52-42b6-5bfc-5d74-690b5d3c6de8

{code}
2020-05-22T21:19:22.1943701Z ==============================================================================
2020-05-22T21:19:22.1944878Z Running 'Running Kerberized YARN application on Docker test (custom fs plugin)'
2020-05-22T21:19:22.1945365Z ==============================================================================
2020-05-22T21:19:22.1958773Z TEST_DATA_DIR: /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-22195494660
2020-05-22T21:19:22.3625117Z Flink dist directory: /home/vsts/work/1/s/flink-dist/target/flink-1.12-SNAPSHOT-bin/flink-1.12-SNAPSHOT
2020-05-22T21:19:22.3947572Z Flink dist directory: /home/vsts/work/1/s/flink-dist/target/flink-1.12-SNAPSHOT-bin/flink-1.12-SNAPSHOT
2020-05-22T21:19:22.4043434Z Flink dist directory: /home/vsts/work/1/s/flink-dist/target/flink-1.12-SNAPSHOT-bin/flink-1.12-SNAPSHOT
2020-05-22T21:19:22.4525323Z Docker version 19.03.9, build 9d988398e7
2020-05-22T21:19:23.0459269Z docker-compose version 1.25.4, build 8d51620a
2020-05-22T21:19:23.0942111Z Flink Tarball directory /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-22195494660
2020-05-22T21:19:23.0943325Z Flink tarball filename flink.tar.gz
2020-05-22T21:19:23.0944263Z Flink distribution directory name flink-1.12-SNAPSHOT
2020-05-22T21:19:23.0945164Z End-to-end directory /home/vsts/work/1/s/flink-end-to-end-tests
2020-05-22T21:19:23.1045503Z Building Hadoop Docker container
2020-05-22T21:19:23.1493298Z Sending build context to Docker daemon  56.83kB
2020-05-22T21:19:23.1494099Z
2020-05-22T21:19:23.2347114Z Step 1/54 : FROM sequenceiq/pam:ubuntu-14.04
2020-05-22T21:19:23.2351407Z ---> df7bea4c5f64
2020-05-22T21:19:23.2352674Z Step 2/54 : RUN set -x && addgroup hadoop && useradd -d /home/hdfs -ms /bin/bash -G hadoop -p hdfs hdfs && useradd -d /home/yarn -ms /bin/bash -G hadoop -p yarn yarn && useradd -d /home/mapred -ms /bin/bash -G hadoop -p mapred mapred && useradd -d /home/hadoop-user -ms /bin/bash -p hadoop-user hadoop-user
2020-05-22T21:19:23.2355663Z ---> Using cache
2020-05-22T21:19:23.2356858Z ---> 6d5924f58cc9
2020-05-22T21:19:23.2360334Z Step 3/54 : RUN set -x && apt-get update && apt-get install -y curl tar sudo openssh-server openssh-client rsync unzip krb5-user
2020-05-22T21:19:23.2362691Z ---> Using cache
2020-05-22T21:19:23.2363645Z ---> e751c48ace10
2020-05-22T21:19:23.2365365Z Step 4/54 : RUN set -x && mkdir -p /var/log/kerberos && touch /var/log/kerberos/kadmind.log
2020-05-22T21:19:23.2369651Z ---> Using cache
2020-05-22T21:19:23.2370072Z ---> 23112f030775
2020-05-22T21:19:23.2371522Z Step 5/54 : RUN set -x && rm -f /etc/ssh/ssh_host_dsa_key /etc/ssh/ssh_host_rsa_key /root/.ssh/id_rsa && ssh-keygen -q -N "" -t dsa -f /etc/ssh/ssh_host_dsa_key && ssh-keygen -q -N "" -t rsa -f /etc/ssh/ssh_host_rsa_key && ssh-keygen -q -N "" -t rsa -f /root/.ssh/id_rsa && cp /root/.ssh/id_rsa.pub /root/.ssh/authorized_keys
2020-05-22T21:19:23.2378185Z ---> Using cache
2020-05-22T21:19:23.2379445Z ---> b440f5703744
2020-05-22T21:19:23.2383887Z Step 6/54 : RUN set -x && mkdir -p /usr/java/default && curl -Ls 'http://download.oracle.com/otn-pub/java/jdk/8u131-b11/d54c1d3a095b4ff2b6607d096fa80163/jdk-8u131-linux-x64.tar.gz' -H 'Cookie: oraclelicense=accept-securebackup-cookie' | tar --strip-components=1 -xz -C /usr/java/default/
2020-05-22T21:19:23.2384981Z ---> Using cache
2020-05-22T21:19:23.2385779Z ---> 7dc6af910a17
2020-05-22T21:19:23.2386182Z Step 7/54 : ENV JAVA_HOME /usr/java/default
2020-05-22T21:19:23.2387984Z ---> Using cache
2020-05-22T21:19:23.2391424Z ---> 6a74039804e6
2020-05-22T21:19:23.2391826Z Step 8/54 : ENV PATH $PATH:$JAVA_HOME/bin
2020-05-22T21:19:23.2394073Z ---> Using cache
2020-05-22T21:19:23.2401679Z ---> 7c1c284d77a9
2020-05-22T21:19:23.2403017Z Step 9/54 : RUN set -x && curl -LOH 'Cookie: oraclelicense=accept-securebackup-cookie' 'http://download.oracle.com/otn-pub/java/jce/8/jce_policy-8.zip' && unzip jce_policy-8.zip && cp /UnlimitedJCEPolicyJDK8/local_policy.jar /UnlimitedJCEPolicyJDK8/US_export_policy.jar $JAVA_HOME/jre/lib/security
2020-05-22T21:19:23.2403961Z ---> Using cache
2020-05-22T21:19:23.2404343Z ---> 7188070276ed
2020-05-22T21:19:23.2404599Z Step 10/54 : ARG HADOOP_VERSION=2.8.4
2020-05-22T21:19:23.2414901Z ---> Using cache
2020-05-22T21:19:23.2415439Z ---> 5b94a16c64a1
2020-05-22T21:19:23.2416140Z Step 11/54 : ENV HADOOP_URL http://archive.apache.org/dist/hadoop/common/hadoop-$HADOOP_VERSION/hadoop-$HADOOP_VERSION.tar.gz
2020-05-22T21:19:23.2416697Z ---> Using cache
2020-05-22T21:19:23.2417046Z ---> 7772dac4cf00
2020-05-22T21:19:23.2417767Z Step 12/54 : RUN set -x && curl -fSL "$HADOOP_URL" -o /tmp/hadoop.tar.gz && tar -xf /tmp/hadoop.tar.gz -C /usr/local/ && rm /tmp/hadoop.tar.gz*
2020-05-22T21:19:23.2418335Z ---> Using cache
2020-05-22T21:19:23.2418720Z ---> e42fb0b26de2
2020-05-22T21:19:23.2418959Z Step 13/54 : WORKDIR /usr/local
2020-05-22T21:19:23.2461496Z ---> Using cache
2020-05-22T21:19:23.2461932Z ---> 5febdd652155
2020-05-22T21:19:23.2466548Z Step 14/54 : RUN set -x && ln -s /usr/local/hadoop-${HADOOP_VERSION} /usr/local/hadoop && chown root:root -R /usr/local/hadoop-${HADOOP_VERSION}/ && chown root:root -R /usr/local/hadoop/ && chown root:yarn /usr/local/hadoop/bin/container-executor && chmod 6050 /usr/local/hadoop/bin/container-executor && mkdir -p /hadoop-data/nm-local-dirs && mkdir -p /hadoop-data/nm-log-dirs && chown yarn:yarn /hadoop-data && chown yarn:yarn /hadoop-data/nm-local-dirs && chown yarn:yarn /hadoop-data/nm-log-dirs && chmod 755 /hadoop-data && chmod 755 /hadoop-data/nm-local-dirs && chmod 755 /hadoop-data/nm-log-dirs
2020-05-22T21:19:23.2470850Z ---> Using cache
2020-05-22T21:19:23.2471244Z ---> 209a70b30085
2020-05-22T21:19:23.2471518Z Step 15/54 : ENV HADOOP_HOME /usr/local/hadoop
2020-05-22T21:19:23.2472098Z ---> Using cache
2020-05-22T21:19:23.2472467Z ---> 027905270d57
2020-05-22T21:19:23.2472729Z Step 16/54 : ENV HADOOP_COMMON_HOME /usr/local/hadoop
2020-05-22T21:19:23.2473435Z ---> Using cache
2020-05-22T21:19:23.2476300Z ---> c015b1952ff2
2020-05-22T21:19:23.2476618Z Step 17/54 : ENV HADOOP_HDFS_HOME /usr/local/hadoop
2020-05-22T21:19:23.2477050Z ---> Using cache
2020-05-22T21:19:23.2477393Z ---> 862fb5b1ec33
2020-05-22T21:19:23.2477670Z Step 18/54 : ENV HADOOP_MAPRED_HOME /usr/local/hadoop
2020-05-22T21:19:23.2478070Z ---> Using cache
2020-05-22T21:19:23.2478429Z ---> 9f44aaa9cd29
2020-05-22T21:19:23.2478684Z Step 19/54 : ENV HADOOP_YARN_HOME /usr/local/hadoop
2020-05-22T21:19:23.2481757Z ---> Using cache
2020-05-22T21:19:23.2482168Z ---> fb4322d194d5
2020-05-22T21:19:23.2482467Z Step 20/54 : ENV HADOOP_CONF_DIR /usr/local/hadoop/etc/hadoop
2020-05-22T21:19:23.2482899Z ---> Using cache
2020-05-22T21:19:23.2483240Z ---> e3360fca4552
2020-05-22T21:19:23.2483535Z Step 21/54 : ENV YARN_CONF_DIR /usr/local/hadoop/etc/hadoop
2020-05-22T21:19:23.2483948Z ---> Using cache
2020-05-22T21:19:23.2484308Z ---> 04f42078c888
2020-05-22T21:19:23.2484556Z Step 22/54 : ENV HADOOP_LOG_DIR /var/log/hadoop
2020-05-22T21:19:23.2487607Z ---> Using cache
2020-05-22T21:19:23.2488004Z ---> 6c014421f6e1
2020-05-22T21:19:23.2488276Z Step 23/54 : ENV HADOOP_BIN_HOME $HADOOP_HOME/bin
2020-05-22T21:19:23.2488667Z ---> Using cache
2020-05-22T21:19:23.2489026Z ---> 4f01fad8b4a0
2020-05-22T21:19:23.2489294Z Step 24/54 : ENV PATH $PATH:$HADOOP_BIN_HOME
2020-05-22T21:19:23.2489744Z ---> Using cache
2020-05-22T21:19:23.2490111Z ---> 665af8882b26
2020-05-22T21:19:23.2494731Z Step 25/54 : ENV KRB_REALM EXAMPLE.COM
2020-05-22T21:19:23.2495306Z ---> Using cache
2020-05-22T21:19:23.2495661Z ---> c0bf6bdc23c9
2020-05-22T21:19:23.2495915Z Step 26/54 : ENV DOMAIN_REALM example.com
2020-05-22T21:19:23.2496294Z ---> Using cache
2020-05-22T21:19:23.2496653Z ---> 6a155297ac88
2020-05-22T21:19:23.2497084Z Step 27/54 : ENV KERBEROS_ADMIN admin/admin
2020-05-22T21:19:23.2499920Z ---> Using cache
2020-05-22T21:19:23.2500320Z ---> 07f834909f56
2020-05-22T21:19:23.2500572Z Step 28/54 : ENV KERBEROS_ADMIN_PASSWORD admin
2020-05-22T21:19:23.2500973Z ---> Using cache
2020-05-22T21:19:23.2501317Z ---> 49c93eac628b
2020-05-22T21:19:23.2501602Z Step 29/54 : ENV KEYTAB_DIR /etc/security/keytabs
2020-05-22T21:19:23.2501998Z ---> Using cache
2020-05-22T21:19:23.2502353Z ---> 913513dd0edd
2020-05-22T21:19:23.2502583Z Step 30/54 : RUN mkdir /var/log/hadoop
2020-05-22T21:19:23.2503019Z ---> Using cache
2020-05-22T21:19:23.2507301Z ---> f739ff56e1d3
2020-05-22T21:19:23.2507844Z Step 31/54 : ADD config/core-site.xml $HADOOP_HOME/etc/hadoop/core-site.xml
2020-05-22T21:19:23.2508311Z ---> Using cache
2020-05-22T21:19:23.2508659Z ---> 165b355f63cc
2020-05-22T21:19:23.2509172Z Step 32/54 : ADD config/hdfs-site.xml $HADOOP_HOME/etc/hadoop/hdfs-site.xml
2020-05-22T21:19:23.2512305Z ---> Using cache
2020-05-22T21:19:23.2512707Z ---> bfde303fe392
2020-05-22T21:19:23.2513236Z Step 33/54 : ADD config/mapred-site.xml $HADOOP_HOME/etc/hadoop/mapred-site.xml
2020-05-22T21:19:23.2513684Z ---> Using cache
2020-05-22T21:19:23.2514038Z ---> d999695c4795
2020-05-22T21:19:23.2514532Z Step 34/54 : ADD config/yarn-site.xml $HADOOP_HOME/etc/hadoop/yarn-site.xml
2020-05-22T21:19:23.2520094Z ---> Using cache
2020-05-22T21:19:23.2520477Z ---> 0024d525d33c
2020-05-22T21:19:23.2521046Z Step 35/54 : ADD config/container-executor.cfg $HADOOP_HOME/etc/hadoop/container-executor.cfg
2020-05-22T21:19:23.2526530Z ---> Using cache
2020-05-22T21:19:23.2526920Z ---> 0d0d242c894a
2020-05-22T21:19:23.2527193Z Step 36/54 : ADD config/krb5.conf /etc/krb5.conf
2020-05-22T21:19:23.2531960Z ---> Using cache
2020-05-22T21:19:23.2533987Z ---> da033d5d7699
2020-05-22T21:19:23.2537665Z Step 37/54 : ADD config/ssl-server.xml $HADOOP_HOME/etc/hadoop/ssl-server.xml
2020-05-22T21:19:23.2542099Z ---> Using cache
2020-05-22T21:19:23.2553040Z ---> 51c4362d205a
2020-05-22T21:19:23.2564394Z Step 38/54 : ADD config/ssl-client.xml $HADOOP_HOME/etc/hadoop/ssl-client.xml
2020-05-22T21:19:23.2574123Z ---> Using cache
2020-05-22T21:19:23.2577220Z ---> b794a3aad969
2020-05-22T21:19:23.2577560Z Step 39/54 : ADD config/keystore.jks $HADOOP_HOME/lib/keystore.jks
2020-05-22T21:19:23.2578018Z ---> Using cache
2020-05-22T21:19:23.2591589Z ---> 815f53403da8
2020-05-22T21:19:23.2592364Z Step 40/54 : RUN set -x && chmod 400 $HADOOP_HOME/etc/hadoop/container-executor.cfg && chown root:yarn $HADOOP_HOME/etc/hadoop/container-executor.cfg
2020-05-22T21:19:23.2592988Z ---> Using cache
2020-05-22T21:19:23.2593359Z ---> 165cd094f408
2020-05-22T21:19:23.2593620Z Step 41/54 : ADD config/ssh_config /root/.ssh/config
2020-05-22T21:19:23.2594036Z ---> Using cache
2020-05-22T21:19:23.2594380Z ---> 0e45ac4cb4a8
2020-05-22T21:19:23.2594971Z Step 42/54 : RUN set -x && chmod 600 /root/.ssh/config && chown root:root /root/.ssh/config
2020-05-22T21:19:23.2595542Z ---> Using cache
2020-05-22T21:19:23.2595891Z ---> 2b59d20a9e15
2020-05-22T21:19:23.2596678Z Step 43/54 : RUN set -x && ls -la /usr/local/hadoop/etc/hadoop/*-env.sh && chmod +x /usr/local/hadoop/etc/hadoop/*-env.sh && ls -la /usr/local/hadoop/etc/hadoop/*-env.sh
2020-05-22T21:19:23.2597289Z ---> Using cache
2020-05-22T21:19:23.2597658Z ---> b749e6379d22
2020-05-22T21:19:23.2598429Z Step 44/54 : RUN set -x && sed -i "/^[^#]*UsePAM/ s/.*/#&/" /etc/ssh/sshd_config && echo "UsePAM no" >> /etc/ssh/sshd_config && echo "Port 2122" >> /etc/ssh/sshd_config
2020-05-22T21:19:23.2599052Z ---> Using cache
2020-05-22T21:19:23.2613702Z ---> 65408ba9773c
2020-05-22T21:19:23.2614041Z Step 45/54 : EXPOSE 50470 9000 50010 50020 50070 50075 50090 50475 50091 8020
2020-05-22T21:19:23.2614516Z ---> Using cache
2020-05-22T21:19:23.2614865Z ---> 32f8cf5e1a0d
2020-05-22T21:19:23.2615096Z Step 46/54 : EXPOSE 19888
2020-05-22T21:19:23.2615447Z ---> Using cache
2020-05-22T21:19:23.2615809Z ---> b3ec79058f91
2020-05-22T21:19:23.2616099Z Step 47/54 : EXPOSE 8030 8031 8032 8033 8040 8042 8088 8188
2020-05-22T21:19:23.2616533Z ---> Using cache
2020-05-22T21:19:23.2616892Z ---> a3aacc337790
2020-05-22T21:19:23.2617113Z Step 48/54 : EXPOSE 49707 2122
2020-05-22T21:19:23.2617489Z ---> Using cache
2020-05-22T21:19:23.2617843Z ---> 4bddf3ee31a2
2020-05-22T21:19:23.2618108Z Step 49/54 : ADD bootstrap.sh /etc/bootstrap.sh
2020-05-22T21:19:23.2618498Z ---> Using cache
2020-05-22T21:19:23.2618861Z ---> 17a4b4cd7c92
2020-05-22T21:19:23.2619121Z Step 50/54 : RUN chown root:root /etc/bootstrap.sh
2020-05-22T21:19:23.2619538Z ---> Using cache
2020-05-22T21:19:23.2619877Z ---> 3f04c48a5704
2020-05-22T21:19:23.2620133Z Step 51/54 : RUN chmod 700 /etc/bootstrap.sh
2020-05-22T21:19:23.2627484Z ---> Using cache
2020-05-22T21:19:23.2627874Z ---> 63250321365f
2020-05-22T21:19:23.2628137Z Step 52/54 : ENV BOOTSTRAP /etc/bootstrap.sh
2020-05-22T21:19:23.2628521Z ---> Using cache
2020-05-22T21:19:23.2628879Z ---> 5d12818e48c4
2020-05-22T21:19:23.2632696Z Step 53/54 : ENTRYPOINT ["/etc/bootstrap.sh"]
2020-05-22T21:19:23.2640123Z ---> Using cache
2020-05-22T21:19:23.2640790Z ---> 214b0b13e211
2020-05-22T21:19:23.2641175Z Step 54/54 : CMD ["-h"]
2020-05-22T21:19:23.2641542Z ---> Using cache
2020-05-22T21:19:23.2641884Z ---> 6e4fbe512093
2020-05-22T21:19:23.2643935Z Successfully built 6e4fbe512093
2020-05-22T21:19:23.2694408Z Successfully tagged flink/docker-hadoop-secure-cluster:latest
2020-05-22T21:19:23.2724103Z Starting Hadoop cluster
2020-05-22T21:19:23.9070951Z Creating network "docker-hadoop-cluster-network" with the default driver
2020-05-22T21:19:24.1011749Z Creating kdc ...
2020-05-22T21:19:28.2897203Z Creating kdc ... done
2020-05-22T21:19:28.2978876Z Creating master ...
2020-05-22T21:19:29.4583837Z Creating master ... done
2020-05-22T21:19:29.4735849Z Creating slave1 ...
2020-05-22T21:19:29.4771024Z Creating slave2 ...
2020-05-22T21:19:30.6878727Z Creating slave1 ... done
2020-05-22T21:19:31.1250093Z Creating slave2 ... done
2020-05-22T21:19:31.3107267Z Waiting for hadoop cluster to come up. We have been trying for 0 seconds, retrying ...
2020-05-22T21:19:36.4389046Z Waiting for hadoop cluster to come up. We have been trying for 5 seconds, retrying ...
2020-05-22T21:19:41.5612168Z Waiting for hadoop cluster to come up. We have been trying for 10 seconds, retrying ...
2020-05-22T21:19:46.6590992Z Waiting for hadoop cluster to come up. We have been trying for 15 seconds, retrying ...
2020-05-22T21:19:51.7628617Z Waiting for hadoop cluster to come up. We have been trying for 20 seconds, retrying ...
2020-05-22T21:19:56.8427407Z Waiting for hadoop cluster to come up. We have been trying for 25 seconds, retrying ...
2020-05-22T21:20:01.8947108Z Waiting for hadoop cluster to come up. We have been trying for 30 seconds, retrying ...
2020-05-22T21:20:07.0212323Z Waiting for hadoop cluster to come up. We have been trying for 36 seconds, retrying ...
2020-05-22T21:20:12.0993651Z Waiting for hadoop cluster to come up. We have been trying for 41 seconds, retrying ...
2020-05-22T21:20:17.1946600Z Waiting for hadoop cluster to come up. We have been trying for 46 seconds, retrying ...
2020-05-22T21:20:22.4292515Z We only have 0 NodeManagers up. We have been trying for 0 seconds, retrying ...
2020-05-22T21:20:25.0995281Z 20/05/22 21:20:25 INFO client.RMProxy: Connecting to ResourceManager at master.docker-hadoop-cluster-network/172.22.0.3:8032
2020-05-22T21:20:25.5793119Z 20/05/22 21:20:25 INFO client.AHSProxy: Connecting to Application History server at master.docker-hadoop-cluster-network/172.22.0.3:10200
2020-05-22T21:20:26.2098326Z We now have 2 NodeManagers up.
2020-05-22T21:20:47.4997429Z Flink config:
2020-05-22T21:20:47.7979001Z security.kerberos.login.keytab: /home/hadoop-user/hadoop-user.keytab
2020-05-22T21:20:47.7979800Z security.kerberos.login.principal: hadoop-user
2020-05-22T21:20:47.7980135Z slot.request.timeout: 120000
2020-05-22T21:20:48.3951160Z SLF4J: Class path contains multiple SLF4J bindings.
2020-05-22T21:20:48.3952602Z SLF4J: Found binding in [jar:file:/home/hadoop-user/flink-1.12-SNAPSHOT/lib/log4j-slf4j-impl-2.12.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
2020-05-22T21:20:48.3953652Z SLF4J: Found binding in [jar:file:/usr/local/hadoop-2.8.4/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
2020-05-22T21:20:48.3954220Z SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
2020-05-22T21:20:48.4009596Z SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
2020-05-22T21:20:49.7367839Z 2020-05-22 21:20:49,736 INFO  org.apache.hadoop.security.UserGroupInformation [] - Login successful for user hadoop-user using keytab file /home/hadoop-user/hadoop-user.keytab
2020-05-22T21:20:49.9978850Z 2020-05-22 21:20:49,997 INFO  org.apache.hadoop.yarn.client.RMProxy [] - Connecting to ResourceManager at master.docker-hadoop-cluster-network/172.22.0.3:8032
2020-05-22T21:20:50.1737353Z 2020-05-22 21:20:50,173 INFO  org.apache.hadoop.yarn.client.AHSProxy [] - Connecting to Application History server at master.docker-hadoop-cluster-network/172.22.0.3:10200
2020-05-22T21:20:50.1853317Z 2020-05-22 21:20:50,184 INFO  org.apache.flink.yarn.YarnClusterDescriptor [] - No path for the flink jar passed. Using the location of class org.apache.flink.yarn.YarnClusterDescriptor to locate the jar
2020-05-22T21:20:50.2151294Z 2020-05-22 21:20:50,214 WARN  org.apache.flink.yarn.configuration.YarnLogConfigUtil [] - The configuration directory ('/home/hadoop-user/flink-1.12-SNAPSHOT/conf') already contains a LOG4J config file.If you want to use logback, then please delete or rename the log configuration file.
2020-05-22T21:20:50.4351339Z 2020-05-22 21:20:50,434 INFO  org.apache.flink.yarn.YarnClusterDescriptor [] - Cluster specification: ClusterSpecification{masterMemoryMB=1000, taskManagerMemoryMB=1000, slotsPerTaskManager=1}
2020-05-22T21:21:18.5358063Z 2020-05-22 21:21:18,534 INFO  org.apache.flink.yarn.YarnClusterDescriptor [] - Adding keytab /home/hadoop-user/hadoop-user.keytab to the AM container local resource bucket
2020-05-22T21:21:18.5735098Z 2020-05-22 21:21:18,572 INFO  org.apache.flink.yarn.YarnClusterDescriptor [] - Adding delegation token to the AM container.
2020-05-22T21:21:18.5924580Z 2020-05-22 21:21:18,591 INFO  org.apache.hadoop.hdfs.DFSClient [] - Created HDFS_DELEGATION_TOKEN token 1 for hadoop-user on 172.22.0.3:9000
2020-05-22T21:21:18.6243892Z 2020-05-22 21:21:18,623 INFO  org.apache.hadoop.mapreduce.security.TokenCache [] - Got dt for hdfs://master.docker-hadoop-cluster-network:9000; Kind: HDFS_DELEGATION_TOKEN, Service: 172.22.0.3:9000, Ident: (HDFS_DELEGATION_TOKEN token 1 for hadoop-user)
2020-05-22T21:21:18.6245507Z 2020-05-22 21:21:18,623 INFO  org.apache.flink.yarn.Utils [] - Attempting to obtain Kerberos security token for HBase
2020-05-22T21:21:18.6270047Z 2020-05-22 21:21:18,626 INFO  org.apache.flink.yarn.Utils [] - HBase is not available (not packaged with this application): ClassNotFoundException : "org.apache.hadoop.hbase.HBaseConfiguration".
2020-05-22T21:21:18.6348936Z 2020-05-22 21:21:18,634 INFO  org.apache.flink.yarn.YarnClusterDescriptor [] - Submitting application master application_1590182392090_0001
2020-05-22T21:21:18.8928412Z 2020-05-22 21:21:18,892 INFO  org.apache.hadoop.yarn.client.api.impl.TimelineClientImpl [] - Timeline service address: http://master.docker-hadoop-cluster-network:8188/ws/v1/timeline/
2020-05-22T21:21:20.5540928Z 2020-05-22 21:21:20,553 INFO  org.apache.hadoop.yarn.client.api.impl.YarnClientImpl [] - Submitted application application_1590182392090_0001
2020-05-22T21:21:20.5547910Z 2020-05-22 21:21:20,554 INFO  org.apache.flink.yarn.YarnClusterDescriptor [] - Waiting for the cluster to be allocated
2020-05-22T21:21:20.5599614Z 2020-05-22 21:21:20,559 INFO  org.apache.flink.yarn.YarnClusterDescriptor [] - Deploying cluster, current state ACCEPTED
2020-05-22T21:21:52.0789480Z 2020-05-22 21:21:51,760 INFO  org.apache.flink.yarn.YarnClusterDescriptor [] - YARN application has been deployed successfully.
2020-05-22T21:21:52.0791023Z 2020-05-22 21:21:51,761 INFO  org.apache.flink.yarn.YarnClusterDescriptor [] - Found Web Interface slave2.docker-hadoop-cluster-network:39241 of application 'application_1590182392090_0001'.
2020-05-22T21:21:52.0797444Z Flink YARN Application submitted.
2020-05-22T21:21:54.1439855Z 20/05/22 21:21:54 INFO client.RMProxy: Connecting to ResourceManager at master.docker-hadoop-cluster-network/172.22.0.3:8032
2020-05-22T21:21:54.5999714Z 20/05/22 21:21:54 INFO client.AHSProxy: Connecting to Application History server at master.docker-hadoop-cluster-network/172.22.0.3:10200
2020-05-22T21:21:54.9004370Z Total number of applications (application-types: [] and states: [NEW, NEW_SAVING, SUBMITTED, ACCEPTED, RUNNING, FINISHED, FAILED, KILLED]):1
2020-05-22T21:21:54.9005897Z                 Application-Id      Application-Name        Application-Type          User           Queue                   State             Final-State             Progress                        Tracking-URL
2020-05-22T21:21:54.9007137Z application_1590182392090_0001      Flink Application Cluster     Apache Flink      hadoop-user         default                 RUNNING               UNDEFINED                   0%      http://slave2.docker-hadoop-cluster-network:39241
2020-05-22T21:21:56.2740266Z 20/05/22 21:21:56 INFO client.RMProxy: Connecting to ResourceManager at master.docker-hadoop-cluster-network/172.22.0.3:8032
2020-05-22T21:21:56.8067474Z 20/05/22 21:21:56 INFO client.AHSProxy: Connecting to Application History server at master.docker-hadoop-cluster-network/172.22.0.3:10200
2020-05-22T21:21:57.2174030Z Application ID: application_1590182392090_0001
2020-05-22T21:21:57.2200292Z Application application_1590182392090_0001 is in state UNDEFINED. We have been waiting for 0 seconds, looping ...
2020-05-22T21:22:00.1230233Z 20/05/22 21:22:00 INFO client.RMProxy: Connecting to ResourceManager at master.docker-hadoop-cluster-network/172.22.0.3:8032
2020-05-22T21:22:01.3048967Z 20/05/22 21:22:01 INFO client.AHSProxy: Connecting to Application History server at master.docker-hadoop-cluster-network/172.22.0.3:10200
2020-05-22T21:22:01.8945150Z Application application_1590182392090_0001 is in state RUNNING. We have been waiting for 4 seconds, looping ...
2020-05-22T21:22:05.0795394Z 20/05/22 21:22:05 INFO client.RMProxy: Connecting to ResourceManager at master.docker-hadoop-cluster-network/172.22.0.3:8032
2020-05-22T21:22:06.0571565Z 20/05/22 21:22:06 INFO client.AHSProxy: Connecting to Application History server at master.docker-hadoop-cluster-network/172.22.0.3:10200
2020-05-22T21:22:06.6397174Z Application application_1590182392090_0001 is in state RUNNING. We have been waiting for 9 seconds, looping ...
2020-05-22T21:22:10.0235540Z 20/05/22 21:22:10 INFO client.RMProxy: Connecting to ResourceManager at master.docker-hadoop-cluster-network/172.22.0.3:8032
2020-05-22T21:22:11.0378700Z 20/05/22 21:22:11 INFO client.AHSProxy: Connecting to Application History server at master.docker-hadoop-cluster-network/172.22.0.3:10200
2020-05-22T21:22:11.6234136Z Application application_1590182392090_0001 is in state RUNNING. We have been waiting for 14 seconds, looping ...
2020-05-22T21:22:14.9252628Z 20/05/22 21:22:14 INFO client.RMProxy: Connecting to ResourceManager at master.docker-hadoop-cluster-network/172.22.0.3:8032
2020-05-22T21:22:15.9349646Z 20/05/22 21:22:15 INFO client.AHSProxy: Connecting to Application History server at master.docker-hadoop-cluster-network/172.22.0.3:10200
2020-05-22T21:22:16.6554490Z Application application_1590182392090_0001 is in state RUNNING. We have been waiting for 19 seconds, looping ...
2020-05-22T21:22:19.9568591Z 20/05/22 21:22:19 INFO client.RMProxy: Connecting to ResourceManager at master.docker-hadoop-cluster-network/172.22.0.3:8032
2020-05-22T21:22:22.2007232Z 20/05/22 21:22:22 INFO client.AHSProxy: Connecting to Application History server at master.docker-hadoop-cluster-network/172.22.0.3:10200
2020-05-22T21:22:22.7152250Z Application application_1590182392090_0001 is in state RUNNING. We have been waiting for 25 seconds, looping ...
2020-05-22T21:22:27.0136495Z 20/05/22 21:22:27 INFO client.RMProxy: Connecting to ResourceManager at master.docker-hadoop-cluster-network/172.22.0.3:8032
2020-05-22T21:22:28.4488505Z 20/05/22 21:22:28 INFO client.AHSProxy: Connecting to Application History server at master.docker-hadoop-cluster-network/172.22.0.3:10200
2020-05-22T21:22:29.3621435Z Application application_1590182392090_0001 is in state RUNNING. We have been waiting for 32 seconds, looping ...
2020-05-22T21:22:32.4554140Z 20/05/22 21:22:32 INFO client.RMProxy: Connecting to ResourceManager at master.docker-hadoop-cluster-network/172.22.0.3:8032
2020-05-22T21:22:33.5239432Z 20/05/22 21:22:33 INFO client.AHSProxy: Connecting to Application History server at master.docker-hadoop-cluster-network/172.22.0.3:10200
2020-05-22T21:22:34.0693567Z Application application_1590182392090_0001 is in state RUNNING. We have been waiting for 37 seconds, looping ...
2020-05-22T21:22:36.9889796Z 20/05/22 21:22:36 INFO client.RMProxy: Connecting to ResourceManager at master.docker-hadoop-cluster-network/172.22.0.3:8032
2020-05-22T21:22:38.0359283Z 20/05/22 21:22:38 INFO client.AHSProxy: Connecting to Application History server at master.docker-hadoop-cluster-network/172.22.0.3:10200
2020-05-22T21:22:38.6256480Z Application application_1590182392090_0001 is in state RUNNING. We have been waiting for 41 seconds, looping ...
2020-05-22T21:22:41.6823176Z 20/05/22 21:22:41 INFO client.RMProxy: Connecting to ResourceManager at master.docker-hadoop-cluster-network/172.22.0.3:8032
2020-05-22T21:22:42.4372362Z 20/05/22 21:22:42 INFO client.AHSProxy: Connecting to Application History server at master.docker-hadoop-cluster-network/172.22.0.3:10200
2020-05-22T21:22:42.8481557Z Application application_1590182392090_0001 is in state RUNNING. We have been waiting for 45 seconds, looping ...
2020-05-22T21:22:46.6830904Z 20/05/22 21:22:46 INFO client.RMProxy: Connecting to ResourceManager at master.docker-hadoop-cluster-network/172.22.0.3:8032
2020-05-22T21:22:48.0759311Z 20/05/22 21:22:48 INFO client.AHSProxy: Connecting to Application History server at master.docker-hadoop-cluster-network/172.22.0.3:10200
2020-05-22T21:22:48.9919876Z Application application_1590182392090_0001 is in state RUNNING. We have been waiting for 51 seconds, looping ...
2020-05-22T21:22:52.8233298Z 20/05/22 21:22:52 INFO client.RMProxy: Connecting to ResourceManager at master.docker-hadoop-cluster-network/172.22.0.3:8032
2020-05-22T21:22:54.3806892Z 20/05/22 21:22:54 INFO client.AHSProxy: Connecting to Application History server at master.docker-hadoop-cluster-network/172.22.0.3:10200
2020-05-22T21:22:55.2187828Z Application application_1590182392090_0001 is in state RUNNING. We have been waiting for 58 seconds, looping ...
2020-05-22T21:22:59.0085142Z 20/05/22 21:22:59 INFO client.RMProxy: Connecting to ResourceManager at master.docker-hadoop-cluster-network/172.22.0.3:8032
2020-05-22T21:22:59.6504228Z 20/05/22 21:22:59 INFO client.AHSProxy: Connecting to Application History server at master.docker-hadoop-cluster-network/172.22.0.3:10200
2020-05-22T21:23:00.0542081Z Application application_1590182392090_0001 is in state RUNNING. We have been waiting for 63 seconds, looping ...
2020-05-22T21:23:02.3270745Z 20/05/22 21:23:02 INFO client.RMProxy: Connecting to ResourceManager at master.docker-hadoop-cluster-network/172.22.0.3:8032
2020-05-22T21:23:03.0650742Z 20/05/22 21:23:03 INFO client.AHSProxy: Connecting to Application History server at master.docker-hadoop-cluster-network/172.22.0.3:10200
2020-05-22T21:23:03.4892849Z Application application_1590182392090_0001 is in state RUNNING. We have been waiting for 66 seconds, looping ...
2020-05-22T21:23:06.2065048Z 20/05/22 21:23:06 INFO client.RMProxy: Connecting to ResourceManager at master.docker-hadoop-cluster-network/172.22.0.3:8032
2020-05-22T21:23:06.6894681Z 20/05/22 21:23:06 INFO client.AHSProxy: Connecting to Application History server at master.docker-hadoop-cluster-network/172.22.0.3:10200
2020-05-22T21:23:08.6649920Z 20/05/22 21:23:08 INFO client.RMProxy: Connecting to ResourceManager at master.docker-hadoop-cluster-network/172.22.0.3:8032
2020-05-22T21:23:09.1570444Z 20/05/22 21:23:09 INFO client.AHSProxy: Connecting to Application History server at master.docker-hadoop-cluster-network/172.22.0.3:10200
2020-05-22T21:23:09.5164983Z Final Application State: FAILED
2020-05-22T21:23:09.5166627Z Running the Flink Application failed. 😞
2020-05-22T21:23:09.5167241Z Debugging failed YARN Docker test:
2020-05-22T21:23:09.5167704Z Currently running containers
2020-05-22T21:23:09.5625847Z CONTAINER ID        IMAGE                                       COMMAND                  CREATED             STATUS              PORTS                                                                                                                                                            NAMES
2020-05-22T21:23:09.5629095Z 1aa2ffd4b7f2        flink/docker-hadoop-secure-cluster:latest   "/etc/bootstrap.sh w…"   3 minutes ago       Up 3 minutes        2122/tcp, 8020/tcp, 8030-8033/tcp, 8040/tcp, 8042/tcp, 8088/tcp, 8188/tcp, 9000/tcp, 19888/tcp, 49707/tcp, 50010/tcp, 50020/tcp, 50070/tcp, 50075/tcp, 50090-50091/tcp, 50470/tcp, 50475/tcp   slave2
2020-05-22T21:23:09.5631181Z b0d1568a8825        flink/docker-hadoop-secure-cluster:latest   "/etc/bootstrap.sh w…"   3 minutes ago       Up 3 minutes        2122/tcp, 8020/tcp, 8030-8033/tcp, 8040/tcp, 8042/tcp, 8088/tcp, 8188/tcp, 9000/tcp, 19888/tcp, 49707/tcp, 50010/tcp, 50020/tcp, 50070/tcp, 50075/tcp, 50090-50091/tcp, 50470/tcp, 50475/tcp   slave1
2020-05-22T21:23:09.5633542Z 49db5b6cc490        flink/docker-hadoop-secure-cluster:latest   "/etc/bootstrap.sh m…"   3 minutes ago       Up 3 minutes        2122/tcp, 8020/tcp, 8030-8033/tcp, 8040/tcp, 8042/tcp, 8088/tcp, 8188/tcp, 9000/tcp, 19888/tcp, 49707/tcp, 50010/tcp, 50020/tcp, 50070/tcp, 50075/tcp, 50090-50091/tcp, 50470/tcp, 50475/tcp   master
2020-05-22T21:23:09.5635489Z 7745982b46e7        sequenceiq/kerberos                         "/config.sh"             3 minutes ago       Up 3 minutes        88/tcp, 749/tcp                                                                                                                                                  kdc
2020-05-22T21:23:09.5660894Z Currently running JVMs
2020-05-22T21:23:09.6997631Z 45379 Jps -Dapplication.home=/usr/lib/jvm/zulu-8-azure-amd64 -Xms8m
2020-05-22T21:23:09.7125023Z Hadoop logs:
2020-05-22T21:23:09.8710127Z total 204K
2020-05-22T21:23:09.8711655Z 1840178 4.0K drwxr-xr-x 2 vsts docker 4.0K May 22 21:19 .
2020-05-22T21:23:09.8712628Z 1840177 4.0K drwxr-xr-x 3 vsts docker 4.0K May 22 21:23 ..
2020-05-22T21:23:09.8713424Z 1840179  52K -rw-r--r-- 1 vsts docker  52K May 22 21:20 historyserver.err
2020-05-22T21:23:09.8714275Z 1840180    0 -rw-r--r-- 1 vsts docker    0 May 22 21:19 historyserver.out
2020-05-22T21:23:09.8715109Z 1840181  52K -rw-r--r-- 1 vsts docker  51K May 22 21:23 namenode.err
2020-05-22T21:23:09.8715927Z 1840182    0 -rw-r--r-- 1 vsts docker    0 May 22 21:19 namenode.out
2020-05-22T21:23:09.8716745Z 1840183  64K -rw-r--r-- 1 vsts docker  64K May 22 21:23 resourcemanager.err
2020-05-22T21:23:09.8717597Z 1840184    0 -rw-r--r-- 1 vsts docker    0 May 22 21:19 resourcemanager.out
2020-05-22T21:23:09.8718435Z 1840185  28K -rw-r--r-- 1 vsts docker  25K May 22 21:21 timelineserver.err
2020-05-22T21:23:09.8719264Z 1840186    0 -rw-r--r-- 1 vsts docker    0 May 22 21:19 timelineserver.out
2020-05-22T21:23:09.8720201Z /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-22195494660/logs/hadoop/historyserver.err:
2020-05-22T21:23:09.8722239Z 20/05/22 21:19:38 INFO hs.JobHistoryServer: STARTUP_MSG:
2020-05-22T21:23:09.8722867Z /************************************************************
2020-05-22T21:23:09.8723401Z STARTUP_MSG: Starting JobHistoryServer
2020-05-22T21:23:09.8723895Z STARTUP_MSG:   user = mapred
2020-05-22T21:23:09.8724729Z STARTUP_MSG:   host = master.docker-hadoop-cluster-network/172.22.0.3
2020-05-22T21:23:09.8725333Z STARTUP_MSG:   args = []
2020-05-22T21:23:09.8725776Z STARTUP_MSG:   version = 2.8.4
2020-05-22T21:23:09.8773367Z STARTUP_MSG:   classpath = /usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/jsch-0.1.54.jar:/usr/local/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/common/lib/json-smart-1.3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/local/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/common/lib/httpclient-4.5.2.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/usr/local/hadoop/share/hadoop/common/lib/zookeeper-3.4.6.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-math3-3.1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/usr/local/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/htrace-core4-4.0.1-incubating.jar:/usr/local/hadoop/share/hadoop/common/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/comm
on/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/jcip-annotations-1.0-1.jar:/usr/local/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/local/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-annotations-2.8.4.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-client-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/lib/hamcrest-core-1.3.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jsr305-3.0.0.jar:/usr/local/hadoop/share/hadoop/common/lib/nimbus-jose-jwt-4.41.1.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-auth-2.8.4.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/usr/local/hadoop/share/hadoop/common/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/junit-4.11.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-sslengine-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/gson-2.2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/usr/local/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jets3t-0.9.0.jar:/usr/local/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-collections-3.2.2.jar:/usr/local/hadoop/share/hadoop/common/lib/httpcore-4.4.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanut
ils-core-1.8.0.jar:/usr/local/hadoop/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-framework-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.8.4-tests.jar:/usr/local/hadoop/share/hadoop/common/hadoop-nfs-2.8.4.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.8.4.jar:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/okio-1.4.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/okhttp-2.4.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/htrace-core4-4.0.1-incubating.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop/share
/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/hadoop-hdfs-client-2.8.4.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-2.8.4.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.8.4.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-native-client-2.8.4-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-client-2.8.4-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-native-client-2.8.4.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.8.4-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-client-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-client-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/json-io-2.5.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/fst-2.50.jar:/usr/local/hadoop/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoo
p/share/hadoop/yarn/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/curator-client-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-math-2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/yarn/lib/javassist-3.18.1-GA.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/curator-test-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/java-util-1.9.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-sharedcach
emanager-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-registry-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/junit-4.11.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/mapreduce/l
ib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.8.4-tests.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.4.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.8.4-tests.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-registry-2.8.4.jar:/usr/l
ocal/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/junit-4.11.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/protobu
f-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/hadoop/modules/*.jar
2020-05-22T21:23:09.8804932Z STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r 17e75c2a11685af3e043aa5e604dc831e5b14674; compiled by 'jdu' on 2018-05-08T02:50Z
2020-05-22T21:23:09.8805491Z STARTUP_MSG: java = 1.8.0_131
2020-05-22T21:23:09.8805816Z ************************************************************/
2020-05-22T21:23:09.8806231Z 20/05/22 21:19:38 INFO hs.JobHistoryServer: registered UNIX signal handlers for [TERM, HUP, INT]
2020-05-22T21:23:09.8807293Z 20/05/22 21:19:48 INFO security.UserGroupInformation: Login successful for user mapred/[hidden email] using keytab file /etc/security/keytabs/mapred.keytab
2020-05-22T21:23:09.8808205Z 20/05/22 21:19:48 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2020-05-22T21:23:09.8808695Z 20/05/22 21:19:49 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
2020-05-22T21:23:09.8809351Z 20/05/22 21:19:49 INFO impl.MetricsSystemImpl: JobHistoryServer metrics system started
2020-05-22T21:23:09.8809774Z 20/05/22 21:19:49 INFO hs.JobHistory: JobHistory Init
2020-05-22T21:23:09.8810555Z 20/05/22 21:19:51 INFO jobhistory.JobHistoryUtils: Default file system [hdfs://master.docker-hadoop-cluster-network:9000]
2020-05-22T21:23:09.8811645Z 20/05/22 21:19:52 WARN ipc.Client: Failed to connect to server: master.docker-hadoop-cluster-network/172.22.0.3:9000: try once and fail.
2020-05-22T21:23:09.8812172Z java.net.ConnectException: Connection refused
2020-05-22T21:23:09.8812500Z 	at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
2020-05-22T21:23:09.8813022Z 	at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
2020-05-22T21:23:09.8813489Z 	at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
2020-05-22T21:23:09.8813949Z 	at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531)
2020-05-22T21:23:09.8814377Z 	at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:685)
2020-05-22T21:23:09.8814849Z 	at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:788)
2020-05-22T21:23:09.8815313Z 	at org.apache.hadoop.ipc.Client$Connection.access$3500(Client.java:410)
2020-05-22T21:23:09.8815734Z 	at org.apache.hadoop.ipc.Client.getConnection(Client.java:1550)
2020-05-22T21:23:09.8816137Z 	at org.apache.hadoop.ipc.Client.call(Client.java:1381)
2020-05-22T21:23:09.8816503Z 	at org.apache.hadoop.ipc.Client.call(Client.java:1345)
2020-05-22T21:23:09.8816949Z 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
2020-05-22T21:23:09.8817451Z 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
2020-05-22T21:23:09.8817880Z 	at com.sun.proxy.$Proxy12.getFileInfo(Unknown Source)
2020-05-22T21:23:09.8818387Z 	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:796)
2020-05-22T21:23:09.8818901Z 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2020-05-22T21:23:09.8819332Z 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2020-05-22T21:23:09.8819826Z 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2020-05-22T21:23:09.8820278Z 	at java.lang.reflect.Method.invoke(Method.java:498)
2020-05-22T21:23:09.8820747Z 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:409)
2020-05-22T21:23:09.8821308Z 	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163)
2020-05-22T21:23:09.8821886Z 	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
2020-05-22T21:23:09.8822439Z 	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
2020-05-22T21:23:09.8822995Z 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:346)
2020-05-22T21:23:09.8823434Z 	at com.sun.proxy.$Proxy13.getFileInfo(Unknown Source)
2020-05-22T21:23:09.8823803Z 	at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1649)
2020-05-22T21:23:09.8824222Z 	at org.apache.hadoop.fs.Hdfs.getFileStatus(Hdfs.java:133)
2020-05-22T21:23:09.8824621Z 	at org.apache.hadoop.fs.FileContext$15.next(FileContext.java:1177)
2020-05-22T21:23:09.8825053Z 	at org.apache.hadoop.fs.FileContext$15.next(FileContext.java:1173)
2020-05-22T21:23:09.8825478Z 	at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
2020-05-22T21:23:09.8825933Z 	at org.apache.hadoop.fs.FileContext.getFileStatus(FileContext.java:1173)
2020-05-22T21:23:09.8826389Z 	at org.apache.hadoop.fs.FileContext$Util.exists(FileContext.java:1638)
2020-05-22T21:23:09.8826863Z 	at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.mkdir(HistoryFileManager.java:692)
2020-05-22T21:23:09.8827433Z 	at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.tryCreatingHistoryDirs(HistoryFileManager.java:622)
2020-05-22T21:23:09.8828081Z 	at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.createHistoryDirs(HistoryFileManager.java:585)
2020-05-22T21:23:09.8828652Z 	at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.serviceInit(HistoryFileManager.java:550)
2020-05-22T21:23:09.8829169Z 	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
2020-05-22T21:23:09.8829632Z 	at org.apache.hadoop.mapreduce.v2.hs.JobHistory.serviceInit(JobHistory.java:95)
2020-05-22T21:23:09.8830112Z 	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
2020-05-22T21:23:09.8830582Z 	at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
2020-05-22T21:23:09.8831165Z 	at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.serviceInit(JobHistoryServer.java:151)
2020-05-22T21:23:09.8831654Z 	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
2020-05-22T21:23:09.8832182Z 	at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.launchJobHistoryServer(JobHistoryServer.java:231)
2020-05-22T21:23:09.8832738Z 	at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.main(JobHistoryServer.java:241)
2020-05-22T21:23:09.8833666Z 20/05/22 21:19:52 INFO hs.HistoryFileManager: Waiting for FileSystem at master.docker-hadoop-cluster-network:9000to be available
2020-05-22T21:23:09.8834587Z 20/05/22 21:20:02 INFO jobhistory.JobHistoryUtils: Default file system [hdfs://master.docker-hadoop-cluster-network:9000]
2020-05-22T21:23:09.8836065Z 20/05/22 21:20:02 INFO service.AbstractService: Service org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager failed in state INITED; cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Error creating done directory: [hdfs://master.docker-hadoop-cluster-network:9000/tmp/hadoop-yarn/staging/history/done]
2020-05-22T21:23:09.8837999Z org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Error creating done directory: [hdfs://master.docker-hadoop-cluster-network:9000/tmp/hadoop-yarn/staging/history/done]
2020-05-22T21:23:09.8838804Z 	at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.tryCreatingHistoryDirs(HistoryFileManager.java:639)
2020-05-22T21:23:09.8839405Z 	at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.createHistoryDirs(HistoryFileManager.java:585)
2020-05-22T21:23:09.8839961Z 	at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.serviceInit(HistoryFileManager.java:550)
2020-05-22T21:23:09.8840477Z 	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
2020-05-22T21:23:09.8840941Z 	at org.apache.hadoop.mapreduce.v2.hs.JobHistory.serviceInit(JobHistory.java:95)
2020-05-22T21:23:09.8841423Z 	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
2020-05-22T21:23:09.8841920Z 	at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
2020-05-22T21:23:09.8842428Z 	at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.serviceInit(JobHistoryServer.java:151)
2020-05-22T21:23:09.8842942Z 	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
2020-05-22T21:23:09.8843455Z 	at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.launchJobHistoryServer(JobHistoryServer.java:231)
2020-05-22T21:23:09.8844008Z 	at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.main(JobHistoryServer.java:241)
2020-05-22T21:23:09.8844916Z Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/":hdfs:hadoop:drwxr-xr-x
2020-05-22T21:23:09.8845538Z 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:318)
2020-05-22T21:23:09.8846128Z 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:219)
2020-05-22T21:23:09.8846721Z 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:189)
2020-05-22T21:23:09.8847292Z 	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1663)
2020-05-22T21:23:09.8847951Z 	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1647)
2020-05-22T21:23:09.8848488Z 	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1606)
2020-05-22T21:23:09.8849027Z 	at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:60)
2020-05-22T21:23:09.8849522Z 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3039)
2020-05-22T21:23:09.8850056Z 	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1079)
2020-05-22T21:23:09.8850712Z 	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:652)
2020-05-22T21:23:09.8854290Z 	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
2020-05-22T21:23:09.8858578Z 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
2020-05-22T21:23:09.8859083Z 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
2020-05-22T21:23:09.8859486Z 	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:850)
2020-05-22T21:23:09.8859897Z 	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:793)
2020-05-22T21:23:09.8860265Z 	at java.security.AccessController.doPrivileged(Native Method)
2020-05-22T21:23:09.8860639Z 	at javax.security.auth.Subject.doAs(Subject.java:422)
2020-05-22T21:23:09.8861071Z 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1840)
2020-05-22T21:23:09.8861540Z 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2489)
2020-05-22T21:23:09.8861782Z 
2020-05-22T21:23:09.8862062Z 	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
2020-05-22T21:23:09.8862524Z 	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
2020-05-22T21:23:09.8863099Z 	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
2020-05-22T21:23:09.8863617Z 	at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
2020-05-22T21:23:09.8864080Z 	at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
2020-05-22T21:23:09.8864598Z 	at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:88)
2020-05-22T21:23:09.8865068Z 	at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2474)
2020-05-22T21:23:09.8865477Z 	at org.apache.hadoop.fs.Hdfs.mkdir(Hdfs.java:317)
2020-05-22T21:23:09.8865857Z 	at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:738)
2020-05-22T21:23:09.8866286Z 	at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:734)
2020-05-22T21:23:09.8867367Z 	at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90)
2020-05-22T21:23:09.8868540Z 	at org.apache.hadoop.fs.FileContext.mkdir(FileContext.java:734)
2020-05-22T21:23:09.8869055Z 	at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.mkdir(HistoryFileManager.java:694)
2020-05-22T21:23:09.8869615Z 	at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.tryCreatingHistoryDirs(HistoryFileManager.java:622)
2020-05-22T21:23:09.8870020Z 	... 10 more
2020-05-22T21:23:09.8871004Z Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=mapred, access=WRITE, inode="/":hdfs:hadoop:drwxr-xr-x
2020-05-22T21:23:09.8871747Z 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:318)
2020-05-22T21:23:09.8873553Z 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:219)
2020-05-22T21:23:09.8874197Z 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:189)
2020-05-22T21:23:09.8874760Z 	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1663)
2020-05-22T21:23:09.8875455Z 	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1647)
2020-05-22T21:23:09.8875995Z 	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1606)
2020-05-22T21:23:09.8876535Z 	at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:60)
2020-05-22T21:23:09.8877049Z 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3039)
2020-05-22T21:23:09.8877565Z 	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1079)
2020-05-22T21:23:09.8878223Z 	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:652)
2020-05-22T21:23:09.8878978Z 	at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
2020-05-22T21:23:09.8879593Z 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447)
2020-05-22T21:23:09.8880087Z 	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989)
2020-05-22T21:23:09.8880469Z 	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:850)
2020-05-22T21:23:09.8880879Z 	at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:793)
2020-05-22T21:23:09.8881245Z 	at java.security.AccessController.doPrivileged(Native Method)
2020-05-22T21:23:09.8881618Z 	at javax.security.auth.Subject.doAs(Subject.java:422)
2020-05-22T21:23:09.8882070Z 	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1840)
2020-05-22T21:23:09.8882521Z 	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2489)
2020-05-22T21:23:09.8882764Z 
2020-05-22T21:23:09.8883054Z 	at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1489)
2020-05-22T21:23:09.8883463Z 	at org.apache.hadoop.ipc.Client.call(Client.java:1435)
2020-05-22T21:23:09.8883831Z 	at org.apache.hadoop.ipc.Client.call(Client.java:1345)
2020-05-22T21:23:09.8884281Z 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227)
2020-05-22T21:23:09.8884780Z 	at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
2020-05-22T21:23:09.8885195Z 	at com.sun.proxy.$Proxy12.mkdirs(Unknown Source)
2020-05-22T21:23:09.8885668Z 	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:583)
2020-05-22T21:23:09.8886186Z 	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2020-05-22T21:23:09.8887323Z 	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2020-05-22T21:23:09.8889877Z 	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2020-05-22T21:23:09.8890373Z 	at java.lang.reflect.Method.invoke(Method.java:498)
2020-05-22T21:23:09.8890827Z 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:409)
2020-05-22T21:23:09.8892115Z 	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163)
2020-05-22T21:23:09.8892697Z 	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155)
2020-05-22T21:23:09.8893272Z 	at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
2020-05-22T21:23:09.8893829Z 	at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:346)
2020-05-22T21:23:09.8894244Z 	at com.sun.proxy.$Proxy13.mkdirs(Unknown Source)
2020-05-22T21:23:09.8894635Z 	at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2472)
2020-05-22T21:23:09.8894941Z 	... 17 more
2020-05-22T21:23:09.8896369Z 20/05/22 21:20:02 INFO service.AbstractService: Service org.apache.hadoop.mapreduce.v2.hs.JobHistory failed in state INITED; cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Error creating done directory: [hdfs://master.docker-hadoop-cluster-network:9000/tmp/hadoop-yarn/staging/history/done]
2020-05-22T21:23:09.8898042Z org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Error creating done directory: [hdfs://master.docker-hadoop-cluster-network:9000/tmp/hadoop-yarn/staging/history/done]
2020-05-22T21:23:09.8898762Z 	at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.tryCreatingHistoryDirs(HistoryFileManager.java:639)
2020-05-22T21:23:09.8899363Z 	at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.createHistoryDirs(HistoryFileManager.java:585)
2020-05-22T21:23:09.8899935Z 	at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.serviceInit(HistoryFileManager.java:550)
2020-05-22T21:23:09.8900439Z 	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
2020-05-22T21:23:09.8901002Z 	at org.apache.hadoop.mapreduce.v2.hs.JobHistory.serviceInit(JobHistory.java:95)
2020-05-22T21:23:09.8901467Z 	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
2020-05-22T21:23:09.8901957Z 	at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
2020-05-22T21:23:09.8902472Z 	at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.serviceInit(JobHistoryServer.java:151)
2020-05-22T21:23:09.8902980Z 	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
2020-05-22T21:23:09.8903512Z 	at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.launchJobHistoryServer(JobHistoryServer.java:231)
2020-05-22T21:23:09.8904045Z 	at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.main(JobHistoryServer.java:241)
2020-05-22T21:23:09.8904962Z Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/":hdfs:hadoop:drwxr-xr-x
2020-05-22T21:23:09.8905585Z 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:318)
2020-05-22T21:23:09.8906176Z 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:219)
2020-05-22T21:23:09.8906779Z 	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:189)
2020-05-22T21:23:09.8907336Z 	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1663)
2020-05-22T21:23:09.8907878Z 	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1647)
2020-05-22T21:23:09.8908408Z 	at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1606)
2020-05-22T21:23:09.8908939Z 	at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:60)
2020-05-22T21:23:09.8909451Z 	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3039)
2020-05-22T21:23:09.8909966Z 	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1079)
2020-05-22T21:23:09.8910624Z 	at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:652)
2020-05-22T21:23:09.8911299Z 	at
org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) 2020-05-22T21:23:09.8911913Z at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447) 2020-05-22T21:23:09.8912392Z at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989) 2020-05-22T21:23:09.8912772Z at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:850) 2020-05-22T21:23:09.8913183Z at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:793) 2020-05-22T21:23:09.8913553Z at java.security.AccessController.doPrivileged(Native Method) 2020-05-22T21:23:09.8913925Z at javax.security.auth.Subject.doAs(Subject.java:422) 2020-05-22T21:23:09.8914378Z at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1840) 2020-05-22T21:23:09.8914828Z at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2489) 2020-05-22T21:23:09.8915067Z 2020-05-22T21:23:09.8915342Z at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 2020-05-22T21:23:09.8915895Z at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) 2020-05-22T21:23:09.8916457Z at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) 2020-05-22T21:23:09.8916969Z at java.lang.reflect.Constructor.newInstance(Constructor.java:423) 2020-05-22T21:23:09.8917431Z at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121) 2020-05-22T21:23:09.8917949Z at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:88) 2020-05-22T21:23:09.8921458Z at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2474) 2020-05-22T21:23:09.8922008Z at org.apache.hadoop.fs.Hdfs.mkdir(Hdfs.java:317) 2020-05-22T21:23:09.8922405Z at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:738) 2020-05-22T21:23:09.8922814Z at 
org.apache.hadoop.fs.FileContext$4.next(FileContext.java:734) 2020-05-22T21:23:09.8923254Z at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90) 2020-05-22T21:23:09.8923680Z at org.apache.hadoop.fs.FileContext.mkdir(FileContext.java:734) 2020-05-22T21:23:09.8924158Z at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.mkdir(HistoryFileManager.java:694) 2020-05-22T21:23:09.8924730Z at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.tryCreatingHistoryDirs(HistoryFileManager.java:622) 2020-05-22T21:23:09.8925109Z ... 10 more 2020-05-22T21:23:09.8926092Z Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=mapred, access=WRITE, inode="/":hdfs:hadoop:drwxr-xr-x 2020-05-22T21:23:09.8926806Z at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:318) 2020-05-22T21:23:09.8927854Z at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:219) 2020-05-22T21:23:09.8928481Z at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:189) 2020-05-22T21:23:09.8929053Z at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1663) 2020-05-22T21:23:09.8929598Z at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1647) 2020-05-22T21:23:09.8930154Z at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1606) 2020-05-22T21:23:09.8930674Z at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:60) 2020-05-22T21:23:09.8931187Z at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3039) 2020-05-22T21:23:09.8931868Z at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1079) 2020-05-22T21:23:09.8932538Z at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:652) 2020-05-22T21:23:09.8933237Z at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) 2020-05-22T21:23:09.8938732Z at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447) 2020-05-22T21:23:09.8939240Z at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989) 2020-05-22T21:23:09.8939621Z at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:850) 2020-05-22T21:23:09.8947810Z at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:793) 2020-05-22T21:23:09.8950211Z at java.security.AccessController.doPrivileged(Native Method) 2020-05-22T21:23:09.8950593Z at javax.security.auth.Subject.doAs(Subject.java:422) 2020-05-22T21:23:09.8951047Z at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1840) 2020-05-22T21:23:09.8951514Z at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2489) 2020-05-22T21:23:09.8951767Z 2020-05-22T21:23:09.8952041Z at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1489) 2020-05-22T21:23:09.8952445Z at org.apache.hadoop.ipc.Client.call(Client.java:1435) 2020-05-22T21:23:09.8952962Z at org.apache.hadoop.ipc.Client.call(Client.java:1345) 2020-05-22T21:23:09.8953414Z at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227) 2020-05-22T21:23:09.8953929Z at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116) 2020-05-22T21:23:09.8954329Z at com.sun.proxy.$Proxy12.mkdirs(Unknown Source) 2020-05-22T21:23:09.8954820Z at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:583) 2020-05-22T21:23:09.8955321Z at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 2020-05-22T21:23:09.8955819Z at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 2020-05-22T21:23:09.8956315Z at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 2020-05-22T21:23:09.8956768Z at java.lang.reflect.Method.invoke(Method.java:498) 2020-05-22T21:23:09.8957244Z at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:409) 2020-05-22T21:23:09.8957808Z at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163) 2020-05-22T21:23:09.8958385Z at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155) 2020-05-22T21:23:09.8958938Z at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) 2020-05-22T21:23:09.8959492Z at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:346) 2020-05-22T21:23:09.8959921Z at com.sun.proxy.$Proxy13.mkdirs(Unknown Source) 2020-05-22T21:23:09.8960293Z at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2472) 2020-05-22T21:23:09.8963487Z ... 
17 more 2020-05-22T21:23:09.8964092Z 20/05/22 21:20:02 INFO hs.JobHistory: Stopping JobHistory 2020-05-22T21:23:09.8965675Z 20/05/22 21:20:02 INFO service.AbstractService: Service org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer failed in state INITED; cause: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Error creating done directory: [hdfs://master.docker-hadoop-cluster-network:9000/tmp/hadoop-yarn/staging/history/done] 2020-05-22T21:23:09.8967196Z org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Error creating done directory: [hdfs://master.docker-hadoop-cluster-network:9000/tmp/hadoop-yarn/staging/history/done] 2020-05-22T21:23:09.8967926Z at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.tryCreatingHistoryDirs(HistoryFileManager.java:639) 2020-05-22T21:23:09.8968530Z at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.createHistoryDirs(HistoryFileManager.java:585) 2020-05-22T21:23:09.8969114Z at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.serviceInit(HistoryFileManager.java:550) 2020-05-22T21:23:09.8969616Z at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163) 2020-05-22T21:23:09.8970102Z at org.apache.hadoop.mapreduce.v2.hs.JobHistory.serviceInit(JobHistory.java:95) 2020-05-22T21:23:09.8970571Z at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163) 2020-05-22T21:23:09.8971067Z at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107) 2020-05-22T21:23:09.8971752Z at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.serviceInit(JobHistoryServer.java:151) 2020-05-22T21:23:09.8972255Z at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163) 2020-05-22T21:23:09.8972787Z at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.launchJobHistoryServer(JobHistoryServer.java:231) 2020-05-22T21:23:09.8973320Z at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.main(JobHistoryServer.java:241) 2020-05-22T21:23:09.8974275Z Caused by: 
org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/":hdfs:hadoop:drwxr-xr-x 2020-05-22T21:23:09.8974916Z at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:318) 2020-05-22T21:23:09.8975635Z at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:219) 2020-05-22T21:23:09.8976242Z at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:189) 2020-05-22T21:23:09.8976793Z at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1663) 2020-05-22T21:23:09.8977337Z at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1647) 2020-05-22T21:23:09.8977888Z at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1606) 2020-05-22T21:23:09.8978478Z at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:60) 2020-05-22T21:23:09.8978986Z at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3039) 2020-05-22T21:23:09.8979501Z at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1079) 2020-05-22T21:23:09.8980158Z at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:652) 2020-05-22T21:23:09.8980851Z at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) 2020-05-22T21:23:09.8981443Z at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447) 2020-05-22T21:23:09.8981923Z at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989) 2020-05-22T21:23:09.8982302Z at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:850) 2020-05-22T21:23:09.8982711Z at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:793) 
2020-05-22T21:23:09.8983081Z at java.security.AccessController.doPrivileged(Native Method) 2020-05-22T21:23:09.8983451Z at javax.security.auth.Subject.doAs(Subject.java:422) 2020-05-22T21:23:09.8983897Z at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1840) 2020-05-22T21:23:09.8984351Z at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2489) 2020-05-22T21:23:09.8984606Z 2020-05-22T21:23:09.8984866Z at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 2020-05-22T21:23:09.8985349Z at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) 2020-05-22T21:23:09.8985906Z at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) 2020-05-22T21:23:09.8986416Z at java.lang.reflect.Constructor.newInstance(Constructor.java:423) 2020-05-22T21:23:09.8986877Z at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121) 2020-05-22T21:23:09.8987400Z at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:88) 2020-05-22T21:23:09.8987886Z at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2474) 2020-05-22T21:23:09.8988279Z at org.apache.hadoop.fs.Hdfs.mkdir(Hdfs.java:317) 2020-05-22T21:23:09.8988675Z at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:738) 2020-05-22T21:23:09.8989082Z at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:734) 2020-05-22T21:23:09.8989518Z at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90) 2020-05-22T21:23:09.8989940Z at org.apache.hadoop.fs.FileContext.mkdir(FileContext.java:734) 2020-05-22T21:23:09.8990416Z at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.mkdir(HistoryFileManager.java:694) 2020-05-22T21:23:09.8990984Z at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.tryCreatingHistoryDirs(HistoryFileManager.java:622) 2020-05-22T21:23:09.8991364Z ... 
10 more 2020-05-22T21:23:09.8992263Z Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=mapred, access=WRITE, inode="/":hdfs:hadoop:drwxr-xr-x 2020-05-22T21:23:09.8992988Z at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:318) 2020-05-22T21:23:09.8994899Z at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:219) 2020-05-22T21:23:09.8995545Z at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:189) 2020-05-22T21:23:09.8996104Z at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1663) 2020-05-22T21:23:09.8996647Z at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1647) 2020-05-22T21:23:09.8997199Z at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1606) 2020-05-22T21:23:09.8997715Z at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:60) 2020-05-22T21:23:09.8998342Z at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3039) 2020-05-22T21:23:09.8998861Z at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1079) 2020-05-22T21:23:09.8999518Z at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:652) 2020-05-22T21:23:09.9000216Z at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) 2020-05-22T21:23:09.9000810Z at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447) 2020-05-22T21:23:09.9001292Z at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989) 2020-05-22T21:23:09.9001687Z at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:850) 
2020-05-22T21:23:09.9002080Z at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:793) 2020-05-22T21:23:09.9002464Z at java.security.AccessController.doPrivileged(Native Method) 2020-05-22T21:23:09.9002818Z at javax.security.auth.Subject.doAs(Subject.java:422) 2020-05-22T21:23:09.9003267Z at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1840) 2020-05-22T21:23:09.9003723Z at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2489) 2020-05-22T21:23:09.9003977Z 2020-05-22T21:23:09.9004249Z at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1489) 2020-05-22T21:23:09.9004653Z at org.apache.hadoop.ipc.Client.call(Client.java:1435) 2020-05-22T21:23:09.9005017Z at org.apache.hadoop.ipc.Client.call(Client.java:1345) 2020-05-22T21:23:09.9005460Z at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227) 2020-05-22T21:23:09.9005975Z at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116) 2020-05-22T21:23:09.9006375Z at com.sun.proxy.$Proxy12.mkdirs(Unknown Source) 2020-05-22T21:23:09.9006868Z at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:583) 2020-05-22T21:23:09.9007367Z at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 2020-05-22T21:23:09.9007793Z at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 2020-05-22T21:23:09.9008307Z at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 2020-05-22T21:23:09.9008739Z at java.lang.reflect.Method.invoke(Method.java:498) 2020-05-22T21:23:09.9009207Z at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:409) 2020-05-22T21:23:09.9009767Z at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163) 2020-05-22T21:23:09.9010344Z at 
org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155) 2020-05-22T21:23:09.9010895Z at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) 2020-05-22T21:23:09.9011605Z at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:346) 2020-05-22T21:23:09.9012046Z at com.sun.proxy.$Proxy13.mkdirs(Unknown Source) 2020-05-22T21:23:09.9012412Z at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2472) 2020-05-22T21:23:09.9012834Z ... 17 more 2020-05-22T21:23:09.9013160Z 20/05/22 21:20:02 INFO impl.MetricsSystemImpl: Stopping JobHistoryServer metrics system... 2020-05-22T21:23:09.9013642Z 20/05/22 21:20:02 INFO impl.MetricsSystemImpl: JobHistoryServer metrics system stopped. 2020-05-22T21:23:09.9014132Z 20/05/22 21:20:02 INFO impl.MetricsSystemImpl: JobHistoryServer metrics system shutdown complete. 2020-05-22T21:23:09.9014591Z 20/05/22 21:20:02 FATAL hs.JobHistoryServer: Error starting JobHistoryServer 2020-05-22T21:23:09.9015703Z org.apache.hadoop.yarn.exceptions.YarnRuntimeException: Error creating done directory: [hdfs://master.docker-hadoop-cluster-network:9000/tmp/hadoop-yarn/staging/history/done] 2020-05-22T21:23:09.9016519Z at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.tryCreatingHistoryDirs(HistoryFileManager.java:639) 2020-05-22T21:23:09.9017109Z at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.createHistoryDirs(HistoryFileManager.java:585) 2020-05-22T21:23:09.9017686Z at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.serviceInit(HistoryFileManager.java:550) 2020-05-22T21:23:09.9018186Z at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163) 2020-05-22T21:23:09.9018667Z at org.apache.hadoop.mapreduce.v2.hs.JobHistory.serviceInit(JobHistory.java:95) 2020-05-22T21:23:09.9019130Z at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163) 2020-05-22T21:23:09.9019620Z at 
org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107) 2020-05-22T21:23:09.9020146Z at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.serviceInit(JobHistoryServer.java:151) 2020-05-22T21:23:09.9020642Z at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163) 2020-05-22T21:23:09.9021171Z at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.launchJobHistoryServer(JobHistoryServer.java:231) 2020-05-22T21:23:09.9021707Z at org.apache.hadoop.mapreduce.v2.hs.JobHistoryServer.main(JobHistoryServer.java:241) 2020-05-22T21:23:09.9022636Z Caused by: org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/":hdfs:hadoop:drwxr-xr-x 2020-05-22T21:23:09.9023273Z at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:318) 2020-05-22T21:23:09.9023841Z at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:219) 2020-05-22T21:23:09.9024447Z at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:189) 2020-05-22T21:23:09.9025000Z at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1663) 2020-05-22T21:23:09.9025550Z at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1647) 2020-05-22T21:23:09.9026099Z at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1606) 2020-05-22T21:23:09.9026614Z at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:60) 2020-05-22T21:23:09.9027128Z at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3039) 2020-05-22T21:23:09.9027644Z at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1079) 2020-05-22T21:23:09.9028299Z at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:652) 2020-05-22T21:23:09.9028994Z at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) 2020-05-22T21:23:09.9029585Z at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447) 2020-05-22T21:23:09.9030070Z at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989) 2020-05-22T21:23:09.9030449Z at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:850) 2020-05-22T21:23:09.9030910Z at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:793) 2020-05-22T21:23:09.9031371Z at java.security.AccessController.doPrivileged(Native Method) 2020-05-22T21:23:09.9031744Z at javax.security.auth.Subject.doAs(Subject.java:422) 2020-05-22T21:23:09.9032193Z at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1840) 2020-05-22T21:23:09.9032644Z at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2489) 2020-05-22T21:23:09.9032898Z 2020-05-22T21:23:09.9033156Z at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) 2020-05-22T21:23:09.9033632Z at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) 2020-05-22T21:23:09.9034250Z at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) 2020-05-22T21:23:09.9035217Z at java.lang.reflect.Constructor.newInstance(Constructor.java:423) 2020-05-22T21:23:09.9035695Z at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121) 2020-05-22T21:23:09.9036223Z at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:88) 2020-05-22T21:23:09.9036714Z at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2474) 2020-05-22T21:23:09.9037109Z at org.apache.hadoop.fs.Hdfs.mkdir(Hdfs.java:317) 
2020-05-22T21:23:09.9037504Z at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:738) 2020-05-22T21:23:09.9037912Z at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:734) 2020-05-22T21:23:09.9038349Z at org.apache.hadoop.fs.FSLinkResolver.resolve(FSLinkResolver.java:90) 2020-05-22T21:23:09.9038769Z at org.apache.hadoop.fs.FileContext.mkdir(FileContext.java:734) 2020-05-22T21:23:09.9039249Z at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.mkdir(HistoryFileManager.java:694) 2020-05-22T21:23:09.9039819Z at org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager.tryCreatingHistoryDirs(HistoryFileManager.java:622) 2020-05-22T21:23:09.9040199Z ... 10 more 2020-05-22T21:23:09.9041156Z Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.security.AccessControlException): Permission denied: user=mapred, access=WRITE, inode="/":hdfs:hadoop:drwxr-xr-x 2020-05-22T21:23:09.9041875Z at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:318) 2020-05-22T21:23:09.9042457Z at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:219) 2020-05-22T21:23:09.9043060Z at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:189) 2020-05-22T21:23:09.9043617Z at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1663) 2020-05-22T21:23:09.9044165Z at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1647) 2020-05-22T21:23:09.9044702Z at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkAncestorAccess(FSDirectory.java:1606) 2020-05-22T21:23:09.9045236Z at org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.mkdirs(FSDirMkdirOp.java:60) 2020-05-22T21:23:09.9045753Z at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3039) 2020-05-22T21:23:09.9046270Z at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1079) 2020-05-22T21:23:09.9046922Z at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:652) 2020-05-22T21:23:09.9047601Z at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) 2020-05-22T21:23:09.9048212Z at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:447) 2020-05-22T21:23:09.9048695Z at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:989) 2020-05-22T21:23:09.9049077Z at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:850) 2020-05-22T21:23:09.9049488Z at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:793) 2020-05-22T21:23:09.9049976Z at java.security.AccessController.doPrivileged(Native Method) 2020-05-22T21:23:09.9050348Z at javax.security.auth.Subject.doAs(Subject.java:422) 2020-05-22T21:23:09.9050781Z at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1840) 2020-05-22T21:23:09.9051420Z at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2489) 2020-05-22T21:23:09.9051675Z 2020-05-22T21:23:09.9051967Z at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1489) 2020-05-22T21:23:09.9052373Z at org.apache.hadoop.ipc.Client.call(Client.java:1435) 2020-05-22T21:23:09.9052737Z at org.apache.hadoop.ipc.Client.call(Client.java:1345) 2020-05-22T21:23:09.9053760Z at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227) 2020-05-22T21:23:09.9054260Z at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116) 2020-05-22T21:23:09.9054676Z at com.sun.proxy.$Proxy12.mkdirs(Unknown Source) 2020-05-22T21:23:09.9055163Z at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:583) 
2020-05-22T21:23:09.9055678Z at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 2020-05-22T21:23:09.9056107Z at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 2020-05-22T21:23:09.9056601Z at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 2020-05-22T21:23:09.9057052Z at java.lang.reflect.Method.invoke(Method.java:498) 2020-05-22T21:23:09.9057506Z at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:409) 2020-05-22T21:23:09.9058084Z at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163) 2020-05-22T21:23:09.9058664Z at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155) 2020-05-22T21:23:09.9059219Z at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) 2020-05-22T21:23:09.9059778Z at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:346) 2020-05-22T21:23:09.9060189Z at com.sun.proxy.$Proxy13.mkdirs(Unknown Source) 2020-05-22T21:23:09.9060569Z at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2472) 2020-05-22T21:23:09.9060875Z ... 
17 more 2020-05-22T21:23:09.9061503Z 20/05/22 21:20:02 INFO util.ExitUtil: Exiting with status -1 2020-05-22T21:23:09.9061904Z 20/05/22 21:20:02 INFO hs.JobHistoryServer: SHUTDOWN_MSG: 2020-05-22T21:23:09.9062261Z /************************************************************ 2020-05-22T21:23:09.9062915Z SHUTDOWN_MSG: Shutting down JobHistoryServer at master.docker-hadoop-cluster-network/172.22.0.3 2020-05-22T21:23:09.9063328Z ************************************************************/ 2020-05-22T21:23:09.9064041Z /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-22195494660/logs/hadoop/historyserver.out: 2020-05-22T21:23:09.9064833Z /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-22195494660/logs/hadoop/namenode.err: 2020-05-22T21:23:09.9065301Z 20/05/22 21:19:39 INFO namenode.NameNode: STARTUP_MSG: 2020-05-22T21:23:09.9065669Z /************************************************************ 2020-05-22T21:23:09.9065958Z STARTUP_MSG: Starting NameNode 2020-05-22T21:23:09.9066215Z STARTUP_MSG: user = hdfs 2020-05-22T21:23:09.9066729Z STARTUP_MSG: host = master.docker-hadoop-cluster-network/172.22.0.3 2020-05-22T21:23:09.9067050Z STARTUP_MSG: args = [] 2020-05-22T21:23:09.9067286Z STARTUP_MSG: version = 2.8.4 2020-05-22T21:23:09.9102792Z STARTUP_MSG: classpath = 
/usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/jsch-0.1.54.jar:/usr/local/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/common/lib/json-smart-1.3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/local/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/common/lib/httpclient-4.5.2.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/usr/local/hadoop/share/hadoop/common/lib/zookeeper-3.4.6.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-math3-3.1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/usr/local/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/htrace-core4-4.0.1-incubating.jar:/usr/local/hadoop/share/hadoop/common/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/comm
on/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/jcip-annotations-1.0-1.jar:/usr/local/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/local/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-annotations-2.8.4.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-client-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/lib/hamcrest-core-1.3.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jsr305-3.0.0.jar:/usr/local/hadoop/share/hadoop/common/lib/nimbus-jose-jwt-4.41.1.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-auth-2.8.4.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/usr/local/hadoop/share/hadoop/common/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/junit-4.11.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-sslengine-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/gson-2.2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/usr/local/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jets3t-0.9.0.jar:/usr/local/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-collections-3.2.2.jar:/usr/local/hadoop/share/hadoop/common/lib/httpcore-4.4.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanut
ils-core-1.8.0.jar:/usr/local/hadoop/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-framework-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.8.4-tests.jar:/usr/local/hadoop/share/hadoop/common/hadoop-nfs-2.8.4.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.8.4.jar:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/okio-1.4.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/okhttp-2.4.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/htrace-core4-4.0.1-incubating.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop/share
/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/hadoop-hdfs-client-2.8.4.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-2.8.4.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.8.4.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-native-client-2.8.4-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-client-2.8.4-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-native-client-2.8.4.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.8.4-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-client-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-client-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/json-io-2.5.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/fst-2.50.jar:/usr/local/hadoop/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoo
p/share/hadoop/yarn/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/curator-client-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-math-2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/yarn/lib/javassist-3.18.1-GA.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/curator-test-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/java-util-1.9.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-sharedcach
emanager-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-registry-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/junit-4.11.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/mapreduce/l
ib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.8.4-tests.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.4.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar 2020-05-22T21:23:09.9127729Z STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r 17e75c2a11685af3e043aa5e604dc831e5b14674; compiled by 'jdu' on 2018-05-08T02:50Z 2020-05-22T21:23:09.9128293Z STARTUP_MSG: java = 1.8.0_131 2020-05-22T21:23:09.9128610Z ************************************************************/ 2020-05-22T21:23:09.9129018Z 20/05/22 21:19:39 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT] 2020-05-22T21:23:09.9129458Z 20/05/22 21:19:39 INFO namenode.NameNode: createNameNode [] 2020-05-22T21:23:09.9130391Z 20/05/22 21:19:43 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties 2020-05-22T21:23:09.9130879Z 20/05/22 21:19:45 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s). 
2020-05-22T21:23:09.9131919Z 20/05/22 21:19:45 INFO impl.MetricsSystemImpl: NameNode metrics system started
2020-05-22T21:23:09.9132800Z 20/05/22 21:19:45 INFO namenode.NameNode: fs.defaultFS is hdfs://master.docker-hadoop-cluster-network:9000
2020-05-22T21:23:09.9133712Z 20/05/22 21:19:45 INFO namenode.NameNode: Clients are to use master.docker-hadoop-cluster-network:9000 to access this namenode/service.
2020-05-22T21:23:09.9134759Z 20/05/22 21:19:48 INFO security.UserGroupInformation: Login successful for user hdfs/[hidden email] using keytab file /etc/security/keytabs/hdfs.keytab
2020-05-22T21:23:09.9135367Z 20/05/22 21:19:48 INFO util.JvmPauseMonitor: Starting JVM pause monitor
2020-05-22T21:23:09.9136131Z 20/05/22 21:19:48 INFO hdfs.DFSUtil: Starting web server as: HTTP/[hidden email]
2020-05-22T21:23:09.9136929Z 20/05/22 21:19:48 INFO hdfs.DFSUtil: Starting Web-server for hdfs at: https://0.0.0.0:50470
2020-05-22T21:23:09.9137456Z 20/05/22 21:19:49 INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2020-05-22T21:23:09.9138069Z 20/05/22 21:19:49 INFO server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2020-05-22T21:23:09.9138633Z 20/05/22 21:19:49 INFO http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2020-05-22T21:23:09.9139478Z 20/05/22 21:19:49 INFO http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2020-05-22T21:23:09.9140143Z 20/05/22 21:19:49 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2020-05-22T21:23:09.9140844Z 20/05/22 21:19:49 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2020-05-22T21:23:09.9141551Z 20/05/22 21:19:49 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2020-05-22T21:23:09.9142517Z 20/05/22 21:19:49 INFO http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2020-05-22T21:23:09.9143244Z 20/05/22 21:19:49 INFO http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2020-05-22T21:23:09.9143913Z 20/05/22 21:19:49 INFO http.HttpServer2: Adding Kerberos (SPNEGO) filter to getDelegationToken
2020-05-22T21:23:09.9144413Z 20/05/22 21:19:49 INFO http.HttpServer2: Adding Kerberos (SPNEGO) filter to renewDelegationToken
2020-05-22T21:23:09.9144905Z 20/05/22 21:19:49 INFO http.HttpServer2: Adding Kerberos (SPNEGO) filter to cancelDelegationToken
2020-05-22T21:23:09.9145383Z 20/05/22 21:19:49 INFO http.HttpServer2: Adding Kerberos (SPNEGO) filter to fsck
2020-05-22T21:23:09.9145832Z 20/05/22 21:19:49 INFO http.HttpServer2: Adding Kerberos (SPNEGO) filter to imagetransfer
2020-05-22T21:23:09.9146415Z 20/05/22 21:19:49 INFO http.HttpServer2: Jetty bound to port 50470
2020-05-22T21:23:09.9147031Z 20/05/22 21:19:49 INFO mortbay.log: jetty-6.1.26
2020-05-22T21:23:09.9147882Z 20/05/22 21:19:50 INFO server.KerberosAuthenticationHandler: Using keytab /etc/security/keytabs/hdfs.keytab, for principal HTTP/[hidden email]
2020-05-22T21:23:09.9148954Z 20/05/22 21:19:50 INFO server.KerberosAuthenticationHandler: Using keytab /etc/security/keytabs/hdfs.keytab, for principal HTTP/[hidden email]
2020-05-22T21:23:09.9149591Z 20/05/22 21:19:50 INFO mortbay.log: Started SslSelectChannelConnectorSecure@0.0.0.0:50470
2020-05-22T21:23:09.9150334Z 20/05/22 21:19:50 WARN namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
2020-05-22T21:23:09.9151126Z 20/05/22 21:19:50 WARN namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
2020-05-22T21:23:09.9151740Z 20/05/22 21:19:51 INFO namenode.FSEditLog: Edit logging is async:true
2020-05-22T21:23:09.9152166Z 20/05/22 21:19:51 INFO namenode.FSNamesystem: KeyProvider: null
2020-05-22T21:23:09.9152566Z 20/05/22 21:19:51 INFO namenode.FSNamesystem: fsLock is fair: true
2020-05-22T21:23:09.9153045Z 20/05/22 21:19:51 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
2020-05-22T21:23:09.9153554Z 20/05/22 21:19:51 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2020-05-22T21:23:09.9154344Z 20/05/22 21:19:51 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2020-05-22T21:23:09.9154968Z 20/05/22 21:19:51 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2020-05-22T21:23:09.9155595Z 20/05/22 21:19:51 INFO blockmanagement.BlockManager: The block deletion will start around 2020 May 22 21:19:51
2020-05-22T21:23:09.9156102Z 20/05/22 21:19:51 INFO util.GSet: Computing capacity for map BlocksMap
2020-05-22T21:23:09.9156690Z 20/05/22 21:19:51 INFO util.GSet: VM type = 64-bit
2020-05-22T21:23:09.9157061Z 20/05/22 21:19:51 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
2020-05-22T21:23:09.9157468Z 20/05/22 21:19:51 INFO util.GSet: capacity = 2^21 = 2097152 entries
2020-05-22T21:23:09.9157898Z 20/05/22 21:19:51 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=true
2020-05-22T21:23:09.9158539Z 20/05/22 21:19:51 INFO blockmanagement.BlockManager: dfs.block.access.key.update.interval=600 min(s), dfs.block.access.token.lifetime=600 min(s), dfs.encrypt.data.transfer.algorithm=null
2020-05-22T21:23:09.9159179Z 20/05/22 21:19:51 INFO blockmanagement.BlockManager: defaultReplication = 1
2020-05-22T21:23:09.9160014Z 20/05/22 21:19:51 INFO blockmanagement.BlockManager: maxReplication = 512
2020-05-22T21:23:09.9160510Z 20/05/22 21:19:51 INFO blockmanagement.BlockManager: minReplication = 1
2020-05-22T21:23:09.9160961Z 20/05/22 21:19:51 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
2020-05-22T21:23:09.9161438Z 20/05/22 21:19:51 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
2020-05-22T21:23:09.9161913Z 20/05/22 21:19:51 INFO blockmanagement.BlockManager: encryptDataTransfer = true
2020-05-22T21:23:09.9162369Z 20/05/22 21:19:51 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
2020-05-22T21:23:09.9163309Z 20/05/22 21:19:51 INFO namenode.FSNamesystem: fsOwner = hdfs/[hidden email] (auth:KERBEROS)
2020-05-22T21:23:09.9163867Z 20/05/22 21:19:51 INFO namenode.FSNamesystem: supergroup = root
2020-05-22T21:23:09.9164293Z 20/05/22 21:19:51 INFO namenode.FSNamesystem: isPermissionEnabled = true
2020-05-22T21:23:09.9164869Z 20/05/22 21:19:51 INFO namenode.FSNamesystem: HA Enabled: false
2020-05-22T21:23:09.9165381Z 20/05/22 21:19:51 INFO namenode.FSNamesystem: Append Enabled: true
2020-05-22T21:23:09.9165798Z 20/05/22 21:19:51 INFO util.GSet: Computing capacity for map INodeMap
2020-05-22T21:23:09.9166410Z 20/05/22 21:19:51 INFO util.GSet: VM type = 64-bit
2020-05-22T21:23:09.9166798Z 20/05/22 21:19:51 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
2020-05-22T21:23:09.9167183Z 20/05/22 21:19:51 INFO util.GSet: capacity = 2^20 = 1048576 entries
2020-05-22T21:23:09.9167591Z 20/05/22 21:19:51 INFO namenode.FSDirectory: ACLs enabled? false
2020-05-22T21:23:09.9167993Z 20/05/22 21:19:51 INFO namenode.FSDirectory: XAttrs enabled? true
2020-05-22T21:23:09.9168495Z 20/05/22 21:19:51 INFO namenode.NameNode: Caching file names occurring more than 10 times
2020-05-22T21:23:09.9168945Z 20/05/22 21:19:51 INFO util.GSet: Computing capacity for map cachedBlocks
2020-05-22T21:23:09.9169547Z 20/05/22 21:19:51 INFO util.GSet: VM type = 64-bit
2020-05-22T21:23:09.9169943Z 20/05/22 21:19:51 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
2020-05-22T21:23:09.9170330Z 20/05/22 21:19:51 INFO util.GSet: capacity = 2^18 = 262144 entries
2020-05-22T21:23:09.9171039Z 20/05/22 21:19:51 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2020-05-22T21:23:09.9171711Z 20/05/22 21:19:51 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2020-05-22T21:23:09.9172182Z 20/05/22 21:19:51 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
2020-05-22T21:23:09.9172678Z 20/05/22 21:19:51 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2020-05-22T21:23:09.9173164Z 20/05/22 21:19:51 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2020-05-22T21:23:09.9173663Z 20/05/22 21:19:51 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2020-05-22T21:23:09.9174144Z 20/05/22 21:19:51 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
2020-05-22T21:23:09.9175452Z 20/05/22 21:19:51 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2020-05-22T21:23:09.9176021Z 20/05/22 21:19:51 INFO util.GSet: Computing capacity for map NameNodeRetryCache
2020-05-22T21:23:09.9176710Z 20/05/22 21:19:51 INFO util.GSet: VM type = 64-bit
2020-05-22T21:23:09.9177129Z 20/05/22 21:19:51 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
2020-05-22T21:23:09.9177546Z 20/05/22 21:19:51 INFO util.GSet: capacity = 2^15 = 32768 entries
2020-05-22T21:23:09.9178358Z 20/05/22 21:19:51 INFO common.Storage: Lock on /tmp/hadoop-hdfs/dfs/name/in_use.lock acquired by nodename [hidden email]-hadoop-cluster-network
2020-05-22T21:23:09.9179247Z 20/05/22 21:19:51 INFO namenode.FileJournalManager: Recovering unfinalized segments in /tmp/hadoop-hdfs/dfs/name/current
2020-05-22T21:23:09.9179740Z 20/05/22 21:19:51 INFO namenode.FSImage: No edit log streams selected.
2020-05-22T21:23:09.9180645Z 20/05/22 21:19:51 INFO namenode.FSImage: Planning to load image: FSImageFile(file=/tmp/hadoop-hdfs/dfs/name/current/fsimage_0000000000000000000, cpktTxId=0000000000000000000)
2020-05-22T21:23:09.9181250Z 20/05/22 21:19:52 INFO namenode.FSImageFormatPBINode: Loading 1 INodes.
2020-05-22T21:23:09.9181686Z 20/05/22 21:19:52 INFO namenode.FSImageFormatProtobuf: Loaded FSImage in 0 seconds.
2020-05-22T21:23:09.9182495Z 20/05/22 21:19:52 INFO namenode.FSImage: Loaded image for txid 0 from /tmp/hadoop-hdfs/dfs/name/current/fsimage_0000000000000000000
2020-05-22T21:23:09.9183106Z 20/05/22 21:19:52 INFO namenode.FSNamesystem: Need to save fs image? false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
2020-05-22T21:23:09.9184636Z 20/05/22 21:19:52 INFO namenode.FSEditLog: Starting log segment at 1
2020-05-22T21:23:09.9185086Z 20/05/22 21:19:52 INFO namenode.NameCache: initialized with 0 entries 0 lookups
2020-05-22T21:23:09.9185525Z 20/05/22 21:19:52 INFO namenode.FSNamesystem: Finished loading FSImage in 871 msecs
2020-05-22T21:23:09.9186545Z 20/05/22 21:19:53 INFO namenode.NameNode: RPC server is binding to master.docker-hadoop-cluster-network:9000
2020-05-22T21:23:09.9187286Z 20/05/22 21:19:53 INFO ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 10000 scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler
2020-05-22T21:23:09.9187944Z 20/05/22 21:19:53 INFO ipc.Server: Starting Socket Reader #1 for port 9000
2020-05-22T21:23:09.9188371Z 20/05/22 21:19:53 INFO ipc.Server: Starting Socket Reader #2 for port 9000
2020-05-22T21:23:09.9188777Z 20/05/22 21:19:53 INFO ipc.Server: Starting Socket Reader #3 for port 9000
2020-05-22T21:23:09.9189306Z 20/05/22 21:19:53 INFO ipc.Server: Starting Socket Reader #4 for port 9000
2020-05-22T21:23:09.9189710Z 20/05/22 21:19:53 INFO ipc.Server: Starting Socket Reader #5 for port 9000
2020-05-22T21:23:09.9190145Z 20/05/22 21:19:53 INFO namenode.FSNamesystem: Registered FSNamesystemState MBean
2020-05-22T21:23:09.9190611Z 20/05/22 21:19:53 INFO namenode.LeaseManager: Number of blocks under construction: 0
2020-05-22T21:23:09.9191096Z 20/05/22 21:19:53 INFO blockmanagement.BlockManager: initializing replication queues
2020-05-22T21:23:09.9191552Z 20/05/22 21:19:53 INFO hdfs.StateChange: STATE* Leaving safe mode after 2 secs
2020-05-22T21:23:09.9191999Z 20/05/22 21:19:53 INFO hdfs.StateChange: STATE* Network topology has 0 racks and 0 datanodes
2020-05-22T21:23:09.9192471Z 20/05/22 21:19:53 INFO hdfs.StateChange: STATE* UnderReplicatedBlocks has 0 blocks
2020-05-22T21:23:09.9193010Z 20/05/22 21:19:53 INFO delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
2020-05-22T21:23:09.9194089Z 20/05/22 21:19:53 INFO delegation.AbstractDelegationTokenSecretManager: Starting expired delegation token remover thread, tokenRemoverScanInterval=60 min(s)
2020-05-22T21:23:09.9194795Z 20/05/22 21:19:53 INFO delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
2020-05-22T21:23:09.9195361Z 20/05/22 21:19:53 INFO blockmanagement.BlockManager: Total number of blocks = 0
2020-05-22T21:23:09.9195850Z 20/05/22 21:19:53 INFO blockmanagement.BlockManager: Number of invalid blocks = 0
2020-05-22T21:23:09.9196651Z 20/05/22 21:19:53 INFO blockmanagement.BlockManager: Number of under-replicated blocks = 0
2020-05-22T21:23:09.9197372Z 20/05/22 21:19:53 INFO blockmanagement.BlockManager: Number of over-replicated blocks = 0
2020-05-22T21:23:09.9197866Z 20/05/22 21:19:53 INFO blockmanagement.BlockManager: Number of blocks being written = 0
2020-05-22T21:23:09.9198723Z 20/05/22 21:19:53 INFO hdfs.StateChange: STATE* Replication Queue initialization scan for invalid, over- and under-replicated blocks completed in 88 msec
2020-05-22T21:23:09.9199292Z 20/05/22 21:19:53 INFO ipc.Server: IPC Server Responder: starting
2020-05-22T21:23:09.9199710Z 20/05/22 21:19:54 INFO ipc.Server: IPC Server listener on 9000: starting
2020-05-22T21:23:09.9200484Z 20/05/22 21:19:54 INFO namenode.NameNode: NameNode RPC up at: master.docker-hadoop-cluster-network/172.22.0.3:9000
2020-05-22T21:23:09.9201023Z 20/05/22 21:19:54 INFO namenode.FSNamesystem: Starting services required for active state
2020-05-22T21:23:09.9201473Z 20/05/22 21:19:54 INFO namenode.FSDirectory: Initializing quota with 4 thread(s)
2020-05-22T21:23:09.9201948Z 20/05/22 21:19:54 INFO namenode.FSDirectory: Quota initialization completed in 30 milliseconds
2020-05-22T21:23:09.9202276Z name space=1
2020-05-22T21:23:09.9202472Z storage space=0
2020-05-22T21:23:09.9202718Z storage types=RAM_DISK=0, SSD=0, DISK=0, ARCHIVE=0
2020-05-22T21:23:09.9203197Z 20/05/22 21:19:55 INFO blockmanagement.CacheReplicationMonitor: Starting CacheReplicationMonitor with interval 30000 milliseconds
2020-05-22T21:23:09.9204669Z 20/05/22 21:19:55 INFO ipc.Server: Auth successful for hdfs/[hidden email] (auth:KERBEROS)
2020-05-22T21:23:09.9205779Z 20/05/22 21:19:55 INFO ipc.Server: Auth successful for hdfs/[hidden email] (auth:KERBEROS)
2020-05-22T21:23:09.9206997Z 20/05/22 21:19:55 INFO authorize.ServiceAuthorizationManager: Authorization successful for hdfs/[hidden email] (auth:KERBEROS) for protocol=interface org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol
2020-05-22T21:23:09.9208403Z 20/05/22 21:19:55 INFO authorize.ServiceAuthorizationManager: Authorization successful for hdfs/[hidden email] (auth:KERBEROS) for protocol=interface org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol
2020-05-22T21:23:09.9210110Z 20/05/22 21:19:56 INFO hdfs.StateChange: BLOCK* registerDatanode: from DatanodeRegistration(172.22.0.5:50010, datanodeUuid=7c82e560-f8ac-43c9-a1a3-976cce1afd79, infoPort=0, infoSecurePort=50475, ipcPort=50020, storageInfo=lv=-57;cid=CID-6e253d18-f0f3-446b-a39f-e6aa75c5db3b;nsid=1964552002;c=1590182373027) storage 7c82e560-f8ac-43c9-a1a3-976cce1afd79
2020-05-22T21:23:09.9211569Z 20/05/22 21:19:56 INFO net.NetworkTopology: Adding a new node: /default-rack/172.22.0.5:50010
2020-05-22T21:23:09.9212539Z 20/05/22 21:19:56 INFO blockmanagement.BlockReportLeaseManager: Registered DN 7c82e560-f8ac-43c9-a1a3-976cce1afd79 (172.22.0.5:50010).
2020-05-22T21:23:09.9214108Z 20/05/22 21:19:56 INFO hdfs.StateChange: BLOCK* registerDatanode: from DatanodeRegistration(172.22.0.4:50010, datanodeUuid=8b7bd3e9-0431-4115-a271-cfe2f65d3324, infoPort=0, infoSecurePort=50475, ipcPort=50020, storageInfo=lv=-57;cid=CID-6e253d18-f0f3-446b-a39f-e6aa75c5db3b;nsid=1964552002;c=1590182373027) storage 8b7bd3e9-0431-4115-a271-cfe2f65d3324
2020-05-22T21:23:09.9215306Z 20/05/22 21:19:56 INFO net.NetworkTopology: Adding a new node: /default-rack/172.22.0.4:50010
2020-05-22T21:23:09.9216191Z 20/05/22 21:19:56 INFO blockmanagement.BlockReportLeaseManager: Registered DN 8b7bd3e9-0431-4115-a271-cfe2f65d3324 (172.22.0.4:50010).
2020-05-22T21:23:09.9217210Z 20/05/22 21:19:56 INFO blockmanagement.DatanodeDescriptor: Adding new storage ID DS-fba6d579-dbd2-4569-b80a-bac9254c9d42 for DN 172.22.0.4:50010
2020-05-22T21:23:09.9218257Z 20/05/22 21:19:56 INFO blockmanagement.DatanodeDescriptor: Adding new storage ID DS-48a77302-0c5e-4523-9509-f1b4a82437d3 for DN 172.22.0.5:50010
2020-05-22T21:23:09.9219422Z 20/05/22 21:19:56 INFO BlockStateChange: BLOCK* processReport 0xb0a0dbda3c155444: Processing first storage report for DS-48a77302-0c5e-4523-9509-f1b4a82437d3 from datanode 7c82e560-f8ac-43c9-a1a3-976cce1afd79
2020-05-22T21:23:09.9221866Z 20/05/22 21:19:56 INFO BlockStateChange: BLOCK* processReport 0xb0a0dbda3c155444: from storage DS-48a77302-0c5e-4523-9509-f1b4a82437d3 node DatanodeRegistration(172.22.0.5:50010, datanodeUuid=7c82e560-f8ac-43c9-a1a3-976cce1afd79, infoPort=0, infoSecurePort=50475, ipcPort=50020, storageInfo=lv=-57;cid=CID-6e253d18-f0f3-446b-a39f-e6aa75c5db3b;nsid=1964552002;c=1590182373027), blocks: 0, hasStaleStorage: false, processing time: 1 msecs, invalidatedBlocks: 0
2020-05-22T21:23:09.9223887Z 20/05/22 21:19:56 INFO BlockStateChange: BLOCK* processReport 0x36339d36c758be17: Processing first storage report for DS-fba6d579-dbd2-4569-b80a-bac9254c9d42 from datanode 8b7bd3e9-0431-4115-a271-cfe2f65d3324
2020-05-22T21:23:09.9226000Z 20/05/22 21:19:56 INFO BlockStateChange: BLOCK* processReport 0x36339d36c758be17: from storage DS-fba6d579-dbd2-4569-b80a-bac9254c9d42 node DatanodeRegistration(172.22.0.4:50010, datanodeUuid=8b7bd3e9-0431-4115-a271-cfe2f65d3324, infoPort=0, infoSecurePort=50475, ipcPort=50020, storageInfo=lv=-57;cid=CID-6e253d18-f0f3-446b-a39f-e6aa75c5db3b;nsid=1964552002;c=1590182373027), blocks: 0, hasStaleStorage: false, processing time: 0 msecs, invalidatedBlocks: 0
2020-05-22T21:23:09.9227309Z 20/05/22 21:19:59 INFO ipc.Server: Auth successful for [hidden email] (auth:KERBEROS)
2020-05-22T21:23:09.9227997Z 20/05/22 21:19:59 INFO authorize.ServiceAuthorizationManager: Authorization successful for [hidden email] (auth:KERBEROS) for protocol=interface org.apache.hadoop.hdfs.protocol.ClientProtocol
2020-05-22T21:23:09.9228825Z 20/05/22 21:20:01 INFO ipc.Server: Auth successful for [hidden email] (auth:KERBEROS)
2020-05-22T21:23:09.9229528Z 20/05/22 21:20:01 INFO authorize.ServiceAuthorizationManager: Authorization successful for [hidden email] (auth:KERBEROS) for protocol=interface org.apache.hadoop.hdfs.protocol.ClientProtocol
2020-05-22T21:23:09.9230582Z 20/05/22 21:20:02 INFO ipc.Server: Auth successful for mapred/[hidden email] (auth:KERBEROS)
2020-05-22T21:23:09.9231772Z 20/05/22 21:20:02 INFO authorize.ServiceAuthorizationManager: Authorization successful for mapred/[hidden email] (auth:KERBEROS) for protocol=interface org.apache.hadoop.hdfs.protocol.ClientProtocol
2020-05-22T21:23:09.9233462Z 20/05/22 21:20:02 INFO ipc.Server: IPC Server handler 17 on 9000, call Call#2 Retry#0 org.apache.hadoop.hdfs.protocol.ClientProtocol.mkdirs from 172.22.0.3:44561: org.apache.hadoop.security.AccessControlException: Permission denied: user=mapred, access=WRITE, inode="/":hdfs:hadoop:drwxr-xr-x
2020-05-22T21:23:09.9234379Z 20/05/22 21:20:04 INFO ipc.Server: Auth successful for [hidden email] (auth:KERBEROS)
2020-05-22T21:23:09.9235065Z 20/05/22 21:20:04 INFO authorize.ServiceAuthorizationManager: Authorization successful for [hidden email] (auth:KERBEROS) for protocol=interface org.apache.hadoop.hdfs.protocol.ClientProtocol
2020-05-22T21:23:09.9235766Z 20/05/22 21:20:06 INFO ipc.Server: Auth successful for [hidden email] (auth:KERBEROS)
2020-05-22T21:23:09.9236463Z 20/05/22 21:20:06 INFO authorize.ServiceAuthorizationManager: Authorization successful for [hidden email] (auth:KERBEROS) for protocol=interface org.apache.hadoop.hdfs.protocol.ClientProtocol
2020-05-22T21:23:09.9237152Z 20/05/22 21:20:08 INFO ipc.Server: Auth successful for [hidden email] (auth:KERBEROS)
2020-05-22T21:23:09.9237848Z 20/05/22 21:20:08 INFO authorize.ServiceAuthorizationManager: Authorization successful for [hidden email] (auth:KERBEROS) for protocol=interface org.apache.hadoop.hdfs.protocol.ClientProtocol
2020-05-22T21:23:09.9238547Z 20/05/22 21:20:10 INFO ipc.Server: Auth successful for [hidden email] (auth:KERBEROS)
2020-05-22T21:23:09.9239225Z 20/05/22 21:20:10 INFO authorize.ServiceAuthorizationManager: Authorization successful for [hidden email] (auth:KERBEROS) for protocol=interface org.apache.hadoop.hdfs.protocol.ClientProtocol
2020-05-22T21:23:09.9239923Z 20/05/22 21:20:12 INFO ipc.Server: Auth successful for [hidden email] (auth:KERBEROS)
2020-05-22T21:23:09.9240598Z 20/05/22 21:20:12 INFO authorize.ServiceAuthorizationManager: Authorization successful for [hidden email] (auth:KERBEROS) for protocol=interface org.apache.hadoop.hdfs.protocol.ClientProtocol
2020-05-22T21:23:09.9241299Z 20/05/22 21:20:14 INFO ipc.Server: Auth successful for [hidden email] (auth:KERBEROS)
2020-05-22T21:23:09.9241994Z 20/05/22 21:20:15 INFO authorize.ServiceAuthorizationManager: Authorization successful for [hidden email] (auth:KERBEROS) for protocol=interface org.apache.hadoop.hdfs.protocol.ClientProtocol
2020-05-22T21:23:09.9242677Z 20/05/22 21:20:17 INFO ipc.Server: Auth successful for [hidden email] (auth:KERBEROS)
2020-05-22T21:23:09.9243369Z 20/05/22 21:20:17 INFO authorize.ServiceAuthorizationManager: Authorization successful for [hidden email] (auth:KERBEROS) for protocol=interface org.apache.hadoop.hdfs.protocol.ClientProtocol
2020-05-22T21:23:09.9244059Z 20/05/22 21:20:19 INFO ipc.Server: Auth successful for [hidden email] (auth:KERBEROS)
2020-05-22T21:23:09.9244737Z 20/05/22 21:20:19 INFO authorize.ServiceAuthorizationManager: Authorization successful for [hidden email] (auth:KERBEROS) for protocol=interface org.apache.hadoop.hdfs.protocol.ClientProtocol
2020-05-22T21:23:09.9245434Z 20/05/22 21:20:21 INFO ipc.Server: Auth successful for [hidden email] (auth:KERBEROS)
2020-05-22T21:23:09.9246108Z 20/05/22 21:20:21 INFO authorize.ServiceAuthorizationManager: Authorization successful for [hidden email] (auth:KERBEROS) for protocol=interface org.apache.hadoop.hdfs.protocol.ClientProtocol
2020-05-22T21:23:09.9247168Z 20/05/22 21:20:50 INFO ipc.Server: Auth successful for [hidden email] (auth:KERBEROS)
2020-05-22T21:23:09.9248223Z 20/05/22 21:20:50 INFO authorize.ServiceAuthorizationManager: Authorization successful for [hidden email] (auth:KERBEROS) for protocol=interface org.apache.hadoop.hdfs.protocol.ClientProtocol
2020-05-22T21:23:09.9249430Z 20/05/22 21:20:51 INFO hdfs.StateChange: BLOCK* allocate blk_1073741825_1001, replicas=172.22.0.5:50010 for /user/hadoop-user/.flink/application_1590182392090_0001/lib/log4j-1.2-api-2.12.1.jar
2020-05-22T21:23:09.9250671Z 20/05/22 21:20:51 INFO namenode.FSNamesystem: BLOCK* blk_1073741825_1001 is COMMITTED but not COMPLETE(numNodes= 0 < minimum = 1) in file /user/hadoop-user/.flink/application_1590182392090_0001/lib/log4j-1.2-api-2.12.1.jar
2020-05-22T21:23:09.9252223Z 20/05/22 21:20:51 INFO hdfs.StateChange: DIR* completeFile: /user/hadoop-user/.flink/application_1590182392090_0001/lib/log4j-1.2-api-2.12.1.jar is closed by DFSClient_NONMAPREDUCE_331660630_1
2020-05-22T21:23:09.9253110Z 20/05/22 21:20:51 INFO namenode.FSEditLog: Number of transactions: 21
Total time for transactions(ms): 21 Number of transactions batched in Syncs: 2 Number of syncs: 19 SyncTimes(ms): 24 2020-05-22T21:23:09.9254258Z 20/05/22 21:20:51 INFO namenode.FSDirectory: Increasing replication from 1 to 1 for /user/hadoop-user/.flink/application_1590182392090_0001/lib/log4j-1.2-api-2.12.1.jar 2020-05-22T21:23:09.9255384Z 20/05/22 21:20:51 INFO hdfs.StateChange: BLOCK* allocate blk_1073741826_1002, replicas=172.22.0.5:50010 for /user/hadoop-user/.flink/application_1590182392090_0001/lib/log4j-core-2.12.1.jar 2020-05-22T21:23:09.9256603Z 20/05/22 21:20:52 INFO namenode.FSNamesystem: BLOCK* blk_1073741826_1002 is COMMITTED but not COMPLETE(numNodes= 0 < minimum = 1) in file /user/hadoop-user/.flink/application_1590182392090_0001/lib/log4j-core-2.12.1.jar 2020-05-22T21:23:09.9257790Z 20/05/22 21:20:52 INFO hdfs.StateChange: DIR* completeFile: /user/hadoop-user/.flink/application_1590182392090_0001/lib/log4j-core-2.12.1.jar is closed by DFSClient_NONMAPREDUCE_331660630_1 2020-05-22T21:23:09.9258859Z 20/05/22 21:20:53 INFO namenode.FSDirectory: Increasing replication from 1 to 1 for /user/hadoop-user/.flink/application_1590182392090_0001/lib/log4j-core-2.12.1.jar 2020-05-22T21:23:09.9260005Z 20/05/22 21:20:53 INFO hdfs.StateChange: BLOCK* allocate blk_1073741827_1003, replicas=172.22.0.5:50010 for /user/hadoop-user/.flink/application_1590182392090_0001/lib/flink-table-blink_2.11-1.12-SNAPSHOT.jar 2020-05-22T21:23:09.9261312Z 20/05/22 21:20:59 INFO namenode.FSNamesystem: BLOCK* blk_1073741827_1003 is COMMITTED but not COMPLETE(numNodes= 0 < minimum = 1) in file /user/hadoop-user/.flink/application_1590182392090_0001/lib/flink-table-blink_2.11-1.12-SNAPSHOT.jar 2020-05-22T21:23:09.9262569Z 20/05/22 21:20:59 INFO hdfs.StateChange: DIR* completeFile: /user/hadoop-user/.flink/application_1590182392090_0001/lib/flink-table-blink_2.11-1.12-SNAPSHOT.jar is closed by DFSClient_NONMAPREDUCE_331660630_1 2020-05-22T21:23:09.9263708Z 20/05/22 21:20:59 INFO 
namenode.FSDirectory: Increasing replication from 1 to 1 for /user/hadoop-user/.flink/application_1590182392090_0001/lib/flink-table-blink_2.11-1.12-SNAPSHOT.jar 2020-05-22T21:23:09.9264881Z 20/05/22 21:20:59 INFO hdfs.StateChange: BLOCK* allocate blk_1073741828_1004, replicas=172.22.0.4:50010 for /user/hadoop-user/.flink/application_1590182392090_0001/lib/flink-table_2.11-1.12-SNAPSHOT.jar 2020-05-22T21:23:09.9266160Z 20/05/22 21:21:03 INFO namenode.FSNamesystem: BLOCK* blk_1073741828_1004 is COMMITTED but not COMPLETE(numNodes= 0 < minimum = 1) in file /user/hadoop-user/.flink/application_1590182392090_0001/lib/flink-table_2.11-1.12-SNAPSHOT.jar 2020-05-22T21:23:09.9267386Z 20/05/22 21:21:03 INFO hdfs.StateChange: DIR* completeFile: /user/hadoop-user/.flink/application_1590182392090_0001/lib/flink-table_2.11-1.12-SNAPSHOT.jar is closed by DFSClient_NONMAPREDUCE_331660630_1 2020-05-22T21:23:09.9268652Z 20/05/22 21:21:03 INFO namenode.FSDirectory: Increasing replication from 1 to 1 for /user/hadoop-user/.flink/application_1590182392090_0001/lib/flink-table_2.11-1.12-SNAPSHOT.jar 2020-05-22T21:23:09.9269816Z 20/05/22 21:21:03 INFO hdfs.StateChange: BLOCK* allocate blk_1073741829_1005, replicas=172.22.0.4:50010 for /user/hadoop-user/.flink/application_1590182392090_0001/lib/flink-shaded-zookeeper-3.4.10.jar 2020-05-22T21:23:09.9271007Z 20/05/22 21:21:04 INFO hdfs.StateChange: DIR* completeFile: /user/hadoop-user/.flink/application_1590182392090_0001/lib/flink-shaded-zookeeper-3.4.10.jar is closed by DFSClient_NONMAPREDUCE_331660630_1 2020-05-22T21:23:09.9272230Z 20/05/22 21:21:04 INFO namenode.FSDirectory: Increasing replication from 1 to 1 for /user/hadoop-user/.flink/application_1590182392090_0001/lib/flink-shaded-zookeeper-3.4.10.jar 2020-05-22T21:23:09.9273913Z 20/05/22 21:21:04 INFO hdfs.StateChange: BLOCK* allocate blk_1073741830_1006, replicas=172.22.0.5:50010 for /user/hadoop-user/.flink/application_1590182392090_0001/lib/log4j-api-2.12.1.jar 
2020-05-22T21:23:09.9275185Z 20/05/22 21:21:04 INFO namenode.FSNamesystem: BLOCK* blk_1073741830_1006 is COMMITTED but not COMPLETE(numNodes= 0 < minimum = 1) in file /user/hadoop-user/.flink/application_1590182392090_0001/lib/log4j-api-2.12.1.jar 2020-05-22T21:23:09.9276364Z 20/05/22 21:21:05 INFO hdfs.StateChange: DIR* completeFile: /user/hadoop-user/.flink/application_1590182392090_0001/lib/log4j-api-2.12.1.jar is closed by DFSClient_NONMAPREDUCE_331660630_1 2020-05-22T21:23:09.9277432Z 20/05/22 21:21:05 INFO namenode.FSDirectory: Increasing replication from 1 to 1 for /user/hadoop-user/.flink/application_1590182392090_0001/lib/log4j-api-2.12.1.jar 2020-05-22T21:23:09.9278555Z 20/05/22 21:21:05 INFO hdfs.StateChange: BLOCK* allocate blk_1073741831_1007, replicas=172.22.0.5:50010 for /user/hadoop-user/.flink/application_1590182392090_0001/lib/log4j-slf4j-impl-2.12.1.jar 2020-05-22T21:23:09.9279740Z 20/05/22 21:21:05 INFO hdfs.StateChange: DIR* completeFile: /user/hadoop-user/.flink/application_1590182392090_0001/lib/log4j-slf4j-impl-2.12.1.jar is closed by DFSClient_NONMAPREDUCE_331660630_1 2020-05-22T21:23:09.9280847Z 20/05/22 21:21:05 INFO namenode.FSDirectory: Increasing replication from 1 to 1 for /user/hadoop-user/.flink/application_1590182392090_0001/lib/log4j-slf4j-impl-2.12.1.jar 2020-05-22T21:23:09.9281929Z 20/05/22 21:21:05 INFO hdfs.StateChange: BLOCK* allocate blk_1073741832_1008, replicas=172.22.0.5:50010 for /user/hadoop-user/.flink/application_1590182392090_0001/log4j.properties 2020-05-22T21:23:09.9283044Z 20/05/22 21:21:05 INFO hdfs.StateChange: DIR* completeFile: /user/hadoop-user/.flink/application_1590182392090_0001/log4j.properties is closed by DFSClient_NONMAPREDUCE_331660630_1 2020-05-22T21:23:09.9284078Z 20/05/22 21:21:05 INFO namenode.FSDirectory: Increasing replication from 1 to 1 for /user/hadoop-user/.flink/application_1590182392090_0001/log4j.properties 2020-05-22T21:23:09.9285220Z 20/05/22 21:21:05 INFO hdfs.StateChange: BLOCK* 
allocate blk_1073741833_1009, replicas=172.22.0.5:50010 for /user/hadoop-user/.flink/application_1590182392090_0001/plugins/another-dummy-fs/flink-another-dummy-fs.jar 2020-05-22T21:23:09.9286556Z 20/05/22 21:21:05 INFO namenode.FSNamesystem: BLOCK* blk_1073741833_1009 is COMMITTED but not COMPLETE(numNodes= 0 < minimum = 1) in file /user/hadoop-user/.flink/application_1590182392090_0001/plugins/another-dummy-fs/flink-another-dummy-fs.jar 2020-05-22T21:23:09.9287844Z 20/05/22 21:21:05 INFO hdfs.StateChange: DIR* completeFile: /user/hadoop-user/.flink/application_1590182392090_0001/plugins/another-dummy-fs/flink-another-dummy-fs.jar is closed by DFSClient_NONMAPREDUCE_331660630_1 2020-05-22T21:23:09.9289005Z 20/05/22 21:21:05 INFO namenode.FSDirectory: Increasing replication from 1 to 1 for /user/hadoop-user/.flink/application_1590182392090_0001/plugins/another-dummy-fs/flink-another-dummy-fs.jar 2020-05-22T21:23:09.9290181Z 20/05/22 21:21:05 INFO hdfs.StateChange: BLOCK* allocate blk_1073741834_1010, replicas=172.22.0.5:50010 for /user/hadoop-user/.flink/application_1590182392090_0001/plugins/dummy-fs/flink-dummy-fs.jar 2020-05-22T21:23:09.9291734Z 20/05/22 21:21:05 INFO namenode.FSNamesystem: BLOCK* blk_1073741834_1010 is COMMITTED but not COMPLETE(numNodes= 0 < minimum = 1) in file /user/hadoop-user/.flink/application_1590182392090_0001/plugins/dummy-fs/flink-dummy-fs.jar 2020-05-22T21:23:09.9292989Z 20/05/22 21:21:06 INFO hdfs.StateChange: DIR* completeFile: /user/hadoop-user/.flink/application_1590182392090_0001/plugins/dummy-fs/flink-dummy-fs.jar is closed by DFSClient_NONMAPREDUCE_331660630_1 2020-05-22T21:23:09.9294110Z 20/05/22 21:21:06 INFO namenode.FSDirectory: Increasing replication from 1 to 1 for /user/hadoop-user/.flink/application_1590182392090_0001/plugins/dummy-fs/flink-dummy-fs.jar 2020-05-22T21:23:09.9295451Z 20/05/22 21:21:06 INFO hdfs.StateChange: BLOCK* allocate blk_1073741835_1011, replicas=172.22.0.4:50010 for 
/user/hadoop-user/.flink/application_1590182392090_0001/plugins/metrics_jmx/flink-metrics-jmx-1.12-SNAPSHOT.jar 2020-05-22T21:23:09.9296724Z 20/05/22 21:21:06 INFO hdfs.StateChange: DIR* completeFile: /user/hadoop-user/.flink/application_1590182392090_0001/plugins/metrics_jmx/flink-metrics-jmx-1.12-SNAPSHOT.jar is closed by DFSClient_NONMAPREDUCE_331660630_1 2020-05-22T21:23:09.9297907Z 20/05/22 21:21:06 INFO namenode.FSDirectory: Increasing replication from 1 to 1 for /user/hadoop-user/.flink/application_1590182392090_0001/plugins/metrics_jmx/flink-metrics-jmx-1.12-SNAPSHOT.jar 2020-05-22T21:23:09.9299045Z 20/05/22 21:21:06 INFO hdfs.StateChange: BLOCK* allocate blk_1073741836_1012, replicas=172.22.0.5:50010 for /user/hadoop-user/.flink/application_1590182392090_0001/plugins/README.txt 2020-05-22T21:23:09.9300180Z 20/05/22 21:21:06 INFO hdfs.StateChange: DIR* completeFile: /user/hadoop-user/.flink/application_1590182392090_0001/plugins/README.txt is closed by DFSClient_NONMAPREDUCE_331660630_1 2020-05-22T21:23:09.9301203Z 20/05/22 21:21:06 INFO namenode.FSDirectory: Increasing replication from 1 to 1 for /user/hadoop-user/.flink/application_1590182392090_0001/plugins/README.txt 2020-05-22T21:23:09.9302280Z 20/05/22 21:21:06 INFO hdfs.StateChange: BLOCK* allocate blk_1073741837_1013, replicas=172.22.0.5:50010 for /user/hadoop-user/.flink/application_1590182392090_0001/WordCount.jar 2020-05-22T21:23:09.9303435Z 20/05/22 21:21:06 INFO namenode.FSNamesystem: BLOCK* blk_1073741837_1013 is COMMITTED but not COMPLETE(numNodes= 0 < minimum = 1) in file /user/hadoop-user/.flink/application_1590182392090_0001/WordCount.jar 2020-05-22T21:23:09.9304557Z 20/05/22 21:21:06 INFO hdfs.StateChange: DIR* completeFile: /user/hadoop-user/.flink/application_1590182392090_0001/WordCount.jar is closed by DFSClient_NONMAPREDUCE_331660630_1 2020-05-22T21:23:09.9305580Z 20/05/22 21:21:06 INFO namenode.FSDirectory: Increasing replication from 1 to 1 for 
/user/hadoop-user/.flink/application_1590182392090_0001/WordCount.jar 2020-05-22T21:23:09.9306681Z 20/05/22 21:21:06 INFO hdfs.StateChange: BLOCK* allocate blk_1073741838_1014, replicas=172.22.0.4:50010 for /user/hadoop-user/.flink/application_1590182392090_0001/flink-dist_2.11-1.12-SNAPSHOT.jar 2020-05-22T21:23:09.9307628Z 20/05/22 21:21:18 INFO ipc.Server: Auth successful for [hidden email] (auth:KERBEROS) 2020-05-22T21:23:09.9308685Z 20/05/22 21:21:18 INFO authorize.ServiceAuthorizationManager: Authorization successful for [hidden email] (auth:KERBEROS) for protocol=interface org.apache.hadoop.hdfs.protocol.ClientProtocol 2020-05-22T21:23:09.9309881Z 20/05/22 21:21:18 INFO hdfs.StateChange: DIR* completeFile: /user/hadoop-user/.flink/application_1590182392090_0001/flink-dist_2.11-1.12-SNAPSHOT.jar is closed by DFSClient_NONMAPREDUCE_331660630_1 2020-05-22T21:23:09.9310990Z 20/05/22 21:21:18 INFO namenode.FSDirectory: Increasing replication from 1 to 1 for /user/hadoop-user/.flink/application_1590182392090_0001/flink-dist_2.11-1.12-SNAPSHOT.jar 2020-05-22T21:23:09.9312231Z 20/05/22 21:21:18 INFO hdfs.StateChange: BLOCK* allocate blk_1073741839_1015, replicas=172.22.0.5:50010 for /user/hadoop-user/.flink/application_1590182392090_0001/application_1590182392090_0001-flink-conf.yaml7266271465280845379.tmp 2020-05-22T21:23:09.9313664Z 20/05/22 21:21:18 INFO hdfs.StateChange: DIR* completeFile: /user/hadoop-user/.flink/application_1590182392090_0001/application_1590182392090_0001-flink-conf.yaml7266271465280845379.tmp is closed by DFSClient_NONMAPREDUCE_331660630_1 2020-05-22T21:23:09.9314924Z 20/05/22 21:21:18 INFO namenode.FSDirectory: Increasing replication from 1 to 1 for /user/hadoop-user/.flink/application_1590182392090_0001/application_1590182392090_0001-flink-conf.yaml7266271465280845379.tmp 2020-05-22T21:23:09.9316193Z 20/05/22 21:21:18 INFO hdfs.StateChange: BLOCK* allocate blk_1073741840_1016, replicas=172.22.0.5:50010 for 
/user/hadoop-user/.flink/application_1590182392090_0001/hadoop-user.keytab 2020-05-22T21:23:09.9317299Z 20/05/22 21:21:18 INFO hdfs.StateChange: DIR* completeFile: /user/hadoop-user/.flink/application_1590182392090_0001/hadoop-user.keytab is closed by DFSClient_NONMAPREDUCE_331660630_1 2020-05-22T21:23:09.9318341Z 20/05/22 21:21:18 INFO namenode.FSDirectory: Increasing replication from 1 to 1 for /user/hadoop-user/.flink/application_1590182392090_0001/hadoop-user.keytab 2020-05-22T21:23:09.9319416Z 20/05/22 21:21:18 INFO delegation.AbstractDelegationTokenSecretManager: Creating password for identifier: (HDFS_DELEGATION_TOKEN token 1 for hadoop-user), currentKey: 2 2020-05-22T21:23:09.9320397Z 20/05/22 21:21:20 INFO ipc.Server: Auth successful for yarn/[hidden email] (auth:KERBEROS) 2020-05-22T21:23:09.9321569Z 20/05/22 21:21:20 INFO authorize.ServiceAuthorizationManager: Authorization successful for yarn/[hidden email] (auth:KERBEROS) for protocol=interface org.apache.hadoop.hdfs.protocol.ClientProtocol 2020-05-22T21:23:09.9322842Z 20/05/22 21:21:20 INFO delegation.AbstractDelegationTokenSecretManager: Token renewal for identifier: (HDFS_DELEGATION_TOKEN token 1 for hadoop-user); total currentTokens 1 2020-05-22T21:23:09.9323818Z 20/05/22 21:21:21 INFO ipc.Server: Auth successful for yarn/[hidden email] (auth:KERBEROS) 2020-05-22T21:23:09.9324997Z 20/05/22 21:21:21 INFO authorize.ServiceAuthorizationManager: Authorization successful for yarn/[hidden email] (auth:KERBEROS) for protocol=interface org.apache.hadoop.hdfs.protocol.ClientProtocol 2020-05-22T21:23:09.9325992Z 20/05/22 21:21:21 INFO ipc.Server: Auth successful for [hidden email] (auth:TOKEN) 2020-05-22T21:23:09.9327015Z 20/05/22 21:21:21 INFO authorize.ServiceAuthorizationManager: Authorization successful for [hidden email] (auth:TOKEN) for protocol=interface org.apache.hadoop.hdfs.protocol.ClientProtocol 2020-05-22T21:23:09.9327964Z 20/05/22 21:21:23 INFO ipc.Server: Auth successful for [hidden email] 
(auth:TOKEN) 2020-05-22T21:23:09.9329000Z 20/05/22 21:21:23 INFO authorize.ServiceAuthorizationManager: Authorization successful for [hidden email] (auth:TOKEN) for protocol=interface org.apache.hadoop.hdfs.protocol.ClientProtocol 2020-05-22T21:23:09.9329931Z 20/05/22 21:21:37 INFO ipc.Server: Auth successful for [hidden email] (auth:TOKEN) 2020-05-22T21:23:09.9330970Z 20/05/22 21:21:37 INFO authorize.ServiceAuthorizationManager: Authorization successful for [hidden email] (auth:TOKEN) for protocol=interface org.apache.hadoop.hdfs.protocol.ClientProtocol 2020-05-22T21:23:09.9332065Z 20/05/22 21:21:53 INFO ipc.Server: Auth successful for [hidden email] (auth:KERBEROS) 2020-05-22T21:23:09.9333134Z 20/05/22 21:21:53 INFO authorize.ServiceAuthorizationManager: Authorization successful for [hidden email] (auth:KERBEROS) for protocol=interface org.apache.hadoop.hdfs.protocol.ClientProtocol 2020-05-22T21:23:09.9334075Z 20/05/22 21:21:53 INFO namenode.FSEditLog: Number of transactions: 125 Total time for transactions(ms): 27 Number of transactions batched in Syncs: 25 Number of syncs: 100 SyncTimes(ms): 1225 2020-05-22T21:23:09.9335235Z 20/05/22 21:21:58 INFO ipc.Server: Auth successful for yarn/[hidden email] (auth:KERBEROS) 2020-05-22T21:23:09.9336419Z 20/05/22 21:21:58 INFO authorize.ServiceAuthorizationManager: Authorization successful for yarn/[hidden email] (auth:KERBEROS) for protocol=interface org.apache.hadoop.hdfs.protocol.ClientProtocol 2020-05-22T21:23:09.9337414Z 20/05/22 21:21:58 INFO ipc.Server: Auth successful for [hidden email] (auth:TOKEN) 2020-05-22T21:23:09.9338438Z 20/05/22 21:21:58 INFO authorize.ServiceAuthorizationManager: Authorization successful for [hidden email] (auth:TOKEN) for protocol=interface org.apache.hadoop.hdfs.protocol.ClientProtocol 2020-05-22T21:23:09.9339499Z 20/05/22 21:22:02 INFO ipc.Server: Auth successful for [hidden email] (auth:TOKEN) 2020-05-22T21:23:09.9340538Z 20/05/22 21:22:02 INFO authorize.ServiceAuthorizationManager: 
Authorization successful for [hidden email] (auth:TOKEN) for protocol=interface org.apache.hadoop.hdfs.protocol.ClientProtocol 2020-05-22T21:23:09.9341466Z 20/05/22 21:22:21 INFO ipc.Server: Auth successful for [hidden email] (auth:TOKEN) 2020-05-22T21:23:09.9342501Z 20/05/22 21:22:22 INFO authorize.ServiceAuthorizationManager: Authorization successful for [hidden email] (auth:TOKEN) for protocol=interface org.apache.hadoop.hdfs.protocol.ClientProtocol 2020-05-22T21:23:09.9343432Z 20/05/22 21:22:43 INFO ipc.Server: Auth successful for [hidden email] (auth:KERBEROS) 2020-05-22T21:23:09.9344482Z 20/05/22 21:22:43 INFO authorize.ServiceAuthorizationManager: Authorization successful for [hidden email] (auth:KERBEROS) for protocol=interface org.apache.hadoop.hdfs.protocol.ClientProtocol 2020-05-22T21:23:09.9345443Z 20/05/22 21:22:57 INFO ipc.Server: Auth successful for [hidden email] (auth:KERBEROS) 2020-05-22T21:23:09.9346472Z 20/05/22 21:22:57 INFO authorize.ServiceAuthorizationManager: Authorization successful for [hidden email] (auth:KERBEROS) for protocol=interface org.apache.hadoop.hdfs.protocol.ClientProtocol 2020-05-22T21:23:09.9347408Z 20/05/22 21:22:57 INFO namenode.FSEditLog: Number of transactions: 126 Total time for transactions(ms): 27 Number of transactions batched in Syncs: 25 Number of syncs: 101 SyncTimes(ms): 1226 2020-05-22T21:23:09.9348376Z 20/05/22 21:22:58 INFO ipc.Server: Auth successful for [hidden email] (auth:KERBEROS) 2020-05-22T21:23:09.9349405Z 20/05/22 21:22:58 INFO authorize.ServiceAuthorizationManager: Authorization successful for [hidden email] (auth:KERBEROS) for protocol=interface org.apache.hadoop.hdfs.protocol.ClientProtocol 2020-05-22T21:23:09.9350498Z 20/05/22 21:23:03 INFO hdfs.StateChange: BLOCK* allocate blk_1073741841_1017, replicas=172.22.0.4:50010 for /user/hadoop-user/wc-out-26536/3 2020-05-22T21:23:09.9351474Z 20/05/22 21:23:03 INFO hdfs.StateChange: BLOCK* allocate blk_1073741842_1018, replicas=172.22.0.5:50010 for 
/user/hadoop-user/wc-out-26536/1 2020-05-22T21:23:09.9352421Z 20/05/22 21:23:04 INFO hdfs.StateChange: DIR* completeFile: /user/hadoop-user/wc-out-26536/3 is closed by DFSClient_NONMAPREDUCE_-646397215_66 2020-05-22T21:23:09.9353363Z 20/05/22 21:23:04 INFO hdfs.StateChange: DIR* completeFile: /user/hadoop-user/wc-out-26536/1 is closed by DFSClient_NONMAPREDUCE_-587498064_63 2020-05-22T21:23:09.9354172Z 20/05/22 21:23:04 INFO ipc.Server: Auth successful for [hidden email] (auth:KERBEROS) 2020-05-22T21:23:09.9355218Z 20/05/22 21:23:04 INFO authorize.ServiceAuthorizationManager: Authorization successful for [hidden email] (auth:KERBEROS) for protocol=interface org.apache.hadoop.hdfs.protocol.ClientProtocol 2020-05-22T21:23:09.9356171Z 20/05/22 21:23:07 INFO ipc.Server: Auth successful for [hidden email] (auth:TOKEN) 2020-05-22T21:23:09.9357188Z 20/05/22 21:23:07 INFO authorize.ServiceAuthorizationManager: Authorization successful for [hidden email] (auth:TOKEN) for protocol=interface org.apache.hadoop.hdfs.protocol.ClientProtocol 2020-05-22T21:23:09.9358257Z 20/05/22 21:23:07 INFO ipc.Server: Auth successful for [hidden email] (auth:TOKEN) 2020-05-22T21:23:09.9359300Z 20/05/22 21:23:07 INFO authorize.ServiceAuthorizationManager: Authorization successful for [hidden email] (auth:TOKEN) for protocol=interface org.apache.hadoop.hdfs.protocol.ClientProtocol 2020-05-22T21:23:09.9360547Z 20/05/22 21:23:08 INFO hdfs.StateChange: BLOCK* allocate blk_1073741843_1019, replicas=172.22.0.5:50010 for /tmp/logs/hadoop-user/logs/application_1590182392090_0001/slave2.docker-hadoop-cluster-network_41369.tmp 2020-05-22T21:23:09.9361929Z 20/05/22 21:23:08 INFO hdfs.StateChange: BLOCK* allocate blk_1073741844_1020, replicas=172.22.0.4:50010 for /tmp/logs/hadoop-user/logs/application_1590182392090_0001/slave1.docker-hadoop-cluster-network_42227.tmp 2020-05-22T21:23:09.9363249Z 20/05/22 21:23:08 INFO namenode.FSNamesystem: BLOCK* blk_1073741843_1019 is COMMITTED but not COMPLETE(numNodes= 
0 < minimum = 1) in file /tmp/logs/hadoop-user/logs/application_1590182392090_0001/slave2.docker-hadoop-cluster-network_41369.tmp 2020-05-22T21:23:09.9364519Z 20/05/22 21:23:08 INFO hdfs.StateChange: DIR* completeFile: /tmp/logs/hadoop-user/logs/application_1590182392090_0001/slave1.docker-hadoop-cluster-network_42227.tmp is closed by DFSClient_NONMAPREDUCE_1898733664_91 2020-05-22T21:23:09.9365760Z 20/05/22 21:23:08 INFO hdfs.StateChange: DIR* completeFile: /tmp/logs/hadoop-user/logs/application_1590182392090_0001/slave2.docker-hadoop-cluster-network_41369.tmp is closed by DFSClient_NONMAPREDUCE_-1596104169_91 2020-05-22T21:23:09.9366734Z /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-22195494660/logs/hadoop/namenode.out: 2020-05-22T21:23:09.9367518Z /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-22195494660/logs/hadoop/resourcemanager.err: 2020-05-22T21:23:09.9368025Z 20/05/22 21:19:39 INFO resourcemanager.ResourceManager: STARTUP_MSG: 2020-05-22T21:23:09.9368431Z /************************************************************ 2020-05-22T21:23:09.9368734Z STARTUP_MSG: Starting ResourceManager 2020-05-22T21:23:09.9369003Z STARTUP_MSG: user = yarn 2020-05-22T21:23:09.9369513Z STARTUP_MSG: host = master.docker-hadoop-cluster-network/172.22.0.3 2020-05-22T21:23:09.9369836Z STARTUP_MSG: args = [] 2020-05-22T21:23:09.9370074Z STARTUP_MSG: version = 2.8.4 2020-05-22T21:23:09.9424828Z STARTUP_MSG: classpath = 
/usr/local/hadoop/etc/hadoop:/usr/local/hadoop/etc/hadoop:/usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/jsch-0.1.54.jar:/usr/local/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/common/lib/json-smart-1.3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/local/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/common/lib/httpclient-4.5.2.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/usr/local/hadoop/share/hadoop/common/lib/zookeeper-3.4.6.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-math3-3.1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/usr/local/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/htrace-core4-4.0.1-incubating.jar:/usr/local/hadoop/share/hadoop/
common/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/jcip-annotations-1.0-1.jar:/usr/local/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/local/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-annotations-2.8.4.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-client-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/lib/hamcrest-core-1.3.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jsr305-3.0.0.jar:/usr/local/hadoop/share/hadoop/common/lib/nimbus-jose-jwt-4.41.1.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-auth-2.8.4.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/usr/local/hadoop/share/hadoop/common/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/junit-4.11.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-sslengine-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/gson-2.2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/usr/local/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jets3t-0.9.0.jar:/usr/local/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-collections-3.2.2.jar:/usr/local/hadoop/share/hadoop/common/lib/httpcore-4.4.4.ja
r:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-framework-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.8.4-tests.jar:/usr/local/hadoop/share/hadoop/common/hadoop-nfs-2.8.4.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.8.4.jar:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/okio-1.4.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/okhttp-2.4.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/htrace-core4-4.0.1-incubating.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/usr/local/hadoop/share/hadoop/
hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/hadoop-hdfs-client-2.8.4.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-2.8.4.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.8.4.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-native-client-2.8.4-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-client-2.8.4-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-native-client-2.8.4.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.8.4-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-client-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-client-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/json-io-2.5.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/fst-2.50.jar:/usr/local/hadoop/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hado
op/yarn/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/curator-client-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-math-2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/yarn/lib/javassist-3.18.1-GA.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/curator-test-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/java-util-1.9.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.8.4.jar:/usr/lo
cal/hadoop/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-registry-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/junit-4.11.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/netty-
3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.8.4-tests.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.4.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-registry-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/servlet-ap
i-2.5.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-client-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/json-io-2.5.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/fst-2.50.jar:/usr/local/hadoop/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/curator-client-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jsr30
5-3.0.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-math-2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/yarn/lib/javassist-3.18.1-GA.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/curator-test-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/java-util-1.9.0.jar:/usr/local/hadoop/etc/hadoop/rm-config/log4j.properties 2020-05-22T21:23:09.9506435Z STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r 17e75c2a11685af3e043aa5e604dc831e5b14674; compiled by 'jdu' on 2018-05-08T02:50Z 2020-05-22T21:23:09.9507039Z STARTUP_MSG: java = 1.8.0_131 2020-05-22T21:23:09.9507362Z ************************************************************/ 2020-05-22T21:23:09.9507798Z 20/05/22 21:19:39 INFO resourcemanager.ResourceManager: registered UNIX signal handlers for [TERM, HUP, INT] 2020-05-22T21:23:09.9508698Z 20/05/22 21:19:43 INFO conf.Configuration: found resource core-site.xml at file:/usr/local/hadoop-2.8.4/etc/hadoop/core-site.xml 2020-05-22T21:23:09.9509215Z 20/05/22 21:19:47 INFO security.Groups: clearing userToGroupsMap cache 2020-05-22T21:23:09.9511116Z 20/05/22 21:19:48 INFO conf.Configuration: found resource yarn-site.xml at file:/usr/local/hadoop-2.8.4/etc/hadoop/yarn-site.xml 2020-05-22T21:23:09.9512171Z 20/05/22 21:19:49 INFO security.UserGroupInformation: Login successful for user yarn/[hidden email] using keytab file /etc/security/keytabs/yarn.keytab 2020-05-22T21:23:09.9513023Z 20/05/22 21:19:49 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.RMFatalEventType for class 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMFatalEventDispatcher 2020-05-22T21:23:09.9514015Z 20/05/22 21:19:49 INFO security.NMTokenSecretManagerInRM: NMTokenKeyRollingInterval: 86400000ms and NMTokenKeyActivationDelay: 900000ms 2020-05-22T21:23:09.9514770Z 20/05/22 21:19:49 INFO security.RMContainerTokenSecretManager: ContainerTokenKeyRollingInterval: 86400000ms and ContainerTokenKeyActivationDelay: 900000ms 2020-05-22T21:23:09.9515508Z 20/05/22 21:19:49 INFO security.AMRMTokenSecretManager: AMRMTokenKeyRollingInterval: 86400000ms and AMRMTokenKeyActivationDelay: 900000 ms 2020-05-22T21:23:09.9516429Z 20/05/22 21:19:49 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStoreEventType for class org.apache.hadoop.yarn.server.resourcemanager.recovery.RMStateStore$ForwardingEventHandler 2020-05-22T21:23:09.9517357Z 20/05/22 21:19:49 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.NodesListManagerEventType for class org.apache.hadoop.yarn.server.resourcemanager.NodesListManager 2020-05-22T21:23:09.9518156Z 20/05/22 21:19:49 INFO resourcemanager.ResourceManager: Using Scheduler: org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler 2020-05-22T21:23:09.9518990Z 20/05/22 21:19:49 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.scheduler.event.SchedulerEventType for class org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$SchedulerEventDispatcher 2020-05-22T21:23:09.9520009Z 20/05/22 21:19:49 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppEventType for class org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationEventDispatcher 2020-05-22T21:23:09.9520991Z 20/05/22 21:19:49 INFO event.AsyncDispatcher: Registering class 
org.apache.hadoop.yarn.server.resourcemanager.rmapp.attempt.RMAppAttemptEventType for class org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$ApplicationAttemptEventDispatcher 2020-05-22T21:23:09.9521969Z 20/05/22 21:19:49 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.rmnode.RMNodeEventType for class org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$NodeEventDispatcher 2020-05-22T21:23:09.9522963Z 20/05/22 21:19:49 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties 2020-05-22T21:23:09.9523468Z 20/05/22 21:19:50 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s). 2020-05-22T21:23:09.9523961Z 20/05/22 21:19:50 INFO impl.MetricsSystemImpl: ResourceManager metrics system started 2020-05-22T21:23:09.9524603Z 20/05/22 21:19:50 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.RMAppManagerEventType for class org.apache.hadoop.yarn.server.resourcemanager.RMAppManager 2020-05-22T21:23:09.9525507Z 20/05/22 21:19:50 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.amlauncher.AMLauncherEventType for class org.apache.hadoop.yarn.server.resourcemanager.amlauncher.ApplicationMasterLauncher 2020-05-22T21:23:09.9526207Z 20/05/22 21:19:50 INFO resourcemanager.RMNMInfo: Registered RMNMInfo MBean 2020-05-22T21:23:09.9526729Z 20/05/22 21:19:50 INFO security.YarnAuthorizationProvider: org.apache.hadoop.yarn.security.ConfiguredYarnAuthorizer is instiantiated. 
2020-05-22T21:23:09.9527280Z 20/05/22 21:19:50 INFO util.HostsFileReader: Refreshing hosts (include/exclude) list 2020-05-22T21:23:09.9528166Z 20/05/22 21:19:50 INFO conf.Configuration: found resource capacity-scheduler.xml at file:/usr/local/hadoop-2.8.4/etc/hadoop/capacity-scheduler.xml 2020-05-22T21:23:09.9528793Z 20/05/22 21:19:50 INFO capacity.CapacitySchedulerConfiguration: max alloc mb per queue for root is undefined 2020-05-22T21:23:09.9529353Z 20/05/22 21:19:50 INFO capacity.CapacitySchedulerConfiguration: max alloc vcore per queue for root is undefined 2020-05-22T21:23:09.9530667Z 20/05/22 21:19:50 INFO capacity.ParentQueue: root, capacity=1.0, asboluteCapacity=1.0, maxCapacity=1.0, asboluteMaxCapacity=1.0, state=RUNNING, acls=ADMINISTER_QUEUE:*SUBMIT_APP:*, labels=*,, offswitchPerHeartbeatLimit = 1, reservationsContinueLooking=true 2020-05-22T21:23:09.9532026Z 20/05/22 21:19:50 INFO capacity.ParentQueue: Initialized parent-queue root name=root, fullname=root 2020-05-22T21:23:09.9532601Z 20/05/22 21:19:50 INFO capacity.CapacitySchedulerConfiguration: max alloc mb per queue for root.default is undefined 2020-05-22T21:23:09.9533192Z 20/05/22 21:19:50 INFO capacity.CapacitySchedulerConfiguration: max alloc vcore per queue for root.default is undefined 2020-05-22T21:23:09.9533791Z 20/05/22 21:19:50 INFO capacity.LeafQueue: Initializing default 2020-05-22T21:23:09.9534152Z capacity = 1.0 [= (float) configuredCapacity / 100 ] 2020-05-22T21:23:09.9534494Z asboluteCapacity = 1.0 [= parentAbsoluteCapacity * capacity ] 2020-05-22T21:23:09.9534808Z maxCapacity = 1.0 [= configuredMaxCapacity ] 2020-05-22T21:23:09.9535230Z absoluteMaxCapacity = 1.0 [= 1.0 maximumCapacity undefined, (parentAbsoluteMaxCapacity * maximumCapacity) / 100 otherwise ] 2020-05-22T21:23:09.9535627Z userLimit = 100 [= configuredUserLimit ] 2020-05-22T21:23:09.9535932Z userLimitFactor = 1.0 [= configuredUserLimitFactor ] 2020-05-22T21:23:09.9536373Z maxApplications = 10000 [= 
configuredMaximumSystemApplicationsPerQueue or (int)(configuredMaximumSystemApplications * absoluteCapacity)] 2020-05-22T21:23:09.9536908Z maxApplicationsPerUser = 10000 [= (int)(maxApplications * (userLimit / 100.0f) * userLimitFactor) ] 2020-05-22T21:23:09.9537369Z usedCapacity = 0.0 [= usedResourcesMemory / (clusterResourceMemory * absoluteCapacity)] 2020-05-22T21:23:09.9537769Z absoluteUsedCapacity = 0.0 [= usedResourcesMemory / clusterResourceMemory] 2020-05-22T21:23:09.9538163Z maxAMResourcePerQueuePercent = 0.1 [= configuredMaximumAMResourcePercent ] 2020-05-22T21:23:09.9538930Z minimumAllocationFactor = 0.8779297 [= (float)(maximumAllocationMemory - minimumAllocationMemory) / maximumAllocationMemory ] 2020-05-22T21:23:09.9539433Z maximumAllocation = <memory:8192, vCores:4> [= configuredMaxAllocation ] 2020-05-22T21:23:09.9539795Z numContainers = 0 [= currentNumContainers ] 2020-05-22T21:23:09.9540060Z state = RUNNING [= configuredState ] 2020-05-22T21:23:09.9540383Z acls = ADMINISTER_QUEUE:*SUBMIT_APP:* [= configuredAcls ] 2020-05-22T21:23:09.9540668Z nodeLocalityDelay = 40 2020-05-22T21:23:09.9540874Z labels=*, 2020-05-22T21:23:09.9541076Z reservationsContinueLooking = true 2020-05-22T21:23:09.9541325Z preemptionDisabled = true 2020-05-22T21:23:09.9541555Z defaultAppPriorityPerQueue = 0 2020-05-22T21:23:09.9542186Z 20/05/22 21:19:50 INFO capacity.CapacityScheduler: Initialized queue: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>, usedCapacity=0.0, absoluteUsedCapacity=0.0, numApps=0, numContainers=0 2020-05-22T21:23:09.9543183Z 20/05/22 21:19:50 INFO capacity.CapacityScheduler: Initialized queue: root: numChildQueue= 1, capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, vCores:0>usedCapacity=0.0, numApps=0, numContainers=0 2020-05-22T21:23:09.9544152Z 20/05/22 21:19:50 INFO capacity.CapacityScheduler: Initialized root queue root: numChildQueue= 1, capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:0, 
vCores:0>usedCapacity=0.0, numApps=0, numContainers=0 2020-05-22T21:23:09.9544919Z 20/05/22 21:19:50 INFO capacity.CapacityScheduler: Initialized queue mappings, override: false 2020-05-22T21:23:09.9545892Z 20/05/22 21:19:50 INFO capacity.CapacityScheduler: Initialized CapacityScheduler with calculator=class org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator, minimumAllocation=<<memory:1000, vCores:1>>, maximumAllocation=<<memory:8192, vCores:4>>, asynchronousScheduling=false, asyncScheduleInterval=5ms 2020-05-22T21:23:09.9547053Z 20/05/22 21:19:50 INFO conf.Configuration: dynamic-resources.xml not found 2020-05-22T21:23:09.9547865Z 20/05/22 21:19:50 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.metrics.SystemMetricsEventType for class org.apache.hadoop.yarn.server.resourcemanager.metrics.SystemMetricsPublisher$ForwardingEventHandler 2020-05-22T21:23:09.9548880Z 20/05/22 21:19:50 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.metrics.SystemMetricsEventType for class org.apache.hadoop.yarn.server.resourcemanager.metrics.SystemMetricsPublisher$ForwardingEventHandler 2020-05-22T21:23:09.9549891Z 20/05/22 21:19:50 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.metrics.SystemMetricsEventType for class org.apache.hadoop.yarn.server.resourcemanager.metrics.SystemMetricsPublisher$ForwardingEventHandler 2020-05-22T21:23:09.9550924Z 20/05/22 21:19:50 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.metrics.SystemMetricsEventType for class org.apache.hadoop.yarn.server.resourcemanager.metrics.SystemMetricsPublisher$ForwardingEventHandler 2020-05-22T21:23:09.9551924Z 20/05/22 21:19:50 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.metrics.SystemMetricsEventType for class 
org.apache.hadoop.yarn.server.resourcemanager.metrics.SystemMetricsPublisher$ForwardingEventHandler 2020-05-22T21:23:09.9552917Z 20/05/22 21:19:50 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.metrics.SystemMetricsEventType for class org.apache.hadoop.yarn.server.resourcemanager.metrics.SystemMetricsPublisher$ForwardingEventHandler 2020-05-22T21:23:09.9553909Z 20/05/22 21:19:50 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.metrics.SystemMetricsEventType for class org.apache.hadoop.yarn.server.resourcemanager.metrics.SystemMetricsPublisher$ForwardingEventHandler 2020-05-22T21:23:09.9554885Z 20/05/22 21:19:50 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.metrics.SystemMetricsEventType for class org.apache.hadoop.yarn.server.resourcemanager.metrics.SystemMetricsPublisher$ForwardingEventHandler 2020-05-22T21:23:09.9555877Z 20/05/22 21:19:50 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.metrics.SystemMetricsEventType for class org.apache.hadoop.yarn.server.resourcemanager.metrics.SystemMetricsPublisher$ForwardingEventHandler 2020-05-22T21:23:09.9556870Z 20/05/22 21:19:50 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.server.resourcemanager.metrics.SystemMetricsEventType for class org.apache.hadoop.yarn.server.resourcemanager.metrics.SystemMetricsPublisher$ForwardingEventHandler 2020-05-22T21:23:09.9557618Z 20/05/22 21:19:50 INFO metrics.SystemMetricsPublisher: YARN system metrics publishing service is enabled 2020-05-22T21:23:09.9558570Z 20/05/22 21:19:52 INFO impl.TimelineClientImpl: Timeline service address: http://master.docker-hadoop-cluster-network:8188/ws/v1/timeline/ 2020-05-22T21:23:09.9559180Z 20/05/22 21:19:52 INFO resourcemanager.ResourceManager: Transitioning to active state 2020-05-22T21:23:09.9559615Z 20/05/22 21:19:52 INFO util.JvmPauseMonitor: Starting 
JVM pause monitor 2020-05-22T21:23:09.9560030Z 20/05/22 21:19:52 INFO recovery.RMStateStore: Updating AMRMToken 2020-05-22T21:23:09.9560725Z 20/05/22 21:19:52 INFO security.RMContainerTokenSecretManager: Rolling master-key for container-tokens 2020-05-22T21:23:09.9561465Z 20/05/22 21:19:52 INFO security.NMTokenSecretManagerInRM: Rolling master-key for nm-tokens 2020-05-22T21:23:09.9562046Z 20/05/22 21:19:52 INFO delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens 2020-05-22T21:23:09.9562622Z 20/05/22 21:19:52 INFO security.RMDelegationTokenSecretManager: storing master key with keyID 1 2020-05-22T21:23:09.9563088Z 20/05/22 21:19:52 INFO recovery.RMStateStore: Storing RMDTMasterKey. 2020-05-22T21:23:09.9563736Z 20/05/22 21:19:52 INFO event.AsyncDispatcher: Registering class org.apache.hadoop.yarn.nodelabels.event.NodeLabelsStoreEventType for class org.apache.hadoop.yarn.nodelabels.CommonNodeLabelsManager$ForwardingEventHandler 2020-05-22T21:23:09.9564640Z 20/05/22 21:19:52 INFO delegation.AbstractDelegationTokenSecretManager: Starting expired delegation token remover thread, tokenRemoverScanInterval=60 min(s) 2020-05-22T21:23:09.9565330Z 20/05/22 21:19:52 INFO delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens 2020-05-22T21:23:09.9565896Z 20/05/22 21:19:52 INFO security.RMDelegationTokenSecretManager: storing master key with keyID 2 2020-05-22T21:23:09.9566417Z 20/05/22 21:19:52 INFO recovery.RMStateStore: Storing RMDTMasterKey. 
2020-05-22T21:23:09.9567071Z 20/05/22 21:19:52 INFO ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 5000 scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler 2020-05-22T21:23:09.9567718Z 20/05/22 21:19:52 INFO ipc.Server: Starting Socket Reader #1 for port 8031 2020-05-22T21:23:09.9568243Z 20/05/22 21:19:52 INFO pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.yarn.server.api.ResourceTrackerPB to the server 2020-05-22T21:23:09.9569180Z 20/05/22 21:19:52 INFO conf.Configuration: found resource hadoop-policy.xml at file:/usr/local/hadoop-2.8.4/etc/hadoop/hadoop-policy.xml 2020-05-22T21:23:09.9569729Z 20/05/22 21:19:53 INFO ipc.Server: IPC Server Responder: starting 2020-05-22T21:23:09.9570162Z 20/05/22 21:19:53 INFO ipc.Server: IPC Server listener on 8031: starting 2020-05-22T21:23:09.9570810Z 20/05/22 21:19:53 INFO ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 5000 scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler 2020-05-22T21:23:09.9571628Z 20/05/22 21:19:53 INFO ipc.Server: Starting Socket Reader #1 for port 8030 2020-05-22T21:23:09.9572148Z 20/05/22 21:19:53 INFO pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.yarn.api.ApplicationMasterProtocolPB to the server 2020-05-22T21:23:09.9573130Z 20/05/22 21:19:53 INFO conf.Configuration: found resource hadoop-policy.xml at file:/usr/local/hadoop-2.8.4/etc/hadoop/hadoop-policy.xml 2020-05-22T21:23:09.9573678Z 20/05/22 21:19:53 INFO ipc.Server: IPC Server Responder: starting 2020-05-22T21:23:09.9574094Z 20/05/22 21:19:53 INFO ipc.Server: IPC Server listener on 8030: starting 2020-05-22T21:23:09.9574897Z 20/05/22 21:19:54 INFO ipc.Server: Auth successful for yarn/[hidden email] (auth:KERBEROS) 2020-05-22T21:23:09.9576079Z 20/05/22 21:19:54 INFO authorize.ServiceAuthorizationManager: Authorization successful for yarn/[hidden email] (auth:KERBEROS) for protocol=interface 
org.apache.hadoop.yarn.server.api.ResourceTrackerPB 2020-05-22T21:23:09.9577182Z 20/05/22 21:19:54 INFO ipc.Server: Auth successful for yarn/[hidden email] (auth:KERBEROS) 2020-05-22T21:23:09.9578379Z 20/05/22 21:19:54 INFO authorize.ServiceAuthorizationManager: Authorization successful for yarn/[hidden email] (auth:KERBEROS) for protocol=interface org.apache.hadoop.yarn.server.api.ResourceTrackerPB 2020-05-22T21:23:09.9579320Z 20/05/22 21:19:54 INFO ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 5000 scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler 2020-05-22T21:23:09.9579973Z 20/05/22 21:19:54 INFO ipc.Server: Starting Socket Reader #1 for port 8032 2020-05-22T21:23:09.9580501Z 20/05/22 21:19:54 INFO pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.yarn.api.ApplicationClientProtocolPB to the server 2020-05-22T21:23:09.9581414Z 20/05/22 21:19:54 INFO conf.Configuration: found resource hadoop-policy.xml at file:/usr/local/hadoop-2.8.4/etc/hadoop/hadoop-policy.xml 2020-05-22T21:23:09.9581961Z 20/05/22 21:19:54 INFO ipc.Server: IPC Server Responder: starting 2020-05-22T21:23:09.9582395Z 20/05/22 21:19:54 INFO ipc.Server: IPC Server listener on 8032: starting 2020-05-22T21:23:09.9582943Z 20/05/22 21:19:54 INFO resourcemanager.ResourceManager: Transitioned to active state 2020-05-22T21:23:09.9583808Z 20/05/22 21:19:55 INFO client.AHSProxy: Connecting to Application History server at master.docker-hadoop-cluster-network/172.22.0.3:10200 2020-05-22T21:23:09.9585139Z 20/05/22 21:19:55 INFO resourcemanager.ResourceTrackerService: NodeManager from node slave1.docker-hadoop-cluster-network(cmPort: 42227 httpPort: 8042) registered with capability: <memory:2500, vCores:1>, assigned nodeId slave1.docker-hadoop-cluster-network:42227 2020-05-22T21:23:09.9588104Z 20/05/22 21:19:55 INFO resourcemanager.ResourceTrackerService: NodeManager from node slave2.docker-hadoop-cluster-network(cmPort: 41369 httpPort: 
8042) registered with capability: <memory:2500, vCores:1>, assigned nodeId slave2.docker-hadoop-cluster-network:41369 2020-05-22T21:23:09.9589206Z 20/05/22 21:19:55 INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog 2020-05-22T21:23:09.9589833Z 20/05/22 21:19:56 INFO server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets. 2020-05-22T21:23:09.9590405Z 20/05/22 21:19:56 INFO http.HttpRequestLog: Http request log for http.requests.resourcemanager is not defined 2020-05-22T21:23:09.9591298Z 20/05/22 21:19:56 INFO http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter) 2020-05-22T21:23:09.9591954Z 20/05/22 21:19:56 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context cluster 2020-05-22T21:23:09.9593030Z 20/05/22 21:19:56 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs 2020-05-22T21:23:09.9593776Z 20/05/22 21:19:56 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static 2020-05-22T21:23:09.9594342Z 20/05/22 21:19:56 INFO http.HttpServer2: adding path spec: /cluster/* 2020-05-22T21:23:09.9594768Z 20/05/22 21:19:56 INFO http.HttpServer2: adding path spec: /ws/* 2020-05-22T21:23:09.9595185Z 20/05/22 21:19:57 INFO webapp.WebApps: Registered webapp guice modules 2020-05-22T21:23:09.9595578Z 20/05/22 21:19:57 INFO http.HttpServer2: Jetty bound to port 8088 2020-05-22T21:23:09.9596228Z 20/05/22 21:19:57 INFO mortbay.log: jetty-6.1.26 2020-05-22T21:23:09.9597129Z 20/05/22 21:19:57 INFO mortbay.log: Extract jar:file:/usr/local/hadoop-2.8.4/share/hadoop/yarn/hadoop-yarn-common-2.8.4.jar!/webapps/cluster to 
/tmp/Jetty_0_0_0_0_8088_cluster____u0rgz3/webapp 2020-05-22T21:23:09.9597794Z May 22, 2020 9:19:58 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register 2020-05-22T21:23:09.9598304Z INFO: Registering org.apache.hadoop.yarn.server.resourcemanager.webapp.JAXBContextResolver as a provider class 2020-05-22T21:23:09.9598798Z May 22, 2020 9:19:58 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register 2020-05-22T21:23:09.9599300Z INFO: Registering org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices as a root resource class 2020-05-22T21:23:09.9599786Z May 22, 2020 9:19:58 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register 2020-05-22T21:23:09.9600256Z INFO: Registering org.apache.hadoop.yarn.webapp.GenericExceptionHandler as a provider class 2020-05-22T21:23:09.9600725Z May 22, 2020 9:19:58 PM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate 2020-05-22T21:23:09.9601401Z INFO: Initiating Jersey application, version 'Jersey: 1.9 09/02/2011 11:17 AM' 2020-05-22T21:23:09.9601908Z May 22, 2020 9:19:58 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider 2020-05-22T21:23:09.9602470Z INFO: Binding org.apache.hadoop.yarn.server.resourcemanager.webapp.JAXBContextResolver to GuiceManagedComponentProvider with the scope "Singleton" 2020-05-22T21:23:09.9603163Z May 22, 2020 9:19:59 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider 2020-05-22T21:23:09.9603712Z INFO: Binding org.apache.hadoop.yarn.webapp.GenericExceptionHandler to GuiceManagedComponentProvider with the scope "Singleton" 2020-05-22T21:23:09.9604239Z May 22, 2020 9:19:59 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider 2020-05-22T21:23:09.9604811Z INFO: Binding org.apache.hadoop.yarn.server.resourcemanager.webapp.RMWebServices to GuiceManagedComponentProvider with the scope "Singleton" 
2020-05-22T21:23:09.9605399Z 20/05/22 21:19:59 INFO mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:8088
2020-05-22T21:23:09.9605965Z 20/05/22 21:19:59 INFO webapp.WebApps: Web app cluster started at 8088
2020-05-22T21:23:09.9606779Z 20/05/22 21:19:59 INFO rmnode.RMNodeImpl: slave1.docker-hadoop-cluster-network:42227 Node Transitioned from NEW to RUNNING
2020-05-22T21:23:09.9607652Z 20/05/22 21:19:59 INFO rmnode.RMNodeImpl: slave2.docker-hadoop-cluster-network:41369 Node Transitioned from NEW to RUNNING
2020-05-22T21:23:09.9608620Z 20/05/22 21:19:59 INFO capacity.CapacityScheduler: Added node slave1.docker-hadoop-cluster-network:42227 clusterResource: <memory:2500, vCores:1>
2020-05-22T21:23:09.9609631Z 20/05/22 21:19:59 INFO capacity.CapacityScheduler: Added node slave2.docker-hadoop-cluster-network:41369 clusterResource: <memory:5000, vCores:2>
2020-05-22T21:23:09.9610457Z 20/05/22 21:19:59 INFO ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 100 scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler
2020-05-22T21:23:09.9611118Z 20/05/22 21:19:59 INFO ipc.Server: Starting Socket Reader #1 for port 8033
2020-05-22T21:23:09.9611834Z 20/05/22 21:19:59 INFO pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.yarn.server.api.ResourceManagerAdministrationProtocolPB to the server
2020-05-22T21:23:09.9612850Z 20/05/22 21:19:59 INFO conf.Configuration: found resource hadoop-policy.xml at file:/usr/local/hadoop-2.8.4/etc/hadoop/hadoop-policy.xml
2020-05-22T21:23:09.9613407Z 20/05/22 21:19:59 INFO ipc.Server: IPC Server Responder: starting
2020-05-22T21:23:09.9613822Z 20/05/22 21:19:59 INFO ipc.Server: IPC Server listener on 8033: starting
2020-05-22T21:23:09.9614541Z 20/05/22 21:20:25 INFO ipc.Server: Auth successful for [hidden email] (auth:KERBEROS)
2020-05-22T21:23:09.9615602Z 20/05/22 21:20:25 INFO authorize.ServiceAuthorizationManager: Authorization successful for [hidden email] (auth:KERBEROS) for protocol=interface org.apache.hadoop.yarn.api.ApplicationClientProtocolPB
2020-05-22T21:23:09.9616582Z 20/05/22 21:20:50 INFO ipc.Server: Auth successful for [hidden email] (auth:KERBEROS)
2020-05-22T21:23:09.9617653Z 20/05/22 21:20:50 INFO authorize.ServiceAuthorizationManager: Authorization successful for [hidden email] (auth:KERBEROS) for protocol=interface org.apache.hadoop.yarn.api.ApplicationClientProtocolPB
2020-05-22T21:23:09.9619017Z 20/05/22 21:20:50 INFO resourcemanager.RMAuditLogger: USER=[hidden email] IP=172.22.0.3 OPERATION=Get Queue Info Request TARGET=ClientRMService RESULT=SUCCESS QUEUENAME=root INCLUDEAPPS=false INCLUDECHILDQUEUES=true RECURSIVE=true
2020-05-22T21:23:09.9619777Z 20/05/22 21:20:50 INFO resourcemanager.ClientRMService: Allocated new applicationId: 1
2020-05-22T21:23:09.9620532Z 20/05/22 21:21:19 INFO ipc.Server: Auth successful for [hidden email] (auth:KERBEROS)
2020-05-22T21:23:09.9621587Z 20/05/22 21:21:19 INFO authorize.ServiceAuthorizationManager: Authorization successful for [hidden email] (auth:KERBEROS) for protocol=interface org.apache.hadoop.yarn.api.ApplicationClientProtocolPB
2020-05-22T21:23:09.9622764Z 20/05/22 21:21:19 INFO capacity.CapacityScheduler: Application 'application_1590182392090_0001' is submitted without priority hence considering default queue/cluster priority: 0
2020-05-22T21:23:09.9624021Z 20/05/22 21:21:19 INFO capacity.CapacityScheduler: Priority '0' is acceptable in queue : default for application: application_1590182392090_0001 for the user: hadoop-user
2020-05-22T21:23:09.9624950Z 20/05/22 21:21:19 INFO resourcemanager.ClientRMService: Application with id 1 submitted by user hadoop-user
2020-05-22T21:23:09.9625966Z 20/05/22 21:21:19 INFO resourcemanager.RMAuditLogger: USER=hadoop-user IP=172.22.0.3 OPERATION=Submit Application Request TARGET=ClientRMService RESULT=SUCCESS APPID=application_1590182392090_0001
2020-05-22T21:23:09.9627234Z 20/05/22 21:21:19 INFO security.DelegationTokenRenewer: application_1590182392090_0001 found existing hdfs token Kind: HDFS_DELEGATION_TOKEN, Service: 172.22.0.3:9000, Ident: (HDFS_DELEGATION_TOKEN token 1 for hadoop-user)
2020-05-22T21:23:09.9628793Z 20/05/22 21:21:20 INFO security.DelegationTokenRenewer: Renewed delegation-token= [Kind: HDFS_DELEGATION_TOKEN, Service: 172.22.0.3:9000, Ident: (HDFS_DELEGATION_TOKEN token 1 for hadoop-user);exp=1590268880093; apps=[application_1590182392090_0001]]
2020-05-22T21:23:09.9629991Z 20/05/22 21:21:20 INFO impl.TimelineClientImpl: Timeline service address: http://master.docker-hadoop-cluster-network:8188/ws/v1/timeline/
2020-05-22T21:23:09.9631489Z 20/05/22 21:21:20 INFO security.DelegationTokenRenewer: Renewed delegation-token= [Kind: TIMELINE_DELEGATION_TOKEN, Service: 172.22.0.3:8188, Ident: (owner=hadoop-user, renewer=yarn, realUser=, issueDate=1590182479214, maxDate=1590787279214, sequenceNumber=1, masterKeyId=2);exp=1590268880306; apps=[application_1590182392090_0001]]
2020-05-22T21:23:09.9633410Z 20/05/22 21:21:20 INFO security.DelegationTokenRenewer: Renew Kind: TIMELINE_DELEGATION_TOKEN, Service: 172.22.0.3:8188, Ident: (owner=hadoop-user, renewer=yarn, realUser=, issueDate=1590182479214, maxDate=1590787279214, sequenceNumber=1, masterKeyId=2);exp=1590268880306; apps=[application_1590182392090_0001] in 86399953 ms, appId = [application_1590182392090_0001]
2020-05-22T21:23:09.9635160Z 20/05/22 21:21:20 INFO security.DelegationTokenRenewer: Renew Kind: HDFS_DELEGATION_TOKEN, Service: 172.22.0.3:9000, Ident: (HDFS_DELEGATION_TOKEN token 1 for hadoop-user);exp=1590268880093; apps=[application_1590182392090_0001] in 86399740 ms, appId = [application_1590182392090_0001]
2020-05-22T21:23:09.9636020Z 20/05/22 21:21:20 INFO rmapp.RMAppImpl: Storing application with id application_1590182392090_0001
2020-05-22T21:23:09.9636572Z 20/05/22 21:21:20 INFO rmapp.RMAppImpl: application_1590182392090_0001 State change from NEW to NEW_SAVING on event=START
2020-05-22T21:23:09.9637116Z 20/05/22 21:21:20 INFO recovery.RMStateStore: Storing info for app: application_1590182392090_0001
2020-05-22T21:23:09.9637699Z 20/05/22 21:21:20 INFO rmapp.RMAppImpl: application_1590182392090_0001 State change from NEW_SAVING to SUBMITTED on event=APP_NEW_SAVED
2020-05-22T21:23:09.9638732Z 20/05/22 21:21:20 INFO capacity.ParentQueue: Application added - appId: application_1590182392090_0001 user: hadoop-user leaf-queue of parent: root #applications: 1
2020-05-22T21:23:09.9639791Z 20/05/22 21:21:20 INFO capacity.CapacityScheduler: Accepted application application_1590182392090_0001 from user: hadoop-user, in queue: default
2020-05-22T21:23:09.9640473Z 20/05/22 21:21:20 INFO rmapp.RMAppImpl: application_1590182392090_0001 State change from SUBMITTED to ACCEPTED on event=APP_ACCEPTED
2020-05-22T21:23:09.9641088Z 20/05/22 21:21:20 INFO resourcemanager.ApplicationMasterService: Registering app attempt : appattempt_1590182392090_0001_000001
2020-05-22T21:23:09.9641702Z 20/05/22 21:21:20 INFO attempt.RMAppAttemptImpl: appattempt_1590182392090_0001_000001 State change from NEW to SUBMITTED
2020-05-22T21:23:09.9642619Z 20/05/22 21:21:20 INFO capacity.LeafQueue: Application application_1590182392090_0001 from user: hadoop-user activated in queue: default
2020-05-22T21:23:09.9643961Z 20/05/22 21:21:20 INFO capacity.LeafQueue: Application added - appId: application_1590182392090_0001 user: hadoop-user, leaf-queue: default #user-pending-applications: 0 #user-active-applications: 1 #queue-pending-applications: 0 #queue-active-applications: 1
2020-05-22T21:23:09.9645338Z 20/05/22 21:21:20 INFO capacity.CapacityScheduler: Added Application Attempt appattempt_1590182392090_0001_000001 to scheduler from user hadoop-user in queue default
2020-05-22T21:23:09.9646011Z 20/05/22 21:21:20 INFO attempt.RMAppAttemptImpl: appattempt_1590182392090_0001_000001 State change from SUBMITTED to SCHEDULED
2020-05-22T21:23:09.9646615Z 20/05/22 21:21:20 INFO rmcontainer.RMContainerImpl: container_1590182392090_0001_01_000001 Container Transitioned from NEW to ALLOCATED
2020-05-22T21:23:09.9647751Z 20/05/22 21:21:20 INFO resourcemanager.RMAuditLogger: USER=hadoop-user OPERATION=AM Allocated Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1590182392090_0001 CONTAINERID=container_1590182392090_0001_01_000001
2020-05-22T21:23:09.9649452Z 20/05/22 21:21:20 INFO scheduler.SchedulerNode: Assigned container container_1590182392090_0001_01_000001 of capacity <memory:1000, vCores:1> on host slave2.docker-hadoop-cluster-network:41369, which has 1 containers, <memory:1000, vCores:1> used and <memory:1500, vCores:0> available after allocation
2020-05-22T21:23:09.9651448Z 20/05/22 21:21:20 INFO allocator.AbstractContainerAllocator: assignedContainer application attempt=appattempt_1590182392090_0001_000001 container=container_1590182392090_0001_01_000001 queue=org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.RegularContainerAllocator@5b47bf57 clusterResource=<memory:5000, vCores:2> type=OFF_SWITCH requestedPartition=
2020-05-22T21:23:09.9653176Z 20/05/22 21:21:20 INFO security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : slave2.docker-hadoop-cluster-network:41369 for container : container_1590182392090_0001_01_000001
2020-05-22T21:23:09.9653969Z 20/05/22 21:21:20 INFO rmcontainer.RMContainerImpl: container_1590182392090_0001_01_000001 Container Transitioned from ALLOCATED to ACQUIRED
2020-05-22T21:23:09.9654586Z 20/05/22 21:21:20 INFO security.NMTokenSecretManagerInRM: Clear node set for appattempt_1590182392090_0001_000001
2020-05-22T21:23:09.9655797Z 20/05/22 21:21:20 INFO capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:1000, vCores:1>, usedCapacity=0.2, absoluteUsedCapacity=0.2, numApps=1, numContainers=1
2020-05-22T21:23:09.9658025Z 20/05/22 21:21:20 INFO attempt.RMAppAttemptImpl: Storing attempt: AppId: application_1590182392090_0001 AttemptId: appattempt_1590182392090_0001_000001 MasterContainer: Container: [ContainerId: container_1590182392090_0001_01_000001, Version: 0, NodeId: slave2.docker-hadoop-cluster-network:41369, NodeHttpAddress: slave2.docker-hadoop-cluster-network:8042, Resource: <memory:1000, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 172.22.0.5:41369 }, ]
2020-05-22T21:23:09.9659529Z 20/05/22 21:21:20 INFO capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.2 absoluteUsedCapacity=0.2 used=<memory:1000, vCores:1> cluster=<memory:5000, vCores:2>
2020-05-22T21:23:09.9660290Z 20/05/22 21:21:20 INFO attempt.RMAppAttemptImpl: appattempt_1590182392090_0001_000001 State change from SCHEDULED to ALLOCATED_SAVING
2020-05-22T21:23:09.9660920Z 20/05/22 21:21:20 INFO attempt.RMAppAttemptImpl: appattempt_1590182392090_0001_000001 State change from ALLOCATED_SAVING to ALLOCATED
2020-05-22T21:23:09.9661474Z 20/05/22 21:21:20 INFO amlauncher.AMLauncher: Launching masterappattempt_1590182392090_0001_000001
2020-05-22T21:23:09.9663213Z 20/05/22 21:21:21 INFO amlauncher.AMLauncher: Setting up container Container: [ContainerId: container_1590182392090_0001_01_000001, Version: 0, NodeId: slave2.docker-hadoop-cluster-network:41369, NodeHttpAddress: slave2.docker-hadoop-cluster-network:8042, Resource: <memory:1000, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 172.22.0.5:41369 }, ] for AM appattempt_1590182392090_0001_000001
2020-05-22T21:23:09.9664486Z 20/05/22 21:21:21 INFO security.AMRMTokenSecretManager: Create AMRMToken for ApplicationAttempt: appattempt_1590182392090_0001_000001
2020-05-22T21:23:09.9665217Z 20/05/22 21:21:21 INFO security.AMRMTokenSecretManager: Creating password for appattempt_1590182392090_0001_000001
2020-05-22T21:23:09.9667026Z 20/05/22 21:21:21 INFO amlauncher.AMLauncher: Done launching container Container: [ContainerId: container_1590182392090_0001_01_000001, Version: 0, NodeId: slave2.docker-hadoop-cluster-network:41369, NodeHttpAddress: slave2.docker-hadoop-cluster-network:8042, Resource: <memory:1000, vCores:1>, Priority: 0, Token: Token { kind: ContainerToken, service: 172.22.0.5:41369 }, ] for AM appattempt_1590182392090_0001_000001
2020-05-22T21:23:09.9668356Z 20/05/22 21:21:21 INFO attempt.RMAppAttemptImpl: appattempt_1590182392090_0001_000001 State change from ALLOCATED to LAUNCHED
2020-05-22T21:23:09.9668982Z 20/05/22 21:21:21 INFO rmcontainer.RMContainerImpl: container_1590182392090_0001_01_000001 Container Transitioned from ACQUIRED to RUNNING
2020-05-22T21:23:09.9669585Z 20/05/22 21:21:51 INFO ipc.Server: Auth successful for appattempt_1590182392090_0001_000001 (auth:SIMPLE)
2020-05-22T21:23:09.9670382Z 20/05/22 21:21:51 INFO authorize.ServiceAuthorizationManager: Authorization successful for appattempt_1590182392090_0001_000001 (auth:TOKEN) for protocol=interface org.apache.hadoop.yarn.api.ApplicationMasterProtocolPB
2020-05-22T21:23:09.9671172Z 20/05/22 21:21:51 INFO resourcemanager.ApplicationMasterService: AM registration appattempt_1590182392090_0001_000001
2020-05-22T21:23:09.9672378Z 20/05/22 21:21:51 INFO resourcemanager.RMAuditLogger: USER=hadoop-user IP=172.22.0.5 OPERATION=Register App Master TARGET=ApplicationMasterService RESULT=SUCCESS APPID=application_1590182392090_0001 APPATTEMPTID=appattempt_1590182392090_0001_000001
2020-05-22T21:23:09.9673190Z 20/05/22 21:21:51 INFO attempt.RMAppAttemptImpl: appattempt_1590182392090_0001_000001 State change from LAUNCHED to RUNNING
2020-05-22T21:23:09.9673801Z 20/05/22 21:21:51 INFO rmapp.RMAppImpl: application_1590182392090_0001 State change from ACCEPTED to RUNNING on event=ATTEMPT_REGISTERED
2020-05-22T21:23:09.9674380Z 20/05/22 21:21:51 INFO resourcemanager.ApplicationMasterService: Setting client token master key
2020-05-22T21:23:09.9675511Z 20/05/22 21:21:54 INFO ipc.Server: Auth successful for [hidden email] (auth:KERBEROS)
2020-05-22T21:23:09.9676685Z 20/05/22 21:21:54 INFO authorize.ServiceAuthorizationManager: Authorization successful for [hidden email] (auth:KERBEROS) for protocol=interface org.apache.hadoop.yarn.api.ApplicationClientProtocolPB
2020-05-22T21:23:09.9677827Z 20/05/22 21:21:54 INFO resourcemanager.RMAuditLogger: USER=[hidden email] IP=172.22.0.3 OPERATION=Get Applications Request TARGET=ClientRMService RESULT=SUCCESS
2020-05-22T21:23:09.9678708Z 20/05/22 21:21:57 INFO ipc.Server: Auth successful for [hidden email] (auth:KERBEROS)
2020-05-22T21:23:09.9679777Z 20/05/22 21:21:57 INFO authorize.ServiceAuthorizationManager: Authorization successful for [hidden email] (auth:KERBEROS) for protocol=interface org.apache.hadoop.yarn.api.ApplicationClientProtocolPB
2020-05-22T21:23:09.9680913Z 20/05/22 21:21:57 INFO resourcemanager.RMAuditLogger: USER=[hidden email] IP=172.22.0.3 OPERATION=Get Applications Request TARGET=ClientRMService RESULT=SUCCESS
2020-05-22T21:23:09.9681607Z 20/05/22 21:21:57 INFO rmcontainer.RMContainerImpl: container_1590182392090_0001_01_000002 Container Transitioned from NEW to ALLOCATED
2020-05-22T21:23:09.9682738Z 20/05/22 21:21:57 INFO resourcemanager.RMAuditLogger: USER=hadoop-user OPERATION=AM Allocated Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1590182392090_0001 CONTAINERID=container_1590182392090_0001_01_000002
2020-05-22T21:23:09.9684932Z 20/05/22 21:21:57 INFO scheduler.SchedulerNode: Assigned container container_1590182392090_0001_01_000002 of capacity <memory:1000, vCores:1> on host slave1.docker-hadoop-cluster-network:42227, which has 1 containers, <memory:1000, vCores:1> used and <memory:1500, vCores:0> available after allocation
2020-05-22T21:23:09.9686629Z 20/05/22 21:21:57 INFO allocator.AbstractContainerAllocator: assignedContainer application attempt=appattempt_1590182392090_0001_000001 container=container_1590182392090_0001_01_000002 queue=org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.RegularContainerAllocator@5b47bf57 clusterResource=<memory:5000, vCores:2> type=OFF_SWITCH requestedPartition=
2020-05-22T21:23:09.9688433Z 20/05/22 21:21:57 INFO capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:2000, vCores:2>, usedCapacity=0.4, absoluteUsedCapacity=0.4, numApps=1, numContainers=2
2020-05-22T21:23:09.9689524Z 20/05/22 21:21:57 INFO capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.4 absoluteUsedCapacity=0.4 used=<memory:2000, vCores:2> cluster=<memory:5000, vCores:2>
2020-05-22T21:23:09.9690814Z 20/05/22 21:21:57 INFO security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : slave1.docker-hadoop-cluster-network:42227 for container : container_1590182392090_0001_01_000002
2020-05-22T21:23:09.9691818Z 20/05/22 21:21:57 INFO rmcontainer.RMContainerImpl: container_1590182392090_0001_01_000002 Container Transitioned from ALLOCATED to ACQUIRED
2020-05-22T21:23:09.9692475Z 20/05/22 21:21:57 INFO scheduler.AppSchedulingInfo: checking for deactivate of application :application_1590182392090_0001
2020-05-22T21:23:09.9693096Z 20/05/22 21:21:58 INFO rmcontainer.RMContainerImpl: container_1590182392090_0001_01_000002 Container Transitioned from ACQUIRED to RUNNING
2020-05-22T21:23:09.9693983Z 20/05/22 21:22:01 INFO ipc.Server: Auth successful for [hidden email] (auth:KERBEROS)
2020-05-22T21:23:09.9695077Z 20/05/22 21:22:01 INFO authorize.ServiceAuthorizationManager: Authorization successful for [hidden email] (auth:KERBEROS) for protocol=interface org.apache.hadoop.yarn.api.ApplicationClientProtocolPB
2020-05-22T21:23:09.9696031Z 20/05/22 21:22:06 INFO ipc.Server: Auth successful for [hidden email] (auth:KERBEROS)
2020-05-22T21:23:09.9697106Z 20/05/22 21:22:06 INFO authorize.ServiceAuthorizationManager: Authorization successful for [hidden email] (auth:KERBEROS) for protocol=interface org.apache.hadoop.yarn.api.ApplicationClientProtocolPB
2020-05-22T21:23:09.9698068Z 20/05/22 21:22:11 INFO ipc.Server: Auth successful for [hidden email] (auth:KERBEROS)
2020-05-22T21:23:09.9699119Z 20/05/22 21:22:11 INFO authorize.ServiceAuthorizationManager: Authorization successful for [hidden email] (auth:KERBEROS) for protocol=interface org.apache.hadoop.yarn.api.ApplicationClientProtocolPB
2020-05-22T21:23:09.9700082Z 20/05/22 21:22:16 INFO ipc.Server: Auth successful for [hidden email] (auth:KERBEROS)
2020-05-22T21:23:09.9701154Z 20/05/22 21:22:16 INFO authorize.ServiceAuthorizationManager: Authorization successful for [hidden email] (auth:KERBEROS) for protocol=interface org.apache.hadoop.yarn.api.ApplicationClientProtocolPB
2020-05-22T21:23:09.9702114Z 20/05/22 21:22:22 INFO ipc.Server: Auth successful for [hidden email] (auth:KERBEROS)
2020-05-22T21:23:09.9703179Z 20/05/22 21:22:22 INFO authorize.ServiceAuthorizationManager: Authorization successful for [hidden email] (auth:KERBEROS) for protocol=interface org.apache.hadoop.yarn.api.ApplicationClientProtocolPB
2020-05-22T21:23:09.9704121Z 20/05/22 21:22:29 INFO ipc.Server: Auth successful for [hidden email] (auth:KERBEROS)
2020-05-22T21:23:09.9705181Z 20/05/22 21:22:29 INFO authorize.ServiceAuthorizationManager: Authorization successful for [hidden email] (auth:KERBEROS) for protocol=interface org.apache.hadoop.yarn.api.ApplicationClientProtocolPB
2020-05-22T21:23:09.9706153Z 20/05/22 21:22:33 INFO ipc.Server: Auth successful for [hidden email] (auth:KERBEROS)
2020-05-22T21:23:09.9707202Z 20/05/22 21:22:33 INFO authorize.ServiceAuthorizationManager: Authorization successful for [hidden email] (auth:KERBEROS) for protocol=interface org.apache.hadoop.yarn.api.ApplicationClientProtocolPB
2020-05-22T21:23:09.9708313Z 20/05/22 21:22:38 INFO ipc.Server: Auth successful for [hidden email] (auth:KERBEROS)
2020-05-22T21:23:09.9709388Z 20/05/22 21:22:38 INFO authorize.ServiceAuthorizationManager: Authorization successful for [hidden email] (auth:KERBEROS) for protocol=interface org.apache.hadoop.yarn.api.ApplicationClientProtocolPB
2020-05-22T21:23:09.9710335Z 20/05/22 21:22:42 INFO ipc.Server: Auth successful for [hidden email] (auth:KERBEROS)
2020-05-22T21:23:09.9711396Z 20/05/22 21:22:42 INFO authorize.ServiceAuthorizationManager: Authorization successful for [hidden email] (auth:KERBEROS) for protocol=interface org.apache.hadoop.yarn.api.ApplicationClientProtocolPB
2020-05-22T21:23:09.9712264Z 20/05/22 21:22:43 INFO rmcontainer.RMContainerImpl: container_1590182392090_0001_01_000003 Container Transitioned from NEW to ALLOCATED
2020-05-22T21:23:09.9713405Z 20/05/22 21:22:43 INFO resourcemanager.RMAuditLogger: USER=hadoop-user OPERATION=AM Allocated Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1590182392090_0001 CONTAINERID=container_1590182392090_0001_01_000003
2020-05-22T21:23:09.9715011Z 20/05/22 21:22:43 INFO scheduler.SchedulerNode: Assigned container container_1590182392090_0001_01_000003 of capacity <memory:1000, vCores:1> on host slave2.docker-hadoop-cluster-network:41369, which has 2 containers, <memory:2000, vCores:2> used and <memory:500, vCores:-1> available after allocation
2020-05-22T21:23:09.9717063Z 20/05/22 21:22:43 INFO allocator.AbstractContainerAllocator: assignedContainer application attempt=appattempt_1590182392090_0001_000001 container=container_1590182392090_0001_01_000003 queue=org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.RegularContainerAllocator@5b47bf57 clusterResource=<memory:5000, vCores:2> type=OFF_SWITCH requestedPartition=
2020-05-22T21:23:09.9718949Z 20/05/22 21:22:43 INFO capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:3000, vCores:3>, usedCapacity=0.6, absoluteUsedCapacity=0.6, numApps=1, numContainers=3
2020-05-22T21:23:09.9719942Z 20/05/22 21:22:43 INFO capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.6 absoluteUsedCapacity=0.6 used=<memory:3000, vCores:3> cluster=<memory:5000, vCores:2>
2020-05-22T21:23:09.9720704Z 20/05/22 21:22:43 INFO rmcontainer.RMContainerImpl: container_1590182392090_0001_01_000004 Container Transitioned from NEW to ALLOCATED
2020-05-22T21:23:09.9721853Z 20/05/22 21:22:43 INFO resourcemanager.RMAuditLogger: USER=hadoop-user OPERATION=AM Allocated Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1590182392090_0001 CONTAINERID=container_1590182392090_0001_01_000004
2020-05-22T21:23:09.9723438Z 20/05/22 21:22:43 INFO scheduler.SchedulerNode: Assigned container container_1590182392090_0001_01_000004 of capacity <memory:1000, vCores:1> on host slave1.docker-hadoop-cluster-network:42227, which has 2 containers, <memory:2000, vCores:2> used and <memory:500, vCores:-1> available after allocation
2020-05-22T21:23:09.9724980Z 20/05/22 21:22:43 INFO allocator.AbstractContainerAllocator: assignedContainer application attempt=appattempt_1590182392090_0001_000001 container=container_1590182392090_0001_01_000004 queue=org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.allocator.RegularContainerAllocator@5b47bf57 clusterResource=<memory:5000, vCores:2> type=OFF_SWITCH requestedPartition=
2020-05-22T21:23:09.9726749Z 20/05/22 21:22:43 INFO capacity.ParentQueue: Re-sorting assigned queue: root.default stats: default: capacity=1.0, absoluteCapacity=1.0, usedResources=<memory:4000, vCores:4>, usedCapacity=0.8, absoluteUsedCapacity=0.8, numApps=1, numContainers=4
2020-05-22T21:23:09.9727756Z 20/05/22 21:22:43 INFO capacity.ParentQueue: assignedContainer queue=root usedCapacity=0.8 absoluteUsedCapacity=0.8 used=<memory:4000, vCores:4> cluster=<memory:5000, vCores:2>
2020-05-22T21:23:09.9728961Z 20/05/22 21:22:43 INFO security.NMTokenSecretManagerInRM: Sending NMToken for nodeId : slave2.docker-hadoop-cluster-network:41369 for container : container_1590182392090_0001_01_000003
2020-05-22T21:23:09.9729841Z 20/05/22 21:22:43 INFO rmcontainer.RMContainerImpl: container_1590182392090_0001_01_000003 Container Transitioned from ALLOCATED to ACQUIRED
2020-05-22T21:23:09.9730491Z 20/05/22 21:22:43 INFO rmcontainer.RMContainerImpl: container_1590182392090_0001_01_000004 Container Transitioned from ALLOCATED to ACQUIRED
2020-05-22T21:23:09.9731133Z 20/05/22 21:22:44 INFO scheduler.AppSchedulingInfo: checking for deactivate of application :application_1590182392090_0001
2020-05-22T21:23:09.9731883Z 20/05/22 21:22:44 INFO rmcontainer.RMContainerImpl: container_1590182392090_0001_01_000003 Container Transitioned from ACQUIRED to RUNNING
2020-05-22T21:23:09.9732625Z 20/05/22 21:22:44 INFO rmcontainer.RMContainerImpl: container_1590182392090_0001_01_000004 Container Transitioned from ACQUIRED to RUNNING
2020-05-22T21:23:09.9733519Z 20/05/22 21:22:48 INFO ipc.Server: Auth successful for [hidden email] (auth:KERBEROS)
2020-05-22T21:23:09.9734586Z 20/05/22 21:22:48 INFO authorize.ServiceAuthorizationManager: Authorization successful for [hidden email] (auth:KERBEROS) for protocol=interface org.apache.hadoop.yarn.api.ApplicationClientProtocolPB
2020-05-22T21:23:09.9735561Z 20/05/22 21:22:54 INFO ipc.Server: Auth successful for [hidden email] (auth:KERBEROS)
2020-05-22T21:23:09.9736611Z 20/05/22 21:22:55 INFO authorize.ServiceAuthorizationManager: Authorization successful for [hidden email] (auth:KERBEROS) for protocol=interface org.apache.hadoop.yarn.api.ApplicationClientProtocolPB
2020-05-22T21:23:09.9737582Z 20/05/22 21:22:59 INFO ipc.Server: Auth successful for [hidden email] (auth:KERBEROS)
2020-05-22T21:23:09.9738648Z 20/05/22 21:22:59 INFO authorize.ServiceAuthorizationManager: Authorization successful for [hidden email] (auth:KERBEROS) for protocol=interface org.apache.hadoop.yarn.api.ApplicationClientProtocolPB
2020-05-22T21:23:09.9739601Z 20/05/22 21:23:03 INFO ipc.Server: Auth successful for [hidden email] (auth:KERBEROS)
2020-05-22T21:23:09.9740665Z 20/05/22 21:23:03 INFO authorize.ServiceAuthorizationManager: Authorization successful for [hidden email] (auth:KERBEROS) for protocol=interface org.apache.hadoop.yarn.api.ApplicationClientProtocolPB
2020-05-22T21:23:09.9742119Z 20/05/22 21:23:04 INFO attempt.RMAppAttemptImpl: Updating application attempt appattempt_1590182392090_0001_000001 with final state: FINISHING, and exit status: -1000
2020-05-22T21:23:09.9742850Z 20/05/22 21:23:04 INFO attempt.RMAppAttemptImpl: appattempt_1590182392090_0001_000001 State change from RUNNING to FINAL_SAVING
2020-05-22T21:23:09.9743483Z 20/05/22 21:23:04 INFO rmapp.RMAppImpl: Updating application application_1590182392090_0001 with final state: FINISHING
2020-05-22T21:23:09.9744056Z 20/05/22 21:23:04 INFO recovery.RMStateStore: Updating info for app: application_1590182392090_0001
2020-05-22T21:23:09.9744659Z 20/05/22 21:23:04 INFO rmapp.RMAppImpl: application_1590182392090_0001 State change from RUNNING to FINAL_SAVING on event=ATTEMPT_UNREGISTERED
2020-05-22T21:23:09.9745293Z 20/05/22 21:23:04 INFO attempt.RMAppAttemptImpl: appattempt_1590182392090_0001_000001 State change from FINAL_SAVING to FINISHING
2020-05-22T21:23:09.9745899Z 20/05/22 21:23:04 INFO rmapp.RMAppImpl: application_1590182392090_0001 State change from FINAL_SAVING to FINISHING on event=APP_UPDATE_SAVED
2020-05-22T21:23:09.9746522Z 20/05/22 21:23:04 INFO resourcemanager.ApplicationMasterService: application_1590182392090_0001 unregistered successfully.
2020-05-22T21:23:09.9747148Z 20/05/22 21:23:04 INFO rmcontainer.RMContainerImpl: container_1590182392090_0001_01_000003 Container Transitioned from RUNNING to COMPLETED
2020-05-22T21:23:09.9748336Z 20/05/22 21:23:04 INFO resourcemanager.RMAuditLogger: USER=hadoop-user OPERATION=AM Released Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1590182392090_0001 CONTAINERID=container_1590182392090_0001_01_000003
2020-05-22T21:23:09.9750134Z 20/05/22 21:23:04 INFO scheduler.SchedulerNode: Released container container_1590182392090_0001_01_000003 of capacity <memory:1000, vCores:1> on host slave2.docker-hadoop-cluster-network:41369, which currently has 1 containers, <memory:1000, vCores:1> used and <memory:1500, vCores:0> available, release resources=true
2020-05-22T21:23:09.9751196Z 20/05/22 21:23:04 INFO rmcontainer.RMContainerImpl: container_1590182392090_0001_01_000004 Container Transitioned from RUNNING to COMPLETED
2020-05-22T21:23:09.9752326Z 20/05/22 21:23:04 INFO resourcemanager.RMAuditLogger: USER=hadoop-user OPERATION=AM Released Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1590182392090_0001 CONTAINERID=container_1590182392090_0001_01_000004
2020-05-22T21:23:09.9754062Z 20/05/22 21:23:04 INFO scheduler.SchedulerNode: Released container container_1590182392090_0001_01_000004 of capacity <memory:1000, vCores:1> on host slave1.docker-hadoop-cluster-network:42227, which currently has 1 containers, <memory:1000, vCores:1> used and <memory:1500, vCores:0> available, release resources=true
2020-05-22T21:23:09.9755124Z 20/05/22 21:23:04 INFO rmcontainer.RMContainerImpl: container_1590182392090_0001_01_000002 Container Transitioned from RUNNING to COMPLETED
2020-05-22T21:23:09.9756270Z 20/05/22 21:23:04 INFO resourcemanager.RMAuditLogger: USER=hadoop-user OPERATION=AM Released Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1590182392090_0001 CONTAINERID=container_1590182392090_0001_01_000002
2020-05-22T21:23:09.9757899Z 20/05/22 21:23:04 INFO scheduler.SchedulerNode: Released container container_1590182392090_0001_01_000002 of capacity <memory:1000, vCores:1> on host slave1.docker-hadoop-cluster-network:42227, which currently has 0 containers, <memory:0, vCores:0> used and <memory:2500, vCores:1> available, release resources=true
2020-05-22T21:23:09.9758952Z 20/05/22 21:23:06 INFO rmcontainer.RMContainerImpl: container_1590182392090_0001_01_000001 Container Transitioned from RUNNING to COMPLETED
2020-05-22T21:23:09.9760081Z 20/05/22 21:23:06 INFO resourcemanager.RMAuditLogger: USER=hadoop-user OPERATION=AM Released Container TARGET=SchedulerApp RESULT=SUCCESS APPID=application_1590182392090_0001 CONTAINERID=container_1590182392090_0001_01_000001
2020-05-22T21:23:09.9761703Z 20/05/22 21:23:06 INFO scheduler.SchedulerNode: Released container container_1590182392090_0001_01_000001 of capacity <memory:1000, vCores:1> on host slave2.docker-hadoop-cluster-network:41369, which currently has 0 containers, <memory:0, vCores:0> used and <memory:2500, vCores:1> available, release resources=true
2020-05-22T21:23:09.9762759Z 20/05/22 21:23:06 INFO resourcemanager.ApplicationMasterService: Unregistering app attempt : appattempt_1590182392090_0001_000001
2020-05-22T21:23:09.9763415Z 20/05/22 21:23:06 INFO security.AMRMTokenSecretManager: Application finished, removing password for appattempt_1590182392090_0001_000001
2020-05-22T21:23:09.9764031Z 20/05/22 21:23:06 INFO attempt.RMAppAttemptImpl: appattempt_1590182392090_0001_000001 State change from FINISHING to FINISHED
2020-05-22T21:23:09.9764643Z 20/05/22 21:23:06 INFO rmapp.RMAppImpl: application_1590182392090_0001 State change from FINISHING to FINISHED on event=ATTEMPT_FINISHED
2020-05-22T21:23:09.9765274Z 20/05/22 21:23:06 INFO capacity.CapacityScheduler: Application Attempt appattempt_1590182392090_0001_000001 is done. finalState=FINISHED
2020-05-22T21:23:09.9765855Z 20/05/22 21:23:06 INFO scheduler.AppSchedulingInfo: Application application_1590182392090_0001 requests cleared
2020-05-22T21:23:09.9767135Z 20/05/22 21:23:06 INFO capacity.LeafQueue: Application removed - appId: application_1590182392090_0001 user: hadoop-user queue: default #user-pending-applications: 0 #user-active-applications: 0 #queue-pending-applications: 0 #queue-active-applications: 0
2020-05-22T21:23:09.9768469Z 20/05/22 21:23:06 INFO capacity.ParentQueue: Application removed - appId: application_1590182392090_0001 user: hadoop-user leaf-queue of parent: root #applications: 0
2020-05-22T21:23:09.9769687Z 20/05/22 21:23:06 INFO resourcemanager.RMAuditLogger: USER=hadoop-user OPERATION=Application Finished - Succeeded TARGET=RMAppManager RESULT=SUCCESS APPID=application_1590182392090_0001
2020-05-22T21:23:09.9770354Z 20/05/22 21:23:06 INFO amlauncher.AMLauncher: Cleaning master appattempt_1590182392090_0001_000001
2020-05-22T21:23:09.9773990Z 20/05/22 21:23:06 INFO resourcemanager.RMAppManager$ApplicationSummary: appId=application_1590182392090_0001,name=Flink Application Cluster,user=hadoop-user,queue=default,state=FINISHED,trackingUrl=http://master.docker-hadoop-cluster-network:8088/proxy/application_1590182392090_0001/,appMasterHost=slave2.docker-hadoop-cluster-network,startTime=1590182479362,finishTime=1590182584348,finalStatus=FAILED,memorySeconds=216607,vcoreSeconds=214,preemptedMemorySeconds=0,preemptedVcoreSeconds=0,preemptedAMContainers=0,preemptedNonAMContainers=0,preemptedResources=<memory:0\, vCores:0>,applicationType=Apache Flink
2020-05-22T21:23:09.9776854Z 20/05/22 21:23:07 INFO ipc.Server: Auth successful for [hidden email] (auth:KERBEROS)
2020-05-22T21:23:09.9778319Z 20/05/22 21:23:07 INFO authorize.ServiceAuthorizationManager: Authorization successful for [hidden email] (auth:KERBEROS) for protocol=interface org.apache.hadoop.yarn.api.ApplicationClientProtocolPB
2020-05-22T21:23:09.9779835Z 20/05/22 21:23:08 INFO scheduler.AbstractYarnScheduler: Container container_1590182392090_0001_01_000004 completed with event FINISHED, but corresponding RMContainer doesn't exist.
2020-05-22T21:23:09.9781281Z 20/05/22 21:23:08 INFO scheduler.AbstractYarnScheduler: Container container_1590182392090_0001_01_000002 completed with event FINISHED, but corresponding RMContainer doesn't exist.
2020-05-22T21:23:09.9782751Z 20/05/22 21:23:08 INFO scheduler.AbstractYarnScheduler: Container container_1590182392090_0001_01_000003 completed with event FINISHED, but corresponding RMContainer doesn't exist.
2020-05-22T21:23:09.9783915Z 20/05/22 21:23:09 INFO ipc.Server: Auth successful for [hidden email] (auth:KERBEROS)
2020-05-22T21:23:09.9785382Z 20/05/22 21:23:09 INFO authorize.ServiceAuthorizationManager: Authorization successful for [hidden email] (auth:KERBEROS) for protocol=interface org.apache.hadoop.yarn.api.ApplicationClientProtocolPB
2020-05-22T21:23:09.9786736Z /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-22195494660/logs/hadoop/resourcemanager.out:
2020-05-22T21:23:09.9787767Z /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-22195494660/logs/hadoop/timelineserver.err:
2020-05-22T21:23:09.9788335Z 20/05/22 21:19:38 INFO applicationhistoryservice.ApplicationHistoryServer: STARTUP_MSG:
2020-05-22T21:23:09.9788770Z /************************************************************
2020-05-22T21:23:09.9789107Z STARTUP_MSG: Starting ApplicationHistoryServer
2020-05-22T21:23:09.9789388Z STARTUP_MSG: user = yarn
2020-05-22T21:23:09.9789922Z STARTUP_MSG: host = master.docker-hadoop-cluster-network/172.22.0.3
2020-05-22T21:23:09.9790253Z STARTUP_MSG: args = []
2020-05-22T21:23:09.9790490Z STARTUP_MSG: version = 2.8.4
2020-05-22T21:23:09.9837616Z STARTUP_MSG: classpath = /usr/local/hadoop/etc/hadoop:/usr/local/hadoop/etc/hadoop:/usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/jsch-0.1.54.jar:/usr/local/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/common/lib/json-smart-1.3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/local/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/common/lib/httpclient-4.5.2.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/usr/local/hadoop/share/hadoop/common/lib/zookeeper-3.4.6.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-math3-3.1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/usr/local/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/htrace-core4-4.0.1-incubating.jar:/usr/local/hadoop/share/hadoop/
common/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/jcip-annotations-1.0-1.jar:/usr/local/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/local/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-annotations-2.8.4.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-client-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/lib/hamcrest-core-1.3.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jsr305-3.0.0.jar:/usr/local/hadoop/share/hadoop/common/lib/nimbus-jose-jwt-4.41.1.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-auth-2.8.4.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/usr/local/hadoop/share/hadoop/common/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/junit-4.11.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-sslengine-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/gson-2.2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/usr/local/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jets3t-0.9.0.jar:/usr/local/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-collections-3.2.2.jar:/usr/local/hadoop/share/hadoop/common/lib/httpcore-4.4.4.ja
r:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-framework-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.8.4-tests.jar:/usr/local/hadoop/share/hadoop/common/hadoop-nfs-2.8.4.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.8.4.jar:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/okio-1.4.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/okhttp-2.4.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/htrace-core4-4.0.1-incubating.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/usr/local/hadoop/share/hadoop/
hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/hadoop-hdfs-client-2.8.4.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-2.8.4.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.8.4.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-native-client-2.8.4-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-client-2.8.4-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-native-client-2.8.4.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.8.4-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-client-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-client-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/json-io-2.5.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/fst-2.50.jar:/usr/local/hadoop/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hado
op/yarn/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/curator-client-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-math-2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/yarn/lib/javassist-3.18.1-GA.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/curator-test-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/java-util-1.9.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.8.4.jar:/usr/lo
cal/hadoop/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-registry-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/junit-4.11.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/netty-
3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.8.4-tests.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.4.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-registry-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/servlet-ap
i-2.5.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-client-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/json-io-2.5.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/fst-2.50.jar:/usr/local/hadoop/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/curator-client-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jsr30
5-3.0.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-math-2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/yarn/lib/javassist-3.18.1-GA.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/curator-test-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/java-util-1.9.0.jar:/usr/local/hadoop/etc/hadoop/timelineserver-config/log4j.properties
2020-05-22T21:23:09.9870258Z STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r 17e75c2a11685af3e043aa5e604dc831e5b14674; compiled by 'jdu' on 2018-05-08T02:50Z
2020-05-22T21:23:09.9870833Z STARTUP_MSG: java = 1.8.0_131
2020-05-22T21:23:09.9871149Z ************************************************************/
2020-05-22T21:23:09.9871630Z 20/05/22 21:19:38 INFO applicationhistoryservice.ApplicationHistoryServer: registered UNIX signal handlers for [TERM, HUP, INT]
2020-05-22T21:23:09.9872826Z 20/05/22 21:19:48 INFO security.UserGroupInformation: Login successful for user yarn/[hidden email] using keytab file /etc/security/keytabs/yarn.keytab
2020-05-22T21:23:09.9873685Z 20/05/22 21:19:48 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2020-05-22T21:23:09.9874192Z 20/05/22 21:19:49 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
2020-05-22T21:23:09.9874699Z 20/05/22 21:19:49 INFO impl.MetricsSystemImpl: ApplicationHistoryServer metrics system started
2020-05-22T21:23:09.9875507Z 20/05/22 21:19:50 INFO timeline.LeveldbTimelineStore: Using leveldb path /tmp/hadoop-yarn/yarn/timeline/leveldb-timeline-store.ldb
2020-05-22T21:23:09.9876161Z 20/05/22 21:19:50 INFO timeline.LeveldbTimelineStore: Loaded timeline store version info 1.0
2020-05-22T21:23:09.9876712Z 20/05/22 21:19:50 INFO timeline.LeveldbTimelineStore: Starting deletion thread with ttl 604800000 and cycle interval 300000
2020-05-22T21:23:09.9877320Z 20/05/22 21:19:50 INFO timeline.LeveldbTimelineStore: Discarded 0 entities for timestamp 1589577590441 and earlier in 0.007 seconds
2020-05-22T21:23:09.9877965Z 20/05/22 21:19:50 INFO delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
2020-05-22T21:23:09.9878655Z 20/05/22 21:19:50 INFO delegation.AbstractDelegationTokenSecretManager: Starting expired delegation token remover thread, tokenRemoverScanInterval=60 min(s)
2020-05-22T21:23:09.9879326Z 20/05/22 21:19:50 INFO delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
2020-05-22T21:23:09.9880098Z 20/05/22 21:19:50 INFO ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 1000 scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler
2020-05-22T21:23:09.9880753Z 20/05/22 21:19:50 INFO ipc.Server: Starting Socket Reader #1 for port 10200
2020-05-22T21:23:09.9881173Z 20/05/22 21:19:50 INFO ipc.Server: Starting Socket Reader #2 for port 10200
2020-05-22T21:23:09.9881605Z 20/05/22 21:19:50 INFO ipc.Server: Starting Socket Reader #3 for port 10200
2020-05-22T21:23:09.9882014Z 20/05/22 21:19:50 INFO ipc.Server: Starting Socket Reader #4 for port 10200
2020-05-22T21:23:09.9882439Z 20/05/22 21:19:50 INFO ipc.Server: Starting Socket Reader #5 for port 10200
2020-05-22T21:23:09.9882970Z 20/05/22 21:19:51 INFO pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.yarn.api.ApplicationHistoryProtocolPB to the server
2020-05-22T21:23:09.9883482Z 20/05/22 21:19:51 INFO ipc.Server: IPC Server Responder: starting
2020-05-22T21:23:09.9883924Z 20/05/22 21:19:51 INFO ipc.Server: IPC Server listener on 10200: starting
2020-05-22T21:23:09.9884905Z 20/05/22 21:19:51 INFO applicationhistoryservice.ApplicationHistoryClientService: Instantiated ApplicationHistoryClientService at master.docker-hadoop-cluster-network/172.22.0.3:10200
2020-05-22T21:23:09.9885631Z 20/05/22 21:19:51 INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2020-05-22T21:23:09.9886250Z 20/05/22 21:19:51 INFO server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2020-05-22T21:23:09.9886847Z 20/05/22 21:19:51 INFO http.HttpRequestLog: Http request log for http.requests.applicationhistory is not defined
2020-05-22T21:23:09.9887684Z 20/05/22 21:19:51 INFO http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2020-05-22T21:23:09.9888735Z 20/05/22 21:19:51 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context applicationhistory
2020-05-22T21:23:09.9889492Z 20/05/22 21:19:51 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2020-05-22T21:23:09.9890291Z 20/05/22 21:19:51 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2020-05-22T21:23:09.9891556Z 20/05/22 21:19:52 INFO http.HttpServer2: Added global filter 'Timeline Authentication Filter' (class=org.apache.hadoop.yarn.server.timeline.security.TimelineAuthenticationFilter)
2020-05-22T21:23:09.9892197Z 20/05/22 21:19:52 INFO http.HttpServer2: adding path spec: /applicationhistory/*
2020-05-22T21:23:09.9892626Z 20/05/22 21:19:52 INFO http.HttpServer2: adding path spec: /ws/*
2020-05-22T21:23:09.9893046Z 20/05/22 21:19:52 INFO webapp.WebApps: Registered webapp guice modules
2020-05-22T21:23:09.9893550Z 20/05/22 21:19:52 INFO http.HttpServer2: Jetty bound to port 8188
2020-05-22T21:23:09.9894175Z 20/05/22 21:19:52 INFO mortbay.log: jetty-6.1.26
2020-05-22T21:23:09.9895213Z 20/05/22 21:19:53 INFO mortbay.log: Extract jar:file:/usr/local/hadoop-2.8.4/share/hadoop/yarn/hadoop-yarn-common-2.8.4.jar!/webapps/applicationhistory to /tmp/Jetty_master_docker.hadoop.cluster.network_8188_applicationhistory____.xi8yr4/webapp
2020-05-22T21:23:09.9896396Z 20/05/22 21:19:53 INFO server.KerberosAuthenticationHandler: Using keytab /etc/security/keytabs/yarn.keytab, for principal HTTP/[hidden email]
2020-05-22T21:23:09.9897121Z 20/05/22 21:19:53 INFO delegation.AbstractDelegationTokenSecretManager: Updating the current master key for generating delegation tokens
2020-05-22T21:23:09.9897809Z 20/05/22 21:19:53 INFO delegation.AbstractDelegationTokenSecretManager: Starting expired delegation token remover thread, tokenRemoverScanInterval=60 min(s)
2020-05-22T21:23:09.9899883Z May 22, 2020 9:19:53 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
2020-05-22T21:23:09.9900389Z INFO: Registering org.apache.hadoop.yarn.webapp.YarnJacksonJaxbJsonProvider as a provider class
2020-05-22T21:23:09.9900866Z May 22, 2020 9:19:53 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
2020-05-22T21:23:09.9901375Z INFO: Registering org.apache.hadoop.yarn.server.applicationhistoryservice.webapp.AHSWebServices as a root resource class
2020-05-22T21:23:09.9901894Z May 22, 2020 9:19:53 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
2020-05-22T21:23:09.9902377Z INFO: Registering org.apache.hadoop.yarn.server.timeline.webapp.TimelineWebServices as a root resource class
2020-05-22T21:23:09.9902877Z May 22, 2020 9:19:53 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
2020-05-22T21:23:09.9903344Z INFO: Registering org.apache.hadoop.yarn.webapp.GenericExceptionHandler as a provider class
2020-05-22T21:23:09.9903799Z May 22, 2020 9:19:54 PM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
2020-05-22T21:23:09.9904590Z INFO: Initiating Jersey application, version 'Jersey: 1.9 09/02/2011 11:17 AM'
2020-05-22T21:23:09.9905078Z May 22, 2020 9:19:54 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
2020-05-22T21:23:09.9905630Z INFO: Binding org.apache.hadoop.yarn.webapp.GenericExceptionHandler to GuiceManagedComponentProvider with the scope "Singleton"
2020-05-22T21:23:09.9906178Z May 22, 2020 9:19:54 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
2020-05-22T21:23:09.9906717Z INFO: Binding org.apache.hadoop.yarn.webapp.YarnJacksonJaxbJsonProvider to GuiceManagedComponentProvider with the scope "Singleton"
2020-05-22T21:23:09.9907272Z May 22, 2020 9:19:56 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
2020-05-22T21:23:09.9909188Z INFO: Binding org.apache.hadoop.yarn.server.applicationhistoryservice.webapp.AHSWebServices to GuiceManagedComponentProvider with the scope "Singleton"
2020-05-22T21:23:09.9909842Z May 22, 2020 9:19:56 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
2020-05-22T21:23:09.9910408Z INFO: Binding org.apache.hadoop.yarn.server.timeline.webapp.TimelineWebServices to GuiceManagedComponentProvider with the scope "Singleton"
2020-05-22T21:23:09.9911614Z 20/05/22 21:19:56 INFO mortbay.log: Started HttpServer2$[hidden email]-hadoop-cluster-network:8188
2020-05-22T21:23:09.9912266Z 20/05/22 21:19:56 INFO applicationhistoryservice.ApplicationHistoryServer: Instantiating AHSWebApp at 8188
2020-05-22T21:23:09.9913506Z 20/05/22 21:21:19 INFO delegation.AbstractDelegationTokenSecretManager: Creating password for identifier: (owner=hadoop-user, renewer=yarn, realUser=, issueDate=1590182479214, maxDate=1590787279214, sequenceNumber=1, masterKeyId=2), currentKey: 2
2020-05-22T21:23:09.9915123Z 20/05/22 21:21:20 INFO delegation.AbstractDelegationTokenSecretManager: Token renewal for identifier: (owner=hadoop-user, renewer=yarn, realUser=, issueDate=1590182479214, maxDate=1590787279214, sequenceNumber=1, masterKeyId=2); total currentTokens 1
2020-05-22T21:23:09.9916246Z /home/vsts/work/1/s/flink-end-to-end-tests/test-scripts/temp-test-directory-22195494660/logs/hadoop/timelineserver.out:
2020-05-22T21:23:09.9916650Z Docker logs:
2020-05-22T21:23:09.9916827Z /usr/local
2020-05-22T21:23:09.9917431Z WARNING: no policy specified for hdfs/[hidden email]; defaulting to no policy
2020-05-22T21:23:09.9917850Z Authenticating as principal admin/admin with password.
2020-05-22T21:23:09.9918417Z Principal "hdfs/[hidden email]" created.
2020-05-22T21:23:09.9919123Z WARNING: no policy specified for mapred/[hidden email]; defaulting to no policy
2020-05-22T21:23:09.9919542Z Authenticating as principal admin/admin with password.
2020-05-22T21:23:09.9920118Z Principal "mapred/[hidden email]" created.
2020-05-22T21:23:09.9920799Z WARNING: no policy specified for yarn/[hidden email]; defaulting to no policy
2020-05-22T21:23:09.9921235Z Authenticating as principal admin/admin with password.
2020-05-22T21:23:09.9921786Z Principal "yarn/[hidden email]" created.
2020-05-22T21:23:09.9922480Z WARNING: no policy specified for HTTP/[hidden email]; defaulting to no policy
2020-05-22T21:23:09.9922912Z Authenticating as principal admin/admin with password.
2020-05-22T21:23:09.9923456Z Principal "HTTP/[hidden email]" created.
2020-05-22T21:23:09.9923822Z Authenticating as principal admin/admin with password.
2020-05-22T21:23:09.9924620Z Entry for principal hdfs/master.docker-hadoop-cluster-network with kvno 2, encryption type aes256-cts-hmac-sha1-96 added to keytab WRFILE:hdfs.keytab.
2020-05-22T21:23:09.9925659Z Entry for principal hdfs/master.docker-hadoop-cluster-network with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab WRFILE:hdfs.keytab.
2020-05-22T21:23:09.9926658Z Entry for principal hdfs/master.docker-hadoop-cluster-network with kvno 2, encryption type des3-cbc-sha1 added to keytab WRFILE:hdfs.keytab.
2020-05-22T21:23:09.9927622Z Entry for principal hdfs/master.docker-hadoop-cluster-network with kvno 2, encryption type arcfour-hmac added to keytab WRFILE:hdfs.keytab.
2020-05-22T21:23:09.9928590Z Entry for principal hdfs/master.docker-hadoop-cluster-network with kvno 2, encryption type des-hmac-sha1 added to keytab WRFILE:hdfs.keytab.
2020-05-22T21:23:09.9929553Z Entry for principal hdfs/master.docker-hadoop-cluster-network with kvno 2, encryption type des-cbc-md5 added to keytab WRFILE:hdfs.keytab.
2020-05-22T21:23:09.9930536Z Entry for principal HTTP/master.docker-hadoop-cluster-network with kvno 2, encryption type aes256-cts-hmac-sha1-96 added to keytab WRFILE:hdfs.keytab.
2020-05-22T21:23:09.9931788Z Entry for principal HTTP/master.docker-hadoop-cluster-network with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab WRFILE:hdfs.keytab.
2020-05-22T21:23:09.9932834Z Entry for principal HTTP/master.docker-hadoop-cluster-network with kvno 2, encryption type des3-cbc-sha1 added to keytab WRFILE:hdfs.keytab.
2020-05-22T21:23:09.9933956Z Entry for principal HTTP/master.docker-hadoop-cluster-network with kvno 2, encryption type arcfour-hmac added to keytab WRFILE:hdfs.keytab.
2020-05-22T21:23:09.9934921Z Entry for principal HTTP/master.docker-hadoop-cluster-network with kvno 2, encryption type des-hmac-sha1 added to keytab WRFILE:hdfs.keytab.
2020-05-22T21:23:09.9935864Z Entry for principal HTTP/master.docker-hadoop-cluster-network with kvno 2, encryption type des-cbc-md5 added to keytab WRFILE:hdfs.keytab.
2020-05-22T21:23:09.9936385Z Authenticating as principal admin/admin with password.
2020-05-22T21:23:09.9937311Z Entry for principal mapred/master.docker-hadoop-cluster-network with kvno 2, encryption type aes256-cts-hmac-sha1-96 added to keytab WRFILE:mapred.keytab.
2020-05-22T21:23:09.9938343Z Entry for principal mapred/master.docker-hadoop-cluster-network with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab WRFILE:mapred.keytab.
2020-05-22T21:23:09.9939361Z Entry for principal mapred/master.docker-hadoop-cluster-network with kvno 2, encryption type des3-cbc-sha1 added to keytab WRFILE:mapred.keytab.
2020-05-22T21:23:09.9940325Z Entry for principal mapred/master.docker-hadoop-cluster-network with kvno 2, encryption type arcfour-hmac added to keytab WRFILE:mapred.keytab.
2020-05-22T21:23:09.9941306Z Entry for principal mapred/master.docker-hadoop-cluster-network with kvno 2, encryption type des-hmac-sha1 added to keytab WRFILE:mapred.keytab.
2020-05-22T21:23:09.9942284Z Entry for principal mapred/master.docker-hadoop-cluster-network with kvno 2, encryption type des-cbc-md5 added to keytab WRFILE:mapred.keytab.
2020-05-22T21:23:09.9943287Z Entry for principal HTTP/master.docker-hadoop-cluster-network with kvno 3, encryption type aes256-cts-hmac-sha1-96 added to keytab WRFILE:mapred.keytab.
2020-05-22T21:23:09.9944317Z Entry for principal HTTP/master.docker-hadoop-cluster-network with kvno 3, encryption type aes128-cts-hmac-sha1-96 added to keytab WRFILE:mapred.keytab.
2020-05-22T21:23:09.9945326Z Entry for principal HTTP/master.docker-hadoop-cluster-network with kvno 3, encryption type des3-cbc-sha1 added to keytab WRFILE:mapred.keytab.
2020-05-22T21:23:09.9946283Z Entry for principal HTTP/master.docker-hadoop-cluster-network with kvno 3, encryption type arcfour-hmac added to keytab WRFILE:mapred.keytab.
2020-05-22T21:23:09.9947252Z Entry for principal HTTP/master.docker-hadoop-cluster-network with kvno 3, encryption type des-hmac-sha1 added to keytab WRFILE:mapred.keytab.
2020-05-22T21:23:09.9948202Z Entry for principal HTTP/master.docker-hadoop-cluster-network with kvno 3, encryption type des-cbc-md5 added to keytab WRFILE:mapred.keytab.
2020-05-22T21:23:09.9948727Z Authenticating as principal admin/admin with password.
2020-05-22T21:23:09.9949543Z Entry for principal yarn/master.docker-hadoop-cluster-network with kvno 2, encryption type aes256-cts-hmac-sha1-96 added to keytab WRFILE:yarn.keytab.
2020-05-22T21:23:09.9950554Z Entry for principal yarn/master.docker-hadoop-cluster-network with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab WRFILE:yarn.keytab.
2020-05-22T21:23:09.9951937Z Entry for principal yarn/master.docker-hadoop-cluster-network with kvno 2, encryption type des3-cbc-sha1 added to keytab WRFILE:yarn.keytab.
2020-05-22T21:23:09.9952954Z Entry for principal yarn/master.docker-hadoop-cluster-network with kvno 2, encryption type arcfour-hmac added to keytab WRFILE:yarn.keytab.
2020-05-22T21:23:09.9953909Z Entry for principal yarn/master.docker-hadoop-cluster-network with kvno 2, encryption type des-hmac-sha1 added to keytab WRFILE:yarn.keytab.
2020-05-22T21:23:09.9954887Z Entry for principal yarn/master.docker-hadoop-cluster-network with kvno 2, encryption type des-cbc-md5 added to keytab WRFILE:yarn.keytab.
2020-05-22T21:23:09.9955865Z Entry for principal HTTP/master.docker-hadoop-cluster-network with kvno 4, encryption type aes256-cts-hmac-sha1-96 added to keytab WRFILE:yarn.keytab.
2020-05-22T21:23:09.9957028Z Entry for principal HTTP/master.docker-hadoop-cluster-network with kvno 4, encryption type aes128-cts-hmac-sha1-96 added to keytab WRFILE:yarn.keytab.
2020-05-22T21:23:09.9958018Z Entry for principal HTTP/master.docker-hadoop-cluster-network with kvno 4, encryption type des3-cbc-sha1 added to keytab WRFILE:yarn.keytab.
2020-05-22T21:23:09.9958966Z Entry for principal HTTP/master.docker-hadoop-cluster-network with kvno 4, encryption type arcfour-hmac added to keytab WRFILE:yarn.keytab.
2020-05-22T21:23:09.9959932Z Entry for principal HTTP/master.docker-hadoop-cluster-network with kvno 4, encryption type des-hmac-sha1 added to keytab WRFILE:yarn.keytab.
2020-05-22T21:23:09.9960990Z Entry for principal HTTP/master.docker-hadoop-cluster-network with kvno 4, encryption type des-cbc-md5 added to keytab WRFILE:yarn.keytab.
2020-05-22T21:23:09.9961474Z * Starting OpenBSD Secure Shell server sshd
2020-05-22T21:23:09.9961718Z ...done.
2020-05-22T21:23:09.9961987Z 20/05/22 21:19:30 INFO namenode.NameNode: STARTUP_MSG:
2020-05-22T21:23:09.9962362Z /************************************************************
2020-05-22T21:23:09.9962653Z STARTUP_MSG: Starting NameNode
2020-05-22T21:23:09.9962913Z STARTUP_MSG: user = hdfs
2020-05-22T21:23:09.9963842Z STARTUP_MSG: host = master.docker-hadoop-cluster-network/172.22.0.3
2020-05-22T21:23:09.9964352Z STARTUP_MSG: args = [-format]
2020-05-22T21:23:09.9964623Z STARTUP_MSG: version = 2.8.4
2020-05-22T21:23:10.0001701Z STARTUP_MSG: classpath =
/usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/jsch-0.1.54.jar:/usr/local/hadoop/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/common/lib/json-smart-1.3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/local/hadoop/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/common/lib/httpclient-4.5.2.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/usr/local/hadoop/share/hadoop/common/lib/zookeeper-3.4.6.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-math3-3.1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/usr/local/hadoop/share/hadoop/common/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/htrace-core4-4.0.1-incubating.jar:/usr/local/hadoop/share/hadoop/common/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/comm
on/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/jcip-annotations-1.0-1.jar:/usr/local/hadoop/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/local/hadoop/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/common/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-annotations-2.8.4.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-client-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/lib/hamcrest-core-1.3.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/common/lib/jsr305-3.0.0.jar:/usr/local/hadoop/share/hadoop/common/lib/nimbus-jose-jwt-4.41.1.jar:/usr/local/hadoop/share/hadoop/common/lib/hadoop-auth-2.8.4.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/usr/local/hadoop/share/hadoop/common/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/common/lib/junit-4.11.jar:/usr/local/hadoop/share/hadoop/common/lib/jetty-sslengine-6.1.26.jar:/usr/local/hadoop/share/hadoop/common/lib/gson-2.2.4.jar:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/usr/local/hadoop/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/hadoop/share/hadoop/common/lib/jets3t-0.9.0.jar:/usr/local/hadoop/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-collections-3.2.2.jar:/usr/local/hadoop/share/hadoop/common/lib/httpcore-4.4.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-beanut
ils-core-1.8.0.jar:/usr/local/hadoop/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/usr/local/hadoop/share/hadoop/common/lib/curator-framework-2.7.1.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.8.4-tests.jar:/usr/local/hadoop/share/hadoop/common/hadoop-nfs-2.8.4.jar:/usr/local/hadoop/share/hadoop/common/hadoop-common-2.8.4.jar:/usr/local/hadoop/share/hadoop/hdfs:/usr/local/hadoop/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/okio-1.4.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/okhttp-2.4.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/htrace-core4-4.0.1-incubating.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop/share
/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/hdfs/lib/hadoop-hdfs-client-2.8.4.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-nfs-2.8.4.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.8.4.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-native-client-2.8.4-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-client-2.8.4-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-native-client-2.8.4.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-2.8.4-tests.jar:/usr/local/hadoop/share/hadoop/hdfs/hadoop-hdfs-client-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/servlet-api-2.5.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-client-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/json-io-2.5.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/usr/local/hadoop/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jettison-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-codec-1.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/fst-2.50.jar:/usr/local/hadoop/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoo
p/share/hadoop/yarn/lib/commons-lang-2.6.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jetty-6.1.26.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/yarn/lib/curator-client-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-json-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-math-2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/activation-1.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/yarn/lib/guava-11.0.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/yarn/lib/javassist-3.18.1-GA.jar:/usr/local/hadoop/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/curator-test-2.7.1.jar:/usr/local/hadoop/share/hadoop/yarn/lib/commons-cli-1.2.jar:/usr/local/hadoop/share/hadoop/yarn/lib/java-util-1.9.0.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-sharedcach
emanager-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-common-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-registry-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-api-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-common-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-tests-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.8.4.jar:/usr/local/hadoop/share/hadoop/yarn/hadoop-yarn-client-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/asm-3.2.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hadoop-annotations-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/junit-4.11.jar:/usr/local/hadoop/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/local/hadoop/share/hadoop/mapreduce/l
ib/paranamer-2.3.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.8.4-tests.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.8.4.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.4.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar 2020-05-22T21:23:10.0033167Z STARTUP_MSG: build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r 17e75c2a11685af3e043aa5e604dc831e5b14674; compiled by 'jdu' on 2018-05-08T02:50Z 2020-05-22T21:23:10.0033726Z STARTUP_MSG: java = 1.8.0_131 2020-05-22T21:23:10.0034046Z ************************************************************/ 2020-05-22T21:23:10.0034473Z 20/05/22 21:19:30 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT] 2020-05-22T21:23:10.0035137Z 20/05/22 21:19:30 INFO namenode.NameNode: createNameNode [-format] 2020-05-22T21:23:10.0036054Z 20/05/22 21:19:32 INFO security.UserGroupInformation: Login successful for user hdfs/[hidden email] using keytab file /etc/security/keytabs/hdfs.keytab 2020-05-22T21:23:10.0039614Z Formatting using clusterid: CID-6e253d18-f0f3-446b-a39f-e6aa75c5db3b 2020-05-22T21:23:10.0040087Z 20/05/22 21:19:32 INFO namenode.FSEditLog: Edit logging is async:true 2020-05-22T21:23:10.0040522Z 20/05/22 21:19:32 INFO namenode.FSNamesystem: KeyProvider: null 2020-05-22T21:23:10.0040939Z 20/05/22 21:19:32 INFO namenode.FSNamesystem: fsLock is fair: true 2020-05-22T21:23:10.0041616Z 20/05/22 21:19:32 INFO 
namenode.FSNamesystem: Detailed lock hold time metrics enabled: false 2020-05-22T21:23:10.0049679Z 20/05/22 21:19:32 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000 2020-05-22T21:23:10.0053670Z 20/05/22 21:19:32 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true 2020-05-22T21:23:10.0054921Z 20/05/22 21:19:32 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000 2020-05-22T21:23:10.0055586Z 20/05/22 21:19:32 INFO blockmanagement.BlockManager: The block deletion will start around 2020 May 22 21:19:32 2020-05-22T21:23:10.0056106Z 20/05/22 21:19:32 INFO util.GSet: Computing capacity for map BlocksMap 2020-05-22T21:23:10.0056843Z 20/05/22 21:19:32 INFO util.GSet: VM type = 64-bit 2020-05-22T21:23:10.0057401Z 20/05/22 21:19:32 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB 2020-05-22T21:23:10.0057811Z 20/05/22 21:19:32 INFO util.GSet: capacity = 2^21 = 2097152 entries 2020-05-22T21:23:10.0058242Z 20/05/22 21:19:32 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=true 2020-05-22T21:23:10.0058888Z 20/05/22 21:19:32 INFO blockmanagement.BlockManager: dfs.block.access.key.update.interval=600 min(s), dfs.block.access.token.lifetime=600 min(s), dfs.encrypt.data.transfer.algorithm=null 2020-05-22T21:23:10.0059530Z 20/05/22 21:19:32 INFO blockmanagement.BlockManager: defaultReplication = 1 2020-05-22T21:23:10.0059991Z 20/05/22 21:19:32 INFO blockmanagement.BlockManager: maxReplication = 512 2020-05-22T21:23:10.0060551Z 20/05/22 21:19:32 INFO blockmanagement.BlockManager: minReplication = 1 2020-05-22T21:23:10.0061023Z 20/05/22 21:19:32 INFO blockmanagement.BlockManager: maxReplicationStreams = 2 2020-05-22T21:23:10.0061478Z 20/05/22 21:19:32 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000 2020-05-22T21:23:10.0061958Z 20/05/22 21:19:32 INFO blockmanagement.BlockManager: encryptDataTransfer = true 
2020-05-22T21:23:10.0062417Z 20/05/22 21:19:32 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000 2020-05-22T21:23:10.0063332Z 20/05/22 21:19:32 INFO namenode.FSNamesystem: fsOwner = hdfs/[hidden email] (auth:KERBEROS) 2020-05-22T21:23:10.0063900Z 20/05/22 21:19:32 INFO namenode.FSNamesystem: supergroup = root 2020-05-22T21:23:10.0064314Z 20/05/22 21:19:32 INFO namenode.FSNamesystem: isPermissionEnabled = true 2020-05-22T21:23:10.0064739Z 20/05/22 21:19:32 INFO namenode.FSNamesystem: HA Enabled: false 2020-05-22T21:23:10.0065140Z 20/05/22 21:19:32 INFO namenode.FSNamesystem: Append Enabled: true 2020-05-22T21:23:10.0065552Z 20/05/22 21:19:32 INFO util.GSet: Computing capacity for map INodeMap 2020-05-22T21:23:10.0066146Z 20/05/22 21:19:32 INFO util.GSet: VM type = 64-bit 2020-05-22T21:23:10.0066519Z 20/05/22 21:19:32 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB 2020-05-22T21:23:10.0066927Z 20/05/22 21:19:32 INFO util.GSet: capacity = 2^20 = 1048576 entries 2020-05-22T21:23:10.0067322Z 20/05/22 21:19:32 INFO namenode.FSDirectory: ACLs enabled? false 2020-05-22T21:23:10.0067724Z 20/05/22 21:19:32 INFO namenode.FSDirectory: XAttrs enabled? 
true 2020-05-22T21:23:10.0068147Z 20/05/22 21:19:32 INFO namenode.NameNode: Caching file names occurring more than 10 times 2020-05-22T21:23:10.0068595Z 20/05/22 21:19:32 INFO util.GSet: Computing capacity for map cachedBlocks 2020-05-22T21:23:10.0069186Z 20/05/22 21:19:32 INFO util.GSet: VM type = 64-bit 2020-05-22T21:23:10.0069562Z 20/05/22 21:19:32 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB 2020-05-22T21:23:10.0069966Z 20/05/22 21:19:32 INFO util.GSet: capacity = 2^18 = 262144 entries 2020-05-22T21:23:10.0070657Z 20/05/22 21:19:32 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033 2020-05-22T21:23:10.0071163Z 20/05/22 21:19:32 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0 2020-05-22T21:23:10.0071637Z 20/05/22 21:19:32 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000 2020-05-22T21:23:10.0072119Z 20/05/22 21:19:32 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10 2020-05-22T21:23:10.0072614Z 20/05/22 21:19:32 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10 2020-05-22T21:23:10.0073094Z 20/05/22 21:19:32 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25 2020-05-22T21:23:10.0073579Z 20/05/22 21:19:33 INFO namenode.FSNamesystem: Retry cache on namenode is enabled 2020-05-22T21:23:10.0074124Z 20/05/22 21:19:33 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis 2020-05-22T21:23:10.0074645Z 20/05/22 21:19:33 INFO util.GSet: Computing capacity for map NameNodeRetryCache 2020-05-22T21:23:10.0075361Z 20/05/22 21:19:33 INFO util.GSet: VM type = 64-bit 2020-05-22T21:23:10.0075760Z 20/05/22 21:19:33 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB 2020-05-22T21:23:10.0076195Z 20/05/22 21:19:33 INFO util.GSet: capacity = 2^15 = 32768 entries 2020-05-22T21:23:10.0076904Z 20/05/22 21:19:33 INFO namenode.FSImage: Allocated new BlockPoolId: 
BP-1813050567-172.22.0.3-1590182373027 2020-05-22T21:23:10.0077704Z 20/05/22 21:19:33 INFO common.Storage: Storage directory /tmp/hadoop-hdfs/dfs/name has been successfully formatted. 2020-05-22T21:23:10.0078636Z 20/05/22 21:19:33 INFO namenode.FSImageFormatProtobuf: Saving image file /tmp/hadoop-hdfs/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression 2020-05-22T21:23:10.0079759Z 20/05/22 21:19:33 INFO namenode.FSImageFormatProtobuf: Image file /tmp/hadoop-hdfs/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 315 bytes saved in 0 seconds. 2020-05-22T21:23:10.0080405Z 20/05/22 21:19:33 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0 2020-05-22T21:23:10.0080862Z 20/05/22 21:19:33 INFO util.ExitUtil: Exiting with status 0 2020-05-22T21:23:10.0081230Z 20/05/22 21:19:33 INFO namenode.NameNode: SHUTDOWN_MSG: 2020-05-22T21:23:10.0081604Z /************************************************************ 2020-05-22T21:23:10.0082215Z SHUTDOWN_MSG: Shutting down NameNode at master.docker-hadoop-cluster-network/172.22.0.3 2020-05-22T21:23:10.0082622Z ************************************************************/ 2020-05-22T21:23:10.0082982Z WARNING: no policy specified for [hidden email]; defaulting to no policy 2020-05-22T21:23:10.0083611Z WARNING: no policy specified for [hidden email]; defaulting to no policy 2020-05-22T21:23:10.0084443Z 20/05/22 21:19:51 WARN ipc.Client: Failed to connect to server: master.docker-hadoop-cluster-network/172.22.0.3:9000: try once and fail. 
2020-05-22T21:23:10.0084930Z java.net.ConnectException: Connection refused 2020-05-22T21:23:10.0085269Z at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) 2020-05-22T21:23:10.0085665Z at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717) 2020-05-22T21:23:10.0086146Z at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206) 2020-05-22T21:23:10.0086601Z at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:531) 2020-05-22T21:23:10.0087023Z at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:685) 2020-05-22T21:23:10.0087494Z at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:788) 2020-05-22T21:23:10.0087937Z at org.apache.hadoop.ipc.Client$Connection.access$3500(Client.java:410) 2020-05-22T21:23:10.0088378Z at org.apache.hadoop.ipc.Client.getConnection(Client.java:1550) 2020-05-22T21:23:10.0088763Z at org.apache.hadoop.ipc.Client.call(Client.java:1381) 2020-05-22T21:23:10.0089147Z at org.apache.hadoop.ipc.Client.call(Client.java:1345) 2020-05-22T21:23:10.0089591Z at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:227) 2020-05-22T21:23:10.0090094Z at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116) 2020-05-22T21:23:10.0090515Z at com.sun.proxy.$Proxy10.setSafeMode(Unknown Source) 2020-05-22T21:23:10.0091006Z at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.setSafeMode(ClientNamenodeProtocolTranslatorPB.java:691) 2020-05-22T21:23:10.0091713Z at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 2020-05-22T21:23:10.0092135Z at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) 2020-05-22T21:23:10.0092645Z at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) 2020-05-22T21:23:10.0093100Z at java.lang.reflect.Method.invoke(Method.java:498) 2020-05-22T21:23:10.0093556Z at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:409) 2020-05-22T21:23:10.0094135Z at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:163) 2020-05-22T21:23:10.0094802Z at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:155) 2020-05-22T21:23:10.0095375Z at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95) 2020-05-22T21:23:10.0095931Z at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:346) 2020-05-22T21:23:10.0096348Z at com.sun.proxy.$Proxy11.setSafeMode(Unknown Source) 2020-05-22T21:23:10.0096735Z at org.apache.hadoop.hdfs.DFSClient.setSafeMode(DFSClient.java:2143) 2020-05-22T21:23:10.0097207Z at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:1359) 2020-05-22T21:23:10.0098117Z at org.apache.hadoop.hdfs.DistributedFileSystem.setSafeMode(DistributedFileSystem.java:1343) 2020-05-22T21:23:10.0098636Z at org.apache.hadoop.hdfs.tools.DFSAdmin.setSafeMode(DFSAdmin.java:644) 2020-05-22T21:23:10.0102850Z at org.apache.hadoop.hdfs.tools.DFSAdmin.run(DFSAdmin.java:1977) 2020-05-22T21:23:10.0103448Z at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) 2020-05-22T21:23:10.0103853Z at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90) 2020-05-22T21:23:10.0104275Z at org.apache.hadoop.hdfs.tools.DFSAdmin.main(DFSAdmin.java:2168) 2020-05-22T21:23:10.0109414Z safemode: Call From master.docker-hadoop-cluster-network/172.22.0.3 to master.docker-hadoop-cluster-network:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused 2020-05-22T21:23:10.0110219Z Authenticating as principal admin/admin with password. 2020-05-22T21:23:10.0110538Z Principal "[hidden email]" created. 
2020-05-22T21:23:10.0110829Z Authenticating as principal admin/admin with password. 2020-05-22T21:23:10.0111595Z Entry for principal root with kvno 2, encryption type aes256-cts-hmac-sha1-96 added to keytab WRFILE:/root/root.keytab. 2020-05-22T21:23:10.0112487Z Entry for principal root with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab WRFILE:/root/root.keytab. 2020-05-22T21:23:10.0113390Z Entry for principal root with kvno 2, encryption type des3-cbc-sha1 added to keytab WRFILE:/root/root.keytab. 2020-05-22T21:23:10.0114243Z Entry for principal root with kvno 2, encryption type arcfour-hmac added to keytab WRFILE:/root/root.keytab. 2020-05-22T21:23:10.0115058Z Entry for principal root with kvno 2, encryption type des-hmac-sha1 added to keytab WRFILE:/root/root.keytab. 2020-05-22T21:23:10.0115886Z Entry for principal root with kvno 2, encryption type des-cbc-md5 added to keytab WRFILE:/root/root.keytab. 2020-05-22T21:23:10.0116355Z Authenticating as principal admin/admin with password. 2020-05-22T21:23:10.0116842Z Principal "[hidden email]" created. 2020-05-22T21:23:10.0117160Z Authenticating as principal admin/admin with password. 2020-05-22T21:23:10.0117944Z Entry for principal hadoop-user with kvno 2, encryption type aes256-cts-hmac-sha1-96 added to keytab WRFILE:/home/hadoop-user/hadoop-user.keytab. 2020-05-22T21:23:10.0118955Z Entry for principal hadoop-user with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab WRFILE:/home/hadoop-user/hadoop-user.keytab. 2020-05-22T21:23:10.0119927Z Entry for principal hadoop-user with kvno 2, encryption type des3-cbc-sha1 added to keytab WRFILE:/home/hadoop-user/hadoop-user.keytab. 2020-05-22T21:23:10.0120861Z Entry for principal hadoop-user with kvno 2, encryption type arcfour-hmac added to keytab WRFILE:/home/hadoop-user/hadoop-user.keytab. 
2020-05-22T21:23:10.0121806Z Entry for principal hadoop-user with kvno 2, encryption type des-hmac-sha1 added to keytab WRFILE:/home/hadoop-user/hadoop-user.keytab. 2020-05-22T21:23:10.0122751Z Entry for principal hadoop-user with kvno 2, encryption type des-cbc-md5 added to keytab WRFILE:/home/hadoop-user/hadoop-user.keytab. 2020-05-22T21:23:10.0123180Z Safe mode is OFF 2020-05-22T21:23:10.0123409Z Finished master initialization 2020-05-22T21:23:10.0123620Z Flink logs: 2020-05-22T21:23:11.4689466Z 20/05/22 21:23:11 INFO client.RMProxy: Connecting to ResourceManager at master.docker-hadoop-cluster-network/172.22.0.3:8032 2020-05-22T21:23:11.9766773Z 20/05/22 21:23:11 INFO client.AHSProxy: Connecting to Application History server at master.docker-hadoop-cluster-network/172.22.0.3:10200 2020-05-22T21:23:12.2413807Z Total number of applications (application-types: [] and states: [NEW, NEW_SAVING, SUBMITTED, ACCEPTED, RUNNING, FINISHED, FAILED, KILLED]):1 2020-05-22T21:23:12.2415644Z Application-Id Application-Name Application-Type User Queue State Final-State Progress Tracking-URL 2020-05-22T21:23:12.2418784Z application_1590182392090_0001 Flink Application Cluster Apache Flink hadoop-user default FINISHED FAILED 100% N/A 2020-05-22T21:23:13.5805530Z 20/05/22 21:23:13 INFO client.RMProxy: Connecting to ResourceManager at master.docker-hadoop-cluster-network/172.22.0.3:8032 2020-05-22T21:23:14.0501769Z 20/05/22 21:23:14 INFO client.AHSProxy: Connecting to Application History server at master.docker-hadoop-cluster-network/172.22.0.3:10200 2020-05-22T21:23:14.4321458Z Application ID: 2020-05-22T21:23:15.1754151Z options parsing failed: Missing argument for option: applicationId 2020-05-22T21:23:15.1761443Z Retrieve logs for completed YARN applications. 
2020-05-22T21:23:15.1790909Z usage: yarn logs -applicationId <application ID> [OPTIONS] 2020-05-22T21:23:15.1791209Z 2020-05-22T21:23:15.1799114Z general options are: 2020-05-22T21:23:15.1801018Z -am <AM Containers> Prints the AM Container logs for this 2020-05-22T21:23:15.1801688Z application. Specify comma-separated 2020-05-22T21:23:15.1802055Z value to get logs for related AM 2020-05-22T21:23:15.1802639Z Container. For example, If we specify -am 2020-05-22T21:23:15.1803047Z 1,2, we will get the logs for the first 2020-05-22T21:23:15.1803417Z AM Container as well as the second AM 2020-05-22T21:23:15.1803794Z Container. To get logs for all AM 2020-05-22T21:23:15.1804365Z Containers, use -am ALL. To get logs for 2020-05-22T21:23:15.1804968Z the latest AM Container, use -am -1. By 2020-05-22T21:23:15.1805361Z default, it will only print out syslog. 2020-05-22T21:23:15.1805925Z Work with -logFiles to get other logs 2020-05-22T21:23:15.1806519Z -appOwner <Application Owner> AppOwner (assumed to be current user if 2020-05-22T21:23:15.1806856Z not specified) 2020-05-22T21:23:15.1807391Z -containerId <Container ID> ContainerId. By default, it will only 2020-05-22T21:23:15.1807755Z print syslog if the application is 2020-05-22T21:23:15.1808345Z runing. Work with -logFiles to get other 2020-05-22T21:23:15.1808682Z logs. 2020-05-22T21:23:15.1809176Z -help Displays help for all commands. 2020-05-22T21:23:15.1809746Z -logFiles <Log File Name> Work with -am/-containerId and specify 2020-05-22T21:23:15.1810310Z comma-separated value to get specified 2020-05-22T21:23:15.1810700Z container log files. Use "ALL" to fetch 2020-05-22T21:23:15.1811065Z all the log files for the container. 2020-05-22T21:23:15.1812030Z -nodeAddress <Node Address> NodeAddress in the format nodename:port 2020-05-22T21:23:16.1892439Z Stopping slave2 ... 2020-05-22T21:23:16.1893278Z Stopping slave1 ... 2020-05-22T21:23:16.1893592Z Stopping master ... 2020-05-22T21:23:16.1893810Z Stopping kdc ... 
2020-05-22T21:23:27.1122258Z Stopping slave2 ... done 2020-05-22T21:23:27.1128148Z Stopping slave1 ... done 2020-05-22T21:23:37.3856007Z Stopping master ... done 2020-05-22T21:23:47.6345744Z Stopping kdc ... done 2020-05-22T21:23:47.6802158Z Removing slave2 ... 2020-05-22T21:23:47.6802444Z Removing slave1 ... 2020-05-22T21:23:47.6802672Z Removing master ... 2020-05-22T21:23:47.6802874Z Removing kdc ... 2020-05-22T21:23:47.7176176Z Removing kdc ... done 2020-05-22T21:23:47.7440633Z Removing slave1 ... done 2020-05-22T21:23:47.7686586Z Removing slave2 ... done 2020-05-22T21:23:47.8104745Z Removing master ... done 2020-05-22T21:23:47.8105276Z Removing network docker-hadoop-cluster-network 2020-05-22T21:23:47.9572035Z [FAIL] Test script contains errors. 2020-05-22T21:23:47.9580367Z Checking for errors... 2020-05-22T21:23:47.9757366Z No errors in log files. 2020-05-22T21:23:47.9758092Z Checking for exceptions... 2020-05-22T21:23:47.9956993Z No exceptions in log files. 2020-05-22T21:23:47.9958318Z Checking for non-empty .out files... 2020-05-22T21:23:47.9977876Z grep: /home/vsts/work/1/s/flink-dist/target/flink-1.12-SNAPSHOT-bin/flink-1.12-SNAPSHOT/log/*.out: No such file or directory 2020-05-22T21:23:47.9983348Z No non-empty .out files. 2020-05-22T21:23:47.9983603Z 2020-05-22T21:23:47.9984288Z [FAIL] 'Running Kerberized YARN application on Docker test (custom fs plugin)' failed after 4 minutes and 25 seconds! Test exited with exit code 1 {code} -- This message was sent by Atlassian Jira (v8.3.4#803005)