Job submission to YARN fails

Teacher, my job submission to YARN failed. The platform is Ubuntu; the error output is below. Could you help me figure it out?
hadoop jar hadoop-train-v2-1.0.jar com.imooc.bigdata.hadoop.mr.access.AccessYARNApp /access/input/access.log /access/output/
19/05/24 22:10:03 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
19/05/24 22:10:04 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
19/05/24 22:10:04 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
19/05/24 22:10:05 INFO input.FileInputFormat: Total input paths to process : 1
19/05/24 22:10:05 INFO mapreduce.JobSubmitter: number of splits:1
19/05/24 22:10:05 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1558706975691_0001
19/05/24 22:10:05 INFO impl.YarnClientImpl: Submitted application application_1558706975691_0001
19/05/24 22:10:05 INFO mapreduce.Job: The url to track the job: http://ly-HP-ZHAN-66-Pro-G1:8088/proxy/application_1558706975691_0001/
19/05/24 22:10:05 INFO mapreduce.Job: Running job: job_1558706975691_0001
19/05/24 22:10:11 INFO mapreduce.Job: Job job_1558706975691_0001 running in uber mode : false
19/05/24 22:10:11 INFO mapreduce.Job: map 0% reduce 0%
19/05/24 22:10:15 INFO mapreduce.Job: map 100% reduce 0%
19/05/24 22:10:20 INFO mapreduce.Job: Task Id : attempt_1558706975691_0001_r_000000_0, Status : FAILED
Error: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in fetcher#1
at org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:134)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:376)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.io.IOException: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.
at org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl.checkReducerHealth(ShuffleSchedulerImpl.java:392)
at org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl.copyFailed(ShuffleSchedulerImpl.java:307)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:366)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:198)

19/05/24 22:10:21 INFO mapreduce.Job: Task Id : attempt_1558706975691_0001_r_000001_0, Status : FAILED
Error: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in fetcher#2
at org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:134)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:376)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.io.IOException: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.
at org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl.checkReducerHealth(ShuffleSchedulerImpl.java:392)
at org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl.copyFailed(ShuffleSchedulerImpl.java:307)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:366)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:198)

19/05/24 22:10:22 INFO mapreduce.Job: Task Id : attempt_1558706975691_0001_m_000000_0, Status : FAILED
Too many fetch failures. Failing the attempt. Last failure reported by attempt_1558706975691_0001_r_000002_0 from host ly-HP-ZHAN-66-Pro-G1
19/05/24 22:10:22 INFO mapreduce.Job: Task Id : attempt_1558706975691_0001_r_000002_0, Status : FAILED
Error: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in fetcher#1
at org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:134)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:376)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.io.IOException: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.
at org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl.checkReducerHealth(ShuffleSchedulerImpl.java:392)
at org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl.copyFailed(ShuffleSchedulerImpl.java:307)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:366)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:198)

19/05/24 22:10:23 INFO mapreduce.Job: map 0% reduce 0%
19/05/24 22:10:27 INFO mapreduce.Job: map 100% reduce 0%
19/05/24 22:10:29 INFO mapreduce.Job: Task Id : attempt_1558706975691_0001_r_000000_2, Status : FAILED
Error: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in fetcher#2
at org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:134)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:376)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.io.IOException: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.
at org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl.checkReducerHealth(ShuffleSchedulerImpl.java:392)
at org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl.copyFailed(ShuffleSchedulerImpl.java:307)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:366)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:198)

19/05/24 22:10:31 INFO mapreduce.Job: Task Id : attempt_1558706975691_0001_r_000002_1, Status : FAILED
Error: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in fetcher#2
at org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:134)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:376)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.io.IOException: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.
at org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl.checkReducerHealth(ShuffleSchedulerImpl.java:392)
at org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl.copyFailed(ShuffleSchedulerImpl.java:307)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:366)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:198)

19/05/24 22:10:31 INFO mapreduce.Job: Task Id : attempt_1558706975691_0001_m_000000_1, Status : FAILED
Too many fetch failures. Failing the attempt. Last failure reported by attempt_1558706975691_0001_r_000001_1 from host ly-HP-ZHAN-66-Pro-G1
19/05/24 22:10:31 INFO mapreduce.Job: Task Id : attempt_1558706975691_0001_r_000001_1, Status : FAILED
Error: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in fetcher#3
at org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:134)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:376)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.io.IOException: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.
at org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl.checkReducerHealth(ShuffleSchedulerImpl.java:392)
at org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl.copyFailed(ShuffleSchedulerImpl.java:307)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:366)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:198)

19/05/24 22:10:32 INFO mapreduce.Job: map 0% reduce 0%
19/05/24 22:10:37 INFO mapreduce.Job: Task Id : attempt_1558706975691_0001_r_000001_2, Status : FAILED
Error: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in fetcher#1
at org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:134)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:376)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.io.IOException: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.
at org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl.checkReducerHealth(ShuffleSchedulerImpl.java:392)
at org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl.copyFailed(ShuffleSchedulerImpl.java:307)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:366)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:198)

19/05/24 22:10:38 INFO mapreduce.Job: map 100% reduce 0%
19/05/24 22:10:38 INFO mapreduce.Job: Task Id : attempt_1558706975691_0001_r_000000_3, Status : FAILED
Error: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in fetcher#5
at org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:134)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:376)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.io.IOException: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.
at org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl.checkReducerHealth(ShuffleSchedulerImpl.java:392)
at org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl.copyFailed(ShuffleSchedulerImpl.java:307)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:366)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:198)

19/05/24 22:10:38 INFO mapreduce.Job: Task Id : attempt_1558706975691_0001_m_000000_2, Status : FAILED
Too many fetch failures. Failing the attempt. Last failure reported by attempt_1558706975691_0001_r_000002_2 from host ly-HP-ZHAN-66-Pro-G1
19/05/24 22:10:38 INFO mapreduce.Job: Task Id : attempt_1558706975691_0001_r_000002_2, Status : FAILED
Error: org.apache.hadoop.mapreduce.task.reduce.Shuffle$ShuffleError: error in shuffle in fetcher#1
at org.apache.hadoop.mapreduce.task.reduce.Shuffle.run(Shuffle.java:134)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:376)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.io.IOException: Exceeded MAX_FAILED_UNIQUE_FETCHES; bailing-out.
at org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl.checkReducerHealth(ShuffleSchedulerImpl.java:392)
at org.apache.hadoop.mapreduce.task.reduce.ShuffleSchedulerImpl.copyFailed(ShuffleSchedulerImpl.java:307)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.copyFromHost(Fetcher.java:366)
at org.apache.hadoop.mapreduce.task.reduce.Fetcher.run(Fetcher.java:198)

19/05/24 22:10:39 INFO mapreduce.Job: map 0% reduce 0%
19/05/24 22:10:44 INFO mapreduce.Job: map 100% reduce 0%
19/05/24 22:10:46 INFO mapreduce.Job: map 100% reduce 100%
19/05/24 22:10:47 INFO mapreduce.Job: Job job_1558706975691_0001 failed with state FAILED due to: Task failed task_1558706975691_0001_r_000000
Job failed as tasks failed. failedMaps:0 failedReduces:1

19/05/24 22:10:47 INFO mapreduce.Job: Counters: 40
File System Counters
FILE: Number of bytes read=0
FILE: Number of bytes written=144499
FILE: Number of read operations=0


3 Answers

You need to add one more configuration entry to yarn-site: the local path that Hadoop points to also has to be configured in yarn-site.

  • Asker 慕函数0201195 #1
    Could you explain in more detail? To configure that path, what exactly should the property name and value be?
    2019-05-25 14:47:09
  • Michael_PK replying to asker 慕函数0201195 #2
    Just copy that property in and restart.
    2019-05-25 14:54:30
  • Asker 慕函数0201195 #3
    Here is my current configuration:
    <configuration>
        <property>
            <name>yarn.nodemanager.aux-services</name>
            <value>mapreduce_shuffle</value>
        </property>
        <property>
            <name>yarn.resourcemanager.scheduler.class</name>
            <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
        </property>
        <property>
            <name>yarn.scheduler.minimum-allocation-mb</name>
            <value>512</value>
        </property>
        <property>
            <name>yarn.log-aggregation-enable</name>
            <value>true</value>
        </property>
    </configuration>
    
    I have restarted several times and still get the same error.
    2019-05-25 15:02:59
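
The thread never spells out the exact property name or value. One hedged reading of the first answer, assuming a pseudo-distributed setup where core-site.xml points hadoop.tmp.dir at a local directory, is to point the NodeManager's working directories at that same base path in yarn-site.xml. The path below is a placeholder, not a value taken from this thread:

    <!-- sketch only: /home/hadoop/app/tmp stands in for whatever local path
         hadoop.tmp.dir (the Hadoop data directory) points to on this machine -->
    <property>
        <name>yarn.nodemanager.local-dirs</name>
        <value>/home/hadoop/app/tmp/nm-local-dir</value>
    </property>
    <property>
        <name>yarn.nodemanager.log-dirs</name>
        <value>/home/hadoop/app/tmp/nm-log-dir</value>
    </property>

Both yarn.nodemanager.local-dirs and yarn.nodemanager.log-dirs are standard YARN settings; whether they are the ones the answer has in mind is an assumption, since the same shuffle fetch failure can also have other causes.
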
Michael_PK 2020-01-31 04:15:43

This question already has an answer in the Q&A section; the idea is to add a parameter to the YARN-related configuration file. Please search the Q&A section.

Michael_PK 2019-10-03 13:04:09

This question has an answer in the Q&A section. You need to configure a few HDFS-related parameters in the yarn-site file. You can search for it in the Q&A section.

  • Is there a link to that question? I could not find it in the Q&A section.
    2020-01-30 19:29:46
  • The link is hard to find on my phone; search for that error message on the PC site.
    2020-01-31 04:16:54
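
The answers above all come down to adding the configuration and restarting YARN. Assuming the standard Hadoop sbin scripts are on the PATH, a minimal sequence to apply a yarn-site.xml change and resubmit the job from the original command would be:

    # restart YARN so the NodeManager picks up the edited yarn-site.xml
    stop-yarn.sh
    start-yarn.sh

    # the failed run may have created the output directory; MapReduce refuses
    # to write to an existing output path, so remove it before resubmitting
    hadoop fs -rm -r /access/output/
    hadoop jar hadoop-train-v2-1.0.jar com.imooc.bigdata.hadoop.mr.access.AccessYARNApp /access/input/access.log /access/output/
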