
Question

17/12/23 02:26:38 INFO Utils: Successfully started service 'SparkUI' on port 4040.

17/12/23 02:26:38 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.140.128:4040

17/12/23 02:26:38 INFO SparkContext: Added JAR file:/home/hadoop/lib/sql-1.0-jar-with-dependencies.jar at spark://192.168.140.128:44782/jars/sql-1.0-jar-with-dependencies.jar with timestamp 1514024798854

17/12/23 02:26:39 INFO RMProxy: Connecting to ResourceManager at /0.0.0.0:8032

17/12/23 02:26:39 INFO Client: Requesting a new application from cluster with 1 NodeManagers

17/12/23 02:26:39 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)

17/12/23 02:26:39 INFO Client: Will allocate AM container, with 896 MB memory including 384 MB overhead

17/12/23 02:26:39 INFO Client: Setting up container launch context for our AM

17/12/23 02:26:39 INFO Client: Setting up the launch environment for our AM container

17/12/23 02:26:39 INFO Client: Preparing resources for our AM container

17/12/23 02:26:41 WARN Client: Failed to cleanup staging dir hdfs://hadoop001:8020/user/hadoop/.sparkStaging/application_1514023934045_0002

org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot delete /user/hadoop/.sparkStaging/application_1514023934045_0002. Name node is in safe mode.

The reported blocks 0 needs additional 432 blocks to reach the threshold 0.9990 of total blocks 432.

The number of live datanodes 0 has reached the minimum number 0. Safe mode will be turned off automatically once the thresholds have been reached.

at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1418)

at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInternal(FSNamesystem.java:4044)

at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.deleteInt(FSNamesystem.java:4002)

at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.delete(FSNamesystem.java:3986)

at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.delete(NameNodeRpcServer.java:839)

at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.delete(AuthorizationProviderProxyClientProtocol.java:307)

at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.delete(ClientNamenodeProtocolServerSideTranslatorPB.java:592)

at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)

at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)

at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)

at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)

at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)

at java.security.AccessController.doPrivileged(Native Method)

at javax.security.auth.Subject.doAs(Subject.java:415)

at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)

at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)


at org.apache.hadoop.ipc.Client.call(Client.java:1471)

at org.apache.hadoop.ipc.Client.call(Client.java:1408)

at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)

at com.sun.proxy.$Proxy16.delete(Unknown Source)

at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:526)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:606)

at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)

at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)

at com.sun.proxy.$Proxy17.delete(Unknown Source)

at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:2038)

at org.apache.hadoop.hdfs.DistributedFileSystem$12.doCall(DistributedFileSystem.java:646)

at org.apache.hadoop.hdfs.DistributedFileSystem$12.doCall(DistributedFileSystem.java:642)

at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)

at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:642)

at org.apache.spark.deploy.yarn.Client.cleanupStagingDir(Client.scala:194)

at org.apache.spark.deploy.yarn.Client.submitApplication(Client.scala:180)

at org.apache.spark.scheduler.cluster.YarnClientSchedulerBackend.start(YarnClientSchedulerBackend.scala:56)

at org.apache.spark.scheduler.TaskSchedulerImpl.start(TaskSchedulerImpl.scala:156)

at org.apache.spark.SparkContext.<init>(SparkContext.scala:509)

at org.apache.spark.SparkContext$.getOrCreate(SparkContext.scala:2313)

at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:868)

at org.apache.spark.sql.SparkSession$Builder$$anonfun$6.apply(SparkSession.scala:860)

at scala.Option.getOrElse(Option.scala:121)

at org.apache.spark.sql.SparkSession$Builder.getOrCreate(SparkSession.scala:860)

at com.imooc.log.SparkStatCleanJobYARN$.main(SparkStatCleanJobYARN.scala:19)

at com.imooc.log.SparkStatCleanJobYARN.main(SparkStatCleanJobYARN.scala)

at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)

at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

at java.lang.reflect.Method.invoke(Method.java:606)

at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:738)

at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)

at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)

at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)

at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

17/12/23 02:26:41 ERROR SparkContext: Error initializing SparkContext.

org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /user/hadoop/.sparkStaging/application_1514023934045_0002. Name node is in safe mode.

The reported blocks 0 needs additional 432 blocks to reach the threshold 0.9990 of total blocks 432.

The number of live datanodes 0 has reached the minimum number 0. Safe mode will be turned off automatically once the thresholds have been reached.

at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1418)

at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4290)

at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4265)

at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:867)

at ...


1 Answer

Michael_PK 2017-12-23 19:25:14

"The number of live datanodes 0 has reached the minimum number 0. Safe mode will be turned off automatically once the thresholds have been reached." Read the message carefully: it says plainly that the NameNode is in safe mode. You have to wait until HDFS is out of safe mode before the job can run.
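For reference, these are the standard hdfs dfsadmin commands for inspecting and waiting out safe mode (the only assumption is that they are run as the hadoop user against the NameNode host, hadoop001 in the log above):

    hdfs dfsadmin -safemode get     # report whether the NameNode is currently in safe mode
    hdfs dfsadmin -safemode wait    # block until the NameNode exits safe mode on its own

One caveat: the log also reports "The number of live datanodes 0". With no DataNodes reporting, the 0 of 432 reported blocks can never reach the 0.9990 threshold, so safe mode will not end on its own here. Check with jps that the DataNode process is running and start it if it is not; only force safe mode off with "hdfs dfsadmin -safemode leave" once the DataNodes are back, since leaving early while all blocks are missing will likely just move the failure to the first HDFS read.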
