
Submitting in YARN mode: error says pyspark.zip does not exist

/home/hadoop/app/spark-2.4.4-bin-hadoop2.7/bin/spark-submit --master yarn --name spark0402 ~/local_scripts/scripts/spark0402.py hdfs://node1:9000/hello.txt hdfs://node1:9000/output

The main error message is:
file:/home/hadoop/.sparkStaging/application_1575014477999_0002/pyspark.zip does not exist

19/12/03 05:14:10 INFO SparkContext: Successfully stopped SparkContext
Traceback (most recent call last):
  File "/home/hadoop/local_scripts/scripts/test.py", line 14, in <module>
    sc = SparkContext(conf=conf)
  File "/home/hadoop/app/spark-2.4.4-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/context.py", line 136, in __init__
  File "/home/hadoop/app/spark-2.4.4-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/context.py", line 198, in _do_init
  File "/home/hadoop/app/spark-2.4.4-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/context.py", line 306, in _initialize_context
  File "/home/hadoop/app/spark-2.4.4-bin-hadoop2.7/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1525, in __call__
  File "/home/hadoop/app/spark-2.4.4-bin-hadoop2.7/python/lib/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
py4j.protocol.Py4JJavaError: An error occurred while calling None.org.apache.spark.api.java.JavaSparkContext.
: org.apache.spark.SparkException: Application application_1575014477999_0002 failed 2 times due to AM Container for appattempt_1575014477999_0002_000002 exited with  exitCode: -1000
For more detailed output, check application tracking page:http://node1:8088/cluster/app/application_1575014477999_0002Then, click on links to logs of each attempt.
Diagnostics: File file:/home/hadoop/.sparkStaging/application_1575014477999_0002/pyspark.zip does not exist
java.io.FileNotFoundException: File file:/home/hadoop/.sparkStaging/application_1575014477999_0002/pyspark.zip does not exist


1 Answer

Michael_PK 2019-12-04 11:42:01

Run a word-count (wc) example first to make sure your YARN is working properly. Judging from this log, I suspect there is something wrong with your YARN setup.
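
For reference, such a sanity check might look like the following minimal PySpark word count (a sketch only: the script name wc_check.py is made up here, and the HDFS paths are taken from the spark-submit command in the question; the output directory must not already exist):

# wc_check.py: minimal word count to verify that YARN itself is healthy.
from pyspark import SparkConf, SparkContext

if __name__ == "__main__":
    conf = SparkConf().setAppName("wc_check")
    sc = SparkContext(conf=conf)

    counts = (sc.textFile("hdfs://node1:9000/hello.txt")
                .flatMap(lambda line: line.split(" "))
                .map(lambda word: (word, 1))
                .reduceByKey(lambda a, b: a + b))
    counts.saveAsTextFile("hdfs://node1:9000/wc_output")  # must not exist yet
    sc.stop()

Submit it the same way as the failing job, e.g. spark-submit --master yarn wc_check.py. If this also fails with the pyspark.zip error, the problem is in the YARN/Hadoop setup rather than in the application code.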

  • gyy_ (OP) #1
    Here is how I solved it:
    
    conf = SparkConf().setMaster("yarn").set("spark.hadoop.fs.defaultFS", "hdfs://node1:9000")
    
    You have to provide the fs.defaultFS setting explicitly in the code.
    2019-12-04 13:33:45
  • Michael_PK replied to gyy_ (OP) #2
    That works as a fix, but it hard-codes the value. I suspect the real problem is still in your Hadoop configuration, which is why some parameters cannot be found (a fleshed-out sketch follows below).
    2019-12-04 14:48:08
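
Putting the two replies together: the quick workaround is to hard-code fs.defaultFS in the SparkConf, while the cleaner fix is to repair the cluster configuration (fs.defaultFS in core-site.xml, with HADOOP_CONF_DIR visible to spark-submit), so that the .sparkStaging directory resolves to hdfs:// instead of file:/. A sketch of the workaround, fleshed out from the one-liner above (the app name is illustrative):

from pyspark import SparkConf, SparkContext

conf = (SparkConf()
        .setMaster("yarn")
        .setAppName("spark0402")
        # spark.hadoop.* properties are forwarded to Hadoop's Configuration,
        # so the staging directory resolves to HDFS, not the local file system.
        .set("spark.hadoop.fs.defaultFS", "hdfs://node1:9000"))
sc = SparkContext(conf=conf)
# ... job body as in the word-count sketch above ...
sc.stop()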