
Cannot run program "python3.6": error=2, No such file or directory

Environment variables:
export JAVA_HOME=/home/hadoop/app/jdk1.8.0_91
export PATH=$JAVA_HOME/bin:$PATH

export SCALA_HOME=/home/hadoop/app/scala-2.11.8
export PATH=$SCALA_HOME/bin:$PATH

export HADOOP_HOME=/home/hadoop/app/hadoop-2.6.0-cdh5.7.0
export PATH=$HADOOP_HOME/bin:$PATH

export MAVEN_HOME=/home/hadoop/app/apache-maven-3.3.9
export PATH=$MAVEN_HOME/bin:$PATH

export PATH=/home/hadoop/app/python3/bin:/usr/bin/python:$PATH

export PYSPARK_PYTHON=python3.6

export SPARK_HOME=/home/hadoop/app/spark-2.3.0-bin-2.6.0-cdh5.7.0
export PATH=$SPARK_HOME/bin:$PATH

Running python by hand works,
but running in standalone mode fails with:
Caused by: java.io.IOException: Cannot run program "python3.6": error=2, No such file or directory
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
at org.apache.spark.api.python.PythonWorkerFactory.startDaemon(PythonWorkerFactory.scala:168)
at org.apache.spark.api.python.PythonWorkerFactory.createThroughDaemon(PythonWorkerFactory.scala:94)
at org.apache.spark.api.python.PythonWorkerFactory.create(PythonWorkerFactory.scala:70)
at org.apache.spark.SparkEnv.createPythonWorker(SparkEnv.scala:117)
at org.apache.spark.api.python.BasePythonRunner.compute(PythonRunner.scala:86)
at org.apache.spark.api.python.PythonRDD.compute(PythonRDD.scala:63)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:109)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
… 1 more
Caused by: java.io.IOException: error=2, No such file or directory
at java.lang.UNIXProcess.forkAndExec(Native Method)
at java.lang.UNIXProcess.&lt;init&gt;(UNIXProcess.java:248)
at java.lang.ProcessImpl.start(ProcessImpl.java:134)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1029)
… 14 more
Not sure what else still needs to be configured?
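One quick diagnostic (a sketch, not from the thread) is to check whether the exact name set in `PYSPARK_PYTHON` resolves to an executable, since that is the same PATH lookup `ProcessBuilder` performs on the worker; the symlink path in the comment is an assumption based on the install paths quoted above:

```shell
# Sketch: verify that the interpreter name Spark will exec is on PATH.
# An "error=2, No such file or directory" from ProcessBuilder means
# this lookup fails on the host that launches the Python worker.
resolve_python() {
  command -v "$1" || echo "not found: $1"
}

resolve_python python3.6

# If the interpreter exists under another name, a symlink can make the
# configured name resolvable (path is an assumption from this thread):
# ln -s /home/hadoop/app/python3/bin/python3 /home/hadoop/app/python3/bin/python3.6
```

Note that standalone workers are often started from a non-login shell, so a PATH entry added only in `~/.bash_profile` may not be visible to them even when it works interactively.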


1 Answer

Michael_PK 2021-10-09 10:35:51

Did you add python3.6 to the system environment variables?

  • Asker 慕粉1926212165 #1
    System environment variables (~/.bash_profile):
    
    export JAVA_HOME=/home/hadoop/app/jdk1.8.0_91
    export PATH=$JAVA_HOME/bin:$PATH
    
    export SCALA_HOME=/home/hadoop/app/scala-2.11.8
    export PATH=$SCALA_HOME/bin:$PATH
    
    
    export HADOOP_HOME=/home/hadoop/app/hadoop-2.6.0-cdh5.7.0
    export PATH=$HADOOP_HOME/bin:$PATH
    
    export MAVEN_HOME=/home/hadoop/app/apache-maven-3.3.9
    export PATH=$MAVEN_HOME/bin:$PATH
    
    export PATH=/home/hadoop/app/python3/bin:/usr/bin/python:$PATH
    
    export PYSPARK_PYTHON=python3.6
    
    export SPARK_HOME=/home/hadoop/app/spark-2.3.0-bin-2.6.0-cdh5.7.0
    export PATH=$SPARK_HOME/bin:$PATH
    2021-10-09 10:59:56
  • Asker 慕粉1926212165 #2
    Changing export PYSPARK_PYTHON=python3.6 to export PYSPARK_PYTHON=python solved it.
    2021-10-09 16:07:18
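The rename works because `python` resolves on the worker's PATH while `python3.6` does not. A more explicit alternative (a sketch, not the asker's exact fix; the interpreter path is taken from the exports quoted earlier) is to point `PYSPARK_PYTHON` at an absolute path in `spark-env.sh`, which makes the setting independent of any PATH differences between an interactive shell and a standalone worker:

```shell
# $SPARK_HOME/conf/spark-env.sh
# Absolute paths avoid PATH lookups entirely, so the executor can
# start the Python worker no matter how its shell was launched.
export PYSPARK_PYTHON=/home/hadoop/app/python3/bin/python3
export PYSPARK_DRIVER_PYTHON=/home/hadoop/app/python3/bin/python3
```

Both variables should point at the same major.minor Python version, since Spark refuses to run when the driver and worker interpreters differ.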