Hello, teacher. In the Scala shell I created a new table test1 with spark.sql("select pday,count(*) num from xtbl group by pday order by pday").write.saveAsTable("test1"). Afterwards both the Scala command spark.table("test1").show() and spark-sql can read the inserted data, but running a select on the table in Hive returns an empty result.
The exact steps: I start the Scala shell with spark-shell --master local[2] --jars /home/hadoop/software/mysql-connector-java-5.1.27-bin.jar, then run spark.sql("select pday,count(*) num from xtbl group by pday order by pday").write.saveAsTable("test1").
Running spark.table("test1").show() shows the query result.
Starting spark-sql with spark-sql --master local[2] --jars /home/hadoop/software/mysql-connector-java-5.1.27-bin.jar and running select * from test1 also shows the result.
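For reference, a minimal sketch of the spark-shell session described above, assuming the source table xtbl already exists in the shared Hive metastore (table and column names are taken directly from the commands in this question):

// started with: spark-shell --master local[2] --jars /home/hadoop/software/mysql-connector-java-5.1.27-bin.jar
// aggregate xtbl by pday and persist the result as a new managed table
spark.sql("select pday, count(*) num from xtbl group by pday order by pday")
  .write
  .saveAsTable("test1")

// verification inside the same Spark session: this shows the expected rows
spark.table("test1").show()

// but in the Hive CLI, select * from test1; comes back empty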