
Running the following statement in spark-sql fails:

spark-sql> create table t(key string, value string);

The error output is as follows:
20/07/21 00:23:46 INFO HiveMetaStore: 0: get_database: default
20/07/21 00:23:46 INFO audit: ugi=hadoop ip=unknown-ip-addr cmd=get_database: default
20/07/21 00:23:46 INFO HiveMetaStore: 0: get_table : db=default tbl=t
20/07/21 00:23:46 INFO audit: ugi=hadoop ip=unknown-ip-addr cmd=get_table : db=default tbl=t
20/07/21 00:23:46 INFO HiveMetaStore: 0: get_database: default
20/07/21 00:23:46 INFO audit: ugi=hadoop ip=unknown-ip-addr cmd=get_database: default
20/07/21 00:23:46 INFO HiveMetaStore: 0: get_database: default
20/07/21 00:23:46 INFO audit: ugi=hadoop ip=unknown-ip-addr cmd=get_database: default
20/07/21 00:23:46 INFO HiveMetaStore: 0: get_database: default
20/07/21 00:23:46 INFO audit: ugi=hadoop ip=unknown-ip-addr cmd=get_database: default
20/07/21 00:23:46 INFO HiveMetaStore: 0: get_table : db=default tbl=t
20/07/21 00:23:46 INFO audit: ugi=hadoop ip=unknown-ip-addr cmd=get_table : db=default tbl=t
20/07/21 00:23:46 INFO HiveMetaStore: 0: get_database: default
20/07/21 00:23:46 INFO audit: ugi=hadoop ip=unknown-ip-addr cmd=get_database: default
20/07/21 00:23:46 INFO HiveMetaStore: 0: create_table: Table(tableName:t, dbName:default, owner:hadoop, createTime:1595262226, lastAccessTime:0, retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:key, type:string, comment:null), FieldSchema(name:value, type:string, comment:null)], location:file:/user/hive/warehouse/t, inputFormat:org.apache.hadoop.mapred.TextInputFormat, outputFormat:org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat, compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null, serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, parameters:{serialization.format=1}), bucketCols:[], sortCols:[], parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], skewedColValueLocationMaps:{})), partitionKeys:[], parameters:{spark.sql.sources.schema.part.0={"type":"struct","fields":[{"name":"key","type":"string","nullable":true,"metadata":{}},{"name":"value","type":"string","nullable":true,"metadata":{}}]}, spark.sql.sources.schema.numParts=1, spark.sql.create.version=2.4.3}, viewOriginalText:null, viewExpandedText:null, tableType:MANAGED_TABLE, privileges:PrincipalPrivilegeSet(userPrivileges:{}, groupPrivileges:null, rolePrivileges:null))
20/07/21 00:23:46 INFO audit: ugi=hadoop ip=unknown-ip-addr cmd=create_table: Table(tableName:t, dbName:default, owner:hadoop, createTime:1595262226, lastAccessTime:0, retention:0, sd:StorageDescriptor(cols:[FieldSchema(name:key, type:string, comment:null), FieldSchema(name:value, type:string, comment:null)], location:file:/user/hive/warehouse/t, inputFormat:org.apache.hadoop.mapred.TextInputFormat, outputFormat:org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat, compressed:false, numBuckets:-1, serdeInfo:SerDeInfo(name:null, serializationLib:org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, parameters:{serialization.format=1}), bucketCols:[], sortCols:[], parameters:{}, skewedInfo:SkewedInfo(skewedColNames:[], skewedColValues:[], skewedColValueLocationMaps:{})), partitionKeys:[], parameters:{spark.sql.sources.schema.part.0={"type":"struct","fields":[{"name":"key","type":"string","nullable":true,"metadata":{}},{"name":"value","type":"string","nullable":true,"metadata":{}}]}, spark.sql.sources.schema.numParts=1, spark.sql.create.version=2.4.3}, viewOriginalText:null, viewExpandedText:null, tableType:MANAGED_TABLE, privileges:PrincipalPrivilegeSet(userPrivileges:{}, groupPrivileges:null, rolePrivileges:null))
20/07/21 00:23:46 WARN HiveMetaStore: Location: file:/user/hive/warehouse/t specified for non-external table:t
20/07/21 00:23:46 INFO FileUtils: Creating directory if it doesn't exist: file:/user/hive/warehouse/t
Error in query: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:file:/user/hive/warehouse/t is not a directory or unable to create one);


1 Answer

Michael_PK 2020-07-21 00:36:03

file:/user/hive/warehouse/t is not a directory or unable to create one);

Look at this message: Hive data should be stored on HDFS, but here the path is on the local filesystem... Are you sure your HDFS is working properly?
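The `file:/` scheme in the error is the giveaway. As a rough diagnostic (commands are illustrative, not from the original post), you can check which filesystem spark-sql resolves the warehouse to, and confirm HDFS itself is reachable:

```shell
# Inside spark-sql, inspect the warehouse settings; a file:/ prefix means
# Spark is writing to the local filesystem instead of HDFS:
#   spark-sql> SET spark.sql.warehouse.dir;
#   spark-sql> SET hive.metastore.warehouse.dir;

# From the shell, verify HDFS is up and the warehouse path exists there:
hdfs dfs -ls /user/hive/warehouse
```

If the warehouse resolves to `file:/...` while Hive's own CLI uses `hdfs://...`, Spark is not picking up the Hive metastore configuration.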

  • OP 慕田峪0177977 #1
    HDFS is working fine; creating a table in Hive succeeds with no problem, but this issue only shows up in spark-sql.
    2020-07-21 14:10:03
  • Michael_PK replied to OP 慕田峪0177977 #2
    This error means you are connected to the local filesystem, not HDFS; you can tell from the exception message.
    2020-07-21 14:27:08
  • OP 慕田峪0177977 replied to Michael_PK #3
    It works now, thanks. I had missed two key steps: 1. I had not copied the hive-site.xml config file into Spark's conf directory; 2. mysql-connector-java-5.1.27-bin.jar was missing from Spark's jars directory.
    2020-07-21 14:43:07
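The two fixes above can be sketched as shell commands. This is a minimal sketch assuming `$HIVE_HOME` and `$SPARK_HOME` point at your installations; the connector jar name is taken from the thread, so adjust the version to match your metastore's MySQL:

```shell
# 1. Let Spark find the Hive metastore configuration, so the warehouse
#    resolves to HDFS rather than the local filesystem:
cp "$HIVE_HOME/conf/hive-site.xml" "$SPARK_HOME/conf/"

# 2. Put the MySQL JDBC driver on Spark's classpath, so it can reach
#    the MySQL-backed metastore:
cp mysql-connector-java-5.1.27-bin.jar "$SPARK_HOME/jars/"

# Then restart spark-sql and retry:
#   spark-sql> create table t(key string, value string);
```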