
sh hive_to_hbase.sh


=========Truncating HBase table user_tags_map_all=============

2023-03-19 00:46:53,238 WARN  [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

Truncating 'user_tags_map_all' table (it may take a while):


ERROR: Unknown table user_tags_map_all!


Here is some help for this command:

  Disables, drops and recreates the specified table.
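The `truncate` step above fails because the HBase shell's `truncate` (disable + drop + recreate) requires the table to already exist. Before re-running the script, a quick existence check from the HBase shell would be (sketch):

```
hbase shell
hbase(main):001:0> list
hbase(main):002:0> exists 'user_tags_map_all'
```

If `exists` returns false here, every later step of the script (the Hive insert and the final scan) is bound to fail with the same "table was not found" error seen below.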



================Importing data=========================

SLF4J: Class path contains multiple SLF4J bindings.

SLF4J: Found binding in [jar:file:/opt/hive/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in [jar:file:/opt/hadoop-2.7.4/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.

SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]


Logging initialized using configuration in file:/opt/hive/conf/hive-log4j2.properties Async: true

WARNING: Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.

Query ID = root_20230319004657_85a2594b-14d4-4dd7-bdb2-a1109aaf6e0c

Total jobs = 1

Launching Job 1 out of 1

Number of reduce tasks not specified. Estimated from input data size: 1

In order to change the average load for a reducer (in bytes):

  set hive.exec.reducers.bytes.per.reducer=<number>

In order to limit the maximum number of reducers:

  set hive.exec.reducers.max=<number>

In order to set a constant number of reducers:

  set mapreduce.job.reduces=<number>

Starting Job = job_1679157726184_0003, Tracking URL = http://resourcemanager:8088/proxy/application_1679157726184_0003/

Kill Command = /opt/hadoop-2.7.4/bin/hadoop job  -kill job_1679157726184_0003

Hadoop job information for Stage-3: number of mappers: 1; number of reducers: 1

2023-03-19 00:47:08,082 Stage-3 map = 0%,  reduce = 0%

2023-03-19 00:47:13,277 Stage-3 map = 100%,  reduce = 0%, Cumulative CPU 3.62 sec

2023-03-19 00:47:41,147 Stage-3 map = 100%,  reduce = 100%, Cumulative CPU 3.62 sec

MapReduce Total cumulative CPU time: 3 seconds 620 msec

Ended Job = job_1679157726184_0003 with errors

Error during job, obtaining debugging information...

Examining task ID: task_1679157726184_0003_m_000000 (and more) from job job_1679157726184_0003


Task with the most failures(4): 

-----

Task ID:

  task_1679157726184_0003_r_000000


URL:

  http://resourcemanager:8088/taskdetails.jsp?jobid=job_1679157726184_0003&tipid=task_1679157726184_0003_r_000000

-----

Diagnostic Messages for this Task:

Error: java.lang.RuntimeException: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row (tag=0) {"key":{"_col0":"201237"},"value":{"_col0":["????:????","????:???","?30????????:300-500?","?????:??????","???????:??","???:20???","???????:??","???????:??","????:??","rfm????:??????","????:?????","???????:???","????:20-30?","?????:??????","??????????:10-20?"]}}

at org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:257)

at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:444)

at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)

at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)

at java.security.AccessController.doPrivileged(Native Method)

at javax.security.auth.Subject.doAs(Subject.java:422)

at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1746)

at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)

Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: Hive Runtime Error while processing row (tag=0) {"key":{"_col0":"201237"},"value":{"_col0":["????:????","????:???","?30????????:300-500?","?????:??????","???????:??","???:20???","???????:??","???????:??","????:??","rfm????:??????","????:?????","???????:???","????:20-30?","?????:??????","??????????:10-20?"]}}

at org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:245)

... 7 more

Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1236 actions: Table 'user_tags_map_all' was not found, got: hbase:namespace.: 1236 times, 

at org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:796)

at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:897)

at org.apache.hadoop.hive.ql.exec.SelectOperator.process(SelectOperator.java:95)

at org.apache.hadoop.hive.ql.exec.Operator.forward(Operator.java:897)

at org.apache.hadoop.hive.ql.exec.GroupByOperator.forward(GroupByOperator.java:1047)

at org.apache.hadoop.hive.ql.exec.GroupByOperator.processAggr(GroupByOperator.java:847)

at org.apache.hadoop.hive.ql.exec.GroupByOperator.processKey(GroupByOperator.java:721)

at org.apache.hadoop.hive.ql.exec.GroupByOperator.process(GroupByOperator.java:787)

at org.apache.hadoop.hive.ql.exec.mr.ExecReducer.reduce(ExecReducer.java:236)

... 7 more

Caused by: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 1236 actions: Table 'user_tags_map_all' was not found, got: hbase:namespace.: 1236 times, 

at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:228)

at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$1700(AsyncProcess.java:208)

at org.apache.hadoop.hbase.client.AsyncProcess.waitForAllPreviousOpsAndReset(AsyncProcess.java:1689)

at org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:208)

at org.apache.hadoop.hbase.client.BufferedMutatorImpl.doMutate(BufferedMutatorImpl.java:141)

at org.apache.hadoop.hbase.client.BufferedMutatorImpl.mutate(BufferedMutatorImpl.java:98)

at org.apache.hadoop.hbase.client.HTable.put(HTable.java:1028)

at org.apache.hadoop.hive.hbase.HiveHBaseTableOutputFormat$MyRecordWriter.write(HiveHBaseTableOutputFormat.java:146)

at org.apache.hadoop.hive.hbase.HiveHBaseTableOutputFormat$MyRecordWriter.write(HiveHBaseTableOutputFormat.java:117)

at org.apache.hadoop.hive.ql.io.HivePassThroughRecordWriter.write(HivePassThroughRecordWriter.java:40)

at org.apache.hadoop.hive.ql.exec.FileSinkOperator.process(FileSinkOperator.java:762)

... 15 more



FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask

MapReduce Jobs Launched: 

Stage-Stage-3: Map: 1  Reduce: 1   Cumulative CPU: 3.62 sec   HDFS Read: 202336 HDFS Write: 0 FAIL

Total MapReduce CPU Time Spent: 3 seconds 620 msec

lstat /root/imooc-dmp-env/load_tags_data/show_dwt_user_tags_map_all.sql: no such file or directory

======Printing user-tags topic table data===========

SLF4J: Class path contains multiple SLF4J bindings.

SLF4J: Found binding in [jar:file:/opt/hive/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in [jar:file:/opt/hadoop-2.7.4/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.

SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]


Logging initialized using configuration in file:/opt/hive/conf/hive-log4j2.properties Async: true

OK

Failed with exception java.io.IOException:org.apache.hadoop.hbase.TableNotFoundException: Table 'user_tags_map_all' was not found, got: hbase:namespace.

Time taken: 2.317 seconds

============Data import succeeded===========

====You can now inspect the Hive table: dw.dwt_user_tags_map_all=======

====You can now inspect the HBase table: user_tags_map_all=======
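Note that the script prints the "Data import succeeded" banners above unconditionally, even though the Hive job actually failed with return code 2. A minimal guard is to run each step through a helper that aborts on the first failure; the step commands below (`true`) are placeholders, not the real script's commands:

```shell
#!/bin/sh
# Sketch: stop on the first failed step instead of printing
# an unconditional success banner at the end.
# The "true" commands are placeholders for the real hbase/hive calls.

run_step() {
    desc="$1"; shift
    echo "==== $desc ===="
    if ! "$@"; then
        echo "FAILED: $desc" >&2
        exit 1
    fi
}

run_step "truncate hbase user_tags_map_all" true   # placeholder command
run_step "import data" true                        # placeholder command
echo "==== Data import succeeded ===="
```

With this structure, the final success banner can only be reached when every prior step exited 0.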



1 answer

小简同学 2024-09-24 14:34:16
Hi, the table does not exist. One of the earlier setup steps probably reported an error; you can reinitialize the installation and run it again.
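If reinitializing is not an option, the missing table can also be recreated by hand before re-running the load script. The column family name below ('cf') is an assumption; use whatever family the Hive table's HBase mapping (the `hbase.columns.mapping` property in its DDL) actually expects:

```
hbase(main):001:0> create 'user_tags_map_all', 'cf'    # 'cf' is a placeholder family name
hbase(main):002:0> describe 'user_tags_map_all'
```

Once `describe` shows the table, re-running hive_to_hbase.sh should get past both the truncate step and the "Table 'user_tags_map_all' was not found" failure in the reducer.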