Hive environment setup: Specified key was too long

2017/03/20 20:12

Error

2017-03-20 17:43:55,108 ERROR [main]: DataNucleus.Datastore (Log4JLogger.java:error(115)) - Error thrown executing CREATE TABLE `SD_PARAMS`
(
    `SD_ID` BIGINT NOT NULL,
    `PARAM_KEY` VARCHAR(256) BINARY NOT NULL,
    `PARAM_VALUE` VARCHAR(4000) BINARY NULL,
    CONSTRAINT `SD_PARAMS_PK` PRIMARY KEY (`SD_ID`,`PARAM_KEY`)
) ENGINE=INNODB : Specified key was too long; max key length is 767 bytes
com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Specified key was too long; max key length is 767 bytes
	at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
	at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
	at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
	at com.mysql.jdbc.Util.handleNewInstance(Util.java:408)
	at com.mysql.jdbc.Util.getInstance(Util.java:383)
	at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1062)
	at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4208)
	at com.mysql.jdbc.MysqlIO.checkErrorPacket(MysqlIO.java:4140)
	at com.mysql.jdbc.MysqlIO.sendCommand(MysqlIO.java:2597)
	at com.mysql.jdbc.MysqlIO.sqlQueryDirect(MysqlIO.java:2758)
	at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2820)
	at com.mysql.jdbc.ConnectionImpl.execSQL(ConnectionImpl.java:2769)
	at com.mysql.jdbc.StatementImpl.execute(StatementImpl.java:907)
	at com.mysql.jdbc.StatementImpl.execute(StatementImpl.java:791)
	at com.jolbox.bonecp.StatementHandle.execute(StatementHandle.java:254)
	at org.datanucleus.store.rdbms.table.AbstractTable.executeDdlStatement(AbstractTable.java:760)
	at org.datanucleus.store.rdbms.table.AbstractTable.executeDdlStatementList(AbstractTable.java:711)
	at org.datanucleus.store.rdbms.table.AbstractTable.create(AbstractTable.java:425)
	at org.datanucleus.store.rdbms.table.AbstractTable.exists(AbstractTable.java:488)
	at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.performTablesValidation(RDBMSStoreManager.java:3380)
	at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.addClassTablesAndValidate(RDBMSStoreManager.java:3190)
	at org.datanucleus.store.rdbms.RDBMSStoreManager$ClassAdder.run(RDBMSStoreManager.java:2841)
	at org.datanucleus.store.rdbms.AbstractSchemaTransaction.execute(AbstractSchemaTransaction.java:122)
	at org.datanucleus.store.rdbms.RDBMSStoreManager.addClasses(RDBMSStoreManager.java:1605)
	at org.datanucleus.store.AbstractStoreManager.addClass(AbstractStoreManager.java:954)
	at org.datanucleus.store.rdbms.RDBMSStoreManager.getDatastoreClass(RDBMSStoreManager.java:679)
	at org.datanucleus.store.rdbms.query.RDBMSQueryUtils.getStatementForCandidates(RDBMSQueryUtils.java:408)
	at org.datanucleus.store.rdbms.query.JDOQLQuery.compileQueryFull(JDOQLQuery.java:947)
	at org.datanucleus.store.rdbms.query.JDOQLQuery.compileInternal(JDOQLQuery.java:370)
	at org.datanucleus.store.query.Query.executeQuery(Query.java:1744)
	at org.datanucleus.store.query.Query.executeWithArray(Query.java:1672)
	at org.datanucleus.store.query.Query.execute(Query.java:1654)
	at org.datanucleus.api.jdo.JDOQuery.execute(JDOQuery.java:221)
	at org.apache.hadoop.hive.metastore.MetaStoreDirectSql.ensureDbInit(MetaStoreDirectSql.java:184)
	at org.apache.hadoop.hive.metastore.MetaStoreDirectSql.<init>(MetaStoreDirectSql.java:137)
	at org.apache.hadoop.hive.metastore.ObjectStore.initialize(ObjectStore.java:295)
	at org.apache.hadoop.hive.metastore.ObjectStore.setConf(ObjectStore.java:258)
	at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:76)
	at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
	at org.apache.hadoop.hive.metastore.RawStoreProxy.<init>(RawStoreProxy.java:57)
	at org.apache.hadoop.hive.metastore.RawStoreProxy.getProxy(RawStoreProxy.java:66)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.newRawStore(HiveMetaStore.java:593)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.getMS(HiveMetaStore.java:571)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.createDefaultDB(HiveMetaStore.java:620)
	at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.init(HiveMetaStore.java:461)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.<init>(RetryingHMSHandler.java:66)
	at org.apache.hadoop.hive.metastore.RetryingHMSHandler.getProxy(RetryingHMSHandler.java:72)
	at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5756)
	at org.apache.hadoop.hive.metastore.HiveMetaStore.newRetryingHMSHandler(HiveMetaStore.java:5751)
	at org.apache.hadoop.hive.metastore.HiveMetaStore.startMetaStore(HiveMetaStore.java:5984)
	at org.apache.hadoop.hive.metastore.HiveMetaStore.main(HiveMetaStore.java:5909)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:136)

Cause

The metastore database's character set is the problem. The hive database in MySQL was created with utf8 as its default character set, and MySQL's utf8 reserves up to 3 bytes per character, so the index behind the primary key on the VARCHAR(256) column `PARAM_KEY` would need 256 × 3 = 768 bytes, just over InnoDB's 767-byte limit for index keys.
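The limit is easy to reproduce outside Hive. A minimal sketch, using a hypothetical scratch database named enc_test on a MariaDB/MySQL server of the same era: the first CREATE TABLE fails for exactly the reason above, and after the default character set is switched to latin1 the same definition succeeds.

create database enc_test character set utf8;
use enc_test;
-- utf8 needs 3 bytes per character, so this index would take 256 * 3 = 768 bytes and the statement fails
create table t (k varchar(256) not null, primary key (k)) engine=innodb;
-- latin1 is 1 byte per character, so the same index fits in 256 bytes and the statement succeeds
alter database enc_test character set latin1;
create table t (k varchar(256) not null, primary key (k)) engine=innodb;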

Solution

Change the database character set.

Run the following command: alter database hive character set latin1;
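To confirm the change took effect, the new default can be read back from information_schema (a quick check, assuming the metastore database is named hive as above):

select default_character_set_name from information_schema.schemata where schema_name = 'hive';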

Additional issue 1

After the change, creating databases and tables works fine, but load data and alter table tablename add partition (logdate='20151107') location '/cmslogs/20151107/46_log/' still fail with the same error:

An exception was thrown while adding/validating class(es) : Specified key was too long; max key length is 767 bytes

Changing the MySQL database setting does not affect tables that already exist; the character set of those tables must be converted explicitly.
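Before converting, it helps to see which metastore tables are still on a non-latin1 character set. A minimal sketch, again assuming the metastore database is named hive:

select table_name, table_collation from information_schema.tables where table_schema = 'hive' and table_collation not like 'latin1%';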

Convert the table character set:
MariaDB [hive]> alter table PARTITIONS convert to character set latin1;
Query OK, 0 rows affected (0.01 sec)              
Records: 0  Duplicates: 0  Warnings: 0

MariaDB [hive]> alter table PARTITION_KEYS convert to character set latin1;
Query OK, 1 row affected (0.00 sec)               
Records: 1  Duplicates: 0  Warnings: 0

Run the statement again in the Hive CLI:
hive> alter table originallogs add partition (logdate='20151107') location '/cmslogs/20151107/46_log/';
OK
Time taken: 0.249 seconds
Solved.
This suggests two points: 1. Changing the database character set does not change the character set of tables that already exist.
2. An open question: if the database is created first, its character set changed, and only then the Hive metastore service started, will the tables Hive creates come out latin1-encoded?

Verification:
1. Stop the Hive metastore process.
[careland@master bin]$ ps -ef |grep hive;
careland 17600 17025  0 09:46 pts/2    00:00:00 grep --color=auto hive
careland 22999     1  0 Nov18 ?      00:00:56 /home/careland/hadoop/jdk1.7.0_79/bin/java -Xmx256m -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/home/careland/hadoop/hadoop-2.6.0/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/home/careland/hadoop/hadoop-2.6.0 -Dhadoop.id.str=careland -Dhadoop.root.logger=INFO,console -Djava.library.path=/home/careland/hadoop/hadoop-2.6.0/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx512m -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /home/careland/hadoop/apache-hive-1.2.1-bin//lib/hive-service-1.2.1.jar org.apache.hadoop.hive.metastore.HiveMetaStore
[careland@master bin]$ kill -9 22999

2. Drop the Hive metastore database in MySQL.
MariaDB [hive]> drop database hive;
Query OK, 30 rows affected (0.03 sec)

3. Recreate the Hive metastore database in MySQL.
MariaDB [(none)]> create database hive;
Query OK, 1 row affected (0.00 sec)

MariaDB [(none)]> use hive;
Database changed
MariaDB [hive]> show tables;
Empty set (0.01 sec)
4. Change the character set of the new database.
MariaDB [hive]> alter database hive character set latin1;
Query OK, 1 row affected (0.01 sec)
5. Start the Hive metastore service.

[careland@master bin]$ pwd
/home/careland/hadoop/apache-hive-1.2.1-bin/bin
[careland@master bin]$ nohup ./hive --service metastore &
[1] 17690
[careland@master bin]$ nohup: ignoring input and appending output to 'nohup.out'

[careland@master bin]$

6. Check the character set of the Hive metastore tables in MySQL.
MariaDB [hive]> use hive;
Database changed
MariaDB [hive]> show tables;
MariaDB [hive]> show create table PARTITIONS;

The tables have been created with latin1 encoding.

This confirms point 2.

Summary: the "Specified key was too long; max key length is 767 bytes" error from Hive commands is caused by the character set of the metastore database, so the metastore database's character set must be changed.
If the metastore already contains data that cannot be dropped, convert the character set of the affected tables instead. If the metastore is empty, drop it, recreate it, change its character set, and then let Hive create its tables.
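When the metastore holds data and many tables need converting, information_schema can generate the ALTER statements in bulk rather than typing them one by one. A hedged sketch, assuming the metastore database is named hive; review the generated statements before running them:

select concat('alter table `', table_name, '` convert to character set latin1;') from information_schema.tables where table_schema = 'hive';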

Additional issue 2

In Hive, show tables, create, and similar commands run fine, but drop table x hangs. This, too, is caused by the same encoding problem, and the same fix (converting the remaining metastore tables' character set, as in the sketch above) applies.
