hive-default.xml
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <!-- Embedded Derby: create=true tells Derby to create the database automatically; the database name is metastore_db -->
  <value>jdbc:derby:;databaseName=metastore_db;create=true</value>
  <!-- Client/server Derby: hadoopor is the database name, 192.168.0.3 is the IP address of the Derby server, and 4567 is its port
  <value>jdbc:derby://192.168.0.3:4567/hadoopor;create=true</value>
  -->
  <description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <!-- Driver class for embedded Derby -->
  <value>org.apache.derby.jdbc.EmbeddedDriver</value>
  <!-- Driver class for client/server Derby
  <value>org.apache.derby.jdbc.ClientDriver</value>
  -->
  <description>Driver class name for a JDBC metastore</description>
</property>
Embedded Derby requires derby.jar in Hive's lib directory; client/server Derby requires derbyclient.jar instead.
If the embedded Derby database gets corrupted, the only fix is to delete the metastore_db directory, and all previously stored metadata is lost with it. Embedded Derby is fine for testing, but for a production deployment do not start Hive in embedded mode; start a Derby server and have Hive connect to it. Otherwise, because embedded Derby locks metastore_db for a single process, once hive --service hiveserver is running you cannot start the hive CLI, and once the hive CLI is running you cannot start hive --service hiveserver.
Note:
Embedded mode is acceptable for testing, but a production environment must use server mode; otherwise, if anything goes wrong, the metadata cannot be recovered.
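For production, the two properties above are typically overridden in hive-site.xml so that Hive talks to the Derby server rather than the embedded database. The following is a minimal sketch, reusing the example host 192.168.0.3, port 4567, and database name hadoopor from the commented-out values above; substitute the address of your own Derby server, and remember that derbyclient.jar must be in Hive's lib directory for the ClientDriver to load.
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:derby://192.168.0.3:4567/hadoopor;create=true</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>org.apache.derby.jdbc.ClientDriver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>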
You can write the JDBC client in any language you are familiar with; the example below uses Java:
import java.sql.SQLException;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import java.sql.DriverManager;
public class HiveJdbcClient {
    private static String driverName = "org.apache.hadoop.hive.jdbc.HiveDriver";

    /**
     * @param args
     * @throws SQLException
     */
    public static void main(String[] args) throws SQLException {
        try {
            Class.forName(driverName);
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
            System.exit(1);
        }
        Connection con = DriverManager.getConnection("jdbc:hive://localhost:10000/default", "", "");
        Statement stmt = con.createStatement();
        String tableName = "testHiveDriverTable";
        stmt.executeQuery("drop table " + tableName);
        ResultSet res = stmt.executeQuery("create table " + tableName + " (key int, value string)");
        // show tables
        String sql = "show tables '" + tableName + "'";
        System.out.println("Running: " + sql);
        res = stmt.executeQuery(sql);
        if (res.next()) {
            System.out.println(res.getString(1));
        }
        // describe table
        sql = "describe " + tableName;
        System.out.println("Running: " + sql);
        res = stmt.executeQuery(sql);
        while (res.next()) {
            System.out.println(res.getString(1) + "\t" + res.getString(2));
        }
        // load data into table
        // NOTE: filepath has to be local to the hive server
        // NOTE: /tmp/a.txt is a ctrl-A separated file with two fields per line
        String filepath = "/tmp/a.txt";
        sql = "load data local inpath '" + filepath + "' into table " + tableName;
        System.out.println("Running: " + sql);
        res = stmt.executeQuery(sql);
        // select * query
        sql = "select * from " + tableName;
        System.out.println("Running: " + sql);
        res = stmt.executeQuery(sql);
        while (res.next()) {
            System.out.println(String.valueOf(res.getInt(1)) + "\t" + res.getString(2));
        }
        // regular hive query
        sql = "select count(1) from " + tableName;
        System.out.println("Running: " + sql);
        res = stmt.executeQuery(sql);
        while (res.next()) {
            System.out.println(res.getString(1));
        }
    }
}
The next step is to compile and run it:
# Then on the command-line
$ javac HiveJdbcClient.java
# To run the program in standalone mode, we need the following jars in the classpath
# from hive/build/dist/lib
# hive_exec.jar
# hive_jdbc.jar
# hive_metastore.jar
# hive_service.jar
# libfb303.jar
# log4j-1.2.15.jar
#
# from hadoop/build
# hadoop-*-core.jar
#
# To run the program in embedded mode, we need the following additional jars in the classpath
# from hive/build/dist/lib
# antlr-runtime-3.0.1.jar
# derby.jar
# jdo2-api-2.1.jar
# jpox-core-1.2.2.jar
# jpox-rdbms-1.2.2.jar
#
# as well as hive/build/dist/conf
$ java -cp $CLASSPATH HiveJdbcClient
# Alternatively, you can run the following bash script, which will seed the data file
# and build your classpath before invoking the client.
#!/bin/bash
HADOOP_HOME=/your/path/to/hadoop
HIVE_HOME=/your/path/to/hive
echo -e '1\x01foo' > /tmp/a.txt
echo -e '2\x01bar' >> /tmp/a.txt
HADOOP_CORE=$(ls $HADOOP_HOME/hadoop-*-core.jar)
CLASSPATH=.:$HADOOP_CORE:$HIVE_HOME/conf
for i in ${HIVE_HOME}/lib/*.jar ; do
CLASSPATH=$CLASSPATH:$i
done
java -cp $CLASSPATH HiveJdbcClient