Installing Hive on a Hadoop Cluster
Hive must be installed on top of an existing Hadoop installation, because it relies on HDFS for storage and on MapReduce for query execution. The steps below cover installing Hive against Hadoop 2.x:
1. Download the Hive release tarball and extract it to the target directory:
```
tar -zxf apache-hive-x.x.x-bin.tar.gz
sudo mv apache-hive-x.x.x-bin /usr/local/hive
```
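If you still need to fetch the tarball, it can be downloaded from the Apache archive, for example (a sketch assuming the standard archive layout; x.x.x is a placeholder for the Hive version you intend to install):
```
wget https://archive.apache.org/dist/hive/hive-x.x.x/apache-hive-x.x.x-bin.tar.gz
```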
2. Configure the environment variables:
```
export HIVE_HOME=/usr/local/hive
export PATH=$PATH:$HIVE_HOME/bin
```
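Exporting the variables on the command line only affects the current shell. To make them persist across sessions, they can be appended to ~/.bashrc (or /etc/profile) and re-sourced, for example:
```
echo 'export HIVE_HOME=/usr/local/hive' >> ~/.bashrc
echo 'export PATH=$PATH:$HIVE_HOME/bin' >> ~/.bashrc
source ~/.bashrc
```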
3. Edit the Hive configuration file hive-site.xml:
```
cd $HIVE_HOME/conf
cp hive-default.xml.template hive-site.xml
vi hive-site.xml
```
Add the following properties to hive-site.xml:
```
<configuration>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost:3306/hive?createDatabaseIfNotExist=true</value>
    <description>JDBC connect string for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
    <description>Username to use against metastore database</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hive</value>
    <description>Password to use against metastore database</description>
  </property>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
    <description>HDFS location of the Hive warehouse directory</description>
  </property>
  <property>
    <name>hive.exec.local.scratchdir</name>
    <value>/tmp/hive</value>
    <description>Local scratch space for Hive jobs</description>
  </property>
  <property>
    <name>hive.querylog.location</name>
    <value>/var/log/hive</value>
    <description>Location of Hive query logs</description>
  </property>
  <property>
    <name>hive.server2.enable.doAs</name>
    <value>false</value>
    <description>Run operations as the connecting user (impersonation); disabled here</description>
  </property>
  <property>
    <name>hive.server2.thrift.port</name>
    <value>10000</value>
    <description>TCP port for the HiveServer2 Thrift service</description>
  </property>
</configuration>
```
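The configuration above assumes a MySQL instance on localhost with a database named hive and a hive/hive account. If these do not exist yet, a minimal sketch for creating them (run as a MySQL administrator; use a stronger password for anything beyond a test setup) looks like this:
```
mysql -u root -p -e "
CREATE DATABASE IF NOT EXISTS hive;
CREATE USER 'hive'@'localhost' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'localhost';
FLUSH PRIVILEGES;
"
```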
4. Edit the Hive environment file hive-env.sh:
```
cd $HIVE_HOME/conf
cp hive-env.sh.template hive-env.sh
vi hive-env.sh
```
Add the following settings to hive-env.sh:
```
export HADOOP_HOME=/usr/local/hadoop
export HIVE_CONF_DIR=/usr/local/hive/conf
export HIVE_AUX_JARS_PATH=$HADOOP_HOME/share/hadoop/common/lib/mysql-connector-java-x.x.x.jar
```
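The HIVE_AUX_JARS_PATH line above assumes the MySQL JDBC driver jar already sits under Hadoop's common lib directory. If it does not, a common alternative is to copy the connector jar directly into Hive's lib directory so the metastore can load it (the jar name is a placeholder; use the version you actually downloaded):
```
cp mysql-connector-java-x.x.x.jar $HIVE_HOME/lib/
```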
5. Initialize the Hive metastore schema (this creates the metastore tables in the MySQL database configured above):
```
schematool -initSchema -dbType mysql
```
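Before starting the services in the next step, make sure the directories referenced in hive-site.xml exist and are writable: the warehouse directory lives in HDFS, while the scratch and query-log directories are local. A sketch, assuming your user is allowed to create them:
```
hdfs dfs -mkdir -p /user/hive/warehouse
hdfs dfs -chmod g+w /user/hive/warehouse
mkdir -p /tmp/hive
sudo mkdir -p /var/log/hive && sudo chown $USER /var/log/hive
```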
6. Start the Hive services:
```
hive --service metastore &      # Hive metastore service
hive --service hiveserver2 &    # HiveServer2, the Thrift/JDBC endpoint (port 10000 as configured)
```
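Both services run in the background; as a quick sanity check you can confirm they are up before connecting, for example:
```
jps    # the metastore and HiveServer2 each appear as a RunJar process
```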
Hive is now deployed. You can run `hive -e "show databases;"` to confirm that it is working.
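Since HiveServer2 listens on port 10000 (as configured above), you can also test it from a client with Beeline, for example (with the default NONE authentication, any username is accepted):
```
beeline -u jdbc:hive2://localhost:10000 -n hive -e "show databases;"
```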