How to configure Hive on Spark
Configuring Hive on Spark involves the following steps:
1. Make sure Spark and Hive are both installed and working correctly (note that Hive on Spark requires a compatible pairing of Hive and Spark versions).
2. Add the following property to Hive's configuration file `hive-site.xml`:
```
<property>
    <name>hive.execution.engine</name>
    <value>spark</value>
</property>
```
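Alternatively, Spark properties for Hive on Spark can be set directly in `hive-site.xml` instead of `spark-defaults.conf`; Hive passes properties such as `spark.master` on to the Spark session it launches. A minimal sketch, assuming you run Spark on YARN (use the value for your own cluster manager otherwise):
```
<!-- Optional: Spark properties may also live in hive-site.xml -->
<property>
    <name>spark.master</name>
    <value>yarn</value>
</property>
```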
3. Add the following settings to Spark's configuration file `spark-defaults.conf`:
```
spark.master=yarn
spark.eventLog.enabled=true
spark.eventLog.dir=hdfs://mycluster/spark/history
spark.executor.memory=2g
spark.driver.memory=2g
spark.driver.extraLibraryPath=/home/local/hadoop/lib/native
spark.executor.extraLibraryPath=/home/local/hadoop/lib/native
```
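Note that `spark.eventLog.dir` must point to a directory that already exists, otherwise Spark jobs will fail to start when event logging is enabled. A quick way to create it, assuming the `hdfs://mycluster` nameservice used in the example above:
```
# Create the Spark event-log/history directory referenced by spark.eventLog.dir
hdfs dfs -mkdir -p /spark/history
```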
4. Upload Spark's jar files to a directory on HDFS, for example `/spark-jars`:
```
hdfs dfs -mkdir /spark-jars
hdfs dfs -put /opt/spark/jars/* /spark-jars
```
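Uploading the jars by itself is not enough: Spark (and the Spark session Hive starts) also needs to know where they are, so they are not re-uploaded for every job. On YARN this is typically done with `spark.yarn.jars`, which can be set in `spark-defaults.conf` or in `hive-site.xml`. A sketch, assuming the same nameservice and directory as above:
```
spark.yarn.jars=hdfs://mycluster/spark-jars/*
```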
5. Restart the Hive service so that the new configuration takes effect.
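To confirm the setup works, you can open the Hive CLI or Beeline, make sure the execution engine is Spark, and run a query that triggers a job; it should appear as a Spark application in the YARN ResourceManager UI. In this sketch, `test_table` is a placeholder for any existing table in your warehouse:
```
hive> set hive.execution.engine=spark;
hive> select count(*) from test_table;   -- test_table is a placeholder
```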
With the steps above, Hive on Spark should be configured successfully. Note that this is only a basic example configuration; you may need to adjust it to your specific environment and requirements. References: [1][2][3]
#### References
- *1* [Configuring hive on spark and spark on hive](https://blog.csdn.net/young_0609/article/details/114900238)
- *2* [Configuring Hive on Spark](https://blog.csdn.net/zhy0414/article/details/126885386)
- *3* [Configuring hive on spark](https://blog.csdn.net/weixin_56581189/article/details/129555523)