union in PySpark
Date: 2023-11-24 14:07:41
In PySpark, the union method appends the rows of one DataFrame to another. It matches columns by position, not by column name. Typical usage:
```
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

df1 = spark.createDataFrame([[1, 2, 3]], ["col0", "col1", "col2"])
df2 = spark.createDataFrame([[4, 5, 6]], ["col1", "col2", "col0"])

# Rows are appended by position, so df2's first column (col1) lands under df1's col0
df1.union(df2).show()
```
This code appends df2's rows to df1 and shows the combined result. Because matching is positional and the two DataFrames declare their columns in different orders, df2's values end up under the wrong column names here. Also note that, unlike SQL's UNION, PySpark's union does not deduplicate: it keeps all rows, behaving like SQL's UNION ALL, and unionAll is just a deprecated alias for union since Spark 2.0. To remove duplicate rows, call distinct() on the result; to merge by column name rather than by position, use unionByName.
#### References
- *1* *2* [pyspark union, unionAll, unionByName usage differences](https://blog.csdn.net/h18208975507/article/details/124883466)
- *3* [The difference between union and union all in databases](https://download.csdn.net/download/qq_43797491/87853822)