The first 5 rows of user_id_mapping and sku_id_mapping are as follows: 0:0 0:89 1:1 1:2 1:3
Date: 2024-05-15 11:11:47
user_id_mapping and sku_id_mapping are two data tables: the former maps user IDs and the latter maps SKU (product) IDs. From the first 5 rows provided, each row consists of two numbers separated by a colon ":", where the left side is the original user or SKU ID and the right side is the corresponding mapped ID.
Accordingly, the first 5 rows can be read as follows:
First 5 rows of user_id_mapping:
- The user with original ID 0 maps to ID 0
- The user with original ID 0 maps to ID 89
- The user with original ID 1 maps to ID 1
- The user with original ID 1 maps to ID 2
- The user with original ID 1 maps to ID 3
(Note: since the same original ID appears with several mapped IDs, these rows may in fact be (user_id, sku_id) pairs rather than a one-to-one mapping.)
First 5 rows of sku_id_mapping:
- The product with original SKU ID 0 maps to ID 0
- The product with original SKU ID 1 maps to ID 1
- The product with original SKU ID 2 maps to ID 2
- The product with original SKU ID 3 maps to ID 3
- The product with original SKU ID 4 maps to ID 4
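Read this way, each line of such a file can be parsed into an (left, right) integer pair. A minimal Python sketch (the `parse_pairs` helper and the exact file layout are assumptions; a list of pairs is used rather than a dict so that repeated left-hand IDs, as in the sample above, are preserved):

```python
def parse_pairs(lines):
    """Parse ':'-separated mapping lines such as '0:89' into (left, right) int pairs."""
    return [tuple(int(x) for x in line.strip().split(":")) for line in lines]

# First three items from the sample data above
print(parse_pairs(["0:0", "0:89", "1:1"]))  # [(0, 0), (0, 89), (1, 1)]
```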
Related questions
spark-shell: 5. Based on the relevant tables in Hive's dwd database, or the order-related tables (order_detail, order_info, sku_info) in the MySQL database shtd_store, deduplicate the products each user has purchased and convert the result into the following format: the first column is the user id mapping (mapping operation: e.g. for user ids 1, 4, 7, 8, 9, the mapping produces a dictionary where key 0 corresponds to user id 1, key 1 to user id 4, and so on), and the second column is the mapping of the product ids the user has purchased. Sort ascending by user_id and sku_id, and save the data to Hive.
Understood. For your question, here is Spark SQL code to implement the requirement:
```sql
-- Read the order-related tables from the dwd database in Hive
CREATE TEMPORARY VIEW dwd_order_detail
AS
SELECT * FROM dwd.order_detail;
CREATE TEMPORARY VIEW dwd_order_info
AS
SELECT * FROM dwd.order_info;
CREATE TEMPORARY VIEW dwd_sku_info
AS
SELECT * FROM dwd.sku_info;
-- Read the order-related tables from the shtd_store database in MySQL.
-- The JDBC url, user, and password below are placeholders; adjust them to your environment.
CREATE TEMPORARY VIEW mysql_order_detail
USING jdbc
OPTIONS (url 'jdbc:mysql://localhost:3306/shtd_store', dbtable 'order_detail', user 'root', password '123456');
CREATE TEMPORARY VIEW mysql_order_info
USING jdbc
OPTIONS (url 'jdbc:mysql://localhost:3306/shtd_store', dbtable 'order_info', user 'root', password '123456');
CREATE TEMPORARY VIEW mysql_sku_info
USING jdbc
OPTIONS (url 'jdbc:mysql://localhost:3306/shtd_store', dbtable 'sku_info', user 'root', password '123456');
-- Union the Hive and MySQL order-related tables
CREATE TEMPORARY VIEW order_detail_union
AS
SELECT * FROM dwd_order_detail
UNION ALL
SELECT * FROM mysql_order_detail;
CREATE TEMPORARY VIEW order_info_union
AS
SELECT * FROM dwd_order_info
UNION ALL
SELECT * FROM mysql_order_info;
CREATE TEMPORARY VIEW sku_info_union
AS
SELECT * FROM dwd_sku_info
UNION ALL
SELECT * FROM mysql_sku_info;
-- Deduplicate the products each user has purchased
CREATE TEMPORARY VIEW user_sku_distinct
AS
SELECT user_id, sku_id
FROM (
SELECT ROW_NUMBER() OVER (PARTITION BY user_id, sku_id ORDER BY order_detail_union.order_id) AS rn,
user_id,
sku_id
FROM order_detail_union
JOIN order_info_union ON order_detail_union.order_id = order_info_union.order_id
WHERE order_info_union.order_status = 'paid'
) t
WHERE rn = 1;
-- Apply the mapping operation to user_id and sku_id (dense_rank() - 1 yields contiguous 0-based ids)
CREATE TEMPORARY VIEW user_sku_mapping
AS
SELECT dense_rank() OVER (ORDER BY user_id) - 1 AS user_id_mapping,
dense_rank() OVER (ORDER BY sku_id) - 1 AS sku_id_mapping
FROM user_sku_distinct;
-- Sort ascending by user_id_mapping and sku_id_mapping
CREATE TEMPORARY VIEW user_sku_sorted
AS
SELECT user_id_mapping, sku_id_mapping
FROM user_sku_mapping
ORDER BY user_id_mapping, sku_id_mapping;
-- Save the result to Hive (adjust the database and table name to your environment)
INSERT OVERWRITE TABLE result.user_sku_sorted
SELECT user_id_mapping, sku_id_mapping
FROM user_sku_sorted;
```
Note that the code above is for reference only; in practice it may need to be adjusted for your table structures and data. You should also change the database, table, and column names involved to match your actual environment.
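The mapping operation described in the question (sort the distinct ids, then assign 0-based keys) is exactly what `dense_rank() OVER (ORDER BY id) - 1` computes in the SQL above. A minimal Python sketch of the same idea (the `build_mapping` helper name is an assumption):

```python
def build_mapping(ids):
    """Map 0-based keys to the distinct ids in ascending order,
    equivalent to dense_rank() OVER (ORDER BY id) - 1."""
    return {i: v for i, v in enumerate(sorted(set(ids)))}

# Example from the task description; duplicates are removed first
print(build_mapping([1, 4, 7, 8, 9, 4]))  # {0: 1, 1: 4, 2: 7, 3: 8, 4: 9}
```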
Spark-shell: Based on the relevant tables in Hive's dwd database, or the order-related tables (order_detail, order_info, sku_info) in the MySQL database shtd_store, deduplicate the products each user has purchased and convert the result into the following format: the first column is the user id mapping (mapping operation: e.g. for user ids 1, 4, 7, 8, 9, the mapping produces a dictionary where key 0 corresponds to user id 1, key 1 to user id 4, and so on), and the second column is the mapping of the product ids the user has purchased. Sort ascending by user_id and sku_id, and finally save the result to Hive.
Sure. Based on your requirement, here is a Spark SQL implementation:
```sql
-- Read the order-related tables from the dwd database in Hive
CREATE TEMPORARY VIEW dwd_order_detail
AS
SELECT * FROM dwd.order_detail;
CREATE TEMPORARY VIEW dwd_order_info
AS
SELECT * FROM dwd.order_info;
CREATE TEMPORARY VIEW dwd_sku_info
AS
SELECT * FROM dwd.sku_info;
-- Read the order-related tables from the shtd_store database in MySQL.
-- The JDBC url, user, and password below are placeholders; adjust them to your environment.
CREATE TEMPORARY VIEW mysql_order_detail
USING jdbc
OPTIONS (url 'jdbc:mysql://localhost:3306/shtd_store', dbtable 'order_detail', user 'root', password '123456');
CREATE TEMPORARY VIEW mysql_order_info
USING jdbc
OPTIONS (url 'jdbc:mysql://localhost:3306/shtd_store', dbtable 'order_info', user 'root', password '123456');
CREATE TEMPORARY VIEW mysql_sku_info
USING jdbc
OPTIONS (url 'jdbc:mysql://localhost:3306/shtd_store', dbtable 'sku_info', user 'root', password '123456');
-- Union the Hive and MySQL order-related tables
CREATE TEMPORARY VIEW order_detail_union
AS
SELECT * FROM dwd_order_detail
UNION ALL
SELECT * FROM mysql_order_detail;
CREATE TEMPORARY VIEW order_info_union
AS
SELECT * FROM dwd_order_info
UNION ALL
SELECT * FROM mysql_order_info;
CREATE TEMPORARY VIEW sku_info_union
AS
SELECT * FROM dwd_sku_info
UNION ALL
SELECT * FROM mysql_sku_info;
-- Deduplicate the products each user has purchased
CREATE TEMPORARY VIEW user_sku_distinct
AS
SELECT user_id, sku_id  -- rn = 1 already keeps one row per (user_id, sku_id), so DISTINCT is unnecessary
FROM (
SELECT order_detail_union.user_id, order_detail_union.sku_id,
ROW_NUMBER() OVER (PARTITION BY order_detail_union.user_id, order_detail_union.sku_id ORDER BY order_detail_union.order_id) AS rn
FROM order_detail_union
JOIN order_info_union ON order_detail_union.order_id = order_info_union.order_id
WHERE order_info_union.order_status = 'paid'
) t
WHERE rn = 1;
-- Apply the mapping operation to user_id and sku_id (dense_rank() - 1 yields contiguous 0-based ids)
CREATE TEMPORARY VIEW user_sku_mapping
AS
SELECT dense_rank() OVER (ORDER BY user_id) - 1 AS user_id_mapping,
dense_rank() OVER (ORDER BY sku_id) - 1 AS sku_id_mapping
FROM user_sku_distinct;
-- Sort ascending by user_id_mapping and sku_id_mapping
CREATE TEMPORARY VIEW user_sku_sorted
AS
SELECT user_id_mapping, sku_id_mapping
FROM user_sku_mapping
ORDER BY user_id_mapping, sku_id_mapping;
-- Save the result to Hive (adjust the database and table name to your environment).
-- sort_array keeps each user's sku list in ascending order, since collect_list
-- gives no ordering guarantee after GROUP BY.
INSERT OVERWRITE TABLE result.user_sku_mapping
SELECT user_id_mapping,
       sort_array(collect_list(sku_id_mapping)) AS sku_id_mapping_list
FROM user_sku_sorted
GROUP BY user_id_mapping;
```
Note that the database, table, and column names in the code above need to be changed to match your environment. Per your requirement, the final result is deduplicated, sorted ascending by user_id and sku_id, and saved to the `result.user_sku_mapping` table in Hive, with the user id mapping as the first column and the ascending list of that user's purchased sku id mappings as the second.
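The final `GROUP BY` plus `collect_list` step can be sketched in plain Python as grouping the sorted (user_id_mapping, sku_id_mapping) pairs into one ascending sku list per user (the `group_skus` helper name is an assumption):

```python
from collections import defaultdict

def group_skus(pairs):
    """Group (user_id_mapping, sku_id_mapping) pairs into one
    ascending sku list per user, like GROUP BY + sorted collect_list."""
    grouped = defaultdict(list)
    for user_m, sku_m in sorted(pairs):
        grouped[user_m].append(sku_m)
    return dict(grouped)

print(group_skus([(0, 2), (0, 0), (1, 1)]))  # {0: [0, 2], 1: [1]}
```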