```sql
SELECT
    COALESCE(storage_in.in_date, storage_out.out_date) AS date,
    COALESCE(storage_in.product_name, storage_out.product_name) AS product_name,
    IFNULL(storage_in.quantity, 0) AS `IN`,
    IFNULL(storage_out.quantity, 0) AS `OUT`,
    SUM(IFNULL(storage_in.quantity, 0) - IFNULL(storage_out.quantity, 0)) OVER (
        PARTITION BY COALESCE(storage_in.in_date, storage_out.out_date),
                     COALESCE(storage_in.product_name, storage_out.product_name)
        ORDER BY COALESCE(storage_in.in_date, storage_out.out_date)
    ) AS `END`
FROM storage_in
FULL OUTER JOIN storage_out
    ON storage_in.product_name = storage_out.product_name
    AND storage_in.in_date = storage_out.out_date
ORDER BY 1, 2;
```
This SQL query combines inbound and outbound records from two tables to report each product's stock movement per date. COALESCE picks the first non-NULL of the paired date and product-name columns so that rows present in only one table still produce values, and IFNULL substitutes 0 for missing quantities so the arithmetic stays well defined. The FULL OUTER JOIN on product name and date keeps rows that exist in either table, and the SUM(...) OVER window computes the net quantity (inbound minus outbound) for each product on each date, aliased `END`. Note that `IN`, `OUT`, and `END` are reserved words, so they must be quoted with backticks when used as column aliases.
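One caveat: MySQL does not support FULL OUTER JOIN. If this query targets MySQL, the join has to be emulated, commonly with a LEFT JOIN unioned with the right-side-only rows. A minimal sketch against the same storage_in/storage_out tables (window sum omitted for brevity):

```sql
-- Emulating FULL OUTER JOIN in MySQL: LEFT JOIN plus the rows
-- that exist only in storage_out (anti-join on the right side).
SELECT COALESCE(i.in_date, o.out_date)          AS date,
       COALESCE(i.product_name, o.product_name) AS product_name,
       IFNULL(i.quantity, 0)                    AS `IN`,
       IFNULL(o.quantity, 0)                    AS `OUT`
FROM storage_in i
LEFT JOIN storage_out o
       ON i.product_name = o.product_name AND i.in_date = o.out_date
UNION ALL
SELECT o.out_date, o.product_name, 0, o.quantity
FROM storage_out o
LEFT JOIN storage_in i
       ON i.product_name = o.product_name AND i.in_date = o.out_date
WHERE i.product_name IS NULL   -- keep only right-side-only rows
ORDER BY 1, 2;
```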
Related questions
```sql
delimiter //
CREATE PROCEDURE area_pro2 (
    IN x INT,
    OUT alarm_count INT,
    OUT over_spped INT,
    OUT over_spped_10 INT,
    OUT over_spped_20 INT,
    OUT over_spped_50 INT,
    OUT untreated INT,
    OUT processed INT
)
BEGIN
    WITH cte AS (
        SELECT organization_id
        FROM `user`
        WHERE organization_id = x
           OR father LIKE CONCAT(COALESCE((SELECT father FROM `user` WHERE organization_id = x), ""), x, "-%")
          AND is_del = 0
    )
    SELECT
        count(location_alarm.id) INTO alarm_count,
        COUNT(CASE WHEN location_alarm.max_speed > location_alarm.speed_limit * 10 THEN 0 END),
        count(CASE WHEN location_alarm.max_speed / location_alarm.speed_limit > 11 THEN 0 END) INTO over_spped_10,
        count(CASE WHEN location_alarm.max_speed / location_alarm.speed_limit > 12 THEN 0 END) INTO over_spped_20,
        count(CASE WHEN location_alarm.max_speed / location_alarm.speed_limit > 15 THEN 0 END) INTO over_spped_50,
        count(CASE WHEN location_alarm.alarm_process_id IS NULL THEN 0 END) INTO untreated,
        count(location_alarm.alarm_process_id) INTO processed
    FROM cte
    INNER JOIN car ON car.organization_id = cte.organization_id AND car.is_del = 0
    INNER JOIN location_alarm ON car.id = location_alarm.car_id;
END //
```

Why does my stored procedure throw an error?
The error is in the SELECT ... INTO syntax. MySQL allows only one INTO clause per SELECT: you list all the result expressions first, then a single INTO naming the target variables in the same order. Your query attaches INTO to individual expressions, and the second COUNT has no target variable at all, so MySQL reports a syntax error. (The DELIMITER handling itself is fine, since you already switch it to // before CREATE PROCEDURE.)
Here is the corrected code:
```sql
DELIMITER //

CREATE PROCEDURE area_pro2 (
    IN  x             INT,
    OUT alarm_count   INT,
    OUT over_spped    INT,
    OUT over_spped_10 INT,
    OUT over_spped_20 INT,
    OUT over_spped_50 INT,
    OUT untreated     INT,
    OUT processed     INT
)
BEGIN
    WITH cte AS (
        SELECT organization_id
        FROM `user`
        WHERE organization_id = x
           OR (father LIKE CONCAT(
                   COALESCE((SELECT father FROM `user` WHERE organization_id = x), ''),
                   x, '-%')
               AND is_del = 0)
    )
    -- All aggregates are listed first, then ONE INTO clause naming
    -- the target variables in the same order.
    SELECT
        COUNT(location_alarm.id),
        COUNT(CASE WHEN location_alarm.max_speed > location_alarm.speed_limit * 10 THEN 0 END),
        COUNT(CASE WHEN location_alarm.max_speed / location_alarm.speed_limit > 11 THEN 0 END),
        COUNT(CASE WHEN location_alarm.max_speed / location_alarm.speed_limit > 12 THEN 0 END),
        COUNT(CASE WHEN location_alarm.max_speed / location_alarm.speed_limit > 15 THEN 0 END),
        COUNT(CASE WHEN location_alarm.alarm_process_id IS NULL THEN 0 END),
        COUNT(location_alarm.alarm_process_id)
    INTO alarm_count, over_spped, over_spped_10, over_spped_20,
         over_spped_50, untreated, processed
    FROM cte
    INNER JOIN car ON car.organization_id = cte.organization_id AND car.is_del = 0
    INNER JOIN location_alarm ON car.id = location_alarm.car_id;
END //

DELIMITER ;
```
Here, DELIMITER // switches the statement delimiter so the semicolons inside the procedure body do not end the CREATE PROCEDURE statement early, END // closes the definition, and DELIMITER ; restores the default. The key fix, though, is in the SELECT: the seven aggregates are listed first, followed by a single INTO clause naming the seven OUT parameters in matching order, which is the only form MySQL accepts.
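A quick way to test the procedure from the mysql client, assuming an organization with id 1 exists, is to pass session variables for the OUT parameters and then select them:

```sql
-- Call the procedure; the @-variables receive the OUT parameters.
CALL area_pro2(1, @alarm_count, @over_spped, @over_spped_10,
               @over_spped_20, @over_spped_50, @untreated, @processed);

-- Inspect the results.
SELECT @alarm_count, @over_spped, @over_spped_10,
       @over_spped_20, @over_spped_50, @untreated, @processed;
```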
```scala
val userrdd = sc.textFile("C:\\Users\\20430\\Desktop\\spark课设\\users.dat")
  .map(_.split("::"))
  .map(x => (x(0), (x(1), x(2), x(3)))) // user ID -> (gender, age, occupation)

val rdd5 = rdd4.join(userrdd)
// rdd5.collect().foreach(println)

val data: RDD[sex] = rdd5.map(x => sex(x._1, x._2._1._1, x._2._1._2, x._2._2._3, x._2._2._2, x._2._2._1))

import spark.implicits._
val df = data.toDF()               // create the DataFrame
df.createOrReplaceTempView("sex2") // register a temporary view

val resultDF: DataFrame = spark.sql(
  "select userid,movieid,labe,rating,zhiye,sex from sex2 lateral view explode(split(schema,'\\\\|')) tmp as labe")
resultDF.coalesce(1).write.csv("C:\\Users\\20430\\Desktop\\spark课设\\输出")
resultDF.show()
```
This code joins user, movie, and rating information and writes the result to a CSV file. The first statement reads users.dat as text via `sc.textFile()`, splits each line on "::" so that element 0 is the user ID, element 1 the gender, element 2 the age, and element 3 the occupation, and then maps each record into a pair RDD of the form (userId, (userGender, userAge, userOccupation)).
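As a tiny illustration of that parsing step, here is how one users.dat line (MovieLens-style format assumed, with a trailing zip code the program ignores) splits, shown through Spark SQL's split for demonstration:

```sql
-- spark-sql> one users.dat record split into its fields
SELECT split('1::F::1::10::48067', '::') AS fields;
-- ["1","F","1","10","48067"]  -> the code uses elements 0-3:
--  userId, gender, age, occupation
```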
The second statement combines the rating data with the user data: `rdd4.join(userrdd)` joins the two RDDs on their shared user-ID key, producing rdd5. Here rdd4 is the already-processed movie rating and genre data, with each element a tuple of the form (userId, (movieId, rating)), and userrdd is the user data prepared above, with each element of the form (userId, (userGender, userAge, userOccupation)).
The third statement reshapes each element of rdd5: it pulls the parts out of the nested tuples and builds an instance of the case class sex (defined elsewhere in the program), whose fields correspond to user ID, movie ID, rating, occupation, gender, and age. The result is a new RDD, data, whose elements are sex objects.
The fourth statement converts the RDD into a DataFrame and registers it as a temporary view named sex2.
The fifth statement runs a Spark SQL query that flattens the sex2 view into a new DataFrame, resultDF, whose rows have the shape (userId, movieId, label, rating, occupation, gender). The label column (labe) holds a single movie genre: `lateral view explode(split(schema, '\\|'))` splits the pipe-delimited schema column and expands it into one row per genre. The result is then written out as a single CSV file via `coalesce(1)`.
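To see what explode(split(...)) does in isolation, a minimal spark-sql sketch (sample genre string assumed):

```sql
-- spark-sql> a pipe-delimited genre string becomes one row per genre;
-- '\\|' escapes the pipe, which is otherwise a regex alternation operator
SELECT explode(split('Action|Comedy|Drama', '\\|')) AS labe;
-- Action
-- Comedy
-- Drama
```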
The final line displays resultDF as a console table via `show()`.