StreamPhysicalOverAggregate doesn't support consuming update changes which is produced by node GroupAggregate(groupBy=[D], select=[D, SUM(Rpay) AS trade])
This error occurs because the plan contains a StreamPhysicalOverAggregate operator (an OVER window aggregation) whose input comes from a GroupAggregate node. A GroupAggregate continuously revises its result as new rows arrive for a group, so it emits an updating (retract) changelog stream. StreamPhysicalOverAggregate, however, only accepts insert-only (append) streams and cannot consume update changes, so the planner rejects the query.
To resolve this, restructure the query so that the OVER aggregation is not fed by an updating stream. For example, replace the OVER aggregation with a group aggregation (StreamPhysicalGroupAggregate), which does support updating inputs, or compute the required aggregates inside the GroupAggregate step and emit them as ordinary output columns, so that no OVER aggregation needs to consume the updated results.
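To make the shape concrete, here is a minimal Flink SQL sketch. The source table `orders` and its event-time attribute `ts` are assumptions for illustration; only `D`, `Rpay`, and the alias `trade` come from the error message, and the exact error text can vary with the query details.

```sql
-- Sketch of the rejected shape: the inner GROUP BY emits an updating
-- (retract) stream, which the outer OVER aggregation cannot consume.
SELECT D, trade,
       SUM(trade) OVER (PARTITION BY D ORDER BY ts) AS running_trade
FROM (
    SELECT D, SUM(Rpay) AS trade, MAX(ts) AS ts
    FROM orders
    GROUP BY D
);

-- One possible rewrite: window the aggregation so every result row is
-- final once emitted (append-only), which downstream operators accept.
SELECT D, window_start, window_end, SUM(Rpay) AS trade
FROM TABLE(TUMBLE(TABLE orders, DESCRIPTOR(ts), INTERVAL '1' MINUTES))
GROUP BY D, window_start, window_end;
```

Whether a windowed aggregation fits depends on the semantics you need; the essential point is that the input to an OVER aggregation must be insert-only.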
Related questions
Recall that to solve (P2) in the $t$-th time frame, we observe $\xi^t \triangleq \{h_i^t, Q_i(t), Y_i(t)\}_{i=1}^N$, consisting of the channel gains $\{h_i^t\}_{i=1}^N$ and the system queue states $\{Q_i(t), Y_i(t)\}_{i=1}^N$, and accordingly decide the control action $\{x^t, y^t\}$, including the binary offloading decision $x^t$ and the continuous resource allocation $y^t \triangleq \{\tau_i^t, f_i^t, e_{i,O}^t, r_{i,O}^t\}_{i=1}^N$. A close observation shows that although (P2) is a non-convex optimization problem, the resource-allocation subproblem of optimizing $y^t$ is in fact an "easy" convex problem once $x^t$ is fixed. In Section IV.B, we will propose a customized algorithm to efficiently obtain the optimal $y^t$ given $x^t$ in (P2). Here, we denote $G(x^t, \xi^t)$ as the optimal value of (P2) obtained by optimizing $y^t$ given the offloading decision $x^t$ and parameter $\xi^t$. Therefore, solving (P2) is equivalent to finding the optimal offloading decision $(x^t)^*$, where

$$\text{(P3)}: \quad (x^t)^* = \arg\max_{x^t \in \{0,1\}^N} G(x^t, \xi^t). \tag{20}$$

In general, obtaining $(x^t)^*$ requires enumerating $2^N$ offloading decisions, which leads to significantly high computational complexity even when $N$ is moderate (e.g., $N = 10$). Other search-based methods, such as branch-and-bound and block coordinate descent [29], are also time-consuming when $N$ is large. In practice, neither method is applicable to online decision-making under fast-varying channel conditions. Leveraging the DRL technique, we propose the LyDROO algorithm to construct a policy $\pi$ that maps the input $\xi^t$ to the optimal action $(x^t)^*$, i.e., $\pi : \xi^t \mapsto (x^t)^*$, with very low complexity, e.g., tens of milliseconds of computation time (the duration from observing $\xi^t$ to producing a control action $\{x^t, y^t\}$) when $N = 10$. Why is deep reinforcement learning used here?
This is an online decision problem: in each time frame, under fast-varying channel conditions, we must decide which tasks to execute locally and which to offload to the edge. Because the offloading decision is binary over $N$ devices, the search space grows as $2^N$, so enumeration and traditional search-based optimization become computationally prohibitive for real-time use even at moderate $N$. Deep reinforcement learning instead learns a policy that, given the current state $\xi^t$ (channel gains and queue states), directly outputs a near-optimal action, and through training the policy adapts to varying tasks and channel conditions. This allows optimized decisions to be produced within milliseconds in a fast-changing environment, which is essential for real-time applications such as edge computing.
```bash
#!/bin/bash
# All five arguments are required.
if [ $# -lt 5 ]; then
    echo "please specify variable bfe_log_dir mysqlHost mysqlPort mysqlDBName time"
    exit 1
fi

loadDataFile="./time_consuming.log"
logFilesDir="./log_files.txt"
fileDate=$(date -d "$5 minutes ago" '+%Y-%m-%d %H:%M:%S')
nowDate=$(date '+%Y-%m-%d %H:%M:%S')

if [ ! -f "$logFilesDir" ]; then
    touch "$logFilesDir"
    echo "create log_files success."
else
    echo "log_files ready"
fi

# Collect the log files modified within the time window.
find "$1" -type f -newermt "$fileDate" ! -newermt "$nowDate" -name 'access_2.log.*' > "$logFilesDir"

if [ ! -f "$loadDataFile" ]; then
    touch "$loadDataFile"
    echo "create load data file success."
else
    echo "load data file ready"
fi
: > "$loadDataFile"   # truncate without writing a blank line

# Read each BFE log file and extract the Link-trace-id and backendDur fields.
while read -r line; do
    echo "read bfe log file: $line"
    grep "type\[Req\]" "$line" \
        | awk -F"]" '{print $1,$12}' \
        | awk -F"[\[ ]" '{print $2","$5}' \
        | awk -F, '{if($1!="-")print}' >> "$loadDataFile"
done < "$logFilesDir"
echo "backend dur write file success."

# Bulk-load the collected fields into the database.
loadCommand="load data local infile '$loadDataFile' into table ai_backend_dur_log fields terminated BY ',' lines terminated BY '\n' (trace_id,backend_dur);"
mysql --local-infile=1 -u gzaipbc -pGzaipbc@123 -h "$2" -P "$3" "$4" -e "$loadCommand"
exit 0
```
This is a Bash script whose main job is to find log files in a given directory that fall within a time window, extract specific fields from them, and load those fields into a MySQL table. The flow is as follows:
1. Check that the required arguments were supplied; if not, print a usage hint and exit.
2. Create log_files.txt, which will hold the names of the log files within the time window.
3. Use find to locate files in the given directory whose modification time falls within the window and write their names to log_files.txt.
4. Create time_consuming.log, which will hold the extracted field values.
5. Read each log file listed in log_files.txt, use grep and awk to extract the Link-trace-id and backendDur fields, and append them to time_consuming.log.
6. Load the contents of time_consuming.log into the target MySQL table with a LOAD DATA statement (see the sketch after the note below).
Note that the positional parameters are, in order: bfe_log_dir (the directory to scan), mysqlHost (MySQL host), mysqlPort (MySQL port), mysqlDBName (MySQL database name), and time (the time window in minutes). Also, the MySQL user name and password are hardcoded in the script and need to be changed to match your environment.
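For step 6, here is a minimal sketch of the load path. Only the table name ai_backend_dur_log and the columns trace_id and backend_dur appear in the script, so the column types in this DDL are assumptions; local_infile must also be enabled on the MySQL server for LOAD DATA LOCAL to work.

```sql
-- Hypothetical DDL for the target table; only the table and column names
-- (ai_backend_dur_log, trace_id, backend_dur) come from the script itself.
CREATE TABLE IF NOT EXISTS ai_backend_dur_log (
  trace_id    VARCHAR(64) NOT NULL,
  backend_dur VARCHAR(32) NOT NULL
);

-- The statement the script issues via `mysql --local-infile=1 ... -e`:
LOAD DATA LOCAL INFILE './time_consuming.log'
INTO TABLE ai_backend_dur_log
FIELDS TERMINATED BY ','
LINES TERMINATED BY '\n'
(trace_id, backend_dur);
```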