Deadlock Prevention: Monitor-Based Liveness-Enforcing Controller Design for S3PR with ξ-Resources

"基于ξ资源的S3PR基于监视器的生命增强监控器的综合" 这篇研究论文专注于解决在灵活制造系统(FMSs)中的死锁问题,死锁是这类系统中一个非常不希望出现的现象。作者Dan You、Shouguang Wang(IEEE资深会员)和Mengchu Zhou(IEEE院士)提出了一种通过添加监视器来实现死锁预防的策略,该策略适用于一类名为α-S3PR(State-Successor State Place/Transition Petri Net with Resources)并带有ξ资源的系统。 首先,他们介绍了一种算法,用于通过ξ资源来简化S3PR。ξ资源在α-S3PR中被划分为两种类型:A-ξ资源和B-ξ资源。对于只有B-ξ资源的α-S3PR,论文证明了一个最大允许的、能确保活力(liveness)的监督器可以通过控制所有已清空的严格最小输油管(siphons)来设计。如果α-S3PR包含A-ξ资源,那么可以迭代地通过A-ξ资源减少网络,并添加相应的监视器来设计一个活力增强的监督器。 最后,论文提出了一个针对α-S3PR的死锁预防算法。为了进一步验证提出的理论,文中给出了两个FMS实例进行分析和应用。这些实例有助于直观展示如何应用所提方法来避免系统的死锁状态,从而提高系统的稳定性和效率。 通过引入监视器,该研究不仅解决了死锁问题,还增强了系统的监控能力,使得系统管理者可以实时了解系统状态并及时采取预防措施。这在实际的制造环境中具有重要的应用价值,因为死锁可能导致生产流程停滞,造成资源浪费和效率降低。通过这种方法,可以提高系统的灵活性和可靠性,确保制造过程的连续性和高效性。

Algorithm 1: The online LyDROO algorithm for solving (P1).

input : parameters V, {γ_i, c_i}, i = 1, ..., N; time horizon K; training interval δ_T; M_t update interval δ_M
output: control actions {(x_t, y_t)}, t = 1, ..., K

1.  Initialize the DNN with random parameters θ_1 and an empty replay memory; set M_1 ← 2N.
2.  Set the initial data queues Q_i(1) = 0 and energy queues Y_i(1) = 0, for i = 1, ..., N.
3.  for t = 1, 2, ..., K do
4.      Observe the input ξ_t = {h_t, Q_i(t), Y_i(t)}_{i=1}^N and update M_t using (8) if mod(t, δ_M) = 0.
5.      Generate a relaxed offloading action x̂_t = Π_{θ_t}(ξ_t) with the DNN.
6.      Quantize x̂_t into M_t binary actions {x_t^i | i = 1, ..., M_t} using the NOP method.
7.      Compute G(x_t^i, ξ_t) by optimizing the resource allocation y_t^i in (P2) for each x_t^i.
8.      Select the best solution x_t = arg max_{x_t^i} G(x_t^i, ξ_t) and execute the joint action (x_t, y_t).
9.      Update the replay memory by adding (ξ_t, x_t).
10.     if mod(t, δ_T) = 0 then
11.         Uniformly sample a batch {(ξ_τ, x_τ) | τ ∈ S_t} from the memory.
12.         Train the DNN on the batch and update θ_t using the Adam algorithm.
13.     end
14.     t ← t + 1.
15.     Update {Q_i(t), Y_i(t)}_{i=1}^N based on (x_{t−1}, y_{t−1}) and the observed data arrivals {A_i(t−1)}_{i=1}^N using (5) and (7).
16. end

With this actor-critic update loop, the DNN continually learns from the best and most recent state-action pairs, producing a policy π_{θ_t} that gradually approximates the optimal mapping for solving (P3). We summarize the pseudo-code of LyDROO in Algorithm 1, whose dominant computational cost is line 7, which computes G(x_t^i, ξ_t) by solving the optimal resource allocation problems. This in fact indicates that LyDROO can be extended to solve (P1) with a general non-decreasing concave utility U(r_t^i) in the objective, because the per-frame resource allocation problem that computes G(x_t^i, ξ_t) remains a convex problem that can be solved efficiently; the detailed analysis is omitted. In the next subsection, we propose a low-complexity algorithm to obtain G(x_t^i, ξ_t).

B. Low-complexity Algorithm for Optimal Resource Allocation

Given the value of x_t in (P2), we denote the index set of users with x_t^i = 1 by M_1^t and the complementary user set by M_0^t. For simplicity of exposition, we drop the superscript t and express the optimal resource allocation problem that computes G(x_t, ξ_t) as

(P4): $$\underset{\boldsymbol{\tau},\,\mathbf{f},\,\mathbf{e}^O,\,\mathbf{r}^O}{\text{maximize}} \quad \sum_{j\in\mathcal{M}_0}\left(a_j f_j/\phi - Y_j(t)\,\kappa f_j^3\right) + \sum_{i\in\mathcal{M}_1}\left(a_i r_{i,O} - Y_i(t)\,e_{i,O}\right) \tag{28a}$$

Reader question: in this algorithm, where do the model-based DRL part, the optimization-free DRL part, and the DNN deep learning each show up?
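To make Algorithm 1 concrete, here is a minimal Python (NumPy/PyTorch) sketch of the online loop. It is a sketch under stated assumptions, not the authors' implementation: the environment wrapper env and its methods observe(), resource_alloc(x, xi) (standing in for solving the convex per-frame problem (P2)), and step(x, y) (standing in for the queue updates (5) and (7)) are hypothetical; the quantizer is a simplified noisy-rounding substitute for the paper's NOP method; and the adaptive update of M_t via (8) is omitted, with M fixed at 2N.

    import numpy as np
    import torch
    import torch.nn as nn

    class Actor(nn.Module):
        """DNN policy Pi_theta: state xi_t -> relaxed offloading action in [0,1]^N."""
        def __init__(self, state_dim, n_users, hidden=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(state_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, n_users), nn.Sigmoid(),
            )

        def forward(self, xi):
            return self.net(xi)

    def noisy_quantize(x_hat, M, rng, tries=200):
        """Simplified stand-in for the NOP quantizer: the rounded action plus
        up to M-1 distinct noisy-rounded variants of the relaxed action."""
        cands = {tuple((x_hat > 0.5).astype(int))}
        for _ in range(tries):
            if len(cands) >= M:
                break
            noisy = x_hat + rng.normal(0.0, 0.2, size=x_hat.shape)
            cands.add(tuple((noisy > 0.5).astype(int)))
        return [np.array(c) for c in cands]

    def lydroo(env, state_dim, n_users, K, delta_T=10, batch=32, mem_cap=1024, seed=0):
        """Online LyDROO loop (sketch). env is a hypothetical wrapper:
        observe() -> state vector xi_t; resource_alloc(x, xi) -> (G, y), the
        optimal value and allocation of (P2) for a binary action x;
        step(x, y) executes the joint action and advances the queues."""
        rng = np.random.default_rng(seed)
        actor = Actor(state_dim, n_users)
        opt = torch.optim.Adam(actor.parameters(), lr=1e-3)
        bce = nn.BCELoss()
        memory = []                      # replay memory of (xi, best action) pairs
        M = 2 * n_users                  # M_1 = 2N; adaptive M_t via (8) omitted
        for t in range(1, K + 1):
            xi = env.observe()           # xi_t = (h_t, Q_i(t), Y_i(t))
            with torch.no_grad():
                x_hat = actor(torch.as_tensor(xi, dtype=torch.float32)).numpy()
            candidates = noisy_quantize(x_hat, M, rng)
            # evaluate each candidate by solving (P2); keep the best joint action
            scored = [(env.resource_alloc(x, xi), x) for x in candidates]
            (_, y_best), x_best = max(scored, key=lambda s: s[0][0])
            env.step(x_best, y_best)     # execute (x_t, y_t); queues evolve by (5), (7)
            memory.append((xi, x_best))
            memory = memory[-mem_cap:]   # keep only the most recent pairs
            if t % delta_T == 0 and len(memory) >= batch:
                idx = rng.choice(len(memory), size=batch, replace=False)
                xis = torch.as_tensor(np.array([memory[i][0] for i in idx]), dtype=torch.float32)
                xs = torch.as_tensor(np.array([memory[i][1] for i in idx]), dtype=torch.float32)
                opt.zero_grad()
                loss = bce(actor(xis), xs)  # regress toward the best past actions
                loss.backward()
                opt.step()
        return actor

Note the design choice: no value function is learned. The per-frame convex solver scores the candidate actions (lines 7-8 of Algorithm 1), and the DNN is updated by plain supervised regression toward the winners (line 12), which is why the training step reduces to a binary cross-entropy loss.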

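As a pointer to why the per-frame problem is cheap, the local-computing half of (28a) decouples across the users in M_0: each term a_j f_j/φ − Y_j(t)κf_j^3 is concave in f_j, and setting the derivative a_j/φ − 3Y_j(t)κf_j^2 to zero gives f_j = sqrt(a_j/(3φκY_j(t))). A one-function sketch, where the frequency cap f_max and the Y_j(t) = 0 handling are illustrative assumptions rather than the paper's exact constraint set:

    import math

    def local_cpu_freq(a_j, Y_j, phi, kappa, f_max):
        """Maximizer of the concave term a_j*f/phi - Y_j*kappa*f**3 over
        0 <= f <= f_max: the stationary point sqrt(a_j/(3*phi*kappa*Y_j)),
        clipped to the cap; if Y_j = 0 the term is increasing, so f_max wins."""
        if Y_j <= 0:
            return f_max
        return min(math.sqrt(a_j / (3.0 * phi * kappa * Y_j)), f_max)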