Algorithm 1: The online LyDROO algorithm for solving (P1).

```
Input : parameters V, {γ_i, c_i}_{i=1}^N, K, training interval δ_T, M_t update interval δ_M
Output: control actions {x_t, y_t}_{t=1}^K

1   Initialize the DNN with random parameters θ_1 and an empty replay memory; M_1 ← 2N;
2   Set the initial data queues Q_i(1) = 0 and energy queues Y_i(1) = 0, for i = 1, ..., N;
3   for t = 1, 2, ..., K do
4       Observe the input ξ_t = {h_t, Q_i(t), Y_i(t)}_{i=1}^N and update M_t using (8) if mod(t, δ_M) = 0;
5       Generate a relaxed offloading action x̂_t = Π_{θ_t}(ξ_t) with the DNN;
6       Quantize x̂_t into M_t binary actions {x_t^i | i = 1, ..., M_t} using the NOP method;
7       Compute G(x_t^i, ξ_t) by optimizing the resource allocation y_t^i in (P2) for each x_t^i;
8       Select the best solution x_t = arg max_{x_t^i} G(x_t^i, ξ_t) and execute the joint action (x_t, y_t);
9       Update the replay memory by adding (ξ_t, x_t);
10      if mod(t, δ_T) = 0 then
11          Uniformly sample a batch of data {(ξ_τ, x_τ) | τ ∈ S_t} from the memory;
12          Train the DNN with {(ξ_τ, x_τ) | τ ∈ S_t} and update θ_t using the Adam algorithm;
13      end
14      t ← t + 1;
15      Update {Q_i(t), Y_i(t)}_{i=1}^N based on (x_{t−1}, y_{t−1}) and the data arrival observations {A_i^{t−1}}_{i=1}^N using (5) and (7);
16  end
```

With the above actor-critic-update loop, the DNN consistently learns from the best and most recent state-action pairs, leading to a better policy π_{θ_t} that gradually approximates the optimal mapping to solve (P3). We summarize the pseudo-code of LyDROO in Algorithm 1, where the major computational complexity lies in line 7, which computes G(x_t^i, ξ_t) by solving the optimal resource allocation problem. This in fact indicates that the proposed LyDROO algorithm can be extended to solve (P1) with a general non-decreasing concave utility U(r_i^t) in the objective, because the per-frame resource allocation problem that computes G(x_t^i, ξ_t) is a convex problem which can be solved efficiently; the detailed analysis is omitted here. In the next subsection, we propose a low-complexity algorithm to obtain G(x_t^i, ξ_t).

B. Low-complexity Algorithm for Optimal Resource Allocation

Given the value of x_t in (P2), we denote the index set of users with x_i^t = 1 as M_1^t, and the complementary user set as M_0^t. For simplicity of exposition, we drop the superscript t and express the optimal resource allocation problem that computes G(x_t, ξ_t) as

(P4): maximize_{τ, f, e_O, r_O}   Σ_{j ∈ M_0} ( a_j f_j / φ − Y_j(t) κ f_j^3 ) + Σ_{i ∈ M_1} ( a_i r_{i,O} − Y_i(t) e_{i,O} )    (28a)

Question: where in the above do the model-based DRL algorithm, the optimization-free DRL algorithm, and DNN deep learning each show up?
Model-based DRL algorithms and optimization-free DRL algorithms are two different types of algorithms; they approach a problem with different ideas and methods.

A model-based DRL algorithm, such as LyDROO, builds a model to predict future states and rewards and then makes decisions based on that model. This kind of algorithm needs less real data, but spends more computational resources on training the model.

An optimization-free DRL algorithm, in contrast, learns directly from experience and improves its policy through repeated trial and error. It does not need to build a model, but requires more real data for training.

DNN deep learning is a technique widely used inside all kinds of DRL algorithms; it learns and represents complex input data through a multi-layer neural network. In the LyDROO algorithm, the DNN is used to generate the relaxed offloading decision, and also to learn the mapping between the observed states and the actions collected in the replay memory.
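To make the division of labour concrete, below is a minimal Python sketch (not the paper's code) of one LyDROO time frame; the actor network, the NOP quantizer, and the resource-allocation solver are replaced by simple illustrative stand-ins, so only the overall structure of lines 4-9 of Algorithm 1 is reflected:

```
import numpy as np

rng = np.random.default_rng(0)
N, M = 4, 6                                # illustrative sizes: N users, M quantized candidates

def dnn_actor(xi, W):
    # Deep-learning part (line 5): a one-layer stand-in "DNN" mapping the state to a relaxed action in (0, 1)^N.
    return 1.0 / (1.0 + np.exp(-(W @ xi)))

def quantize(x_hat, m):
    # Stand-in for the NOP quantizer (line 6): threshold the relaxed action at m different levels.
    return [(x_hat > thr).astype(int) for thr in np.linspace(0.1, 0.9, m)]

def resource_value(x, xi):
    # Model-based part (line 7): in LyDROO this is the optimal value of the convex problem (P2)/(P4);
    # here a simple placeholder score stands in for G(x, xi).
    return float(x @ xi[:N]) - 0.1 * float(x.sum())

W = rng.normal(size=(N, 3 * N))            # random actor weights (no training in this sketch)
xi = rng.random(3 * N)                     # state ξ_t: channels, data queues, energy queues (line 4)
x_hat = dnn_actor(xi, W)                   # model-free DRL: the DNN proposes a relaxed offloading action
candidates = quantize(x_hat, M)            # M binary offloading decisions
values = [resource_value(x, xi) for x in candidates]
x_best = candidates[int(np.argmax(values))]   # line 8: keep the best candidate
replay_memory = [(xi, x_best)]                # line 9: used later to re-train the DNN every δ_T frames
print("best offloading decision:", x_best)
```

In LyDROO proper, the DNN only proposes the relaxed action; the binary decision that is actually executed and the resource allocation y_t come from the per-candidate convex optimization, which is the model-based optimization part of the algorithm.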
Related questions
For Algorithm 1 above (the online LyDROO algorithm for solving (P1)), what are the algorithm's inputs and outputs?
Input: the parameters V, {γ_i, c_i}_{i=1}^N, K, the training interval δ_T, and the M_t update interval δ_M.

Output: the control actions {x_t, y_t}_{t=1}^K.

Here V is a control parameter of the algorithm, {γ_i, c_i} are each user i's importance weight and task workload, K is the number of time frames (iterations), δ_T is the training interval, and δ_M is the update interval of M_t. The output after the K iterations is the set of control actions {x_t, y_t}_{t=1}^K.
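Seen purely as an interface, Algorithm 1 therefore has the rough shape below (a hypothetical Python skeleton with the per-frame logic of lines 4-15 elided; names and types are illustrative only):

```
from typing import Dict, List, Tuple

def lydroo(V: float, gamma: List[float], c: List[float],
           K: int, delta_T: int, delta_M: int) -> List[Tuple[List[int], Dict]]:
    """Inputs: V, {gamma_i, c_i}, K, delta_T, delta_M; output: the control actions {(x_t, y_t)} for t = 1..K."""
    actions = []
    for t in range(1, K + 1):
        x_t = [0] * len(gamma)                  # placeholder offloading decision (line 8 of Algorithm 1)
        y_t = {"tau": [], "f": [], "e_O": []}   # placeholder resource allocation (line 7 of Algorithm 1)
        actions.append((x_t, y_t))
    return actions

controls = lydroo(V=20.0, gamma=[1.0, 1.0], c=[100.0, 100.0], K=10, delta_T=5, delta_M=5)
print(len(controls), "control actions")
```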
algorithm identifier in public key not recognised:1.2.156.10197.1.301
This error usually occurs when parsing an SM2 public key with the Bouncy Castle library, because Bouncy Castle does not recognise the SM2 algorithm identifier by default.
To fix this, you can register the Bouncy Castle provider and then parse the key through it, for example:
```
// Register the Bouncy Castle provider so the SM2 OID (1.2.156.10197.1.301) can be resolved.
Security.addProvider(new BouncyCastleProvider());

// Parse the key through the BC provider; keyBytes is the Base64-decoded body of the PEM file.
KeyFactory keyFactory = KeyFactory.getInstance("EC", "BC");
PublicKey publicKey = keyFactory.generatePublic(new X509EncodedKeySpec(keyBytes));
```
The provider must be registered before the SM2 PublicKey object is created. This tells the Bouncy Castle library how to resolve the SM2 algorithm identifier and avoids the "algorithm identifier in public key not recognised" error.
In addition, make sure that your public-key PEM file actually carries the SM2 algorithm identifier, for example:
```
-----BEGIN PUBLIC KEY-----
MIICITCCAYoCCQD2yTo1T8ZQvzANBgkqhkiG9w0BAQsFADBFMQswCQYDVQQGEwJB
....
-----END PUBLIC KEY-----
```
Here, `-----BEGIN PUBLIC KEY-----` and `-----END PUBLIC KEY-----` are the start and end markers of the PEM file, and the Base64 text between them (truncated above) is the public-key data stored in ASN.1 DER encoding. If the PEM file does not correctly contain the SM2 algorithm identifier, the "algorithm identifier in public key not recognised" error can also occur.