What does this mean?

"We optimize the loss (3) or (6) by projected gradient descent with line search (subject to the observation above). The projection consists of imposing ∑_k Y_k = 0, which we enforce by centering ∇_Y before taking a step. This eliminates the degeneracy of the loss in (3) and (6) with respect to a constant shift in Y. To further improve the trade-off between time per iteration and number of iterations, we found that a heavy-ball method with parameter α is effective. At each iteration, computing the gradient is O((S + s³)n), where S is the number of nonzero entries of L."
Posted: 2024-04-28 19:20:47 · Views: 165
This passage describes an optimization algorithm. The loss function (3) or (6) is minimized by projected gradient descent with line search, where the projection enforces the constraint ∑_k Y_k = 0. This is done by centering ∇_Y (subtracting its column means) before each step, which removes the degeneracy of the losses in (3) and (6) with respect to a constant shift in Y: adding the same constant to every row of Y leaves the loss unchanged, so without the constraint the minimizer is not unique. To further improve the trade-off between time per iteration and number of iterations, the authors found a heavy-ball (momentum) method with parameter α to be effective. Computing the gradient at each iteration costs O((S + s³)n), where S is the number of nonzero entries of L.
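The scheme can be illustrated with a minimal sketch. The loss, step size, and test problem below are assumptions for illustration (a quadratic tr(YᵀLY) with a toy graph Laplacian, and a fixed step in place of the line search the passage mentions); only the structure — center the gradient to enforce ∑_k Y_k = 0, then take a heavy-ball step with parameter α — comes from the passage.

```python
import numpy as np

def heavy_ball_pgd(grad, Y0, alpha=0.9, step=0.01, iters=200):
    """Projected gradient descent with heavy-ball momentum.

    The projection enforces sum_k Y_k = 0 by centering the iterate
    once at the start and centering the gradient before every step,
    which removes the loss's invariance to a constant shift in Y.
    A fixed step size stands in for the line search here.
    """
    Y = Y0 - Y0.mean(axis=0)           # project the start into the feasible set
    V = np.zeros_like(Y)               # heavy-ball momentum buffer
    for _ in range(iters):
        G = grad(Y)
        G -= G.mean(axis=0)            # center the gradient (the projection)
        V = alpha * V - step * G       # heavy-ball update with parameter alpha
        Y = Y + V
    return Y

# Toy problem (assumed, not from the source): minimize tr(Y^T L Y) for a
# small graph Laplacian L, whose gradient is 2 L Y. With L sparse, this
# matvec is where the S (nonzeros of L) term in the per-iteration cost
# comes from.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A         # unnormalized graph Laplacian
Y = heavy_ball_pgd(lambda Y: 2 * L @ Y, np.arange(8.0).reshape(4, 2))
print(np.allclose(Y.mean(axis=0), 0.0))   # the iterate stays centered
```

Note that for a Laplacian, 1ᵀL = 0, so the gradient 2LY is already centered; the explicit centering matters for the general losses (3) and (6) and for keeping the iterate on the constraint set under floating-point drift.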