Looking for Python code that uses a trust-region algorithm with a local quadratic approximation to maximize, over b (a p-dimensional vector), the series sum for i = 1..n of x_i [y_i - e^(b·x_i) / (1 + e^(b·x_i))], without calling an off-the-shelf optimization package.
Posted: 2023-12-03 11:46:52 · Views: 32
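Before the code, it helps to pin down what is being maximized: the series in the question is exactly the score (gradient) of the logistic-regression log-likelihood, so the natural reading of the problem is the logistic maximum-likelihood estimate. A sketch of that identification:

```latex
L(b) = \sum_{i=1}^{n} \Big[ y_i\, b^\top x_i - \log\!\big(1 + e^{b^\top x_i}\big) \Big],
\qquad
\nabla L(b) = \sum_{i=1}^{n} x_i \left( y_i - \frac{e^{b^\top x_i}}{1 + e^{b^\top x_i}} \right),
```

```latex
\nabla^2 L(b) = -\sum_{i=1}^{n} \sigma_i (1 - \sigma_i)\, x_i x_i^\top \preceq 0,
\qquad \sigma_i = \frac{e^{b^\top x_i}}{1 + e^{b^\top x_i}}.
```

Since the Hessian is negative semidefinite, L is concave, and a trust-region Newton method converges to the global maximum.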
Below is a self-contained trust-region Newton implementation with a local quadratic model (dogleg subproblem solver) for this problem, using only NumPy and no off-the-shelf optimization package. Reading the series as the score equation of logistic regression, the code maximizes the logistic log-likelihood (equivalently, minimizes its negative):
```python
import numpy as np

def sigmoid(z):
    """Numerically stable logistic function."""
    out = np.empty_like(z, dtype=float)
    pos = z >= 0
    out[pos] = 1.0 / (1.0 + np.exp(-z[pos]))
    ez = np.exp(z[~pos])
    out[~pos] = ez / (1.0 + ez)
    return out

def neg_log_lik(b, X, y):
    """Negative logistic log-likelihood (the function we minimize)."""
    z = X @ b
    return np.sum(np.logaddexp(0.0, z) - y * z)

def grad_f(b, X, y):
    """Gradient of the negative log-likelihood: X^T (sigma(Xb) - y)."""
    return X.T @ (sigmoid(X @ b) - y)

def hess_f(b, X, y):
    """Hessian X^T W X with W = diag(sigma(1 - sigma)); positive semidefinite."""
    s = sigmoid(X @ b)
    w = s * (1.0 - s)
    return (X * w[:, None]).T @ X

def dogleg_step(g, H, radius):
    """Approximately minimize the local quadratic model g^T s + s^T H s / 2
    subject to ||s|| <= radius, via the classical dogleg path."""
    try:
        pB = -np.linalg.solve(H, g)           # full Newton step
        if np.linalg.norm(pB) <= radius:
            return pB
    except np.linalg.LinAlgError:
        pB = None                             # singular Hessian: no Newton step
    gHg = g @ H @ g
    if gHg <= 0:                              # model not convex along -g
        return -(radius / np.linalg.norm(g)) * g
    pU = -(g @ g) / gHg * g                   # unconstrained Cauchy point
    if pB is None or np.linalg.norm(pU) >= radius:
        return -(radius / np.linalg.norm(g)) * g
    # walk from pU toward pB until the trust-region boundary is hit
    d = pB - pU
    a, bq, c = d @ d, 2.0 * (pU @ d), pU @ pU - radius ** 2
    tau = (-bq + np.sqrt(bq * bq - 4.0 * a * c)) / (2.0 * a)
    return pU + tau * d

def tron(X, y, b0, eps=1e-8, radius=1.0, max_iter=100):
    """Trust-region Newton method for the logistic MLE."""
    b = np.asarray(b0, dtype=float).copy()
    fval = neg_log_lik(b, X, y)
    for _ in range(max_iter):
        g = grad_f(b, X, y)
        if np.linalg.norm(g) <= eps:
            break
        H = hess_f(b, X, y)
        s = dogleg_step(g, H, radius)
        pred = -(g @ s + 0.5 * s @ H @ s)      # decrease predicted by the model
        act = fval - neg_log_lik(b + s, X, y)  # actual decrease
        rho = act / pred if pred > 0 else -1.0
        if rho < 0.25:                         # poor model fit: shrink the region
            radius = max(0.25 * radius, 1e-10)
        elif rho > 0.75 and np.linalg.norm(s) >= 0.99 * radius:
            radius = min(2.0 * radius, 1e10)   # good fit on the boundary: grow it
        if rho > 0.1:                          # accept the step
            b = b + s
            fval = neg_log_lik(b, X, y)
    return b

if __name__ == '__main__':
    rng = np.random.default_rng(0)
    n, p = 200, 5
    X = rng.standard_normal((n, p))            # n x p design matrix
    b_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
    y = (rng.random(n) < sigmoid(X @ b_true)).astype(float)  # 0/1 labels
    b_hat = tron(X, y, np.zeros(p))
    print("maximizer b:", b_hat)
    print("maximum of the log-likelihood:", -neg_log_lik(b_hat, X, y))
```
Here `tron()` implements the main trust-region Newton loop: at each iterate it builds the local quadratic model of the objective, solves the trust-region subproblem with `dogleg_step()`, and grows or shrinks the radius according to the ratio of actual to predicted decrease, accepting the step only when that ratio is large enough. `neg_log_lik()`, `grad_f()`, and `hess_f()` evaluate the negative log-likelihood, its gradient, and its Hessian. The main block fits the model on a small synthetic data set with an n×p design matrix and 0/1 labels, then prints the maximizer and the attained maximum.
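As a quick sanity check (not part of the original answer), the analytic gradient used above can be compared against a central finite difference; `neg_log_lik` and `grad_f` are redefined here so the snippet runs on its own:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neg_log_lik(b, X, y):
    z = X @ b
    return np.sum(np.logaddexp(0.0, z) - y * z)

def grad_f(b, X, y):
    return X.T @ (sigmoid(X @ b) - y)

rng = np.random.default_rng(1)
X = rng.standard_normal((20, 3))
y = (rng.random(20) < 0.5).astype(float)
b = rng.standard_normal(3)

# central finite-difference approximation, one coordinate at a time
h = 1e-6
fd = np.array([
    (neg_log_lik(b + h * e, X, y) - neg_log_lik(b - h * e, X, y)) / (2.0 * h)
    for e in np.eye(3)
])
err = np.max(np.abs(fd - grad_f(b, X, y)))
print("max gradient error:", err)  # should be tiny (finite-difference error)
```

If the analytic gradient were wrong, the discrepancy would be of order 1 rather than of order h².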