Grey Wolf Optimizer for Finding the Minimum of a Function
### Finding the Minimum of a Function with the Grey Wolf Optimizer
The Grey Wolf Optimizer (GWO) is a population-based metaheuristic optimization algorithm inspired by the social hierarchy and hunting behaviour of grey wolves. It tackles complex optimization problems by mimicking the leadership structure and cooperative hunting mechanism of a wolf pack.
#### Algorithm Principle
In GWO, position updates are guided by the three leading wolves that represent the best solutions found so far (α, β, and δ); all remaining pack members adjust their positions according to these three leaders and thereby converge step by step toward the global optimum[^1]:
- α is the best solution in the current population;
- β is the second-best solution;
- δ is the third-best;
All other individuals are called ω wolves; in every iteration they update their coordinates in the search space based on the information provided by the three leaders, as sketched below.
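As an illustration of the ranking step, here is a minimal sketch that picks α, β, and δ as the three lowest-fitness wolves; the helper `rank_wolves` and the toy fitness values are illustrative assumptions, not part of the original article (a full implementation follows further down).
```python
import numpy as np

def rank_wolves(positions, fitness):
    """Hypothetical helper: return the alpha, beta and delta wolves,
    i.e. the three positions with the lowest fitness (minimization)."""
    order = np.argsort(fitness)               # indices sorted from best to worst
    alpha, beta, delta = positions[order[:3]]
    return alpha, beta, delta

# Toy example: five wolves in two dimensions with made-up fitness values.
positions = np.array([[1.0, 2.0], [0.1, -0.2], [3.0, 1.0], [-0.5, 0.4], [2.2, 2.2]])
fitness = np.array([5.0, 0.05, 10.0, 0.41, 9.7])
alpha, beta, delta = rank_wolves(positions, fitness)
print(alpha, beta, delta)  # -> [ 0.1 -0.2] [-0.5  0.4] [1. 2.]
```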
#### Mathematical Model
Suppose there are n wolves in the search space. The position of wolf i can be written as the vector \( X_i=(x_{i1}, x_{i2}, \dots, x_{id}) \), where the dimension d equals the number of parameters to be optimized. For a given objective function f(x), the minimization proceeds as follows.
The prey-encircling behaviour is defined by the following pair of equations:
\[ D=\left | C\cdot X_p(t)-X_i(t)\right |\tag{1} \]
\[ X_{i}(t+1)=X_p(t)-A\cdot D\tag{2}\]
Here \( A \) and \( C \) are coefficient vectors built from random numbers, given by:
\[ A=2a\cdot r_1-a,\quad C=2r_2\]
where \( a \in [0,2] \) is a coefficient that decreases linearly over the iterations, and \( r_1, r_2 \) are drawn from the uniform distribution U(0,1).
As the iteration count grows, \( a \) gradually shrinks toward zero, so the search radius keeps narrowing and the pack eventually concentrates around the best solution found.
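Equations (1) and (2) describe encircling a single prey position \( X_p \). Since the true optimum is unknown, the standard GWO formulation (which the code below follows) replaces the prey with the three leaders and averages their guidance for every wolf \( X_i \):
\[ D_\alpha=\left | C_1\cdot X_\alpha-X_i\right |,\quad D_\beta=\left | C_2\cdot X_\beta-X_i\right |,\quad D_\delta=\left | C_3\cdot X_\delta-X_i\right |\tag{3} \]
\[ X_1=X_\alpha-A_1\cdot D_\alpha,\quad X_2=X_\beta-A_2\cdot D_\beta,\quad X_3=X_\delta-A_3\cdot D_\delta\tag{4} \]
\[ X_i(t+1)=\frac{X_1+X_2+X_3}{3}\tag{5} \]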
#### Python Implementation Example
The following simple Python implementation finds the minimum of a specific test function over a two-dimensional search space:
```python
import numpy as np
import matplotlib.pyplot as plt


def objective_function(X):
    """Sphere test function: f(x) = sum(x_i^2), global minimum 0 at the origin."""
    return sum(x ** 2 for x in X)


class GrayWolfOptimizer:
    def __init__(self, population_size, dimensions, bounds, max_iter):
        self.population_size = population_size
        self.dimensions = dimensions
        self.bounds = bounds
        self.max_iter = max_iter
        # Initialize wolves' positions randomly within the search space.
        self.positions = np.random.uniform(low=bounds[0], high=bounds[1],
                                           size=(population_size, dimensions))

    def optimize(self):
        # Leaders (alpha, beta, delta) and their fitness values.
        alpha_pos = beta_pos = delta_pos = None
        alpha_score = beta_score = delta_score = np.inf
        best_scores = []

        for iteration in range(self.max_iter):
            # Evaluate the whole pack and find the indices of the three best wolves.
            fitness_values = np.array([objective_function(pos) for pos in self.positions])
            sorted_indices = np.argsort(fitness_values)[:3]

            # Update the leaders only when the new candidates are better.
            if fitness_values[sorted_indices[0]] < alpha_score:
                alpha_score = fitness_values[sorted_indices[0]]
                alpha_pos = self.positions[sorted_indices[0]].copy()
            if fitness_values[sorted_indices[1]] < beta_score:
                beta_score = fitness_values[sorted_indices[1]]
                beta_pos = self.positions[sorted_indices[1]].copy()
            if fitness_values[sorted_indices[2]] < delta_score:
                delta_score = fitness_values[sorted_indices[2]]
                delta_pos = self.positions[sorted_indices[2]].copy()
            best_scores.append(alpha_score)

            # Coefficient a decreases linearly from 2 to 0 over the iterations.
            a = 2 - iteration * (2 / self.max_iter)

            new_positions = []
            for current_position in self.positions:
                # Move toward alpha.
                r1, r2 = np.random.rand(), np.random.rand()
                A1, C1 = 2 * a * r1 - a, 2 * r2
                D_alpha = abs(C1 * alpha_pos - current_position)
                X1 = alpha_pos - A1 * D_alpha

                # Move toward beta.
                r1, r2 = np.random.rand(), np.random.rand()
                A2, C2 = 2 * a * r1 - a, 2 * r2
                D_beta = abs(C2 * beta_pos - current_position)
                X2 = beta_pos - A2 * D_beta

                # Move toward delta.
                r1, r2 = np.random.rand(), np.random.rand()
                A3, C3 = 2 * a * r1 - a, 2 * r2
                D_delta = abs(C3 * delta_pos - current_position)
                X3 = delta_pos - A3 * D_delta

                # Average the three candidate moves (equation (5)).
                updated_position = (X1 + X2 + X3) / 3
                # Ensure that all components stay inside the boundary conditions.
                new_positions.append(np.clip(updated_position,
                                             self.bounds[0], self.bounds[1]))
            self.positions = np.array(new_positions)

        return {'best_solution': alpha_pos, 'fitness_history': best_scores}


if __name__ == '__main__':
    gwo_instance = GrayWolfOptimizer(population_size=30, dimensions=2,
                                     bounds=[-5.12, 5.12], max_iter=1000)
    result = gwo_instance.optimize()
    print('Best solution found:', result['best_solution'])
    print('Objective value at this point:', objective_function(result['best_solution']))

    plt.plot(range(len(result['fitness_history'])), result['fitness_history'], 'b')
    plt.xlabel('Iteration Number')
    plt.ylabel('Fitness Value')
    plt.title('Convergence Curve of GWO Algorithm')
    plt.show()
```
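Since `objective_function` is the only problem-specific piece, the same optimizer can be pointed at other benchmarks. Below is a minimal sketch, assuming it is appended to the same script as the class above; the Rastrigin function is my illustrative choice, not part of the original article. Because `optimize` looks up the module-level `objective_function` at call time, rebinding that name changes the problem being solved.
```python
def rastrigin(X):
    """Rastrigin test function: highly multimodal, global minimum 0 at the origin."""
    return 10 * len(X) + sum(x ** 2 - 10 * np.cos(2 * np.pi * x) for x in X)

# GrayWolfOptimizer.optimize() calls the module-level objective_function directly,
# so rebinding that name (in the same module) swaps the optimization target.
objective_function = rastrigin

gwo = GrayWolfOptimizer(population_size=30, dimensions=2,
                        bounds=[-5.12, 5.12], max_iter=1000)
print('Rastrigin best solution:', gwo.optimize()['best_solution'])
```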
This program implements a basic version of the Grey Wolf Optimizer and applies it to the sphere test function. After a sufficient number of generations, the algorithm locates a point close to the theoretical global minimum.