[ WARN:0@0.552] global c:\b\abs_d8ltn27ay8\croot\opencv-suite_1676452046667\work\opencv_contrib-4.6.0\modules\xfeatures2d\misc\python\shadow_sift.hpp (15) cv::xfeatures2d::SIFT_create DEPRECATED: cv.xfeatures2d.SIFT_create() is deprecated due SIFT tranfer to the main repository. https://github.com/opencv/opencv/issues/16736
E:\anaconda\envs\pytorch\lib\site-packages\scipy\optimize\_optimize.py:2417: RuntimeWarning: overflow encountered in scalar subtract
  tmp2 = (x - v) * (fx - fw)
E:\anaconda\envs\pytorch\lib\site-packages\scipy\optimize\_optimize.py:2416: RuntimeWarning: overflow encountered in scalar subtract
  tmp1 = (x - w) * (fx - fv)
E:\anaconda\envs\pytorch\lib\site-packages\scipy\optimize\_optimize.py:3386: RuntimeWarning: overflow encountered in scalar multiply
  t -= delta*temp*temp
E:\anaconda\envs\pytorch\lib\site-packages\scipy\optimize\_optimize.py:2878: RuntimeWarning: overflow encountered in scalar subtract
  tmp2 = (xb - xc) * (fb - fa)
E:\anaconda\envs\pytorch\lib\site-packages\scipy\optimize\_optimize.py:3384: RuntimeWarning: overflow encountered in scalar multiply
  t *= temp*temp
The warning and the runtime warnings come from two different problems. The deprecation warning appears because SIFT has been moved from the `xfeatures2d` contrib module into the main OpenCV repository, so `cv2.SIFT_create()` is the recommended replacement for `cv2.xfeatures2d.SIFT_create()`. If you are running OpenCV 4.4 or later, calling `cv2.SIFT_create()` directly makes the warning go away.
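If the script also has to run on older OpenCV builds where SIFT still lives only in the contrib module, a small fallback keeps it working either way. This is just a minimal sketch; the attribute check is one convenient way to detect the newer API:

```python
import cv2

# Prefer the main-module SIFT (OpenCV >= 4.4); fall back to the contrib
# module on older builds where SIFT still lives in xfeatures2d.
if hasattr(cv2, "SIFT_create"):
    sift = cv2.SIFT_create()
else:
    sift = cv2.xfeatures2d.SIFT_create()
```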
As for the runtime warnings, they are most likely caused by numerical overflow inside the optimization routine. You can try a different optimizer, such as BFGS or L-BFGS-B, through `scipy.optimize.minimize`. Example code:
```python
import cv2
import numpy as np
from scipy.optimize import minimize

# Load the two images to be stitched
image1 = cv2.imread('image1.jpg')
image2 = cv2.imread('image2.jpg')

# Convert to grayscale for feature detection
gray1 = cv2.cvtColor(image1, cv2.COLOR_BGR2GRAY)
gray2 = cv2.cvtColor(image2, cv2.COLOR_BGR2GRAY)

# Create the SIFT detector (main module, OpenCV >= 4.4)
sift = cv2.SIFT_create()

# Detect keypoints and compute descriptors
kp1, des1 = sift.detectAndCompute(gray1, None)
kp2, des2 = sift.detectAndCompute(gray2, None)

# Match descriptors with FLANN (the KD-tree index id is 1; 0 selects the linear index)
FLANN_INDEX_KDTREE = 1
index_params = dict(algorithm=FLANN_INDEX_KDTREE, trees=5)
search_params = dict(checks=50)
flann = cv2.FlannBasedMatcher(index_params, search_params)
matches = flann.knnMatch(des1, des2, k=2)

# Keep only good matches (Lowe's ratio test)
good_matches = []
for m, n in matches:
    if m.distance < 0.7 * n.distance:
        good_matches.append(m)

# Coordinates of the matched keypoints
points1 = np.float32([kp1[m.queryIdx].pt for m in good_matches]).reshape(-1, 1, 2)
points2 = np.float32([kp2[m.trainIdx].pt for m in good_matches]).reshape(-1, 1, 2)

# Photometric loss: sum of absolute differences between the warped image2 and image1.
# Cast to float32 before subtracting so uint8 values cannot wrap around.
def loss_function(params):
    H = np.array(params, dtype=np.float64).reshape((3, 3))
    transformed = cv2.warpPerspective(image2, H, (image1.shape[1], image1.shape[0]))
    residual = np.sum(np.abs(transformed.astype(np.float32) - image1.astype(np.float32)))
    return residual

# Start from the identity homography; an all-zero matrix is singular and
# gives warpPerspective nothing useful to refine from.
initial_params = np.eye(3).ravel()

# Optimize with L-BFGS-B
res = minimize(loss_function, initial_params, method='L-BFGS-B')

# Recover the homography matrix from the optimized parameters
H = np.array(res.x).reshape((3, 3))

# Warp image2 into image1's frame and overlay image1
result = cv2.warpPerspective(image2, H, (image1.shape[1], image1.shape[0]))
result[0:image1.shape[0], 0:image1.shape[1]] = image1

# Show the stitched result
cv2.imshow('Result', result)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
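A side note on the cast inside `loss_function`: `uint8` image arrays wrap around modulo 256 when subtracted, so the difference has to be taken on a wider type. A minimal sketch of the effect:

```python
import numpy as np

a = np.array([10], dtype=np.uint8)
b = np.array([200], dtype=np.uint8)

print(a - b)                                    # [66]   -- wraps around modulo 256
print(a.astype(np.int32) - b.astype(np.int32))  # [-190] -- the intended signed difference
```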
If the overflow still occurs, try adjusting the input images, for example by scaling them down before running feature detection and the optimization.
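As a rough sketch (the 0.5 scale factor is only an example), the images can be downscaled with `cv2.resize` before the rest of the pipeline runs:

```python
import cv2

image1 = cv2.imread('image1.jpg')
image2 = cv2.imread('image2.jpg')

# Downscale both images by an example factor of 0.5 to shrink the residuals
# fed to the optimizer (and to speed up SIFT as a side effect).
scale = 0.5
image1 = cv2.resize(image1, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)
image2 = cv2.resize(image2, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)
```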