YOLO vehicle video object detection: Python code
Below is a code example that uses Python and YOLOv3 to perform vehicle object detection on a video:
```python
import cv2
import numpy as np

# Load the YOLOv3 model
net = cv2.dnn.readNet('path/to/yolov3.weights', 'path/to/yolov3.cfg')

# Set up the class list and a colour for each class
classes = []
with open('path/to/coco.names', 'r') as f:
    classes = [line.strip() for line in f.readlines()]
colors = np.random.uniform(0, 255, size=(len(classes), 3))

# Open the input video
cap = cv2.VideoCapture('path/to/input/video.mp4')

# Detect vehicles and annotate each frame
while True:
    # Read one frame from the video stream
    ret, frame = cap.read()
    # Exit the loop when the video ends
    if not ret:
        break

    # Get the dimensions of the input frame
    height, width, _ = frame.shape

    # Build a blob and feed it to the YOLOv3 network
    blob = cv2.dnn.blobFromImage(frame, 1/255, (416, 416), (0, 0, 0), True, crop=False)
    net.setInput(blob)
    outs = net.forward(net.getUnconnectedOutLayersNames())

    # Initialize lists for the detection results
    class_ids = []
    confidences = []
    boxes = []

    # Parse the YOLOv3 outputs
    for out in outs:
        for detection in out:
            scores = detection[5:]
            class_id = np.argmax(scores)
            confidence = scores[class_id]
            if confidence > 0.5:
                # Convert normalized box coordinates to pixel values
                center_x = int(detection[0] * width)
                center_y = int(detection[1] * height)
                w = int(detection[2] * width)
                h = int(detection[3] * height)
                x = int(center_x - w / 2)
                y = int(center_y - h / 2)
                boxes.append([x, y, w, h])
                confidences.append(float(confidence))
                class_ids.append(class_id)

    # Apply non-maximum suppression and draw the remaining boxes
    indices = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.4)
    # flatten() handles both the old (N, 1) and the new (N,) return shapes of NMSBoxes
    for i in np.array(indices).flatten():
        x, y, w, h = boxes[i]
        color = colors[class_ids[i]].tolist()
        cv2.rectangle(frame, (x, y), (x + w, y + h), color, 2)
        cv2.putText(frame, classes[class_ids[i]], (x, y - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, color, 2)

    # Show the annotated frame
    cv2.imshow('Car Detection', frame)
    # Press q to quit
    if cv2.waitKey(25) & 0xFF == ord('q'):
        break

# Release the video stream and close the windows
cap.release()
cv2.destroyAllWindows()
```
In the code above, we first load the YOLOv3 model and open the input video with OpenCV's VideoCapture. Inside the loop we read each frame, convert it to a blob, and feed it to the network. The network outputs are then parsed, detections below a 0.5 confidence threshold are discarded, and non-maximum suppression removes overlapping bounding boxes. Finally, a rectangle and class label are drawn around each remaining detection and the annotated frame is shown in a window; pressing the q key exits the loop, after which the video stream is released and the windows are closed.
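Since the goal is vehicle detection specifically, it may be worth keeping only the COCO classes that correspond to vehicles instead of drawing every detected object. Below is a minimal sketch of such a filter; the label names are an assumption based on darknet's coco.names (some versions of the file use 'motorcycle' instead of 'motorbike'), so check them against your own file:
```python
# Assumed vehicle labels from darknet's coco.names; adjust to match your file
VEHICLE_LABELS = {'car', 'motorbike', 'bus', 'truck'}

def is_vehicle(class_id, classes, vehicle_labels=VEHICLE_LABELS):
    """Return True if the detected class corresponds to a vehicle."""
    return classes[class_id] in vehicle_labels
```
In the output-parsing loop, adding `if not is_vehicle(class_id, classes): continue` right after `class_id` is computed ensures that only vehicle detections reach non-maximum suppression and drawing.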
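If you also want to save the annotated result to disk rather than only displaying it, each processed frame can be written out with cv2.VideoWriter. The sketch below makes some assumptions not present in the original code, namely the output path 'output.mp4' and the 'mp4v' codec; adapt both to your environment:
```python
import cv2

cap = cv2.VideoCapture('path/to/input/video.mp4')

# Query the input stream's properties; fall back to 25 FPS if it is unavailable
fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
frame_w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
frame_h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

# 'output.mp4' and the 'mp4v' codec are assumptions, not part of the original code
fourcc = cv2.VideoWriter_fourcc(*'mp4v')
writer = cv2.VideoWriter('output.mp4', fourcc, fps, (frame_w, frame_h))

# In the detection loop, after the boxes have been drawn on `frame`:
#     writer.write(frame)

# After the loop, alongside cap.release():
#     writer.release()
```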