How to track detection results with DeepSORT, with a code implementation
Posted: 2024-06-10 07:10:55
To track detection results with DeepSORT, follow these steps:
1. Install the DeepSORT library
Install command: `pip install deepsort`. Note that several DeepSORT distributions exist under different package names (many tutorials instead clone the original `deep_sort` repository); the imports below follow the module layout used in this answer, so adjust the import paths to match the package you actually install.
2. Load a detector
First, load a detector, for example YOLOv3:
```python
from deepsort import nn_matching
from deepsort.detection import Detection
from deepsort.tracker import Tracker
from deepsort import generate_detections
import numpy as np
import cv2

# Load the YOLOv3 detector (config and weights from the official YOLO release)
net = cv2.dnn.readNetFromDarknet('yolov3.cfg', 'yolov3.weights')
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)
```
3. Initialize the tracker
Next, define the tracker's parameters and initialize it (the parameters must be defined before the metric that uses them):
```python
# Tracker parameters
max_cosine_distance = 0.5   # appearance-matching threshold
nn_budget = None            # max stored appearance features per track (None = unlimited)
nms_max_overlap = 1.0       # non-max-suppression overlap threshold for detections

# Initialize the tracker with a cosine appearance metric
metric = nn_matching.NearestNeighborDistanceMetric("cosine", max_cosine_distance, nn_budget)
tracker = Tracker(metric)
```
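To build intuition for what the cosine metric above measures, here is a minimal, illustrative sketch of cosine distance between two appearance feature vectors. The library computes this internally over each track's stored features; this standalone function exists only to show the idea:

```python
import math

def cosine_distance(a, b):
    """Cosine distance = 1 - cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (norm_a * norm_b)

# Identical appearance features give distance 0; orthogonal ones give 1.
# A detection whose distance to a track exceeds max_cosine_distance is not matched to it.
```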
4. Process each frame
For each frame, detect pedestrians and update the tracker:
```python
# Appearance-feature encoder (the re-identification model distributed with DeepSORT);
# the model filename may differ depending on where you obtained the weights
encoder = generate_detections.create_box_encoder('mars-small128.pb', batch_size=32)

cap = cv2.VideoCapture('input.mp4')
frame_width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
frame_height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

while True:
    # Read one frame
    ret, frame = cap.read()
    if not ret:
        break

    # Detect pedestrians with YOLOv3
    boxes, confidences = [], []
    blob = cv2.dnn.blobFromImage(frame, 1/255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outs = net.forward(get_output_layers(net))
    for out in outs:
        for detection in out:
            scores = detection[5:]
            class_id = np.argmax(scores)
            confidence = scores[class_id]
            if confidence > 0.5 and class_id == 0:  # class 0 = person in COCO
                center_x = int(detection[0] * frame_width)
                center_y = int(detection[1] * frame_height)
                width = int(detection[2] * frame_width)
                height = int(detection[3] * frame_height)
                left = int(center_x - width / 2)
                top = int(center_y - height / 2)
                boxes.append([left, top, width, height])
                confidences.append(float(confidence))

    # Compute appearance features and build Detection objects
    features = encoder(frame, boxes)
    detections = [Detection(np.array(box), conf, feature)
                  for box, conf, feature in zip(boxes, confidences, features)]

    # Track the pedestrians
    tracker.predict()
    tracker.update(detections)

    # Draw the tracking results
    for track in tracker.tracks:
        if not track.is_confirmed() or track.time_since_update > 1:
            continue
        bbox = track.to_tlbr()
        cv2.rectangle(frame, (int(bbox[0]), int(bbox[1])),
                      (int(bbox[2]), int(bbox[3])), (0, 255, 0), 2)
        cv2.putText(frame, "ID: {}".format(track.track_id),
                    (int(bbox[0]), int(bbox[1] - 15)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)

    # Show the result
    cv2.imshow("Frame", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```
Here, `cap` is the video capture object, `frame_width` and `frame_height` are the frame dimensions, `get_output_layers(net)` returns the names of the YOLOv3 output layers, and `to_tlbr()` converts a track's bounding box from top-left/width/height format to top-left and bottom-right corner format.
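Since `get_output_layers` is referenced but not defined above, here is a common implementation using OpenCV's DNN API; the branching over index formats is an assumption to cover both older OpenCV versions (nested single-element lists) and newer ones (flat arrays):

```python
def get_output_layers(net):
    """Return the names of the unconnected (output) layers of a cv2.dnn network."""
    layer_names = net.getLayerNames()
    out_idx = net.getUnconnectedOutLayers()
    # OpenCV returns 1-based indices; newer versions give a flat array,
    # older ones a list of single-element lists
    flat = [int(i[0]) if hasattr(i, '__len__') else int(i) for i in out_idx]
    return [layer_names[i - 1] for i in flat]
```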
That is the code implementation for tracking detection results with DeepSORT.
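For reference, the box-format conversion that `to_tlbr()` performs can be sketched as a plain function (a hypothetical standalone helper for illustration, not part of the library):

```python
def tlwh_to_tlbr(box):
    """Convert (top-left x, top-left y, width, height) to corner format (x1, y1, x2, y2)."""
    x, y, w, h = box
    return (x, y, x + w, y + h)
```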