Write code that runs YOLOv5 on a video stream and sends the detection results to the frontend over WebSocket as JSON. On the frontend, use JavaScript to decode the JSON data and draw the detected objects over the video stream, using Canvas or WebGL for rendering.
Posted: 2024-03-17 22:43:02
Here is a simple Python example that runs object detection on a video stream and sends the results to the frontend over WebSocket:
```python
import asyncio
import json

import cv2
import numpy as np
import websockets

# Load the detection model via OpenCV's DNN module.
# Note: "yolov5.weights"/"yolov5.cfg" are placeholder file names. YOLOv5 is
# distributed as PyTorch weights, so in practice you would export the model
# to ONNX and load it with cv2.dnn.readNetFromONNX, or use Darknet-format
# YOLOv3/v4 files with readNet as shown here.
net = cv2.dnn.readNet("yolov5.weights", "yolov5.cfg")

with open("coco.names", "r") as f:
    classes = [line.strip() for line in f]

layer_names = net.getLayerNames()
# getUnconnectedOutLayers() returns a flat array in recent OpenCV versions;
# flatten() keeps this working for older versions that return an Nx1 array.
output_layers = [layer_names[i - 1] for i in net.getUnconnectedOutLayers().flatten()]

# Start video capture (device 0 = default camera)
cap = cv2.VideoCapture(0)

async def handler(websocket):
    while True:
        # Read one video frame. Note that cap.read() blocks the event loop;
        # for anything beyond a demo, wrap it in asyncio.to_thread().
        ret, frame = cap.read()
        if not ret:
            break

        # Run object detection on the frame
        blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), (0, 0, 0), True, crop=False)
        net.setInput(blob)
        outs = net.forward(output_layers)

        class_ids, confidences, boxes = [], [], []
        for out in outs:
            for detection in out:
                scores = detection[5:]
                class_id = int(np.argmax(scores))
                confidence = scores[class_id]
                if confidence > 0.5:
                    # Convert normalized center/size to pixel corner coordinates
                    center_x = int(detection[0] * frame.shape[1])
                    center_y = int(detection[1] * frame.shape[0])
                    w = int(detection[2] * frame.shape[1])
                    h = int(detection[3] * frame.shape[0])
                    x = int(center_x - w / 2)
                    y = int(center_y - h / 2)
                    class_ids.append(class_id)
                    confidences.append(float(confidence))
                    boxes.append([x, y, w, h])

        # Encode the detection results as JSON
        detections = [
            {"class": classes[class_ids[i]], "confidence": confidences[i], "box": boxes[i]}
            for i in range(len(boxes))
        ]
        # Send the JSON data to the WebSocket client
        await websocket.send(json.dumps(detections))

async def main():
    # Serve on ws://localhost:8765 (older websockets versions also pass a
    # `path` argument to the handler)
    async with websockets.serve(handler, "localhost", 8765):
        await asyncio.Future()  # run forever

asyncio.run(main())
```
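The detection loop above keeps every box with confidence above 0.5, so overlapping detections of the same object will all be sent to the frontend. In practice a non-maximum suppression (NMS) pass filters these before encoding; OpenCV provides `cv2.dnn.NMSBoxes` for this, but a minimal pure-Python sketch of the same greedy algorithm looks like:

```python
def iou(a, b):
    """Intersection-over-union of two [x, y, w, h] boxes."""
    ix = max(0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def nms(boxes, scores, thresh=0.45):
    """Greedy NMS: keep the highest-scoring box, drop boxes overlapping it."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < thresh for j in keep):
            keep.append(i)
    return keep
```

The returned indices can then be used to filter `boxes`, `confidences`, and `class_ids` before building the JSON payload.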
In this example, OpenCV's DNN module loads the model and captures the video stream, and object detection runs on every frame. The detected objects are encoded as JSON and sent to the frontend over WebSocket.
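For reference, the wire format produced by the server is a JSON array of objects, one per detection. A small round-trip sketch (with made-up example values) shows the shape the frontend should expect:

```python
import json

# Example of one per-frame message, matching the fields built by the server
detections = [
    {"class": "person", "confidence": 0.91, "box": [120, 80, 64, 128]},
    {"class": "dog", "confidence": 0.77, "box": [300, 200, 90, 60]},
]
payload = json.dumps(detections)   # what websocket.send() transmits
decoded = json.loads(payload)      # what JSON.parse() recovers on the frontend
```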
On the frontend, JavaScript decodes the JSON data and draws the detected objects over the video stream. Here is a simple JavaScript example:
```javascript
const canvas = document.getElementById("canvas");
const ctx = canvas.getContext("2d");
const ws = new WebSocket("ws://localhost:8765/");

ws.onmessage = function (event) {
    // Decode the JSON data
    const detections = JSON.parse(event.data);

    // Draw the detections on the canvas
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    for (const detection of detections) {
        const [x, y, w, h] = detection.box;
        ctx.strokeStyle = "red";
        ctx.lineWidth = 2;
        ctx.strokeRect(x, y, w, h);
        ctx.fillStyle = "red";
        ctx.font = "16px Arial";
        ctx.fillText(detection.class + " (" + detection.confidence.toFixed(2) + ")", x, y - 5);
    }
};
```
In this example, the Canvas API draws the detected objects: each time the WebSocket receives a new JSON message, we decode it and render the boxes and labels on the canvas. Note that the server only sends detection data, not video frames, so the video itself would typically be shown in a `<video>` element layered underneath the canvas.
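The drawing code assumes the canvas has the same resolution as the captured frames; if the canvas is displayed at a different size, each box must be rescaled before drawing. A small helper sketch (shown in Python; the JavaScript version is a direct translation) could look like:

```python
def scale_box(box, src_size, dst_size):
    """Rescale an [x, y, w, h] box from the frame size (w, h)
    to the canvas size (w, h)."""
    sx = dst_size[0] / src_size[0]
    sy = dst_size[1] / src_size[1]
    x, y, w, h = box
    return [x * sx, y * sy, w * sx, h * sy]
```

For example, a box detected on a 640x480 frame can be mapped onto a 320x240 canvas by halving all four values.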