
OpenVino rapid deployment tutorial

2024-08-27 15:34:22


        OpenVino is an open-source toolkit developed by Intel for optimizing and deploying AI inference, primarily used to optimize deep learning inference. This tutorial applies to Yolov5-7.0; the test platform is an Intel NUC with an 11th-generation i5 processor.

I. Installation of OpenVino

Go to the OpenVino official website:

https://docs.openvino.ai/2024/get-started/

Choose whichever download method you prefer; this tutorial uses the OpenVino-2022.3.1 version.

II. Model conversion

  1. Convert the .pt model to .onnx format via the export.py script that comes with Yolov5

    python3 export.py --weights xxxx/xxx.pt --include onnx --batch-size 1 --opset 10
    
    PS: If the conversion fails (e.g. opset 10 is not supported, or the installed onnx version is not supported), please rebuild the yolov5 environment and install older versions of the relevant libraries.
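
A quick back-of-the-envelope check on the export: with the default 640x640 input, YOLOv5 places 3 anchors on every cell of three detection grids (strides 8, 16 and 32), so the exported model should report 25200 candidate boxes per image. A minimal sketch of that arithmetic:

```python
# Number of candidate boxes YOLOv5 predicts for a 640x640 input:
# 3 anchors per cell on three grids with strides 8, 16 and 32.
size = 640
strides = [8, 16, 32]
anchors_per_cell = 3
num_predictions = sum(anchors_per_cell * (size // s) ** 2 for s in strides)
print(num_predictions)  # -> 25200
```

If the output shape reported for the converted model does not match (1, 25200, 5 + num_classes), the export likely went wrong.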
    
  2. Use the OpenVino toolchain to convert the .onnx model to xml/bin (IR) models

    mo --input_model xxx/xxx.onnx
    
    PS: If the openvino environment was installed successfully, the mo command can be used directly inside the yolov5 environment.
    

PS: After the conversion is complete, please be sure to check the converted model with a model visualization tool (such as Netron) to confirm it is correct.
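
The deployment code in the next section parses the raw YOLOv5 output, where each row is [cx, cy, w, h, objectness, class scores...]. A minimal numpy sketch of decoding one synthetic row (the values here are made up purely for illustration):

```python
import numpy as np

# One synthetic YOLOv5 output row: [cx, cy, w, h, objectness, 3 class scores]
row = np.array([320., 240., 100., 50., 0.9, 0.05, 0.8, 0.15])

objectness = row[4]                      # box confidence
class_id = int(np.argmax(row[5:]))       # best class index
xmin = row[0] - row[2] / 2               # center-x -> left edge
ymin = row[1] - row[3] / 2               # center-y -> top edge
if objectness >= 0.6 and row[5 + class_id] > 0.25:
    print(class_id, xmin, ymin)          # -> 1 270.0 215.0
```

The same filtering thresholds (0.6 objectness, 0.25 class score) appear in the full code below.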

III. Rapid deployment using the following code

import openvino.runtime as ov
import cv2
import numpy as np
import openvino.preprocess as op

class ObjectDetector:
    def __init__(self, model_xml, model_bin, labels, device="CPU"):
        self.core = ov.Core()
        self.model = self.core.read_model(model_xml, model_bin)
        self.labels = labels
        self.preprocess_model()
        self.compiled_model = self.core.compile_model(self.model, device)
        self.infer_request = self.compiled_model.create_infer_request()

    def preprocess_model(self):
        # Bake preprocessing into the model: u8 BGR NHWC frames in,
        # f32 RGB NCHW scaled to [0, 1] for the network
        premodel = op.PrePostProcessor(self.model)
        premodel.input().tensor().set_element_type(ov.Type.u8).set_layout(ov.Layout("NHWC")).set_color_format(op.ColorFormat.BGR)
        premodel.input().preprocess().convert_element_type(ov.Type.f32).convert_color(op.ColorFormat.RGB).scale([255., 255., 255.])
        premodel.input().model().set_layout(ov.Layout("NCHW"))
        premodel.output(0).tensor().set_element_type(ov.Type.f32)
        self.model = premodel.build()

    def infer(self, img):
        img_re, dw, dh = self.resizeimg(img, (640, 640))
        input_tensor = np.expand_dims(img_re, 0)
        self.infer_request.infer({0: input_tensor})
        output = self.infer_request.get_output_tensor(0)
        detections = self.process_output(output.data[0])
        return detections

    def process_output(self, detections):
        boxes = []
        class_ids = []
        confidences = []
        for prediction in detections:
            confidence = prediction[4].item()
            if confidence >= 0.6:
                classes_scores = prediction[5:]
                _, _, _, max_indx = cv2.minMaxLoc(classes_scores)
                class_id = max_indx[1]
                if classes_scores[class_id] > .25:
                    confidences.append(confidence)
                    class_ids.append(class_id)
                    x, y, w, h = prediction[0].item(), prediction[1].item(), prediction[2].item(), prediction[3].item()
                    xmin = x - (w / 2)
                    ymin = y - (h / 2)
                    box = np.array([xmin, ymin, w, h])
                    boxes.append(box)
        indexes = cv2.dnn.NMSBoxes(boxes, confidences, 0.5, 0.5)
        detections = []
        for i in indexes:
            j = i.item()
            detections.append({"class_index": class_ids[j], "confidence": confidences[j], "box": boxes[j]})
        return detections

    def resizeimg(self, image, new_shape):
        # Letterbox: scale so the long side fits, then pad right/bottom
        old_size = image.shape[:2]
        ratio = float(new_shape[-1]) / max(old_size)
        new_size = tuple([int(x * ratio) for x in old_size])
        image = cv2.resize(image, (new_size[1], new_size[0]))
        delta_w = new_shape[1] - new_size[1]
        delta_h = new_shape[0] - new_size[0]
        color = [100, 100, 100]
        new_im = cv2.copyMakeBorder(image, 0, delta_h, 0, delta_w, cv2.BORDER_CONSTANT, value=color)
        return new_im, delta_w, delta_h


if __name__ == "__main__":
    # Example usage:
    labels = [
        "right",
        "warning",
        "left",
        "people",
        "10",
        "pullover",
        "10off",
        "green",
        "red"
    ]
    # IR file names below are illustrative; use the xml/bin pair produced by mo
    detector = ObjectDetector("/home/nuc/MyCar/yolov5-7.0/best.xml", "/home/nuc/MyCar/yolov5-7.0/best.bin", labels, "CPU")
    cap = cv2.VideoCapture(0)
    while True:
        ret, frame = cap.read()
        if not ret:
            break
        detections = detector.infer(frame)
        for detection in detections:
            classId = detection["class_index"]
            confidence = detection["confidence"]
            label = labels[classId]
            box = detection["box"]
            area = box[2] * box[3]
            print(f"Detected object: {label}, Confidence: {confidence}, Area: {area}")
    cap.release()
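
The letterbox arithmetic in resizeimg above can be sanity-checked without OpenCV. For example, a 480x640 camera frame scaled into a 640x640 canvas keeps its aspect ratio and needs 160 rows of bottom padding:

```python
# Letterbox math for a 480x640 (height x width) frame into a 640x640 canvas
old_size = (480, 640)                                # (height, width)
new_shape = (640, 640)
ratio = float(new_shape[-1]) / max(old_size)         # 640 / 640 = 1.0
new_size = tuple(int(x * ratio) for x in old_size)   # (480, 640), aspect kept
delta_w = new_shape[1] - new_size[1]                 # columns of right padding
delta_h = new_shape[0] - new_size[0]                 # rows of bottom padding
print(new_size, delta_w, delta_h)  # -> (480, 640) 0 160
```

Because the padding is applied only on the right and bottom, box coordinates from the model map back to the original frame without any offset correction, only the padding region is wasted.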