
Intelligent traffic monitoring system based on RDK X5


- This blog is also synchronized to CSDN: /xiongqi123123/article/details/143840675?sharetype=blogdetail&sharerId=143840675&sharerefer=PC&sharesource=xiongqi123123&spm=1011.2480.3001.8118

I. Background of the project

Against the backdrop of the popularization of higher education and the continued expansion of universities, student enrollment at many institutions has grown steadily. The surge in demand for student outings and takeaway services, together with more frequent parental visits, has sharply increased both the number of off-campus vehicles entering the school and the number of student-owned motor vehicles and non-motorized vehicles. These changes not only intensify traffic flow on campus but also give rise to frequent uncivilized traffic behaviors: vehicles speeding, showing off with drifting, parking illegally, and ignoring pedestrian priority; student bicycles parked haphazardly, frequently encroaching on motorways and sidewalks. All of this seriously disrupts normal traffic order and safe passage on campus.
In view of these severe traffic management challenges, this project was conceived. Its core objective is to design and implement an intelligent connected traffic monitoring system based on deep learning, using the domestic high-end AI development board RDK X5 as the core hardware platform. The system aims to cope accurately and efficiently with the complex traffic management problems brought about by adjustments to electric vehicle restriction policies and the opening up of external vehicle access. Through this project, we expect to raise the level of intelligence in campus traffic management, promote civilized traffic behavior, and ensure a safe and convenient travel environment for teachers and students.


II. Content of the project

Based on deep learning algorithms and image processing technology, this system realizes accurate detection and identification of the various types of motor vehicles and non-motorized vehicles on campus, enabling precise tracing of vehicles within the school. The specific functions include:

2.1 Vehicle Detection and License Plate Recognition

    (1) Vehicle classification and identification: deep learning models are trained to classify and recognize the vehicle types on campus (e.g., electric vehicles, motorcycles, cars, etc.).
    (2) Automatic license plate recognition: using OCR (Optical Character Recognition), the license plate number of a motor vehicle is automatically captured, recognized, and matched against the school database in real time to determine the identity and nature of the vehicle, including distinguishing whether it is a takeaway vehicle, a ride-hailing vehicle, a temporary external vehicle, or an unregistered vehicle (a minimal matching sketch follows this list).
    (3) Traffic flow recording and analysis: the system records changes in traffic flow in each monitored area in real time, providing data support for statistical analysis of campus traffic. The data is stored on the server regularly, allowing the university to analyze traffic flow trends and providing a scientific basis for policy formulation and vehicle flow regulation.
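
As an illustration of the database matching in (2), here is a minimal Python sketch; the SQLite file name and the campus_vehicles table with plate and category columns are hypothetical placeholders, not the project's actual schema:

import sqlite3

def classify_plate(plate, db_path="campus.db"):
    """Match a recognized plate against the campus registry.
    Returns the registered category, or 'unregistered' if the plate is unknown."""
    conn = sqlite3.connect(db_path)
    try:
        row = conn.execute(
            "SELECT category FROM campus_vehicles WHERE plate = ?",
            (plate,),
        ).fetchone()
    finally:
        conn.close()
    # Categories could include 'takeaway', 'ride-hailing', 'temporary', etc.
    return row[0] if row else "unregistered"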

2.2 Intelligent Detection of Violations

The system collects campus traffic behavior data through high-precision cameras and multiple sensors, and combines deep learning algorithms to judge traffic behavior comprehensively against the national road traffic law and the traffic management regulations of each campus or community area, achieving the following functions:
    (1) Speed detection and real-time alerts: the camera captures moving motor vehicles, and speed is calculated from the driving track and the time interval. If a vehicle is detected exceeding the campus speed limit, the system automatically records the relevant data and sends a reminder in real time to curb speeding.
    (2) Parking violation identification: the algorithm automatically identifies illegal parking by motor vehicles and non-motorized vehicles in no-parking areas; if illegal parking is detected for longer than a specified period, the system automatically captures photographic evidence and uploads the violation to the monitoring platform (see the dwell-time sketch after this list).
    (3) Yielding-to-pedestrian behavior recognition: based on object detection and behavior recognition models, the system identifies whether motor vehicles yield to pedestrians on campus. Any failure to yield is recorded and a reminder sent, helping to create a civilized traffic environment.
    (4) Data logging and violation file generation: the system automatically stores data on all types of violations, forming a violation file that lets the school further track and manage violations.
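
As a concrete illustration of the dwell-time logic in (2), here is a minimal Python sketch; the polygon coordinates, 30 fps frame rate, and 30-second threshold are illustrative assumptions, not the project's actual configuration:

import numpy as np
import cv2

FPS = 30                  # assumed camera frame rate
DWELL_LIMIT_S = 30        # assumed dwell time before a stop counts as a violation
# Hand-drawn no-parking zone in pixel coordinates (illustrative values)
NO_PARKING_ZONE = np.array([[100, 400], [500, 400], [500, 600], [100, 600]],
                           dtype=np.float32).reshape(-1, 1, 2)

first_seen = {}  # track_id -> frame index when the vehicle entered the zone

def check_parking(track_id, cx, cy, frame_id):
    """Return True once a tracked vehicle has dwelt in the zone too long."""
    inside = cv2.pointPolygonTest(NO_PARKING_ZONE, (float(cx), float(cy)), False) >= 0
    if not inside:
        first_seen.pop(track_id, None)  # vehicle left the zone: reset its timer
        return False
    start = first_seen.setdefault(track_id, frame_id)
    return (frame_id - start) / FPS > DWELL_LIMIT_S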

2.3 Intelligent Connected System and Online Monitoring Platform

To ensure real-time and convenient campus traffic management, this system is built on the RDK X5 and provides a visualization GUI as well as an online monitoring platform with intelligent analysis:
    (1) Automatic reporting of violations: when a violation is detected, the system automatically uploads the violation information (e.g., parking violation photos, vehicle identification information, type of violation) to the online monitoring platform via the Internet, for real-time viewing and tracking by the campus security and traffic management departments.
    (2) Real-time monitoring of traffic conditions: the monitoring platform automatically generates a campus-wide traffic flow map and dynamically displays the traffic flow in each area across time periods, helping to identify and relieve congestion points in a timely manner.
    (3) Intelligent notification: the system can alert administrators when traffic is excessive or violations are frequent at a particular location, making it easier for the university to take targeted control measures.

2.4 System Integration and Edge Deployment

This project integrates and deploys all the algorithms onto the domestic high-end AI development board RDK X5 using edge computing, realizing real-time monitoring and data processing and improving response speed and computational efficiency:
    (1) Edge computing deployment: by distributing data processing tasks to edge nodes, traffic data is computed quickly on local devices, reducing upload latency, improving the system's real-time performance and intelligence, and reducing investment in server room equipment.
    (2) Deep learning compute support: the RDK X5 development board provides powerful compute, ensuring that the deep learning algorithms run efficiently and that vehicles, pedestrians and other targets are recognized, analyzed and processed in real time.
    (3) Scalable design: the system reserves interfaces and APIs so that more monitoring devices or functional modules can be connected in the future, supporting continuous upgrading and optimization of campus traffic management.

III. Development process

3.1 Vehicle recognition model training

To realize an intelligent connected traffic detection system, the most important thing is of course vehicle identification and detection. First, using a tripod and a spare phone on the balcony outside the laboratory, I filmed the campus road continuously to build a traffic dataset of the real environment.
At the same time, I found online the UA-DETRAC dataset of intersection vehicles captured by traffic cameras. The two sources were combined in proportion to build our final dataset, which then went through a lengthy annotation pass; after labeling, training was done with yolov5-2.0. The training process is omitted here... (10,000 words)...
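
As a small illustration of how the two sources can be combined at a fixed ratio and split for training, here is a minimal Python sketch; the directory layout, the 70/30 mix, and the 10% validation fraction are assumptions for illustration, not the actual proportions we used:

import random
import shutil
from pathlib import Path

def build_dataset(self_shot_dir, ua_detrac_dir, out_dir,
                  mix_ratio=0.7, val_frac=0.1):
    """Mix self-shot frames with UA-DETRAC frames at mix_ratio,
    then split the combined set into train/val image folders."""
    imgs = list(Path(self_shot_dir).glob("*.jpg"))
    n_ua = int(len(imgs) * (1 - mix_ratio) / mix_ratio)  # keep the target ratio
    ua_imgs = list(Path(ua_detrac_dir).glob("*.jpg"))
    imgs += random.sample(ua_imgs, min(n_ua, len(ua_imgs)))
    random.shuffle(imgs)
    n_val = int(len(imgs) * val_frac)
    for split, files in (("val", imgs[:n_val]), ("train", imgs[n_val:])):
        dst = Path(out_dir) / "images" / split
        dst.mkdir(parents=True, exist_ok=True)
        for f in files:
            shutil.copy(f, dst / f.name)  # labels would be copied alongside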

3.2 Model Quantization and Deployment

3.2.1 Installing the official Docker OE package

· DIGUA X5 Algorithmic Toolchain Version Released

· Horizon OE companion downloadable Docker image for GPU/CPU. How does it work?

· [BPU Deployment Tutorial] An article to take you easily out of the model deployment novice village

To use the RDK X5's 10 TOPS of compute, our trained .pt model needs to be converted, similar to how TensorRT is used on Jetson to accelerate models: the model we trained must be quantized into a form that can run on the BPU architecture. For this we use the OE conversion package provided officially by Digua (the links are attached above). First, install Docker (Docker: Accelerated Container Application Development); just select the corresponding version and install it.


- For a detailed installation walkthrough, you can refer to this tutorial: [Docker] Installing and using Docker on Windows 11, a nanny-level tutorial (CSDN blog)

- And for installing Docker to another drive: Windows 11: install Docker to the D drive (any non-C drive will do) (CSDN blog)

After installing Docker and the official OE package, we can start quantizing our model!

First, we use the code that comes with yolov5 to convert our trained .pt model to a .onnx model.

Attention!!! Attention!!!

We need to modify the output head to make sure it produces a 4-dimensional NHWC output. Modify the forward method of the Detect class in ./models/yolo.py, at around line 22. Note: it is recommended to keep the original forward method, e.g. rename it to something like forward_, so it is easy to switch back for training.

def forward(self, x):
    # Run each detection head's conv, then permute NCHW -> NHWC for the BPU
    return [self.m[i](x[i]).permute(0, 2, 3, 1).contiguous() for i in range(self.nl)]

If you are not sure about the steps, please refer to the Model Zoo tutorial: YOLOv5 Detect.

Then we quantize the model. Referring to the toolchain manual and the OE package, first check the model to confirm that all operators land on the BPU, then modify the YAML file as prompted (the corresponding yaml files are in the ./ptq_yamls directory) and compile:

(bpu_docker) $ hb_mapper checker --model-type onnx --march bayes-e --model Car_Detect.onnx
(bpu_docker) $ hb_mapper makertbin --model-type onnx --config yolov5_detect_bayese_640x640_nv12.yaml

After the model conversion is complete, you can visualize the model or check its inputs and outputs with the following commands:

hb_perf Car_Detect.bin  # Visualize the model
hrt_model_exec model_info --model_file Car_Detect.bin  # Check the inputs and outputs of the bin model

Then we can use TROS's dnn_node_example to detect cars.

# Configure the environment
source /opt/tros/humble/setup.bash
# Configure the camera type (a USB camera here)
export CAM_TYPE=usb
# Launch the launch file
ros2 launch dnn_node_example dnn_node_example.launch.py dnn_example_config_file:=(change to your own model config file path).json dnn_example_image_width:=480 dnn_example_image_height:=640

Then, on a device in the same network environment, we open http://IP:8000 (IP being the RDK X5's IP) to see the real-time object detection results.

3.2.2 Deployment of the ByteTrack multi-target tracking algorithm

Object detection alone is not enough; we also need to give each car an independent ID, so that the system can intelligently analyze where each car goes next and what it does. For this we introduce multi-object tracking: multi-object tracking algorithms associate a target's detections across frames in order to keep its identity consistent. Association algorithms match targets across frames based on their appearance, motion, and spatio-temporal information; common approaches include matching based on appearance features and matching based on motion models (e.g., Kalman filtering with Hungarian matching, nearest-neighbor matching, multi-target data association, etc.). We use the ByteTrack algorithm here.

ByteTrack is a tracking-by-detection algorithm that, like other non-ReID trackers, uses only the bounding boxes produced by the detector. It predicts each track's bounding box with a Kalman filter, then uses the Hungarian algorithm to match detections to tracks. Its biggest innovation is the use of low-score bboxes: the authors observe that low-score bboxes are often generated when an object is occluded, and discarding them outright hurts performance, so ByteTrack uses them in a second round of matching, which effectively reduces the ID switches caused by occlusion during tracking.
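
The two-stage association at the heart of ByteTrack can be sketched in a few dozen lines. This is an illustrative Python sketch, not the actual ByteTrack implementation: the Kalman prediction step is omitted, tracks are plain dicts, and the 0.6/0.1 thresholds are assumed values:

import numpy as np
from scipy.optimize import linear_sum_assignment

HIGH, LOW = 0.6, 0.1  # assumed detection-score thresholds

def iou(a, b):
    """IoU of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def hungarian_match(tracks, dets, min_iou=0.3):
    """Hungarian assignment on a (1 - IoU) cost matrix."""
    if not tracks or not dets:
        return [], list(range(len(tracks))), list(range(len(dets)))
    cost = np.array([[1.0 - iou(t["box"], d["box"]) for d in dets] for t in tracks])
    rows, cols = linear_sum_assignment(cost)
    pairs = [(r, c) for r, c in zip(rows, cols) if 1.0 - cost[r, c] >= min_iou]
    un_t = [i for i in range(len(tracks)) if i not in {r for r, _ in pairs}]
    un_d = [j for j in range(len(dets)) if j not in {c for _, c in pairs}]
    return pairs, un_t, un_d

def bytetrack_step(tracks, dets, next_id):
    """One frame of two-stage association (Kalman prediction omitted for brevity)."""
    high = [d for d in dets if d["score"] >= HIGH]
    low = [d for d in dets if LOW <= d["score"] < HIGH]
    # Stage 1: match high-score detections against all existing tracks
    m1, un_t, un_d = hungarian_match(tracks, high)
    for r, c in m1:
        tracks[r]["box"] = high[c]["box"]
    # Stage 2: leftover tracks get a second chance with low-score (occluded) boxes
    leftover = [tracks[i] for i in un_t]
    m2, _, _ = hungarian_match(leftover, low)
    for r, c in m2:
        leftover[r]["box"] = low[c]["box"]
    # Unmatched high-score detections spawn new track IDs
    for j in un_d:
        tracks.append({"id": next_id, "box": high[j]["box"]})
        next_id += 1
    return tracks, next_id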

Following TROS's standard input and output conventions, we add a Tracker wrapper layer between Yolo's output and the Web display, so that each recognition result carries a unique ID:

class YoloTracker : public rclcpp::Node {
public:
    YoloTracker(const std::string &node_name) : Node(node_name) {
        RCLCPP_INFO(this->get_logger(), "Tracker node is created");
        // Input/output topics are configurable via ROS parameters
        this->declare_parameter("input_topic", "dnn_node_sample");
        this->get_parameter("input_topic", input_topic_);
        this->declare_parameter("output_topic", "tracker_res");
        this->get_parameter("output_topic", output_topic_);

        perception_subscription_ = this->create_subscription<ai_msgs::msg::PerceptionTargets>(
            input_topic_, 10, std::bind(&YoloTracker::PerceptionCallback, this, std::placeholders::_1));

        tracking_result_publisher_ = this->create_publisher<ai_msgs::msg::PerceptionTargets>(output_topic_, 10);

        // BYTETracker(frame_rate, track_buffer)
        tracker_ = std::make_shared<BYTETracker>(30, 30);
    }

private:
    std::string input_topic_;
    std::string output_topic_;
    std::shared_ptr<BYTETracker> tracker_;

    rclcpp::Subscription<ai_msgs::msg::PerceptionTargets>::SharedPtr perception_subscription_;
    rclcpp::Publisher<ai_msgs::msg::PerceptionTargets>::SharedPtr tracking_result_publisher_;

    void PerceptionCallback(const ai_msgs::msg::PerceptionTargets::SharedPtr msg) {
        ai_msgs::msg::PerceptionTargets tracking_result;
        std::vector<STrack> tracked_objects;

        // Our wrapper converts the detections in msg and runs one ByteTrack step
        tracker_->update(msg, tracked_objects);
        for (auto &tracked_object : tracked_objects) {
            // Skip tiny boxes and boxes with an extreme aspect ratio
            bool is_vertical = tracked_object.tlwh[2] / tracked_object.tlwh[3] > 1.6;
            if (tracked_object.tlwh[2] * tracked_object.tlwh[3] <= 20 || is_vertical) {
                continue;
            }
            float xmin = tracked_object.tlwh[0];
            float ymin = tracked_object.tlwh[1];
            float xmax = tracked_object.tlwh[0] + tracked_object.tlwh[2];
            float ymax = tracked_object.tlwh[1] + tracked_object.tlwh[3];

            ai_msgs::msg::Target target;
            target.set__type(tracked_object.class_name);
            ai_msgs::msg::Roi roi;
            roi.rect.set__x_offset(static_cast<uint32_t>(xmin));
            roi.rect.set__y_offset(static_cast<uint32_t>(ymin));
            roi.rect.set__width(static_cast<uint32_t>(xmax - xmin));
            roi.rect.set__height(static_cast<uint32_t>(ymax - ymin));
            roi.set__confidence(tracked_object.score);

            target.rois.emplace_back(roi);
            target.set__track_id(tracked_object.track_id);
            tracking_result.targets.emplace_back(std::move(target));
        }
        tracking_result.header = msg->header;
        tracking_result.fps = msg->fps;
        tracking_result_publisher_->publish(tracking_result);
    }
};

Then, again from the same network environment, we enter http://IP:8000 (IP being the RDK X5's IP) to see the real-time multi-object tracking results. Here we used a short video stitched together from several UA-DETRAC sequences; as you can see, the tracking works quite well!


3.2.3 Realization of speeding warnings

With vehicle tracking in place, we next need an algorithm for determining violations. Due to time constraints (November of junior year means final exam week...), I used the simplest approach to detect speeding. The process is: first, OpenCV automatically identifies the centerline of the two-way lanes; then two trigger lines are automatically generated perpendicular to the lane line and centerline. For each tracked ID, we record the order in which it crosses the two lines and the frame IDs at the crossings; combined with the standard height of domestic gantries as a real-world scale reference, the speed of each vehicle can be estimated.
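
A minimal sketch of this crossing-based estimate is below; the 8 m line spacing, 30 fps frame rate, and 20 km/h limit are illustrative assumptions (in practice the spacing would come from the gantry-based scale calibration mentioned above):

FPS = 30.0              # assumed camera frame rate
LINE_GAP_M = 8.0        # assumed real-world distance between the two trigger lines
SPEED_LIMIT_KMH = 20.0  # assumed campus speed limit

first_cross = {}  # track_id -> frame index at which the first trigger line was crossed

def on_line_cross(track_id, frame_id):
    """Called each time a tracked vehicle crosses either trigger line.
    Returns the speed in km/h once both lines have been crossed, else None."""
    start = first_cross.pop(track_id, None)
    if start is None:
        first_cross[track_id] = frame_id  # first crossing: start timing
        return None
    dt_s = (frame_id - start) / FPS
    speed_kmh = LINE_GAP_M / dt_s * 3.6
    if speed_kmh > SPEED_LIMIT_KMH:
        print(f"[ALERT] vehicle {track_id} speeding: {speed_kmh:.1f} km/h")
    return speed_kmh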


3.2.4 Sending violation alerts to cell phones

Getting the violation information is not enough; we also need to push it to the administrator's phone in real time. Here we use PushPlus as the gateway for our push service. Using PushPlus is very simple: register an account, bind it with the WeChat official account, and you get a token; through the API we can then send messages to the official account.

The code used is as follows:

import requests

def send_wechat(msg):
    token = ' '  # paste your own PushPlus token here
    title = ' '
    content = msg
    template = 'html'
    # Standard PushPlus endpoint (the host was stripped from the original post)
    url = f"https://www.pushplus.plus/send?token={token}&title={title}&content={content}&template={template}"
    print(url)
    r = requests.get(url=url)
    print(r.text)

if __name__ == '__main__':
    msg = 'this is a python test'
    send_wechat(msg)

Running it, the violation information is successfully pushed to the phone.


3.2.5 Traffic jam heat map

(TODO)

3.2.6 Database construction

(TODO)

3.2.7 Additional violation alerts

(TODO)

IV. Summary of the incubation camp

This one-and-a-half-month incubation camp experience was invaluable. Every closed-door discussion left me packed with takeaways, and I met a lot of experienced developers in the group. But the time was really so short! I'm really sorry that I couldn't complete the program perfectly; time is genuinely tight for college students, especially juniors during finals week, but I'll definitely fill in the gaps after I finish my exams! I hope the next incubation camp can be held during winter break (there are lots and lots of competitions in the summer)?