Study on the Application of Real-Time Drone Monitoring in Ordos Open-Pit Coal Mine

Abstract

In the construction of intelligent open-pit mines, improving the safety monitoring capability of the mine transportation system requires overcoming the heavy human involvement and detection blind spots of existing conventional monitoring methods. To this end, this paper builds an open-pit mine monitoring dataset and proposes a real-time intelligent monitoring model based on UAVs. An on-board inference device with strong computing power and low power consumption is selected, lightweight object detection models are chosen for the experiments, and a quantitative standard for evaluating the algorithms under dynamic energy consumption is proposed. Experimental comparison shows that YOLOv4-tiny achieves the highest comprehensive grade in detection accuracy, speed, energy consumption and other respects, making it well suited to the proposed model.


1. Introduction

The coal industry is a basic industry vital to the national economy and energy security, and the main goal of open-pit mine construction at this stage is to build smart mines [1] . So far, most safety monitoring in surface coal mines has been carried out by manual inspection, which is costly and inefficient; deploying fixed camera positions alleviates these problems, but is not flexible enough for the complex terrain of a mine. With the development of Unmanned Aerial Vehicle (UAV) technology, low-altitude UAV remote sensing can reduce the influence of terrain and the interference of human factors, and its flexibility and mobility make it a powerful complement to high-altitude satellite remote sensing, opening the way to an integrated space-air-ground three-dimensional monitoring network [2] .

With the continuing expansion of UAV-mounted equipment, researchers have begun to use UAVs carrying intelligent detection modules for safety monitoring and geological research in coal mines [3] . UAVs equipped with infrared cameras and ground-penetrating radar can jointly detect hidden ground fissures caused by underground mining; such surveys not only locate the hidden fissures, but also show that the surface temperature above them is higher than in non-fissured areas, allowing the fissures to be located efficiently and accurately and the distribution of surface temperatures in mining areas to be explained [4] . Li et al. [5] effectively predicted the total suspended particulate (TSP) concentration in the Anjialing coal mine using a UAV carrying dust detection equipment together with an LSTM network incorporating an attention mechanism. Computer vision algorithms based on machine learning and deep learning are now widely used, and deep learning object detection models offer fast feature extraction and accurate classification and regression. During inspection flights, however, there is a standing trade-off between the computing power of the UAV's single-board device and the real-time detection algorithms that can be deployed on it, and the power available to an intelligent single-board device (SBD) is limited by the need to keep the flight components running. This makes the choice of an efficient on-board computer and detection framework particularly critical [6] . Because the R-CNN [7] and SSD [8] algorithms consume substantial resources and cannot run in real time on a single-board device, the lightweight YOLO [9] object detection algorithm is deployed.

In this paper, a dataset is built from videos of trucks working in the Ordos Dongsheng coalfield in Inner Mongolia, and a Prometheus 600 (abbreviated as P600) drone equipped with an NVIDIA Jetson Xavier NX (abbreviated as NX) is used as the hardware platform for fast detection of surface mine trucks; on this basis, a real-time monitoring model for surface mine transportation equipment is proposed. To determine the object detection algorithm best suited to the NX platform, a darknet-based surface mining truck detection model is deployed on the UAV, the detection accuracy of several algorithms for mining trucks is compared experimentally on the NX, and the four networks are graded in terms of running speed and computational resource occupation through a weighted scoring method. It is hoped that this research will yield a fast, accurate and low-cost object detection technique for open-pit mines, promote the adoption of UAVs, and enhance the intelligence of mining machinery management.

2. Related Work

In recent years, aerial UAVs have been widely used in many fields, including power inspection [10] , rail transportation [11] , agricultural production [12] and disaster monitoring [13] , where targets of interest are extracted from UAV-captured aerial video through object detection [14] .

Convolutional neural network-based object detection methods have also developed rapidly, and the single-board devices that UAVs can carry are becoming smaller and lighter while delivering more computation at lower power [15] . UAVs deploying deep learning object detection models can therefore be used for real-time monitoring of mining trucks in open-pit mines.

In agricultural scenes, drones are commonly used for production operations, and vision-equipped drones are generally applied to disease detection [12] . In power-line or rail-transport inspection, the scene depth is large, drones are typically used in cooperation with fixed camera positions, and the drones perform fine-grained tasks with relatively low real-time requirements. In open-pit mines, the working platforms drop steeply from the surrounding ground and the slope structure is complex, so safety monitoring of mining trucks is essential and benefits the safe production of open-pit coal mines [5] . Although aerial-video drones are well established in other industries, in open-pit coal mines drones generally carry gas and temperature sensors for simple data collection or environmental modeling; their application to aerial surveillance remains incomplete, and the development of real-time monitoring systems is still at the trial stage.

3. Model and Evaluation Method

Aiming at the operating conditions of open-pit coal mines, and drawing on the experience of existing UAV aerial video monitoring in fields such as electric power, agriculture and disaster response, this paper summarizes the shortcomings of the traditional monitoring model, investigates the construction of a real-time monitoring system for open-pit coal mines, and proposes a method for grading the performance of object detection models on a single-board computer during system commissioning.

3.1. Real-Time Safety Monitoring Model for Open-Pit Mines

Traditional applications of UAV object detection require large volumes of video to be generated and collected during routine inspections, and deep learning-based detection allows the objects of interest to be identified efficiently and accurately. Some operators process this video manually in data centers, or deploy intelligent detection frameworks on cloud servers for automated processing; however, solutions relying on beyond-line-of-sight communication cannot keep up with the demands created by ever-growing video volumes. With the performance and speed of intelligent computing devices constantly reaching new limits, more and more video processing tasks are being moved to devices at the edge of the network, as close to the data source as possible: the era of edge computing and edge devices has begun. An intelligent autonomous sensing inspection system that combines cloud, edge computing, deep learning and big data technologies consists of the following three main components.

1) Data acquisition and analysis system

2) Real-time autonomous visual monitoring system for UAVs

3) High-performance single-board devices for UAVs that can run deep detection models in real time

The autonomous inspection system designed in this project is built from the above three components. Based on the above analysis, a system model for real-time UAV detection is proposed in this paper, with the system architecture shown in Figure 1. The inspection process is divided into two phases: a training phase, in which data collection, dataset production and model training are completed to obtain the trained weights; and an inspection phase, in which the detection model and the associated business logic (adding statistics and other task-specific functions) are deployed in advance to the edge-side single-board device. This enables real-time monitoring during aerial inspection, where incoming data must be analyzed rapidly and continuously so that the drone can interpret its surroundings and act within millisecond response times.

Figure 1. Flow chart of the UAV real-time target detection system.

This time constraint makes it impossible to rely entirely on the cloud to process the data stream during detection; the processing must be done locally. Local processing has a drawback, however: edge hardware lacks the computing power available in the cloud, so accuracy and speed must be traded off against the actual requirements. The options are to use stronger and more efficient hardware, or to use a less complex deep neural network; the best results come from striking a balance between the two.
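To make the local-processing requirement concrete, the following minimal sketch (an illustration, not the system's actual implementation) runs a darknet-format YOLO model on every captured frame using OpenCV's DNN module; the file names, camera index and thresholds are hypothetical, and the CUDA backend assumes an OpenCV build with CUDA support:

```python
import cv2

# Hypothetical model files; any darknet-format YOLO network can be loaded this way.
net = cv2.dnn.readNetFromDarknet("yolov4-tiny.cfg", "yolov4-tiny.weights")
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_CUDA)    # run on the on-board GPU
net.setPreferableTarget(cv2.dnn.DNN_TARGET_CUDA_FP16)
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

cap = cv2.VideoCapture(0)  # on-board camera stream (index is hypothetical)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # All processing happens locally, within the frame period.
    class_ids, scores, boxes = model.detect(frame, confThreshold=0.5, nmsThreshold=0.4)
    for (x, y, w, h) in boxes:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    # ...hand the detections to downstream business modules (statistics, alerts)...
cap.release()
```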

3.2. Computing Devices for Real-Time Video Processing on Board Drones

Ayoub and Schneider-Kamp deployed the YOLO series of algorithms on Raspberry Pi 4, NVIDIA Jetson Nano, NVIDIA Jetson TX2 and NVIDIA AGX Xavier devices to evaluate autonomous deep learning object detection on airborne hardware for power-line inspection, but they did not experiment with the newer NVIDIA Jetson Xavier NX [10] .

For UAVs that process graphics and images, the core edge computing device is the aircraft's Single Board Computer (SBC), and the most advanced SBCs currently available are NVIDIA's visual inference modules. NVIDIA's leading single-board computers are shown in Table 1.

On the NX, the NVIDIA Deep Learning Accelerator (NVDLA) engines and the GPU run simultaneously at INT8 precision, whereas on the Jetson Nano and Jetson TX2 the GPU runs at FP16 precision. The Jetson Xavier NX delivers up to 10 times the performance of the Jetson TX2 at the same power in a 25% smaller footprint. Its small size and light weight compared with the AGX Xavier led to the selection of the NX as the single-board device carried by the UAV.

3.3. Performance Evaluation Metrics for Target Detection Algorithms

Table 1. Comparison of NVIDIA single-board computers.

In this paper, the mean average precision (mAP) is used to measure the accuracy of the network models. mAP is the average of the APs of all categories over all images, and is calculated as

$$\mathrm{mAP} = \frac{1}{N} \sum_{i=1}^{N} \mathrm{AP}_i \times 100\% \tag{1}$$

where $N$ is the number of categories (3 in this experiment) and $\mathrm{AP}_i$ is the average precision of class $i$, i.e. the area under the precision-recall (P-R) curve of that category over all images. AP is calculated as follows:

$$\mathrm{AP} = \int_0^1 P(R)\,\mathrm{d}R \tag{2}$$

$$P = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FP}} \tag{3}$$

$$R = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FN}} \tag{4}$$

where TP denotes detections predicted as mining trucks that are actually mining trucks; FP denotes detections predicted as mining trucks that are not; FN denotes actual mining trucks that are not detected; and TN denotes correctly rejected non-trucks. COCO evaluates AP over multiple intersection-over-union (IoU) thresholds; in this experiment, AP is calculated at IoU = 0.5.
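As a concrete illustration of Equations (1)-(4), the sketch below computes the AP of a single class at IoU = 0.5 from detections that have already been matched against ground truth; the function and variable names are ours, and the 101-point interpolation follows the COCO convention mentioned above:

```python
import numpy as np

def average_precision(scores, is_tp, n_gt):
    """AP of one class at a fixed IoU threshold (0.5 here).

    scores: confidence of each detection; is_tp: True if the detection
    matched an unmatched ground-truth box with IoU >= 0.5; n_gt: number
    of ground-truth boxes of this class.
    """
    order = np.argsort(-np.asarray(scores))
    tp = np.asarray(is_tp, dtype=float)[order]
    cum_tp = np.cumsum(tp)
    cum_fp = np.cumsum(1.0 - tp)
    recall = cum_tp / max(n_gt, 1)                           # Equation (4)
    precision = cum_tp / np.maximum(cum_tp + cum_fp, 1e-9)   # Equation (3)
    # Area under the P-R curve (Equation (2)), COCO-style 101-point sampling.
    ap = 0.0
    for r in np.linspace(0.0, 1.0, 101):
        mask = recall >= r
        ap += (precision[mask].max() if mask.any() else 0.0) / 101.0
    return ap

# mAP (Equation (1)) is then the mean of the per-class APs, e.g.:
# map50 = 100.0 * sum(per_class_aps) / len(per_class_aps)
```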

3.4. Quantitative Standard Design for Single Board Computers on Drones

In order to select a network model suitable for real-time object detection on the NX, the performance of different detection models running on the NX must be evaluated, so a quantitative criterion is proposed in this section. The criterion reflects the fact that real-time monitoring consumes a large share of the UAV's computational resources, and that this occupation affects the other functional modules of the overall detection system (e.g. distance measurement, obstacle avoidance); the performance of each model is therefore analyzed together with its occupation of hardware resources.

During mining-truck detection, the NX single-board device on the UAV is switched to maximum performance mode (MAXN), all irrelevant terminal windows are closed, and only the network model detection program and the jtop system resource monitor are running during each test. A scoring system is used to evaluate the overall performance of each network model, calculated as

$$\mathrm{Grade} = \sum_{i=1}^{3} w_i g_i \tag{5}$$

where Grade is the total score; $g_1$, $g_2$ and $g_3$ are the mAP score, the average frame rate score and the hardware resource score, respectively; and $w_1$, $w_2$ and $w_3$ are the corresponding weighting factors, which sum to 1.

When evaluating the resource score for drone detection, GPU, CPU and RAM usage are considered and a sub-score is assigned to each. As a rule of thumb, fluency is the most important factor for real-time drone detection, followed by object detection accuracy and then hardware resource consumption. All evaluation items are therefore weighted according to their importance, with the weighting factors summing to 1: $w_1$ = 0.4, $w_2$ = 0.4 and $w_3$ = 0.2. Each test item has a maximum standard score of 5 points, decreasing by 1 point for each 20% drop in the measured value. The item-specific evaluation criteria are given in Table 2.

Here $w_1$ weights the detection accuracy (mAP) score, $w_2$ the average frame rate score, and $w_3$ the hardware resource score.
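A short sketch of how this grading might be computed (the names and the exact score binning are our assumptions; the hardware percentages would come from readings such as those reported by jtop):

```python
def item_score(pct):
    # 5-point standard score, losing 1 point for every 20% drop
    # (100-81% -> 5, 80-61% -> 4, and so on).
    return max(0, 5 - int((100.0 - pct) // 20))

def grade(map_pct, fps_pct, resource_pct, w=(0.4, 0.4, 0.2)):
    # Equation (5): Grade = w1*g1 + w2*g2 + w3*g3. All inputs are assumed
    # normalized so that higher is better (resource usage inverted first).
    g = (item_score(map_pct), item_score(fps_pct), item_score(resource_pct))
    return sum(wi * gi for wi, gi in zip(w, g))

# Example: 80.7 mAP, a frame rate at 90% of target, and 70% resource
# headroom (inverted usage): grade(80.7, 90.0, 70.0) -> 0.4*5 + 0.4*5 + 0.2*4 = 4.8
```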

4. Experiment

4.1. Experimental Environment

The model training platform is an H3C UniServer R4900 G3 two-socket rack server with an Intel(R) Xeon(R) Bronze 3204 CPU, 16 GB of RAM and an NVIDIA A10 GPU with 24 GB of video memory, on which the DarkNet deep learning framework was built. The training of the object detection models was completed on this platform.

The visual inference module carried by the drone is NVIDIA's NX single-board device, with a Volta-architecture GPU containing 384 CUDA cores and 48 Tensor Cores, over 59.7 GB/s of memory bandwidth, 8 GB of 128-bit LPDDR4x memory at up to 1600 MHz, and 21 TOPS of compute performance. As mentioned earlier, the YOLO family of object detection algorithms (YOLOv3, YOLOv4, YOLOv3-tiny and YOLOv4-tiny) is used in this experiment.

The training parameters of YOLOv3, YOLOv4, YOLOv3-tiny and YOLOv4-tiny are the same: a learning rate of 0.001 and mini-batch training with a batch size of 4 for both the training and test sets, with the dataset resized to a uniform resolution of 640 × 640 for training and testing.

Table 2. Evaluation criteria for each test item.

T1 is CPU usage, where core occupancy ≥ 95% counts as "core full"; T2 is GPU usage; T3 is RAM usage.
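For reference, a training run with the hyperparameters above might be launched through the standard darknet CLI wrapped in Python (a minimal sketch; the file names are hypothetical, and the learning rate, batch size and 640 × 640 input resolution are assumed to be set in the .cfg file):

```python
import subprocess

# Hypothetical data/config files; learning_rate=0.001, batch=4 and
# width=height=640 are assumed to be set in the .cfg as described above.
subprocess.run(
    [
        "./darknet", "detector", "train",
        "data/mine_truck.data",      # class names and train/valid image lists
        "cfg/yolov4-tiny-mine.cfg",  # network definition and hyperparameters
        "yolov4-tiny.conv.29",       # pretrained convolutional weights
        "-map",                      # periodically report mAP on the validation set
    ],
    check=True,
)
```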

4.2. Building a Data Set

Data were collected using a DJI Mavic 2 Zoom drone (Figure 2(a)), and real-time monitoring was carried out using a P600 research drone equipped with a Jetson Xavier NX (Figure 2(b)).

The dataset in this experiment was produced from aerial video captured by a UAV at the Dongsheng coalfield in Ordos, as shown in Figure 3. The video files were split at a sampling rate of 10 images/s, and redundant and repetitive images were eliminated. To enrich the diversity of the dataset and improve the generalization ability of the model in practical use, a small part of the dataset was gathered from the Internet or by screen capture; a total of 1863 images were collected as the original images.
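For illustration, frames could be sampled from the source videos at roughly 10 frames per second with OpenCV as follows (a sketch; the paths are hypothetical and the authors' exact splitting tool is not specified beyond the 10 images/s rate):

```python
import os
import cv2

def split_video(video_path, out_dir, target_fps=10.0):
    # Save frames from video_path at about target_fps frames per second.
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    src_fps = cap.get(cv2.CAP_PROP_FPS) or target_fps
    step = max(1, round(src_fps / target_fps))
    idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            cv2.imwrite(os.path.join(out_dir, f"frame_{saved:06d}.jpg"), frame)
            saved += 1
        idx += 1
    cap.release()

split_video("dongsheng_patrol.mp4", "frames/")  # hypothetical file name
```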

The dataset was divided into training, validation and test sets in the ratio 8:1:1, and was expanded during training with data augmentation methods such as Mosaic and Mixup.
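An 8:1:1 split like the one described can be reproduced in a few lines (a sketch with an arbitrary fixed seed for repeatability; Mosaic and Mixup are applied later, at training time, by the framework):

```python
import random

def split_dataset(image_paths, seed=0):
    # Shuffle deterministically, then split train/validation/test 8:1:1.
    paths = sorted(image_paths)
    random.Random(seed).shuffle(paths)
    n = len(paths)
    n_train, n_val = int(0.8 * n), int(0.1 * n)
    return (paths[:n_train],
            paths[n_train:n_train + n_val],
            paths[n_train + n_val:])
```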

5. Results

5.1. Training Results

In this paper, four deep learning models, YOLOv3-DarkNet53, YOLOv3-tiny, YOLOv4-CSPDarknet53 and YOLOv4-tiny, were trained to construct the autonomous target detection algorithm, with 6000 (Figure 4) and 12,000 (Figure 5) iterations on the dataset respectively.

Figure 2. Experimental UAVs. (a) DJI Mavic 2 Zoom; (b) Prometheus 600.

Figure 3. Open-pit mine target aerial image dataset.

From Figure 4 it can be seen that YOLOv4-tiny basically converges after a relatively small number of iterations, with its loss settling around 0.6, while YOLOv3 and YOLOv4 quickly reach high training accuracies of 91.3% and 95.1% respectively.

Figure 4. Results for the 4 models trained for 6000 iterations. (a) YOLOv3; (b) YOLOv3-tiny; (c) YOLOv4; (d) YOLOv4-tiny.

Figure 5. Results for the 4 models trained for 12,000 iterations. (a) YOLOv3; (b) YOLOv3-tiny; (c) YOLOv4; (d) YOLOv4-tiny.

Compared with YOLOv4 and YOLOv3, YOLOv4-tiny and YOLOv3-tiny have fewer convolutional layers, which makes them better suited to real-time processing but theoretically costs some accuracy. As can be seen from Figure 5, the training accuracy of YOLOv4 is higher (4.6% above YOLOv3), the YOLOv3 and YOLOv4-tiny algorithms converge well, and the loss of YOLOv4-tiny drops to 0.65 or less with little variation after a small number of iterations. Of the four models, YOLOv4-tiny gave the best overall training results, with an accuracy of 80% and a loss of 0.62; YOLOv3 also achieved a low loss of 0.8 and an accuracy of 91.7%, but its 53 convolutional layers make it significantly more computationally expensive than the 29 convolutional layers of YOLOv4-tiny.

5.2. Results of Real-Time Video Detection on NX for Four Models

The results of real-time video detection on the NX are shown in Table 3, which lists the detection speed of each algorithm and the load on hardware such as CPU, GPU and memory; the grades in Section 5.3 are derived from these results.

5.3. Grades of Real-Time Detection on NX for the Four Models

Based on the real-time detection results in Table 3 and the detection accuracy (mAP) values of the four network models (YOLOv3: 92.8, YOLOv3-tiny: 79.5, YOLOv4: 97.1, YOLOv4-tiny: 80.7), the score of each model on the NX was calculated using the criteria in Section 3.4, as shown in Table 4.

YOLOv4-tiny and YOLOv3-tiny scored highest on resource usage thanks to their lightweight size. Although YOLOv4 and YOLOv3 have higher detection accuracy, they score poorly for real-time detection because of their slow detection speed and high resource usage. Other modules also request GPU resources while the drone performs real-time monitoring; given the high GPU usage of YOLOv3 and YOLOv4, using YOLOv4-tiny is likely more beneficial to the overall performance of the system [10] .

6. Conclusions

This paper presents a model for running a real-time autonomous target detection algorithm on a UAV-mounted single-board device, based on actual work experience in the Dongsheng coalfield in Inner Mongolia, with the aim of detecting personnel, mining trucks and other equipment.

Table 3. Results of the target detection models when run on the single-board device.

Table 4. Grades of the target detection models running on the single-board device.

By comparing the most popular single-board devices available today, the newest NVIDIA Jetson Xavier NX was selected as the vision inference module embedded on the UAV, and the real-time processing of YOLOv3, YOLOv3-tiny, YOLOv4 and YOLOv4-tiny on the NX platform was then compared. The results show that YOLOv4-tiny achieves the best balance of accuracy and frame rate during real-time detection.

In the future, our work will be extended to multiple concurrent tasks on UAVs (path planning, beyond-line-of-sight control), as well as the ability to take off from and land on hangars for leapfrog inspection operations, improving safety and extending range.

Funding

This work is supported by the Hebei Key Science Foundation Youth Fund of China (project approval number 19270318D) and the Fundamental Research Funds for the Central Universities of China (project approval number 3142021009).

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Zhang, R.X., Mao, S.J., Zhao, H.Z., et al. (2019) Framework and Structure Design of System Construction for Intelligent Open-Pit Mine. Coal Science and Technology, 47, 1-23.
[2] Wu, J.Y., Wang, C., Yang, H.J. and Ni, X.S. (2021) Graphic Analysis of Research Hotspots and Develop Trends of Domestic Intelligent Coal Mining. Journal of North China Institute of Science and Technology, 18, 110-118.
[3] Geng, S.H., Zhang, W.J., Tian, S.J., Zi, Y.K., Miao, J.Y. and Shen, R. (2022) Study on UAV Image Extraction of Surface Crack Information Technology in Goaf. Open Access Library Journal, 9, e8675.
https://doi.org/10.4236/oalib.1108675
[4] Zhang, Y.X., Ling, C.W., Zhang, K.N., Gao, Y.R., Sun, B. and Wang, X.L. (2022) Detection of Hidden Mining-Induced Ground Fissures via Unmanned Aerial Vehicle Infrared System and Ground-Penetrating Radar. International Journal of Rock Mechanics and Mining Sciences, 160, Article ID: 105254.
https://doi.org/10.1016/j.ijrmms.2022.105254
[5] Li, L., Zhang, R.X., Sun, J.D., He, Q., Kong, L.Z. and Liu, X. (2021) Monitoring and Prediction of Dust Concentration in an Open-Pit Mine Using a Deep-Learning Algorithm. Journal of Environmental Health Science and Engineering, 19, 401-414.
https://doi.org/10.1007/s40201-021-00613-0
[6] Lu, J., Ma, C., Li, L., Xing, X., Zhang, Y., Wang, Z. and Xu, J. (2018) A Vehicle Detection Method for Aerial Image Based on YOLO. Journal of Computer and Communications, 6, 98-107.
https://doi.org/10.4236/jcc.2018.611009
[7] Girshick, R. (2015) Fast R-CNN. 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, 7-13 December 2015, 1440-1448.
https://doi.org/10.1109/ICCV.2015.169
[8] Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C.-Y. and Berg, A.C. (2016) SSD: Single Shot MultiBox Detector. Computer Vision - ECCV 2016, Amsterdam, 11-14 October 2016, 21-37.
https://doi.org/10.1007/978-3-319-46478-7
[9] Ren, J. and Wang, Y. (2022) Overview of Object Detection Algorithms Using Convolutional Neural Networks. Journal of Computer and Communications, 10, 115-132.
[10] Ayoub, N. and Schneider-Kamp, P. (2021) Real-Time On-Board Deep Learning Fault Detection for Autonomous UAV Inspections. Electronics, 10, Article No. 1091.
https://doi.org/10.3390/electronics10091091
[11] Ye, T., Zhang, Z., Zhang, X. and Zhou, F. (2020) Autonomous Railway Traffic Object Detection Using Feature-Enhanced Single-Shot Detector. IEEE Access, 8, 145182-145193.
https://doi.org/10.1109/ACCESS.2020.3015251
[12] Cui, M.D., Lou, Y.Y., Ge, Y.L. and Wang, K.Q. (2023) LES-YOLO: A Lightweight Pinecone Detection Algorithm Based on Improved YOLOv4-Tiny Network. Computers and Electronics in Agriculture, 205, Article ID: 107613.
https://doi.org/10.1016/j.compag.2023.107613
[13] Rabta, B., Wankmüller, C. and Reiner, G. (2018) A Drone Fleet Model for Last-Mile Distribution in Disaster Relief Operations. International Journal of Disaster Risk Reduction, 28, 107-112.
https://doi.org/10.1016/j.ijdrr.2018.02.020
[14] Ramachandran, A. and Sangaiah, A.K. (2021) A Review on Object Detection in Unmanned Aerial Vehicle Surveillance. International Journal of Cognitive Computing in Engineering, 2, 215-228.
https://doi.org/10.1016/j.ijcce.2021.11.005
[15] Albanese, A., Nardello, M. and Brunelli, D. (2022) Low-Power Deep Learning Edge Computing Platform for Resource Constrained Lightweight Compact UAVs. Sustainable Computing: Informatics and Systems, 34, Article ID: 100725.
https://doi.org/10.1016/j.suscom.2022.100725
