Research on Biometric Identification Method of Nuclear Cold Source Disaster Based on Deep Learning

Abstract

In this paper, an improved Fast R-CNN algorithm for recognizing images of the disaster-causing organisms of nuclear power cold sources is proposed to improve the operational safety of nuclear power plants. First, image data sets of the disaster-causing organisms, hairy shrimp and jellyfish, were established. Then, to address the low recognition accuracy and the failure to recognize small individuals in disaster biometrics, a Gamma correction algorithm was used to enhance the images in the data set, improving image quality and reducing noise interference. Transposed convolution was introduced into the convolution layer to increase the recognition accuracy of small targets. The experimental results show that the recognition rate of the proposed algorithm is 6.75%, 7.5%, 9.8% and 9.03% higher than that of ResNet-50, MobileNetv1, GoogleNet and VGG16, respectively. Practical tests show that the accuracy of the proposed algorithm is clearly better than that of the other algorithms and its recognition efficiency is higher, which essentially meets the requirements set out in this paper.

Share and Cite:

Liu, K., Wu, Y., Luo, D., Zhang, J. and Zhang, W. (2024) Research on Biometric Identification Method of Nuclear Cold Source Disaster Based on Deep Learning. Journal of Computer and Communications, 12, 162-176. doi: 10.4236/jcc.2024.121012.

1. Introduction

With the rapid development of science and technology, there are now various methods for monitoring the disaster-causing organisms of nuclear power cold sources; the three existing detection and recognition approaches are acoustic detection [1], video detection [2] and remote sensing monitoring [3]. Acoustic detection relies mainly on sonar, which acquires the underwater acoustic signal, extracts its features, and then classifies the target, thereby achieving target recognition. Sonar is an important piece of underwater detection equipment and can be divided into single-beam and multi-beam sonar. In 2011, Doehring et al. [4] [5] used dual-frequency identification sonar to detect fish migration and record fish numbers, solving the problem of counting fish in motion. Zhang et al. [6] used a high-resolution multi-beam underwater acoustic detection method to detect and identify disaster-causing organisms such as jellyfish and shrimp in the waters near a nuclear power cold source, obtained their echo characteristics, analyzed their distribution density in the sensitive sea area, and provided effective identification information.

However, sonar-based underwater recognition and detection systems also have shortcomings, such as low resolution, limited target information and sensitivity to noise. Remote sensing is a detection technology that uses sensors to perceive the electromagnetic waves reflected and radiated by a target. Yu et al. [7] used near-infrared remote sensing to identify shrimp: their method adds a new tool to the multivariate analysis of hyperspectral images for shrimp quality detection and demonstrates rapid, nondestructive detection of shrimp by combining hyperspectral imaging with deep learning.

At present, optical remote sensing mainly uses electromagnetic waves such as visible light, near infrared, short-wave infrared and thermal infrared [8] [9] [10] [11] for imaging recognition and detection, and can provide remote sensing data with high spatial, spectral and temporal resolution. However, optical remote sensing has limitations. First, it relies on solar radiation and cannot operate around the clock. Second, it is strongly affected by the weather, making real-time monitoring and identification difficult. Existing video surveillance relies mainly on cameras for target monitoring and recognition; compared with sonar and radar, the image information obtained by a camera is much richer. Video surveillance [12] [13] [14] [15] uses technologies such as signal transmission, data storage and target detection to detect, locate and classify targets throughout the day. However, the monitoring range of video surveillance is limited by the shooting distance, so targets can only be monitored and identified within a small area. In addition, current classification and recognition technology also has limitations.

Currently, recognition technologies fall mainly into two categories. The first is traditional recognition [16], which requires segmenting the target and designing a dedicated algorithm around its features. Examples include Haar features with a cascade classifier [17], where the target object in the image is detected by Haar features and then classified by a cascade of classifiers, and SIFT features with a Bag-of-Words model [18] [19], where local image features are extracted by the SIFT algorithm and the Bag-of-Words model is then used to classify the image. However, traditional recognition algorithms are designed around a single object class and do not support transfer learning. The second category is deep learning [20] [21] [22] [23], which uses artificial neural networks to learn and recognize patterns in images. Deep learning models can be trained on large data sets to accurately identify objects in images. Examples include convolutional neural networks [24] [25] [26], which extract image features through multiple layers of convolution and pooling and then classify them with fully connected layers; recurrent neural networks [27] [28] [29] [30], which recognize images by modeling them sequentially; and object detection algorithms [31] [32] such as YOLO, which recognize jellyfish and shrimp by localizing targets in the image. However, these neural network algorithms have practical disadvantages, such as long training times and gradient explosion caused by deep network stacks.

Therefore, this paper adds transposed and dilated convolution to Fast R-CNN. Dilated convolution is a convolution operation that enlarges the receptive field: by introducing holes (dilation) into the convolution kernel, the receptive field can be expanded without increasing the number of parameters or the amount of computation. For objects such as jellyfish and shrimp, which may carry both large-scale and fine-detail information, dilated convolution better captures their context and improves detection accuracy. Transposed convolution is an operation used for upsampling. In Fast R-CNN, transposed convolution can upsample a low-resolution feature map back toward the size of the input image for more fine-grained target localization and detection. For small targets such as jellyfish and shrimp, transposed convolution increases the resolution of the feature maps and improves detection accuracy. By adding dilated and transposed convolution, the perception and localization abilities of the Fast R-CNN model for objects such as jellyfish and shrimp are enhanced, improving the accuracy and effect of target detection.
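To make these two operations concrete, the following PyTorch sketch (a minimal illustration, not the authors' actual network; all layer sizes are assumptions) shows how a dilated convolution enlarges the receptive field without adding parameters, and how a transposed convolution upsamples a feature map:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 64, 32, 32)  # a hypothetical 64-channel 32x32 feature map

# 3x3 kernel with dilation=2: the receptive field grows to 5x5,
# but the parameter count stays that of a plain 3x3 kernel
dilated = nn.Conv2d(64, 64, kernel_size=3, padding=2, dilation=2)
print(dilated(x).shape)    # torch.Size([1, 64, 32, 32])

# transposed convolution with stride 2 doubles the spatial resolution
upsample = nn.ConvTranspose2d(64, 64, kernel_size=2, stride=2)
print(upsample(x).shape)   # torch.Size([1, 64, 64, 64])
```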

2. Data Set Construction and Image Enhancement

2.1. Establishment of Dataset

At present, there are few public data sets of the organisms that cause nuclear cold-source disasters, and the number of species they cover is small. Therefore, this paper established its own data set of two such disaster-causing organisms, shrimp and jellyfish. Some of the images in the data set were obtained through the open data website of the Third Ocean Institute, and the rest were collected by the authors. There are 2301 images from public data sets and 4205 self-collected images, for a total of 6506 images (2304 of shrimp and 4202 of jellyfish). Figure 1 shows samples of the disaster-causing organisms; Table 1 lists the quantity of each disaster-causing organism in the nuclear power cold source data set:

In Figure 1, the image on the left is a jellyfish sample and the image on the right is a shrimp sample.

To make the experimental results reliable, this paper divides the data into training and test sets at a ratio of 8:2, giving 5205 training images and 1301 test images.
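As a minimal sketch of this split (the directory layout and file names are hypothetical placeholders, not the paper's actual storage scheme):

```python
import glob
from sklearn.model_selection import train_test_split

# hypothetical layout: one folder per class
paths, labels = [], []
for label, folder in enumerate(["shrimp", "jellyfish"]):
    files = glob.glob(f"dataset/{folder}/*.jpg")
    paths += files
    labels += [label] * len(files)

# 8:2 split, stratified so both classes keep the same ratio in each subset
train_x, test_x, train_y, test_y = train_test_split(
    paths, labels, test_size=0.2, stratify=labels, random_state=42)
print(len(train_x), len(test_x))  # roughly 5205 / 1301 for 6506 images
```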

Figure 1. Nuclear power cold source disaster biological samples.

Table 1. Nuclear power cold source disaster biological data set.

2.2. Image Enhancement of Disaster-Causing Organisms

The disaster-causing organisms live near the water intake of the nuclear power plant, which overlaps with the range of human activity, so the water contains a great deal of household garbage as well as sewage discharged by nearby factories; images of shrimp and jellyfish are therefore disturbed by noise. Because these organisms are small and gather in large groups, they are difficult to distinguish. The color of a jellyfish's body is affected by the seawater, so its body features are not obvious, and when a jellyfish moves, its umbrella changes shape, which complicates feature extraction during recognition. This paper therefore applies image enhancement to improve image quality and reduce image noise. Three methods were tried: Gamma correction, unsharp masking and shadow removal [33] [34] [35]. After comparing the processing effects of these algorithms, the one with the best effect was selected as the image enhancement method for this paper. The results of the enhancement processing are shown in Figure 2.
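As an illustration, a minimal OpenCV sketch of two of the candidate enhancements is given below; the gamma value and sharpening weights are assumptions for demonstration, not the parameters used in the paper:

```python
import cv2
import numpy as np

img = cv2.imread("jellyfish.jpg")  # hypothetical sample image

# Gamma correction via a lookup table: out = 255 * (in / 255) ** (1 / gamma)
gamma = 1.5
lut = np.array([255 * (i / 255.0) ** (1.0 / gamma) for i in range(256)],
               dtype=np.uint8)
gamma_img = cv2.LUT(img, lut)

# Unsharp masking: add back the difference between the image and its blur
blur = cv2.GaussianBlur(img, (0, 0), sigmaX=3)
unsharp_img = cv2.addWeighted(img, 1.5, blur, -0.5, 0)
```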

To evaluate the results of image enhancement, image clarity (sharpness) is introduced as an evaluation index, measured here mainly on the jellyfish images. Table 2 shows the image-quality evaluation index after each processing method.

Table 2 lists the clarity values of the image-quality evaluation. According to the data, the clarity after Gamma correction is 6.774 px. Therefore, Gamma correction was selected as the enhancement method for the disaster-organism images in the data set constructed in this paper.
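The paper does not state which clarity measure was computed; the variance of the Laplacian is one common sharpness metric and is shown here only as an assumed example:

```python
import cv2

def sharpness(path: str) -> float:
    """Higher Laplacian variance indicates a sharper image."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

print(sharpness("jellyfish_gamma.jpg"))  # hypothetical enhanced image
```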

3. The Principle and Improvement of Fast-R-CNN

In convolutional neural networks, Fast R-CNN is a deep learning model for image processing and recognition. The number of layers and the number of iterations of the model affect feature extraction and the recognition rate. In general, increasing the number of layers and iterations can improve the recognition rate, but it also increases training time and the consumption of computing resources. Deep convolutional networks have large receptive fields when processing images, that is, they can capture feature information over a wide range. However, when the convolution structure contains multiple pooling operations, detailed features below a certain pixel size may no longer be accurately extracted. This can cause small creatures in the image, such as the disaster-causing shrimp and jellyfish, to go unrecognized. Therefore, when designing a convolutional neural network, the number of layers and iterations must be weighed against actual requirements, paying attention to the influence of the number and stride of pooling operations on feature extraction.

Figure 2. Image enhancement contrast.

Table 2. Image quality evaluation index.
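The following small helper (with illustrative layer settings, not the paper's network) shows how the receptive field grows as convolution and pooling layers stack, which is why features only a few pixels wide can be diluted after repeated pooling:

```python
def receptive_field(layers):
    """layers: (kernel, stride) pairs, applied in order."""
    rf, jump = 1, 1
    for kernel, stride in layers:
        rf += (kernel - 1) * jump  # each layer widens the field by (k-1)*jump
        jump *= stride             # strides compound across layers
    return rf

# e.g. three 3x3 convolutions, each followed by 2x2 max pooling
print(receptive_field([(3, 1), (2, 2), (3, 1), (2, 2), (3, 1), (2, 2)]))  # 22
```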

Figure 3. Fast-R-CNN small target recognition error.

As shown in Figure 3, failure to recognize small targets is a common problem in image recognition. There are generally two ways to address it. One is to adjust the size of the convolution kernel, enlarging the receptive field while keeping the number of network layers unchanged; this increases the kernel's perceptual range over detail features in the image and thus improves the recognition rate of small targets. The other is to replace a single large convolution kernel with multiple stacked small kernels; this increases the nonlinear expressive power of the network, improves the extraction and recognition of complex features, and further raises the recognition rate of small targets. In practice, the suitable method can be chosen for the specific situation, or the two methods can be combined.
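A short PyTorch sketch of the second method (channel counts are assumptions): two stacked 3x3 convolutions cover the same 5x5 receptive field as a single 5x5 kernel, with fewer parameters and an extra nonlinearity between them:

```python
import torch.nn as nn

single_5x5 = nn.Conv2d(64, 64, kernel_size=5, padding=2)
stacked_3x3 = nn.Sequential(
    nn.Conv2d(64, 64, kernel_size=3, padding=1),
    nn.ReLU(inplace=True),  # extra nonlinearity between the two kernels
    nn.Conv2d(64, 64, kernel_size=3, padding=1),
)

count = lambda m: sum(p.numel() for p in m.parameters())
print(count(single_5x5), count(stacked_3x3))  # 102464 vs 73856
```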

3.1. Improved Fast-R-CNN Algorithm Principle

This paper introduces an algorithm based on transposed convolution, which can enlarge feature maps without changing the actual size of the convolution kernel. Specifically, transposed convolution is a special kind of convolution implemented by up-sampling the input (i.e., enlarging the pixel grid) and then applying an ordinary convolution operation. Compared with interpolation, transposed convolution achieves a better upsampling effect because its parameters are learned. Transposed convolution is mainly used to restore the size of feature maps, for example to restore image size in segmentation and related fields. Notably, transposed convolution does not add computation to the convolution operation itself (Figure 4).

Transposed convolution can be simply understood as three steps: filling, rotating and convolution.
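A small NumPy demonstration of these three steps under assumed sizes: zeros are interleaved according to the stride ("filling"), the kernel is rotated 180 degrees ("rotating"), and an ordinary convolution is applied ("convolution"); the result matches PyTorch's conv_transpose2d:

```python
import numpy as np
import torch
import torch.nn.functional as F

X, K, s, k = np.random.rand(3, 3), np.random.rand(3, 3), 2, 3

up = np.zeros((s * (X.shape[0] - 1) + 1, s * (X.shape[1] - 1) + 1))
up[::s, ::s] = X            # step 1: fill with zeros according to the stride
up = np.pad(up, k - 1)      # ... and pad the border by k-1
Kr = np.rot90(K, 2)         # step 2: rotate the kernel by 180 degrees

out = np.zeros((up.shape[0] - k + 1, up.shape[1] - k + 1))
for i in range(out.shape[0]):      # step 3: ordinary (valid) convolution
    for j in range(out.shape[1]):
        out[i, j] = (up[i:i + k, j:j + k] * Kr).sum()

ref = F.conv_transpose2d(torch.tensor(X)[None, None],
                         torch.tensor(K)[None, None], stride=s)[0, 0].numpy()
assert np.allclose(out, ref)
```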

Taking a two-dimensional image as an example, let S denote the stride, P the padding and k the kernel size; the transposed convolution is computed as:

$$Y_{i,j} = \sum_{p=0}^{k-1} \sum_{q=0}^{k-1} K_{p,q} \, X_{(i + p \times S - P),\,(j + q \times S - P)} \quad (1)$$

Here the input feature map is X, the convolution kernel is K, the stride is S, the padding is P, and the output feature map is Y; i and j denote the row and column coordinates of the output feature map Y. At each output pixel position (i, j), the output value is obtained by a weighted sum over a region of the input feature map X.

The improved algorithm proposed in this paper is shown in Figure 5.

This paper presents an improved method to raise the recognition rate of small targets. In this method, the subsampling operation in the convolutional module is replaced by a transposed convolution with stride 1 and padding $k - 1$, and the semantic information in deep features is used to improve small-target recognition. The improved algorithm introduces transposed convolution into the shared convolution layer to generate the image's feature map, which is then passed to the candidate region generation network and the ROI pooling layer. The candidate region generation network produces candidate regions and region scores, while the ROI pooling layer combines the feature map with the candidate-region information to extract candidate-region feature maps. Finally, each candidate region's feature map is sent to the fully connected and classification layers to compute its category and output the classification accuracy; at the same time, bounding-box regression finely adjusts the candidate box positions. The advantage of this method is that it improves the recognition rate of small targets without changing the network structure or adding extra computational burden.
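A highly simplified sketch of the backbone change described above (channel counts and input size are assumptions): the subsampling step is replaced by a stride-1 transposed convolution with padding k − 1, which shrinks each spatial side only by k − 1 instead of halving it, so the shared feature map fed to the candidate region generation network and the ROI pooling layer keeps the detail needed for small targets:

```python
import torch
import torch.nn as nn

k = 3
shared_conv = nn.Sequential(
    nn.Conv2d(3, 64, k, padding=1), nn.ReLU(inplace=True),
    # stride-1 transposed convolution with padding k-1 in place of pooling
    nn.ConvTranspose2d(64, 64, k, stride=1, padding=k - 1),
    nn.ReLU(inplace=True),
)
feat = shared_conv(torch.randn(1, 3, 224, 224))
print(feat.shape)  # torch.Size([1, 64, 222, 222]) vs 112x112 after 2x2 pooling
```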

Figure 4. Transposed convolution diagram.

Figure 5. Improved algorithm diagram.

3.2. Algorithm Verification

To verify whether the improved algorithm proposed in this paper improves recognition speed and accuracy, ResNet-50, MobileNetv1, GoogleNet and VGG16 were trained on the same data set under the same training conditions for comparison. The parameter settings of each network used in the validation training are shown in Table 3.

In this paper, the learning rate of the improved algorithm and of the comparison networks is set to 0.001, a categorical (cross-entropy) loss is used, and the Adam optimizer is used to reduce the risk of gradient explosion and memory consumption. Training runs for 15 epochs.
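A hedged sketch of this training configuration; the stand-in model and the random data below are placeholders, since the paper's network and data are not public:

```python
import torch
import torch.nn as nn
import torch.optim as optim

# placeholder two-class classifier and random batches for illustration only
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 2))
train_loader = [(torch.randn(8, 3, 64, 64), torch.randint(0, 2, (8,)))] * 4

criterion = nn.CrossEntropyLoss()                     # categorical loss
optimizer = optim.Adam(model.parameters(), lr=0.001)  # Adam, lr = 0.001

for epoch in range(15):                               # 15 training epochs
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```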

4. Analysis of Experimental Identification Accuracy

Figure 6 shows the identification accuracy (test set) for the nuclear cold source disaster-causing organisms achieved by the trained networks.

Among these five networks, the recognition rates of ResNet-50, MobileNetv1, GoogleNet and VGG16 are 88.54%, 87.79%, 85.49% and 86.26%, respectively, while the recognition rate of the improved algorithm in this paper is 95.29%. Notably, on the same data set and with the same number of training epochs, the improved algorithm exceeds ResNet-50, MobileNetv1, GoogleNet and VGG16 by 6.75%, 7.5%, 9.8% and 9.03%, respectively, showing better performance. These results indicate that the improved algorithm proposed in this paper has significant advantages in small-target recognition.

As can be seen from Table 4, the total training time (15 epochs) of the improved algorithm in this paper is 32 s, the best value among the networks compared. Thus, the improved algorithm proposed in this paper can effectively improve the speed of identifying nuclear cold source disaster-causing organisms.

In terms of the loss function, the converged values for ResNet-50, MobileNetv1, GoogleNet and VGG16 are 0.4362, 0.4461, 0.4381 and 0.6174, respectively, while the loss of the network in this paper converges to 0.0352. Its accuracy is also the highest in actual target detection, indicating that the improved network predicts better than the others. Only the loss function curves of the network in this paper are given here, as shown in Figure 7.

Table 3. Parameter settings of each network.

Figure 6. Recognition accuracy rate of each model.

Table 4. Training time for each model (15 training sessions).

Figure 7. Loss function curves of the training and test sets for the algorithm in this paper.

4.1. Analysis of Identification Effect

The recognition effect of the five algorithms is shown in Figure 8.

In Figure 8, each network shows some problems in recognizing the organisms. When the shrimp are densely clustered together, networks such as ResNet, GoogleNet, MobileNet and VGG16 make misjudgments. These networks also fail to accurately identify smaller targets because of the varied morphology and color of the jellyfish. The improved algorithm proposed in this paper effectively raises the accuracy of identifying the disaster-causing organisms, performs excellently in recognition, and identifies all organisms in the image. Confidence analysis shows that the confidence of the proposed algorithm is the best, so the goal of the improved algorithm proposed in this paper is achieved.

Figure 8. Recognition results of each model.

5. Conclusion

To address the difficulty and slowness of identifying the disaster-causing organisms of nuclear power cold sources, this paper established a data set covering jellyfish, shrimp and other disaster-causing organisms. At the same time, the recognition algorithm based on Faster R-CNN was improved by introducing transposed and dilated convolution, raising the detection accuracy for disaster-causing organisms and the ability to detect small individuals while maintaining a certain recognition speed. By introducing transposed convolution into the convolution layer, a low-resolution feature map can be effectively upsampled to the size of the input image, improving the accuracy of object detection. This is particularly important for small organisms such as the jellyfish and shrimp in nuclear cold sources, whose size and details tend to be small: transposed convolution increases the resolution of the feature maps, making it easier for the model to capture the subtle features of these small creatures and improving recognition accuracy. In addition, dilated convolution enlarges the receptive field of the convolution kernel and captures more context. The disaster-causing organisms in nuclear power cold sources may have large sizes and complex morphological characteristics, and dilated convolution increases the model's perception of such objects and improves detection accuracy.

The evaluation results show that, compared with ResNet-50, MobileNetv1, GoogleNet and VGG16, the improved network's recognition rate is higher by 6.75%, 7.5%, 9.8% and 9.03%, respectively, reaching 95.29%. The convergence value of the loss function also shows good performance, further verifying the effectiveness of the proposed method. These results have reference value for the safety of nuclear power cold sources.

Future research will focus on expanding the nuclear power cold source disaster organism data set and improving the algorithm's accuracy on occluded targets. Collecting more sample data can further improve the generalization ability and robustness of the model. At the same time, given the occlusions that may occur in the nuclear power cold source environment, we will work on improving the algorithm's ability to accurately identify occluded targets, so as to further raise the safety level of nuclear power cold sources.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Zeng, L., Chen, G., Wang, T., et al. (2021) Acoustic Study on the Outbreak of Creseise acicula nearby the Daya Bay Nuclear Power Plant Base during the Summer of 2020. Marine Pollution Bulletin, 165, Article ID: 112144.
https://doi.org/10.1016/j.marpolbul.2021.112144
[2] Kang, K., Ouyang, W., Li, H., et al. (2016) Object Detection from Video Tubelets with Convolutional Neural Networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, 27-30 June 2016, 817-825.
https://doi.org/10.1109/CVPR.2016.95
[3] Khelifi, L. and Mignotte, M. (2020) Deep Learning for Change Detection in Remote Sensing Images: Comprehensive Review and Meta-Analysis. IEEE Access, 8, 126385-126400.
https://doi.org/10.1109/ACCESS.2020.3008036
[4] Doehring, K., Young, R.G., Hay, J., et al. (2011) Suitability of Dual-Frequency Identification Sonar (DIDSON) to Monitor Juvenile Fish Movement at Floodgates. New Zealand Journal of Marine and Freshwater Research, 45, 413-422.
https://doi.org/10.1080/00288330.2011.571701
[5] Olivieri, M., Glegg, S.A.L. and Coulson, R.K. (1998) Measurements of Snapping Shrimp Colonies Using a Wideband Mobile Passive Sonar. The Journal of the Acoustical Society of America, 103, 41-47.
https://doi.org/10.1121/1.421732
[6] Zhang, J., Wu, Z. and An, C. (2021) Research on the Detection and Early Warning Technology of Harmful Marine Organisms in the Water Intake of Nuclear Power Plant by 3D Image Sonar. E3S Web of Conferences, 290, Article No. 03013.
https://doi.org/10.1051/e3sconf/202129003013
[7] Yu, X., Tang, L., Wu, X., et al. (2018) Nondestructive Freshness Discriminating of Shrimp Using Visible/Near-Infrared Hyperspectral Imaging Technique and Deep Learning Algorithm. Food Analytical Methods, 11, 768-780.
https://doi.org/10.1007/s12161-017-1050-8
[8] Brewin, R.J.W., Hardman-Mountford, N., Lavender, S.J., et al. (2011) An Intercomparison of Bio-Optical Techniques for Detecting Dominant Phytoplankton Size Class from Satellite Remote Sensing. Remote Sensing of Environment, 115, 325-339.
https://doi.org/10.1016/j.rse.2010.09.004
[9] Wang, M. (2007) Remote Sensing of the Ocean Contributions from Ultraviolet to Near-Infrared Using the Shortwave Infrared Bands: Simulations. Applied Optics, 46, 1535-1547.
https://doi.org/10.1364/AO.46.001535
[10] Kay, S., Hedley, J.D. and Lavender, S. (2009) Sun Glint Correction of High and Low Spatial Resolution Images of Aquatic Scenes: A Review of Methods for Visible and Near-Infrared Wavelengths. Remote Sensing, 1, 697-730.
https://doi.org/10.3390/rs1040697
[11] Des Marais, D.J., Harwit, M.O., Jucks, K.W., et al. (2002) Remote Sensing of Planetary Properties and Biosignatures on Extrasolar Terrestrial Planets. Astrobiology, 2, 153-181.
https://doi.org/10.1089/15311070260192246
[12] Kavanaugh, M.T., Bell, T., Catlett, D., et al. (2021) Satellite Remote Sensing and the Marine Biodiversity Observation Network. Oceanography, 34, 62-79.
https://doi.org/10.5670/oceanog.2021.215
[13] Basu, B., Sannigrahi, S., Sarkar Basu, A., et al. (2021) Development of Novel Classification Algorithms for Detection of Floating Plastic Debris in Coastal Waterbodies Using Multispectral Sentinel-2 Remote Sensing Imagery. Remote Sensing, 13, Article No. 1598.
https://doi.org/10.3390/rs13081598
[14] Shortis, M.R. and Harvey, E.S. (1998) Design and Calibration of an Underwater Stereo-Video System for the Monitoring of Marine Fauna Populations. International Archives of Photogrammetry and Remote Sensing, 32, 792-799.
[15] Demir, H.S., Christen, J.B. and Ozev, S. (2020) Energy-Efficient Image Recognition System for Marine Life. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 39, 3458-3466.
https://doi.org/10.1109/TCAD.2020.3012745
[16] Harvey, E.S. and Shortis, M.R. (1998) Calibration Stability of an Underwater Stereo-Video System: Implications for Measurement Accuracy and Precision. Marine Technology Society Journal, 32, 3-17.
[17] Zhao, X.Q., Peng, H.M. and Gao, Q. (2017) A Video Image Detection Method for Fish Body Motion Characteristics. Journal of Xi’an University of Posts and Telecommunications, 22, 38-43.
[18] Lai, Y. (2019) A Comparison of Traditional Machine Learning and Deep Learning in Image Recognition. Journal of Physics: Conference Series, 1314, Article ID: 012148.
https://doi.org/10.1088/1742-6596/1314/1/012148
[19] Sharifara, A., Rahim, M.S.M. and Anisi, Y. (2014) A General Review of Human Face Detection Including a Study of Neural Networks and Haar Feature-Based Cascade Classifier in Face Detection. 2014 IEEE International Symposium on Biometrics and Security Technologies (ISBAST), Kuala Lumpur, 26-27 August 2014, 73-78.
https://doi.org/10.1109/ISBAST.2014.7013097
[20] Gao, H., Dou, L., Chen, W., et al. (2013) Image Classification with Bag-of-Words Model Based on Improved Sift Algorithm. 2013 IEEE 9th Asian Control Conference (ASCC), Istanbul, 23-26 June 2013, 1-6.
https://doi.org/10.1109/ASCC.2013.6606268
[21] Li, W.S. and Peng, D. (2013) Object Recognition Based on the Region of Interest and Optical Bag of Words Model. Proceedings of the 5th International Conference on Internet Multimedia Computing and Service, Huangshan, 17-19 August 2013, 394-398.
https://doi.org/10.1145/2499788.2499873
[22] LeCun, Y., Bengio, Y. and Hinton, G. (2015) Deep Learning. Nature, 521, 436-444.
https://doi.org/10.1038/nature14539
[23] Deng, L. and Yu, D. (2014) Deep Learning: Methods and Applications. Foundations and Trends® in Signal Processing, 7, 197-387.
https://doi.org/10.1561/2000000039
[24] Heaton, J.T. (2018) Ian Goodfellow, Yoshua Bengio, and Aaron Courville: Deep Learning: The MIT Press, 2016, 800 pp, ISBN: 0262035618. Genetic Programming and Evolvable Machines, 19, 305-307.
https://doi.org/10.1007/s10710-017-9314-z
[25] Wu, M. and Chen, L. (2015) Image Recognition Based on Deep Learning. 2015 IEEE Chinese Automation Congress (CAC), Wuhan, 27-29 November 2015, 542-546.
[26] Fang, W., Zhang, F., Sheng, V.S., et al. (2018) A Method for Improving CNN-Based Image Recognition Using DCGAN. Computers, Materials & Continua, 57, 167-178.
https://doi.org/10.32604/cmc.2018.02356
[27] Hijazi, S., Kumar, R. and Rowen, C. (2015) Using Convolutional Neural Networks for Image Recognition. Cadence Design Systems Inc., San Jose.
[28] Chauhan, R., Ghanshala, K.K. and Joshi, R.C. (2018) Convolutional Neural Network (CNN) for Image Detection and Recognition. 2018 IEEE 1st International Conference on Secure Cyber Computing and Communication (ICSCCC), Jalandhar, 15-17 December 2018, 278-282.
https://doi.org/10.1109/ICSCCC.2018.8703316
[29] Xiong, J., Yu, D., Liu, S., et al. (2021) A Review of Plant Phenotypic Image Recognition Technology Based on Deep Learning. Electronics, 10, 81.
https://doi.org/10.3390/electronics10010081
[30] Dong, Y. and Liang, G. (2019) Research and Discussion on Image Recognition and Classification Algorithm Based on Deep Learning. 2019 IEEE International Conference on Machine Learning, Big Data and Business Intelligence (MLBDBI), Taiyuan, 8-10 November 2019, 274-278.
https://doi.org/10.1109/MLBDBI48998.2019.00061
[31] Boddapati, V., Petef, A., Rasmusson, J., et al. (2017) Classifying Environmental Sounds Using Image Recognition Networks. Procedia Computer Science, 112, 2048-2056.
https://doi.org/10.1016/j.procs.2017.08.250
[32] Kim, J.H., Kim, N., Park, Y.W., et al. (2022) Object Detection and Classification Based on YOLO-V5 with Improved Maritime Dataset. Journal of Marine Science and Engineering, 10, 377.
https://doi.org/10.3390/jmse10030377
[33] Singnoo, J. and Finlayson, G.D. (2010) Understanding the Gamma Adjustment of Images. 18th Color and Imaging Conference, San Antonio, 8-12 November 2010, 134-139.
https://doi.org/10.2352/CIC.2010.18.1.art00024
[34] Zhang, X. and Zhang, C. (2007) Satellite Cloud Image De-Noising and Enhancement by Fuzzy Wavelet Neural Network and Genetic Algorithm in Curvelet Domain. In: International Conference on Life System Modeling and Simulation, Springer, Berlin, 389-395.
https://doi.org/10.1007/978-3-540-74769-7_42
[35] Guo, R., Dai, Q. and Hoiem, D. (2012) Paired Regions for Shadow Detection and Removal. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35, 2956-2967.
https://doi.org/10.1109/TPAMI.2012.214

Copyright © 2024 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.