Vision-Based On-Road Nighttime Vehicle Detection and Tracking Using Taillight and Headlight Features

Abstract

An important and challenging aspect of developing an intelligent transportation system is the identification of vehicles at night. Most accidents occur at night owing to poor lighting conditions, so vehicle detection has become a vital research subject for ensuring safety and avoiding accidents. This paper presents a new vision-based on-road nighttime vehicle detection and tracking system that uses taillight and headlight features. Using computer vision and image processing techniques, the proposed system identifies vehicles by their taillights and headlights. For vehicle tracking, a centroid tracking algorithm is used, with the Euclidean distance method measuring the distances between neighboring objects to track the nearest neighbor. The proposed system uses two flexible fixed Regions of Interest (ROIs), the Headlight ROI and the Taillight ROI, which adapt to different image and video resolutions. A key contribution of this work is that the two ROIs operate simultaneously in a frame to identify oncoming and preceding vehicles at night. Segmentation techniques and a double-thresholding method extract the red and white components from the scene to identify vehicle headlights and taillights. Two types of datasets were used to evaluate the capability of the proposed process. Experimental findings indicate that the proposed technique is reliable and effective for detecting and tracking vehicles in distinct nighttime environments. The proposed method detects and tracks double lights as well as single lights, such as motorcycle lights, and achieves an average detection accuracy of about 97.22% with an average processing time of about 0.01 s per frame.


1. Introduction

The number of vehicles on the road has risen rapidly in recent years, and road accidents are escalating day by day [1] - [4]. According to data analysis, the majority of accidents occur at night, because the street environment differs considerably between daytime and nighttime. During the day, drivers have plenty of visual information, which they routinely use to assist in road guidance. At night, drivers' primary guidance comes from the roadway and vehicle lights. Many rural highways are rarely lit at night, even at critical locations such as railroad crossings, narrow or long bridges, tunnels, sharp curves, and roadside areas where accidents happen most frequently. Nighttime vehicle detection and tracking have therefore always been a difficult and challenging task owing to inadequate luminosity and lower contrast compared with daytime detection. It has been shown that the street environment at night is much more complex than during the day. At night, the whole body of a vehicle is poorly visible because of the lack of light, and this is most probably the main cause of road accidents [5]. Perception at night is a massive concern for safe driving: in dark conditions, a vehicle is visible mainly by its brake lights/taillights and headlights [6]. Moreover, reckless drivers keep using high beams even when another vehicle is oncoming [7] [8]. Vehicle headlamps are designed to throw both low beams and high beams of light: the low beams are lower-intensity lights used for night travel, whereas the high beams throw high-intensity light [8] [9]. When an oncoming vehicle throws high-intensity light at the opposite vehicle, that driver faces glare for some time, which can cause momentary eye closure or temporary blindness. As a consequence, the majority of accidents take place at night. For the deployment of safety features in vehicles, therefore, vehicle detection at nighttime is of considerable significance [10].

Detection of vehicles is a stimulating, ongoing, and active field of investigation in automated driving systems (ADS), advanced driver assistance systems (ADAS), and intelligent transportation systems (ITS). ADAS has received substantial attention in recent years because many accidents are caused primarily by drivers' drowsiness or lack of awareness. The identification of vehicles ahead using computer vision techniques is therefore a key aspect of ADAS and has been a common research focus over the last few decades [11] [12]. Such systems extract worthwhile traffic information for purposes such as vehicle tracking, vehicle counting, vehicle trajectory, vehicle classification, vehicle speed, vehicle flow, and license plate recognition [13] [14].

To decrease accidents at night, detection of vehicles is important, yet owing to the illumination conditions it has always been a difficult problem at nighttime. In dark conditions, vehicles are visible by their most reliable features: brake lights/taillights and headlights [15] [16]. Figure 1 shows the visibility of vehicles at night.

The proposed nighttime vehicle detection methodology outperforms other related works in several ways. It is a combined mechanism using the most reliable night features, headlights and taillights. Two flexible fixed ROIs are used in an effective way; they work in a frame simultaneously to identify oncoming and preceding vehicles at night. Only the ROIs of the frame are processed and the rest is omitted; most importantly, all nighttime vehicle detection techniques are applied inside these ROIs. A double-threshold technique is used to extract the vehicles' features at night. Previous studies suggested various algorithms for vehicle tracking, such as the pairing algorithm, Kalman filter, CAMShift algorithm, feature matching algorithm, and Gaussian mixture model. In the proposed system, the centroid tracking algorithm, which is fundamentally an object tracking algorithm, is used for vehicle tracking. With this method, both double-light and single-light vehicles can be identified.

The remainder of the paper is structured as follows. Related work on nighttime vehicle detection is addressed in Section 2. The proposed method of nighttime vehicle detection and tracking using taillight and headlight features is demonstrated in Section 3. The background of the proposed method is discussed in Section 4. Section 5 presents the experimental results and discussion. Finally, conclusions are given in Section 6.

2. Related Works

Nighttime vehicle detection is a broad field of study, crucial to driver assistance systems and traffic management systems. Several scholars have presented their research, and many are still working on it. Some implementations incorporate driver assistance systems using non-vision-based approaches [18] [19], while others utilize vision-based methods for traffic surveillance and driver assistance systems [20] [21]. Headlights and taillights have been widely used for vehicle identification at night. This Section presents the related work on

Figure 1. Visibility appearance at night [17].

the methods of nighttime vehicle detection suggested by researchers and the associated systems developed by vehicle manufacturers based on headlights and taillights. Many strategies are available to identify and distinguish vehicles at nighttime based on their headlights and taillights, such as feature-based and segmentation/threshold-based approaches.

Pham, T. A. et al. [22] implemented a vehicle detection and tracking system using vehicle lights (headlights and taillights) with occlusion handling at night. Their experimental findings showed occlusion-handling accuracy of 94.64% for headlights and 98.16% for taillights. They claimed that their proposed method performed effectively, but some limitations were found in their work: 1) motorcycles are not considered in their study; 2) the proposed approach performs well under partial vehicle occlusion but cannot overcome full occlusion; 3) false detections sometimes occur due to false pairing, especially in heavy traffic.

A system based on image processing was developed by Pushkar, S. et al. [23]. They implemented a system that identifies vehicle headlights and selects an ideal beam to minimize accidents due to vision loss, claiming a detection rate of 94.84%. Study of unique headlight and taillight features is anticipated for further analysis to improve the detection rate. In the future, different applications, such as traffic monitoring, smart headlight beam control, and lane departure warning, could be combined to achieve advanced driver assistance systems.

A nighttime vehicle detection system was developed by Bogacki, P. et al. [15]. The researchers proposed a new Binary Blob Shape (BBS) feature, and their system is built on convolutional neural networks (CNNs). They reported an average accuracy higher than 93%, with the BBS feature improving classification accuracy by about 1%.

Vu, T. A. et al. [24] suggested a system for vehicle identification and recognition at night. The strategy consists of headlight segmentation, headlight identification, headlight tracking, headlight pairing, and vehicle classification. The headlights are paired using a trajectory tracing technique, and the reported detection rate in nighttime scenes is 81.19%. Future research could be expanded to deal with various vehicle categories, such as small (motorcycles, bikes, tricycles), medium (cars, sedans, SUVs), and heavy vehicles (trucks, buses).

Gao, F. et al. [25] introduced a multi-lane nighttime vehicle detection approach based on saliency detection for traffic surveillance systems and reported an accuracy of more than 80%. Their system has a minor flaw with buses: the gap between the two taillights of a bus is wider than that of other vehicles. They suggested that efficient techniques could solve this problem in the future.

A nighttime vehicle detection system and a feature aggregation method were proposed by Mo, Y. et al. [26] to integrate multi-scale highlight features from the MSCNN [27] mechanism with the visual features of the vehicle, taking advantage of the location of the vehicle highlights. Future work will be directed toward accelerating the algorithms and simplifying the models; in addition, the technique can be applied to other camera scenes and to nighttime road anomaly detection.

Muslu, G. et al. introduced an algorithm for nighttime vehicle taillight detection in [28]. The approach fuses the Haar cascade classifier [29] [30] [32], used for vehicle identification and vehicle taillight (rear-view) detection, with rule-based image processing [31]. The experimental outcome shows that their method uses a single classifier to detect vehicle taillights with greater accuracy than other algorithms, and they stated that it processes an image in 7 ms.

In [33], a background subtraction method for nighttime vehicle detection in the urban traffic environment was proposed by Kumar, U. P. et al. They claimed that their system's accuracy is 96.5% at nighttime and 96% at daytime, with execution times of 10.79 s and 9.14 s respectively.

In [34], Zou, Q. et al. introduced a set packing (SP) system for the identification and tracking of vehicles at night with joint headlight pairing. From their experiments, they reported the following results: for the urban scene, multiple object tracking accuracy (MOTA) of 85.0% with a false-positive rate of 3.5% and a miss rate of 10.8%; for the highway scene, MOTA of 90.1% with a false-positive rate of 1.7% and a miss rate of 7.2%; and for the rainy-night scene, MOTA of 78.6% with a false-positive rate of 10.2% and a miss rate of 9.5%. They suggested several directions for further improvement. First, handling long-term occlusion (e.g., one headlight occluded) remains a challenging issue. Second, the vehicle type is not currently considered, while models designed for various vehicle types (e.g., big trucks) would be more precise. In addition to these enhancements, the proposed SP model is being expanded to manage the detection and tracking of small groups in crowds, allowing the group size to scale to more than two or to vary dynamically.

3. Proposed Method

In the proposed approach, we have adopted an efficient vision-based on-road nighttime vehicle identification and tracking system using taillight and headlight features. The proposed method consists of two main processes: vehicle detection and tracking with taillights, and vehicle detection and tracking with headlights. We take a video sequence from the camera as input and extract only the red components for taillights and the white components for headlights using computer vision and image processing techniques. The flow diagram of the suggested method is shown in Figure 2.

Figure 2. Flowchart of the proposed method.

The proposed method can be summarized as follows. First, the camera captures the video sequence; the system takes the video sequence as input and reads each frame. Two Regions of Interest (ROIs) are then applied, one for taillights and another for headlights. The taillight ROI area is scanned and converted to HSV format, and the same procedure is applied to the headlight ROI. Next, a double threshold (an upper threshold range and a lower threshold range) is applied for taillight and headlight feature extraction (red and white), creating the Red Mask and the White Mask. Morphological operations and segmentation techniques are then applied. Finally, an appropriate bounding box is drawn for vehicle detection. For vehicle tracking, the centroid tracking algorithm is used; it measures the distances between neighboring objects with the Euclidean distance method and tracks the nearest neighbor.
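To make the flow concrete, the following Python/OpenCV sketch mirrors the flowchart in Figure 2. It is a minimal interpretation rather than the authors' exact code: the ROI placement, the HSV threshold values, and the minimum blob area are assumptions standing in for the actual parameters given later in Tables 1, 3 and 4.

```python
import cv2
import numpy as np

# Assumed HSV double-threshold ranges (stand-ins for the Table 4 values).
RED_LO1, RED_HI1 = (0, 120, 70), (10, 255, 255)
RED_LO2, RED_HI2 = (170, 120, 70), (180, 255, 255)   # red hue wraps at 0/180
WHITE_LO, WHITE_HI = (0, 0, 200), (180, 40, 255)

cap = cv2.VideoCapture("night_traffic.mp4")          # assumed input video
while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    # Two illustrative fixed ROIs scaled to the frame resolution:
    # left half for taillights (preceding), right half for headlights (oncoming).
    for name, roi in (("tail", frame[h // 2:, :w // 2]),
                      ("head", frame[h // 2:, w // 2:])):
        hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)            # Section 4.3
        if name == "tail":                                    # Sections 4.4-4.5
            mask = cv2.bitwise_or(cv2.inRange(hsv, RED_LO1, RED_HI1),
                                  cv2.inRange(hsv, RED_LO2, RED_HI2))
        else:
            mask = cv2.inRange(hsv, WHITE_LO, WHITE_HI)
        mask = cv2.dilate(mask, np.ones((5, 5), np.uint8))    # Section 4.6
        lights = cv2.bitwise_and(roi, roi, mask=mask)         # Section 4.7
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:                                    # Section 4.8
            if cv2.contourArea(c) > 30:                       # assumed noise floor
                x, y, bw, bh = cv2.boundingRect(c)
                cv2.rectangle(roi, (x, y), (x + bw, y + bh), (0, 255, 0), 2)
cap.release()
```

Each step is expanded, with its parameters, in Section 4.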

4. Background of the Proposed Method

4.1. Nighttime Vehicle Features (Headlight and Taillight)

A headlight is a light attached to the front of a vehicle to illuminate the path ahead [35] [36], and a taillight is the rear light of a vehicle, which provides a red signal visible to other vehicles [37] [38]. Headlights are most commonly referred to as headlamps, and taillights as taillamps. Figure 3 shows vehicles at night with headlights and taillights.

4.2. Region of Interest (ROI)

A Region of Interest (ROI) is a particular sub-region of an image, processed while leaving other regions unaffected. Image sub-regions can be defined using graphics primitives such as points, lines, circles, polygons, and vertex positions [40]. Our proposed model uses two ROIs, the Taillight ROI and the Headlight ROI, shown in Figure 4. When the method executes, these two ROIs work simultaneously: each simply scans its own ROI region and then performs the subsequent tasks listed in the proposed method. Figure 5 displays the two ROIs of the proposed system.

Two items are required to construct a rectangular ROI: an image and a set of vertices. A rectangular area is defined by the locations of its upper-left corner and bottom-right corner. Figure 6 shows the graphical representation of the ROI, and the ROI parameters are displayed in Table 1.
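As a concrete illustration, a rectangular ROI can be cropped with simple array slicing. In the sketch below the corner coordinates are expressed as fractions of the frame size so that, as in the proposed system, the ROIs adapt to any image or video resolution; the fractional values themselves are assumptions standing in for the Table 1 parameters.

```python
import cv2

def make_rois(frame):
    """Crop two illustrative rectangular ROIs, each defined by its
    upper-left and bottom-right corners as fractions of the frame size."""
    h, w = frame.shape[:2]
    tail_box = (0.05, 0.45, 0.50, 0.95)   # assumed taillight ROI (x1, y1, x2, y2)
    head_box = (0.50, 0.45, 0.95, 0.95)   # assumed headlight ROI
    rois = []
    for x1, y1, x2, y2 in (tail_box, head_box):
        rois.append(frame[int(y1 * h):int(y2 * h), int(x1 * w):int(x2 * w)])
    return rois   # NumPy views: drawing on a ROI also marks the original frame

tail_roi, head_roi = make_rois(cv2.imread("night_frame.jpg"))  # assumed test image
```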

Figure 3. Headlight and Taillight feature at night [39].

Figure 4. Region of Interest (ROI) [41].

Figure 5. Two Region of Interest (ROI) of the proposed system.

Figure 6. Graphical representation of ROI.

Table 1. ROI parameters.

4.3. HSV Conversion and Splitting

HSV is a color space where H represents Hue, S represents Saturation, and V represents Value. The hue denotes the color as an angle from 0 to 360 degrees; the saturation indicates the amount of grey in the color (0% to 100%); and the value (0 to 100) is the brightness of the color, which varies with the saturation. The hue, saturation, and value found by splitting the original image into HSV channels are shown in Table 2. Figure 7 displays the outcome of converting the original image to HSV format.
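A minimal conversion-and-splitting sketch follows. Note that OpenCV stores hue in [0, 179] (half of 360 degrees) and saturation and value in [0, 255] rather than percentages; the input file name is assumed.

```python
import cv2

img = cv2.imread("night_frame.jpg")           # assumed input image (BGR order)
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)    # convert BGR -> HSV
h, s, v = cv2.split(hsv)                      # split into the three channels
print("hue:", h.min(), "-", h.max())          # OpenCV hue scale: 0-179
print("saturation:", s.min(), "-", s.max())   # 0-255 instead of 0-100%
print("value:", v.min(), "-", v.max())        # 0-255 instead of 0-100
```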

4.4. Double Threshold

We apply the double threshold (upper and lower thresholds) inside the ROIs after the HSV color conversion. The double threshold works as a filter that accepts data within a defined range and eliminates data (e.g., noise) outside that range. The double threshold [42] values are specified in Table 3. The double-threshold mechanism can be summarized as follows:

• Select two thresholds T1 and T2 as in Table 4: the lower threshold range (T1) and the upper threshold range (T2).

• Any image pixel (X) that lies between the two thresholds (T1 and T2) is extracted for further processing; otherwise it is discarded.

• It can be shown easily by the following expression:

$$X > T_1 \;\&\; X < T_2$$

• The ampersand (&) acts as a logical “AND”.

4.5. Red Mask and White Mask

The HSV color space and the double threshold are used to construct the Red Mask and the White Mask, with which the taillights and the headlights are effectively detected. The red and white masks are filters through which the red and white components can be extracted. The threshold ranges for the red and white masks are shown in Table 4.
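A sketch of the mask construction: cv2.inRange applies exactly the per-channel double-threshold test of Section 4.4. The HSV ranges below are plausible placeholders for the Table 4 values; red requires two ranges OR-ed together because its hue wraps around 0.

```python
import cv2
import numpy as np

hsv = cv2.cvtColor(cv2.imread("night_frame.jpg"), cv2.COLOR_BGR2HSV)  # assumed input

# inRange keeps a pixel X only if T1 <= X <= T2 in every channel.
red_mask = cv2.bitwise_or(cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)),
                          cv2.inRange(hsv, (170, 120, 70), (180, 255, 255)))
white_mask = cv2.inRange(hsv, (0, 0, 200), (180, 40, 255))   # low S, high V

# Equivalent NumPy formulation of the same double-threshold test:
lo, hi = np.array((0, 0, 200)), np.array((180, 40, 255))
white_mask2 = (((hsv >= lo) & (hsv <= hi)).all(axis=2) * 255).astype(np.uint8)
assert (white_mask == white_mask2).all()
```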


Figure 7. (a) Original images and (b) HSV color space of the original images.

Table 2. HSV color splitting of the image.

Table 3. Double threshold.

Table 4. Threshold range.

4.6. Morphological Operation

We perform a morphological operation after creating the red mask and the white mask. The dilation operation is used in this method so that the red and white components become more prominent; Figure 8 shows the dilation results of the suggested system. These operations can also filter out noise. The kernel size is most important: it must minimize noisy data without harming objects of interest.
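A minimal dilation sketch; the 5 × 5 kernel is an assumption, and the second call illustrates the trade-off described above: a larger kernel (or more iterations) merges the light blobs more strongly but also inflates noise.

```python
import cv2
import numpy as np

mask = cv2.imread("red_mask.png", cv2.IMREAD_GRAYSCALE)   # assumed binary mask

kernel = np.ones((5, 5), np.uint8)                 # assumed kernel size
dilated = cv2.dilate(mask, kernel, iterations=1)   # grow the white blobs

# A larger kernel and more iterations grow the blobs further (and the noise):
dilated_big = cv2.dilate(mask, np.ones((9, 9), np.uint8), iterations=2)
```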

In dilation [43], the dilation of an image $I$ by a structuring element $S$ marks a location $(i, j)$ if the intersection is non-zero, which is defined as

$$I \oplus S = \{ (i, j) : S_{ij} \vee I \neq 0 \} \quad (1)$$

where $\vee$ is the logical OR operation. The dilation method can be used to create a region-filling process. Assume that image $X_0$ contains a particular pixel inside the region, and that $E$ represents the boundary of the region. Then by iterating:

$$X_{i+1} = \left[ X_i \oplus \begin{pmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{pmatrix} \right] \wedge E \quad (2)$$


Figure 8. (a) Red mask dilation and (b) White mask dilation.

where $\wedge$ represents the logical AND operation, we converge to filling the entire region.

4.7. Segmentation Techniques

Image segmentation is one of the most important methods of image processing: it is the process of splitting an image into portions called segments [44]. In the segmentation stage, we segment the taillights and headlights from the ROIs of the frame. The bitwise-AND operation is used to segment the red and white components, which is extremely useful when extracting part of an image. Figure 9 displays the segmented taillights and headlights of the proposed system, and the parameters of the bitwise-AND segmentation operation are shown in Table 5 [45].

The function computes the per-element bitwise logical AND:

$$\mathrm{dst}(I) = \mathrm{src1}(I) \wedge \mathrm{src2}(I) \quad \text{if } \mathrm{mask}(I) \neq 0 \quad (3)$$
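In OpenCV this corresponds to a single call; per Table 5, src1 and src2 are the input arrays and mask selects which elements are computed. Passing the ROI as both inputs keeps the original colors wherever the mask is non-zero and zeroes everything else. A short sketch, with the file names assumed:

```python
import cv2

roi = cv2.imread("tail_roi.png")                          # assumed ROI image
mask = cv2.imread("red_mask.png", cv2.IMREAD_GRAYSCALE)   # assumed binary mask

# dst(I) = src1(I) AND src2(I) where mask(I) != 0, per Equation (3).
segmented = cv2.bitwise_and(roi, roi, mask=mask)

# Sanity check of the semantics: the output is zero wherever the mask is zero.
assert not segmented[mask == 0].any()
```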

4.8. Rectangular Bounding Box

We use contour functions to construct a bounding box for vehicle detection. A contour is the line joining all points of equal intensity along the boundary of an object in an image [46].

• Moments are used to calculate the center of mass of the object [47].

o The centroid is extracted from the moments and is given by the relations:

$$C_x = \frac{M_{10}}{M_{00}} \quad \text{and} \quad C_y = \frac{M_{01}}{M_{00}}$$

• Contour area returns the area enclosed by a contour.

• The bounding rectangle is the straight (upright) rectangle, which takes the following parameters (a code sketch combining these functions follows the list):

o img — an RGB or grayscale image.

o y — the y-coordinate of the upper-left corner of the rectangle.

o x — the x-coordinate of the upper-left corner of the rectangle.

o h — the height of the rectangle.

o w — the width of the rectangle.
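A sketch combining the contour functions listed above: the moments give the centroid (Cx, Cy), the contour area filters out small noise blobs, and the bounding rectangle yields the (x, y, w, h) box. The minimum-area value and file names are assumptions.

```python
import cv2

mask = cv2.imread("white_mask.png", cv2.IMREAD_GRAYSCALE)   # assumed binary mask
frame = cv2.imread("night_frame.jpg")                       # assumed frame to draw on

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    if cv2.contourArea(c) < 30:                  # assumed noise threshold
        continue
    M = cv2.moments(c)                           # spatial moments of the blob
    if M["m00"] == 0:
        continue
    cx = int(M["m10"] / M["m00"])                # centroid Cx = M10 / M00
    cy = int(M["m01"] / M["m00"])                # centroid Cy = M01 / M00
    x, y, w, h = cv2.boundingRect(c)             # upright bounding rectangle
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.circle(frame, (cx, cy), 3, (0, 0, 255), -1)
```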

4.9. Object Tracking Algorithm

Object tracking is a technology that follows an object's location over time. In computer vision, centroid-based object tracking is a tracking algorithm that is simple to understand yet highly effective.


Figure 9. (a) Segmented taillights and (b) Segmented headlights.

Table 5. Parameters of the bitwise-AND operation for segmentation.

In the centroid tracking algorithm, we assume that a set of bounding boxes with (x, y) coordinates is available for each detected object in every frame; the bounding boxes must be determined for each frame of the video sequence. After bounding boxes with their (x, y) coordinates are allocated in the frame, their centroids are determined and each bounding box is identified by a unique ID [48] [49]. The rectangular bounding box defined in Section 4.8 is extended here to the computation of object centroids. The centroid locations are stored in a list, and the distances between them are calculated with the Euclidean distance method. The velocity of the object's movement from frame to frame is measured by taking this distance and the frame rate of the captured video sequence as input. Figure 10 shows the Euclidean distance measurement and Figure 11 shows a sample bounding box with its centroid.

The Euclidean distance formula is

$$d = \sqrt{(X_2 - X_1)^2 + (Y_2 - Y_1)^2} \quad (4)$$

where,

X1 = previous pixel location along the width;

X2 = current pixel location along the width;

Y1 = previous pixel location along the height;

Y2 = current pixel location along the height.

The centroid tracking algorithm is a multi-step technique, which we briefly discuss below; a minimal code sketch follows the steps.

• Step 1: Take in bounding box coordinates and compute centroids.

· Every frame requires the bounding box (x, y) coordinates for each detected object.

· Calculate the centroid from the center (x, y) coordinates of the bounding box.

Figure 10. Euclidean distance measurement [50].

Figure 11. Sample of the bounding box with centroid [51].

• Step 2: Calculate the Euclidean distance between new bounding boxes and existing objects.

• Step 3: Update the (x, y) coordinates of existing objects.

· If the distance between the centroids in subsequent frames Ft and Ft+1 is smaller than all other inter-object distances, the object track is updated automatically.

• Step 4: Register new objects.

· Assign each a new object ID.

· Store the centroid of its bounding box coordinates.

• Step 5: Deregister old objects.

· When an object goes missing or leaves the field of view, the tracker deregisters it.
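The five steps can be condensed into a small class. The sketch below is a minimal interpretation rather than the authors' exact implementation: the greedy nearest-neighbour matching and the disappearance limit (max_missing) are assumptions.

```python
import numpy as np

class CentroidTracker:
    """Minimal centroid tracker: nearest-neighbour matching by the
    Euclidean distance of Equation (4), with register/deregister logic."""

    def __init__(self, max_missing=10):       # assumed disappearance limit
        self.next_id = 0
        self.objects = {}                     # object ID -> centroid (x, y)
        self.missing = {}                     # object ID -> missed-frame count
        self.max_missing = max_missing

    def _register(self, centroid):            # Step 4: new object, new ID
        self.objects[self.next_id] = tuple(centroid)
        self.missing[self.next_id] = 0
        self.next_id += 1

    def _mark_missing(self, oid):             # Step 5: deregister when gone too long
        self.missing[oid] += 1
        if self.missing[oid] > self.max_missing:
            del self.objects[oid]
            del self.missing[oid]

    def update(self, centroids):
        """centroids: list of (x, y) box centers detected in the current frame."""
        if not centroids:
            for oid in list(self.missing):
                self._mark_missing(oid)
            return self.objects
        if not self.objects:
            for c in centroids:
                self._register(c)
            return self.objects
        ids = list(self.objects)
        old = np.array([self.objects[i] for i in ids], dtype=float)
        new = np.array(centroids, dtype=float)
        # Step 2: all pairwise distances d = sqrt((X2-X1)^2 + (Y2-Y1)^2).
        dist = np.linalg.norm(old[:, None, :] - new[None, :, :], axis=2)
        used_rows, used_cols = set(), set()
        # Step 3: match the globally smallest distances first (greedy).
        for r, c in zip(*np.unravel_index(np.argsort(dist, axis=None), dist.shape)):
            if r in used_rows or c in used_cols:
                continue
            self.objects[ids[r]] = tuple(centroids[c])
            self.missing[ids[r]] = 0
            used_rows.add(r)
            used_cols.add(c)
        for r, oid in enumerate(ids):          # old objects left unmatched
            if r not in used_rows:
                self._mark_missing(oid)
        for c, cen in enumerate(centroids):    # new detections left unmatched
            if c not in used_cols:
                self._register(cen)
        return self.objects
```

On each frame, update() would be fed the centroids computed as in Section 4.8, e.g. tracker = CentroidTracker(); objects = tracker.update([(120, 340), (480, 352)]), and returns the current mapping of object IDs to centroid positions.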

5. Experimental Results and Discussion

This Section presents the experimental outcomes of our proposed system with detailed discussions. First, the system configuration details and simulation results are provided. Then the results on nighttime vehicle detection and vehicle tracking are presented and discussed. Finally, the results of the proposed system are compared with a few existing approaches to justify its effectiveness.

5.1. Environments and Datasets

For the implementation and verification of the system, we used the four system configurations shown in Table 6. Two types of datasets were used to evaluate performance: the NiTra (Nighttime Traffic) dataset [34] and the NVDD (Nighttime Vehicle Detection Dataset), which we created ourselves. The NiTra dataset comprises three categories of nighttime traffic scenery: urban, highway, and rainy night. We used two of these categories, the urban and the highway scenes. The urban subgroup includes two sequences characterized by vehicles traveling at constantly changing speeds with pairs of moving reflections, while in the highway sequences the major obstacles are glares caused by streetlamps. The NiTra dataset is described in Table 7. The NVDD dataset contains still images (about 2779 images for testing) and videos (six sequences with different frame rates) of oncoming and preceding vehicles. We used different resolutions for the images and videos, with frame rates of 24 fps, 25 fps, and 29 fps. These data were collected from highway and urban areas, and some were collected from the web. As shown in Table 8, a total of 6756 frames were used as the test set.

The study was conducted using actual recorded road scenes and public datasets to assess whether the proposed approach could be effectively implemented. The video sequences used for the tests include typical urban and rural road scenes, highway scenes, complex road scenes, and foggy weather conditions.

5.2. Simulation Results

In this research, vehicles are detected only if they satisfy the proposed vehicle detection conditions. We use versatile fixed ROIs, which automatically adapt to different frame resolutions, and operate only on the ROI portions of the frame, avoiding the rest.

Table 6. System configurations.

Table 7. Details of NiTra dataset.

Table 8. Details of NVDD dataset.

This decreases the processing time and makes the proposed system more reliable and effective. Vehicles are identified only if they are present inside the ROIs. The experimental results demonstrate, across various scenarios and circumstances, that the proposed scheme can successfully spot vehicles by their headlights and taillights at night. Table 9 shows the vehicle detection results of the proposed system on the NVDD dataset.

Figure 12 shows vehicle detection results with a single light, demonstrating that the proposed system can detect both double-light and single-light vehicles. The system is thus expected to be capable of detecting all forms of vehicles.

We used the NiTra dataset to assess the effectiveness of the proposed scheme. Three video sequences from two subgroups of the NiTra dataset were tested; the results are shown in Table 10.

5.3. Detection Results

The most important performance parameters for the proposed method are accuracy, false positives per image, and miss rate (vehicles not correctly detected). Accuracy is measured within the two ROIs as the ratio of correctly identified vehicles to the overall number of vehicles and can be expressed as:

$$\text{Accuracy}\,(\%) = \frac{\text{Correctly detected vehicles}}{\text{Total no. of vehicles}} \times 100 \quad (5)$$

The false positives (FP) are measured as the ratio of the cumulative false detections (FD) collected on the assessment dataset to the total number of samples. Mathematically expressed as:

Table 9. Results of the proposed system using NVDD dataset.

Table 10. Results of the proposed system using NiTra dataset.


Figure 12. Vehicle detection results with a single light (e.g., a motorbike).

$$\text{FP} = \frac{\text{Total false detections}}{\text{Number of vehicles}} \quad (6)$$

The detection rate (DR) is measured as the ratio between the total count of vehicles correctly detected in the two ROIs and the total count of vehicles in the two ROIs. Mathematically expressed as:

$$\text{DR} = \frac{\text{Total no. of correctly detected vehicles within the two ROIs}}{\text{Total no. of vehicles within the two ROIs}} \quad (7)$$

The miss rate (MR) is calculated as the ratio of the total number of miss detections (MD) within the two ROIs to the total number of vehicles. To determine the accuracy of the proposed technique on the NVDD dataset shown in Table 8, we checked the calculations manually. The detection accuracy of the proposed system is displayed in Table 11.
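As a worked illustration of Equations (5)-(7) and the miss rate, the snippet below evaluates the metrics for one sequence; the counts are hypothetical, not taken from the paper's data.

```python
# Hypothetical counts for one test sequence (not taken from the paper).
total_vehicles = 36        # all vehicles appearing in the sequence
in_rois = 34               # vehicles appearing inside the two ROIs
detected = 33              # vehicles correctly detected inside the ROIs
false_detections = 1       # detections that are not real vehicles
missed = 1                 # vehicles inside the ROIs that were not detected

accuracy = detected / total_vehicles * 100   # Equation (5)
fp = false_detections / total_vehicles       # Equation (6)
dr = detected / in_rois                      # Equation (7)
mr = missed / total_vehicles                 # miss rate, as defined above

print(f"Accuracy {accuracy:.2f}%, FP {fp:.3f}, DR {dr:.3f}, MR {mr:.3f}")
```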

On the NVDD dataset, we obtain an average accuracy of 97.22%, a detection rate of 94.44%, 0.028 false positives per image, and a miss detection rate of 0.056 by solving Equations (5), (6), and (7), as shown in Table 11. The accuracy chart for the NVDD dataset is shown in Figure 13.

The vehicle detection accuracy results on the NiTra dataset are shown in Table 12. The three NiTra sequences listed in Table 7 were used to test the efficacy of the proposed technique. The vehicle detection rate, average accuracy, miss rate, and false positives per frame are calculated using Equations (5), (6), and (7): the average accuracy is about 83.33%, the detection rate about 92.86%, the false positives per frame 0.214, and the miss detection rate 0.714.

The proposed system has been tested on the four system platforms defined in Table 6, and the processing times of the four systems at the frame resolutions shown in Table 8 were established. From the data in Table 13, system 1 takes comparatively less time than the other three systems, while system 4 takes the most time; system 1 therefore gives much better results than the other systems.

Table 14 gives the overall vehicle detection results using the NVDD dataset and NiTra dataset.

Table 11. Accuracy table of the proposed system using NVDD dataset.

Table 12. Accuracy table of the proposed system using NiTra dataset.

Table 13. Processing time of the proposed system.

Table 14. Overall vehicle detection results.

Figure 13. Accuracy chart of the proposed system using NVDD dataset.

5.4. Vehicle Tracking Results

Vehicle tracking is simply the monitoring of vehicles in successive frames of a video [52]. Vehicle tracking works in real time and follows almost the same process as vehicle detection. The tracking results of our proposed system are shown in Figure 14. To detect and track vehicles, the proposed system mainly depends on detecting vehicle lights, and the experimental outcomes show that the proposed scheme can correctly detect and track vehicles. The tracking outcomes are displayed in Table 15, which describes how many vehicles are accurately tracked within the two ROIs along with the false positives that incorrectly indicate the presence of an object.

$$\text{Miss rate} = \frac{\text{No. of vehicles not correctly detected}}{\text{No. of vehicles within the two ROIs}} \quad (8)$$

$$\text{Tracking rate}\,(\%) = \frac{\text{No. of correctly tracked vehicles within the two ROIs}}{\text{No. of vehicles within the two ROIs}} \times 100 \quad (9)$$

False positives (FP) are computed as the ratio of the total count of false detections (FD) to the total count of vehicles within the two ROIs. From Table 15, the proposed system achieves a tracking rate (TR) of about 97.77% and an average tracking accuracy of about 98.33%; solving Equations (8) and (9), the false-positive rate of vehicle tracking is 0.022 and the miss rate (MR) is about 0.022. Figure 15 shows the vehicle tracking chart of the proposed system.

5.5. Comparison between the Proposed and Existing Systems

Table 16 compares nighttime vehicle detection by the proposed system and existing techniques, based on overall accuracy and processing time per frame. The proposed technique's processing time (Sl. No. 1) is much lower than that of the other methods (Sl. Nos. 2 to 7), and its accuracy is also comparatively high. This shows that the proposed system is efficient and operates in real time.

Table 15. Accuracy table of the proposed vehicle tracking results.

Table 16. Comparison table of the proposed system with existing systems.


Figure 14. Vehicle tracking results using two datasets (NiTra and NVDD).

Figure 15. Vehicle tracking chart of the proposed system.

6. Conclusion

Vehicle detection and tracking at night have always been challenging tasks due to poor illumination conditions. This paper proposes a new vision-based on-road nighttime vehicle detection and tracking method using taillight and headlight features. As the experimental results show, the proposed system appropriately detects vehicles by their headlights as well as their taillights in spite of the illumination conditions at night. Our proposed system is effective and robust: it achieved an average accuracy of about 97.22%, a detection rate of 94.44%, 0.028 false positives per image, and a miss detection rate of 0.056, with an average tracking accuracy of about 98.33%. Moreover, the proposed method attained an average processing time of about 0.01 s per frame, which is comparable to other very efficient existing algorithms, such as global algorithms, and meets real-time conditions. This work could be extended to include other features such as the vehicle's brake lights; more in-depth analysis and research are essential to efficiently and accurately detect all vehicles in a scene. The drawback of the proposed system, that vehicles in complex road scenes and foggy weather conditions are not adequately identified, can be addressed through comprehensive future research. No discrete graphics hardware was used in the proposed system, so GPU acceleration could be employed in the future to enhance the detection process.

Acknowledgements

We are thankful to the Department of Computer Science and Engineering, Jahangirnagar University.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Sarma, C., Gupta, A., Singh, A., Bhattacharjee, I., Mohanta, A., Tripathy, S. and Singhal, D. (2018) Limitations of Probable Vehicle Headlight Technologies—A Critical Review. IOP Conference Series: Materials Science and Engineering, 390, Article ID: 012073.
https://doi.org/10.1088/1757-899X/390/1/012073
[2] Kavya, T.S., Tsogtbaatar, E., Jang, Y.M. and Cho, S.B. (2018) Night-Time Vehicle Detection Based on Brake/Tail Light Color. International SoC Design Conference (ISOCC), IEEE, Daegu, 12-15 November 2018, 206-207.
https://doi.org/10.1109/ISOCC.2018.8649981
[3] Vaishali, V., Thanuja, T. and Priyadarshini, J. (2017) Nighttime Vehicle Detection, Counting and Classification. International Journal of Advance Research and Innovative Ideas in Education, 3, 2672-2677.
[4] Panicker, J.V. (2015) Nighttime Vehicle Detection and Traffic Surveillance. International Journal of Science and Research, 4, 957-962.
[5] Tian, Q., Zhang, L., Wei, Y., Zhao, W. and Fei, W. (2013) Vehicle Detection and Tracking at Night in Video Surveillance. International Journal of Online and Biomedical Engineering, 9, 2626-8493.
https://doi.org/10.3991/ijoe.v9iS6.2828
[6] Badave, H.H. (2018) Vehicle Detection Systems: A Review. Open Access International Journal of Science & Engineering, 3, 7-10.
[7] Sevekar, P. and Dhonde, S.B. (2016) Nighttime Vehicle Detection for Intelligent Headlight Control: A Review. 2nd International Conference on Applied and Theoretical Computing and Communication Technology, IEEE, Bengaluru, 21-23 July 2016, 188-190.
https://doi.org/10.1109/ICATCCT.2016.7911989
[8] Muralikrishnan, R. (2014) Automatic Headlight Dimmer a Prototype for Vehicles. International Journal of Research in Engineering and Technology, 3, 85-90.
[9] Chilla, D., Joshi, M., Kajale, S. and Deoghare, S. (2016) Headlight Intensity Control Methods—A Review. International Journal of Innovative Research in Computer and Communication Engineering, 4, 1140.
[10] Chen, Y.L. and Chiang, C.Y. (2010) Embedded On-Road Nighttime Vehicle Detection and Tracking System for Driver Assistance. IEEE International Conference on Systems, Man and Cybernetics, Istanbul, 10-13 October 2010, 6.
https://doi.org/10.1109/ICSMC.2010.5642340
[11] Huang, H.W., Lee, C.R. and Lin, H.P. (2017) Nighttime Vehicle Detection and Tracking Base on Spatiotemporal Analysis Using RCCC Sensor. IEEE 9th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment and Management (HNICEM), Manila, 1-3 December 2017, 1-5.
https://doi.org/10.1109/HNICEM.2017.8269548
[12] Taha, M., Zayed, H.H., Nazmy, T. and Khalifa, M.E. (2014) An Efficient Method for Multi Moving Objects Tracking at Nighttime. IJCSI International Journal of Computer Science Issues, 11, 17-27.
https://doi.org/10.15849/icit.2015.0002
[13] Hadi, R.A., Sulong, G. and George, L.E. (2014) Vehicle Detection and Tracking Techniques: A Concise Review. Signal & Image Processing: An International Journal (SIPIJ), 5, 1-12.
https://doi.org/10.5121/sipij.2014.5101
[14] Chen, Y.L., Wu, B.F., Huang, H.Y. and Fan, C.J. (2011) A Real-Time Vision System for Nighttime Vehicle Detection and Traffic Surveillance. IEEE Transactions on Industrial Electronics, 58, 2030-2044.
https://doi.org/10.1109/TIE.2010.2055771
[15] Bogacki, P. and Dlugosz, R. (2019) Selected Methods for Increasing the Accuracy of Vehicle Lights Detection. 24th International Conference on Methods and Models in Automation and Robotics (MMAR). Miedzyzdroje, 26-29 August 2019, 227-231.
https://doi.org/10.1109/MMAR.2019.8864675
[16] Fleyeh, H. and Mohammed, I.A. (2012) Night Time Vehicle Detection. Journal of Intelligent Systems, 21, 143-165.
https://doi.org/10.1515/jisys-2012-0007
[17] AUTODEALSpk (2018) 10 Tips to Help You Drive Safely in The Night.
https://autodeals.pk/10-tips-to-help-you-drive-safely-in-the-nigh
[18] Sina, I., Wibisono, A., Nurhadiyatna, A., Hardjono, B., Jatmiko, W. and Mursanto, P. (2013) Vehicle Counting and Speed Measurement Using Headlight Detection. International Conference on Advanced Computer Science and Information Systems (ICACSIS), Sanur Bali, 28-29 September 2013, 149-154.
https://doi.org/10.1109/ICACSIS.2013.6761567
[19] Juric, D. and Loncaric, S. (2014) A Method for On-Road Night-Time Vehicle Headlight Detection and Tracking. International Conference on Connected Vehicles and Expo (ICCVE), Vienna, 3-7 November 2014, 655-660.
https://doi.org/10.1109/ICCVE.2014.7297630
[20] Padmavathi, S., Naveen, C.R. and Kumari, V.A. (2016) Vision Based Vehicle Counting for Traffic Congestion Analysis during Night Time. Indian Journal of Science and Technology, 9, 1-6.
https://doi.org/10.17485/ijst/2016/v9i20/91742
[21] Chen, X.Z., Liao, K.K., Chen, Y.L., Yu, C.W. and Wang, C. (2018) A Vision-Based Nighttime Surrounding Vehicle Detection System. 7th International Symposium on Next Generation Electronics (ISNE), Taiwan, 7-9 May 2018, 1-3.
https://doi.org/10.1109/ISNE.2018.8394717
[22] Pham, T.A. and Yoo, M. (2020) Nighttime Vehicle Detection and Tracking with Occlusion Handling by Pairing Headlights and Taillights. Applied Sciences, 10, 3986.
https://doi.org/10.3390/app10113986
[23] Sevekar, P. and Dhonde, S.B. (2017) Night-Time Vehicle Detection for Automatic Headlight Beam Control. International Journal of Computer Applications, 157, 8-12.
https://doi.org/10.5120/ijca2017912737
[24] Vu, T.A., Pham, L.H., Huynh, T.K. and Hat, S.V.U. (2017) Nighttime Vehicle Detection and Classification via Headlights Trajectories Matching. International Conference on System Science and Engineering, Ho Chi Minh City, 21-23 July 2017, 221-225.
https://doi.org/10.1109/ICSSE.2017.8030869
[25] Gao, F., Ge, Y., Lu, S. and Zhang, Y. (2018) On-Line Vehicle Detection at Nighttime-Based Tail-Light Pairing with Saliency Detection in the Multi-Lane Intersection. IET Intelligent Transport Systems, 13, 515-522.
https://doi.org/10.1049/iet-its.2018.5197
[26] Mo, Y., Han, G., Zhang, H., Xu, X. and Qu, W. (2019) Highlight-Assisted Nighttime Vehicle Detection Using a Multi-Level Fusion Network and Label Hierarchy. Neurocomputing, 355, 13-23.
https://doi.org/10.1016/j.neucom.2019.04.005
[27] Cai, Y., Sun, X., Wang, H., Chen, L. and Jiang, H. (2016) Night-Time Vehicle Detection Algorithm Based on Visual Saliency and Deep Learning. Journal of Sensors, 2016, Article ID: 8046529.
https://doi.org/10.1155/2016/8046529
[28] Muslu, G. and Bolat, B. (2019) Nighttime Vehicle Tail Light Detection with Rule Based Image Processing. Scientific Meeting on Electrical-Electronics & Biomedical Engineering and Computer Science (EBBT), Istanbul, 24-26 April 2019, 1-4.
[29] Menezes, P., Barreto, J.C. and Dias, J. (2004) Face Tracking Based on Haar-Like Features and Eigen Faces. IFAC/EURON Symposium on Intelligent Autonomous Vehicles, Lisbon, 5-7 July 2004, 500.
[30] Li, C.M., et al. (2017) Human Face Detection Algorithm via Haar Cascade Classifier Combined with Three Additional Classifiers. IEEE 13th International Conference on Electronic Measurement & Instruments, Yangzhou, 20-22 October 2017, 483-487.
https://doi.org/10.1109/ICEMI.2017.8265863
[31] Mahmoud, M.A.I. and Ren, H. (2018) Forest Fire Detection Using a Rule-Based Image Processing Algorithm and Temporal Variation. Mathematical Problems in Engineering, 2018, Article ID: 7612487.
https://doi.org/10.1155/2018/7612487
[32] Viola, P. and Jones, M. (2001) Rapid Object Detection Using a Boosted Cascade of Simple Features. Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, CVPR 2001, 8-14 December 2001, Kauai.
https://doi.org/10.1109/CVPR.2001.990517
[33] Kumar, U.P. and Bharathi, S.H. (2019) Vehicle Detection in Night Time Using Background Model in Urban Traffic Environment. International Journal of Innovative Technology and Exploring Engineering (IJITEE), 8, 2532.
[34] Zou, Q., Ling, H., Pang, Y., Huang, Y. and Tian, M. (2017) Joint Headlight Pairing and Vehicle Tracking by Weighted Set Packing in Nighttime Traffic Videos. IEEE Transactions on Intelligent Transportation Systems, 19, 1950-1961.
https://doi.org/10.1109/TITS.2017.2745683
[35] Varghese, C. and Shankar, U. (2014) Passenger Vehicle Occupant Fatalities by Day and Night—A Contrast. Traffic Safety Facts.
[36] Merriam-Webster.com Dictionary, Merriam-Webster (2020) Headlight.
https://www.merriam-webster.com/dictionary/headlight
[37] American Dictionary, Definition of Taillight.
https://dictionary.cambridge.org/dictionary/english/tail-light
[38] Merriam-Webster.com Dictionary, Merriam-Webster (2020) Taillight.
https://www.merriam-webster.com/dictionary/taillight
[39] Mail Online (2014) Road Safety Row as Hard Shoulder Shut Down Permanently for First Time on M25 “Smart” Motorway to Ease Gridlock (and Don’t Forget the Speed Cameras).
https://www.dailymail.co.uk/news/article-2602746/Road-safety-row-hard-shoulder-shut-permanently-time-M25-smart-motorway-ease-gridlock-dont-forget-speed-cameras.html
[40] Pantech Shop (2013) Overview of ROI Processing.
https://www.pantechsolutions.net/blog/matlab-code-for-region-of-interest-in-image
[41] Robo Craft, ROI-Region of Interest (12).
http://robocraft.ru/blog/computervision/289.html
[42] Chen, Q., Sun, Q.S., Heng, P.A. and Xia, D.S. (2008) A Double-Threshold Image Binarization Method Based on Edge Detector. Pattern Recognition, 41, 1254-1267.
https://doi.org/10.1016/j.patcog.2007.09.007
[43] Blackledge, J.M. (2005) Morphological Operations (Chapter 16.7.3). Digital Image Processing, Mathematical and Computational Methods, 1st Edition, 505.
[44] Kaur, D. and Kaur, Y. (2014) Various Image Segmentation Techniques: A Review. International Journal of Computer Science and Mobile Computing, 3, 809-814.
[45] OpenCV, Open Source Computer Vision, Operations on Arrays, Bitwise and Operation.
https://docs.opencv.org/3.4/d2/de8/group__core__array.html
[46] Sinha, S. (2019) Find and Draw Contours Using OpenCV. GeeksforGeeks.
https://www.geeksforgeeks.org/find-and-draw-contours-using-opencv-python/
[47] Abid, K.A. (2013) Contour Features.
[48] Manikandan, R. and Ramakrishnan, R. (2013) Human Object Detection and Tracking Using Background Subtraction for Sports Applications. International Journal of Advanced Research in Computer and Communication Engineering, 2, 4077-4080.
[49] Bakliwal, A., et al. (2020) Crowd Counter: An Application of Centroid Tracking Algorithm. International Research Journal of Modernization in Engineering Technology and Science, 2, 1138-1141.
[50] Rosalind, Glossary, Euclidean Distance.
http://rosalind.info/glossary/euclidean-distance
[51] Rosebrock, A. (2018) Simple Object Tracking with OpenCV.
https://www.pyimagesearch.com/2018/07/23/simple-object-tracking-with-opencv/
[52] Shrestha, S. (2019) Vehicle Tracking Using Video Surveillance. In: Intelligent System and Computing, IntechOpen, London, 1-20.
https://doi.org/10.5772/intechopen.89405
[53] Wang, J., Sun, X. and Guo, J. (2013) A Region Tracking-Based Vehicle Detection Algorithm in Nighttime Traffic Scenes. Proceedings of IEEE Sensors, 13, 16474-16493.
https://doi.org/10.3390/s131216474
[54] Cai, Z., Fan, Q., Feris, R.S. and Vasconcelos, N. (2016) A Unified Multi-Scale Deep Convolutional Neural Network for Fast Object Detection. European Conference on Computer Vision, Amsterdam, 8-16 October 2016, 354-370.
https://doi.org/10.1007/978-3-319-46493-0_22
[55] Kuang, H., Zhang, X., Li, Y.J., Chan, L.L.H. and Yan, H. (2017) Nighttime Vehicle Detection Based on Bio-Inspired Image Enhancement and Weighted Score-Level Feature Fusion. IEEE Transactions on Intelligent Transportation Systems, 18, 927-936.
https://doi.org/10.1109/TITS.2016.2598192
[56] O’Malley, R., Glavin, M. and Jones, E. (2010) Vehicle Detection at Night Based on Tail-Light Detection. 1st International ICST Symposium on Vehicular Computing Systems, Dublin, 22-24 July 2008.
