In-Vehicle Stereo Vision Systems with Improved Ant Colony Optimization Based Lane Detection: A Solution to Accidents Involving Large Goods Vehicles Due to Blind Spots

Abstract

This paper presents an in-vehicle stereo vision system as a solution to accidents involving large goods vehicles due to blind spots, using Nigeria as a case study. A stereo vision system was attached to the front of Large Goods Vehicles (LGVs) to present live feeds of vehicles close to the LGV together with their distances. The captured road images were optimized for effective and safe vehicle maneuvering using a modified metaheuristic called the simulated annealing Ant Colony Optimization (saACO) algorithm. Simulated annealing strategies are used to automatically select the control parameters of the ACO algorithm, which stabilizes its performance irrespective of the quality of the lane images captured by the in-vehicle vision system. The system notifies drivers of obstacles in blind spots through lane detection techniques, enabling the driver to be more aware of the vehicle's surroundings and to make decisions early. To test the system, the stereo vision device was mounted on a large goods vehicle, driven in Zaria (a city in Kaduna State, Nigeria), and data were recorded. Out of 180 events, 42.22% of potential accident events were caused by passenger cars, while 27.22%, 18.33% and 12.22% were caused by two-wheelers, large goods vehicles and road users, respectively. In the same vein, the saACO-based lane detection system shows good performance and outperforms the standard ACO method.

Share and Cite:

Umar, I. , Hu, S. and Luo, H. (2022) In-Vehicle Stereo Vision Systems with Improved Ant Colony Optimization Based Lane Detection: A Solution to Accidents Involving Large Goods Vehicles Due to Blind Spots. Open Journal of Applied Sciences, 12, 346-367. doi: 10.4236/ojapps.2022.123025.

1. Introduction

Stereo vision systems in vehicles are image information systems divided into two categories: monocular vision and binocular vision. Monocular vision delivers environmental information to the system, which is in most cases insufficient for safety needs. Binocular vision, however, which infers a depth map from the disparity between two views, can be used to meet the system’s safety criteria [1] [2]. These safety criteria are essential for preventing accidents caused by Large Goods Vehicles (LGVs).

Large Goods Vehicles (LGVs), also known as Heavy Goods Vehicles (HGVs), play a critical role in the development and maintenance of today’s economies [3]. LGVs accounted for 85.4 percent of road goods transportation in the European Union (EU) countries in 2018, with a maximum allowable laden weight of 30 tonnes [4]. Similarly, LGVs were responsible for 64% of all goods transport in the United Kingdom, accounting for 152 billion tonne-kilometers of products carried. This cargo movement benefits all sectors of society, with 20% of LGVs hauling food, 13% moving metal, minerals, and synthetic commodities, and 1% transporting waste-related items. Glass, cement, and other non-metallic mineral products are also essential items handled by LGVs, accounting for 10% of all goods moved by LGVs [5]. Unlike smaller road vehicles such as passenger cars and motorcycles, large commercial vehicles have multiple blind spots due to their size. Blind spots are regions not visible to the driver either directly or indirectly (through a device such as a side-view mirror). Depending on the vehicle’s design, a blind spot in large vehicles can extend up to 2 meters. The front, back, left, and right sides of an LGV contain the largest blind spots. Within these regions, the driver has limited or no visibility, necessitating an efficient vision system [6] (Figure 1).

Figure 1. Blind spots in LGVs.

Cars or objects in blind areas cause most LGV incidents; however, these blind spots do not always contribute equally to accidents. As presented in past literature, the blind area in front of the vehicle is responsible for about 31% of fatal incidents involving LGVs. These can be caused by passenger-side lane changes and turns, starting an LGV from a standstill at crosswalks or other locations where a person or an object could be close to the vehicle’s front, and other traffic scenarios. To prevent accidents due to blind spots, this work presents an improved in-vehicle vision and image processing system capable of detecting lanes and their edges using Ant Colony Optimization. Using this technique, a lane boundary marker and a road shape constraint were introduced. The road geometry was utilized to estimate the inverse perspective mapping, while the lane boundary marker was used to map out critical points on the lane boundaries. An enhanced symmetrical thresholding technique was used to retrieve the lane markings’ edge points. Bresenham line voting space was used to implement line segmentation. Different status vectors and Kalman filters were used to track the critical elements of the linear and non-linear parts of the lane markers. Experiments revealed that the technique can satisfactorily meet the real-time requirements of large goods vehicles.

The remainder of the paper is organized as follows. Section two reviews existing in-vehicle vision systems and the methodology used. Section three presents the case study together with the proposed system. Section four assesses the method’s performance and discusses the obtained results, while the conclusion and recommendations are presented at the end.

2. Review of In-Vehicle Vision System

Researchers have employed various approaches to obstacle detection in automotive applications, among which is the knowledge-based approach, which uses different obstacle properties for detection. Evenness, shading, shadows, horizontal and vertical edges, and surfaces are the most utilized properties. Optical flow is generally utilized for motion-based systems, in which the flow induced by the motion of the vehicle is estimated. By compensating for this flow in the final optical flow result, the moving obstacles in the scene can be identified [7] [8] [9] [10]. Another stereo method is the use of the depth domain. This method uses an inverse-perspective-transformed left image to predict the right image under a flat-road assumption [11]. The obstructions are outlined by computing the difference between the predicted right image and the actual right image. The use of triangular-shaped objects on the difference map of the two transformed bird’s-eye-view images is another method employed in stereo vision. This approach requires less computation than the depth domain method by exploiting the characteristics of the specific application [12]. The template matching method is another approach employed in stereo systems, which both identifies and classifies obstacles using the calculated disparity of the edge pixels [13]. Stereo vision and inverse perspective mapping (IPM) have also been combined. This method uses an algorithm that recognizes the vertical edges crossing vanishing lines that ordinarily correspond to obstacles [14]. Another approach in stereo vision uses u-v disparity to evaluate the three-dimensional road surface position, which likewise identifies the obstacles on the road [15]. The combination of motion data and the depth map from stereo vision to identify moving and fixed obstacles is a further approach used in in-vehicle stereo systems. This method precisely identifies obstacle positions; however, it requires ten seconds to compute a pair of VGA-resolution images [13].

It is clear from the discussion thus far that much study has been done in the domain of in-vehicle vision systems to limit the Large Good Vehicle accident to the barest minimum. Nonetheless, the basis of this paper is the development of an improved in-vehicle vision system with a dynamic lane detection algorithm based on swarm optimization techniques as a tool for edge detection under various lighting and weather conditions, as well as the consideration of road images of varying quality (properly marked and improperly marked roads). For lane edge identification, this research uses the Ant Colony Optimization (ACO) [16], a population-based optimization method inspired by ant foraging behaviors and their natural ability to locate the shortest path to a food source.


2.1. Stereo Vision Using Depth Estimation

One of the fundamental stages in designing an in-vehicle stereo vision system based on depth estimation is camera calibration. Camera calibration is the process of evaluating the spatial relationship between the two cameras: determining the position of the cameras that make up the stereo setup with respect to each other. Calibration is generally done using a calibration pattern, such as a checkerboard whose square size is known, and taking pictures of this pattern at various orientations and distances from the cameras. A straightforward contrast-based calculation can be used to recognize the black-white intersection points on the checkerboard. Using the correspondences between these intersection points in the two images, the rotation and translation matrices between the two cameras can be resolved, and consequently the coordinate system of each camera can be transformed into a common coordinate system [7]. Using a pinhole camera model, as shown in Figure 3, the mathematical relationship between the coordinates of any 3D point in the world and the 2D coordinates of the image plane onto which the 3D point is projected can be determined using Equations (1) and (2) [17].

$s\,m' = M_{\mathrm{int}}\,[R\,|\,t]\,M$ (1)

$s \begin{pmatrix} u \\ v \\ 1 \end{pmatrix} = \begin{pmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} r_{11} & r_{12} & r_{13} & t_1 \\ r_{21} & r_{22} & r_{23} & t_2 \\ r_{31} & r_{32} & r_{33} & t_3 \end{pmatrix} \begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix}$ (2)

where $(X, Y, Z)$ are the coordinates of a 3D point in the world coordinate space, $(u, v)$ are the coordinates of the projection point in pixels, $(c_x, c_y)$ is the principal point, usually at the image center, $(f_x, f_y)$ are the focal lengths expressed in pixel units, and $[R\,|\,t]$ is the matrix describing the rotation and translation of the camera around a static scene.
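As a concrete illustration of Equations (1) and (2), the following sketch projects a 3D world point onto the image plane. The intrinsic parameters, rotation, and translation below are hypothetical values chosen for the example, not calibration results from this study:

```python
import numpy as np

# Hypothetical intrinsics: focal lengths (pixels) and principal point.
fx, fy, cx, cy = 800.0, 800.0, 320.0, 240.0
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])
R = np.eye(3)            # identity rotation: camera frame coincides with world frame
t = np.zeros((3, 1))     # zero translation

def project(point_3d):
    """Project a 3D world point to pixel coordinates via s*m' = K [R|t] M."""
    M = np.asarray(point_3d, dtype=float).reshape(3, 1)
    m = K @ (R @ M + t)          # homogeneous image coordinates, scale s = z
    return m[:2, 0] / m[2, 0]    # divide by s to obtain (u, v)

u, v = project([1.0, 0.5, 4.0])
```

With the identity pose, a point at depth $z = 4$ simply maps to $u = f_x X/Z + c_x$ and $v = f_y Y/Z + c_y$.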

However, real camera lenses usually exhibit some form of distortion, which can be radial or tangential. This distortion is taken into account using Equations (3) to (6) [17].

$\begin{pmatrix} x \\ y \\ z \end{pmatrix} = R \begin{pmatrix} X \\ Y \\ Z \end{pmatrix} + t$ (3)

in which

$x' = x/z$ (4a)

$y' = y/z$ (4b)

$x'' = x' \dfrac{1 + k_1 r^2 + k_2 r^4 + k_3 r^6}{1 + k_4 r^2 + k_5 r^4 + k_6 r^6} + 2 p_1 x' y' + p_2 (r^2 + 2 x'^2)$ (4c)

$y'' = y' \dfrac{1 + k_1 r^2 + k_2 r^4 + k_3 r^6}{1 + k_4 r^2 + k_5 r^4 + k_6 r^6} + p_1 (r^2 + 2 y'^2) + 2 p_2 x' y'$ (4d)

$r^2 = x'^2 + y'^2$ (5)

$u = f_x x'' + c_x$ (6a)

$v = f_y y'' + c_y$ (6b)

where $k_1, k_2, \ldots, k_6$ are the radial distortion coefficients and $p_1$ and $p_2$ are the tangential distortion coefficients.
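The distortion model of Equations (4c) and (4d) can be sketched as a small function operating on the normalized coordinates $(x', y')$. The coefficient values used below are illustrative, not measured ones:

```python
def distort(x, y, k=(0.0,) * 6, p=(0.0, 0.0)):
    """Apply the radial/tangential distortion of Equations (4c)-(4d)
    to normalized image coordinates (x', y') = (x/z, y/z)."""
    k1, k2, k3, k4, k5, k6 = k
    p1, p2 = p
    r2 = x * x + y * y  # r^2 from Equation (5)
    radial = (1 + k1 * r2 + k2 * r2**2 + k3 * r2**3) / \
             (1 + k4 * r2 + k5 * r2**2 + k6 * r2**3)
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return xd, yd
```

With all coefficients zero the mapping is the identity; the distorted coordinates $(x'', y'')$ are then scaled by the focal lengths and offset by the principal point as in Equations (6a) and (6b).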

Depth estimation is another crucial step in in-vehicle stereo vision, and arguably the most significant, wherein the actual distance to the vehicle or object in front is determined. To obtain the depth, the disparity of the corresponding points must be determined. The disparity is the apparent shift of the points of interest along the x-axis in a rectified pair of stereo images [18]. It is usually obtained by superimposing the image from one of the cameras on top of the other. Once the corresponding features from both rectified images have been obtained, the difference between their x-coordinates yields the disparity. When the disparities are resolved, the corresponding distances at each pair of point correspondences are determined using Equation (7) [17].

$\text{distance} = \dfrac{fB}{d}$ (7)

where f is the focal length in pixels, B is the baseline, i.e., the separation between the two camera centers, and d is the disparity at the corresponding pair of points being considered.
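Equation (7) translates directly into code. The parameter values in the example below are illustrative:

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Triangulate distance from disparity: distance = f * B / d (Equation (7))."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / disparity_px

# Example: f = 700 px, B = 0.1 m, d = 35 px gives a distance of 2 m.
d = depth_from_disparity(700.0, 0.1, 35.0)
```

Note the inverse relationship: halving the disparity doubles the estimated distance, so depth resolution degrades for far-away objects.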

2.2. The Case Study

The Federal Republic of Nigeria covers a total area of 923,768 square kilometers. The country is located between Benin and Cameroon in the Gulf of Guinea. Nigeria has a 4447-kilometer-long international border with Cameroon (1690 kilometers) to the east, Niger (1497 kilometers) to the north, Benin (773 kilometers) to the west, and Chad (87 kilometers) to the northeast. Nigeria also has an 853-kilometer-long coastline. Nigeria is governed by a federal system that includes a federal government, 36 state governments, and the Federal Capital Territory of Abuja. In addition, the country has 774 Local Government Areas. Nigeria has a population of 140 million people, according to the 2006 population census, with an annual growth rate of 1.9 percent [19]. The Nigerian transportation system includes air, ocean, land (road and rail), and pipeline transport. However, research has revealed that, because of its low cost and accessibility, road transportation is the most widely used mode of moving products and people in Nigeria [19]. According to Table 1 [16], Nigeria’s road network is around 195,000 kilometers long, with about 32,000 kilometers of federal roadways and 31,000 kilometers of state roads. Studies also show that pipeline transportation of petroleum products has been replaced by road transportation due to illegal activities such as pipeline vandalism. As a result, the number of LGVs on the road has increased by about 2600, increasing the number of mishaps on Nigerian roads that result in death and property loss [20].

Table 1. Road networks distribution in Nigeria.

With around 5000 of them active in the wet payload (oil) hauling moving more than 150 million liters of oil, large goods vehicles have become an important segment of Nigeria’s road transport systems. There are also 2500 dry payload trailers plying the roads on a daily basis, specializing in the transportation of goods such as food and agricultural supplies from rural areas, as well as household and office appliances [21].


In the case of Nigerian cargo traffic, studies show that the number of deaths involving LGVs varies annually, with increases predicted relative to previous years due to a heavy reliance on road cargo transport and little reliance on rail and pipeline transportation. According to the study, there were roughly 50,000 recorded casualty instances involving LGVs between 2007 and 2017, with about 27% of these cases resulting in death, as shown in Table 2 [21].

Distractions while driving, the presence of numerous blind spots from the driver, high speed and reckless driving, night driving and wrong-way driving, drug intake, potholes, pedestrian carelessness on the road, disobedience to traffic rules, overloading, and other factors have all been identified as contributing to the high rate of LGV accidents on Nigerian roads.

3. The System Methodology

The following subsections describe the step-by-step approach used to implement the proposed method. They discuss the entire system architecture, which comprises the in-vehicle vision system and the Ant Colony-based lane detection system for blind spot detection.

Two low-cost web cameras (webcams) with wide-angle lenses make up the hardware setup embedded with the lane detection system. To get a wide field of view, the Zinox webcams were used. As illustrated in Figure 2, these cameras were housed in a case. The casing was made of acrylic glass and was machined to have two front openings, allowing the two cameras to see nearby obstacles. It can also be unlocked to allow access to the cameras. It also includes a black hole that allows for camera hookup. This setup was installed in the front of a large-goods truck, as depicted in Figure 3, just below the windshield, to detect obstructions in the vehicle’s blind zone.

On this setup, a stereo vision algorithm for obstacle identification written in Python runs and shows the stereo image on a screen. Furthermore, the system notifies the driver whenever an obstacle enters the truck’s blind zones. The worst-performing trucks, as shown in [22], have a blind spot of 1.9 meters; hence the algorithm is programmed to warn the driver if an obstacle is within 0 - 1.9 meters of the truck.
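The notification rule above reduces to a simple distance check. A minimal sketch follows; the function and constant names are ours, while the 1.9 m limit is the worst-case blind spot cited from [22]:

```python
BLIND_ZONE_LIMIT_M = 1.9  # worst-case frontal blind spot reported in [22]

def should_warn(distance_m):
    """Return True when a detected obstacle lies within the 0 - 1.9 m blind zone."""
    return 0.0 <= distance_m <= BLIND_ZONE_LIMIT_M
```

In the deployed system this check would run on each distance produced by Equation (7) for every detected obstacle.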

Table 2. Road traffic crash data involving LGVs in Nigeria.

Figure 2. Casing for stereo vision cameras.

Figure 3. Position of stereo-vision system on LGV.

The process followed to acquire the data for the proposed technique is presented in Figure 4.

3.1. In Vehicle Stereo Based Lane Detection

This section presents the proposed modified simulated annealing-based ACO (saACO) and the sample test images used to evaluate the performance of the developed lane detection model.

Figure 4. Process followed to acquire data.

3.1.1. saACO Algorithm

Ant colony optimization is one of the dominant algorithms in the field of metaheuristic optimization. The major challenge with this algorithm is the ease with which it falls into local minima. This is attributed to the constant effect of the control parameters as the algorithm iterates through the optimization process [23] [24] [25] [26]. To address this challenge, this paper proposes dynamic control parameter selection using the simulated annealing inertia weight strategy of PSO. These parameters include the pheromone influencer $\alpha$, the heuristic matrix influencer $\beta$, the evaporation rate $\rho$ and the pheromone decay coefficient $\varphi$. To minimize the constant effect of these control parameters, each parameter was modified using the simulated annealing inertia weight as follows:

$\alpha_k = \alpha_{\min} + (\alpha_{\max} - \alpha_{\min}) \times \lambda^{(k-1)}$ (8)

$\beta_k = \beta_{\min} + (\beta_{\max} - \beta_{\min}) \times \lambda^{(k-1)}$ (9)

$\rho_k = \rho_{\min} + (\rho_{\max} - \rho_{\min}) \times \lambda^{(k-1)}$ (10)

$\varphi_k = \varphi_{\min} + (\varphi_{\max} - \varphi_{\min}) \times \lambda^{(k-1)}$ (11)

The min and max parameters in Equations (8)-(11) are initialized to the minimum and maximum values of each parameter.
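All four schedules in Equations (8)-(11) share the same form, so they can be sketched as one function (the name `sa_schedule` is ours):

```python
def sa_schedule(p_min, p_max, lam, k):
    """Equations (8)-(11): p_k = p_min + (p_max - p_min) * lambda**(k - 1).

    For 0 < lam < 1, the parameter starts at p_max when k = 1 and decays
    geometrically toward p_min, mirroring a simulated-annealing cooling schedule."""
    return p_min + (p_max - p_min) * lam ** (k - 1)
```

For example, with $\lambda = 0.9$ each control parameter "cools" from its maximum toward its minimum as the iteration count k grows, so exploration dominates early iterations and exploitation dominates later ones.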

The memory length of the ACO algorithm is crucial to its performance. For example, a small memory length may lead to idle behaviour of the ACO, while a large memory length may cause the ACO to miss some important details. The memory length is usually image specific, meaning the algorithm only supports a particular image memory length at a particular time. If the available image is not within the range of memory lengths supported by the algorithm, the memory length determination may fail. The transition probability is then calculated using Equation (12) [27] [28].

$P_{i,j}^{(n)} = \dfrac{\left(\tau_{i,j}^{(n-1)}\right)^{\alpha} \left(\eta_{i,j}\right)^{\beta}}{\sum_{j \in \Omega_i} \left(\tau_{i,j}^{(n-1)}\right)^{\alpha} \left(\eta_{i,j}\right)^{\beta}}$ (12)

where $\tau_{i,j}^{(n-1)}$ is the pheromone on pixel (i, j), $\Omega_i$ is the set of neighbouring pixels, and $\eta_{i,j}$ is the heuristic information of pixel (i, j).
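As an illustration only (our own sketch, not the authors' implementation), Equation (12) normalizes pheromone-weighted heuristic scores over the neighbourhood of the current pixel:

```python
import numpy as np

def transition_probabilities(tau, eta, neighbours, alpha, beta):
    """Equation (12): probability of an ant moving to each neighbouring
    pixel (i, j), weighted by pheromone tau and heuristic information eta."""
    weights = np.array([(tau[r, c] ** alpha) * (eta[r, c] ** beta)
                        for r, c in neighbours])
    return weights / weights.sum()  # normalize so the probabilities sum to 1
```

Pixels with both a strong pheromone trail and strong local intensity variation receive proportionally more of the probability mass, biasing ants toward likely edges.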

The next position of the ants is randomly generated and recorded into the ant’s memory, after which the pheromone function is updated based on a decision between the current and previous position of the ant. When an ant visits a pixel, the pixel is immediately updated locally. At every kth iteration, the amount of pheromone on pixel (i, j) is updated based on Equation (13) [27] [28]:

$\tau_{i,j}^{(n)} = (1 - \varphi)\, \tau_{i,j}^{(n-1)} + \varphi\, \tau_{\mathrm{initial}}$ (13)

The permissible movement of ants is estimated from a connectivity neighborhood. Although an ant has the freedom to move to any adjacent pixel, this movement is restricted by the condition that an ant can only move to a pixel it has not previously visited. At the end of the pixel construction process, the global pheromone update is performed on the visited pixels using Equation (14) [27] [28]:

$\tau_{i,j}^{(n)} = (1 - \rho)\, \tau_{i,j}^{(n-1)} + \rho \sum_{k=1}^{K} \Delta\tau_{i,j}^{k}$ (14)

where $\Delta\tau_{i,j}^{k}$ is the total pheromone deposited on pixel (i, j) by the kth ant.
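The two pheromone updates of Equations (13) and (14) can be sketched directly; the function names are ours, and the scalar form below stands in for the element-wise update of the full pheromone matrix:

```python
def local_update(tau, phi, tau_init):
    """Equation (13): local pheromone decay toward tau_init when a pixel is visited."""
    return (1 - phi) * tau + phi * tau_init

def global_update(tau_prev, rho, delta_sum):
    """Equation (14): global update on a visited pixel, where delta_sum is the
    total pheromone deposited on the pixel by all K ants."""
    return (1 - rho) * tau_prev + rho * delta_sum
```

The local rule pulls visited pixels back toward the initial pheromone level (discouraging repeated visits within a tour), while the global rule reinforces pixels that accumulated deposits across the colony.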

Finally, based on the generated pheromone function, a threshold is applied to each pixel to decide whether it is an edge. To extract the edge map from the updated pheromone matrix, an optimal threshold value is determined using Otsu’s method. A tolerance value was also introduced into the constructed edge map matrix to ensure that as many true edges as possible survive the binary decision. Figure 5 shows the flowchart for implementing the saACO algorithm.
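A minimal NumPy sketch of this decision step follows. It is our own illustration, not the authors' code: Otsu's threshold is computed on the pheromone values (assumed normalized to [0, 1]), then relaxed by a tolerance before the binary edge decision:

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Otsu's method: choose the threshold maximizing between-class variance."""
    hist, edges = np.histogram(values, bins=bins, range=(0.0, 1.0))
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                   # class-0 probability
    mu = np.cumsum(prob * np.arange(bins))    # class-0 cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)
    # Take the middle of the plateau of maximal between-class variance.
    k = int(np.flatnonzero(sigma_b == sigma_b.max()).mean())
    return (edges[k] + edges[k + 1]) / 2.0

def edge_map(pheromone, tolerance=0.05):
    """Binary edge decision on the pheromone matrix, relaxed by a tolerance
    so that borderline edge pixels are kept."""
    return pheromone >= otsu_threshold(pheromone.ravel()) - tolerance
```

Lowering the threshold by the tolerance trades a few false positives for fewer missed lane-marking edges, matching the intent described above.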

Figure 5. Flowchart of saACO algorithm.

3.1.2. Used Dataset

This section presents samples of the test lanes used to implement the in-vehicle lane detection technique. During implementation, over 50 lane images of different conditions collected within Nigeria were used to build the lane detection model. These images were collected with different degrees of occlusion and noise. As an example, Figure 6 presents ten samples of the test images used in this research.

(a) TImg1 (b) TImg2 (c) TImg3 (d) TImg4 (e) TImg5 (f) TImg6 (g) TImg7 (h) TImg8 (i) TImg9 (j) TImg10

Figure 6. Nigerian test lanes for In-Vehicle stereo based lane detection.

From Figure 6, the test images labeled TImg1 and TImg2 are medium-quality images of the same lane captured at different times and locations under different road conditions. This helps to evaluate the performance of the in-vehicle lane detection technique on the same lane with markings along a curved road and straight lane markings. Test images TImg3 and TImg4 were selected to determine how effective the in-vehicle lane detection is at detecting center lane lines and the lane lines of a sharp bend. Test images TImg5 and TImg6 were selected to examine the effectiveness of the lane detection technique on a poor-quality image and a poor-quality image with high traffic. TImg7 evaluates the effectiveness of the technique on a two-way highway lane, while TImg8 evaluates the technique on a single lane with four lane markings. TImg9 and TImg10 serve the same purposes as TImg8 and TImg7, respectively, but with different image qualities and slight traffic density.

4. Result Discussion

The research was conducted under actual traffic conditions in Zaria, Kaduna State, Nigeria. The device was installed on a large goods truck and driven for around 3 hours on two occasions. The red spots in Figure 7 illustrate the path taken on these occasions.

The truck was stopped several times during this period, particularly at pedestrian crosswalks, to identify possible accident events that could occur when the vehicle was about to start or take off. Data were collected to assess the system’s suitability for obstacle recognition and notification; whenever a vehicle or road user was detected within the blind spot, the driver was notified and the event details were recorded, as shown in Figure 7. The detected objects were grouped into different classes based on the 180 recorded events. These classes include passenger cars (sedans, SUVs, vans, 3-wheelers), two-wheelers (motorcycles and bicycles), large goods vehicles (trucks, tankers), and road users (people crossing the road), with further analysis to determine how much each group contributes to possible accident events, as shown in Table 3.

These events were also aggregated in order to determine who was accountable for an occurrence that resulted in a notification when a vehicle or other road user was in the blind spot. This was done to determine whether an event/notification was produced by the LGV driver (when the LGV driver approached another vehicle in front of the LGV) or by another vehicle/road user moving into a blind area. The diagram in Figure 8 shows the number of events/notifications generated by the LGV vehicle.

According to the findings, most potential accident situations (42.22 percent) were caused by passenger cars in the blind spot in front of the LGV. Passenger cars are followed by two-wheelers, large goods vehicles, and road users. This demonstrates how most drivers and road users are completely unaware of the blind zones that exist in front of an LGV and their potential to cause needless accidents. The findings also demonstrate that the majority of incidents in which an LGV driver received a notification when a vehicle or road user was in the LGV’s blind spot were caused by the vehicle or road user, not the LGV driver. However, the difference is not large, which demonstrates that LGV drivers also play a substantial role in these incidents (Figure 8).
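The reported class shares can be reproduced from raw event counts. The counts below are reconstructed from the stated percentages of the 180 events and are therefore an assumption, not figures taken from the study's logs:

```python
# Potential-accident events by obstacle class; counts reconstructed from the
# reported shares of the 180 recorded events (e.g. 76 / 180 = 42.22%).
events = {"passenger cars": 76, "two-wheelers": 49,
          "large goods vehicles": 33, "road users": 22}

total = sum(events.values())
shares = {cls: round(100 * n / total, 2) for cls, n in events.items()}
```

These reconstructed counts sum to exactly 180 and yield the four percentages quoted in the text.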

Figure 7. Map route for the drive test of LGV.

Table 3. Classification of potential accident event with large goods vehicles.

Figure 8. Event/notification produced by the LGV driver as a result of blind spot detection.

These findings suggest that LGV drivers and other road users in Nigeria are unaware of the blind zones that exist around these vehicles, which contributes to the high incidence of incidents involving them. Road users and drivers must be fully educated about the blind areas of LGVs. This would reduce the number of cars and road users found in blind zones, lowering the incidence of LGV-related incidents. The in-vehicle stereo system described in this study goes a long way toward assisting LGV drivers in taking the appropriate action when another vehicle is detected in the LGV’s blind spot.

Results Analysis of the In-Vehicle Lane Detection

This section presents the results obtained using the in-vehicle lane detection technique. The results are presented in two phases. In the first phase, the edges detected using the saACO and the standard ACO are presented, along with a brief presentation of the Hough transform. The second phase presents the lanes detected using the lane edges found by both the saACO and the ACO algorithms.

Detected Edges and Hough Transform of saACO

To analyze the efficiency of the in-vehicle lane detection system, both the saACO and the standard ACO were utilized on a set of carefully chosen datasets.

The quality of the test image determines the performance of the in-vehicle lane detection described in this article. The image dataset used in this article was not pre-processed before applying the developed algorithm, contrary to standard techniques, which require extensive preprocessing and thus considerable computational resources. The edges were detected using the saACO, and the lane lines in the detected edges were computed using the Hough transform. Figure 9 shows the edges detected using the saACO algorithm. Note that, due to the amount of information to be displayed, this research only presents the results obtained using the simulated annealing-based inertia weight.

The presented saACO-based edge detection strategy uses a group of ants that travel across a 2-D image in order to build a pheromone matrix, each entry of which contains the edge information at the corresponding pixel location. In addition, the ants’ motions are influenced by local variations in the image’s intensity values. Simulated annealing was used with the ant colony optimization to optimally select the ACO parameters. The proposed method begins with the initialization process and then iteratively performs the construction and update processes for N iterations to generate the pheromone matrix. Finally, the edges are determined by the decision process. Even in the presence of background noise, reflections, and low-quality images, the saACO performed admirably, despite the test images being fed directly into the saACO without any pre-processing.

To compute the Hough peaks, a Hough transform of the recognized edges was generated, which aids in determining the presence of straight lane lines in the original test images. MATLAB includes an excellent image processing toolbox for implementing the Hough transform. On the binary outputs of the saACO-based edge detector, the Hough transform tools, including Hough peaks and Hough lines, were employed to extract the line segments that create lane markings. The x-axis represents the relative position of the lane lines on the image, while the y-axis represents the distance of the line from the corresponding image plane corner. The result of the Hough transform is presented in Figure 10.

TImg1 TImg2 TImg3 TImg4 TImg5 TImg6 TImg7 TImg8 TImg9 TImg10

Figure 9. Detected edges using saACO.
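The Hough voting scheme described above can be sketched in plain NumPy. The paper used MATLAB's toolbox functions; the standalone version below is only an illustration of the underlying accumulator idea:

```python
import numpy as np

def hough_transform(edge_img, n_theta=180):
    """Vote in (rho, theta) space for every edge pixel; peaks in the
    accumulator correspond to straight lane lines in the edge map."""
    h, w = edge_img.shape
    diag = int(np.ceil(np.hypot(h, w)))              # largest possible |rho|
    thetas = np.deg2rad(np.arange(n_theta) - 90)     # angles from -90 to 89 deg
    acc = np.zeros((2 * diag + 1, n_theta), dtype=int)
    ys, xs = np.nonzero(edge_img)
    for x, y in zip(xs, ys):
        # rho = x cos(theta) + y sin(theta), shifted by diag to stay non-negative
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int) + diag
        acc[rhos, np.arange(n_theta)] += 1
    return acc, thetas, diag
```

Extracting the strongest accumulator cells ("Hough peaks") then yields the $(\rho, \theta)$ parameters of the candidate lane lines, which can be de-Houghed back into line segments on the image.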

The results presented in Figure 11 and Figure 12 show the in-vehicle lane detection using the traditional ACO and the saACO. To find the lane lines that make up the Hough peaks, the extracted line segments were de-Houghed using the Hough transform. These de-Houghed lines were then mapped as detected lanes onto the original test images. From Figure 11 and Figure 12, it can be observed that the saACO-based lane detection algorithm detected most of the lane markings of interest on the road compared with the standard ACO.

TImg1 TImg2 TImg3 TImg4 TImg5 TImg6 TImg7 TImg8 TImg9 TImg10

Figure 10. Detected edges using Hough transform.

saACO Detected Lane ACO Detected Lane

saACO-TImg1 saACO Detected Lane ACO-TImg1 ACO Detected Lane saACO-TImg2 saACO Detected Lane ACO-TImg2 ACO Detected Lane saACO-TImg3 saACO Detected Lane ACO-TImg3 ACO Detected Lane saACO-TImg4 saACO Detected Lane ACO-TImg4 ACO Detected Lane saACO-TImg5 ACO-TImg5

Figure 11. Detected lane using saACO and ACO.

saACO Detected Lane ACO Detected Lane

saACO-TImg6 saACO Detected Lane ACO-TImg6 ACO Detected Lane saACO-TImg7 saACO Detected Lane ACO-TImg7 ACO Detected Lane saACO-TImg8 saACO Detected Lane ACO-TImg8 ACO Detected Lane saACO-TImg9 saACO Detected Lane ACO-TImg9 ACO Detected Lane saACO-TImg10 ACO-TImg10

Figure 12. Detected lane using saACO and ACO.

For images TImg1 and TImg2, the saACO-based method detected the major lane marks, whereas the ACO-based method detected only one of the lane lines and the centre lane line. Across all the images used, because neither the saACO- nor the ACO-based method requires any preparation, the image quality has little impact on the algorithms’ performance. However, both algorithms performed poorly on TImg5 because of the cloudy conditions and poor quality of the image. For images 3, 4, 6, 7 and 9 the lane lines were adequately detected, which shows that the saACO clearly outperforms the standard ACO in lane line detection.

5. Conclusions

The results show that many potential accident cases resulted from passenger cars (42.22%) in the blind spot in front of the LGV. Next to passenger cars are two-wheelers, large goods vehicles and road users, respectively. This shows, to a great extent, how unaware most drivers and road users are of the blind spots that exist in front of an LGV and their potential to lead to avoidable accidents. The results also show that most events that led to a notification when a vehicle or road user was in the blind spot of an LGV were caused by the vehicle or road user and not the LGV driver. However, the difference is not large, which shows that drivers of LGVs also contribute significantly to these events. These results indicate that drivers of LGVs and other road users in Nigeria are not well educated on the blind spots around LGVs, contributing to the number of accidents involving these vehicles.

There is a need for road users and drivers to be adequately educated on the blind spots of LGVs. This would reduce the number of vehicles and road users found in blind spots and would, in turn, reduce the number of accidents involving LGVs. The in-vehicle stereo vision system presented in this paper goes a long way towards assisting LGV drivers in taking the right action whenever another vehicle is found in the blind spot of the LGV. Similarly, the in-vehicle vision-based lane detection demonstrated the good performance of the developed saACO-based lane detection technique. Comparing saACO with the standard ACO, the results showed that the proposed saACO captures the lane markings much more effectively than the ACO-based method. One of the major challenges affecting this research is the lack of an available dataset on which to validate the vision-based system before real-time application. It is therefore recommended that researchers consider developing a standard database containing lane images of varying quality. Implementing the same system using other metaheuristic algorithms is also recommended.

Acknowledgements

This project would not have been achieved without the involvement and support of many people, whose names may not all be listed here. Their contributions are gratefully recognized and sincerely appreciated.

First and foremost, I want to express my gratitude to God for allowing me to accomplish this project. I also thank my supervisor, Hu Shen Bo, who helped me learn a great deal through this project; his advice and recommendations contributed to the completion of this work.

I am incredibly grateful to my uncle, Alhaji Adamu Dogara Muhammad (Marafan Kagarko), who has supported my professional ambitions and actively worked to secure the educational opportunities for me to pursue these goals.

I want to thank Dr Salawudeen Ahmed Tijani, University of Jos, Nigeria, for his assistance in obtaining various information, collecting data, and guiding me from time to time in completing this project; despite his hectic schedule, he contributed new ideas that make this project unique. Finally, I want to thank my mother, Hajiya Madina D Kukui, and my classmates for their essential advice and recommendations throughout the project's many stages.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.


Copyright © 2024 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.