Methodology for Automatically Setting Camera View to Mile Marker for Traffic Incident Management

Abstract

Traffic incident management (TIM) is an FHWA Every Day Counts initiative with the objective of reducing secondary crashes, improving travel reliability, and ensuring the safety of responders. Agency roadside cameras play a critical role in TIM by helping dispatchers quickly identify the precise location of incidents when receiving reports from motorists with varying levels of spatial accuracy. Reconciling position reports, which are often mile marker based, with cameras that operate in a Pan-Tilt-Zoom coordinate system relies on dispatchers having detailed knowledge of hundreds of cameras and perhaps some presets. During real-time incident dispatching, reducing the time it takes to identify the most relevant cameras and set their views on the incident is an important opportunity to improve incident management dispatch times. This research develops a camera-to-mile marker mapping technique that automatically sets the camera view to a specified mile marker within the field-of-view of the camera. Over 350 traffic cameras along Indiana’s 2250 directional miles of interstate were mapped to approximately 5000 discrete locations, corresponding to approximately 780 directional miles (~35% of interstate) of camera coverage. This newly developed technique will allow operators to quickly identify the nearest camera and set it to the reported location. This research also identifies segments of the interstate system with limited or no camera coverage so that decision makers can prioritize future capital investments. The paper concludes with a brief discussion of future research to automate the mapping using LiDAR data and to set the cameras after automatically detecting events using connected vehicle trajectory data.

Share and Cite:

Mathew, J. , Malackowski, H. , Gartner, C. , Desai, J. , Cox, E. , Habib, A. and Bullock, D. (2023) Methodology for Automatically Setting Camera View to Mile Marker for Traffic Incident Management. Journal of Transportation Technologies, 13, 708-730. doi: 10.4236/jtts.2023.134033.

1. Introduction

1.1. Background and Problem Statement

Traffic Incident Management (TIM) consists of a series of planned and coordinated efforts to detect, respond to, and clear traffic incidents as safely and quickly as possible to reduce secondary crashes [1]. Studies have shown that crash rates increase by a factor of 24 on congested sections of interstates compared to free-flowing sections [2].

Incidents are typically detected and reported by either roadway related Intelligent Transportation System (ITS) sensors or incoming 911 calls. To effectively dispatch appropriate resources, operators in the call centers must verify these incidents, often with a variety of simultaneous cell calls with varying levels of location accuracy. Agency roadside cameras are an important asset for dispatchers to quickly identify the precise roadway location and nature of incidents when receiving reports from motorists. Reconciling position reports that are often mile marker (MM) based, with cameras that operate in a Pan-Tilt-Zoom (PTZ) coordinate system relies on dispatchers having detailed knowledge of these cameras and the interstate system. Although experienced dispatchers are quite efficient at this task, there is a fairly steep learning curve for new staff, and it is important to minimize time “searching” for incidents.

This paper reports on a camera-to-mile marker mapping technique that automatically sets the camera view to specified mile marker signs (nominally every 0.1 mile) within the field-of-view of the camera with a simple mouse click.

1.2. Camera Network and Coverage of Mile Marker Mapping Used in Study

Indiana has deployed more than 500 roadside cameras with PTZ functionality on its interstates and integrated them into traffic management centers (TMC) (Table 1). Approximately 350 of those cameras have been integrated into a system that can automatically set the camera views by mile markers. This provides coverage of approximately 780 of the 2250 directional miles of Indiana’s interstates.

2. Literature Review

2.1. Freeway Camera

Traffic cameras were first routinely deployed in the early 1970s by a variety of agencies across the country. The Minnesota Department of Transportation was one of the early adopters of freeway cameras [3] [4]. This technology operated on

Table 1. Statewide camera integration by Indiana interstates.

closed circuit television, which required a separate monitor at TMCs for each camera, often without PTZ capability. In the early days, cameras were primarily used for qualitative traffic system monitoring, but as technology improved, their use grew to include incident verification as well as research into shockwaves and other traffic flow characteristics [5]-[14]. Similar deployments in large urban areas occurred around the same time and further stimulated interest in deploying cameras to assist with freeway management activities. One of the outcomes of these early freeway monitoring centers was recognizing the opportunity to improve integration between highway agency centers and 911 dispatchers so that the nature and precise location of events reported by callers could be quickly verified. Although this coordination grew, as far back as 2007, TMC operators reported that the current methods of incident detection were challenging for effective dispatch [15].

By 2018, interstate camera deployments had grown to approximately 280 cameras per state, with some states managing over 800 cameras [16]. With many states now having a large number of cameras deployed, agencies face the challenge of training a diverse group of users to set these cameras effectively and realize their value [17]. Furthermore, when ITS cameras were first installed, agencies placed them at opportunistic locations where power was easily accessible; there is now a need to go back and systematically backfill camera placement in key areas with limited visibility.

2.2. Connected Vehicle

As camera deployments have matured, there has been enormous growth in connected vehicle data that can complement freeway cameras [18]-[23]. In the US, over 500 billion connected vehicle (CV) records per month are accessible in near real time [24] [25]. Each of these records has a unique trip and data point identifier with GPS position, timestamp, speed, heading, and ignition status. This data represents approximately 5% [19] [20] of the vehicles on the roadway, with a typical latency of under 1 minute [21]. This data is used for a variety of real-time and after-action analyses [26] [27] [28]. Integrating this CV data with freeway cameras helps provide real-time assessment of traffic flow, weather conditions, and work zone impacts [26] [29] [30] [31] [32]. When CV data is closely integrated with freeway cameras, it can provide tremendous insight into impacts on freeway flow and help identify opportunities to facilitate improvements in incident management [1] [15] [33].
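The CV record fields described above can be sketched as a simple data structure; the field names and types here are illustrative assumptions, not the data provider's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime

# Illustrative schema for one CV record as described in the text:
# unique trip and point identifiers, GPS position, timestamp, speed,
# heading, and ignition status. Field names are assumptions.
@dataclass
class CVRecord:
    trip_id: str
    point_id: int
    latitude: float
    longitude: float
    timestamp: datetime
    speed_mph: float
    heading_deg: float
    ignition_on: bool

# Example record near the I-65 work zone incident discussed later
rec = CVRecord("trip-001", 7, 39.52, -86.08,
               datetime(2023, 3, 22, 14, 28), 23.5, 358.0, True)
print(rec.speed_mph)  # 23.5
```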

3. Study Objectives

The objective of this paper is to demonstrate a process for systematically mapping cameras to mile markers so that operators can rapidly select and set the appropriate camera views during an incident. Not only does this provide opportunities to improve TIM activities, it also provides an inventory, by mile marker, of whether an interstate segment has camera coverage and of the quality of that coverage. Finally, by formally defining a relationship between mile markers and PTZ camera settings, CV data can be effectively integrated with cameras to automate the camera view when shock waves associated with congestion or incidents are detected in the CV data.

4. Current Incident Verification Process at TMC

Figure 1 provides an overview of the incident verification and coordination process at the TMC. During an incident, detection most commonly occurs via incoming 911 calls. Operators frequently receive multiple calls with varying levels of spatial accuracy, usually reported in terms of routes and mile markers. Sometimes these reports are very precise; other times, the reported mile marker simply corresponds to wherever the driver happens to be when the operator queries the motorist for the location.

Figure 1. Traffic incident verification process.

This information is relayed to the TMC where operators must select from hundreds of cameras to identify relevant camera(s). Operators require extensive knowledge of the interstate system and appropriate cameras to search for the event. Once the precise location of the incident is determined, operators typically cycle through multiple cameras to identify the ones with the best view. After verifying the incident, TMC staff coordinate with emergency responders enroute, on the scene, and on some occasions direct additional response such as motorist assist patrols and diversions.

5. Integration of Verification Time into the FHWA TIM Diagram

Figure 2(a) illustrates the standard FHWA event sequence for TIM [34] from start of incident to incident resolution. This is a very effective chart to explain


Figure 2. Traffic incident management event sequence. (a) FHWA; (b) Modified Event Sequence with TMC Verification (TEYE).

the TIM process, but since incident verification is such a critical part of TIM, we believe it is important to add one more reference point to this chart: a verification time documenting when an incident is verified (Figure 2(b)). We refer to this as TEYE, corresponding to when the TMC has camera eyes on the incident. Tracking this reference time is an important part of developing comprehensive TIM performance measures to identify where there are opportunities to improve detection procedures, coordination of activities, or perhaps infrastructure investment (additional cameras).

6. Case Study Illustrating Mile Marker Positioning of Cameras Integrated with CV Data

Although urban areas have evolved to have close integration of 911 dispatch centers with agencies, rural sections of interstate have fewer incidents and are served by a diverse set of first responders, many of them volunteers. This can often result in substantial delays in the TMC receiving notification of an interstate incident.

Figure 3 illustrates an incident that occurred near MM 246 in the northbound direction of rural I-65 around 14:26 (T0) on March 22, 2023. This incident was associated with a lane changing crash inside a work zone. Figure 3(a) presents a time-space diagram of CV trajectories color coded by speed. The first evidence of a slowdown in the CV data is captured around MM 245.6 at 14:28 (callout i), within 2 minutes of the reported 911 incident time. Figure 3(b) shows the nearest camera view at T0, whereas Figure 3(c) shows the first camera eyes on the incident at 14:32 (TEYE). During this incident, it took the operators just under 6 minutes to locate and verify the incident. Callouts iii and iv on this figure represent the same mile markers in Figure 3(b) and Figure 3(c). One can see the lane restrictions lasted until about 15:20 (callout v on the heatmap) and the queue cleared just after 16:00 (Figure 3(d)).

Figure 4 illustrates a truck roll-over incident that occurred near MM 201 in the southbound direction of rural I-65 around 14:36 (T0) on Feb 09, 2023, and Figure 4(a) represents a similar CV time-space diagram. The first evidence of slowdown in the CV data is captured around MM 200.9 at 14:38 (callout i)—approximately 2 minutes after the reported 911 incident time. Figure 4(b) corresponds to the nearest camera view at 14:36 and queue formation can be seen within 4 minutes of the incident (Figure 4(c)). This camera remains in that view for approximately 54 minutes (Figure 4(d)) until positioned on the incident at 15:36 (Figure 4(e)), resulting in a verification time of approximately 1 hour (TEYE-T0). During real-time incident dispatching, it is important to consistently reduce the time it takes to identify the most relevant cameras and to set their views, and thereby provide opportunities for coordinating diversion routes, message boards, and other complementary TIM resources.
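The verification time metric used in these two case studies (TEYE - T0) is simple to compute once both timestamps are logged; a minimal sketch, assuming same-day HH:MM timestamps:

```python
from datetime import datetime

def verification_time_minutes(t0: str, t_eye: str) -> float:
    """Compute the proposed TEYE - T0 verification metric in minutes.
    Timestamps are assumed to be 'HH:MM' strings on the same day."""
    fmt = "%H:%M"
    delta = datetime.strptime(t_eye, fmt) - datetime.strptime(t0, fmt)
    return delta.total_seconds() / 60

# Incident from Figure 3: reported 14:26, first camera eyes 14:32
print(verification_time_minutes("14:26", "14:32"))  # 6.0
# Incident from Figure 4: reported 14:36, camera set on incident 15:36
print(verification_time_minutes("14:36", "15:36"))  # 60.0
```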

7. Methodology for Mapping Mile Markers to Camera Coordinates

Figure 5(a) shows an aerial map of roadside cameras and mile markers between


Figure 3. Example work zone related incident on I-65N between MM 240 and 250 on March 22, 2023. (a) CV trajectory speed heatmap during incident; (b) Camera 48: View at Time of Incident 14:26 (T0); (c) Camera 48: First visual of incident 14:32 (TEYE); (d) Camera 48: Queue cleared just after 16:00.


Figure 4. Example truck roll over crash on I-65S between MM 200 and 201 on February 9, 2023. (a) CV trajectory speed heatmap during incident; (b) Camera 480: View at time of incident 14:36 (T0) (callout i); (c) Camera 480: First view of queue formation around 14:40 (callout ii); (d) Camera 480: View at 15:34 (callout iii); (e) Camera 480: First camera eyes on the incident 15:36 (Teye) (callout iv).


Figure 5. Mile markers in field-of-view of ITS cameras 92 and 193 on I-70. (a) Mile marker vicinity map of camera 92 and 193; (b) Camera 92: View of camera 193, MM 92.8 (callout i) and MM 93 (callout ii); (c) Camera 193: View of MM 92.8 (callout i); (d) Camera 193: View of MM 93.0 (callout ii).

MM 92.4 and 93.4 along I-70. There are two cameras, C92 and C193, along this section. Figure 5(b) captures a view from C92 showing camera 193 (callout C193) and mile markers 92.8 (callout i) and 93.0 (callout ii) on both I-70E and I-70W. Figure 5(c) and Figure 5(d) display a sample view from C193 showing MM 92.8 (callout i) and MM 93.0 (callout ii) on I-70W, respectively. Camera 92, from which Figure 5(b) was captured, is also shown in Figure 5(c) and Figure 5(d).

To set the camera view to a mile marker in its field-of-view, the corresponding PTZ settings need to be mapped to the respective mile markers. The pan and tilt are the two main settings required for this approach. The zoom level is a bit more subjective and is usually selected to cover a view of approximately 0.1 to 0.2 miles on each side of the desired mile marker. These cameras mostly operate with 30× optical zoom and a maximum digital magnification of 12×.

Figure 6 illustrates a graphic that maps PTZ settings of C193 to mile markers 92.8 (callout i) and 93.0 (callout ii) on I-70W. The view showing MM 92.8 (Figure 5(c)) is defined by a set of PTZ settings {Pi, Ti, Zi} and view showing MM 93.0 (Figure 5(d)) is defined by another set of PTZ settings {Pii, Tii, Zii}. In general, the mile marker “m” from a camera “c” is a function of PTZ settings of “c” as shown by

m = f(Pc, Tc, Zc)    (1)

Table 2 shows the respective PTZ settings mapped for MM 92.8 and 93.0 from C193. These settings are extracted using a Simple Network Management Protocol (SNMP) GET command based on the standards listed in the National Transportation Communications for ITS Protocol (NTCIP) 1205 [35]. A look-up table (LUT) is then established that stores the PTZ settings for every mile marker in the field-of-view. Finally, to set the camera view on a specified mile marker, the appropriate settings are sent to the camera using SNMP SET commands. For illustration purposes, the camera views in Figure 5(c) and Figure 5(d) are very close, with only small variation in the PTZ values, as shown in Table 2. The subsequent discussion of Table 3 in the next section illustrates the

Figure 6. Mapping I-70W MM 92.8 and MM 93.0 to PTZ coordinates of Camera 193.

Table 2. Look-up table of PTZ settings and mile markers from Camera 193.

Table 3. Look-up table of PTZ settings, mile markers, and visibility assessments for Cameras 92, 93, and 193.

importance of this scaling over much larger distances to provide PTZ settings corresponding to over 5000 discrete mile marker locations covering 780 directional miles of interstate.
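The look-up and SET workflow described above can be sketched as follows; the PTZ values, key structure, and `snmp_set` stub are illustrative placeholders, not the actual Table 2 entries or NTCIP 1205 object identifiers:

```python
# Sketch of the mile marker -> PTZ look-up table and the SET step.
# All PTZ values below are made-up placeholders for illustration.
PTZ_LUT = {
    # (camera_id, route, mile_marker): (pan, tilt, zoom)
    (193, "I-70W", 92.8): (1540, -120, 300),
    (193, "I-70W", 93.0): (1610, -105, 340),
}

def snmp_set(camera_id, pan, tilt, zoom):
    """Placeholder for the NTCIP 1205 SNMP SET commands that would
    actually position the camera in the deployed system."""
    return {"camera": camera_id, "pan": pan, "tilt": tilt, "zoom": zoom}

def set_view(camera_id, route, mile_marker):
    """Look up the stored PTZ settings for a mile marker and send them."""
    pan, tilt, zoom = PTZ_LUT[(camera_id, route, mile_marker)]
    return snmp_set(camera_id, pan, tilt, zoom)

print(set_view(193, "I-70W", 92.8))
```

In the deployed system, `snmp_set` would issue SET requests against the camera's NTCIP 1205 pan/tilt/zoom objects rather than returning a dictionary.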

8. Image Quality Assessment

In general, the “design” process of camera placement has relied on a combination of placing cameras where power and communication can be obtained cost effectively, coupled with simple heuristics on spacing and placement near curves in the roadway. However, the fact that a camera is near a mile marker does not mean that the section of road is visible to the camera. Camera images of roadways can sometimes be occluded by vegetation, bridge decks, utility poles, vertical curves, horizontal curves, signs, and other roadside items. Also, as distance from the camera increases, atmospheric haze and zoom resolution can substantially impact image quality. Figure 7 shows a few examples highlighting the visibility quality of mapped mile markers on I-70W. Figure 7(a) illustrates a plot color coded by visibility of mile markers between MM 91 and 95 from the cameras in this area. Mile markers on this section of interstate are placed every 0.2 miles. Callouts on this plot correspond to the subfigures showing images from Camera 193. MM 93.0 (callout i) and MM 93.6 (callout iv) are fully visible, whereas MM 93.4 (callout iii) and MM 93.2 (callout ii) are partially and fully occluded by vegetation, respectively.

Table 3 documents the mapping of PTZ settings, mile markers, and visibility assessments for cameras 92, 93, and 193 from Figure 7(a) along the section of I-70W between MM 91 and 94. This table also includes an empirical image quality attribute to help camera operators quickly identify the most relevant camera(s) with the highest quality images for use in locating and verifying incidents.

Table 3 is used for illustration purposes, but the full table for the 364 cameras (Table 1) covering 780 directional miles of interstate and around 5000 discrete mile markers has over 10,000 records that were generated by manually mapping each camera location to 0.1 or 0.2-mile resolution. Some mile markers are visible from multiple cameras and may therefore have multiple mappings/records. One of the additional benefits of this manual mapping and image quality assessment is the ability to develop visualizations of this camera coverage by image quality. The use case for that data is described in the next section.


Figure 7. Mapped image quality assessment. (a) I-70W mile marker visibility by camera; (b) Camera 193: Full Visibility of MM 93.0; (c) Camera 193: Occluded View of MM 93.2; (d) Camera 193: Partial Visibility of MM 93.4; (e) Camera 193: Full Visibility of MM 93.6.

9. Graphics to Visualize Opportunities for Future Camera Investments and Upgrades

Figure 8 illustrates a graphical representation of Table 3 along the entire stretch of I-70W for 51 unique cameras. The diagram is linear referenced by mile marker and color coded by percent visibility along every mile. Several miles on the east end (callout i) and around the MM 70 region are fully visible. A few miles near the MM 6 region are partially visible (callout ii), and a few locations near MM 110 are fully occluded (callout iv). There are several white sections on this route where cameras are not deployed, especially the sections between MM 20 - 50, MM 110 - 120, and MM 130 - 140 (callout v). Areas where cameras were recently deployed but are pending integration are shown in black hatches (callout iii).

Figure 8. I-70W camera visibility by mile marker.

Figure 9 shows a map view of the visibility. Figure 9(a) shows the same data as Figure 8, but in a map view for I-70. Figure 9(b) shows a map view illustrating the coverage of 545 cameras across all Indiana interstates. In both Figure 9(a) and Figure 9(b), sections where cameras are pending integration are shown in purple instead of black hatch for better contrast in map view. Qualitative assessment shows that I-465 and I-94 have almost full coverage, as do the urbanized areas of I-65. Potential opportunities for future investments on I-64, I-69, I-70, and I-74 are easy to identify as white areas.


Figure 9. Qualitative assessment of statewide visibility map. (a) I-70 with mile marker callouts every 10 miles; (b) All interstates.

Table 4 is similar to Table 1, but with an additional metric that shows the percent of interstate miles fully or partially visible from integrated cameras along a route. I-265, I-465, and I-94 have more than 90% camera coverage, whereas I-64, I-69, I-70, and I-74 have less than 30% coverage. Overall, approximately 35% of the interstate routes have full or partial coverage. In addition to the percentage of cameras integrated, the percent visibility measure provides an additional metric for decision makers and agencies to consider while prioritizing future deployments and camera upgrades. For example, 81% of currently deployed cameras along I-64 are integrated; however, this covers only a little more than 10% of the entire route.

These graphical illustrations and performance metrics are important tools for agencies and decision makers to prioritize future capital investments and camera deployments. Sections with partial/full occlusion may also benefit from additional camera upgrades such as extended/optical zoom and enhanced focus.
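The percent visibility metric summarized in Table 4 could be computed from the mapping table roughly as follows; the records are illustrative, and this sketch counts mapped mile markers rather than route miles (the paper's route-level metric would also count unmapped segments as not visible):

```python
from collections import defaultdict

# Each record: (route, mile_marker, visibility) taken from the mapping
# table; visibility in {"full", "partial", "occluded"}. Sample values
# loosely follow the Figure 7 example and are not the real table.
records = [
    ("I-70W", 93.0, "full"),
    ("I-70W", 93.2, "occluded"),
    ("I-70W", 93.4, "partial"),
    ("I-70W", 93.6, "full"),
]

def percent_visible(records):
    """Percent of mapped mile markers per route that are fully or
    partially visible from at least one camera."""
    rank = {"occluded": 0, "partial": 1, "full": 2}
    best = {}  # (route, mm) -> best visibility across cameras
    for route, mm, vis in records:
        key = (route, mm)
        if key not in best or rank[vis] > rank[best[key]]:
            best[key] = vis
    totals, visible = defaultdict(int), defaultdict(int)
    for (route, _), vis in best.items():
        totals[route] += 1
        if vis in ("full", "partial"):
            visible[route] += 1
    return {r: 100.0 * visible[r] / totals[r] for r in totals}

print(percent_visible(records))  # {'I-70W': 75.0}
```

Taking the best visibility across cameras matters because, as noted earlier, some mile markers have multiple mappings from different cameras.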

10. Implementation of Mile Marker to PTZ Camera Mapping

10.1. Automate Camera View to Mile Marker Application

After generating the statewide table of mile marker to PTZ settings for the 364 cameras, an application (Figure 10) was developed to assist operators in quickly setting the camera view to a specified mile marker. Figure 10(a) shows a snapshot of the application where operators can use the dropdown to select the route and input a range of mile markers. The application displays a tile for every mile marker (X-axis) in the field-of-view of the available cameras (Y-axis). Double clicking a tile sets the corresponding camera view to the specified mile marker. For example, in Figure 10, double clicking callout b moves camera 93 to MM 93.4 (Figure 10(b)), callout c to MM 93.6 (Figure 10(c)), callout d to MM 93.8 (Figure 10(d)) and callout e to the partially visible MM 94.0 (Figure 10(e)). A

Table 4. Statewide camera visibility by interstate.


Figure 10. Automate camera view to mile marker application. (a) Application Interface Link to YouTube tour of Camera 93: https://tinyurl.com/I70-CAM93; (b) Camera 93: View of MM 93.4; (c) Camera 93: View of MM 93.6; (d) Camera 93: View of MM 94.0; (e) Camera 93: View of MM 94.4.

“Tour” feature is also implemented that automatically cycles the camera view to all the mile markers available for that camera on the current application view. A YouTube video showing this feature for camera 93 is presented via a QR code in Figure 10(a), or by clicking on: https://tinyurl.com/I70-CAM93. This feature is particularly helpful for operators while they are trying to quickly identify incidents over a spatial area based on positional reports obtained from 911 calls.
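The “Tour” behavior amounts to stepping a camera through every entry it has in the look-up table; a minimal sketch, with assumed PTZ values and an optional dwell between stops:

```python
import time

# Minimal sketch of the "Tour" feature: cycle one camera through every
# mile marker mapped to it. PTZ values are illustrative placeholders.
PTZ_LUT = {
    (93, "I-70W", 93.4): (1200, -90, 280),
    (93, "I-70W", 93.6): (1260, -85, 300),
    (93, "I-70W", 93.8): (1320, -80, 320),
}

def tour(camera_id, dwell_seconds=0):
    """Step the camera through each mapped mile marker in order,
    returning the list of mile markers visited."""
    visited = []
    stops = sorted(mm for (cam, _, mm) in PTZ_LUT if cam == camera_id)
    for mm in stops:
        # In the deployed system an SNMP SET would position the camera
        # here before pausing for the dwell interval.
        visited.append(mm)
        time.sleep(dwell_seconds)
    return visited

print(tour(93))  # [93.4, 93.6, 93.8]
```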

10.2. Integration of MM to PTZ Mapping in Incident Verification Process at TMC

As discussed previously, reconciling position reports that are often mile marker based with cameras that operate in a PTZ coordinate system relies on operators having detailed knowledge of hundreds of cameras. Automatically setting the available camera views and identifying the best one provides an opportunity to reduce human bottlenecks and decrease dispatch times. Figure 11 is a revised version of Figure 1 that illustrates how the above application can be integrated into the workflow. After receiving reports with varying levels of spatial accuracy, operators can first use the “Tour” feature to locate the mile marker nearest the incident. They can then use the application to identify the camera with the best view of the incident (callout i).

11. Future Research

11.1. Systematically Deriving PTZ Settings of Mile Markers Using LiDAR Data

Although the over 10,000 mile marker to PTZ mapping entries for the 364 cameras were generated manually, this mapping has the potential to be derived directly

Figure 11. Integrating automated camera positioning dashboard into TMC verification workflow.

from LiDAR surveys that generate 3D surveys of both roads and cameras. This data can be used to systematically estimate the Interior/Exterior Orientation Parameters (IOP/EOP) or PTZ coordinates of pre-specified camera settings based on the Direct Linear Transformation model [36] [37] .

The proposed strategy for controlling a PTZ camera to have a specific location in its field-of-view is based on using the geospatial data at the camera vicinity to determine the camera’s characteristics (IOP/EOP) for different camera settings. Then, the IOP/EOP together with the geospatial data can be used to define the needed camera settings to have a specific latitude/longitude (location of mile marker) in the field-of-view of the camera while having sufficient geometric resolution (defined by what is known as Ground Sampling Distance or GSD—i.e., the extent of ground covered by a single pixel).
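For reference, the standard eleven-parameter Direct Linear Transformation model cited above relates a surveyed ground point (X, Y, Z) to its image coordinates (x, y); the L coefficients jointly encode the IOP/EOP and can be estimated from six or more known point correspondences, such as mile marker signs identified in the LiDAR point cloud:

```latex
x = \frac{L_1 X + L_2 Y + L_3 Z + L_4}{L_9 X + L_{10} Y + L_{11} Z + 1},
\qquad
y = \frac{L_5 X + L_6 Y + L_7 Z + L_8}{L_9 X + L_{10} Y + L_{11} Z + 1}
```

Once the coefficients are solved for a given camera setting, the model can be inverted in the sense described above: given a mile marker's latitude/longitude from the survey, the PTZ setting that places that point in the field-of-view at a sufficient GSD can be predicted rather than mapped by hand.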

11.2. Using CV Trajectory Data to Automatically Identify Incidents and Set Camera View

CV trajectory data is well documented and provides very precise identification of the location and time of an incident [21] [26]. Since this CV data is reported as latitude and longitude, it can be rapidly linear referenced to a mile marker as soon as records arrive [21]. Figure 12 shows a CV trajectory heatmap (similar to Figure 3(a)) on I-70E between MM 90 and 110 from 8 am to noon on August 27, 2022. Around 9:20 am, a crash occurred near MM 100 (callout i) that caused significant slowdowns and queuing, closing the interstate for approximately 30 minutes. Such sudden drops in speed, combined with extensive closure of the interstate, can be used as a trigger to identify the precise location of incidents and slowdowns. The latitude/longitude data can then be linear referenced to the nearest mile marker, and the PTZ settings can be transmitted to set the nearest camera(s) on the incident.

Figure 12. Using CV trajectory data to detect incidents and slowdowns.
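The trigger-and-set logic described above can be sketched as follows; the binning, speed threshold, and sample records are illustrative assumptions, not the deployed detection algorithm:

```python
from statistics import mean

# Illustrative trigger: flag any mile-marker bin whose mean CV speed
# drops below a threshold, then pick the nearest mapped camera.
# Records are (mile_marker, speed_mph); all values are made up.

def detect_slowdown(records, bin_size=0.5, threshold_mph=20.0):
    """Return sorted mile-marker bins whose mean speed is below threshold."""
    bins = {}
    for mm, speed in records:
        b = round(mm / bin_size) * bin_size  # snap to nearest bin center
        bins.setdefault(b, []).append(speed)
    return sorted(b for b, speeds in bins.items()
                  if mean(speeds) < threshold_mph)

def nearest_camera(mm, camera_mms):
    """Pick the camera whose mapped location is closest to the slowdown."""
    return min(camera_mms, key=lambda cam_mm: abs(cam_mm - mm))

cv = [(99.8, 12.0), (100.1, 8.0), (100.2, 15.0), (101.4, 64.0), (102.0, 61.0)]
slow = detect_slowdown(cv)
print(slow)                                           # [100.0]
print(nearest_camera(slow[0], [92.0, 100.6, 108.3]))  # 100.6
```

In practice the mile marker would itself come from linear referencing each record's latitude/longitude against the route geometry, and the selected camera would then be positioned via the mile marker to PTZ look-up table.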

12. Conclusions

An efficient TIM program requires early detection of incidents, quick verification, rapid on-scene response, and a swift return of traffic to normal conditions. This study proposes a framework that assists camera operators and dispatchers in quickly identifying and verifying incidents using roadside cameras.

A new performance metric, verification time (TEYE), is proposed for integration into the FHWA TIM event sequence; it captures the time it takes for TMC operators to obtain a first visual on roadside cameras (Figure 2(b)). This is followed by a scalable methodology and a table that stores camera PTZ settings for the mile markers within each camera’s field-of-view (Figure 6). Performance metrics that summarize spatial camera coverage and image quality, for use in both dispatch and long-term statewide planning of camera deployments, are also developed. Images from over 350 cameras along Indiana interstates are mapped to more than 5000 discrete mile marker signs to generate a statewide spatial camera coverage map (Figure 9). Results show that nearly 35% of the interstates in Indiana have sufficient camera coverage (Table 4). Finally, a web application is demonstrated that assists operators in quickly setting a camera view to specified mile marker signs on interstate routes (Figure 10).

Future research directions briefly discussed include the use of LiDAR geospatial data to automate the mapping of mile markers to camera PTZ settings, and the integration of CV trajectory data to detect incidents and set the nearest camera view on the incident (Figure 12).

The automated camera view to mile marker application is an important tool that allows TMC operators to quickly identify the nearest camera with the best view of the incident. The incident verification time performance metric proposed in this study will help transportation agencies understand how quickly they are able to locate and validate an incident. As new technologies and tools are integrated, this metric will help quantify their impact on verification time. Finally, the statewide camera coverage map and percent of interstate visibility metric are valuable performance measures that decision makers can use as guidance for investment planning and camera upgrades.

Transportation agencies deploy several hundred cameras for traffic monitoring. During an incident, operators and dispatchers must search through these cameras to identify the precise location of the incident, which requires extensive knowledge of camera locations and routes. It is important to reduce the workload of camera operators so they can swiftly dispatch emergency responders to the scene and save lives.

Acknowledgements

Connected vehicle data used in this study was provided by Wejo Data Services Inc. Icons used in a few figures were adopted from thenounproject.com. This work was supported by the Joint Transportation Research Program and the Indiana Department of Transportation. The contents of this paper reflect the views of the authors, who are responsible for the facts and the accuracy of the data presented herein, and do not necessarily reflect the official views or policies of the sponsoring organizations. These contents do not constitute a standard, specification, or regulation.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] FHWA: Federal Highway Administration (2023) Welcome to Traffic Incident Management (TIM).
https://ops.fhwa.dot.gov/tim/
[2] Mekker, M.M., Remias, S.M., McNamara, M.L. and Bullock, D.M. (2020) Characterizing Interstate Crash Rates Based on Traffic Congestion Using Probe Vehicle Data. JTRP Affiliated Reports. Paper 31.
https://docs.lib.purdue.edu/jtrpaffdocs/31/
https://doi.org/10.5703/1288284317119
[3] Stehr, R.A. (1991) Minnesota Department of Transportation Experience in the Application of Advanced Traffic Management Systems. Applications of Advanced Technologies in Transportation Engineering, Minneapolis, MN, 458-462.
https://cedb.asce.org/CEDBsearch/record.jsp?dockey=0074134
[4] Castleman, M. (2019) A History of Traffic Management Technology.
https://streets.mn/2019/10/16/a-history-of-traffic-management-technology/
[5] Qiu, M., et al. (2021) Intelligent Highway Lane Center Identification from Surveillance Camera Video. 2021 IEEE International Intelligent Transportation Systems Conference (ITSC), Indianapolis, 19-22 September 2021, 2506-2511.
https://doi.org/10.1109/ITSC48978.2021.9564560
[6] Chien, S., Chen, Y., Christopher, L., Qiu, M. and Ding, Z. (2022) Road Condition Detection and Classification from Existing CCTV Feed. Purdue University, West Lafayette.
https://doi.org/10.5703/1288284317364
[7] Masihullah, S. and Kandaswamy, S. (2022) A Decentralized Collaborative Strategy for PTZ Camera Network Tracking System Using Graph Learning: Assessing Strategies for Information Sharing in a PTZ Camera Network for Improving Vehicle Tracking, via Agent-Based Simulations. Proceedings of the 2022 5th International Conference on Mathematics and Statistics, Paris, 17-19 June 2022, 59-65.
https://doi.org/10.1145/3545839.3545849
[8] Sonnleitner, E., Barth, O., Palmanshofer, A. and Kurz, M. (2020) Traffic Measurement and Congestion Detection Based on Real-Time Highway Video Data. Applied Sciences, 10, Article 6270.
https://doi.org/10.3390/app10186270
[9] Fredianelli, L., et al. (2022) Traffic Flow Detection Using Camera Images and Machine Learning Methods in ITS for Noise Map and Action Plan Optimization. Sensors, 22, Article 1929.
https://doi.org/10.3390/s22051929
[10] Ali Al Mahairzi, Z.S.A. and Reddy, N.S. (2017) Smart Road Technology for Traffic Management and ITS Infrastructure Assessment: A Case Study of Muscat Express Highway. International Journal of Advanced Engineering, Management and Science, 3, 576-583.
https://doi.org/10.24001/ijaems.3.5.28
[11] Franke, U., Bottiger, F., Zomotor, Z. and Seeberger, D. (1995) Truck Platooning in Mixed Traffic. Proceedings of the Intelligent Vehicles '95 Symposium, Detroit, 25-26 September 1995, 1-6.
[12] Kim, W., Li, H., Mathew, J.K. and Bullock, D.M. (2020) Analytical Techniques to Use Historical Probe Data to Assess Platooning Potential on Interstate Corridors. International Conference on Transportation and Development, Seattle, 26-29 May 2020, 284-295.
https://doi.org/10.1061/9780784483138.025
[13] Dahmane, K., Duthon, P., Bernardin, F., Colomb, M., Chausse, F. and Blanc, C. (2021) WeatherEye-Proposal of an Algorithm Able to Classify Weather Conditions from Traffic Camera Images. Atmosphere, 12, Article 717.
https://doi.org/10.3390/atmos12060717
[14] Mathew, J., Thomas, H., Sharma, A., Devi, L. and Rilett, L. (2013) Studying Platoon Dispersion Characteristics under Heterogeneous Traffic in India. Procedia-Social and Behavioral Sciences, 104, 422-429.
https://doi.org/10.1016/j.sbspro.2013.11.135
[15] Williams, B. and Guin, A. (2007) Traffic Management Center Use of Incident Detection Algorithms: Findings of a Nationwide Survey. IEEE Transactions on Intelligent Transportation Systems, 8, 351-358.
https://doi.org/10.1109/TITS.2007.894193
[16] Laan, Z.V., Sadabadi, K.F. and Jacobs, T. (2018) Video Analytics Usage in Transportation Agencies: Nationwide Survey and Maryland Feasibility Study. Transportation Research Record, 2672, 34-44.
https://doi.org/10.1177/0361198118787083
[17] Federal Highway Administration (2023) Transportation Management Center Video Recording and Archiving Best General Practices: Chapter 8 Case Studies.
https://ops.fhwa.dot.gov/publications/fhwahop16033/chap8.htm
[18] Waddell, J.M., Remias, S.M. and Kirsch, J.N. (2020) Characterizing Traffic-Signal Performance and Corridor Reliability Using Crowd-Sourced Probe Vehicle Trajectories. Journal of Transportation Engineering, Part A: Systems, 146.
https://doi.org/10.1061/JTEPBS.0000378
[19] Hunter, M., Mathew, J., Cox, E., Blackwell, M. and Bullock, D. (2021) Estimation of Connected Vehicle Penetration Rate on Indiana Roadways. JTRP Affiliated Reports. Paper 37.
https://doi.org/10.5703/1288284317343
[20] Sakhare, R.S., Hunter, M., Mukai, J., Li, H. and Bullock, D.M. (2022) Truck and Passenger Car Connected Vehicle Penetration on Indiana Roadways. Journal of Transportation Technologies, 12, 578-599.
https://doi.org/10.4236/jtts.2022.124034
[21] Mathew, J.K., Desai, J.C., Sakhare, R.S., Kim, W., Li, H. and Bullock, D.M. (2021) Big Data Applications for Managing Roadways. ITE Journal, 91, 28-35.
[22] Cao, Y., Tang, K., Sun, J. and Ji, Y. (2021) Day-to-Day Dynamic Origin-Destination Flow Estimation Using Connected Vehicle Trajectories and Automatic Vehicle Identification Data. Transportation Research Part C: Emerging Technologies, 129, Article ID: 103241.
https://doi.org/10.1016/j.trc.2021.103241
[23] Yao, H., Li, X. and Yang, X. (2023) Physics-Aware Learning-Based Vehicle Trajectory Prediction of Congested Traffic in a Connected Vehicle Environment. IEEE Transactions on Vehicular Technology, 72, 102-112.
https://doi.org/10.1109/TVT.2022.3203906
[24] Desai, J., Mathew, J., Li, H., Sakhare, R., Horton, D. and Bullock, D. (2022) National Mobility Analysis for All Interstate Routes in the United States: August 2022. Purdue University, West Lafayette.
https://doi.org/10.5703/1288284317585
[25] Desai, J., Mathew, J., Li, H., Sakhare, R., Horton, D. and Bullock, D. (2022) National Mobility Analysis for All Interstate Routes in the United States: December 2022. Purdue University, West Lafayette.
https://doi.org/10.5703/1288284317585
[26] Sakhare, R.S., Desai, J., Li, H., Kachler, M.A. and Bullock, D.M. (2022) Methodology for Monitoring Work Zones Traffic Operations Using Connected Vehicle Data. Safety, 8, Article 41.
https://doi.org/10.3390/safety8020041
[27] Li, H., et al. (2019) Connected Vehicle Corridor Deployment and Performance Measures for Assessment. Purdue University, West Lafayette.
https://doi.org/10.5703/1288284317108
[28] Saldivar-Carranza, E.D., Li, H., Gayen, S., Taylor, M., Sturdevant, J. and Bullock, D.M. (2023) Comparison of Arrivals on Green Estimations from Vehicle Detection and Connected Vehicle Data. Transportation Research Record: Journal of the Transportation Research Board.
https://doi.org/10.1177/03611981231168116
[29] Desai, J., et al. (2021) Leveraging Telematics for Winter Operations Performance Measures and Tactical Adjustment. Journal of Transportation Technologies, 11, 611-627.
[30] Sakhare, R.S., et al. (2021) Evaluation of the Impact of Queue Trucks with Navigation Alerts Using Connected Vehicle Data. Journal of Transportation Technologies, 11, 561-576.
https://doi.org/10.4236/jtts.2021.114035
[31] Sakhare, R.S., Zhang, Y., Li, H. and Bullock, D.M. (2023) Impact of Rain Intensity on Interstate Traffic Speeds Using Connected Vehicle Data. Vehicles, 5, 133-155.
https://doi.org/10.3390/vehicles5010009
[32] Desai, J., et al. (2020) Dashboards for Real-Time Monitoring of Winter Operations Activities and After-Action Assessment. JTRP Affiliated Reports. Paper 33.
https://doi.org/10.5703/1288284317252
[33] Shah, V., Hatcher, G., Greer, E., Fraser, J., Franz, M. and Sadabadi, K. (2022) Guidelines for Quantifying Benefits of Traffic Incident Management Strategies. NCHRP Research Report, No. 981.
https://trid.trb.org/view/1909382
[34] James, W., McKinzie, S., Benson, W. and Heise, C. (2015) Crash Investigation and Reconstruction Technologies and Best Practices. Federal Highway Administration, Washington DC.
https://rosap.ntl.bts.gov/view/dot/50639
[35] National Transportation Communications for ITS Protocol 1205 v01, Amendment 1 (2023) Object Definitions for Closed Circuit Television (CCTV) Camera Control. A Joint Standard of AASHTO, ITE, and NEMA, Washington DC.
https://www.ntcip.org/file/2018/11/NTCIP1205v01Amd1-14j-1.pdf
[36] Abdel-Aziz, Y.I., Karara, H.M. and Hauck, M. (2015) Direct Linear Transformation from Comparator Coordinates into Object Space Coordinates in Close-Range Photogrammetry. Photogrammetric Engineering & Remote Sensing, 81, 103-107.
https://doi.org/10.14358/PERS.81.2.103
[37] El-Ashmawy, K.L.A. (2018) Using Direct Linear Transformation (DLT) Method for Aerial Photogrammetry Applications. Geodesy and Cartography, 44, 71-79.
https://doi.org/10.3846/gac.2018.1629

Copyright © 2024 by authors and Scientific Research Publishing Inc.
This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.