Global Calibration Method for Monocular Multi-View System Using Rotational Dual-Target

Abstract

Existing global calibration methods for monocular multi-view visual measurement systems place high demands on calibration tools and involve complex procedures. To address these problems, this paper proposes a global calibration method based on the geometric properties of rotational motion and on absolute orientation for cameras without an overlapping field of view. First, a dual-camera system photographs and collects the rotating image sequences of two planar targets, rigidly connected by a long rod, at different positions. Taking the target feature images, world coordinates, and camera intrinsic parameters as known quantities, a global PnP optimization solves for the rotation axis and the reference point at each position. Next, an absolute orientation matrix is constructed from the rotation axis, reference point, and connecting-rod length obtained by this method. Finally, singular value decomposition is used to find the optimal rotation matrix, from which the translation vector is obtained. Simulation and physical tests show that, in comparison with an existing method, the root-mean-square attitude and position errors are 0.0083˚ and 0.3657 mm, improvements in accuracy of 27.8% and 24.4%, respectively. The calibration device is simple, no parallel, perpendicular, or coplanar constraints are imposed between the rotation positions, and the achieved calibration accuracy meets the requirements of most application scenarios.


1. Introduction

In a three-dimensional vision measurement system, a single camera is limited by its field of view. To meet requirements for higher accuracy and a wider field of view, multiple cameras must be combined into one measurement system, which can cover a larger visual space and handle complex industrial measurement tasks [1]. Global calibration is the key to realizing dual-view and multi-view systems [2].

Zhan et al. [3] combined two-dimensional planar targets with a one-dimensional target whose feature-point spacing is known, and established a multi-camera structured-light large-field measurement model with the fixed distance between feature points as the constraint condition. Lu et al. [4] used a rotary table to rotate a two-dimensional planar target; taking the rotation angle as a known parameter and combining the target and rotary-table coordinate systems, they solved the relative pose between the cameras. Although these methods are simple and effective, they require dedicated auxiliary calibration equipment, which is costly. Quan et al. [5] proposed a hierarchical step-by-step calibration method: the projective projection matrix is first obtained from the fundamental matrix and then converted into a metric projection matrix by a normalization algorithm. Xia et al. [6] took the pose relationship of two fixed planar targets as a constraint to establish equations whose solution gives the transformation matrix between the two camera coordinate systems; the optimal transformation is then refined by nonlinear optimization to achieve global calibration. This method ultimately solves the pose transformation between two cameras through the epipolar geometric relationships of a series of feature points on the calibration object.

Zhao et al. [7] used two two-dimensional calibration plates as an intermediary to solve the transformation between the camera coordinate system to be calibrated and the reference coordinate system, converting the relative pose problem between the camera to be calibrated and the reference camera into the now well-studied hand-eye calibration equation. However, all of the above methods presuppose a common field of view between cameras; when the sub-cameras share no common field of view, such methods are no longer applicable. Zhao et al. [8] used a laser tracker to generate marker points and directly measured their three-dimensional coordinates in the reference coordinate system to obtain the relative poses among all cameras. This approach achieves high measurement accuracy, but the calibration procedure is complex and depends on high-precision large-scale measurement equipment, so it cannot be applied to complex industrial scenes.

Given the limitations of existing multi-camera calibration techniques, this paper presents a method to globally calibrate monocular multi-view vision systems using a connecting rod biplane target. By treating the target feature image, world coordinates, and internal camera parameters as known, and employing the geometric characteristics of rotation-related motion with the Lie group and Lie algebra model, this method efficiently optimizes rotational movements to achieve global calibration.

2. Multi-Camera Global Parameter Calibration

In a multi-camera system, each camera possesses specific internal parameters (such as focal length and principal point) and external parameters (such as position and orientation). The objective of global parameter calibration is to determine both the internal and external parameters for each camera and to establish the relationships between them [9] [10] [11] .

2.1. Solution of Rotation Axis Parameters

As shown in Figure 1, targets 1 and 2 are checkerboard targets mounted on the same bar, forming a rigid linkage. Prior to calibration, the relative pose of the two targets is unknown, but the intrinsic parameters of cameras 1 and 2 are known. Both targets rotate around a shared fixed axis, referred to as their coupling axis. Throughout the calibration process, the rotation centers of the targets lie on the connecting axis, and the distance between the target centers remains constant. With the base of the device securely fixed, the coordinates of the rotation centers in both the target coordinate system and the camera coordinate system stay constant regardless of the rotation, so the rotation axis connecting the two targets is also fixed. During calibration, cameras 1 and 2 capture the image sequences of targets 1 and 2, respectively, as they rotate about the axis, yielding the image coordinates $(u_m, v_m)$ and the world coordinates $P_m = (x_m, y_m, z_m)$ of the $m$-th inner corner of each target.

Figure 1. Relative pose relationship between two camera coordinate frames.

In machine vision, the Euclidean transformation is generally used to describe rigid-body motion. A point in three-dimensional Euclidean space is transformed as:

$$p_t = \underbrace{\begin{bmatrix} \cos\beta\cos\gamma & \sin\alpha\sin\beta\cos\gamma - \cos\alpha\sin\gamma & \cos\alpha\sin\beta\cos\gamma + \sin\alpha\sin\gamma \\ \cos\beta\sin\gamma & \sin\alpha\sin\beta\sin\gamma + \cos\alpha\cos\gamma & \cos\alpha\sin\beta\sin\gamma - \sin\alpha\cos\gamma \\ -\sin\beta & \sin\alpha\cos\beta & \cos\alpha\cos\beta \end{bmatrix}}_{r}\, p + \underbrace{\begin{bmatrix} t_x \\ t_y \\ t_z \end{bmatrix}}_{t} \quad (1)$$

where $r$ is the orthogonal rotation matrix, which can be parameterized by the angles $\alpha$, $\beta$, and $\gamma$, and $t$ is the translation vector. The above formula can be written in homogeneous form as:

$$\begin{bmatrix} p_t \\ 1 \end{bmatrix} = \begin{bmatrix} r & t \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} p \\ 1 \end{bmatrix} \quad (2)$$

According to the definition of the Euclidean transformation, the product of two 3 × 3 rotation matrices is again a rotation matrix, but their sum is not. Therefore, the rotation matrices do not form a linear subspace of $\mathbb{R}^{3\times3}$; they form only a group:

$$SO(3) = \left\{ r \in \mathbb{R}^{3\times3} \mid r r^T = I, \ \det(r) = 1 \right\} \quad (3)$$

Here $SO(3)$ denotes the special orthogonal group, which describes the continuous rotation of a rigid body in three-dimensional space, so $SO(3)$ is a Lie group. Since rotation in three-dimensional space has only three degrees of freedom, representing it by the $3\times3$ matrices of $SO(3)$ introduces a certain redundancy. A redundancy-free (though singular) representation, the Lie algebra $\mathfrak{so}(3) = \{ \varphi \in \mathbb{R}^3 \}$, can be used instead, where the vector $\varphi = \theta k$ is mapped to $SO(3)$ by the exponential map:

$$\exp(\varphi_\times) = \cos\theta\, I + (1 - \cos\theta)\, k k^T + \sin\theta\, k_\times \quad (4)$$

Define the rotation axis of the two targets in the world coordinate system as $k = [k_x \ k_y \ k_z]^T$; the antisymmetric matrix of $k$ is then:

$$k_\times = \begin{bmatrix} 0 & -k_z & k_y \\ k_z & 0 & -k_x \\ -k_y & k_x & 0 \end{bmatrix} \quad (5)$$

Equation (4) is exactly Rodrigues' rotation formula: in the Lie algebra $\mathfrak{so}(3)$, the direction of the vector $\varphi$ is the unit rotation axis $k$, and the modulus of $\varphi$ is the rotation angle $\theta$.
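To make this mapping concrete, the sketch below (NumPy; the helper names are ours, not the authors') builds the antisymmetric matrix of Eq. (5) and the Rodrigues rotation of Eq. (4), then checks the $SO(3)$ properties:

```python
import numpy as np

def skew(k):
    """Antisymmetric matrix k_x of Eq. (5)."""
    return np.array([[0.0, -k[2], k[1]],
                     [k[2], 0.0, -k[0]],
                     [-k[1], k[0], 0.0]])

def rodrigues(k, theta):
    """Rotation matrix from unit axis k and angle theta, Eq. (4)."""
    k = np.asarray(k, dtype=float)
    k = k / np.linalg.norm(k)          # ensure a unit axis
    return (np.cos(theta) * np.eye(3)
            + (1.0 - np.cos(theta)) * np.outer(k, k)
            + np.sin(theta) * skew(k))

# quick check: rotating the axis itself leaves it unchanged,
# and the result satisfies the SO(3) constraints of Eq. (3)
k = np.array([0.0, 0.0, 1.0])
R = rodrigues(k, np.pi / 6)
assert np.allclose(R @ k, k)
assert np.allclose(R @ R.T, np.eye(3))
assert np.isclose(np.linalg.det(R), 1.0)
```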

Let $R_i$ denote the rotation matrix of the $i$-th rotation pose of the target relative to the reference pose. From the Lie group/Lie algebra model:

$$R_i = \cos\theta_i\, I + (1 - \cos\theta_i)\, k k^T + \sin\theta_i\, k_\times \quad (6)$$

The rotation parameters are then obtained by solving the PnP optimization problem:

$$\arg\min_{R_i, r, t, o} \left\{ \sum_{i=1}^{N} \sum_{m} \left\| \left[ (n_m^T n_m)^{-1} n_m n_m^T - I \right] \left[ R_i \left( r p_m + t - o \right) + o \right] \right\|^2 \right\} \quad (7)$$

where

$$n_m = \begin{bmatrix} \dfrac{u_m - u_0}{a_x} & \dfrac{v_m - v_0}{a_y} & 1 \end{bmatrix}^T \quad (8)$$

In the above formulas, $r$ and $t$ are the extrinsic rotation and translation of a certain rotation pose taken as the reference pose, $R_i$ is the rotation of the $i$-th pose relative to the reference pose, $o$ is the rotation center, and $u_0$, $v_0$, $a_x$, and $a_y$ are the intrinsic parameters of the camera. A nonlinear solution model is constructed from this optimization function, and the rotation axis $k$ and the rotation center $o$ of the two targets can be obtained with the Gauss-Newton method, Levenberg-Marquardt (LM), or similar solvers.
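As an illustration of how Eq. (7) might be handed to an off-the-shelf solver, the following sketch (SciPy; the parameter layout, function name, and data structures are our assumptions, not the authors' implementation) stacks the reference extrinsics, rotation center, axis, and per-pose angles into one vector and returns the component of each transformed corner perpendicular to its observed line of sight:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def axis_center_residuals(x, world_pts, rays, n_poses):
    """Residuals of Eq. (7): the off-ray component of each corner.
    world_pts: (M, 3) corner coordinates p_m; rays[i]: the n_m vectors
    of Eq. (8) observed at pose i."""
    rvec, t, o = x[0:3], x[3:6], x[6:9]
    k = x[9:12] / np.linalg.norm(x[9:12])    # unit rotation axis
    thetas = x[12:12 + n_poses]              # rotation angle of each pose
    r = Rotation.from_rotvec(rvec).as_matrix()
    res = []
    for i in range(n_poses):
        R_i = Rotation.from_rotvec(thetas[i] * k).as_matrix()
        for p, n in zip(world_pts, rays[i]):
            P = R_i @ (r @ p + t - o) + o            # corner in camera frame
            res.extend(P - n * (n @ P) / (n @ n))    # off-ray residual
    return res

# hypothetical driver: x0 stacks initial guesses of [rvec, t, o, k, theta_i];
# sol = least_squares(axis_center_residuals, x0, args=(world_pts, rays, N))
```

A call to `least_squares` then refines all parameters jointly, and the axis $k$ and center $o$ are read off the solution vector.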

2.2. Global Calibration

In visual measurement, the use of image control points with known positions and directions to determine the scale of a three-dimensional object model and its precise position and orientation in the ground coordinate system is called the absolute orientation problem [12]. In the dual-camera global calibration system developed in this study, the coordinates of the target's rotation center within the target coordinate system and the camera coordinate system remain unchanged, irrespective of how the target rotates [13] [14]. Thus, when the dual cameras capture the connecting-rod targets at two different positions, four pairs of corresponding matching points can be obtained. These points represent the rotation centers of the two targets at two distinct positions and orientations. Consequently, the global calibration of the binocular vision system can be accomplished by solving the absolute orientation problem [15].

Figure 2 shows the dual-camera calibration system: C1 and C2 are the two cameras, T1 and T2 are the connecting-rod dual targets, and the length of the rigid connecting rod is $l$. The intrinsic parameters of C1 and C2 have been calibrated. The connecting-rod target is rotated at position 1 while C1 and C2 capture the image sequences of T1 and T2, respectively; the rotation axis $n_1$ and the rotation center $O_1$ are then solved by the method of the previous section.

From the image sequences, the center point $P_{C1,T1}$ of T1 in C1 and the center point $P_{C2,T2}$ of T2 in C2 are obtained directly. Then:

$$\begin{cases} P_{C1,T2} = P_{C1,T1} + l\, n_1 \\ P_{C2,T1} = P_{C2,T2} - l\, n_1 \end{cases} \quad (9)$$

On the premise that imaging remains complete, the connecting-rod target is moved freely to rotation position 2 and rotated there, giving the rotation axis $m_1$ and the rotation center $O_2$. The center point $Q_{C1,T1}$ of T1 in C1 and the center point $Q_{C2,T2}$ of T2 in C2 are again obtained directly from the image sequences:

Figure 2. Calibration schematic diagram of dual-camera system.

$$\begin{cases} Q_{C1,T2} = Q_{C1,T1} + l\, m_1 \\ Q_{C2,T1} = Q_{C2,T2} - l\, m_1 \end{cases} \quad (10)$$

The absolute orientation problem can now be constructed from the four target center points in C1 and the four in C2. Denote the four points in C1 by $\{p_i\},\ i = 1, 2, 3, 4$, and the four points in C2 by $\{\rho_i\},\ i = 1, 2, 3, 4$. We treat $\{p_i\}$ and $\{\rho_i\}$ as four pairs of corresponding points in two point clouds. Accounting for error, the relationship between two matched points is expressed as:

$$\rho_i = r p_i + t + \eta_i, \qquad T = \begin{bmatrix} r & t \\ 0^T & 1 \end{bmatrix} \quad (11)$$

where $T$ is the rigid transformation matrix between the two point clouds, $r$ is the rotation matrix, and $t$ is the translation vector. As noted in the previous section, $T$ has six degrees of freedom. The error function is then established:

$$E(R, t) = \sum_{i=1}^{n} \left\| \rho_i - (R p_i + t) \right\|^2 \quad (12)$$

In this way, solving the absolute orientation problem reduces to the following least-squares problem:

$$\min_{R, t} \sum_{i=1}^{n} \left\| \rho_i - (R p_i + t) \right\|^2, \qquad R^T R = I \quad (13)$$

The singular value decomposition (SVD) method gives a closed-form solution of this least-squares problem. First, the centroids of the two point sets are computed:

$$\bar{p} = \frac{1}{n} \sum_{i=1}^{n} p_i, \qquad \bar{\rho} = \frac{1}{n} \sum_{i=1}^{n} \rho_i \quad (14)$$

The covariance matrix $M$ is then built from the centered points:

$$M = \sum_{i=1}^{n} (\rho_i - \bar{\rho})(p_i - \bar{p})^T \quad (15)$$

Then singular value decomposition is performed on the covariance matrix M:

$$M = U D V^T \quad (16)$$

Finally, the optimal rotation matrix $R$ and translation vector $t$ are:

$$R^* = U V^T \quad (17)$$

$$t^* = \bar{\rho} - R^* \bar{p} \quad (18)$$

From Equation (11), each pair of matching points contributes three scalar equations. Solving for the six degrees of freedom of $R$ and $t$ uniquely requires at least three pairs of matching points, and these points must not be collinear. Solving the absolute orientation problem thus yields the rotation matrix $R$ and, in turn, the translation vector $t$, completing the global calibration of the binocular vision system without an overlapping field of view.
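Since Eqs. (14)-(18) form a short closed-form pipeline, they can be stated directly as code. The sketch below (NumPy; the wrapper name is ours, and the determinant guard is a standard safeguard against reflections that Eq. (17) leaves implicit) recovers $R$ and $t$ from matched point sets:

```python
import numpy as np

def absolute_orientation(p, rho):
    """Closed-form solution of Eqs. (14)-(18): find R, t with rho = R p + t.
    p, rho: (n, 3) arrays of matched, non-collinear points, n >= 3."""
    p_bar, rho_bar = p.mean(axis=0), rho.mean(axis=0)        # Eq. (14)
    M = (rho - rho_bar).T @ (p - p_bar)                      # Eq. (15)
    U, D, Vt = np.linalg.svd(M)                              # Eq. (16)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # reflection guard
    R = U @ S @ Vt                                           # Eq. (17)
    t = rho_bar - R @ p_bar                                  # Eq. (18)
    return R, t

# self-check on a synthetic rigid motion
rng = np.random.default_rng(1)
p = rng.normal(size=(4, 3))
R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R_true *= np.sign(np.linalg.det(R_true))   # force det = +1
t_true = np.array([10.0, -5.0, 2.0])
rho = p @ R_true.T + t_true
R, t = absolute_orientation(p, rho)
assert np.allclose(R, R_true) and np.allclose(t, t_true)
```

In the calibration above, `p` would stack the four C1 centers $P_{C1,T1}$, $P_{C1,T2}$, $Q_{C1,T1}$, $Q_{C1,T2}$, and `rho` the corresponding four C2 centers built via Eqs. (9)-(10).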

3. Global Calibration Experiment

3.1. Simulation Experiment

First, a virtual calibration data set is generated with a dual-camera system composed of two simulated cameras whose intrinsic parameters are listed in Table 1. The relative attitude between the two cameras is [−2˚, 4˚, 3˚], and the relative position is [−1200 mm, 30 mm, −30 mm]. The virtual checkerboard calibration board has a 6 × 6 feature array with feature points spaced 20 mm apart. The two target planes are parallel, at a relative distance of 1600 mm. Under the condition that each checkerboard target is fully captured by its respective camera, 30 calibration poses are randomly generated, and uniform random noise in the range [−0.25, 0.25] pixels is added to the image points used for position and attitude solution. The simulation is repeated 100 times, comparing the non-overlapping field-of-view global calibration method of reference [16] with the method proposed in this study.

Figure 3 compares the angular deviation between the solved attitude angles and the theoretical values for the method of reference [16] and the method of this paper; Figure 4 compares the corresponding distance deviations of the displacement solutions. The simulation results show that the calibration accuracy of the proposed method is better.
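The paper does not spell out how the deviations in Figure 3 and Figure 4 are scored; one conventional choice (a sketch with our own function names, assuming the ground-truth extrinsics are available) is the geodesic rotation angle and the Euclidean translation distance:

```python
import numpy as np

def attitude_error_deg(R_est, R_true):
    """Geodesic angle between estimated and true rotation, in degrees."""
    cos_a = (np.trace(R_est.T @ R_true) - 1.0) / 2.0
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

def displacement_error_mm(t_est, t_true):
    """Euclidean distance between estimated and true translation (mm)."""
    return np.linalg.norm(np.asarray(t_est) - np.asarray(t_true))
```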

Table 1. Internal parameter value of virtual camera.

Figure 3. Error results of attitude angle for simulation measurement.

Figure 4. Error results of displacement for simulation measurement.

3.2. Actual Test

As shown in Figure 5, to verify the accuracy of the calibration method proposed in this paper, a dual-camera system is constructed from two cameras, C1 and C2, whose intrinsic parameters are listed in Table 2 and were calibrated in advance using the camera calibration procedure referenced in Section 2.

The calibration apparatus employs two checkerboard targets, T1 and T2, mounted on the same rigid connecting rod; the dimensions, corner count, and other parameters of T1 and T2 are identical. The connecting-rod dual targets are rotated at two arbitrary positions (neither parallelism nor coplanarity is required), with 12 rotation angles captured at each position.

Table 2. Dual-camera internal parameters.

Figure 5. Dual camera global calibration experiment.

During rotation, at each angle T1 must be fully imaged in C1 and T2 fully imaged in C2, with no overlapping field of view between C1 and C2. Hardware triggering is used in the experiment. The experimental process is shown in Figure 6.

Table 3 shows the global measurement results for attitude angle and displacement obtained by the proposed global calibration method and by the calibration method of reference [16].

Table 3 alone is insufficient for comparing the accuracy of the two global calibration methods. Therefore, a high-precision coordinate measuring machine (micron-level accuracy) is introduced to measure the position and orientation of the target. Against these reference measurements, the calibration results of the method described in this paper and the method of reference [16] are evaluated. The targets used for pose estimation are checkerboard targets with identical size parameters. The experimental procedure is depicted in Figure 7.

Figure 8 shows the global measurement results of attitude and displacement. The ordinate is the value measured by the coordinate measuring machine, and the abscissa is the error of each method relative to that measurement. The figure shows that the measurement error of the proposed method is significantly smaller than that of the literature method. Calculation gives root-mean-square errors of 0.0083˚ and 0.3657 mm for the attitude and displacement measurements of the proposed method, versus 0.0115˚ and 0.4839 mm for the literature method. Compared with the global calibration method of the literature, the proposed method therefore offers higher calibration accuracy.


Figure 6. Target images captured at different rotation angles: (a) T1 images; (b) T2 images.

Figure 7. Target position and attitude measurements using CMM and MCS.


Figure 8. Measurement results: (a) attitude angle; (b) displacement.

Table 3. Calibration results of dual-camera system.


4. Conclusion

In this paper, the global calibration of monocular multi-view vision systems is studied, and an improved calibration method is proposed based on the geometric characteristics of rotational motion and on absolute orientation without an overlapping field of view. The method is simple and imposes no parallel, perpendicular, or coplanar requirements between the rotation positions, which enhances the flexibility of calibration. In addition, simulation and experimental results show that this method achieves higher accuracy and stability than the existing global calibration method.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Liu, Z., Zhang, G., Wei, Z. and Sun, J.H. (2011) A Global Calibration Method for Multiple Vision Sensors Based on Multiple Targets. Measurement Science and Technology, 22, Article ID: 125102.
https://doi.org/10.1088/0957-0233/22/12/125102
[2] Pfaff, F., Maier, G., Aristov, M., et al. (2017) Real-Time Motion Prediction Using the Chromatic Offset of Line Scan Cameras. At-Automatisierungstechnik, 65, 369-380.
https://doi.org/10.1515/auto-2017-0009
[3] Zhan, D., Yu, L., Xiao, J., et al. (2013) Study on Vehicle Vibration Compensation in Railway Track Profile Inspection. Chinese Journal of Scientific Instrument, 49, 186-194.
[4] Lu, Y.N., Wan, Z.J. and Wang, X.J. (2017) A Method for Solving the Position Relationship of Cameras without Public Field of View. Applied Optics, 38, 400-405.
[5] Quan, Y.M., Qin, Z.B., Li, W.S., et al. (2019) Multi Camera Calibration of One-Dimensional Calibration Object Based on Normalization Algorithm. Acta Optica Sinica, 39, Article ID: 0415001.
https://doi.org/10.3788/AOS201939.0415001
[6] Xia, R., Hu, M., Zhao, J., et al. (2018) Global Calibration of Multi-Cameras with Non-Overlapping Fields of View Based on Photogrammetry and Reconfigurable Target. Measurement Science and Technology, 29, Article ID: 065005.
https://doi.org/10.1088/1361-6501/aab028
[7] Zhao, H.Z., Gao, N., Meng, Z.Z., et al. (2021) Method of Simultaneous Calibration of Dual View 3D Measurement System. Opto-Electronic Engineering, 48, Article ID: 200127.
[8] Zhao, Y.H., Yuan, F., Ding, Z.L., et al. (2011) Global Calibration Method for Large Field of View Multi Vision Sensor Measurement System. Journal of Basic Science and Engineering, 19, 679-688.
[9] Huang, D.Z., Zhao, Q.C., Ou, Y. and Yang, T.L. (2016) Research on Global Calibration Method for Multi-Camera Visual Measurement System. Proceedings of the SPIE, 9903, Article ID: 990334.
[10] Ma, M.S., Yang, X.G., Li, C.X., et al. (2019) Accurate Calibration Method of Non Overlapping Field of View Camera Based on Spatial Constraints. Acta Optica Sinica, 39, Article ID: 1015003.
https://doi.org/10.3788/AOS201939.1015003
[11] Lang, W., Xue, J.P., Li, C.H., et al. (2019) Multi View Point Cloud Mosaic Based on Rotating Table Parameter Calibration. Chinese Journal of Lasers, 46, Article ID: 1104003.
https://doi.org/10.3788/CJL201946.1104003
[12] Zhu, A., Zhao, Q., Yang, T., et al. (2023) Condition Monitoring of Wind Turbine Based on Deep Learning Networks and Kernel Principal Component Analysis. Computers and Electrical Engineering, 105, Article ID: 108538.
https://doi.org/10.1016/j.compeleceng.2022.108538
[13] Zhang, Z. (2000) A Flexible New Technique for Camera Calibration. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22, 1330-1334.
https://doi.org/10.1109/34.888718
[14] Chang, S.H., Cosman, P.C. and Milstein, L.B. (2011) Chernoff-Type Bounds for the Gaussian Error Function. IEEE Transactions on Communications, 59, 2939-2944.
https://doi.org/10.1109/TCOMM.2011.072011.100049
[15] Chai, S.W., Yang, X.Q. and Guo, X.W. (2020) Dual Quaternion Absolute Orientation Iterative Solution. Science of Surveying and Mapping, 45, 88-94.
[16] Hu, M.B., Xia, R.B., Chen, S.L., et al. (2018) Global Calibration of Non Overlapping Field of View Camera Based on Photogrammetry. Combined Machine Tool and Automatic Machining Technology, 10, 89-92.
