Simulation of Hazy Image and Validation of Haze Removal Technique

Abstract

Haze hampers the performance of vision systems, so removing the appearance of haze from a scene is a first priority for clear vision, with a wide spectrum of practical applications. A good number of dehazing techniques have already been developed. However, validation against ground truth, i.e. simulated haze on a clear image, remains a necessity. To address this issue, in this work synthetic hazy images with various haze concentrations are simulated and then used to validate the dark-channel dehazing mechanism, a very promising single-image dehazing technique. The simulated hazy images are generated from the atmospheric scattering model, with and without Perlin noise. The effectiveness of the dark-channel dehazing method is confirmed on the simulated hazy images through the average gradient metric, as haze reduces the gradient score.


Sarker, A., Akter, M. and Uddin, M. (2019) Simulation of Hazy Image and Validation of Haze Removal Technique. Journal of Computer and Communications, 7, 62-72. doi: 10.4236/jcc.2019.72005.

1. Introduction

Haze is a natural phenomenon that obstructs vision. For clear vision, dehazing is a necessity, with diverse applications such as vehicle navigation, outdoor movement of people, surveillance systems and so on. Many dehazing mechanisms have been developed [1] - [10], but all contain some drawbacks. The methods in references [2] [3] use a pair or multiple images of the same scene for haze removal through a polarizing filter. A polarizing filter is not effective in situations where changes in the scene are more rapid than the rotation of the filter [4]. The method in reference [5] estimates the complete 3D structure and recovers a haze-free image from two or more bad-weather images. Although some of these methods give good results, they have limited practicability, as acquiring multiple images of the same scene under diverse conditions is difficult. To cope with these drawbacks, researchers are concentrating on dehazing from a single image. Tan [6] investigated a method based on local contrast maximization. Fattal [7] developed an independent-component-analysis-based dehazing technique using a single image. He et al. [8] first developed a single-image dark channel prior for haze removal. The prior-based methods were highly successful in recovering haze-free images. We [11] further improved this method by proposing an adaptive filter patch to deal with various haze concentrations. However, the effectiveness of these methods has not yet been completely confirmed using ground-truth (simulated) images. In the absence of ground truth, i.e. simulated haze of diverse densities on a clear image, it is not possible to absolutely quantify the effectiveness of dehazing mechanisms. Therefore, the main objectives of this work are: 1) generation of simulated haze of diverse densities on natural (real) images and 2) validation of a haze removal technique.

In this paper, we generate synthetic homogeneous haze of different concentrations on a clear natural image through the atmospheric scattering model [12] - [23]. As realistic natural haze is heterogeneous in nature, we use Perlin noise to generate heterogeneous hazy images. Perlin noise is a gradient noise developed by Ken Perlin [24] to give natural visual effects to computer-generated graphics.

After generating haze of different concentrations, we use the dark-channel prior [1] [8] [9] [11] for validation, as it is the most prominent of the dehazing mechanisms.

The rest of the paper is organized as follows: Section 2 explains the haze generation mechanism; Section 3 describes the dehazing mechanism using the dark-channel prior; Section 4 presents the experimental results and validation; and finally Section 5 concludes the paper.

2. Haze Simulation

In computer graphics, visualization of atmospheric phenomena is important and has high practical value. Realistic haze greatly improves the realism of simulated scenes. Special effects in computer games, virtual reality, digital movies, TV and other entertainment-industry products are some applications of simulated haze. For the simulation of a hazy scene, various methods have been developed using the atmospheric model [12].

The hazy image formation model can be described by the following equation [12]:

I(x) = J(x) t(x) + A (1 − t(x)) (1)

Here x = (x, y) is a 2D vector representing the coordinates of a pixel in the image; I is the hazy image, J is the scene radiance, t is the medium transmission, and A is the global atmospheric light.

In Equation (1), the first term on the right-hand side, J(x) t(x), is called the direct attenuation, and the second term, A (1 − t(x)), is called the airlight.

Here, we simulate haze using the above atmospheric scattering model, with and without Perlin noise. The steps to simulate haze from the atmospheric scattering model on an input clear image are shown in Figure 1.

First, we calculate the depth map of the image. For scene depth restoration, the linear model given in Equation (2) is used. The concentration (density) of haze increases with scene depth, and the haze density correlates with the disparity between the brightness and the saturation of a pixel; this relationship can therefore be captured by a linear model.

We can express this linear model as:

d(x) = θ₀ + θ₁ v(x) + θ₂ s(x) + ε(x) (2)

where x is the position within the image, d is the scene depth, v is the brightness component of the hazy image, s is the saturation component, θ₀, θ₁, θ₂ are the unknown linear coefficients, and ε(x) is a random variable representing the random error of the model, which can be regarded as a random image. A simple and efficient supervised learning method is used to determine the coefficients θ₀, θ₁, θ₂ from training data [19], where each training sample consists of an image and its corresponding ground-truth depth map. Figure 2 presents the images at the different steps of Figure 1.
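For illustration, a minimal Python sketch of evaluating Equation (2) from the HSV brightness and saturation channels follows. The coefficient values below are placeholders only; in our method they are determined by supervised learning on the training data [19].

```python
# Sketch of the linear depth model in Equation (2). The theta values are
# illustrative placeholders; the actual coefficients are learned from
# training pairs of images and ground-truth depth maps.
import cv2
import numpy as np

def estimate_depth(image_bgr, theta0=0.1, theta1=1.0, theta2=-1.0):
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV).astype(np.float64) / 255.0
    v = hsv[..., 2]  # brightness component v(x)
    s = hsv[..., 1]  # saturation component s(x)
    # d(x) = theta0 + theta1 * v(x) + theta2 * s(x); the error term
    # epsilon(x) applies only during training, not at inference.
    return theta0 + theta1 * v + theta2 * s
```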

The raw depth map is determined based on the hypothesis that the scene depth is locally constant:

Figure 1. Haze simulation workflow.


Figure 2. Different steps of the haze generation method: (a) Original clear image; (b) Depth map calculation; (c) Transmission estimation; (d) Refined transmission estimation using bilateral filter; and (e) Hazy image.

dᵣ(x) = min_{y ∈ Ωᵣ(x)} d(y) (3)

where Ωᵣ(x) is an r × r neighborhood centered at x, and dᵣ is the depth map at scale r. However, blocking artifacts may be present in the resulting map. To overcome these artifacts, a bilateral filter is used to generate a refined transmission map [25].
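A minimal sketch of Equation (3) and the subsequent refinement follows; the local minimum is computed with a morphological erosion, and the neighborhood size r and bilateral filter parameters are illustrative assumptions.

```python
# Raw depth by the local-minimum hypothesis of Equation (3), followed by
# bilateral filtering [25] to suppress blocking artifacts. Patch size r
# and the filter parameters are illustrative; depth is assumed in [0, 1].
import cv2
import numpy as np

def refine_depth(depth, r=15):
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (r, r))
    raw = cv2.erode(depth.astype(np.float32), kernel)  # min over the r x r neighborhood
    return cv2.bilateralFilter(raw, d=9, sigmaColor=0.1, sigmaSpace=15)
```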

Since we already have the clear image J(x), the refined transmission map and the airlight (which can be set to 255), we can easily simulate the hazy scene according to Equation (1).

We can also generate hazy scenes with different haze densities by assuming the medium transmission to be

t(x) = e^(−β d(x) λ),

where β is the scattering coefficient and λ is the haze density factor. Figure 3 shows the hazy images with different haze densities.
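Putting the pieces together, a minimal sketch of the simulation follows; β is set to an illustrative default, the airlight is 255 as in the text, and λ selects the haze density. Calling simulate_haze with λ = 1, 3 and 5 would reproduce the three density levels of Figure 3.

```python
# Hazy image synthesis by Equation (1), with the transmission
# t(x) = exp(-beta * d(x) * lambda) derived from the refined depth map.
import numpy as np

def simulate_haze(clear_bgr, depth, lam=3.0, beta=1.0, A=255.0):
    t = np.exp(-beta * depth * lam)[..., np.newaxis]  # broadcast over channels
    J = clear_bgr.astype(np.float64)
    I = J * t + A * (1.0 - t)                         # Equation (1)
    return np.clip(I, 0, 255).astype(np.uint8)
```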

However, haze is not always perfectly homogeneous in real situations. Therefore, Perlin noise, which is a gradient noise, is introduced into our method through the following Equation (4):


Figure 3. Simulated hazy images with different haze densities. (a) Less hazy image, λ = 1; (b) Medium hazy image, λ = 3; (c) More hazy image, λ = 5.

R(x) = I(x) + k n(x) (4)

Here, I is the hazy image obtained using our haze simulation technique, k controls the appearance of Perlin's turbulence texture, and n is the Perlin noise image. Amplitude and frequency are the two properties that characterize the Perlin noise function [19] [20] [21] [22] [23]. Figure 4 shows Perlin noise of different concentrations.
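As a sketch of Equation (4), the turbulence texture below approximates Perlin-style noise by summing smoothly interpolated random octaves (a full gradient-noise implementation is beyond the scope of this illustration); the octave count, base resolution and the value of k are assumptions.

```python
# Heterogeneous haze by Equation (4): R(x) = I(x) + k * n(x). The noise
# is a fractal sum of interpolated random octaves, an approximation of
# Perlin's turbulence; frequency doubles and amplitude halves per octave.
import cv2
import numpy as np

def turbulence(shape, octaves=4, base=8, seed=0):
    rng = np.random.default_rng(seed)
    h, w = shape
    n = np.zeros(shape, np.float32)
    for o in range(octaves):
        res = base * 2 ** o
        layer = rng.random((res, res)).astype(np.float32)
        n += cv2.resize(layer, (w, h), interpolation=cv2.INTER_CUBIC) / 2 ** o
    return (n - n.min()) / (n.max() - n.min())  # normalize to [0, 1]

def add_heterogeneous_haze(hazy_bgr, k=60.0):
    n = turbulence(hazy_bgr.shape[:2])[..., np.newaxis]
    R = hazy_bgr.astype(np.float64) + k * n  # Equation (4)
    return np.clip(R, 0, 255).astype(np.uint8)
```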

3. Haze Removal Technique

Minimizing the haze effect in real scenes is very important and finds wide application. Previously, we modified a single-image haze removal algorithm [11] based on the dark channel prior with automated calculation of the patch size and automated handling of the sky region's degradation, known as the halo effect.

For haze removal, we use the dark channel prior algorithm, which estimates the transmission from the dark channel over local patches and refines it by guided filtering. The haze removal model can be written as:

J(x) = (I(x) − A) / max(t(x), t₀) + A (5)
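For reference, a simplified sketch of recovery by Equation (5) follows, using the standard dark-channel transmission estimate [8] with common default parameters (patch size, ω, t₀); it omits the guided-filter refinement and the adaptive patch sizing of our earlier method [11].

```python
# Simplified dark-channel dehazing: estimate the dark channel, the
# airlight A and the transmission t(x), then invert Equation (1) via
# Equation (5). Parameters are common defaults, not the adaptive values
# of the authors' method.
import cv2
import numpy as np

def dehaze(I_bgr, patch=15, omega=0.95, t0=0.1):
    I = I_bgr.astype(np.float64)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    dark = cv2.erode(I.min(axis=2), kernel)          # dark channel prior
    A = I.reshape(-1, 3)[np.argmax(dark)].max()      # brightest dark-channel pixel
    t = 1.0 - omega * cv2.erode((I / A).min(axis=2), kernel)
    t = np.maximum(t, t0)[..., np.newaxis]           # max(t(x), t0)
    J = (I - A) / t + A                              # Equation (5)
    return np.clip(J, 0, 255).astype(np.uint8)
```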

The flow diagram of the haze removal technique based on the dark-channel prior is shown in Figure 5, and Figure 6 presents the images at different steps of the flow diagram. A detailed explanation of the dehazing mechanism is given in reference [1].

4. Experimental Results and Validation

Figure 4. Perlin noise and its effects. (a) Perlin noise with low frequency; (b) Perlin noise with higher frequency; (c) Original clear image; (d) Refined transmission map; (e) With added Perlin noise.

Figure 5. Workflow diagram of the haze removal technique using the dark-channel prior.

Figure 6. Different steps of our haze removal method: (a) Original image; (b) Dark channel calculation; (c) Transmission estimation; (d) Refinement using guided filter; (e) Haze-free output image.

The visual (subjective) results of dehazing with the dark-channel prior method for homogeneous and heterogeneous hazy situations with three different haze concentrations are shown in Figure 7 and Figure 8, respectively. In addition to the subjective measure (visual inspection), validation is performed using a well-known objective dehazing metric, the average gradient, given in Equation (6). The average gradient is the resultant of the horizontal and vertical gradients of an image. We use the average gradient of the dehazed image because haze reduces the gradient, making the image blurry; hence, it is an effective metric for estimating haze removal.

AG = √(Gx² + Gy²) (6)

Here, AG is the average gradient, and Gx and Gy are the horizontal and vertical gradients, respectively. The objective evaluation results for Figure 7 and Figure 8 are shown in Table 1 and Table 2, respectively. A higher AG indicates a higher-quality dehazed image. From these tables, we can see that the AG value gradually decreases from low to high haze concentration. This is also confirmed by visual inspection of Figure 7 and Figure 8. In addition, we can see from Figure 7 that for homogeneous haze the original and dehazed image quality are almost similar, whereas for heterogeneous haze (created using Perlin noise, shown in Figure 8) the dehazed image quality is somewhat poorer. This is also confirmed by the AG values in Table 1 and Table 2.
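Concretely, the metric can be computed as the mean per-pixel gradient magnitude of the grayscale dehazed image, as in the short sketch below.

```python
# Average gradient (Equation (6)) of a grayscale image: the mean of the
# per-pixel resultant of the horizontal and vertical gradients.
import numpy as np

def average_gradient(gray):
    gy, gx = np.gradient(gray.astype(np.float64))  # axis 0: vertical, axis 1: horizontal
    return float(np.mean(np.sqrt(gx ** 2 + gy ** 2)))
```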


Figure 7. Example of homogeneous hazy scenes with different haze densities and the dehazed images obtained by the dark channel prior technique. (a) Original Image; (b) Less Hazy Image; (c) Dehazed Image; (d) Medium Hazy Image; (e) Dehazed Image; (f) More Hazy Image; (g) Dehazed Image.

Table 1. Dehazing performance (average gradient AG) for different homogeneous haze conditions. A higher AG indicates a higher-quality dehazed image.

Table 2. Dehazing performance (average gradient AG) for different heterogeneous haze conditions. A higher AG indicates a higher-quality dehazed image.


Figure 8. Example of heterogeneous (Perlin noise) hazy scenes with different haze densities and the dehazed images obtained by the dark channel prior technique. (a) Original Image; (b) Less Hazy Image; (c) Dehazed Image; (d) Medium Hazy Image; (e) Dehazed Image; (f) More Hazy Image; (g) Dehazed Image.

5. Conclusion

In this work, synthetic haze is generated on real scenes through the atmospheric model, with and without Perlin noise. We then validated a prominent single-image dehazing technique, the dark-channel prior, through subjective and objective measures, using the well-established average gradient as the objective metric. The simulated haze will find application in validating any new dehazing technique. In addition, it can serve many outdoor visual-enhancement applications such as surveillance and navigation systems, real-time task processing by robots, etc.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Makkar, D. and Malhotra, M. (2016) Single Image Haze Removal Using Dark Channel Prior. International Journal of Engineering and Computer Science (IJECS), 5, 15467-15473.
[2] Narasimhan, S.G. and Nayar, S.K. (2003) Contrast Restoration of Weather Degraded Images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25, 713-724.
https://doi.org/10.1109/TPAMI.2003.1201821
[3] Shwartz, S., Namer, E. and Schechner, Y.Y. (2006) Blind Haze Separation. 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, New York, 17-22 June 2006, 1984-1991.
https://doi.org/10.1109/CVPR.2006.71
[4] Yang, R., Yin, L. and Gabbouj, M. (1993) Optimal Weighted Median Filtering under Structural Constraints. IEEE International Symposium on Circuits and Systems, Chicago, 3-6 May 1993, 942-945.
[5] Kopf, J., Neubert, B., Chen, B., Cohen, M., Cohen-Or, D., Deussen, O., Uyttendaele, M. and Lischinski, D. (2008) Deep Photo: Model-Based Photograph Enhancement and Viewing. ACM Transactions on Graphics, 27, 116:1-116:10.
[6] Tan, R. (2008) Visibility in Bad Weather from a Single Image. 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, 23-28 June 2008, 1-8.
https://doi.org/10.1109/CVPR.2008.4587643
[7] Fattal, R. (2008) Single Image Dehazing. Proceedings of ACM SIGGRAPH 2008 (SIGGRAPH '08), Los Angeles, CA, 11-15 August 2008, 72:1-72:9.
https://doi.org/10.1145/1399504.1360671
[8] He, K., Sun, J. and Tang, X. (2009) Single Image Haze Removal Using Dark Channel Prior. IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, 20-25 June 2009, 1956-1963.
[9] Narasimhan, S.G. and Nayar, S.K. (2001) Removing Weather Effects from Monochrome Images. Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Kauai, 8-14 December 2001, 186-193.
https://doi.org/10.1109/CVPR.2001.990956
[10] Hautiere, N., Tarel, J.P. and Aubert, D. (2007) Towards Fog-Free In-Vehicle Vision Systems through Contrast Restoration. IEEE Conference on Computer Vision and Pattern Recognition, Minneapolis, MN, 18-23 June 2007, 1-8.
https://doi.org/10.1109/CVPR.2007.383259
[11] Uddin, M.S., Gautam, B., Sarker, A., Akter, M. and Haque, M.R. (2017) Image-Based Automated Haze Removal Using Dark Channel Prior. IEEE Region 10 Humanitarian Technology Conference (R10-HTC), Dhaka, 21-23 December 2017, 412-415.
[12] Narasimhan, S.G. and Nayar, S.K. (2002) Vision and the Atmosphere. International Journal of Computer Vision, 48, 233-254.
https://doi.org/10.1023/A:1016328200723
[13] Jackèl, D. and Walter, B. (1997) Modeling and Rendering of the Atmosphere Using Mie-Scattering. Computer Graphics Forum, 16, 201-210.
https://doi.org/10.1111/1467-8659.00180
[14] Nishita, T., Dobashi, Y. and Nakamae, E. (1996) Display of Clouds Taking into Account Multiple Anisotropic Scattering and Sky Light. Proceedings of the Computer Graphics Conference (SIGGRAPH’96), New York, August 1996, 379-386.
https://doi.org/10.1145/237170.237277
[15] Sun, B., Ramamoorthi, R., Narasimhan, S.G. and Nayar, S.K. (2005) A Practical Analytic Single Scattering Model for Real Time Rendering. ACM Transactions on Graphics, 24, 1040-1049.
https://doi.org/10.1145/1073204.1073309
[16] Wang, C.B., Wang, Z.Y. and Peng, Q.S. (2007) Real-Time Rendering of Sky Scene Considering Scattering and Refraction. Computer Animation and Virtual Worlds, 18, 539-548.
https://doi.org/10.1002/cav.213
[17] Yamamoto, T., Dobashi, Y. and Nishita, T. (2000) Interactive Rendering Method for Displaying Shafts of Light. Proceedings of the 8th Pacific Conference on Computer Graphics and Applications, Hong Kong, 3-5 October 2000, 31-37.
[18] Guo, F., Tang, J. and Xiao, X. (2014) Foggy Scene Rendering Based on Transmission Map Estimation. International Journal of Computer Games Technology, 2014, Article ID: 308629.
[19] Tang, K., Yang, J. and Wang, J. (2014) Investigating Haze Relevant Features in a Learning Framework for Image Dehazing. IEEE Conference on Computer Vision and Pattern Recognition, Columbus, 23-28 June 2014, 2995-3002.
https://doi.org/10.1109/CVPR.2014.383
[20] Giroud, A. and Biri, V. (2010) Modeling and Rendering Heterogeneous Fog in Real-Time Using B-Spline Wavelets. WSCG, Plzen, February 2010, 145-152.
[21] Guo, F., Tang, J. and Xiao, X. (2014) Foggy Scene Rendering Based on Transmission Map Estimation. International Journal of Computer Games Technology, 2014, Article ID: 308629.
[22] Zdrojewska, D. (2004) Real Time Rendering of Heterogeneous Fog Based on the Graphics Hardware Acceleration. CESCG, Budmerice, 19-21 April 2004, 95-101.
[23] Zhang, N., Zhang, L. and Cheng, Z. (2017) Towards Simulating Foggy and Hazy Images and Evaluating Their Authenticity. In: International Conference on Neural Information Processing, LNCS, Volume 10636, Springer, Berlin, 405-415.
https://doi.org/10.1007/978-3-319-70090-8_42
[24] Perlin, K. (1985) An Image Synthesizer. Proceedings of the Computer Graphics Conference (SIGGRAPH '85), San Francisco, 22-26 July 1985, 287-296.
[25] Tomasi, C. and Manduchi, R. (1998) Bilateral Filtering for Gray and Color Images. Proceedings of IEEE International Conference on Computer Vision, Bombay, 4-7 January 1998, 839-846.

Copyright © 2024 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.