Image Reconstruction of Ghost Imaging Based on Improved Generative Adversarial Networks

Abstract

In this paper, we improve the traditional generative adversarial network (GAN) by drawing on residual networks and convolutional neural networks, so as to reconstruct complex objects that traditional ghost imaging methods cannot. Unlike traditional ghost imaging, which reconstructs objects directly from bucket signals, our method trains the GAN on simple objects (such as EMNIST) and then reconstructs objects (such as faces) of a complexity entirely different from the training set. We reconstruct target objects with both traditional ghost imaging and the neural network. According to our results, the neural-network-based method reconstructs complex objects very well, whereas the traditional ghost imaging method cannot. The scheme presented here is of great significance for the ghost-imaging reconstruction of complex objects under low-sampling conditions.

Share and Cite:

Chen, X. (2022) Image Reconstruction of Ghost Imaging Based on Improved Generative Adversarial Networks. Journal of Applied Mathematics and Physics, 10, 1098-1104. doi: 10.4236/jamp.2022.104076.

1. Introduction

Ghost imaging (GI) is an imaging method that differs from traditional optical imaging [1] [2]. Even when the object and the image are not in the same light field, the object can still be observed, which traditional optical imaging cannot achieve. Reconstructing the target object from the collected bucket signals in ghost imaging can effectively suppress interference factors such as environmental noise. While ghost imaging has many advantages, it requires extensive sampling, which is time-consuming. Deep learning has developed rapidly, achieving results beyond expectations in fields such as natural language processing [3] and face recognition [4]. In recent years it has also been applied to optical imaging, where it can improve image quality; since its introduction into optics, deep learning has been widely used in face recognition, medical image processing, dynamic target imaging, and more.

In recent years, with the development of computer technology, ghost imaging methods based on deep learning have been proposed. In 2012, Krizhevsky, Sutskever and Hinton deepened the convolutional neural network [5], achieving a breakthrough in image recognition and classification. Convolutional neural networks reduce the dependence on parameters through techniques such as parameter sharing and handle high-dimensional data well. In 2015, He et al. proposed the Residual Neural Network (ResNet), built from convolutional layers [6], which won the ImageNet competition in both image classification and object recognition. The characteristic of the residual network is that it improves recognition accuracy by going deeper while remaining easy to optimize. In 2017, Lyu et al. proposed a new computational ghost imaging (CGI) framework [7]: a deep neural network (DNN) trained on reconstructed GI images and the original targets can improve reconstruction quality at low sampling. In the same year, Professor Xu's research group modified the convolutional neural network and proposed a ghost imaging convolutional neural network [8], with which target images can be obtained faster and more accurately at low sampling rates. In 2020, Wu et al. proposed the DAttNet network structure [9], which reconstructs high-quality target images at sub-Nyquist sampling ratios (SNSRs).

In this paper, we propose a novel generative adversarial network that combines residual and convolutional networks, training the network on simple objects and then reconstructing objects of higher complexity. The residual module deepens the network without causing overfitting during training, thereby achieving better generalization. Both simulation and experiment show that the neural network is highly efficient for ghost imaging and has important applications in real-time ghost imaging, such as dynamic imaging in complex environments.

2. Method

We binarized the collected random speckle in Matlab and used it as the light source for both the experiments and the simulations, keeping the two as identical as possible. Let T(x,y) denote the two-dimensional object information and I_m(x,y) the random speckle matrix of the light source; according to ghost imaging theory:

$$S_m = \iint T(x,y)\, I_m(x,y)\, \mathrm{d}x\, \mathrm{d}y, \quad m = 1, 2, 3, \ldots, M^2 \tag{1}$$

In Equation (1), m indexes the samples. After collecting the light-intensity information of the object, the reconstructed image of the object is obtained from the second-order correlation function:

$$T(x,y) = \langle S_m I_m(x,y) \rangle - \langle S_m \rangle \langle I_m(x,y) \rangle, \quad m = 1, 2, 3, \ldots, M^2 \tag{2}$$
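As an illustrative sketch (not the paper's code), the discrete forms of Equations (1) and (2) can be simulated directly in NumPy; the square object, the uniform random speckle patterns, and the sample count below are our own arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 16, 5000                      # image side length, number of speckle patterns
T = np.zeros((N, N))
T[4:12, 4:12] = 1.0                  # a simple square object T(x, y)

I = rng.random((M, N, N))            # random speckle patterns I_m(x, y)
S = np.einsum('mij,ij->m', I, T)     # bucket signals, the discrete form of Eq. (1)

# Second-order correlation reconstruction, Eq. (2):
# <S_m * I_m(x, y)> - <S_m> * <I_m(x, y)>
G2 = (S[:, None, None] * I).mean(axis=0) - S.mean() * I.mean(axis=0)
```

With enough samples, G2 correlates strongly with the object T, which is exactly the low-sampling limitation discussed above: fewer patterns mean a noisier correlation estimate.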

In this paper, we reconstruct objects from one-dimensional bucket signals through neural networks, and the reconstruction process is:

$$T(x,y) = \mathcal{R}\{O(x,y), T(x,y)\} \tag{3}$$

In Equation (3), R is the implicit mapping learned by the network, whose purpose is to establish the connection between the object to be reconstructed and the target object. T(x,y) is the bucket-signal matrix, which serves as the training and test sets of the neural network; O(x,y) is the target object, i.e., the label corresponding to the training set, which plays an important role in computing the loss function; J denotes the number of objects. With continued iterative optimization of the neural network, when the loss drops to about 0.001, training is complete and the quality of the reconstructed image is at its best.
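The role of the mapping R, from bucket signals back to objects, can be illustrated with a linear stand-in fitted by least squares. This toy example, with a flattened random "speckle" matrix A and binary training objects, is our own sketch and not the network described in this paper:

```python
import numpy as np

rng = np.random.default_rng(1)
P, M, K = 64, 80, 200                # pixels, speckle patterns, training objects

A = rng.random((M, P))               # fixed speckle patterns (rows = flattened I_m)
O_train = (rng.random((P, K)) > 0.5).astype(float)  # simple binary training objects
S_train = A @ O_train                # bucket-signal "training set", one column per object

# Fit a linear stand-in for the mapping R:  W @ s ~= o  (least squares)
W, *_ = np.linalg.lstsq(S_train.T, O_train.T, rcond=None)

# Reconstruct an unseen object from its bucket signals alone
o_new = (rng.random(P) > 0.5).astype(float)
o_rec = W.T @ (A @ o_new)
```

Because the forward model here is linear and well-conditioned, the fitted map recovers unseen objects almost exactly; the network in this paper plays the same role for the nonlinear, low-sampling case where no such simple inverse exists.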

3. Network Structure

Nowadays, AI technology is becoming more and more popular, largely thanks to the proposal of the generative adversarial network (GAN). Figure 1 is the structure diagram of the simplest generative adversarial network, which is based on a two-player game: the generative model G and the discriminative model D compete with each other, and each plays an important role in the entire network.

A generative adversarial network takes two kinds of input data: real picture data, which serves as the criterion for judgment, and random noise data, which the generative model "processes" into an image "very similar to the real picture". Training proceeds in two steps. In the first step, the generative model is fixed and made to generate random pictures, called "fake pictures"; real and fake pictures are fed into the discriminative model, which must tell them apart, scoring a real picture 1 and a fake picture 0. In the second step, the discriminative model is fixed and the generative model is continuously optimized so that its generated images are also scored 1, confusing the discriminative model's judgment of real versus fake. When the discriminative model can no longer distinguish fake images from real ones, training is complete.
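The two-step alternation described above can be sketched with a deliberately tiny one-dimensional GAN; the affine generator, logistic discriminator, and Gaussian "real data" are invented here purely for illustration and have nothing to do with the image network in this paper:

```python
import numpy as np

rng = np.random.default_rng(0)
w, c = 0.1, 0.0        # discriminator D(x) = sigmoid(w*x + c)
a, b = 1.0, 0.0        # generator G(z) = a*z + b

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr, real_mean = 0.05, 3.0
init_gap = abs(b - real_mean)            # generator starts far from the real data

for _ in range(3000):
    xr = rng.normal(real_mean, 0.5, 64)  # real samples
    z = rng.normal(0.0, 1.0, 64)
    xf = a * z + b                       # fake samples from G

    # Step 1: fix G, train D to score real -> 1 and fake -> 0
    dr, df = sigmoid(w * xr + c), sigmoid(w * xf + c)
    gw = (-(1 - dr) * xr).mean() + (df * xf).mean()
    gc = (-(1 - dr)).mean() + df.mean()
    w, c = w - lr * gw, c - lr * gc

    # Step 2: fix D, train G so its samples are scored as real (score -> 1)
    df = sigmoid(w * xf + c)
    ga = (-(1 - df) * w * z).mean()
    gb = (-(1 - df) * w).mean()
    a, b = a - lr * ga, b - lr * gb

final_gap = abs(np.mean(a * rng.normal(0.0, 1.0, 1000) + b) - real_mean)
```

After training, the generator's samples have moved toward the real distribution, which is the same equilibrium the full image GAN is driven to.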

The simple generative adversarial network of Figure 1 may not reconstruct the target object well in ghost imaging, so we improve its structure. For the generative model to convert random noise into a good picture, it needs strong learning ability and generality. In the first generative adversarial networks, the generative model consisted mainly of fully connected layers; performance can be improved by stacking more of them, but this introduces too many parameters, making the network difficult to train and failing to achieve the desired effect. In our work, we replace the fully connected layers in the generative model with convolutional layers. The weight sharing of convolution effectively reduces the number of network parameters, and the convolution operation is particularly well suited to processing image data. Figure 2 is a structural diagram of the generative model we built.
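The parameter saving from replacing a fully connected layer with a convolutional layer can be checked with a quick count; the layer sizes here are illustrative, using the 128 × 128 temporary image mentioned in this paper:

```python
# Fully connected layer mapping a flattened 128*128 input to a 128*128 output:
n = 128 * 128
fc_params = n * n + n            # weight matrix + biases

# One 3x3 convolutional layer (1 input channel, 1 output channel),
# whose shared kernel is applied at every position regardless of image size:
conv_params = 3 * 3 * 1 * 1 + 1  # kernel weights + bias

ratio = fc_params / conv_params
```

Even this single-channel comparison shows a reduction of more than seven orders of magnitude, which is why the convolutional generator trains far more easily.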

The generative model has four parts: a fully connected module, a convolution module, a residual module, and an upsampling module. The fully connected module expands the input one-dimensional random noise into a temporary image of size 128 × 128, which is then fed into the convolution module.

Figure 1. Generative adversarial network structure diagram.

Figure 2. Generator model structure diagram.

The convolution module contains three convolutional layers, whose strong feature-extraction ability captures the information of the temporary image. The convolution module is followed by a max-pooling layer that reduces the size of the temporary image; after the residual module extracts the information again, the image is restored to 128 × 128. Finally, a convolutional layer yields the prediction map while reducing the number of channels of the network.
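The pipeline described in this section can be traced shape-by-shape. The noise dimension (100), the single-channel layout, and the naive convolution below are our own illustrative assumptions, not the paper's exact architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv3x3(img, kernel):
    """Naive single-channel 3x3 'same' convolution (illustrative only)."""
    padded = np.pad(img, 1)
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)
    return out

z = rng.normal(size=100)                      # input one-dimensional random noise
W = rng.normal(size=(128 * 128, 100)) * 0.01  # fully connected module
x = (W @ z).reshape(128, 128)                 # temporary 128 x 128 image

k = rng.normal(size=(3, 3)) * 0.1
for _ in range(3):                            # convolution module: three conv layers
    x = conv3x3(x, k)

x = x.reshape(64, 2, 64, 2).max(axis=(1, 3))  # max pooling: 128 -> 64
x = x + conv3x3(x, k)                         # residual module (identity skip connection)
x = np.repeat(np.repeat(x, 2, 0), 2, 1)       # upsampling: 64 -> 128
pred = conv3x3(x, k)                          # final conv layer -> prediction map
```

The walk-through confirms that the generator's output returns to the 128 × 128 size of the temporary image, as required for comparison with the labels.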

4. Simulation Results

In our work, to verify the generalization ability of the network, a face dataset selected from the open-source CelebA-Cropped dataset is used as the network's test data. Figure 3 shows the face reconstruction results obtained with the original generative adversarial network model. The reconstruction at this point is very poor, and the obtained faces are deformed to varying degrees.

Since the results of the original network are poor, we modified its generative model as shown in Figure 2. Training the modified network for 20,000 iterations on the EMNIST training set yields the results shown in Figure 4. The first row of Figure 4 shows the original face pictures, and each column of the second row is the corresponding network reconstruction. The results show that the reconstruction ability of the modified network is very strong: none of the faces is deformed.

In order to verify the performance of the neural network, we also compared traditional ghost imaging methods for face reconstruction. Figure 5 shows the reconstructed result using the traditional ghost imaging method. The first row is the original image, and the second row is the corresponding reconstruction result.

Figure 3. Face reconstruction results based on the generative adversarial network before modification.

Figure 4. Face reconstruction results based on the modified generative adversarial network (the first row is the original image, and the second row corresponds to the reconstructed image of the first row and each column).

Figure 5. Face reconstruction results based on the traditional ghost imaging method (the first row is the original image, and the second row corresponds to the reconstructed images of the first row and each column).

It can be seen from Figure 5 that the traditional method is not applicable to reconstructing the face data. Comparing with Figure 4, however, the improved generative adversarial network reconstructs the face dataset well, which demonstrates that this method is far superior to the traditional ghost imaging method.

5. Conclusion

In this paper, we improve the traditional generative adversarial network based on residual networks and convolutional neural networks; the improved network can be trained on simple objects (such as EMNIST) and then reconstruct complex face images. Unlike the traditional ghost imaging method of reconstructing the target object, this method reconstructs faces effectively, and thanks to the parameter sharing of the convolutional layers, the number of network parameters is greatly reduced, cutting training time and resource consumption. The results of this work are of great significance for real-time dynamic imaging.

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

References

[1] Shapiro, J.H. (2008) Computational Ghost Imaging. Physical Review A, 78, Article ID: 061802.
https://doi.org/10.1103/PhysRevA.78.061802
[2] Bennink, R.S., Bentley, S.J. and Boyd, R.W. (2002) “Two-Photon” Coincidence Imaging with a Classical Source. Physical Review Letters, 89, Article ID: 113601.
https://doi.org/10.1103/PhysRevLett.89.113601
[3] Ayadi, A., Samet, A., de Bertrand de Beuvronn, F. and Zanni-Merk, C. (2019) Ontology Population with Deep Learning-Based NLP: A Case Study on the Biomolecular Network Ontology. Procedia Computer Science, 159, 572-581.
https://doi.org/10.1016/j.procs.2019.09.212
[4] Fan, D., Fang, S., Wang, G., Gao, S. and Liu, X. (2019) The Visual Human Face Super-Resolution Reconstruction Algorithm Based on Improved Deep Residual Network. EURASIP Journal on Advances in Signal Processing, 2019, Article No. 32.
https://doi.org/10.1186/s13634-019-0626-4
[5] Krizhevsky, A., Sutskever, I. and Hinton, G.E. (2012) ImageNet Classification with Deep Convolutional Neural Networks. Advances in Neural Information Processing Systems, 25, 1097-1105.
[6] He, K., Zhang, X., Ren, S. and Sun, J. (2016) Deep Residual Learning for Image Recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, 27-30 June 2016, 770-778.
https://doi.org/10.1109/CVPR.2016.90
[7] Lyu, M., Wang, W., Wang, H., Wang, H., Li, G., Chen, N. and Situ, G. (2017) Deep-Learning-Based Ghost Imaging. Scientific Reports, 7, 1-6.
https://doi.org/10.1038/s41598-017-18171-7
[8] He, Y., Wang, G., Dong, G., Zhu, S., Chen, H., Zhang, A. and Xu, Z. (2018) Ghost Imaging Based on Deep Learning. Scientific Reports, 8, 1-7.
https://doi.org/10.1038/s41598-018-24731-2
[9] Wu, H., Wang, R., Zhao, G., Xiao, H., Wang, D., Liang, J., Tian, X., Cheng, L. and Zhang, X. (2020) Sub-Nyquist Computational Ghost Imaging with Deep Learning. Optics Express, 28, 3846-3853.
https://doi.org/10.1364/OE.386976
