On an Application of the Improved Maximum Product Criterion to Inverse Acoustic Scattering in a Layered Medium
1. Introduction
Inverse scattering theory for acoustic waves has been a very active research field in recent years, and its mathematical and modeling techniques are widely used in real-world problems. Acoustic scattering problems arise in a wide range of areas, including medical imaging, ultrasound tomography, nondestructive testing, materials science, and radar. The aim in this research field has been not only to detect but also to identify unknown obstacles through the use of acoustic waves, and a lot of work has been done for direct scattering problems as well as for the inverse ones (see [1] [2] and the references therein).
In particular, the inverse scattering problems are exterior problems where the measurement data are taken outside the scattering obstacles. The paper at hand deals with reconstructions of sound-soft two-dimensional obstacles from time-harmonic acoustic plane waves in a layered background medium. These types of problems arise in applications where the background is not homogeneous and hence is modeled as a layered medium. This topic was originally investigated by Bondarenko et al. [3] within the framework of the factorization method.
As shown in Figure 1, let $D$ denote a non-penetrable obstacle which is an open and bounded domain with a $C^2$ boundary $\partial D$. We also denote the background medium by $\mathbb{R}^2 \setminus \overline{D}$, which is divided, due to its $C^2$-boundary $\Gamma_0$, into two connected domains: the bounded homogeneous domain $\Omega_0$ and the homogeneous unbounded one, $\Omega_1$ (see Figure 1). Hence the background medium will consist of a finite number of homogeneous layers (domains) $\Omega_j$, $j = 0, \ldots, N$, where without loss of generality, we will assume that $N = 1$, meaning that the background medium consists of two layers. Let $k_j = \omega / c_j$ be the positive wave number in terms of the frequency $\omega$ and the speed of sound $c_j$ in the corresponding domain $\Omega_j$, $j = 0, 1$.
Mathematically, the problem of scattering of time-harmonic acoustic waves in a two-layered medium in $\mathbb{R}^2$ is described by the following boundary value problem: Determine the total field $u$ such that

$\Delta u + k_1^2 u = 0$ in $\Omega_1$, with $u = u^i + u^s$, (1)

$\Delta u + k_0^2 u = 0$ in $\Omega_0 \setminus \overline{D}$, (2)

$u_+ = u_-$, $\dfrac{\partial u_+}{\partial \nu} = \lambda \dfrac{\partial u_-}{\partial \nu}$ on $\Gamma_0$, (3)

$u = 0$ on $\partial D$, (4)

$\lim_{r \to \infty} \sqrt{r} \left( \dfrac{\partial u^s}{\partial r} - i k_1 u^s \right) = 0$, $r = |x|$, (5)

where $u^i(x) = e^{i k_1 x \cdot d}$, with $d$ a direction on the unit circle, and $u^s$ denote the incident plane wave field and the corresponding scattered field, respectively. The outward unit normal vector to $\Gamma_0$ is denoted by $\nu$, and $u_+$, $\partial u_+ / \partial \nu$ ($u_-$, $\partial u_- / \partial \nu$) denote the limits of $u$, $\partial u / \partial \nu$ on $\Gamma_0$ from the outside (inside) of $\Gamma_0$. In addition, $\lambda$ is a non-negative constant written in terms of the densities $\rho_j$ in the corresponding media $\Omega_j$, $j = 0, 1$. The Dirichlet boundary condition is applied on $\partial D$ and corresponds to a sound-soft obstacle, whereas the transmission conditions are imposed on $\Gamma_0$ and represent the continuity of the medium and the equilibrium of the forces acting on it.
An application of Green's formula implies that the solution $u$ of the direct problem (1)-(5) has the asymptotic behavior [4]

$u^s(x) = \dfrac{e^{i k_1 |x|}}{\sqrt{|x|}} \left( u^\infty(\hat{x}) + O\!\left(\dfrac{1}{|x|}\right) \right)$, $|x| \to \infty$, (6)

uniformly with respect to $\hat{x} = x/|x| \in S$ ($S$ is the unit circle) for some analytic function $u^\infty$, which is called the far-field pattern of $u^s$ and is defined on $S$. In the case of the inverse problem, it represents the measured data. In particular, the inverse problem that will be considered here is the problem of recovering the shape of the imbedded objects from a complete knowledge of the far-field pattern.
The existence, uniqueness and stability of the direct problem, modeled by (1)-(5), have been established by Liu et al. in [5]. Most of the theoretical work for the corresponding inverse problem, including a rigorous justification of the factorization method [2] [6] for recovering imbedded obstacles, has been done in [3], with the use of a mixed reciprocity principle which does not require knowledge of the Green's function for the background medium. Therefore, the resulting method is more computationally efficient than its counterparts.
Recall that the main idea of the factorization method is that the support of the scattering obstacle is obtained by solving an integral equation of the first kind and noting that the norm of an appropriate indicator function becomes unbounded as a point lying on a rectangular grid containing the scatterer approaches its boundary. Due to the ill-posedness of the corresponding integral equation, Tikhonov regularization with the regularization constant computed via Morozov's discrepancy principle [7] is employed. It is well known, however, that Morozov's discrepancy principle is time-consuming and requires a priori knowledge of the noise level in the data, something that is often unavailable in real-life applications. Therefore we employ an Improved Maximum Product Criterion (IMPC), developed by Bazán et al. [8], which, via a fast and efficient algorithm, chooses as regularization parameter the critical point associated with the largest local maximum of the product of the regularized solution norm and the corresponding residual norm. The IMPC is an improved version of the Maximum Product Criterion (MPC) that originally appeared in [9]. Moreover, as with MPC, IMPC does not depend on user specified input parameters (like subspace dimension or truncating parameter) and requires no a priori knowledge of the noise level [9]. As we mentioned before, IMPC extends the maximum product criterion in a very elegant way, and it is worth mentioning that it has been applied with great success to the reconstruction of obstacles in acoustics [9] and electromagnetics [8], as well as in elastic scattering applications [10]. In addition to employing IMPC, the introduction of the mixed reciprocity relation avoids the computation of the far field of the Green's function by replacing it with the total wave, and therefore renders the resulting algorithm even more attractive from a computational point of view [3].
We organize our paper as follows. In Section 2, using [3] as our main reference, we formulate the two-layered background medium problem and present a mixed reciprocity relation along with a formulation of an appropriate version of the factorization method. In Section 3, after discussing recently introduced indicator functions used for object reconstruction [11] [12], IMPC is revisited under the framework of the factorization method established in Section 2, and evidence is provided about why the method works so well in the various applications tested so far. Using IMPC, the regularization parameter will be computed via a fast iterative algorithm which requires no a priori knowledge of the noise level in the data. Subsequently, in Section 4, full far-field patterns will be generated, and the method will be applied for the reconstruction of imbedded obstacles under the assumption that the background medium is given in advance. Concluding remarks are given in Section 5.
2. Formulation of the Problem
As in [3], we denote the total field for the two-layered background medium (that is, in the absence of the obstacle $D$) by $w = w^i + w^s$, where $w^i$ and $w^s$ denote the incident plane wave and the corresponding scattered field, respectively. In particular, the scattering of incident plane waves in a two-layered background medium can be formulated as:

$\Delta w + k_1^2 w = 0$ in $\Omega_1$, (7)

$\Delta w + k_0^2 w = 0$ in $\Omega_0$, (8)

$w_+ = w_-$, $\dfrac{\partial w_+}{\partial \nu} = \lambda \dfrac{\partial w_-}{\partial \nu}$ on $\Gamma_0$, (9)

$\lim_{r \to \infty} \sqrt{r} \left( \dfrac{\partial w^s}{\partial r} - i k_1 w^s \right) = 0$, $r = |x|$. (10)
Recall that the fundamental solution of the Helmholtz equation with wave number $k_j$ is given by

$\Phi_j(x, y) = \dfrac{i}{4} H_0^{(1)}(k_j |x - y|)$, $x \neq y$, (11)

in which $H_0^{(1)}$ is the Hankel function of order zero and of the first kind. Now, let $w^\infty(\hat{x}, d)$ be the far field pattern of the scattered field $w^s(\cdot, d)$ due to an incident plane wave $w^i(x, d) = e^{i k_1 x \cdot d}$ with incident direction $d \in S$. In addition, if the incident field is generated by a point source $\Phi_j(\cdot, z)$ with source point $z$, we denote by $w^s(\cdot, z)$ its scattered field and by $w^\infty(\cdot, z)$ its far field pattern. Following Potthast [13], and as in [3], the following mixed reciprocity relation has been established.
Theorem 1. (Mixed reciprocity relation) For acoustic scattering of plane waves $w^i(\cdot, d)$, $d \in S$, and point sources $\Phi_j(\cdot, z)$, $z \in \Omega_j$, from the layered background medium we have

$w^\infty(\hat{x}, z) = \gamma_j\, w(z, -\hat{x})$, $\hat{x} \in S$, $z \in \Omega_j$, $j = 0, 1$, (12)

where $w(\cdot, -\hat{x})$ solves the transmission problem (7)-(10) and $\gamma_j$ is a known constant depending only on the wave number $k_j$.

For the proof, we refer the reader to [3].
The far field patterns $u^\infty(\hat{x}, d)$, $\hat{x}, d \in S$, given by (6), define the far field operator $F : L^2(S) \to L^2(S)$ by

$(F g)(\hat{x}) = \displaystyle\int_S u^\infty(\hat{x}, d)\, g(d)\, \mathrm{d}s(d)$, $\hat{x} \in S$. (13)

It is well known that the factorization method looks for a regularized solution $g_z$ of the equation

$A\, g_z = \phi_z$, (14)

where $A$ is a self-adjoint operator and the right hand side $\phi_z$ is chosen as the far field pattern of the problem-specific Green's function. We now proceed with the theoretical background needed for the recovery of the interface, and of the shape and location of the imbedded obstacle.
For recovering the interface, we replace $A$ in Equation (14) by the well known operator $(F^* F)^{1/4}$, which characterizes the factorization method as developed by Kirsch in [6]. Hence Equation (14) takes the form

$(F^* F)^{1/4} g_z = \phi_z$, (15)

and the spectral properties of $F$ are used for the reconstructions. To be more specific, since $F$ is normal and compact, which guarantees the existence of a singular system $(\sigma_j, \psi_j, g_j)$, $j \in \mathbb{N}$, of $F$ with $\sigma_j = |\lambda_j|$ and $g_j = (\lambda_j / |\lambda_j|)\, \psi_j$, where $(\lambda_j, \psi_j)$ denotes an eigensystem of $F$, the characterization of the object depends on a range test as described in the following theorem due to Kirsch [6].
Theorem 2. For any $z \in \mathbb{R}^2$ let $\phi_z$ denote the right hand side of Equation (15). Assume that $k^2$ is not a Dirichlet eigenvalue of $-\Delta$ in $D$, i.e. the corresponding homogeneous problem has only the trivial solution. Then a point $z$ belongs to $D$ if and only if the series

$\displaystyle\sum_{j=1}^{\infty} \dfrac{|\langle \phi_z, \psi_j \rangle|^2}{\sigma_j}$ (16)

converges, or equivalently, $z$ belongs to $D$ if and only if $\phi_z$ belongs to the range of the operator $(F^* F)^{1/4}$.
A consequence of the above result is that if one considers the partial sum of the first $n$ terms of the series (16), for large $n$ the partial sum must be large for $z$ outside $D$, as the corresponding series fails to converge, and small for $z$ inside. The factorization method characterizes the object by plotting a suitable indicator function, which can be defined in several ways. More precisely, one defines an indicator function depending on $\|g_z\|$, so that the function is relatively large when $z \in D$ and small otherwise. Several indicator functions will be presented and discussed later.
In reconstructing the imbedded obstacle, we employ a variant of the factorization method, called the $F_\#$-factorization, described in [2] and also successfully used by Bondarenko et al. to recover this type of obstacle in [3]. To this end, we replace the left hand side operator, $(F^* F)^{1/4}$, of Equation (14) by the operator $F_\#^{1/2}$, where $F_\# = |\operatorname{Re} \hat{F}| + |\operatorname{Im} \hat{F}|$ for an operator $\hat{F}$ that will be specified in (21) below, i.e. we now obtain the modified integral equation

$F_\#^{1/2}\, g_z = \phi_z$, (17)

where the real and imaginary parts of an operator are the self-adjoint operators given by $\operatorname{Re} \hat{F} = \frac{1}{2}(\hat{F} + \hat{F}^*)$ and $\operatorname{Im} \hat{F} = \frac{1}{2i}(\hat{F} - \hat{F}^*)$.
Consequently, consider the far field patterns $w^\infty(\hat{x}, d)$, $\hat{x}, d \in S$, of the background medium, and define the corresponding far field operator $F_0 : L^2(S) \to L^2(S)$ by

$(F_0 g)(\hat{x}) = \displaystyle\int_S w^\infty(\hat{x}, d)\, g(d)\, \mathrm{d}s(d)$, $\hat{x} \in S$. (18)

Then it follows that the scattering operator $S$ can be defined by

$S = I + 2 i k_1 |\gamma_1|^2 F$, (19)

where $I$ denotes the identity operator; the background scattering operator $S_0$ is obtained from (19) by replacing $F$ by $F_0$. It can be shown that $S_0$ is unitary [2].
We now formulate the main theorem of the paper, which will allow us to characterize the imbedded obstacle. Its proof can be found in our main reference [3].
Theorem 3. For any $z \in \Omega_0$, let $\phi_z \in L^2(S)$ be defined by $\phi_z(\hat{x}) := \gamma_0\, w(z, -\hat{x})$, $\hat{x} \in S$, where $w(\cdot, -\hat{x})$ solves the problem (7)-(10), and assume that $k_0^2$ is not a Dirichlet eigenvalue of $-\Delta$ in $D$. Then a point $z$ belongs to $D$ if and only if the series

$\displaystyle\sum_{j=1}^{\infty} \dfrac{|\langle \phi_z, \psi_j^\# \rangle|^2}{|\lambda_j^\#|}$ (20)

converges, or equivalently, $z$ belongs to $D$ if and only if $\phi_z$ belongs to the range of the operator $F_\#^{1/2}$. Finally, $(\lambda_j^\#, \psi_j^\#)$ is the eigensystem of the operator $F_\#$, which is given by

$F_\# = \left| \operatorname{Re}\!\left( S_0^* (F - F_0) \right) \right| + \left| \operatorname{Im}\!\left( S_0^* (F - F_0) \right) \right|$. (21)

It is now obvious from the theorem above that the explicit use of the Green's function of the background medium (which corresponds to solving a direct problem for each grid point $z$) is avoided due to the utilization of the near field data $w(z, \cdot)$ that have already been computed. In addition, similarly to the interface reconstruction, obstacle reconstructions are also performed through appropriate indicator functions.
3. On Sampling Indicator Functions
In practice, we work with finite dimensional approximations of the linear operator Equations (15) (resp. (17)) and characterize the object through related indicator functions. The following analysis concerns some indicator functions associated with (15), but something similar can be elaborated to determine objects by means of (17). Let $(\sigma_j, u_j, v_j)$, $j = 1, \ldots, n$, be a singular system of a finite dimensional approximation of the far-field operator $F$, denoted by $F^{(n)}$, constructed by some numerical method. When using the factorization method, for each sampling point $z$ we deal with the system of linear equations

$\left( (F^{(n)})^* F^{(n)} \right)^{1/4} g_z = r_z$, (22)

where $(F^{(n)})^*$ denotes the conjugate transpose of $F^{(n)}$ and $r_z$ denotes the discretized version of the right hand side in (15), and then we calculate the discrete indicator function

$W(z) = \left[ \displaystyle\sum_{j=1}^{n} \dfrac{|\rho_j(z)|^2}{\sigma_j} \right]^{-1} = \dfrac{1}{\|g_z\|^2}$, (23)

where $\rho_j(z) = v_j^* r_z$ is the Fourier coefficient of $r_z$ along the $j$th right singular vector of $F^{(n)}$ and $g_z = \sum_{j=1}^n \sigma_j^{-1/2} \rho_j(z)\, v_j$ is the unique solution of (22), where for simplicity, this solution is denoted with the same symbol as that of the continuous case, see (15).
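To make the preceding description concrete, the following minimal sketch (in Python/NumPy, used here purely for illustration) evaluates the discrete indicator $W(z)$ of (23) from a given far-field matrix. The far-field matrix, the wave number of the unbounded layer and the observation angles are assumed to be supplied by a separate solver, and the normalization of the right hand side follows the conventions adopted above.

```python
import numpy as np

def factorization_indicator(F_n, k1, grid_points, obs_angles):
    """Evaluate the indicator W(z) of (23) for each sampling point z.

    F_n         : (n, n) complex far-field matrix (assumed given).
    k1          : wave number of the unbounded layer.
    grid_points : (m, 2) array of sampling points z.
    obs_angles  : (n,) array of observation angles on the unit circle.
    """
    # SVD of the far-field matrix: F_n = U diag(s) V^H.
    U, s, Vh = np.linalg.svd(F_n)
    xhat = np.column_stack((np.cos(obs_angles), np.sin(obs_angles)))

    W = np.empty(len(grid_points))
    for i, z in enumerate(grid_points):
        # Discretized right hand side of (15): entries exp(-i k1 xhat_m . z).
        r_z = np.exp(-1j * k1 * xhat @ z)
        # Fourier coefficients of r_z along the right singular vectors of F_n.
        rho = Vh @ r_z
        # Picard-type sum equal to ||g_z||^2, and the indicator W(z) = 1/||g_z||^2.
        W[i] = 1.0 / np.sum(np.abs(rho) ** 2 / s)
    return W
```

Plotting $W$ over the sampling grid then reveals the support of the scatterer, as discussed above.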
The factorization method determines the shape of the object by inspecting the sampling points $z$ where $W(z)$ becomes large. The rationale behind such a strategy lies in the fact that $\|g_z\|$ remains within reasonable bounds when $z$ belongs to the scatterer and becomes large when $z$ moves away from the boundary. Actually, because of Theorem 2, for sampling points $z$ inside the scatterer, it is mandatory that the Fourier coefficients $|\rho_j(z)|$ decay faster than $\sqrt{\sigma_j}$ to assure convergence of the series (16). This is not the case for $z$ outside $D$ because for these sampling points the series diverges and therefore the associated Fourier coefficients should get larger than $\sqrt{\sigma_j}$ even for small $j$. Most importantly, this observation is proven numerically valid in the case of finite approximations of the operator Equation (15), even using noisy far-field data, as one can see in Figure 2, which compares Fourier coefficients for a sampling point inside $D$, Fourier coefficients for a sampling point outside $D$, and the values $\sqrt{\sigma_j}$. This illustration corresponds to the reconstruction of a kite using 64 incident and observation directions, a grid of equally spaced sampling points, and far-field data corrupted by additive noise with relative noise level 5%. Data are taken from [9].

Figure 2. Singular values and Fourier coefficients.
From the behavior of the Fourier coefficients described above, it is evident that:

1) For points $z$ outside and close to $D$, $\|g_z\|$ is necessarily large as in this case most Fourier coefficients remain above the singular values, so $W(z)$ becomes very small;

2) As Fourier coefficients for external points are on average much larger than the Fourier coefficients for internal points, which generally remain below the singular values, for $z$ inside $D$ the solution norm $\|g_z\|$ will be of moderate size.

Hence, provided the Fourier coefficients behave as described above, $W(z)$ will be relatively large when $z$ belongs to $D$ and small for $z$ outside. That is, whenever the Fourier coefficients separate into two well distinguished groups in terms of size, one group associated with points inside $D$ and the other with points outside, several sampling indicator functions can be considered. In fact, a class of sampling indicators that exploits such a separation considers the $q$-norm ($q = 2$ or $1$) of the vector $P \rho_z$, where $\rho_z = [\rho_1(z), \ldots, \rho_n(z)]^T$ and $P$ is a diagonal matrix such that, for a chosen natural number $p < n$, a subset of $p$ diagonal entries is set to 1 and the remaining entries are set to 0. Then a sampling indicator function can be defined as

$I_q(z) = \|P \rho_z\|_q$. (24)

We recall that the decision to take only a subset of the Fourier coefficients $\rho_j(z)$ in this indicator lies in the fact that $\|\rho_z\| = \|r_z\|$ remains constant for all sampling points $z$, hence no recovery is possible with the full coefficient vector. The choice $q = 2$, with $P$ selecting the coefficients associated with the smallest singular values, leads to a method known as SVD-tail proposed in [14], for which, unlike $W(z)$, $I_q(z)$ becomes large for $z$ outside $D$ and small for $z$ inside. Obviously, for $I_q(z)$ to behave similarly as $W(z)$ it suffices to take its reciprocal. In this work, we have tested the choice $q = 1$, which avoids calculating squares and square roots, and our conclusion has been that both choices work relatively well, but show low contrast in the reconstructed profiles.
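For completeness, a minimal sketch of a selection-based indicator of the type (24) is given below (same hypothetical setting as before); the 0/1 mask plays the role of the diagonal matrix $P$, and keeping the trailing coefficients reflects our reading of the SVD-tail choice discussed above.

```python
import numpy as np

def selection_indicator(rho, p, q=1, reciprocal=True):
    """Indicator of type (24): q-norm of the masked Fourier coefficient vector.

    rho        : (n,) Fourier coefficients rho_j(z), e.g. rho = Vh @ r_z.
    p          : number of trailing coefficients kept by the diagonal mask P.
    q          : 1 or 2, the norm used in (24).
    reciprocal : if True, return the reciprocal so that the indicator is large
                 inside the scatterer, as with W(z).
    """
    mask = np.zeros(len(rho))
    mask[-p:] = 1.0                     # keep only the last p coefficients
    value = np.linalg.norm(mask * rho, ord=q)
    return 1.0 / value if reciprocal else value
```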
Apart from the indicators in (24), although not explicitly mentioned in the literature, some recently published qualitative methods, referred to as direct sampling methods (DSM), also rely on the separation of Fourier coefficients. As examples we quote the indicators
(25)
where $\beta_z$ is the vector of Fourier coefficients of $r_z$ with respect to the left singular vectors of $F^{(n)}$, $\rho_z$ is the vector of Fourier coefficients $\rho_j(z)$ defined above and $\Sigma$ is the diagonal matrix with diagonal entries equal to the singular values $\sigma_j$, and
(26)
see [11] [12] for details. We observe in passing that (23) can be regarded as a sampling indicator of a direct sampling method since, like (26), it uses a diagonally weighted matrix,
(27)
It is evident that the sampling indicator functions mentioned above avoid solving linear systems and are therefore very efficient in terms of computational effort. From a theoretical point of view, it is known that DSM and DFM1 are equivalent and that the indicators decay as the sampling point $z$ moves away from the scatterer; see Theorems 1 and 4 in [15]. A missing piece of information, however, is how the indicators behave for $z$ outside and close to $D$, and this may partially be the cause of low contrast reconstruction profiles as well as the presence of artifacts such as spurious oscillatory peaks in the corresponding surfaces, although of course, numerical results presented in [11] indicate that DFM1 delivers better accuracy than DSM.
3.1. Understanding Drawbacks Attributed to DSM and DFM Indicators
In this section, we intend, to some extent, to explain why the graphs of the indicator functions (25)-(26) include spurious oscillatory peaks. For this, it will be sufficient to show, not with proofs or theorems but with strong arguments, that the frequency content of $r_z$ is high and that high frequency modes in the indicator are propagated for $z$ outside $D$. The oscillatory behavior of the Fourier coefficients seen in Figure 2 is the key piece in our analysis and therefore requires justification. For this, we take $n$ equidistantly distributed angles on the unit circle, $\theta_m = 2\pi (m-1)/n$, $m = 1, \ldots, n$, and set $\hat{x}_m = (\cos\theta_m, \sin\theta_m)$. With this notation, we see that the discretized version of the right hand side of the operator Equation (15) reads, up to an immaterial constant factor,

$r_z = \left[ e^{-i k_1 \hat{x}_1 \cdot z}, \ldots, e^{-i k_1 \hat{x}_n \cdot z} \right]^{T}$,

where the superscript $T$ denotes the transpose of a vector or matrix, and thus the Fourier coefficient $\rho_j(z)$ can be expressed as

$\rho_j(z) = v_j^* r_z = \displaystyle\sum_{m=1}^{n} \overline{(v_j)_m}\, e^{-i k_1 \hat{x}_m \cdot z}$, (28)

where the bar denotes complex conjugate. This shows that $\rho_j(z)$ involves a sum of weighted harmonics, with potentially high frequency content, and oscillatory weights that are the components of the singular vectors $v_j$. Weight oscillation occurs because the singular vectors $v_j$ oscillate strongly as $j$ grows, which is well known in connection with discrete inverse problems, see, for example, [16]. This justifies the oscillatory behavior of the $\rho_j(z)$ displayed in Figure 2.

To complete our explanation, notice now that because the indicator computes inner products of the vector $\rho_z$ with itself, the highest frequency in the indicator will be at least twice the highest frequency in $\rho_z$. Hence, oscillatory peaks in the indicator behavior become completely natural because the diagonal weights propagate the high oscillations present in the $\rho_j(z)$, this being more pronounced for $z$ outside $D$, in which case $|\rho_j(z)|$ gets larger. To illustrate this, a 3D graph of the DSM indicator for the far-field data used to build Figure 2 is presented in Figure 3, where spurious peaks are evident.
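To observe this oscillation numerically, one can evaluate (28) along a set of sampling points for a fixed $j$; a minimal sketch is shown below, where the singular vector, the wave number and the observation angles are again assumed to be given.

```python
import numpy as np

def fourier_coefficient_profile(v_j, k1, obs_angles, z_points):
    """Evaluate rho_j(z) = v_j^H r_z of (28) at a set of sampling points,
    to visualize its oscillatory behavior (cf. Figure 2).

    v_j        : j-th right singular vector of F^(n).
    k1         : wave number of the unbounded layer.
    obs_angles : angles theta_m defining the directions xhat_m.
    z_points   : (m, 2) array of sampling points.
    """
    xhat = np.column_stack((np.cos(obs_angles), np.sin(obs_angles)))
    # Row i of R is the vector r_z for z = z_points[i].
    R = np.exp(-1j * k1 * z_points @ xhat.T)
    # rho_j(z) is a sum of harmonics weighted by the conjugated entries of v_j.
    return R @ np.conj(v_j)
```

Plotting the modulus of the returned values along a line of sampling points exhibits the high frequency content discussed above.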
We close the section with the observation that the effect of the oscillatory $\rho_j(z)$ on $W(z)$ is not so dramatic since for $z$ outside $D$, the weight $1/\sigma_j$ implies that $\|g_z\|^2$ gets large, which is good for separating the indicator behavior, while for $z$ inside, since on average $|\rho_j(z)| \le \sqrt{\sigma_j}$, see Figure 2, the coefficient $|\rho_j(z)|^2/\sigma_j$ remains relatively small and so $\|g_z\|^2$ gets moderate size. That is, although as already commented, the reconstructed profiles obtained with $W(z)$ suffer from low contrast, spurious oscillatory peaks are mitigated. A similar observation holds true for the indicators in (24).
To overcome the above drawbacks, post-filtering strategies have been proposed recently for the above DSM methods [17], whereas for Kirsch's factorization method (FM) the ad-hoc strategy has been to use Tikhonov regularization, often based on Morozov's generalized discrepancy principle (GDP). However, while filtering did not completely succeed in eliminating artifacts for DSMs, the performance of Tikhonov regularization has been highly satisfactory in several reconstruction problems, especially when the regularization parameter is selected via the Improved Maximum Product Criterion (IMPC). Recall that unlike GDP, which requires a priori knowledge of the noise level in the data, IMPC does not require any user specified input parameters such as the noise level or others.
To illustrate the superior reconstruction quality of the Tikhonov-regularization-based method coupled with IMPC over the above DSMs, we normalize the indicators by dividing them by their maximum values and use level curves to recover the profile of the obstacle. Level curves at the values 0.5, 0.6, 0.7, and 0.8 for the DSM and DFM1 indicators and for the one based on the Tikhonov approach coupled with IMPC, labeled here by W-FP, are displayed in Figure 4.

The behavior of the level curves confirms two facts: first, that the reconstructed profiles obtained by the DSM and DFM1 indicators are indeed contaminated by spurious peaks, and second, that the Tikhonov regularized version of $W(z)$ coupled with IMPC significantly mitigates the effects of the oscillatory Fourier coefficients.
In addition to the level curves displayed in Figure 4, to reinforce once more the superior reconstruction quality of W-FP over the other indicators, in Figure 5 we display flat surface plots for the DSM and DFM1 indicators, W-FP, and a fourth indicator, denoted by AFC, which is of the form (24) and is built directly from the absolute values of the Fourier coefficients. Flat surface plots are obtained using the built-in MATLAB function pcolor. As we can see, although this particular form of visualization hides oscillating peaks, the same is not true if we focus on the background, where small oscillations are evident. Other than that, the low reconstruction contrast of the profiles obtained with DSM and AFC is also evident.
Despite the fact that DSMs are theoretically stable and computationally cheap [11] [15], motivated by their shortcomings and by the superior quality of the reconstructions obtained with W-FP, perhaps the most important observation here is that if the main objective is the quality of the reconstruction, one should choose W-FP. With this motivation, DSMs will no longer be used in this paper, and to justify the use of IMPC, in the next section we revisit IMPC and describe new theoretical properties.
3.2. Revisiting IMPC and GDP
We first revisit a modified version of (23) based on Tikhonov regularization where the regularization parameter is chosen by the improved maximum product criterion (IMPC) [9]. In this case, the indicator function takes the form

$W_\alpha(z) = \dfrac{1}{\|g_z^\alpha\|^2}$, (29)

with

$g_z^\alpha = \arg\min_{g \in \mathbb{C}^n} \left\{ \|A g - r_z\|^2 + \alpha^2 \|g\|^2 \right\}$, (30)

where $A$ denotes the coefficient matrix of (22) and the regularization parameter $\alpha$ is a critical point of the product of the regularized solution norm and the corresponding residual norm.

Figure 4. Level curves for the DSM, DFM1 and W-FP indicators.

Figure 5. Reconstructed objects using four indicator functions.

Let $r_z^\alpha$ be the residual vector corresponding to the regularized solution $g_z^\alpha$, that is, $r_z^\alpha = r_z - A g_z^\alpha$. From the SVD of $A$, with singular values $s_1 \ge \cdots \ge s_n > 0$ (here $s_j = \sqrt{\sigma_j}$) and singular vectors $v_j$, it is simple to check that

$g_z^\alpha = \displaystyle\sum_{j=1}^{n} f_j \dfrac{\rho_j(z)}{s_j}\, v_j$, $\qquad r_z^\alpha = \displaystyle\sum_{j=1}^{n} (1 - f_j)\, \rho_j(z)\, v_j$,

and the corresponding squared norms are given by

$x(\alpha) \equiv \|g_z^\alpha\|^2 = \displaystyle\sum_{j=1}^{n} \dfrac{f_j^2 |\rho_j(z)|^2}{s_j^2}$, (31)

$y(\alpha) \equiv \|r_z^\alpha\|^2 = \displaystyle\sum_{j=1}^{n} (1 - f_j)^2 |\rho_j(z)|^2$, (32)

where $f_j$ are the Tikhonov filter factors,

$f_j = \dfrac{s_j^2}{s_j^2 + \alpha^2}$, $j = 1, \ldots, n$. (33)

It is well known that $x(\alpha)$ and $y(\alpha)$ vary monotonically with $\alpha$, the former being decreasing and the latter increasing [16]. Recall that if $x(\alpha)$ and $y(\alpha)$ are defined as in (31)-(32), IMPC selects as regularization parameter the largest maximum point of

$\Psi(\alpha) = x(\alpha)\, y(\alpha)$, (34)

which is simple to compute via a fast fixed point procedure [8] [9]. It is instructive to notice that

$\Psi'(\alpha) = x'(\alpha)\left[ y(\alpha) - \alpha^2 x(\alpha) \right]$. (35)

Hence, since $x'(\alpha) < 0$ for all $\alpha > 0$, as seen in [9], $\alpha_*$ is a maximum point of $\Psi$ if and only if $\alpha_*$ is a fixed point of $\varphi(\alpha) = \sqrt{y(\alpha)/x(\alpha)} = \|r_z^\alpha\| / \|g_z^\alpha\|$, and $\varphi'(\alpha_*) \ge 1$ [9]. For convergence details and practical implementation of IMPC the reader is referred to [8].
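The following sketch illustrates the criterion itself under the notation of this subsection: it evaluates $\Psi(\alpha) = x(\alpha)\, y(\alpha)$ through (31)-(33) on a geometric grid inside the interval bounded by the smallest and largest singular values (see Theorem 4 below) and returns the grid point with the largest value. This brute-force search is only an illustration; the actual IMPC implementation relies on the fast fixed-point algorithm of [8] [9].

```python
import numpy as np

def impc_parameter(s, rho, num=200):
    """Select a Tikhonov regularization parameter by maximizing the product
    Psi(alpha) = x(alpha) * y(alpha) of (34), with x and y given by (31)-(32).

    s   : singular values of the coefficient matrix of (22), s_1 >= ... >= s_n > 0.
    rho : Fourier coefficients rho_j(z) of the right hand side r_z.
    num : number of grid points used in the brute-force search.

    Note: this is a simple grid search over [s_n, s_1]; IMPC itself uses a fast
    fixed-point algorithm instead (see [8] [9]).
    """
    s = np.asarray(s, dtype=float)
    b2 = np.abs(rho) ** 2
    alphas = np.geomspace(s.min(), s.max(), num)

    def psi(a):
        f = s**2 / (s**2 + a**2)        # Tikhonov filter factors (33)
        x = np.sum(f**2 * b2 / s**2)    # squared solution norm (31)
        y = np.sum((1.0 - f)**2 * b2)   # squared residual norm (32)
        return x * y

    values = np.array([psi(a) for a in alphas])
    return alphas[np.argmax(values)]
```

With the parameter at hand, the regularized solution norm needed for the indicator (29) follows directly from (31).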
The goal of this subsection is to give further insight into the theoretical properties of IMPC in order to fully understand why it works so well. A key fact for this is to understand the behavior of $\Psi$ as a function of the sampling point $z$. Figure 6 displays the typical behavior of $\Psi$ for points outside and inside $D$, labeled as $\Psi_{\rm out}$ and $\Psi_{\rm in}$, respectively, as well as the corresponding functions $\varphi_{\rm out}$ and $\varphi_{\rm in}$. To explain such a behavior we assume, as usual, that $\Psi$ has only one maximum point, and we will examine its location.
Figure 6. Typical behavior of the functions $\Psi$ and $\varphi$ for $z$ outside (resp. inside) the scatterer. Maximum points of $\Psi$, or equivalently the related fixed points of $\varphi$, are marked with small circles.

The expressions in (31)-(32) are the basis for our analysis of the first factor in (35). Notice that for $z$ outside $D$ and $\alpha$ close to zero, $x(\alpha)$ becomes close to $x(0) = \sum_{j=1}^{n} |\rho_j(z)|^2/s_j^2$, which is large, because generally the Fourier coefficients $|\rho_j(z)|$ remain above the singular values, see Figure 2. Assume now that $\alpha$ lies not so far from the smallest singular value $s_n$, such that $f_1 \approx \cdots \approx f_p \approx 1$, with $p$ filter factors close to 1 and $n - p$ small filter factors. Then, based on (33) we have

$x(\alpha) \approx \displaystyle\sum_{j=1}^{p} \dfrac{|\rho_j(z)|^2}{s_j^2}$, $\qquad y(\alpha) \approx \displaystyle\sum_{j=p+1}^{n} |\rho_j(z)|^2$.

Thus, for $z$ outside $D$ and $\alpha$ relatively close to $s_n$, that is, $p \approx n$, it is clear that $\alpha^2 x(\alpha)$ dominates $y(\alpha)$ (because $x(\alpha)$ becomes large), so $\Psi'(\alpha) > 0$, and $\Psi$ increases most often quickly and dramatically as $\alpha$ moves away from $s_n$, until the maximum is reached. From there, as $p$ diminishes or, equivalently, as $\alpha$ starts to get away from $s_n$, $y(\alpha)$ dominates $\alpha^2 x(\alpha)$ and so $\Psi$ is decreasing and becoming small as $\alpha$ approaches the largest singular value $s_1$. A similar phenomenon happens when $z$ lies inside $D$, but with the difference that, as $\alpha$ moves away from $s_n$, $\Psi_{\rm in}$ increases much more slowly than when $z$ is outside $D$ because the coefficients $|\rho_j(z)|$ remain below the singular values, see again Figure 2; in this case, the maximum point of $\Psi_{\rm in}$ is reached relatively close to $s_1$. A simple way to see this is by noting that since $y(\alpha)$ can be approximated as

$y(\alpha) \approx \displaystyle\sum_{j=p+1}^{n} |\rho_j(z)|^2$, (36)

a relatively large number of Fourier coefficients in (36) is required for $\alpha^2 x(\alpha)$ to be dominated by $y(\alpha)$. That is, the maximum point should be relatively close to $s_1$.
Having discussed the behavior of the function $\Psi$ for points inside and outside $D$, to strengthen the conclusions above we now describe some theoretical properties in the following theorem.

Theorem 4. Assume that the coefficient matrix $A$ of (22) is nonsingular with distinct singular values $s_1 > s_2 > \cdots > s_n > 0$ and that $\rho_j(z) \neq 0$, $j = 1, \ldots, n$. Let $\varphi(\alpha) = \|r_z - A g_z^\alpha\| / \|g_z^\alpha\| = \sqrt{y(\alpha)/x(\alpha)}$. Then for all $\alpha > 0$

$\dfrac{\alpha^2}{s_1} \le \varphi(\alpha) \le \dfrac{\alpha^2}{s_n}$. (37)

Consequently, every fixed point $\alpha_*$ of $\varphi$ satisfies

$s_n \le \alpha_* \le s_1$. (38)

Moreover, irrespective of the sampling point, the function $\Psi$ always has a maximum point $\alpha_*$ that is a fixed point of $\varphi$, $s_n < \alpha_* < s_1$, and this point is unique if $\varphi'(\alpha) \neq 1$ for all $\alpha \in (s_n, s_1)$.
Proof: It is well known that the solution to the optimization problem (30) satisfies the regularized normal equations,

$(A^* A + \alpha^2 I)\, g_z^\alpha = A^* r_z$, (39)

where $I$ denotes the $n \times n$ identity matrix. Therefore

$A^* (r_z - A g_z^\alpha) = \alpha^2 g_z^\alpha$, (40)

and hence, taking the 2-norm on both sides of the above equality, we see that $s_n \|r_z - A g_z^\alpha\| \le \|A^*(r_z - A g_z^\alpha)\| \le s_1 \|r_z - A g_z^\alpha\|$ implies

$s_n \|r_z - A g_z^\alpha\| \le \alpha^2 \|g_z^\alpha\| \le s_1 \|r_z - A g_z^\alpha\|$. (41)

Since $A$ is non-singular, from the SVD of $A$ and well known eigenvalue properties of symmetric matrices, the inequalities in (37) are straightforward consequences of (41), while the inequalities in (38) are consequences of (37). In addition, from (38) it is clear that $\varphi$ has at least a fixed point $\alpha_* \in [s_n, s_1]$. It remains to show firstly that $\alpha_* \neq s_n$, $\alpha_* \neq s_1$, and then that this fixed point maximizes $\Psi$ and is unique. We will do the first part by contradiction. Assume that $\alpha_* = s_n$. Taking the 2-norm on both sides of (40) and taking (31)-(32) into account, the fixed point equality $\alpha_* = \varphi(\alpha_*)$ will be satisfied either when $\rho_j(z) = 0$ for $j < n$, or when all singular values of $A$ are equal to $s_n$. This is a contradiction. Therefore $\alpha_* \neq s_n$. A similar analysis shows that $\alpha_* \neq s_1$. That $\alpha_*$ maximizes $\Psi$ is a consequence of (35) and the condition $x'(\alpha) < 0$ for all $\alpha > 0$.

Finally, to prove uniqueness, assume that $\Psi$ has two maximum points, say $\alpha_1 < \alpha_2$, such that $\Psi'(\alpha_i) = 0$ for $i = 1, 2$. Following an analysis from [18], it follows that there exists a fixed point $\tilde{\alpha}$ of $\varphi$ between $\alpha_1$ and $\alpha_2$ that minimizes $\Psi$. Using the mean value theorem for derivatives, $\varphi(\alpha_2) - \varphi(\tilde{\alpha}) = \varphi'(\xi)(\alpha_2 - \tilde{\alpha})$ with $\xi \in (\tilde{\alpha}, \alpha_2)$. This implies that $\varphi'(\xi) = 1$, and so $\xi$ is a point of $(s_n, s_1)$ at which $\varphi' = 1$, which is a contradiction because $\varphi'(\alpha) \neq 1$ for all $\alpha \in (s_n, s_1)$. This completes the proof.
Except for inequality (37), the statements of Theorem 4 were demonstrated in [9] [18] by other means. Here, these statements are elegantly proven from (37). In addition, the current approach allows us to explain to some extent what we discussed above about the location of the maximizing point of $\Psi$. Actually, using the notation $\Psi_{\rm out}$ and $\Psi_{\rm in}$ introduced in Figure 6, and assuming that the maximum point of $\Psi$ is unique and achieved at $\alpha_*$, from (37) it follows that

$s_n^2\, x(\alpha_*)^2 \le \Psi(\alpha_*)$, (42)

$\Psi(\alpha_*) \le s_1^2\, x(\alpha_*)^2$. (43)

In particular, at $\alpha = \alpha_*$ we see that (42) bounds $\Psi(\alpha_*)$ from below by $s_n^2\, x(\alpha_*)^2$, and this indicates that $\Psi_{\rm out}$ may reach a very large value (recall that $x(\alpha)$ is large for external points) and that this value may be much larger than the maximum of $\Psi_{\rm in}$. If this is the case, the maximizer of $\Psi_{\rm out}$ must be smaller than that of $\Psi_{\rm in}$, a fact which, as we have seen, depends on the behavior of the Fourier coefficients. A line of analysis involving derivatives of $\varphi$ can be used to characterize the location of the maximum point of $\Psi$. In fact, as Fourier coefficients associated with points outside $D$ are generally larger than Fourier coefficients associated with internal points, to characterize such a location it is reasonable to compare the functions $\varphi_{\rm out}$ and $\varphi_{\rm in}$ associated with an external and an internal sampling point, respectively. This can be seen as follows. Let $\varphi(\alpha) = \sqrt{y(\alpha)/x(\alpha)}$ as before. It is clear that $\varphi(\alpha) \to 0$ as $\alpha \to 0^+$ and that $\varphi$ is differentiable for $\alpha > 0$. Then

$\varphi'(\alpha) = -\dfrac{x'(\alpha)\left[ \alpha^2 x(\alpha) + y(\alpha) \right]}{2\, x(\alpha)^2\, \varphi(\alpha)} > 0$, (44)

where $'$ denotes differentiation with respect to $\alpha$. Based on (44) we obtain the following corollary.

Corollary 1. Assume that $\varphi_{\rm out}'(\alpha) \ge \varphi_{\rm in}'(\alpha)$ for all $\alpha > 0$. Then $\varphi_{\rm out}(\alpha) \ge \varphi_{\rm in}(\alpha)$ for all $\alpha > 0$, and therefore the maximizer of $\Psi_{\rm out}$ lies on the left of the maximizer of $\Psi_{\rm in}$.
The discussion and analysis developed in this section regarding DSMs and IMPC are relevant in many ways. For example, since we have identified the source of oscillations and spurious peaks in DSM reconstructions, strategies to circumvent DSM difficulties can be devised. Now, focusing on IMPC, what matters here is that we have provided additional insight to explain why the method works so well and why its reconstructions are potentially of better quality compared to those obtained by DSMs, although clearly at a higher cost. However, we emphasize that the IMPC cost is not a crucial issue in the case of 2D reconstructions, as IMPC is based on a fast fixed point iteration algorithm. A similar observation holds true for 3D reconstructions since in this case IMPC is applied in conjunction with a projection technique. Other than that, it is worth noting that if a cheap indicator, such as one of the DSMs, is used as a preprocessing step to identify a small region containing the object, then by applying IMPC only to that region the total cost of the reconstruction can be significantly reduced. This subject is left to future work.
Recall that GDP chooses as regularization parameter the only root of the nonlinear equation

$G(\alpha) \equiv \|r_z - A g_z^\alpha\|^2 - \hat{\delta}^2 = 0$, (45)

where $\hat{\delta}$ is an estimate for the noise level $\delta$ in the data such that $\delta \le \hat{\delta}$. It is well known that $G$ is convex for small $\alpha$ and concave for large $\alpha$. As a result, global and monotone convergence of Newton's method cannot be guaranteed [19]. This difficulty is circumvented through a fast fixed-point algorithm proven convergent irrespective of the chosen initial guess [20].
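Since the squared residual norm (32) is increasing in $\alpha$, the discrepancy Equation (45) can also be solved by plain bisection. The sketch below does exactly that and is meant only as a transparent substitute for the fast fixed-point algorithm of [20]; the inputs are as in the previous sketches.

```python
import numpy as np

def gdp_parameter(s, rho, delta_hat, lo=1e-12, hi=None, tol=1e-10, maxit=200):
    """Solve G(alpha) = ||residual(alpha)||^2 - delta_hat^2 = 0 by bisection.

    s         : singular values of the coefficient matrix of (22).
    rho       : Fourier coefficients of the right hand side r_z.
    delta_hat : noise level estimate used in (45).
    """
    s = np.asarray(s, dtype=float)
    b2 = np.abs(rho) ** 2

    def G(a):
        one_minus_f = a**2 / (s**2 + a**2)    # 1 - f_j, cf. (33)
        return np.sum(one_minus_f**2 * b2) - delta_hat**2

    if hi is None:
        hi = 10.0 * s.max()
    # G is increasing in alpha, so a sign change is required for a root.
    assert G(lo) < 0.0 < G(hi), "delta_hat is incompatible with the data"
    for _ in range(maxit):
        mid = 0.5 * (lo + hi)
        if G(mid) < 0.0:
            lo = mid
        else:
            hi = mid
        if hi - lo <= tol * hi:
            break
    return 0.5 * (lo + hi)
```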
We close the subsection with a new estimate for the only root of (45), which is again a consequence of (37).

Corollary 2. Let the only root of (45) be denoted by $\alpha_{\rm GDP}$. Whenever $0 < \hat{\delta} < \|r_z\|$ we have
(46)
The estimates in (46) improve significantly an earlier estimate reported in Bazán ([20], Theorem 3.2).
4. Numerical Results
In this section, we give several 2D numerical examples to illustrate the effectiveness of IMPC in reconstructing the interface and imbedded obstacles in a layered medium. To generate discrete data for the reference operator $F_0$ and the far-field operator $F$ we use the boundary integral equation method as described in [3]. This yields finite approximations $F_0^{(n)}$ and $F^{(n)}$ given as

$F_0^{(n)} = \left[ w^\infty(\hat{x}_i, \hat{x}_j) \right]_{i,j=1}^{n}$, $\qquad F^{(n)} = \left[ u^\infty(\hat{x}_i, \hat{x}_j) \right]_{i,j=1}^{n}$, (47)

where $\hat{x}_j \in S$, $j = 1, \ldots, n$, are equidistantly distributed directions and $n$ is the number of incident and observation directions around the unit circle. For our examples we fixed the wave numbers $k_0$ and $k_1$, the transmission parameter $\lambda$ and the number of directions $n$; in all cases the reconstructions were made using a grid of equally spaced sampling points on a rectangle containing the scatterers.
To recover the interface we rely on the $(F^* F)^{1/4}$ method and use the indicator function (23) for noise free data as described in (47), and its regularized counterpart (29) for noisy data, with the regularization parameter being chosen by IMPC and GDP and calculated by using fast fixed-point iterations as described in [8] [10]. Noisy data are generated as

$\hat{F}^{(n)} = F^{(n)} + \varepsilon\, \|F^{(n)}\|\, E$, (48)

where $\|\cdot\|$ denotes the spectral matrix norm, $E$ is a complex random matrix satisfying $\|E\| = 1$, and $\varepsilon$ gives the relative noise level in the data. On the other hand, the reconstruction of imbedded obstacles is based on a discrete counterpart of the linear operator Equation (17),

$\left( F_\#^{(n)} \right)^{1/2} g_z = r_z$, (49)

where $r_z$ is a discrete version of the function $\phi_z$ in Theorem 3 and

$F_\#^{(n)} = \left| \operatorname{Re}\!\left( (S_0^{(n)})^* (F^{(n)} - F_0^{(n)}) \right) \right| + \left| \operatorname{Im}\!\left( (S_0^{(n)})^* (F^{(n)} - F_0^{(n)}) \right) \right|$, (50)

with $S_0^{(n)}$ being obtained from (19) by replacing $F$ by $F_0^{(n)}$. In (50), the absolute value of a Hermitian matrix $A$ with eigenvalue decomposition $A = X \Lambda X^*$ is given as $|A| = X |\Lambda| X^*$. Also, as usual, $\operatorname{Re} M = \frac{1}{2}(M + M^*)$ and $\operatorname{Im} M = \frac{1}{2i}(M - M^*)$. Obviously, since $F_\#^{(n)}$ is Hermitian positive definite, its square root is given by $\left( F_\#^{(n)} \right)^{1/2} = X \Lambda^{1/2} X^*$. Therefore, indicator functions $W(z)$ and $W_\alpha(z)$ based on (49) are essentially the same as those based on (23) and (29). Noisy data $\hat{F}_\#^{(n)}$ are generated similarly as in (48).
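The noise model (48) and the matrix $F_\#^{(n)}$ of (50) are straightforward to implement. The sketch below follows our reading of (48), of (50) and of the eigenvalue-based definition of the absolute value of a Hermitian matrix given above; in particular, the combination $S_0^{*}(F - F_0)$ is an assumption taken from that reading and from [3].

```python
import numpy as np

def add_noise(F, noise_level, seed=None):
    """Perturb the far-field matrix as in (48): F + eps * ||F|| * E, where E is a
    complex random matrix normalized to have unit spectral norm."""
    rng = np.random.default_rng(seed)
    E = rng.standard_normal(F.shape) + 1j * rng.standard_normal(F.shape)
    E /= np.linalg.norm(E, 2)
    return F + noise_level * np.linalg.norm(F, 2) * E

def hermitian_abs(A):
    """|A| = X |Lambda| X^H for a Hermitian matrix A = X Lambda X^H, see (50)."""
    lam, X = np.linalg.eigh(A)
    return (X * np.abs(lam)) @ X.conj().T

def f_sharp(F, F0, S0):
    """Assemble F_# of (50) from the far-field matrix F, the reference matrix F0
    and the background scattering matrix S0 (all assumed given)."""
    M = S0.conj().T @ (F - F0)
    re_M = 0.5 * (M + M.conj().T)
    im_M = (M - M.conj().T) / 2j
    return hermitian_abs(re_M) + hermitian_abs(im_M)
```

The square root $\left(F_\#^{(n)}\right)^{1/2}$ needed in (49) can then be obtained from the same eigenvalue decomposition, with $|\Lambda|$ replaced by $\Lambda^{1/2}$.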
We report numerical results using noise free data and noisy data with $\varepsilon = 0.05$ and $\varepsilon = 0.1$, that is, data with relative noise levels of 5% and 10%, respectively. In this sense, the numerical experiments performed here complement what is reported in [3], where reconstructions are based on $F^{(n)}$ and $F_0^{(n)}$ without considering additive noise. Reconstructions based on noise free data are labeled by W, and reconstructions based on noisy data are labeled by W-FP when we use IMPC and by W-GDP when we use Morozov's generalized discrepancy principle. For the implementation of GDP we use a noise level estimate $\hat{\delta}$ determined by the relative noise level $\varepsilon$, see (45).
Example 1. The interface is given by a rounded square and the imbedded obstacle is given by the union of an ellipse and a kite, see Figure 7. Both obstacles are assumed to be sound-soft.
Results for noise level 5%, shown in Figure 8, indicate that both GDP and IMPC produce good quality interface reconstructions (first line) and that the three indicators give comparable object reconstructions, with the observation that the quality of the W-FP reconstruction looks better than that of W-GDP (second line). Reconstructions using data with noise level 10% are displayed in Figure 9. Again we see that the three indicators provide good quality interface reconstructions and comparable quality object reconstructions, although with the background better resolved with IMPC than with GDP.

Figure 7. Profile of interface and objects.

Figure 8. Top: Reconstructed interfaces. Noiseless data (Left), Noisy data (Center and Right). Bottom: Reconstructed imbedded obstacles.
Example 2. In this example, the interface and obstacle are the same as in Example 1 but now the kite is assumed to be sound-hard. Reconstructions using data with noise level 10% are displayed in Figure 10. Again we see that the interface reconstructions are of good quality and that the reconstructions of the sound-soft object are visually of better quality than those of the sound-hard one. As explained in [3], this is because the values of the indicator functions are much smaller for the sound-hard obstacle compared to the sound-soft one, as these values depend on the physical property under consideration.
Example 3. The interface is composed of one rounded square and a circle and the imbedded obstacles are an ellipse and a kite, see Figure 11. Both objects are assumed to be sound-soft.
Figure 9. Top: Reconstructed interfaces. Noiseless data (Left), Noisy data (Center and Right). Bottom: Reconstructed imbedded obstacles.
Figure 10. Top: Reconstructed interfaces. Noiseless data (Left), Noisy data (Center and Right). Bottom: Reconstructed imbedded obstacles. Obstacle is assumed to be sound-hard.
Figure 12 shows reconstructions using data with noise levels 5% (lines 1 and 2) and 10% (lines 3 and 4). As in Example 1, we see that the interface reconstructions for both noise levels have comparable quality, while the reconstruction of the objects obtained with W-GDP appears to degrade when the noise level increases to 10%. Interestingly, the interface seems to be part of the background in the reconstruction of the objects. In any case, we can see here that the best object reconstructions for both noise levels are obtained with W-FP. This confirms once again what is often seen when IMPC is compared to GDP in solving other inverse scattering problems, see, for instance, [8] [9] [11] [21].
Example 4. In this example, the interface and object are as in Example 3 but the kite is assumed to be sound-hard. The reconstructions are shown in Figure 13. As we can see, while interface reconstructions are robust to noise, as in the examples above, a certain lack of resolution seems to be present in the kite reconstruction, just as in Example 2 where we dealt with the sound-hard case. Other than that, note again that when reconstructing the objects, the interface seems to be part of the background.
Example 5. The interface is composed of two rounded squares, as displayed in Figure 14, and the imbedded obstacles are an ellipse and a kite, assuming as in Example 4 that the ellipse is sound-soft and the kite is sound-hard. Reconstructions using data with noise level 10% are displayed in Figure 15. Once again, we see that the interface reconstructions are of good quality, while the reconstructions of the sound-hard object are as in Examples 2 and 4, that is, with slightly poorer resolution.
Figure 12. Top: Reconstructed interfaces. Noiseless data (Left), Noisy data (Center and Right). Bottom: Reconstructed imbedded obstacles.
Figure 13. Top: Reconstructed interfaces. Noiseless data (Left), Noisy data (Center and Right). Bottom: Reconstructed imbedded obstacles.
Figure 15. Top: Reconstructed interfaces. Noiseless data (Left), Noisy data (Center and Right). Bottom: Reconstructed imbedded obstacles.
5. Conclusion
We have presented an updated version of the Improved Maximum Product Criterion (IMPC) which, combined with a mixed reciprocity relation, is used for the reconstruction of two-dimensional obstacles in a layered medium. Numerical results indicate that the method yields accurate and robust reconstructions, and a comparison with some recently developed direct sampling and factorization methods illustrates its superiority in terms of reconstruction quality. Extension to 3D reconstructions can be a topic of future research in which the significant reduction of the computational cost will be a primary task. This can be achieved by using an indicator function that relies on the separation of the Fourier coefficients as a preprocessing step in order to identify a small region in which the object is located, and then obtaining the reconstruction by applying IMPC to that region.