Strong Consistency of Estimators under Missing Responses

Abstract

In this article, we focus on the semi-parametric error-in-variables model with missing responses: $y_i = \xi_i\beta + g(t_i) + \epsilon_i$, $x_i = \xi_i + \mu_i$, where the $y_i$ are response variables missing at random, the $(\xi_i, t_i)$ are design points, the $\xi_i$ are potential variables observed with measurement errors $\mu_i$, and the unknown slope parameter $\beta$ and nonparametric component $g(\cdot)$ need to be estimated. We consider two different approaches to estimating $\beta$ and $g(\cdot)$. Under appropriate conditions, we establish the strong consistency of the proposed estimators.


1. Introduction

Consider the following semi-parametric error-in-variables (EV) model

$\begin{cases} y_i = \xi_i\beta + g(t_i) + \epsilon_i, \\ x_i = \xi_i + \mu_i, \end{cases} \qquad (1.1)$

where the $y_i$ are response variables, the $(\xi_i, t_i)$ are design points, the $\xi_i$ are potential variables observed with measurement errors $\mu_i$, $E\mu_i = 0$, and the $\epsilon_i$ are random errors with $E\epsilon_i = 0$. $\beta \in \mathbb{R}$ is an unknown parameter that needs to be estimated, $g(\cdot)$ is an unknown function defined on the closed interval $[0,1]$, and $h(\cdot)$ is a known function defined on $[0,1]$ satisfying

$\xi_i = h(t_i) + v_i, \qquad (1.2)$

where the $v_i$ are also design points.
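
To fix ideas, the following minimal simulation sketch (not part of the original analysis) generates data from (1.1)-(1.2); the particular choices of $\beta$, $g(\cdot)$, $h(\cdot)$ and the error distributions are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
beta = 2.0                                  # illustrative true slope (assumption)
t = np.sort(rng.uniform(0.0, 1.0, n))       # design points t_i in [0, 1]

g = lambda s: np.sin(2.0 * np.pi * s)       # illustrative nonparametric component g(.)
h = lambda s: 1.0 + 0.5 * s                 # illustrative known function h(.) in (1.2)

v = rng.uniform(-0.5, 0.5, n)               # design points v_i in (1.2)
xi = h(t) + v                               # xi_i = h(t_i) + v_i
eps = rng.normal(0.0, 1.0, n)               # random errors with E eps_i = 0
mu = rng.normal(0.0, 0.3, n)                # measurement errors with E mu_i = 0

y = xi * beta + g(t) + eps                  # responses from model (1.1)
x = xi + mu                                 # observed surrogates x_i = xi_i + mu_i
```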

Model (1.1) and its special forms have received much attention in recent years. When $\mu_i \equiv 0$, the $\xi_i$ are observed exactly and model (1.1) reduces to the general semi-parametric model, which was first introduced by Engle et al. [1]. In many applications, however, covariates are measured with error, so EV models are somewhat more practical than ordinary regression models. In addition, when the $y_i$ are completely observed and $g(\cdot) \equiv 0$, model (1.1) reduces to the usual linear EV model, which has been studied by Liu and Chen [2], Miao et al. [3], Miao and Liu [4], Fan et al. [5] and so on. For complete data, model (1.1) itself has also been studied by many authors: see Cui and Li [6], Liang et al. [7], Zhou et al. [8] and so on. In recent years, semi-parametric EV models have received wide attention.

On the other hand, we often encounter incomplete data in practical applications of these models. In particular, some response variables may be missing, by design or by happenstance. For example, the responses $y_i$ may be very expensive to measure, so that only part of them are available. Missing responses are in fact very common in opinion polls, socio-economic investigations, market research surveys and so on. Therefore, we focus our attention on the case where missing data occur only in the response variables. When the $\xi_i$ can be fully observed, model (1.1) reduces to the usual semi-parametric model, which has been studied by many scholars in the literature: see Wang et al. [9], Wang and Sun [10], Bianco et al. [11].

To deal with missing data, one method is to impute a plausible value for each missing datum and then analyze the result as if it were complete. In regression problems, common imputation approaches include linear regression imputation (Healy and Westmacott [12]), nonparametric kernel regression imputation (Cheng [13]) and semi-parametric regression imputation (Wang et al. [9], Wang and Sun [10]), among others. We extend these methods to the estimation of $\beta$ and $g(\cdot)$ in the semi-parametric EV model (1.1). We obtain two approaches to estimating $\beta$ and $g(\cdot)$ with missing responses and study the strong consistency of the resulting estimators.

In this paper, we suppose that we observe a random sample of incomplete data $\{(y_i, \delta_i, x_i, t_i)\}$ from model (1.1), where $\delta_i = 0$ if $y_i$ is missing and $\delta_i = 1$ otherwise. Throughout, we assume that $y_i$ is missing at random. This assumption implies that $\delta_i$ and $y_i$ are independent, that is, $P(\delta_i = 1 \mid y_i) = P(\delta_i = 1)$. It is a common assumption in statistical analysis with missing data and is reasonable in many practical situations.
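
Continuing the simulation sketch above, a missing-at-random mechanism of the kind assumed here can be generated by drawing the indicators $\delta_i$ independently of the responses; the response probability 0.75 is an arbitrary illustrative value.

```python
# delta_i is drawn independently of y_i, so P(delta_i = 1 | y_i) = P(delta_i = 1),
# matching the missing-at-random assumption used in this paper.
response_prob = 0.75                        # illustrative observation probability
delta = rng.binomial(1, response_prob, n)
y_obs = np.where(delta == 1, y, np.nan)     # NaN marks a missing response y_i
```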

The paper is organized as follows. In Section 2, we list some assumptions. The main results are given in Section 3. Some preliminary lemmas are stated in Section 4, and proofs of the main results are provided in Section 5.

2. Assumptions

In this section, we list the assumptions used in the main results. Here $a_n = O(b_n)$ means $|a_n| \le C|b_n|$ for every $n \ge 1$, $a_n = o(b_n)$ means $a_n/b_n \to 0$ as $n \to \infty$, and a.s. stands for almost surely.

(A0) Let $\{\epsilon_i, 1\le i\le n\}$, $\{\mu_i, 1\le i\le n\}$ and $\{\delta_i, 1\le i\le n\}$ be sequences of independent random variables satisfying

i) $E\epsilon_i = 0$, $E\mu_i = 0$, $E\epsilon_i^2 = 1$, and $E\mu_i^2 = \Xi_\mu^2 > 0$ is known.

ii) $\sup_i E|\epsilon_i|^p < \infty$ and $\sup_i E|\mu_i|^p < \infty$ for some $p > 4$.

iii) $\{\epsilon_i, 1\le i\le n\}$, $\{\mu_i, 1\le i\le n\}$ and $\{\delta_i, 1\le i\le n\}$ are independent of each other.

(A1) Let $\{v_i, 1\le i\le n\}$ in (1.2) be a sequence satisfying

i) $\lim_{n\to\infty} n^{-1}\sum_{i=1}^n v_i^2 = \Sigma_0$ and $\lim_{n\to\infty} n^{-1}\sum_{i=1}^n \delta_i v_i^2 = \Sigma_1$ a.s. $(0 < \Sigma_0, \Sigma_1 < \infty)$.

ii) $\limsup_{n\to\infty} (n\log n)^{-1}\max_{1\le m\le n}\big|\sum_{i=1}^m v_{j_i}\big| < \infty$, where $\{j_1, j_2, \dots, j_n\}$ is a permutation of $(1, 2, \dots, n)$.

iii) $\max_{1\le i\le n}|v_i| = O(1)$.

(A2) $g(\cdot)$ and $h(\cdot)$ are continuous functions satisfying a first-order Lipschitz condition on the closed interval $[0,1]$.

(A3) Let $W_{nj}^c(t_i)$ $(1\le i, j\le n)$ be weight functions defined on $[0,1]$ that satisfy

i) $\max_{1\le j\le n}\sum_{i=1}^n \delta_j W_{nj}^c(t_i) = O(1)$ a.s.

ii) $\max_{1\le i\le n}\sum_{j=1}^n \delta_j W_{nj}^c(t_i)\, I(|t_i - t_j| > a n^{-1/4}) = o(n^{-1/4})$ a.s. for any $a > 0$.

iii) $\max_{1\le i, j\le n} W_{nj}^c(t_i) = o(n^{-1/2}\log^{-1} n)$ a.s.

(A4) The probability weight functions $W_{nj}(t_i)$ $(1\le i, j\le n)$ are defined on $[0,1]$ and satisfy

i) $\max_{1\le j\le n}\sum_{i=1}^n W_{nj}(t_i) = O(1)$.

ii) $\max_{1\le i\le n}\sum_{j=1}^n W_{nj}(t_i)\, I(|t_i - t_j| > a n^{-1/4}) = o(n^{-1/4})$ for any $a > 0$.

iii) $\max_{1\le i, j\le n} W_{nj}(t_i) = o(n^{-1/2}\log^{-1} n)$.

Remark 2.1. Conditions (A0)-(A4) are standard regularity conditions and are commonly used in the literature; see Härdle et al. [14], Gao et al. [15] and Chen [16].
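
For concreteness, one common choice of weights (used here only as an illustration, not prescribed by the paper) is of Nadaraya-Watson type,

$W_{nj}(t) = K\Big(\frac{t - t_j}{h_n}\Big)\Big/\sum_{k=1}^n K\Big(\frac{t - t_k}{h_n}\Big), \qquad W_{nj}^c(t) = K\Big(\frac{t - t_j}{h_n}\Big)\Big/\sum_{k=1}^n \delta_k K\Big(\frac{t - t_k}{h_n}\Big),$

with a bounded kernel $K$ and a bandwidth $h_n$; whether (A3)-(A4) actually hold then depends on the kernel, the bandwidth rate and the design points, and has to be checked case by case.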

3. Main Results

For model (1.1), we seek estimators of $\beta$ and $g(\cdot)$. The most natural idea is to delete all the missing data, which leads to the model $\delta_i y_i = \delta_i\xi_i\beta + \delta_i g(t_i) + \delta_i\epsilon_i$. If the $\xi_i$ could be observed, we could apply the least squares method to estimate the parameter $\beta$. If the parameter $\beta$ were known, then, using the complete-case data $\{(\delta_i y_i, \delta_i x_i, \delta_i t_i), 1\le i\le n\}$, we could define the estimator of $g(\cdot)$ to be

$g_n^*(t, \beta) = \sum_{j=1}^n W_{nj}^c(t)\left(\delta_j y_j - \delta_j x_j\beta\right),$

where the $W_{nj}^c(t)$ are weight functions satisfying (A3). On the other hand, for the semi-parametric EV model, Liang et al. [7] modified the least squares estimator (LSE) used for the usual partially linear model, defining the estimator of the parameter $\beta$ as the minimizer of

$SS(\beta) = \sum_{i=1}^n \delta_i\left\{\left[y_i - x_i\beta - g_n^*(t_i,\beta)\right]^2 - \Xi_\mu^2\beta^2\right\} = \min!$

Therefore, we obtain the modified LSE of $\beta$ as follows:

$\hat\beta_c = \left[\sum_{i=1}^n \left(\delta_i (\tilde x_i^c)^2 - \delta_i\Xi_\mu^2\right)\right]^{-1}\sum_{i=1}^n \delta_i \tilde x_i^c \tilde y_i^c, \qquad (3.1)$

where $\tilde x_i^c = x_i - \sum_{j=1}^n \delta_j W_{nj}^c(t_i) x_j$ and $\tilde y_i^c = y_i - \sum_{j=1}^n \delta_j W_{nj}^c(t_i) y_j$. Substituting (3.1) into $g_n^*(t,\beta)$ gives

$\hat g_n^c(t) = \sum_{j=1}^n \delta_j W_{nj}^c(t)\left(y_j - x_j\hat\beta_c\right). \qquad (3.2)$
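
As a numerical illustration (continuing the simulation sketch from the Introduction, and again only a hedged sketch rather than the paper's prescription), the complete-case estimators (3.1)-(3.2) can be computed with kernel-type weights as follows; the Gaussian kernel, the bandwidth and the variable names are assumptions made for the example.

```python
def kernel_weights(t_grid, t_eval, bandwidth, delta=None):
    """Nadaraya-Watson style weights; an illustrative choice of W_nj / W^c_nj."""
    K = np.exp(-0.5 * ((t_eval[:, None] - t_grid[None, :]) / bandwidth) ** 2)
    if delta is not None:
        K = K * delta[None, :]              # zero weight on cases with missing y_j
    return K / K.sum(axis=1, keepdims=True)

Xi_mu2 = 0.3 ** 2                           # known measurement-error variance (assumed above)
h_n = 0.1                                   # illustrative bandwidth

y_fill = np.where(delta == 1, y_obs, 0.0)   # placeholder for missing y_j (never weighted)
Wc = kernel_weights(t, t, h_n, delta=delta) # delta_j * W^c_nj(t_i), an n x n matrix

x_tilde_c = x - Wc @ x                      # x~_i^c = x_i - sum_j delta_j W^c_nj(t_i) x_j
y_tilde_c = y_fill - Wc @ y_fill            # y~_i^c, relevant only where delta_i = 1

# Modified least squares estimator (3.1).
beta_c = np.sum(delta * x_tilde_c * y_tilde_c) / np.sum(delta * (x_tilde_c ** 2 - Xi_mu2))

# Plug-in estimator (3.2) of g at arbitrary points.
def g_hat_c(t_new):
    W_new = kernel_weights(t, np.atleast_1d(t_new), h_n, delta=delta)
    return W_new @ (y_fill - x * beta_c)
```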

Clearly, the estimators $\hat\beta_c$ and $\hat g_n^c(t)$ do not make use of all the sample information. Hence, in order to make up for the missing data, we apply an imputation method from Wang and Sun [10] and let

$U_i^{[I]} = \delta_i y_i + (1-\delta_i)\left[x_i\hat\beta_c + \hat g_n^c(t_i)\right]. \qquad (3.3)$

Therefore, using the completed data $\{(U_i^{[I]}, x_i, t_i), 1\le i\le n\}$ and proceeding as in (3.1)-(3.2), one can obtain another pair of estimators of $\beta$ and $g(\cdot)$, namely

$\hat\beta_I = \left[\sum_{i=1}^n \left(\tilde x_i^2 - \delta_i\Xi_\mu^2\right)\right]^{-1}\sum_{i=1}^n \tilde x_i \tilde U_i^{[I]}, \qquad (3.4)$

$\hat g_n^{[I]}(t) = \sum_{j=1}^n W_{nj}(t)\left(U_j^{[I]} - x_j\hat\beta_I\right), \qquad (3.5)$

where $\tilde U_i^{[I]} = U_i^{[I]} - \sum_{j=1}^n W_{nj}(t_i) U_j^{[I]}$, $\tilde x_i = x_i - \sum_{j=1}^n W_{nj}(t_i) x_j$, and the $W_{nj}(t)$ are weight functions satisfying (A4).
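
Continuing the same illustrative sketch, the imputation-based estimators (3.3)-(3.5) then follow by filling in the missing responses with $x_i\hat\beta_c + \hat g_n^c(t_i)$ and re-running the centering with the full-sample weights; as before, the kernel weights and names are assumptions of the example, not the paper's.

```python
# Impute missing responses via (3.3).
U_I = np.where(delta == 1, y_fill, x * beta_c + g_hat_c(t))   # U_i^[I]

W = kernel_weights(t, t, h_n)               # W_nj(t_i) built from all design points
x_tilde = x - W @ x                         # x~_i
U_tilde = U_I - W @ U_I                     # U~_i^[I]

# Imputation-based estimators (3.4) and (3.5).
beta_I = np.sum(x_tilde * U_tilde) / np.sum(x_tilde ** 2 - delta * Xi_mu2)

def g_hat_I(t_new):
    W_new = kernel_weights(t, np.atleast_1d(t_new), h_n)
    return W_new @ (U_I - x * beta_I)
```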

Based on these estimators of $\beta$ and $g(\cdot)$, we have the following results.

Theorem 3.1 Suppose that (A0)-(A3) are satisfied. Then, for every $t \in [0,1]$, we have

a) $\hat\beta_c \to \beta$ a.s.;

b) $\hat g_n^c(t) \to g(t)$ a.s.

Theorem 3.2 Suppose that (A0)-(A4) are satisfied. Then, for every $t \in [0,1]$, we have

a) $\hat\beta_I \to \beta$ a.s.;

b) $\hat g_n^{[I]}(t) \to g(t)$ a.s.
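
As a rough sanity check of these statements (a single illustrative replication, continuing the sketch above, and not part of the paper's argument), one can compare the estimates with the simulated truth; for large $n$ both $\hat\beta_c$, $\hat\beta_I$ and the fitted curves should be close to $\beta$ and $g$.

```python
# Compare the two estimators with the simulated truth at a single point.
print("beta_hat_c =", round(float(beta_c), 3), " beta_hat_I =", round(float(beta_I), 3),
      " true beta =", beta)
print("g_hat_c(0.5) =", round(float(g_hat_c(0.5)[0]), 3),
      " g_hat_I(0.5) =", round(float(g_hat_I(0.5)[0]), 3),
      " true g(0.5) =", round(float(g(0.5)), 3))
```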

4. Preliminary Lemmas

In the sequel, let $C, C_1, \dots$ denote finite positive constants whose values are unimportant and may change from line to line. We now introduce several lemmas that will be used in the proofs of the main results.

Lemma 4.1 (Baek and Liang [17], Lemma 3.1) Let $\alpha > 2$ and let $e_1, \dots, e_n$ be independent random variables with $Ee_i = 0$. Assume that $\{a_{ni}, 1\le i\le n\}$ is a triangular array of numbers with $\max_{1\le i\le n}|a_{ni}| = O(n^{-1/2})$ and $\sum_{i=1}^n a_{ni}^2 = o(n^{-2/\alpha}\log^{-1} n)$. If $\sup_i E|e_i|^p < \infty$ for some $p > 2\alpha/(\alpha - 1)$, then

$\sum_{i=1}^n a_{ni} e_i = o(n^{-1/\alpha}) \quad a.s.$

Lemma 4.2 (Härdle et al. [14], Lemma A.3) Let $V_1, \dots, V_n$ be independent random variables with $EV_i = 0$, finite variances and $\sup_{1\le j\le n} E|V_j|^r \le C < \infty$ $(r > 2)$. Assume that $\{a_{ki}, k, i = 1, \dots, n\}$ is a sequence of numbers such that $\sup_{1\le i,k\le n}|a_{ki}| = O(n^{-p_1})$ for some $0 < p_1 < 1$ and $\sum_{j=1}^n a_{ji} = O(n^{p_2})$ for $p_2 \ge \max(0, 2/r - p_1)$. Then

$\max_{1\le i\le n}\Big|\sum_{k=1}^n a_{ki} V_k\Big| = O(n^{-s}\log n) \quad a.s., \quad \text{where } s = (p_1 - p_2)/2.$

Lemma 4.3

a) Let $\tilde A_i = A(t_i) - \sum_{j=1}^n W_{nj}(t_i)A(t_j)$ and $\tilde A_i^c = A(t_i) - \sum_{j=1}^n \delta_j W_{nj}^c(t_i)A(t_j)$, where $A(\cdot) = g(\cdot)$ or $h(\cdot)$. Then (A0)-(A4) imply that $\max_{1\le i\le n}|\tilde A_i| = o(n^{-1/4})$ and $\max_{1\le i\le n}|\tilde A_i^c| = o(n^{-1/4})$ a.s.

b) (A0)-(A4) imply that $n^{-1}\sum_{i=1}^n \tilde\xi_i^2 \to \Sigma_0$, $\sum_{i=1}^n|\tilde\xi_i| \le C_1 n$, $n^{-1}\sum_{i=1}^n \delta_i(\tilde\xi_i^c)^2 \to \Sigma_1$ a.s. and $\sum_{i=1}^n|\delta_i\tilde\xi_i^c| \le C_2 n$ a.s.

c) (A0)-(A4) imply that $\max_{1\le i\le n}|\tilde\xi_i| = O(1)$ and $\max_{1\le i\le n}|\tilde\xi_i^c| = O(1)$ a.s.

Lemma 4.4 Suppose that (A0)-(A4) are satisfied. Then one can deduce that

$\max_{1\le i\le n}\left|\hat g_n^c(t_i) - g(t_i)\right| = o(n^{-1/4}) \quad a.s.$

Lemma 4.3 follows easily from (A0)-(A4). The proof of Lemma 4.4 is analogous to that of Theorem 3.1(b).

5. Proof of Main Results

First, we introduce some notation that will be used in the proofs below.

$\tilde\xi_i^c = \xi_i - \sum_{j=1}^n \delta_j W_{nj}^c(t_i)\xi_j, \quad \tilde\mu_i^c = \mu_i - \sum_{j=1}^n \delta_j W_{nj}^c(t_i)\mu_j,$

$\tilde g_i^c = g(t_i) - \sum_{j=1}^n \delta_j W_{nj}^c(t_i)g(t_j), \quad \tilde\epsilon_i^c = \epsilon_i - \sum_{j=1}^n \delta_j W_{nj}^c(t_i)\epsilon_j,$

$\tilde\xi_i = \xi_i - \sum_{j=1}^n W_{nj}(t_i)\xi_j, \quad \tilde\mu_i = \mu_i - \sum_{j=1}^n W_{nj}(t_i)\mu_j,$

$\tilde g_i = g(t_i) - \sum_{j=1}^n W_{nj}(t_i)g(t_j), \quad \tilde\epsilon_i = \epsilon_i - \sum_{j=1}^n W_{nj}(t_i)\epsilon_j, \quad \eta_i = \epsilon_i - \mu_i\beta,$

$B_{1n}^2 = \sum_{i=1}^n \delta_i\tilde\xi_i^2, \quad S_n^2 = \sum_{i=1}^n \tilde\xi_i^2, \quad S_{1n}^2 = \sum_{i=1}^n \left(\delta_i(\tilde x_i^c)^2 - \delta_i\Xi_\mu^2\right), \quad S_{2n}^2 = \sum_{i=1}^n \left(\tilde x_i^2 - \delta_i\Xi_\mu^2\right).$

Proof of Theorem 3.1(a). From (3.1), one can write that

$\hat\beta_c - \beta = S_{1n}^{-2}\left[\sum_{i=1}^n \delta_i(\tilde\xi_i^c + \tilde\mu_i^c)\left(\tilde y_i^c - \tilde\xi_i^c\beta - \tilde\mu_i^c\beta\right) + \sum_{i=1}^n \delta_i\Xi_\mu^2\beta\right] = S_{1n}^{-2}\left\{\sum_{i=1}^n\left[\delta_i(\tilde\xi_i^c + \tilde\mu_i^c)(\tilde\epsilon_i^c - \tilde\mu_i^c\beta) + \delta_i\Xi_\mu^2\beta\right] + \sum_{i=1}^n\delta_i\tilde\xi_i^c\tilde g_i^c + \sum_{i=1}^n\delta_i\tilde\mu_i^c\tilde g_i^c\right\}$

$= S_{1n}^{-2}\Big\{\sum_{i=1}^n\delta_i\tilde\xi_i^c(\epsilon_i - \mu_i\beta) + \sum_{i=1}^n\delta_i\mu_i\epsilon_i - \sum_{i=1}^n\delta_i(\mu_i^2 - \Xi_\mu^2)\beta + \sum_{i=1}^n\delta_i\tilde\xi_i^c\tilde g_i^c + \sum_{i=1}^n\delta_i\tilde\mu_i^c\tilde g_i^c - \sum_{i=1}^n\sum_{j=1}^n\delta_i\delta_j W_{nj}^c(t_i)\tilde\xi_i^c\epsilon_j - \sum_{i=1}^n\sum_{j=1}^n\delta_i\delta_j W_{nj}^c(t_i)\epsilon_i\mu_j$

$- \sum_{i=1}^n\sum_{j=1}^n\delta_i\delta_j W_{nj}^c(t_i)\mu_i\epsilon_j + \sum_{i=1}^n\sum_{j=1}^n\delta_i\delta_j W_{nj}^c(t_i)\tilde\xi_i^c\mu_j\beta + 2\sum_{i=1}^n\sum_{j=1}^n\delta_i\delta_j W_{nj}^c(t_i)\mu_i\mu_j\beta + \sum_{i=1}^n\sum_{j=1}^n\sum_{k=1}^n\delta_i\delta_j\delta_k W_{nj}^c(t_i)W_{nk}^c(t_i)\mu_j\epsilon_k - \sum_{i=1}^n\sum_{j=1}^n\sum_{k=1}^n\delta_i\delta_j\delta_k W_{nj}^c(t_i)W_{nk}^c(t_i)\mu_j\mu_k\beta\Big\} := S_{1n}^{-2}\sum_{k=1}^{12}A_{kn}. \qquad (5.1)$

Thus, to prove $\hat\beta_c \to \beta$ a.s., we only need to verify that $S_{1n}^{-2} \le Cn^{-1}$ a.s. and that $n^{-1}A_{kn} = o(1)$ a.s. for $k = 1, 2, \dots, 12$.

Step 1. We first prove $S_{1n}^{-2} \le Cn^{-1}$ a.s. Note that

$S_{1n}^2 = \sum_{i=1}^n\left[\delta_i(\tilde\xi_i^c + \tilde\mu_i^c)^2 - \delta_i\Xi_\mu^2\right] = \sum_{i=1}^n\delta_i(\tilde\xi_i^c)^2 + \sum_{i=1}^n\delta_i(\mu_i^2 - \Xi_\mu^2) + \sum_{i=1}^n\delta_i\Big[\sum_{j=1}^n\delta_jW_{nj}^c(t_i)\mu_j\Big]^2 + 2\sum_{i=1}^n\delta_i\tilde\xi_i^c\mu_i - 2\sum_{i=1}^n\delta_i\tilde\xi_i^c\sum_{j=1}^n\delta_jW_{nj}^c(t_i)\mu_j - 2\sum_{i=1}^n\delta_i\mu_i\sum_{j=1}^n\delta_jW_{nj}^c(t_i)\mu_j := B_{1n} + B_{2n} + B_{3n} + B_{4n} + B_{5n} + B_{6n}.$

By Lemma 4.3(b), we have $n^{-1}B_{1n} \to \Sigma_1$ a.s. Hence, it suffices to verify that $B_{kn} = o(B_{1n}) = o(n)$ a.s. for $k = 2, 3, \dots, 6$. Applying (A0) and taking $r = p/2 > 2$, $p_1 = 1/2$, $p_2 = 1/2$ in Lemma 4.2 (so that $s = (p_1 - p_2)/2 = 0$), we can verify that

$\sum_{i=1}^n(\zeta_i - E\zeta_i) = n^{1/2}\sum_{i=1}^n n^{-1/2}(\zeta_i - E\zeta_i) = O(n^{1/2}\log n) \quad a.s., \qquad (5.2)$

where $\{\zeta_i\}$ is any sequence of independent random variables satisfying $E\zeta_i = 0$ and $\sup_{1\le i\le n} E|\zeta_i|^{p/2} < \infty$. Therefore, we obtain $B_{2n} = O(n^{1/2}\log n) = o(n)$ a.s. from (A0) and (5.2). On the other hand, taking $\alpha = 4$, $p > 4$ in Lemma 4.1, we have

$\max_{1\le i\le n}\Big|\sum_{j=1}^n\delta_jW_{nj}^c(t_i)\zeta_j\Big| = o(n^{-1/4}) \quad a.s., \qquad \max_{1\le i\le n}\Big|\sum_{j=1}^nW_{nj}(t_i)\zeta_j\Big| = o(n^{-1/4}) \quad a.s., \qquad (5.3)$

where $\{\zeta_i\}$ is any sequence of independent random variables satisfying $E\zeta_i = 0$ and $\sup_{1\le i\le n} E|\zeta_i|^p < \infty$. By (A0) and Lemma 4.3, taking $r = 4$, $p_1 = 1/4$, $p_2 = 3/4$ in Lemma 4.2, one can also deduce that

$|B_{4n}| = 2n^{1/4}\Big|\sum_{i=1}^n n^{-1/4}\delta_i\tilde\xi_i^c\mu_i\Big| = O(n^{1/2}\log n) = o(n) \quad a.s. \qquad (5.4)$

Moreover, from Lemma 4.3, (5.2) and (5.3), we have

$|B_{3n}| \le \sum_{i=1}^n|\delta_i|\,\max_{1\le i\le n}\Big|\sum_{j=1}^n\delta_jW_{nj}^c(t_i)\mu_j\Big|^2 = o(n^{1/2}) \quad a.s., \qquad (5.5)$

$|B_{5n}| \le 2\sum_{i=1}^n|\delta_i\tilde\xi_i^c|\,\max_{1\le i\le n}\Big|\sum_{j=1}^n\delta_jW_{nj}^c(t_i)\mu_j\Big| = o(n^{3/4}) \quad a.s., \qquad (5.6)$

$|B_{6n}| \le 2\Big[\sum_{i=1}^n\big(|\delta_i\mu_i| - E|\delta_i\mu_i|\big) + \sum_{i=1}^n E|\delta_i\mu_i|\Big]\max_{1\le i\le n}\Big|\sum_{j=1}^n\delta_jW_{nj}^c(t_i)\mu_j\Big| = o(n^{3/4}) \quad a.s. \qquad (5.7)$

Therefore, from (5.2)-(5.7), one can deduce that $S_{1n}^2 = B_{1n} + o(n) = B_{1n} + o(B_{1n})$ a.s., which yields

$\lim_{n\to\infty}\frac{B_{1n}}{S_{1n}^2} = \lim_{n\to\infty}\frac{B_{1n}}{B_{1n} + o(B_{1n})} = 1 \quad a.s.$

Therefore, by Lemma 4.3(b), we get that $S_{1n}^{-2} \le Cn^{-1}$ a.s.

Step 2. We verify that $n^{-1}A_{kn} = o(n^{-1/4})$ a.s. for $k = 1, 2, \dots, 12$. From (A0), $\{\eta_i = \epsilon_i - \mu_i\beta, 1\le i\le n\}$ is a sequence of independent random variables with $E\eta_i = 0$ and $\sup_i E|\eta_i|^p \le C\sup_i E|\epsilon_i|^p + C\sup_i E|\mu_i|^p < \infty$ for some $p > 4$. Similar to (5.4), we deduce that

$n^{-1}A_{1n} = n^{-1}\sum_{i=1}^n\delta_i\tilde\xi_i^c\eta_i = O(n^{-1/2}\log n) \quad a.s.$

Meanwhile, from (A0)-(A3), Lemma 4.3, (5.2) and (5.3), one can obtain

$S_{1n}^{-2}A_{2n} \le \frac{C}{n}\Big|\sum_{i=1}^n\delta_i\mu_i\epsilon_i\Big| = o(1) \quad a.s.,$

$S_{1n}^{-2}A_{3n} \le \frac{C}{n}\Big|\sum_{i=1}^n\delta_i(\mu_i^2 - \Xi_\mu^2)\beta\Big| = o(1) \quad a.s.,$

$S_{1n}^{-2}A_{4n} \le \frac{C}{n}\Big|\sum_{i=1}^n\delta_i\tilde\xi_i^c\tilde g_i^c\Big| \le \frac{C}{n}\sum_{i=1}^n|\delta_i\tilde\xi_i^c|\max_{1\le i\le n}|\tilde g_i^c| = o(1) \quad a.s.,$

$S_{1n}^{-2}A_{5n} \le \frac{C}{n}\Big|\sum_{i=1}^n\delta_i\tilde\mu_i^c\tilde g_i^c\Big| \le \frac{C}{n}\Big[\sum_{i=1}^n|\mu_i||\delta_i\tilde g_i^c| + \sum_{i=1}^n|\delta_i\tilde g_i^c|\Big|\sum_{j=1}^n\delta_jW_{nj}^c(t_i)\mu_j\Big|\Big] \le \frac{C}{n}\Big[\sum_{i=1}^n\big(|\mu_i| - E|\mu_i|\big) + \sum_{i=1}^n E|\mu_i|\Big]\max_{1\le i\le n}|\delta_i\tilde g_i^c| + o(n^{-1/2}) = o(1) \quad a.s.$

The proof of $n^{-1}A_{kn} = o(1)$ a.s. for $k = 6, \dots, 12$ is analogous. Thus, the proof of Theorem 3.1(a) is completed.

Proof of Theorem 3.1(b). From (3.2), for every $t \in [0,1]$, one can write

$\hat g_n^c(t) - g(t) = \sum_{j=1}^nW_{nj}^c(t)\delta_j\left(y_j - x_j\hat\beta_c\right) - g(t) = \sum_{j=1}^nW_{nj}^c(t)\delta_j\left[\xi_j\beta + g(t_j) + \epsilon_j - (\xi_j + \mu_j)\hat\beta_c\right] - g(t) = \sum_{j=1}^nW_{nj}^c(t)\delta_j\xi_j(\beta - \hat\beta_c) + \sum_{j=1}^nW_{nj}^c(t)\delta_j\left[g(t_j) - g(t)\right] + \sum_{j=1}^nW_{nj}^c(t)\delta_j\epsilon_j - \sum_{j=1}^nW_{nj}^c(t)\delta_j\mu_j\beta - \sum_{j=1}^nW_{nj}^c(t)\delta_j\mu_j(\hat\beta_c - \beta) := F_{1n}(t) + F_{2n}(t) + F_{3n}(t) + F_{4n}(t) + F_{5n}(t).$

Therefore, we only need to prove that $F_{kn}(t) \to 0$ a.s. for every $t \in [0,1]$ and $k = 1, 2, \dots, 5$. From (A0)-(A3), Theorem 3.1(a), Lemma 4.3, (5.2) and (5.3), for every $t \in [0,1]$ and any $a > 0$, one can get

$|F_{1n}(t)| \le |\beta - \hat\beta_c|\max_{1\le j\le n}|h(t_j) + v_j|\sum_{j=1}^n\delta_jW_{nj}^c(t) = o(n^{-1/4}) \quad a.s.,$

$|F_{2n}(t)| \le \sum_{j=1}^n\delta_jW_{nj}^c(t)\big|g(t_j) - g(t)\big|I\big(|t_j - t| > an^{-1/4}\big) + \sum_{j=1}^n\delta_jW_{nj}^c(t)\big|g(t_j) - g(t)\big|I\big(|t_j - t| \le an^{-1/4}\big) \le o(n^{-1/4}) + Can^{-1/4} = o(1) \quad a.s.,$

$|F_{3n}(t)| \le \Big|\sum_{j=1}^nW_{nj}^c(t)\delta_j\epsilon_j\Big| = o(1) \quad a.s.,$

$|F_{4n}(t)| \le \Big|\sum_{j=1}^nW_{nj}^c(t)\delta_j\mu_j\beta\Big| = o(1) \quad a.s.,$

$|F_{5n}(t)| \le |\beta - \hat\beta_c|\Big|\sum_{j=1}^nW_{nj}^c(t)\delta_j\mu_j\Big| = o(1) \quad a.s.$

Thus, the proof of Theorem 3.1(b) is completed.

Proof of Theorem 3.2(a). From (3.3)-(3.4), one can write

$\hat\beta_I - \beta = S_{2n}^{-2}\bigg\{\sum_{i=1}^n(\tilde\xi_i + \tilde\mu_i)\bigg[\delta_i(y_i - \xi_i\beta - \mu_i\beta) + (1-\delta_i)(\xi_i + \mu_i)(\hat\beta_c - \beta) + (1-\delta_i)\hat g_n^c(t_i) - \sum_{j=1}^nW_{nj}(t_i)\Big(\delta_j(y_j - \xi_j\beta - \mu_j\beta) + (1-\delta_j)(\xi_j + \mu_j)(\hat\beta_c - \beta) + (1-\delta_j)\hat g_n^c(t_j)\Big)\bigg] + \sum_{i=1}^n\delta_i\Xi_\mu^2\beta\bigg\}$

$= S_{2n}^{-2}\bigg\{\sum_{i=1}^n(\tilde\xi_i + \tilde\mu_i)\bigg[\delta_i(\epsilon_i - \mu_i\beta) + \delta_i\big(g(t_i) - \hat g_n^c(t_i)\big) + (1-\delta_i)(\xi_i + \mu_i)(\hat\beta_c - \beta) + \hat g_n^c(t_i) - \sum_{j=1}^nW_{nj}(t_i)\Big(\delta_j(\epsilon_j - \mu_j\beta) + \delta_j\big(g(t_j) - \hat g_n^c(t_j)\big) + (1-\delta_j)(\xi_j + \mu_j)(\hat\beta_c - \beta) + \hat g_n^c(t_j)\Big)\bigg] + \sum_{i=1}^n\delta_i\Xi_\mu^2\beta\bigg\}$

$= S_{2n}^{-2}\bigg\{\sum_{i=1}^n\delta_i\tilde\xi_i(\epsilon_i - \mu_i\beta) + \sum_{i=1}^n\delta_i\tilde\xi_i\big(g(t_i) - \hat g_n^c(t_i)\big) + \sum_{i=1}^n\tilde\xi_i(\xi_i + \mu_i)(1-\delta_i)(\hat\beta_c - \beta) - \sum_{i=1}^n\sum_{j=1}^n\delta_jW_{nj}(t_i)\tilde\xi_i\epsilon_j + \sum_{i=1}^n\sum_{j=1}^n\delta_jW_{nj}(t_i)\tilde\xi_i\mu_j\beta - \sum_{i=1}^n\sum_{j=1}^n\delta_jW_{nj}(t_i)\tilde\xi_i\big(g(t_j) - \hat g_n^c(t_j)\big) - \sum_{i=1}^n\sum_{j=1}^nW_{nj}(t_i)\tilde\xi_i(1-\delta_j)(\xi_j + \mu_j)(\hat\beta_c - \beta) + \sum_{i=1}^n\tilde\xi_i\tilde g_i^c + \sum_{i=1}^n\delta_i\mu_i\epsilon_i$

$- \sum_{i=1}^n\sum_{j=1}^n\delta_iW_{nj}(t_i)\epsilon_i\mu_j - \sum_{i=1}^n\delta_i(\mu_i^2 - \Xi_\mu^2)\beta + \sum_{i=1}^n\sum_{j=1}^n\delta_iW_{nj}(t_i)\mu_i\mu_j\beta + \sum_{i=1}^n\delta_i\tilde\mu_i\big(g(t_i) - \hat g_n^c(t_i)\big) + \sum_{i=1}^n\tilde\mu_i(1-\delta_i)(\xi_i + \mu_i)(\hat\beta_c - \beta)$

$- \sum_{i=1}^n\sum_{j=1}^n\delta_jW_{nj}(t_i)\mu_i\epsilon_j + \sum_{i=1}^n\sum_{j=1}^n\sum_{k=1}^n\delta_jW_{nj}(t_i)W_{nk}(t_i)\mu_k\epsilon_j + \sum_{i=1}^n\sum_{j=1}^n\delta_jW_{nj}(t_i)\mu_i\mu_j\beta - \sum_{i=1}^n\sum_{j=1}^n\sum_{k=1}^n\delta_jW_{nj}(t_i)W_{nk}(t_i)\mu_j\mu_k\beta - \sum_{i=1}^n\sum_{j=1}^n\delta_jW_{nj}(t_i)\tilde\mu_i\big(g(t_j) - \hat g_n^c(t_j)\big) - \sum_{i=1}^n\sum_{j=1}^nW_{nj}(t_i)(1-\delta_j)\tilde\mu_i(\xi_j + \mu_j)(\hat\beta_c - \beta) + \sum_{i=1}^n\tilde\mu_i\tilde g_i^c\bigg\} := S_{2n}^{-2}\sum_{k=1}^{21}D_{kn}.$

Using an approach similar to Step 1 in the proof of Theorem 3.1(a), one can get $S_{2n}^{-2} \le Cn^{-1}$ a.s.

Therefore, we only need to verify that $n^{-1}D_{kn} = o(1)$ a.s. for $k = 1, 2, \dots, 21$. From (A0)-(A4), Lemmas 4.2-4.4, Theorem 3.1(a) and (5.2)-(5.4), we have

$n^{-1}D_{1n} = n^{-1}\sum_{i=1}^n\delta_i\tilde\xi_i(\epsilon_i - \mu_i\beta) = n^{-1}O(n^{1/2}\log n) = o(1) \quad a.s.,$

$n^{-1}D_{2n} \le n^{-1}\sum_{i=1}^n|\delta_i\tilde\xi_i|\max_{1\le i\le n}\big|g(t_i) - \hat g_n^c(t_i)\big| = o(1) \quad a.s.,$

$n^{-1}D_{3n} = n^{-1}\sum_{i=1}^n\tilde\xi_i^2(1-\delta_i)(\hat\beta_c - \beta) + n^{-1}\sum_{i=1}^n\tilde\xi_i(1-\delta_i)\sum_{j=1}^nW_{nj}(t_i)\xi_j(\hat\beta_c - \beta) + n^{-1}\sum_{i=1}^n\tilde\xi_i\mu_i(1-\delta_i)(\hat\beta_c - \beta) = o(1) \quad a.s.,$

$n^{-1}D_{4n} \le n^{-1}\sum_{i=1}^n|\tilde\xi_i|\max_{1\le i\le n}\Big|\sum_{j=1}^n\delta_jW_{nj}(t_i)\epsilon_j\Big| = o(1) \quad a.s.$

In the same way, from (A0)-(A4), Lemmas 4.2-4.4, (5.2) and (5.3), one can deduce that $n^{-1}D_{kn} = o(n^{-1/4})$ a.s. for $k = 5, 6, \dots, 21$. Thus, the proof of Theorem 3.2(a) is completed.

Proof of Theorem 3.2(b). From (3.3) and (3.5), one can write

$\hat g_n^{[I]}(t) - g(t) = \sum_{j=1}^nW_{nj}(t)\Big\{\delta_jy_j + (1-\delta_j)\big[(\xi_j + \mu_j)\hat\beta_c + \hat g_n^c(t_j)\big] - (\xi_j + \mu_j)\hat\beta_I\Big\} - g(t) = \sum_{j=1}^nW_{nj}(t)\Big\{\delta_j\big[\xi_j\beta + g(t_j) + \epsilon_j\big] + (1-\delta_j)\big[(\xi_j + \mu_j)\hat\beta_c + \hat g_n^c(t_j)\big] - (\xi_j + \mu_j)\hat\beta_I\Big\} - g(t)$

$= \sum_{j=1}^nW_{nj}(t)\Big\{\delta_j\xi_j\beta + \delta_jg(t_j) + \delta_j\epsilon_j + \xi_j\hat\beta_c + \mu_j\hat\beta_c - \delta_j\xi_j\hat\beta_c - \delta_j\mu_j\hat\beta_c + \hat g_n^c(t_j) - \delta_j\hat g_n^c(t_j) - \xi_j\hat\beta_I - \mu_j\hat\beta_I\Big\} - g(t)$

$= \sum_{j=1}^nW_{nj}(t)\delta_j\xi_j(\beta - \hat\beta_c) + \sum_{j=1}^nW_{nj}(t)\delta_j\big[g(t_j) - \hat g_n^c(t_j)\big] + \sum_{j=1}^nW_{nj}(t)\delta_j\epsilon_j + \sum_{j=1}^nW_{nj}(t)\xi_j(\hat\beta_c - \beta) + \sum_{j=1}^nW_{nj}(t)\mu_j(\hat\beta_c - \beta) - \sum_{j=1}^nW_{nj}(t)\delta_j\mu_j\hat\beta_c$

$+ \sum_{j=1}^nW_{nj}(t)\big[\hat g_n^c(t_j) - g(t_j)\big] + \sum_{j=1}^nW_{nj}(t)\big[g(t_j) - g(t)\big] + \sum_{j=1}^nW_{nj}(t)\xi_j(\beta - \hat\beta_I) + \sum_{j=1}^nW_{nj}(t)\mu_j(\beta - \hat\beta_I) := \sum_{k=1}^{10}G_{kn}(t).$

Therefore, we only need to prove that $G_{kn}(t) \to 0$ a.s. for every $t \in [0,1]$ and $k = 1, 2, \dots, 10$. From (A0)-(A4), Lemmas 4.3-4.4, (5.2) and (5.3), one can get

$|G_{1n}(t)| \le |\beta - \hat\beta_c|\max_{1\le j\le n}|\delta_j\xi_j|\sum_{j=1}^nW_{nj}(t) \le |\beta - \hat\beta_c|\max_{1\le j\le n}|\xi_j|\sum_{j=1}^nW_{nj}(t) = o(1) \quad a.s.,$

$|G_{2n}(t)| \le \max_{1\le j\le n}\big|g(t_j) - \hat g_n^c(t_j)\big|\Big|\sum_{j=1}^nW_{nj}(t)\delta_j\Big| = o(1) \quad a.s.$

The proofs that $G_{kn}(t) \to 0$ a.s. for every $t \in [0,1]$ and $k = 3, \dots, 10$ are analogous. Thus, the proof of Theorem 3.2(b) is completed.

Acknowledgements

The authors greatly appreciate the constructive comments and suggestions of the Editor and referee. This research was supported by the National Natural Science Foundation of China (11701368).

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Engle, R.F., Granger, C.W.J., Rice, J. and Weiss, A. (1986) Semiparametric Estimation of the Relation between Weather and Electricity Sales. Journal of the American Statistical Association, 81, 310-320.
https://doi.org/10.1080/01621459.1986.10478274
[2] Liu, J.X. and Chen, X.R. (2005) Consistency of LS Estimator in Simple Linear EV Regression Models. Acta Mathematica Scientia, Series B, 25, 50-58.
https://doi.org/10.1016/S0252-9602(17)30260-6
[3] Miao, Y., Yang, G. and Shen, L. (2007) The Central Limit Theorem for LS Estimator in Simple Linear EV Regression Models. Communications in Statistics—Theory and Methods, 36, 2263-2272.
https://doi.org/10.1080/03610920701215266
[4] Miao, Y. and Liu, W. (2009) Moderate Deviations for LS Estimator in Simple Linear EV Regression Model. Journal of Statistical Planning and Inference, 139, 3122-3131.
https://doi.org/10.1016/j.jspi.2009.02.021
[5] Fan, G.L., Liang, H.Y., Wang, J.F. and Xu, H.X. (2010) Asymptotic Properties for LS Estimators in EV Regression Model with Dependent Errors. Advances in Statistical Analysis, 94, 89-103.
https://doi.org/10.1007/s10182-010-0124-3
[6] Cui, H.J. and Li, R.C. (1998) On Parameter Estimation for Semi-Linear Errors-in-Variables Models. Journal of Multivariate Analysis, 64, 1-24.
https://doi.org/10.1006/jmva.1997.1712
[7] Liang, H., Härdle, W. and Carroll, R.J. (1999) Estimation in a Semiparametric Partially Linear Errors-in-Variables Model. The Annals of Statistics, 27, 1519-1535.
[8] Zhou, H.B., You, J.H. and Zhou, B. (2010) Statistical Inference for Fixed-Effects Partially Linear Regression Models with Errors in Variables. Statistical Papers, 51, 629-650.
https://doi.org/10.1007/s00362-008-0150-3
[9] Wang, Q., Linton, O. and Härdle, W. (2004) Semiparametric Regression Analysis with Missing Response at Random. Journal of the American Statistical Association, 99, 334-345.
https://doi.org/10.1198/016214504000000449
[10] Wang, Q. and Sun, Z. (2007) Estimation in Partially Linear Models with Missing Responses at Random. Journal of Multivariate Analysis, 98, 1470-1493.
https://doi.org/10.1016/j.jmva.2006.10.003
[11] Bianco, A., Boente, G., González-Manteiga, W. and Pérez-González, A. (2010) Estimation of the Marginal Location under a Partially Linear Model with Missing Responses. Computational Statistics & Data Analysis, 54, 546-564.
https://doi.org/10.1016/j.csda.2009.09.028
[12] Healy, M.J.R. and Westmacott, M. (1956) Missing Values in Experiments Analysed on Automatic Computers. Applied Statistics, 5, 203-206.
https://doi.org/10.2307/2985421
[13] Cheng, P.E. (1994) Nonparametric Estimation of Mean Functionals with Data Missing at Random. Journal of the American Statistical Association, 89, 81-87.
https://doi.org/10.1080/01621459.1994.10476448
[14] Härdle, W., Liang, H. and Gao, J.T. (2000) Partially Linear Models. Physica-Verlag, Heidelberg.
https://doi.org/10.1007/978-3-642-57700-0
[15] Gao, J.T., Chen, X.R. and Zhao, L.C. (1994) Asymptotic Normality of a Class of Estimators in Partial Linear Models. Acta Mathematica Sinica, 37, 256-268.
[16] Chen, H. (1988) Convergence Rates for Parametric Components in a Partly Linear Model. The Annals of Statistics, 16, 136-146.
https://doi.org/10.1214/aos/1176350695
[17] Baek, J.I. and Liang, H.Y. (2006) Asymptotics of Estimators in Semi-Parametric Model under NA Samples. Journal of Statistical Planning and Inference, 136, 3362-3382.
https://doi.org/10.1016/j.jspi.2005.01.008
