An Analysis of Two-Dimensional Image Data Using a Grouping Estimator

Abstract

Machine learning, one family of methods used in artificial intelligence, is now widely used to analyze two-dimensional (2D) images in various fields. In these analyses, estimating the boundary between two regions is a basic but important problem. If the model contains stochastic factors such as random observation errors, determining the boundary is not easy. When the probability distributions are mis-specified, ordinary methods such as the probit and logit maximum likelihood estimators (MLE) have large biases. The grouping estimator is a semiparametric estimator based on grouping the data that does not require specific probability distributions. For 2D images, the grouping is simple. Monte Carlo experiments show that the grouping estimator clearly improves on the probit MLE in many cases. The grouping estimator essentially lowers the resolution of the image, and the present findings imply that methods developed for low-resolution image analyses might not be the proper ones for high-resolution image analyses. It is necessary to combine and compare the results of high- and low-resolution image analyses, and the grouping estimator may provide a theoretical justification for such analyses.

Share and Cite:

Nawata, K. (2022) An Analysis of Two-Dimensional Image Data Using a Grouping Estimator. Open Journal of Statistics, 12, 33-48. doi: 10.4236/ojs.2022.121003.

1. Introduction

The analysis of two-dimensional (2D) images by machine learning, one family of methods used in artificial intelligence, is widespread [1] - [20] . For details, see the reviews and surveys [21] [22] [23] [24] [25] of this subject. Brown [26] mentions the potential of machine learning and its limitations. In keeping with Brown’s statement, we believe that it is important to research and identify the limitations of machine learning. In 2D image analyses, it is very important to divide the sample space into two regions (such as the target and background regions). Suppose that S is a bounded subspace of the 2D space and y is a binary variable which takes the value 1 in the region A ⊂ S and 0 in the region B ⊂ S. The boundary is given by the deterministic function g(x) = 0 where x = (x_1, x_2); A is given by g(x) > 0 and B by g(x) < 0, as shown in Figure 1. In this case, we can separate the space with a non-stochastic line, for example by using support vector machines [27] .

However, when the model contains stochastic factors such as random observation errors, as in Figure 2, it is necessary to consider stochastic models. Ma et al. [28] pointed out that even if non-stochastic patterns of noise are added to images, machine learning methods may not give proper results. In this case, the regions are separated by g(x) = 0 such that P[y = 1 | x] > 1/2 if x ∈ A = {x : g(x) > 0} and P[y = 1 | x] < 1/2 if x ∈ B = {x : g(x) < 0}. (Note that the model can easily be generalized to α-quantile cases.)

Figure 1. The case in which S is divided into two regions by a non-stochastic line.

Figure 2. The case in which the model contains stochastic factors.

The problem is that it is not easy to estimate g(x) properly in the stochastic case. If g(x) is mis-specified, we cannot get proper results. Nawata [29] proposed an estimator of g(x) by the grouping method based on Nawata [30] [31] (hereafter referred to as the grouping estimator). The grouping estimator is a semiparametric estimator and does not require P[y = 1 | x] to be specified. The method has not been widely used because grouping the observations is generally difficult. Meanwhile, analyses of 2D high-resolution images have become very important in many fields. The sizes of 2D images are finite, and the images are overlaid with grid lines (usually forming rectangles). Therefore, the grouping is very easy, and each group can have a sufficient number of observations.

A grouping estimator for binary variables in the 2D case is explained in this study, and the results of a Monte Carlo study are presented.

2. Models and Assumptions of a Grouping Estimator

Let y_ij be the binary variable that takes the value 1 if the targeted object occurs and 0 otherwise, let S be a bounded subspace of the 2D space where we obtain observations, and let z_ij be an m-dimensional vector given by

z_ij = (1, z_ij2, z_ij3, …, z_ijm), z_ijk = z_k(x_ij), x_ij = (x_1i, x_2j) ∈ S, (1)

i = 1, 2, …, n_1, j = 1, 2, …, n_2.

z_ij is a function of x_ij. Let n = n_1 n_2, which is the total number of observations. S is divided into two regions such that:

Region A: P[y_ij = 1 | x_ij] > 1/2 if x_ij ∈ A, (2)

and

Region B: P[y_ij = 1 | x_ij] < 1/2 if x_ij ∈ B.

Suppose that the boundary C between the two regions in S is given by g(x) = zβ = 0, and

P[y_ij = 1] > 1/2 if z_ij β > 0, P[y_ij = 1] < 1/2 if z_ij β < 0, (3)

and

P[y_ij = 1] = 1/2 if z_ij β = 0,

where β is the m-dimensional vector of unknown parameters. This means that

z_ij β > 0 if x_ij ∈ A and z_ij β < 0 if x_ij ∈ B. (4)

Note that we consider only a linear function of z_ij, but the method can easily be generalized to non-linear cases. From (3) and (4), we get

y_ij* = z_ij β + u_ij and y_ij = 1(y_ij* > 0), (5)

where 1(D) is the indicator function that takes the value 1 if D is true and 0 otherwise, and u_ij is a random error term such that F_ij(0) = 1/2, where F_ij(u) is the distribution function of u_ij. One of the biggest problems is that we do not know this distribution function. A linear probability function (and modified types of linear functions) is sometimes used because computation with such a function is easy. However, Amemiya ( [32] , p. 268) notes that it is not a proper distribution function, as it does not lie between 0 and 1. The other widely used alternative is the logistic distribution. Miguel-Hurtado et al. [33] considered the linear and logistic regression methods and concluded that machine learning classification typically out-performs linear (logistic) regression for predicting demographic traits from hand measurements. However, it is to be expected that we will not obtain correct results if the model is mis-specified; that is, there is no special reason to use linear or logistic regression in the analysis. The grouping estimator is a semiparametric estimator that does not depend on the distribution of the error terms; it is consistent not only in independent and identically distributed (i.i.d.) cases but also in heteroscedastic cases.
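The latent-variable model (5) is easy to simulate. The sketch below is an illustration, not the paper's code: it assumes a grid over S = (0, 5] × (0, 5], z = (1, x_1, x_2), a hypothetical true boundary x_2 = x_1, and Cauchy errors, which have median zero so that F_ij(0) = 1/2 holds even though no moments exist:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical grid over S = (0, 5] x (0, 5]: 200 grid lines per axis.
n1 = n2 = 200
x1 = np.linspace(5 / n1, 5, n1)
x2 = np.linspace(5 / n2, 5, n2)
X1, X2 = np.meshgrid(x1, x2, indexing="ij")

# z_ij = (1, x_1i, x_2j); true boundary x_2 = x_1, i.e. beta = (0, -1, 1).
beta = np.array([0.0, -1.0, 1.0])
z = np.stack([np.ones_like(X1), X1, X2], axis=-1)
latent = z @ beta                            # z_ij beta

# Model (5): y_ij = 1(z_ij beta + u_ij > 0); Cauchy errors have median 0,
# so P[y_ij = 1 | x_ij] > 1/2 exactly when z_ij beta > 0.
u = rng.standard_cauchy(size=latent.shape)
y = (latent + u > 0).astype(int)

# Under heavy tails, P[y = 1] stays well away from 0 and 1 even far from
# the boundary, which is what biases a mis-specified probit MLE.
print(y.mean())
```

Replacing `standard_cauchy` with `standard_normal` would give the correctly specified probit case.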

The following assumptions are made:

Assumption 1

S is a bounded closed subspace of the 2D space. S is divided into Region A: P[y = 1 | x] > 1/2 if x ∈ A and Region B: P[y = 1 | x] < 1/2 if x ∈ B, where x = (x_1, x_2). The boundary C of the two regions is given by g(x) = zβ = 0, z = (1, z_2, z_3, …, z_m), z_k = z_k(x), k = 1, 2, …, m.

Assumption 2

z_k(x_1, x_2) is a continuous and bounded function of x_1, x_2 in a proper neighborhood C_0 of C. There exists ε_1 > 0 such that |zβ| > ε_1 if x = (x_1, x_2) ∉ C_0.

Assumption 3

{u_ij} are independent random variables but are not necessarily identically distributed. Let F_ij(u) = F(u | x_ij), x_ij = (x_1i, x_2j), be the distribution function of u_ij. Then F_ij(0) = 1/2, F_ij(u) > 1/2 if u > 0 and F_ij(u) < 1/2 if u < 0, F_ij(u) is a continuous function of u, and there exist ε_2, ε_3 > 0 such that |F_ij(u) − 1/2| > ε_2 |u| if (x_1i, x_2j) ∈ C_0 and |F_ij(u) − 1/2| > ε_3 if (x_1i, x_2j) ∉ C_0.

Assumption 4

{x_ij} satisfy the following conditions:

1) Let S_ij(δ) be the neighborhood of x_ij such that S_ij(δ) = {x : ‖x − x_ij‖ < δ} and let n_ij(δ) be the number of observations in S_ij(δ). Then there exist α_1, α_2, a_1, a_2 > 0 such that n_ij(δ) > a_1 n^{α_1} and δ < a_2 n^{−α_2} for any x_ij.

2) (1/n) Σ_{i,j} z_ij z_ij′ converges to a nonsingular matrix.

3. Grouping Estimator for Binary Cases

Divide S into T non-overlapping subsets S 1 , S 2 , , S T so that the conditions of Assumption 4 are satisfied.

Let n t be the number of observations in S t . Define

z̄_t = Σ_{x_ij ∈ S_t} z_ij / n_t, (6)

and

ȳ_t = 1 if y_t+ > n_t/2, ȳ_t = 0 if y_t+ < n_t/2, and ȳ_t is a random variable that takes the values 0 and 1 with probability 1/2 each if y_t+ = n_t/2,

where y_t+ = Σ_{x_ij ∈ S_t} y_ij.

z̄_t represents the mean of z_ij and ȳ_t the median of y_ij in S_t. The grouping estimator for the binary model is the probit estimator using (z̄_t, ȳ_t), t = 1, 2, …, T. The estimator maximizes

L(b) = Π_{ȳ_t = 0} {1 − Φ(√n_t z̄_t b)} Π_{ȳ_t = 1} Φ(√n_t z̄_t b), (7)

where Φ is the distribution function of the standard normal distribution, and b̂ denotes the value maximizing L(b). Since the boundary of the two regions does not change when β is multiplied by a non-zero constant, b̂ = (b̂_1, b̂_2, …, b̂_m) must be normalized. Therefore, b̂ is standardized by its i-th element b̂_i, and the grouping estimator is defined by

β̂ = b̂ / b̂_i. (8)

From Theorem 4.3 of Nawata [29] , the estimator is consistent. Since the idea of the grouping estimator is based on the asymptotic normality of the median, and the proof uses Bernstein’s inequality, which gives precise bounds on the tail probabilities of sums of random variables (for details see Bennett [34] ), it is useful to consider the normal distribution and the probit model.
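The steps of Section 3 can be sketched in a few lines of code. The sketch below is an illustration, not the author's implementation: the function name, the SciPy-based optimization, and the tie-breaking details are my own choices. It groups the pixels into non-overlapping block × block squares, takes the group mean z̄_t and group median ȳ_t, maximizes the grouped probit likelihood (7) with the √n_t scaling, and applies the normalization (8):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def grouping_probit(z, y, block=3, norm_index=2, seed=0):
    """Grouping estimator for a 2D binary image (sketch).

    z : (n1, n2, m) regressor array; y : (n1, n2) array of 0/1 pixels.
    Returns beta_hat = b_hat / b_hat[norm_index] as in (8).
    """
    rng = np.random.default_rng(seed)
    n1, n2, m = z.shape
    T1, T2 = n1 // block, n2 // block   # trailing rows/columns are unused
    nt = block * block                  # observations per group
    zbar = np.empty((T1 * T2, m))
    ybar = np.empty(T1 * T2)
    t = 0
    for i in range(T1):
        for j in range(T2):
            rows = slice(i * block, (i + 1) * block)
            cols = slice(j * block, (j + 1) * block)
            zbar[t] = z[rows, cols].reshape(nt, m).mean(axis=0)  # z-bar_t
            yplus = int(y[rows, cols].sum())                     # y_t^+
            if 2 * yplus == nt:              # tie: 0 or 1 with prob 1/2
                ybar[t] = rng.integers(0, 2)
            else:                            # group median of y
                ybar[t] = 1.0 if 2 * yplus > nt else 0.0
            t += 1

    def negloglik(b):                        # minus the log of (7)
        p = norm.cdf(np.sqrt(nt) * (zbar @ b))
        p = np.clip(p, 1e-12, 1 - 1e-12)
        return -np.sum(ybar * np.log(p) + (1 - ybar) * np.log(1 - p))

    b = minimize(negloglik, np.zeros(m), method="BFGS").x
    return b / b[norm_index]                 # normalization (8)
```

On data generated from (5) with z = (1, x_1, x_2) and β = (0, −1, 1), standardizing by the coefficient of x_2 (norm_index=2) should return estimates close to (0, −1, 1) whenever the error terms have median zero.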

4. Monte Carlo Experiments

In the Monte Carlo study, we consider the case where S is the rectangle given by 0 < x_1 ≤ 5 and 0 < x_2 ≤ 5. Both x_1 and x_2 are divided by 1000 equidistant grid lines. Let x_1i be the i-th grid line in x_1 and x_2j be the j-th grid line in x_2. The intersection of x_1i and x_2j is denoted by x_ij, and each trial contains 1 million observations (n = 1,000,000).

We consider the basic but important models given by

y_ij* = −γ_0 − γ_1 x_1i + x_2j + u_ij. (9)

The boundary C is given by

x_2 = γ_0 + γ_1 x_1. (10)

The parameter value of γ_0 is 0 for all cases, and the cases γ_1 = 1, 2, and 4 are considered. The areas of A and B are equal for γ_1 = 1; the area of A is 1/4 of S for γ_1 = 2, and 1/8 of S for γ_1 = 4. First, the cases in which the error terms are i.i.d. random variables are analyzed. For the distributions of u_ij, the normal (normal distribution cases, Cases 1 - 3) and Cauchy (Cauchy distribution cases, Cases 4 - 6) distributions are considered. Then, non-i.i.d. (heteroscedastic) cases such that 1) u_ij = ε if x_ij ∈ A and u_ij = 2ε if x_ij ∈ B (heteroscedastic distribution cases I, Cases 7 - 9) and 2) u_ij = ε if x_ij ∈ A and u_ij = 4ε if x_ij ∈ B (heteroscedastic distribution cases II, Cases 10 - 12), where ε follows the standard normal distribution, are analyzed.

For all cases, γ_0 and γ_1 are estimated by the probit maximum likelihood estimator (MLE) and the grouping estimator. For the grouping estimator, each group contains the 9 intersection points determined by 3 neighboring grid lines in each of x_1 and x_2. The number of groups becomes 333 × 333 = 110,889. (The points on x_1 = 5 or x_2 = 5 are not used.) As this example shows, the grouping estimator essentially reduces the resolution of the images. The number of repetitions is 100.
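A scaled-down version of this experiment can illustrate the design. The sketch below is my own code, not the paper's: it uses a much smaller grid (120 × 120 rather than 1000 × 1000) and a single replication with Cauchy errors and γ_1 = 2, generates y from (9), fits an ordinary pixel-level probit MLE, fits the grouped probit on 3 × 3 blocks, and reports both normalized estimates:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def fit_probit(z, y, scale=1.0):
    """Probit MLE on (z, y); scale = sqrt(n_t) reproduces (7)."""
    def nll(b):
        p = np.clip(norm.cdf(scale * (z @ b)), 1e-12, 1 - 1e-12)
        return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    return minimize(nll, np.zeros(z.shape[1]), method="BFGS").x

rng = np.random.default_rng(0)
n1 = n2 = 120                      # the paper uses 1000 x 1000
g0, g1 = 0.0, 2.0                  # boundary x_2 = g0 + g1 * x_1
x = np.linspace(5 / n1, 5, n1)
X1, X2 = np.meshgrid(x, x, indexing="ij")
z = np.stack([np.ones_like(X1), X1, X2], axis=-1).reshape(-1, 3)
latent = -g0 - g1 * z[:, 1] + z[:, 2]          # model (9)
y = (latent + rng.standard_cauchy(n1 * n2) > 0).astype(float)

# Pixel-level probit MLE (mis-specified under Cauchy errors).
b = fit_probit(z, y)
b_mle = b / b[2]

# Grouping estimator: 3 x 3 blocks, group mean of z, group median of y.
block, nt = 3, 9
T = n1 // block
zbar = z.reshape(T, block, T, block, 3).mean(axis=(1, 3)).reshape(-1, 3)
ybar = (y.reshape(T, block, T, block).sum(axis=(1, 3)) > nt / 2)
b = fit_probit(zbar, ybar.reshape(-1).astype(float), scale=np.sqrt(nt))
b_grp = b / b[2]

# After normalizing by the coefficient of x_2, (g0, g1) = (-b[0], -b[1]).
print("probit MLE:", -b_mle[0], -b_mle[1])
print("grouping  :", -b_grp[0], -b_grp[1])
```

With only one small replication the numbers are noisy; the paper's Tables 1-4 average 100 replications on the full grid.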

Tables 1-4 show the results of the Monte Carlo experiments. When the error terms are i.i.d. and follow the normal distribution (Table 1), the probit MLE is an efficient estimator, and its biases and standard deviations (SDs) are quite small. The biases of the grouping estimator are also very small; however, its SDs are larger than those of the probit MLE. When the error terms follow a Cauchy distribution (Table 2), the biases of the probit MLE are very small when γ_1 = 1. This is considered to occur because the distribution of y_ij is symmetric with respect to the boundary C in this case. In the cases in which γ_1 = 2 and 4, the biases of the probit MLE become larger. In particular, in the γ_1 = 4 case, the biases are quite large: −0.9485 and −1.2541 for γ_0 and γ_1, respectively. On the other hand, the biases of the grouping estimator are very small for the cases in which γ_1 = 1 and 2. For the γ_1 = 4 case, the biases are −0.0421 and −0.2264 for γ_0 and γ_1, respectively, much smaller than those of the probit MLE. Although the SDs of the grouping estimator are larger than those of the probit MLE in many cases, the SDs are much smaller than the biases. Figure 3

Table 1. Normal distribution cases (Cases 1 - 3).

SD: Standard Deviation.

Table 2. Cauchy distribution cases (Cases 4 - 6).

Table 3. Heteroscedastic cases I (Cases 7 - 9).

shows the boundaries obtained from the true parameter values, the probit MLE, and the grouping estimator for the Cauchy and γ 1 = 4 case. The boundaries of the probit MLE and grouping estimator are calculated for Case 6 in Table 2. The result obtained with the grouping estimator is much more accurate than that of the probit MLE.

Figure 3. Boundaries obtained from the true parameter values (True), probit MLE (Probit) and grouping estimator (Grouping) for Cauchy distributions: Case 6 in Table 2. The grouping estimator clearly improves the probit MLE.

Table 4. Heteroscedastic cases II (Cases 10 - 12).

The results for the heteroscedastic error term cases are given in Table 3 (heteroscedastic distribution cases I, Cases 7 - 9) and Table 4 (heteroscedastic distribution cases II, Cases 10 - 12). For the heteroscedastic distribution cases I, the grouping estimator clearly reduces the biases of γ_0, but the biases of γ_1 become slightly larger. (Although the SDs of the grouping estimator are larger than those of the probit MLE, the effects of the SDs are much smaller than those of the biases, as noted above.) Figures 4-6 show the boundaries obtained from the true parameter values, the probit MLE, and the grouping estimator. As before, the boundaries of the probit MLE and grouping estimator are calculated from the results in Table 3. The grouping estimator clearly improves the probit MLE in the heteroscedastic distribution cases I. For the heteroscedastic distribution cases II, the grouping estimator reduces the biases of γ_0, but the biases of γ_1 are slightly increased in the cases in which γ_1 = 2 and 4. Figures 7-9 show the boundaries obtained from the true parameter values, the probit MLE, and the grouping estimator. As before, these boundaries are calculated from the results in Table 4. The grouping estimator clearly improves the probit MLE in Cases 10 and 11 but only slightly improves it in Case 12.

Figure 4. Boundaries obtained from the true parameter values (True), probit MLE (Probit), and grouping estimator (Grouping) for heteroscedastic distribution I: Case 7 in Table 3. The grouping estimator clearly improves the probit MLE.

Figure 5. Boundaries obtained from the true parameter values (True), probit MLE (Probit), and grouping estimator (Grouping) for heteroscedastic distribution I: Case 8 in Table 3. The grouping estimator clearly improves the probit MLE.

Figure 6. Boundaries obtained from the true parameter values (True), probit MLE (Probit), and grouping estimator (Grouping) for heteroscedastic distribution I: Case 9 of Table 3. The grouping estimator clearly improves the probit MLE.

Figure 7. Boundaries obtained from the true parameter values (True), probit MLE (Probit), and grouping estimator (Grouping) for heteroscedastic distribution II: Case 10 in Table 4. The grouping estimator clearly improves the probit MLE.

Figure 8. Boundaries obtained from the true parameter values (True), probit MLE (Probit), and grouping estimator (Grouping) for heteroscedastic distribution II: Case 11 in Table 4. The grouping estimator clearly improves the probit MLE.

Figure 9. Boundaries obtained from the true parameter values (True), probit MLE (Probit), and grouping estimator (Grouping) for heteroscedastic distribution II: Case 12 in Table 4. The grouping estimator slightly improves the probit MLE.

5. Discussion

Analyses of high- and very-high-resolution images [35] - [44] are becoming more important as such data become more widely available. When the boundary between two regions is not deterministic, a stochastic approach must be used to determine the boundary, and it is necessary to identify the proper functional form of P[y_ij = 1 | x_ij]. Although distributions such as the normal, logistic, and linear probability are frequently used, we cannot obtain consistent results unless the distribution is correctly specified. The Monte Carlo experiments confirm this conclusion: the probit MLE has very large biases in many cases.

The grouping estimator is a semiparametric estimator and does not depend on the probability functions. It is consistent under very general assumptions. The results of the Monte Carlo experiments show that the grouping estimator clearly improves the conventional probit MLE when the distribution of the error terms is non-normal or heteroscedastic. The grouping estimator essentially reduces the resolution of the images. Previous low-resolution-image analyses [35] [45] [46] [47] [48] unintentionally used the methods of the grouping estimator. In other words, misspecification of the probability distributions might not be a critical problem for low-resolution images, but it might not be proper to apply the methods used for low-resolution images to high-resolution images. When we analyze high-resolution images, the conventional methods (used in low-resolution-image analyses) might not produce satisfactory results, and special attention should be paid to the selection of the models. Shao et al. [49] used a pyramid scene parsing pooling module that combines high-resolution and low-resolution images. Xu et al. [50] also suggested a method that changes a high-resolution image into low-dimensional images by bicubic downsampling and combines them. However, their methods lack a theoretical background; the grouping estimator may provide theoretical justifications for them. The results of high-resolution image analyses performed by conventional methods such as the probit MLE should be combined and compared with the results obtained from low-resolution images. Although only 2D cases are considered in this paper, the method can easily be applied to 3D cases [51] [52] .

6. Conclusions

Analyses of 2D images are increasing in importance as high-resolution images become more commonly available. Dividing 2D images into two regions, A and B, is a basic but very important challenge. When the boundary of the two regions is not deterministic, a stochastic approach must be used to determine the boundary between the regions. In this case, it is necessary to identify a proper probability functional form. Although distributions such as normal, logistic, and linear probability are frequently used, accurate results cannot be obtained unless the distribution is correctly specified, as shown in the Monte Carlo experiments.

The grouping estimator does not depend on probability distributions. It is a consistent estimator not only in i.i.d. cases but also in heteroscedastic cases. The Monte Carlo experiments show that the grouping estimator improves the probit MLE in many cases when the distribution of the error terms is non-normal or heteroscedastic. The grouping estimator is based on grouping the data, and it essentially decreases the resolution of the images. In other words, misspecification of the distributions of the error terms might not be critical for low-resolution images, but it is critical for high-resolution images; the grouping estimator gives a theoretical justification for this. It implies that we might not obtain proper results by applying the conventional methods used for low-resolution images to the analysis of high-resolution images. If the probability distributions are mis-specified, we may obtain incorrect results in high-resolution image analyses. It is important to combine and compare the high- and low-resolution-image results.

Methods to determine the optimal grouping (for example, the number of observations in each group) are not yet known. Proper methods to combine and compare the high- and low-resolution-image results are also important, but such methods have not been developed yet. Research on using the grouping estimator for 3D images is also important. These are topics to be studied in the future.

Acknowledgements

The author would like to thank an anonymous reviewer for his/her helpful comments and suggestions.

Conflicts of Interest

The author declares no conflicts of interest.

References

[1] De la Calleja, J. and Fuentes, O. (2004) Machine Learning and Image Analysis for Morphological Galaxy Classification. Monthly Notices of the Royal Astronomical Society, 349, 87-93.
https://doi.org/10.1111/j.1365-2966.2004.07442.x
[2] Duro, D.C., Franklin, S.E. and Dubé, M.G. (2012) A Comparison of Pixel-Based and Object-Based Image Analysis with Selected Machine Learning Algorithms for the Classification of Agricultural Landscapes Using SPOT-5 HRG Imagery. Remote Sensing of Environment, 118, 259-272.
https://doi.org/10.1016/j.rse.2011.11.020
[3] Li, X., Cheng, X., Chen, W., et al. (2015) Identification of Forested Landslides Using LiDar Data, Object-Based Image Analysis, and Machine Learning Algorithms. Remote Sensing, 7, 9705-9726.
https://doi.org/10.3390/rs70809705
[4] Arganda-Carreras, I., Kaynig, V., Rueden, C., et al. (2017) Trainable Weka Segmentation: A Machine Learning Tool for Microscopy Pixel Classification. Bioinformatics, 33, 2424-2426.
https://doi.org/10.1093/bioinformatics/btx180
[5] Kan, A. (2017) Machine Learning Applications in Cell Image Analysis. Immunology and Cell Biology, 95, 525-530.
https://doi.org/10.1038/icb.2017.16
[6] Zhang, Y.C. and Kagen, A.C. (2017) Machine Learning Interface for Medical Image Analysis. Journal of Digital Imaging, 30, 615-621.
https://doi.org/10.1007/s10278-016-9910-0
[7] Bulat, A. and Tzimiropoulos, G. (2018) Super-FAN: Integrated Facial Landmark Localization and Super-Resolution of Real-World Low Resolution Faces in Arbitrary Poses with GANs. 2018 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2018), Salt Lake City, 18-23 June 2018, 109-117.
https://doi.org/10.1109/CVPR.2018.00019
[8] Komura, D. and Ishikawa, S. (2019) Machine Learning Approaches for Pathologic Diagnosis. Virchows Archiv, 475, 131-138.
https://doi.org/10.1007/s00428-019-02594-w
[9] Adams, J., Qiu, Y., Xu, Y., et al. (2020) Plant Segmentation by Supervised Machine Learning Methods. Plant Phenome Journal, 3, e20001.
https://doi.org/10.1002/ppj2.20001
[10] Bi, Q., Goodman, K.E., Kaminsky, J., et al. (2020) What Is Machine Learning? A Primer for the Epidemiologist. American Journal of Epidemiology, 188, 2222-2239.
https://doi.org/10.1093/aje/kwz189
[11] Kose, K., Bozkurt, A., Alessi-Fox, C., et al. (2020) Utilizing Machine Learning for Image Quality Assessment for Reflectance Confocal Microscopy. Journal of Investigative Dermatology, 140, 1214-1222.
https://doi.org/10.1016/j.jid.2019.10.018
[12] Tang, H. and Hu, Z. (2020) Research on Medical Image Classification Based on Machine Learning. IEEE Access, 8, 93145-93154.
https://doi.org/10.1109/ACCESS.2020.2993887
[13] Turner, O.C., Aeffner, F. and Bangari, D.S. (2020) Society of Toxicologic Pathology Digital Pathology and Image Analysis Special Interest Group Article: Opinion on the Application of Artificial Intelligence and Machine Learning to Digital Toxicologic Pathology. Toxicologic Pathology, 48, 277-294.
https://doi.org/10.1177/0192623319881401
[14] Wei, P.W., He, F. and Zou, Y. (2020) Content Semantic Image Analysis and Storage Method Based on Intelligent Computing of Machine Learning Annotation. Neural Computing and Applications, 32, 1813-1822.
https://doi.org/10.1007/s00521-020-04739-4
[15] Li, J., Shao, S. and Hong, J. (2021) Machine Learning Shadowgraph for Particle Size and Shape Characterization. Measurement Science and Technology, 32, Article ID: 015406.
https://doi.org/10.1088/1361-6501/abae90
[16] Santhi, K. and Reddy, A.R.M. (2021) An Automated Framework for Coronary Analysis from Coronary Cine Angiograms Using Machine Learning and Image Analysis Techniques. IT in Industry, 9, 1406-1412.
https://doi.org/10.17762/itii.v9i1.284
[17] Sato, S., Maki, S., Yamanaka, T., et al. (2021) Machine Learning-Based Image Analysis for Accelerating the Diagnosis of Complicated Preneoplastic and Neoplastic Ductal Lesions in Breast Biopsy Tissues. Breast Cancer Research and Treatment, 188, 649-659.
https://doi.org/10.1007/s10549-021-06243-2
[18] Botero, U.J., Lson, R., Lu, H., et al. (2021) Hardware Trust and Assurance through Reverse Engineering: A Tutorial and Outlook from Image Analysis and Machine Learning Perspectives. ACM Journal on Emerging Technologies in Computing Systems, 17, Article 62.
https://doi.org/10.1145/3464959
[19] Tang, X., Kusmartseva, I., Kulkarni, S., et al. (2021) Image-Based Machine Learning Algorithms for Disease Characterization in the Human Type 1 Diabetes Pancreas. American Journal of Pathology, 191, 454-462.
https://doi.org/10.1016/j.ajpath.2020.11.010
[20] Wang, P., Fan, E. and Wang, P. (2021) Comparative Analysis of Image Classification Algorithms Based on Traditional Machine Learning and Deep Learning. Pattern Recognition Letters, 141, 61-67.
https://doi.org/10.1016/j.patrec.2020.07.042
[21] Liakos, K.G., Busato, P., Moshou, D., et al. (2018) Machine Learning in Agriculture: A Review. Sensors, 18, 2674.
https://doi.org/10.3390/s18082674
[22] Gewali, U.B., Monteiro, S.T. and Saber, E. (2019) Machine Learning Based Hyperspectral Image Analysis: A Survey.
[23] Martin-Isla, C., Campello, V.M., Izquierdo, C., et al. (2020) Image-Based Cardiac Diagnosis with Machine Learning: A Review. Frontiers in Cardiovascular Medicine, 7, Article No. 1.
https://doi.org/10.3389/fcvm.2020.00001
[24] Zahia, S., Zapirain, M.B.G. and Sevillano, X. (2020) Pressure Injury Image Analysis with Machine Learning Techniques: A systematic Review on Previous and Possible Future Methods. Artificial Intelligence in Medicine, 102, Article ID: 101742.
https://doi.org/10.1016/j.artmed.2019.101742
[25] de Matos, J., Ataky, S.T.M., de Souza Britto Jr., A., et al. (2021) Machine Learning Methods for Histopathological Image Analysis: A Review. Electronics, 10, 562.
https://doi.org/10.3390/electronics10050562
[26] Brown, S. (2021) Machine Learning, Explained. MIT Sloan School of Management, Cambridge.
https://mitsloan.mit.edu/ideas-made-to-matter/machine-learning-explained
[27] Mahesh, B. (2020) Machine Learning Algorithms—A Review. International Journal of Science and Research, 9, 381-386.
[28] Ma, X., Niu, Y., Gu, L., Wang, Y., et al. (2021) Understanding Adversarial Attacks on Deep Learning Based Medical Image Analysis Systems. Pattern Recognition, 110, Article ID: 107332.
https://doi.org/10.1016/j.patcog.2020.107332
[29] Nawata, K. (1994) Estimation of the Boundary of the Two Regions by the Grouping Method. Journal of the Japan Statistical Society, 24, 14-35.
[30] Nawata, K. (1990) Robust Estimation Based on Grouped-Adjusted Data in Linear Regression Models. Journal of Econometrics, 43, 317-336.
https://doi.org/10.1016/0304-4076(90)90123-B
[31] Nawata, K. (1990) Robust Estimation Based on Grouped-Adjusted Data in Censored Regression Models. Journal of Econometrics, 43, 337-362.
https://doi.org/10.1016/0304-4076(90)90124-C
[32] Amemiya, T. (1985) Advanced Econometrics. Harvard University Press, Cambridge.
[33] Miguel-Hurtado, O., Guest, R., Stevenage, S.V., et al. (2016) Comparing Machine Learning Classifiers and Linear/Logistic Regression to Explore the Relationship between Hand Dimensions and Demographic Characteristics. PLoS ONE, 11, e0165521.
https://doi.org/10.1371/journal.pone.0165521
[34] Bennett, G. (1962) Probability Inequalities for the Sum of Independent Random Variables. Journal of the American Statistical Association, 57, 33-45.
https://doi.org/10.1080/01621459.1962.10482149
[35] Mojica, E., Pertuz, S. and Arguello, H. (2017) High-Resolution Coded-Aperture Design for Compressive X-Ray Tomography Using Low Resolution Detectors. Optics Communications, 404, 103-109.
https://doi.org/10.1016/j.optcom.2017.06.053
[36] Singh, S., Guo, Y., Winiarski, B., et al. (2018) High Resolution Low kV EBSD of Heavily Deformed and Nanocrystalline Aluminum by Dictionary-Based Indexing. Scientific Reports, 8, Article No. 10991.
https://doi.org/10.1038/s41598-018-29315-8
[37] Li, Y., Xu, L., et al. (2019) A Y-Net Deep Learning Method for Road Segmentation Using High-Resolution Visible Remote Sensing Images. Remote Sensing Letters, 10, 381-390.
https://doi.org/10.1080/2150704X.2018.1557791
[38] Alganci, U., Soydas, M. and Sertel, E. (2020) Comparative Research on Deep Learning Approaches for Airplane Detection from Very High-Resolution Satellite Images. Remote Sensing, 12, 458.
https://doi.org/10.3390/rs12030458
[39] Yi, Z., Tang, Q., Azizi, S., et al. (2020) Contextual Residual Aggregation for Ultra High-Resolution Image Inpainting. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, 14-19 June 2020, 7508-7517.
https://doi.org/10.1109/CVPR42600.2020.00753
[40] Cao, Y. and Huang, X. (2021) A Deep Learning Method for Building Height Estimation Using High-Resolution Multi-View Imagery over Urban Areas: A Case Study of 42 Chinese Cities. Remote Sensing of Environment, 264, Article ID: 112590.
https://doi.org/10.1016/j.rse.2021.112590
[41] Giles, A.B., Davies, J.E., Ren, K., et al. (2021) A Deep Learning Algorithm to Detect and Classify Sun Glint from High-Resolution Aerial Imagery over Shallow Marine Environments. Journal of Photogrammetry and Remote Sensing, 181, 20-26.
https://doi.org/10.1016/j.isprsjprs.2021.09.004
[42] Horwath, J.P., Zakharov, D.N., Mégret, R., et al. (2021) Understanding Important Features of Deep Learning Models for Segmentation of High-Resolution Transmission Electron Microscopy Images. Computational Materials, 6, Article No. 108.
https://doi.org/10.1038/s41524-020-00363-x
[43] Wen, Q., Luo, Z., Chen, R., et al. (2021) Deep Learning Approaches on Defect Detection in High Resolution Aerial Images of Insulators. Sensors, 21, 1033.
https://doi.org/10.3390/s21041033
[44] Zamboni, P., Marcato Junior, J., de Andrade Silva, J., et al. (2021) Benchmarking Anchor-Based and Anchor-Free State-of-the-Art Deep Learning Methods for Individual Tree Detection in RGB High-Resolution Images. Remote Sensing, 13, 2482.
https://doi.org/10.3390/rs13132482
[45] Karsa, A., Punwani, S. and Shmueli, K. (2018) The Effect of Low Resolution and Coverage on the Accuracy of Susceptibility Mapping. Magnetic Resonance in Medicine, 81, 1833-1848.
https://doi.org/10.1002/mrm.27542
[46] Yu, X., Fernando, B., Hartley, R. and Porikli, F. (2018) Super-Resolving Very Low-Resolution Face Images with Supplementary Attributes. 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, 18-23 June 2018, 908-917.
https://doi.org/10.1109/CVPR.2018.00101
[47] Kuhlbrodt, T., Jones, C.G., Sellar, A., et al. (2018) The Low-Resolution Version of HadGEM3 GC3.1: Development and Evaluation for Global Climate. Journal of Advances in Modeling Earth Systems, 10, 2865-2888.
https://doi.org/10.1029/2018MS001370
[48] Wang, S., Zhang, K., et al. (2020) Physically-Based Landslide Prediction over a Large Region: Scaling Low-Resolution Hydrological Model Results for High-Resolution Slope, Stability Assessment. Environmental Modelling and Software, 124, Article ID: 104607.
https://doi.org/10.1016/j.envsoft.2019.104607
[49] Shao, Z., Zhou, Z., Huang, X., et al. (2021) MRENet: Simultaneous Extraction of Road Surface and Road Centerline in Complex Urban Scenes from Very High-Resolution Images. Remote Sensing, 13, 239.
https://doi.org/10.3390/rs13020239
[50] Xu, H., Li, X., Zhang, K., et al. (2021) SR-Inpaint: A General Deep Learning Framework for High Resolution Image Inpainting. Algorithms, 14, 236.
https://doi.org/10.3390/a14080236
[51] Weilharter, R. and Fraundorfer, F. (2021) HighRes-MVSNet: A Fast Multi-View Stereo Network for Dense 3D Reconstruction from High-Resolution Images. IEEE Access, 9, 11306-11315.
https://doi.org/10.1109/ACCESS.2021.3050556
[52] Saito, S., Simon, T., Saragih, J., et al. (2020) PIFuHD: Multi-Level Pixel-Aligned Implicit Function for High-Resolution 3D Human Digitization. 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, 13-19 June 2020, 84-93.
https://doi.org/10.1109/CVPR42600.2020.00016
