Looking at the Statistical Texture Approach Applied to Weather Radar Rainfall Fields

Evaldo Araújo de Oliveira^{1}, Augusto José Pereira Filho^{2}

^{1}Department of Physics, Federal University of Sao Paulo, Diadema, Brazil.

^{2}Department of Atmospheric Sciences, University of Sao Paulo, Sao Paulo, Brazil.

**DOI:** 10.4236/jgis.2022.141002

Texture analysis methods have been used in a variety of applications, for instance in remote sensing. Though widely used in electrical engineering, their application in the atmospheric sciences is still limited. This paper reviews some concepts of digital texture and of the statistical texture approach, applying them to a set of specific maps to analyze the correlation between the texture measurements used in most papers. An improvement of the method is also proposed, by treating a distance parameter as a free variable and by introducing a new texture measurement based on the Kullback-Leibler divergence. Eight statistical measurements were used: mean, contrast, standard deviation, cluster shade, cluster prominence, angular second moment, local homogeneity and Shannon entropy. These measurements were applied to simple maps and to a set of rainfall fields measured with weather radar. The results indicate some high correlations, *e.g.* between the mean and the contrast, or between the angular second moment, the local homogeneity and the Shannon entropy, besides the potential of the method to discriminate maps.


Oliveira, E. and Filho, A. (2022) Looking at the Statistical Texture Approach Applied to Weather Radar Rainfall Fields. *Journal of Geographic Information System*, **14**, 29-39. doi: 10.4236/jgis.2022.141002.

1. Introduction

Image texture analysis has received considerable attention in recent years, as it forms an important basis of object recognition methods [1] [2] [3]. It uses computational techniques to study and classify objects that are normally not well defined, *i.e.* for which there is no precise mathematical rule that can be followed to get a unique and correct answer or classification (for example, a circle can be defined as the set of points $P\left(x,y\right)$ that satisfy the equation ${x}^{2}+{y}^{2}={r}^{2}$, but how can a human face be defined mathematically, by precise equations?).

Several texture analysis methods have been proposed over the years [1] [4] and it is well recognized that they capture different texture properties of images. This variability is also present in the definition of texture [5] and depends on the objective of the application.

Generally, texture approaches are divided into four groups [6]: statistical, structural, model-based and transform. In the statistical approach, the texture is indirectly represented by the non-deterministic properties that govern the distributions of, and relationships between, the gray levels of an image. In contrast, the structural approach represents texture by well-defined primitives (micro-texture) and a hierarchy of spatial arrangements (macro-texture) of those primitives, usually defined as the average or maximum tone of the cells that compose the image; the texture is described by the number and types of its primitives and by their spatial organization or layout [7]. The model-based analysis consists in interpreting an image texture by generative and stochastic image models: the parameters of the model are estimated and then used for image analysis. The transform methods, sometimes called the spectral approach [8], represent the image in another space by means of some transformation over the original pixels, where the coordinate system is interpreted as, or closely related to, the characteristics of the texture (such as frequency or size).

The application of texture analysis in the atmospheric sciences stems from the need for automatic radar or satellite image interpretation [9]. In recent years, many scientists have worked hard to improve calibration techniques of radar platforms [10] [11], not only by physically modeling the data, but also by developing validation programs to correlate ground measurements with remote sensing data [12] and automated algorithms for the identification of pertinent features observed in radar imagery. Unfortunately, creating these algorithms remains a challenging task [13], mainly because of the variability and complexity of natural systems. Therefore, any improvement can be considered very important and should be investigated.

The automation process can be structured into three parts [14]: data collection or preprocessing; feature extraction; and the classification algorithm. In this paper, we focus on the feature extraction step, considering the statistical texture approach applied to remote sensing through density maps. The main purpose is to get a better comprehension of the relationships within the most common feature set used in statistical texture papers, as well as to present a new variant of the statistical texture approach. To that end, a distance parameter usually set *ad hoc* is treated as a variable, and a new statistical texture measurement based on the Kullback-Leibler divergence is defined.

More information about texture methods can be found in [1] - [7] [15]. The paper is structured in three sections besides this introduction. Section 2 presents the statistical texture approach and its parameters. To build intuition about these texture measurements, in Section 3 they are calculated for simple artificial patterns and for some density maps obtained from weather radar, and the results are discussed. The conclusions are presented in Section 4.

2. Feature Extraction

Texture can be defined as the visual effect produced by the spatial distribution of the cells (the smallest unambiguous regions of the image) over relatively small areas. If the cells show little variation over a small area of an image, the dominant property of that area is a tone; if they change widely, the dominant property is a texture. Hence, one can characterize texture by statistical functions of the cells, working with the cells' properties and the spatial interrelationships between them.

Texture measurements based only on histograms of the possible cell values do not carry information about their spatial interrelationships. To overcome this limitation, Haralick [7] suggested encoding the two-dimensional spatial dependence of the gray tones in a co-occurrence matrix for each fixed distance and angular spatial relationship, known as the Grey Level Co-occurrence Matrix (GLCM). For a gray-scale image, the GLCM is a 256 × 256 matrix whose elements are the frequencies with which one gray level occurs relative to another for a fixed distance and angle.
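As a minimal sketch of this construction (NumPy-based; the function name and the restriction to non-negative offsets are assumptions of this illustration, not part of the original formulation), the co-occurrence counts for one fixed offset can be accumulated as:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=256):
    """Count co-occurrences of gray-level pairs (i, j) at offset (dy, dx); dx, dy >= 0."""
    h, w = img.shape
    ref = img[:h - dy, :w - dx]   # reference cells
    nbr = img[dy:, dx:]           # neighbour cells at the chosen distance/angle
    M = np.zeros((levels, levels), dtype=np.int64)
    np.add.at(M, (ref.ravel(), nbr.ravel()), 1)  # M[i, j] += 1 for each pair
    return M
```

Normalizing `M` by `M.sum()` turns the counts into the joint frequencies used by Haralick's statistics.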

Although results obtained with the GLCM are satisfactory, it requires a great deal of memory and computational time. A cheaper alternative with similar performance is the Gray Level Difference Vector (GLDV) [16]. The GLDV is based on the absolute difference of the gray level between pairs of cells a distance *d* apart at an angle *θ* with a fixed direction—see Figure 1. Instead of a matrix, we now have a vector of 256 coordinates representing a probability distribution function ${P}_{d,\theta}\left(m\right)$ of the relative frequencies of $m=\left|m\left(i\right)-m\left(j\right)\right|$, where $m\left(k\right)$ is the gray level of cell *k*.
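For *θ* = 0 the GLDV reduces to differencing horizontally shifted copies of the image; a hedged sketch under that assumption (the function name is ours, and other angles would be handled with different shifts):

```python
import numpy as np

def gldv(img, d=1, levels=256):
    """P_{d,0}(m): relative frequencies of m = |m(i) - m(j)| at distance d, theta = 0."""
    diff = np.abs(img[:, d:].astype(int) - img[:, :-d].astype(int))  # |m(i) - m(j)|
    counts = np.bincount(diff.ravel(), minlength=levels)
    return counts / counts.sum()   # normalize to a probability distribution
```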

Once ${P}_{d,\theta}\left(m\right)$ is built, several statistical functions can be used to discriminate texture, *e.g.* the moments of ${P}_{d,\theta}\left(m\right)$. In this paper, a set of functions commonly used in papers from atmospheric science [17] [18] is considered:

Figure 1. Image representation for the calculation of the gray level difference $m=\left|m\left(i\right)-m\left(j\right)\right|$ between cells separated by a distance *d* at direction *θ*, with *m*(*i*) representing the gray level of cell “*i*” and *m*(*j*) the gray level of cell “*j*”.

mean ${\mu}_{d,\theta}$, contrast $C{o}_{d,\theta}$, standard deviation ${\sigma}_{d,\theta}$, cluster shade $C{s}_{d,\theta}$, cluster prominence $C{p}_{d,\theta}$, angular second moment $As{m}_{d,\theta}$, local homogeneity $L{h}_{d,\theta}$ and Shannon entropy $S{e}_{d,\theta}$ —see Table 1 for their mathematical expressions. These functions are based on a specific distance vector *d* and keep their usual statistical meaning, which has to be translated into terms of tone variation. The mean gives an idea of the predominant difference of tone in the image: if it is low, the tone of the cells does not vary much at *d* and *θ*, or has an almost constant gradient. Since *m* cannot be negative, the contrast will have a high correlation with the mean. The standard deviation shows how much the cells differ from the mean value, *i.e.* how narrow ${P}_{d,\theta}\left(m\right)$ is. The cluster shade measures the skewness of ${P}_{d,\theta}\left(m\right)$ and the cluster prominence its relative flatness. The last three, $As{m}_{d,\theta}$, $L{h}_{d,\theta}$ and $S{e}_{d,\theta}$, are known as uniformity, homogeneity and disorder measurements, respectively. But, as one can ask, what are the differences between them?

General knowledge about these statistical measurements alone does not provide good intuition when they are applied to atmospheric data. Thus, to get more information about their use and interpretation, they are applied to a set of selected density maps, whose results are discussed in the next section.

3. Getting Information

The first step is to obtain the texture measures from a periodic map sample, Figure 2(a). Empty regions are set to white, so the gray scale is inverted and scaled such that 0 corresponds to white and 100 to black. Since the interest is in density maps, only the cells with $m\left(l\right)\ne 0$ are used in the calculation; otherwise, the distributions would be shifted toward $m\left(l\right)=0$ for systems whose area is small in relation to the total image area. All pictures used in this paper are images of 240 × 240 cells.

The density map in Figure 2(a) does not have tone variation along the vertical axis, so each measurement returns a zero value at $\theta =\pi /2$. As expected, the period of

Table 1. The common statistical measurements—see [8] [17] [18] for more details.

Figure 2. The texture curves in function of *d* and with
$\theta =0$ : (d) for the density map (a), (e) for the density map (b) and (f) for the density map (c). Each map has 240 × 240 pixels of area.

the curves in Figure 2(d) is the same as that of the density map (Figure 2(a)), and $C{s}_{d,\theta}$ is maximum when ${\mu}_{d,\theta}$ is maximum, while $C{p}_{d,\theta}$ is maximum when it is minimum. But the interesting observation is the correlation between some measurements. Figure 2(d) clearly shows that the information in $L{h}_{d,\theta}$ is different from the information in $As{m}_{d,\theta}$ and $S{e}_{d,\theta}$, although they are similar in the kind of information provided (homogeneity, uniformity and disorder, respectively). One can also observe the positive correlation between $S{e}_{d,\theta}$ and ${\sigma}_{d,\theta}$, the negative correlation between $As{m}_{d,\theta}$ and ${\sigma}_{d,\theta}$, and the negative correlation between $L{h}_{d,\theta}$ and $C{o}_{d,\theta}$. It can be questioned whether these are general features of this set of measurements; as it turns out, they are not.

The curves shown in Figure 2(e) and Figure 2(f) were calculated from the maps in Figure 2(b) and Figure 2(c). Now, $L{h}_{d,\theta}$ behaves similarly to $As{m}_{d,\theta}$ and inversely to $S{e}_{d,\theta}$, *i.e.* it has a positive correlation with $As{m}_{d,\theta}$ and a negative one with $S{e}_{d,\theta}$. A decrease in the correlation between $S{e}_{d,\theta}$ and ${\sigma}_{d,\theta}$ is also noted in Figure 2(f), and an even greater loss of correlation between them in Figure 2(e).

As periodicity is a sharp property of Figure 2(a) and is also present in Figure 2(c) (despite the differences between them), but not in Figure 2(b), these observations suggest the correlation between the pairs { $S{e}_{d,\theta}$, ${\sigma}_{d,\theta}$ } or { $As{m}_{d,\theta}$, ${\sigma}_{d,\theta}$ } as a possible property of periodic density maps, while the correlation between { $As{m}_{d,\theta}$, $S{e}_{d,\theta}$ } seems to be more general.

Figure 2(c) offers another interesting observation. If the texture curves were calculated for that map at *θ* = π/2, they would be similar to the curves obtained from Figure 2(b). Therefore, the adjustment of *θ* can be very important to distinguish or to group density maps. But how can it be adjusted? That depends on the objective. For example, if the objective is to separate periodic maps from non-periodic ones, a Fourier transform can be applied to the ${\mu}_{d,\theta}$ curve in order to select a *θ* whose spectrum presents a narrow frequency spike. In fact, this context seems to be a good application for the Kullback-Leibler divergence (also known as relative entropy) [19].
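One way to implement that selection (a heuristic of ours, not a procedure specified here) is to score each candidate *θ* by how concentrated the FFT power of its ${\mu}_{d,\theta}$ curve is, and keep the *θ* with the sharpest spike:

```python
import numpy as np

def spectrum_sharpness(mu_curve):
    """Fraction of FFT power in the strongest frequency bin (near 1 for a periodic curve)."""
    c = np.asarray(mu_curve, dtype=float)
    power = np.abs(np.fft.rfft(c - c.mean())) ** 2   # drop the DC component first
    total = power.sum()
    return 0.0 if total == 0 else power.max() / total
```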

The Kullback-Leibler Divergence (KLD) is defined as

$\text{KLD}\left[P,Q\right]={\displaystyle {\sum}_{m}P\left(m\right)\mathrm{ln}\left[P\left(m\right)/Q\left(m\right)\right]}$, (1)

and gives a measure of how much the probability distribution $P\left(m\right)$ differs from the probability distribution $Q\left(m\right)$.

The idea is to define the new measure

$2{\lambda}_{d,\theta}=\text{KLD}\left[{P}_{d,\theta},{P}_{d,\theta +\pi /2}\right]+\text{KLD}\left[{P}_{d,\theta +\pi /2},{P}_{d,\theta}\right]$ (2)

that will provide a measure of the symmetry between the directions *θ* and *θ* + π/2.
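A sketch of this measure (the function name is ours; skipping bins where either distribution vanishes, so that the logarithm stays defined, is an assumption of the illustration):

```python
import numpy as np

def lam(P, Q):
    """lambda_{d,theta} of Equation (2): half the symmetrized Kullback-Leibler divergence."""
    P, Q = np.asarray(P, dtype=float), np.asarray(Q, dtype=float)
    mask = (P > 0) & (Q > 0)     # keep only bins where both distributions are positive
    p, q = P[mask], Q[mask]
    return 0.5 * (np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))
```

Here `P` and `Q` would be ${P}_{d,\theta}$ and ${P}_{d,\theta +\pi /2}$; the result is zero when the two directions have identical difference statistics.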

Figure 3 shows ${\lambda}_{d,0}$ ( $\theta =0$ ) calculated from density maps like Figure 2(b) (full line) and Figure 2(c) (dashed line). For the map of Figure 2(c), at $\theta =\pi /2$ the distribution ${P}_{d,\pi /2}\left(m\right)$ goes to zero for distances greater than 45 cells, so ${\lambda}_{d,0}$ is not defined and is set to zero. In the same Figure 3, the full-line curve was calculated from a Gaussian density map and, in contrast to the previous one, it has a greater width and a continuous (not abrupt) decay of the KLD over *d*. These observations suggest at least two new candidate parameters to discriminate density maps: the angle of maximum divergence and the width of the KLD over *d*, besides the value of the maximum divergence or the *d* at which it occurs.

Figure 3. The measure ${\lambda}_{d,\theta}$ as a function of *d*, with $\theta =0$. The full line was calculated from a Gaussian density map like the one in Figure 2(b), but with a variance of 40 pixels along the horizontal and 20 along the vertical. The dashed line was calculated from the islands density map of Figure 2(c).

In spite of the suitability of these properties to discriminate systems, they were observed for well-defined density maps, *i.e.* periodic, symmetric, and so on. Density maps obtained from weather radar measurements, however, are usually not so regular or well defined. Nevertheless, this information is highly important because it gives some bounds on what can be expected and on how to use these measurements.

To gain a little more insight, three density maps of rainfall systems over eastern São Paulo state, measured by the São Paulo Weather Radar (SPWR), were selected. The maps are radar-derived rainfall rates at 3 km altitude within a 240 km radius and represent a sea breeze, a cold front and disperse bands—Figures 4(a)-(c).

The texture curves plotted in Figure 4(d) show some interesting features of their respective rainfall systems. The similarity between the cold front map (CF) and the disperse bands map (DB) of Figure 4(b) and Figure 4(c) is reproduced in the texture curves—a similarity aggravated by some radar noise, which caused a circular band of precipitation affecting the precipitation at the bottom of Figure 4(b). This similarity is due to the existence of a large nucleus in both maps, in opposition to the sea breeze map (SB), which presents many clusters. The presence of these clusters in SB produces oscillations in its texture curves (*e.g.* the ${\mu}_{d,\theta}$ in Figure 4(d)) that increase with *d*, while the texture curves of CF and DB are smoother. If the nuclei of CF and DB were more homogeneous or symmetric, their texture curves would be closer to the curves obtained from the Gaussian density map (Figure 2(e)); and if *θ* were adjusted to the formation line of SB, and SB were more regular, its texture curves would be similar to the curves obtained from the islands density map, Figure 2(f).

Figure 4. Texture curves at *θ* = 0 (d) for some rainfall density maps and ${\lambda}_{d,0}$ (e). The full line was calculated from the sea breeze (a), the dashed line from the cold front (b) and the dotted line from the disperse bands (c). The rainfall density maps were derived from the São Paulo Weather Radar in mm/hour and each map covers an area of 240 × 240 km^{2}.

In spite of these natural irregularities in the weather density maps, these statistical measures were enough to discriminate or group SB, CF and DB. For example, $As{m}_{d,\theta}$ can be used to group CF with DB and to separate them from SB, while $C{s}_{d,\theta}$ can be used to discriminate CF from DB.

These new possibilities come from the use of *d* as a variable, which not only brings new information to the old statistical measures but can also be used to define new ones—for example, the measure ${\lambda}_{d,\theta}$ defined in Equation (2).

Figure 4(e) shows ${\lambda}_{d,0}$ for the density maps of Figures 4(a)-(c). Although the SB is well localized, there are little islands of precipitation that produce a residual non-null probability distribution for large *d*; at the same time, almost all islands have the same or a similar structure, making ${\lambda}_{d,0}$ small and almost constant when compared to CF and DB. Besides, the asymmetries of CF and DB are clearly distinguished, not only in magnitude but also in relation to the scale (distance *d*).

As mentioned in the introduction, the interest is in getting more information about the statistical texture measurements (Table 1). Therefore, the correlation coefficients (*ρ*) of the texture curves from a set of 5535 rainfall maps were calculated. The correlations were computed between texture curves from the same map and, as expected, the highest averaged coefficients (all the others are around 0.5 or less) were $\rho \left(\mu ,Co\right)=0.953\pm 0.036$, $\rho \left(\mu ,\sigma \right)=0.964\pm 0.037$, $\rho \left(Co,\sigma \right)=0.943\pm 0.048$, $\rho \left(\sigma ,Se\right)=0.929\pm 0.074$ and $\rho \left(Asm,Lh\right)=0.884\pm 0.143$. This shows a kind of redundancy between these statistics, which can be used to reduce the set of texture measurements. Of course, its importance is more significant for methods that use texture curves instead of points (with *d* fixed). Then, *d* was also fixed and *ρ* calculated for the same set of rainfall maps. The correlations are now between statistics calculated from different maps, with $d=1$, as in most papers. In opposition to the previous calculation, more coefficients have high magnitudes: $\rho \left(\mu ,Co\right)=0.981$, $\rho \left(\mu ,\sigma \right)=0.945$, $\rho \left(\mu ,Cs\right)=0.937$, $\rho \left(\mu ,Lh\right)=-0.801$, $\rho \left(\mu ,Se\right)=0.873$, $\rho \left(Co,\sigma \right)=0.922$, $\rho \left(Co,Cs\right)=0.926$, $\rho \left(\sigma ,Cs\right)=0.794$, $\rho \left(\sigma ,Se\right)=0.895$, $\rho \left(Asm,Lh\right)=0.966$, $\rho \left(Asm,Se\right)=-0.951$ and $\rho \left(Lh,Se\right)=-0.939$. A large $\left|\rho \left(x,y\right)\right|$ means that *x* and *y* are close to being linearly related and hence closely related. Although one cannot say that *x* and *y* are unrelated if $\left|\rho \left(x,y\right)\right|$ is small, the dependence between them for large $\left|\rho \left(x,y\right)\right|$ is clear. Therefore, one can expect a higher capacity to discriminate patterns by using texture curves rather than texture points (*d* fixed). The computational complexity, apparently increased by the use of texture curves, can be compensated by the reduction of the statistical measurements used and by a possible simplification of the discriminant algorithms, *i.e.* the possibility that the nonlinearity is better represented in the texture-curve set than in the traditional one, due to the smaller linear dependence between the measurements.
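A hedged sketch of the texture-curve correlation just described, with a synthetic map standing in for the 5535 radar maps (the GLDV helper, the random map and the range of *d* are assumptions of this illustration):

```python
import numpy as np

def gldv(img, d, levels=256):
    """P_{d,0}(m) for theta = 0."""
    diff = np.abs(img[:, d:].astype(int) - img[:, :-d].astype(int))
    counts = np.bincount(diff.ravel(), minlength=levels)
    return counts / counts.sum()

rng = np.random.default_rng(1)
img = rng.integers(0, 100, size=(240, 240))   # stand-in for one rainfall density map

mu_curve, co_curve = [], []
for d in range(1, 60):                        # texture curves as functions of d
    P = gldv(img, d)
    m = np.arange(len(P))
    mu_curve.append(np.sum(m * P))            # mean curve
    co_curve.append(np.sum(m ** 2 * P))       # contrast curve
rho = np.corrcoef(mu_curve, co_curve)[0, 1]   # one entry of the correlation table
```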

Some authors have used these statistical textures as input to discriminant algorithms (*e.g.* artificial neural networks) without a deeper understanding of their meaning. As a consequence, *d* is usually set to 1 and *θ* to 0 [17] [18], or to a few more values [20], transferring the high nonlinearity of the maps to the discriminant algorithm. Although this can be enough for some systems, atmospheric systems present complex dynamics, which can considerably reduce the performance of the classification algorithm. By setting *d* as an independent variable of the texture measurements and considering the measurement ${\lambda}_{d,\theta}$, the precipitation maps can be discriminated more easily, as shown in Figure 3. Besides, it is possible to get information about the width of the islands inside the density map as well as their spatial distribution—as in the maps of Figure 2(c) and Figure 4(a). If *θ* is also defined as another independent variable, it is possible to get information on the orientation of the islands.

The necessity of geometrical information comes from the tendency of precipitation systems to have specific spatial distributions and orientations [21]. In [22], besides the texture features, the authors used the size, length of the major axis, eccentricity and compactness of the density maps as inputs to artificial neural networks to distinguish frontal from convective systems; they used a segmentation algorithm to calculate such geometrical inputs. Therefore, by setting *d* and/or *θ* as variables and with the introduction of the measurement ${\lambda}_{d,\theta}$, the use of shape descriptors in addition to the texture measurements is expected to be no longer necessary.

4. Conclusions

This paper has shown the redundancy of information in the most common set of statistical texture features and its potential suitability to discriminate patterns, even complex radar-derived ones. In the GLDV method, $C{o}_{d,\theta}$ does not give significant information not already present in ${\mu}_{d,\theta}$ or ${\sigma}_{d,\theta}$. In the same way, the subset of measurements $\left\{As{m}_{d,\theta},L{h}_{d,\theta},S{e}_{d,\theta}\right\}$ can be reduced to subsets of two or one elements, according to the main objective—*e.g.* $As{m}_{d,\theta}$ can be preferred to $S{e}_{d,\theta}$ if normalized variables are wanted. Besides, the use of the new measurement proposed, ${\lambda}_{d,\theta}$, indicated potential improvements in the computational cost and performance of classification algorithms.

Since precipitating systems evolve rather quickly in a spatio-temporal frame, and given the need for better rainfall analysis and forecasting based on weather radar measurements in any real-time warning system, the method presented herein can be applied as a supporting tool in rainfall analysis and forecasting methods.

Acknowledgements

The authors were supported by the Fundação de Amparo à Pesquisa do Estado de São Paulo under the Grants 05/60141-0 and 01/13952-2. A.J.P.F is also sponsored by Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) under grant 302349/2017-6.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

[1] |
Humeau-Heurtier, A. (2019) Texture Feature Extraction Methods: A Survey. IEEE Access, 7, 8975-9000. https://doi.org/10.1109/ACCESS.2018.2890743 |

[2] |
Varghese, B.A., Cen, S.Y., Hwang, D.H. and Duddalwar, V.A. (2019) Texture Analysis of Imaging: What Radiologists Need to Know. American Journal of Roentgenology, 212, 520-528. https://doi.org/10.2214/AJR.18.20624 |

[3] |
Gandhimathi, A.U. and Vasuki, S. (2022) Significance of Texture Features in the Segmentation of Remotely Sensed Images. Optik, 249, Article ID: 168241.
https://doi.org/10.1016/j.ijleo.2021.168241 |

[4] |
Bianconi, F., Fernández, A., Smeraldi, F. and Pascoletti, G. (2021) Colour and Texture Descriptors for Visual Recognition: A Historical Overview. Journal of Imaging, 7, 245. https://doi.org/10.3390/jimaging7110245 |

[5] | Coggins, J.M. (1982) A Framework for Texture Analysis Based on Spatial Filtering. Ph.D. Thesis, Michigan State University, East Lansing. |

[6] | Materka, A. and Strzelecki, M. (1998) Texture Analysis Methods: A Review. COST B11 Report, Technical University of Lodz, Brussels. |

[7] |
Haralick, R.M. (1979) Statistical and Structural Approaches to Texture. Proceedings of the IEEE, 67, 786-804. https://doi.org/10.1109/PROC.1979.11328 |

[8] | Gonzalez, R.C. and Woods, R.E. (2018) Digital Image Processing. 4th Edition, Pearson Education Limited, Essex. |

[9] |
Shanmugan, K.S., Narayanan, V., Frost, V.S., Stiles, J.A. and Holtzman, J.C. (1981) Textural Features for Radar Image Analysis. IEEE Transactions on Geoscience and Remote Sensing, 19, 153-156. https://doi.org/10.1109/TGRS.1981.350344 |

[10] |
Sharif, R.B., Emad, H.H. and ElSaadani, M. (2020) Evaluation of Radar-Rainfall Products over Coastal Louisiana. Remote Sensing, 12, 1477.
https://doi.org/10.3390/rs12091477 |

[11] |
Hrachowitz, M., et al. (2013) A Decade of Predictions in Ungauged Basins (PUB)—A Review. Hydrological Sciences Journal, 58, 1198-1255.
https://doi.org/10.1080/02626667.2013.803183 |

[12] | Pereira Filho, A.J. and Nakayama, P.T. (2001) Intercomparison of Radar Rainfall Estimates and Rain Gage Measurements in Sao Paulo, Brazil. 5th International Symposium on Hydrological Applications of Weather Radar, Kyoto, Japan. |

[13] |
Raval, M., et al. (2021) Automated Predictive Analytics Tool for Rainfall Forecasting. Scientific Reports, 11, Article No. 17704.
https://doi.org/10.1038/s41598-021-95735-8 |

[14] | Duda, R.O., Hart, P.E. and Stork, D.G. (2000) Pattern Classification. 2nd Edition, Wiley-Interscience, New York. |

[15] |
Haralick, R.M., Shanmugan, K. and Dinstein, I. (1973) Textural Features for Image Classification. IEEE Transactions on Systems, Man, and Cybernetics, SMC-3, 610-621. https://doi.org/10.1109/TSMC.1973.4309314 |

[16] |
Weszka, J.S., Dyer, C.R. and Rosenfeld, A. (1976) A Comparative Study of Texture Measures for Terrain Classification. IEEE Transactions on Systems, Man, and Cybernetics, SMC-6, 269-285. https://doi.org/10.1109/TSMC.1976.5408777 |

[17] |
Chen, Q.W., Sengupta, S.K. and Welch, R.M. (1989) Cloud Field Classification Based upon High Spatial Resolution Textural Features II—Simplified Vector Approaches. Journal of Geophysical Research, 94, 14749-14765.
https://doi.org/10.1029/JD094iD12p14749 |

[18] |
Ameur, Z., Ameur, S., Adane, A., Sauvageot, H. and Bara, K. (2004) Cloud Classification Using the Textural Features of Meteosat Images. International Journal of Remote Sensing, 25, 4491-4503. https://doi.org/10.1080/01431160410001735120 |

[19] | Cover, T.M. and Thomas, J.A. (2006) Elements of Information Theory. 2nd Edition, John Wiley & Sons Inc., Hoboken. |

[20] |
Nurtanio, I., Zainuddin, Z. and Setiadi, B.H. (2019) Cloud Classification Based on Images Texture Features. IOP Conference Series: Materials Science and Engineering, 676, Article ID: 012015. https://doi.org/10.1088/1757-899X/676/1/012015 |

[21] | Pereira Filho, A.J. and Silva, F.D.S. (2005) The Morphology of Tropical Rainfall Systems and Their Hydrological Significance. 32nd Conference on Radar Meteorology, Boston. |

[22] |
Walter, A. and Bennartz, R. (2006) Radar-Based Precipitation Type Analysis in the Baltic Area. Tellus A, 58, 331-343. https://doi.org/10.1111/j.1600-0870.2006.00183.x |


Copyright © 2024 by authors and Scientific Research Publishing Inc.

This work and the related PDF file are licensed under a Creative Commons Attribution 4.0 International License.