Brain Tumor Segmentation of HGG and LGG MRI Images Using WFL-Based 3D U-Net

Abstract

The semantic segmentation of brain tumors is an essential stage in medical treatment planning. Because tumors vary widely in their characteristics, one of the main difficulties in image segmentation is the severe imbalance between classes, and imbalanced classes are a common problem in multimodal 3D brain MRI datasets. Despite these problems, most studies in brain tumor segmentation are biased toward the overrepresented tumor class (majority class) and ignore the small tumor class (minority class). In this paper, we propose an improved loss function, Weighted Focal Loss (WFL), applied to a 3D U-Net to enhance the prediction of brain tumor segmentation. Our proposed loss function (WFL) resolves the imbalance between classes and the imbalance between weights by assigning higher weights to the minority classes and lower weights to the majority class. After assigning these weights to different pixel values, our work is able to resolve pixel degradation, one of the limitations of the loss function during model training. In our experiments, the proposed function (WFL) on the 3D U-Net model for high-grade glioma (HGG) and low-grade glioma (LGG) in the Brain Tumor Segmentation Challenge (BraTS) 2019 dataset showed promising results for tumor core (TC), whole tumor (WT) and enhanced tumor (ET), with average Dice scores for HGG of 0.830, 0.913 and 0.815, and Dice scores for LGG of TC: 0.731, WT: 0.775 and ET: 0.685. Moreover, we trained on BraTS 2020, where we obtained mean Dice scores for HGG of TC: 0.843, WT: 0.892, ET: 0.871, and Dice scores for LGG of 0.7501, 0.7985 and 0.6103 for TC, WT and ET, respectively.

Share and Cite:

Shomirov, A., Zhang, J. and Billah, M. (2022) Brain Tumor Segmentation of HGG and LGG MRI Images Using WFL-Based 3D U-Net. Journal of Biomedical Science and Engineering, 15, 241-260. doi: 10.4236/jbise.2022.1510022.

1. INTRODUCTION

A brain tumor is a pathological transformation of brain cells or of the surrounding meninges, glandular and bone tissues. As a result of the transformation, abnormal cells begin to divide uncontrollably, forming a tumor that compresses healthy tissue and increases intracranial pressure. The main cause of brain tumors has not been identified to date, and medical scientists all over the world continue to search for a solution to this problem. More than 700,000 people in the United States live with a brain tumor [1]. Statistics from the Central Brain Tumor Registry of the United States annual report estimated that in 2021 more than 84,000 people would be diagnosed with a primary brain tumor; there are more than 120 different types of primary brain and CNS tumors, of which 29.7 percent are brain and central nervous system (CNS) malignancies; childhood brain tumors exceed 28,000 cases; and an estimated 18,000 deaths from primary malignant brain tumors occurred in 2021 [2].

There are two main types of brain tumors: benign and malignant. Benign brain tumors are characterized by relatively slow growth. Although not cancerous, these tumors can still cause symptoms and may require treatment. They include chordomas, most often in people aged 50 to 60 [3], craniopharyngiomas, gangliocytomas and gangliogliomas (mainly in young people), meningiomas, pineocytomas, and pituitary adenomas, which usually affect people aged 30 to 40. Malignant brain tumors can develop directly in the brain, or they can spread from initial tumors elsewhere in the body to form secondary tumors, increasing the chance of recurrence. These tumors tend to grow faster and are more aggressive than benign tumors. The most common types of malignant brain tumors in adults are gliomas: astrocytomas (more common in middle-aged men), ependymomas, and glioblastoma multiforme in people between 50 and 70 years of age [3, 4], which is more common in men than women, as well as medulloblastomas and oligodendrogliomas. Gliomas are further divided into low-grade gliomas (LGG) and high-grade gliomas (HGG). Grade I LGG is usually benign and often remains undetected and untreated. Low-grade gliomas are a diverse group of primary brain tumors; most often, this type of brain tumor carries a relatively good prognosis and a good chance of survival for the patient [5-10]. However, grade II LGG carries the risk of progression to HGG, a much more severe and advanced cancer stage. Grade II tumors of intermediate, indeterminate and low grade are slow-growing but prone to recurrence after treatment because of their infiltrative growth into normal tissue. HGG usually develops very quickly [7] and affects the healthy tissue around the tumor; even after surgery, the overall survival time for this type of disease cannot be guaranteed. There are three primary forms of glioma spatial structure, as described by growth pattern: a solid tumor with no peripheral isolated tumor cells (ITCs), tumor tissue with peripheral ITCs, and ITCs inside intact brain parenchyma [8]. Nowadays, different methods are used to diagnose brain tumors; one of the most popular in clinical practice is MRI, which is used to evaluate gliomas and to acquire sequences of data. Magnetic resonance imaging (MRI) is one of the most powerful technologies for soft tissue analysis.
However, the low sensitivity of MRI limits its ability to distinguish pathological areas from normal tissue [9, 10], because such images contain complex tissues, structures and edges [11], making it difficult to detect brain diseases. Because extracting medically relevant features from MRI images is one of the most critical and challenging tasks in medical image analysis, detecting these features is an important step in segmenting an image for diagnostic purposes. The lack of automation in these tasks means the data must be processed carefully by an expert, which introduces the possibility of human error. Some methods can be semi-automated, but they still rely on human skill. There are only a few absolute contraindications for magnetic resonance imaging. MRI uses magnetic fields and radio waves [12] to produce tomographic images of thin tissue sections. The brightness of the resulting images varies with the magnetic field applied by the MRI scanner [13, 14]. The MRI method is based on measuring the electromagnetic response of the nuclei of hydrogen atoms to excitation by a specific combination of electromagnetic waves in a constant, high-intensity magnetic field. Most studies on the segmentation of various diseases focus on MRI images; the four standard MRI sequences commonly used are T2 Fluid-Attenuated Inversion Recovery (FLAIR), T1-weighted (T1), contrast-enhanced T1-weighted (T1c) and T2-weighted (T2). Despite much research on the segmentation of medical images, many problems remain in this field. One of the main problems is that segmentation is performed manually by radiologists, which is laborious and time-consuming.

Researchers’ primary goal in medical image segmentation is to obtain better results and increase segmentation quality using different techniques. Nevertheless, accurate tumor segmentation remains challenging due to the heterogeneity in shape, size, boundary and appearance of gliomas and the confusion between cancerous and healthy brain tissue [15, 16]. Also, for the prediction of brain tumor segmentation on the Brain Tumor Segmentation Challenge (BraTS) MRI image dataset, there is an imbalance between classes, which makes accurate segmentation and patient survival prediction difficult. Nowadays, most studies use deep learning based on the U-Net model. The U-Net network was first proposed by Ronneberger et al. [17] for the segmentation of biomedical images and showed more promising results than other networks for image segmentation. U-Net is an encoder-decoder network that captures context information and combines deep features with coarse features through cascaded encoder-decoder operations to improve the segmentation result [5]. Çiçek et al. [18] then presented a network for volumetric segmentation that learns from sparsely annotated volumetric images. The proposed network extended the earlier U-Net architecture of Ronneberger et al. [17] by replacing all standard 2D operations with their 3D counterparts and was demonstrated on a complex, highly variable 3D structure, the Xenopus kidney. Table 1 summarizes several recent studies that improve segmentation based on the U-Net network architecture, proposed for segmenting different datasets with different methods.

Ma, C. and Li, X. [19] proposed an improved U-Net model that uses dilated convolution to expand the receptive field, obtain more feature information, and compensate for the shortcomings caused by limited computing resources. AboElenein, N. et al. [20] proposed a new encoder-decoder architecture called Multi Inception Residual Attention U-Net (MIRAU-Net) to further improve segmentation performance. The architecture integrates residual inception modules with attention gates into U-Net, the encoder and decoder are connected by residual inception paths to reduce the distance between their feature maps, and the class imbalance problem is addressed with weighted entropy, generalized Dice loss (GDL) and focal Tversky loss functions. Wang et al. [21] proposed a deep learning model based on 3D U-Net using intelligent brain normalization and patching strategies for the segmentation task; each modality is normalized using the mean μ and standard deviation σ computed from voxel values within the brain across all training images. Wang, Wenxuan, et al. [22] suggested a TransBTS architecture that effectively incorporates a transformer into a 3D deep convolutional neural network (CNN) with an encoder-decoder structure. A 3D CNN is first used to extract local spatial feature maps, which are then passed through a transformer to capture global features; the decoder then integrates the local and global features during upsampling to produce the segmentation results.

Table 1. Summary list of different methods and datasets used for brain tumor segmentation.

Experiments were conducted using the BraTS 2019 and 2020 datasets, and the suggested model achieved comparable results. Sheng, Ning, et al. [23] proposed a new brain tumor segmentation network, SoResU-Net, which uses a series of second-order modules to replace the original skip connection operations, increasing the number of transformation operations and the nonlinearity of the segmentation network. Raza, Rehan, et al. [24] proposed a hybrid of a deep residual network and the U-Net model (dResU-Net) for automatic 3D brain tumor segmentation; the residual connections mitigate the vanishing-gradient problem, and the design takes advantage of low-level and high-level features simultaneously for prediction. Their work used an integrated loss function based on Dice loss and focal loss. Parmar, B. and Parikh [25] proposed a patch selection methodology based on a modified U-Net deep learning architecture with appropriate normalization and patch selection techniques for brain tumor segmentation. Chato, L., Kachroo, P. and Latifi, S. [26] predicted the survival time of glioma brain tumor patients based on volume, location and shape characteristics, developed an accurate 3D U-Net segmentation model to identify and localize glioma brain tumor subregions, and proposed a modified 3D U-Net segmentation model based on an aggregated encoder (U-Net AE) with a generalized Dice loss (GDL), trained with the Adam optimization algorithm for better performance. Many studies [19, 21, 24-26] have enhanced segmentation results by modifying the architecture of the U-Net model rather than using the original model. This paper proposes a method to enhance the prediction of brain tumor segmentation, as segmentation is the best basis for survival prediction in brain tumor patients. However, imbalanced data create an obstacle for brain tumor segmentation. Unlike other works, our solution is to select the most appropriate loss function and improve it to handle imbalanced data and to resolve pixel degradation, one of the limitations of the loss function during model training.

The main contributions of this paper are summarized as follows:

● Eliminating class imbalance in the data by improving the focal loss function to enhance brain tumor prediction.

● Addressing pixel degradation, one of the limitations of the loss function during training.

● For the experiments, the standard 2D U-Net is extended to 3D processing, and the U-Net structure is improved for better performance in specific situations.

We apply our method to the BraTS 2019-2020 datasets to address the imbalance between classes, identifying LGG and HGG tumors, weighting them equally, and paying close attention to pixels that lead to better segmentation predictions. We evaluate our enhanced 3D U-Net model with the improved loss function (WFL). A loss function computes the pixel-to-pixel loss between the prediction and the target image. Common loss functions, such as cross-entropy loss, focal loss and Dice loss, are mostly applied between each pair of pixels in the predicted and target variables. Since these loss functions evaluate the class prediction for each pixel vector separately and then average over all pixels, they assign the same importance to every pixel in the image. Class imbalance is common in pixel-level classification tasks when the classes in the image data are unbalanced, and since semantic segmentation is pixel classification, it suffers from class imbalance. To solve this, we assign an appropriate weight to each class in the given set, concentrating small weights on the small number of samples of each class, so that degraded pixels are classified well. Our weighted focal loss uses the parameter values γ = 2.5 and α = 0.25: a higher value of γ reduces the relative loss of well-classified examples and places more emphasis on samples that are difficult to classify, so γ and α control the class weights and the degree of degradation of the pixels to be classified, respectively.

2. MATERIALS AND METHODS

In our experiment, we used Python, Anaconda and TensorFlow 2.8 on Windows 10 Pro with an Intel i7 8th-generation CPU and 8 GB of RAM; experiments were conducted to predict the segmentation of brain tumors on the BraTS 2019-2020 datasets. As mentioned above, our primary goal in this work was to choose the most appropriate loss function to improve the 3D U-Net model with respect to the imbalance between classes and the degree of pixel degradation, and to improve the quality of the predicted brain tumor segmentation. In our work, the batch size is 2, the number of epochs is 100, and we used the Adam optimizer. We then evaluated our model based on accuracy, Dice score and IOU. This section presents our proposed method for tumor segmentation prediction and the proper choice of loss function to solve the class imbalance. As shown in Figure 1, our work includes the following steps.

2.1. Data Preprocessing

The BraTS dataset is a collection of preoperative multimodal MRI scans of high-grade glioma (HGG) and low-grade glioma (LGG) patients with a confirmed diagnosis. We test and evaluate our proposed method on the BraTS 2019 and 2020 datasets. BraTS 2019-2020 uses multi-institutional preoperative MRI scans and focuses on the segmentation of brain tumors, i.e., gliomas, that are heterogeneous in appearance, shape and histology [27]. Furthermore, to determine the clinical relevance of this segmentation task, BraTS 2019-2020 also targets the prediction of overall patient survival through the integrative analysis of radiomic features and machine learning algorithms [27, 28]. Each case consists of 3D brain MRIs in 4 modalities, each with a volume of 240 × 240 × 155 voxels: native (T1), post-contrast T1-weighted (T1ce), T2-weighted (T2) and T2 Fluid Attenuated Inversion Recovery (T2-FLAIR) (see Figure 2), together with a ground-truth segmentation map that classifies each voxel into one of four categories. All imaging datasets were segmented manually and approved by experienced neuro-radiologists. The semantic labels are: label 0, unlabeled volume; label 1, necrotic and non-enhancing tumor core (NCR/NET); label 2, peritumoral edema (ED); label 3, unused (no pixels in any volume carry this label); and label 4, GD-enhancing tumor (ET).
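Because label 3 is unused, a small preprocessing step that is common practice for BraTS data (an assumption of this sketch, not a step the text spells out) is to remap label 4 to 3 so that class indices are contiguous for one-hot encoding:

```python
import numpy as np

def remap_labels(seg):
    """Map BraTS labels {0, 1, 2, 4} to contiguous {0, 1, 2, 3}.
    `seg` is an integer label volume, e.g. shape (240, 240, 155)."""
    seg = seg.copy()
    seg[seg == 4] = 3  # label 3 is unused in BraTS, so 4 -> 3 is safe
    return seg
```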

The BraTS multimodal scans are available as NIfTI files (.nii.gz). All dataset MRI images (T1, T1ce, T2 and T2-FLAIR) have a volume of 240 × 240 × 155 voxels. Before segmenting the MRI images, one crucial preprocessing step is needed, because MRI images are altered by bias field distortion, which makes the intensity of the same tissue vary across the image. We use the N4ITK method proposed by Tustison et al. [29] to correct it. Moreover, to remove useless empty areas around the actual volume of interest, images are cropped to 128 × 128 × 128 voxels for use as input to the ResNet encoder. The MRI images are fed to the networks individually as ET, TC and WT classes. Since MRI intensity varies with the device manufacturer, the acquisition parameters and the sequence, the input images must be standardized.
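As a rough sketch of the bias-correction step, the N4 algorithm is available in SimpleITK; the snippet below assumes a hypothetical input file name and derives a foreground mask by Otsu thresholding, a common convention rather than a detail stated in the text:

```python
# Sketch of N4 bias-field correction with SimpleITK; the file names are
# hypothetical and the Otsu foreground mask is an assumed convention.
import SimpleITK as sitk

def n4_bias_correct(nii_path):
    image = sitk.ReadImage(nii_path, sitk.sitkFloat32)
    mask = sitk.OtsuThreshold(image, 0, 1, 200)  # rough brain/background mask
    corrector = sitk.N4BiasFieldCorrectionImageFilter()
    return corrector.Execute(image, mask)

corrected = n4_bias_correct("BraTS19_sample_t1ce.nii.gz")  # hypothetical path
sitk.WriteImage(corrected, "BraTS19_sample_t1ce_n4.nii.gz")
```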

Figure 1. General architectural diagram of the proposed work.

Figure 2. An example of a brain tumor from the BraTS19_TMC_27374_1 and BraTS20_Training_114 HGG cases. Native (T1), post-contrast T1-weighted (T1ce), T2-weighted (T2), and T2 Fluid Attenuated Inversion Recovery (T2-FLAIR).

Preprocessing the MRI images is the initial step in every dataset investigation. The standardization of MRI images was accomplished by subtracting the mean pixel value and dividing the result by the pixel standard deviation. We focus most of our attention on processing the T1ce and T2-FLAIR weighted images, because this helps the model extract more features during training. Having accurate information from the T1ce and T2-FLAIR images also lets us assess the T1-weighted and T2-weighted (T2) images. Pathologies with hypervascularity appear bright on post-contrast T1ce images, while T1-weighted images, obtained with a basic spin-echo pulse sequence, show the difference in T1 relaxation time between tissues. When diagnosing inflammatory diseases of the brain and spinal cord, brain structures are assessed for foci of inflammation, which are characterized by a hyperintense signal on T2-weighted and FLAIR images and, as a rule, an isointense signal on T1-weighted images. On such images, many pathological foci are seen better than on T2-weighted images, especially if they are adjacent to spaces containing cerebrospinal fluid.
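A minimal sketch of this standardization is shown below; restricting the statistics to nonzero (brain) voxels is our assumption, since the text only states that the mean is subtracted and the result divided by the standard deviation:

```python
import numpy as np

def zscore_normalize(volume):
    """Z-score standardization: subtract the mean and divide by the standard
    deviation. Statistics over nonzero voxels only (assumed convention)."""
    out = volume.astype(np.float32).copy()
    brain = out[out > 0]
    out[out > 0] = (brain - brain.mean()) / (brain.std() + 1e-8)
    return out
```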

2.2. U-Net Architecture

The U-Net architecture consists of two parts: a downsampling path and an upsampling path. The first is a typical convolutional neural network architecture: it repeatedly applies two convolutions with a 3 × 3 kernel, each followed by a rectified linear unit (ReLU), and a pooling operation with a 2 × 2 kernel and a stride of 2 for downsampling. The number of feature channels doubles after each stage of the first part. Each step of the second part consists of a 2 × 2 up-convolution, a concatenation with the corresponding feature map from the first part, and two 3 × 3 convolutions, each followed by a ReLU. U-Net is known for achieving strong results in a variety of real tasks even with small amounts of data, especially on biomedical images. U-Net is a good base architecture for problems with a consistent viewpoint and object scale, which matches our problem setting. U-Net also makes significant use of skip connections, which give better results than conventional autoencoders. We therefore use U-Net because of its good performance in brain tumor segmentation. The proposed 3D U-Net is based on the standard U-Net architecture, with encoder and decoder paths, as illustrated in Figure 3.

Figure 3. Our developed 3D U-Net architecture.

The architecture takes an input image with a resolution of 128 × 128 × 128 voxels and outputs an image of the same size (128 × 128 × 128 voxels). For better performance in specific cases, we used a ResNet as our encoder; this allows the encoder to recognize standard details such as lines, edges and image textures even before training starts, which speeds up the training process and increases the overall performance of the 3D U-Net.
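The sketch below illustrates such an encoder-decoder 3D U-Net in tf.keras with 4-channel 128 × 128 × 128 inputs. The filter counts, the plain convolutional encoder (a simplification of the ResNet encoder described above), and the three sigmoid output channels for the ET/TC/WT formulation are our assumptions, not the authors' exact configuration:

```python
# Minimal 3D U-Net sketch (tf.keras). Filter counts and depth are assumed;
# the paper's ResNet encoder is simplified to plain convolutional blocks.
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, filters):
    x = layers.Conv3D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv3D(filters, 3, padding="same", activation="relu")(x)

def build_unet3d(input_shape=(128, 128, 128, 4), n_out=3):
    inputs = layers.Input(input_shape)
    skips, x = [], inputs
    for f in (16, 32, 64):                 # encoder: channels double each stage
        x = conv_block(x, f)
        skips.append(x)
        x = layers.MaxPooling3D(2)(x)      # 2x2x2 pooling, stride 2
    x = conv_block(x, 128)                 # bottleneck
    for f, skip in zip((64, 32, 16), reversed(skips)):
        x = layers.Conv3DTranspose(f, 2, strides=2, padding="same")(x)
        x = layers.concatenate([x, skip])  # skip connection to encoder stage
        x = conv_block(x, f)
    # Sigmoid output, one channel per region (assumed ET/TC/WT formulation).
    outputs = layers.Conv3D(n_out, 1, activation="sigmoid")(x)
    return tf.keras.Model(inputs, outputs)
```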

In general, we trained the model for 100 epochs with 4-channel inputs of 128 × 128 × 128 voxels, a batch size of 2, and the modified focal loss function, which combines the weighting terms into a single weighted focal loss. Table 2 details the parameters used during training.

After cropping the images to 128 × 128 × 128 voxels, with a batch size of 2 and 100 epochs, the next step was to choose the training parameters for predicting brain tumor segmentation. The Adam optimizer generally performs at least as well as other optimization algorithms in such networks and has a faster computation time; our work uses Adam, an adaptive first-order gradient-based optimization algorithm, with a learning rate of 0.0001, which gives our model a good chance of reaching the best results. Moreover, sigmoid is one of the most popular activation functions in binary image segmentation models; after the last linear block in the upsampling path, a final convolutional layer with sigmoid activation is used to obtain the predicted segmentation of the target image.
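Under those reported settings (Adam, learning rate 0.0001, batch size 2, 100 epochs), the training configuration could be sketched as follows; binary cross-entropy is a stand-in for the WFL loss defined in Section 2.4.2, and train_ds/val_ds are assumed tf.data pipelines yielding (image, mask) batches of size 2:

```python
# Hedged sketch of the reported training configuration; the loss shown is a
# stand-in for the WFL of Section 2.4.2, and the datasets are assumed.
model = build_unet3d()  # from the sketch above
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss=tf.keras.losses.BinaryCrossentropy(),
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=100)
```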

2.3. Class Imbalance

Class-imbalanced datasets are a common problem in image classification and image segmentation, and various training methods have been proposed to combat class imbalance. One way to address the problem is to choose a suitable loss function for the network. The general focal loss introduced in [30] addresses class imbalance in one-stage object detection; in our work, we improve the focal loss to solve the class imbalance problem in brain tumor segmentation. The proposed loss function, WFL, reduces the multi-class imbalance problem by training two separate classifiers, one for the majority class and the other for the minority classes; by weighting the hard samples, the network is pushed to pay more attention to the minority classes, since neural networks otherwise tend to ignore the patterns of small structures that are underrepresented in the training set. Another difficulty with focal loss functions is that the enhancing or suppressing effect of the focal parameter is applied to all classes, which can affect convergence at the end of training. To mitigate this, we used a ResNet architecture as our encoder. The experimental results show that the ResNet model has an accuracy and classification effect that can overcome the convergence problem to some extent and improve the performance of the U-Net semantic model.

Table 2. Parameters used for our training model.

2.4. Loss Function

A common problem in segmentation and image analysis is detecting or separating a very small anomalous area in a large image. Such data are called imbalanced: easily classified samples make up the majority of the training set and dominate the loss calculation [31]. Hard examples, on which the network errs, are practically ignored during training because they are relatively few. The problem of data imbalance can be addressed, for example, by changing the training scheme or by selecting the loss function. Akil, M. et al. [32] proposed a new deep convolutional neural network (CNN) dedicated to the fully automatic segmentation of glioblastoma, a high-grade glioma. Their CNN model is inspired by the occipito-temporal pathway, using different receptive field sizes in successive layers to pick out the crucial objects in a scene. They addressed the class imbalance issue by selecting equal numbers of patches per image, and they used the weighted cross-entropy loss function in their experiments. Zhou, Xinyu, et al. [33] proposed an efficient 3D residual neural network (ERV-Net) for brain tumor segmentation with low GPU memory consumption and computational complexity. In ERV-Net, a computationally efficient network, 3D ShuffleNetV2, is first used as an encoder to reduce GPU memory use and improve efficiency; then a decoder with residual blocks (Res-decoder) is introduced to avoid degradation. They address network convergence and data imbalance with a practical fusion loss function combining Dice loss and cross-entropy loss; the Dice loss reduces the impact of unbalanced data in the 3D segmentation task, and a softmax function normalizes the output of ERV-Net before the Dice loss. Latif, Urva, et al. [34] proposed a multi-level inception U-Net (MI-UNet) architecture with different primitive modules at each level of the U-Net to capture multi-scale features for the segmentation stage; for class imbalance, they compared smooth, cross-entropy and weighted-entropy losses in their experiments. In our work, we investigate three loss functions, comparing the proposed WFL with cross-entropy and Dice loss.

2.4.1. Cross Entropy Function

The most common loss function in semantic segmentation is cross-entropy, which has achieved good results in most studies on regular (balanced) data samples. Cross-entropy is a loss function that measures the difference between two probability distributions.

$$\mathrm{CE}(p, y) = \begin{cases} -\log(p) & \text{if } y = 1 \\ -\log(1 - p) & \text{otherwise} \end{cases} \tag{1}$$

In many segmentation settings the cross-entropy function has shortcomings, for example on data with class imbalance. In such cases, when cross-entropy is used as the loss function, especially for pixel classification, every pixel contributes equally to the loss during training, which makes it very difficult to obtain the best results. To solve this problem, we propose the weighted focal loss function.
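As a concrete reference point, Equation (1) averaged over all pixels can be written as the following sketch:

```python
import tensorflow as tf

def binary_cross_entropy(y_true, y_pred, eps=1e-7):
    """Equation (1) averaged over all voxels:
    -log(p) where y = 1, -log(1 - p) otherwise."""
    p = tf.clip_by_value(y_pred, eps, 1.0 - eps)  # avoid log(0)
    return -tf.reduce_mean(y_true * tf.math.log(p)
                           + (1.0 - y_true) * tf.math.log(1.0 - p))
```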

2.4.2. Our Training Weight Focal Loss

Focal loss is a variant of binary cross-entropy loss designed for single-stage object detection scenarios, where there is a severe imbalance between foreground and background classes during training [30]; it is shown in Equation (2).

$$\mathrm{FL}(p_t) = -(1 - p_t)^{\gamma} \log(p_t) \tag{2}$$

When an example is misclassified, $p_t$ approaches 0 and the modulating factor approaches 1, leaving the loss virtually unchanged. On the other hand, if the example is correctly classified, $p_t$ tends to 1 and the modulating factor tends to 0, producing a loss very close to 0 and down-weighting that example. The focusing parameter $\gamma$ smoothly controls the rate at which easily classified examples are down-weighted.

$$\mathrm{FL}(p_t) = -\alpha_t (1 - p_t)^{\gamma} \log(p_t) \tag{3}$$

Here $(1 - p_t)^{\gamma}$ is the modulating factor that scales down the original cross-entropy loss, with hyperparameters $\alpha_t$ and $\gamma$. When $p_t$ is large, the weight is small; when an example is misclassified and $p_t$ is small, the modulating factor is near 1 and the loss is unaffected. The authors of [30] propose adding the modulating factor $(1 - p_t)^{\gamma}$ to the cross-entropy loss with an adjustable focusing parameter $\gamma \geq 0$. A disadvantage of the focal loss function is that it may underestimate the importance of samples in the classes of concern, and the enhancing effect of the focal parameter is applied to all classes, which may affect convergence at the end of training.
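A direct sketch of Equation (3) follows; the default γ = 2 and α = 0.25 match the focal loss paper [30] rather than our final WFL settings:

```python
import tensorflow as tf

def focal_loss(gamma=2.0, alpha=0.25, eps=1e-7):
    """Equation (3): FL(p_t) = -alpha_t (1 - p_t)^gamma log(p_t),
    with p_t = p for positive voxels and 1 - p for negative ones."""
    def loss(y_true, y_pred):
        p = tf.clip_by_value(y_pred, eps, 1.0 - eps)
        p_t = tf.where(tf.equal(y_true, 1.0), p, 1.0 - p)
        alpha_t = tf.where(tf.equal(y_true, 1.0), alpha, 1.0 - alpha)
        return -tf.reduce_mean(alpha_t * tf.pow(1.0 - p_t, gamma)
                               * tf.math.log(p_t))
    return loss
```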

To obtain the weighted focal loss (WFL), we modify the focal loss as follows.

$$\mathrm{WFL} = -\sum_{i=0}^{n} \alpha_t \, \beta \, (1 - p_t)^{\gamma} \log(p_t \, p_n) \tag{4}$$

The weighting factors in our work are $\alpha = 0.25$ and $\gamma = 2.5$; thus $\alpha_t = 0.25$ for positive samples and $\alpha_t = 0.5$ for negative samples. In Equation (1), $y \in \{0, 1\}$ defines the ground-truth class, and the focal loss is typically considered for values $\gamma \in [0, 5]$. We set $\gamma = 2.5$ in our experiment to address the class imbalance, and $\alpha = 0.25$ is a balancing factor that controls the overall loss. In the modified focal loss function, $p_n$ captures the degree of degradation of the pixels, the parameters $\beta$ and $\gamma$ address the class imbalance between the majority and minority classes, and $\alpha$ addresses the weighting problem. Our proposed weighted focal loss can also be expressed mathematically as Equation (5):

$$\mathrm{WFL}_{n,c} = w \left[ p_n^{\beta}(x_{n,c}) + (1 - p_{n,c})^{\gamma} \log\bigl(p_t \, p_d(x_{n,c})\bigr) \right] \tag{5}$$

where $c$ is the class index, $n$ is the sample index, $p_n^{\beta}$ resolves the imbalance between classes and between weights by giving higher weights to the minority classes and lower weights to the majority class, and $p_d$ denotes pixel degradation. Together with the weight-balance adjustment of the weighted focal loss, we focus on pixels that have not yet been learned and are challenging to train. WFL uses proportional and class weights to control the degree of pixel degradation, and it can recognize the importance of hard samples, regardless of class, by giving them more weight than easy samples.
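Because the notation in Equations (4)-(5) leaves some details open, the sketch below is only one plausible reading of WFL, not the authors' exact implementation: it keeps the focal term with γ = 2.5, uses α_t = 0.25 / 0.5 for positive / negative voxels as reported, and realizes the majority/minority weighting as an explicit per-class weight vector (the example weight values are assumptions):

```python
import tensorflow as tf

def weighted_focal_loss(class_weights, gamma=2.5, alpha_pos=0.25,
                        alpha_neg=0.5, eps=1e-7):
    """Hedged sketch of WFL (one reading of Equations (4)-(5)): focal loss
    with per-class weights that up-weight minority classes.
    Expects y_true/y_pred of shape (batch, D, H, W, C)."""
    w = tf.constant(class_weights, dtype=tf.float32)  # shape (C,)
    def loss(y_true, y_pred):
        p = tf.clip_by_value(y_pred, eps, 1.0 - eps)
        p_t = tf.where(tf.equal(y_true, 1.0), p, 1.0 - p)
        alpha_t = tf.where(tf.equal(y_true, 1.0), alpha_pos, alpha_neg)
        fl = -alpha_t * tf.pow(1.0 - p_t, gamma) * tf.math.log(p_t)
        return tf.reduce_mean(fl * w)  # class weights broadcast on last axis
    return loss

# Example with assumed weights: background down-weighted, tumor classes kept.
wfl = weighted_focal_loss(class_weights=[0.25, 1.0, 1.0, 1.0])
```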

2.4.3. Dice Loss Function

Like the two functions discussed above, the Dice loss function is widely used in image segmentation to deal with imbalanced data. The Dice coefficient is calculated by the following formula:

$$D = \frac{2\sum_{i}^{N} p_i g_i}{\sum_{i}^{N} p_i^2 + \sum_{i}^{N} g_i^2} \tag{6}$$

Here $p_i$ and $g_i$ represent pairs of corresponding pixel values of the prediction and the ground truth. The main problem with this function is that it ignores disparities between examples, which affects the model during learning. Moreover, the Dice loss gradient is unstable, which is more pronounced for highly imbalanced data [35], where the gradient calculation involves small denominators. Compared with hard examples, a medical image yields many easy examples that dominate the training, resulting in suboptimal learning [4]. Thus, the choice of the loss function is one of the determining factors in how well a deep network performs, because with the proper loss function the learning process can converge faster and deliver better results.
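For completeness, a soft (differentiable) form of Equation (6) used as a loss can be sketched as:

```python
import tensorflow as tf

def dice_loss(y_true, y_pred, eps=1e-7):
    """Soft Dice loss from Equation (6):
    1 - 2*sum(p*g) / (sum(p^2) + sum(g^2))."""
    num = 2.0 * tf.reduce_sum(y_true * y_pred)
    den = (tf.reduce_sum(tf.square(y_pred))
           + tf.reduce_sum(tf.square(y_true)) + eps)
    return 1.0 - num / den
```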

3. RESULTS AND DISCUSSION

To apply semantic segmentation of brain tumors with the proposed model, we used two datasets, BraTS 2019 and BraTS 2020, for training and validation, predicting segmentations and comparing them with the ground-truth labels.

3.1. BraTS 2019

First, the proposed model was evaluated on the BraTS 2019 HGG and LGG dataset. The BraTS 2019 dataset includes 335 patient cases: 76 low-grade glioma (LGG) patients and 259 high-grade glioma (HGG) patients. T1-weighted (T1), T1 with gadolinium-enhanced contrast (T1c), T2-weighted (T2) and FLAIR are the four MRI sequences available for every patient in BraTS, for a total of 1675 MRI images, of which 1173 are training images, 167 validation images and 335 testing images. We used the MICCAI_BraTS_2019_Data_Training HGG and LGG data in our segmentation prediction. We reduced the original 240 × 240 × 155 images to 128 × 128 × 128 voxels by removing the zero background, and the results are illustrated below. In Table 3, we present summary statistics of the results on the final test set: the average Dice scores of LGG and HGG patients compared with other proposed predictors for brain tumor segmentation. Table 4 and Table 5 show the overall results of the proposed function (WFL), accuracy and IOU score during training and validation on the BraTS 2019 HGG and LGG data.

To show the efficacy of our methodology, we compared the baseline U-Net and the 3D U-Net with the improved loss function across a range of loss-function training setups.

Table 3. Comparison of our dice score results in the BraTS 2019 HGG and LGG data.

Table 4. Weighted focal loss, accuracy and IOU score in the training processes BraTS 2019 data HGG and LGG.

Table 5. Validation of the weighted focal loss, accuracy, and IOU score in BraTS 2019 data HGG and LGG.

When the segmentation output areas are too small, the prediction error strongly influences the loss, which can destabilize training and compromise the performance of the loss function to some degree. In our work, the prediction error was calculated as the root mean squared error (RMSE) of the brain tumor segmentation. RMSE measures the difference between the source image and the segmented image; the smaller the RMSE value, the better the segmentation performance. The formula for calculating RMSE is provided in Equation (7):

$$\mathrm{RMSE} = \sqrt{\frac{\sum_{i=1}^{N}\left(\mathrm{Predicted}_i - \mathrm{Actual}_i\right)^2}{N}} \tag{7}$$

For images, Equation (8) provides an equivalent per-pixel representation of RMSE.

$$\mathrm{RMSE} = \sqrt{\frac{\sum_{i=1}^{M}\sum_{j=1}^{N}\left[M(i,j) - F(i,j)\right]^2}{M \times N}} \tag{8}$$

Thus, as seen in Equation (8), the root mean squared error (RMSE) is the square root of the mean of the squared differences between the estimated and actual values. In Equation (8), M and N are the dimensions of the image, i and j are the pixel positions, M(i, j) is the segmented image, and F(i, j) is the original image. Moreover, under class imbalance our WFL loss function predicts the majority classes while also handling pixels whose degradation makes them difficult to segment; when pixel degradation makes segmentation challenging, our loss function (WFL) resolves the pixel deterioration with good results. In Table 3, the proposed model is evaluated on the BraTS 2019 dataset in terms of Dice score against other methods, showing the best prediction performance in segmentation for the HGG data. In addition, Figure 4 and Figure 5 show that the WFL-based 3D U-Net produces satisfactory brain tumor segmentation predictions on the HGG and LGG data, demonstrating the excellent performance of the improved loss function. To determine the percentage agreement between the target mask and our prediction, the IOU is used for evaluation. Our validation results and IOU scores on the BraTS 2019 dataset, tested using the proposed loss function with the 3D U-Net model for brain tumor segmentation prediction, are shown in Figures 6(a)-(c) and Table 4 and Table 5.
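The two evaluation quantities used above, the per-image RMSE of Equation (8) and the IOU between predicted and target masks, can be sketched as:

```python
import numpy as np

def rmse(segmented, original):
    """Equation (8): root mean squared error between the segmented image
    M(i, j) and the original image F(i, j)."""
    diff = segmented.astype(np.float64) - original.astype(np.float64)
    return np.sqrt(np.mean(diff ** 2))

def iou(pred_mask, gt_mask):
    """Intersection over Union between binary prediction and ground truth."""
    inter = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return inter / union if union else 1.0
```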

3.2. BraTS 2020

In a second experiment, the proposed model was evaluated on the BraTS 2020 dataset. The BraTS 2020 dataset includes 369 cases: 76 patients with low-grade glioma (LGG) and 293 patients with high-grade glioma (HGG). Four MRI sequences are available for every patient in BraTS: T1-weighted (T1), T1 with gadolinium-enhanced contrast (T1c), T2-weighted (T2) and FLAIR, for a total of 2470 MRIs, of which 1759 are training images, 247 validation images and 494 testing images.

We predicted segmentations for the LGG and HGG patients of BraTS 2020 using the proposed method, as shown in Figure 7 and Figure 8. Table 6 compares the average Dice scores with other proposed predictors for brain tumor segmentation. Table 7 and Table 8 show the overall results of the proposed function (WFL), accuracy and IOU score during training and validation on the BraTS 2020 HGG and LGG data.

Figure 4. BraTS 2019 Data_Training: Demonstration of the ground truth and our prediction in slice 64-num 211; HGG_BraTS19_TMC_27374_1.

Figure 5. BraTS 2019 Data_Training: Demonstration of the ground truth and our prediction in slice 64-num 66; LGG_BraTS19_TMC_09043_1. Prediction in the following colors: green, edema; yellow, necrotic and non-enhancing tumor; blue, enhancing tumor.

Table 6. Comparison of our dice score results in the BraTS 2020 HGG and LGG data.

Table 7. Weighted focal loss, accuracy and IOU score in the training processes BraTS 2020 data HGG and LGG.


Figure 6. Scores obtained on the BraTS 2019 dataset, testing the proposed loss function with the 3D U-Net model for brain tumor segmentation prediction. (a) Training and validation loss of the HGG and LGG data, (b) training and validation accuracy of the HGG and LGG data, and (c) training and validation IOU score of the HGG and LGG data.

Figure 7. BraTS 2020 Data_Training: Demonstration of the ground truth and our prediction in slice 78-num 211; HGG_BraTS20_Training_114.

Figure 8. BraTS 2020 Data_Training: Demonstration of the ground truth and our prediction in slice 67-num 10; LGG_BraTS20_Training_261. Prediction in the following colors: green, edema; yellow, necrotic and non-enhancing tumor; blue, enhancing tumor.

Table 8. Validation of the weighted focal loss, accuracy, and IOU score in BraTS 2020 data HGG and LGG.

The proposed WFL-based 3D U-Net model achieves better TC, WT and ET Dice scores and exhibits better performance than the modified U-Net architectures [24-26] that use other loss functions. Figure 7 and Figure 8 show the original MRI image, the ground truth and our predicted segmentation for the HGG and LGG cases, respectively. As seen in Figure 7, the red arrows in the HGG MRI image and in our prediction mark areas that are absent from the ground-truth image, suggesting that our predicted tumor segmentation can be more precise than the ground-truth labels manually segmented and annotated by physicians. However, validating and comparing segmentation methods on magnetic resonance images (MRI) that suggest pathology is a difficult task owing to the lack of reliable ground truth. Based on experiments on the BraTS 2019-2020 datasets, which include both HGG and LGG patients, we have shown that our method achieves promising results compared with the manual ground truth. Our proposed method predicts the majority and minority classes equally well, the latter being the classes that are difficult to predict in segmentation. Also, as mentioned above, we use the IOU to determine the percentage agreement between the target mask and our prediction on the BraTS dataset. Figures 9(a)-(c) and Table 7 and Table 8 show the proposed method's IOU score, validation weighted focal loss (WFL) and validation accuracy on the BraTS 2020 dataset.

This paper presents a method that solves the imbalance problem and predicts brain tumor segmentation using a weighted loss function with a 3D U-Net model. Our scheme applies semantic segmentation to MRI scans for tumor prediction and compares the ground truth with the predicted labels. Experiments were conducted on two MRI datasets, BraTS 2019 and 2020, to verify the feasibility of our approach for predicting brain tumor segmentation, and they show that the proposed method achieves better efficiency. For example, Table 3, Table 6, Figure 4 and Figure 7 show our brain tumor segmentation predictions and Dice scores for tumor core (TC), whole tumor (WT) and enhanced tumor (ET) on BraTS 2019-2020; the results are best for HGG-grade patients. In addition, Table 4, Table 5, Table 7 and Table 8 show that the weighted focal loss, best accuracy and IOU score during training performed well on the BraTS 2019-2020 HGG and LGG data. During the study, we observed that WFL with γ = 2.5 and α = 0.25, with α_t = 0.25 for positive samples and α_t = 0.5 for negative samples, works very well in the model. Lin, Tsung-Yi, et al. [30] noted that when γ = 0, FL is equivalent to cross-entropy, and that the focal loss is easily extended to the multi-class case, where it works well.


Figure 9. Scores obtained on the BraTS 2020 dataset, testing the proposed loss function with the 3D U-Net model for brain tumor segmentation prediction. (a) Training and validation loss of the HGG and LGG data, (b) training and validation accuracy of the HGG and LGG data, and (c) training and validation IOU score of the HGG and LGG data.

As γ increases, the effect of the modulating factor also increases; in their work, they found that γ = 2 worked best in their experiments.

However, a disadvantage of the focal loss is that it may underestimate the importance of samples in the classes of concern. In addition, it is sensitive to mislabeled samples in the training dataset, because mislabeled samples are treated as hard samples. Furthermore, weighting each pixel with the plain focal loss can suppress the target pixels. Considering these per-pixel limitations of the loss function, the improved WFL adjusts the balance of weights, concentrating small weights on the small number of samples in the dataset and controlling the degree of degradation of the classified pixels. The difference between the ground-truth labels and our predicted labels can be observed in Figure 4 and Figure 7 for the HGG images. Our model shows promising results for both majority- and minority-class classification. When we compare our findings with other related work, as in Table 3 and Table 6, segmentation measures such as IOU and Dice score are more accurate and reliable than the others. Our proposed method can increase the model's performance and produce results equivalent to the most accurate brain tumor segmentation models. In particular, our WFL can capture the model weakness caused by data imbalance, and it attains performance equivalent to high-performance models while preserving efficiency, so it could be applied in real-time diagnostics.

4. CONCLUSIONS

Choosing the most appropriate loss function is one of the main factors for exact prediction in brain tumor segmentation, as it helps the learning process converge faster and give better results. In our work, we have demonstrated our improved WFL function with a 3D U-Net model to handle the disparity between the given classes for the LGG and HGG patients of BraTS 2019 and BraTS 2020 and to preserve pixels during training. The WFL-based 3D U-Net model was evaluated on BraTS 2019 and BraTS 2020. Compared with other results, our proposal achieved Dice coefficient scores for TC, WT and ET on BraTS 2019 of 0.830, 0.913 and 0.815 for the HGG data and 0.731, 0.775 and 0.685 for the LGG data, and on BraTS 2020 of 0.843, 0.892 and 0.871 for the HGG data and 0.750, 0.798 and 0.610 for the LGG data, respectively. In addition, we evaluated our model based on the IOU score; our proposed work achieved training IOU scores of 0.807 for HGG and 0.644 for LGG on the BraTS 2019 data, and 0.769 for HGG and 0.682 for LGG on BraTS 2020. The experiments showed that the weighted focal loss function is more effective than cross-entropy at solving the class imbalance and classifying the target pixels of both the overrepresented tumor class (majority class) and the small tumor class (minority class). Despite various studies, segmentation in medical images of brain tumors remains a crucial problem, since the first treatment option for many brain tumors is surgery:

● To remove the entire tumor at once, the surgeon must excise it without damaging the surrounding healthy tissue, which is a highly demanding task.

● The surgeon usually removes the largest possible part of the tumor. However, tissue damaged during surgery can allow the tumor to regenerate, which poses a severe risk to the patient's life.

● The irregular and complex boundaries of the tumor area make it difficult to accurately separate the healthy tissue surrounding the tumor from tumor-damaged tissue.

In the future, we intend to identify the exact irregular and complex boundaries of tumor areas, so that the tumor can be removed without damaging the surrounding healthy tissue.

ACKNOWLEDGEMENTS

This research is supported by: the 2021-2023 National Natural Science Foundation of China under Grant (Youth) No. 52001039; the 2022-2025 National Natural Science Foundation of China under Grant No. 52171310; the 2020-2022 Shandong Natural Science Foundation of China under Grant No. ZR2019LZH005; and a 2022-2023 research fund from the Science and Technology on Underwater Vehicle Technology Laboratory under Grant 2021JCJQ-SYSJJ-LB06903.

ABBREVIATIONS

The following abbreviations are used in this manuscript:

WFL Weighted Focal Loss

BraTS Brain Tumor Segmentation Challenge

T1 longitudinal relaxation time

T1Gd T1 Gadolinium contrast media

T1c Longitudinal relaxation time with contrast

T2 Transverse relaxation time

T2-FLAIR T2 weighted-Fluid-Attenuated Inversion Recovery

HGG High-grade Glioma

LGG Low-grade Glioma

TC Tumor Core

WT Whole Tumor

ET Enhanced Tumor

IOU Intersection over Union

ED Peritumoral Edema

NCR Necrotic

NET Non-Enhancing Tumor

Conflicts of Interest

The authors declare no conflicts of interest.

References

[1] Goceri, E. (2020) CapsNet Topology to Classify Tumours from Brain Images and Comparative Evaluation. IET Image Processing, 14, 882-889. https://doi.org/10.1049/iet-ipr.2019.0312
[2] Waite, K.A., Cioffi, G., Kruchko, C., Patil, N., Brat, D.J., Bruner, J.M. and Barnholtz-Sloan, J.S. (2022) Aligning the Central Brain Tumor Registry of the United States (CBTRUS) Histology Groupings with Current Definitions. Neuro-Oncology Practice. https://doi.org/10.1093/nop/npac025
[3] Wang, J., Li, D., Yang, R., Tang, X., Yan, T. and Guo, W. (2020) Epidemiological Characteristics of 1385 Primary Sacral Tumors in One Institution in China. World Journal of Surgical Oncology, 18, 1-12.
https://doi.org/10.1186/s12957-020-02045-w
[4] Bacanin, N., Bezdan, T., Venkatachalam, K. and Al-Turjman, F. (2021) Optimized Convolutional Neural Network by Firefly Algorithm for Magnetic Resonance Image Classification of Glioma Brain Tumor Grade. Journal of Real-Time Image Processing, 18, 1085-1098. https://doi.org/10.1007/s11554-021-01106-x
[5] Tan, L., Ma, W., Xia, J. and Sarker, S. (2021) Multimodal Magnetic Resonance Image Brain Tumor Segmentation Based on ACU-Net Network. IEEE Access, 9, 14608-14618. https://doi.org/10.1109/ACCESS.2021.3052514
[6] Xiang, Z., Chen, X., Lv, Q. and Peng, X. (2021) A Novel Inflammatory lncRNAs Prognostic Signature for Predicting the Prognosis of Low-Grade Glioma Patients. Frontiers in Genetics, 12, Article ID: 697819.
https://doi.org/10.3389/fgene.2021.697819
[7] Jabbar, M., Hussain, F. and Dawood, S. (2022) Brain Tumor Augmentation Using the U-Net Architecture. EasyChair Preprint No. 7511.
[8] Silva, M., Vivancos, C. and Duffau, H. (2022) The Concept of Peritumoral Zone in Diffuse Low-Grade Gliomas: Oncological and Functional Implications for a Connectome-Guided Therapeutic Attitude. Brain Sciences, 12, Article No. 504. https://doi.org/10.3390/brainsci12040504
[9] Li, H., Hai, Z., Zou, L., Zhang, L., Wang, L., Wang, L. and Liang, G. (2022) Simultaneous Enhancement of T1 and T2 Magnetic Resonance Imaging of Liver Tumor at Respective Low and High Magnetic Fields. Theranostics, 12, 410-417. https://doi.org/10.7150/thno.67155
[10] Wang, P., Weng, L., Xie, S., He, J., Ma, X., Bo, L.I., Gao, Y., et al. (2021) Primary Application of Mean Apparent Propagator-MRI Diffusion Model in the Grading of Diffuse Glioma. European Journal of Radiology, 138, Article ID: 109622. https://doi.org/10.1016/j.ejrad.2021.109622
[11] Schad, L.R. (2022) Problems in Texture Analysis with Magnetic Resonance Imaging. Dialogues in Clinical Neuroscience, 6, 235-242.
[12] Battista, J.J. (2022) Introduction to 3D Medical Imaging: Of Mice and Men, Music and Mummies. In: Van Dyk, J., Ed., True Tales of Medical Physics, Springer, Cham, 359-384. https://doi.org/10.1007/978-3-030-91724-1_16
[13] Mamatha, S.K., Krishnappa, H.K. and Shalini, N. (2022) Graph Theory Based Segmentation of Magnetic Resonance Images for Brain Tumor Detection. Pattern Recognition and Image Analysis, 32, 153-161.
https://doi.org/10.1134/S1054661821040167
[14] Hankiewicz, J.H., Stoll, J.A., Stroud, J., Davidson, J., Livesey, K.L., Tvrdy, K., Celinski, Z.J., et al. (2019) Nano-Sized Ferrite Particles for Magnetic Resonance Imaging Thermometry. Journal of Magnetism and Magnetic Materials, 469, 550-557. https://doi.org/10.1016/j.jmmm.2018.09.037
[15] Ali, M., Gilani, S.O., Waris, A., Zafar, K. and Jamil, M. (2020) Brain Tumour Image Segmentation Using Deep Networks. IEEE Access, 8, 153589-153598. https://doi.org/10.1109/ACCESS.2020.3018160
[16] Wang, G., Li, W., Vercauteren, T. and Ourselin, S. (2019) Automatic Brain Tumour Segmentation Based on Cascaded Convolutional Neural Networks with Uncertainty Estimation. Frontiers in Computational Neuroscience, 13, Article No. 56. https://doi.org/10.3389/fncom.2019.00056
[17] Ronneberger, O., Fischer, P. and Brox, T. (2015) U-Net: Convolutional Networks for Biomedical Image Segmentation. In: Navab, N., Hornegger, J., Wells, W.M. and Frangi, A.F., Eds., International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, Cham, 234-241.
https://doi.org/10.1007/978-3-319-24574-4_28
[18] Çiçek, Ö., Abdulkadir, A., Lienkamp, S.S., Brox, T. and Ronneberger, O. (2016) 3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation. In: Ourselin, S., et al., Eds., International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, Cham, 424-432.
https://doi.org/10.1007/978-3-319-46723-8_49
[19] Ma, C. and Li, X. (2021) Multi-Modal Brain Tumor Image Segmentation Based on Improved U-Net Model. 2021 IEEE 5th Information Technology, Networking, Electronic and Automation Control Conference, Vol. 5, 706-710. https://doi.org/10.1109/ITNEC52019.2021.9587180
[20] AboElenein, N.M., Piao, S., Noor, A. and Ahmed, P.N. (2022) MIRAU-Net: An Improved Neural Network Based on U-Net for Gliomas Segmentation. Signal Processing: Image Communication, 101, Article ID: 116553.
https://doi.org/10.1016/j.image.2021.116553
[21] Wang, F., Jiang, R., Zheng, L., Meng, C. and Biswal, B. (2019) 3d u-Net Based Brain Tumor Segmentation and Survival Days Prediction. In: Crimi, A. and Bakas, S., Eds., International MICCAI Brainlesion Workshop, Springer, Cham, 131-141. https://doi.org/10.1007/978-3-030-46640-4_13
[22] Wang, W., Chen, C., Ding, M., Yu, H., Zha, S. and Li, J. (2021) Transbts: Multimodal Brain Tumor Segmentation Using Transformer. In: de Bruijne, M., et al., Eds., International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, Cham, 109-119. https://doi.org/10.1007/978-3-030-87193-2_11
[23] Sheng, N., Liu, D., Zhang, J., Che, C. and Zhang, J. (2021) Second-Order ResU-Net for Automatic MRI Brain Tumor Segmentation. Mathematical Biosciences and Engineering, 18, 4943-4960.
https://doi.org/10.3934/mbe.2021251
[24] Raza, R., Bajwa, U.I., Mehmood, Y., Anwar, M.W. and Jamal, M.H. (2022) dResU-Net: 3D Deep Residual U-Net Based Brain Tumor Segmentation from Multimodal MRI. Biomedical Signal Processing and Control, 79, Article ID: 103861. https://doi.org/10.2139/ssrn.4024177
[25] Parmar, B. and Parikh, M. (2020) Brain Tumor Segmentation and Survival Prediction Using Patch Based Modified 3D U-Net. In: Crimi, A. and Bakas, S., Eds., Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries, Springer, Cham, 398-409. https://doi.org/10.1007/978-3-030-72087-2_35
[26] Chato, L., Kachroo, P. and Latifi, S. (2020) An Automatic Overall Survival Time Prediction System for Glioma Brain Tumor Patients Based on Volumetric and Shape Features. In: Crimi, A. and Bakas, S., Eds., Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries, Springer, Cham, 352-365.
https://doi.org/10.1007/978-3-030-72087-2_31
[27] Multimodal Brain Tumor Segmentation Challenge 2019. https://www.med.upenn.edu/cbica/brats2019.html
[28] Multimodal Brain Tumor Segmentation Challenge 2020.
https://www.med.upenn.edu/cbica/brats2020/data.html
[29] Tustison, N.J., Avants, B.B., Cook, P.A., Zheng, Y., Egan, A., Yushkevich, P.A. and Gee, J.C. (2010) N4ITK: Improved N3 Bias Correction. IEEE Transactions on Medical Imaging, 29, 1310-1320.
https://doi.org/10.1109/TMI.2010.2046908
[30] Lin, T.Y., Goyal, P., Girshick, R., He, K. and Dollár, P. (2017) Focal Loss for Dense Object Detection. Proceedings of the IEEE International Conference on Computer Vision, Venice, 22-29 October 2017, 2980-2988.
https://doi.org/10.1109/ICCV.2017.324
[31] Zhao, R., Qian, B., Zhang, X., Li, Y., Wei, R., Liu, Y. and Pan, Y. (2020) Rethinking Dice Loss for Medical Image Segmentation. 2020 IEEE International Conference on Data Mining (ICDM), Sorrento, 17-20 November 2020, 851-860. https://doi.org/10.1109/ICDM50108.2020.00094
[32] Akil, M., Saouli, R. and Kachouri, R. (2020) Fully Automatic Brain Tumor Segmentation with Deep Learning-Based Selective Attention Using Overlapping Patches and Multi-Class Weighted Cross-Entropy. Medical Image Analysis, 63, Article ID: 101692. https://doi.org/10.1016/j.media.2020.101692
[33] Zhou, X., Li, X., Hu, K., Zhang, Y., Chen, Z. and Gao, X. (2021) ERV-Net: An Efficient 3D Residual Neural Network for Brain Tumor Segmentation. Expert Systems with Applications, 170, Article ID: 114566.
https://doi.org/10.1016/j.eswa.2021.114566
[34] Latif, U., Shahid, A.R., Raza, B., Ziauddin, S. and Khan, M.A. (2021) An End-to-End Brain Tumor Segmentation System Using Multi-Inception-UNET. International Journal of Imaging Systems and Technology, 31, 1803-1816.
https://doi.org/10.1002/ima.22585
[35] Wong, K.C., Moradi, M., Tang, H. and Syeda-Mahmood, T. (2018) 3D Segmentation with Exponential Logarithmic Loss for Highly Unbalanced Object Sizes. In: Frangi, A.F., et al., Eds., Medical Image Computing and Computer Assisted Intervention—MICCAI 2018, Springer, Cham, 612-619.
https://doi.org/10.1007/978-3-030-00931-1_70
