Using Radial Neural Network to Predict the Ultimate Moment of a Reinforced Concrete Beam Reinforced with Composites

Abstract

This article proposes a numerical model, based on artificial neural networks (a class of artificial intelligence methods), for predicting the ultimate moment of a reinforced concrete beam strengthened with composite materials. A radial basis function (RBF) network was created and tested. Validation of the RBF architecture consists in judging its predictive capacity by applying the weights and biases computed during training to another database that did not participate in the training and testing of the model. With Bayesian regularization, a maximum error of 0.0813 Tm in absolute value was found between the targets and the predicted outputs. The mean square error MSE = 1.1106 × 10⁻⁴ allowed us to quantify and justify the prediction performance of this network. The RBF network model was thus shown to perform well and can be used and exploited by our engineers with a high reliability rate.

Share and Cite:

Randrianarisoa, S., Andriambahoaka, L., Rakotondranja, H. and Raminosoa, A. (2022) Using Radial Neural Network to Predict the Ultimate Moment of a Reinforced Concrete Beam Reinforced with Composites. Open Journal of Civil Engineering, 12, 353-369. doi: 10.4236/ojce.2022.123020.

1. Introduction

For decades, different rehabilitation techniques have been developed: shotcrete, additional prestressing or steel plate bonding. These traditional techniques are effective but they have shown their limitations in terms of long-term behavior [1]. Furthermore, the profitability of a maintenance operation is conditioned by its durability, and thus, by the decrease in the frequency of interventions. This again explains the multitude of research efforts to improve the technique and processes of reinforcement of structures. Research in the field of rehabilitation has been directed towards the use of new materials capable of meeting the various criteria required for the maintenance of structures.

Given the advantages and properties of composite materials, their use is becoming an interesting alternative to steel in the reinforcement of reinforced concrete structures [1]. In other words, it is interesting to replace traditional materials with materials that are relatively inert to oxidation. Composite reinforcement has thus become a world-leading technology in construction engineering, but unfortunately Madagascar has not yet seen its development. This led us to model the behavior of a reinforced concrete beam strengthened with composite materials.

Rooted in neuroscience, neural networks are computational models inspired by the functioning of the human brain, capable of learning and then deciding, predicting or classifying, and thus of building a model of behaviour from the data provided to them [2] [3] [4] [5] [6]. Given its various advantages, artificial intelligence is widely used in the fields of robotics and aeronautics [7], but is it reliable for the calculation of reinforced concrete structures?

The objective of this work is to use artificial neural networks to predict the ultimate moment of a reinforced concrete beam reinforced with composites. To establish the predictive power of our neural model, reliability and performance analyses are carried out before the model can be used and operated by our engineers with any degree of confidence.

2. Material and Methods

2.1. Equation of the Ultimate Resistance Moment

The following Figure 1 illustrates the internal stresses of a rectangular reinforced concrete beam reinforced by external bonding on its lower side (tension zone):

· To the static equilibrium:

F_b + F_sc − F_st − F_cf = 0 (1)

Taking moments about the tension steel: M_r − Z F_b − F_sc (d − d′) − F_cf (H − d) = 0 (2)

· Finding the position of the neutral fibre:

x denotes the position of the neutral fibre, illustrated in the following Figure 2:

F_b = (b x / 2) σ_bc (3)

F_sc = A_sc σ_sc (4)

F_st = A_st σ_st (5)

F_cf = A_cf σ_cf (6)

By introducing (3), (4), (5), (6) into Equation (1), we have:

Fst, Fsc: forces in tensioned and compressed steel. Fb: force in compressed concrete (zero in tensioned concrete). Fcf: force in the laminate. Mr: bending moment.

Figure 1. Transversal section of a reinforced beam and internal forces.

Figure 2. Position of the neutral axis of a reinforced beam on its underside.

(b x / 2) σ_bc + A_sc σ_sc − A_st σ_st − A_cf σ_cf = 0 (7)

There is linear variation, so:

σ_bc / x = σ_st / [n (d − x)] = σ_sc / [n (x − d′)] = σ_cf / [n_cf (H − x)] (8)

n = E_s / E_c (9)

steel-concrete equivalence coefficient

n_cf = E_cf / E_c (10)

composite equivalence coefficient

(n_cf = 12 for lamella type S and n_cf = 15 for lamella type M)

According to (8), (9) and (10) we have:

σ_st = σ_bc n (d − x) / x ; σ_sc = σ_bc n (x − d′) / x ; σ_cf = σ_bc n_cf (H − x) / x (11)

Introducing (11) into (7), we have:

(b x / 2) σ_bc + A_sc σ_bc n (x − d′) / x − A_st σ_bc n (d − x) / x − A_cf σ_bc n_cf (H − x) / x = 0 (12)

x is thus the solution of the following second degree equation:

b x²/2 + n A_sc (x − d′) − n A_st (d − x) − n_cf A_cf (H − x) = 0 (13)
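Equation (13) is quadratic in x and can be solved in closed form. The sketch below is illustrative (the paper does not give an implementation); it assumes consistent SI units for all lengths and areas, and the default equivalence coefficients are only examples.

```python
import math

def neutral_axis(b, d, d_p, H, A_st, A_sc, A_cf, n=15.0, n_cf=12.0):
    """Solve Eq. (13):  b*x^2/2 + n*A_sc*(x - d') - n*A_st*(d - x) - n_cf*A_cf*(H - x) = 0
    for the neutral-axis depth x (all lengths in m, areas in m^2)."""
    # Expand into A*x^2 + B*x + C = 0
    A = b / 2.0
    B = n * A_sc + n * A_st + n_cf * A_cf
    C = -(n * A_sc * d_p + n * A_st * d + n_cf * A_cf * H)
    # C <= 0 for positive areas, so the discriminant is non-negative
    disc = B * B - 4.0 * A * C
    # The positive root is the physically meaningful one
    return (-B + math.sqrt(disc)) / (2.0 * A)
```

For example, with b = 0.2 m, d = 0.36 m, d′ = 0.04 m, H = 0.4 m, A_st = 4 cm², no compression steel and A_cf = 120 mm² (values chosen for illustration), the neutral axis sits at x ≈ 0.132 m.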

· Determination of the quadratic moment

Let I_0 be the quadratic moment (second moment of area) about the neutral axis of the system {compressed concrete + tensioned steel + compressed steel + laminate}, with:

➢ Quadratic Moment of Compressed Concrete

I_b = b x³/12 + b x (x/2)² = b x³/3 (14)

➢ Quadratic moment of tensioned steels:

I_st = n_bars π D_st⁴/64 + n A_st (d − x)² ≈ n A_st (d − x)² (15)

➢ Quadratic moment of compressed steels:

I_sc = n_bars π D_sc⁴/64 + n A_sc (x − d′)² ≈ n A_sc (x − d′)² (16)

➢ Quadratic moment of the composite:

I_cf = n_cf A_cf (H − x)² (17)

This gives the quadratic moment of the system:

I_0 = b x³/3 + n A_st (d − x)² + n A_sc (x − d′)² + n_cf A_cf (H − x)² (18)
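Equation (18) is a direct sum of four contributions and can be sketched as a small function (an illustrative sketch, not from the paper; consistent SI units are assumed):

```python
def quadratic_moment(b, x, d, d_p, H, A_st, A_sc, A_cf, n=15.0, n_cf=12.0):
    """Eq. (18): I0 = b*x^3/3 + n*A_st*(d-x)^2 + n*A_sc*(x-d')^2 + n_cf*A_cf*(H-x)^2.
    All lengths in m, areas in m^2; returns I0 in m^4."""
    return (b * x**3 / 3.0
            + n * A_st * (d - x)**2
            + n * A_sc * (x - d_p)**2
            + n_cf * A_cf * (H - x)**2)
```

With all reinforcement areas set to zero the function reduces to the concrete term b x³/3 of Equation (14), which gives a quick sanity check.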

· Finding the Resistive Moment Equation

In Equation (2), the unknowns are Z and F_b.

The values of x and Z are thus sought according to Equation (1) with:

F_b = 0.8 x b f_bu (19)

F_sc = A_sc f_su (20)

F_st = A_st f_su (21)

F_cf = A_cf f_fd (22)

By introducing (19), (20), (21) and (22) into Equation (1), we have:

0.8 x b f_bu + A_sc f_su − A_st f_su − A_cf f_fd = 0 (23)

So: x = [f_su (A_st − A_sc) + A_cf f_fd] / (0.8 b f_bu) (24)

Therefore: Z = d − (0.8/2) x = d − 0.4 x (25)

By introducing (25), (19), (20) and (22) into Equation (2), we have:

M_r = (d − 0.4 x)(0.8 x b f_bu) + A_sc f_su (d − d′) + A_cf f_fd (H − d) (26)

After development:

M_r = 0.8 x b d f_bu − 0.32 x² b f_bu + A_sc f_su (d − d′) + A_cf f_fd (H − d) (27)

By introducing x (24) into Equation (27), we have:

M_r = f_su d A_st − A_sc f_su d′ + H A_cf f_fd − [0.50 / (b f_bu)] [f_su² (A_st − A_sc)² + A_cf² f_fd² + 2 f_su (A_st − A_sc) A_cf f_fd] (28)

With the approximation d = 0.9 h, the expression for the resistive moment is given by the following equation:

M_r = −0.6672 [f_e² / (b f_c28)] (A_st − A_sc)² + (f_e / 1.15)(0.9 h A_st − A_sc d′) + A_cf f_fd [H − 0.8824 A_cf f_fd / (b f_c28) − 1.5345 f_e (A_st − A_sc) / (b f_c28)] (29)

With:

H = h + t_f ≈ h (30)

If t_f is the nominal thickness of the composite lamella [1.20 mm; 1.40 mm] and the compression reinforcement is neglected (A_sc = 0), we finally have:

M_r = −0.6672 × 10⁻⁶ (f_e A_st)² / (b f_c28) + 0.7826 × 10⁻² h f_e A_st + A_cf f_fd × 10⁻⁴ [h − 0.8824 × 10⁻⁶ A_cf f_fd / (b f_c28) − 1.5345 × 10⁻⁴ f_e A_st / (b f_c28)] (31)

With:

M_r: ultimate moment of the reinforced beam [Tm].

f_cj: characteristic compressive strength of concrete at j days [MPa].

fe: guaranteed yield strength of the steel [MPa].

ffd: design strength of the composites [MPa].

b: width of the beam [m].

h: height of the beam [m].

Ast: sectional area of tensioned reinforcement [cm2].

Acf: sectional area of composites [mm2].
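Equation (31) translates directly into code. The sketch below (in Python rather than the authors' Matlab) follows the mixed-unit conventions listed above; the sample values used to exercise it are illustrative and are not taken from the paper's database.

```python
def ultimate_moment(fc28, fe, b, h, A_st, A_cf, f_fd):
    """Eq. (31): ultimate moment Mr [Tm] of the strengthened beam.
    Units per the paper: fc28, fe, f_fd in MPa; b, h in m; A_st in cm^2; A_cf in mm^2."""
    # Quadratic steel term (negative, from the expansion of Eq. (28))
    term1 = -0.6672e-6 * (fe * A_st) ** 2 / (b * fc28)
    # Linear steel term, with d = 0.9 h folded into the 0.7826e-2 coefficient
    term2 = 0.7826e-2 * h * fe * A_st
    # Composite contribution
    term3 = A_cf * f_fd * 1e-4 * (h
            - 0.8824e-6 * A_cf * f_fd / (b * fc28)
            - 1.5345e-4 * fe * A_st / (b * fc28))
    return term1 + term2 + term3
```

For instance, an assumed beam with fc28 = 25 MPa, fe = 400 MPa, b = 0.2 m, h = 0.4 m, A_st = 4 cm², A_cf = 120 mm² and f_fd = 1000 MPa gives M_r ≈ 8.62 Tm.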

2.2. The Radial Neural Networks

1) Architecture

The radial basis function (RBF) network has the same structure as the multilayer perceptron [8], except that its activation function is a Gaussian. Because of its architecture (Figure 3), this network most often uses the error-correction learning rule and the competitive learning rule.

Unlike sigmoid neurons, radial neurons work locally in the input space. This is the main feature of the RBF network. It consists of three layers: an input layer that retransmits the inputs without distortion, a single hidden layer that contains the radial neurons, and an output layer whose neurons are usually driven by a linear activation function. Each layer is completely connected to the next and there are no connections within a layer.

Its transfer function is written as: radbas(n) = e^(−n²).

This network consists of N input neurons, M hidden neurons and J output neurons.

The output of the mth neuron of the hidden layer is given by:

y_m(q) = exp[−‖x(q) − ν_m‖² / (2 σ_m²)] (32)

νm is the centre of the mth hidden layer neuron or the mth Gaussian neuron and σm is the width of the mth Gaussian.

The output of the jth neuron of the output layer is given by:

z_j(q) = (1/M) Σ_{m=1}^{M} w_mj y_m(q) (33)

with m = 1, …, M and j = 1, …, J.

w_mj are the weights connecting the hidden layer to the output layer.
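Equations (32) and (33) together define the forward pass of the network, which can be sketched as follows (an illustrative sketch; the centre, width and weight values used below are arbitrary):

```python
import numpy as np

def rbf_forward(x, centers, widths, W):
    """Forward pass of the RBF network of Eqs. (32)-(33).
    x: input vector (N,); centers: (M, N); widths: (M,); W: weights (M, J)."""
    # Hidden layer, Eq. (32): Gaussian of the distance to each centre
    y = np.exp(-np.sum((x - centers) ** 2, axis=1) / (2.0 * widths ** 2))
    # Output layer, Eq. (33): linear combination averaged over the M hidden neurons
    return (W.T @ y) / centers.shape[0]
```

When the input coincides with a centre, that hidden neuron responds with 1 while the others decay with distance, which is the local behaviour described above.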

2) Learning algorithm [9]

RBF network learning was first presented by Moody and Darken. It consists in setting four main parameters:

- the number of neurons in the single hidden layer or the number of Gaussians,

Figure 3. Architecture of RBF network.

- the position of the centers of these gaussians,

- the width of these gaussians,

- the connection weights between the hidden neurons and the output neuron(s).

Training the RBF network consists in minimizing the total squared error E computed between the obtained outputs of the network and the desired ones:

E = Σ_{q=1}^{Q} Σ_{j=1}^{J} (t_j(q) − z_j(q))² (34)

For the RBF network, the adjustment of the weights wmj connecting the hidden layer to the output layer is performed by the Widrow-Hoff rule. It is done as follows:

w_mj(i+1) = w_mj(i) + η (t_j − z_j) y_m (35)

t_j is the desired output of the jth neuron, z_j is the computed output of the jth neuron, y_m is the output of the mth hidden-layer neuron and η is the learning rate, whose value is between 0 and 1.
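The Widrow-Hoff rule of Equation (35) is a single rank-one update of the output weights; a minimal sketch (vectorized over all m and j at once, with arbitrary example values):

```python
import numpy as np

def widrow_hoff_step(W, y, t, z, eta=0.1):
    """One update of Eq. (35): w_mj <- w_mj + eta * (t_j - z_j) * y_m.
    W: (M, J) weights; y: (M,) hidden outputs; t, z: (J,) desired and computed outputs."""
    return W + eta * np.outer(y, t - z)
```

Starting from zero weights, a single step moves each w_mj in proportion to both the output error (t_j − z_j) and the activity y_m of the hidden neuron feeding it.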

3) Bayesian regularization [10]

First of all, we choose the architecture of the network: the number of neurons of the hidden layer. The network must be neither too flexible nor too rigid. There are now several methods adapted to these considerations, such as Bayesian regularization.

The learning phase of the RBF network is faster than that of the MLP but requires many more neurons. An alternative is to optimize the parameters of the RBF model by Levenberg-Marquardt optimization. To do this, we use the Matlab network training function “trainbr”, which updates the weight and bias values.

It then minimizes a combination of squared errors and weights, and determines the correct combination to produce a network that generalizes well. The process is called Bayesian regularization.
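The combined objective minimized here has the form F = β·E_D + α·E_W, where E_D is the sum of squared errors and E_W the sum of squared weights. The sketch below illustrates only this regularized objective on a linear output layer, with α and β held fixed; Matlab's trainbr additionally re-estimates α and β by Bayesian inference, which is not reproduced here.

```python
import numpy as np

def regularized_fit(Y, T, alpha=1e-3, beta=1.0, eta=0.05, epochs=500):
    """Gradient descent on F = beta * sum (T - Y@w)^2 + alpha * sum w^2
    for a linear output layer z = Y @ w (Y: (Q, M) hidden outputs, T: (Q,) targets).
    alpha, beta are fixed; true Bayesian regularization re-estimates them."""
    w = np.zeros(Y.shape[1])
    for _ in range(epochs):
        # Gradient of the data term plus the weight-decay term
        grad = -2.0 * beta * Y.T @ (T - Y @ w) + 2.0 * alpha * w
        w -= eta * grad / len(T)
    return w  # small alpha -> close to the least-squares solution, but with smaller weights
```

The weight-decay term is what discourages an over-flexible network and favours solutions that generalize, which is the point of the regularization discussed above.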

2.3. Neuronal Modeling

The ultimate moment of the reinforced beam is modelled as a function of the input variables of the process: the characteristic strength of the concrete, the yield strength of the steel, the width of the beam, the height of the beam and the sectional area of the reinforcement, the design strength of the composite lamella and the sectional area of the composite lamella (Type S and M), by means of the equation of the BAEL 91 [11]:

Y = f[X_i] = −0.6672 × 10⁻⁶ (X_2 X_5)² / (X_3 X_1) + 0.7826 × 10⁻² X_4 X_2 X_5 + X_7 X_6 × 10⁻⁴ [X_4 − 0.8824 × 10⁻⁶ X_7 X_6 / (X_3 X_1) − 1.5345 × 10⁻⁴ X_2 X_5 / (X_3 X_1)] (36)

Figure 4 shows the schematic of the neural network.

The following Table 1 shows us the ranges of variation of each input variable:

Therefore, we have a model with one (01) output and seven (07) input variables.

For the simulation, a database of 714 samples was used. The data set used for the development of the neural network model is divided into three parts:

- 70% of the set for learning,

- 20% for testing,

- 10% for validation.
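The 70/20/10 split of the 714 samples can be sketched as follows (the shuffling scheme and seed are assumptions; the paper does not specify how the samples were drawn):

```python
import numpy as np

def split_dataset(n_samples=714, seed=0):
    """Shuffle sample indices and split them 70/20/10 into
    training, testing and validation index sets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_train = round(0.70 * n_samples)   # 500 samples for learning
    n_test = round(0.20 * n_samples)    # 143 samples for testing
    # the 71 remaining samples are kept for validation
    return idx[:n_train], idx[n_train:n_train + n_test], idx[n_train + n_test:]
```

With 714 samples this yields 500 training, 143 testing and 71 validation samples, matching the test and validation set sizes reported in Sections 3.2 and 3.3.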

Figure 4. Schematic of the neural network.

Table 1. Range of variation of the model parameters.

The network studied is a RBF with Bayesian regularization.

The simulation is launched with a maximum number of neurons MN = 50, a number of neurons to be added between each evaluation DF = 5 and a spread of the radial functions SPREAD = 1.

An optimization by the Levenberg-Marquardt algorithm is associated to the RBF network for its regularization.

The performance to be reached is of the order of 10⁻⁷.

2.4. Methods of Analyzing the Performance of the Model

Before the RBF network can be used with any degree of confidence, it is necessary to analyze its performance and quantitatively evaluate the results it produces.

This analysis consists in proposing a series of performance indicators to evaluate the predictive power of the network. The proposed indicators make it possible to evaluate: FIDELITY, TRUENESS and ACCURACY of a model, following the next Figure 5.

1) The general indicators

a) The bias—Fidelity criteria

A first condition desired in the validation is an unbiased model, that is to say that the average of all deviations e_i is as close as possible to zero.

The bias can be calculated as follows:

bias = (1/n) Σ_{i=1}^{n} (Y_real,i − Y_pred,i) = (1/n) Σ_{i=1}^{n} e_i (37)

b) RMSE criteria—Accuracy criteria

The RMSE (Root Mean Square Error) criterion measures the amplitude of the deviations, characterized by the average of the squares of the deviations e_i.

Figure 5. Performance criteria according to indicators.

The calculation is as follows:

RMSE = √[(1/n) Σ_{i=1}^{n} e_i²] (38)

When we use the indicator without the square root, we obtain another indicator, which we call MSE (Mean Square Error).

MSE = (1/n) Σ_{i=1}^{n} e_i² (39)

This variant of the RMSE, expressed in the squared units of the variable Y, is also very useful for further assessing model accuracy.

→ The closer the value of the RMSE or MSE criteria is to zero, the better the model evaluated in terms of accuracy.

c) Variance—Trueness criteria

The variance of the term e_i over the entire simulation interval defines the “trueness” of the modeling.

The trueness σ e 2 can be calculated using the following equation [12]:

σ_e² = RMSE² − bias² (40)

RMSE = √(σ_e² + bias²) (41)

Thus, a model that is judged faithful through its bias (close to zero) may still be highly inaccurate (high RMSE and MSE values) because of the variability of the deviations, i.e., poor trueness (high σ_e² values) [12].
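The general indicators of Equations (37)-(40) and the decomposition of Equation (41) can be sketched together (an illustrative sketch; the sample arrays below are made up):

```python
import numpy as np

def general_indicators(y_real, y_pred):
    """Bias (Eq. 37), RMSE (Eq. 38), MSE (Eq. 39) and trueness variance (Eq. 40)."""
    e = np.asarray(y_real, float) - np.asarray(y_pred, float)
    bias = e.mean()                 # Eq. (37): fidelity
    mse = (e ** 2).mean()           # Eq. (39)
    rmse = np.sqrt(mse)             # Eq. (38): accuracy
    var_e = rmse ** 2 - bias ** 2   # Eq. (40): trueness
    return bias, rmse, mse, var_e
```

By construction, RMSE² = σ_e² + bias² (Eq. 41), so a model can have zero bias while its RMSE remains large.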

2) The standardized indicators

In standardized indicators, a reference performance value or a relative performance in each indicator is established in order to standardize the evaluation of the model.

Indeed, the great strength of normalized criteria is that they are dimensionless, which allows for the comparison of models between them. In the following, we will present standardized indicators that will allow us to provide more information on the relevance of a model.

a) Nash-Sutcliffe criteria

The Nash-Sutcliffe criteria is a performance indicator constructed from the normalization of the MSE, with values in the interval ]−∞; 1].

It is used to estimate the ability of a model to reproduce an observed behavior.

It is calculated as follows:

NS = 1 − MSE/σ_Y² = 1 − [Σ_{i=1}^{n} (Y_real,i − Y_pred,i)²] / [Σ_{i=1}^{n} (Y_real,i − Ȳ_real)²] (42)

The closer the value obtained for this criterion is to 1, the better the fit of the model to the observed values. It is generally accepted that the Nash-Sutcliffe criterion must be higher than 0.7 for a model to be considered satisfactory, i.e., for the model and the observed values to be consistent.

A Nash value below about 0.6 indicates a poor fit of the model to the observed values.

b) RSR criteria

The RSR is a criterion similar to the Nash-Sutcliffe, though less used, based on the normalization of the RMSE instead of the MSE. It can be expressed as follows [13]:

RSR = RMSE/σ_Y = √[Σ_{i=1}^{n} (Y_real,i − Y_pred,i)²] / √[Σ_{i=1}^{n} (Y_real,i − Ȳ_real)²] (43)

→ The closer the value obtained for this criterion is to 0, the better the fit of the model to the observed values.

A value below 0.2 indicates an acceptable simulation. This criterion can be interpreted as the fraction of the standard deviation σ_Y not explained by the model.

c) RVE criteria

The Relative Volume Error (RVE) is the sum of the errors divided by the sum of the observed values, expressed as a ratio or percentage.

This is done by dividing the bias by the total simulation volume as follows:

RVE = bias / Σ_{i=1}^{n} Y_real,i (44)

→ The RVE indicator can be interpreted as the error on the modeled volume relative to the total observed volume (in percent, if desired). The lower the RVE, the better the overall fit between modeled and observed volume.
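The three standardized indicators of Equations (42)-(44) can be sketched in one function (an illustrative sketch with made-up sample data):

```python
import numpy as np

def standardized_indicators(y_real, y_pred):
    """Nash-Sutcliffe (Eq. 42), RSR (Eq. 43) and RVE (Eq. 44)."""
    y_real = np.asarray(y_real, float)
    y_pred = np.asarray(y_pred, float)
    sse = np.sum((y_real - y_pred) ** 2)            # residual sum of squares
    sst = np.sum((y_real - y_real.mean()) ** 2)     # variance of the observations
    ns = 1.0 - sse / sst                            # Eq. (42)
    rsr = np.sqrt(sse / sst)                        # Eq. (43)
    rve = np.sum(y_real - y_pred) / np.sum(y_real)  # Eq. (44)
    return ns, rsr, rve
```

A perfect prediction gives NS = 1, RSR = 0 and RVE = 0, the ideal values described above; being dimensionless, these indicators allow different models to be compared directly.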

3. Results

3.1. Network Training

The learning is supervised. On the 70% of the database used for learning, the results are shown in Figure 6:

Figure 6. Networks performances. (a) RBF performance; (b) RBF with Bayesian regularization performance.

With no parameterization, the RBF network simulation stopped at only 50 epochs with a poor performance of 0.032 (Figure 6(a)). With Bayesian regularization, a better performance of 4.47 × 10⁻⁷ is achieved at 1000 epochs (Figure 6(b)). The regression lines of the network without and with Bayesian regularization are shown in Figure 7:

The regression line of the RBF network has the form Output = 0.99 Target + 0.038 with a correlation coefficient R = 0.9974, which is the poorer result.

With Bayesian regularization, the line becomes Output = 1 Target + 6.2 × 10⁻⁶, with a better correlation between the observed outputs and those predicted by the network, and a coefficient R = 1.

3.2. Network Test

To justify the predictive quality of the model, the network was tested with 143 randomly drawn samples that did not participate in training. Figure 8 shows the values of the real ultimate moment and those predicted by the RBF network (Figure 8(a)) and the regularized RBF network (Figure 8(b)).

With Bayesian regularization, the predicted points lie very close to the targets, justifying the good predictive quality of our regularized RBF network.

The prediction errors can be qualitatively evaluated by Figure 9.

For the simple RBF model, a maximum error of 1.9747 Tm was found between the target and predicted output. The difference is thus too high.

With Bayesian regularization, a maximum error of 0.0157 Tm was found, and that is tolerable.

Figure 7. Regression lines of the networks. (a) Regression line of the RBF; (b) Regression line of the regularized RBF.

(*) targets values; (o) predicted values.

Figure 8. Outputs observed by RBF and RBF using Bayesian regularization. (a) Outputs observed by RBF network; (b) Outputs observed by RBF regularized.

Figure 9. Prediction errors of the simple RBF and regularized RBF.

Table 2. Error indicators for the RBF and regularized RBF networks.

Table 2 above summarizes the quantitative values of the error deviation indicators:

The error indicators of the RBF using Bayesian regularization are very satisfactory.

3.3. Network Validation

The validation of the RBF architecture with Bayesian regularization consists in judging its predictive capacity by using the weights and biases computed during the training, to apply them to another database composed of 71 remaining samples, which did not participate to the training and testing of the model.

The following Figure 10 shows the results of the network outputs from the remaining samples taken at random:

A maximum error of about 0.0813 Tm in absolute value was found for these 71 samples. The deviation indices quantifying the prediction error of the network during the validation phase are MSE = 1.1106 × 10⁻⁴, RMSE = 0.0105 and MAE = 0.0040, which are very satisfactory, with an error standard deviation σ_error = 0.0105.
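As a quick consistency check (a sketch, not part of the original analysis), the reported RMSE is indeed the square root of the reported MSE:

```python
import math

# Deviation indices reported for the 71 validation samples
mse = 1.1106e-4
rmse = math.sqrt(mse)
# agrees with the reported RMSE of 0.0105 to four decimal places
print(round(rmse, 4))
```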

4. Discussions

To validate the reliability of the regularized RBF network, it was necessary to evaluate the deviation indicators and analyze the performance of the model from the remaining 71 samples which are the model validation samples.

4.1. Evaluation of Deviation Indicators

The deviation indicators allow us to measure and quantify the error differences between the target and the output predicted by the model in Table 3.

The values of the error deviation indicators are satisfactory, so we can say that the outputs predicted by the network are reliable. It remains to verify the performance indicators of the model.

4.2. Evaluation of Performance Indicators

The performance indicators of the model will be evaluated by two categories which are general indicators and standardized indicators, following Figure 11.

Figure 10. Targets and outputs using networks.

Table 3. Indicators of differences between targets and predicted outputs.

Figure 11. Performance indicators of the regularized RBF model.

For each deviation indicator evaluated, we conclude that there is a good fit between the targets and the predicted outputs, and that the regularized RBF network is a faithful, accurate and true model.

5. Conclusions

In this study, the training of the regularized RBF neural model yielded a root mean square error of 0.0105 and a Pearson correlation coefficient equal to 0.9992, which represents the best result.

After analyzing and evaluating the different performance criteria defined, we have a model that is faithful, accurate and true. The calculated Nash-Sutcliffe value is equal to 0.9999995, which indicates an excellent fit of the model to the observed values.

To conclude, our regularized RBF model is very efficient and can be used and exploited by our engineers to evaluate the ultimate moment of a reinforced concrete beam reinforced by external bonding of composites with a high reliability rate.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Ngo, M.D. (2016) Renforcement au cisaillement des poutres béton armé par matériaux composites naturels (fibre de lin). Thèse de Doctorat, Université de Lyon, Lyon.
[2] Najjar, Y., Basheer, I.A. and Hajmeer, M.N. (1997) Computational Neural Networks for Predictive Microbiology: I. Methodology. International Journal of Food Microbiology, 34, 27-49. https://doi.org/10.1016/S0168-1605(96)01168-3
[3] Fortin, V., Ouarda, T., Rasmussen, T.P. and Bobée B. (1997) Revue bibliographique des méthodes de prévision des débits. Revue des Sciences de l’Eau, 4, 461-487.
[4] McCulloch, W.S. and Pitts, W. (1943) A Logical Calculus of the Ideas Imminent in Nervous Activity. Bulletin of Mathematical Biophysics, 5, 115-133.
[5] Senthil Kumar, A.R., Sudheer, K.P., Jain, S.K. and Agarwal, P.K. (2004) Rainfall-Runoff Modelling Using Artificial Neural Networks: Comparison of Network Types. Hydrological Processes, 19, 1277-1291. https://doi.org/10.1002/hyp.5581
[6] Mas, J.F., Puig, H., Palacio, J.L. and Sosa Lopez, A. (2004) Modeling Deforestation Using GIS and Artificial Neural Networks. Environmental Modeling and Software, 19, 461-471. https://doi.org/10.1016/S1364-8152(03)00161-0
[7] Randriamamonjy, L.J. (2019) Recherche d’architecture minimale et approche d’apprentissage par pseudo-inverse généralisée d’un réseau de neurones artificiels. Thèse de doctorat, Université d’Antananarivo, Antananarivo.
[8] El Badaoui, H., Abdallaoui, A. and Chabaa, S. (2014) Perceptron Multicouches et réseau à Fonction de Base Radiale pour la prédiction du taux d’humidité. International Journal of Innovation and Scientific Research, 5, 55-67.
[9] Boudebbouz, B., Manssouri, I., Mouchtachi, A., Manssouri, T. and El kihel, B. (2015) Utilisation des réseaux de neurones artificiels de type RBF pour la modélisation du régime normal à point de fonctionnement variable d’une installation industrielle. European Scientific Journal, 11, No. 18.
[10] Nohair, M., St-Hilaire, A. and Ouarda, T.B. (2008) Utilisation des réseaux de neurones et de la régularisation bayésienne en modélisation de la température de l’eau en rivière. Revue des Sciences de l’Eau/Journal of Water Science, 21, 259-382.
[11] Lacroix, M.R., et al. (Février 2000) Règles BAEL 91 révisées 99, Fascicule 62, titre 1er du CCTG - Travaux section 1: béton armé. CSTB.
[12] Gy, P. (1998) Sampling for Analytical Purposes. The Paris School of Physics and Chemistry, Masson Paris.
[13] Moriasi, D.N., et al. (2007) Model Evaluation Guidelines for Systematic Quantification of Accuracy in Watershed Simulations. American Society of Agricultural and Biological Engineers, 50, 885-900.

Copyright © 2024 by authors and Scientific Research Publishing Inc.


This work and the related PDF file are licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.