Economic Science and Entropy

Abstract

Economic theory often does not fully specify the quantitative details of proposed models of economic activity. Instead, only the “direction of influence” of the associated variables is proposed. Such specifications are expressed by the sign patterns of the Jacobian array of the mathematical system specified by the theory. This paper shows that such arrays often only specify the frequency distributions of the sign patterns of arrays that might be estimated from the corresponding data. Shannon’s concept of entropy from information theory can then be used to measure the information provided by the theory.

Citation: Lady, G. (2020). Economic Science and Entropy. Theoretical Economics Letters, 10, 1227-1238. doi: 10.4236/tel.2020.106074

1. Introduction

A formalism that proposes to organize a subject matter is “scientific” if it provides propositions that limit the configurations that the data describing the subject matter can take on. If the data do not satisfy these limitations, then the proposition is said to be “falsified” (Popper, 1934, 1959). Accordingly, the falsifiable propositions of science are distinct from logical propositions, e.g., 2 + 2 = 4, which are based upon derivations from initial assumptions using agreed-upon rules, and from revealed propositions, e.g., God is good, which are based upon states of belief. In physics, propositions of the theory often limit the data by giving precise, quantitative predictions of how the data will turn out, often set in an experimental context. Set up and perturb a physical system in a particular way and the theory provides quantitative predictions of how features of the system will behave, often with only very small differences between the measured quantities and those proposed by the theory.

Famously, in the physics of the very small, quantum mechanics, predictions of these sorts cannot always be made. Instead, the theory proposes a list of possible outcomes, each with a probability of actually taking place. Thus, the theory proposes the ex post frequency distribution of the outcomes of a particular perturbation of the system. Interestingly, this is done by proposing this distribution as the ex ante configuration of the system, as given mathematically by a wave function. Whether the wave function is “only”, epistemologically speaking, a mathematical feature that is useful in predicting experimental outcomes, or is, ontologically, a characterization of the actual, underlying physical system, e.g., specifying the actual “superpositions” of individual particles, is a matter of continuing controversy (Lewis, 2016). In any event, a wave function is fully falsifiable and, thus, fully scientific, even if it does not provide deterministic predictions.1

In many ways, the expression of economic theory follows the example of physics. Features of the subject matter are described by mathematical systems which, strictly speaking, can be used to make (usually) deterministic predictions. Nevertheless, this methodology has problems. As discussed in the next section, the features of the mathematical system at issue may not be fully quantifiable in terms of the associated propositions of the theory. Accordingly, the derivation that shows exactly what limitations the theory places on the data can be difficult to construct and implement. Worse yet, in terms of precise, quantitative predictions, many of the variables at issue would defy such predictions. Specifically, many features of the operation of the economy are described by the prices and quantities of the goods produced and consumed in the economy. Often, the ex post price time series, if known in advance, would allow profitable arbitrage by simultaneous trading in the spot and futures markets, at least for storable commodities that are well traded in both markets, e.g., oil. For example, if the price of oil quoted at a given future time is sufficiently higher than the present (spot) price, then buying a quantity of oil now, putting it in storage, and selling it at the given future time would be profitable. If the higher future price could be reliably predicted and was well known and quoted in the appropriate futures contracts, then the strategy of buying oil now and selling it into a contract maturing at the given future time would bid up the current price and bid down the price in the future. Accordingly, the profitable difference between the present and future prices that had been predicted would be bid away (Bopp, 1991).

In the next section, the manner in which a not fully quantified economic model can be falsified is reviewed. In section 3, entropy as specified in information theory is applied and shown to measure the “degree” of limitation that the theory imposes on the data. The entropy measure is derived from a Monte Carlo sampling of quantitative realizations of the theory, and examples based upon this simulation are provided. A summary of conclusions is given in section 4. A description of the Monte Carlo methodology is provided in an appendix.

2. Falsifying an Economic Model

Following Samuelson (1947), theoretical propositions in economic theory are based upon a system of equations that proposes to describe some feature of how the economy operates,

$$ f_i(Y, Z) = 0, \qquad i = 1, 2, \ldots, n, \tag{1} $$

where Y is an n-vector of endogenous variables and Z is an m-vector of exogenous variables. The theory is brought to the data through the method of comparative statics. This method analyzes the effects of disturbances in the exogenous variables, the entries of Z, as they relate to corresponding changes in the endogenous variables, the entries of Y, all with respect to a referent solution to (1). The system at issue is specified by a linear system of differentials,

$$ \sum_{j=1}^{n} \frac{\partial f_i}{\partial y_j}\, dy_j + \sum_{k=1}^{m} \frac{\partial f_i}{\partial z_k}\, dz_k = 0, \qquad i = 1, 2, \ldots, n, \tag{2} $$

where the partial derivatives involved are evaluated at the referent solution. In setting up the statistical analysis it is often assumed that the system (1) is (at least locally) linear; and, as a result, (1) and (2) can be expressed by,

$$ \beta Y = \gamma Z, \tag{3} $$

where β and γ are appropriately dimensioned matrices. (3) is usually called the “structural form”, the explicit formulation of the theory that will be submitted to estimation. This is done by estimating the entries of π in what is usually called the “reduced form”,

$$ Y = \pi Z, \qquad \text{for } \pi = \beta^{-1}\gamma. \tag{4} $$

For ease of discussion, but without loss in generality, it will be assumed that γ = I, so that

$$ \pi = \beta^{-1}. $$

The theory, so expressed, is “scientific” to the degree to which a specification of β due to the theory limits what the outcome of the estimated entries of π can be. And, accordingly, if the estimated entries of π do not satisfy these limits, the theory has been falsified.

The issue now becomes: what does the theory propose about β and how does this in turn limit the outcomes that would be found for the estimated π? Samuelson (1947) notes that the theory doesn’t usually propose specific quantities for the entries of β.2 Instead (often), the theory proposes the “directions of influence” among the variables, e.g., that an increase in the market price of a commodity will motivate sellers to offer more for sale, but not specifically how much more. Accordingly, the theory so expressed gives the sign pattern of β, sgn β, i.e., specifying the entries as positive, negative, or zero. The problem then becomes that of the derivation of π = β⁻¹ to see if any entries of π must have a particular sign, given sgn β, but otherwise independent of the magnitudes of the entries of β.3 If so, then if this sign does not show up in the estimated π, the structural model, as specified by sgn β, has been falsified. The enterprise of doing this became called a “qualitative analysis”.

Samuelson (1947) went on to propose that the chances of a successful qualitative analysis, i.e., finding that at least some of the entries of π are signable based only upon sgn β, were too unlikely to take seriously. He reasoned that if an entry of π had to have a particular sign, then all of the terms in the expansions of β’s determinant and the appropriate cofactor, each with possibly millions of terms for a system of any size, would have to have the same sign. And this, he reasoned further, would be most unlikely to happen. Alternatively, he proposed that if β were derived from the second-order conditions of an optimization problem, or if the system (1) were dynamically stable, then some entries of π would be signable, e.g., π’s main diagonal would have to be all negative.

Nevertheless, (sufficient) conditions for a successful qualitative analysis were initially presented by Lancaster (1962), with the subsequent literature, including necessary and sufficient conditions and algorithmic methods for detecting the conditions, well summarized in Hale et al. (1999). But none of this dispelled Samuelson’s pessimism about the likelihood of a successful qualitative analysis. The algorithmic methods of conducting the analysis were not widely available and usually not successful when applied.4 Accordingly, qualitative analyses were rarely attempted and even less often successful.

A recent literature, e.g., Lady (2011), Buck (2012), and Buck (2015), has shown that Samuelson’s pessimism and the subsequently proposed need for the application of refined, algorithmic analyses were misplaced. Specifically, a given sgn β always places limits on the sign patterns that π can take on, even if no entry of π is signable. This can be shown through a simple example; assume that

$$ \operatorname{sgn} \beta = \begin{bmatrix} + & + & + \\ + & + & + \\ + & + & + \end{bmatrix}. $$

Since βπ = I and πβ = I, a sign pattern such as

$$ \operatorname{sgn} \pi = \begin{bmatrix} - & - & - \\ ? & ? & ? \\ ? & ? & ? \end{bmatrix}, $$

where entries marked “?” can have either nonzero sign, is not allowed for the given sgn β: with every entry of β positive and every entry of π’s first row negative, the (1,1) entry of πβ would necessarily be negative and so could not equal 1. Further, given that the last two rows can have any signs, this one limitation eliminates 64 of the 512 possible sign patterns for the estimated 3 × 3 reduced form, based upon the structural hypothesis.

An automated Monte Carlo simulation can investigate how many sign patterns π can allowably take on, given a hypothesized sgn β. The reader can access a version of the Monte Carlo and view associated instructions for its use at: https://optimagroup.us/RF_Finder/Finder_Page.htm.

In brief, the method assigns quantitative values to the entries of β consistent with the hypothesized signs (called a sample of β), inverts the result, and tabulates the resulting sgn π that was found. When this was done for the hypothesized sgn β above, only 102 allowable sign patterns for sgn π were found. No other sign patterns were found with samples of several million, so the chances are vanishingly small that, for particular magnitudes of its entries, the hypothesized sgn β could generate some sign pattern for sgn π that was not found by the Monte Carlo. The probability of this taking place is reported in Lady (2011). In all of the examples used here, the sample size reduces this probability to virtually zero (or in some cases demonstrably zero). Henceforth, it will be assumed for the examples here that no allowable sign patterns are missed by the Monte Carlo simulation, and the number of sign patterns found will be taken to be the total number that are “allowable” with the hypothesized sgn β. Besides the limitations on the allowable sgn π due to the requirements that βπ = I and πβ = I, the inference structure of sgn β, i.e., which variables appear in which equations, can also place limitations on the allowable sgn π, e.g., Lady (2019).
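To make the procedure concrete, the fragment below is a minimal sketch of the sampling idea in Python (using numpy). It is an illustration only, not the RF_Finder software linked above, and the function name sample_sign_patterns is ours. It quantifies a hypothesized sgn β with uniformly drawn magnitudes, inverts each draw, and tallies the distinct reduced-form sign patterns that appear; applied to the all-positive 3 × 3 sgn β above, the tally should approach the 102 allowable patterns reported in the text.

```python
# Minimal sketch of the Monte Carlo described above (not the RF_Finder software):
# quantify a hypothesized sgn(beta), invert, and tally the sgn(pi) patterns found.
import numpy as np
from collections import Counter

def sample_sign_patterns(sgn_beta, n_samples=100_000, max_mag=10.0, seed=0):
    """Tally the distinct sgn(pi) patterns produced by random quantifications
    of sgn_beta (entries +1, -1, or 0)."""
    rng = np.random.default_rng(seed)
    sgn_beta = np.asarray(sgn_beta, dtype=float)
    counts = Counter()
    for _ in range(n_samples):
        # Magnitudes from a uniform distribution; signs (and zeros) from sgn_beta.
        beta = sgn_beta * rng.uniform(0.0, max_mag, size=sgn_beta.shape)
        if abs(np.linalg.det(beta)) < 1e-9:   # skip (near-)singular draws
            continue
        pi = np.linalg.inv(beta)
        counts[tuple(np.sign(pi).astype(int).ravel())] += 1
    return counts

# The 3 x 3 example from the text: every entry of sgn(beta) is "+".
patterns = sample_sign_patterns([[1, 1, 1], [1, 1, 1], [1, 1, 1]])
print("distinct sgn(pi) patterns found:", len(patterns))   # the text reports 102
```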

It is proposed here (and seems intuitive) that the degree to which the hypothesized structure limits the allowable outcomes for the estimated reduced form represents the “scientific content” of the hypothesized structure. Specifically, the greater the number of limits, the more readily the hypothesis can be falsified, i.e., the greater the number of outcomes for the reduced form that are not allowable. The issue becomes how to measure the “degree” of limitation on the allowable configurations of sgn π due to the hypothesized configuration of sgn β. It was briefly noted in Buck (2012) that the concept of entropy from information theory as presented in Shannon (1948) could be used as an appropriate measure. The use of entropy in this way is elaborated in the next section.

3. Entropy as a Measure of Information

For a physical system, entropy was proposed by Boltzmann as a measure of the number of microscopic configurations of the system that are macroscopically equivalent, e.g., Ligrone (2019). For example, the number of configurations of air molecules and other particles in the air in a closed room that would result in the same measures of temperature and pressure is very large. As such, the measure is sometimes proposed to be a measure of “chaos”. Shannon (1948) proposed a similar measure in information theory. Suppose that a message is expected that could have Q-many different contents. Let F_i be the frequency with which the ith message content is expected to be the message received. Then the “entropy” of the expectation is given by,

$$ \text{Entropy} = - \sum_{i=1}^{Q} F_i \log(F_i), \tag{5} $$

where log(F_i) is taken to the base 2.
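As a quick illustration of Equation (5), the small Python function below computes the entropy of a frequency distribution (the name entropy_bits is ours, not from the paper); two equally probable messages carry one bit of entropy, and a message known in advance carries none.

```python
# Equation (5): Shannon entropy (base 2) of a frequency distribution.
import math

def entropy_bits(freqs):
    """Return -sum(F_i * log2(F_i)) over the nonzero frequencies."""
    return -sum(f * math.log2(f) for f in freqs if f > 0.0)

print(entropy_bits([0.5, 0.5]))   # 1.0 bit: two equally probable messages
print(entropy_bits([1.0]))        # 0.0 bits: the message is known in advance
```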

To present a simple example in the context at issue here that can be readily confirmed by hand, let

$$ \operatorname{sgn} \beta = \begin{bmatrix} - & + \\ + & - \end{bmatrix}, $$

with the “message” at issue being the estimated sgn π. This message has four bits, one for each entry of sgn π, with (say) “1” for “+” and “0” for “−”. Since zero entries in sgn π are not allowed (β is irreducible and a zero entry in its inverse would be a quantitative accident which will be ignored, i.e., the model being worked with is always assumed to be nonsingular and have nonsingular cofactors for its Jacobian matrix), there are 2^4 = 16 sign patterns that sgn π can take on. The issue at stake is the degree to which the frequency distribution of allowable configurations of sgn π limits the configurations that might be measured. One extreme would be that there is no limit at all, specifically, that any of the possible 16 sign patterns is allowable and each is equally probable. In this case, F_i = 1/16 for all i, Q = 16, and applying formula (5) above, Entropy = 4. At the other extreme, only one sign pattern is allowable, F_i = 1 for some i, Q = 1, and Entropy = 0. Accordingly, the measure of entropy reports the portion of the information at issue in the message that is not known in advance due to the frequency distribution and will be found out when the message is delivered, i.e., when sgn π is estimated in our context. The complementary information is that which is known in advance, assuming that the proposition of the theory provided by the frequency distribution is correct. This can be measured by,

$$ \text{INFO\%} = 100 \left( 1 - \frac{\text{Entropy}}{nm} \right), \tag{6} $$

where m = n for γ = I. Accordingly, for all possible configurations of sgn π allowable and each equally probable, INFO% = 0; and, for only one possible sign pattern allowable, INFO% = 100.

For the example above, the adjoint of β is fully signed and has all negative entries. The determinant can be positive or negative. Accordingly, only two sign patterns for sgn π are allowed, all negative entries or all positive entries, depending upon whether det β is positive or negative. If each of these sign patterns is equally probable, then the entropy of the assumed sgn β in terms of the information it provides about the possible patterns of sgn π is “1” and the corresponding INFO% = 75. In the Monte Carlo simulation used here, the absolute values of the magnitudes of the entries of β are chosen from a uniform distribution. Accordingly, for this example the frequencies of each allowable sgn π found in the simulation will be approximately equal, subject to small-sample sampling variation.

Since the two allowable sgn π are equally probable, the (converged) simulated entropy and the maximum entropy for two allowable sign patterns are the same. Changes in the frequency distribution can make a difference. For example, for F_1 = frequency of all negative entries = 0.75 and F_2 = frequency of all positive entries = 0.25, entropy = 0.81 and INFO% = 79.7. In the more extreme case of F_1 = 0.9 and F_2 = 0.1, entropy = 0.47 and INFO% = 88.3. Nevertheless, from the standpoint of falsification, the number of allowable sgn π is the more significant result from a qualitative analysis, particularly since the distributional rules for selecting the absolute values of the entries of β are presumably less empirically robust than the specification of their signs.

A shortcut to computing the entropy of Q-many allowable alternatives, assuming that each is equally probable, is entropy = log(Q), where “log( )” is to the base 2; or, more generally, for (say) common or natural logs, entropy = log(Q)/log(2).
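Equations (5) and (6) and the log(Q) shortcut are easy to check numerically. The sketch below (function names are ours; the entropy helper is repeated so the fragment is self-contained) reproduces the figures quoted above for the 2 × 2 example, whose message has n·m = 4 sign bits.

```python
# Checking Equations (5) and (6) and the log2(Q) shortcut for the 2 x 2 example.
import math

def entropy_bits(freqs):
    return -sum(f * math.log2(f) for f in freqs if f > 0.0)

def info_percent(freqs, n, m):
    """Equation (6): share of the n*m sign bits known in advance from the theory."""
    return 100.0 * (1.0 - entropy_bits(freqs) / (n * m))

print(info_percent([0.5, 0.5], 2, 2))      # 75.0
print(info_percent([0.75, 0.25], 2, 2))    # ~79.7
print(info_percent([0.9, 0.1], 2, 2))      # ~88.3
print(math.log2(2))                        # shortcut: Q = 2 equally likely patterns -> 1 bit
```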

It is tempting to suppose that simply an enumeration of the allowable sgn π as compared to the number of possible sgn π would be a sufficient measure of the “information”, i.e., limitations on the data, of a specified structural sign pattern. A quick example can show that this is not so. The 2 × 2 matricial form used above is an example of a “Metzler” matrix (Metzler, 1945): a matrix with all negative main diagonal entries and non-negative off-diagonal entries. In economics such a matricial form corresponds to the sign pattern of the Jacobian matrix of a system of excess demand equations where all commodities are (strongly if no zeros are allowed) gross substitutes. These arrays have the important characteristic of being Hicksian stable (Hicks, 1939), i.e., disturbed solutions “move back” in the direction of the equilibrium solution, if and only if they are dynamically stable (Samuelson, 1941), i.e., disturbed solutions “move convergently back” to the equilibrium solution. The table below presents results from the qualitative analysis of Metzler matrices with no zeros for n = 2, 3, 4, and 5.

Table 1. sgn β = Metzler matrix without zeros.

In all cases except for n = 5, samples of 1.5 million were generated. For each of those cases, an additional sample of 1.5 million was generated to confirm that all allowable sign patterns for sgn π had been found. For n = 5, the results of the sampling were more problematical. The 38,004th allowable sign pattern for sgn π was not found until the 44th sample of 1.5 million. Additional samples were run, and the 39,005th sign pattern for sgn π was not found until the 53rd sample of 1.5 million. An additional seven samples of 1.5 million were run without any additional sign patterns being found, although there is a small chance that some were missed. Since the software does not tabulate cross-sample frequency distributions, only the maximum entropy measure could be calculated for the case of n = 5. Since the absolute values of the entries of β were chosen from a uniform distribution for all of the samples generated, the difference between the simulated entropy and the maximum entropy (i.e., the entropy when all frequencies are equal) was due to the algorithm for computing β⁻¹. For example, for the Metzler matrix for which n = 3, the expansion of det β has six terms, five of which are positive, and all off-diagonal cofactors are positive.
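The n = 3 claim can be confirmed symbolically. The sketch below uses sympy with nine generic positive symbols (the symbol names are arbitrary) to expand det β and the adjoint of a Metzler sign pattern without zeros.

```python
# Symbolic check of the n = 3 Metzler claims: det(beta) expands to six terms,
# five of them positive, and every off-diagonal cofactor is positive.
import sympy as sp

a = sp.symbols('a1:10', positive=True)   # nine generic positive magnitudes
beta = sp.Matrix([[-a[0],  a[1],  a[2]],
                  [ a[3], -a[4],  a[5]],
                  [ a[6],  a[7], -a[8]]])

print(sp.expand(beta.det()))   # one negative term (-a1*a5*a9), five positive terms
print(beta.adjugate())         # off-diagonal entries (the off-diagonal cofactors) are positive
```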

Inspection of Table 1 reveals that the number of allowable sign patterns for sgn π, as a proportion of possible sign patterns, falls significantly as n increases. Nevertheless, the associated entropy increases, and the level of a priori information provided by the qualitative structural specification decreases. Accordingly (as expected), the entropy measure, rather than the proportion alone, is the proper measure of the information, i.e., “scientific content”, provided by the qualitative structural specification.

It should be noted that even weaker structural specifications can be measured by entropy. For example, suppose the theory only specifies which variables appear in which equations, i.e., only specifies which entries in β are zero and which are nonzero. A specification of this kind is given for sgn β below where the entries marked “?” are nonzero, but equally probably can have a positive or negative sign. The specification below has only one cycle of inference.

$$ \operatorname{sgn} \beta = \begin{bmatrix} ? & 0 & 0 & ? \\ ? & ? & 0 & 0 \\ 0 & ? & ? & 0 \\ 0 & 0 & ? & ? \end{bmatrix}. $$

Even for this austere structural specification, there are only 256 allowable sign patterns for sgn π. Each of these is equally likely for the uniform distribution of magnitudes assumed here, so the entropy of this structural specification is 8 and the corresponding INFO% = 50. The specification eliminates over 99% of the possible sign patterns for sgn π. Accordingly, since qualitative analyses are not typically performed, if a standard multi-stage regression estimation procedure were used to find values for the nonzero entries of β based upon the estimated reduced form, there is a good chance that the specification would be (unknowingly) falsified and the values found for the nonzero entries of β could not possibly, regardless of their signs or values, have generated the sgn π that was estimated, e.g., Buck (2016).
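This weaker specification can be explored with the same sampling idea, now drawing a random sign as well as a random magnitude for each “?” entry while the structural zeros stay zero. The fragment below is again only an illustrative sketch, not the paper’s software; per the text, 256 distinct reduced-form sign patterns should emerge, each with roughly equal frequency.

```python
# Sampling the 4 x 4 zero/nonzero specification above: each "?" entry receives a
# random sign and a uniform magnitude; structural zeros stay zero.
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)
nonzero = np.array([[1, 0, 0, 1],
                    [1, 1, 0, 0],
                    [0, 1, 1, 0],
                    [0, 0, 1, 1]], dtype=float)

counts = Counter()
for _ in range(200_000):
    signs = rng.choice([-1.0, 1.0], size=(4, 4))
    mags = rng.uniform(0.0, 10.0, size=(4, 4))
    beta = nonzero * signs * mags
    if abs(np.linalg.det(beta)) < 1e-9:   # skip (near-)singular draws
        continue
    counts[tuple(np.sign(np.linalg.inv(beta)).astype(int).ravel())] += 1

print("distinct sgn(pi) patterns found:", len(counts))   # the text reports 256
```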

4. Conclusion

As discussed above, the qualitative specification of the structural form does not usually lead to a specific prediction for the reduced form sign pattern. Instead, depending upon the distributional rules assumed for the absolute values of the nonzeros in the structure, the theory specifies a frequency distribution of the sign patterns that sgn π can take on. This is similar to features of quantum mechanics where the theory specifies a wave function that corresponds to an expected ex post frequency distribution of experimental results. In economics, this frequency distribution is not interpreted as the ex ante configuration of the actual system as is sometimes done for quantum mechanics. Nevertheless, the similarity is striking.

There are many statistical issues related to the conduct of a qualitative analysis, e.g., selecting the more probable structural specification from two (or more) proposed when the estimated reduced form is allowable for all of them (Buck, 2005). But these are beyond the scope of this paper, which is only intended to show that a measure of entropy can be used to reveal the information content of a structural model. There is a need for more robust computing platforms and innovative numerical methods to apply the concepts presented here to large arrays. For straightforward model falsification, the allowability of an estimated sgn π can be assessed now with the software used in support of this paper. The important point proposed here is that qualitative analysis is an important tool in model development and evaluation.

Appendix: Qualitative Analysis

The Monte Carlo approach utilized in this paper was first presented in Buck (2011). The software used for this paper has been enhanced in a number of ways since that initial version. The computer program used and instructions for its use can be found at: https://optimagroup.us/RF_Finder/Finder_Page.htm.

The initial problem approached in conducting a qualitative analysis was to detect entries in sgn π that had to have a particular sign based upon sgn β, independent of the magnitudes of the entries of β, e.g., Lancaster (1966), Ritschard (1983), Maybee (1986), and Lady (1993). The software used was not generally available, and the algorithmic principles were difficult to apply. As noted below, the Monte Carlo simulation reported on here readily solves this problem in a very straightforward way.

The Monte Carlo algorithm used here is as follows:

1) The number of samples is set by the user and the Monte Carlo simulation is initiated.

2) The sign patterns of β, and as appropriate γ, are specified, i.e., input to the program as data files.

3) For each sample the absolute value of the nonzero entries of β, and as appropriate γ, are each selected randomly from a uniform distribution defined on the open interval, ]0,Max[. The default value of Max is set equal to 10, but the value can be set by the user. These values are set positive or negative as specified by the data files.

4) π = β⁻¹γ is computed. For discussion purposes, assume γ = I.

5) The sgn π as computed by the simulation is compared to a pre-specified sgn π, e.g., as previously estimated. The number of times that the simulation equals the pre-specified sgn π is tabulated and reported once the simulation is done.

6) The number of times that each entry of sgn π is positive or negative (a zero entry is treated as an error and the iteration is repeated) is tabulated and reported when the simulation is done. Accordingly, if any of the entries of sgn π is signable, this is represented by the entry always being positive or negative for each simulation. This result is very simple to achieve and far easier than attempting the more complicated procedures cited above.

7) For each sgn π found, a base 10 index is computed. This is done by forming a base 2 index: the rows of sgn π are placed end to end, with “+” entries set equal to “1” and “−” entries set equal to “0”. For example, for

$$ \operatorname{sgn} \pi = \begin{bmatrix} - & + & + \\ + & - & + \\ + & + & + \end{bmatrix}, $$

the base 2 index = 011101111 and the corresponding base 10 index = 239 (a small code illustration of this indexing follows the list). A variable differentiated by this base 10 index is then used to tabulate the number of times, if any, each of the possible sign patterns for sgn π is found by the simulation. The (long) integer used in the computing platform for this index is limited to ±2³¹. Accordingly, indexed counts of the reduced form sign patterns cannot be tabulated for mn > 30. This limitation can be mitigated by using other computing platforms or indexing schemes.
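Step 7’s indexing is straightforward to reproduce. The small function below (the name pattern_index is ours) maps a sign pattern to its base 10 index and recovers 239 for the example shown above.

```python
# Step 7: rows of sgn(pi) laid end to end, "+" -> 1 and "-" -> 0, read as a
# base 2 number.
def pattern_index(sgn_pi):
    bits = ''.join('1' if entry > 0 else '0' for row in sgn_pi for entry in row)
    return int(bits, 2)

example = [[-1,  1,  1],
           [ 1, -1,  1],
           [ 1,  1,  1]]
print(pattern_index(example))   # 0b011101111 = 239
```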

Options

8) The data files for β and γ indicate a positive entry with “1”, a negative entry with “−1”, and a zero entry with “0”. If the user sets an entry as “2”, then the simulation sets the entry as nonzero, but equally probably positive or negative. This was done for the array considered at the end of section 3 for the entries marked “?”. In this case, only the zero restrictions on β limited the quantifications of β used by the simulation.

9) The same convention is used for the pre-specified sgn π (although there are no zeros in this array). In this case, if an entry is set at “2”, it is ignored by the simulation in detecting the sign patterns found by the simulation. This enables sub-patterns of sgn π to be investigated as the basis for the array not being found, i.e., the structural form being falsified, e.g., Hale and Lady (1995).

NOTES

1. Since Popper (1934, 1959), an instance of model falsification may not be viewed as the basis for rejecting the “main” aspects of the theory; instead, it may “simply” be part of the process of model development or refinement, e.g., Lady and Moody (2019).

2. There can be exceptions. For example, if the system (1) includes the accounting equation, Gross Domestic Product (GDP) = Consumption (C) + Investment (I) + Government Spending (G) + Exports (X) − Imports (M), then the corresponding entries of β (and γ) would equal “1” in absolute value.

3. It is assumed that β is irreducible, so that π will have no entries that are necessarily equal to zero.

4. With rare exceptions, e.g., Hale (1995).

Conflicts of Interest

The author declares no conflicts of interest regarding the publication of this paper.

References

[1] Bopp, A., & Lady, G. (1991). A Comparison of Petroleum Futures versus Spot Prices as Predictors of Prices in the Future. Energy Economics, 13, 274-282.
https://doi.org/10.1016/0140-9883(91)90007-M
[2] Buck, A., & Lady, G. (2005). Falsifying Economics Models. Economic Modelling, 22, 777-810.
https://doi.org/10.1016/j.econmod.2005.05.007
[3] Buck, A., & Lady, G. (2012). Structural Sign Patterns and Reduced form Restrictions. Economic Modelling, 29, 462-470.
https://doi.org/10.1016/j.econmod.2011.12.003
[4] Buck, A., & Lady, G. (2015). A New Approach to Model Verification, Falsification and Selection. Econometrics, 3, 466-493. http://www.mdpi.com/2225-1146/3/3/466/html
https://doi.org/10.3390/econometrics3030466
[5] Buck, A., & Lady, G. (2016). Estimating a Falsified Model. Advances in Pure Mathematics, 6, 523-531.
https://doi.org/10.4236/apm.2016.68040
[6] Hale, D., & Lady, G. (1995). Qualitative Comparative Statics and Audits of Model Performance. Linear Algebra and Its Applications, 217, 141-154.
https://doi.org/10.1016/0024-3795(94)00125-W
[7] Hale, D., Lady, G., Maybee, J., & Quirk, J. (1999). Nonparametric Comparative Statics and Stability. Princeton, NJ: Princeton University Press.
[8] Hicks, J. (1939). Value and Capital. London: Oxford University Press.
[9] Lady, G. (1993). SGNSOLVE.EXE Analysis Package. Prepared as a Job of Work for the Energy Information Administration, Washington DC: U.S. Department of Energy.
[10] Lady, G., & Buck, A. (2011). Structural Models, Information and Inherited Restrictions. Economic Modelling, 28, 2820-2831.
https://doi.org/10.1016/j.econmod.2011.08.021
[11] Lady, G., & Moody, C. (2019). Econometric Modeling and Model Falsification. Advances in Pure Mathematics, 9, 762-776.
https://doi.org/10.4236/apm.2019.99036
http://www.scirp.org/journal/paperinformation.aspx?paperid=95039
[12] Lancaster, K. (1962). The Scope of Qualitative Economics. Review of Economic Studies, 29, 99-132.
https://doi.org/10.2307/2295817
[13] Lancaster, K. (1966). The Solution of Qualitative Comparative Statics Problems. Quarterly Journal of Economics, 53, 278-295.
https://doi.org/10.2307/1880693
[14] Lewis, P. (2016). Quantum Ontology. New York: Oxford University Press.
[15] Ligrone, R. (2019). Biological Innovations that Built the World: A Four-Billion-Year Journey through Life & Earth History. (pp. 478). Berlin: Springer.
https://doi.org/10.1007/978-3-030-16057-9
[16] Maybee, S. (1986). A Method for Identifying Sign Solvable Systems. M.S. Thesis, Boulder, CO: University of Colorado.
[17] Metzler, L. (1945). Stability of Multiple Markets: The Hicks Conditions. Econometrica, 13, 277-292.
https://doi.org/10.2307/1906922
[18] Popper, K. (1934, 1959). The Logic of Scientific Discovery. Reprint, New York: Harper and Row.
[19] Ritschard, G. (1983). Computable Qualitative Comparative Statics Techniques. Econometrica, 51, 1145-1168.
https://doi.org/10.2307/1912056
[20] Samuelson, P. (1941). The Stability of Equilibrium: Comparative Statics and Dynamics. Econometrica, 9, 97-120.
https://doi.org/10.2307/1906872
[21] Samuelson, P. (1947). Foundations of Economic Analysis. Cambridge, MA: Harvard University Press.
[22] Shannon, C. E. (1948). A Mathematical Theory of Communication. Bell System Technical Journal, 27, 623-656.
https://doi.org/10.1002/j.1538-7305.1948.tb00917.x
