Monetary Policy and Unemployment—A Study of the Relationship in the United States

Abstract

This paper examines the relationship between monetary policy and unemployment in the United States from the first quarter of 1983 to the second quarter of 2018. The data are divided into two groups, a pre-crisis group and a post-crisis group, split at the 2008 global financial crisis. The paper uses an extended version of the original Taylor rule that adds an unemployment gap term. The results suggest that in both periods the unemployment gap has a positive and consistently significant impact on the Fed interest rate. The implication is that central banks should adopt an easy monetary policy to lift a domestic economy out of recession.


1. Introduction

Monetary policy has long been one of the most effective tools available to authorities and governments for stabilizing consumer prices, increasing economic output and, more importantly, creating jobs. When the number of unemployed people rises, the authorities can expand credit and the money supply, which in turn stimulates investment demand from domestic companies and international capital. As more companies are founded and production expands, demand for labour grows to fill the newly created positions, so more individuals can find jobs and earn a steady income. On the other hand, the increased demand and the larger capital inflow push up the price of goods, measured by the CPI (Consumer Price Index), which may eventually drive the unemployment rate up again. A relatively high unemployment rate is also a signal to the government that the domestic economy is near the bottom of its cycle and that the central bank needs to inject more capital into the economic cycle. The ultimate aim of monetary policy is to stabilize prices while keeping the unemployment rate at a level the government can tolerate.

In recent years, remarkable progress has been made in keeping inflation and unemployment at acceptably low levels, largely because independent central banks in developed countries have adopted policy rules. Taylor (1993) demonstrated how US monetary policy during the last two decades of the twentieth century could be explained by a specified rule. Many later papers extended the original linear Taylor rule and argued for nonlinearities in central banks' reaction functions (Ball, 2000), attributed mainly to two causes: nonlinear links among macroeconomic variables, and differing preferences or priorities of decision-makers (Castro, 2011; Taylor & Davradakis, 2006). Recent empirical studies provide evidence that policymakers may respond to inflation and output gaps in a nonlinear way (Taylor & Davradakis, 2006; Castro, 2011; Martin & Milas, 2013).

This paper uses an extended version of the Taylor rule to examine the relationship among several variables, namely the Federal interest rate, the Consumer Price Index, the unemployment rate gap and the production gap. Most data are collected quarterly, the shortest interval available in the online database for the US. The layout of the paper is as follows: Section 2 reviews the empirical literature; Section 3 introduces the methodology; Section 4 defines the variables, carries out the unit root tests and reports descriptive statistics; Section 5 presents the results; and, last but not least, a conclusion summarizes the findings of the paper.

2. Literature Review

Since the start of the 1990s, many central banks have adopted an inflation targeting framework (Bernanke & Mishkin, 1997), which is regarded as beneficial in several respects. The advantages include 1) greater independence for central banks; 2) better accuracy in assessing the potential level of inflation; 3) improved communication between policymakers and the public thanks to greater transparency; and 4) greater credibility of monetary policy itself (Bernanke & Mishkin, 1997; Svensson, 2000). As the literature on monetary policy has developed, researchers have paid increasing attention to Taylor's (1993) rule for more than a decade. Taylor (1993) suggested that the monetary policy of the Federal Reserve could be described by an interest rate rule built on the deviations of output and inflation from their expected values (Orphanides, 2002). The adoption of this rule initially yielded substantial improvements in policy performance in the US (Siegfried, 2000). Gerlach and Schnabel (1999) found that monetary policy in the Economic and Monetary Union area and in the United Kingdom could be well described by Taylor's rule, whereas the rule does not fit the Canadian economy, since none of the Taylor-rule estimates for Canada is robust enough to support that conclusion. As Anderson (2009) argues, Taylor's rule is a linear algebraic interest rate rule that specifies how a central bank should adjust the federal funds rate, or its national equivalent, in response to inflation and the output gap.

Svensson (2000) argued that central banks should move toward announcing and following a less complex instrument rule, of the kind discussed by Judd (1998). There are also many objections to implementing Taylor's rule: a substantial number of papers, such as Ball (2000), Svensson (2000) and Ko et al. (2011), criticize the idea of following it mechanically as undesirable. During the Asian financial crisis of 1997-1998 and the 2008 global financial crisis, the Federal Reserve reduced its interest rate dramatically, which supports this criticism. Another example is the Bank of England, which cut its policy rate by 4.5 percentage points, from 5 per cent to only 0.5 per cent, within a single year shortly after the 2008 global financial crisis, one of the largest reductions since the rate was first set in the late 17th century (Astley, 2009). Hence, to make the best use of Taylor's rule and adapt it to a country's own monetary policy framework, the rule must be adjusted promptly as new information arrives (Woodford, 2001). As Martin and Milas (2013) indicate, the Bank of England set aside its usual policy rule during the 2008 global financial crisis in order to achieve financial stability.

The New Keynesian (NK) model has been a useful and effective framework for analyzing monetary policy because it incorporates nominal rigidities. Blanchard et al. (2010) introduced labour market frictions similar to those of the Diamond-Mortensen-Pissarides (DMP) search model, which produces a more realistic labour market and a better ability to trace the effects of productivity shocks on inflation and unemployment, as well as to show how these effects depend on monetary policy and labour market frictions. The optimal monetary policy can then be derived. Blanchard and Galí (2007) also found that wage rigidities, labour market frictions and staggered price setting are three indispensable elements of the model if it is to explain unemployment dynamics, the effects of productivity shocks, and the role monetary policy plays in shaping those effects. Their paper shows that the degree of labour market tightness is a central determinant in an economy: a tighter market raises marginal costs and thereby creates inflationary pressure, which turns the question into one about the relationship between unemployment and labour market tightness. Strict inflation stabilization does not deliver the optimal monetary policy, because labour market frictions and real wage rigidities act as distortions that change with shocks (Blanchard & Galí, 2007). The optimal policy instead tolerates some inflation in order to keep fluctuations in unemployment within a narrower range.

Gertler, Sala and Trigari (2008) used a DSGE (dynamic stochastic general equilibrium) model based on the New Keynesian paradigm to investigate the propagation of shocks and the dynamics of inflation. In this framework, price rigidities create a link between real and nominal activity. Clarida et al. (1999) indicated that inflation dynamics are strongly connected to firms' marginal cost, as represented by unit labour cost. Some papers assume a frictionless labour market, for example Clarida, Gali and Gertler (1999), whereas Faccini, Millard and Zanetti (2011) argued that the analysis should be carried out in a labour market with frictions, for two main reasons: first, a frictional labour market is more comprehensive and makes it easier to introduce the determinants of unemployment into the model; second, more empirical papers have obtained a better fit by incorporating labour market frictions. Charpe and Kühn (2012), allowing for unemployment and staggered nominal wages, found a result that contrasts with many existing empirical studies: the effect on the employment of existing workers is positive, which makes the model immune to the criticism of Barros et al. (2016), who argue that models relying on wage rigidity ignore the mutual gains from trade between firms and workers with ongoing contracts. There is significant evidence that a DSGE model with wage rigidity describes the data better than a model with flexible wages. Faccini, Millard and Zanetti (2011) reported different findings: marginal cost depends on unit labour cost together with the frictional cost of hiring, and wage rigidities versus a flexible-wage specification can produce opposite responses of hiring costs and unit labour costs. Krause and Lubik (2003) reached similar conclusions to Faccini, Millard and Zanetti, whose study of the UK economy shows that, given the parameter estimates of the model, wage rigidities are largely irrelevant for inflation dynamics, while revealing key features of the British economy.

Previous empirical studies raise other issues that can affect this kind of research, such as the difficulty of accurately estimating expected output and the errors that arise from using real-time rather than revised data (Orphanides, 2002). Uncertainty in estimating the output gap, whether through over- or under-forecasting, can lead to wrong policy decisions. The most commonly used filtering method is the Hodrick-Prescott (HP) filter, which has several weaknesses: limited accuracy, possible misspecification of the underlying economic structure, and a suggested smoothing value calibrated to US data that may cause significant errors when applied to other countries. Moreover, output fluctuates more frequently in most emerging countries, where economic stability depends heavily on external factors, so the uncertainty around expected output can be greater (Levin et al., 1999). Finally, the basic Taylor rule does not allow central banks to smooth interest rate movements, whereas a smoothing term in the reaction function can play a crucial role in building credibility and avoiding disruption from noise in financial markets (Clarida et al., 1999).

3. Empirical Framework

3.1. Basic Taylor’s Rule

The Basic Taylor’s Rule can be shown in the following formula, i.e.,

i_t = r^* + \pi_t + f_\pi (\pi_t - \pi^*) + f_y y_t,  (1)

where i_t is the policy interest rate of the central bank, r^* is the equilibrium real interest rate, \pi_t is the one-year inflation rate, \pi^* is the target inflation rate set by the central bank and y_t is the production gap. According to Taylor's Rule, with the parameters set to r^* = 2%, \pi^* = 2%, f_\pi = 0.5 and f_y = 0.5, the equation fits the US Fed rate between 1987 and 1992 well. Taylor's policy rule is designed so that the inflation gap and the production gap are minimized through the policy maker's objective function. The policy maker minimizes the expected value of a loss function of the following form:

\lambda (y_t - y_t^*)^2 + (1 - \lambda)(\pi_t - \pi_t^*)^2, \quad \lambda \in [0, 1],  (2)

where (y_t - y_t^*) is the production gap, \pi_t is the actual inflation rate at time t and \pi_t^* is the target inflation rate. Taylor (1993) regards the target variables of macroeconomic policy as the production level, the unemployment rate, the volatility of these variables and their deviations from expected values. However, changing the natural levels of production and employment is not a direct goal of prudent monetary policy, and estimates of these variables are not determined by macroeconomic policy. Thus, it is important to choose a target inflation level that maximizes aggregate output and ensures that actual inflation fluctuates only slightly around the target.
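As a simple illustration of Equation (1), the following Python sketch computes the policy rate implied by Taylor's original parameterization; the inflation and output-gap figures in the example are hypothetical and are not the paper's data.

```python
# Minimal sketch of Equation (1) with Taylor's (1993) parameters.
# The inflation and output-gap inputs below are illustrative only.

def taylor_rate(inflation, output_gap,
                r_star=2.0, pi_star=2.0, f_pi=0.5, f_y=0.5):
    """Policy rate implied by the basic Taylor rule; all values in percent."""
    return r_star + inflation + f_pi * (inflation - pi_star) + f_y * output_gap

# Example: 3% inflation and a 1% positive output gap
print(taylor_rate(3.0, 1.0))  # 2 + 3 + 0.5*(3-2) + 0.5*1 = 6.0
```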

Two points are worth mentioning about the policy objectives in Taylor's rule. First, the target output level is not determined by the policy model. The assumption of a zero production gap is tied to potential output, a point also supported by Rotemberg and Woodford (1997). In an imperfectly competitive goods market, the economic system tends to be inefficient because of the market's underlying mechanism, so government subsidies or direct support would have to remain high to sustain the efficient level of production. At the same time, the natural level of production is lower than the efficient equilibrium level, but the government should not stimulate the economy to push output up to that first-best level. The key point of Taylor's rule is that a country's monetary policy is not a tool for evaluating the efficiency of aggregate production but an effective tool for keeping output volatility at a reasonable level.

In Taylor’s Rule, zero production gap is reasonable because such output level fits the natural rate hypothesis. According to Taylor’s Rule, the model should reflect the condition that policy makers may consider the economy will automatically return to the level of natural unemployment. Meanwhile, there is no long-term stable relationship between inflation and production gap. In the long-run, policy makers do not have to worry about the deviation of production level from its natural level. Setting the output gap function to zero seems reflect the real economic structure better.

3.2. Evolution of Basic Taylor’s Rule

Imposing several restrictions on the parameters in Equation (1), we have

i_t = \alpha + \beta_\pi (\pi_t - \pi^*) + \beta_y y_t,  (3)

in which \alpha = r^* + \pi^* and \beta_\pi = 1 + f_\pi. According to Equation (3), the central bank's policy interest rate i_t can be divided into three parts: the constant term \alpha, combining the equilibrium real interest rate and the target inflation rate; the response to the deviation of current inflation from target, \beta_\pi (\pi_t - \pi^*); and the response to the production gap, \beta_y y_t. Replacing current values with expectations, we obtain

i_t = \alpha + \beta_\pi (E_t[\pi_{t+k}] - \pi^*) + \beta_y E_t[y_{t+m}],  (4)

where E_t[\cdot] denotes the conditional expectation given information at time t. The lagged interest rate is then introduced into Equation (4) to obtain Equation (5), i.e.,

i_t = (1 - \rho)\{\alpha + \beta_\pi (E_t[\pi_{t+k}] - \pi^*) + \beta_y E_t[y_{t+m}]\} + \rho i_{t-1},  (5)

in which \rho \in [0, 1]. The parameter \rho can be interpreted as the marginal impact of the lagged interest rate i_{t-1} on the current interest rate i_t. Estimating \rho is a delicate task, since it provides evidence on how the central bank adjusts current monetary policy: \rho determines the speed of interest rate adjustment, and a value of \rho close to 1 means that the central bank adjusts the interest rate relatively slowly.
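A minimal sketch of the interest-rate smoothing in Equation (5) is shown below; the values of \rho, the starting rate and the rule-implied target are hypothetical and only illustrate how a \rho close to 1 slows the adjustment of the policy rate.

```python
# Partial-adjustment (smoothing) rule of Equation (5): each period the policy rate
# moves only a fraction (1 - rho) of the way toward the rule-implied target.
# rho and the target below are illustrative assumptions, not estimates from the paper.

def smoothed_path(i0, target, rho, periods=8):
    path, i_prev = [], i0
    for _ in range(periods):
        i_t = (1 - rho) * target + rho * i_prev
        path.append(round(i_t, 3))
        i_prev = i_t
    return path

print(smoothed_path(i0=1.0, target=5.0, rho=0.8))  # high rho: slow convergence
print(smoothed_path(i0=1.0, target=5.0, rho=0.2))  # low rho: fast convergence
```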

Taylor (1993) does not provide a formal econometric analysis of monetary policy; the paper uses Equation (1) to simulate the Fed rate between 1987 and 1992. Taylor (1998) extends Taylor (1993) by estimating Equation (6) with Ordinary Least Squares (OLS) over different U.S. sample periods, i.e.,

i_t = \delta + \beta_\pi \pi_t + \beta_y y_t + \varepsilon_t,  (6)

where \delta = r^* - (\beta_\pi - 1)\pi^*.

3.3. Analysis Framework

In this paper, our research builds on Taylor (1998)'s rule by introducing an unemployment gap term into Equation (6), i.e.,

i_t = \alpha + \beta_\pi (\pi_t - \pi^*) + \beta_u (u_t - u^*) + \beta_y y_t,  (7)

where (u_t - u^*) is the unemployment gap between the actual unemployment rate u_t and the target unemployment rate u^*. We first define the variables in Equation (7).

3.3.1. Potential Production Level

We apply the Hodrick-Prescott (HP) filter in this part to separate the trend component from the cyclical component of aggregate production. Given a production level y_t at time t, we can write y_t as

y_t = y_t^T + y_t^c, \quad t = 1, 2, \ldots, T,  (8)

where {y_t} is the time series, {y_t^T} is its trend component and {y_t^c} is its cyclical component. To derive the trend component of total production, we define the squared-loss function, i.e.,

f(\lambda) = \sum_{t=1}^{T} (y_t - y_t^T)^2 + \lambda \sum_{t=2}^{T-1} [(y_{t+1}^T - y_t^T) - (y_t^T - y_{t-1}^T)]^2,  (9)

where \lambda is the smoothing parameter that determines how closely the trend component {y_t^T} tracks aggregate production {y_t}. The trend component is defined as the minimizer of the squared loss:

\{y_t^T\} = \arg\min \left\{ \sum_{t=1}^{T} (y_t - y_t^T)^2 + \lambda \sum_{t=2}^{T-1} [(y_{t+1}^T - y_t^T) - (y_t^T - y_{t-1}^T)]^2 \right\},  (10)

In Equation (10), \lambda is set exogenously; a higher \lambda produces a smoother trend component {y_t^T}, and as \lambda goes to infinity {y_t^T} approaches a linear trend. Since we use quarterly data in the empirical part, we set \lambda = 100 based on empirical values.
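The decomposition in Equations (8)-(10) can be reproduced with a standard HP filter routine. The sketch below uses statsmodels and the paper's \lambda = 100 on a randomly generated placeholder series; the actual Bloomberg GDP data are not shown here.

```python
# HP-filter decomposition of a GDP series into trend and cycle (Equations (8)-(10)).
# The series below is a random placeholder; replace it with the real quarterly GDP data.
import numpy as np
import pandas as pd
from statsmodels.tsa.filters.hp_filter import hpfilter

periods = pd.period_range("1983Q1", "2018Q2", freq="Q")
gdp = pd.Series(np.cumsum(np.random.normal(50, 30, len(periods))) + 3500,
                index=periods, name="real_gdp")

# lamb=100 mirrors the smoothing parameter chosen in this paper for quarterly data
cycle, trend = hpfilter(gdp, lamb=100)

potential_output = trend   # trend component y_t^T
output_gap = cycle         # cyclical component y_t^c, used as the production gap
```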

3.3.2. Theoretical Foundations for Stationary Analysis

For stationary time series, numerical characteristics such as the mean and variance are stable over time, so past information can be used effectively to predict future values. However, many economic variables, such as GDP and the amount of savings held by banks, are non-stationary: their behaviour drifts over time, and predictions based on past information may not be reliable or meaningful. One effective way to deal with non-stationary variables is differencing: for example, a non-stationary series that becomes stationary after first differencing is integrated of order one, I(1). In general, if a time series {y_t} becomes stationary after differencing d times, we write y_t ~ I(d).

In time series analysis, "spurious regression" refers to the situation in which two non-stationary variables appear to have a long-term relationship merely because both contain a time trend. Therefore, before performing cointegration analysis on time-series variables, it is necessary to run a unit root test on the original data to see whether each series is stationary. To check for a unit root, the Augmented Dickey-Fuller (ADF) test is applied. Assuming a time series y_t with first-order autocorrelation, we have

y_t = \alpha + \beta_1 y_{t-1} + \varepsilon_t.  (11)

Subtracting y_{t-1} from both sides, we have

\Delta y_t = \alpha + (\beta_1 - 1) y_{t-1} + \varepsilon_t = \alpha + \varphi_1 y_{t-1} + \varepsilon_t,  (12)

For the time series y_t to be stationary, \varphi_1 must be significantly different from zero (and negative). The null hypothesis of the test is that the series under examination has a unit root, i.e., \varphi_1 = 0.
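A sketch of the ADF test of Equations (11)-(12) with statsmodels is shown below; it reuses the placeholder output gap from the HP-filter sketch above, and the 10% threshold simply mirrors the decision rule applied later in Table 2.

```python
# Augmented Dickey-Fuller test (Equations (11)-(12)): H0 is that the series has a unit root.
from statsmodels.tsa.stattools import adfuller

def adf_report(series, name, alpha=0.10):
    stat, pvalue, usedlag, nobs, crit, _ = adfuller(series, autolag="AIC")
    verdict = "stationary (reject unit root)" if pvalue < alpha else "non-stationary"
    print(f"{name}: ADF t = {stat:.3f}, p = {pvalue:.3f} -> {verdict}")

# Example with the placeholder output gap from the HP-filter sketch
adf_report(output_gap, "y_t - y*")
```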

3.3.3. Theoretical Foundations for Cointegration Analysis

In this paper, we employ cointegration analysis for our variables. Cointegration is a statistical description of a long-term equilibrium relationship between non-stationary economic variables. Such a long-term stable relationship can exist between two variables or, in pairs, among three or more variables, although the latter case is more complicated than the former.

In this paper, we conduct the cointegration analysis on more than two variables. Based on the basic theory of stationarity and cointegration, we first run unit root tests on all variables to check whether they are stationary. If a variable is not stationary, we need to determine its order of integration.

The premise of cointegration analysis is that all variables have the same order of integration; only then can the cointegration relationships among them be stable. If the orders of integration differ, it is necessary to check whether linear combinations of the variables are stationary.

4. Empirical Analysis

4.1. Definition of Variables

In this paper, we investigate the impact of unemployment on the monetary policy of the U.S. Federal Reserve before and after the 2008 financial crisis. Specifically, we divide our data into two samples, the pre-crisis sample (1983-2007) and the post-crisis sample (2008-2018). Our model extends Taylor (1998)'s rule to include the influence of unemployment:

i_t = \alpha + \beta_\pi (\pi_t - \pi^*) + \beta_u (u_t - u^*) + \beta_y (y_t - y^*).  (13)

Before conducting the time series analysis, we need to define our variables.

4.1.1. The Policy Interest Rate i_t

In this paper, we follow Taylor (1998)'s framework and choose the Fed Rate, a market-determined rate, as our policy interest rate i_t. To match the sample range and the frequency of the other variables, we use quarterly data ranging from 1983Q1 to 2018Q2. The Fed Rate is an annualized rate expressed in percent (%).

4.1.2. The Inflation Gap (π_t − π^*)

In the existing literature on Taylor's rule, indicators such as the Consumer Price Index (CPI), core CPI and the GDP deflator are usually used to measure inflation. In this paper, we use the year-on-year quarterly CPI deflator as the measure of actual inflation π_t. For the target inflation level, we follow Taylor's basic model and set π^* = 2% annually, or equivalently 0.5% quarterly.

4.1.3. The Unemployment Rate Gap (u_t − u^*)

Only a limited number of papers discuss the natural unemployment rate of the United States, but there is a consensus that it increased after 1992, peaking at 5.6% in 2002, and has fluctuated within a relatively stable range of 4.8% to 5.6% since 2000. Compared with countries at a similar level of development, the US has a relatively low natural unemployment rate. Based on existing research conclusions, we take 4.5% as the target unemployment rate.

As for the actual unemployment rate, we use the quarterly unemployment rate disclosed by the U.S. Social Security Administration.

4.1.4. The Production Gap (y_t − y^*)

There is no consensus on the measurement of potential aggregate production, since there is no unique method for computing potential output. Among the many research methods for potential aggregate output, there are two main categories. The first is statistical trend decomposition, in which the time series is decomposed into permanent and transitory components; popular methods include the Beveridge-Nelson (BN) decomposition based on an ARIMA model, the Kalman filter and the Hodrick-Prescott filter.

The other category is the economic structural relationship estimation method, which uses basic macroeconomic theory to isolate the structural and cyclical factors that affect aggregate output. A typical example is the production function method, which considers the combined impact of capital, labour and technological progress on output. Its shortcoming, however, lies in the complicated computation involved.

In this paper, we mainly adopt the Hodrick-Prescott filter, which decomposes the time series by minimizing the variance of its fluctuations. As the measure of aggregate output, we use real quarterly Gross Domestic Product (GDP) in billions of US dollars.

4.2. Data Preparation and Descriptive Statistics

In this paper, we use quarterly data between 1983Q1 and 2018Q2. Our data source is the Bloomberg database and we use EViews 8.0 for data processing.

We first apply the Hodrick-Prescott filter to decompose real GDP into two parts. As shown in Figure 1, the red line is the trend component, which tracks real GDP closely, and the green line is the cyclical component. In this paper, we take the trend component as expected real output, so the production gap is defined as the cyclical component, i.e., the green line in Figure 1.

The descriptive statistics for all variables are shown in Table 1. It is worth mentioning that the corresponding probabilities of the J-B statistics for i_t and u_t − u^* are less than 0.05, so the null hypothesis that each of these variables is normally distributed is rejected at the 95% confidence level. However, we cannot reject the null hypothesis that y_t − y^* is normally distributed.

4.3. Unit Root Test

The result of the unit root test using the Augmented Dickey-Fuller (ADF) method is shown in Table 2.

Figure 1. Hodrick-Prescott filter of GDP.

Table 1. Descriptive statistics of variables.

Table 2. ADF test for unit roots.

According to Table 2, the corresponding probabilities of the ADF t-statistics are all smaller than 0.1. This means that, at the 90% confidence level, none of the variables has a unit root over our sample range, so they can be used directly in the time-series analysis.

4.4. Johansen Cointegration Test

Starting from this part, we divide our sample into two groups, i.e., the pre-crisis group and the post-crisis group. The pre-crisis group contains the data from 1983 to 2007 and the post-crisis group contains data from 2008 to 2018.

According to the empirical framework in the previous section, there may be more than one long-term cointegration relationship among our variables. Thus, it is necessary to conduct the Johansen cointegration test to determine these relationships.

The Johansen procedure starts from the null hypothesis that there is no cointegration among the variables. The trace statistic is computed to test this hypothesis; if it is rejected, the null becomes that there is at most one cointegrating relationship, and the trace statistic is computed again. The process is repeated until we fail to reject the hypothesis that there are at most N cointegrating relationships, and we then conclude that there are N cointegrating relationships among the variables.
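A sketch of this sequential trace test with statsmodels is given below; the DataFrame name, the deterministic term and the lag order are illustrative assumptions and not necessarily the settings behind Table 3.

```python
# Johansen cointegration (trace) test: sequentially test H0 of at most r
# cointegrating relationships, r = 0, 1, ..., until H0 is not rejected.
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen

def johansen_rank(df, det_order=0, k_ar_diff=1, col=1):
    """col=1 selects the 5% critical values stored in result.cvt."""
    result = coint_johansen(df, det_order, k_ar_diff)
    for r, (trace_stat, crit_5pct) in enumerate(zip(result.lr1, result.cvt[:, col])):
        if trace_stat < crit_5pct:   # fail to reject "at most r relationships"
            return r
    return df.shape[1]

# pre_crisis is assumed to be a DataFrame holding i_t, the inflation gap,
# the unemployment gap and the production gap for 1983Q1-2007Q4
# print(johansen_rank(pre_crisis))
```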

According to Table 3, there are two cointegrating relationships among the variables in the pre-crisis period but only one in the post-crisis period.

4.5. Granger Causality Test

We conduct the Granger causality test in this part and the corresponding results are shown in Table 4. Table 4 shows that in the pre-crisis period there are two Granger causality relationships. First, i_t Granger-causes y_t, indicating that monetary policy effectively stimulated the US economy before the 2008 financial crisis. Second, y_t Granger-causes u_t, indicating that the domestic economy has a lagged effect on unemployment conditions in the US.

As for the post-crisis group, there is only one Granger causality relationship: y_t Granger-causes u_t. This result is consistent with the pre-crisis group and again indicates the lagged impact of the economy on unemployment in the U.S.
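A sketch of a pairwise Granger causality test with statsmodels follows; the lag length, DataFrame and column names are assumptions, and note that grangercausalitytests checks whether the series in the second column Granger-causes the series in the first.

```python
# Granger causality: does the series in the second column help predict the first?
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

def granger_pvalue(caused, causing, maxlag=4):
    """Smallest F-test p-value across lags 1..maxlag (an illustrative summary)."""
    data = pd.concat([caused, causing], axis=1).dropna()
    results = grangercausalitytests(data, maxlag=maxlag, verbose=False)
    return min(res[0]["ssr_ftest"][1] for res in results.values())

# Example: does y_t Granger-cause u_t in the pre-crisis sample?
# print(granger_pvalue(pre_crisis["u_gap"], pre_crisis["y_gap"]))
```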

Based on the previous analysis, we conclude that monetary policy does have a significant impact on unemployment in the pre-crisis group. The mechanism behind this relationship is that monetary policy first stimulates the domestic economy effectively; the economy then has a lagged effect on the employment status in the U.S.

4.6. Regression Result and Conclusion

We apply the Ordinary Least Squares (OLS) method to the extended Taylor (1998) rule as follows,

i_t = \alpha + \beta_\pi (\pi_t - \pi^*) + \beta_u (u_t - u^*) + \beta_y (y_t - y^*).

The regression results for the two groups are reported in Table 5.

Table 3. Johansen cointegration test.

Table 4. Granger causality test.
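A sketch of the OLS estimation of Equation (13) with statsmodels is shown below; the DataFrame and column names are assumptions about how the data might be organized, not the paper's actual code.

```python
# OLS estimation of i_t = alpha + beta_pi*(pi_t - pi*) + beta_u*(u_t - u*) + beta_y*(y_t - y*)
import statsmodels.api as sm

def estimate_taylor_extension(df):
    """df is assumed to hold columns: fed_rate, infl_gap, unemp_gap, output_gap."""
    X = sm.add_constant(df[["infl_gap", "unemp_gap", "output_gap"]])
    return sm.OLS(df["fed_rate"], X).fit()

# Estimate separately for the two sub-samples, mirroring Table 5
# pre = estimate_taylor_extension(pre_crisis)
# post = estimate_taylor_extension(post_crisis)
# print(pre.summary()); print(post.summary())
```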

The interpretation of the regression coefficients in the pre-crisis group is as follows.

Table 5. OLS regression result.

1) A 1% increase in (π_t − π^*) increases i_t by 1.002%;

2) A 1% increase in (u_t − u^*) increases i_t by 0.578%;

3) A 1% increase in (y_t − y^*) increases i_t by 0.010%.

Similarly, the interpretation of the coefficients in the post-crisis group is:

1) A 1% increase in (π_t − π^*) increases i_t by 0.033%;

2) A 1% increase in (u_t − u^*) increases i_t by 0.119%;

3) A 1% increase in (y_t − y^*) increases i_t by 0.003%.

According to Table 5, in both periods the unemployment gap has a positive impact on the Fed interest rate: the higher the unemployment gap, the higher the fitted Fed Rate. Based on the Granger causality analysis in the previous part, economic development has a lagged impact on unemployment. When the unemployment rate is high, domestic economic development may be slow, which indicates that the Fed Rate can be lowered so as to adopt an easy monetary policy.

Meanwhile, the significance of this relationship does not decrease in the post-crisis group. This means that in the post-crisis period the unemployment rate still reflects the recession of the economy, and an easy monetary policy can significantly stimulate the domestic economy of the US.

5. Conclusion

This paper has examined the variables linking monetary policy and unemployment: the Federal interest rate; the inflation gap, measured with the CPI deflator; the unemployment rate gap, based on the actual unemployment rate disclosed by the United States Social Security Administration; and the production gap, based on real quarterly Gross Domestic Product in billions of US dollars. After reviewing the results generated from the collected data, the paper finds a significantly positive link between the unemployment gap and the Federal interest rate in both the pre-crisis and post-crisis periods for the US. In other words, a higher Federal interest rate accompanies a higher unemployment rate, which would further slow the domestic economy and therefore implies that the authorities should implement an easy monetary policy and decrease the Fed rate. Another finding is that the inflation gap affected the Federal interest rate strongly and positively before the financial crisis, but this effect drops dramatically after the crisis. The production gap has a relatively small connection with the Federal interest rate before the crisis and an even smaller one after it. The significance of the link between the unemployment rate and the Fed interest rate has not decreased after the 2008 global financial shock, which indicates that the unemployment rate still reflects the downturn of the domestic economy in the United States even after the crisis. As a result, an easy monetary policy could strongly stimulate an economy in recession and have a positive impact on the domestic economy of the US.

To gain a broader and deeper understanding of the link between monetary policy and unemployment, future studies are recommended to carry out cross-country research and compare the differences between countries. Countries could be chosen as specific groups, for example a group of developed countries or a group of emerging countries. With many sample countries, however, it could be hard to obtain data at shorter intervals, i.e., monthly or quarterly data, especially for emerging countries.

Conflicts of Interest

The authors declare no conflicts of interest regarding the publication of this paper.

References

[1] Anderson, J. (2009). The China Monetary Policy Handbook. In J. R. Barth, J. A. Tatom, & G. Yago (Eds.), China’s Emerging Financial Markets (pp. 167-265).
https://doi.org/10.1007/978-0-387-93769-4_5
[2] Astley, M. (2009). Monetary Policy Roundtable. Bank of England Quarterly Bulletin, 49, 238-240.
[3] Ball, L. (2000). Near-Rationality and Inflation in Two Monetary Regimes. Economics Working Paper Archive.
https://doi.org/10.3386/w7988
[4] Barros-Campello, E., Pateiro-Rodríguez, C., & Salcines-Cristal, J. V. (2016). Existe evidencia de asimetrías en la gestión de la política monetaria por parte del Banco Central Europeo? (1999-2014). El Trimestre Económico, 83, 537-564.
[5] Bernanke, B. S., & Mishkin, F. S. (1997). Inflation Targeting: A New Framework for Monetary Policy? Journal of Economic Perspectives, 11, 97-116.
https://doi.org/10.1257/jep.11.2.97
[6] Blanchard, O., & Galí, J. (2007). A New Keynesian Model with Unemployment. CFS Working Paper Series.
https://doi.org/10.2139/ssrn.1688968
[7] Blanchard, O., Dell’Ariccia, G., & Mauro, P. (2010). Rethinking Macroeconomic Policy. Journal of Money, Credit and Banking, 42, 199-215.
https://doi.org/10.1111/j.1538-4616.2010.00334.x
[8] Castro, V. (2011). Can Central Banks Monetary Policy Be Described by a Linear (Augmented) Taylor Rule or by a Nonlinear Rule? Journal of Financial Stability, 7, 228-246.
https://doi.org/10.1016/j.jfs.2010.06.002
[9] Charpe, M., & Kühn, S. (2012). Bargaining, Aggregate Demand and Employment. MPRA Paper.
[10] Clarida, R., Gali, J., & Gertler, M. (1999). The Science of Monetary Policy. Journal of Economic Literature, 37, 1661-1707.
https://doi.org/10.1257/jel.37.4.1661
[11] Faccini, R., Millard, S., & Zanetti, F. (2011). Wage Rigidities in an Estimated DSGE Model of the UK Labour Market. Bank of England Working Papers.
https://doi.org/10.2139/ssrn.1765841
[12] Gerlach, S., & Schnabel, G. (1999). The Taylor Rule and Interest Rates in the EMU Area. CEPR Discussion Papers.
https://doi.org/10.2139/ssrn.856944
[13] Gertler, M., Sala, L., & Trigari, A. (2008). An Estimated Monetary DSGE Model with Unemployment and Staggered Nominal Wage Bargaining. Journal of Money, Credit and Banking, 40, 1713-1764.
https://doi.org/10.1111/j.1538-4616.2008.00180.x
[14] Judd, J. P. (1998). Taylor’s Rule and the Fed: 1970-1997. Economic Review, No. 3, 3-16.
[15] Ko, G. T., Chow, C. C., Leung, G. et al. (2011). High Rate of Increased Carotid Intima-Media Thickness and Atherosclerotic Plaques in Chinese Asymptomatic Subjects with Central Obesity. International Journal of Cardiovascular Imaging, 27, 833-841.
https://doi.org/10.1007/s10554-010-9733-x
[16] Krause, M. U., & Lubik, T. A. (2003). The (Ir)relevance of Real Wage Rigidity in the New Keynesian Model with Search Frictions. Journal of Monetary Economics, 54, 706-727.
[17] Levin, A. T., Wieland, V., & Williams, J. C. (1999). Robustness of Simple Monetary Policy Rules under Model Uncertainty. In J. B. Taylor (Ed.), Monetary Policy Rules (pp. 263-299). Chicago, IL: University of Chicago Press.
https://doi.org/10.2139/ssrn.148695
[18] Martin, C., & Milas, C. (2013). Financial Crises and Monetary Policy: Evidence from the UK. Journal of Financial Stability, 9, 654-661.
https://doi.org/10.1016/j.jfs.2012.08.002
[19] Orphanides, A. (2002). Monetary-Policy Rules and the Great Inflation. American Economic Review, 92, 115-120.
https://doi.org/10.1257/000282802320189104
[20] Rotemberg, J. J., & Woodford, M. (1997). An Optimization-Based Econometric Framework for the Evaluation of Monetary Policy. NBER Macroeconomics Annual, 12, 297-346.
https://doi.org/10.1086/654340
[21] Siegfried, N. A. (2000). Monetary Policy and Investment in Germany. Quantitative Macroeconomics Working Papers.
[22] Svensson, L. (2000). How Should Monetary Policy Be Conducted in an Era of Price Stability? Seminar Papers, Stockholm University, Institute for International Economic Studies.
https://doi.org/10.3386/w7516
[23] Taylor, J. B. (1993). Discretion versus Policy Rules in Practice. In Carnegie-Rochester Conference Series on Public Policy (Vol. 39, pp. 195-214). Amsterdam: Elsevier.
https://doi.org/10.1016/0167-2231(93)90009-L
[24] Taylor, J. B. (1998). A Historical Analysis of Monetary Policy Rules. Working Paper 6768, Cambridge, MA: National Bureau of Economic Research.
https://doi.org/10.3386/w6768
[25] Taylor, M. P., & Davradakis, E. (2006). Interest Rate Setting and Inflation Targeting: Evidence of a Nonlinear Taylor Rule for the United Kingdom. Studies in Nonlinear Dynamics & Econometrics, 10, 1359.
https://doi.org/10.2202/1558-3708.1359
[26] Woodford, M. (2001). The Taylor Rule and Optimal Monetary Policy. The American Economic Review, 91, 232-237.
https://doi.org/10.1257/aer.91.2.232
