Working Paper Series

Browse the categories to access academic, scientific, and opinion publications by faculty and students of the Department of Economics at PUC-Rio.

Ignorance is bliss: voter education and alignment in distributive politics

N 690, 21/09/2021

Central-government politicians channel resources to sub-national entities for political gain. We show formally that the central politicians' allocation decision has two drivers: political alignment (between central and local politicians) and the level of local political accountability. However, the drivers operate one at a time: alignment matters before local elections, while local political accountability matters before central elections. We then test our model using Brazilian data, which corroborates our results. Furthermore, we show and explain why political accountability becomes a curse: better-educated districts receive fewer transfers in equilibrium.

Federico Boffa, Amedeo Piolatto, Francisco de Lima Cavalcanti.

Anchored Inflation Expectations

N 689, 06/07/2021

We develop a theory of low-frequency movements in inflation expectations, and use it to interpret joint dynamics of inflation and inflation expectations for the United States and other countries over the post-war period. In our theory long-run inflation expectations are endogenous. They are driven by short-run inflation surprises, in a way that depends on recent forecasting performance and monetary policy. This distinguishes our theory from common explanations of low-frequency properties of inflation.  The model, estimated using only inflation and short-term forecasts from professional surveys, accurately predicts observed measures of long-term inflation expectations and identifies episodes of unanchored expectations.

Carlos Viana de Carvalho, Stefano Eusepi, Emanuel Moench, Bruce Preston.

Persistent Monetary Non-neutrality in an Estimated Menu-Cost Model with Partially Costly Information

N 688, 05/07/2021

We propose a model that reconciles microeconomic evidence of frequent and large price changes with sizable monetary non-neutrality. Firms incur separate lump-sum costs to change prices and to gather and process some information about marginal costs. Additional relevant information is continuously available, and can be factored into pricing decisions at no cost. We estimate the model by Simulated Method of Moments, using price-setting statistics for the U.S. economy. The model with free idiosyncratic and costly aggregate information fits well both targeted and untargeted microeconomic moments and generates more than twice as much monetary non-neutrality as the Calvo model.

Marco Bonomo, Carlos Viana de Carvalho, Rene Garcia, Vivian Malta Nunes, Rodolfo Dinis Rigato.

Multi-Product Pricing: Theory and Evidence From Large Retailers

N 687, 03/07/2021

We study a unique dataset with comprehensive coverage of daily prices in large multi-product retailers in Israel. Retail stores synchronize price changes around occasional "peak" days when they reprice around 10% of their products. To assess aggregate implications of partial price synchronization, we develop a new model in which multi-product firms face economies of scope in price adjustment, and synchronization is endogenous. Synchronization of price changes attenuates the average price response to monetary shocks, but only high degrees of synchronization can substantially strengthen monetary non-neutrality. Our calibrated model generates as little monetary non-neutrality as in Golosov and Lucas (2007).

Marco Bonomo, Carlos Viana de Carvalho, Oleksiy Kryvtsov, Rodolfo Dinis Rigato, Sigal Ribon.

Taylor Rule Estimation by OLS

N 686, 02/07/2021

Ordinary Least Squares (OLS) estimation of monetary policy rules produces potentially inconsistent estimates of policy parameters. The reason is that central banks react to variables, such as inflation and the output gap, which are endogenous to monetary policy shocks. Endogeneity implies a correlation between regressors and the error term, and hence an asymptotic bias. In principle, Instrumental Variables (IV) estimation can solve this endogeneity problem. In practice, IV estimation poses challenges, as the validity of potential instruments depends on various unobserved features of the economic environment. We argue in favor of OLS estimation of monetary policy rules. To that end, we show analytically in the three-equation New Keynesian model that the asymptotic OLS bias is proportional to the fraction of the variance of regressors accounted for by monetary policy shocks. Using Monte Carlo simulation, we then show that this relationship also holds in a quantitative model of the U.S. economy. As monetary policy shocks explain only a small fraction of the variance of regressors typically included in monetary policy rules, the endogeneity bias is small. Using simulations, we show that, for realistic sample sizes, the OLS estimator of monetary policy parameters outperforms IV estimators.

Forthcoming in the Journal of Monetary Economics

Carlos Viana de Carvalho, Fernanda Feitosa Nechio, Tiago Santana Tristão.
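The bias mechanism described in the abstract can be illustrated with a minimal Monte Carlo sketch, assuming a stylized one-equation setup (not the paper's three-equation New Keynesian model): inflation reacts to the policy shock, so the OLS estimate of the rule coefficient is biased in proportion to the share of regressor variance explained by the shock. The function name and parameters (`ols_bias`, `kappa`) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def ols_bias(sigma_eps, phi=1.5, kappa=1.0, T=200_000):
    """Toy policy rule i_t = phi*pi_t + eps_t, where inflation reacts to
    the monetary shock: pi_t = u_t - kappa*eps_t.  Returns the OLS
    estimate of phi and the variance share of pi_t due to the shock."""
    eps = sigma_eps * rng.standard_normal(T)   # monetary policy shock
    u = rng.standard_normal(T)                 # non-policy disturbance
    pi = u - kappa * eps                       # inflation is endogenous
    i = phi * pi + eps                         # Taylor-type rule
    phi_hat = np.cov(i, pi)[0, 1] / np.var(pi)
    share = (kappa * sigma_eps) ** 2 / np.var(pi)
    return phi_hat, share

phi_small, share_small = ols_bias(sigma_eps=0.1)
phi_large, share_large = ols_bias(sigma_eps=1.0)
# When policy shocks explain little of the regressor's variance,
# the OLS estimate is close to the true phi = 1.5; when they explain
# a large share, the endogeneity bias is substantial.
```

In this toy model the asymptotic bias equals minus the shock's variance share divided by `kappa`, which is the proportionality highlighted in the abstract.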

Price selection

N 685, 30/06/2021

Price selection is a simple, model-free measure of selection in price setting and its contribution to inflation dynamics. It exploits comovement between inflation and the level from which adjusting prices departed. Prices that increase from lower-than-usual levels tend to push inflation above average. Using detailed micro-level consumer price data for the United Kingdom, the United States, and Canada, we find robust evidence of strong price selection across goods and services. At a disaggregate level, price selection accounts for 37% of inflation variance in the United Kingdom, 36% in the United States, and 28% in Canada. Price selection is stronger for goods with less frequent price changes or with larger average price changes. Aggregate price selection is considerably weaker. A multisector sticky-price model accounts well for this evidence and demonstrates a monotone relationship between price selection and monetary non-neutrality.

Revised in May 2021

Article accepted for publication in the Journal of Monetary Economics


Carlos Viana de Carvalho, Oleksiy Kryvtsov.

Residual Based Nodewise Regression in Factor Models with Ultra-High Dimensions: Analysis of Mean-Variance Portfolio Efficiency and Estimation of Out-of-Sample and Constrained Maximum Sharpe Ratios

N 684, 22/06/2021

We provide a new theory for nodewise regression when the residuals from a fitted factor model are used, and we apply our results to the analysis of the maximum Sharpe ratio when the number of assets in a portfolio is larger than its time span. We introduce a new hybrid model in which factor models are combined with feasible nodewise regression. Returns are generated from an increasing number of factors plus idiosyncratic components (errors). The precision matrix of the idiosyncratic terms is assumed to be sparse, but the respective covariance matrix can be non-sparse. Since nodewise regression is not feasible due to the unknown nature of the errors, we provide, as a new method, a feasible residual-based nodewise regression to estimate the precision matrix of the errors. Next, we show that the residual-based nodewise regression provides a consistent estimate of the precision matrix of the errors. In another new development, we also show that the precision matrix of returns can be estimated consistently, even with an increasing number of factors. Benefiting from the consistency of the precision matrix estimate of returns, we show that: (1) portfolios in high dimensions are mean-variance efficient; (2) the maximum out-of-sample Sharpe ratio estimator is consistent, and the number of assets slows the convergence up to a logarithmic factor; (3) the maximum Sharpe ratio estimator is consistent when the portfolio weights sum to one; and (4) the Sharpe ratio estimators are consistent in global minimum-variance and mean-variance portfolios.

Mehmet Caner, Marcelo Medeiros, Gabriel F. R. Vasconcelos.

The Proper Use of Google Trends in Forecasting Models

N 683, 04/05/2021

Google Trends has become one of the most popular free tools used by forecasters, both in academia and in the private and public sectors. Many papers, from several different fields, conclude that Google Trends improves forecast accuracy. However, what seems to be widely unknown is that each sample of Google search data differs from the others, even for the same search term, date range, and location. This means that it is possible to reach arbitrary conclusions merely by chance. This paper shows why and when this can become a problem, and how to overcome the obstacle.

Marcelo Medeiros, Henrique Fernandes Pires.
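The sampling issue the abstract warns about can be mimicked with a small simulation (synthetic data only, no real Google queries): treat each "download" as a different noisy sample of the same latent search-interest index, and watch a statistic computed from it vary from download to download.

```python
import numpy as np

rng = np.random.default_rng(0)

T, n_downloads = 200, 20
latent = rng.standard_normal(T)                  # true search interest
target = 0.3 * latent + rng.standard_normal(T)   # series to be forecast

# Each "download" of the same term is a fresh noisy sample of the index.
corrs = []
for _ in range(n_downloads):
    download = latent + rng.standard_normal(T)   # per-download sampling noise
    corrs.append(np.corrcoef(download, target)[0, 1])
corrs = np.array(corrs)

# The estimated predictive correlation changes with every download,
# so a conclusion drawn from a single download may not replicate.
spread = corrs.max() - corrs.min()
```

Averaging several downloads of the same query is one simple way to reduce this sampling variability.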

Jumps in Stock Prices: New Insights from Old Data

N 682, 19/03/2021

We characterize jump dynamics in stock market returns using a novel series of intraday prices covering over 80 years. Jump dynamics vary substantially over time. Trends in jump activity relate to secular shifts in the nature of news. Unscheduled news, often involving major wars, drives jump activity in early decades, whereas scheduled news, and especially news pertaining to monetary policy, drives jump activity in recent decades. Jump variation measures forecast excess stock market returns, consistent with theory. Results support models featuring a separate jump factor, such that risk premium dynamics are not fully captured by volatility state variables.

James A. Johnson, Bradley S. Paye, Marcelo Medeiros.

Bridging factor and sparse models

N 681, 22/02/2021

Factor and sparse models are two widely used methods to impose a low-dimensional structure in high dimensions. They are seemingly mutually exclusive. In this paper, we propose a simple lifting method that combines the merits of these two models in a supervised learning methodology that allows us to efficiently explore all the information in high-dimensional datasets. The method is based on a very flexible linear model for panel data, called the factor-augmented regression model, with observable and latent common factors, as well as idiosyncratic components, as high-dimensional covariates. This model not only includes both factor regression and sparse regression as specific cases but also significantly weakens the cross-sectional dependence and hence facilitates model selection and interpretability. The methodology consists of three steps. At each step, the remaining cross-sectional dependence can be inferred by a novel test for covariance structure in high dimensions. We develop asymptotic theory for the factor-augmented sparse regression model and demonstrate the validity of the multiplier bootstrap for testing high-dimensional covariance structure. This is further extended to testing high-dimensional partial covariance structures. The theory and methods are supported by an extensive simulation study and by applications to the construction of a partial covariance network of financial returns for the constituents of the S&P 500 index and to a prediction exercise for a large panel of macroeconomic time series from the FRED-MD database.

Jianqing Fan, Ricardo Masini, Marcelo Medeiros.

Regularized estimation of high-dimensional vector autoregressions with weakly dependent innovations

N 680, 29/12/2020

There has been considerable advance in understanding the properties of sparse regularization procedures in high-dimensional models. In the time series context, this understanding is mostly restricted to Gaussian autoregressions or mixing sequences. We study oracle properties of LASSO estimation of weakly sparse vector-autoregressive models with heavy-tailed, weakly dependent innovations, with virtually no assumption on the conditional heteroskedasticity. In contrast to the current literature, our innovation process satisfies an L1 mixingale-type condition on the centered conditional covariance matrices. This condition covers L1-NED sequences and strong mixing sequences as particular examples. From a modeling perspective, it covers several multivariate GARCH specifications, such as the BEKK model, and other factor stochastic volatility specifications that were ruled out by assumption in previous studies.

Ricardo Masini, Marcelo Medeiros, Eduardo F. Mendes.
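A minimal version of the estimation problem can be sketched as follows, using a plain coordinate-descent LASSO fit equation by equation on a simulated sparse VAR(1) with heavy-tailed (Student-t) innovations; this is an illustrative implementation, not the paper's weakly sparse estimator or its tuning rules, and the penalty level is chosen ad hoc.

```python
import numpy as np

rng = np.random.default_rng(2)

def soft_threshold(z, lam):
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Plain coordinate-descent LASSO; columns of X are assumed to be
    on comparable scales (true here by construction)."""
    n, p = X.shape
    beta = np.zeros(p)
    col_norm2 = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]   # partial residual
            beta[j] = soft_threshold(X[:, j] @ r, n * lam) / col_norm2[j]
    return beta

# Simulate a sparse VAR(1): y_t = A y_{t-1} + e_t, e_t heavy-tailed t(5).
k, T = 10, 500
A = 0.5 * np.eye(k)                               # sparse: diagonal only
Y = np.zeros((T, k))
for t in range(1, T):
    Y[t] = Y[t - 1] @ A.T + rng.standard_t(df=5, size=k)

# Fit each equation of the VAR separately by LASSO.
X, Ynext = Y[:-1], Y[1:]
A_hat = np.vstack([lasso_cd(X, Ynext[:, i], lam=0.15) for i in range(k)])
```

With this penalty the diagonal (true) coefficients survive shrinkage while the off-diagonal estimates stay near zero, illustrating the sparsity-recovery property the oracle results formalize.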

Machine Learning Advances for Time Series Forecasting

N 679, 28/12/2020

In this paper we survey the most recent advances in supervised machine learning and high-dimensional models for time series forecasting. We consider both linear and nonlinear alternatives. Among the linear methods, we pay special attention to penalized regressions and ensembles of models. The nonlinear methods considered in the paper include shallow and deep neural networks, in their feed-forward and recurrent versions, and tree-based methods, such as random forests and boosted trees. We also consider ensemble and hybrid models that combine ingredients from different alternatives. Tests for superior predictive ability are briefly reviewed. Finally, we discuss applications of machine learning in economics and finance and provide an illustration with high-frequency financial data.

Ricardo Pereira Masini, Marcelo Medeiros, Eduardo F. Mendes.

Do We Exploit all Information for Counterfactual Analysis? Benefits of Factor Models and Idiosyncratic Correction

N 678, 03/11/2020

The measurement of treatment (intervention) effects on a single (or just a few) treated unit(s) based on counterfactuals constructed from artificial controls has become a popular practice in applied statistics and economics since the proposal of the synthetic control method. In a high-dimensional setting, we often use principal component or (weakly) sparse regression to estimate counterfactuals. Do we use enough data information? To better estimate the effects of price changes on sales in our case study, we propose a general framework for counterfactual analysis of high-dimensional dependent data. The framework includes both principal component regression and sparse linear regression as specific cases. It uses both factor and idiosyncratic components as predictors for improved counterfactual analysis, resulting in a method called the Factor-Adjusted Regularized Method for Treatment (FarmTreat) evaluation. We demonstrate convincingly that using either factors or sparse regression alone is inadequate for counterfactual analysis in many applications, and that the case for information gain can be made through the use of idiosyncratic components. We also develop theory and methods to formally answer the question of whether common factors are adequate for estimating counterfactuals. Furthermore, we consider a simple resampling approach to conduct inference on the treatment effect, as well as a bootstrap test to assess the relevance of the idiosyncratic components. We apply the proposed method to evaluate the effects of price changes on the sales of a set of products, based on a novel large panel of sales data from a major retail chain in Brazil, and demonstrate the benefits of using additional idiosyncratic components in the treatment effect evaluations.

Jianqing Fan, Ricardo Pereira Masini, Marcelo Medeiros.

Targeting in Adaptive Networks

N 677, 27/10/2020

This paper studies optimal targeting policies, consisting of eliminating (preserving) a set of agents in a network, aimed at minimizing (maximizing) aggregate effort levels. In contrast to the existing literature, we allow the equilibrium network to adapt after a network intervention and consider targeting of multiple agents. A simple and tractable adjustment process is introduced. We find that allowing the network to adapt may overturn optimal targeting results for a fixed network, and that congestion/competition effects are crucial to understanding differences between the two settings.

Timo Hiller.

A Simple Model of Network Formation with Congestion Effects

N 676, 16/10/2020

This paper provides a game-theoretic model of network formation with a continuous effort choice. Efforts are strategic complements for direct neighbors in the network and display global substitution/congestion effects. We show that if the parameter governing local strategic complements is larger than the one governing global strategic substitutes, then all pairwise Nash equilibrium networks are nested split graphs. We also consider the problem of a planner, who can choose effort levels and place links according to a network cost function. Again, all socially optimal configurations are such that the network is a nested split graph. However, the socially optimal network may differ from equilibrium networks, and efficient effort levels do not coincide with Nash equilibrium effort levels. In the presence of strategic substitutes, Nash equilibrium effort levels may be too high or too low relative to efficient effort levels. Relevant applications include crime networks and R&D collaborations among firms, as well as interbank lending and trade.

Timo Hiller.

Price Dispersion in Dynamic Competition

N 675, 05/10/2020

In product markets, there exists substantial dispersion in prices for transactions of physically identical goods, and incumbent sellers sell at higher prices than entrants. This study develops a theory of dynamic pricing that explains these facts as resulting from the same fundamental friction: buyers are imperfectly aware of which sellers are operating, and the degree of awareness about a seller is endogenous. The equilibrium is unique and efficient, and features randomized pricing strategies in which incumbents post higher prices than entrants. If buyers' memory depreciation is low, then the equilibrium of the industry tends to approximate perfectly competitive conditions over time.

Rafael Roos Guthmann.

Online Action Learning in High Dimensions: A New Exploration Rule for Contextual ε-Greedy Heuristics

N 674, 29/09/2020

Bandit problems are pervasive in various fields of research and are also present in several practical applications. Examples, including dynamic pricing and assortment and the design of auctions and incentives, permeate a large number of sequential treatment experiments. Different applications impose distinct levels of restrictions on viable actions. Some favor diversity of outcomes, while others require harmful actions to be closely monitored or mainly avoided. In this paper, we extend one of the most popular bandit solutions, the original ε-greedy heuristic, to high-dimensional contexts. Moreover, we introduce a competing exploration mechanism that relies on searching sets based on order statistics. We view our proposals as alternatives for cases where pluralism is valued or, in the opposite direction, cases where the end-user should carefully tune the range of exploration of new actions. We find reasonable bounds for the cumulative regret of a decaying ε-greedy heuristic in both cases, and we provide an upper bound for the initialization phase implying that the regret bounds with order-statistics exploration are at most equal to, but mostly better than, those obtained when random searching is the sole exploration mechanism. Additionally, we show that end-users have sufficient flexibility to avoid harmful actions, since any cardinality of the higher-order statistics can be used to achieve a stricter upper bound. In a simulation exercise, we show that the algorithms proposed in this paper outperform simple and adapted counterparts.


Claudio Cardoso Flores, Marcelo Medeiros.
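The baseline being extended can be sketched with the textbook decaying ε-greedy heuristic on a stationary Gaussian bandit; the paper's contextual, high-dimensional variant and its order-statistics exploration rule are not reproduced here, and the decay constant `c` is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(3)

def decaying_eps_greedy(means, T=20_000, c=5.0):
    """Textbook decaying epsilon-greedy on K arms with unit-variance
    Gaussian rewards.  Returns the estimated arm means and the average
    realized reward."""
    K = len(means)
    counts = np.zeros(K)
    est = np.zeros(K)
    total = 0.0
    for t in range(1, T + 1):
        eps = min(1.0, c * K / t)           # exploration decays as 1/t
        if rng.random() < eps:
            a = int(rng.integers(K))        # explore uniformly at random
        else:
            a = int(np.argmax(est))         # exploit current estimates
        r = means[a] + rng.standard_normal()
        counts[a] += 1
        est[a] += (r - est[a]) / counts[a]  # incremental running mean
        total += r
    return est, total / T

est, avg = decaying_eps_greedy(np.array([0.1, 0.3, 0.9]))
```

With the 1/t decay, exploration is heavy early on and vanishes later, so the heuristic locks onto the best arm and the average reward approaches its mean.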

Lockdown effects in US states: an artificial counterfactual approach

N 673, 15/09/2020

We adopt an artificial counterfactual approach to assess the impact of lockdowns on the short-run evolution of the number of cases and deaths in some US states. To do so, we exploit differences in the timing with which US states adopted lockdown policies, dividing them into treated and control groups. For each treated state, we construct an artificial counterfactual. On average, and in the very short run, the counterfactual accumulated number of cases would have been two times larger had lockdown policies not been implemented.

Carlos B. Carneiro, Iuri Honda Ferreira, Marcelo Medeiros, Henrique Fernandes Pires, Eduardo Zilberman.
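The general logic of an artificial counterfactual can be sketched with simulated data, assuming a simple regression-based construction (fit the treated series on control units over the pre-treatment window, then project forward); this is a stylized stand-in, not the authors' estimator, and the effect size is made up.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated panel: 8 control series (random walks) and one treated
# series that tracks a combination of controls until treatment at T0.
T, T0, n_controls = 60, 40, 8
controls = np.cumsum(rng.standard_normal((T, n_controls)), axis=0)
treated = controls[:, :3].mean(axis=1) + 0.1 * rng.standard_normal(T)
treated[T0:] += 2.0                              # treatment shifts the path

# Fit treated on controls (plus intercept) using pre-treatment data only.
X_pre = np.column_stack([np.ones(T0), controls[:T0]])
w, *_ = np.linalg.lstsq(X_pre, treated[:T0], rcond=None)

# Project the fitted relationship forward to build the counterfactual.
X_all = np.column_stack([np.ones(T), controls])
counterfactual = X_all @ w
effect = (treated[T0:] - counterfactual[T0:]).mean()
```

The estimated `effect` recovers the simulated treatment shift of 2.0, because the control units keep tracking the treated unit's untreated path after T0.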

From Zero to Hero: Realized Partial (Co)Variances

N 672, 10/07/2020

This paper proposes a generalization of the class of realized semivariance and semicovariance measures introduced by Barndorff-Nielsen, Kinnebrock and Shephard (2010) and Bollerslev, Li, Patton and Quaedvlieg (2020a) to allow for a finer decomposition of realized (co)variances. The new "realized partial (co)variances" allow for multiple thresholds with various locations, rather than the single fixed threshold of zero used in semi (co)variances. We adopt methods from machine learning to choose the thresholds to maximize the out-of-sample forecast performance of time series models based on realized partial (co)variances. We find that in low-dimensional settings it is hard, but not impossible, to improve upon the simple fixed threshold of zero. In large dimensions, however, the zero threshold embedded in realized semicovariances emerges as a robust choice.

Tim Bollerslev, Marcelo Medeiros, Andrew J. Patton, Rogier Quaedvlieg.
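The univariate construction can be sketched as follows, under the assumption that a realized partial variance simply allocates each squared intraday return to the bin its return falls in, so that zero as the sole threshold recovers the familiar positive/negative semivariances; the function name and threshold values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

def realized_partial_variances(returns, thresholds):
    """Split realized variance sum(r^2) into components from returns
    falling between consecutive thresholds (bins cover the whole line,
    so the components always sum back to realized variance)."""
    edges = np.concatenate(([-np.inf], np.sort(thresholds), [np.inf]))
    parts = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (returns > lo) & (returns <= hi)
        parts.append(np.sum(returns[mask] ** 2))
    return np.array(parts)

r = 0.01 * rng.standard_normal(390)              # one day of 1-min returns
rv = np.sum(r ** 2)                              # realized variance
semis = realized_partial_variances(r, [0.0])     # negative / positive semivariance
parts = realized_partial_variances(r, [-0.005, 0.0, 0.005])  # finer split
```

Adding thresholds refines the split without changing the total: `semis` and `parts` both sum to `rv`, and the threshold locations are what the paper's machine-learning step tunes for forecast performance.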
