Let's Fix Up Funny US Data
GUEST COLUMN
NEW YORK
SINCE the initial days of the collection and reporting of official economic statistics, there have been numerous problems. But with the advent of computers and the electronic age, it is reasonable to expect that these problems would diminish. After all, more and more corporations are using sophisticated inventory-monitoring techniques, with up-to-date information on their own sales, orders, and stocks. Retailers have modern checkout equipment that not only records their sales but simultaneously tracks changes in inventory. With United States business so much better able to monitor its own position, government statisticians should find a way to tap into this sophisticated and highly reliable data network.
Unfortunately, there is no indication that the government data are improving. In fact, the errors appear more glaring than ever. With the release of the principal July business indicators in August, all of the previously reported statistics for June were revised upward. Some of the revisions were appalling. The preliminary June increase in payroll employment was boosted by 40 percent on the basis of presumed new information.
In addition, hours worked, industrial production, capacity utilization, housing starts, building permits, new durable goods orders, and retail sales were revised higher.
Durable goods orders are a notoriously volatile series, yet the latest report contained swings unusual even by that standard. The preliminary June figure posted a 0.5 percent rise in nondefense capital goods orders, the most important subcomponent of the durable goods report and a critical leading indicator of capital spending. After revision, the gain was 5.1 percent, roughly 10-fold higher.
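For readers who want to check that arithmetic, a minimal sketch using only the figures cited in the paragraph above:

    # Revision to nondefense capital goods orders, June figures as cited:
    # preliminary gain 0.5 percent, revised gain 5.1 percent.
    preliminary = 0.5
    revised = 5.1
    print(f"revised gain is {revised / preliminary:.1f}x the preliminary")  # 10.2x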
The extent of the revisions, and the fact that all of them were in the upward direction, raises very troubling questions about the legitimacy of the estimation procedures used by these governmental agencies. Well-defined estimation procedures should produce errors on both sides of the zero line. In other words, some should be up and some should be down, so that over time they cancel out. But under no circumstances should all of the errors fall in one direction.
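To make that standard concrete, here is a minimal, hypothetical sketch of the check being described; the error scale and sample size are invented for illustration and do not come from any agency's actual procedure:

    import random

    random.seed(0)

    def summarize(revisions):
        ups = sum(1 for r in revisions if r > 0)
        mean = sum(revisions) / len(revisions)
        return ups, len(revisions) - ups, mean

    # An unbiased procedure: revisions scatter on both sides of zero
    # and roughly cancel over time.
    unbiased = [random.gauss(0.0, 1.0) for _ in range(100)]

    # A biased procedure: preliminary figures systematically too low,
    # so every revision comes in on the high side.
    biased = [abs(random.gauss(0.0, 1.0)) for _ in range(100)]

    for name, revs in (("unbiased", unbiased), ("biased", biased)):
        ups, downs, mean = summarize(revs)
        print(f"{name}: {ups} up, {downs} down, mean revision {mean:+.2f}")

A month in which every series is revised the same way is, by this test, a red flag for the procedure rather than a run of bad luck.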
What accounts for this breakdown in the quality of the government data, and what steps could be taken to eliminate the problem? First, the statistical collection agencies have suffered severe budget cutbacks, and fewer resources inevitably degrade the quality of the data. Second, government salaries are increasingly less competitive with those in the private sector, so many of the best and brightest leave government for more lucrative private-sector jobs.
In spite of these financial constraints, there are still things the statistical agencies can and should do. First, preliminary estimates should not rely as heavily on rotating samples as they do at present. In this electronic age there would seem to be little burden on respondents in transmitting sales, inventories, and other pertinent information to the government's computers. Thus large- and middle-sized companies should always be in the sample; reporting burden is a legitimate concern only for small companies, whose responses could continue to rotate, as sketched below.
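As a hypothetical illustration of that design (the size cutoff, panel count, and firm list below are invented, not drawn from any actual survey), large and mid-sized firms form a certainty stratum that reports every period, while small firms rotate through panels to limit their burden:

    CERTAINTY_CUTOFF = 500   # employees; firms at or above always report
    N_PANELS = 4             # small firms split into 4 rotating panels

    def sample_for_period(firms, period):
        """Return the firms surveyed in a given period.

        firms: list of (firm_id, employee_count) tuples.
        """
        certainty = [f for f in firms if f[1] >= CERTAINTY_CUTOFF]
        small = [f for f in firms if f[1] < CERTAINTY_CUTOFF]
        # Rotate small firms: each panel reports once every N_PANELS periods.
        panel = [f for i, f in enumerate(small) if i % N_PANELS == period % N_PANELS]
        return certainty + panel

    firms = [("A", 12000), ("B", 800), ("C", 45), ("D", 30), ("E", 9), ("F", 210)]
    print([f[0] for f in sample_for_period(firms, period=0)])  # -> ['A', 'B', 'C']

Under such a scheme the preliminary estimate already includes every large reporter, so later revisions would mainly reflect the small-firm panels rather than wholesale resampling.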
Second, the government should delay the release of economic indicators that have a clear record of unreliability. Some indicators should be scrapped entirely, so that the collection agencies can concentrate their efforts on doing a better job with the indicators that are more useful.
One series that should be terminated is the retail sales report. Far better and similar information is contained in the figures on personal consumption expenditures (PCE), which are released roughly two weeks later. Although the PCE figures are revised, they are not changed to the degree of the more unstable retail sales series. Another series that should be scrapped is the index of leading economic indicators. The economy is simply too complex to be captured by an antiquated index that reflects neither the globalization of the US economy nor the increasing share of the economy devoted to services.
A failure to deal with the quality of economic data will create problems for more than just forecasters and business analysts. Poor data add to investor uncertainty, and poor information has already caused bad policy moves by the monetary and fiscal authorities.
In early 1988, the Fed eased policy slightly in response to preliminary data that suggested a weakening economy. Shortly thereafter, it had to reverse course and tighten after seeing major upward revisions to the earlier data. Such flip-flops in monetary policy are undesirable: a decision based on bad data may lead to more inflation or to poorer economic growth. In summary, there is an urgent need to address the faltering quality of US economic statistics.