Bounty: Even the US Fed Rips Official US Stats
If you can’t count on their counts, then count only on doing “freak-onomics.”
May 9, 2018
Jeffery J. Smith
“Facts are stubborn things, but statistics are pliable.”  ― Mark Twain

At the risk of biting the hand that feeds us, and kicking a dog while it’s down, and mixing metaphors again, let’s continue the critique. Not that we love finding fault, it’s just that we love knowing facts, while officialdom seems unable or unwilling to convey them.

It’s not just the absence of an official total for the worth of Earth in America but also the presence of official statistics for other economic phenomena which some insiders find worse than useless.

Shift from impenetrable diction (Ch 16) to so-called “data”. Official fuzzy numbers are a perfect match for the standard cover-thy-butt jargon. Especially when couched in officialese, people tend to accept official statistics as gospel. But are they? Are they relevant, or distracting? If relevant, are they accurate? Could they be tainted by politics? Is a competing unofficial number better?

Official figures both help and don’t—many are seriously flawed. The government gathers data, massages the numbers, creates statistics, then experts cite them, and most people base crucial decisions on them. Decider, beware. One should take these figures with a grain of salt.

Anyone turning to an official statistic should be forewarned that the counted phenomenon could’ve been measured with a rubber ruler. Official statistics for economics should come with a Surgeon General’s caveat, like on a pack of cigarettes: “Warning: Statistics contain high levels of political influence. Trusting them can be hazardous to your financial health.”

One expects criticism of official statistics from critics on the left or the right, but check these out: sometimes the critics of statistics are official insiders.

Across the Atlantic, the Organisation for Economic Co-operation and Development’s Working Party on Financial Statistics explains one reason behind bad data.

“Looking at the focus on the two headline indicators on government finance, while it may provide a single and clear message on the status of public finances within the EU to the politicians and the public at large, it has also created great incentives to governments to compile figures on deficit and debt that look good, instead of them being good from an economic substance point of view. There is a clear tendency to continuously look for ‘grey areas’ to manipulate the relevant national accounts data, in order to stay within the stipulated deficit and debt limits. These practices have substantially increased in ‘popularity’ since the start of the financial crisis during which significant pressures on government finance emerged, amongst others by the direct and indirect effects of the economic downturn and the bailouts of banks.” — “Government Finance Indicators: Truth and Myth”

The OECD has produced useful reports before on the link between land value and economic growth, so down the road maybe they could become a standard bearer for determining the size of all locational value.

If only US public agencies had the chutzpah to cry out when the emperor wears no clothes. Well, actually, sometimes some bureaucrats do. Fed officials and their staff are already dismissing large swathes of the most recent economic data because they view it as unreliable.

Economic data is constantly revised, and final reads are often significantly higher or lower than initial measurements. Twisting around the stats can leave investors, businesses, and households twisting in the wind. Their plans can be wrecked by the central bank’s next interest-rate move.

What officials attempt—aggregating many sales into one grand total—is challenging. Even under the best of conditions, as prices are always fluctuating, it’s not easy. But add the political pressure to look away from land and you get the mess we got.

Trying to calculate aggregates of items is just the opposite of measuring the size of particles. Physicists have their angstroms, ten-billionths of a meter. Chemists measure parts per billion. Economists pretend that their stats are of equivalent stature and call their statistics “data”. No way. The commonly applied phrase “massaged data” is only half right; there is no data on the menu—only approximations. An official figure resembles an actual value about as much as a stick figure resembles a living body.

Like an urban myth, groundless numbers persist. Diane B. Paul, formerly an associate professor of political science at the University of Massachusetts, wrote an essay about that: “The Nine Lives of Discredited Data” (1987). Once entrenched, false figures escape detection and hence correction.

Then economists who play it safe—safety is where the money and honors are—perform calculations using the approved numbers. Thereby GIGO (garbage in, garbage out) strikes again. According to Otis Dudley Duncan (1921-2004) in Notes on Social Measurement: Historical and Critical (1984), those academics suffer from “statisticism,” a neologism he coined.

“Coupled with downright incompetence in statistics, we often find the syndrome that I have come to call statisticism: the notion that computing is synonymous with doing research, the naïve faith that statistics is a complete or sufficient basis for scientific methodology, the superstition that statistical formulas exist for evaluating such things as the relative merits of different substantive theories or the ‘importance’ of the causes of a ‘dependent variable’; and the delusion that decomposing the co-variations of some arbitrary and haphazardly assembled collection of variables can somehow justify not only a ‘causal model’ but also, praise a mark, a ‘measurement model’. There would be no point in deploring such caricatures of the scientific enterprise if there were a clearly identifiable sector of social science research wherein such fallacies were clearly recognized and [kept] emphatically out of bounds.”


While our public agencies do not tell us how much we’re all spending for the land and nature we use in total, let’s not feel singled out. They slight other curious groups, too, who’d like to know statistics like a qualitative GDP, the true inflation rate, the real unemployment rate, the total assets of governments, the actual debts of governments, etc. The more important the indicator, the more massaging it gets.

* Ecological economists object to GDP since it measures quantity of growth, not quality of growth. For instance, clear-cutting trees from a hillside, causing erosion that degrades a stream, contributes to GDP no differently than does selective logging that leaves a forest available to hunters and hikers. Nevertheless, the media report faster growth—no matter what kind—as a social good. And whoever is in office gladly takes credit for it.

The Report by the Commission on the Measurement of Economic Performance and Social Progress (Stiglitz et al., 2012) suggests that GDP is not an ideal measure of social welfare and “the time is ripe for our measurement system to shift emphasis from measuring economic production to measuring people’s well-being.”

* Leftist economists point out that the definition of unemployment was changed to count the under-employed as employed and to no longer count those who have given up futilely seeking a job as unemployed. A smaller unemployment figure, in the eyes of many, makes those in political office look good. The real unemployment figure is double, triple, or even more than quintuple the Bureau of Labor Statistics figure.

* Populist economists remind anyone listening that the official definition of inflation has been changed at least 20 times in 30 years. One of those official changes deleted the very thing we’re looking for: the value of locations; more often than not, that value is inflating. Using the older definition, inflation would run closer to 10%.
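
The definitional point above can be made concrete with a toy calculation. The numbers below are illustrative only, not actual BLS figures, but the mechanics mirror the gap between the headline unemployment rate (U-3 style, which drops discouraged workers and counts the involuntarily part-time as employed) and a broader measure (U-6 style, which adds both back in):

```python
# Toy labor-force numbers, in millions (illustrative only, not BLS data).
officially_unemployed = 6.0    # jobless and still actively searching
discouraged = 2.0              # gave up searching; vanish from the count
part_time_economic = 4.0       # want full-time work; counted as employed
employed_full = 150.0

# Headline (U-3-style) rate: discouraged workers are excluded from both
# numerator and denominator; the under-employed count as fully employed.
labor_force_u3 = employed_full + part_time_economic + officially_unemployed
u3 = officially_unemployed / labor_force_u3 * 100

# Broader (U-6-style) rate: add back the discouraged and the
# involuntarily part-time.
labor_force_u6 = labor_force_u3 + discouraged
u6 = ((officially_unemployed + discouraged + part_time_economic)
      / labor_force_u6 * 100)

print(f"headline rate: {u3:.1f}%")  # 3.8%
print(f"broader rate:  {u6:.1f}%")  # 7.4%
```

With these made-up inputs the broader measure comes out nearly double the headline one—the kind of gap the critics describe—without a single worker’s situation having changed, only the definitions.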

These critics of government statisticians come from within the discipline, so they don’t lack credibility themselves. Yet whatever their alternative stats gain in accuracy, they lose in standing: the alternatives rely on raw data that come from officialdom. And even if unofficial calculators can find a way around that conundrum, their measure is still not official. Hence, nobody pays it much attention. The major players making policy and huge investments ignore the homemade figures and stick with convention.

While I feel for all those critics above, ignored by most of their colleagues, their company is consoling. Their voices shouting in the statistical wilderness give us room to talk, to question the official dismissal of rents. As we search for any sign of rents, the relevant stats we’ve found do not inspire gobs of confidence. It feels better knowing other critics have gone before.

While the academics above were able to fault an existing stat, they have their own axes to grind and their own pet theories to promote, leaving themselves open to a different criticism: they were not able to critique the absence of a stat, the worth of Earth.


No matter what phenomenon they measure, bureaucracies fail to converge on a single estimate. The statisticians of one bureaucracy cannot explain why the totals of another bureaucracy differ. Nor do they seem to take these discrepancies seriously.

Most professional economists shrug off absent data, even misleading “data”. Too many academics are indifferent, with an attitude of “oh, that’s good enough” when clearly the figures are not. It’s as if they and their statistician brethren hold jobs where no curiosity is allowed. Caution and conformity might as well be the requirements listed right under the job title of Public Information Officer.

For the academics and bureaucrats compiling them, the jumble of tables is what’s important. Whether the tables have any accuracy or utility or insight does not seem to matter. Doing a job that pays well and gets attention from the business media and article-writing academics (since officials have a monopoly on both data and status, where else can the curious turn?), that’s what matters, not the datum for Earth’s worth.

In particular, bureaucracies …

* count things of minor import—e.g., consumer confidence;

* over-count some indicators—e.g., GDP;

* under-count other indicators—e.g., unemployment or inflation; and

* bundle what should be kept apart—e.g., housing with utilities.

Their false frames yield misunderstandings and distorted world views.

Statisticians go deep but not broad.

Going deep, economic statisticians measure an enormous quantity of minutiae, like “advance US retail and food services sales”.

Failing to go broad, they leave out customs like trust that make civilized trade possible. Alan Greenspan, likely the most powerful person in economics while he reigned at the Federal Reserve, confessed to being surprised to learn that trust matters. He made that comment after watching Russian criminals take over the conversion of so-called communism into capitalism.

Conversely, economists go broad but not deep.

They fail to go deep into the two kinds of spending—for human-made goods and services versus for natural assets. Their catch-all category for spending is reminiscent of speed-reader Woody Allen’s review of Tolstoy’s War and Peace (or War and the World in the author’s native language): “It was about some Russians.” It was about some purchases.

And to top it off, going too broad, economists include political behavior like lobbying within market behavior like producing output. They fault “market failure” when actually what happened was “lobbying success”—or in their jargon, successful “rent-seeking”.

Rent-less Wonders

Official stats are not only way off the mark—real GDP is lower, real inflation higher—but their measurements shed little light on what economies are up to.

On the other hand, tracking rent could reveal how economies operate, when they malfunction, and what we could do about it. However, not knowing how much society spends on what never rewards labor or capital (i.e., our spending for land)—when economies are nothing if not systems of incentives—means that economists cannot make sound statistical arguments. That guarantees the futility of economics.

Are sad stats mistakes or conscious incorrections? It’s like those guys go out of their way to not make sense. Official figures create a near impenetrable fog that hides rent—a useful smokescreen for somebody’s capturing of rents.

“A statistician is a man who comes to the rescue of figures that cannot lie for themselves.”

At the end of the day, it’s a lot of noise to go with precious little signal. Official tabulation is a morass and it gets worse. Just wait until we analyze the official statistics for housing and other proxies for land.

This article is Part 17 of a series highlighting the forthcoming book, “Bounty Hunter: a gadfly’s quest to know the worth of Earth,” by Jeffery J. Smith. To date, the experts have not risen to meet the challenge. Indeed, some have even stood in the way. Yet the payoff for knowing this datum is huge.

Find Out More.
Inside information on economics, society, nature, and technology.
Jeffery J. Smith

JEFFERY J. SMITH published The Geonomist, which won a California GreenLight Award, has appeared in both the popular press (e.g., TruthOut) and academic journals (e.g., USC’s “Planning and Markets”), been interviewed on radio and TV, lobbied officials, testified before the Russian Duma, conducted research (e.g., for Portland’s mass transit agency), and recruited activists and academics to A member of the International Society for Ecological Economics and of Mensa, he lives in Mexico. Jeffery formerly was Chief Editor at