How Money Became the Measure of Everything, by Eli Cook

Postby admin » Mon Apr 27, 2020 10:15 pm

How Money Became the Measure of Everything: Two centuries ago, America pioneered a way of thinking that puts human well-being in economic terms.
by Eli Cook
The Atlantic
October 19, 2017

Money and markets have been around for thousands of years. Yet as central as currency has been to so many civilizations, people in societies as different as ancient Greece, imperial China, medieval Europe, and colonial America did not measure residents’ well-being in terms of monetary earnings or economic output.

In the mid-19th century, the United States—and to a lesser extent other industrializing nations such as England and Germany—departed from this historical pattern. It was then that American businesspeople and policymakers started to measure progress in dollar amounts, tabulating social welfare based on people’s capacity to generate income. This fundamental shift, in time, transformed the way Americans appraised not only investments and businesses but also their communities, their environment, and even themselves.

Today, well-being may seem hard to quantify in a nonmonetary way, but other metrics—from incarceration rates to life expectancy—have held sway over the course of the country’s history. The turn away from these statistics, and toward financial ones, means that rather than considering how economic developments could meet Americans’ needs, the default stance—in policy, business, and everyday life—is to assess whether individuals are meeting the exigencies of the economy.

At the turn of the 19th century, it did not appear that financial metrics were going to define Americans’ concept of progress. In 1791, then-Secretary of the Treasury Alexander Hamilton wrote to various Americans across the country, asking them to calculate the moneymaking capacities of their farms, workshops, and families so that he could use that data to create economic indicators for his famous Report on Manufactures. Hamilton was greatly disappointed by the paltry responses he received and had to give up on adding price statistics to his report. Apparently, most Americans in the early republic did not see, count, or put a price on the world as he did.

Until the 1850s, in fact, by far the most popular and dominant form of social measurement in 19th-century America (as in Europe) was a collection of social indicators known then as “moral statistics,” which quantified such phenomena as prostitution, incarceration, literacy, crime, education, insanity, pauperism, life expectancy, and disease. While these moral statistics were laden with paternalism, they nevertheless focused squarely on the physical, social, spiritual, and mental condition of the American people. For better or for worse, they placed human beings at the center of their calculating vision. Their unit of measure was bodies and minds, never dollars and cents.

Yet around the middle of the century, money-based economic indicators began to gain prominence, eventually supplanting moral statistics as the leading benchmarks of American prosperity. This epochal shift can be seen in the national debates over slavery. In the first half of the 19th century, Americans in the North and South wielded moral statistics to prove that their society was the more advanced and successful one. In the North, abolitionist newspapers like the Liberty Almanac pointed to the fact that the North had far more students, scholars, libraries, and colleges. In the South, politicians like John Calhoun used dubious data to argue that freedom was bad for black people. The proportion of Northern blacks “who are deaf and dumb, blind, idiots, insane, paupers and in prison,” Calhoun claimed in 1844, was “one out of every six,” while in the South it was “one of every one hundred and fifty-four.”

By the late 1850s, however, most Northern and Southern politicians and businessmen had abandoned such moral statistics in favor of economic metrics. In the opening chapter of his best-selling 1857 book against slavery, the author Hinton Helper measured the “progress and prosperity” of the North and the South by tabulating the cash value of agricultural produce that both regions had extracted from the earth. In so doing, he calculated that in 1850 the North was clearly the more advanced society, for it had produced $351,709,703 of goods and the South only $306,927,067. Speaking the language of productivity, Helper’s book became a hit with Northern businessmen, turning many men of capital to the antislavery cause.

The Southern planter class, meanwhile, underwent a similar shift. When South Carolina’s governor, the planter and enslaver James Henry Hammond, sought to legitimize slavery in his famous 1858 “Cotton Is King” speech, he did so in part by declaring that “there is not a nation on the face of the earth, with any numerous population, that can compete with us in produce per capita … It amounts to $16.66 per head.”

What happened in the mid-19th century that led to this historically unprecedented pricing of progress? The short answer is straightforward enough: Capitalism happened. In the first few decades of the Republic, the United States developed into a commercial society, but not yet a fully capitalist one. One of the main elements that distinguishes capitalism from other forms of social and cultural organization is not just the existence of markets but also of capitalized investment, the act through which basic elements of society and life—including natural resources, technological discoveries, works of art, urban spaces, educational institutions, human beings, and nations—are transformed (or “capitalized”) into income-generating assets that are valued and allocated in accordance with their capacity to make money and yield future returns. Save for a smattering of government-issued bonds and insurance companies, such a capitalization of everyday life was mostly absent until the mid-19th century. There existed few assets in early America through which one could invest wealth and earn an annual return.

Capitalization, then, was crucial to the rise of economic indicators. As upper-class Americans in both the North and South began to plow their wealth into novel financial assets, they began to imagine not only their portfolios but their entire society as a capitalized investment, and its inhabitants (free or enslaved) as inputs of human capital that could be plugged into output-maximizing equations of monetized growth.

In the North, such investments mostly took the form of urban real estate and companies that were building railroads. As capital flowed into these new channels, investors were putting money—via loans, bonds, stocks, banks, trusts, mortgages, and other financial instruments—into communities they might never even set foot in. As local businesspeople and producers lost significant power to these distant East Coast investors, a national business class came into being that cared less about moral statistics—say, the number of prostitutes in Peoria or drunks in Detroit—than about a town’s industrial output, population growth, real-estate prices, labor costs, railway traffic, and per-capita productivity.

Capitalization was also behind the statistical shift in the South, only there it was less about investment in railroad stocks or urban real estate than in human bodies. Enslaved people had long been seen as pieces of property in the United States, but only in the antebellum Deep South did they truly become pieces of capital that could be mortgaged, rented, insured, and sold in highly liquid markets. Viewing enslaved people first and foremost as income-yielding investments, planters began to keep careful track of their market output and value. Hammond, in his speech, had chosen to measure American prosperity in the same way that he valued, monitored, and disciplined those forced to work on his own cotton plantation.

As corporate consolidation and factories’ technological capabilities ramped up in the Gilded Age and Progressive Era, additional techniques of capitalist quantification seeped from the business world into other facets of American society. By the Progressive Era, the logic of money could be found everywhere. “An eight-pound baby is worth, at birth, $362 a pound,” declared The New York Times on January 30, 1910. “That is a child’s value as a potential wealth-producer. If he lives out the normal term of years, he can produce $2900 more wealth than it costs to rear him and maintain him as an adult.” The title of this article was “What the Baby Is Worth as a National Asset: Last Year’s Crop Reached a Value Estimated at $6,960,000,000.” During this era, an array of Progressive reformers priced not only babies but also the annual social costs of intemperance ($2 billion), the common cold ($21 a month per employee), typhoid ($271 million), and housewife labor ($7.5 billion), as well as the annual social benefits of skunks ($3 million), Niagara Falls ($122.5 million), and government health insurance ($3 billion).

This particular way of thinking is still around, and hard to miss today in reports from the government, research organizations, and the media. For instance, researchers in this century have calculated the annual cost of excessive alcohol consumption ($223.5 billion) and of mental disorders ($467 billion), as well as the value of the average American life ($9.1 million according to one Obama-era government estimate, up from $6.8 million at one point during George W. Bush’s presidency).

A century ago, money-based ideas of progress resonated most strongly with business executives, the majority of whom were well-to-do white men. Measuring prosperity according to the Dow Jones Industrial Average (invented in 1896), manufacturing output, or per-capita wealth made a good deal of sense for America’s upper classes, since they were usually the ones who possessed the stocks, owned the factories, and held the wealth. As recognized by the Yale economist Irving Fisher, a man who rarely met a social problem he did not put a price on, economic statistics could be potent in early-20th-century political debates. In arguing that people needed to be treated as “money-making machines,” Fisher explained how “newspapers showed a strong aversion to the harrowing side of the tuberculosis campaign but were always ready to ‘sit up and take notice’ when the cost of tuberculosis in dollars and cents was mentioned.”

John D. Rockefeller Jr., J.P. Morgan, and other millionaire capitalists also came to recognize the power of financial metrics in their era. They began to plan for a private research bureau that would focus on the pricing of everyday life. Those plans came to fruition in the 1920s with the formation of the corporate-funded National Bureau of Economic Research, a private institution that went on to play a major role in the invention of Gross National Product in the 1930s (and continues to operate today).

Many working-class Americans, though, were not as enthusiastic about the rise of economic indicators. This was largely because they believed the human experience to be “priceless” (a word that took off just as progress became conceptualized in terms of money) and because they (astutely) viewed such figures as tools that could be used to justify increased production quotas, more control over workers, or reduced wages. Massachusetts labor activists fighting for the eight-hour workday spoke for many American workers when they said, in 1870, that “the true prosperity and abiding good of the commonwealth can only be learned, by placing money [on] one scale, and man [on another].”

The assignment of prices to features of daily life, therefore, was never a foregone conclusion but rather a highly contested development. In the Gilded Age, some labor unions and Populist farmers succeeded in pushing state bureaus of labor statistics to offer up a series of alternative metrics that measured not economic growth or market output, but rather urban poverty, gender discrimination, leisure time, indebtedness, class mobility, rent-seeking behavior, and exploitation of workers. The interests of businessmen, though, won the day more often than not, and by the mid-20th century economic indicators that focused on monetary output came to be seen as apolitical and objective.

That shift carried tremendous social ramifications: The necessary conditions for economic growth were frequently placed before the necessary conditions for individuals’ well-being. In 1911, Frederick Winslow Taylor, the efficiency expert who dreamed of measuring every human movement in terms of its cost to employers, bluntly articulated this reversal of ends and means: “In the past the man has been first; in the future the system must be first.”

In the end, men like Taylor got their wish. Since the mid-20th century—whether in the Keynesian 1950s or the neoliberal 1980s—economic indicators have promoted an idea of American society as a capital investment whose main goal, like that of any investment, is ever-increasing monetary growth. Americans have surely benefited materially from the remarkable economic growth of this period, an expansion unique to capitalist societies. Nevertheless, by making capital accumulation synonymous with progress, money-based metrics have turned human betterment into a secondary concern. By the early 21st century, American society’s top priority became its bottom line, net worth became synonymous with self-worth, and a billionaire businessman who repeatedly pointed to his own wealth as proof of his fitness for office was elected president.

ELI COOK is an assistant professor of history at the University of Haifa. He is the author of The Pricing of Progress: Economic Indicators and the Capitalization of American Life.