The history of money and banking in the United States since World War II is one of extremes.
From stability to chaos, hubris to paralyzing fear, the era is perhaps best understood in terms of two distinct periods. The first, spanning roughly 1945–71, was characterized by relative stability. Backstopped by the dollar-centric international monetary system hashed out at Bretton Woods in 1944 and by a domestic banking industry constrained by New Deal–era regulations, the US economy grew steadily, inflation after 1950 remained under control, and money and banking were not the focus of any serious policy debate until the 1960s. The second era, roughly 1971 to the present, has been characterized by turbulence. From the series of ad hoc solutions arrived at in the wake of the breakdown of the Bretton Woods system of fixed exchange rates, the world moved toward a new political and economic paradigm.
Faced with record levels of public and private debt, inflation, and flat inflation-adjusted GDP growth stretching back to the Great Recession, it would be well to understand how the United States arrived at this moment.
Any discussion of money and banking in the United States after 1945 must begin with its two central features: the Bretton Woods international monetary system and the raft of New Deal–era banking regulations that had minutely rewritten the rules governing banking in the United States.
First, at its most basic, the Bretton Woods system sought to peg all currencies to the US dollar, to back those currencies with access to US dollars, and to redeem all dollars in gold at a fixed price of $35 per ounce. It was thus not a true gold standard but a dollar standard, with domestic central bankers pyramiding local currencies atop dollar reserves nominally redeemable in gold. This would become a problem both as the world economy grew and as US fiscal discipline flagged.
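A stylized illustration of that pyramid, assuming the post-1949 sterling parity of $2.80 and a purely hypothetical reserve figure:

$$ \pounds 1 = \$2.80, \qquad \$35 = 1\ \text{oz gold} \quad\Rightarrow\quad \frac{\$350\text{M in dollar reserves}}{\$35/\text{oz}} = 10\text{M oz of gold (nominal claim)}. $$

The arrangement held only so long as the total of such claims remained plausibly within reach of the US gold stock; once it did not, redeemability became a fiction.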
As for the New Deal banking regulations, these were highly restrictive, determining everything from what types of business banks were allowed to transact to the interest rates they were allowed to offer depositors. That the system was still able to function profitably was a result of the wider monetary stability of the Bretton Woods system. When that system failed, however, the ramifications of these ossified government regulations would be severe, and further interventions would only compound and prolong the problems.
During the 1940s and 1950s, however, the real and perceived benefits of these institutional inheritances meant there was little questioning of the status quo. This was particularly so after Dwight D. Eisenhower’s defeat of Senator Robert A. Taft for the 1952 Republican nomination, after which opposition to the northeastern Republican establishment within the party was marginalized. Still, several events are worth noting, as they ultimately helped lead to the demise of the Bretton Woods system and the crisis of confidence of the 1970s.
First, of course, was the decision by Harry S. Truman and subsequent administrations to fight the Cold War and to pursue the general policy of containment. This led to costly interventions that ran up debt, limited trade, and necessitated the establishment of a military-industrial complex. It also entailed the formation of the North Atlantic Treaty Organization (NATO), as well as the rebuilding of Germany and Japan as industrial powers, and then the maintenance of these states’ stability through access to US dollars and markets on preferential terms. Then there was the so-called loss of China to communism in 1949, when Mao Zedong’s communists defeated the US-backed forces of Chiang Kai-shek. Apart from any military or political considerations, the planners of the Franklin D. Roosevelt and Truman years had envisioned a new world order in which Japan’s excess industrial production would be dumped on China. That now being impossible, the decision was made to gradually open the US market to its ally as needed. This was a policy applied to virtually every Western-allied government during the Cold War, as the United States helped allies maintain employment to ward off any threat from the left.
On the domestic front, social spending under Eisenhower’s slimmed-down “dime store” New Deal, as well as increased defense spending during the Korean War, resulted in small but persistent deficits until the end of the decade. These minor deficits, however, were dwarfed by the growth of the US economy. As the only major industrial power not devastated by the Second World War, the US was enjoying a brief unipolar moment. Much as it would when the Cold War ended, such an extreme power asymmetry distorted the expectations of those who lived through it. As late as 1966, the US still had more industrial output and capacity than Europe and Japan combined. Reality, however, was inexorably to come calling.
For as countries’ citizens, businesses, and governments bought US products, received US dollars as investment, or required dollars for transacting international business, their central banks, particularly in Europe, began amassing large quantities of them. These were ostensibly payable in gold on demand by the US Treasury, and as the amount of dollars in circulation increased rapidly during the 1960s, displeasure among allies grew. Their currencies were pegged to the dollar and thus undervalued against it. They were being made to subsidize the dollar, and they knew it. Under Lyndon B. Johnson, spending on the Vietnam War and Great Society programs soared, and the resulting dollar glut, as it came to be known, caused a fatal rift. For his part, Richard Nixon cared little. After the gold window was abruptly closed in 1971—with Nixon not even bothering to notify the World Bank, the International Monetary Fund, or any allied government ahead of time—the Smithsonian Agreement attempted to preserve the essence of the system via a large one-off devaluation of the dollar and a move to partially floating currencies. Europe’s consternation, though considerable, was softened both by the US dropping its insistence on greater NATO burden sharing and by the recognition that the Bretton Woods system had become untenable. The new arrangement, however, would last only a few months. With the dollar still overvalued and currency speculation continuing, the Fed once again cut rates. In response, Britain floated the pound, with the rest of Europe to follow suit. It was 1972, and the era of market-determined rather than government-determined currency values had arrived.
Whatever the grumblings of the Japanese and Europeans over this unilateral overturning of the global monetary order, they were soon the least of Nixon’s worries. One set of countries was especially displeased by the devaluation of the dollar, and in 1973 the member states of the Organization of the Petroleum Exporting Countries (OPEC) coordinated an embargo on the United States. Just hours after the US declared its support for Israel in the Yom Kippur War, OPEC’s Arab member states ceased shipments of oil to the United States, and shortly thereafter all members participated in a doubling of the price of oil. It was a tremendous psychological as well as economic shock, and its effects were intensified by several exogenous factors. First, inflation had already been heating up because of the massive increases in social and military spending committed during the 1960s. The so-called Great Society and the Vietnam War had already cost over $120 billion ($850 billion in today’s dollars) between them, while tax cuts had further widened the growing budget deficits. Second, Nixon’s close ally at the Federal Reserve, Arthur F. Burns, had successively lowered interest rates in an effort to boost the economy in the run-up to the 1972 election and now did so again. Third, a rash of crop failures that year put upward pressure on food prices. Lastly, this was all happening within the context of a US economy burdened by ossified industrial relations that produced above-market wage increases, New Deal–era regulations that suffocated large parts of the economy, and an overactive antitrust policy that prevented the development of economies of scale. These structural deficiencies were amplified by the increasing competition of rebuilt Japanese and European manufacturers: from just 403 cars exported to the United States in 1957, Japan’s shipments would exceed 800,000 by 1975.
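As a rough check on the inflation adjustment above, assuming approximate average CPI levels of about 44 for 1973 and about 310 today (exact figures vary by source and endpoint):

$$ \$120\text{B} \times \frac{\text{CPI}_{\text{today}}}{\text{CPI}_{1973}} \approx \$120\text{B} \times \frac{310}{44} \approx \$845\text{B} \approx \$850\text{B}. $$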
One of the places where these government-created structural deficiencies were having their most deleterious impact, however, was the banking sector, the very arteries of capitalist society.
Among other things, the banking regulations put in place during the 1930s, such as Glass-Steagall, placed tight restraints on the activities of banks, confining them primarily to fixed-rate mortgage and business lending. As such, the accelerating inflation that accompanied the monetary instability of the post–Bretton Woods order placed great strain on the banking sector. In fact, the restrictions limiting the types of business banks could engage in, how many branches they could have and within what geographical confines, as well as the caps on the interest rates they could offer savers and investors, all combined with the high inflation of the decade to render the entire industry model unprofitable. Forced to offer below-market interest rates, banks struggled to maintain deposits as savers saw the real return on their money turn negative. Because lenders were highly reliant on mortgages, the profits generated by the original terms of their vast books of thirty-year fixed mortgages were also being consumed by inflation. Fearing for their survival, many lenders began sidestepping the regulations, offering new products such as jumbo certificates of deposit, which allowed banks to pay market interest rates to interested investors. The government reacted belatedly, removing the rate caps as well as geographical restrictions on branch locations, but the problems created by its interference had already cut deeply. Particularly devastated was the savings and loan industry, whose business had consisted almost entirely of originating and holding thirty-year mortgages. In short, the seeds of the later Savings and Loan Crisis had been sown.
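The arithmetic of that squeeze is captured by the approximate Fisher relation, where $r$ is the real return, $i$ the nominal mortgage rate, and $\pi$ inflation; the 6 percent coupon and 10 percent inflation below are illustrative round numbers, not historical averages:

$$ r \approx i - \pi \quad\Rightarrow\quad r \approx 6\% - 10\% = -4\% \text{ per year.} $$

A thrift earning a fixed 6 percent on thirty-year loans while prices rose 10 percent a year was losing roughly 4 percent in real terms annually, even before its rate-capped depositors headed for the exits.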
Nixon’s exit changed little, and Gerald Ford entered the White House facing an economic situation the Keynesian postwar macroeconomic planners had said was impossible: high inflation, high unemployment, and low growth. For a country already reeling from the loss of the Vietnam War and disillusioned by the public scandals of the Pentagon Papers revelations and Watergate, stagflation deepened the public’s sense of crisis and decline. For all his efforts during his brief time in office, Ford was unable to cut the Gordian knot at the core of the economic malaise. The dollar had been “liberalized.” It was time for regulations to follow suit.
Continued in part 2.