Creditocracy: A Geopolitical Economy
- Introduction
- Money Under Capitalism
- World Money, World Creditocracy
- The Gold Standard, 1870–1914: Gold or Empire?
- The Thirty Years’ Crisis, 1914–1945
- Bretton Woods: US Altruism or Imperialism? 1945–1949
- The Golden Age: Creditocracy in Abeyance, 1945–1971
- The Re-emergence of Creditocracy: 1971 to 2008
- World Money Beyond Creditocracy
- Emerging Alternatives
Introduction
As President Biden continues his predecessor’s New Cold War on China, it is clear that the pandemic has vastly accelerated the ongoing shift in the international balance of power, away from the US and towards China. For former US Treasury Secretary Lawrence Summers, it was likely a ‘hinge of history’: ‘[i]f the 21st century turns out to be an Asian century as the 20th was an American one, the pandemic may well be remembered as the turning point’. It would erase 9/11 and 2008 from memory and rank alongside ‘the 1914 assassination of the Archduke, the 1929 stock market crash, or the 1938 Munich Conference’ (Summers 2020).
However, Professor Summers misses the point. The twentieth century, from our point of view, was actually more an attempted American Century than an accomplished one (Desai 2013) and the shift away from it is looking more certain and decisive than the ‘ifs’ in his assessment let on. The pandemic is less a hinge than an acceleration of the decline of US power based on financialised neoliberal capitalism (Desai 2020a). The structure of world domination that the US had sought to foist on the world in recent decades is breaking down. The US never succeeded; the structure was too unstable and volatile to work. Therefore, one cannot blame the pandemic for reversing even its limited successes. The reversal is rooted in a geopolitical economic earthquake whose rumblings date back decades. They have loosened more and more countries from the contradictory and crisis-prone structures of US domination.
The core of all international power structures of the ‘capitalist mode of foreign relations’ (Van der Pijl 2014) lies in the international monetary system – what James Steuart called ‘the money of the world’ in 1767, referring to the means by which countries settle their trade or financial imbalances among one another. The domination the US sought to exert was no different. At its heart lay the dollar-denominated international financial system that we call the Dollar Creditocracy. It has undergirded the dollar’s world role since the early 1970s and its unravelling leads the denouement of US power.
The financial commentariat is already expressing foreboding of the dollar’s coming doom. ‘The decline of the U.S. dollar could happen at “warp speed”’, warns MarketWatch, while Reuters reports more sedately on how ‘King dollar’s decline ripples across the globe’. While set-tos between dollar boosters and gloomsters have long been a feature of the crises that have regularly punctuated the dollar system, what is remarkable is how many are now changing sides. Benjamin Cohen (2020) warned of the end of the dollar’s ‘exorbitant privilege’ and Stephen Roach (2020) predicted a 35 percent drop in the dollar index over the coming two to three years. Although some boosters such as Barry Eichengreen (2020) stuck to their guns, they were clearly low on ammunition, unable to find solace in anything other than the lack of alternatives.
Such commentators sense that doom lies ahead. However, they are far from explaining why. Cohen blamed it on Trump’s disastrous pandemic management, added to his tendency to weaponise the dollar, while Roach blamed it on increased US borrowing. Yet these explanations, like most commentary on the dollar’s world role, are tangled in that combination of wishful thinking and wager that one of us identified as the international financial intermediation hypothesis (IFIH) (Hudson 1972/2003). It emerged from the difficulties that ended the dollar’s link to gold in 1971 to conjure up a new basis for the dollar’s world role. By making the so-very-clever argument that the US was no ordinary indebted country but the world’s banker, and that its deficits were loans to the world, a public service the world should accept gratefully by lifting capital controls and deregulating finance, this interpretation attempted to normalise the transformation of the US economy from super creditor to super debtor. However, it was never more than a barely adequate fig-leaf.
Our purpose in this article is to cut through this interpretation. Despite its faults, it dominates our understanding of the dollar system. In its place we reveal one that is theoretically sound and accords with the historical record, a geopolitical economy (Desai 2013) of the international monetary system of modern capitalism. We begin with a theoretical outline of how money operates under capitalism. We then consider how capitalism needs world money and, at the same time, makes its stable functioning difficult. We then go on to trace the fundamental instability of the modern international monetary systems based on the national currencies of dominant countries, from the gold standard to the current volatile and predatory dollar-centred system, and their close connection to short-term and speculative, as opposed to long-term and productive, finance. We conclude by discussing the key instabilities of the dollar system and the paths that various countries and international organizations are already taking to move beyond its destructive logics.
Money Under Capitalism
No notion sets back our understanding of money more than the idea that money is a commodity. Money is an ancient social institution that puts capitalism in a bind: the essentially public character of money is pitted against capitalism’s urge to privatise, control and commodify it. However, success in doing so only lays the basis for crisis. Karl Polanyi, following the Marxist sociology of Ferdinand Tönnies, called money a fictitious commodity (Desai, 2020b).
Unlike commodities, Marx noted, money has no ‘natural’ price, no real cost of production (Marx, [1894] 1981: 478). Precious metal coinage was the earliest of the attempts to commodify money. Marx hit the nail on the head when he observed that
For coin, the road from the mint is also the path to the melting pot… In the course of circulation, coins wear down … The weight of gold fixed upon as the standard of prices diverges from the weight which serves as the circulating medium… The history of these difficulties constitutes the history of the coinage throughout the Middle Ages and in modern times down to the eighteenth century. (Marx, 1867/1977: 222)
The acceptance of coins relied not on the precious metal they contained, but on minting by a sovereign authority that undertook to exchange them for the right quantity of the metal. ‘[A]s coin, gold becomes completely divorced from the substance of its value. Relatively valueless objects, therefore, such as paper notes, can serve as coins in the place of gold’. The coin therefore is always ‘capable of being replaced by valueless symbols of itself’ (Marx 1867/1977, pp. 223-4, 225-6).
Moreover, as Pierre Vilar points out, capitalism requires that money not be too much like a commodity. If it were, it would be punishingly deflationary: ‘if a single stable monetary system existed, a perpetual fall in prices would have continually discouraged producers and sellers, for whom the prospect of increases is the best stimulus.’ (Vilar, 1976: 11). That is also why John Maynard Keynes pointed out that there were only two periods in history when metallic money functioned tolerably well: the Elizabethan and Victorian ages, when the supply of precious metals was sufficiently plentiful. Even then, other devices were needed to forestall deflationary consequences (Keynes, 1980: 30).
We must therefore understand money as an historical institution, created by human societies and changing as social forms change. Capitalism has changed money in a very distinctive fashion, seeking to force it into the mould of a commodity. Such an endeavour could never be entirely successful, but the effort did transform money in critical ways. Two elements are important here.
First, all money is debt, whether issued by states or owed by households and firms to private creditors. Repayment extinguishes the debt owed to private creditors. State-issued notes and coins constitute an accounting liability on government balance sheets. We have already seen the fate of the earliest attempt to commodify money and reduce state control over it by making a commodity the material bearer of money, usually a precious metal such as gold or silver: the social and political character of money showed through in the very exercise. Though governments were liable to exchange notes and coins for gold and such exchange discharged the debt, most holders never demanded gold. Since the mid-twentieth century, governments have largely ceased offering gold in exchange. This has freed governments to fund their expenditures with paper debt, as the United States did during its Civil War with its greenbacks.
While gold coinage or convertibility did not last as ways of commodifying money, two other ways persist in the new situation of fiat or government-backed money. First, in the private financial sector, the originally social and political debt relations became exchange relations. Second, through a self-denying ordinance, capitalist states limited their own issue of money, permitting private credit a greater role than public debt in issuing money.
The second element of capitalism’s transformation of money relates to how debt is managed. The earliest human societies managed debt for social stability by holding both parties to the social relation of debt co-responsible when debts could not be paid. In the ancient Near East, such management included jubilees: at regular intervals these celebrations extinguished all debts, freeing debtors to make new beginnings with ‘Clean Slates’, maintaining social cohesion and economic stability by cancelling unpayable debt (Hudson 2018 and 2020).
Only in Roman times did debt become a relation of pure contractual exchange, making it inescapable. Five centuries of civil wars were fought to reverse forfeiture of collateral, land and liberty, wars that led to the fall of Rome. Once debt was contracted, the debtor had to pay it without regard to adverse personal and social consequences. Creditors bore no responsibility for having made loans that could not be paid, often at interest rates as high as 42 percent. With mounting compound interest unmoored from real growth rates and the ability to pay, debts inevitably mounted to unsustainable levels and racked Rome with recurrent and politically destabilizing debt crises.
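The arithmetic behind this dynamic is easily checked. The short Python sketch below uses purely illustrative figures (the 42 percent ceiling cited above, a modest real growth rate, and hypothetical round numbers for principal and output) to show how quickly compounding debt outruns the output available to service it.

```python
# Illustrative only: compound debt growth versus output growing at a modest real rate.
# The 42 percent rate is the ceiling mentioned in the text; the principal, output
# level and growth rate are hypothetical round numbers chosen for the example.

def debt_vs_output(principal=100.0, output=100.0, interest=0.42, growth=0.02, years=10):
    rows = []
    for year in range(years + 1):
        debt = principal * (1 + interest) ** year   # compound interest, nothing repaid
        product = output * (1 + growth) ** year     # the real economy grows far more slowly
        rows.append((year, debt, product, debt / product))
    return rows

for year, debt, product, ratio in debt_vs_output():
    print(f"year {year:2d}: debt = {debt:8.1f}, output = {product:6.1f}, ratio = {ratio:5.1f}")
# After a decade the debt has grown to roughly 33 times the original principal,
# while output has grown by about a fifth: the ratio explodes.
```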
Since Roman times, creditors have forced debtors who could not repay to forfeit their assets through foreclosure or forced sale. Though the medieval age recognised the ills of debt in its injunctions against usury, capitalism resurrected this aspect of Roman law. To be sure, the tyranny of creditors was sometimes vanquished by powerful debtors: Philip IV of France destroyed his creditors, the Knights Templar, and Edward III of England defaulted on his debts to Italian banks, bankrupting them. Overall, however, the creditor interest has asserted itself repeatedly. In the post-Civil War US, it imposed an infamous monetary deflation that led to widespread farm bankruptcies and impoverished farmers. This was repeated in the Great Depression of the 1930s, by President Obama after 2009, and by the IMF and its Structural Adjustment Programmes in the developing world in the 1980s and 1990s.
Enforcing the legal fiction of debt as an exchange relation was the necessary condition for commodifying paper money. The sufficient condition involved capitalist states imposing on themselves a monetary self-abnegation when it came to issuing money. Government-created money never needs to be paid back, and does not expand the power of private creditors. So, when governments began limiting their own issuance of money and even borrowing from private creditors, they left the overwhelming amount of money creation as a source of profits for private creditors, banks and financial institutions and founded veritable creditocracies, backing these financial interests with political power. Such arrangements were already being made in the earliest years of capitalism, when private creditors made their pacts with states hungry for funds to fight wars. Lenders ensured that states did not tax them but borrowed from them (Ingham, 1984, 48-9, 99-100), and states often settled war loans by giving creditors monopolies, such as the East and West India Companies, the South Sea Company and the Bank of England.
This is how capitalist states have used their power to create, preserve and extend that of their financial sectors, including over themselves. There is a cost to this. Leaving the issuance of the overwhelming amount of money in circulation to competing profit-seeking private creditors makes them touts and pushers of debt and their activities regularly lead to crises, followed by state bailouts and new financial regulation.
World Money, World Creditocracy
We are now ready to approach the question of how these national monetary orders of capitalism relate to one another internationally. One key contradiction has powered the history of world money under capitalism. On the one hand, money is created by states or those delegated and controlled by them. On the other, there can be no world state under capitalism, and thus no world money. When dominant states nevertheless seek to foist their currency on the world as world money, they add new layers of contradictions and volatilities to the already unstable logic inherent in the geopolitical economy of capitalism (Desai 2013), the ‘relations between [its] producing states’ as Marx once put it (Marx, 1858/1973, 886).
Dominant states and their capitalists seek to externalise onto other states or territories the consequences of their capitalism’s contradictions, such as excess commodities and capital, or the need for cheap labour and raw materials. These efforts victimize subordinated economies, but make rivals of states that are able to contest this domination. When the latter happens, there are confrontations – diplomatic, economic or even military – like those between Britain and her nineteenth century rivals, such as Germany. The result then was a Thirty Years’ Crisis (1914-45), including two world wars and a Great Depression. Today, we are witnessing rising tensions between the US and countries like China and Russia. The struggles resulting from international victimization, rivalries and resistance prevent any world state from being formed, also preventing stable world money.
That is why all major critical writers on the subject, from Marx through Keynes to Polanyi, distinguished the understanding of national currencies from the distinctly different arrangements world monies have needed. That is also why the gold-sterling standard before the First World War and the dollar-centred system since the Second World War have been inherently unstable arrangements, the latter even more than the former. National states posing as world states offer their national currency as world money, and use force to integrate the world economy, though their goal of a seamless realm of its acceptance has not been and could not be realised, thanks to the inherent instabilities of capitalism’s geopolitical economy (Desai 2013 and 2020b).
The key to understanding the world monetary systems based on the national currencies of the dominant capitalist countries is that they are primarily financial systems: private credit forms the battering ram of their international projection as world money. International monetary systems have, therefore, been the financial systems of particular countries. Governed by central banks that in most countries represent the interests of the financial sector, they generate vastly more private debt than public money. The results have been international rentier elites and world creditocracies, first centred on sterling and then the dollar. Their power extends through networks of institutions offering private credit to the world’s households, firms and governments and dealing in financial assets, such as stocks, bonds and other securities and their derivatives, especially for real estate and natural resources. The network is ultimately protected by the international power of that state. The 1950s and 60s constituted an exception to this when the United States supplied gold and exports to other countries. (Much of the gold was simply a return of the flight capital that had come to the United States in the 1930s.)
These arrangements have shaped the world’s trade and production patterns in the interest of financial classes, seeking to lock in the world balance of power. Other countries became satellites of the dominant economies, buying their surpluses and monopoly goods, and opening their capital markets. Open capital markets let dominant-country capitalists own and control their most lucrative sectors, especially those involved in primary commodities and public-infrastructure monopolies, earning higher returns on their capital than they would enjoy at home. They also let dominant nations’ financial houses speculate in the asset markets – for stocks, bonds, real-estate etc. – of the satellite countries, profiting while the going is good and leaving the country’s government to clean up the financial and economic mess after the inevitable financial crisis strikes. Whether such countries are colonies or formally independent countries, their freedom to do otherwise is severely curtailed. A great deal of this is achieved by backing compliant satellite oligarchies, often by overt military force and covert operations.
As the core instrumentality of domination, creditocracies are intricately enmeshed in international conflict. Major shifts in the international balance of power are expressed in parallel shifts in international monetary systems and the domestic financial systems on which they rest. Each international monetary system has rested on an inherently unstable financial system. Of course, this is precisely what is hidden by the dominant discourses about them.
The Gold Standard, 1870–1914: Gold or Empire?
In popular myth, the international gold standard (c 1870 to 1914) was a pervasive, stable and beneficial arrangement, automatically adjusting the gold value of the world’s currencies upward or downward as economies improved or deteriorated. Only the First World War ended it.
These myths have been busted in recent times by those who argue that the post-1971 dollar system is just the contemporary version of that system. The new account clarified that it was not a gold standard but a gold-sterling standard, that it was not automatic but managed, and that though sterling was predominant, there were other ‘key currencies’, such as the French franc or the German mark. Such revisions served the purpose: to cast flattering light on the dollar system. Unsurprisingly, they refrained from saying anything about the sterling system’s less flattering parts, its imperial basis, instability and dysfunctionality for working classes and subordination of colonies.
What was the sterling system really and how did it emerge? In medieval times, gold and silver coins circulated together and, with the development of capitalism, this system was progressively transformed. In Britain, silver was driven out and bank notes came into increasing use to supplement gold in circulation. The inflationary financing of the Napoleonic Wars led, eventually, to the 1844 Bank Charter Act. It limited the notes the Bank of England and other banks could issue to a conservative ratio of the Bank’s gold reserves. This British gold standard became international when other countries began pegging their currencies to gold in the 1870s (De Cecco 1984, p. 2). Sterling was only the most widely used such currency.
Britain’s commitment to gold is storied, and colonies, such as British India, were dragooned into the sterling standard at considerable disadvantage to them. Other countries’ commitment and motivations varied. While gold appreciated, some countries, such as the oligarchical primary commodity exporters, Austria-Hungary and Russia, remained with depreciating silver (ibid., pp. 51-2). And the countries which adopted the gold standard did so for varied reasons: to escape the depreciation of silver, to obtain credit, or, in the case of industrial challengers like Germany, to gain international acceptability for their own currency as part of a drive to expand market share (ibid., ch. 3) and challenge Britain’s control over international financial flows.
The gold-sterling standard was not automatic but highly managed. The Bank of England managed the value of sterling and gold outflows by raising or lowering the interest rate. The mechanism worked because of London’s short-term lending through British financial institutions, which simply left more of their deposits in London when interest rates went up. Other countries, particularly Britain’s productively superior challengers whose financial systems were geared to long-term lending for production, did not have such hair-trigger mechanisms ensuring short-term in- and out-flows. They had to accumulate considerable gold reserves to defend the gold value of their currencies. More generally, governments and central banks decided how to balance transmitting the disciplinary effects of international price movements to their domestic economies against protecting them from the same movements (Polanyi 1944). When such choices proved too difficult, governments could also go off the gold peg.
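The asymmetry described here can be made concrete with a stylised sketch. All magnitudes below are hypothetical, and the ‘sensitivity’ parameter is simply an assumption standing in for how elastically short-term funds respond to the central bank’s rate: London could meet a given external drain by nudging Bank rate, while a system geared to long-term industrial lending had to hold a gold buffer of comparable size.

```python
# Stylised comparison, not historical data: all magnitudes are hypothetical.
# London model: short-term balances respond elastically to Bank rate, so a small
# rate rise offsets an external gold drain.
# Contender model: balances barely respond, so the drain must be met from gold reserves.

def balances_attracted(rate_rise_points, sensitivity):
    """Extra short-term funds retained in the financial centre per point of rate rise."""
    return rate_rise_points * sensitivity

drain = 50.0                   # hypothetical external drain (millions)
systems = {
    "London": 60.0,            # elastic: hair-trigger short-term money market
    "Contender": 5.0,          # inelastic: finance tied up in long-term industrial lending
}

for name, sensitivity in systems.items():
    inflow = balances_attracted(rate_rise_points=1.0, sensitivity=sensitivity)
    gold_needed = max(0.0, drain - inflow)
    print(f"{name}: inflow from a 1-point rate rise = {inflow:.0f}, gold reserve needed = {gold_needed:.0f}")
# London covers the drain almost entirely through the money market;
# the contender must hold gold roughly equal to the drain itself.
```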
However, the gold standard system was not only a managed sterling standard that had to contend with other key currencies and domestic considerations, it was also imperial, unstable and economically dysfunctional.
While the gold peg made sterling internationally acceptable, the British Empire’s financial flows actually underwrote it, permitting the system to work with a comparatively tiny gold reserve. The empire was able to furnish liquidity by financing trade and investment in its white settler colonies and the US with surpluses squeezed out of its non-settler colonies, chiefly British India (Desai 2018, Patnaik, 2017, Saul 1960 and De Cecco 1984). While geopolitically motivated deposits from countries like Greece or Japan helped, as did those of Britain’s own increasingly powerful joint-stock banks, India’s contribution was indispensable (Saul 1960, 6; De Cecco 1984, 36-8, 122-26).
The sterling standard’s imperial character did not, however, protect it from the instability of the wider system. This instability arose because, as Marcello De Cecco pointed out, the world economy was not a Ricardian one, seamlessly unified and ruled by the currency of its most powerful country, but a Listian geopolitical economy of competing and struggling national states and economies (De Cecco 1984, p. 13). The gold standard era was in fact witness to acute industrial and imperial competition as new industrial powers rose to challenge Britain’s pre-eminence, competition that led, as is well known, to the First World War (Hobsbawm 1987). How could the sterling system remain unaffected?
Countries that successfully industrialised behind protectionist walls adopted the gold standard not to subordinate themselves to its discipline but to challenge Britain’s sterling system as they had challenged her control over the world market. The challengers had, moreover, radically different financial systems that could not prompt inflows and outflows through small interest rate changes and hoarded gold to defend their currencies’ gold value. As they did so, ‘the Bank of England found to its chagrin that when it raised the bank rate gold did not flow in as easily’ and it had to raise the rate much higher. The adverse ‘effects on the economy became substantial, and they were noticed by the public and by the financial and political class’ (De Cecco 2009, 126). This ensured that the sterling system was weakening well before war broke out in August 1914.
This was the context in which US policy and business elites began refining their foreign policy objectives. Rather than merely seeking an ‘open door’, they now sought to ‘topple and replace British business interests as the managing component of the world economy’ (Parrini 1969, 1). They sought to do this not by acquiring a territorial empire but by replacing sterling and London with the dollar and New York as the world money and financial centre respectively, and presiding over an open world economy.
Dollar boosters have encouraged the belief that the sterling and dollar systems are the acme of financial sophistication. Nothing could be farther from the truth if financial sophistication is taken to consist in fostering economic dynamism. Quite simply, the sterling standard operated in the declining part of the world capitalist system, while a completely contrasting one prevailed in the vigorously rising part, consisting of the contender nations, such as Germany, the US and Japan.
The sterling system combined short-term speculative and rentier activity with long-term investment abroad, chiefly in Britain’s settler colonies and the US. There it aided their industrialization. British investors were passively earning only low interest while borrower capitalists in these countries reaped high profits (Hilferding 1910/1981) even as British industry began its still un-reversed relative decline (Gamble 1994, Ingham 1984). The sterling system also ruined nominally independent non-Western countries, such as Persia and Egypt. Only Britain’s non-settler colonies permitted sterling’s paramountcy in the face of industrial decline. They absorbed Britain’s increasingly uncompetitive exports, generated export surpluses that compensated for Britain’s growing trade deficit while increasing Britain’s capital exports. They constituted the system’s famished foundation (Patnaik and Patnaik 2016).
The Central European (Mitteleuropäisch), particularly the German system, by contrast, used a three-way coordination between governments, banks and industrial firms to prioritise industrial expansion. Most contemporary observers considered the latter superior (Hudson 2010, Hilferding 1910/1981 and Desai 2020c).
Finally, we may note that the sterling system’s gold link relied on another luxury, a politically quiescent working class on whom the burden of high interest rates and unemployment could be imposed to maintain the gold value of sterling. It would not survive the coming era of working class empowerment (Eichengreen 1992).
In sum, the sterling standard, the benchmark against which the dollar system is usually compared, was not only managed but also unstable, dysfunctional and already in crisis well before the war broke out in 1914. It relied on quiescent working classes and colonies, both conditions that would cease to obtain in the decades to come. That was the chief reason why it would not be resurrected in the interwar decades, try as British authorities might.
The Thirty Years’ Crisis, 1914–1945
The sterling system’s inherent instability showed how impossible it was for a national currency, even one at the helm of the greatest empire ever, to serve as world money. New arrangements were clearly needed. However, the Great War transformed international finance in an unexpected manner. The US Government, pursuing its ambition to replace Britain at the centre of world money and finance, emerged as the overwhelming world creditor because of the loans made to the Allies to fight the war.
In pursuit of its ambitions, the US had already started moving away from its more productively focused financial system by adopting English commercial banking principles. It established the Federal Reserve in late 1913, becoming the last major country to acquire a central bank. The next step was US entry into the Great War. The nation’s banks had exhausted their ability to finance exports to the warring allies, leading the US government to step in.
The war had already brought the US economy out of a depression. It now proceeded to transform the US from a great debtor into the world’s creditor, and hence arbiter of the peace that followed. In single-minded pursuit of its ambitions, the US government undermined its English and French allies, and ultimately its own economy and corporations.
The key was the US insistence that Britain, France and other Allies pay the debts they had incurred to fight the war. This demand led the Allies, in turn, to demand reparations from defeated Germany. Many US corporate leaders saw the dangers inherent in demands for repayment of such unpayable debt, used as it had been for destruction, not production, and called for at least a partial cancellation. The British also reminded the US of their own forgiveness of Austrian debts owed to them after the Napoleonic wars. Keynes, mindful of limits both on war-weary Europe’s ability to pay and on the US’s ability to absorb the exports such repayment would prompt, called for a ‘general bonfire’ of the ‘vast paper entanglements’ (Keynes 1919, 283). However, US Government demands for payment prevailed. This accomplished two things. First, governmentalized international finance displaced the private financial flows over which sterling had presided. Second, the creditor-oriented principle that all debts must be paid, regardless of how socially destabilizing the consequences, was established in inter-governmental finance just as in private finance.
World growth and stability were sacrificed to these inter-governmental creditor claims. Satisfying such large creditor demands of a single government led debtor countries to pay by subjecting their economies to austerity. Allied governments and their central banks siphoned off economic surpluses to pay debts owed to the US Government, a sum far in excess of what they owed to America’s private bankers.
The US government for its part was chiefly concerned with its own world power, and pursued objectives quite distinct from those of Wall Street. This became clear when, in 1931, President Herbert Hoover announced a moratorium on US Inter-Ally debt demands. This led to one on German reparations and stock markets jumped throughout the world. The resulting restoration of foreign-exchange stability more than repaid the United States for the loss of the nominal $250 million sum of foregone debt service. Suspension of the government’s claims had a salutary initial effect on private international finance capital.
US interwar financial actions were also implicated in the crash of 1929 and the Great Depression. Given that its demands for debt repayment and reparations were unsound, the US had to organise a veritable financial merry-go-round to keep them going as long as they did: Germany paid reparations to the European allies, who paid their war debts to the US – and its banks, in turn, lent to Germany, chiefly to German municipalities. The US Federal Reserve maintained low interest rates through an early form of Quantitative Easing to encourage this circular flow, and to help the British put the pound back on gold. However, a side effect of low US interest rates was leveraged speculation in the US stock market, which rose even faster as foreign lending slowed. The US raised interest rates to tame it, triggering the crash of 1929. The already slowing economy, ultimately due to the undermining of the very markets on which the US relied to keep its war-bloated economy expanding, careened into the Great Depression. Lacking protected colonial markets, the US was its worst sufferer.
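The circular flow described above can be traced in a minimal ledger sketch. The amounts are purely illustrative; only the direction of the flows follows the account given here: US bank loans to Germany financed German reparations to the Allies, which in turn financed Allied war-debt payments to the US government.

```python
# Minimal ledger of the inter-war "merry-go-round" (illustrative amounts only).
balances = {"US banks": 0.0, "US government": 0.0, "Germany": 0.0, "Allies": 0.0}

def transfer(payer, payee, amount, purpose):
    """Move a sum from payer to payee and record what it was for."""
    balances[payer] -= amount
    balances[payee] += amount
    print(f"{payer:13s} -> {payee:13s} {amount:6.0f}  ({purpose})")

for year in range(1, 4):  # three illustrative rounds of the circuit
    transfer("US banks", "Germany", 100.0, "new dollar loans, chiefly to municipalities")
    transfer("Germany", "Allies", 100.0, "reparations")
    transfer("Allies", "US government", 100.0, "Inter-Ally war-debt service")

print(balances)
# Nothing productive is financed: claims simply circulate, and the whole circuit
# stalls the moment US lending slows, as it did at the end of the 1920s.
```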
Advocates of US Hegemony (such as Kindleberger 1973) bemoan the US’s ‘failure’ to provide international leadership in the interwar period. What they do not understand is that the US’s pursuit of world power was necessarily a zero sum game. The last thing Roosevelt and his advisors wanted was the kind of internationalist leadership or even world recovery that would rehabilitate British, French and other European economies, enabling their governments to act as equals. The US aimed to subordinate foreign interests to its creditor claims, while escalating America’s protective tariffs and quotas to make it harder for these governments to repay. The Roosevelt administration justified its actions with the rationale that freeing Europe from having to pay its war debts to the United States would simply leave its governments with more money to re-arm and threaten the world once more with war. In reality, US actions from Versailles onwards were already making the Second World War inevitable.
Seeing it as a ‘second chance’ to pursue its goals, the US Government organized its intervention in the Second World War much the same way, tempered here and there by a lesson or two of the disastrous inter-war experience. In 1944-45 it tried to absorb the Sterling and Franc areas into its own dollar-centered financial system, based once again on inter-governmental debts. Once more, success eluded it.
Bretton Woods: US Altruism or Imperialism? 1945–1949
In 1944, with the war’s end imminent, planning for the post-war international order got under way. The US sought to use the Bretton Woods negotiations to revive its plans for world domination by securing the dollar’s position as world money. The first aim was to limit the potential power of rivals, pre-eminently British sterling and Keynes’s proposals for bancor. US officials used US creditor power, re-charged by the Second World War and by the capital that had fled Europe for the United States since the 1930s, as a lever to pry open foreign markets for US exporters and investors. Finally, they ensured that the newly formed institutions of international economic governance, chiefly the IMF and the World Bank, were designed to impose free trade and financial flows, both of which were expected to benefit US business.
However, these arrangements could not be imposed on other capitalist economies, weakened by war. They could not withstand the rigors of free trade and capital movements and the resulting debility would increase the attractions of Communism, which had, in any case, removed vast territories and populations from the ambit of capitalism. Nevertheless, soon after 1945 much of the infrastructure for US plans was, if not operational, in place.
To bring pressure on Britain, the US abruptly stopped wartime Lend-Lease aid to Britain as soon as hostilities ended. The US used negotiations over the repayment of the $20 billion debt Britain had already incurred to secure three aims: to take over what remained of British overseas assets, private and public, by obliging Britain to sell them off to pay Lend-Lease credits; to pry open Britain’s Imperial Preference system; and to secure British support for the design of the IMF and the World Bank.
Under existing colonial and Imperial Preference arrangements, Britain had effectively frozen nearly $10 billion in sterling deposits of major exporters such as India or Egypt in London to ensure they would be spent on British exports through preferential tariff arrangements. The US wanted to open up British and Europe’s colonial raw-materials resources and markets for US corporations so that the blocked sterling credits could be spent on US exports instead of being limited to British products.
The US aim was to gain access to world markets, a precondition for achieving full employment at home. To this end, US pressure on Britain through the loan negotiations enlisted it in a common front against Europe. In the immediate post-war period, the effect was to concentrate in US Government hands most of the major decisions regarding which countries could borrow, how much, and on what conditions.
Finally, the US targeted proposals for an alternative to a US dollar-denominated, creditor-oriented international financial system. John Maynard Keynes proposed a multilateral International Clearing Union to settle international payments in a new multilaterally created currency, the ‘bancor,’ whose value would be determined by a price index of 30 widely traded commodities. The proposal was designed to eliminate persistent trade and financial imbalances by putting pressure for adjustment on creditor economies (mainly the United States) as well as debtor economies, by charging interest on positive as well as negative balances, and by wiping out the excess accumulations when they failed to find a counterpart in the ability of debtor countries to pay. Keynes’s scheme also underlined creditor nations’ obligation to make debts payable by importing goods from debtor countries and taking steps to improve their productive capacity. These proposals rested on Keynes’s critique of German reparations and Inter-Ally debt excesses of the 1920s and his acute awareness that a dollar system would subject Britain, with her declining industry and imminent loss of empire, to practically colonial pressures.
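The symmetry of Keynes’s proposal can be made concrete with a small sketch of a clearing-union ledger. It simplifies the design described above to its central rule, symmetric charges on both credit and debit bancor balances; the member names, trade figures and 2 percent charge rate are hypothetical, not drawn from Keynes’s plan.

```python
# A minimal sketch of an International Clearing Union ledger, assuming:
# - trade between member central banks is settled in bancor,
# - both positive (creditor) and negative (debtor) balances are charged interest,
#   the symmetric pressure for adjustment the proposal relied on.
# Members, amounts and the 2 percent charge are illustrative.

CHARGE_RATE = 0.02

balances = {"Surplusia": 0.0, "Deficitia": 0.0}   # hypothetical members

def settle_trade(exporter, importer, amount):
    """Exporter's bancor balance rises, importer's falls; no gold or dollars change hands."""
    balances[exporter] += amount
    balances[importer] -= amount

def apply_symmetric_charges():
    """Debit a charge on every balance, creditor and debtor alike, penalising large
    balances of either sign."""
    for member, balance in balances.items():
        charge = abs(balance) * CHARGE_RATE
        balances[member] -= charge
        print(f"{member}: balance {balance:8.1f}, charge {charge:5.1f}")

settle_trade("Surplusia", "Deficitia", 500.0)     # a persistent one-way surplus
apply_symmetric_charges()
print(balances)
# Unlike a creditor-oriented system, the surplus country also pays: its cheapest way
# to avoid the charge is to import more from, or invest in, the deficit country.
```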
US designs for the IMF and the World Bank, the two institutions of international economic governance to emerge from the Bretton Woods conference, involved ensuring that their lending would be conditional. Conditions would include refraining from enacting protective tariffs or quotas, or erecting financial barriers such as competitive devaluation, multiple exchange rates, bilateral clearing agreements or blocked funds beyond a brief transition period.
Post-war European determination to expand the productive base and reduce balance of payment pressures converted what would have been an even greater US trade surplus into the great post-war expansion of US corporate investment in Europe. US corporations bought foreign companies and set up production facilities near markets and cheaper labour. As foreign earnings became an increasingly large proportion of US international profits, US corporations appeared to thrive. However, this investment outflow heralded the great US investment outflow to China and its Asian neighbours after 1990 that, while keeping down US consumer prices, shifted industrialization away from the United States itself.
Given US export surpluses at the time, US foreign investment seemed almost the only way to recycle its export earnings as international liquidity. While some US economists worried about shifting industrial production abroad, politicians on both sides of the Atlantic thought it would provide the basis for a stable equilibrium.
However, this was not the kind of equilibrium that Keynes had proposed. His ideas would have formally ended the financial monopoly of the single payments-surplus nation and its currency, precisely what US officials desired. By using the US’s post-war economic and financial weight, and by promising to back the dollar with gold, the US ensured the rejection of Keynes’s plan. That left the world with no alternative to the US dollar. Even so, the rest of the world was in no mood to swallow this bitter pill and the US had to promise to continue backing the dollar with gold at $35 an oz. When the war ended in 1945, the United States held about $20 billion in gold, accounting for 59 per cent of world gold reserves and these reserves only grew when, amid the dollar shortage, the Europeans were forced to pay for much needed US imports with gold. Europe lost gold rapidly to the US Treasury. US holdings rose by $4.3 billion by 1948, and by 1949 its gold stock reached an all-time high of $24.8 billion, reflecting an inflow of nearly $5 billion since the end of the war. France lost 60 percent of its gold and foreign exchange reserves during 1946-47, and Sweden’s reserves fell by 75 percent. Over the next two decades, however, the tables would turn dramatically.
The Golden Age: Creditocracy in Abeyance, 1945–1971
Ideas about US hegemony that emerged in the 1970s (e.g. Kindleberger 1973; Gilpin 1971. For a fuller discussion see Desai 2013, 124-137) retrospectively designated the 1950s and 1960s as a period when the US had been hegemonic, reluctantly accepting the burdens of world leadership and permitting the dollar to serve as the world’s currency. However, the US was neither reluctant nor successful. Having nursed the ambition to emulate Britain’s erstwhile dominance over a world economy mostly open to it, and squandered its opportunities to realise it after the First World War, it was determined to succeed at this ‘Second Chance’. However, despite the considerable power it wielded, circumstances were not propitious.
Though at Bretton Woods the US succeeded in preventing the emergence of any alternative to the dollar in international payments, it had to promise to back it with gold and it did not succeed in preventing capital controls, considerable state intervention in economies and financial regulation. Given the fragile state of war-torn allied economies, insisting on free markets, trade and capital flows, as the State Department under Cordell Hull wished to, would have been tantamount to handing them over to Communism. With the stabilization and extension (to Eastern Europe and China) of the Communist World and decolonised countries pursuing state-directed development, these compulsions made for the most dirigiste period ever witnessed since the beginnings of capitalism. Little wonder then that it was characterised by heavily regulated financial systems focused on productive expansion, with capital controls and low interest rates. The result was the ‘golden age,’ the most sustained period of growth the world had witnessed.
In such a dirigiste, far from open, world economy, defeating alternatives to the dollar could only have been a Pyrrhic victory. The dollar did not preside over a world-girding financial system but only served to settle imbalances between central banks, apexes of their respective, heavily regulated and closed financial systems. Without an empire, in a world of national economies all seeking growth and therefore investment, the US was in no position to export capital. Early on, with the US running an export surplus and sucking in the world’s dollars, there were shortages of the means of international payment. After 1958, when European currencies became convertible and could serve in international payments, practically overnight, the dollar shortage turned into a dollar glut.
Robert Triffin (1961) knew why. Since the US was unable to export substantial capital, its current account deficits, due to its military expenditures in Korea and Vietnam, became the way the US furnished the world with dollars. This method was subject to the Triffin Dilemma: deficits were necessary to provide liquidity but lowered the dollar’s value. After 1958, when major European currencies became convertible, the US’s vast gold hoard was drained down so quickly that by 1961 there was not enough to back the dollars in circulation, given that US law required 20 percent of the paper currency in circulation to be backed by gold. The US had to persuade its allies to pool their gold to retain the dollar’s gold peg.
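The bind can be illustrated with back-of-the-envelope arithmetic. The sketch below takes the 20 percent statutory cover mentioned above as given, but the starting gold stock, note issue and annual flows are hypothetical round numbers; the point is only that persistent deficits must eventually breach any fixed cover ratio.

```python
# Illustrative Triffin-Dilemma arithmetic. The 20 percent cover requirement is the
# figure given in the text; starting stocks and annual flows are hypothetical.

REQUIRED_COVER = 0.20

gold_stock = 24.0        # hypothetical gold stock, $ billions
notes_outstanding = 30.0 # hypothetical paper currency in circulation, $ billions
annual_gold_drain = 1.5  # hypothetical conversion of surplus dollars into gold, $ billions
annual_note_growth = 1.0 # hypothetical growth in dollars supplied via deficits, $ billions

year = 0
while gold_stock / notes_outstanding >= REQUIRED_COVER:
    year += 1
    gold_stock -= annual_gold_drain          # foreign holders cash dollars in for gold
    notes_outstanding += annual_note_growth  # deficits keep supplying the world with dollars
    cover = gold_stock / notes_outstanding
    print(f"year {year}: gold {gold_stock:5.1f}, notes {notes_outstanding:5.1f}, cover {cover:.0%}")

print(f"The statutory cover is breached in year {year}: liquidity provision and "
      f"confidence in the gold peg cannot both be maintained.")
```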
Over the next decade, the dollar lurched from crisis to crisis and exhausted all expedients for dealing with the situation. Kennedy dealt with it by claiming that there was no objective problem, only one of confidence. Johnson, for his part, ended domestic gold convertibility, engaged in ‘special transactions’, and persuaded allies to repay war and Marshall Plan debts early, buy more US military supplies, make advance payments on them, hold their surplus dollars in non-convertible US treasury bills and, not least, agree to a de facto embargo on US gold sales.
Having exhausted all expedients, and knowing that restoring the dollar’s gold value would require punishing economic measures at home, Nixon abandoned convertibility in August 1971. Just over twenty-seven years after the US scuttled Keynes’s plan at Bretton Woods to install the dollar as world money, the US had failed, and all it had to show for it was the loss of its enviable gold reserves.
The Re-emergence of Creditocracy: 1971 to 2008
By the early twenty-first century, the dollar was well into its second, even more volatile and destructive career, now reinforced by the rhetoric of Clinton’s ‘globalization’ and Bush’s ‘empire.’ New discourses proposed to regard the 1971 closing of the gold window as a masterful move, at one stroke unburdening the US from backing the dollar with gold while leaving the dollar’s preeminent position intact, perhaps even enhancing it in a veritable new ‘Bretton Woods II’ (Dooley et al 2005).
One has to cut through the fog of these discourses to retrieve the real history of the dollar after 1971. Initially, it took the form of a dollar-Treasury Bill standard (Hudson 1972). As the US continued to run its current account deficits, US Treasury securities perforce became the ‘safe’ asset in which foreign central banks could hold their surplus dollars, instead of demanding gold. However, neither it, nor the other measures the US now took, could prevent the dollar’s slide.
The US scuttled the Committee of Twenty negotiations to reform the international monetary system on a more equitable and less asymmetric basis when it concluded agreements with OPEC to recycle their oil surpluses in US and allied banks (Williamson 1977, xi), lifting capital controls to facilitate this. However, this too appeared incapable of halting the dollar’s slide. On the one hand, unimpressed Europeans took the first step towards European monetary integration by creating the ‘snake’ mechanism to place limits on fluctuations of member currencies in terms of one another. On the other, US and allied banks, stuffed to the gills with petrodollars and unable to lend in a stagnating West, went on a Third World lending spree, aided by the World Bank where less creditworthy nations were concerned. The result was a veritable ‘magic liquidity machine’ (Calleo 1982, 138) that triggered a new bout of Third World industrial deepening, boding ill for US ambitions.
The dollar’s decline became precipitous, sending the price of gold up to over $800 an ounce around 1980. Clearly, amid the stagflation and negative real interest rates of the 1970s, US Treasuries were not attractive. Only after the new Chairman of the Federal Reserve, Paul Volcker, permitted rates to rise as high as necessary – to 20 percent at one point – to stabilise the dollar did the new arrangement hold. The Japanese now became the major holders of US treasury bills. High interest was not the only cost: Volcker also reinforced the tendency of US industry to achieve competitiveness at the expense of workers, even as Japanese manufacturers’ access to US markets marked the beginning of the rapid deindustrialization of the US that is still ongoing. It mirrors that of Britain in the gold standard period.
The punishing Volcker Shock recession of the early 1980s did push interest rates down from their stratospheric heights, though they remained historically high throughout the 1980s and 1990s in order to attract the funds needed to finance high US government deficits. By 1982, they triggered the Third World Debt Crisis as Mexico, Brazil and Argentina warned of impending default. The US, aided by the IMF and the World Bank, swung into action. In the first major post-war assertion of creditor interests, they enforced the rule that governments never go bankrupt (since they can always tax their citizens). The debt restructurings that followed ensured that by the end of the 1980s, Brazil and Argentina were each paying an enormous 45% interest rate on their dollar-denominated bonds (held mainly by their own kleptocratic elites).
Meanwhile, astronomical interest rates had sent the dollar to unsustainable heights and the 1985 Plaza Accord between the key currency countries was necessary to ensure that its inevitable decline was relatively orderly. The US had to put its financial house in order by closing its deficits in the late 1980s and early 1990s. This did not prevent the dollar from hitting another nadir as the euro emerged as a new rival. The defects of the Euro’s architecture should not draw attention away from the Europeans’ intention to withdraw their mutual transactions from the dollar system, much like the countries today concluding bi- and multilateral agreements to sidestep the dollar are doing. And, like the stronger European currencies it brought together, the euro is also used in international payments farther afield.
However, by the late 1980s, financial deregulation picked up pace and started the regressive transformation of the US financial system. During the gold standard era, it had resembled the productive German ‘finance capital’ model. After a brief and disastrous flirtation with the speculative UK model that culminated in the Crash of 1929, Depression-era banking regulation, such as the famous Glass-Steagall Act, turned it into one of the most regulated of financial systems. So the US financial system remained until the 1980s, when it set off once again on the deregulatory path to ever more resemble the UK’s archaic, predatory, speculative and short-term finance model.
In this form, it was finally ready to expand the supply of assets – denominated in US dollars or in currencies easily convertible into US dollars – for private holders, such that this private financial demand for the dollar would counteract the Triffin Dilemma’s downward pressure on the dollar, which continued thanks to the US’s infamous twin – government and current account – deficits. This demand was many times greater than central banks’ demand for dollars as reserves. The resulting rise in financial activity in most countries was analysed as ‘financialization’ (Krippner 2005), though most scholars neither disaggregated the phenomena to examine the particulars – agents, assets, flows and regulatory environment – of each discrete financialization, nor analysed their usually intimate connection with the requirements of the dollar system. Ever greater US budget, trade and current account deficits now became fixtures on the scene, but the volumes of international financial flows necessary to undergird the dollar were many times greater.
These processes accelerated with the 1987 appointment of Wall Street lobbyist Alan Greenspan as Chairman of the Federal Reserve. Under his supervision, an even stronger speculative financial dynamic was introduced into the dollar-centred system (Fleckenstein 2008). From here on, it would be undergirded by the ‘Greenspan put’, a promise by the US Federal Reserve to rescue the US financial sector from the inevitable losses as bubbles burst chiefly by monetizing aid through lowered interest rates and, since 2008, Quantitative Easing. Essentially, it involved giving financial institutions good money for their bad ‘toxic’ assets so they could recover from their losses and re-build their balance sheets. This promise has been solemnly kept by all Greenspan’s successors to this day.
In the US economy as a whole, financial engineering replaced industrial engineering. Wealth was decreasingly made by building new means of production and hiring labour to produce new goods and services to sell at a profit. Instead, money was made purely by buying and selling financial securities and real estate. This is fundamentally contradictory because financial activities constrict and strangulate production even as they prey upon the very incomes it generates. This is the fundamental logic behind the regular financial and asset market bubbles and crashes and shrinking productive base of our time.
By the mid-1990s these bubbles and crashes, essentially dollar-denominated financializations necessary to sustain the dollar’s international acceptability, were getting ever larger, more volatile and dangerous. They were aided by the Clinton administration’s crusade, supported by the IMF and the World Bank, against capital controls worldwide, to bring even more countries into the dragnet of the dollar creditocracy.
Each bubble culminated in a resounding crash: financial crises became more frequent, touching first world (Sweden, Britain), transition (Russia) and developing countries (Mexico, India) alike. A culmination of sorts was reached in the 1997-8 East Asian Financial crisis. Thereafter, it was the turn of the already developing bubble in US stocks, particularly high technology stocks, which burst in 2000. That was followed by the housing and credit bubbles which burst in 2008.
World Money Beyond Creditocracy
The dollar-centred world financial and monetary system of recent decades relies on short-term speculation, with the Federal Reserve financially engineering asset market bubbles. The effect is to increase inequality among nations and classes and to undermine economies instead of building them up. The system is anti-labour, imposing austerity policies to squeeze rising debt service out of working populations. This ‘austerity’ and the adjustment imposed on debtor countries are designed to preserve the financial gains of creditors. Unlike productive activities, financial activities are a zero-sum game. Gains can only be made by some when corresponding losses and suffering are imposed on the indebted wage-earning population, small business and debtor countries.
The inherent contradictions of the system and the conflicts it generates have been maturing over the decades and they are now rapidly unravelling the dollar creditocracy.
Let us deal with the contradictions first. The sheer scale of money creation – already stratospherically high amid the Quantitative Easing after 2008, it reached astronomic proportions amid the pandemic – threatens the dollar’s value. Ever since easy monetary policy became necessary after the dot com bubble burst in 2000, the dollar’s value has fallen captive to two competing imperatives: the financial sector’s need for plentiful, cheap or outright free liquidity to finance leveraged speculation in asset markets with ever thinning margins, and the need to limit liquidity to boost the dollar’s value. Pandemic liquidity issuance is sending asset markets soaring past even the unprecedented heights reached in the past decades. Instead of halting or bursting them at a sufficiently safe early stage, the Federal Reserve has been encouraging their inflation through its low interest rate policies and by buying bonds of all kinds, including government, junk and corporate bonds. The question is how long it can continue inflating its balance sheet without the government doing something to expand the rapidly shrinking productive base of the US, from which alone these assets gain their value. One need only add that such an expansion of the productive base, in the difficult circumstances of the US economy, would require so radical an about-turn from neoliberalism that it is practically impossible, given the Federal Reserve’s and the incoming Biden administration’s commitment to the neoliberal policy paradigm.
Meanwhile Federal Reserve liquidity issuance has transformed ‘[t]he long, long bull market since 2009 … into a fully-fledged epic bubble … [f]eaturing extreme overvaluation, explosive price increases, frenzied issuance, and hysterically speculative investor behaviour’ and rivalling ‘the South Sea bubble, 1929, and 2000’ (Foroohar 2021, quoting investment strategist Jeremy Grantham). The crash is only a matter of time and circumstance. When it comes, the Federal Reserve will face two equally unpalatable choices. It may react by letting the financial system that is invested in stocks go down, which will bring down the dollar creditocracy with it, or it can prop up the financial system to the tune of more trillions of liquidity, making the system’s contradictions more acute. It is no wonder. The oceans of liquidity this creditor-oriented system has created have only burdened working people, small business and governments everywhere with debt that provides creditors with a means to control them and weakens productive systems, rather than setting them free and strengthening economies with productive investment.
To these contradictions, we add the conflicts the system generates, expanding the ranks of the system’s rivals and victims. Since 2008, major international financial institutions have become more national, reducing the foreign monies that flowed into the dollar system and helped counteract the Triffin Dilemma; this is part of the reason the Federal Reserve has had to support asset prices and expand its balance sheet so massively.
The system is, moreover, exposing weaker economies without adequate capital controls to politically unsustainable levels of currency volatility, as most acutely revealed by Turkey today. These countries are seeking alternative sources of finance and payments systems.
Further, the dollar system could function only so long as it maintained a semblance of neutrality. In recent decades, however, its legal regime and payments system have been weaponised by increasingly aggressive US diplomacy to favour its own corporations one-sidedly (Wolf 2014) and to further US foreign policy goals questioned even by allies, such as the sanctions against Iran. This is beginning to make wary not only rivals and victims such as Russia and Iran but even long-standing allies such as the Western European countries and substantial holders of US Treasuries, including China.
Finally, in the context of the present crisis, the Federal Reserve has clearly crossed another line. After 2008, it released torrents of liquidity to save the financial sector both within and beyond the US. In recent months, however, it has extended the same treatment to US non-financial corporations, including buying the junk bonds of debt-ridden ‘zombie’ companies, undermining any pretence of being an impartial central bank even for the US economy as a whole, let alone for the world (Bair and Goodman 2021; Brenner 2020; Foroohar 2020).
Emerging Alternatives
Other countries are seeking three kinds of way out. First, Russia, the EU and China are building alternative international payments systems, in the form of SPFS, INSTEX and CIPS respectively, as well as domestic ones such as China’s Union Pay, Russia’s Mir Pay, India’s RuPay and Brazil’s ELO. These are, further, being coordinated internationally (Losev 2019). Such rapidly expanding systems, based on other currencies, will increasingly remove the need for international transactions to be routed through the US-controlled dollar system.
Second, many countries, particularly those targeted by or rejecting US diktats, are actively pursuing the de-dollarisation of their payments, pricing and financial systems (Kuznetsov and Ivanova 2018) and choosing to trade with other countries in one another’s currencies in order to avoid the rigged dollar system, while Sino-Russian monetary and financial cooperation widens further still. These practices constitute a reversion to the means of settling balance-of-payments deficits used in the inter-war period, when sterling’s role had shrunk and the dollar was giving its first demonstration of its incapacity for a world role.
Third, the BRICS New Development Bank and Contingent Reserve Arrangement and, particularly, China’s international financial initiatives increasingly constitute an alternative source of finance with advantages the dollar system simply does not have. China’s Asian Infrastructure Investment Bank, Belt and Road Initiative and other financial initiatives rest on the principle of long-term, patient capital making productive investments in a cooperative spirit that preserves the policy autonomy of recipient countries. This contrasts sharply with the dollar creditocracy, which, over past decades, has provided only short-term, fickle capital for largely financial investment within an aggressive system that constrains policy, is loaded in favour of creditors and is willing to wage conventional and hybrid wars against countries seeking to exit it. With the expansion of the AIIB and the BRI, and with plans to extend membership of the NDB to regional partners of the BRICS countries (Lissovolik 2019), these initiatives are demonstrating their attractiveness to ever more countries.
Finally, though opinion is divided on whether the recent EU fiscal deal will resurrect the euro as a rival to the dollar, it continues to detach the Eurozone from the dollar payments system. Amid the pandemic, de-dollarisation can only accelerate further, making the dollar system an ever more exclusively US affair. Even the dollar’s traditional boosters have had to admit that its end is near (Cohen 2020, for example).
The urge to escape the predatory dollar creditocracy is strong, and the alternatives on offer today are mostly China-centred. This is because, as in the period before 1914, the breakaway or challenge has to be led by countries whose financial systems are public utilities focused on financing production. Today, China’s financial system is the most powerful such system, and China holds enough international currency reserves to withstand speculative attacks by raiders or hostile powers.
Only such an alternative is in a position to create credit that enables economies to grow, in contrast to credit that impoverishes them through debts which neither finance the production out of which they could be repaid nor can be serviced out of existing trade and investment trends without further impoverishing the debtor.
This basic difference in financial philosophy is generating powerful pressures toward the creation of a multilateral, multicurrency world. The contradictory dollar creditocracy survives only by lending debtor countries more money with which to pay and remain nominally solvent. That only increases their debt, prolonging the period during which debtors must acquiesce in the political and commercial ‘conditionalities’ laid down by the powers that maintain and protect the dollar creditocracy: the US, the IMF and associated institutions plying their financial diplomacy of privatisation sell-offs, anti-labour policies, pro-US trade favouritism and general impoverishment.
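The arithmetic of this treadmill is easily illustrated; the interest rate and figures that follow are ours, chosen purely for illustration. If a debtor’s entire debt service is met out of new lending rather than out of new production or exports, its outstanding debt D simply compounds at the rate of interest r:

D_{t+1} = (1 + r) D_t

At an assumed 5 per cent, a debt of 100 grows to roughly 163 after ten years and 265 after twenty, even though not a cent of it has financed the productive capacity out of which it might one day be repaid.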
How exactly the contradictions of the dollar creditocracy and the conflicts it has generated will play out, and with what results, remains something of an open question. Today’s fracturing world economy and New Cold War certainly make Keynes’s ideas of a bancor and the International Clearing Union impractical on a world or universal scale. However, in the hands of a bloc centred on China, Russia and Iran, perhaps expanding to include other European and Third World countries seeking to avoid debt deflation and austerity à la Greece or Argentina, their principles can be adapted to the needs of a less than universal, though still large, bloc.
Such a bloc is likely to use the most expedient measures first. Gold is the path of least resistance and the natural de facto expedient in such a transition. It has the virtue of being a widely demanded asset that does not take the form of debt to the reserve-currency government. At the same time, there is the long-standing difficulty of using a commodity whose price fluctuates with production costs and demand conditions. These difficulties, along with the limits of gold supply, mean that any transition away from the dollar creditocracy will need other expedients, chief among them government holdings of the securities of allied governments. These mutual balances, extended as necessary through swap agreements, could become the basis of the new international reserve system.
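A stylised example may make the mechanism concrete; the figures and instruments here are assumed for illustration and are not drawn from any existing agreement. Suppose the Russian central bank holds the equivalent of $10 billion in yuan-denominated Chinese government securities while the People’s Bank of China holds an equivalent amount in rouble-denominated Russian government securities. A bilateral payments deficit can then be settled by drawing down the deficit country’s holdings of the partner’s securities, and a pre-agreed currency swap line can cover any settlement needs beyond them, without either side touching dollars or routing payments through the US-controlled system.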
These expedients will still need to be supplemented by a solution to the most fundamental problem facing any stable and non-polarising international monetary reform. It is the problem that Keynes emphasised in his 1944 proposals for bancor: in a world of unevenly developed productive capacities, some countries may run prolonged payments surpluses and become large net creditors, while other countries accumulate payments deficits.
To prevent such imbalances, Keynes proposed a system that would put pressure on creditor nations to provide debtors with the means to pay, essentially by purchasing their goods and services. These pressures included not only the interest charged on positive balances but also the threat that, if disequilibrium persisted to a serious degree, the accumulated credits and debts would simply be wiped out. Either way, the one option excluded was imposing austerity on debtor countries and thereby undermining the world’s aggregate productive capacity.
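A worked illustration may help; the quota and charge figures below are assumed by us for the sake of arithmetic and follow the spirit, not the letter, of Keynes’s schedules. Suppose a member of such a clearing union is assigned a quota of 1,000 bancor and, by running persistent surpluses, accumulates a credit balance of 400 bancor. If a charge of 1 per cent per annum is levied on the portion of the balance above a quarter of the quota, the surplus country pays 1.5 bancor a year (1 per cent of 400 minus 250), and a debtor in the mirror-image position pays the same on its debit balance. The surplus country can escape the charge only by spending its balance on the debtors’ goods and services, revaluing its currency or extending long-term credits to them; if it does none of these and the imbalance persists, its accumulated credits are ultimately cancelled. The incentive at every step is to recycle surpluses into purchases and investment rather than to let them pile up as claims.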
These systemic incentives were directed at the long-term stabilisation of a system in which productively superior creditor nations, such as China today, help build up the economies of their debtors and customer countries so as to create a balanced circulation of goods and services. That possibility is contained, for instance, in the Belt and Road Initiative’s creation of ports and transport infrastructure, and in China’s other overseas investment and financing operations, which lay a foundation for mutual regional prosperity.
To be workable, such a system must go beyond bilateral balances to a truly region-wide bank, empowered to create its own money to finance this overall development. That would create long-term, patient and productive credit in a system of mutual gain. Such an expansionary international credit system, like the one Keynes sought to devise, is precisely what the Eurozone failed to create for its member-nation governments. The result has been to fracture the Eurozone between its northern creditor nations and its debt-strapped southern and western periphery, the so-called PIIGS (Portugal, Ireland, Italy, Greece and Spain).
Neither the United States nor its dollar-area satellites are likely to approve of such a region-wide financial entity. The US will not join any system that it cannot dominate and veto, and refuses to submit to decisions reached by what may be thought of as a democracy of nations. If it persists in this mode, it can only watch the demise of its contradictory dollar creditocracy and the rise of alternative systems fostering productive expansion elsewhere.