28 April 2008

Why the US Has Really Gone Broke


This is a difficult essay for an American of this generation to read, because we have grown up with the assumption that the security of the United States is intimately tied to massive amounts of spending for military preparedness. The first response to any essay such as this is often an emotional one: "What about the troops?"

It requires an effort to realize that the vast majority of this spending has absolutely nothing to do with what the troops want or need. Recent examples, from the lack of adequate armor on vehicles carrying troops to the abysmal conditions in the military hospital system, are more than just anomalies. The military-industrial complex, of which Dwight Eisenhower warned in his farewell address, does not place a high value on the troops, the US citizen army, in its equations.

The United States has reached its limit. It can no longer aspire to be 'the world's policeman.' We are not able to do this and maintain a viable and healthy democracy at home. We are not protecting ourselves and our liberties; we are promoting the interests of pseudo-American global corporations around the world. As Mussolini observed, corporatism is fascism.

The global corporate complex, though nominally based in part in the US, exists for its own purposes, serves its own purposes, and consumes everything which we the American people hold most valuable: our lives, our liberties, and our pursuit of peace and happiness with justice for all.

"Some of the damage can never be rectified. There are, however, some steps that the U.S. urgently needs to take. These include reversing Bush's 2001 and 2003 tax cuts for the wealthy, beginning to liquidate our global empire of over 800 military bases, cutting from the defense budget all projects that bear no relationship to national security and ceasing to use the defense budget as a Keynesian jobs program. If we do these things we have a chance of squeaking by. If we don't, we face probable national insolvency and a long depression."


Why the U.S. Has Really Gone Broke
Chalmers Johnson
Le Monde Diplomatique
February, 2008

Global confidence in the US economy has reached zero, as was proved by last month’s stock market meltdown. But there is an enormous anomaly in the US economy above and beyond the subprime mortgage crisis, the housing bubble and the prospect of recession: 60 years of misallocation of resources, and borrowings, to the establishment and maintenance of a military-industrial complex as the basis of the nation’s economic life.

The military adventurers in the Bush administration have much in common with the corporate leaders of the defunct energy company Enron. Both groups thought that they were the “smartest guys in the room” — the title of Alex Gibney’s prize-winning film on what went wrong at Enron. The neoconservatives in the White House and the Pentagon outsmarted themselves. They failed even to address the problem of how to finance their schemes of imperialist wars and global domination.

As a result, going into 2008, the United States finds itself in the anomalous position of being unable to pay for its own elevated living standards or its wasteful, overly large military establishment. Its government no longer even attempts to reduce the ruinous expenses of maintaining huge standing armies, replacing the equipment that seven years of wars have destroyed or worn out, or preparing for a war in outer space against unknown adversaries. Instead, the Bush administration puts off these costs for future generations to pay or repudiate. This fiscal irresponsibility has been disguised through many manipulative financial schemes (causing poorer countries to lend us unprecedented sums of money), but the time of reckoning is fast approaching.

There are three broad aspects to the US debt crisis.

First, in the current fiscal year (2008) we are spending insane amounts of money on “defence” projects that bear no relation to the national security of the US. We are also keeping the income tax burdens on the richest segment of the population at strikingly low levels.

Second, we continue to believe that we can compensate for the accelerating erosion of our manufacturing base and our loss of jobs to foreign countries through massive military expenditures — “military Keynesianism” (which I discuss in detail in my book Nemesis: The Last Days of the American Republic). By that, I mean the mistaken belief that public policies focused on frequent wars, huge expenditures on weapons and munitions, and large standing armies can indefinitely sustain a wealthy capitalist economy. The opposite is actually true.

Third, in our devotion to militarism (despite our limited resources), we are failing to invest in our social infrastructure and other requirements for the long-term health of the US. These are what economists call opportunity costs, things not done because we spent our money on something else. Our public education system has deteriorated alarmingly. We have failed to provide health care to all our citizens and neglected our responsibilities as the world’s number one polluter. Most important, we have lost our competitiveness as a manufacturer for civilian needs, an infinitely more efficient use of scarce resources than arms manufacturing.

Fiscal disaster

It is virtually impossible to overstate the profligacy of what our government spends on the military. The Department of Defense’s planned expenditures for the fiscal year 2008 are larger than all other nations’ military budgets combined. The supplementary budget to pay for the current wars in Iraq and Afghanistan, not part of the official defence budget, is itself larger than the combined military budgets of Russia and China. Defence-related spending for fiscal 2008 will exceed $1 trillion for the first time in history. The US has become the largest single seller of arms and munitions to other nations on Earth. Leaving out President Bush’s two on-going wars, defence spending has doubled since the mid-1990s. The defence budget for fiscal 2008 is the largest since the second world war.

Before we try to break down and analyse this gargantuan sum, there is one important caveat. Figures on defence spending are notoriously unreliable. The numbers released by the Congressional Research Service and the Congressional Budget Office do not agree with each other. Robert Higgs, senior fellow for political economy at the Independent Institute, says: “A well-founded rule of thumb is to take the Pentagon’s (always well publicised) basic budget total and double it” (1). Even a cursory reading of newspaper articles about the Department of Defense will turn up major differences in statistics about its expenses. Some 30-40% of the defence budget is “black”, meaning that these sections contain hidden expenditures for classified projects. There is no possible way to know what they include or whether their total amounts are accurate.

There are many reasons for this budgetary sleight-of-hand — including a desire for secrecy on the part of the president, the secretary of defence, and the military-industrial complex — but the chief one is that members of Congress, who profit enormously from defence jobs and pork-barrel projects in their districts, have a political interest in supporting the Department of Defense. In 1996, in an attempt to bring accounting standards within the executive branch closer to those of the civilian economy, Congress passed the Federal Financial Management Improvement Act. It required all federal agencies to hire outside auditors to review their books and release the results to the public. Neither the Department of Defense nor the Department of Homeland Security has ever complied. Congress has complained, but has not penalised either department for ignoring the law. All numbers released by the Pentagon should be regarded as suspect.

In discussing the fiscal 2008 defence budget, as released on 7 February 2007, I have been guided by two experienced and reliable analysts: William D Hartung of the New America Foundation’s Arms and Security Initiative (2) and Fred Kaplan, defence correspondent for Slate.com (3). They agree that the Department of Defense requested $481.4bn for salaries, operations (except in Iraq and Afghanistan), and equipment. They also agree on a figure of $141.7bn for the “supplemental” budget to fight the global war on terrorism — that is, the two on-going wars that the general public may think are actually covered by the basic Pentagon budget. The Department of Defense also asked for an extra $93.4bn to pay for hitherto unmentioned war costs in the remainder of 2007 and, most creatively, an additional “allowance” (a new term in defence budget documents) of $50bn to be charged to fiscal year 2009. This makes a total spending request by the Department of Defense of $766.5bn.

But there is much more. In an attempt to disguise the true size of the US military empire, the government has long hidden major military-related expenditures in departments other than Defense. For example, $23.4bn for the Department of Energy goes towards developing and maintaining nuclear warheads; and $25.3bn in the Department of State budget is spent on foreign military assistance (primarily for Israel, Saudi Arabia, Bahrain, Kuwait, Oman, Qatar, the United Arab Emirates, Egypt and Pakistan). Another $1.03bn outside the official Department of Defense budget is now needed for recruitment and re-enlistment incentives for the overstretched US military, up from a mere $174m in 2003, when the war in Iraq began. The Department of Veterans Affairs currently gets at least $75.7bn, 50% of it for the long-term care of the most seriously injured among the 28,870 soldiers so far wounded in Iraq and 1,708 in Afghanistan. The amount is universally derided as inadequate. Another $46.4bn goes to the Department of Homeland Security.

Missing from this compilation is $1.9bn to the Department of Justice for the paramilitary activities of the FBI; $38.5bn to the Department of the Treasury for the Military Retirement Fund; $7.6bn for the military-related activities of the National Aeronautics and Space Administration; and well over $200bn in interest for past debt-financed defence outlays. This brings US spending for its military establishment during the current fiscal year, conservatively calculated, to at least $1.1 trillion.
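For readers who want to check the arithmetic, here is a minimal tally of the figures cited above, with all amounts in billions of dollars taken straight from the article; the interest entry uses the "well over $200bn" floor as a conservative stand-in.

```python
# Rough tally of the FY2008 military-related spending figures cited in the
# article. All amounts are in billions of dollars; the interest entry uses
# the "well over $200bn" floor as a conservative stand-in.
pentagon_request = {
    "base budget (salaries, operations, equipment)": 481.4,
    "supplemental for Iraq and Afghanistan": 141.7,
    "additional 2007 war costs": 93.4,
    "'allowance' charged to fiscal 2009": 50.0,
}

hidden_elsewhere = {
    "Dept of Energy (nuclear warheads)": 23.4,
    "Dept of State (foreign military assistance)": 25.3,
    "recruitment and re-enlistment incentives": 1.03,
    "Dept of Veterans Affairs": 75.7,
    "Dept of Homeland Security": 46.4,
    "Dept of Justice (FBI paramilitary activities)": 1.9,
    "Treasury (Military Retirement Fund)": 38.5,
    "NASA (military-related activities)": 7.6,
    "interest on past debt-financed defence outlays": 200.0,
}

dod_total = sum(pentagon_request.values())
grand_total = dod_total + sum(hidden_elsewhere.values())

print(f"Department of Defense request: ${dod_total:,.1f}bn")   # 766.5
print(f"All-in conservative total:     ${grand_total:,.1f}bn")  # roughly 1,186
```

The components sum to roughly $1.19 trillion, which is why the "at least $1.1 trillion" above is fairly described as conservative.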

Military Keynesianism

Such expenditures are not only morally obscene, they are fiscally unsustainable. Many neo-conservatives and poorly informed patriotic Americans believe that, even though our defence budget is huge, we can afford it because we are the richest country on Earth. That statement is no longer true. The world’s richest political entity, according to the CIA’s World Factbook, is the European Union. The EU’s 2006 GDP was estimated to be slightly larger than that of the US. Moreover, China’s 2006 GDP was only slightly smaller than that of the US, and Japan was the world’s fourth richest nation.

A more telling comparison that reveals just how much worse we’re doing can be found among the current accounts of various nations. The current account measures the net trade surplus or deficit of a country plus cross-border payments of interest, royalties, dividends, capital gains, foreign aid, and other income. In order for Japan to manufacture anything, it must import all required raw materials. Even after this incredible expense is met, it still has an $88bn per year trade surplus with the US and enjoys the world’s second highest current account balance (China is number one). The US is number 163 — last on the list, worse than countries such as Australia and the UK that also have large trade deficits. Its 2006 current account deficit was $811.5bn; second worst was Spain at $106.4bn. This is unsustainable.

It’s not just that our tastes for foreign goods, including imported oil, vastly exceed our ability to pay for them. We are financing them through massive borrowing. On 7 November 2007, the US Treasury announced that the national debt had breached $9 trillion for the first time. This was just five weeks after Congress raised the “debt ceiling” to $9.815 trillion. If you begin in 1789, at the moment the constitution became the supreme law of the land, the debt accumulated by the federal government did not top $1 trillion until 1981. When George Bush became president in January 2001, it stood at approximately $5.7 trillion. Since then, it has increased by 45%. This huge debt can be largely explained by our defence expenditures.

Our excessive military expenditures did not occur over just a few short years or simply because of the Bush administration’s policies. They have been going on for a very long time in accordance with a superficially plausible ideology, and have now become so entrenched in our democratic political system that they are starting to wreak havoc. This is military Keynesianism — the determination to maintain a permanent war economy and to treat military output as an ordinary economic product, even though it makes no contribution to either production or consumption.

This ideology goes back to the first years of the cold war. During the late 1940s, the US was haunted by economic anxieties. The great depression of the 1930s had been overcome only by the war production boom of the second world war. With peace and demobilisation, there was a pervasive fear that the depression would return. During 1949, alarmed by the Soviet Union’s detonation of an atomic bomb, the looming Communist victory in the Chinese civil war, a domestic recession, and the lowering of the Iron Curtain around the USSR’s European satellites, the US sought to draft basic strategy for the emerging cold war. The result was the militaristic National Security Council Report 68 (NSC-68) drafted under the supervision of Paul Nitze, then head of the Policy Planning Staff in the State Department. Dated 14 April 1950 and signed by President Harry S Truman on 30 September 1950, it laid out the basic public economic policies that the US pursues to the present day.

In its conclusions, NSC-68 asserted: “One of the most significant lessons of our World War II experience was that the American economy, when it operates at a level approaching full efficiency, can provide enormous resources for purposes other than civilian consumption while simultaneously providing a high standard of living” (4).

With this understanding, US strategists began to build up a massive munitions industry, both to counter the military might of the Soviet Union (which they consistently overstated) and also to maintain full employment, as well as ward off a possible return of the depression. The result was that, under Pentagon leadership, entire new industries were created to manufacture large aircraft, nuclear-powered submarines, nuclear warheads, intercontinental ballistic missiles, and surveillance and communications satellites. This led to what President Eisenhower warned against in his farewell address of 17 January 1961: “The conjunction of an immense military establishment and a large arms industry is new in the American experience” — the military-industrial complex.

By 1990 the value of the weapons, equipment and factories devoted to the Department of Defense was 83% of the value of all plants and equipment in US manufacturing. From 1947 to 1990, the combined US military budgets amounted to $8.7 trillion. Even though the Soviet Union no longer exists, US reliance on military Keynesianism has, if anything, ratcheted up, thanks to the massive vested interests that have become entrenched around the military establishment. Over time, a commitment to both guns and butter has proven an unstable configuration. Military industries crowd out the civilian economy and lead to severe economic weaknesses. Devotion to military Keynesianism is a form of slow economic suicide.

Higher spending, fewer jobs

On 1 May 2007, the Center for Economic and Policy Research of Washington, DC, released a study prepared by the economic and political forecasting company Global Insight on the long-term economic impact of increased military spending. Guided by economist Dean Baker, this research showed that, after an initial demand stimulus, by about the sixth year the effect of increased military spending turns negative. The US economy has had to cope with growing defence spending for more than 60 years. Baker found that, after 10 years of higher defence spending, there would be 464,000 fewer jobs than in a scenario that involved lower defence spending.

Baker concluded: “It is often believed that wars and military spending increases are good for the economy. In fact, most economic models show that military spending diverts resources from productive uses, such as consumption and investment, and ultimately slows economic growth and reduces employment” (5).

These are only some of the many deleterious effects of military Keynesianism.

It was believed that the US could afford both a massive military establishment and a high standard of living, and that it needed both to maintain full employment. But it did not work out that way. By the 1960s it was becoming apparent that turning over the nation’s largest manufacturing enterprises to the Department of Defense and producing goods without any investment or consumption value was starting to crowd out civilian economic activities. The historian Thomas E Woods Jr observes that, during the 1950s and 1960s, between one-third and two-thirds of all US research talent was siphoned off into the military sector (6). It is, of course, impossible to know what innovations never appeared as a result of this diversion of resources and brainpower into the service of the military, but it was during the 1960s that we first began to notice Japan was outpacing us in the design and quality of a range of consumer goods, including household electronics and automobiles.

Can we reverse the trend?

Nuclear weapons furnish a striking illustration of these anomalies. Between the 1940s and 1996, the US spent at least $5.8 trillion on the development, testing and construction of nuclear bombs. By 1967, the peak year of its nuclear stockpile, the US possessed some 32,500 deliverable atomic and hydrogen bombs, none of which, thankfully, was ever used. They perfectly illustrate the Keynesian principle that the government can provide make-work jobs to keep people employed. Nuclear weapons were not just America’s secret weapon, but also its secret economic weapon. As of 2006, we still had 9,960 of them. There is today no sane use for them, while the trillions spent on them could have been used to solve the problems of social security and health care, quality education and access to higher education for all, not to speak of the retention of highly-skilled jobs within the economy.

The pioneer in analysing what has been lost as a result of military Keynesianism was the late Seymour Melman (1917-2004), a professor of industrial engineering and operations research at Columbia University. His 1970 book, Pentagon Capitalism: The Political Economy of War, was a prescient analysis of the unintended consequences of the US preoccupation with its armed forces and their weaponry since the onset of the cold war. Melman wrote: “From 1946 to 1969, the United States government spent over $1,000bn on the military, more than half of this under the Kennedy and Johnson administrations — the period during which the [Pentagon-dominated] state management was established as a formal institution. This sum of staggering size (try to visualize a billion of something) does not express the cost of the military establishment to the nation as a whole. The true cost is measured by what has been foregone, by the accumulated deterioration in many facets of life, by the inability to alleviate human wretchedness of long duration.”

In an important exegesis on Melman’s relevance to the current American economic situation, Thomas Woods writes: “According to the US Department of Defense, during the four decades from 1947 through 1987 it used (in 1982 dollars) $7.62 trillion in capital resources. In 1985, the Department of Commerce estimated the value of the nation’s plant and equipment, and infrastructure, at just over $7.29 trillion… The amount spent over that period could have doubled the American capital stock or modernized and replaced its existing stock” (7).

The fact that we did not modernise or replace our capital assets is one of the main reasons why, by the turn of the 21st century, our manufacturing base had all but evaporated. Machine tools, an industry on which Melman was an authority, are a particularly important symptom. In November 1968, a five-year inventory disclosed “that 64% of the metalworking machine tools used in US industry were 10 years old or older. The age of this industrial equipment (drills, lathes, etc.) marks the United States’ machine tool stock as the oldest among all major industrial nations, and it marks the continuation of a deterioration process that began with the end of the second world war. This deterioration at the base of the industrial system certifies to the continuous debilitating and depleting effect that the military use of capital and research and development talent has had on American industry.”

Nothing has been done since 1968 to reverse these trends and it shows today in our massive imports of equipment — from medical machines like proton accelerators for radiological therapy (made primarily in Belgium, Germany, and Japan) to cars and trucks.

Our short tenure as the world’s lone superpower has come to an end. As Harvard economics professor Benjamin Friedman has written: “Again and again it has always been the world’s leading lending country that has been the premier country in terms of political influence, diplomatic influence and cultural influence. It’s no accident that we took over the role from the British at the same time that we took over the job of being the world’s leading lending country. Today we are no longer the world’s leading lending country. In fact we are now the world’s biggest debtor country, and we are continuing to wield influence on the basis of military prowess alone” (8).

Some of the damage can never be rectified. There are, however, some steps that the US urgently needs to take. These include reversing Bush’s 2001 and 2003 tax cuts for the wealthy, beginning to liquidate our global empire of over 800 military bases, cutting from the defence budget all projects that bear no relationship to national security and ceasing to use the defence budget as a Keynesian jobs programme.

If we do these things we have a chance of squeaking by. If we don’t, we face probable national insolvency and a long depression.


(1) Robert Higgs, “The Trillion-Dollar Defense Budget Is Already Here” , The Independent Institute, 15 March 2007, http://www.independent.org/newsroom ...
(2) William D Hartung, “Bush Military Budget Highest Since WWII”, 10 February 2007, http://www.commondreams.org/views07 ...
(3) Fred Kaplan, “It’s Time to Sharpen the Scissors”, 5 February 2007, http://www.slate.com/id/2159102/pag ...
(4) See http://www.encyclopedia.com/doc/1G1 ...
(5) Center for Economic and Policy Research, 1 May 2007, http://www.cepr.net/content/view/11 ...
(6) Thomas E Woods, “What the Warfare State Really Costs”, http://www.lewrockwell.com/woods/wo ...
(7) Thomas E Woods, Ibid.
(8) John F Ince, “Think the Nation’s Debt Doesn’t Affect You? Think Again”, 20 March 2007, http://www.alternet.org/story/49418/



Bear Stearns Bailout 'Worst Policy Mistake in a Generation'


Here's one for the leaders of the cabal which argued that anyone who was not unreservedly in favor of the Bear Stearns (and investment banks) bailout was a Moral Hazard fundamentalist.

Apparently it's not such a no-brainer, but then again we always knew that. When an economist has a weak case to make, it's the name-calling that becomes the weapon of first resort, especially in the rarefied atmosphere far from the trading pits where the unintended consequences can be most easily seen.


April 28, 2008, 3:55 pm
Wall Street Journal
Ex-Fed Official: Bear Deal ‘Worst Policy Mistake in a Generation’
By Greg Ip

The Federal Reserve’s moves to prop up Bear Stearns Cos. will come to be seen as “the worst policy mistake in a generation,” the Fed’s former head of monetary affairs said.

The action is comparable to “the great contraction” of the 1930s and “the great inflation” of the 1970s, said Vincent Reinhart, a scholar at the American Enterprise Institute, who retired from the Fed last fall. (That sounds like some serious stagflation - Jesse)

Mr. Reinhart’s assessment, delivered at a panel discussion at the institute Monday, is one of the harshest appraisals yet by a high-profile observer of the Fed’s decision in mid-March to lend money to Bear both as temporary funding to make a merger possible and then to finance $29 billion of Bear’s assets to make its takeover by J.P. Morgan Chase & Co. possible.

How the Ratings Agencies Enabled the Credit Crisis


April 27, 2008
The NY Times
Triple-A Failure
By ROGER LOWENSTEIN
The Ratings Game

In 1996, Thomas Friedman, the New York Times columnist, remarked on “The NewsHour With Jim Lehrer” that there were two superpowers in the world — the United States and Moody’s bond-rating service — and it was sometimes unclear which was more powerful. Moody’s was then a private company that rated corporate bonds, but it was, already, spreading its wings into the exotic business of rating securities backed by pools of residential mortgages.


Obscure and dry-seeming as it was, this business offered a certain magic. The magic consisted of turning risky mortgages into investments that would be suitable for investors who would know nothing about the underlying loans.

To get why this is impressive, you have to think about all that determines whether a mortgage is safe. Who owns the property? What is his or her income? Bundle hundreds of mortgages into a single security and the questions multiply; no investor could begin to answer them. But suppose the security had a rating. If it were rated triple-A by a firm like Moody’s, then the investor could forget about the underlying mortgages. He wouldn’t need to know what properties were in the pool, only that the pool was triple-A — it was just as safe, in theory, as other triple-A securities.

Over the last decade, Moody’s and its two principal competitors, Standard & Poor’s and Fitch, played this game to perfection — putting what amounted to gold seals on mortgage securities that investors swept up with increasing élan. For the rating agencies, this business was extremely lucrative. Their profits surged, Moody’s in particular: it went public, saw its stock increase sixfold and its earnings grow by 900 percent.

By providing the mortgage industry with an entree to Wall Street, the agencies also transformed what had been among the sleepiest corners of finance. No longer did mortgage banks have to wait 10 or 20 or 30 years to get their money back from homeowners. Now they sold their loans into securitized pools and — their capital thus replenished — wrote new loans at a much quicker pace.

Mortgage volume surged; in 2006, it topped $2.5 trillion. Also, many more mortgages were issued to risky subprime borrowers. Almost all of those subprime loans ended up in securitized pools; indeed, the reason banks were willing to issue so many risky loans is that they could fob them off on Wall Street.

But who was evaluating these securities? Who was passing judgment on the quality of the mortgages, on the equity behind them and on myriad other investment considerations? Certainly not the investors. They relied on a credit rating.

Thus the agencies became the de facto watchdog over the mortgage industry. In a practical sense, it was Moody’s and Standard & Poor’s that set the credit standards that determined which loans Wall Street could repackage and, ultimately, which borrowers would qualify. Effectively, they did the job that was expected of banks and government regulators. And today, they are a central culprit in the mortgage bust, in which the total loss has been projected at $250 billion and possibly much more.

In the wake of the housing collapse, Congress is exploring why the industry failed and whether it should be revamped (hearings in the Senate Banking Committee were expected to begin April 22). Two key questions are whether the credit agencies — which benefit from a unique series of government charters — enjoy too much official protection and whether their judgment was tainted. Presumably to forestall criticism and possible legislation, Moody’s and S.&P. have announced reforms. But they reject the notion that they should have been more vigilant. Instead, they lay the blame on the mortgage holders who turned out to be deadbeats, many of whom lied to obtain their loans.

Arthur Levitt, the former chairman of the Securities and Exchange Commission, charges that “the credit-rating agencies suffer from a conflict of interest — perceived and apparent — that may have distorted their judgment, especially when it came to complex structured financial products.” Frank Partnoy, a professor at the University of San Diego School of Law who has written extensively about the credit-rating industry, says that the conflict is a serious problem. Thanks to the industry’s close relationship with the banks whose securities it rates, Partnoy says, the agencies have behaved less like gatekeepers than gate openers. Last year, Moody’s had to downgrade more than 5,000 mortgage securities — a tacit acknowledgment that the mortgage bubble was abetted by its overly generous ratings. Mortgage securities rated by Standard & Poor’s and Fitch have suffered a similar wave of downgrades.

Presto! How 2,393 Subprime Loans Become a High-Grade Investment

The business of assigning a rating to a mortgage security is a complicated affair, and Moody’s recently was willing to walk me through an actual mortgage-backed security step by step. I was led down a carpeted hallway to a well-appointed conference room to meet with three specialists in mortgage-backed paper. Moody’s was fair-minded in choosing an example; the case they showed me, which they masked with the name “Subprime XYZ,” was a pool of 2,393 mortgages with a total face value of $430 million.

Subprime XYZ typified the exuberance of the age. All the mortgages in the pool were subprime — that is, they had been extended to borrowers with checkered credit histories. In an earlier era, such people would have been restricted from borrowing more than 75 percent or so of the value of their homes, but during the great bubble, no such limits applied.

Moody’s did not have access to the individual loan files, much less did it communicate with the borrowers or try to verify the information they provided in their loan applications. “We aren’t loan officers,” Claire Robinson, a 20-year veteran who is in charge of asset-backed finance for Moody’s, told me. “Our expertise is as statisticians on an aggregate basis. We want to know, of 1,000 individuals, based on historical performance, what percent will pay their loans?”

The loans in Subprime XYZ were issued in early spring 2006 — what would turn out to be the peak of the boom. They were originated by a West Coast company that Moody’s identified as a “nonbank lender.” Traditionally, people have gotten their mortgages from banks, but in recent years, new types of lenders peddling sexier products grabbed an increasing share of the market. This particular lender took the loans it made to a New York investment bank; the bank designed an investment vehicle and brought the package to Moody’s.

Moody’s assigned an analyst to evaluate the package, subject to review by a committee. The investment bank provided an enormous spreadsheet chock-full of data on the borrowers’ credit histories and much else that might, at the very least, have given Moody’s pause. Three-quarters of the borrowers had adjustable-rate mortgages, or ARMs — “teaser” loans on which the interest rate could be raised in short order. Since subprime borrowers cannot afford higher rates, they would need to refinance soon. This is a classic sign of a bubble — lending on the belief, or the hope, that new money will bail out the old.

Moody’s learned that almost half of these borrowers — 43 percent — did not provide written verification of their incomes. The data also showed that 12 percent of the mortgages were for properties in Southern California, including a half-percent in a single ZIP code, in Riverside. That suggested a risky degree of concentration.

On the plus side, Moody’s noted, 94 percent of those borrowers with adjustable-rate loans said their mortgages were for primary residences. “That was a comfort feeling,” Robinson said. Historically, people have been slow to abandon their primary homes. When you get into a crunch, she added, “You’ll give up your ski chalet first.”

Another factor giving Moody’s comfort was that all of the ARM loans in the pool were first mortgages (as distinct from, say, home-equity loans). Nearly half of the borrowers, however, took out a simultaneous second loan. Most often, their two loans added up to all of their property’s presumed resale value, which meant the borrowers had not a cent of equity.

In the frenetic, deal-happy climate of 2006, the Moody’s analyst had only a single day to process the credit data from the bank. The analyst wasn’t evaluating the mortgages but, rather, the bonds issued by the investment vehicle created to house them. A so-called special-purpose vehicle — a ghost corporation with no people or furniture and no assets either until the deal was struck — would purchase the mortgages. Thereafter, monthly payments from the homeowners would go to the S.P.V. The S.P.V. would finance itself by selling bonds. The question for Moody’s was whether the inflow of mortgage checks would cover the outgoing payments to bondholders. From the investment bank’s point of view, the key to the deal was obtaining a triple-A rating — without which the deal wouldn’t be profitable. That a vehicle backed by subprime mortgages could borrow at triple-A rates seems like a trick of finance. “People say, ‘How can you create triple-A out of B-rated paper?’ ” notes Arturo Cifuentes, a former Moody’s credit analyst who now designs credit instruments. It may seem like a scam, but it’s not.

The secret sauce is that the S.P.V. would float 12 classes of bonds, from triple-A to a lowly Ba1. The highest-rated bonds would have first priority on the cash received from mortgage holders until they were fully paid, then the next tier of bonds, then the next and so on. The bonds at the bottom of the pile got the highest interest rate, but if homeowners defaulted, they would absorb the first losses.

It was this segregation of payments that protected the bonds at the top of the structure and enabled Moody’s to classify them as triple-A. Imagine a seaside condo beset by flooding: just as the penthouse will not get wet until the lower floors are thoroughly soaked, so the triple-A bonds would not lose a dime unless the lower credits were wiped out.
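For readers who want the mechanics rather than the metaphor, here is a minimal sketch of a sequential-pay waterfall in Python. It is a toy, not Moody's methodology or the actual Subprime XYZ structure (which had 12 classes): collections from the mortgage pool are handed to the tranches in order of seniority, so shortfalls are absorbed from the bottom up. The tranche names and sizes are invented for illustration.

```python
# Toy sequential-pay waterfall: senior tranches are paid first out of the
# cash collected from the mortgage pool; shortfalls hit the junior classes
# first. Tranche names and sizes are purely illustrative.
def run_waterfall(collections, tranches):
    """tranches: list of (name, amount_owed), ordered senior to junior."""
    payments = {}
    remaining = collections
    for name, owed in tranches:
        paid = min(owed, remaining)
        payments[name] = paid
        remaining -= paid
    return payments

# Hypothetical capital structure for a $430m pool.
tranches = [
    ("triple-A senior", 330.0),
    ("AA mezzanine", 40.0),
    ("BBB mezzanine", 40.0),
    ("Ba1 junior", 20.0),
]

# If every homeowner pays, every class is made whole.
print(run_waterfall(430.0, tranches))

# If 15% of the pool is lost, only $365.5m comes in: the junior classes are
# wiped out and the AA layer is dented, but the triple-A layer is untouched.
print(run_waterfall(430.0 * 0.85, tranches))
```

That is the condo-flood logic in miniature: the penthouse stays dry until the floors beneath it are under water.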

Structured finance, of which this deal is typical, is both clever and useful; in the housing industry it has greatly expanded the pool of credit. But in extreme conditions, it can fail. The old-fashioned corner banker used his instincts, as well as his pencil, to apportion credit; modern finance is formulaic. However elegant its models, forecasting the behavior of 2,393 mortgage holders is an uncertain business. “Everyone assumed the credit agencies knew what they were doing,” says Joseph Mason, a credit expert at Drexel University. “A structural engineer can predict what load a steel support will bear; in financial engineering we can’t predict as well.” (Extremistan versus Mediocristan and a nod to Taleb - Jesse)

Mortgage-backed securities like those in Subprime XYZ were not the terminus of the great mortgage machine. They were, in fact, building blocks for even more esoteric vehicles known as collateralized debt obligations, or C.D.O.’s. C.D.O.’s were financed with similar ladders of bonds, from triple-A on down, and the credit-rating agencies’ role was just as central. The difference is that XYZ was a first-order derivative — its assets included real mortgages owned by actual homeowners. C.D.O.’s were a step removed — instead of buying mortgages, they bought bonds that were backed by mortgages, like the bonds issued by Subprime XYZ. (It is painful to consider, but there were also third-order instruments, known as C.D.O.’s squared, which bought bonds issued by other C.D.O.’s.)

Miscalculations that were damaging at the level of Subprime XYZ were devastating at the C.D.O. level. Just as bad weather will cause more serious delays to travelers with multiple flights, so, if the underlying mortgage bonds were misrated, the trouble was compounded in the case of the C.D.O.’s that purchased them.

Moody’s used statistical models to assess C.D.O.’s; it relied on historical patterns of default. This assumed that the past would remain relevant in an era in which the mortgage industry was morphing into a wildly speculative business. The complexity of C.D.O.’s undermined the process as well. Jamie Dimon, the chief executive of JPMorgan Chase, which recently scooped up the mortally wounded Bear Stearns, says, “There was a large failure of common sense” by rating agencies and also by banks like his. “Very complex securities shouldn’t have been rated as if they were easy-to-value bonds.”

The Accidental Watchdog

John Moody, a Wall Street analyst and former errand runner, hit on the idea of synthesizing all kinds of credit information into a single rating in 1909, when he published the manual “Moody’s Analyses of Railroad Investments.” The idea caught on with investors, who subscribed to his service, and by the mid-’20s, Moody’s faced three competitors: Standard Statistics and Poor’s Publishing (which later merged) and Fitch.

Then as now, Moody’s graded bonds on a scale with 21 steps, from Aaa to C. (There are small differences in the agencies’ nomenclatures, just as a grande latte at Starbucks becomes a “medium” at Peet’s. At Moody’s, ratings that start with the letter “A” carry minimal to low credit risk; those starting with “B” carry moderate to high risk; and “C” ratings denote bonds in poor standing or actual default.) The ratings are meant to be an estimate of probabilities, not a buy or sell recommendation. For instance, Ba bonds default far more often than triple-As. But Moody’s, as it is wont to remind people, is not in the business of advising investors whether to buy Ba’s; it merely publishes a rating.

Until the 1970s, its business grew slowly. But several trends coalesced to speed it up. The first was the collapse of Penn Central in 1970 — a shattering event that the credit agencies failed to foresee. It so unnerved investors that they began to pay more attention to credit risk.

Government responded. The Securities and Exchange Commission, faced with the question of how to measure the capital of broker-dealers, decided to penalize brokers for holding bonds that were less than investment-grade (the term applies to Moody’s 10 top grades). This prompted a question: investment grade according to whom? The S.E.C. opted to create a new category of officially designated rating agencies, and grandfathered the big three — S.&P., Moody’s and Fitch. In effect, the government outsourced its regulatory function to three for-profit companies.

Bank regulators issued similar rules for banks. Pension funds, mutual funds, insurance regulators followed. Over the ’80s and ’90s, a latticework of such rules redefined credit markets. Many classes of investors were now forbidden to buy noninvestment-grade bonds at all.

Issuers thus were forced to seek credit ratings (or else their bonds would not be marketable). The agencies — realizing they had a hot product and, what’s more, a captive market — started charging the very organizations whose bonds they were rating. This was an efficient way to do business, but it put the agencies in a conflicted position. As Partnoy says, rather than selling opinions to investors, the rating agencies were now selling “licenses” to borrowers. Indeed, whether their opinions were accurate no longer mattered so much. Just as a police officer stopping a motorist will want to see his license but not inquire how well he did on his road test, it was the rating — not its accuracy — that mattered to Wall Street.

The case of Enron is illustrative. Throughout the summer and fall of 2001, even though its credit was rapidly deteriorating, the rating agencies kept it at investment grade. This was not unusual; the agencies typically lag behind the news. On Nov. 28, 2001, S.&P. finally dropped Enron’s bonds to subinvestment grade. Although its action merely validated the market consensus, it caused the stock to collapse. To investors, S.&P.’s action was a signal that Enron was locked out of credit markets; it had lost its “license” to borrow. Four days later it filed for bankruptcy.

Another trend that spurred the agencies’ growth was that more companies began borrowing in bond markets instead of from banks. According to Chris Mahoney, a just-retired Moody’s veteran of 22 years, “The agencies went from being obscure and unimportant players to central ones.”

A Conflict of Interest?

Nothing sent the agencies into high gear as much as the development of structured finance. As Wall Street bankers designed ever more securitized products — using mortgages, credit-card debt, car loans, corporate debt, every type of paper imaginable — the agencies became truly powerful.

In structured-credit vehicles like Subprime XYZ, the agencies played a much more pivotal role than they had with (conventional) bonds. According to Lewis Ranieri, the Salomon Brothers banker who was a pioneer in mortgage bonds, “The whole creation of mortgage securities was involved with a rating.”

What the bankers in these deals are really doing is buying a bunch of I.O.U.’s and repackaging them in a different form. Something has to make the package worth — or seem to be worth — more than the sum of its parts, otherwise there would be no point in packaging such securities, nor would there be any profits from which to pay the bankers’ fees.

That something is the rating. Credit markets are not continuous; a bond that qualifies, though only by a hair, as investment grade is worth a lot more than one that just fails. As with a would-be immigrant traveling from Mexico, there is a huge incentive to get over the line.

The challenge to investment banks is to design securities that just meet the rating agencies’ tests. Risky mortgages serve their purpose; since the interest rate on them is higher, more money comes into the pool and is available for paying bond interest. But if the mortgages are too risky, Moody’s will object. Banks are adroit at working the system, and pools like Subprime XYZ are intentionally designed to include a layer of Baa bonds, or those just over the border. “Every agency has a model available to bankers that allows them to run the numbers until they get something they like and send it in for a rating,” a former Moody’s expert in securitization says. In other words, banks were gaming the system; according to Chris Flanagan, the subprime analyst at JPMorgan, “Gaming is the whole thing.”

When a bank proposes a rating structure on a pool of debt, the rating agency will insist on a cushion of extra capital, known as an “enhancement.” The bank inevitably lobbies for a thin cushion (the thinner the capitalization, the fatter the bank’s profits). It’s up to the agency to make sure that the cushion is big enough to safeguard the bonds. The process involves extended consultations between the agency and its client. In short, obtaining a rating is a collaborative process.
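A rough sense of what that haggling is about can be had from the sketch below, which sizes the subordination cushion beneath a senior layer as a multiple of the pool's expected loss. The 4.9 percent loss forecast and the 7.25 percent cushion for Subprime XYZ appear later in this article; the stress multiple here is an invented parameter for illustration, not any agency's actual criterion.

```python
# Toy illustration of a credit "enhancement": a senior class is protected
# as long as pool losses stay below the cushion beneath it. The 1.5x stress
# multiple is an invented parameter, not an agency criterion.
def required_cushion(expected_loss, stress_multiple=1.5):
    """Size the cushion as a multiple of the expected pool loss."""
    return expected_loss * stress_multiple

def senior_loss(pool_loss, cushion):
    """Senior holders lose only the portion of the pool loss above the cushion."""
    return max(0.0, pool_loss - cushion)

expected_loss = 0.049                      # the 4.9% forecast cited later for Subprime XYZ
cushion = required_cushion(expected_loss)  # about 7.4% with a 1.5x multiple

for actual in (0.049, 0.07, 0.15):
    print(f"pool loss {actual:.1%} -> loss reaching the senior bonds: {senior_loss(actual, cushion):.2%}")
```

The thinner the cushion, the fatter the bank's profit on the deal, and the smaller the gap between the losses the agency expects and the losses that start reaching the supposedly safe bonds.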

The evidence on whether rating agencies bend to the bankers’ will is mixed. The agencies do not deny that a conflict exists, but they assert that they are alert to the dangers and minimize them. For instance, they do not reward analysts on the basis of whether they approve deals. No smoking gun, no conspiratorial e-mail message, has surfaced to suggest that they are lying. But in structured finance, the agencies face pressures that did not exist when John Moody was rating railroads. On the traditional side of the business, Moody’s has thousands of clients (virtually every corporation and municipality that sells bonds). No one of them has much clout. But in structured finance, a handful of banks return again and again, paying much bigger fees. A deal the size of XYZ can bring Moody’s $200,000, and more for complicated deals. And the banks pay only if Moody’s delivers the desired rating. Tom McGuire, the Jesuit theologian who ran Moody’s through the mid-’90s, says this arrangement is unhealthy. If Moody’s and a client bank don’t see eye to eye, the bank can either tweak the numbers or try its luck with a competitor like S.&P., a process known as “ratings shopping.”

And it seems to have helped the banks get better ratings. Mason, of Drexel University, compared default rates for corporate bonds rated Baa with those of similarly rated collateralized debt obligations until 2005 (before the bubble burst). Mason found that the C.D.O.’s defaulted eight times as often. One interpretation of the data is that Moody’s was far less discerning when the client was a Wall Street securitizer.

After Enron blew up, Congress ordered the S.E.C. to look at the rating industry and possibly reform it. The S.E.C. ducked. Congress looked again in 2006 and enacted a law making it easier for competing agencies to gain official recognition, but didn’t change the industry’s business model. By then, the mortgage boom was in high gear. From 2002 to 2006, Moody’s profits nearly tripled, mostly thanks to the high margins the agencies charged in structured finance. In 2006, Moody’s reported net income of $750 million. Raymond W. McDaniel Jr., its chief executive, gloated in the annual report for that year, “I firmly believe that Moody’s business stands on the ‘right side of history’ in terms of the alignment of our role and function with advancements in global capital markets.”

Using Weather in Antarctica To Forecast Conditions in Hawaii

Even as McDaniel was crowing, it was clear in some corners of Wall Street that the mortgage market was headed for trouble. The housing industry was cooling off fast. James Kragenbring, a money manager with Advantus Capital Management, complained to the agencies as early as 2005 that their ratings were too generous. A report from the hedge fund of John Paulson proclaimed astonishment at “the mispricing of these securities.” He started betting that mortgage debt would crash.

Even Mark Zandi, the very visible economist at Moody’s forecasting division (which is separate from the ratings side), was worried about the chilling crosswinds blowing in credit markets. In a report published in May 2006, he noted that consumer borrowing had soared, household debt was at a record and a fifth of such debt was classified as subprime. At the same time, loan officers were loosening underwriting standards and easing rates to offer still more loans. Zandi fretted about the “razor-thin” level of homeowners’ equity, the avalanche of teaser mortgages and the $750 billion of mortgages he judged to be at risk. Zandi concluded, “The environment feels increasingly ripe for some type of financial event.”

A month after Zandi’s report, Moody’s rated Subprime XYZ. The analyst on the deal also had concerns. Moody’s was aware that mortgage standards had been deteriorating, and it had been demanding more of a cushion in such pools. Nonetheless, its credit-rating model continued to envision rising home values. Largely for that reason, the analyst forecast losses for XYZ at only 4.9 percent of the underlying mortgage pool. Since even the lowest-rated bonds in XYZ would be covered up to a loss level of 7.25 percent, the bonds seemed safe.

XYZ now became the responsibility of a Moody’s team that monitors securities and changes the ratings if need be (the analyst moved on to rate a new deal). Almost immediately, the team noticed a problem. Usually, people who finance a home stay current on their payments for at least a while. But a sliver of folks in XYZ fell behind within 90 days of signing their papers. After six months, an alarming 6 percent of the mortgages were seriously delinquent. (Historically, it is rare for more than 1 percent of mortgages at that stage to be delinquent.)

Moody’s monitors began to make inquiries with the lender and were shocked by what they heard. Some properties lacked sod or landscaping, and keys remained in the mailbox; the buyers had never moved in. The implication was that people had bought homes on spec: as the housing market turned, the buyers walked.

By the spring of 2007, 13 percent of Subprime XYZ was delinquent — and it was worsening by the month. XYZ was hardly atypical; the entire class of 2006 was performing terribly. (The class of 2007 would turn out to be even worse.)

In April 2007, Moody’s announced it was revising the model it used to evaluate subprime mortgages. It noted that the model “was first introduced in 2002. Since then, the mortgage market has evolved considerably.” This was a rather stunning admission; its model had been based on a world that no longer existed.

Poring over the data, Moody’s discovered that the size of people’s first mortgages was no longer a good predictor of whether they would default; rather, it was the size of their first and second loans — that is, their total debt — combined. This was rather intuitive; Moody’s simply hadn’t reckoned on it. Similarly, credit scores, long a mainstay of its analyses, had not proved to be a “strong predictor” of defaults this time. Translation: even people with good credit scores were defaulting. Amy Tobey, leader of the team that monitored XYZ, told me, “It seems there was a shift in mentality; people are treating homes as investment assets.” Indeed. And homeowners without equity were making what economists call a rational choice; they were abandoning properties rather than make payments on them. Homeowners’ equity had never been as high as believed because appraisals had been inflated.

Over the summer and fall of 2007, Moody’s and the other agencies repeatedly tightened their methodology for rating mortgage securities, but it was too late. They had to downgrade tens of billions of dollars of securities. By early this year, when I met with Moody’s, an astonishing 27 percent of the mortgage holders in Subprime XYZ were delinquent. Losses on the pool were now estimated at 14 percent to 16 percent — three times the original estimate. Seemingly high-quality bonds rated A3 by Moody’s had been downgraded five notches to Ba2, as had the other bonds in the pool aside from its triple-A’s.

The pain didn’t stop there. Many of the lower-rated bonds issued by XYZ, and by mortgage pools like it, were purchased by C.D.O.’s, the second-order mortgage vehicles, which were eager to buy lower-rated mortgage paper because it paid a higher yield. As the agencies endowed C.D.O. securities with triple-A ratings, demand for them was red hot. Much of it was from global investors who knew nothing about the U.S. mortgage market. In 2006 and 2007, the banks created more than $200 billion of C.D.O.’s backed by lower-rated mortgage paper. Moody’s assigned a different team to rate C.D.O.’s. This team knew far less about the underlying mortgages than did the committee that evaluated Subprime XYZ. In fact, Moody’s rated C.D.O.’s without knowing which bonds the pool would buy.

A C.D.O. operates like a mutual fund; it can buy or sell mortgage bonds and frequently does so. Thus, the agencies rate pools with assets that are perpetually shifting. They base their ratings on an extensive set of guidelines or covenants that limit the C.D.O. manager’s discretion.

Late in 2006, Moody’s rated a C.D.O. with $750 million worth of securities. The covenants, which act as a template, restricted the C.D.O. to, at most, an 80 percent exposure to subprime assets, and many other such conditions. “We’re structure experts,” Yuri Yoshizawa, the head of Moody’s derivative group, explained. “We’re not underlying-asset experts.” They were checking the math, not the mortgages. But no C.D.O. can be better than its collateral.

Moody’s rated three-quarters of this C.D.O.’s bonds triple-A. The ratings were derived using a mathematical construct known as a Monte Carlo simulation — as if each of the underlying bonds would perform like cards drawn at random from a deck of mortgage bonds in the past. There were two problems with this approach. First, the bonds weren’t like those in the past; the mortgage market had changed. As Mark Adelson, a former managing director in Moody’s structured-finance division, remarks, it was “like observing 100 years of weather in Antarctica to forecast the weather in Hawaii.” And second, the bonds weren’t random. Moody’s had underestimated the extent to which underwriting standards had weakened everywhere. When one mortgage bond failed, the odds were that others would, too.
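Adelson's Antarctica-to-Hawaii complaint and the point about the bonds not being random can both be made concrete with a small simulation. The sketch below is emphatically not Moody's model; it simply compares the chance that a toy pool's losses reach a notional senior layer under two assumptions: independent defaults at a benign historical rate versus correlated defaults at a higher rate. The default probabilities and the one-factor shock are invented for illustration.

```python
# Toy Monte Carlo comparison: independent versus correlated defaults in a
# pool of 100 mortgage bonds. A single shared shock stands in for "when one
# bond fails, the odds are that others will, too." All parameters invented.
import random

def chance_senior_layer_breached(n_bonds=100, p_default=0.02,
                                 correlation=0.0, trials=10_000):
    """Fraction of trials in which more than 20% of the pool defaults,
    i.e. losses reach the notional senior layer."""
    breaches = 0
    for _ in range(trials):
        # A shared "market" shock pushes every loan's default chance up together.
        shock = max(0.0, random.gauss(0, 1))
        p = min(1.0, p_default * (1 + 5 * correlation * shock))
        defaults = sum(1 for _ in range(n_bonds) if random.random() < p)
        if defaults / n_bonds > 0.20:
            breaches += 1
    return breaches / trials

random.seed(0)
print("independent defaults, benign history:",
      chance_senior_layer_breached(p_default=0.02, correlation=0.0))
print("correlated defaults, weakened lending:",
      chance_senior_layer_breached(p_default=0.08, correlation=0.8))
```

With independent defaults at the old rate the senior layer is essentially never touched; add a common shock and a deteriorated default rate and the supposedly impossible outcome becomes routine. That is the gap between the historical deck of mortgage bonds and the 2006 vintage.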

Moody’s estimated that this C.D.O. could potentially incur losses of 2 percent. It has since revised its estimate to 27 percent. The bonds it rated have been decimated, their market value having plunged by half or more. A triple-A layer of bonds has been downgraded 16 notches, all the way to B. Hundreds of C.D.O.’s have suffered similar fates (most of Wall Street’s losses have been on C.D.O.’s). For Moody’s and the other rating agencies, it has been an extraordinary rout.

Whom Can We Rely On?

The agencies have blamed the large incidence of fraud, but then they could have demanded verification of the mortgage data or refused to rate securities where the data were not provided. That was, after all, their mandate. This is what they pledge for the future. Moody’s, S.&P. and Fitch say that they are tightening procedures — they will demand more data and more verification and will subject their analysts to more outside checks. None of this, however, will remove the conflict of interest in the issuer-pays model. Though some have proposed requiring that agencies with official recognition charge investors, rather than issuers, a more practical reform may be for the government to stop certifying agencies altogether.

Then, if the Fed or other regulators wanted to restrict what sorts of bonds could be owned by banks, or by pension funds or by anyone else in need of protection, they would have to do it themselves — not farm the job out to Moody’s. The ratings agencies would still exist, but stripped of their official imprimatur, their ratings would lose a little of their aura, and investors might trust in them a bit less. Moody’s itself favors doing away with the official designation, and it, like S.&P., embraces the idea that investors should not “rely” on ratings for buy-and-sell decisions.

This leaves an awkward question with respect to insanely complex structured securities: what can investors rely on? The agencies seem far too involved to serve as a neutral arbiter, and the banks are sure to invent new and equally hard-to-assess vehicles in the future. Vickie Tillman, the executive vice president of S.&P., told Congress last fall that in addition to the housing slump, “ahistorical behavioral modes” by homeowners were to blame for the wave of downgrades. She cited S.&P.’s data going back to the 1970s, as if consumers were at fault for not living up to the past. The real problem is that the agencies’ mathematical formulas look backward while life is lived forward. That is unlikely to change.

Roger Lowenstein, a contributing writer, last wrote for the magazine about the Federal Reserve chief, Ben Bernanke. His new book, “While America Aged,” will be published next month.

WWBD: What Will Bernanke Do on Wednesday 30 April?


It would be hard to say that the Wall Street banks are expecting the Fed to hold rates steady, given the results of today's Treasury auction. They could, of course, hold the headline rate steady and simply continue to ignore their own target, pricing debt well below it in their myriad interest rate welfare programs for the hedge funds, aka the Wall Street banks.

And just why didn't Treasury take any of those higher bids? Repugnant collateral offered? Stuff even Timmy at the Fed wouldn't touch as well? LOL.

This looks more like a capitalization problem, as in a lack of quality capital and a surfeit of off-the-books rubbish, than a liquidity problem. A basic insolvency scenario.

Homeowners may be sitting on overpriced houses, but the banks are sitting on a mountain of overpriced and overrated debt instruments that they will not confess to or write off more aggressively.

All that cutting rates from here will accomplish is to bury the problem under a carpet of inflated paper, in the hope that the stench of rotten debt does not permeate the markets.

Remember all those snide comments that US economists made about Japanese banks and their unwillingness to write down bad debts in the 1990s?