Wednesday, June 29, 2011

Are Productivity Gains in Higher Education Possible?

Yes, but not until institutions are given incentives to pursue them.

Here’s a puzzle: leaders are calling on colleges and universities to produce more degrees, but cash-strapped states are cutting higher education spending. What’s the solution? Be careful how you answer—this question has become the most prominent fissure in contemporary debates about higher education reform.

On the one hand, many observers within higher education argue that colleges and universities are fundamentally handicapped when it comes to increasing productivity because of the nature of their core business. The argument (explored in the recent book Why Does College Cost So Much?) is that higher education is a service industry, where the “product” is heavy on human interaction, requires a fixed amount of time with the consumer, and is run by highly educated individuals with high reservation wages. These forces translate to increases in wages and costs without any increase in outputs, leading to declines in overall productivity. This dynamic is what economists call the “cost disease.”

Reform-minded analysts within and outside of higher education have argued that institutions can conceivably become more productive by leveraging technology, reallocating resources, and searching for cost-effective policies that promote student success. Advances in technology and in our understanding of how students learn have opened new avenues for online and hybrid courses that can build capacity and reduce cost. Decisions about how to structure programs—like requiring students to register full-time and creating a set sequence of courses—can promote retention and degree completion over a shorter time frame, leading some colleges to be far more productive than others. And some institutions have shown a willingness to think strategically about how to cut costs so that funding is preserved for elements that are both effective and efficient in promoting student success.

This divide—between those who see no way out of the “cost disease” and those who believe colleges and universities can change to become more productive—has risen to the fore of current higher education policy debates. While age-old arguments about whether everyone should go to “college” and who should pay for it still rage, the productivity question is the most prominent dividing line between reformers and the status quo.

Objections to the Productivity Agenda: The ‘Cost Disease’

The standard response to calls for more higher education productivity is to invoke William Baumol and William Bowen’s “cost disease.” The “cost disease” posits that service sector firms whose “products” involve interactions with customers (e.g., a nurse treating a patient, a barber giving a haircut) will have difficulty increasing their productivity because those interactions typically entail a fixed amount of time with the customer. Meanwhile, because industries outside the service sector routinely enhance their productivity by utilizing new technology and re-allocating labor, the wages for workers in those industries will increase as productivity increases. As wages increase in other sectors, service sector firms must pay their own employees more in order to prevent them from defecting to industries where the pay is higher, even though they are not producing more of their product. Increasing the wages of these service sector workers leads to rising costs with no increase in outputs, which translates to declines in productivity.
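The mechanism can be made concrete with a small sketch. The numbers below are purely hypothetical, chosen only to illustrate the logic: unit cost is wage divided by productivity, so when economy-wide wages track manufacturing productivity gains while service-sector productivity stays flat, the service sector's unit costs rise.

```python
# Illustrative sketch of Baumol's "cost disease" (hypothetical numbers):
# one sector doubles its productivity over time while a service sector
# (e.g., teaching) stays flat, yet wages in both sectors must track the
# rising market wage to retain workers.

def unit_cost(wage, output_per_worker):
    """Cost of producing one unit: wage divided by productivity."""
    return wage / output_per_worker

# Year 0: both sectors pay $50k; each "produces" 100 units per worker
# (widgets in manufacturing, student-courses in teaching).
wage = 50_000
mfg_productivity, svc_productivity = 100, 100

cost_mfg_0 = unit_cost(wage, mfg_productivity)
cost_svc_0 = unit_cost(wage, svc_productivity)

# Later: manufacturing productivity doubles and the market wage doubles
# with it; service productivity is unchanged (fixed time per student).
wage *= 2
mfg_productivity *= 2

cost_mfg_1 = unit_cost(wage, mfg_productivity)  # unchanged: 500.0
cost_svc_1 = unit_cost(wage, svc_productivity)  # doubled: 1000.0

print(cost_mfg_1, cost_svc_1)
```

The service sector produces no more than before, but its unit cost has doubled because it must match wages set elsewhere — the "disease" in a nutshell.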

Applying this argument to higher education leads many to conclude that productivity gains will prove elusive—students are required to spend about as much time listening to lectures as they were 50 years ago, and grading essays takes about as long as it did when typewriters ruled the day, yet a university must pay faculty and staff more in order to retain them. Moreover, if compelled to increase productivity, institutions will likely respond by decreasing the quality of the education they provide. As Robert Archibald and David Feldman write in their recent study of college costs and productivity:

An institution can increase class size to raise measured output (students taught per faculty per year) or it can use an increasing number of less expensive adjunct teachers to deliver the service, but these examples of productivity gain are likely to be perceived as decreases in quality, both in the quality rankings and in the minds of the students.

What about the promise of technology, which has so markedly increased the productivity of firms in many different industries? A blind alley, say Archibald and Feldman:

For the higher education industry, new technologies are not transforming the industry in ways that allow significant reductions in input use, especially of highly educated labor, and the shift toward an ever-more-highly-skilled workforce has not led to any measured productivity gain for the sector as a whole. Costs must go up as a consequence.

In fact, the authors argue that new technologies actually increase higher education costs as colleges seek to maintain a “standard of care” that keeps up with technological change and what employers need.

In light of this apparent iron law of higher education, it is not surprising that higher education advocates bristle at the suggestion that their institutions could improve without an influx of new dollars. In a recent editorial in Inside Higher Ed, Garrison Walters, the director of South Carolina’s Commission on Higher Education, pilloried the idea that prodding colleges and universities to become more productive is a sensible approach to reform:

The thinking goes like this: 1) Higher education is getting more expensive; 2) Higher education is more necessary than ever; so 3) we should be able to get our colleges and universities to produce the same product at half the cost.

That shrieking sound in the background is the logic alarm going off. Unfortunately, many can’t hear it over the loud, unceasing babble about reform ...

We need to escape from the “creating more degrees through better management” box. If we don’t, my fear is that the ersatz reform movement will win and higher education will come to resemble K-12: a vast machine run by bureaucrats and focused on outputs that are truly quantitative but only pretend qualitative.

If focusing on outputs and better management are dead ends, how should we go about making real gains? By providing more money to higher education’s “experts” and improving inputs, naturally. A favorite recommendation: pour grant money “into projects designed to create more of a pervasive education culture in the U.S.” Walters’ belief “is that much of the inefficiency in our education system ... occurs because students don’t think learning is important or don’t believe they can learn, or both.” Translation: It’s those darn lackadaisical students who need to be reformed, not the institutions they attend. With remediation rates at community colleges hovering around 40 percent, we clearly have a lot of work to do on the preparation front. But this does not take colleges off the hook. If students must be prepared for college, colleges must be prepared for students.

The main target of Walters’ ire was a report released by McKinsey and Company last year (“Winning by Degrees”) which highlighted how improved management and use of technology could increase the productivity of postsecondary institutions.

I’m as skeptical of management consultants in public policy as the next guy. The solutions always seem a little too simple and self-evident (“better management”) and the numbers are provocative but largely un-replicable (e.g., “the achievement gap costs the U.S. $3 to $5 billion a day”). More to the point, Walters is right that small-bore tinkering with management is only likely to produce incremental benefits. And experimentation with new ideas often requires some start-up investment to get them up and running.

But these caveats don’t add up to a rejection of the McKinsey report’s basic premise: institutions of higher education can learn to become more productive and can do so without a big infusion of new dollars or a decline in quality. At the very least, providing incentives for colleges to rethink the way they organize and do business seems like a more tractable approach than pie-in-the-sky proposals to increase students’ appreciation for learning before they enroll in college.

Getting Past the ‘Known Unknowns’ Is a First Step to Enhancing Productivity

In some sense, it is not surprising that colleges and universities would argue that they cannot possibly become more productive. As an influential paper by Doug Harris and Sara Goldrick-Rab argues, researchers and institutions themselves have rarely paid much attention to whether policies and practices are cost-effective. How would you know whether you’re spending money effectively if you’ve never even asked the question?

Harris and Goldrick-Rab argue that higher education research has largely ignored questions about the cost-effectiveness of important institutional policies—everything from student-faculty ratios, to the use of adjunct faculty, to call centers for student support. Without concrete information about cost-effectiveness, it is difficult, if not impossible, to figure out which changes might enhance productivity. The authors suggest that “the absence of [this] type of information ... is perhaps the strongest evidence that we are falling short of our productivity potential.”

On the basis of their analysis, Harris and Goldrick-Rab conclude that colleges are far from “helpless” when it comes to confronting productivity, and that their results “suggest a need to break out of this mindset, to actively search for new and better ways to serve students.” Shedding light on what policies and programs are cost-effective—the “known unknowns”—is a critical first step in enhancing productivity.

Are Productivity Improvements Possible?

In spite of the “cost disease,” some institutions and providers are experimenting with productivity-enhancing reforms, providing scattered proof points to the reform-minded.

The National Center for Academic Transformation (NCAT) is probably the most oft-cited example of how to reform instructional delivery in a way that maintains quality and reduces costs. NCAT partners with existing institutions to redesign large enrollment introductory courses using information technology. Instead of the standard model, where students sit in a professor’s lecture for 3-4 hours per week and attend an hour-long discussion section, a redesigned course features computer-based interactive tutorials that periodically quiz students to gauge their mastery of concepts. The redesigned courses also feature “on-demand” assistance from peer tutors or course assistants. Because these lower-cost assistants and tutors handle organizational and technical issues, faculty can spend less time fussing with these elements and more time on instructional matters. And because students can rely on these intermediaries for assistance, student-faculty ratios can increase in redesigned courses.

The proof is in the pudding: NCAT’s various redesign efforts, hosted at a variety of institutions and in a variety of courses, boast student outcomes that are as good as, if not better than, those of the analogous traditional courses. Most importantly, they do so at a lower cost. NCAT’s rigorous evaluations have estimated cost savings of between 15 and 75 percent compared with the traditional model. No loss of quality there, and a whole lot less expensive.

Leveraging technology is not the only route to enhanced productivity. Reallocating resources away from costly policies and practices with dubious track records toward those that show promise is another. For example, a high percentage of community college students are placed in remedial courses on the basis of an exam taken just before the semester begins. These remedial courses rarely count toward a certificate or a degree but must be taken before the student can advance to credit-bearing coursework. The costs of providing these remedial courses are enormous, and research suggests that remediation may be negatively related to retention and completion rates. The good news is that some institutions are thinking of low-cost ways to help their students avoid remedial classes. The key insight is simple: people are likely to do better on a test if they are prepared for it—if they know the stakes, are familiar with the format, and know which concepts will be tested. Some colleges have realized that a dose of such test preparation can go a long way toward reducing remediation rates.

Some institutions have invested a fraction of the money that they spend on remedial courses in a summer “bridge” program, where students are pre-tested and then brush up on their basic math and English skills before taking the real exam. Others, like El Paso Community College, have reached down into local high schools to let students know what they can expect on the Accuplacer exam. EPCC has found that simply explaining what a placement test is, pre-testing high school juniors, and providing targeted instruction on the basis of the pretest can help a nontrivial proportion of incoming students avoid remedial classes. Avoiding these “false positives” saves the student money and lowers the college’s cost per degree.

Getting Around the ‘Cost Disease’ Argument

The cost disease is clearly not a figment of college administrators’ imaginations. Indeed, Archibald and Feldman do an excellent job illustrating that the cost and productivity curves of other service industries (e.g., legal services, healthcare) look similar to those in higher education. Given the pathologies that plague these two industries, higher education should take little solace in the fact that it has company. But the costs of hiring highly educated workers are what they are, and policy makers must acknowledge that.

But they must also acknowledge that we are unlikely to see increases in productivity, or even experimentation with innovations that might cut costs, until institutions are given incentives to pursue them. Funding colleges and universities based on bodies in seats rather than successful outcomes is a fundamental handicap in advancing a productivity agenda. Competitive grant-making that rewards successful programs without attention to whether those programs are cost-effective leaves us without the information that would make productivity gains possible. Policies that provide incentives to focus on productivity not only test the limits of the cost disease, but can provide further proof that it is not an iron law.

At this stage, though, this debate is still as rhetorical as it is empirical. So long as entrenched higher education interests skirt responsibility for stagnant productivity by citing the cost disease or the academic preparation of their students, reformers who believe institutions can improve will cede any rhetorical momentum.

Inputs like state funding and the types of students that schools enroll are obvious determinants of institutional success, but they are not the only ones. In an era of budget cuts and mass enrollment, the path to raising attainment rates must start with colleges and universities themselves.

Andrew Kelly is a research fellow in education policy at the American Enterprise Institute.

Means and Extremes: How Not to Balance the Budget

Should we impose a means test on Social Security and Medicare benefits? No.

Last month, the Peter G. Peterson Foundation released budget plans authored by analysts at six think tanks from across the ideological spectrum: the American Enterprise Institute, the Bipartisan Policy Center, the Center for American Progress, the Economic Policy Institute, The Heritage Foundation, and the Roosevelt Institute Campus Network. Along with Joe Antos, Alan Viard, and Alex Brill, I was one of the authors of the proposal from AEI (and I’d note that it represents the authors’ opinions and not those of AEI or other AEI scholars).

All of the plans managed to put the budget on a more-or-less sustainable track through tax increases, spending reductions, or a combination of the two. One plan, from Heritage, stood out in that it went further in cutting spending and it stabilized the debt sooner than the others. The principal difference between Heritage’s plan and AEI’s (which are similar in many other respects) is that Heritage imposed a means test on Social Security and Medicare benefits.

I’m working on a general article for National Affairs on means testing that will come out in the fall, but since we’ve gotten some questions about how the plans differ and why we didn’t go for a means test in our approach, I thought I’d quickly run through the issues. I’ve talked to the folks at Heritage and welcome a back-and-forth discussion. I’ve worked with Heritage and even written papers published by them, so this should be taken as a disagreement among friends.

A means test makes the payment of a government benefit contingent upon the income or assets—that is, the “means”—of the recipient. Many programs for low earners are means tested, but means tests for Social Security and Medicare are more limited. Retirees may pay income taxes on part of their Social Security benefits and high-income retirees pay higher Medicare premiums, but the effects of these de facto means tests are usually pretty small.

Heritage imposed a means test more aggressively, which may reflect a new willingness among conservatives to use means tests to limit spending. Columnist Charles Krauthammer has called for a means test for Social Security “so that Warren Buffett’s check gets redirected to a senior in need.” Likewise, former Minnesota Governor Tim Pawlenty favors means-tested Cost of Living Adjustments (COLAs).

But the reason means tests have been unpopular in the past remains true today: they penalize people who work and save. Some may perceive this as unfair; fair or unfair, means tests have negative effects on incentives to work and save.

Heritage’s means test would reduce Social Security and Medicare benefits for retirees with non-Social Security income over $55,000 and eliminate them for those with incomes over $110,000. The means test is limited: around 9 percent of individuals would have some benefit reduction and around 3.5 percent would lose their benefits completely. (It’s not clear if these percentages would remain stable over time.)

Since a typical person at that income level has Social Security benefits of around $14,000 and, under Heritage’s plan, Medicare benefits of around $11,000, Heritage’s means test implies a loss of around 45 cents in benefits for each dollar of income over $55,000. In effect, it’s an implicit marginal tax of 45 percent on top of the 25 percent flat tax rate under Heritage’s proposal, for a total of around 70 percent.
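The arithmetic behind those figures can be checked directly. This is a simplified sketch that assumes, as the text does, a linear phase-out of roughly $25,000 in combined benefits over the $55,000-wide income band:

```python
# Back-of-the-envelope check of the implicit marginal rate in the
# means test described above (figures from the text, linear phase-out
# assumed for simplicity).

benefits = 14_000 + 11_000           # Social Security + Medicare at risk
phase_out_span = 110_000 - 55_000    # income band over which they vanish

implicit_rate = benefits / phase_out_span        # ~0.45
flat_tax_rate = 0.25                             # Heritage's flat tax
total_marginal_rate = implicit_rate + flat_tax_rate  # ~0.70

print(f"implicit means-test rate: {implicit_rate:.0%}")     # 45%
print(f"combined marginal rate:   {total_marginal_rate:.0%}")  # 70%
```

Each extra dollar earned between $55,000 and $110,000 thus loses about 45 cents of benefits and 25 cents of tax, which is where the roughly 70 percent figure comes from.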

One of the lessons of tax policy is that most people aren’t going to pay that kind of tax rate. My National Affairs article will point to some interesting research finding that older individuals are particularly sensitive to marginal rates, because unlike others they have the option of retiring. If people stop working then they won’t get the income and the government won’t get the spending reductions.

In practice (or using dynamic scoring), I doubt Heritage’s means test would produce nearly the savings that a static projection implies. Some people would limit their earnings to avoid it, as individuals already do with Social Security’s Retirement Earnings Test, which reduces Social Security benefits for early retirees who earn more than around $14,000. If individuals work less to avoid the Heritage means test, that’s bad for them, but it also means lower budget savings. Other retirees might avoid the means test by delaying claiming Social Security benefits until after they stop working completely. The main savings would come from individuals with incomes far above $110,000, for whom the negative marginal incentives wouldn’t apply. But there are only a tiny number of people in this category.

Heritage’s plan projects a reduction in Social Security and Medicare outlays of 1.4 percent of GDP from 2011 to 2015, much of which presumably comes from the means test (although savings may also come from abandoning ObamaCare and other areas). Personally, I do not think this reduction would really happen. Moreover, I don’t think it should happen, in the sense that we don’t want to set a precedent that 70 percent marginal rates for anyone are a good policy prescription for addressing the U.S. deficit.

To be fair, Heritage’s plan would exempt the first $10,000 in retirees’ incomes from taxes, which would encourage low-income retirees to work. However, for individuals in the means test’s income range of $55,000 to $110,000, this exemption would actually exacerbate disincentives to work (in econo-speak, they’d face a positive income effect on top of the means test’s negative substitution effect, both of which discourage work). I believe Heritage’s plan would do away with some of the smaller entitlement means tests under current law, although the implicit taxes from these current policies aren’t huge. So, on net, this is a pretty big change.

Because Heritage’s plan (like AEI’s) would shift the general tax code to a consumption base, it might seem that the true marginal rate would not be the sum of the implicit tax through the means test and the 25 percent consumption tax rate. But that’s not right. Consumption taxes alter the incentives governing whether income, once earned, is spent or saved, but they don’t change work incentives overall.

If you needed to reduce Social Security and Medicare outlays in a hurry, there is a way around this: cut benefits based on individuals’ lifetime earnings rather than their current incomes. Social Security benefits are based on lifetime earnings, so we already know what people earned. If you cut benefits for retirees with high lifetime earnings, you would give them the incentive to earn more to make up the difference, not the incentive to earn less to avoid the penalty. I’m not saying it would be popular and it wouldn’t be as closely targeted as a means test based on current income, but it would be more effective in generating budget savings and more conducive to encouraging individual work and saving.

Clearly we need to get on top of the budget deficit, and to their credit, Heritage’s folks were willing to make tougher choices than most of the rest of us. But, at least when it comes to the means test, I think the benefits aren’t worth the cost.

Andrew G. Biggs is a resident scholar at the American Enterprise Institute.

To Help U.S. Economy, Raise Rates

by Steve H. Hanke

The rate of broad money growth (M3) in the United States is weak. The ultra-low federal funds rate (0.25%) has acted to keep a lid on broad money growth and, in turn, economic activity. Yes, "low" interest rates imposed by the Fed are contributing to a credit crunch and anemic money growth. But, wait. This is counterintuitive. And if that's not enough, it's not what the textbooks tell us, either.

While the Fed has pumped huge quantities of so-called high-powered money into the economy, the United States is paradoxically facing a credit crunch. Banks have utilized their liquidity to pile up cash and accumulate government bonds and securities. In contrast, bank loans have actually decreased since May 2008. And since credit is a source of working capital for businesses, a credit crunch acts like a supply constraint on the economy. Even though it appears as though the economy has loads of excess capacity, the supply side of the economy is, in fact, constrained by the credit crunch. It is not surprising, therefore, that the economy is not firing on all cylinders.

To understand why, in the Fed's sea of liquidity, the economy is being held back by a credit crunch, we have to focus on the workings of the loan markets. Retail bank lending involves making risky forward commitments. A line of credit to a corporate client, for example, represents such a commitment. The willingness of a bank to make such forward commitments depends, to a large extent, on a well-functioning interbank market — a market operating without counterparty risks and with positive interest rates. With the availability of such a market, even illiquid (but solvent) banks can make forward commitments (loans) to their clients because they can cover their commitments by bidding for funds in the wholesale interbank market.

Steve H. Hanke is a professor of applied economics at The Johns Hopkins University in Baltimore and a senior fellow at the Cato Institute in Washington, D.C.

At present, the major problem facing the interbank market is what can be termed a zero-interest rate trap. In a world in which the fed funds rate is close to zero, banks with excess reserves are reluctant to part with them for virtually no yield in the interbank market. Accordingly, the interbank market has dried up — thanks to the Fed's "zero" interest-rate policy. And, with that, banks have been unwilling to scale up their forward loan commitments.

But, how can banks make money without making wholesale and/or retail loans? Well, it's easy and "risk free" to boot. By holding the federal funds rate near zero, the Fed creates an opportunity for banks to borrow funds at virtually no cost and use them to purchase two-year U.S. treasury notes, which yield about 40 basis points. That doesn't sound like much. But, considering that banks don't have to hold capital against U.S. treasuries, their positions in U.S. government securities can be leveraged to the moon. Well, not really. But, at a leverage ratio of 20, a bank can do quite well by playing the treasury yield curve.
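The carry-trade arithmetic sketched above is easy to verify. The figures below are illustrative, using the yields mentioned in the text and an assumed near-zero funding cost of 25 basis points:

```python
# Back-of-the-envelope sketch of the "risk free" carry trade described
# above: borrow at roughly the fed funds rate, buy two-year Treasuries
# yielding ~40 basis points, and lever the position 20-to-1 since
# Treasuries carry no capital charge. All figures are illustrative.

equity = 1_000_000        # bank capital committed, dollars
leverage = 20             # assets held per dollar of equity
funding_rate = 0.0025     # ~25 bp cost of borrowed funds (assumed)
treasury_yield = 0.0040   # ~40 bp yield on two-year notes

assets = equity * leverage
borrowed = assets - equity

gross_income = assets * treasury_yield   # coupon income on the position
funding_cost = borrowed * funding_rate   # interest paid on borrowings
net_income = gross_income - funding_cost

return_on_equity = net_income / equity
print(f"return on equity: {return_on_equity:.2%}")  # 3.25%
```

A thin 15-basis-point spread, levered twenty times, becomes a multi-percentage-point return on equity with no lending to the real economy — which is the author's point about why banks prefer the yield curve to loan-making.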

It's time for the Fed to recognize market realities and raise the federal funds rate. A higher fed funds rate would release the credit squeeze created by the Fed's misguided "low" interest-rate policy. If the Fed boosted the funds rate to 2%, the all-important broad money measure — M3 — would get a boost, and so would the slumping economy.

Time to Ax Federal Jobs Programs

by Chris Edwards and Daniel Murphy

With the nation's unemployment rate still above 9 percent and a steady stream of worrisome labor news (the latest statistic: 429,000 new unemployment claims last week), federal policymakers are facing pressure to do something about joblessness. The giant 2009 stimulus bill was supposed to cut unemployment to less than 7 percent by now — but that clearly hasn't worked as planned.

Some policymakers are now looking at expanding job training and other federal employment programs. Even conservative House Budget Committee Chairman Paul Ryan (R-Wis.) proposed to "strengthen" these programs in his recent fiscal plan. Alas, the history of waste and failure in these programs argues for termination, not expansion.

Federal programs for unemployed and disadvantaged workers now cost $18 billion a year, yet the Government Accountability Office recently concluded that "little is known about the effectiveness of employment and training programs we identified." Indeed, many studies over the decades have found that these programs — though well intentioned — don't help the economy much, if at all.

Worse, federal jobs programs have long been notorious for wastefulness. The word "boondoggle" was coined in the 1930s to describe the inefficiencies of New Deal jobs programs. Laborers on Works Progress Administration projects were generally viewed as slackers, and a popular song of the era went, "WPA, WPA, lean on your shovel to pass the time away."

The modern era of jobs programs began in the 1960s, under Presidents John F. Kennedy and Lyndon B. Johnson, who created an array of employment services. Indeed, so many overlapping programs were created that, in 1969, Labor Secretary George Shultz called the organization chart for jobs programs a "wiring diagram for a perpetual motion machine."

In the 1970s, President Richard Nixon created — and President Jimmy Carter then expanded — the Public Service Employment Program, which used federal dollars to create hundreds of thousands of jobs in local governments and community groups. The program was "scandal-ridden," according to Congressional Quarterly, and led to many "newspaper exposés of local instances of nepotism, favoritism and other kinds of fraud."

Luckily, President Ronald Reagan killed the entire program. Unfortunately, he then backed a wasteful "conservative" solution to high unemployment — expanded job training. Reagan said the Job Training Partnership Act of 1982 would not be "another make-work, dead-end bureaucratic boondoggle."

But in his book The Job Training Charade, University of Oregon labor professor Gordon Lafer found that "from its start, JTPA was plagued by widespread abuse and mismanagement." A 1994 official study of JTPA concluded that job-training programs created no significant benefits for most participants.

Subsequent legislation has allegedly improved these programs. But, Lafer notes, "as successive generations of job-training programs fail to produce the hoped-for results, policymakers have cycled through a stock repertoire of procedural fixes that promise to solve the problem."

They don't, but politicians keep trying.

Ryan now argues that the budget "is dotted with failed, unaccountable and duplicative job-training programs." But rather than repealing them, his recent budget plan wants to make them more "targeted."

Ryan's budget-reform efforts are laudable, but he should follow his own plan's advice to "limit government to its core constitutional roles" — and terminate federal jobs programs.

The good news is that federal job-training and employment programs don't fill any critical need that private markets can't fill in the modern economy. The vast majority of U.S. job training, for example, is done by individuals and businesses without government help in the normal pursuit of higher wages and profits. U.S. organizations spend more than $120 billion a year on employee learning and development, according to the American Society for Training and Development.

Chris Edwards is editor of the Cato Institute's Downsizing Government.org. Daniel Murphy is a former special assistant in the Labor Department.

Other Labor Department activities are also redundant. The department helps fund 3,000 offices nationwide that provide unemployed workers with job-searching and job-matching services. It turns out that remarkably few people use these services, even with today's high unemployment.

Instead, job seekers mainly rely on the Internet, personal networking, temp agencies and other market institutions. Essentially, Monster.com has made federal employment offices obsolete.

To solve the federal budget mess, policymakers need to restrain their impulses to "help" the economy with spending programs. Federal help usually doesn't work — it just consumes resources that would have created jobs and growth if left in the private sector.

As Congress scours the budget looking for spending cuts, employment and training programs would be good targets.

Tax Increase Con Men

by Richard W. Rahn

Would you prefer to have 25 percent of $200 that you can see or 20 percent of $300 that you cannot see immediately? Many battles, including the current ones in Washington, are fought between those who can see the consequences of actions several steps in the future, like good chess players, and those who cannot.

Former Vice Chairman of the Federal Reserve Alan Blinder wrote an article in the Wall Street Journal this past week attacking Republicans who have said that more government spending will kill jobs. In the same vein, my old friend Bruce Bartlett, a Treasury official in the George H.W. Bush administration, wrote an article attacking former Minnesota Gov. Tim Pawlenty and other Republicans for claiming the Reagan tax cuts paid for themselves. (Note: Mr. Bartlett used to be a supply-side advocate, but in the past few years, he has become an almost full-time Republican basher and, not surprisingly, now writes for the New York Times.) Mr. Blinder, Mr. Bartlett and others of their stripe no longer seem to be able to see beyond the first-order effects of an economic policy.

The reason these old questions are still causing arguments is because the answers give a guide to which future economic policies are likely to be effective and which are not. Did President Reagan's tax cuts pay for themselves? Mr. Bartlett states: "The fact is that the only metric that really matters is revenues as a share of the gross domestic product. By this measure, total federal revenues fell from 19.6 percent of GDP in 1981 to 18.4 percent of GDP in 1989."

Richard W. Rahn is a senior fellow at the Cato Institute and chairman of the Institute for Global Economic Growth.

More by Richard W. Rahn

The real fact is that Mr. Bartlett has it dead wrong.

It is not only the size of the revenue share of GDP that matters, but, more important, the size of the pie (GDP in this case). Real GDP grew by more than 34 percent from 1982, when Reagan's tax-cut policies started to take effect, through 1989. (A mixture of both tax increases and tax-rate cuts over the Reagan years resulted in a massive drop of the top marginal rate from 70 to 28 percent.) This was an average annual real growth rate of approximately 4.5 percent, much higher than either opponents or proponents of the Reagan policies had forecast. When Mr. Bartlett and others argue that the Reagan-era tax cuts did not pay for themselves, even though real inflation-adjusted revenues rose, they are, in effect, saying that some other, unspecified policies would have brought in more revenue. From the beginning of 1979, there was no real economic growth under President Carter's tax-rate regime. The economy did not start steady growth again until the fourth quarter of 1982, when the Reagan tax cuts were being phased in.

Why should one believe the economy would have grown as fast under the Carter tax regime with a 70 percent top rate? If you assume, as I and many others do, that a continuation of the Carter policies would have produced an economic growth rate of no more than 2.8 percent and probably lower (which is the 30-year average economic growth rate and also is almost identical to the growth rate of the two years of the so-called Obama "recovery") then the Reagan tax cuts did pay for themselves in seven years. Again, the size of the pie counts, and a much larger pie results when it compounds at 4.5 percent rather than 2.8 percent.
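The "size of the pie" argument is, at bottom, compounding arithmetic. As a rough illustration (using only the stylized figures cited above: 4.5% versus 2.8% real growth, and revenue shares of 18.4% versus 19.6% of GDP, with starting GDP normalized to 100), one can compute how quickly the faster-growing, lower-share path overtakes the slower-growing, higher-share path:

```python
# Stylized revenue comparison: a fast-growing economy taxed at a lower
# share of GDP versus a slow-growing one taxed at a higher share.
# Figures are the ones cited in the text; starting GDP is normalized to 100.

def revenue(gdp0, growth, share, year):
    """Real revenue in a given year: the GDP path times the revenue share."""
    return gdp0 * (1 + growth) ** year * share

def breakeven_year(fast=(0.045, 0.184), slow=(0.028, 0.196), gdp0=100.0):
    """First year in which the faster-growing, lower-share path yields
    more revenue than the slower-growing, higher-share path."""
    year = 0
    while revenue(gdp0, *fast, year) <= revenue(gdp0, *slow, year):
        year += 1
    return year

print(breakeven_year())  # -> 4: the bigger pie dominates within a few years
```

Whether 2.8% is the right counterfactual is, of course, exactly what the dispute is about; the sketch only shows that, under these assumptions, compounding on the tax base quickly swamps the difference in revenue shares.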

Higher economic growth also results in a bigger permanent stock of capital, which can have wealth-enhancing effects many years into the future. Some Roman aqueducts built 2,000 years ago are still being used. Lower tax rates, particularly on capital, make it easier for entrepreneurs to raise capital for new ventures, some of which can have enormous productivity and other beneficial effects on society. Those beneficial innovations that were squashed by high taxes or destructive regulation are never seen. Without specifying the alternative policy regime and knowing the precise effects of it, as contrasted with the Reagan policies, Mr. Bartlett and the others cannot "prove" that the Reagan tax cuts did not pay for themselves in seven years.

Likewise, Mr. Blinder can only see the jobs that are created by direct government spending but not the jobs that are destroyed by it. When funds are spent on a less productive government job, there is diversion of productive resources along with the dead-weight loss of the taxes or borrowing needed to pay for it. Many studies show that most governments are larger than optimal and thus more government jobs lead to fewer total jobs. A study published in the British Economic Policy Journal concluded: "Empirical evidence from a sample of [Organization for Economic Co-operation and Development, OECD] countries in the 1960-2000 period suggests that, on average, creation of 100 public jobs may have eliminated 150 private sector jobs."

Finally, Mr. Blinder ignores the important fact that as government spending increases as a share of GDP, the percentage of adults employed in the civilian labor force decreases and vice versa.

There is little doubt that if much of the counterproductive government spending, regulation and taxation were removed, the U.S. economy could grow for many years at a 5 percent or higher rate. If higher taxes and more government spending were the keys to prosperity, it would be easy to see. The question is: What is unseen?

Public Versus Private Risk: Heed the Lesson from Greece

by Jagadeesh Gokhale

Greece's current debt problems were created by government officials who used underhanded financial mechanisms to hide the size of the nation's debt and exposure to risky assets from their foreign creditors. Once these problems were revealed, the Greek economy began an endless downward spiral.

In order to protect their own banks from exposure to debts in Greece and other fiscally struggling nations, eurozone governments created the European Financial Stability Facility to provide bailouts. But these payments are conditional on recipient nations adopting very stiff austerity policies.

The latest round of Greek austerity measures is meeting with violent resistance on the nation's streets. The next vote could support or sink the entire bailout enterprise, with potentially huge consequences for Europe's financial sector and the survival of the euro.

Jagadeesh Gokhale is a senior fellow at the Cato Institute, a member of the Social Security Advisory Board, and author of Social Security: A Fresh Look at Policy Options.

With the twin spiral of imploding private and public sectors, it's difficult not to sympathize with Greek demonstrators. This is a problem created not by them, but by their elected officials. Yet in its outcome, it is very similar to the situation that proponents of government service and insurance provision fear and wish to protect against: a steep and economywide recession from distant and uncontrollable economic shocks and market failures. We need to acknowledge the significant possibility of government failure and understand how Greece got there.

Greece owed $587 billion to foreign creditors as of December 2010. Of that, $280 billion was owed to foreign governments and $290 billion to foreign banks.

The country's debt problem has two features: insolvency of the government and insolvency of its banks. Bailouts from other Euro-zone countries and the International Monetary Fund can temporarily stave off government insolvency. Bank insolvency is a more difficult problem: As foreigners continually withdraw their investments from Greece, the nation's private banks are being forced to liquidate their assets, starving businesses of the credit they need to continue operating.

In a vicious cycle, worsening business prospects in an already uncompetitive economy are inducing additional capital flight from Greece. By the end of 2012, analysts expect Greek bank deposits to shrink by 40 percent compared with their amount in mid-June 2010. And eurozone members are demanding additional austerity policies to bail out the state: cutting payrolls, laying off government workers, reducing public services and scaling back the overly generous (even by EU standards) retirement and health benefits of public workers and retirees.

The weight of expert opinion now is that unless it is rescued, Greece's economy will implode. But periodic bailouts may prove to be yet another government failure, because bailouts work only when the recipient institution is economically viable but short on liquidity, not when it is insolvent, as the Greek state and its financial sector are.

Moreover, defaults on foreign private sector loans will adversely affect other eurozone nations, chiefly Germany, whose banks are exposed to Greek, Irish, Portuguese and Spanish debt; all those nations are also facing sovereign debt problems.

With the economy shrinking, it's difficult to predict when Greece will stabilize, but the timing is key because the capacity of eurozone nations to provide bailout funds is limited. That is when Europe's financial system and the euro will face their true test of survival.

Many proponents of broad government intervention and management of the economy, including comprehensive social insurance programs, insist those solutions are the only way to provide needed public services and to protect workers and other vulnerable segments of the population from brutish market forces. But the recent events in Greece suggest the government's ability to operate, manage, self-insure and protect the population can be just as tenuous as the frailty that liberal-leaning analysts attribute to the market.

Although the government can provide a qualitatively different source of risk management, it is not a fail-safe mechanism, and certainly not an unambiguous improvement over private sector mechanisms. The safer option would be to diversify between public and private sector mechanisms to protect the economically vulnerable.

Good-Bye Recession, Hello Slump

by Steve H. Hanke

The U.S. recession officially ended in June 2009. But a normal post-recession boom failed to materialize. Instead, an unwelcome slump ensued. Since the recession bowed out, the average annual GDP growth rate has been a paltry 1.6% — well below the long-run trend growth rate of 3.1%.

The economic policy prescriptions of the Obama administration — contrary to the President's oft-repeated assertions — have failed to mitigate the damage from the Panic of 2008-09. Rather, they have kept the patient in sick bay.

The first misguided advice was peddled by the fiscalists (Keynesians) who dominate the Washington, D.C. stage. According to them, increased government spending, accompanied by fiscal deficits, stimulates the economy. That dogma doesn't withstand factual verification.

Steve H. Hanke is a Professor of Applied Economics at The Johns Hopkins University in Baltimore and a Senior Fellow at the Cato Institute in Washington, D.C.

Nothing contradicts the fiscalists' dogma more conclusively than former President Clinton's massive fiscal squeeze. When President Clinton took office in 1993, government expenditures were 22.1% of GDP; by 2000, his last full year in office, the federal government's share of the economy had been squeezed to a low of 18.2% (see the accompanying chart and table). And that's not all. During the final three years of his second term, the federal government was generating fiscal surpluses. President Clinton was even confident enough to boldly claim in his January 1996 State of the Union address that "the era of big government is over."

President Clinton's squeeze didn't throw the economy into a slump, as Keynesianism would imply. No. President Clinton's Victorian fiscal virtues generated a significant confidence shock, and the economy boomed.

As for President Clinton's proclamation about the era of big government being over, he obviously hadn't anticipated the uncontrolled government spending that would accompany former President George W. Bush's eight years in office and the truly shocking two years' worth of government spending on President Obama's watch. All told, the George W. Bush and Obama administrations have added a whopping 5.6 percentage points to government spending as a proportion of GDP. Current federal government outlays are at 23.8% of GDP (see the accompanying chart and table). This is significantly above the long-run average of 20.1%.

The surge in government spending — coupled with President Obama's anti-market, anti-business and anti-bank rhetoric — does not inspire confidence. In consequence, the current U.S. fiscal stance has fueled a slump.

That said, it is important to stress what the fiscalists refuse to acknowledge: money dominates. When fiscal and monetary policies move in opposite directions, the direction taken by monetary policy will dictate the economy's course. During the Clinton era, fiscal policy was tight (confidence was "high") and monetary policy was accommodative. The economy boomed.

Since the Panic of 2008-09, fiscal policy has been ultra expansionary, while the growth in the money supply has fallen from a peak annual growth rate of over 15% to an annual rate of contraction of over 5% (see the accompanying chart). No surprise that the economy suffered a serious recession and then became mired in a slump. With the current anemic money supply growth rate, it looks like the slumping economy — something I first warned about in my August 2010 column "Money Dominates" — will, unfortunately, be with us for the foreseeable future.

What makes that gloomy prognosis more likely is the prospect for continued muted growth in the broad measure of the money supply, M3. To understand this, we must understand the implications of the so-called Basel III capital-asset standards for banks, which are set by the Bank for International Settlements in Basel, Switzerland — a bank that counts the U.S. and twenty-six other countries as members.

Basel III, among other things, will require banks in member countries to hold more capital against their assets than under the prevailing Basel II regime. While the higher capital-asset ratios that are required by Basel III are intended to strengthen banks (and economies), these higher ratios destroy money. In consequence, higher bank capital-asset ratios contain an impulse — one of weakness, not strength.

To demonstrate this, we only have to rely on a tried and true accounting identity: assets must equal liabilities. For a bank, its assets (cash, loans and securities) must equal its liabilities (capital, bonds and deposits, which the bank owes to its shareholders and customers). In most countries, the bulk of a bank's liabilities (roughly 90%) are deposits. Since deposits can be used to make payments, they are "money."

Accordingly, most bank liabilities are money.

Under the Basel III regime, banks will have to increase their capital-asset ratios. They can do this by either boosting capital or shrinking assets. If banks shrink their assets, their deposit liabilities will decline. In consequence, money balances will be destroyed. So, paradoxically, the drive to deleverage banks and to shrink their balance sheets, in the name of making banks safer, destroys money balances. This, in turn, dents company liquidity and asset prices. It also reduces spending relative to where it would have been without higher capital-asset ratios.

The other way to increase a bank's capital-asset ratio is by raising new capital. This, too, destroys money. When an investor purchases newly-issued bank equity, the investor exchanges funds from a bank deposit for new shares. This reduces deposit liabilities in the banking system and wipes out money.
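The arithmetic behind both routes can be made concrete with a hypothetical balance sheet (the numbers below are illustrative only, chosen so the identity is easy to follow): a bank holding $8 of capital against $100 of assets has a capital-asset ratio of 8%, and therefore $92 of deposit liabilities. Pushing the ratio to 10% by either route shrinks deposits, i.e., destroys money:

```python
# Hypothetical bank balance sheet, using the identity assets = liabilities
# and simplifying liabilities to capital plus deposits (no bonds).
ASSETS, CAPITAL = 100.0, 8.0
TARGET_PCT = 10                      # Basel III-style capital-asset ratio (%)
deposits = ASSETS - CAPITAL          # 92.0 of deposit money outstanding

# Route 1: hold capital fixed and shrink assets until capital/assets = 10%.
shrunk_assets = CAPITAL * 100 / TARGET_PCT                # 80.0
deposits_after_shrink = shrunk_assets - CAPITAL           # 72.0
destroyed_by_shrinking = deposits - deposits_after_shrink # 20.0

# Route 2: hold assets fixed and raise new equity; investors pay for the
# new shares out of their bank deposits, so deposits fall one-for-one.
new_capital = ASSETS * TARGET_PCT / 100                   # 10.0
deposits_after_raise = ASSETS - new_capital               # 90.0
destroyed_by_raising = deposits - deposits_after_raise    # 2.0

print(destroyed_by_shrinking, destroyed_by_raising)  # -> 20.0 2.0
```

Either way the deposit side of the balance sheet contracts; the asset-shrinking route is simply far more destructive of money per point of capital-asset ratio gained.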

As banks ramp up in anticipation of the introduction of Basel III in January 2013, we observe stagnation in the growth of broad money measures. And if that isn't bad enough, Federal Reserve Governor Daniel Tarullo has suggested that capital-asset ratios for some larger U.S. banks should be mandated at higher levels than those imposed by Basel III. Governor Tarullo's views appear to be widely shared by his colleagues at the Federal Reserve and most who inhabit the environs of Washington, D.C.

The suggestion of ultra-high capital-asset ratios for some banks will not go down without a fight, however. Indeed, Jamie Dimon, Chairman and CEO of JPMorgan Chase & Co., recently confronted Federal Reserve Chairman Ben S. Bernanke. Dimon argued that excessive bank regulation, including ultra-high capital-asset ratios, would put a damper on money supply growth and the U.S. economy. While Dimon might have been arguing JPMorgan's book, he was on the right side of economic principles and Chairman Bernanke was on the wrong side.

Banks in the eurozone come under the purview of Basel III. Like banks in the U.S., eurozone banks are shrinking their risk assets relative to their equity capital so that they can meet Basel III. Broad money in the euro area is barely growing, moving sideways (see the accompanying chart). And Greece, which is at the epicenter of Europe's current crisis, is facing a rapidly shrinking money supply. These money supply numbers will ultimately be the spike that is driven into the heart of the Greek economy and the false hopes of a peaceful resolution of Greece's fiscal woes. Greece will be yet another case in which money dominates.

In China, money matters, too. During the 1995-2005 period, when China fixed the yuan-U.S. dollar exchange rate at 8.28, China's overall inflation rate mirrored that of the U.S. and was relatively "low." Once China caved in to misguided pressure — notably from the U.S., France and international institutions, like the International Monetary Fund — and allowed the yuan-U.S. dollar exchange rate to wobble around, problems arose. The money supply growth rate surged in the wake of the Panic of 2008-09. And as night follows day, inflation has raised its ugly head in China. The monetary authorities are scrambling to cool down the inflationary pressures by slowing monetary growth — from almost 30% per annum to 15%.

The combination of Basel III (or Basel III, plus) and China's attempt to squeeze inflation out of the economy via tighter money leads to a less than encouraging money supply picture. Good-bye recession, hello slump.

Triumphalism Hides Many Important Foreign-Policy Failures

Nationalism in many countries prompts their governments to trumpet foreign-policy successes while sweeping disappointments under the rug. The inclination toward such biases may only be human nature, but democracies should also take the difficult step of heeding and analyzing the failures—that is, if they want to embrace truth and avoid the path to authoritarianism.

Because American nationalism is especially strong, the U.S. government regularly attempts to take maximum credit for events such as the fall of the communist bloc and the killing of terrorist Osama bin Laden—while forgetting about profligate blunders that have made America and its citizens less secure, a failure in the most important function of government.

Communism did fall, but in such societal revolutions, domestic factors usually overwhelm external influences. Although Reaganophiles give that president almost sole credit for toppling the communist bloc, an unviable economic system is what ultimately brought down the Soviet Union and its communist allies.

As for killing Osama bin Laden, it took the gold-plated U.S. intelligence community, which probably spends as much on intelligence as the rest of the world combined, a decade and a half to neutralize him. Moreover, the CIA’s greatest triumph has been its greatest failure; its encouragement and funding of radical Islam, including the Afghan “freedom fighters,” as a counterweight to Soviet communism helped create al-Qaeda in the first place. Worse, the U.S. government’s unneeded meddling and military presence in the Islamic world motivated bin Laden to attack the United States and continue to fuel Islamists’ anti-American attacks—for example, the nation-building occupations in Afghanistan and Iraq inflame Islamic jihadists worldwide. Because many anti-American jihadists in Iraq came from eastern Libya, American provision of the air force for the Libyan rebels may replicate the unintended threat-creation experience of U.S. aid to Islamists in Afghanistan. And getting rid of Moammar Gadhafi—whom Ronald Reagan originally demonized and attacked but who had more recently given up his nuclear-weapons program and made nice with the West—won’t enhance U.S. security very much.

But such reckless behavior should not be surprising. The U.S. government has made or strengthened enemies before. Recent examples are in Somalia, Lebanon, Pakistan, Yemen, Iraq, and Iran. In Somalia, the U.S. government recently trumpeted the killing of Fazul Abdullah Mohammed, a leader in both the Somali Islamist Shabaab movement and al-Qaeda. Yet the Islamists had little support in the moderate Islamic country of Somalia until the U.S. government began supporting corrupt, violent warlords there. And Somali support for Shabaab against foreign influence really spiked during the catastrophic U.S.-backed Ethiopian invasion of Somalia in 2006. The Bush administration’s encouragement—with weapons and advisers—of a Christian-led Ethiopian government’s invasion of a Muslim country further stoked the fires of radical Islam, coming in the wake of the post-9/11 U.S. invasions of Muslim Afghanistan and Iraq.

In Lebanon, the Shi’ite Islamist group Hezbollah—which formed during the 1982 invasion of Lebanon by U.S.-backed Israel and which enhanced its reputation by throwing Reagan’s forces out of Lebanon soon thereafter—now dominates the Lebanese government. This result was made possible by Israel’s second invasion of Lebanon in 2006, which again enhanced Hezbollah’s reputation by demonstrating its ability to withstand an attack by a stronger power.

In Pakistan, the Pakistani Taliban did not try to attack targets in the United States—the attempted Times Square bombing—until the United States began to kill Pakistani Taliban fighters with drone attacks in Pakistan. Similarly, Islamist militants in Yemen did not try to attack targets in the United States until our government escalated military involvement in Yemen.

In Iraq, the United States helped bring Saddam Hussein to power, made him the dominant power in the Persian Gulf by supporting him in his successful war with Iran, and then demonized him and fought two wars against him.

Finally, creating an enemy in Iran goes way back to 1953, when the CIA helped overthrow the elected anti-communist government of Mohammed Mossadegh because he had nationalized British oil interests. The United States restored the autocratic shah, who allowed U.S. companies to have some of Iran’s oil, oppressed his people with the secret police, and spent too many of the country’s resources on U.S.-made weapons and not enough on economic development. He was overthrown by a Shi’ite Islamist regime, which has always been predictably hostile to the United States.

Throw in the expensive and pointless Korean and Vietnam Wars, the reckless and failed invasion of Cuba at the Bay of Pigs, and the resulting near incineration of the world during the Cuban Missile Crisis for no American strategic gain, and post-World War II U.S. foreign policy doesn’t look so successful after all.

Like Nixon, Obama Will Waste Lives to Get Reelected

No one needs to tell the public that politicians are slick — and the ones who get elected are the oiliest. President Obama, in a recent speech announcing the phased withdrawal of 33,000 U.S. surge forces from Afghanistan by September 2012, told the country that the United States had largely achieved its goals in Afghanistan and that “we are starting this drawdown from a position of strength.” The public could be forgiven for missing the real message: “We’ve lost the war, but we are declaring victory anyway and getting out.”

The reality of withdrawing 33,000 of about 100,000 troops in that country is that the president’s “counterinsurgency” strategy — the U.S. clearing areas of Taliban forces until “good government” can take hold and the Afghan forces are competent enough to take over — has failed. The strategy was designed to achieve battlefield gains that would not eradicate the Taliban but cause the group to come to the negotiating table. Although the Taliban is negotiating, it is not doing so seriously because it knows it is winning the war. If it were losing, more Taliban would be defecting to the Afghan government; so far, only 1,700 out of between 25,000 and 40,000 insurgents have done so.

Superior U.S. forces have cleared some areas of the southern provinces of Helmand and Kandahar, traditionally Taliban strongholds, but they only have an illegitimate, corrupt Afghan government and incompetent Afghan security forces to hand them over to. Yet it is still nearly impossible to drive safely from the capital of Kabul to Kandahar. Furthermore, the Taliban merely lies low in those two provinces until the U.S. leaves, or moves to other parts of the country where American forces are much sparser. The Taliban in eastern Afghanistan — which has more links to al-Qaeda than the southern Taliban but has enjoyed less U.S. attention — can withdraw to sanctuaries in Pakistan. The U.S. and NATO have never had enough forces in Afghanistan to run an effective counterinsurgency strategy. And if the insurgents are not losing, they are winning. Time is on their side, because it’s their country and they can simply outwait the United States, which the insurgents know will eventually withdraw.

Since, according to counterinsurgency expert William R. Polk, guerrilla warfare is 80 percent political, 15 percent administrative, and only 5 percent military, the U.S.-sponsored corrupt and illegitimate Afghan government is a major albatross around America’s neck. Also, even after Afghan security forces have been trained for almost a decade, they are incapable of securing Afghanistan on their own.

Yet if there hasn’t been a terrorist threat from Afghanistan for seven to eight years, as the Obama administration maintains, then why did we need the surge and 18-month counterinsurgency strategy in the first place, and why can’t troops come home faster? The answer is that the withdrawal timetable is not based on military considerations but on electoral politics.

Those 33,000 troops, instead of facing the Taliban during the next fighting season, already will have been withdrawn or will be packing to leave Afghanistan by September 2012. Thus, with an eye toward the November 2012 presidential election, Obama can say that the surge is over, that it was a success, and that all surge forces have been withdrawn. But if the withdrawal timetable is political, why not claim the same victory and remove all 100,000 U.S. troops to satisfy a war-weary public?

Richard Nixon faced the same dilemma presiding over the lost Vietnam War. In 1971, he wanted to withdraw U.S. forces from South Vietnam until Henry Kissinger reminded him that the place would likely fall apart in 1972, the year Nixon was up for reelection. To avoid this scenario, Nixon unconscionably delayed a peace settlement until 1973, thus trading more wasted American lives for his reelection.

Obama appears to be up to the same thing. A phased withdrawal of 33,000 U.S. troops before the election will push back at Republican candidates’ demands for more rapid withdrawal and signal to the conflict-fatigued American public that he is solving the problem, while leaving 70,000 forces to make sure the country doesn’t collapse before that election. Again, American lives will be needlessly lost so that a slick politician can look his best at election time.

It’s ‘Kinetic,’ So Don’t Get Frenetic

Throw away your dictionary – we’re not at war in Libya

Explaining the Obama administration’s rationale for violating the War Powers Act by not asking Congress for authorization to attack Libya, the White House claims that what’s going on in Libya isn’t war, it’s a “kinetic military action.” This set off such a round of guffaws – even from Libya war supporters in the Democratic congressional caucus – that the administration felt compelled to send a government lawyer to Congress to elaborate on this exercise in Doublespeak. Harold Koh, the State Department’s lawyer-in-chief, explained to the Senate Foreign Relations Committee that since there was no back-and-forth firing between American and Libyan forces, the Libyan intervention isn’t a real war – and therefore the President is not in violation of the War Powers Act. (No word yet on whether he’s in violation of the Constitution, which gives Congress, and not the President, the power to declare war.)

This, by the way, is the same Harold Hongju Koh who once authored a legal brief challenging George Herbert Walker Bush’s authority to fight the first Iraq war, on the grounds that “the Constitution requires the president to consult with Congress and receive its affirmative authorization – not merely present it with faits accomplis – before engaging in war.”

Oh, but this isn’t a war – didn’t you hear me the first time? As Koh explained to the befuddled solons in his opening statement: the word “hostilities,” which “triggers” the 60-day time line imposed by the War Powers Act, is “an ambiguous term of art.” Translation: it can mean anything anyone wants it to mean – especially if that anyone is a sitting Democratic president. After all, Koh argued, the word wasn’t defined in the legislation, and there is no legislative precedent that would define it for us. Oh, and put down that dictionary – we don’t use them in ObamaWorld, which is in the same galaxy as Bizarro World. Instead, we must stick to “historical practice.”

It is precisely “historical practice” that argues against Koh’s Orwellian linguistics, because never in the history of the world has anyone ever argued that bombing and killing citizens of a foreign country isn’t war plain and simple – not even the Soviets, who were masters of Doublespeak. That didn’t deter our State Department’s legal eagle from defending the indefensible: after all, this administration is all about “change” – and yet they didn’t tell us they were changing the language and the clear meaning of words.

According to Koh, there are four factors that qualify the Libyan adventure as a “kinetic action” rather than a war, the first being that the action has “international support,” and – due to its multilateral character – transcends the need for congressional approval. That is the view taken by his boss, Hillary Clinton, who stated that the only authorization needed came from the United Nations. Koh echoed Hillary again when he said that even if the Senators disagreed with the administration’s position on the issue of authorization, they should support the Libyan “kinetic action,” because congressional opposition only “serves Gadhafi’s interests.” A less dramatic way of saying, as Hillary did, “Whose side are you on?”, but just as offensive.

Furthermore, argued Koh, this “kinetic action” was launched in pursuit of “limited goals,” i.e. protecting Libyan civilians by preventing an alleged impending “massacre” (as administration spokesmen put it). Yet this is another brazen falsehood, because the goals of the NATO alliance have changed – and with record rapidity.

You’ll recall it was only a few months ago that the pro-war pundits and their friends in the White House were telling us that “regime change” was not on the agenda, that it would be “a matter of days, not weeks,” and that the whole idea was to prevent the Mad Dog Dictator from slaughtering as many as 100,000 of his political enemies. In a matter of weeks, all three of the NATO principals – Obama, British Prime Minister David Cameron, and French President Nicolas Sarkozy – published a jointly-authored op ed piece openly acknowledging that the goal had changed, and the allies were now going for regime change.

There is nothing limited about America’s war on Libya: Washington’s war aims are as unlimited as their ambition. Libya is just the beginning. Wait until they go into Sudan, again on “humanitarian” grounds.

In any case, whatever “limited” objectives this administration is currently pursuing in North Africa – or anywhere else, for that matter – you can be sure it’s in the service of a much larger objective: ensuring US domination of the region. Given the current circumstances, in which American-supported dictators in the Middle East and North Africa are being kicked out of power left and right, the only way Washington can accomplish this is through war. But please – don’t call it that.

Another argument made by Koh is that, since there is little or even no danger of incurring casualties – US planes are bombing from heights unreachable by the ramshackle Libyan air defenses – this action doesn’t meet the definition of a war. There have been no deaths on the US side, nor are any likely to occur, said Koh – but what about the Libyans? In particular, what about those civilians we keep “mistakenly” killing? Apparently, only the number of American deaths enters into Koh’s calculations.

Oh, and did I tell you Senor Koh is noted as a great defender of “human rights”? Indeed, he served as U.S. Assistant Secretary of State for Democracy, Human Rights and Labor in the Clinton administration.

In today’s world, it is entirely possible – indeed, probable – that a “human rights” champion of renown would argue in favor of a military action on the grounds that the enemy is completely at our mercy, and unable to mount an effective defense. That’s what we mean by “human rights” in ObamaWorld.

Koh’s third point was that US military action in Libya is unlikely to escalate, because a ground presence has been ruled out in advance. Yet that is not what we’ve heard from our European allies, particularly the French, who have consistently pushed for an all-out invasion. Furthermore, how do we know there are no US troops on the ground – because the US government says so?

Well, I guess it all depends on how one defines “troops” – we’re back to playing word games, you’ll note – because the CIA is almost certainly “on the ground” in Libya, along with their British and French equivalents. What if one or more of these spooks are captured, and subjected to torture and/or public display? What if one of those US pilots crashes, and is captured? This is almost certain to result in an attempted rescue operation, and that will in itself represent a significant escalation of the conflict. Such a scenario would fatally undermine Koh’s fourth point, made in testimony to the Senate committee, that the US is utilizing limited means to achieve its limited objectives.

Koh, an advocate of “transnational” law, is not only an enemy of Libyan sovereignty, he’s also an enemy of US sovereignty: we don’t need congressional authorization to commence “kinetic” actions, according to Koh and his fellow transnationalists, because “international law” precedes – and overrides – the US Constitution.

To Obama and his minions, the Constitution is an obstacle to be ignored, where possible, and “reinterpreted” when necessary. During his presidency, the US military is the instrument of a militant internationalism, one that murders civilians in the cause of “human rights” and seeks to spread “democracy” abroad even while ignoring basic democratic precepts on the home front.

This administration, armed with an ideology so far removed from American traditions and sheer common sense, is far more dangerous than its war-maddened predecessor. At least Bush spared us the verbal gymnastics and never denied he intended to take us to war. The current occupant of the Oval Office wants us to consider him a modern Gandhi while besting Bush at his own game. The pretentious doubletalk engaged in by this White House is an insult to the American people, and yet another measure of Obama’s monumental arrogance.

Idolizing Absolute Power

By JAMES BOVARD

The Christian Science Monitor published a piece I wrote last month opposing allowing the U.S. government to kill Americans without a warrant, trial, or any judicial niceties. The article, “Assassination Nation: Are there any limits on President Obama's license to kill?,” spurred a torrent of feedback on Yahoo.com that vividly illustrates how some Americans now view absolute power.

Some folks believed that opposing “extrajudicial killings” should be a capital offense. My article mentioned an American Civil Liberties Union lawsuit pressuring the Obama administration “to disclose the legal standard it uses to place U.S. citizens on government kill lists.” “Will R.” was indignant: “We need to send Bovard and the ACLU to Iran. You shoot traders and the ACLU are a bunch of traders.” (I’m not aware that the ACLU is engaged in either interstate or international commerce). “Jeff” took the high ground: “Hopefully there will soon be enough to add James Bovard to the [targeted killing] list.” Another commenter – self-labeled as “Idiot Savant” – saw a grand opportunity: “Now if we can only convince [Obama] to use this [assassination] authority on the media, who have done more harm than any single terror target could ever dream of...”

Many folks feared that any restrictions on U.S. government killing could be fatal. As “Rogmac” groused: “You guys who are against killing these guys are going to be the death of all of us.” Other commenters started from the self-evident truth that, as “Bert” declared, “In the best interest of the United Sates and it's citizen's, someone has to be the judge, jury and executioner.” This theory of government differs significantly from that proffered in the Federalist Papers. “Rich” was sure everything had been done properly: “The warrants have already been signed, the execution orders have all been approved now we just need to find them and eradicate them.” Having a president approve his own execution orders is more efficient than the procedures used by the U.S. government in earlier times. “Coder Cable” joined the pro-power parade: “In a time of war, the military (ie: President) is allowed to execute anyone for the crime of treason, assuming there is strong evidence to backup the claim.”

This was practically the only pro-assassination comment that referred to a standard of evidence. The question of whether government officials can be trusted to arbitrarily label Americans as enemies did not arise. Instead, most commenters favored “faith-based killings,” blindly accepting the assertions of any political appointee as the ultimate evidence. “Dark Ruby Moon” wrote: “I won't loose a minutes sleep over these people being eliminated.... One of the reasons presidential elections are so important is we are picking someone who must make such difficult decisions and who is in the end accountable for those decisions.” Perhaps future presidential races will feature campaign promises such as “Vote for Smith - he won’t have you killed unless all his top advisers agree you deserve to die”?

Commenter “FU” played the race card: “James bovard, I don't think the killing started with Obama but I wonder if you would write the same article if the cowboy from Texas was pulling the trigger? Or is it that you are angry because the existence of plantations run with blacks are done in this country and Obama managed to become president? We would all be better off if bigots like you stopped writing crap.” Bigotry is the only reason to oppose permitting a black president to kill Americans of all races and ethnicities.

For “Rocketman1945,” the fact that I opposed unlimited presidential power proved I was a foreigner: “WOW! You can sure tell what side of the political spectrum this article came from. Not one word of support for the currant American President. Who are these people that write this drivel? Not Americans that's for sure.”

The newspaper won few fans on Yahoo for publishing that piece. “Zaria” said it was no surprise that an article that was “all nonesense” came from the Monitor. “Nomadd” denounced the Monitor as a “socialist rag” that should be “put in supermarket checkout lines.” Perhaps “Nomadd” assumed that only left-wingers had anything to fear from this new power. (I never saw socialist rags in grocery checkout lines, except maybe at the Boston Food Co-op).

Unfortunately, the primary difference between some assassination advocates and Washington apologists for targeted killing is that the latter use spellcheckers. For both groups, “due process” is an anachronism - if not a terrorist ploy. And for both groups, boundless groveling to the Commander-in-Chief is the new trademark of a good American. Anything less is national suicide.

James Bovard is a policy advisor for The Future of Freedom Foundation and is the author of Attention Deficit Democracy, The Bush Betrayal, Terrorism and Tyranny, and other books.

A World Overwhelmed by Western Hypocrisy

By PAUL CRAIG ROBERTS

Western institutions have become caricatures of hypocrisy.

The International Monetary Fund and the European Central Bank are violating their charters in order to bail out French, German, and Dutch private banks. The IMF is only empowered to make balance of payments loans, but is lending to the Greek government for prohibited budgetary reasons in order that the Greek government can pay the banks. The ECB is prohibited from bailing out member country governments, but is doing so anyway in order that the banks can be paid. The German parliament approved the bailout, which violates provisions of the European Treaty and Germany’s own Basic Law. The case is in the German Constitutional Court, a fact unreported in the US media.

US president George W. Bush’s designated lawyer ruled that the president has “unitary powers” that elevate him above statutory US law, treaties, and international law. According to this lawyer’s legal decisions, the “unitary executive” can violate with impunity the Foreign Intelligence Surveillance Act, which prevents spying on Americans without warrants obtained from the FISA Court. Bush’s man also ruled that Bush could violate with impunity the statutory US laws against torture as well as the Geneva Conventions. In other words, the fictional “unitary powers” make the president into a Caesar.

Constitutional protections, such as habeas corpus, which prohibit government from holding people indefinitely without presenting charges and evidence to a court, and which prohibit government from denying detained people due process of law and access to an attorney, were thrown out the window by the US Department of Justice, and the federal courts went along with most of it.

As did Congress, “the people’s representatives”. Congress even enacted the Military Commissions Act of 2006, signed by the White House Brownshirt on October 17.

This act allows anyone alleged to be an “unlawful enemy combatant” to be sentenced to death on the basis of secret and hearsay evidence not presented in the kangaroo military court placed out of reach of US federal courts. The crazed nazis in Congress who supported this total destruction of Anglo-American law masqueraded as “patriots in the war against terrorism.”

The act designates anyone accused by the US, without evidence being presented, as being part of the Taliban, al-Qaeda, or “associated forces” to be an “unlawful enemy combatant,” which strips the person of the protection of law.

The Taliban consists of indigenous Afghan peoples, who, prior to the US military intervention, were fighting to unify the country. The Taliban are Islamist, and the US government fears another Islamist government, like the one in Iran that was blowback from US intervention in Iran’s internal affairs. The “freedom and democracy” Americans overthrew an elected Iranian leader and imposed a tyrant. American-Iranian relations have never recovered from the tyranny that Washington imposed on Iranians.

Washington is opposed to any government whose leaders cannot be purchased to perform as Washington’s puppets. This is why George W. Bush’s regime invaded Afghanistan, why Washington overthrew Saddam Hussein, and why Washington wants to overthrow Libya, Syria, and Iran.

Barack Obama inherited the Afghan war, which has lasted longer than World War II with no victory in sight. Instead of keeping his election promises and ending the fruitless war, Obama intensified it with a “surge.”

The war is now ten years old, and the Taliban control more of the country than does the US and its NATO puppets. Frustrated by their failure, the Americans and their NATO puppets increasingly murder women, children, village elders, Afghan police, and aid workers.

A video taken by a US helicopter gunship, leaked to Wikileaks and released, shows American forces, as if they were playing video games, slaughtering civilians, including cameramen for a prominent news service, as they are walking down a peaceful street. A father with small children, who stopped to help the dying victims of American soldiers’ fun and games, was also blown away, as were his children. The American voices on the video blame the children’s demise on the father for bringing kids into a “war zone.” It was no war zone, just a quiet city street with civilians walking along.

The video documents American crimes against humanity as powerfully as any evidence used against the Nazis in the aftermath of World War II at the Nuremberg Trials.

Perhaps the height of lawlessness was attained when the Obama regime announced that it had a list of American citizens who would be assassinated without due process of law.

One would think that if law any longer had any meaning in Western civilization, George W. Bush, Dick Cheney, indeed, the entire Bush/Cheney regime, as well as Tony Blair and Bush’s other co-conspirators, would be standing before the International Criminal Court.

Yet it is Gadaffi for whom the International Criminal Court has issued arrest warrants. Western powers are using the International Criminal Court, which is supposed to serve justice, for self-interested reasons that are unjust.

What is Gadaffi’s crime? His crime is that he is attempting to prevent Libya from being overthrown by a US-supported, and perhaps organized, armed uprising in Eastern Libya that is being used to evict China from its oil investments in Eastern Libya.

Libya is the first armed revolt in the so-called “Arab Spring.” Reports have made it clear that there is nothing “democratic” about the revolt.

The West managed to push a “no-fly” resolution through its puppet organization, the United Nations. The resolution was limited to neutralizing Gadaffi’s air force. However, Washington, and its French puppet, Sarkozy, quickly made an “expansive interpretation” of the UN resolution and turned it into authorization to become directly involved in the war.

Gadaffi has resisted the armed rebellion against the state of Libya, which is the normal response of a government to rebellion. The US would respond the same as would the UK and France. But by trying to prevent the overthrow of his country and his country from becoming another American puppet state, Gadaffi has been indicted. The International Criminal Court knows that it cannot indict the real perpetrators of crimes against humanity--Bush, Blair, Obama, and Sarkozy--but the court needs cases and accepts the victims that the West succeeds in demonizing.

In our times, everyone who resists or even criticizes the US is a criminal. For example, Washington considers Julian Assange and Bradley Manning to be criminals, because they made information available that exposed crimes committed by the US government. Anyone who even disagrees with Washington is considered to be a “threat,” and Obama can have such “threats” assassinated or arrested as a “terrorist suspect” or as someone “providing aid and comfort to terrorists.” American conservatives and liberals, who once supported the US Constitution, are all in favor of shredding the Constitution in the interest of being “safe from terrorists.” They even accept such intrusions as porno-scans and sexual groping in order to be “safe” on air flights.

The collapse of law is across the board. The Supreme Court decided that it is “free speech” for America to be ruled by corporations, not by law and certainly not by the people. On June 27, the US Supreme Court advanced the fascist state that the “conservative” court is creating with the ruling that Arizona cannot publicly fund election candidates in order to level the playing field currently unbalanced by corporate money. The “conservative” US Supreme Court considers public funding of candidates to be unconstitutional, but not the “free speech” funding by business interests who purchase the government in order to rule the country. The US Supreme Court has become a corporate functionary and legitimizes rule by corporations. Mussolini called this rule, imposed on Americans by the US Supreme Court, fascism.

The Supreme Court also ruled on June 27 that California violated the US Constitution by banning the sale of violent video games to kids, despite evidence that the violent games trained the young to violent behavior. It is fine with the Supreme Court for soldiers, whose lives are on the line, to be prohibited under penalty of law from drinking beer before they are 21, but the idiot Court supports inculcating kids to be murderers, as long as it is in the interest of corporate profits, in the name of “free speech.”

Amazing, isn’t it, that a court so concerned with “free speech” has not protected American war protesters from unconstitutional searches and arrests, or protected protesters from being attacked by police or herded into fenced-in areas distant from the object of protest.

As the second decade of the 21st century opens, those who oppose US hegemony and the evil that emanates from Washington risk being declared to be “terrorists.” If they are American citizens, they can be assassinated. If they are foreign leaders, their country can be invaded. When captured, they can be executed, like Saddam Hussein, or sent off to the ICC, like the hapless Serbs, who tried to defend their country from being dismantled by the Americans.

And the American sheeple think that they have “freedom and democracy.”

Washington relies on fear to cover up its crimes. A majority of Americans now fear and hate Muslims, peoples about whom Americans know nothing but the racist propaganda which encourages Americans to believe that Muslims are hiding under their beds in order to murder them in their sleep.

The neoconservatives, of course, are the purveyors of fear. The more fearful the sheeple, the more they seek safety in the neocon police state and the more they overlook Washington’s crimes of aggression against Muslims.

Safety uber alles. That has become the motto of a once free and independent American people, who once were admired but today are despised.

In America lawlessness is now complete. Women can have abortions, but if they have stillbirths, they are arrested for murder.

Americans are such a terrified and abused people that a 95-year-old woman dying from leukemia, traveling to a last reunion with family members, was forced to remove her adult diaper in order to clear airport security. Only a population totally cowed would permit such abuses of human dignity.

In a June 27 interview on National Public Radio, Ban Ki-moon, Washington’s South Korean puppet installed as the Secretary General of the United Nations, was unable to answer why the UN and the US tolerate the slaughter of unarmed civilians in Bahrain, but support the International Criminal Court’s indictment of Gadaffi for defending Libya against armed rebellion. Gadaffi has killed far fewer people than the US, UK, or the Saudis in Bahrain. Indeed, NATO and the Americans have killed more Libyans than has Gadaffi. The difference is that the US has a naval base in Bahrain, but not in Libya.

There is nothing left of the American character. Only a people who have lost their soul could tolerate the evil that emanates from Washington.

Paul Craig Roberts was an editor of the Wall Street Journal and an Assistant Secretary of the U.S. Treasury. His latest book, HOW THE ECONOMY WAS LOST, has just been published by CounterPunch/AK Press. He can be reached at: PaulCraigRoberts@yahoo.com
