Tuesday, January 25, 2011

Obama’s regulatory reform test

Only follow-through will prove the president means business

Assume your government job is to write regulations requiring bicycle manufacturers to make safer bicycles. You know two things. First, if you say bicycles are being made about as safely as they can be, you will no longer be needed; hence, no job. Second, you know there were no U.S. commercial airline fatalities in 2010 (an amazing and true fact), while about 1,000 people died in bicycle accidents that year. Thus, as long as you argue that riding a bicycle should be made as safe as flying in an airplane and that tougher regulations on bicycle manufacturers could make bike-riding safer, you can keep your job.

President Obama jumped on the regulatory-reform bandwagon last week after two years of greatly expanding costly regulations and reducing personal liberty, particularly on health care and financial services. I confidently predict his new initiative will be a failure. History has shown that the vested interest of the regulators in job preservation and expansion almost always swamps efforts at regulatory reform.

Mr. Obama said, in essence, that the benefits of regulations should exceed the costs, which every president at least as far back as Jimmy Carter has also said. President Reagan made the most serious attempt to rein in the regulatory monster by staffing his administration with many talented and committed deregulators, but even they were often frustrated by the regulatory bureaucrats and Congress. We will now have a test of whether Mr. Obama is serious and will seek to carry out his own words.

The Obama Environmental Protection Agency (EPA) has ruled that carbon dioxide is a pollutant and, as a result, has been holding up the permitting of new power and manufacturing plants. If this continues, it will cause a significant drop in U.S. economic growth and job creation, yet it will have no measurable benefit. China, India and many other countries are rapidly increasing CO2 emissions, overwhelming whatever actions the United States may take. Even if all new CO2 emissions were stopped globally, it would be decades before there would be even a minor effect on global temperatures. Now, new research is indicating that sunspot activity is much more important than CO2 when it comes to influencing the earth’s temperature. The EPA ban is nothing more than national economic suicide. Let us see if Mr. Obama has the courage to tell the EPA to stop.

The Internal Revenue Service (IRS) has just issued a proposed regulation that would impose an enormous cost on the U.S. economy with no benefit. Specifically, it demands that U.S. banks report the interest they pay to foreign nationals so that it can be shared with those depositors' home governments. The U.S. long ago decided not to tax interest earned by foreign investors in order to attract their money. Well-qualified, independent economists have estimated this will cost the United States roughly $100 billion a year in lost foreign investment and many thousands of jobs. This will make foreign tax collectors happy, even in corrupt countries, at the expense of U.S. jobs. If the IRS does not immediately withdraw this proposed regulation, it will show that it pays no attention to Mr. Obama's words or does not care what he says.

If Mr. Obama is serious about regulatory reform, he will immediately instruct the EPA and the IRS to drop their no-benefit, job-killing proposals. If these proposals are still hanging out there a month from now, that will reveal that he is all talk and no action.

The Securities and Exchange Commission (SEC), an agency with a long record of destructive incompetence (remember the many warnings about Bernie Madoff?), has created such burdensome regulations on new public stock offerings that few companies can now afford the cost of going public. The SEC is also off on a tangent of creating wild new theories of insider trading. This nonsense is making it difficult for officers and directors of companies to do their basic jobs of business development and corporate governance. Serious scholars of insider trading, notably Henry G. Manne, dean emeritus of the George Mason University Law School, have rightly concluded that the insider-trading regulations deny timely and important information to market participants, thus causing more harm than benefit. Unlike the SEC bureaucrats, Mr. Manne and other serious critics of the SEC have no vested interest in more nonproductive regulation.

New regulation is often proposed under the guise of consumer protection. However, consumers are well-protected under our tort system, which makes it costly for firms to cheat or injure their customers. Both airplane and bicycle manufacturers understand better than any government bureaucrat that if their products end up killing the people who use them, it is not good for business or their pocketbooks. Yet the bureaucrats at the SEC and the IRS are engaged in the ultimate conflict of interest because it is much easier to be promoted and retain their jobs if their agencies are growing. Hence, the production of more regulations becomes an end in itself. And to the extent that the regulations are vague and incomprehensible, it only means more work for the regulators.

To reduce this inherent conflict of interest, those who are asked to write new regulations should be independent contractors or temporary employees. And every proposed regulation, no matter how small, should be accompanied by an independent cost-benefit analysis that is open to challenge by any interested party.

Richard W. Rahn is a senior fellow at the Cato Institute and chairman of the Institute for Global Economic Growth.

'Capitalist' contradictions

GE hire no course-change for O

Charles Gasparino

The naming of General Electric CEO Jeff Immelt to head the new Council on Jobs and Competitiveness is supposed to show the country that President Obama really is serious about dealing with the nation's economic woes through the free market system, rather than the government programs and handouts that characterized his first two years in office.

Except it doesn't.

Sure, Immelt brings a lot of business experience to the post -- he's spent many years in the trenches of one of the world's biggest companies, and his last 10 as its CEO. Problem is, this isn't necessarily the kind of experience needed to deal with "jobs and competitiveness."

Tour de farce: President Obama and GE CEO Jeffrey Immelt visiting the General Electric Plant in Schenectady last Friday.

Rather, Immelt's selection underscores the inherent flaw in Obamanomics: It favors the crony capitalists at the banks and large corporations that feed off government bailouts and contracts at the expense of entrepreneurs and small businesses, who in the past have pulled the nation's economy out of recession and created jobs.

GE has seen some of its darkest days under Immelt. Since he replaced the iconic Jack Welch as head of the manufacturing/financial services conglomerate, the stock has fallen significantly, despite a more recent recovery.

Some of that can be attributed to timing; Immelt took over just before 9/11 and inherited a flailing stock price that never quite made it back to the highs achieved under Welch (highs undoubtedly bolstered by the late-'90s stock-market bubble).

But in the '08-'09 financial crisis, Immelt's GE was nearly decimated as its financial-services unit threatened to bring down the company. The once-mighty firm had to fall in line with the rest of the financial industry and accept a federal bailout.

When Obama took over, with his lofty "social justice" and environmentalist goals, a beaten and bruised GE became one of his best corporate partners -- advocating policies that would make the US economy more like Europe's and supporting the talk of "green jobs" miracles and the wisdom of taxing energy use.

It paid off for GE, in the form of huge government contracts and other subsidies -- which, coupled with the bailout guarantees, helped the company survive and then thrive.

Despite all of this, friends of Immelt tell me he's a registered Republican who voted for John McCain in 2008, a staunch free-marketeer who wants to move Obama away from redistributionist policies.

One person close to Immelt told me he has plans to transform Obama from "community-organizer-in-chief" into the country's "chief marketing officer," selling the US economic brand across the globe.

Thing is, Immelt has been a fixture at the anything-but-free-market White House all along. Since early 2009, he's served on the president's Economic Recovery Advisory Board, chaired by the recently retired Paul Volcker, and said not a negative word publicly about the president's policies, from the $800 billion "stimulus" program to the absurd push to socialize health care when the economy was bleeding jobs. How is promoting Immelt to head of the (renamed) board supposed to change anything?

Plus, he's famous for having remarked "We're all Democrats now" after it was disclosed that GE, thanks to his close ties to the administration, was feasting off government contracts.

Back when GE was majority owner of CNBC (and I worked for the network), Immelt showed his allegiance by calling a meeting of top network talent to discuss whether coverage of the administration's left-leaning economic policies was too negative. People who were present told me that Immelt didn't wind up overtly pressing for any change in coverage; he didn't have to, because his message was clear.

The president's message is clear, too. Crony capitalists like Immelt -- or William Daley, the new White House chief-of-staff and a former top executive at JP Morgan (another bailout winner) -- will be calling the shots.

The White House won't say what rules it has to prevent Immelt from using his appointment to get even more business for GE. But Immelt's spokesman says unabashedly that the CEO has no plans to back away from GE's government-serving business model and will continue to lobby the administration he's now a part of for business "as long as he's transparent" -- while adding that Immelt "doesn't intend" to use his role on the council for business purposes.

All of which might be good for Immelt's GE -- but it's hardly the cure for the joblessness and other economic problems facing the country.

For that, the president might seek advice from someone who actually created something -- rather than a guy whose claim to fame is knowing his way around the White House.

Charles Gasparino is a Fox Business Network senior correspondent and the author of "Bought and Paid For."

How Government Failure Caused the Great Recession

The interaction of six government policies explains the timing, severity, and global impact of the financial crisis.

Today we see how utterly mistaken was the Milton Friedman notion that a market system can regulate itself. We see how silly the Ronald Reagan slogan was that government is the problem, not the solution . . . I wish Friedman were still alive so he could witness how his extremism led to the defeat of his own ideas.

— Economist Paul Samuelson (January 2009)

The people on Wall Street still don't get it. They're still puzzled—why is it that people are mad at the banks? Well, let's see. You know, you guys are drawing down 10, 20 million dollar bonuses after America went through the worst economic year that it's gone through in decades, and you guys caused the problem.

— President Barack Obama (December 2009)

The banking crisis that began in August 2007 shocked markets and precipitated the Great Recession. To fully explain the banking crisis, one must account for its timing, severity, and global impact. One must also confront a startling historical contrast. If we define “banking crisis” to mean bank failures and system losses exceeding 1 percent of a country’s gross domestic product (GDP), we find that in 1875-1913, an era of marked expansion in international trade and capital flows comparable to the last three decades, there were only four banking crises worldwide.1 By contrast, in 1978-2009, a period of much more extensive bank regulation, central bank intervention, government protection of depositors and other bank creditors, and government control of mortgage markets, about 140 banking crises occurred worldwide. Of these, 20 were more severe than any crisis from the earlier period, in terms of total bank losses as a percent of GDP.

Leading financial economists such as Charles Calomiris have argued that a necessary condition for a banking crisis is government policy that distorts the micro-incentives of banks. Likewise, University of Chicago scholar Richard Posner has argued the banks that got into trouble during the recent crisis were simply taking “risks that seemed appropriate in the environment in which they found themselves.”2

In the period 1978-2009, about 140 banking crises occurred worldwide.

But then why didn’t a banking crisis erupt sooner—say, in the recession years of 1990-1991 or 2001-2002? What changed in recent years that led to business risk-taking capable of wrecking the U.S. housing market and the U.S. banking system and other banking systems throughout the world? Further, why were prudent credit practices reasonably maintained in credit card and commercial mortgage securitization in recent years, but wholly abandoned in residential mortgage securitization?

Some economists have criticized securitization as an inherently flawed business model, particularly since the process of securitization involves a “long chain” of players with “information asymmetries.” The buyer of the mortgage or security typically knows less than the seller. But many of the financial institutions involved in subprime securitization (e.g., Citigroup) held portions of their own securitizations, and they have for decades been securitizing credit card loans without major debacles. Calomiris has observed that even during the subprime boom, banks aggregating credit card loans for securitization and investors in securitizations closely examined the identity of originators, their historical performance, the composition of portfolios, and changes in composition over time.3

In contrast, from 2003 until the middle of 2007, the demand for subprime loans and securities proved extremely insensitive to changes in borrower quality and loan structure. There was dramatic new entry into subprime mortgage origination in 2004-2006 as many “fly-by-night” originators offered newer, riskier mortgage products to new customers and homeowners. Yet these new entrants were able to raise funds for origination on terms comparable to those governing originators with longer track records and who were continuing to originate more proven, lower-risk products.

Likewise, since the early 1990s, commercial property mortgages have been securitized just like home mortgages. Throughout most of the residential housing boom from 2000-2006, there was also a boom in commercial real estate values (see chart below). The two real estate bubbles are not directly comparable, because the residential housing downturn was associated with immediate erosion in property market fundamentals and spikes in mortgage default rates. In contrast, the initial decline in commercial property values—which occurred some 18 months after the housing peak—was mostly due to increased risk aversion in the capital markets. Commercial property fundamentals stayed strong in most markets and commercial mortgage default rates remained at historic lows until well after the onset of the recession. The housing bust, the banking crisis, and the recession brought down commercial real estate—not the other way around.

[Chart: commercial and residential real estate values]

Yet from 2002-2007, the intensely competitive commercial mortgage-backed securities (CMBS) business became dysfunctional at times and lenders frequently complained of “too much money chasing too few good deals.” Declining long-term Treasury rates and falling debt and equity risk premia during this period drove up commercial property values, which in turn led to commercial properties being more highly leveraged (as measured by loan amount per square foot or loan amount to original cost). Yet despite some erosion in commercial mortgage underwriting (e.g., the percent of interest-only CMBS loans increased from 6 percent in 2002 to 59 percent in 20074), lender due diligence remained high and disciplined. Also, the 80 percent loan-to-appraised value and 1.20 property cash-flow-to-debt-service ratio, both long-established industry standards, were rarely violated.
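As a back-of-the-envelope illustration of those two long-established standards, each one caps the loan size, and the binding constraint is simply the smaller of the two. The property figures below are hypothetical, chosen only to show the mechanics:

```python
# Sketch of the two CMBS underwriting standards cited above:
#   loan-to-value (LTV) <= 80% of appraised value, and
#   property cash flow >= 1.20x annual debt service (the DSCR floor).

def max_loan(appraised_value, net_operating_income, mortgage_constant):
    """Largest loan satisfying both the LTV cap and the DSCR floor.

    mortgage_constant: annual debt service owed per dollar borrowed,
    e.g. 0.07 for a 7% interest-only loan (a hypothetical figure).
    """
    ltv_cap = 0.80 * appraised_value
    # DSCR floor: NOI / (loan * constant) >= 1.20  =>  loan <= NOI / (1.20 * constant)
    dscr_cap = net_operating_income / (1.20 * mortgage_constant)
    return min(ltv_cap, dscr_cap)

# A hypothetical $10M property earning $700k/year, financed interest-only at 7%:
print(round(max_loan(10_000_000, 700_000, 0.07)))  # → 8000000 (the 80% LTV cap binds)
```

With a weaker cash flow — say $500k on the same property — the 1.20 coverage floor would bind instead, which is why lenders check both tests rather than either one alone.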

In answer to the questions posed above about what specific factors explain the causes and timing of the banking crisis and the extraordinary departure from historically sound underwriting and securitization standards for residential mortgages, we identify a potent mix of six major government policies that together rewarded short-sighted collective risk-taking and penalized long-term business leadership:

1. Bank misregulation, in particular the international Basel capital rules, including a U.S. adaptation to them—the 2001 Recourse Rule—and the outsourcing of risk assessment by regulators to government-sanctioned rating agencies incentivized (not merely “allowed”) the creation and highly-leveraged systemic accumulation of the highest yielding AAA- and AA-rated securities among banks globally. The demand for these securities was met mainly through the increased securitization of U.S. subprime and Alt-A mortgages, an artificially large portion of which carried credit ratings of AAA or AA. The charts below display the typical tranche shares for subprime and Alt-A mortgage-backed securities (MBSs) in 2006, and show that 85.9 percent of subprime MBS tranches, and 95.3 percent of Alt-A tranches, were rated either AAA or AA.

[Chart: subprime MBS tranche shares by credit rating, 2006]

[Chart: Alt-A MBS tranche shares by credit rating, 2006]

2. Continually increasing leverage—driven largely by Fannie Mae and Freddie Mac credit policies and the political obsession with taking credit for increased homeownership—into the U.S. mortgage system. Reduced down payments and loosened underwriting standards were a matter of government policy throughout the housing boom. The two nearby charts illustrate the leverage trends from 2001 to 2007—residential mortgage debt as a share of GDP rose from less than 50 percent in 2001 to almost 75 percent by 2007 (top chart); and the percent of residential real estate sales volume with loan-to-value ratios of 97 percent or higher (down payments of 3 percent or less) increased from about 10 percent in 2001 to almost 40 percent by 2007 (bottom chart). Taken together, these graphs show that housing leverage was increasing to historically unprecedented levels by 2007 at the same time that the quality of housing debt was deteriorating considerably due to an erosion of sound underwriting standards and lower down payments, as discussed above.

Creditors with the lowest cost of capital generally drive underwriting and leverage standards within the segment in which they compete. In the residential mortgage market, with government entities historically being the low-cost providers of capital and the dominant purchasers and guarantors of loans and securities, it is reasonable to hold government accountable for system-wide leverage.

[Chart: residential mortgage debt as a share of GDP, 2001-2007]

[Chart: share of residential sales with loan-to-value ratios of 97 percent or higher, 2001-2007]

Economist Eugene White has noted that the U.S. housing boom and bust in the 1920s was similar in magnitude to the recent one.5 With essentially no government intervention in the mortgage market in the 1920s, system-wide leverage expanded during the boom, but generally only up to the 80 percent loan-to-value level. Also, there were no special incentives provided to the banking sector for a concentrated build-up of balance sheet exposure to high-risk mortgages. Therefore, when real estate prices crashed in 1926, it was not enough to cause a banking crisis and, in fact, bank suspensions nationally were lower in 1927 and 1928 than in 1926. Further, bank losses in the late 1920s were concentrated in agricultural areas unaffected by the boom in residential real estate.

3. The enlargement of the riskier subprime and Alt-A mortgage markets by Fannie and Freddie through the abandonment of proven credit standards (e.g., dropping proof of income requirements) during the 2004-2007 period, and their combined accumulation of a $1.6 trillion portfolio of these loans to meet the affordable housing goals Congress mandated. As of mid-2008, government entities had purchased, guaranteed, or compelled the origination of 19 million of the 27 million total U.S. subprime and Alt-A mortgages outstanding.6

4. The FDIC, Federal Reserve, Treasury Department, and Congress undertook explicit or implicit creditor bailouts for large financial institutions starting in the 1980s (First Pennsylvania, Continental Illinois, the thrift industry, the Farm Credit System, etc.) and continuing to 2008 (Bear Stearns). These regulatory decisions led to an absence of creditor discipline of financial institution leverage and risk-taking (especially at Fannie and Freddie) and the “too big to fail” expectation of a government bailout.

Why didn’t a banking crisis erupt sooner, say in the recession years of 1990-1991 or 2001-2002?

Creditors—not shareholders—normally control business risk-taking. They do this by: 1) reducing leverage; 2) demanding higher interest rates; 3) declining to finance risky projects; 4) requiring more collateral; 5) imposing restrictive terms and loan covenants; and 6) moving deposits to safer alternatives (in the case of bank depositors, who are creditors of banks). Without excessive government protection of creditors, there is little doubt we would have seen creditors act to reduce risk in the U.S. financial system, particularly with respect to Fannie and Freddie.

5. The increase in FDIC deposit insurance from $40,000 to $100,000 per account in 1980 combined with the unchecked expansion of coverage up to $50 million under the Certificate of Deposit Account Registry Service beginning in 2003. These regulatory errors of commission and omission reduced the incentives of business, institutional, and high net-worth depositors to monitor and discipline excessive bank leverage and risk-taking. When federal deposit insurance legislation was first enacted in 1933, policy makers understood that it contributed to moral hazard, tempting bankers to take short-sighted risks. Accordingly, the initial coverage was limited to $2,500 per account (about $42,000 in today’s dollars), resulting in a large portion of bank liabilities without a government guarantee. Today, virtually no depositor has any “skin in the game” and, according to one estimate (Walter and Weinberg 2002),7 more than 60 percent of all U.S. financial institution liabilities, including all those of the 21 largest bank holding companies, were either explicitly or implicitly guaranteed. There were therefore almost no incentives in recent years to monitor the excessive risk-taking by banks that contributed to the housing bubble and financial crisis.
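The "$2,500 per account (about $42,000 in today's dollars)" comparison is just a price-level rescaling. A minimal sketch, using approximate CPI-U annual averages (the exact result depends on the index and end year chosen):

```python
# Inflation adjustment of the 1933 deposit-insurance limit.
# CPI-U annual averages (1982-84 = 100); both values are approximate.
CPI_1933 = 13.0
CPI_2010 = 218.1

def in_2010_dollars(amount_1933):
    # Scale a 1933 dollar amount by the ratio of price levels.
    return amount_1933 * CPI_2010 / CPI_1933

print(round(in_2010_dollars(2500)))  # → 41942, i.e. roughly $42,000
```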

6. Artificially low and sometimes negative real federal funds rates from 2001 to 2005—a result of expansionary Fed monetary policy—fueled the subprime and Alt-A mortgage boom and widened the asset-liability maturity gap for banks (see chart below). Most subprime and Alt-A mortgages carried low initial rates made possible by low federal funds rates, which spurred borrower demand for these mortgages. With federal funds rates falling faster than long-term rates in the 2002-2005 period, low short-term rates widened the duration gap inherent in borrowing short and lending long, making the rollover or refinancing of short-term instruments all the more precarious when the value and liquidity of the subprime and Alt-A mortgage securities that this short-term paper was financing became doubtful and the wholesale funding markets started to deleverage. In particular, many large investment banks reached for more firm leverage during the housing bubble and roughly doubled the proportion of total assets financed by overnight repos.

[Chart: real federal funds rate, 2001-2005]

Underlying all six of these government policies is the underappreciated problem of government failure, a problem rooted in the absence of incentives to reconcile a policy’s social costs and benefits with the costs and benefits to the policy makers. The banking crisis should therefore be understood more fundamentally as a government failure than as a market or business failure.

Government failure does not explain every aspect of the banking crisis and ensuing recession. It does not explain, for instance, why JPMorgan Chase, operating under the same regulatory regime and economic incentives as Citigroup, largely exited the residential MBS business as Citigroup and other large banks were ramping up. The crisis certainly could not have occurred without certain private firms (e.g., Citigroup, UBS, Merrill Lynch) engaging in excessive corporate short-termism (or perhaps “greed”) along the same lines as Fannie and Freddie. But greed is a timeless and universal component of human nature, and it influences the public sphere at least as much as the private sector. As such, greed has little relevance in explaining the timing and crucial facts of the recent crisis—such as why credit standards and due diligence practices in housing finance deteriorated so much more dramatically than in any other credit segment. The argument we advance is that the interaction of these six government policies explains timing, severity, global impact, and other important features of the banking crisis better than any faulty business practices unrelated to the perverse incentive effects of these government policies.

What is remarkable is that policy experts and politicians sympathetic to the views Paul Samuelson and President Obama have expressed—those who would have us believe that a combination of market defects, business greed, and under-regulation provide the better fundamental understanding of the crisis—rarely, if ever, argue along that line. They call our attention to business deficiencies such as “predatory lending” and incentive-based compensation practices based strictly upon annual performance. They are right to do so. But they do not provide a direct counter argument to the one we make. They do not tell us why the crisis reflects a failure of unfettered capitalism more fundamentally than a failure of government policies.

Why were prudent credit practices reasonably maintained in credit card and commercial mortgage securitization in recent years, but wholly abandoned in residential mortgage securitization?

For example, in his book Freefall, Joseph Stiglitz tells us that “blame for the crisis must lie centrally with the financial markets” and that “the financial crisis showed that financial markets do not automatically work well, and that markets are not self-correcting.”8 Yet nowhere in the book’s 361 pages does Stiglitz directly counter our argument analytically—only rhetorically and briefly, at that. In fact, while Stiglitz points fingers in every direction, what he seems to find most culpable is the cronyism inherent in the government’s “too big to fail” bailout policies, which he refers to as “ersatz capitalism.” The net effect of the Stiglitz book is to support our argument.

This issue—the relative contribution of government policies versus independent financial market practices to the financial crisis—is all-important. It is the “elephant in the room” of every current and future discussion of financial reform and the role of government in the economy generally.

A more accurate interpretation of the financial crisis as predominantly a government failure could pave the way for real financial reforms that would contribute to both future financial stability and productivity. These reforms would include: 1) the gradual reduction of government intervention in mortgage markets through legislation such as the GSE Bailout Elimination and Taxpayer Protection Act (HR 4889), sponsored by Representative Jeb Hensarling (R-Texas); 2) a reduction in federal deposit insurance and other transparent policy rules to reduce or eliminate creditor expectations of future bailouts, especially the “too big to fail” guarantee; 3) the replacement of elaborate regulatory micromanagement with more equity capital; and 4) a monetary policy rule or quasi-rule to govern the Federal Reserve’s policy making.

But just as the Patient Protection and Affordable Care Act (“ObamaCare”) ignores the government’s role in creating a crisis of runaway health costs and a low health-outcome-to-cost ratio, the Dodd-Frank Wall Street Reform and Consumer Protection Act, passed in July, was enacted on the faulty presumption that the fundamental cause of the financial crisis was financial market failure and under-regulation of the financial sector. In expanding government control over financial markets with more systemically imposed micro-regulations and inconclusive future bureaucratic rule-making, the Dodd-Frank Act is fundamentally flawed in its approach to reforming Wall Street.

Many of the “Tea Party” Republicans swept into power in the November midterm elections ran on a platform of replacing or reforming ObamaCare. Their success at the polls partially reflects the correct perception of the majority of informed Americans that persistent problems in U.S. healthcare stem primarily from government failure. The same perception holds equally true for the U.S. financial system, and replacement or reform of the Dodd-Frank Act is an equally worthy undertaking.

Mark J. Perry is a visiting scholar at the American Enterprise Institute and professor of finance and economics at the University of Michigan in Flint. Robert Dell is a commercial real estate banker residing in Atlanta. They are co-authors of a forthcoming book, Back from Serfdom: A Republican New Deal for Pragmatic Democrats.

Our Real Food Problem

We don’t have a food system problem, but a problem of self-control. We can’t solve that with quinoa or locally grown, free-range chicken.

“Divided We Eat,” a recent Newsweek cover story about class and income distinctions in our diet, may be the perfect example of what’s wrong with how we think about food. Written by Lisa Miller, the piece manages to accept every bit of the conventional wisdom about what is wrong with how we eat, without challenging a single assumption of New York foodies.

Miller is convinced that income disparity causes obesity. If only the poor could afford organic, locally raised food, we’d all be healthier. But Miller doesn’t explain how we could force them to eat that food. As a frequent visitor to a convenience store in a small, low-income farming town, I find that my mind absolutely boggles at the size of government it would take to encourage my neighbors to share Miller’s breakfast of a hand-whipped, organic cappuccino and two slices of imported Dutch Parrano cheese. Miller is something of a farmer herself; never have so many axioms been planted in such stony ground.

Food is the newest battlefield in our culture wars. Now that the victory of the sexual revolution is complete, organic and local have replaced sex as the cultural dividing line. According to our culinary advisors, the only place left to improve the habits of the great unwashed is in the supermarket. Happy Meals have replaced Victorian morals as the best way to distinguish between those who are “cool” and those who subsist on starch and dreams of Sarah Palin.

Miller begins at the breakfast table of her neighbor, a nutritionist who is feeding her son quinoa porridge sweetened with applesauce and laced with kale flakes. Tellingly, the article doesn’t describe what quinoa is. Those of us who live in the great food desert west of the Hudson aren’t familiar with quinoa, so I did a quick Google search. Quinoa, as it turns out, was a staple of the ancient Incas. Shockingly, the grain is typically not eaten whole, because the outside layer is bitter and acts as a mild laxative. This may well be why corn, also eaten by the ancient Incas, survived as a diet staple and quinoa did not. As for kale, most of us do know what it looks like. It’s often served as a garnish. People slide it around their plate until the waitperson takes it away. The first victims in the food wars are nutritionists’ children.

The second visit Miller makes is to a neighbor named Alexandra Ferguson, who keeps chickens in her backyard. We had chickens in our backyard when I was a boy, so I was glad to identify with at least one of the article’s foodies. While Miller conducts her interview, the chickens peer into the kitchen from the back stoop. I hope her neighbor wipes her feet before coming indoors. Saving the world through local food demands sacrifices, and a backyard covered with chicken droppings is clearly one of them. When the avian flu reappears, the first battleground won’t be large industrial buildings full of chickens but free-range birds interacting with well-meaning locavores and wild birds passing by … but I digress.

Ferguson believes that “eating organically and locally contributes not only to the health of her family but to the existential happiness of farm animals and farmers.” I’m pretty sure that most of the farm animals around here spend little time worrying about Ferguson’s contributions to their existential happiness, and I’m darned sure that we farmers don’t. Furthermore, Ferguson believes, correct food choices are necessary for the “survival of the planet.” She goes on to report that she spends several hours a day “thinking about, shopping for, and preparing food.” Not to mention collecting eggs and cleaning the back stoop.

There is, of course, the obligatory visit to single mom Tiffiney Davis, whose family subsists on convenience food. Miller reports that doughnuts have been exiled from the family diet now that restaurants in New York City have begun posting calories. (It surprised Davis that doughnuts are high in calories? Only Mayor Michael Bloomberg’s rules about posting calorie counts saved her from jelly-filled, deep-fat-fried concoctions?) Anyway, the single mom reports that she still feeds her kids bodega food, including a muffin and a soda as her daughter’s breakfast. They eat takeout or McDonald’s several times a week. She doesn’t purchase fruits and vegetables because they’re too expensive and not fresh. Miller reports that Ferguson spends about $1,000 per month on food for her family, while Davis spends $400 per month feeding her brood. No comment appears about the Davis clan’s obesity level. It’s not clear whether Miller is too kind, or the reader is supposed to assume obesity from the diet description.

Miller acknowledges a dietary chasm has always existed between rich and poor, along income and class lines. She has a point. While poor people went hungry during the Great Depression, rich folks developed an interest in fad diets. One popular diet consisted of grapefruit, melba toast, and raw vegetables. It occurs to me, if not to Miller, that quinoa may be seen by succeeding generations as just as faddish as melba toast, but then I’m writing from the McDonald’s side of the great food divide.

The research Miller references is no better balanced than those fad diets. She relies heavily on a study that correlates obesity rates to income disparities. Japan, for example, has lower obesity rates than the United States, and incomes are less widely distributed there. She doesn’t mention obesity rates among Japanese-Americans, which is the first question that comes to mind. Sure enough, obesity rates rise among Japanese who move to the United States, but are still much lower than obesity rates among non-Japanese Americans. Genetics surely has more to do with varying obesity rates among different countries than wide spreads in income.

Food stamp recipients have a higher obesity rate than the rest of the U.S. population. As Miller points out, diets purchased at convenience stores (or bodegas) can be much cheaper than meals prepared after trips to Whole Foods. That raises the important question: Is it possible to feed a family a nutritious and wholesome diet at an affordable price without quinoa and arugula? The problem with obesity is not that local or organic suppliers don’t provide nutritious foods at Safeway, but rather that consumers don’t purchase them. Industrially grown canned green beans and Tyson’s chicken breasts can be part of healthy diets, but frequent fast food is most surely not. Freshness has a lot to do with taste, but it’s quite possible to have a healthy diet without fresh mangoes, or even without locally grown apples. Davis can buy nutritious and non-fattening food at Safeway on her present food budget, but she chooses not to. It may well be that the reasons people must rely on food stamps rather than on their own earnings are the same reasons they struggle with obesity. A growing economy with lots of jobs leads to physical activity and responsible people, while generations on the public dole may result in the inactivity that can cause obesity. The number of Americans on food stamps is growing exponentially (up 17 percent in the past year and 53 percent in the past three years), due both to our present economic situation and eased eligibility requirements.

Between 2004 and 2008, according to researcher Adam Drewnowski, a market basket of politically correct foods increased in price by 25 percent, while the “least nutritious” foods increased by only 16 percent. Governmental spending on food assistance increased 33 percent over the same period. Should we assume that “good” food rises in price faster than Cheetos because our food system is rigged against the poor?

Another explanation comes to mind.

As organic farming becomes more popular in the marketplace, the price of organic food will tend to rise faster. Bestselling books, Oscar-nominated documentaries, the Oprah show, and articles in Newsweek have all encouraged the top slice of our society to change its diet, thereby increasing demand for organic, local food produced by farmers with a social conscience (or, at least, a trendy ad agency). Surely this increase in demand is now surfacing at the cash register. Farming without technology lends itself to production in places with already fertile soil and low weed and pest pressure. The economics of raising a crop organically is completely different in the Mississippi Delta than it is in the arid areas of the Northwest near where Drewnowski does his surveys. It is an extremely safe bet that organic and local food will continue to increase in price faster than conventionally produced food as the production of those crops expands to places less suited for organic methods.

“Locally produced food is more delicious than the stuff you get in the supermarket; it’s better for the small farmers and the farm animals; and as a movement it’s better for the environment.” Miller provides no evidence to substantiate any claim made in that sentence. I’ll readily admit that the best-tasting food I eat throughout the year is what I raise, but I’ll grant our New York food experts absolutely nothing else. A rough proxy for the demands that food makes on resources is its price—the burden of proof is on Miller to show why organic food is easier on the environment than conventionally produced food, given the huge price premium organic foods bring. Conventional food may use more fossil fuel, but organic production is more profligate with land, water, and human labor. And we cannot assume that local food has fewer transportation costs or is necessarily fresher. Most of the transportation cost in food from farm to table is in the trip that begins with the retail purchase at a supermarket or farmer’s market and ends at the consumer’s kitchen. Local food may be fresher, but not necessarily: where I live, milk arrives sooner from New Mexico dairies three states away than it does from nearby Missouri dairies.

Miller spends a lot of time talking about nutritious food, but we don’t really have a nutrition problem. Beriberi and scurvy are not endemic in American society. We don’t really have a hunger problem, either. Some 6 percent of American households have what the U.S. Department of Agriculture calls “very low food security.” That’s a problem, but one extraordinarily difficult to solve with traditional food assistance. What we have is a fat problem. It matters not whether the staple of one’s diet is doughnuts made from industrially grown, highly processed wheat flour and fried in genetically modified soybean oil, or French pastries made from whole organic wheat and lightly sautéed in organic canola oil, if we insist on eating so many of either that we gain weight. We don’t have a food system problem, but a problem of self-control. We can’t solve that with quinoa or locally grown, free-range chicken breasts.

All the present critics of the food system rightly criticize Americans’ dinner habits: our failure to make meals a center of family life, our preference for convenience over taste. In articles like Miller’s, the French are always held up as an example, and she does not disappoint. Fair enough. But beating the world over the head with your food choices takes the fun out of eating just as surely as treating food as fuel. Saving the world through your dietary choices is just too heavy a load for any enjoyable meal to carry. After all, it is just lunch.

Blake Hurst is a Missouri farmer.

Save the Filibuster!

In an age of intensely polarized politics, the filibuster assures that a genuine consensus exists for Congress to move forward.

Whoever said that people seeking to make trouble should not underestimate the possibilities of “reform” would smile in acknowledgement of the progressive “reform” community’s latest target—the Senate filibuster. The usual “coalition” (another dead giveaway term) of labor and Left-activist groups has organized around the banner “Fix The Senate Now” to advocate changing the Senate’s 60-vote filibuster threshold that currently allows the minority party to hold up legislation and key personnel appointments to the judiciary and executive branch.

Never mind the hypocrisy of folks who now lament the filibuster after having defended it from Republican threats to curtail its use against many of President George W. Bush’s judicial appointments a few years ago. Majorities are always frustrated when a determined minority uses—and occasionally abuses—the rules to thwart the majority. And it is unquestionably the case that both parties have made frequent use of the once-rare filibuster in recent years, changing the Senate into a chamber that now requires a de facto 60-vote supermajority for nearly everything. Why has this happened? Is changing the rules the right remedy for abuses? And does using the filibuster, even in its frequent form just now, thwart the rightful purposes of our constitutional design, or in fact fulfill them?

We should always beware of seemingly neutral “process reform” sold as a means of making government more “effective.” Process reforms of this type are always a masquerade of the one-way ratchet to make it easier for government to acquire more power and do more things without having to argue openly for the additional power. The coalition Left is enraged that its key agenda items such as card check, the DREAM Act, the public option in the healthcare bill, tax hikes for the rich, key Obama appointments, and many other items fell victim to Republican filibusters.

Advocates of filibuster reform, whose ranks include my distinguished American Enterprise Institute colleague Norm Ornstein, do not propose abolishing it completely and having the Senate operate as a pure majoritarian body like the House, with severely limited debate. The proposed changes appear to tinker at the margins, such as prohibiting second- and third-order filibusters after initial cloture votes to proceed to the floor with the main legislation; prohibiting filibusters of proceeding to conference committees; requiring senators to actually hold the floor, like the filibusters of the “Mr. Smith Goes to Washington” days of old; and so forth. It is telling, though, that many advocates of filibuster reform have borrowed a phrase from the 1990s: “Mend it—don’t end it.” This phrase was first linked with affirmative action quotas in the 1990s, and its real meaning was to prevent any real change to the increasingly unpopular regime of racial preferences. In this case, the slogan means exactly the opposite. If progressive reformers had their way, they would end the filibuster, and are constrained from doing so only by political reality. (I exempt Brother Ornstein from this charge; he merely dislikes the untidiness and disrepute that the perception of a dysfunctional Congress conveys to the public.) This fact is never clearer than when reformers complain that the Senate filibuster makes the chamber “undemocratic.”

To which I say: precisely. Long may it continue to be so undemocratic.

Several observations should be brought to bear on how to think about the filibuster. First, keep in mind that the original cloture rules to end filibusters were instituted about 100 years ago, precisely to end the ability of a single senator to tie up the Senate forever in debate. Prior to these rules, Senate debate was completely unlimited, and a group of senators could filibuster forever, if they wanted.

But the core point in defense of the filibuster as we know it today is that, while not mentioned in the Constitution, it is wholly consistent with the framers’ intent that the Senate not be a purely democratic body. Even a casual reader of the Federalist Papers and other founding-era thought will know that the central idea of our republic’s design is to operate not by simple majority rule but by a certain kind of majority—a deliberative majority. While the whole House is elected directly every two years to represent transient and shifting public opinion on a nearly real-time basis, the Senate, with its rolling turnover and (previously) indirect method of selection, is intended to move slowly, to be, as the overused metaphor from the founding had it, the “cooling saucer of democracy.” In the case of many liberal items the filibuster held up in the last Congress, the filibuster can be said to have worked exactly according to the framers’ design: to ensure that a genuine consensus exists for major changes, and to prevent a transient majority from imposing its will on the public without its consent.

I can hear Brother Ornstein ask, “So what, exactly, is ‘deliberative’ about blocking appointments and slowing even noncontroversial items through procedural obstacles?” Just this: In an age of intensely polarized politics, rooted in deep and possibly irresolvable differences of principle over the nature and reach of government, the filibuster, even in its extreme and abused forms, assures that a genuine consensus exists for Congress to move forward both in general and on particular pieces of legislation. Reserving the power to invoke a filibuster in successive steps of the process is also necessary to balance one of Senate Majority Leader Harry Reid’s favorite parliamentary maneuvers, a process known as “filling the tree,” whereby bills brought to the Senate floor cannot be amended.

But is the filibuster dangerous to the republic’s necessary business? Here we must make an empirical inquiry, though final judgment will depend on subjective views about what constitutes the “vital” functions of our government. To people inside the Beltway, everything is vital. Citizens may have a more balanced view, which is why public opinion is the ultimate check on the filibuster.

Are filibusters blocking any of the truly essential business of the nation? Both houses of Congress failed to pass an actual budget for this fiscal year, despite a legal requirement to do so and Senate rules that reduce the filibuster’s influence on the budget process. Complaints about the filibuster slowing vital matters ring hollow before this kind of congressional irresponsibility. Are any executive departments or courthouses failing to function because of a filibustered appointee? No. (You could double the number of federal judges, and everyone would still complain about clogged dockets. And most senior political appointees to the executive branch are captured by the careerists anyway; I doubt the careerists even notice the absence of the deputy assistant undersecretary for interagency affairs.) Are filibusters stopping timely defense appropriations for our troops in the field? No. Senators don’t dare do something that reckless.

This raises the most important aspect of the issue—the ultimate restraining hand of the voters. It is telling that Senate Democrats chose not to filibuster either of George W. Bush’s Supreme Court nominees—John Roberts and Sam Alito—even though Democrats objected to their jurisprudence as much as, if not more than, that of the numerous lower-court nominees they blocked through the filibuster. Needless to say, these Supreme Court appointments were much more significant than lower-court nominations. So why didn’t Democrats use the filibuster against Roberts and Alito? The answer is clear: public opinion would have turned savagely against Democrats had they locked up the nation’s highest tribunal for purely ideological reasons. In fact, Senate Minority Leader Tom Daschle’s obstructionist use of the filibuster probably played a role in his defeat for re-election in 2004.

In several of his landmark decisions delineating the reach of the national government, Chief Justice John Marshall argued that the abuse of a power is not an argument against its existence. Moreover, Marshall argued that the remedy for the abuse of power is not in endless tinkering with our basic rules, but in the hands of the people through the ballot box. The Progressive Era Republican Senator Albert Beveridge invoked Marshall’s teaching in several Senate speeches in his career: “The limit is in our common sense and in our responsibility to our constituents. If we do exercise our power unwisely the remedy is in the hands of the American people at the ballot-box . . . Mr. President, if the possible abuse of a power is an argument against its existence, where are we?”

The irony here is that recourse to the ballot box is the same remedy Brother Ornstein and other opponents of term limits have (rightly in my view) advocated for that source of our democratic discontent. Why isn’t the remedy he and others suggest for entrenched incumbency just as good for the filibuster?

Steven F. Hayward is the F.K. Weyerhaeuser Fellow at the American Enterprise Institute.

In Praise of Anarchy

01/23/11 Buenos Aires, Argentina – Left alone, good people tend to do good things. And, when unobstructed by coercion, force, violence or any other tool employed by the state in order to foster and maintain a more “responsible,” “socially conscious” citizenship, most people tend toward being good people…all on their very own.

Nowhere was this sentiment better expressed during the past few weeks than in the flood-stricken state of Queensland, Australia (and, more lately, in the state of Victoria, to Queensland’s south).

The rains that inundated an area the size of France and Germany (combined!) across the Sunshine State wrought havoc and destruction upon its people. Lives were lost, property damaged and industry crippled.

When the worst of Mother Nature’s wrath had subsided, Queensland residents were left with a monumental cleanup.

To their credit, these individuals, in the face of near-immeasurable disaster, performed admirably. They did what came naturally. Contrary to the patriotic rally cries of politicians, they didn’t do what Queenslanders do; they did what good people do. And it was beautiful.

The general feeling was perhaps best summed up by Wally “The King” Lewis, a retired national football hero, who spent the last week of his holidays helping his fellow Brisbane residents prepare sandbags and to bail rising flood waters out of their homes. (It is worth pointing out here that, for many Australians, there is no higher office to be attained in the land than that of venerated sporting legend.)

Speaking to National Nine News from the waterlogged front yard of a neighbor – whom he had never met – Wally said, “If someone’s doing it tough, I think it’s the right thing to do to put the hand up and ask them if they want any help.”

The interviewer then turned his microphone to another volunteer. “What was your reaction when Wally Lewis turned up?”

Typifying the laid back disposition of the crowd, the young man casually replied, “[Laughs] Yeah, I was a little surprised but…you know…people help out. It’s all good.”

The Australian people appeared to be perilously close to discovering something very important about themselves; something, perhaps, they’ve always known; an instinctual tendency toward human solidarity, the natural urge to help a neighbor in distress, to lend a hand; in short, to volunteer.

Alas, barely had the first piece of debris been cleared away when the media, as it typically does, lost sight of the bigger picture. Alongside inspirational stories of non-violent, voluntary cooperation, the local papers turned their attention to the state’s role in the cleanup. Should the state and federal governments remain focused on returning “their” budgets to surplus, or should they deploy funds to assist those in need of help? In other words, how “best” should the state spend its citizens’ money…as if the only just, honest option (not taking that money in the first place) had not already expired at the point of expropriation?

While sifting through the news reports and reading comments about what the state “should” do, we wondered how people who are so ready to do what is natural, to cooperate freely with neighbors and “mates down the street,” could so miss the overarching lesson in all this tragedy. Why do hostages of the state turn to their captor when it comes to arbitrating issues of freedom, issues they are, individually and through voluntary cooperation, demonstrably capable of resolving for themselves?

Perhaps it has to do, at least in part, with the misrepresentation of the concept of anarchy itself; a misrepresentation that serves not the interests of individuals, but of the state itself. We are taught that “anarchy” means violence, looting and the aggressive form of chaos that all-too-often flourishes in the wake of natural disasters. We are told that this is what happens given the absence of state control. Nothing could be further from the truth. The state IS control. It is the very incarnation of force and violence from which it purports to protect us.

As Murray Rothbard, the man credited with having coined the term anarcho-capitalism, expressed in “Society Without a State”:

“I define anarchist society as one where there is no legal possibility for coercive aggression against the person or property of any individual. Anarchists oppose the State because it has its very being in such aggression, namely, the expropriation of private property through taxation, the coercive exclusion of other providers of defense service from its territory, and all of the other depredations and coercions that are built upon these twin foci of invasions of individual rights.”

We can expect nothing more from an agent of force than that which is its primary, defining characteristic; namely, more force. A mule is no more capable of giving birth to a unicorn than the state is capable of “granting” freedom.

Last night, with all this in mind, your editor telephoned his father. Dad lives about an hour south of Brisbane, where the post-disaster cleanup continues. In the aftermath of the flood, volunteer posts were set up around the city where groups of concerned individuals could assemble to donate their time and/or resources to help get the place back on its feet.

“Sixteen thousand people turned up to help on the first day,” Dad told us. “They came with their own equipment and made their own way there. In the end, they had to turn people away.

“I put my name down to lend a hand,” he continued, before adding, with sincere disappointment in his voice, “but I haven’t been called up yet.”

Then, as a man who has spent his life helping people, he added, enthusiastically, “but I’ve still got two more days of holiday left, Sunday and Monday. Hopefully I’ll have the chance to get up there and help out then.”

To those who would argue that coercion is necessary to foster freedom, that force is a prerequisite for peace and that the expropriation of individuals’ property on threat of violence is compulsory to fund an agency that, alone, is capable of guaranteeing safety and prosperity, we say: you don’t know the real meaning of anarchy, you don’t know what voluntarism is and, until you do, you will never know what it means to be truly free.

Thank you to all the people in Queensland – and around the world – who do understand these concepts and, through their fine example, prove statists everywhere and always wrong on a daily basis.

Budget Cuts in the Irrational Financial System

01/24/11 Baltimore, Maryland – Gold down another $5 on Friday. Are you tempted to get out of gold now…and get back in, after the dip bottoms out?

Forget it.

The advantage of investing family money rather than personal money is that you have time on your side. You can sit tight and let the big trends make you big money…if you can get them right.

But don’t try to trade in and out. Because you can’t know exactly when the trend will back off…and when it will explode to the upside.

So if you want to take advantage of a big trend, you have to just get in and stay in…and take a very long-term approach. In the case of the gold bull market, for example, it may be years before the final blow-off bonanza comes.

If you try to time the trend on a year-by-year basis, you’re almost sure to lose money…and miss the big payday. Because you’ll sell out…then, the price will rise. You’ll vow to get back in on the next dip. But a real bull market never gives you the dip you’re looking for. Prices go down…you hesitate, hoping to get in at the bottom of the move…and then, they rise again. Soon, the price is much higher than the price at which you sold out. So, you want to kick yourself. “I can’t buy back in at such a high price,” you say. You wait…and the bull market goes ahead without you.

That’s why speculators rarely make any real money in a major bull market. The fellow who makes the real money is the guy who buys in early…and stays in to the end. Imagine trying to trade in and out of stocks during the big bull market of ’82-’00, for example. Most likely, you would have sold out somewhere along the way…and been left behind…while those who just stayed in multiplied their money 11 times.
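The power of that buy-and-hold arithmetic is easy to check. A minimal back-of-the-envelope sketch (assuming the roughly 18-year span from 1982 to 2000 the paragraph describes; the figures are illustrative, not market data):

```python
# What compound annual return turns $1 into $11 over ~18 years?
multiple = 11      # "multiplied their money 11 times"
years = 18         # roughly 1982 to 2000

annualized = multiple ** (1 / years) - 1
print(f"{annualized:.1%}")  # -> 14.2% per year, compounded
```

A trader who sat out even a few of the strongest years of such a run would capture far less than the full multiple, which is the author’s point about staying in.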

So, don’t be tempted to sell out. Buy. Hold. Be happy.

And be prepared to wait…years if necessary…for the crack-up of the monetary system and gold at $3,000 an ounce – or more.

It seems like a sure thing. But, wait…what’s this?

Here’s the latest from US News & World Report:

Moving aggressively to make good on election promises to slash the federal budget, the House GOP today unveiled an eye-popping plan to eliminate $2.5 trillion in spending over the next 10 years. Gone would be Amtrak subsidies, fat checks to the Legal Services Corporation and National Endowment for the Arts, and some $900 million to run President Obama’s healthcare reform program.

What’s more, the “Spending Reduction Act of 2011” proposed by members of the conservative Republican Study Committee, chaired by Ohio Rep. Jim Jordan, would reduce current spending for non-defense, non-homeland security and non-veterans programs to 2008 levels, eliminate federal control of Fannie Mae and Freddie Mac, cut the federal workforce by 15 percent through attrition, and cut some $80 billion by blocking implementation of Obamacare.

How do you like those Republicans! They’re trying to spoil our fun. Finally, they’re going to “pull a Volcker.” Tough guys, huh? They’re tough on spending. See… Francis Fukuyama was wrong; they do have an appetite for fixing America’s real problems after all. There goes the bull market in gold! The yellow metal will probably fall in price for the next 20 years…just as it did after Paul Volcker got control of inflation in 1979.

But wait… $2.5 trillion sounds like a lot of money. But it’s over 10 years. That’s only $250 billion a year. And the budget deficit this year is supposed to be over $1 trillion.

So, unless we’re missing something…even these cuts are only a quarter of what they’d have to be in order to bring the budget back into balance.

Okay…you’re thinking…what’s a little deficit? But at $750 billion…that’s still a deficit of 5% of GDP, even if the cuts were 100% effective. And if the economy grows at only 3% or 4% a year…as Ben Bernanke has forecast…it means debt as a percentage of GDP is still growing.
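The arithmetic behind that paragraph fits on the back of an envelope. A quick sketch (the $15 trillion GDP figure is an assumption, roughly the 2011 level; the deficit and cut figures come from the text above):

```python
# Back-of-the-envelope budget arithmetic from the paragraphs above
deficit = 1_000e9             # projected deficit: about $1 trillion
cuts_per_year = 2_500e9 / 10  # $2.5 trillion in cuts spread over 10 years
gdp = 15_000e9                # assumed US GDP, roughly the 2011 level

remaining = deficit - cuts_per_year
print(f"${remaining / 1e9:.0f} billion")   # -> $750 billion
print(f"{remaining / gdp:.0%} of GDP")     # -> 5% of GDP
```

With nominal growth of 3–4% a year against a 5%-of-GDP deficit, the debt ratio keeps climbing, which is the point of the paragraph.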

So even these modest cuts proposed by the Republicans don’t have a chance. Every privileged group threatened by the cuts will mobilize. The wailing and gnashing of teeth will be reported from coast to coast. Compromises will be made. In the end, spending will probably go up – even for the programs that were supposed to be cut.

So, why bother to propose cuts that will be blasted as “drastic” and “draconian”…if they 1) won’t be passed…and 2) aren’t even a shadow of enough to get the job done anyway?

Why? Because that’s how the system works. It awards special benefits to power groups, whether the nation has the money to pay for them or not. Then, if there is a problem…it pretends to fix it.

And we know what you’re thinking… “Well, the system will just have to learn to do things differently.” But that imagines that the “system” is rational, thinking and responsive. It is not. It is merely reactive…like a primitive molecule or a PTA meeting. It has DNA. It desires to procreate. It will fight to protect itself and stay alive. But it cannot become a different thing. Look, tigers may be on the verge of extinction. But you don’t see them becoming house cats, do you?

That’s just not the way it works. The system must fight to protect itself. Not something else. It can’t become a different system. If it were to do so it would no longer be the system, would it? It will react to the bond market…to default…to revolution. It will not respond to the needs of fiscal integrity.

Got that? Hope so. That’s the final piece of our new idea about how the world works.

The Providential State…our advanced social welfare governments…were set up in a different time. They evolved under very different circumstances. The climate has changed. Where once there was abundant land, water and energy…now there are nearly 7 billion people bidding for the same limited resources. Where once there was a handful of Western nations (and Japan) with a disproportionate share of the world’s wealth…now they are having to share the wealth with Brazilians, Chinese, Indians and others. Where once their populations got richer every year, now their wealth stagnates and falls. Where once there were more of them in every generation, now there are fewer workers to support the old people. Where once old people cooperated by dying soon after they stopped working, now they refuse to die at all.

And yet, the government cannot adjust to these new realities. The new realities are against its nature. It was designed to keep both the masses and the elites happy. The elites were paid off with big bribes. The masses got small ones. What will happen when the bribes stop?

That is what we will find out.

Food Crisis II

01/24/11 Gaithersburg, Maryland – A story I’ve been warning about for years is making sensational headlines right now.

It’s a story most people don’t realize could make a huge impact on all of our portfolios in a number of ways.

“US Crop Stock Forecasts Deepen Fears of Food Crisis” read a recent Financial Times headline. The US government cut its estimate for key crops. This came only a week after the UN warned the world faces “food price shock.” Corn and soybean prices jumped and now sit at 30-month highs. Inventories are very tight. Corn is up 94% since June!

And the world worries about a repeat of 2008, when food riots erupted in poor countries around the world.

This has been in the works for a long time. It was there for all to see. The ratio of arable land to people has been falling for decades. Gains in crop yields have slowed. Population has expanded and income levels have grown. Diets have shifted. More people are eating more meat, which is much more grain-intensive to produce.

And the love affair with biofuels puts food production in direct competition with energy. Plus, there are water scarcity issues affecting food supply. My readers have made tremendous gains from this trend by owning shares of agricultural fertilizer producers Potash (POT) and Mosaic (MOS).

I should also make the point that this fits in with another topic I’m concerned about: inflation. Now, the man on the street uses the term “inflation” to mean when prices for everything seem to go up. Or put another way, inflation is when the dollars in his pocket buy less. In truth, this is the effect of inflation. The root cause is simply money printing. When you print more money, that money has less value than if you didn’t print any new money at all.
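That cause and effect can be shown with a toy example. This is a sketch of the crude quantity-of-money intuition the paragraph appeals to; it assumes the stock of goods for sale (and how fast money changes hands) stays fixed, which is an assumption for illustration, not a claim from the article:

```python
# Toy sketch: same goods, more dollars chasing them.
money_supply = 1000.0   # dollars in circulation
goods = 100.0           # units of stuff for sale

price_level = money_supply / goods         # $10.00 per unit

money_supply *= 1.5                        # "print" 50% more money
new_price_level = money_supply / goods     # $15.00 per unit

# Each dollar now buys a third less stuff than before.
loss = 1 - price_level / new_price_level
print(f"Purchasing power lost: {loss:.0%}")
```

The "rising prices" the man on the street sees are the mirror image of the shrinking dollar in the last line.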

So what we are seeing with rising commodity prices is not only the supply and demand story I led off with. It’s also the effect of paper money losing its purchasing power in the real world of things. This, too, was easy enough to see. Finally, all that money printing – the “quantitative easing” baloney you’ve heard about – is coming home to roost.

Still, it’s disconcerting to see it all playing out. For the sake of our world, I’d rather have gotten this one wrong. But we have to deal with the market we are in. So what might “Food Crisis II” mean from an investment point of view?

Food prices will have to rise: There is no way around this. We are all going to pay more for food. Wells Fargo predicts US retail food prices will rise about 4% this year. Some things will go up much more. Pork and beef could rise more than 10%.

This won’t necessarily mean that meat producer stocks are good buys, because they may not be able to raise prices enough to fully offset the rise in feed costs. Anecdotally, for instance, The Wall Street Journal cited a 300-cow Minnesota operation that reported its feed costs had doubled. Plus, I’ve listened in on the conference calls of a number of food producers – Tyson, Hormel, and Sanderson Farms. They all talk about getting squeezed by rising feed costs.

I do think these companies will be good buys sometime this year, because people will adapt and farmers will respond. Producers won’t produce meat at a loss for long. And farmers will bring every resource they have to bear. It’s been slow getting the crops in the ground so far in many places. But ultimately, there is a lot of potential supply from Brazil and the US.

Still, weather is the big wild card here. If we have a drought in the US or in Brazil, this could really get ugly.

Emerging markets are vulnerable: This follows from the above. It doesn’t really faze the typical American to have to pay 4% more at the grocery store, because food is still such a small part of the typical American’s budget. I think Michael Pollan, in The Omnivore’s Dilemma, points out that the US spends about 9% of its income on food – among the lowest shares of any people anywhere at any time in history.

The same is not true in India or China or many emerging markets. In China, people spend 50% of every incremental dollar on food. And in India, it’s more like 70%. So the rising price of food is felt more keenly in these markets.

The price of food is rising faster in emerging markets, too. In India, food prices are up 18% and at their highest level in a year. China has the same problem. Prices rose 5% in November alone. All around the world, emerging markets have a big problem with rising food prices. Indonesia’s president is trying to get people to grow their own chili peppers. And the South Korean government recently released emergency stores of cabbage, pork, mackerel, radish, and other staples. I could go on and on.

The point is that the emerging markets boom is not going to go far when it faces a food crisis. Already, the markets are starting to reflect this. India’s Sensex was down three straight days and off 6% to start the year. Other markets also started badly. And if China and India and the rest slow down, it’s going to have a huge impact on all those stocks and commodities most sensitive to emerging market growth.

I’m keeping a close eye on these developments. There will be opportunities in this crisis, as with all others. For instance, though rising grain prices are not good for meat producers or emerging markets right now, it’s a boon for fertilizer stocks. As the old golf saying goes, “Every putt makes somebody happy.”

US Politics - Cage Match Or Pillow Fight?

Monday Morning Outlook
Brian S. Wesbury - Chief Economist
Robert Stein, CFA - Senior Economist

On Tuesday night, President Obama will deliver his third State of the Union address and the first since his self-described electoral “shellacking” last November. The Republican response will be delivered by House Budget Committee Chairman Paul Ryan of Wisconsin.

With the President from Chicago and Ryan from its northern neighbor, this could be considered a sequel to the Bears-Packers NFC championship game. After all, if you listen to conventional wisdom, Republicans and Democrats are like gladiators in a cage match. Republicans want small government and less spending. Democrats want big government and more spending. Republicans believe in free markets and capitalism. Democrats believe in government control and the welfare state.

The problem is that this cage match often looks more like a pillow fight. At least the results suggest there are few differences between the parties. Throughout history, almost no matter who was in power, the government has become bigger and more intrusive. The 1980s and 1990s were an aberration. In 1960, non-defense federal spending was 8.5% of GDP. By 1982, this spending had soared to 17.4% of GDP. Then Reagan and Clinton cut it back to 15.2% of GDP by 2000. This progress was reversed, and for 2011 the latest budget from the White House projects non-defense federal spending will be 20.4% of GDP, the largest share in history. No wonder the Tea Party was founded, and no wonder politicians are sounding more conservative.

By all accounts, President Obama will promise “responsible” spending reduction, and possibly Social Security reform and corporate tax cuts in his address on Tuesday night. After moving the country sharply to the left, he is trying to sound more like Bill Clinton in an attempt to absorb the lessons and message of last November’s election.

And Congressman Ryan will try to sound like Milton Friedman, Friedrich Hayek and Ronald Reagan all rolled into one. He will talk about the benefits of free markets and the negative economic impact of increased government spending and regulation. Republicans want to roll back Obamacare and Ryan will channel Ronald Reagan.

But Ryan has often voted like John Maynard Keynes. Back in February 2008, he agreed with President Obama’s future top economic advisor, Larry Summers, and voted for the Bush Stimulus Bill. He also voted for TARP, No Child Left Behind and the Medicare Part D drug benefit. In other words, he voted for large increases in government spending while a Republican was president, but is now arguing against spending when a Democrat is in the White House. We hope he can explain some of this in his speech on Tuesday night.

All of this is driven by politics, not economics. The economics are simple. Everywhere we look around the world and throughout history, the larger the government share of GDP, the higher the unemployment rate. Government spending, by definition, must be paid for by the private sector. The bigger the government, the smaller the private sector and the fewer jobs it creates. Cutting spending is the way to increase economic growth and create jobs. But this only happens when the political winds are blowing the right way.

And the good news is that because of the elections last November, the politics have changed. Politicians in Washington, of both parties, are being forced to consider more free market solutions to problems and to address the growth in government. Time will tell whether a new leaf has really been turned, but for now, the direction of policy is much better for markets and the economy than it has been in many years. Politicians have pulled out their pillows and are now debating how to shrink government, not expand it.

Letter about Obama Citizenship

Atlah Media Network
Michael Master

Open Letter to:
Congressman John Boehner
Congressman Eric Cantor
Congressman Frank Wolf
US House of Representatives
Washington, DC
January 24, 2011

Dear Sirs,

We the people elected you Republicans to be the controlling party of the US House of Representatives last November 2010 to hold the Democrats and Barack Hussein Obama accountable. The Democrat-controlled House of 2008 did not vet Mr. Obama. Despite his promise to be “transparent,” that House allowed him to avoid being “transparent” about whether he is a “natural born” citizen.

Your most recent comments about this are very disturbing. Are you going to vet him? The president of the United States must be a natural born citizen to qualify for the office. The ways natural born citizenship status can be lost are detailed in 8 USC 1481. One is becoming naturalized in another country.

Are you going to inspect his long-form birth certificate, his college applications, and his previous passports? Do any of those documents show a dual or foreign citizenship that would have caused him to lose his natural born status? Almost two-thirds of Americans think there is something wrong with his citizenship. So are you going to vet him and put this issue to rest?

You took an oath of office to protect and defend the Constitution of the United States. So are you going to do that concerning the “natural born” status of Barack Hussein Obama (Barry Soetoro)?

Obama's State of the Union and U.S. Foreign Policy


By George Friedman

U.S. President Barack Obama will deliver the State of the Union address tonight. The administration has let the media know that the focus of the speech will be on jobs and the economy. Given the strong showing of the Republicans in the last election, and the fact that they have defined domestic issues as the main battleground, Obama’s decision makes political sense. He will likely mention foreign issues and is undoubtedly devoting significant time to them, but the decision not to focus on foreign affairs in his State of the Union address gives the impression that the global situation is under control. Indeed, the Republican focus on domestic matters projects the same sense. Both sides create the danger that the public will be unprepared for some of the international crises that are already quite heated. We have discussed these issues in detail, but it is useful to step back and look at the state of the world for a moment.


The United States remains the most powerful nation in the world, both in the size of its economy and the size of its military. Nevertheless, it continues to have a singular focus on the region from Iraq to Pakistan. Obama argued during his campaign that President George W. Bush had committed the United States to the wrong war in Iraq and had neglected the important war in Afghanistan. After being elected, Obama continued the withdrawal of U.S. forces from Iraq that began under the Bush administration while increasing troop levels in Afghanistan. He has also committed himself to concluding the withdrawal of U.S. forces from Iraq by the end of this year. Now, it may be that the withdrawal will not be completed on that schedule, but the United States already has insufficient forces in Iraq to shape events very much, and a further drawdown will further degrade this ability. In war, force is not symbolic.

This poses a series of serious problems for the United States. First, the strategic goal of the United States in Afghanistan is to build an Afghan military and security force that can take over from the United States in the coming years, allowing the United States to withdraw from the country. In other words, as in Vietnam, the United States wants to create a pro-American regime with a loyal army to protect American interests in Afghanistan without the presence of U.S. forces. I mention Vietnam because, in essence, this is Richard Nixon’s Vietnamization program applied to Afghanistan. The task is to win the hearts and minds of the people, isolate the guerrillas and use the pro-American segments of the population to buttress the government of Afghan President Hamid Karzai and provide recruits for the military and security forces.

The essential problem with this strategy is that it wants to control the outcome of the war while simultaneously withdrawing from it. For that to happen, the United States must persuade the Afghan people (who are hardly a single, united entity) that committing to the United States is a rational choice when the U.S. goal is to leave. The Afghans must first find the Americans more attractive than the Taliban. Second, they must be prepared to shoulder the substantial risks and burdens the Americans want to abandon. And third, the Afghans must be prepared to engage the Taliban and defeat them or endure the consequences of their own defeat.

Given that there is minimal evidence that the United States is winning hearts and minds in meaningful numbers, the rest of the analysis becomes relatively unimportant. But the point is that NATO has nearly 150,000 troops fighting in Afghanistan, the U.S. president has pledged to begin withdrawals this year, beginning in July, and all the Taliban have to do is not lose in order to win. There does not have to be a defining, critical moment for the United States to face defeat. Rather, the defeat lurks in the extended inability to force the Taliban to halt operations and in the limits on the amount of force available to the United States to throw into the war. The United States can fight as long as it chooses. It has that much power. What it seems to lack is the power to force the enemy to capitulate.


In the meantime, the wrong war, Iraq, shows signs of crisis or, more precisely, crisis in the context of Iran. The United States is committed to withdrawing its forces from Iraq by the end of 2011. This has two immediate consequences. First, it increases Iranian influence in Iraq simply by creating a vacuum the Iraqis themselves cannot fill. Second, it escalates Iranian regional power. The withdrawal of U.S. forces from Iraq without a strong Iraqi government and military will create a crisis of confidence on the Arabian Peninsula. The Saudis, in particular, unable to match Iranian power and doubtful of American will to resist Iran, will be increasingly pressured, out of necessity, to find a political accommodation with Iran. The Iranians do not have to invade anyone to change the regional balance of power decisively.

In the extreme, but not unimaginable, case that Iran turns Iraq into a satellite, Iranian power would be brought to the borders of Kuwait, Saudi Arabia, Jordan and Syria and would extend Iran’s border with Turkey. Certainly, the United States could deal with Iran, but having completed its withdrawal from Iraq, it is difficult to imagine the United States rushing forces back in. Given the U.S. commitment to Afghanistan, it is difficult to see what ground forces would be available.

The withdrawal from Iraq creates a major crisis in 2011. If it is completed, Iran’s power will be enhanced. If it is aborted, the United States will have roughly 50,000 troops, most in training and support modes and few deployed in a combat mode, and the decision of whether to resume combat will be in the hands of the Iranians and their Iraqi surrogates. Since 170,000 troops were insufficient to pacify Iraq in the first place, sending in more troops makes little sense. As in Afghanistan, the U.S. has limited ground forces in reserve. It can build a force that blocks Iran militarily, but it will also be a force vulnerable to insurgent tactics — a force deployed without a terminal date, possibly absorbing casualties from Iranian-backed forces.


If the United States is prepared to complete the withdrawal of troops from Iraq in 2011, it must deal with Iran prior to the withdrawal. The two choices are a massive air campaign to attempt to cripple Iran or a negotiated understanding with Iran. The former involves profound intelligence uncertainties and might fail, while the latter might not be attractive to the Iranians. They are quite content seeing the United States leave. The reason the Iranians are so intransigent is not that they are crazy. It is that they think they hold all the cards and that time is on their side. The nuclear issue is hardly what concerns them.

The difference between Afghanistan and Iraq is that a wrenching crisis can be averted in Afghanistan simply by continuing to do what the United States is already doing. By continuing to do what it is doing in Iraq, the United States inevitably heads into a crisis as the troop level is drawn down.

Obama’s strategy appears to be to continue to carry out operations in Afghanistan, continue to withdraw from Iraq and attempt to deal with Iran through sanctions. This is an attractive strategy if it works. But the argument I am making is that the Afghan strategy can avoid collapse but not with a high probability of success. I am also extremely dubious that sanctions will force a change of course in Iran. For one thing, their effectiveness depends on the actual cooperation of Russia and China (as well as the Europeans). And the Obama administration has granted enough exceptions to American companies doing business with Iran that others will feel free to act in their own self-interest.

But more than that, sanctions can unify a country. The expectations that some had about the Green Revolution of 2009 have been smashed, or at least should have been. We doubt that there is massive unhappiness with the regime waiting to explode, and we see no signs that the regime can’t cope with existing threats. The sanctions even provide Iran with cover for economic austerity while labeling resistance unpatriotic. As I have argued before, sanctions are an alternative to a solution, making it appear that something is being done when in fact nothing is happening.

There are numerous other issues Obama could address, ranging from Israel to Mexico to Russia. But, in a way, there is no point. Until the United States frees up forces and bandwidth and reduces the dangers in the war zones, it will lack the resources — intellectual and material — to deal with these other countries. It is impossible to be the single global power and focus only on one region, yet it is also impossible to focus on the world while most of the fires are burning in a single region. This, more than any other reason, is why Obama must conclude these conflicts, or at least create a situation where these conflicts exist in the broader context of American interests. There are multiple solutions, all with significant risks. Standing pat is the riskiest.

Domestic Issues

There is a parallel between Obama’s foreign policy problems and his domestic policy problems. Domestically, Obama is trapped by the financial crisis and the resulting economic problems, particularly unemployment. He cannot deal with other issues until he deals with that one. There are a host of foreign policy issues, including the broader question of the general approach Obama wants to take toward the world. The United States is involved in two wars with an incipient crisis in Iran. Nothing else can be addressed until those wars are dealt with.

The decision to focus on domestic issues makes political sense. It also makes sense in a broader way. Obama does not yet have a coherent strategy stretching from Iraq to Afghanistan. Certainly, he inherited the wars, but they are now his. The Afghan war has no clear endpoint, while the Iraq war does have a clear endpoint — but it is one that is enormously dangerous.

It is unlikely that he will be able to avoid some major foreign policy decisions in the coming year. It is also unlikely that he has a clear path. There are no clear paths, and he is going to have to hack his way to solutions. But the current situation does not easily extend past this year, particularly in Iraq and Iran, and they both require decisions. Presidents prefer not making decisions, and Obama has followed that tradition. Presidents understand that most problems in foreign affairs take care of themselves. But some of the most important ones don’t. The Iraq-Iran issue is, I think, one of those, and given the reduction of U.S. troops in 2011, this is the year decisions will have to be made.
