Breaking Out of the Box


It is a priori unlikely that a movie would enlighten viewers on subjects as different as autism and animal welfare, but there is such a film — a fine biopic that aired on HBO last year and is now available on DVD. It’s the moving story of Professor Temple Grandin, its eponymous heroine.

Even today, autism is not well understood. It is a neurodevelopmental disorder that usually manifests itself in the first two to three years of a child’s life. It is typically characterized by severe difficulty in communication and social interaction, and by limited, repetitive behavior (such as endlessly eating the same kind of food or watching the same TV show). While the cause is still unknown, it appears to be a genetic defect, afflicting about 1 or 2 children per thousand. Although most autistic children are never able to live independently as adults, some — often called “high functioning” — are.

Temple Grandin is arguably the most famous high-functioning autistic person in the world. She was born in 1947, and was diagnosed with autism when she was three. With the help of speech therapists, she was able to learn to talk, and with the help of her extremely high intelligence, she went to an elite boarding school, where a gifted teacher mentored her.

She went on to get a bachelor’s degree in psychology from Franklin Pierce College in New Hampshire, a master’s degree from Arizona State University, and a doctoral degree from the University of Illinois in the field of animal science. She is now a professor at Colorado State University.

The focus of her work has been on the means by which animals, especially cattle, perceive and communicate, through the sounds they make and the way they move. Her autism seems to give her a distinct advantage here, because (as she explains it) she thinks the way cattle do — visually and concretely, rather than verbally and abstractly, as ordinary people do.

Her career has focused on the design of cattle lots, holding pens, and slaughterhouses that are far more humane than those of times past. Over half the slaughterhouses in America now incorporate her designs. The movie conveys her work beautifully. In one scene, we see her figuring out what is spooking cattle as they move through a passageway, by getting down on all fours and walking the passageway herself, trying to capture visually exactly what is frightening them.

The movie also effectively conveys her view of our obligations towards animals, one that could be summarized as “Respect what you eat!” In her words, “I think that using animals for food is an ethical thing to do, but we’ve got to do it right. We’ve got to give those animals a decent life and we’ve got to give them a painless death. We owe the animals respect!”

The movie also deftly expresses certain aspects of Grandin’s autistic personality, such as her repetitive eating habits (she loves Jell-O) and her indifference to movies about emotional relationships (such as love!). In one charming scene, she is channel surfing and happens on the famous kiss-on-the-beach scene from the movie From Here to Eternity. She grimaces and quickly moves on to an action flick.

Temple Grandin is superbly done. Mick Jackson’s direction is sure and steady. He elicits perfectly pitched performances from the actors, performances that are emotionally true without being sentimental. And despite the serious nature of the story, the film is infused with humor. Julia Ormond is marvelous as Grandin’s mother Eustacia, evincing a combination of vulnerability and resilient strength. Catherine O’Hara is solid as Grandin’s Aunt Ann, whose ranch Grandin often visited. Another fine supporting actor is David Strathairn as Professor Carlock, a key mentor to Grandin in developing her understanding of science.

Especially wonderful is Claire Danes as Grandin. She shows us in sometimes painful detail what autism entails, and the suffering it brings, but she also shows us Grandin’s unique genius. Danes apparently studied her subject intensely, and it shows in her performance. She well deserved her Emmy for Outstanding Lead Actress.

The film was a clear success with both the critics and the audience. It was nominated for 15 Emmys, winning seven; besides Danes’ award for Outstanding Lead Actress, they included an Emmy for Outstanding Made for Television Movie.

This is a treat that should not be missed.


Editor's Note: Review of "Temple Grandin," directed by Mick Jackson. HBO Films, 2010, 103 minutes.





Getting Your Way


One of the most useful concepts I know is Friedrich Hayek’s distinction between freedom and power. Freedom, he says, is the right to be left alone; power is the ability to have and to do things. Confusion on this score can be fatal. The world is full of people who believe that their nation, race, or religion can be “free” only if it has power over its neighbors. Here at home, our government is bankrupting its citizens by forcing them to pay for everyone’s alleged “freedom” to have healthcare, to have a job, and to have 15 children whether one has a job or not.

Libertarians have rightly emphasized freedom in its true definition. But power is also important, and it is not a bad thing if it helps one to enjoy one’s freedom and master one’s own life. To get a job, maintain a home, gain money and respect, improve one’s existence materially and spiritually — these are good things; these are ways of shaping life creatively. Yet personal power can easily be squandered, passed off to others, in the sordid transactions of daily life.

This is where Sharon Presley comes in. Her book starts in this way:

“Experts and authorities can take your power away by intimidating, manipulating, abusing and bamboozling you. Examples are everywhere. Physicians tell you to leave your treatment to them because they are the experts. Bureaucrats give you the run-around. Clerks and customer service reps say it can’t be done . . . .”

She continues in that vein, because such conflicts are everywhere; and although many of them are unimportant in themselves, they are always discouraging. Remember the last time something went wrong with your computer. How many hours did you spend trying to interpret the “advice” you got when you resorted to the “help” button? How many hours did you then waste on the phone, fuming while an “expert” treated you like a child, suggesting to you that your machine might not be plugged in, putting you on hold, interrupting your attempts to explain, mystifying you with terminology ten times more opaque than even the insultingly unhelpful “help” pages?

If you’re like me, your day was ruined. You lost your cool, yelled at the “expert,” yelled again at his supervisor and his supervisor’s supervisor, felt helpless and guilty, and finally found yourself searching the phonebook for a fixit company that would charge you a hundred dollars an hour to repeat the process.

That’s not power, and that’s not life. But these conflicts are inevitable, and some of them are much more serious than that glitch in your downloads. Just consider what may happen on one of those awful, though possibly “routine,” visits to your doctor’s office. I well remember the horrors of dealing with the office staff of my former “primary healthcare provider” — people who put me off, wouldn’t listen when I talked, said they’d return phone calls but didn’t, communicated lab results long after they should have been available, and offered me no help at all when, facing a possible diagnosis of cancer, I was unable to get an appointment with a relevant specialist without waiting three months for it. Finally I located a hospital ombudsman (actually a woman) who was concerned about my plight and in a few days got me a quick appointment with a specialist — a magnificent doctor, who immediately found the cancer and removed it. Never once did my “primary healthcare provider” or his office check back with me.


At my next routine physical, which required months to arrange, I sat in the doctor’s waiting room for almost an hour after the scheduled time, wondering why medical doctors are the only people who keep you waiting like that. Then a fat nurse or para-nurse (all these people are fat) opened the door, boomed out “Cox” as if she were calling hogs, and led me into an examining room, where I sat for another half hour. At that point, I went crazy. When the doctor asked me how I was, I said, “Angry! I’m angry! I’m sick of being your patient and seeing myself and all your other patients being treated like cattle!” Then I recited what I’ve written above, except that by now I was shouting loud enough, I hoped, for the poor slaves in the waiting room to hear what I said.

What surprised me was the expression on the doctor’s face. It was obvious that he had never been talked to like that in his life. He wanted to object, but he didn’t know how, because he obviously had no idea of how his office operated, from anything like the patient’s end of things. I almost felt sorry for him — almost.

The next time I came back to that office, the situation had changed. I was now “Mr. Cox,” and there were more or less appropriate displays of civility. Later, I found that if I persisted to a moderate degree, I could actually get my calls put through to someone who knew something, without waiting weeks to obtain the information I required. This improvement may have had some relation to the fit I threw.

Was it worth it? I suppose it was. But perhaps I could have handled it better. I don’t want to live in a world in which people — even people like me — are always screaming at each other. I want things to work right, without my having to scream. I want the power to get things done, without throwing a fit.

Sharon Presley knows all about such situations, and she has excellent practical advice about how to deal with them. It’s not about the supposed delights of naked “self-assertion” (i.e., yelling). It’s about ways of gaining people’s attention and getting them to do what needs to be done for you, in the way that’s most likely to be successful and least likely to deplete your own energy. It’s not about sermons on self-esteem; it’s about gaining self-esteem by increasing your practical power. And of course a lot of it is about thinking through what authority figures, whether doctors or teachers or technical experts, have to say, to make sure that you possess enough information to take power over your own decisions. In short, a lot of it is about exercising your power of rational analysis.


Presley’s practical advice is divided into sensible categories: dealing with doctors, lawyers, teachers, bosses, merchants, and so on. The subheading of one of her chapters reveals her primary concern: “Dealing with Bosses without Getting Fired.” A book of psycho-babble would focus on “taking back your power” by “communicating your feelings” and expressing your “true identity.” Presley isn’t opposed to such goals, but she doesn’t want you to lose your job, either. You don’t have much power if you don’t have a job. Presley wants you to be yourself and keep your paycheck, too — in other words, to have your cake and eat it. Sounds good to me.

One excellent feature of this book is the fact that Presley bases her advice on the experience of hundreds of real people; there are no made-up characters. Another is that she seems to have consulted every book, article, and website in the field of “critical thinking,” personal power relations, and just plain good advice for the contemporary world. She tells you which texts she thinks are useful, and why. That’s a big gain.

I want to compliment Presley for her constant and persuasive suggestion that adults should act like adults. What she wants is for her readers to get their way, satisfy their legitimate demands, and achieve success and happiness. Dissident minorities, such as libertarians, are often taught to value themselves on behavior that is “right,” though self-destructive — or even right because it is self-destructive, as in the familiar zest for martyrdom. Presley will have none of this. She doesn’t want her readers to get locked into hopeless conflicts with The Man. She wants them — all of us — to succeed. She doesn’t mind getting down to basics:

“Develop a skill that you can succeed at. If you already have a skill, keep that in mind when you feel as if you can’t do things right. Perhaps there was a time when you were able to stand up to an authority figure. You lived through it, didn’t you? Remember your successes, not your failures.”

Isn’t that good advice? Wouldn’t we all be happier if we followed it? It’s a matter of perspective. Rather than banging the computer keys and screaming at that poor “technical consultant” in India, have some coffee, think about the good things you’ve done in your life, and turn to the chapter where Presley suggests how to deal with the immediate problem.


Editor's Note: Review of "Standing Up to Experts and Authorities: How to Avoid Being Intimidated, Manipulated, and Abused," by Sharon Presley. Solomon Press, 2010, 389 pages.





A New Record of Folly


A new report that hasn’t gotten much play in the afterglow of President Obama’s State of the Union address was the Congressional Budget Office estimate for the budget deficit this year. The CBO estimates that the deficit will hit a new record of $1.5 trillion. And it projects that the 2012 deficit will be $1.1 trillion. Even though this will be the third year of trillion-buck-plus deficits, and it will top last year’s record-setting deficit of $1.4 trillion, no doubt Obama will continue to blame Bush (whose largest deficit was a little over $400 billion, in 2008). But that rhetorical trope is working less and less well for Obama.

Obama seems now to be aware that the populace is becoming increasingly alarmed at these unprecedented budget deficits. His speech proposed a meager deficit reduction — about $400 billion over ten years.

The big question is how much stomach the Republicans have to push for steeper cuts. The public is now aware of the problem but is still fiscally incontinent: it favors cutting the budget generally but opposes cutting specific popular programs. Even Tea Party members typically support the major entitlement programs (Medicare, Medicaid, and Social Security) that are metastasizing most rapidly.

There are slight signs that the Republicans may strap on some stones and step up to the plate. Orrin Hatch (R-UT) has once again proposed a balanced budget amendment, stronger than the amendment that failed by one vote in the Senate 14 years ago. Hatch’s version would require a two-thirds majority for Congress to raise taxes. But while a dozen Republican senators have signed on, no Democrats have, indicating that this has little chance of passing.

Sen. David Vitter (R-LA) has raised the deficit issue in a different way. He has asked the administration to provide actual figures on the drop in offshore drilling revenue collected by the federal government. That information would show just how much the Obama slowdown of offshore drilling has cost and will cost the government in revenue — and hence how much it has added to the deficit. One calculation puts the amount of revenue lost to lower production in the Gulf at about $3.7 million a day.

But most interesting is the legislation introduced by Rep. Jim Jordan (R-OH) and Sen. Jim DeMint (R-SC) that would cut the deficit by $2.5 trillion over ten years. Some of the savings would come from the elimination of a number of programs and agencies, such as the US Agency for International Development (savings: $1.39 billion a year) and from cutting the subsidies of the Corporation for Public Broadcasting ($445 million a year), Amtrak ($1.5 billion a year), and high-speed rail ($2.5 billion a year). But most of the savings would come from rolling back all non-defense discretionary spending to 2006 levels across the board, and keeping it there until 2021.

It is unlikely that this proposal will even come to a vote in the Senate, much less be signed into law by Obama. Even if it did, as exemplary as it is, it would not address the real threat, which is the “Entitlement Explosion” — the ballooning costs of Medicare, Medicaid, and Social Security, costs that are going to drive the country’s economy to the wall as the Baby Boomers quite predictably age, retire, and sicken. In fact, the CBO just reported that this year Social Security will run a deficit of $45 billion (or $130 billion, if the cut in payroll taxes is included), and will continue to run deficits until the bogus “trust fund” is exhausted in 2037. As late as last year, please note, Social Security was projected to run surpluses until 2016.

The entitlement programs are what really endanger the country (and I haven’t even mentioned the state employee pension fund liabilities). The American people haven’t yet reached what I call the public-choice tipping point, the point at which a problem becomes so large that it is no longer rational for the average citizen to be ignorant of it. So the Republicans may do a little by way of deficit reduction, but I wouldn’t hold out hope that they will do a lot — the public isn’t there quite yet.

Give it a few more years, and a bout of high inflation . . . and that may finally do the trick. By then, of course, the economy will be in ruins. Rational ignorance is such a bitch, isn’t it?






Shot — Countershot


Four years ago, a Chicago real estate agent by the name of John Maloof discovered a large collection of candid “street” photographs taken in the 1950s and 1960s by a Chicago nanny named Vivian Maier. Helped by the internet and a carefully calculated public release, Maier’s work is now attracting tremendous popular interest, catching the eyes of both scholars and photographers.

I first encountered Maier’s work while searching the web for articles on photography and photographic equipment, as is my habit. Upon finding a blog with Maier’s work and self-portraits prominently displayed, my first thought was, “These photos look like they were taken by the world-famous Diane Arbus.”

Granted, I’m no expert on Arbus’ photography, but what I’ve heard about it always returns to the words “surreal” and “weird” as descriptions of its peculiar quality. Beyond simply the strange appearance of many of her subjects, I guess it’s a kind of instantaneousness in her photos that makes them appealing, something about how life can start to look strangely discordant when chopped into little temporal slices by the click of a shutter. That weirdness is certainly there in Maier’s work, too. Often, Maier’s style — resulting from her close proximity to her subjects and the “avant-garde” timing and framing of her shots — seems nearly identical to that of Arbus.

It takes a kind of brusqueness and self-assurance to get past strangers’ personal barriers and produce this kind of photo, an attitude somewhere between the “interested observer” and the “invader of privacy.” It’s a psychological barrier that I struggle with, as do many other photographers. In my copy of “The Honeywell Pentax Way” (a 1966 guidebook intended for amateur users of Pentax 35mm single-lens-reflex cameras), the author advises photographers not to wait until a scene is devoid of human content, but to shoot pictures of people “without hesitation . . . [I]f they object, tip your hat and smile.”

The only times I've been able to break that barrier are when I’ve been shooting photographs of political protests or street scenes in foreign countries. In both cases, the barrier is easy to cross. Usually, no one cares about what you’re shooting (or even notices that you’re doing it). As for photographs taken in foreign parts, a practitioner of “subaltern” theory would say that tourists are simply practicing the “hegemony of the foreigner” when they snap a picture. Take the theory for what it’s worth; I’ve never paused long enough behind the camera to ask myself, “Am I being hegemonic?” Whether that question ever crossed Arbus’ or Maier’s mind, we may never know. If it did, it wouldn’t have helped them break any barriers, personal or artistic.

Whenever I think about their kind of photography, I remember an incident that took place when I happened to be on the other side of the camera, the subject side. My family and I were taking a trip to Death Valley and Las Vegas on one of those quirky tours run by a Chinese travel agency, and tailored specifically for Chinese wallets and efficiency — that is, we were given only 15 minutes per scenic vista, were housed in cheap hotels that reeked of equally cheap cigarettes, ate third-rate buffets for almost every meal, and were herded on and off the bus like cattle.

At one buffet stop — and these were all Chinese buffets — we were turned out of the bus in some Nevada country town, approximately in the middle of nowhere. We were waiting outside while the guide went in to secure tables, when a scrawny, dark-haired teenager with an inverted baseball cap suddenly glided up to us on his skateboard. He produced a Pentax 35mm film camera with a 50mm lens, of the type ordinarily used by high school photo students and starving artists. He started to take pictures of us, moving down the line of tourists, all of whom gawked back at him. Watching him snap away, I realized that if I were in his position, and didn’t have the embarrassment I usually have about photographing strangers, I might have taken the same pictures. And it was a real Diane Arbus moment — a bunch of Asian tourists waiting outside a run-down buffet, in an equally run-down strip mall with an otherwise deserted parking lot. Weirdness and discord!

In this case, however, the subjects resisted — at least some of them. When the teenager skated up to my family and started to photograph us from close range, my dad bristled. “Don't give that punk the benefit of a smile,” he said in Chinese. We all scowled at the kid — but he didn't let up. He just snapped and grinned maniacally at us (I suppose that’s what Arbus looked like, when she was working) until he skated away. I imagine that somewhere, in some art gallery, there is now hanging a beautiful black and white print of us, an angry-looking family of Chinese tourists waiting impatiently outside a country town buffet. The photograph is probably entitled “Untitled No. 4,” or “The Visitors,” and the teenager has made a lot of money out of it. One might say that he, Arbus, and Maier were all cut from the same cloth — or perhaps printed from the same negative.

But were they? I think not. As photographers and persons known or unknown, they were all individuals in their own right, for better or for worse. After all, there is “human content” on both sides of the lens.







The Return of Moktada


On January 5th, radical Shia cleric Moktada al-Sadr returned to Iraq from more than three years of self-imposed exile in Iran. He brought with him the specter of renewed violence in that war-torn country.

For those readers who have done their best to forget America’s Iraq misadventure, here’s a bit of background. Al-Sadr is the son of a revered Shia imam who was murdered by Saddam Hussein in 1999. The younger al-Sadr became prominent by leading Shia opposition to the American occupation after 2003. In 2004 his militia, the Mahdi Army, twice battled US troops. Though not victorious, the Sadrists lived to fight another day. Al-Sadr also avoided arrest by US forces on a warrant issued against him for the murder of another cleric. America thus failed to nip in the bud the young cleric’s militant movement.

During the civil war of 2006-07, the Sadrists carried out brutal sectarian cleansings in Baghdad and elsewhere. Even the onset of the American surge of ground troops in early 2007 failed to slow the pace of the carnage. At the same time, the Mahdi Army began to slip out of al-Sadr’s control; by the summer of 2007 the frenzy of violence caused even many Shia to turn against the Sadrists. Then the weight of American power began to have an effect; many Sadrist cadres were killed or captured by US troops. At the end of August al-Sadr declared a unilateral ceasefire and took himself off to the Iranian holy city of Qom, where he sought safety and the opportunity to polish the rather rough edges he had displayed as a political and religious leader.

In his absence the government of Prime Minister Nouri al-Maliki, given a breathing space by the apparent success of the Surge, was able to consolidate its hold on power. In early 2008 Iraqi government forces, backed by US and British logistical, intelligence, and air support, defeated the Sadrists first in Basra, Iraq’s second-largest city, and then (though less decisively) in Baghdad itself. The Sadrist movement had reached its low point. Even so, however, it had once again survived. “We may have wasted an opportunity . . . to kill those that needed to be killed,” an anonymous US official stated at the time. Today that official looks more and more like a prophet.

After the Basra and Baghdad defeats the Sadrists eschewed the gun in favor of the ballot. They scored surprising successes in local elections in 2009. Then, in national elections this past March, they emerged as the second largest Shia bloc, barely trailing al-Maliki’s party. As a result, al-Sadr became a kingmaker; Maliki’s reappointment as prime minister in late 2010 was possible only because the Sadrists supported him. In return they received ministerial posts and at least one provincial governorship. They are in the enviable position of having power without true responsibility: if the government succeeds, they will share in the credit; if it fails, they will blame al-Maliki and bring the government down. The Sadrists have made it clear that al-Maliki has only so much time to restore services, revive the economy, and end what’s left of the American occupation.


The question of a continued American presence is a vexing one for all concerned — except the Sadrists. There are fewer than 50,000 US troops left in the country. Under an agreement negotiated by the Bush administration, all US forces are supposed to leave by the end of 2011. The Obama administration has stated that it would consider an extension of the US military presence only if Iraq requests it. Al-Maliki would very much like to see some US troops remain, as would the Kurds and most of the Sunnis. But al-Maliki risks looking like an American puppet if he asks for an extended troop presence. The Sadrists, on the other hand, are unequivocally opposed to any US troops remaining after the Dec. 31, 2011 deadline. Their attitude is not merely designed to appeal to Iraqi nationalist feeling. At some point in the future the Sadrists could decide to seize power. They probably would have a good chance of succeeding, provided US troops are not available to stop them.

The US State Department is supposed to take over the American role in Iraq’s security after 2011. Its active arm will be thousands of contractors (that is, mercenaries) whom it has been hiring and trying to put in place before the last uniformed Americans depart. While the Wikileaks revelations have shown that US diplomats are an intelligent and dedicated group of professionals, the idea of putting diplomats in charge of security in a place like Iraq seems a dicey proposition indeed. The employment of contractors will undoubtedly lead to incidents in which Iraqi civilians are killed. The reaction of the Iraqi populace, and specifically the remaining militias, is all too easy to predict. Recall the burned bodies of American contractors hanging from a bridge in Fallujah in 2004.

The Sunni insurgency, despite heavy blows inflicted by US and Iraqi forces, remains able to carry out widespread and damaging attacks. It may in fact be on the brink of a resurgence, for many Sunnis who joined the pro-US, pro-government Awakening movement have grown disaffected with a Shia-dominated government that has cut back on cash payments and jobs for Sunnis.

We have then the makings of a new explosion in Iraq, with no prospect of an American “Surge II” should the worst occur. Into this maelstrom steps Moktada, the prophet and redeemer of the Shia masses and of the armed fanatics who thirst to avenge past beatings received at the hands of the Americans and al-Maliki. One is reminded of the situation in Petrograd in 1917, with al-Maliki in the role of Kerensky and al-Sadr as the “plague bacillus,” Lenin. Admittedly the two men are, for the present, partners, which Kerensky and Lenin never were. But one cannot help feeling that, given the past, their paths must diverge. It may be one, or two, or four years before the situation plays out. But I can’t help thinking that one or the other of these men is going to wind up dead.

 








Being Green Is Not a Sign of Health


There are two new reports in the Wall Street Journal about flops in the green energy movement — further illustrations of how much hype there is in it.

The first (Jan. 19) reveals that the vaunted new, “amazingly energy efficient” compact fluorescent light bulbs aren’t so energy efficient after all.

Pushing the hapless consumer to replace incandescent bulbs with CFLs (compact fluorescent lamps) has been the received wisdom among lawmakers for years, nowhere more so than in California, the ever-green state. California’s utilities alone spent $548 million over the past seven years on CFL subsidies. In fact, California utilities have subsidized over 100 million CFLs since 2006. And on the first of this year, the state started phasing out incandescent bulb sales.

Of course, when I say that the California utilities have been subsidizing the CFLs, I really should say that the aforementioned hapless consumers have been doing so, because all the subsidy money — about $2.70 out of the actual $4.00 cost of the CFL, i.e., more than two thirds of the actual cost — is paid by the consumer in the form of higher utility rates.

Naturally, the rest of the country — and, for that matter, the world — is set to follow California’s lead on CFLs. A federal law effective January 1 of next year will require a 28% step-up in efficiency for incandescent bulbs, and will ban them outright by 2014. One consequence of this federal policy — unintended, perhaps, but nonetheless foreseeable — is that the last US plant making incandescent bulbs has been shut down, while China (which now makes all the CFLs) has seen even more of a jobs expansion, and is able to buy even more of our debt.

The UN is also pushing CFLs to help solve global warming, estimating that about 8% of all greenhouse gas emissions worldwide are caused by lighting. The World Bank has been funding the distribution of CFLs in poorer nations. Last year, for example, Bangladesh gave away five million World Bank funded CFLs in one day.

But now — surprise! — California has discovered that the actual energy savings of switching to CFLs were nowhere near what was originally estimated. Pacific Gas and Electric, which in 2006 set up the biggest subsidy fund for CFLs, found that its actual savings from the CFL program were collectively about 450 million kilowatt hours, which is only about one-fourth of the original estimate.

There are several reasons why the switch to CFLs hasn’t lived up to expectations. First, not as many of the heavily subsidized CFLs were sold as originally estimated. PG&E doesn’t say why, but I will hazard a guess, based on personal experience. Many consumers dislike the light produced by CFLs, which they find dimmer or more artificial in its effect. Also, many complain that the lights create static in AM radio reception. In a free market (i.e., one that, among other things, contains no subsidies), it is likely that few consumers would want to switch.


Second, the useful life of the CFLs is less than 70% of original estimates. Amazingly, the estimates were based on tests that didn’t factor in the actual frequency with which consumers turn them on and off. Unlike the old incandescent bulbs, CFLs burn out more quickly the more often they are switched on and off.

Not mentioned in the story is the fact that CFLs contain mercury, and so are supposed to be specially disposed of (which presents an added cost to the consumer in time, money, and energy). The alternative is for the consumer to throw them out in the regular trash, making toxic waste sites out of ordinary trash dumps, with future clean-up costs of God only knows what.

The second Journal story (Jan. 18) reports that Evergreen Solar has closed its Massachusetts plant and laid off all the workers there.

This is deliciously ironic. Evergreen Solar was the darling of Massachusetts. Governor Deval Patrick, devout green and all-around Obama Mini-Me, gave Evergreen a package of $58 million in tax incentives, grants, and other handouts to open a solar panel plant there. In doing so, he simply ignored Evergreen’s lousy track record — a record of losing nearly $700 million in its short life (its IPO was in 2000), despite lavish subsidies from federal and state governments.

Now Evergreen is outsourcing its operations, blaming competition with China, and whining like a bitchslapped baby about China’s subsidies of its solar energy and its lower labor costs. But Evergreen has itself sucked up ludicrously lavish subsidies, and it knew all along about China’s labor rates compared to Massachusetts’.

So Patrick winds up looking like a complete ass, and the taxpayers of Massachusetts wind up eating a massive loss.

But that’s not all. Near the end of last year, the Journal (Dec. 20) revealed still another case of American crony capitalism, of the green sort. It turns out that the wind industry — aptly dubbed “Big Wind” — copped a one-year, $3 billion extension of government support for wind power. It was part of the end-of-2010 tax deal.

Originally, this government subsidy was a feature of the infamous 2009 stimulus bill, under which taxpayers were forced to cover 30% of the costs of wind power projects. The American Wind Energy Association (AWEA) begged for the subsequent bailout, because without it 20,000 wind power jobs would be lost (one-fourth of all such jobs in America). But despite the billions in subsidies, Big Wind is sucking wind; its allure is dropping like a stone. The AWEA’s own figures show a 72% decline in wind turbine installations from 2009, down to the lowest level since 2006.

Besides trying to make the 30% subsidy(!) permanent, the AWEA is pushing for a national “renewable energy” mandate that will force utilities to buy a large chunk of the power they sell from renewable sources (mainly solar and wind), irrespective of the fact that the price of renewable energy is sky high. The association has gotten more than half the states to enact such mandates, with higher energy bills for consumers as the result. Not surprisingly, Big Wind is also pushing the EPA to make energy from fossil fuels vastly more costly.

According to the federal government’s own figures, wind and solar take 20 times as much subsidy to produce a unit of electricity as do coal and natural gas. So you can see why Big Wind keeps blowing smoke up the public’s rear about the fabulous future of renewable energy. You can also see why Big Wind is such a big contributor to the campaign coffers of Democratic politicians. They are the only ones who keep this outrageous boondoggle awash in money.

Meanwhile, the promises of green energy look more and more hollow, every day.






At Least Some People Get It


The Obama administration continues to scratch its collective head over what to do about creating jobs. After the disastrous failure of the numerous mega-billion-buck bailouts intended to lower the unemployment rate, even the now happily departed lame-duck Congress refused to pass another massive pork bomb. The Obamanistas, devoted Keynesians all, have pushed through more spending more quickly than any other administration in history. The national debt, which stood at $13 trillion on June 2, 2010, closed the year at $14 trillion. So we have spent beyond the dreams of Keynes’ avarice, and the unemployment rate still hovers near 10%.

Meanwhile, up in the Great White North, our Canadian friends have shown the way. For the fourth year in a row, they are lowering their federal corporate tax rate. It has just been dropped to 16.5%. This is less than half the American federal rate of 35%. Amazing, considering that Canada is sometimes supposed to be the pure welfare state, while we are the pure capitalist one.

And it won’t stop there. In 2012, the Canadian federal rate will drop to 15%, bringing the combined federal and provincial rate on businesses to about 25%. Back in 2000, the combined Canadian corporate income tax rate was 42.6%, so the decline has been dramatic.

Besides cutting the corporate tax rate, the Canadian government has eliminated corporate surtaxes as well as levies on capital.

All these incentives, combined with Canada’s healthy financial sector — Canada never created crazy government agencies to encourage and then purchase bad mortgages (it apparently grasps the concept of moral hazard!) — are enticing increased business investment. Spectra Energy of Houston, for example, has decided to invest $2 billion in Canadian energy and infrastructure projects. The Citco Group, a financial firm, has decided to open its only North American bank in Canada. And the big accounting firm KPMG has moved many of its operations to Canada.

American corporate taxes remain the second highest in the industrialized world. Our competitors to the north have grasped the idea that to tax an activity is to deter it. The Canadians obviously want more business, not less. And the reason they want more is that they grasp the fact that business creates jobs.






Escape and Transformation


War is evil, not only because of the atrocities committed by invading armies, but also because of the atrocities that victims are sometimes forced to commit in an effort to survive.

As The Way Back begins, Janusz (Jim Sturgess), a young Polish freedom fighter, is being interrogated by the Russian police. Janusz’s wife, in obvious agony from both physical torture and mental anguish, testifies against him, and he is sent to Siberia. There he meets a variety of prisoners, some incarcerated for political crimes and others for street crimes. The true criminals run the living quarters, and the guards run the work camps.

The most notorious prisons have always been guarded not by men but by nature. Devil’s Island, Alcatraz, and Ushuaia at the southern tip of South America were all known for harsh surroundings that made successful escape virtually impossible. Siberia, the prisoners are told, is surrounded by “five million square miles of snow.” Nevertheless, Janusz and others hatch a scheme to escape the prison and make their way across the Trans-Siberian tracks to Mongolia with a motley group of friends that includes Tomasz (Alexandru Potocean), an artist; Khabarov (Mark Strong), a pastor; Smith (Ed Harris), an American; and Valka (Colin Farrell), a common street thug. When Smith warns Janusz that not everyone will make it alive, Janusz responds, “They won’t all survive, but they will die free men.”

Within the prison we see the free market at work as the men barter their skills for meager material goods such as cigarettes, food, and clothing. One man paints pictures; another offers protection; yet another tells stories. Each of these men harbors a secret sorrow that fills him with unspeakable regret — regret for something he has done, as a result of the war, to a friend or family member; regret that drives him forward, seeking absolution or perhaps punishment. Janusz is driven by the determination to tell his wife he forgives her and release her from the self-loathing he knows she must feel for having informed against him.

The trek across 4,000 kilometers of snow and desert leads many of the men to a soul-cleansing sacrifice. This theme is personified in the portrayal of Irena (Saoirse Ronan), a girl they meet along the way. As they cross the Mongolian desert, one of the men weaves for her a large hat of bent twigs to protect her from the searing sun. The hat brings to mind the crown of thorns Christ wore during his trek toward Calvary. As they march across the desert she leans heavily upon a wooden staff and falters several times, falling to her knees and then being helped up by the men. At one point, she lies on the sand and the hat falls behind her to reveal the soft blue scarf she wears beneath it. She looks up at them with the calm serenity of a Madonna and smiles a peaceful benediction at them. If more proof is needed that she is a combination Madonna and Christ figure, Irena even walks on water — well, she runs across a frozen river — and she gently washes Smith’s blistered feet when they find an oasis. These are small moments in the film, but they express one of its major themes in a subtle and moving way.

The richly orchestrated original score by Burkhard von Dallwitz contributes to the emotion of the film and keeps most of the audience in their seats till the end of the credits, savoring the experience. The cinematography by Russell Boyd is also gorgeous, focusing on the grand landscapes of the desert, the Himalayas, and the starlit skies, as one would expect from a film produced by National Geographic. Some wide-angle scenes of the weary travelers are so perfectly composed that they give new meaning to the phrase “moving pictures.” These are literally photographs that move. Director Peter Weir adds to this impression by presenting each scene as a separate snapshot of the journey, without narrative transition. At times one almost feels that one is turning the pages of a photo album.

This does not detract, however, from the development of the characters and their story. When they start their journey, the men are almost like animals. They eat food that has been stomped into the earth; they lap water from muddy pools. Colin Farrell as the Russian criminal, Valka, paces like a lone wolf on the outskirts of the group. He is ruthless, unpredictable, and inhumanly willing to kill for survival. Smith only half jokingly calls Janusz’s kindness a “weakness” that he plans to exploit when he needs someone to carry him. At one point the men chase a pack of wolves away from a freshly killed animal, then fall onto the carcass themselves, tearing at the raw meat and elbowing one another out of the way in their frenzied hunger.

These scenes are harrowing. But they do not dominate the movie. Even more impressive is the symbolic transition from the darkness of the Siberian forest to the bright light of the desert. The further the men journey from their physical prison, the more their sense of humanity returns, releasing them from their internal prisons. The Way Back is not just a movie about traveling back home, but about finding a way back from the darkness of war to the light of human dignity and self-respect. It is truly a wonderful film.


Editor's Note: Review of "The Way Back," directed by Peter Weir. National Geographic, 2010, 133 minutes.





A Cigar


In my youth, I was spoiled for a long time. No one really spoiled me. I took care to spoil myself, again and again. But the bitch Reality often intruded.

I was spending a dream summer on a small Mexican island on the Caribbean. Everyone should have at least one dream summer, I think, and no one should wait for old age. I had several dream summers myself. Anyway, my then-future-ex-wife, or TFEW (pronounced as spelled) and I were renting one of four joined concrete cubes right on the beach, on the seaward side of the island.

There was no running water in the cell, but you could clear the indoor toilet with a bucket of seawater. You could also buy a bucket of nearly fresh water for a shower. There was a veranda, and the doors locked. We slept in our own hammocks outside in the sea breeze most nights, although there was a cot inside. We also cooked on our butane stove on the veranda. We thought it was all cool. There was a million-dollar ocean view (probably an underestimate).

To feed ourselves, we bought pounds of local oranges and bread baked daily. Mostly, I dived for fish and spiny lobster all day. It got to the point where we grew tired of lobster. I even went to the water's edge slaughterhouse early one morning to compete for some shreds of bleeding turtle meat. Turtle meat, it turns out, looks like beef, but it tastes like old fish. Then I invented new ways of cooking lobster. The TFEW was a good soldier who liked reading. Also, her patience was frequently rewarded (but I am too much the gentleman to expand on this).

One morning, I woke up by myself near dawn and prepared my Nescafé, bent over the small butane stove set on the tiled floor of the veranda. I looked up to the sea for a second and was hit by a scene out of the great Spanish director Luis Buñuel. Less than one hundred yards from me, bobbing up and down but stationary, was a low wooden boat packed with about 50 or 60 people just standing silently. They were not talking, they were not shouting, and they were not moving. It was like a dream, of course, but I knew I was not dreaming. Quickly, details came into focus. One detail was that one of the people in the boat wore a khaki uniform and the characteristic hat of the Cuban militia. Goddamn, I thought, this is what I have been reading about and seeing on television for years! It’s the real thing!

Then the practical part of my brain took over. I tried to yell at them that my rocky beach was not a good place to land. One made a gesture indicating they could not hear me because of the small breakers. Bravely, I abandoned my undrunk Nescafé and dived into the waters I knew well, because I had taken a dozen lobsters right there, under the same rocks, in front of my door. I did the short swim in a minute or two, and hanging from the side of the boat I told them how to go around a nearby point past which there was a real harbor. They thanked me in a low voice, like very tired people, in a language that was clearly Spanish but that sounded almost comical to my ears.

An hour later, I walked to the harbor, where the main café also was, to find out about my refugees. Naturally, I felt a little possessive of them, since I had discovered them all by myself. Soon after I arrived, they started coming out of processing by the local Mexican authorities. (Incidentally, I think I witnessed there a model of humane efficiency worth mentioning.) Each walked toward the café, an envelope of Mexican pesos in hand.

A tall, skinny black Cuban recognized me from earlier in the morning, when I was in the water. He walked briskly to me and took me in his arms. It was moving but pretty natural, since I was the first free human being he had laid eyes on in his peril-fraught path to freedom. He spoke very quickly, with an accent I was not used to. What perplexed me was that he kept saying “negro,” with great emotion. After a few seconds in his embrace, I realized he was calling me “mi negro.” I wondered for an instant how I had become a Negro’s Negro. Then it came back to me, out of some long-buried reading, that Cuban men sometimes call their mistress “mi negra,” irrespective of color, the overt color reference serving as a term of endearment, of tenderness.

I took my new buddy to the café to buy him breakfast. He pulled out my chair ceremoniously and took an oblong metallic object out of the breast pocket of his thin synthetic shirt. This he handed to me with tears in his eyes. Inside was a long Cuban cigar. I did not have the heart to tell him I did not like cigars. I smoked the damn thing until my stomach floated in my throat. He watched beatifically, in the lucid understanding that that little act testified to his personal victory against the barbarism of communism.






Are You Kidding?


Jonathan Alter, a leading Obama sycophant, appeared on a talk show recently with Washington Examiner columnist Tim Carney. The topic turned to a comment made by Rep. Darrell Issa (R-CA), who observed that Obama’s administration is “one of the most corrupt administrations ever.” Issa’s claim sent Alter into an apoplectic rant, in which he insisted there was “zero” evidence of corruption.

To Alter’s evident surprise, Carney immediately demurred, incredulous that Alter couldn’t see any evidence. While Carney said that Issa’s comments were “hyperbolic,” he gave a few choice illustrations, and later listed a few more on the Examiner’s website. Carney’s list included the following:

1. Ex-Google lobbyist Andrew McLaughlin, employed by the White House as a tech policy specialist, chatting with Google lobbyists about the rules governing “Net Neutrality,” from which Google stands to gain. (Carney’s blog doesn’t mention it, but Google gave lavishly to the Obama campaign.)

2. A former Goldman Sachs lobbyist became the Treasury Department’s chief of staff within nine months of leaving his Goldman job. (Again, we can add that upper-level management at Goldman Sachs was a big contributor to Obama’s coffers.)

3. Former H&R Block CEO Mark Ernst was hired by Obama to help the IRS write new regulations on tax preparation (which Block subsequently endorsed, because it stands to benefit from them).

4. Obama officials have met “off-campus” repeatedly in order to dodge the Presidential Records Act.

5. When AmeriCorps Inspector General Gerald Walpin started investigating Obama’s friend and supporter Kevin Johnson, Walpin was summarily fired.

6. In the Obama-orchestrated Chrysler bankruptcy, the UAW was given the bulk of the stock in the new company (while the secured creditors were burned). Naturally, the UAW was a big contributor to Obama’s campaign.

As good as Carney’s list is, it only scratches the surface. Here are a few more illustrations:

7. Besides the crony bankruptcy deal that gave the bulk of the stock in the new company to the UAW, there was a similar deal that gave a big chunk of the new GM stock to the UAW.

8. Moreover, in the recent IPO, the feds held onto the public’s shares while the UAW was allowed to dump its own. This ensured that the UAW pension fund would be made whole.

9. ACORN, the bogus “community service” group for which Obama was counsel and which worked so hard to register voters (even fictional ones) on Obama’s behalf, received lavish amounts of “stimulus” money.

10. When Obama was campaigning for Democrats in the last election, he doled out stimulus money in the states he visited (especially Nevada). Indeed, the stimulus fund was a grab-bag of cash for Obama’s backers. A disproportionate share of the money was spent in precisely those states that supported Obama.

As to whether Issa’s statement was “hyperbolic,” I don’t think so. The Obama administration has racked up a lot of corruption in its two years in office. If it isn’t one of the most corrupt administrations in history, it is surely on track to become so.






Latin America: Autumn of the Antipodes?


In response to last November’s flooding, mudslides, and destruction of homes that left tens of thousands of Venezuelans destitute in makeshift shelters, Hugo Chávez went camping to show solidarity with his people. The luxurious tent was a gift from Libyan strongman Muammar Gaddafi. Piling insult atop bad grace, Chávez was photographed inspecting the devastation in a Cuban(!) military uniform. He is not one to dwell on the negative. Instead of looking grave, concerned, and statesmanlike, he pursed his lips and spread his cheeks in a smirk that bordered on the manic, sticking his head out the window of a Jeep like a dog without good sense. Not since Mexican President Antonio López de Santa Anna (he of the Alamo) buried his leg with full military honors has a Latin American leader been this much fun.

But for Chávez, the bigger crisis is his impending loss of power. Though the 2010 parliamentary elections netted his United Socialist Party 95 seats, the opposition, successfully united, won 64 seats, thus depriving Chávez of the two-thirds and three-fifths majorities required to pass organic and enabling legislation or fiddle further with the constitution. But never mind, Hugo has a plan.

He requested that the lame-duck legislators grant him unlimited powers to rule by decree for 18 months, limit legislative sessions to four days a month, turn control of all parliamentary commissions over to the executive branch, limit parliamentary speeches to 15 minutes per member, restrict broadcasts of assembly debates to only government channels, and penalize party-switching by legislators with the loss of their seats. The lame-duck legislators dutifully complied. Vice President Elías Jaua says the powers are necessary to pass laws dealing with vital services after the disaster and with such areas as infrastructure, land use, banking, defense, and the “socio-economic system of the nation.” For good measure, the lame-duck Chavista legislature also passed a law barring non-governmental organizations such as human rights groups from receiving US funding; another law terminating the autonomy of universities; more broadcasting and telecommunications controls; and the creation of “socialist communes” to bypass local governments.


Opposition newspaper editor Teodoro Petkoff called it a “Christmas ambush,” writing in his daily Tal Cual that Chávez is preparing totalitarian measures that amount to “a brutal attack . . . against democratic life.” Chávez’s end-run around the new National Assembly, which convened on January 5, was blatantly illegal, as such emergency powers can only be granted by the legislature to cover a period within the term of the legislature in office. Chávez demanded powers that extend well beyond the previous legislature’s term and effectively emasculate the new legislature, which would never have given him the two-thirds vote he would need, since 40% of its members are in opposition. Venezuelans have reacted with roadblocks and peaceful but energetic mass resistance. Security forces and government thugs have counter-reacted violently. Many people have been injured, not only physically but economically as well. On New Year’s Eve the bolivar lost 40% of its value, falling from 2.6 to the dollar to 4.3.

The most infamous precedent for this maneuver was the German Reichstag’s March 1933 enabling law granting Adolf Hitler the right to enact laws by decree for four years, making him dictator of Germany. No doubt the affair will end up in court — decided by Chávez-appointed judges. Still, it’s only a matter of time before the ship of state either turns or crashes.

By contrast, Sebastián Piñera, Chile’s new president, responded to the March 2010 earthquake with grace and alacrity; and six months later rallied the country behind the 33 miners trapped for 70 days in a deep mine cave-in. Unlike President Obama, in his autarkic response to the BP fiasco, Piñera requested and received international assistance. But Piñera is perhaps more notable as the poster boy of a subtle, newly evolving trend throughout Latin America, a trend only now being recognized: the “normalization” of its politics.

Normalization means the peaceful alternation of center-left with center-right governments, which is the status quo in most developed, liberal democracies. Piñera, a center-right candidate, followed two decades of center-left government.

By definition, normalization is dull, boring, and bereft of transformational ideals. But it is nonetheless great news, especially when compared to the radical swings of the past, when left-wing revolutions followed right-wing golpes de estado (or vice versa). The inevitable mayhem, war, and death were always followed by authoritarian regimes.


In Chile, the Marxist government of Salvador Allende, which had begun to forcibly expropriate property, was overthrown by a military coup after inflation exceeded 140%. To restore order, General Augusto Pinochet brutally imposed a right-wing authoritarian regime. To his credit, however, he laid the groundwork for the political and fiscal stability Chile now enjoys. He invited the so-called Chicago Boys — Milton Friedman’s acolytes — to design a stable and prosperous economic framework, and he relinquished power slowly and honestly by means of a new constitution and open plebiscites. In 1989, Pinochet lost an election to the Concertación, the center-left coalition that would hold power until Piñera’s election.

As Fernando Mires, a Chilean political science professor at the University of Oldenburg, Germany, has observed, “Everything that does not directly deal with war and death is a game.” Politics is a game, and games require rules. Once war and death enter the scene, the game of politics is over. Latin America is now anteing up to the table. Though not always perfectly correlated, political stability goes hand-in-hand with some degree of fiscal and institutional stability — preconditions for people’s ability to lead healthy, productive lives.

Independence and Chaos

When Father Miguel Hidalgo’s Grito de Dolores declared Mexican independence from Spain on September 16, 1810, it began a protracted independence movement throughout the continent. Two days later, Chile, at the other end of Latin America, instituted de facto home rule. To be sure, Haiti had already defeated Napoleon in 1804 to gain independence, and Cuba would throw off Spanish rule with US help as late as 1898 (and, some would argue, didn’t actually achieve full independence until the Castro regime nullified the Platt Amendment, which gave the US a veto over Cuban foreign affairs). But the main course of events took place within two decades.

In 1821, Spain recognized Mexico’s independence. By 1823, after the United States had recognized the independence of much of the rest of Latin America (with Portugal ceding Brazil in 1822), US president James Monroe felt comfortable enough to declare the Americas a European-free zone, in spite of Spanish forces still holding out in what was to become Bolivia.

Latin American independence movements were products of the Enlightenment, influenced by the US Declaration of Independence and subsequent constitution — in the context of the times, left-wing revolutions. But, as Marxist commentators never fail to decry, the American revolutions were not “true,” social revolutions, but rather bourgeois realignments. The original Spanish conquest had left most of the basic indigenous structures of authority intact, replacing Moctezuma and Atahualpa with the throne of Madrid. Latin American independence movements recapitulated that strategy, replacing the Spanish aristocracy with homegrown landed gentry.

Meanwhile, a new model of revolution had emerged: the French Revolution, in which the ideals of the Enlightenment metastasized into a nightmare. The monarchy was decapitated, the ancien régime swept away, an empire founded. Traditional concepts of how societies ought to be organized had been put aside.

In Latin America, would-be liberators, criticized from both Right and Left, became disillusioned and turned away from democracy. Up north, Agustín de Iturbide, the Mexican heir of a wealthy Spanish father, switched sides to fight for Mexican independence and declared himself emperor of Mexico; Santa Anna, another side-switcher, overthrew him and established a republic, then a dictatorship. Santa Anna ended up ruling Mexico on 11 non-consecutive occasions over a period of 22 years. Asked about the loss of his republican ideals, he declared,

It is very true that I threw up my cap for liberty with great ardor, and perfect sincerity, but very soon found the folly of it. A hundred years to come my people will not be fit for liberty. They do not know what it is, unenlightened as they are, and under the influence of a Catholic clergy, a despotism is the proper government for them, but there is no reason why it should not be a wise and virtuous one.

This general sentiment came to be shared by most of Latin America’s liberators.

Meanwhile, Central America (including the Mexican state of Chiapas but excluding Panama), then known as the Captaincy General of Guatemala, seceded from Mexico, becoming the “Federal Republic of Central America” after a short-lived land grab by Mexican Emperor Iturbide, who pictured his empire extending from British Columbia to the other Colombia. The Federal Republic didn’t last. In the 1830s Rafael Carrera led a revolt that sundered it. By 1838, Carrera ruled Guatemala; in the 1860s he briefly controlled El Salvador, Honduras, and Nicaragua as well, though they remained nominally independent.

Not one to be left behind, the Dominican Republic jumped on the bandwagon in 1821, but was quickly invaded by Haiti. Not until 1844 was the eastern half of Santo Domingo able to go its own way. Sandwiched between Cuba and Puerto Rico (both still held by Spain), in 1861 the Dominican Republic — in a move unique in all Latin America — requested recolonization, having found the post-independence chaos untenable. Spain gladly acquiesced. The US protested but, mired in its own civil war, was unable to enforce the Monroe Doctrine. In 1865, the Dominican Republic declared independence for a second time.

South America fared no better. Simón Bolívar, after a series of brilliant campaigns that criss-crossed the continent, created the unstable Gran Colombia, a state encompassing modern Colombia, Panama, Venezuela, and Ecuador, with himself as president — a model Hugo Chávez aspires to emulate. Bolívar then headed to Peru to wrest power from José de San Martín, its liberator. Bolívar was declared dictator, but the Spanish still held what is now Bolivia. He finished San Martín’s job by liberating it and separating it from Peru. The new state was christened with his name. By 1828, Gran Colombia had proved unmanageable, so Bolívar declared himself dictator, a move that ended in failure and more chaos.

Southern South America was liberated by San Martín and Bernardo O’Higgins, with Chile and Argentina going their separate ways. In Chile, O’Higgins turned from Supreme Director into dictator, was ousted, and was replaced by another dictator. A disgusted San Martín exiled himself to Europe, abandoning Argentina to a fate of civil war and strongmen. Uruguay and Paraguay carved themselves a niche — but only after Uruguay’s sovereignty had been contested by newly independent Brazil. In Paraguay, José Rodríguez de Francia, Consul of Paraguay (a title unique in Latin America), became in 1816 “El Supremo” for life. An admirer of the French Revolution — and in particular of Rousseau and Robespierre — he imposed an extreme autarky, closing Paraguay’s borders to all trade and travel, abolishing all military ranks above captain, and insisting that he personally officiate at all weddings. He also ordered all dogs in the country to be shot.

Strongmen and Stability

The Latin American wars of independence were succeeded by aborted attempts at unity or secession; wars of conquest, honor, and spite; land grabs, big and little uprisings, civil wars; experiments in democracy, republicanism, federation, dictatorship, monarchy, anarchy, and rule by warlords or filibusters; and even reversion to colonialism; all with radical “left-right” swings — in a word, by every imaginable state of affairs, none long lasting. It all culminated in the era of the caudillo: a populist military strongman, usually eccentric, sentimental, long-ruling, and (roughly speaking) right-wing.

In his novel The Autumn of the Patriarch, Colombian author (and confidant of Fidel Castro) Gabriel García Márquez offers a profile of a caudillo that has yet to be surpassed. The stream-of-consciousness, 270-page, 6-sentence prose “poem on the solitude of power” was based on Colombia’s Gustavo Rojas Pinilla (1953–57) and Venezuela’s Juan Vicente Gómez (1908–1935), with dashes of Franco and Stalin thrown in. But its indeterminate timelessness, stretching from who-knows-when to forever, also evokes Mexico’s Porfirio Díaz (1876–1911), Paraguay’s Alfredo Stroessner (1954–89), the Dominican Republic’s Rafael Trujillo (1930–61), and Nicaragua’s Anastasio Somoza (1936–56). It could also easily include Brazil’s Getulio Vargas (1930–54), Argentina’s Juan Perón (1946–55 and 1973–76), Haiti’s “Papa Doc” and “Baby Doc” Duvalier (1957–86), and, yes, the longest-ruling military strongman of all — Fidel Castro (1959–201?).

The caudillo period had no specific time frame; it was rather a response to instability (or injustice, in the case of left-caudillos) that varied over time, country, and cultural conditions. Take Mexico, for example. Besides the usual post-independence chaos, it also suffered invasions from the US and France. So, in 1846, Porfirio Díaz, an innkeeper’s son and sometime theology student, left his law studies to join the army — first to fight the US, then to fight Santa Anna in one of the latter’s multiple bids for power, and finally to fight the French-imposed Emperor Maximilian.

In the war against Maximilian, Díaz rose to become division general under Benito Juárez’ leadership but retired after Mexican forces triumphed and Juárez assumed the presidency in 1868. It didn’t take long for Díaz to become disillusioned. One principle that had developed in Mexican politics — and that, ironically, considering the nearly 35 years in power Díaz would enjoy, became institutionalized — was the one-term presidential limit. So when Juárez announced for a second term in 1870, Díaz opposed him. Losing, he cried fraud and issued a pronunciamento, a formal declaration of insurrection and plan of action accompanied by the pomp and publicity emblematic of Mexican politics. After another pronunciamento, additional revolts, much politicking, and a term in Congress, Díaz succeeded in ousting his adversaries. He was elected president in 1877. Having based his campaign on a platform of “no reelection,” he reluctantly stepped aside after one term and turned over the presidency to an underling, whose incompetence and corruption ensured Díaz’s victory in the 1884 contest.

He set out to establish a pax Porfiriana by (as he termed it) eliminating divisive politics and focusing on administration. The former was achieved by stuffing the legislature, the courts, and high government offices with cronies; making all local jurisdictions answerable to him; instituting a “pan o palo” (bread or a beating) policy, enforced by strong military and police forces; artfully playing the various entrenched interests against each other; and stealing every election. Porfirio Díaz opened Mexico up to foreign investment, built roads and public works, stabilized the currency, and developed the country to such a degree that it was compared economically to Germany and France.

Classifying caudillos as Left or Right is not always easy. Caudillos who focused on economic development, fiscal stability, and monumental public works are generally perceived as right-wing, while those who improved education, fought church privilege, or imposed economic controls are perceived as left-wing. Nearly all were initially motivated by idealism, followed by disillusionment with democracy and addiction to power. Nearly all lined their pockets. Venezuela alone, between 1830 and 1899, experienced nearly 70 years of serial caudillo rule, which, some would argue, continued intermittently to the present.

In Ecuador, 35 right-wing years initiated by a caudillo were followed by 35 left-wing years initiated by another caudillo. General Gabriel García Moreno had saved the country from disintegration in 1859 and established a Conservative regime that wasn’t overthrown until 1895, when Eloy Alfaro led an anti-clerical coup. Alfaro secularized Ecuador, guaranteed freedom of speech, built schools and hospitals, and completed the Trans-Andean Railroad connecting the coast with the highlands. In 1911, his own party overthrew him and further liberalized the regime by opening up the economy. The Liberal Era lasted until 1925. Altogether, Alfaro initiated four coups — two succeeded, and one finally killed him — that made him the idol of Rafael Correa, Ecuador’s present left-wing president.

One right-wing caudillo, the Dominican Republic’s Rafael Trujillo (1930–61), was prematurely Green, restricting deforestation and establishing national parks and conservation areas in response to the ravages in next-door Haiti. His successor (after a five-year, chaotic interregnum that included a civil war and US Marines) was Joaquín Balaguer, an authoritarian who dominated Dominican politics until 2000 and continued Trujillo’s conservation policies.

Some caudillos combined elements from both Left and Right, coming up with ideologies that were internally inconsistent but extremely popular. Argentina’s Perón absorbed fascism, national socialism, and falangism while stationed as a military observer in Italy, Germany, and Spain. Back in Argentina he allied himself with both the socialist and the syndicalist labor movements to create a power base. In 1943, as a colonel, he joined the coup against conservative president Ramón Castillo, who had been elected fraudulently.

When Perón announced his candidacy for the 1945 presidential elections as the Labor Party candidate, the centrist Radical Civic Union, the Socialist Party, the Communist Party, and the conservative National Autonomous Party all united against him — to no effect. As president, his stated goals were social justice and economic independence; in fact, he greatly expanded social programs, gave women the vote, created the largest unionized labor force in Latin America, and went on a spending spree that nearly bankrupted Argentina (it included modernizing the armed forces, paying off most of the nation’s debt, and making Christmas bonuses mandatory). Perón also nationalized the central bank, railways, shipping, universities, utilities, and the wholesale grain market. By 1951, the peso had lost 70% of its purchasing power, and inflation had reached 50%.

During the Cold War, Perón refused to pick either capitalism or communism, instituting instead his “third way,” an attempt to ally Argentina with both the United States and the Soviet Union. Today, Peronism remains a vital force in Argentina, with President Cristina Fernández at its helm.

Sandino Lives!

Not that caudillismo needed any intellectual justification, but the social Darwinism that developed during the late 19th century helped to rationalize many of the abuses committed under its aegis. Then, fast on its heels and in rebuttal to it, Marxism burst on the scene, invigorating the Left by advocating the forcible redistribution of wealth. The Left-Right divide widened, and conflict sharpened.

In 1910, old and ambivalent about retiring, Porfirio Díaz decided to run once more for president of Mexico. When he realized that his opponent, Francisco Madero, was set to win, he jailed him on election day and declared himself the winner by a landslide. But Madero escaped and, from San Antonio, Texas, issued his Plan de San Luis Potosí, a pronunciamento promising land reform. It ignited the Mexican Revolution.

Though not specifically Marxist, the Mexican Revolution has been interpreted as the precursor to the Russian Revolution. Its ideologies — especially “Zapatismo” — were part of the progressive, Fabian, and socialist zeitgeist of the time. In fact, however, the Mexican Revolution — a many-sided civil war that lasted ten years — was so indigenously Mexican as to elude historians’ broader interpretive models. Yet it was the first effective and long-lasting leftist Latin American movement. Its successors are Cuban communism, liberation theology, Bolivarian socialism, and many others. Out of it coalesced Mexico’s Institutional Revolutionary Party (PRI), heir to a coalition of forces and ideologies that were, at last, fed up with fighting. The PRI, a member of the Socialist International, instituted de facto one-party rule and controlled Mexico for over 70 years.

Other radical leftist revolutionary movements followed — some sooner, some later, not all successful — operating either by force or through the ballot. The earliest (1927) was that of the Sandinistas in Nicaragua. Augusto César Sandino identified closely with the Mexican Revolution. Although he was not a Marxist, his movement adopted that ideology after his death. Five years later, in next-door El Salvador, the Farabundo Martí National Liberation Front (Martí was a Communist Party member and former Sandinista) rose in revolt. Guatemala followed in 1944 with the Jacobo Arbenz coup, then Cuba in 1953 with Castro’s insurrection.

With Castro’s accession to power in 1959, the outbreak of Marxist revolts in Latin America intensified. During the 1960s the Tupamaros rose in Uruguay. In Peru, various groups, including the Shining Path, revolted. The FARC, ELN, and M-19 followed in Colombia. In 1967, Fidel’s own Che Guevara met his death while trying to organize a premature revolution in Bolivia. Then, in 1970, Chileans voted in — by only 36%, a plurality in a three-way race — the first elected Marxist regime in the Americas.

Venezuela was next. Hugo Chávez launched his first, unsuccessful coup in 1992. After a stint in jail he was pardoned, ran for president in 1998, and won.

In Bolivia, Evo Morales, a former trade union leader, and his Movement Toward Socialism won the 2005 elections with a majority.

Latin American Marxism, unlike the European sort, has little to do with the industrial revolution or conditions of the working class. Not only is it currently more tolerant of religious belief; it is more relaxed about ideology and — again, currently — lacks gulags and killing fields. It is more about land distribution and “Social Justice” — a term whose words, innocuous and benign in themselves, don bandoliers and carbines and become fighting words when capitalized.

Social Justice is the concept of creating a society based on the principles of equality, human rights, and a “living wage” through progressive taxation, income and property redistribution, and force; and of manufacturing equality of outcome even in cases where incidental inequalities appear in a procedurally just system.

The term and modern concept of “social justice” were created by a Jesuit in 1840 and further elaborated by moral theologians. In 1971 Peruvian priest Gustavo Gutiérrez justified the use of force in achieving Social Justice when he made it a cornerstone of his liberation theology. As a strictly secular concept, Social Justice was adopted and promulgated by philosopher John Rawls.

The Other Path

Mario Vargas Llosa is a Peruvian writer and 2010 Nobel laureate — pointedly awarded the prize for his literary oeuvre, as opposed to his political writings, but this from a committee that awarded Barack Obama a Peace Prize for nothing more than political penumbras and emanations. Vargas Llosa started as a man of the Left. His hegira from admirer of Fidel Castro to radical neoliberal candidate for president of Peru in 1990 is a metaphor for Latin America’s own swing of the pendulum today.

In 1971 he condemned the Castro regime. Five years later, he punched García Márquez (patriarch of Marxist apologists) in the eye. Their rupture has never been fully healed (or explained), but some attribute it to their diverging politics. In 1989, when Peruvian economist Hernando de Soto published his libertarian classic, The Other Path (an ironic allusion to the Shining Path guerrilla movement), Vargas Llosa wrote its stirring introduction. He and de Soto advocated individual private property rights as a solution to property claims by the Latin American poor and Indians. Both the fuzzy squatters’ rights of the urban poor and the traditional subsistence-area claims of indigenous communities were being — literally — bulldozed by corrupt or insensitive governments; the two authors believed that the individual occupiers of the land should own it as their private property. This proposed solution did not sit well with the Social Justice crowd. To them, communal rights trumped individual rights.

But it struck a chord with the poor and dispossessed. So Vargas Llosa declared for the presidency in 1990 on a radical libertarian reform platform (the Liberty Movement). In Peru, the Shining Path guerrillas were terrorizing the country, and the economy was a disaster, having been run into the ground by left-wing populist Alan García, who was now running for reelection. In the outside world, Soviet communism and its outliers were disintegrating, both institutionally and ideologically. Between García and Vargas Llosa in the three-way race stood Alberto Fujimori, the center-right candidate. Vargas Llosa took the first round with 34%, nearly the same plurality that had put Allende into office in next-door Chile. But he lost the runoff, handing Peru over to the authoritarianism (as well as the reforms) of the Fujimori regime.

Less than a month later, without skipping a beat, Vargas Llosa attended a conference in Mexico City entitled “The 20th Century: The Experience of Freedom.” This conference focused on the collapse of communist rule in central and eastern Europe. It was broadcast on Mexican television and reached most of Latin America. There Vargas Llosa condemned the Mexican system of power, the 61-year rule of the Institutional Revolutionary Party, and coined “the phrase that circled the globe”: “Mexico is the perfect dictatorship.” “The perfect dictatorship,” he said, “is not communism, not the USSR, not Fidel Castro; the perfect dictatorship is Mexico. Because it is a camouflaged dictatorship.”

But the “perfect dictatorship” was already loosening its grip. Recent PRI presidents had been well-degreed in economics and public administration, as opposed to politics and law. They had already moved Mexico rightward, to the center-left, by privatizing some industries and liberalizing the economy — especially by joining NAFTA. By the 1994 election, the PRI had opened up the electoral system to outside challengers: the center-right National Action Party (PAN) and the strong-left Party of the Democratic Revolution (PRD). In the 2000 elections the PRI ceded power to the PAN’s Vicente Fox, though not entirely.

The popular but hapless Fox ended Mexico’s last Marxist uprising, Subcomandante Marcos’ Zapatista Army of National Liberation (they now sell t-shirts and trinkets to finance their anti-capitalist jihad). But he was unable to further the rest of his reform agenda through the PRI-controlled legislature. So Mexico reelected the PAN in 2006. Today, in a move emblematic of Latin America’s change to European-style, alternating center-left/center-right administrations, the PAN and the PRD are exploring avenues of cooperation to pass legislation through the PRI-controlled Congress.

But Vargas Llosa wasn’t through yet. During one of his marathon television tirades in 2009, Hugo Chávez challenged Vargas Llosa to a debate on how best to promote Social Justice. When Vargas Llosa accepted, Chávez — in his most humiliating public move to date — declined.

Ho-hum

Peru’s increasingly discredited Fujimori resigned because of corruption, a questionable third presidential term, and the exercise, once too often, of disproportionate force. He was followed by Alejandro Toledo, an economist so centrist and dull that he bored his people into not reelecting him. By the 2006 elections, Peru’s centrist politics were entrenched in the most ironic of ways. Alan García, the disastrous, populist left-wing ex-president, ran on a center-right, neoliberal platform — and won. And against all odds, he kept his word. In 2009 Peruvian economic growth was the third highest in the world, after China and India. In 2010 it remained in double figures. The 2011 elections won’t include García, as he can’t succeed himself. They are expected to be contested by the technocratic Toledo and the center-right Keiko Fujimori, Alberto’s daughter and leader of the Fujimorista Party.

It’s much the same — with few exceptions — in the rest of Latin America. Brazil’s wildly popular, fiscally prudent, and social justice-sensitive center-left Lula da Silva administration was reelected, this time led by Dilma Rousseff, Brazil’s first female president. She has promised more of the same. In next-door Paraguay, the exceptionally long-ruling (61 years) Colorado Party ceded power in 2008 to the country’s second-ever left-wing president, Fernando Lugo, an ex-bishop and proponent of liberation theology. But Lugo has moved to the center, distancing himself from Chávez and tempering his social and fiscal promises by seeking broad consensus. GDP growth in 2010 was 8.6%.

In 2009 Uruguay elected as president José Mujica, a former Tupamaro guerrilla. But Mujica, described by some as an “anti-politician,” has moved radically to the center. The tie-eschewing, VW Beetle-driving president has promised to cut Uruguay’s bloated public administration dramatically. He identifies with Brazil’s Lula and Chile’s Bachelet rather than Bolivia’s Morales or Venezuela’s Chávez. After 6% growth in 2010, Uruguay is expected to level off at 4.4% in 2011.

Given the unexpected death of her husband and her disastrous left-wing populist policies (inflation is close to 30%), Argentina’s Fernández is not expected to win reelection in 2011. Reading the writing on the wall, she (unlike Chávez) is tiptoeing toward the center.

In Colombia, the feared authoritarian tendencies of Álvaro Uribe turned out to be wildly exaggerated; and his successor, Juan Manuel Santos, has moved even closer to the center. The two — Santos was Minister of Defense under Uribe — brought the FARC insurgency to its knees, reducing the guerrillas to little more than extortionists and drug dealers. With Colombia’s new-found safety, high growth, and low inflation, its tourist industry is booming.

El Salvador, long the archetype of extreme polarization between the now-peaceful FMLN Marxist revolutionaries and the ex-paramilitary right-wing ARENA coalition, elected Mauricio Funes in 2009. Funes, the FMLN’s surprise candidate, ran on a centrist platform and has stuck to it — throwing the ARENA coalition into disarray. He enjoys a 79% approval rating, which makes him Latin America’s most popular leader. Neighboring Honduras, after deposing a power-grabbing Chávez clone in 2009, elected the center-right Pepe Lobo, who promised reconciliation and stability. Even Guatemala shows signs of progress. The 2007 elections brought to power Álvaro Colom, the first center-left president in 53 years.

Costa Rica, long Latin America’s exemplar of democracy and moderation, is becoming ever more so. The 2010 elections made Laura Chinchilla Costa Rica’s first female president (one even more stunningly beautiful than Argentina’s Fernández). In spite of being socially conservative, she continues Óscar Arias’ vaguely center-left policies. With the traditional center-right and center-left parties always closely vying for power, the libertarian Partido Movimiento Libertario (PML), which retains a 20% popular vote base (and 10% of the legislature), has emerged as the policy power broker in the Congress.

Latin American politics’ move to the center is even mirrored in its ancillaries. The Cuban American National Foundation, largest of the Cuban diaspora’s political representatives, abjured the use of force after the death of its founder, Mas Canosa, and advocates a more open US policy toward Cuba.

Not all is good news. Cuba is showing only microscopic hints of change (as reported in Liberty’s December issue); Chávez’ power play in Venezuela after his electoral defeat has yet to play out; Bolivia’s Evo Morales holds steady after a barely avoided civil war; and Nicaragua’s anti-capitalist tyrant Daniel Ortega is bound and determined to hold onto power come what may. But their days, too, are numbered.





Tourist Class

 | 

Johnny Depp, my favorite actor to rave about, and Angelina Jolie, my favorite actress to rage against, together in the same film — how could I resist The Tourist? Despite its poor critical reviews, I had to see the film for myself.

The Tourist is an old-school spy thriller in the style of Alfred Hitchcock. Two strangers meet on a train. One is the cool and beautiful spy, Elise Clifton-Ward (Jolie). The other is Frank Tupelo (Depp), a hapless math teacher vacationing in Venice. Treasury agents are on the train, hoping she will lead them to her boyfriend, the mysterious Alexander. Elise needs to find a patsy to throw them off the trail. Frank fits the bill, and the game of cat-and-mouse begins. Add an international crime boss (Steven Berkoff) intent on regaining the money Alexander has stolen from him — a crime boss who also believes that Frank is Alexander — and the big dogs enter the chase.

Film buffs will recognize obvious allusions to Hitchcock’s North by Northwest, including the famous cut to the overhead shot of the train barreling through the tunnel as Cary Grant and Eva Marie Saint discreetly make love. The film is sprinkled with allusions to several other iconic films as well, including Star Wars and Indiana Jones. These allusions are subtle and fun, good for a knowing chuckle without becoming campy or distracting.

The Tourist is also blessed with a witty script, written by director Florian Henckel von Donnersmarck and Christopher McQuarrie. It contains several rapid-fire verbal exchanges worthy of a Word Watch column by Liberty's own Stephen Cox. One, involving the words "ravenous" and "ravishing," is hilarious, mainly because of Frank's deadpan sincerity. His continued use of Spanish tourist phrases as he tries to communicate with Italian hoteliers and policemen is equally humorous (and realistic). The plot itself has enough turns and twists to satisfy an audience of thrill seekers, and it keeps us guessing until the end. The supporting cast is fine too, especially Paul Bettany as the Treasury Inspector Acheson and Timothy Dalton as Inspector Jones.

So why is The Tourist being panned by most critics? There is that unfortunate casting decision — the selection of Angelina Jolie as the femme fatale. Even director Donnersmarck was unhappy with her selection, and left the project for a while after she was cast. Jolie is simply too cold and hard to play the role convincingly. Yes, Eva Marie Saint was cool and distant in North by Northwest, and it worked brilliantly. But she was a more versatile actress, and she played the role of Eve Kendall with intelligence and reserve. Her costumes — mostly smart suits topped by a sophisticated beehive hairstyle — emphasized a cool restraint that hinted at a hot passion simmering beneath the surface. The result was completely believable, and the scenes between Grant and Saint fairly sizzled with repressed desire.

Jolie, however, is the unwitting poster child for the age-old question: is it possible to be too rich or too thin? The answer, it seems, is Yes. She makes unintentional comedy as she tries, with her pencil-thin legs, to sashay down the street or through a room in a flowing silk dress while swaying her hips à la Marilyn Monroe. The trouble is, she has no hips to sway. To compensate for this problem, the costume designer added long ribbons to the back of each dress, apparently so there would be something to bounce. Nevertheless, the film contains several long scenes of heads turning as Elise skims through a restaurant, casino, or hotel lobby. One wonders, at times, whether this is a spy film or a perfume commercial.

There are some problems with Depp as well. His trademark quirkiness is intact, especially when he is running from the mobsters who think they are chasing Alexander. But his character’s ragged, cheek-length hairstyle emphasizes the fact that his face has rounded out with age, making him more reminiscent of an angsty Billy Crystal than of the dashing Captain Jack Sparrow or debonair John Dillinger whom Depp has played in recent years. This may be good for his character as the timid and confused mathematics teacher, but not so good for viewers who look forward to seeing Depp’s dashing good looks.

Several editing goofs also mar the film. For example, at one point Elise receives a hotel key inside a note card. When she uses the key, it has a thick red tassel attached, but when she received it, there was no tassel. Are we supposed to believe that she has spent the intervening moments shopping for a tassel and attaching it? Even more glaring is a mistake that happens when she drives a boat to take Frank to the airport. (This is in Venice, remember.) She is wearing a white sweater and dark slacks when she drops him off, but she has somehow changed into a gray knit dress when she drives away. Mistakes like this are very distracting, especially in a mystery thriller, where viewers are always on the lookout for clues.

The Tourist is an okay film, but it's a disappointment because it had the potential to be a great film. It will be worth watching on a long flight or when it comes to Showtime on TV, but it's unfortunately not worth the price of popcorn and admission.


Editor's Note: Review of "The Tourist," directed by Florian Henckel von Donnersmarck. Spyglass Entertainment, 2010, 103 minutes.




The Government's "Green" Jobs

 | 

Jonathan A. Lesser, president of Continental Economics Inc., has written an insightful critique of artificial stimulus to wind power and the like (“Gresham’s Law of Green Energy,” in the Cato Institute’s journal Regulation, Winter 2010–2011, pp. 12–18). Like similar critiques, however, it could benefit from better placement of emphasis.

Lesser laments the wasteful diversion of resources into uses that would otherwise not pay (waste, that is, barring untypically sound “externality” arguments). Furthermore, as for jobs created by subsidy- or regulation-based spending on green energy, the money so spent would have to come from somewhere. It would necessarily be diverted from spending on other public or private activities, where jobs otherwise created or maintained would be lost.

Such arguments put too much emphasis on spending itself relative to the allocation of real resources, including labor. Would the workers newly drawn into green jobs come from elsewhere, or would they have otherwise remained unemployed? Do green subsidies, tax breaks, and regulations really remedy economy-wide unemployment exceeding the frictional unemployment of normal times?

Unemployment in a recession reflects discoordination. Mutually advantageous transactions among workers, employers, and consumers are somehow frustrated. That is what needs attention. Putting emphasis on spending and its destinations is superficial. As W.H. Hutt used to say, spending is a measure of transactions accomplished; it is not what drives transactions.

Now, what impedes transactions in a recession? Usually or often it is a deficient quantity of money in relation to desires to hold it at the prevailing price and wage level. Sometimes (as now, apparently) the demand to hold money has increased, even if only passively or by default, because individuals and firms are especially uncertain about what transactions would be worthwhile. Adding to the usual uncertainty about business is uncertainty about taxation, government deficits and debt, complicated and costly regulations, and other government interventions, including unintended consequences of earlier ones. This is what needs attention, not the allocation of spending between green and other employments.

Recession is not something to be remedied by shifting resources around. Besides channeling resources into relatively inefficient uses, artificially favoring green energy obscures the true nature of recession.





When Pigs Fly

 | 

There is an old adage in which I find considerable wisdom: “When a pig flies, you don’t criticize it for not staying up very long.” I take the meaning of this saying to be that when someone who has a habit of making poor choices finally makes a moderately good one, you ought to praise the success, even if you feel he could have done more.

Well, a pig has flown. President Obama, who for his first two years ran the most anti-free-trade administration since the days of the Smoot-Hawley tariff, has managed to salvage a free-trade agreement (FTA) with South Korea, after stalling it for two years and being snubbed in Asia when he tried to strongarm a new deal. He managed a minor renegotiation, getting some relief from Korea’s environmental regulations on our cars and a slowing of our phase-out of tariffs on the Koreans’ trucks. He did this, however, at the cost of keeping Korea’s tariffs on our cars in place for five more years, and of an extra two years of Korean tariffs on American pork products. Hardly worth the wait on a deal that was already well negotiated in 2007.

But the good news is that Obama will finally let the deal proceed, and that 95% of all US and South Korean tariffs will be eliminated within five years. The deal also opens up greater trade in services, allowing (for example) more Korean banks in America and more American banks in Korea. That’s all good.

Now that Speaker Pelosi is finally history, chances are good that Speaker Boehner (“Blubbering Boehner” to his chums) will get the FTA with Korea through the House — and also the FTAs with Colombia and Panama, which have been languishing on the sidelines since Bush left office. It would be helpful if the pig could stay aloft long enough to help get these deals past Congress. So far, Obama hasn’t bothered to do that. He has shown a touching deference to the unions that oppose them, and that gave so much to his presidential campaign.

Yet it seems to be dawning on the exceptionally obtuse Obama that it may be far more useful to his 2012 reelection (gag!) campaign to have lower unemployment than to have higher union contributions poured into his campaign coffers. Perhaps the pig isn’t just flying; perhaps he has had an epiphany.

If for that we are hardly ecstatic, we can at least be satisfied.





Finding a Voice

 | 

Robert Frost defined poetry as “a lump in the throat; a homesickness or a love-sickness . . . where an emotion has found its thought and the thought has found the words.”

Just as a poem can express an emotion or describe a relationship through a single snapshot, sometimes a person’s character can be summed up in a single experience. For King George VI (“Bertie,” as he was called by his family) that experience occurred in his determined effort to overcome a pronounced lifelong stammer. The King’s Speech is a wonderfully witty, brilliantly acted, and emotionally satisfying film that tells of the singular moment when an unorthodox speech therapist (Geoffrey Rush) helped the king (Colin Firth) to find his voice, at a time when England — and the free world — desperately needed it.

One of the unexpected pleasures of this film is the opportunity to see the royal family in their living rooms, so to speak, when they were young and not expecting to become king or queen. A rumble of recognition is heard in the theater, for example, as viewers realize with a start that Helena Bonham Carter’s character is the young Queen Mum, already demonstrating her twinkling smile and munching on the marshmallows that would eventually lead to her familiar round torso. “That’s Queen Elizabeth!” at least one viewer gasped as the young princess Elizabeth (Freya Wilson) is seen romping in pink pajamas with her little sister Margaret (Ramona Marquez) as they listen eagerly to a bedtime story told by their father, the man who did not expect to be king.

It is also unexpectedly intimate to see the debonair playboy Prince of Wales cum King Edward VIII cum Duke of Windsor (Guy Pearce), bursting into tears and sobbing on the shoulder of his mother, Queen Mary (Claire Bloom), at the death of his father George V (Michael Gambon) — sobbing not because his father has died but because of what it will mean to his relationship with the twice (and still) married Wallis Simpson (Eve Best). He would remain king for less than a year, not even enough time for his official coronation, induced to abdicate because of the relationship with Mrs. Simpson and because of his general incompetence.

As the film opens, Bertie, the timid younger son of a domineering father, attempts to stammer his way through a radio speech under the disapproving eye of King George V, whose Christmas radio speeches were as important to his people’s sense of good will as Roosevelt’s Fireside Chats became in America. The film suggests that George V’s raging disapproval and faultfinding may have contributed to Bertie’s stammer.

Sitting behind him and stoically willing him to succeed, his wife Elizabeth (Bonham Carter) exudes tender disappointment when he fails. She is not embarrassed by him; she hurts for him. Several royal doctors try to cure Bertie of his awkward and unregal stammer, but to no avail. In one particularly ironic scene, a doctor urges him to smoke cigarettes frequently because it will “relax” his vocal cords. (Sadly, George VI would die of lung cancer and arteriosclerosis at the age of 56.) Eventually Elizabeth finds an unconventional therapist, Lionel Logue (Rush), who changes Bertie’s life by restoring his voice, and by becoming his lifelong friend.

The film is a delightful mixture of royal protocol and unexpected earthiness. Logue refuses to treat the Duke of York any differently from the way in which he treats his other clients, even insisting that they use first names. Their banter is droll and often hilarious. When Myrtle Logue (Jennifer Ehle), who is unaware of her husband’s famous client, comes home a bit early to find the king and queen of England standing in her parlor, she asks querulously, “Will their Majesties be staying for dinner?” Elizabeth, knowing it would be inappropriate for them to stay, responds charmingly, “We would love to. Such a treat! But alas . . . a previous engagement. What a pity.” No wonder everyone loved the Queen Mum!

The King’s Speech is a tour de force for Colin Firth and Geoffrey Rush, two masters of the king’s English, who spar and vie for dominance in each scene. But despite the humor there is an underlying seriousness to Bertie’s effort to overcome his impediment, especially when his struggles are set against the backdrop of his older brother’s abdication and the run-up to World War II. One of the most brilliant scenes in the film is King George VI’s September 1939 radio speech to the British people, at the outbreak of the war with Germany. The speech is familiar to anyone who has studied that period of history. It has always seemed emotionally charged and solemn because of its halting delivery. Learning that this delivery was the result of a speech impediment does not lessen its gravity. Instead, it increases it, as it demonstrates the king’s strength and courage at a time when the British people would be called upon to demonstrate strength and courage of their own. Logue literally conducts the king in his delivery of the speech in the way a maestro would conduct an orchestra, virtually transforming it into a lyric, set to the solemn strains of Beethoven’s Seventh Symphony, which fills the theater throughout the scene.

George VI was a great example of courage, confidence, and commitment during World War II, not only in his speeches but also in his actions. When Buckingham Palace was bombed and his advisors urged him to move his family to the safety of Canada for the duration of the war, the king refused, joining Londoners in underground air raid shelters. While his people sent their children to the safety of the English countryside, the princesses Elizabeth and Margaret stayed in town.

Meanwhile, the abdicated Edward VIII and his beloved Wallis Warfield Simpson led an unhappy life. Suspected of German sympathies (they met with Hitler before the war), the duke was finally sent by Churchill to govern the Bahamas, mainly to keep him out of the way. Hitler himself was quoted as saying, “I am certain through him permanent friendly relations could have been achieved. If he had stayed, everything would have been different. His abdication was a severe loss for us.” One has to wonder where England might have stood during World War II — and what Europe would be like today — if Wallis Simpson hadn’t stolen Edward’s heart and caused him to give up the throne.


Editor's Note: Review of "The King’s Speech," directed by Tom Hooper. See-Saw Films/The Weinstein Co., 2010, 118 minutes.




The New Landscape of Libertarianism

 | 

New York magazine published an article called “The Trouble With Liberty” in its January 3–10, 2011 issue. I was intrigued by a line on the magazine’s cover. It asked, “Are we all libertarians now?” And what I found in the essay was very interesting.

The author, Christopher Beam, presents a brief yet wide history of libertarianism, ranging from Ron and Rand Paul and Paul Ryan to David Boaz to Ayn Rand and Friedrich Hayek. Beam explains that libertarianism has elements from both the Right and the Left and does not fit easily into either mode, and he outlines the various attempts to promote a libertarian country — from those that would enlist the Republican Party or the Libertarian Party, to Brink Lindsey’s Liberaltarianism, to the Free State Project and the Seasteading Institute.

Beam pegs libertarians as crazy old uncles or Dungeons & Dragons players, but his history of libertarianism is quite complimentary. He says that the Founding Fathers and the Constitution were actually more libertarian than anything else. The gist of the essay is that with the Tea Party movement and the rise of Rand Paul and Paul Ryan, libertarianism is on the rise and our moment has come.

But halfway through, Mr. Beam changes his tone and gets to the heart of his essay, which is a critique of libertarianism and an explanation of why he thinks it is a bad policy for the United States. His arguments aren’t theoretically sophisticated and are designed to appeal to a mass audience: if there are poor people, and charity can’t provide for them, then we need welfare or else they will steal from us; we need public education in case the free market can’t educate everyone; we need a central bank in order to print a uniform currency. He mentions “asymmetrical information” and “public goods,” and argues that if the bailout had not happened, then investors and homeowners who innocently misunderstood the riskiness of their loans would have been punished. “There’s always a tension between freedom and fairness,” he says, and we libertarians “pretend the tension doesn’t exist.”

Libertarianism can never succeed, he claims, because politicians must compromise and libertarians refuse to compromise or cooperate. One of the overarching criticisms in the essay, and perhaps its most obnoxious, is the subtle implication that libertarians have such a hard time accomplishing real change because we know that our theories are mere impractical abstractions unsuitable for pragmatic flesh-and-blood reality, so we would be revealed as idiots if we ever achieved political power.

The refutations of Beam’s arguments are so obvious that I need not detail them. What is more significant is the mere existence of his essay. It is, in my opinion, one of the early post-Tea Party attempts by the Left to come up with an ideological response that will keep people with open minds from taking libertarianism seriously. I strongly doubt that libertarianism has reached the peak of its popularity, but what this essay signals to me is that people who ten or twenty years ago might never have known what libertarianism is are now hearing the word “libertarianism” and asking what it means. Beam provides a leftist answer to that question. But he also cites surveys showing that more people now define themselves as libertarians than ever before, and that this poses a threat to the liberal-conservative establishment.

If the Tea Party phenomenon grows and Rand Paul’s career continues, we should expect to see many more such essays. I think that they will all follow Beam’s pattern. “The Trouble With Liberty” shows what two challenges we must overcome in order to be taken seriously.

First, there is something (call it “common sense” or the “social imagination”): a set of simple political ideas that, whether true or false, permeates a culture. We need to introduce arguments into the American intellectual culture to refute the “common sense” arguments for statism, such as the argument that we need a welfare state to rescue the poor. We must shift the alignment of America’s political discourse so that socialism no longer sounds like common sense, and our proposals, which Beam skewers as extremist, seem like the new common sense. This is similar to what Glenn Beck claims the socialists did to us with the Overton Window — shifting cultural common sense by gradually introducing extreme ideas until they become mainstream — but it works in reverse.

Second, we must prove that libertarianism can work in practice as well as in theory, and we must call upon our libertarian politicians to show the American people that it is possible to have noble ideals while still being pragmatic and getting things done. In my opinion the danger is not that Rand Paul and Paul Ryan will make too many compromises; it is the opposite: they will be too idealistic and take an all-or-nothing approach to change, and thus will be unable to work with their Republican colleagues. In that way, they will confirm the fears that Beam would like to promote.

“Libertarianism is still considered the crazy uncle of American politics,” Beam writes. It is only natural for the liberal-conservative establishment to oppose us by laughing at us so loudly that nobody will take us seriously. That is, after all, right out of Ellsworth Toohey’s playbook. The question is how we will respond to the laughter — by behaving like weird extremists and impractical idealists, or by showing that we deserve to be taken seriously and that our abstract theories really will work in practical reality.





Not Gittin' Outta Gitmo

 | 

One has to think that the libertarian Obamanistas — libertarians who supported Obama, thinking that he couldn’t spend more money than the Republicans and would at least end the war on terror and dramatically reduce the military posture of the country — must feel some uncertainty about their guy.

Certainly, in terms of spending and deficits, he makes Bush look like a fiscal hawk. In his two years in office, Obama’s yearly deficits have been over four times the size of Bush’s largest. And in terms of state control of the economy — the socialization of the medical system, the nationalization of the auto industries, the massive increase in regulations, the dramatic increase in the size of the federal bureaucracy, and the expansion of environmentalist hegemony over natural resources — he has explored a whole New Frontier of statist economics.

As to the war on terror, he hasn’t ended it, or even diminished it appreciably, much less brought in a new era of isolationism. We are still in Iraq — though scheduled to exit, but no earlier than Bush’s plans called for — and are fairly well stuck in Afghanistan. Virtually all of Bush’s executive orders on the war on terror remain essentially unchanged.

A recent Reuters report (Jan. 7) underscores this point. While Obama was in the Senate, then on the campaign trail, then during his first two years in office, he relentlessly bashed Bush for holding prisoners at the Guantanamo Bay prison, outside the regular court system. Obama promised to give the Gitmo detainees fair trials in our regular court system, though he also promised they would all be convicted and jailed — well, indefinitely!

But quietly, on a Friday when news coverage is guaranteed to be minimal, Obama signed a law that prohibits bringing the remaining 175 Gitmo prisoners here for court trials.

He said he had no choice but to sign the bill — the defense authorization act for fiscal 2011 — because the military funding was necessary, even though the bill contained that provision banning domestic civilian trials for the terrorist detainees. And he vowed to fight to get the provision repealed — although the ban was put in the bill by one of the most left-wing Congresses in American history, so it is hard to see why he thinks he can get the repeal through a more right-wing Congress.

Obama’s claim that he had to sign the bill is just a lie. He certainly could have vetoed it and made it clear to Congress that he would not sign any future bill that included the provision. But he didn’t, and this raises a dilemma about him.

Perhaps he still wants to give the Gitmo guys domestic civilian trials, and has merely decided that trying those prisoners here would be too politically costly. Certainly, the public opposes such trials by a large margin. But if that is the case, he is not much of a man of principle.

On the other hand, perhaps he has changed his mind on the matter, and no longer views such trials as worthwhile. After all, the showpiece of the Obama policy of domestic civil trials for terrorists was the trial of Ahmed Ghailani, the Gitmo guy who was involved in the 1998 bombings of US embassies. The trial ended late last year with the jury finding Ghailani not guilty on 279 of the 280 counts Obama’s Justice Department brought against him, finding him guilty on only one count: planning to destroy US property. He was not found guilty of even one of the 224 murder counts against him. Hardly bracing for the prospect of keeping the other Gitmo guys safely away from society.

However, if Obama has changed his mind, what does that say about his judgment — compared to, say, George Bush’s?





The Serious and the Buffoons

 | 

I like to congratulate countries that, unlike ours, take energy policy seriously. Serious energy policy simply means that you seriously try to find and exploit new energy sources, using reality-based rather than delusional thinking.

Our present administration, which cherishes the delusion that noisy, ugly, and inefficient windmill farms and costly, ugly, and inefficient solar panel farms will allow us to dispense with oil, gas, and coal, is the paradigm case of unserious (i.e., joke) policy makers.

For being serious, kudos should go to Israel. As noted by the Wall Street Journal on Dec. 30, it has encouraged extensive exploration for fossil fuels off its shoreline, and the search has paid off prodigiously. The most recent discovery may tip the Mideast balance of power in Israel’s favor. A huge field of natural gas, aptly called Leviathan, apparently contains 16 trillion cubic feet of natural gas (according to Noble Energy, the American firm developing it). That field alone could supply Israel’s gas needs for a century. It might even make Israel a net energy exporting country.

Leviathan was found in the vicinity of smaller fields discovered earlier in the Levant Basin, an area of Mediterranean seabed off the coasts of Israel and Lebanon. The first two fields, Noa and Mari, discovered in 1999 and 2000 respectively, together contain about 11 trillion cubic feet of natural gas. The Tamar and Dalit fields, both discovered in early 2009, together contain about 9 trillion cubic feet.

The US Geological Survey estimates that the Levant Basin holds a total of 122 trillion cubic feet of natural gas, not to mention 1.7 billion barrels of oil. To put that in perspective, the Levant Basin’s estimated gas reserves are nearly half of what America’s entire natural gas reserves are thought to be.

These huge fields, together with Israel’s laws favoring energy exploration and development, caused the Israeli energy sector stock index to soar 1,700% in 2010. They also led to Lebanon’s passing laws to develop its share of the Levant Basin.

A second story, which appeared in the Journal on Dec. 31, reports that even as our unemployment rate hovers near 10% and the price of gasoline continues to rise, the harlequins in the Obama administration have issued a directive sealing off even more lands from productive exploration. This directive requires the Bureau of Land Management (BLM) to search its huge holdings to find “unspoiled” back country that it can then decree to be “wild lands” and lock away from development of any kind.

This may block from use many millions of acres of land in Alaska, Arizona, California, Colorado, and everyplace else where the feds own land. (The BLM supervises 250 million acres of land!) You can just forget about the uranium, oil, natural gas, and other valuable resources of the areas the BLM shuts down.

The BLM used this power freely back in the 1970s and 1980s, but in 2003, after a lawsuit from the government of Utah, it relinquished the power. Now Obama, having lost his legislative power, is trying to build up the executive power necessary to carry out his jihad against carbon energy, and reverse the 2003 decision. He seems to think that shortages of — and high prices for — energy are the keys to economic prosperity.

All this inclines me to say “Mazel tov!” to the Israelis, and “Go to hell!” to the Obamanista environmental extremists, who are trying to choke off this nation’s energy.






The Dangers of Diagnosis


“Nearly 1 in 5 Americans had mental illness in 2009.” This recent CNBC online headline captured my attention.

The brief article that followed was based on a report by the Substance Abuse and Mental Health Services Administration, a federal agency (oas.samhsa.gov). The article repeats highlights from the agency’s report entitled “Results from the 2009 National Survey on Drug Use and Health: Mental Health Findings,” available in PDF form.

The article states that an estimated 45 million US residents had a mental illness, that 11 million had a serious mental illness, and that these numbers reflect increasing depression among the unemployed.

The article’s intention — to create alarm — is thinly veiled. If people do not have access to interventionist and preventive treatment, any number of woes can follow: “disability, substance abuse, suicide, lost productivity and family discord.” Lost employment equals lost health insurance equals a lack of access to treatment equals a crisis. The insinuation is that government should step in to close the treatment gap.

Finding this article was fortuitous. Only days before I had read an article in Skeptic magazine about the “foibles of the Diagnostic and Statistical Manual V” — the diagnostic guide for mental health practitioners. (For details, see “Prognosis Negative” in Skeptic, volume 15, number 3 [2010], by John Sorboro, himself a licensed, practicing psychiatrist.)

The state of the psychiatric arts today, complicated by increased government control over our nation’s healthcare industry, should alarm all citizens, not just libertarians.

According to Dr. Sorboro, the upcoming version of the DSM will have a marked increase in diagnosable psychiatric disorders, which may include “compulsive shopping” and “Post Traumatic Embitterment Disorder.” But the deeper problem with the DSM is the validity of what it says.

To rectify the unscientific nature of prior versions of the work, the third version was intended to “increase reliability by standardizing definitions.” Still, critics maintained that “the rhetoric of science — rather than scientific data — was used by the developers of the DSM-III to promote their goal, and science did not support [their] claims.” In 1994, the DSM-IV was published, listing 297 disorders. The latest revision is set to increase that list. Yet according to Dr. Sorboro, almost “every major psychiatric construct is seen as being of questionable validity by a vocal group within the field itself[,] or outside it.”

Psychiatric disorders are supposed to be pathological constructs, as Parkinson’s disease is a pathological construct. For a construct to be valid, Sorboro states, it must differentiate itself from other pathological constructs and provide a theoretical framework for prediction and specific intervention. He likens psychiatric pathological constructs to the construct for fibromyalgia — “a loose collection of non-specific complaints.” Fibromyalgia lacks an underlying, identifiable pathology. So do psychiatric constructs.

Critiques of the DSM include claims that it’s a collection of “the moral objections of a group within power [who] desire to medically pathologize another group for self serving purposes,” and that it is “a-theoretical and purely descriptive.” Evidence in support of the former critique is that homosexuality was not entirely removed from the DSM’s list of mental disorders until the latter half of the 1980s!

A diagnosis based on the DSM is not a divination of pathology. The DSM is tautological. It describes. It does not explain. Thus, diagnosis is subjective, not objective. Sorboro uses bipolar disorders to illustrate. Bipolar I Disorder appeared in the DSM-III in 1980, followed by Bipolar II Disorder, Bipolar Disorder NOS (not otherwise specified — that’s worrisome), and cyclothymia. There has been a correlative rise in the diagnoses of such disorders — one statistic that Sorboro cites is a 4000% increase in bipolar disorder diagnoses in children during the past decade, despite the fact that mental health practitioners know “hardly anything more of real scientific significance about bipolar disorder than we did in 1980.”

Sorboro states that medical disease classification evolves in a messy and inconsistent way, “and often has to do with politics and not just compelling scientific fact. It’s just much worse in psychiatry.” For example, contributors to the DSM-V include “health care consumers”; and as Sorboro says, no other branch of medicine would ask consumers for advice in defining pathology. Moreover, the American Psychiatric Association task force handling this revision is conspicuously closed and non-transparent — task force members must sign confidentiality agreements and cannot keep written notes of their meetings.

Hmm.

I have been skeptical of the DSM since I first read it. I was a judicial clerk, and my judge kept a copy of the DSM-IV on one of his bookshelves. He used it for reference during sentencing hearings and when he presided over mental health hearings. During lulls in my clerkship tasks, I read several large chunks of the DSM-IV. My initial thoughts were: there certainly are some people with severe mental problems, but this is bullshit. Symptoms of the indicated mental “conditions” were so encompassing that anyone and everyone could be classified as having some type of mental disorder.

My best friend from high school is a psychiatrist, and after reading the DSM-IV, I asked her about it. She said that it gives a practitioner guidelines for diagnoses. But don’t guidelines have to guide? I asked. Isn’t a diagnostic process that has no conceptual limits wholly subjective? The flu is marked by symptoms that make it the flu and not a common cold or pneumonia. But even a brief reading of the DSM shows that mental illnesses are not marked by unique symptoms. Why? My friend had a few forgettable justifications, but no answers.

Homosexuality was not entirely removed from the DSM’s list of mental disorders until the latter half of the 1980s!

Many Liberty readers are familiar with libertarian criticisms of the mental health industry. But the state of the psychiatric arts today, complicated by increased government control over our nation’s healthcare industry, should alarm all citizens, not just libertarians. Psychiatric abuse by states against citizens is well documented; psychiatric imprisonment for dissidents in the Soviet Union is just one example.

The dangers are clear. In the legal realm, when a criminal statute is overbroad, behavior otherwise constitutionally protected is criminalized, subjecting more citizens to state control. Overdiagnosis of overinclusive mental disorders will subject more citizens to treatment — which, under Obamacare, means subjection to more government control. This should be enough to give anyone an anxiety disorder. Considering the political nature of mental “disease” classification, I wonder if a disorder marked by “irrational fear” of a “benevolent government” might be among the disorders included in the new DSM.






Dickie Eklund's Punch-Out!!


Do we really need another rags-to-riches movie about boxing? Probably not. But filmmakers keep making them, and we keep watching them. Whether you like boxing or not, there is something cathartic about the hero's struggle itself. Like the best boxing movies, the latest one is more about the fighter than the fight, more about the family duking it out outside the ring than the boxing going on inside it. We can always use another film about family dynamics and the will to overcome obstacles, and The Fighter is one of those.

Dickie Eklund (Christian Bale) is the classic small-town hero, still basking in the glory of a quasi-victory 14 years earlier, in a bout where he knocked down Sugar Ray Leonard. Not knocked out, mind you, but knocked down. And some say that Sugar Ray actually tripped. Nevertheless, Dickie is called “The Pride of Lowell,” and as this film begins he is swaggering down the street in that Massachusetts town with an HBO film crew in tow, documenting his “comeback” as a trainer for his younger half-brother, Micky Ward (Mark Wahlberg).

Micky’s manager is his mother Alice (Melissa Leo), a hard-driving, chain-smoking, no-nonsense matriarch in tight pants and high heels. Leo is over-the-top perfect in this role, from the moment she prances into the gym, clipboard in hand, and asks the film crew, “Did you get that? Do you need me to do it again?”

Alice is the ultimate stage mother: pushy, strong, manipulative, and naively confident in her ability to manage her sons’ careers. “You gonna let her talk to me like that?” she rages at Micky when his girlfriend Charlene (Amy Adams) stands up to the rude and domineering matriarch. “I have done everything for you!” she screeches. She is also a classic enabler. Like many mothers who know how to give affection but don’t know how to parent, Alice sees no wrong in Dickie, and her constant sympathy and approval contribute to his sense of entitlement and its disastrous consequences.

Boxing movies are never really about the fights; they’re about the fighter.

The story of fraternal conflict is as old as Cain and Abel, Ishmael and Isaac, Esau and Jacob. In this case, two brothers vie for their parents’ love and attention, while trying to work out their own relationship. Dickie is clearly Alice’s favorite, but since he is virtually washed up as a boxer, Micky becomes the family’s new great hope — even his seven sleazy sisters bask in the reflected light of their brothers’ moments of fame. Dickie is the son mired in past glory, and Micky is the son trying to break away and rise above his toxic roots. But Micky is constantly pulled back by his love for his crazy family, and especially by his childlike love for his older brother.

As with many small-town heroes, adulthood has not been good to Dickie. He hangs out in bars and crack houses when he should be in the ring training and sparring with Micky. He shows up several hours late for training — while the HBO cameras keep rolling. He dives out the back window of his girlfriend’s house when he hears his mother coming, afraid of her disapproval. Nevertheless, throughout the first half of the film, Dickie is high on life, hopped up, and wide-eyed. His backstreet swagger oozes confidence and joy.

Partway through the film, however, we realize that the documentary isn’t going to be about Dickie’s comeback as a fighter and trainer; it’s going to be HBO’s High on Crack Street: Lost Lives in Lowell (1995). Richard Farrell, who directed the HBO doc, plays a character much like himself in a cameo as a cameraman in this film. Of course, Dickie and his family don’t know the true topic of the documentary, and their moment of realization is devastating, played by each actor with understated, pitch-perfect emotion. Farrell captured the tragedy of crack addiction in the real documentary, and Russell does it again with this film. (Note to self: never trust anyone with a movie camera, no matter what the person tells you is being filmed.)

Boxing movies are never really about the fights; they’re about the fighter, so this film is aptly titled. It’s not about one fighter, though, but about several — Dickie, the has-been boxer fighting to regain his former glory; Micky, the stronger brother fighting to break out of the other’s shadow; Charlene, the girlfriend fighting for respect; and Alice, the mother fighting for her family’s success. All of this takes place in a setting that has seen more rags than riches over the years, a place where boxing can be a pathway to money and status, but more often leads to broken hearts and broken bones.

The Fighter is a film about choice — about choosing to work hard, or not; choosing to be self-interested, or not; choosing the right friends, or not. Dickie’s choices land him in prison; Micky’s choices (when unencumbered by Dickie’s and Alice’s management) land him on a path to the welterweight championship. The scenes juxtaposing Micky’s training in the gym with Dickie’s sparring in the prison yard (and Alice’s chasing her husband with a frying pan) say a lot about choice and consequence in this film about fighting — it’s not just about beating someone up, but about fighting to survive.

This is also a film about love, and how to express it when the person you love is toxic; here, true love is expressed by knowing when someone is hurting, and reaching out to carry the load. This is a film about breaking away, but also about hanging on. How Dickie and Micky manage to do both makes The Fighter well worth watching, even though it might be called “just another boxing film.”


Editor's Note: Review of "The Fighter," directed by David O. Russell. Paramount Pictures, 2010, 115 minutes.





The Capital Gang


For me “I’m a libertarian and I support the Washington Redskins” is right up there with “I’m from the government and I am here to help.” It makes my shoulders twitch and I feel creepy-crawlies run up and down my spine.

It all started in the run-up to Super Bowl XVIII, played at Tampa Stadium on January 22, 1984. The heavily favored, patrician, ruling-class Redskins faced the underdog, blue-collar, working-class Raiders. Their respective QBs had some history: they had competed against each other for the highly prestigious Heisman Trophy back in 1970. Redskin QB (then with Notre Dame) Joe Theismann changed the way he pronounced his name from Thees-man to Thighs-man to make it rhyme with the name of the vaunted trophy in order to garner more votes. When Raider QB (then at Stanford) Jim Plunkett convincingly blew away both Joe and Archie Manning, the famous father of future NFL QBs (2,229 votes to 1,410 to 849), the Thighs-man camp infamously said that Jim had only won it because both his parents were blind. Please. What a classless act.

Happily the Raiders smashed the Redskins, leading 21–3 by halftime and scoring on special teams, defense, and offense. The final score was 38–9, and the record books had to be rewritten. Poetic justice?

One additional happy result of that total whipping was that the distinguished MVP scholar Charles Murray renamed his book of the mid-’80s, the book that shot him to stardom. As he recounts on pages xiii and xiv of the tenth anniversary edition, the working title had been F****** Over The Poor — The Missionary Position. Then it became Sliding Backward, but while he was watching the sad-sack ‘Skins go nowhere late that Sunday, the title Losing Ground was born. Some TV commentator probably said something like “the ‘Skins lose yet more ground to the Raiders,” and a light went on in Murray’s head.

There is only one good reason for the continued existence of the Landover Looters, and it is simply this: every single time they lose, absenteeism within the federal government soars the following Monday.

Eighteen months later I moved from California to northern Virginia and wall-to-wall, front-to-back, ceiling-to-floor ‘Skins fandom. There was no soccer (DC United) and no baseball (Nationals), and the basketball (Bullets) and ice hockey (Capitals) barely registered on the local sports radar screen. All that these rent-seeking, tax-guzzling federal employees and their hangers-on cared about was the Redskins. Forget the country. They were totally nuts, completely besotted. There was a 30-year wait for season tickets and probably still is. People had to die before you could advance up the list. And it was all so PC that when the gun death rate in DC hit record levels the Bullets had to be renamed and chose to be the Wizards.

In defense of all the other pro sport teams named Washington or DC, at least they all play there. The so-called Washington Redskins play in Landover, MD, and train in Ashburn, VA. One wonders how many of the players and staff live in DC and how many in the suburbs or even further out.

I am curious as to why all five major sports leagues have to have a DC area franchise. Surely this cannot be connected to the special status that sports leagues enjoy under federal regulations.

There are large echoes here of the equally despised British soccer team Manchester United (fondly known in Manchester itself as “the scum”), which regularly sits atop the English Premier League. It plays in a town called Stretford, and its players live next door in the very tony county of Cheshire rather than in more downmarket Lancashire.

Hence the joke, How many soccer clubs are there in Manchester? Two: Manchester City and Manchester City Reserves. And hence the sign at the Manchester city line when Carlos Tevez signed to leave United for City: Welcome to Manchester.

The name refers to a criminal act of destruction of private property, deception, and sleight of hand; commemorating an attempt to pin a crime falsely on a minority.

Common sense surely dictates that just as Manchester United should be renamed Stretford United so the Washington Redskins should become the Landover Redskins or perhaps the Landover Looters, to reflect the dominant local industry. It is simply dishonest to trade the way they do. They are living a lie.

But why is the team called the Redskins in the first place? What has the swamp of Washington got to do with Native Americans other than as a source of subsidy and special treatment? The answer is that the franchise started in Boston, Mass., as in the place where white patriots dressed up as Native Americans and chucked all that tea overboard. So the name refers to a criminal act of destruction of private property, deception, and sleight of hand; the name commemorates an attempt to pin a crime falsely on a minority, an attempt to unleash the might of the British Army on peaceful natives. It really is disgraceful.

Speaking of minorities, these ‘Skins so beloved by federal bureaucrats were the very last team in the NFL to integrate, and they did so with great reluctance and in a pretty surly, bad-tempered way. The suspicion is that they did so only because the Department of the Interior owned their then-stadium (typical) and the Kennedy administration was not impressed at seeing a non-integrated team in the nation’s capital — not really Camelot!

There are sports bars in the DC region with affiliations other than the ‘Skins, but they are nearly as rare as hens’ teeth. I used to frequent a Steelers bar with my friend Father Jack out toward Dulles on fall Sunday afternoons, until the BATF hit it. “Hands on the table — don’t reach for anything, not even your cutlery — don’t make our day.” I am sure the BATF agents were all ‘Skins fans.

The result is a cloying, all-pervading, overarching pro-Redskins atmosphere that is not healthy. I recall taking my elder son Miles to pre-K one Monday morning in, say, 1986; he was proudly wearing his brand-new Dallas Cowboys shirt, a gift from Uncle Leonard. A female teacher stopped us in the corridor:

Teacher, somewhat condescendingly and pointing at said shirt: “Mr. Blundell, don’t you know this is Redskins country?”

Blundell in his best posh British accent: “Oh I am terribly sorry. I thought the Cowboys were America’s Team!”

If this were a comic strip, the next panel would show a woman with a screwed-up face looking at the heavens, elbows stuck firmly into her ribs and clenched fists raised by her jaw, with a thought bubble reading “Argh! *&#%@?+^#*&.”

So as the population of the Swamp changes every election cycle, waves of well-meaning (I am being charitable) men and women, true sports fans who support good honest teams that play in privately owned stadiums, sweep in and are corrupted into supporting the Redskins. You can’t chat at the water cooler or over coffee or at lunch unless you are in the Skindom. It is so sad, but then Washington believes in monopolies such as currency issuance, taxation, and regulation.

When good internationally proven liberty-minded folk such as me confront these so-called libertarian Redskins we receive really mealy-mouthed responses, typical of which is “Oh, when I think of Washington I think of the person not the place!” Right! These people are confused and confusing, embarrassed and embarrassing, and not to be trusted until they go through therapy.

There is only one good reason for the continued existence of the Landover Looters, and it is simply this: every single time they lose, which has been well over 50% of the time recently, absenteeism within the federal government soars the following Monday. This can only be a good thing.

But there is a solution for the Landover Looters problem. The team should move to Syracuse in upstate New York and become the Syracuse “Washington’s Redskins,” with the nickname of the “Waistcoats.” Let me explain. George Washington signed a treaty with the Oneida Nation in that area to fight the Brits. So to the extent that the Washington Redskins exist free of deceit, capture, and vainglory, they are in the Syracuse–Finger Lakes region.

Why Waistcoats? Because Washington wore them and it’s probably a better nickname than “the big girls’ blouses,” which is what I call “libertarians” who support the Landover Looters.






Your 401k Is a Sitting Duck


In Liberty some time back (“Pension Peril,” March 2009), I reflected on President Kirchner of Argentina, who helped fund her country’s failing public pension system by simply stealing money from the private pension savings accounts that many of her countrymen had managed to accumulate. Her government expropriated (“nationalized”) the $24 billion private pension fund industry in order to save the public system, forcing citizens to trade their savings for Argentinean Treasury bills of dubious creditworthiness. I suggested then that such a thing might happen in the US, where Americans have many billions put aside in various retirement vehicles — a tempting target for any cash-starved government.

I think that dark day is growing closer. My feeling is confirmed by some troubling news, recently reported by the Adam Smith Institute’s wonderful website. The author of the report, economist Jan Iwanik, notes that a number of European countries are shoring up their tottering public pension plans by the Peronista tactic of stealing from those who have prudently put aside some extra money for their retirement.

Bulgaria, for example, has put forward a plan to confiscate $300 million from the private savings accounts of its already impoverished citizens and put those funds into the public social security system. Fortunately, organized protest has cut the amount transferred to “only” $60 million — for now, at least. And Poland has crafted a scheme to divert one-third of all future contributions that are made to private retirement savings accounts, so that the money flows instead to the public social security scheme. This will amount to $2.3 billion a year stolen from frugal people to shore up the improvident public system.

The most egregious case is that of Hungary. This state, which has been teetering on the verge of insolvency for years, has taken a drastic punitive step. Under a new law, all citizens who have saved for their retirement face a Hobson’s choice: either they turn over their entire retirement accounts to the government for the funding of the public system, or they lose the right ever to collect a state pension, even though they have paid and must continue paying contributions to the state system. The Hungarian government thus hopes to pocket all of the $14.2 billion that the hapless Hungarians have managed to squirrel away.

As our own national insolvency grows nigh, it is just a matter of time before the feds take a swing at the enormous pot of private retirement savings held by Americans. If you think you’ve heard nothing but class warfare rhetoric from this administration, just wait till it feels the need to take your 401ks, IRAs, and so on. The demonization of the productive and the prudent will be loud and shrill.






Turn Out the Lights, the Party's Over


With a budget of $65 million, Spider-Man: Turn Off the Dark is touted as the most lavish musical ever mounted on Broadway. Much of the money has been invested in mechanical lifts and flying machines, high-tech costumes, and, unfortunately, medical bills. Already one performer has broken both wrists, another has broken both feet, another has fractured his ribs and injured his back, and the leading actress has suffered a concussion that took her out of the show for a while. And Spider-Man hasn't even officially opened yet. (It's still in previews, and the official opening date, when the show will be set in stone and critics are invited to write their reviews, keeps being pushed back.)

You know you're in trouble when the stage manager has to make an announcement before the first act assuring the audience that OSHA representatives are on hand backstage to make sure the stunts are in full compliance with safety requirements, and that the state Department of Labor has okayed the production, despite the numerous injuries. (The continued injury rate gives you a lot of confidence in OSHA and the Department of Labor, doesn't it?) Going to a performance of this new musical feels eerily like going to a hockey game or a stock car race — you hate to admit it, but you're almost hoping to see blood. Look at all the laughs Conan O'Brien has milked from the show's growing injury list.

Let’s be frank: accidents aside, the show was doomed from the beginning. All the stunts and technical tricks in the world can’t make up for a bad script, and this one is a snoozer. It gained the potential for an interesting plot by introducing an unexpected new character: Arachne, who in Greek mythology was transformed into a spider for boasting that she was a better weaver than Athena, patron goddess of weaving. Two characters from different eras cursed with spidery traits and struggling to become human again could have produced a dynamic new story.

Going to a performance of this new musical feels eerily like going to a hockey game or a stock car race — you hate to admit it, but you're almost hoping to see blood.

But instead of focusing on this new character development and trusting the audience to know the story of how Peter Parker became Spider-Man (which any possible audience is certain to know already), the show’s producers decided to leave Arachne dangling (literally) for most of the show and concentrate on retelling the core story.

The production is framed by four punk teens who seem to be writing a script or filming a video (it isn’t clear what they are doing) in front of the stage. They tell each other the story, and then their story comes to life as the actors perform it, almost action-for-action and word-for-word the way we have already seen it in comic books, on film, and in amusement parks. First we hear it, then we see it — yet we already know it. Talk about overkill! I was ready to pull out the industrial-strength Raid before the first act was finished.

Even then . . . The show could have survived a weak storyline if director Julie Taymor had delivered what she is known for: a montage of splashy, whimsical, creative production numbers that wow the audience with unexpected visual delights. This is what she did in her film Across the Universe and Broadway’s phenomenal The Lion King. In both those shows, the story is just a vehicle for delivering breathtaking musical productions — and it works. Who can forget the spectacular parade of lifelike animals or the dancing grasses and rivers in The Lion King? The sets, the costumes, the choreography, and the thrilling music are simply magnificent, despite the silliness of some of the main characters.

Unfortunately, Taymor's vision for Spider-Man falls as short as the safety harness that was supposed to catch Spidey's stand-in during his unintentionally death-defying drop into the orchestra pit. Yes, Arachne's spider costume is pretty cool as she hangs and twists in the air while her legs and abdomen grow. But we saw something quite similar at the end of Act One in Wicked. The dance of the golden spiders as they swing from 40-foot golden curtains is lovely as well, but we've seen that in every Cirque du Soleil show of the past 20 years. The fights between Spidey and Green Goblin as they fly above the audience and land in the balconies are probably the most unexpected and technically difficult, but only about half the audience can actually see them, since the fights take place high at the back of the theater.

In short, even if the production crew of Spider-Man: Turn Off the Dark can get its acts together and fix the technical problems, the show will still have artistic problems that may be insurmountable. It isn’t as showy as Cirque du Soleil, or as campy as Spamalot, or as interesting as Wicked. It simply isn’t very good, and it certainly isn’t worth risking people’s lives for. My advice: turn out the lights; the party’s over.


Editor's Note: "Spider-Man: Turn Off the Dark" is currently in previews at the Foxwoods Theatre on 42nd Street.





Artists in the Movies: The Ten Best Films


I’m not sure why I consider artists so fascinating. Perhaps it is the especially acute way they see the world — vision being for me only a weak sensory modality. Perhaps it is the fact that they use more of the right side of the brain than I typically use in my own work. But whatever the reason, apparently I am not alone in my fascination, since movies about artists are fairly numerous in the history of cinema. In this essay I want to review ten of the best such movies ever made.

I will confine myself to artists in the narrow sense of painters, as opposed to creative writers, photographers, or musicians. I will even put aside sculptors, though that rules out reviewing such interesting films as Camille Claudel (1988), the good if depressing bioflick about the sculptress who worked with and was the mistress of Auguste Rodin.

I will also confine myself to standard movies, as opposed to documentaries. There are many fine documentaries about individual artists and artistic movements. One particularly worth noting is My Kid Could Paint That (2007), a film that honestly explores the brief career of four-year-old Marla Olmstead, who caused a sensation when her abstract paintings caught the attention of the media and the public and began selling for many thousands of dollars each. After an exposé on CBS News, the public began to wonder whether she had really produced her own work. That is the fascinating question the film investigates, but in the background is another, equally fascinating question — whether abstract art has any intrinsic quality, or whether it is all a matter of the perception of the critics.

But to return. One other restriction I will adopt is to consider feature films only, as opposed to TV series. This causes me some grief, since one of my favorite portrayals of painters on screen will have to be skipped — the delightful three-part BBC miniseries The Impressionists (2006). This series is TV at its finest: a historically accurate portrayal of the French impressionist school of painters (Manet, Monet, Renoir, Bazille, Degas, and Cézanne) that is also compelling and entertaining storytelling. It is structured as a series of memory flashbacks that occur to Claude Monet as he is interviewed late in his life by a journalist about the artistic movement he and his circle created.

In theory, it shouldn’t be any more difficult to produce a decent movie about a painter than about any other subject, but in practice, there are many possible pitfalls.

But what does a good movie about an artist include? Such a film can take many forms. It can be a straight bioflick recounting a person’s life and achievements — as in Lust for Life, The Agony and the Ecstasy, and Seraphine. It can explore a controversy, such as the merit of abstract art (Local Color). It can explore some of the ways artists interact with other artists — competition or romantic involvement (Modigliani, Frida, and Lust for Life again). It can examine the interaction between artists and mentors (Local Color), or patrons or art critics (The Agony and the Ecstasy, Girl with a Pearl Earring), or other intellectuals (Little Ashes). It can dramatize relationships between artists and family members (Lust for Life, Moulin Rouge). It can try to meaningfully convey the inspiration for or the process of artistic creation (The Agony and the Ecstasy, Rembrandt, Girl with a Pearl Earring). Finally, it can analyze the personality of an artist (The Moon and Sixpence, Moulin Rouge, Seraphine).

My criteria for ranking these movies are not much different from those I use to judge any other movies: quality of ideas, story, acting, dialogue, and cinematography. In theory, it shouldn’t be any more difficult to produce a decent movie about a painter than about any other subject, but in practice, there are pitfalls that can ensnare you.

In particular, it seems that many directors, in trying to make a movie about art, try to make the movie artsy. One thinks of the disastrously bad film Klimt (2007), an internationally produced bioflick about the Viennese artist Gustav Klimt (1862–1918), played by John Malkovich. The flick is tedious and hard to follow, with numerous hallucinatory scenes interspersed in the action. Malkovich gives a listless performance, portraying the artist as bereft of any charm. The result is risible.

I expect art, not artsiness. And I will mention one other thing I look for in movies about painters: if it accords with the story line, I like to see the artist’s work displayed. If a person is supposed to be great at doing something, one naturally wants to see the evidence.

To build suspense, I’ll present the movies that made my top ten in reverse order of my judgments of their importance and quality.

Number ten on the list is The Agony and the Ecstasy (1965). This lavishly produced film is based on Irving Stone’s best seller of the same title, but focuses on just part of the story — Michelangelo (1475–1564, portrayed by Charlton Heston) painting the ceiling of the Sistine Chapel at the prodding of his patron, Pope Julius II (Rex Harrison). The eminent Carol Reed directed the movie, and it was nominated for five Academy Awards, including those for cinematography, art direction, and score. In each of those areas the film is indeed excellent. Especially effective is the scene in which Michelangelo gets his key inspiration for his ceiling mural from observing the beauty of the clouds. The interesting idea explored in the movie is the way in which the influence of a patron can help even a highly individual artist elevate the artistic level of his work. The pope insisted that Michelangelo do the job, even though the artist initially demurred, viewing himself primarily as a sculptor.

Many directors, in trying to make a movie about art, try to make the movie artsy. One thinks of the disastrously bad film Klimt.

The acting in this film isn’t as good as one would expect of the two leads. Heston and Harrison, both recipients of the Oscar for best actor in other movies, seem somehow miscast in their roles. But the movie transcends this weakness; the glory of Michelangelo’s art is on full display in a beautiful color production.

Number nine is Frida (2002), directed by Julie Taymor and starring Salma Hayek (who also coproduced the movie). This is an unvarnished look at the life of Frida Kahlo (1907–1954), focusing on the accident that made her a semi-invalid and caused her lifelong pain, and on her tempestuous marriage to the painter Diego Rivera. Rivera was already famous when they met, and her career grew alongside his. His numerous adulterous affairs are not hidden, nor are her affairs with other women (as well as Leon Trotsky). Both Rivera and Kahlo were devout socialists, as the movie emphasizes.

Salma Hayek’s performance is extraordinary — it is obvious she was completely devoted to the project. She convincingly conveys the physical suffering Kahlo endured, along with the mental anguish caused by Rivera’s endless philandering. She was nominated for an Oscar for her performance. Alfred Molina is excellent as Diego Rivera, and Edward Norton gives a nice performance as Nelson Rockefeller (who, ironically, commissioned Rivera to do a mural for him), as does Antonio Banderas (playing the painter David Alfaro Siqueiros). The cinematography is also excellent, and we get to see quite a few of the artist’s paintings. Taymor does a good job of integrating the history of the times with the story line.

Number eight is Local Color (2006). Written and directed by George Gallo, it is a fictionalized account of his friendship with the landscape painter George Cherepov (1909–1987), an artist he met while he himself was hoping to pursue art, before turning in his twenties to screenwriting and directing. Gallo’s earliest success was writing the screenplay for Midnight Run.

In the movie, the Gallo figure John Talia Jr. (Trevor Morgan) is thinking about what to do after high school. His father (perfectly played by Ray Liotta) hopes he will get a regular job, but young John wants to be a painter. He manages to gain the friendship of a crusty, profane, but gifted older Russian painter, here called Nicoli Seroff (played brilliantly by Armin Mueller-Stahl). Seroff invites John to spend the summer at his house, much to the worry of John’s father, who is concerned that Seroff is gay and will “take advantage” of his son. After some tension between the two, Seroff finally breaks down and shows John how to paint.

Besides being a nice meditation on the role a mentor can play in an artist’s life, the movie has as a subtext an exploration of two related and important questions about contemporary art: is there great artistic merit in abstract art, and should art divide the elites from the ordinary public? This subtext plays out in the exchanges between the prickly Seroff and a pompous local art critic, Curtis Sunday (played exquisitely by Ron Perlman, of Hellboy and Beauty and the Beast). Their dispute culminates in a hilarious scene in which Seroff shows Sunday a painting produced by an emotionally disturbed child with whom Seroff has worked. Seroff shows Sunday the painting without revealing who made it, and asks for Sunday’s opinion about the artist. Sunday then begins to talk earnestly about the virtues of the artist, thinking he must be a contemporary painter. When Seroff tells Sunday the truth, Sunday storms off to the howls of Seroff’s laughter. The movie has excellent cinematography — which gathers interest from the fact that all the oil paintings shown in the film were painted by Gallo himself.

In the film Frida, Diego Rivera's numerous adulterous affairs are not hidden, nor are Kahlo's affairs with other women — as well as with Leon Trotsky.

Number seven on my list will be a surprise. It is The Moon and Sixpence (1942). The movie is a superb adaptation of W. Somerset Maugham’s brilliant short novel of the same name. The story is about a fictional painter, Charles Strickland, and is loosely based on the life of Paul Gauguin (1848–1903). Strickland (played by the underrated George Sanders, who excelled at cads) is a stockbroker who suddenly and unexpectedly leaves his wife and family in midlife to pursue his vision of beauty — his painting. He is followed by a family friend, Geoffrey Wolfe (a character I suspect Maugham based on himself, and beautifully portrayed by Herbert Marshall), who narrates as he tries to make sense of Strickland’s ethical worldview.

What we see is a man who is an egotist to the core, but we realize that this is an egotism driven by a desire to create. A key scene in this regard is the one in which Strickland explains to Wolfe that he doesn’t choose to paint; he has to paint. Maugham doesn’t make it easy on us by giving Strickland some external excuse — a bad marriage, say, or an unhappy family — for fleeing civilization. In fact, the title seems to indicate that in the end Maugham himself fails to appreciate Strickland’s choice. It comes from a Cockney phrase about somebody so struck by the moon that he steps over sixpence: by focusing on something abstract — such as artistic beauty — one misses out on something that may be more worthwhile, such as rich human relationships.

But what makes this film powerful is its exploration of the idea that a person can be an egoist — even immoral by conventional standards — but still be a creative genius. Indeed, I recommend this film in my ethical theory classes as an example of Nietzsche’s brand of egoism.

Number six is Girl with a Pearl Earring (2003), a tale from the historical novel by Tracy Chevalier about the life of Johannes Vermeer (1632–1675). It imagines the story of a young woman, Griet, who comes to the Vermeer household as a maid. Griet’s father was a painter, but went blind, forcing her to support herself by working as a domestic servant. The Vermeer household is dominated by his all-too-fecund and extremely jealous (not to say shrewish) wife Catharina, along with her mother Maria Thins.

Griet is fascinated by Vermeer’s work, the colors and composition. Noticing her interest, Vermeer befriends her, letting her mix his paints in the morning. The viewer suspects that this friendship involves a romantic interest, at least on his part. He is careful to keep the friendship from Catharina’s notice. While shopping with the chief maid, Griet meets the butcher’s son Pieter, who is very attracted to her, and we suspect that the feeling is mutual.

As if this incipient romantic triangle weren’t enough excitement for poor Griet, Vermeer’s concupiscent patron Van Ruijven sees her and pushes Vermeer to let her work in his house. Faced with Vermeer’s refusal, Van Ruijven commissions him to paint her, which Vermeer agrees to do. Van Ruijven, obviously, isn’t motivated by art so much as by lust — he even attempts to rape Griet. All this culminates, however, in her becoming the model for Vermeer’s most famous masterpiece, “Girl with a Pearl Earring.” The earring, one of a pair borrowed from Catharina (to her intense jealousy), goes with Griet when she leaves Vermeer’s household, an interesting memento of her adventure.

What we see is a man who is an egotist to the core, but we realize that this is an egotism driven by a desire to create.

The art direction is superb. It is executed in colors reminiscent of the painter’s method (dark background with vivid tones in the key objects). Appropriately, the film received Oscar nominations for both best art direction and best cinematography. The acting is almost entirely excellent, with Essie Davis playing a very irascible Catharina, Tom Wilkinson a randy Van Ruijven, Judy Parfitt a practical Maria, and Cillian Murphy a supportive Pieter. Especially outstanding is Scarlett Johansson as a very self-contained Griet. She bears an uncanny resemblance to the girl in the actual painting. The sole disappointment is Colin Firth as Vermeer. He plays the role in a very inexpressive way — more constipated than contemplative, to put it bluntly.

Number five is a film about the life and work of a controversial modern artist, Pollock (2000).

Jackson Pollock (1912–1956) was a major figure in the abstract art scene in post-WWII America. He grew up in Arizona and California, was expelled from a couple of high schools in the 1920s, and studied in the early 1930s at the Art Students League of New York. From 1935 to 1943 he did work for the WPA Federal Art Project. During this period, as throughout his life, he was also battling alcoholism.

He received favorable notice in the early 1940s, and in 1945 he married another abstract artist, Lee Krasner. Using money lent to them by Peggy Guggenheim, they bought what is now called the Pollock-Krasner House in Springs (Long Island), New York. Pollock turned a nearby barn into his studio and started a period of painting that lasted 11 years. It was here he developed his technique of letting paint drip onto the canvas. As he put it, “I continue to get further away from the usual painters’ tools such as easel, palette, brushes, etc. I prefer sticks, trowels, knives, and dripping fluid paint or heavy impasto with sand, broken glass or other foreign matter added.” He would typically have the canvas on the floor and walk around it, dripping or flicking paint.

In 1950, Pollock allowed the photographer Hans Namuth to photograph him at work. In the same year he was the subject of a four-page article in Life, making him a celebrity. During the peak of his popularity (1950–1955), buyers pressed him for more paintings, making demands that may have intensified his alcoholism.

He stopped painting in 1956, and his marriage broke up (he was running around with a younger girlfriend, Ruth Kligman). On August 11, 1956, driving drunk, he had an accident that killed both him and a friend of Ruth’s, and severely injured Ruth herself. After his death, Krasner managed his estate and worked to promote his art. She and he are buried side by side in Springs.

Critics have been divided over Pollock’s work. Clement Greenberg praised it as the ultimate phase in the evolution of art, moving from painting full of historical content to pure form. But Craig Brown said that he was astonished that “decorative wallpaper” could gain a place in art history. However one might view Pollock’s work, it has commanded high prices. In 2006, one of his paintings sold for $140 million.

The movie tracks the history fairly closely, starting in the early 1940s, when Pollock attracted the attention of Krasner and Guggenheim, and moving through his marriage to Krasner, his pinnacle as the center of the abstract art world, and the unraveling of his personal life. Throughout, we see him angry, sullen, and inarticulate, whether drunk or sober.

Ed Harris, who directed the film and played the lead, is fascinating (if depressing) to watch. He plays a generally narcissistic Pollock, with the problem of alcoholism featured prominently. He was nominated for a best actor award for this role. Especially good is Marcia Gay Harden as Lee Krasner, who won a best supporting actress Oscar for her performance. The main defect of the movie is that it gives us no idea why Pollock was angry and alcoholic. Was it lack of respect for his own work? Did he feel it wasn’t really worthy of the praise it received? We get no clue.

Number four is a piece of classic British cinema, Rembrandt (1936), meaning, of course, Rembrandt van Rijn (1606–1669), who is generally held to be the greatest painter of the Dutch Golden Age (an era that included Vermeer, his younger contemporary). Rembrandt achieved great success fairly early in life with his portrait painting, then expanded to self-portraits, paintings about important contemporaries, and paintings of Biblical stories. In the latter, his work was informed by a profound knowledge of the Bible.

But his mature years were characterized by personal tragedy: a first marriage in which three of his four children died young, followed by the death of his wife Saskia. A second relationship, with his housekeeper Geertje, ended bitterly; and a third, common-law marriage, to his greatest love, Hendrickje Stoffels, ended with her death. Finally, Titus, his only child to have reached adulthood, died. Despite his early success, Rembrandt’s later years were characterized by economic hardship, including a bankruptcy in which he was forced to sell his house and most of his paintings. The cause appears to have been his imprudence in investing in collectibles, including other artists’ work.

Rembrandt’s painting was more lively and his subjects more varied than was common at the time, when it was usual to paint extremely flattering portraits of successful people. One of his pieces proved especially provocative: “The Militia Company of Captain Frans Banning Cocq,” often called “The Night Watch,” an unconventional rendition of a civic militia, showing its members preparing for action rather than standing elegantly in a formal lineup. Later stories had it that the men who commissioned the piece felt themselves to have been pictured disrespectfully, though these stories appear to be apocryphal.

The main defect of Pollock is that it gives us no idea why he was angry and alcoholic. Was it lack of respect for his own work? Did he feel it wasn’t really worthy of the praise it received? We get no clue.

The movie is fairly faithful to historical reality, except that it doesn’t explore Rembrandt’s financial imprudence, attributing his later poverty instead to his painting of “The Night Watch” as an exercise in truth-telling. The movie shows him painting these middle-class poseurs for what they were, and their outrage then leading to a cessation of high-price commissions from other burghers. The direction is excellent, as one would expect from Alexander Korda, one of the finest directors Britain ever had. The supporting acting is first-rate, especially Elsa Lanchester as Rembrandt’s last wife Hendrickje, Gertrude Lawrence as the scheming housekeeper and lover Geertje, and John Bryning as the son Titus. Charles Laughton’s performance as Rembrandt is masterful. There are great actors, and then there are legends, and he was both. Unlike his loud and dominating performances in such classics as The Hunchback of Notre Dame and Mutiny on the Bounty, his role in this film is that of a wise and decent man, devoted to his art and his family, and he makes it even more interesting. The main flaw in the film is that we don’t see much of the artist painting or of his paintings, but that is a comparatively minor flaw in an otherwise great film.

Number three is the great Moulin Rouge (1952), based on the life of Henri Marie de Toulouse-Lautrec-Monfa (1864–1901). Toulouse-Lautrec was born into an aristocratic family. At an early age he lived in Paris with his mother, and showed promise as an artist. But also early in his life he showed an infirmity. At ages 13 and 14 he broke first one leg, then the other, and both failed to heal properly. As an adult, he had the torso of a man and the legs of a boy, and he was barely 5 feet tall. A lonely and deformed adolescent, he threw himself into art.

He spent most of his adult life in the Montmartre area of Paris, during a time when it was a center of bohemian and artistic life. He moved there in the early 1880s to study art, meeting van Gogh and Emile Bernard at about this time. After his studies ended in 1887, he started exhibiting in Paris and elsewhere (with one exhibition featuring his works along with van Gogh’s).

He focused on painting Paris on the wild side, including portraits of prostitutes and cabaret dancers. In the late 1880s, the famous cabaret Moulin Rouge opened (it is still in Montmartre to this day), and commissioned Toulouse-Lautrec to do its posters. These brought him to public attention and notoriety, and won him a reserved table at the cabaret, which also displayed his paintings prominently. Many of his best known paintings are of the entertainers he met there, such as Jane Avril and La Goulue (who created the can-can).

By the 1890s, however, his alcoholism was taking its toll on him, as was, apparently, syphilis (not unknown among artists of the time). He died before his 37th birthday, at his parents’ estate. In a brief 20 years, he had created an enormous amount of art — over seven hundred canvases and five thousand drawings.

The movie, directed (and co-written) by John Huston, was a lavish production fairly true to history. It should not be confused with the grotesque 2001 musical of the same name. The cinematography and art direction are superb, showing us scenes of the Moulin Rouge in particular and Paris in general, as captured by the artist. The film won the Oscar for best art direction and best costume design.

The directing and acting are tremendous. (Huston was nominated for best director, and his film for best picture.) Zsa Zsa Gabor is great as Jane Avril, as are Katherine Kath as La Goulue and Claude Nollier as Toulouse-Lautrec’s mother. And Colette Marchand is perfect as Marie Charlet, the prostitute with whom Toulouse-Lautrec becomes involved. She was nominated for an Academy Award as best supporting actress, and won the Golden Globe, for her performance. But most amazing is the work of the lead, José Ferrer, who plays both Toulouse-Lautrec père and fils. Playing Toulouse-Lautrec the artist required Ferrer (always a compelling actor) to stand on his knees. His was a bravura performance, making the artist both admirable and pitiable. Ferrer was nominated for an award as best actor, but unfortunately did not win.

If there is one flaw in the film, it is an unneeded sentimentality, well illustrated by the final scene. With Henri on his deathbed, his father cries to him that he finally appreciates his art, while figures from Henri’s memory bid him goodbye. Huston, one of the greatest directors in the history of film, especially adept at coldly realistic film noir (e.g., The Maltese Falcon), should have toned this down. My suspicion is that the studio wanted something emotionally “epic,” and Huston obliged.

Number two on my list is a small independent flick, Modigliani (2004). The story explores the art scene in Paris just after WWI, with artists such as Pablo Picasso, Amedeo Modigliani, Diego Rivera, Jean Cocteau, Juan Gris, Max Jacob, Chaim Soutine, Henri Matisse, Marie Vorobyev-Stebelska, and Maurice Utrillo living in the Montparnasse district.

We meet Modigliani as he enters the café Rotonde, stepping from tabletop to tabletop as the patrons applaud.

Amedeo Modigliani (1884–1920) was born into a poor Jewish family in Italy. He grew up sickly, contracting tuberculosis when he was 16. He showed interest and talent in art at an early age, and went to art school, first in his hometown of Livorno, then later in Florence and Venice. He was fairly well read, especially in the writings of Nietzsche. He moved to Paris in 1906, settling in Montmartre. Here he met Picasso, and spent a lot of time with Utrillo and Soutine. He also rapidly became an alcoholic and drug addict, especially fond of absinthe and hashish (beloved by many artists then). He adapted rapidly to the bohemian lifestyle, indulging in numerous affairs and spending many wild nights at local bars. Yet he managed to work a lot, sketching constantly. He was influenced by Toulouse-Lautrec and Cézanne but soon developed his own style (including his distinctive figures with very elongated heads). After a brief return home to Italy in 1909 for rest, he came back to Paris, this time moving to Montparnasse. He is said to have had a brief affair with the Russian poetess Anna Akhmatova in 1910, and he worked in sculpture until the outbreak of WWI. He then focused on painting, among other things painting portraits of many of the other artists.

In 1916, he was introduced to a beautiful young art student, Jeanne Hébuterne. They fell in love, and she moved in with him, much to the anger of her parents, who were conservative Catholics, not fond of the fact that their daughter was involved with a poor, drunken, struggling Jewish artist.

And struggle they did. While Modigliani sold a fair number of pieces, the prices he got were very low. He often traded paintings for meals just to survive. In January 1920, Modigliani was found delirious by a neighbor, clutching his pregnant wife. He died from tubercular meningitis, no doubt exacerbated by alcoholism, overwork, and poor nourishment.

His funeral attracted a gathering of artists from Paris’ two centers of art (Montmartre and Montparnasse). Jeanne died two days later by throwing herself out a window at her parents’ house, killing herself and her unborn child. It was only in 1930 that her family allowed her to be reburied by his side. It was all very Nietzschean: brilliant young man does art his way, defies all slave moral conventions, and dies in poverty. The genius is spurned by hoi polloi too addled by slave morality to appreciate the works of the übermensch.

The movie takes place in the pivotal year 1919. We meet Modigliani as he enters the café Rotonde, stepping from tabletop to tabletop as the patrons applaud. He winds up at Picasso’s table, where he kisses Picasso. This bravura entry invites us to focus where we should — on the relationship between these two artists, both important in a new era of art. The relationship is complex. On the one hand, they are obviously friends — and friends of a sort that Aristotle would have approved: their friendship is based on appreciation of each other’s intellectual virtue, their art. But there is a darker side to it: they are also rivals, competitors for the crown of king of the new artists.

Modigliani and Jeanne are struggling to pay rent, and Jeanne’s father has sent their little girl away to a convent to be raised. Modigliani sees a chance to get his child back, and provide for Jeanne. He will enter a work in the annual Paris art competition, one that, so far, he and his circle have scorned. Picasso, feeling challenged, enters also, with other members of the circle joining him. Modigliani knows that the competition will be tough, so by force of will he works to produce a masterpiece. This challenges Picasso as well, and we see the artists vying to see who will win.

But the denouement is tragic. Modigliani finishes, and asks Picasso to take his piece to the show. He goes to City Hall to marry Jeanne. After leaving late, he stops at a bar for a drink. And that foolish act leads to an ending whose bittersweet drama I don’t want to spoil.

The acting is excellent throughout. Hippolyte Girardot is notable as Maurice Utrillo, and Elsa Zylberstein is superb as Jeanne. But the best support is given by Omid Djalili as a smoldering, intense Picasso, who succeeds where Modigliani fails, but understands that his success did not result from greater genius. The amazing Andy Garcia gives a fabulous performance as Modigliani.

The critics panned this movie mercilessly, and it has some flaws — most importantly, you cannot appreciate the film without a knowledge of Modigliani’s biography, because it focuses only on his last year. (It also takes some liberties with the facts.) But the film powerfully conveys a unique friendship and rivalry, and explores an artist’s self-destructiveness. Very instructive, though not the makings of a box office bonanza.

And now — drum roll please! — coming in as number one on my list is a classic that holds up well, more than a generation after its making: Lust for Life (1956).

Like The Agony and the Ecstasy, this movie was based on a best-selling novel by Irving Stone. The director was the brilliant Vincente Minnelli. The film tells the tragic story of the life of Vincent van Gogh (1853–1890), with lavish attention to the man’s magnificent art. It follows the life of van Gogh from his youth, during which he struggled to find a place as a missionary, to his mature years, during which he struggled to find a place as an artist. (The van Gogh family lineage was full of both artists and ministers.) Van Gogh is usually categorized with Gauguin, Cézanne, and Toulouse-Lautrec as the great post-impressionists.

Van Gogh is played by Kirk Douglas, who was primarily known as an action lead — adept at playing the tough-guy outlaw or soldier (helped by his buff physique and chiseled, handsome face). This was a casting gamble, but it paid off, with Douglas giving one of the best performances of his career, if not the best. He was nominated for a Best Actor Oscar and won the Golden Globe for his credible portrayal of the mentally tormented van Gogh.

The supporting acting just doesn’t get any better. Most notable is Anthony Quinn as a young, egoistic, and arrogant Paul Gauguin, who for a brief time was van Gogh’s roommate, but couldn’t handle van Gogh’s emotional intensity and instability. Quinn rightly received the Oscar for best supporting actor. Also excellent is James Donald as van Gogh’s loyal brother Theo.

The story development and dialogue are first rate (the screenwriter Norman Corwin was nominated for an Oscar), as is the art direction (also nominated for an Oscar).

The movie showcases many of van Gogh’s paintings. It also explores the crucial role his brother played in keeping him painting, supporting him financially as well as emotionally. If it were not for Theo van Gogh, the world would likely have never known Vincent. The contrast with Moulin Rouge is stark: Toulouse-Lautrec never got the support of his father until it was too late.

Five films that did not make my list deserve honorable mention. The first is a picture I have reviewed for Liberty (October 2009): Séraphine (2009). It is a wonderfully filmed, historically accurate bioflick of the French “Naïve” painter Séraphine Louis (1864–1942). She was discovered by an art critic, flourished for a brief period after World War I, but with the Depression her career ended, and she was eventually confined to an asylum. The relatively unknown actress Yolande Moreau is simply wonderful in the lead role.

The second honorable mention is Convicts 4 (1962), which tells the true story of artist John Resko. Resko was condemned to death after he robbed and unintentionally killed a pawnshop owner while attempting to steal a stuffed toy as a Christmas gift for his daughter. He was given a reprieve shortly before his scheduled execution, and with the help of some fellow inmates adapted to prison life. In prison, he learned to paint, and came to the notice of art critic Carl Calmer, who fought for — and eventually, with the help of the family of the man Resko killed, won — Resko’s release. Ben Gazzara is outstanding as Resko, and Vincent Price (in real life an art critic and a major art collector) is convincing as Calmer.

Third honorable mention goes to the television movie Georgia O’Keeffe (2009). Again, since I recently reviewed this movie for Liberty (August 2010), I will be brief. The film gives a nice account of one of the first American artists to win international acclaim, Georgia O’Keeffe (1887–1986). It focuses on her most important romantic and professional relationship, the one with Alfred Stieglitz, the famous photographer and art impresario. O’Keeffe (played superlatively by Joan Allen) was hurt by Stieglitz’s philandering, but they remained mutually supportive professionally, even after a painful parting. Jeremy Irons is superb as Stieglitz.

As I noted in my review, at the end of the movie one is left to wonder why Stieglitz was so callous in his treatment of O’Keeffe (flaunting his adultery, and in one scene bragging to O’Keeffe that his new paramour was having his child, after he had earlier angrily dismissed the idea of having children with O’Keeffe herself). Was this merely the blindness of narcissism, or was there an undercurrent of profound envy at his wife’s success as an artist — one greater than his?

Fourth honorable mention is Artemisia (1998), based on the life of Artemisia Gentileschi (1593–1656). She was one of the earliest women painters to win widespread acclaim, being the first woman artist accepted into Florence’s Accademia di Arte del Disegno. The film is gorgeously produced, with first-rate performances by Valentina Cervi as Artemisia and Miki Manojlovic as Agostino Tassi. Its major flaw is its historical inaccuracy, portraying Tassi as Artemisia’s chosen lover, while in fact he was her rapist. If the producers had wished to fictionalize the story, they should have done so, and changed the names. Stretching or selectively omitting history in a bioflick can make sense, but a total inversion of a pivotal event is a major flaw.

The fifth film receiving honorable mention is Basquiat (1996), a good movie about the sad life of Jean-Michel Basquiat (1960–1988), who was one of the earliest African-Americans to become an internationally known artist. He was born in Brooklyn, and despite his aptitude for art and evident intelligence (including fluency in several languages and wide reading in poetry and history), he dropped out of high school. He survived on the street by selling t-shirts and postcards, and got his earliest notice as a graffiti artist using the moniker “SAMO.” In the late 1970s, he was a member of the band Gray. In the early 1980s his paintings began to attract notice, especially when he became part of Andy Warhol’s circle. In the mid-1980s, he became extremely successful, but also got more caught up in drugs, which led to his early demise from a heroin overdose. Jeffrey Wright is superb as Basquiat, as are David Bowie as Andy Warhol and Dennis Hopper as the international art dealer and gallerist Bruno Bischofberger. Also compelling is Gary Oldman as artist Albert Milo, a fictionalized version of the director Julian Schnabel.

Watching a large number of movies about artists over a short period of time can be a recipe for depression, given the amount of tragedy and pain on display. Often this pain was caused by a lack of public and critical recognition or support, leading great painters to experience genuine deprivation and what must have been the torment of self-doubt. Worse, the pain was sometimes self-inflicted or inflicted on others, because of the narcissism or lack of self-control that made such messy lives for so many artists.

But watching these films is intellectually as well as visually rewarding. You see the triumph of creative will over unfavorable conditions and outright opposition — and the beauty that unique individuals have contributed to the world.




Snow White and Mayor Dork

 | 

On Sunday December 26, 2010, the blizzard of 2010 hit the northeastern United States. I, for one, enjoyed watching the snow fall. If we can’t have a white Christmas, a white day-after-Christmas is the next best thing.

But in New York City things were not so merry. Upwards of two feet of snow fell in New York. Clearing the roads after a snowstorm seems a relatively simple challenge, one for which Mayor Michael Bloomberg should have had ample time to prepare. The mayor’s absolute failure reveals him as an absolute incompetent.

For years Bloomberg has opposed libertarian freedoms in New York City, from gun rights to the right to smoke cigarettes in bars. (This was a pet peeve of mine, back when I used to smoke and drink.) But at the very least, he has tended to handle emergencies well — at least, one always saw him on the evening news at the scene of the disaster, once the mess had been cleared up. But not this time.

I spoke with my father two days after the blizzard. He lives in eastern Queens, and he was still snowed in, with the roads outside his house unplowed, the piles of snow too high to get past, and bus and subway lines in his area not running. His fate was shared by most people in Queens and Brooklyn.

I am spending my winter vacation at my mother’s home in southwestern Connecticut, and here I get New York TV news channels, which showed that the city was in a state of devastation. It was reported that the day after the snowstorm it took eight hours for ambulances to respond to 911 calls because of the condition of the roads. The next day, the news said that the mayor blamed his inability to plow the roads on drivers who had irresponsibly abandoned their cars in the middle of the street. TV reporters are consistent in saying that New Yorkers are outraged. The City Council plans to respond to this emergency by… holding a hearing.

What New York City needs is men of action, not windbag politicians. If the city is too incompetent to clear the roads after a snowstorm, it is only because politicians and bureaucrats have no accountability and suffer no monetary loss from the failure of state-owned infrastructure. Needless to say, two feet of snow is not the worst crisis that the city may face in the future. The only way to prevent a future disaster is to stick our hand into our magical bag of libertarian wisdom and pull out an idea whose time has come: privatize the roads.

If the streets of New York City were under private ownership, the owners would make certain that snow removal happened efficiently; if they failed then they would go bankrupt and someone else would buy the roads and operate them to the satisfaction of consumers. One TV news story showed a Brooklyn family with a newborn baby. With an oil truck trapped in piles of snow just a few streets away, their heat had gone out for lack of oil, and ambulances had trouble reaching them. Their baby’s death should weigh on the conscience of every statist who fights against allowing free market competition to improve upon the nightmare of state-owned infrastructure.






Your Recovery Dollars at Work

 | 

About three months ago, a curious sign appeared at one end of my street. It reads, “Putting America to Work. Project Funded by the American Recovery and Reinvestment Act.” It depicts a hard-hat-wearing stick figure digging into a pile of dirt — as if this jaunty cartoon of a “shovel-ready” project would soothe my anger at the wealth confiscation that funds such ridiculous endeavors.

Not much goes on in my small, East Coast rural enclave. The acquisition of “city” sewer service by the nearest two towns was a big deal around here. So the government sign was the talk of our street. There was no explanation of why the sign appeared, no explanation of what project was in the offing. This was strange.

Then, roughly two weeks after the sign was erected, road crews appeared on both ends of our street and started tearing up the asphalt. The re-paving project was completed two weeks later.

Some neighbors speculated that the project was inflicted on us to predispose us to vote Democratic in the upcoming local election. But elections here are the smallest of small potatoes. It wasn't logical that federal funds would be spent to influence local voting. One neighbor speculated that the road was being prepared for a utility development set to occur in the next few years; but another road is slated to be built specifically for that purpose, at the opposite end of the nearest big town. None of us could come up with a reasonable answer. I suppose I could have attended a township meeting to divine the reasons behind this project, but I don’t have the time to waste and it’s highly unlikely that the simple folk, and by that I mean simpletons, who make up the township committee would have a credible answer.

As I said, this is a rural area. Roads need only be passable — pickup trucks and tractors do just fine. Given that my street is only one section of a decently long through road, this paving project does not qualify as a “road to nowhere”; but it is very strange that the project was limited to one section of the road. Even stranger, there was nothing wrong with this part of the road in the first place. Nary a pothole! There is no meaningful difference between the street in its pre-recovery-dollars condition and the street in its post-recovery-dollars state. The road is now black. It used to be gray.

In short, the project was a colossal waste of money. The dollars devoted to it should not have been printed, let alone spent. The workers involved in it did not achieve sustainable employment; they simply received unemployment subsidies by another name. No one was “put to work” in the sense that the designers of the Recovery Act intended the populace to believe.

Increased employment results from increased demand for goods and services. Allowing taxpayers to keep the majority of their dollars is the best option for “Recovery and Reinvestment” in all areas. Greater disposable income spurs demand and mitigates the risk of investing in small ventures. A person can spend his or her own dollars on any number of goods or endeavors that would contribute to sustained economic activity. More dollars in the hands of the citizenry will “put more people to work” than dollars in the hands of government ever will.

The first step to an actual recovery is limiting government spending. How do we achieve this?

We can apply my friend’s sound advice on dealing with young children: give them only very limited options. For example, instead of asking, “Where would you like to go for your birthday dinner?” ask, “For your birthday dinner, would you like to go to Friendly’s or McDonald’s?” Young children are ill-equipped to handle unlimited discretion. Governments are too.

With the country in its present mood, severely limiting government’s spending discretion is an attractive and realizable goal. We already have the set of tools necessary to do this. It’s called the Constitution.




Well, at Least That's Over

 | 

Happy New Year! It gives me pleasure to report that we survived 2010 with fewer devastating hits to the language than we’ve seen in recent years.

If you’re inclined to whine about 2010, please remember “the audacity of hope” and its sad but well-merited fate in the year just past. Of course, there is usually an easy passage from pomposity to farce, but the passage of “audacity of hope” was particularly easy, and particularly gratifying to observe. Every friend of the English language shuddered on election day 2008, expecting that Obama’s stilted, painfully self-conscious phrase would be enshrined forever in America’s pantheon of quotations, alongside “The only thing we have to fear is fear itself,” “Fourscore and seven years ago,” and “Th-th-th-th-th! That’s all, folks!” But now it’s merely a subject for sardonic humor.

I’m sorry, however, that I can’t welcome the new year as ecstatically as Addison DeWitt once greeted the debut of Eve Harrington. I am not available for shouting from the housetops or dancing in the streets. It isn’t simply that a lot of muddy snow remains to be shoveled off America’s pavements; it’s that so much of interest might have been said in 2010, but wasn’t.

In 2010 we experienced comparatively little linguistic terror or catastrophe, but we didn’t experience many linguistic delights, either. Washington — Mordor on the Potomac — was more vulnerable to solemn sneers and glorious jests than it had been for many years, and that’s saying something, but its opponents were seldom equal to the occasion. The most eloquent and resonant sound of opposition was “Don’t touch my junk.” That saying will last, and deserves to last. Its four modest monosyllables combine a trenchant protest against authority with a wry parody of enforced sensitivity: if you nice people won’t let me say “penis” or “testicles,” I’ll just call them “junk”; now how do you like that?

But try to think of some equally generous gift to the language, received from 2010. Tell me if you do. I’ll be interested.

The year did afford its share of linguistic monstrosities. It promoted, for example, the further growth of the Great Blob “We.” You know what I mean. Your nurse says, “How are we doing today, Mr. Johnson?” Your boss says, “I think that we [meaning you] had better get that report out right away.” Yesterday a waitperson asked me (I was dining alone), “And how did we like our salad?” I was tempted to reply, “I don’t know; I haven’t had time to poll the rest of us”; but friends have told me that waiters do sometimes spit in your food, so I took refuge in a haughty silence.

All politicians now use “we” to describe themselves. Newt Gingrich was just obeying this professional ethic when, in December, he was interviewed by Fox News about whether he intended to run for president. He replied that “we” were considering it. This makes me wonder how many people may actually be lurking on my ballot, underneath the name of any single candidate that “we” might vote for. It also reminds me irresistibly of those cartoons in which a three-headed monster keeps talking to itself.

But it was Oprah Winfrey who, in 2010, broke all records for “we.” It happened in an interview with Barbara Walters. Barbara asked Oprah about rumors that she was gay, and Oprah responded, “We have said, ‘We are not gay,’ a number of times.” Well, I have never said that, not even once. Have you? But then we weren’t being given the third degree by Barbara Walters.

Nevertheless, “moving forward,” as politicians often said in 2010: the past year not only failed to come up with any colorful new phrases; it was churlish about using old ones that might still have some value. I was astonished by the neglect of a number of venerable expressions that should have seemed perfectly natural, indeed unavoidable, in the context of the year’s political events. These locutions may never have been star players, but their absence from the team made the game a lot less fun to watch.

While following the controversy over the tax bill, I was shocked to hear not one satirical reference to the fact that Democrats like to “soak the rich.” And amid the outpouring of sympathy for people who have missed their mortgage payments, I heard not one mention of “giving a hand” to “the deserving poor.” “The poor” no longer exist in our national vocabulary. In this respect, the president is fully representative of leaders left, right, and center: he never talks about “the poor”; he talks exclusively about “the middle class,” or at most about “working families.” (I thought that child labor had been outlawed — except on farms, because farm states have two senators each — but I must have been wrong.) No one ever thinks of po’ folks now.

This is disappointing to me, because I grew up around po’ folks, and a lotta folks I know are still po’. I can’t see why they should be omitted from the glossary, but in 2010 even the professional friends of the working man did exactly that. Obama used the word “folks” with fanatical phoniness, but he didn’t call the poor folks “poor.” I suppose that’s because he and his friends had discovered that really poor people don’t vote, and therefore shouldn’t be noticed, and that relatively poor people always insist that they are middle class.

Relatively rich people do that too. Have you ever met an American who referred to himself as “rich”? There’s no point in debating the question of whether to “soak” the rich. They’re linguistically extinct — except when the Democrats want to increase their taxes. Then, as we discovered in 2010, they become the “super-rich” (i.e., people who make more than $250,000 a year).

That is what the Republicans call “class warfare,” a phrase I am heartily sick of, despite its fair degree of accuracy. The reason I regard it as fairly accurate is that Obama’s leading supporters and administrative fixtures are virtually all super-rich themselves — and I’m not talking about people who make only $250K. I doubt that Obama knows anyone who makes as little as that, or has known anyone who makes as little as that during his own past years of political “service.” But some kind of warfare is going on. The most famous remark that Obama made in 2010 was his crack about Republican congressmen holding “hostages” (i.e., refusing, out of principle, to vote for his legislation). That’s war talk, that is.

And it’s interesting: starting in the 1960s, “right wing” people were violently attacked by college professors and other kindly, mild-mannered folk for “militarizing” the language — you know, insisting on prosecuting a “cold war” against an “evil empire,” and calling communists “traitors” when they were merely plotting to set up a Stalinist dictatorship. The attack revived after 9/11, when a concerted attempt was made to ban the word “evil” as an aggressive, contemptuous piece of hate speech, reminiscent of . . . er . . . uh . . . Nazis or something. (Gosh, I almost said “radical Islamicists.”) But who are the war-speakers now? Who claims to be besieged, subverted, held hostage by today’s forces of evil? Why, it’s our pacifist president and his friends, that’s who.

The truth is less ideological and more rhetorical. Obama was desperate when he made that statement. He would have said anything if he’d thought it would help. To rescue his political career, he needed to make a deal with the Republicans, but he also needed to conciliate the many members of his party who hate Republicans. He decided that the best way to do it was to show that he, too, hated Republicans. That wasn’t hard, because it was true. He does hate them. So he charged that the Republicans had, in effect, manned up (another ridiculous 2010 expression) and were negotiating with him at the point of a legislative gun. Oh, the humanity! But he had to go along with them, for the sake of the republic.

If you can’t see through this stuff, you’re even more naïve than the New York Times.

But speaking of naïve journalism, this is the time for Word Watch to make its fearless forecast for 2011. Here goes.

During 2011, I envision a more complex linguistic situation than prevailed in 2010. I predict that the nation will be annoyed and harassed, not just by the usual guff, but by three rival political dialects.

1. Conservaspeak

This is a language in which I am well educated, a language that has come pretty naturally to me since I stopped being a leftist several generations ago; but I have to concede that it’s lacking in charm. The Republican leadership, which is not very charming to begin with, will speak continually of “balancing the budget,” “ensuring fiscal responsibility,” “setting the nation’s house in order,” “getting America back to work,” and so on and so on. Sound words, if sincerely spoken — which ordinarily they won’t be. But don’t go to John Boehner or Mitch McConnell for inspiring words. They’re too busy running across the fields, with the Tea Party chasing after them.

2. Progressish

Until 2010, “progressives” were old fogies who believed in everything that appeared in the Socialist Party platform of 1912. They went down to the community center on Friday night and listened to speakers (whom no one but other speakers had ever heard of) explain how Big Oil runs the government and will stop at nothing until it poisons the earth and destroys all its people. Outside of that, they had no life. They all voted enthusiastically for Obama but were then horrified to discover that he wasn’t prepared to outlaw capitalism the very next day. One or two of these advanced thinkers happened to be billionaires and thus managed to get themselves taken semi-seriously, so long as they doled out cash; but that was it.

Then came 2010, and by the time it was over, the most leftward people in the Democratic Party had all declared themselves “progressives” out of frustration with Obama. For one thing, he was a total loser. For another thing, they wouldn’t admit to themselves that the specific reason he had lost the November election was that he had followed their advice and “doubled down” on his least popular policy initiatives. To differentiate their wing of the party from the die-hard Obamaites, they needed their own special word for themselves — and lo! “progressive” was found and seized upon. Suddenly, like some animal species that was thought to be extinct until it blundered into a neighborhood where the garbage wasn’t always picked up on time, “progressives” propagated themselves everywhere. Congress and the old-fashioned media filled up with them, overnight.

The current “progressive” ideology isn’t much worth talking about; it consists largely of the idea that government should always expand exponentially, which it would be doing if the president would only ignore the wishes of nine-tenths of the American populace. The progressish dialect isn’t much fun, either; but it will be very prominent in 2011. Expect to hear much more about “empowerment,” “workers’ rights,” “corporate control,” “masters of war,” “the military-industrial complex,” and other standard shibboleths of the distant Left, as leftists try to hold Obama’s renomination hostage in the temple of their idolatries.

3. Obamablab

This is the worst one.

Obama’s popularity ratings have been in the swamp since mid-2009. His amateurish performance as president resulted in his opponents’ overwhelming victory in the election of 2010. Since that election, his biggest accomplishment has been rounding up enough Democrats to vote for the continuation of the Republican tax cuts he had campaigned against.

Strangely, in response to his questionable achievements a chorus of cheers is now being heard from the loftiest heights of the established media — cheers rendered in a barbaric, virtually untranslatable tongue, full of terms that have no plausible equivalent in normal English. Thus, Obama is complimented for his “thoughtful,” even “deeply intellectual and reflective” leadership, for his “moderation,” his “conciliatory approach,” and his “reaching-across-the-aisle method of government.” He is said to have “emerged victorious” and to have “surprised the pundits” as he “turned the corner” on his “struggle to lead America out of its financial doldrums.” Obama is, in short, “the comeback kid.”

Only an expert on mental illness could comprehend what all this means, but its chief characteristic is clearly its gross dishonesty. If Obama came back, where did he come back from? From his dismally low popularity? From the 9.6% unemployment fostered and protected by his economic policies? From the total disarray of his own party? From any other conditions that were just as evident on November 2 as they are today?

The president is not a kid, and the only way in which he has come back is by means of this hideously contrived and shopworn language. We’ve had a comeback kid before: his name was Bill Clinton. And there has never been a moment when modern liberals were not relabeled, when necessary, as conciliatory “moderates” of a “bipartisan spirit,” “pragmatists” who “govern from the center,” etc. Some years ago, the New York Times declared, in a lead editorial, that Walter Mondale was “a man of principle, who has always had the courage to compromise.”

To conclude. These three dialects are the linguistic survivors of 2010. We’ll have to put up with them. But I can think of a good thing about last year: it appears to have jettisoned one considerable wad of smarm: “transparency.” We used to hear a lot about the cellophane-like “transparency” of the Obama administration. Now it appears, what with the healthcare deals and the taxation deals and the stimulus deals and the immigration deals and the security deals and all the other kinds of deals, that “the era of transparency” was over before it started, banished by the era of obvious lies. And that’s the real comeback kid.




The Simple Life

 | 

Remember calculators? How simple. Even my threescore-and-ten-year-old brain could use a calculator without the benefit of a 12-year-old associate offering advice from the sidelines. Naturally, this was B.C. (Before Computers). Then the computer came along, and with much difficulty — much cursing, much advice from mocking 12-year-olds who had found an activity they loved besides obnoxiousness and noisemaking — my stressed brain learned to operate the device. Or so I thought.

Then “they,” the strange pointy-headed people who lived in the woods and emerged to design software, somehow discovered that even I could use 30% of the functions on the computer. No good. They changed it.

Why, oh why, are they obsessed with change? No sooner do I learn X than they change it to Y.

Highly intelligent but aged minds hate change. “Leave it alone,” says the home page of my 15-year-old Mac, to those people who live in the woods.

It all reminds me of the mania to modify a product just to make it different — to stimulate sales, not efficiency. “Hey look, I’ve got the new whatchamacallit — newest model, makes popcorn, too. Bet your iPad or Raspberry can’t make popcorn.”

Thank goodness, for the moment, we still live in a capitalist society. Companies like profits, and change is often the engine of profit. That’s OK, just give me a choice. If I don’t need to track the number of passengers with green shirts flying out of Kennedy, don’t build it into the “M” key on my keyboard. And don’t ring bells and flash green naked women on my screen so I remember to upgrade to this bizarre requirement.

Because of those technical wood nymphs, change becomes religious. It doesn’t always bring improvement, but it does always bring complication. There ought to be two streams of development. The first would be like your car. You bought a 2010 Ford; it remains a 2010 Ford. The accelerator never moves from its floorboard position. The instrument panel still indicates miles per hour, not feet per second. My kind of device. The second would be a test of your mental flexibility. Here, everything changes. The accelerator is now the brake. This is for users who like puzzles and are intrigued by how the device operates, not by what it does.

But in the computer world, even if you stick with the same computer, it’s always bugging you to update this or that. And it has clever little tricks. While you’re playing tennis, it swaps out your operating system so you have to call that smart aleck 12-year-old just to send an email. This is a world that worships change — for better or worse.

My pet remembrance of the “fix it even if it ain’t broke” philosophy is the battery-powered watch. Yep, I’m convinced that’s when it all started — a pivotal date in the history of uselessness. Now, I’m not a watchmaker, but batteries cost money and add an item to your “to do” list. And I swear they’re dying sooner and sooner. How long will it be before it’s a daily ritual? And few stores will change a battery.

How hard was it in the old days to give that little stem a few twists? Free twists, I might add. Think about it.

Gotta go now — my computer is groaning, which means that if I don’t install the popcorn app, it’ll erase my files of all stories that contain the word “popcorn.”



