A Visit to Noah’s Ark


The tourist season is almost over, but I’m making plans. I’m also thinking about last year’s acts of tourism. I’m remembering the sunny day in September when I visited Noah’s Ark.

The Ark is the central feature of a sort-of-theme-park called Ark Encounter, in Grant County, Kentucky. It’s a wooden structure — possibly the largest wooden structure on earth — built to the dimensions prescribed in the sixth chapter of Genesis. There aren’t any live animals inside (at least I can’t remember any); they’re in the zoo next door. But there are full-scale models of animals in various kinds of enclosures. There are also models of Noah and his family, going about their lives on the Ark: caring for the animals, fixing meals for themselves, relaxing in their comfortable onboard cabins. Ramps lead from level to level, where one finds “scientific” exhibits, restrooms, and two theaters with continuous showings of movies. In the first theater, Noah is interviewed by a skeptical antediluvian reporter and explains how and why you would build an ark. In the second theater, a 21st-century ark advocate is interviewed by a reporter who is (I think) played by the same actress who played the ancient one. She also is skeptical and needs to be converted to the idea that the biblical account is literally true. I assume the conversion happens, although I left before the movie was over. Her snarky postmodern attitude was less congenial to me than the religious credulity of the rest of the Ark.

But “credulity” isn’t exactly the right word. For me, a charming aspect of the place was the scores of exhibits providing ingenious answers both to obvious questions and to questions that, I’m embarrassed to say, had never occurred to me.

  • How did all those animals fit into the Ark? Well, they didn’t represent species; they represented “kinds,” which are fewer and are capable of developing (not evolving) into more than one species.
  • How did all those really big animals fit inside? Well, Noah probably took the young, small ones. I hadn’t thought of that.
  • How could you carry food to all those animals? You could use lots of pulleys and dumbwaiters.
  • How could you remove all the dung from those animals? You could use lots of pulleys and dumbwaiters.
  • How could a family of eight take care of thousands of animals? It’s not too hard, when you figure how much work a normal man or woman can do in X number of hours . . . .

The continuous display of cleverness delighted me. It went a long way toward illustrating Chesterton’s observation that the last thing a crazy person has left is his logic. But the builders of the Ark aren’t crazy; their ideas are just naïve and innocuous, and the Ark lets you see how far naiveté and innocuousness can get you in America, and how much charm you can gather along the way.

The Arkists optimistically predicted that they would be visited by 2.4 million people during their first full season, which was 2017. When I visited, they’d gotten only about 1.5 million, maybe, and it was late in the season. I was concerned that their great enterprise might have a short life, despite a (to me) very regrettable but somewhat shaky subsidy from a neighboring town. But there’s a wall inside the Ark that shows the names of people who have contributed various amounts for its construction, and it’s a very long wall. The Ark came to rest within easy driving distance of Louisville, Lexington, Dayton, and Cincinnati, and that’s a church belt. Visitors to the Ark whom I saw were very “diverse” — whites, blacks, Asians, beards, bikers, families of nine. The only solo visitor was me. So the audience is large, and just when I was thinking that a lot more people could be packed into the Ark, I went to the restaurant outside, and there were hundreds more of them in there. More than in the Ark itself. They may not be museumgoers, but they are sure as hell good eaters.

I hope they eat their way to heaven. Their idea of Christianity isn’t mine, but their spirit of voluntarism enchants me. You want to build a giant ark? You want to make it pay? I’m with you — see if you can. And this is an American thing; you can’t imagine it happening in France. Maybe I’ll visit again this year.

My pilgrimage to the Ark last year began with a visit to my ancestral homeland, a county in Southern Illinois where my family has lived since 1816. I myself have never lived there; my parents left before I was born. But I’m related to all the old families, and I like to see what’s going on. In the early 1890s my father’s father built a house on the main street of one of the county’s little towns. That house passed out of the family a few years ago, after the death of my beloved aunt, the last of my grandparents’ eight children. Next to her house are (going south on Main Street) two other big old houses and then the Methodist church, where my grandparents taught Sunday school. The church seems to be doing all right, despite its fluctuating congregation, but much of the rest of Main Street has been torn down, hideously altered, or left derelict. The town’s population has been declining since 1910, and the working population has been declining still more disastrously. The old families, who were poor by the world’s standards (my grandparents never owned a car), are being replaced by people on welfare, many of whom have no standards. I’m sorry to say that, but it’s true. If you want to see used-up sofas stashed in the yard, I can show you where to go.

Whenever I visit, I brace myself for some more sad social and architectural news, especially about those two houses next to my grandparents’ place. They’ve been empty for years, and before that they were subjected to destructive attempts to “modernize.” If you’re brave enough to step onto the sagging wooden porches and look in the windows, what you see is broken glass, naked lath, once-friendly rooms returning to a state of unfriendly nature.

But this time, I saw a truck out back, and a man walking toward me: “Can I help you?” I explained myself, we shook hands, and I learned that this man was there to help the houses. A 50ish gentleman from an even smaller town about ten miles up the road, he had purchased both properties from the bank (or some other entity on which possession had devolved), because he liked them and wanted to restore them. More important, he had the skills to restore them. He had learned those skills decades ago, when the local high school actually taught students how to do things. It offered courses — excellent courses — in all the construction trades. Every year, students built a house from scratch, and sold it. If anybody can do something for old family homes, a graduate of those courses can do it.

I don’t know whether this man will succeed. I don’t know whether the Ark Encounter will succeed. Both seem romantic and quixotic to me. Nothing could be more different from America’s Towers of Tech or its Mordor of urban “housing” than these vernacular architectural enterprises. They are the creations of individuals, not of the state or the lackeys of the state.

I live in coastal California, and I’m often surprised to discover that no one here ever goes to the Midwest, the real Midwest, or any portion of California that isn’t built of concrete and steel. I know I could say something similar about the travel habits of people from New York or Boston or Washington, or even Chicago. But the Midwest I’m thinking about has nothing to do with physical geography. It has to do with the geography of the mind. There are places in the mind where everything that is done has to be done by some enormous, statelike thing. And there are places in the mind where individual people still do things, because they want to. Those places I call America.




The Return of Malthusian Equilibrium


After the departure of Europeans from their colonies following the end of World War II, the Third World rapidly became tyrannical, and its economies began a long decline. The institutional collapse of the Third World has continued over all these years, except that in the past two decades, from an extremely low base, its economies have improved. This economic growth did not happen because the Third World liberalized its economies or adopted any fundamental cultural change in its societies. What enabled synchronous economic progress over the past two decades in the Third World was the internet and the emergence of China.

Cheap telephony and the internet came into existence in the late ’80s. The internet provided pipelines for the transfer of technology and enabled wage arbitrage to be exploited. Also, many countries — particularly in Latin America and sub-Saharan Africa — benefited from the export of resources to gluttonous-for-resources China, the only emerging market I know of, and to the developed world, which, contrary to propaganda, is economically still by far the fastest-growing part of the world.

It is hard to believe, but many countries in the Middle East and North Africa peaked economically in the 1970s. Their competitive advantage was oil, not human resources. The per capita real GDPs of Saudi Arabia and the UAE, despite the fact that they have had a relatively peaceful existence, are about half as large as they were in the ’70s. The situation is similar in Venezuela and to a large extent in Nigeria. Except for the personal use of cellphones, the information technology revolution has simply bypassed these and many other countries.

According to the propaganda — steeped in political correctness — of the international organizations, all the fastest growing economies are in the Third World. But simple primary school mathematics helps cut through this propaganda. Ethiopia is claimed to be among the fastest growing large economies. This is quite a lie. An 8.5% growth rate of Ethiopia on GDP per capita of US$846 translates into growth of a mere US$72 per capita per year. The US economy, with GDP per capita of US$62,152, is 73 times larger, and despite its growth at a seemingly low rate of 2.2%, it currently adds US$1,367 to its per capita GDP — 19 times more than Ethiopia. The situation looks even more unfavorable for Ethiopia if its population explosion of 2.5% per year is considered.
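For readers who want to check that arithmetic, here is a minimal sketch in Python using only the figures quoted above, rounded as in the text:

```python
# Sanity check of the growth arithmetic above, using the figures quoted in the text.
ethiopia_gdp_pc = 846        # GDP per capita, US$
ethiopia_growth = 0.085      # 8.5% annual growth
us_gdp_pc = 62_152           # GDP per capita, US$
us_growth = 0.022            # 2.2% annual growth

ethiopia_gain = ethiopia_gdp_pc * ethiopia_growth   # dollars added per person per year
us_gain = us_gdp_pc * us_growth

print(round(ethiopia_gain))                # 72   -> "a mere US$72 per capita per year"
print(round(us_gain))                      # 1367 -> "adds US$1,367 to its per capita GDP"
print(round(us_gdp_pc / ethiopia_gdp_pc))  # 73   -> "73 times larger"
print(round(us_gain / ethiopia_gain))      # 19   -> "19 times more than Ethiopia"
```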

Cherry-picking countries of subsistence farmers and cattle-herders for propaganda purposes tells you nothing about the sustainability of their growth, and certainly does not in any way enable comparison with the developed world.

The developed world is growing much, much faster than the Third World. The only exception is China.

Over the past two decades, the low-hanging fruit of economic growth has been plucked in the Third World. South Asia, Southeast Asia, West Asia, Africa, and Latin America are now starting to stagnate. As the tide of the economic growth rate recedes, institutional collapse will become more visible. It will be seen on the streets as civic strife. What is happening in Venezuela, Syria, Turkey, Nicaragua, Honduras, Pakistan, Congo, and South Africa — where institutions are collapsing, the social fabric is falling apart, and tyranny is raising its ugly head — is not a set of isolated events but part of the evolving Third World pattern. Once its institutions have been destroyed, there will be no going back. They simply cannot be rebuilt.

On a simplistic organizational chart, institutions in the Third World may look the same as they did when the European colonizers departed, but they now operate without reliance on the rule of law, respect for individual rights, or a rational approach to problem solving — all foundational concepts propagated by the West. They have been swamped by tribalism, magical thinking, and arbitrary dogmas and rituals.

Without the foundation of rational, critical thinking, formal education merely burdens the mind. The result is that stress among the so-called educated people in the Third World is growing, and no wonder: formal education, unassimilated, can work only in narrow areas, where all you want is cogs that can do repetitive jobs in corner cubicles, without encouragement or reward for creativity. This is not a future-oriented environment; it is a merely pleasure-centric one, in which people become easy victims of cultural Marxism. Democratic politics devolved into the politics of anti-meritocratic mass rule, destroying any institutions of true self-government.

During my recent visit to Port Moresby in Papua New Guinea, a young Western girl working for a Western embassy told me that she once went out without her security force. The police stopped her car, and she was fortunate that her security arrived before the police could take her away. The negotiation between police and security was about how much it would take not to rape her. Rape is common in Papua New Guinea, as it is in the rest of the Third World; but because this was a girl working for the embassy, rapists would have had their bones broken the day after. But the day after was “too far in the future” to be of much concern.

When one looks at the world map, one realizes that all colonized countries were created in European boardrooms. There was no country of South Africa, Zimbabwe, Congo, or even India before the arrival of Europeans. The people who now run these countries simply do not have the ability or impetus to manage such large societies. They have tribal mentalities, unable to process information outside the visible space. The rulers of modern tribes continuously increase the size of their bureaucracies, but this merely creates overcentralization, the ossification of institutions, and massive, though unseen, systemic risks. Of course, tribalism is irrational, and internecine rivalry a fact of existence that is experienced only on a moment-to-moment basis.

Before the arrival of the Europeans, most of sub-Saharan Africa had no written language and few tools, contrary to the popular perception of a pre-colonial utopia. Warfare was the order of the day. Eating the flesh and brains of an enemy killed in conflict was practiced from Papua New Guinea to Africa to the Americas. Cannibalism is not unknown even today. Contrary to politically correct versions of history, 19th-century colonization was a massive, sudden improvement for many colonized peoples, and a paradigm-shifting event for the Third World.

Europeans of the 1940s clearly knew that if they left the Third World, entropy would rapidly ensue, the locals would fail to run their countries, and those countries would implode into tribal units. These wouldn’t be self-managed societies that libertarians dream of, but tribal ones afflicted with internecine warfare. That is indeed where the Third World is heading, and much of it has arrived.

Without institutions of liberty and protection of private property, financial and intellectual capital does not accumulate. Indeed, the Third World actively destroys or dissipates any material benefits that accrue to it. This happens through war, overconsumption, expansion of the exploiting (ordinarily the governing) class, and the active destruction of capital that one sees in the crime, vandalism, rioting, and other means of destroying property that characterize the Third World. Despite their extreme possessiveness, people who destroy the capital of other people fail to maintain their own. In many Third World cities, when there is a day of celebration it is easy to assume that it is the day when employees got their salaries — which disappear by the next morning, drunk away. Capital fails to be protected or accumulated; the rational structure of a productive, thrifty, and prudent culture is not there.

While people in the West are blamed for being materialistic, Third World people are often much more focused on their possessions. The great fleet of servants in India, who are destined to forever remain servants, may earn a mere $100 or less a month, but must have the latest smartphone. For me it is impossible to comprehend how they pay their rent, buy food, and still have some money left to buy a phone; but I remind myself that actually they take loans to buy smartphones and are forever in debt.

And now — the population problem is becoming worse.

Consider Africa alone. Africa’s population in 1950 represented a mere 10% of the world population. By the end of this century Africa, the poorest continent, is predicted to have at least 40% of the world’s people. Africa’s population is growing at a faster rate now than it was in 1950. Because that rate now applies to a much higher base, Africa adds six times more people each year than it did in 1950.

More important: in the Third World countries, population control has mostly happened within the relatively more educated, intellectually sophisticated part of society. In Northern India, to cite another example, the unstable, uneducated, chaotic, and backward part of the population is exploding in size. Southern India, which is relatively stable and better off, is falling in population.

With ease of mobility, segregation is picking up its pace. The economically best people of the Third World find it much easier to emigrate than to stay home and fight to make society better, or maintain it in its current state. In 2017 alone, 12% of Turkish millionaires and 16% of Venezuelan millionaires emigrated. So great has been the emigration from India that it is virtually impossible to find a decent plumber or electrician. Forget about finding a good doctor. In a survey, only 30% of Indian doctors could diagnose a simple ailment. Everywhere educated people move to cities, while the rest stay on in rural places. Segregation is real, leaving the underclass with a huge deficit in leaders.

There is also segregation by sector of the economy. As the private sector has evolved in the Third World, government institutions have increasingly become brain-dead, for the best brains now want to work for high salaries in the private sector, leaving state power in the hands of the worst brains. Naturally, people have become very stressed and unsure. As an emotional escape, superstitious rituals and religious nationalism are increasing exponentially, contributing to the elevation of exploitive, sociopathic elements to positions of power.

It is possible that some parts of the Third World simply cannot be “governed.” A couple of years back I undertook what I consider the most dangerous trip of my life. I went to Goma in the Democratic Republic of Congo (DRC) on my own. Even for DRC, Goma is a lawless part. The Swedish police I was staying with told me one day that a pregnant woman had been raped, her fetus removed, cooked, and fed to other women of the tribe, who had all been raped. Listening to the stories of celebration of such brutalities in the Congo and elsewhere in Africa, I couldn’t but imagine what I would do if I were forced to run the DRC. I couldn’t imagine ever being able to bring it back to relative sanity without imposing the tyranny — for fear is the only restraint available in the absence of reason — for which Leopold II of Belgium is infamous.

This brings us to the terrible predicament of the Third World. Except for China, the countries of the Third World have failed to develop inner competencies and hence internal reasons to accumulate financial and intellectual capital. They have failed to maintain their institutions, which have continued to decay after the departure of European colonizers. The crumbs of economic benefits — the gifts of western technology — have been dissipated. What can be done? How would you deal with the predicament?

There is no hope unless the vast size of the underclass, who are statistically unable to participate economically, particularly in the age of AI, is reduced. Perhaps payments to people for having children should stop; instead, people should get money not to have children. Even this first step can happen only if Third World institutions are changed and rational leaders are imposed. But who will impose them?

The end result is obvious. With time — slowly and then rapidly — the Third World will continue to fall apart institutionally. The Third World will implode. This two-thirds of the world population will fall into tribes that, being irrational, will have no way to resolve disputes. They will enter a phase of never-ending warfare, with other tribes and within their own tribes. If there is any surplus left, it will be dissipated through population growth and overconsumption. Ahead there is only entropy and a Malthusian future, mimicking the sad Malthusian equilibrium that existed before the colonizers came.




President Corleone


In late November 2016, less than a month after Donald Trump’s unexpected victory, President Obama was in Peru for the Asia-Pacific Economic Cooperation (APEC) summit. Riding in the back of the US presidential limousine with a few of his closest aides, he turned to his longtime advisor, Ben Rhodes, and said, “I feel like Michael Corleone. I almost got out.”

This struck me as an odd thing for the president to say.

The anecdote comes from Rhodes’ new book, The World as It Is, which I have not yet read; I found it in Peter Baker’s review of the book in The New York Times.

In the following, I will explain why I thought it odd and then mull over why he said it. The purpose of the exercise is to amuse.

* * *

At first glance, Michael Corleone and Barack Obama would seem to have little in common. To begin with, one is fictitious, the other is not. More to the point, the life experiences of Corleone seem to bear little resemblance to those of Obama.

Michael Corleone, as every film buff knows, was not keen to join the Mafia. In his mid-20s, however, he murdered both the drug kingpin and the NYPD captain who had tried to kill his father, Don Vito Corleone, and, badabing, he was in.

A few years later, when he became the head of the Corleone crime family, he orchestrated the murders of all his family’s rivals in New York City. Francis Ford Coppola’s masterful baptism montage in The Godfather tells the tale. Then, for decades, Michael Corleone controlled the bribery, blackmail, extortion, and murder that are the Mafia’s bread and butter. He was cold, cunning, and absolutely ruthless. He even had his brother Fredo murdered.

The scene Obama referenced in his comment to Rhodes is in the final film of the series, The Godfather, Part III. In it, Michael, who had been trying to extricate himself and his immediate family from the world of organized crime by transferring his ill-gotten gains from the rackets to legitimate businesses, has just survived a machine-gun attack from a helicopter arranged by Joey Zaza, whom he had personally chosen to take over the Corleone family’s criminal interests. Michael, now about 60 years old and in ill health, stands in his kitchen and wails, “Every time I try to get out . . . they pull me back in.”

The rest of the movie is a series of betrayals, counter-betrayals, and murders. Michael has his sister poison a rival don. Michael’s daughter is shot to death. Even the Pope gets whacked. The trail of corpses only ends when, much later, Michael, broken, forgotten, and alone, falls off his chair, dead.

Now, it is pretty clear what Michael Corleone meant by his comment. He was trying to morph from a shady mafioso into a legitimate businessman, but his criminal past had created underworld entanglements so deeply rooted, so strong, that try as he might, he was never able to break free.

But what did President Obama mean? In what sense did he identify with this tragic figure, Michael Corleone?

President Obama is fit, rich, and relatively young, with a loving wife and family. He can choose from among the endless opportunities available to former presidents, or choose to do nothing at all. He can stay out of the political arena and Washington forever, if he wants to. Hollywood would welcome him. In fact, it already has.

He stepped down from the presidency with his head high, unbowed by scandal. He has no haunting spectre trailing him, no litany of sins hanging over his head. There is no Watergate, no Teheran Hostages, no Iran-Contra, no Monica Lewinsky, no missing WMDs, no Special Counsel to dog his footsteps for the rest of his days. There is no helicopter circling. In fact, some argue that his was an untainted, if not exemplary, presidency. Some even say that his has been a charmed life.

Likening his disappointment with the 2016 election results to Michael Corleone’s torment brings to mind the little boy whose ice cream falls from the cone and splats on the sidewalk. The boy looks at the sky and says, “Why me, God?” OK, that probably goes a little too far, but you get the point.

The remark seems odder because there was a more apt comparison much closer at hand.

In the runup to the election in November of 2000, Bill Clinton’s hand-picked successor, Al Gore, was thought by many to be the favorite. But while Gore won the popular vote, he lost in the Electoral College, some say because of an unfair assist by the Supreme Court. As a result, Bill Clinton had to give the keys to the White House not to his chosen successor but to George W. Bush, who opposed his policies in many areas, among them: taxes, gay rights, energy, abortion, education, the environment, and foreign affairs.

Before the 2016 election, Barack Obama’s chosen successor, Hillary Clinton, was the clear favorite. But while Clinton won the popular vote, she lost in the Electoral College, some say because of Russian help. As a result, Barack Obama had to give the keys to the White House to Donald Trump, who opposed his policies in many areas, among them: taxes, immigrant rights, energy, women’s health, education, the environment, and foreign affairs.

Now, had President Obama said to Rhodes, “I feel like Bill Clinton must have felt when Bush beat Gore,” it would have made perfect sense. True, the bit about “almost getting out” doesn’t quite fit here, in that Bill Clinton really did get out, his wife’s career ambitions and the occasional tarmac meeting notwithstanding. Still, the circumstances are remarkably similar.

But when Obama sought to explain himself to Rhodes, what popped into his mind was not the face of the charming former president whose liberal, if triangulated, legacy had suddenly been put in jeopardy by a more conservative successor. No. When he gazed deeply into the mirror of his consciousness what he saw staring back at him was the tortured face of Michael Corleone.

Go figure.

* * *

While the above should help clarify why I found the president’s comment odd, it does not explain why he made it. Three possible explanations follow.

Peter Baker suggested the first possibility in the NYT review of Rhodes’ book. Here’s the complete line that includes the comment: “In handing over power to someone determined to tear down all he had accomplished, Mr. Obama alluded to The Godfather mafia movie, ‘I feel like Michael Corleone. I almost got out.’”

But in The Godfather, Michael was handing over power to Joey Zaza, his chosen successor. Joey wasn’t trying to tear down anything the Corleone family had built; he just wanted it all for himself, and Michael dead. That’s why Michael couldn’t get out. Am I missing something here? Hillary Clinton was Obama’s hand-picked successor. Is she supposed to be out to get him? Is Donald Trump or some other rival that I’m unaware of trying to keep President Obama from “getting out” of politics? Is there some opponent who’s trying either to assassinate him or to “pull him back” into the political arena? No. This explanation of Obama’s comment just isn’t working.

More importantly, is Baker suggesting that President Obama was equating his own life’s work, fostering peace, justice, and sustainability, with Michael Corleone’s, committing bribery, blackmail, extortion, and murder? That doesn’t sound like the kind of analogy that President Obama would encourage, not if he’s proud of his accomplishments. It certainly wouldn’t do much to burnish his legacy. No, Baker’s explanation just doesn’t fit. It lacks verisimilitude.

The second possibility is hypothetical. Given that bending the arc of the moral universe can be very hard work, let’s say that President Obama sometimes resorted to means that ever so slightly trimmed ethical or legal corners in order to achieve the precise curvature that the moral universe seemed to call for at the moment. By employing this hypothetical, we may be able to find a context in which the words that the president uttered in the back of “the Beast” that day in Lima make sense.

Let’s say that President Obama quietly approved the fix of Hillary Clinton’s illegal handling of classified documents, and her ham-handed attempt to cover it up in order to keep her candidacy alive. Let’s say that he put the desired end, a Democratic successor, on one side of the scale and the means proposed to achieve that end, a political decision not to indict, on the other side, and decided that the greater good would be served by putting the fix in, cut corners and all. When, in spite of the fix, the public’s confidence in Hillary Clinton’s trustworthiness plummeted, let’s say that President Obama became more eager than ever that his successor be a fellow Democrat. Let’s say that he approved of an effort to discredit Donald Trump by, among other means, using the fishy DNC-funded Steele dossier to manipulate a judge into allowing surveillance of the Trump campaign. Let’s say that when Donald Trump won the election despite this effort to derail his candidacy, the president was concerned.

Let us now imagine how President Obama’s comment might sound in this hypothetical scenario.

A few weeks after the election, President Obama, wearing an immaculately tailored dark suit, was riding in the back of his armored black Cadillac Escalade with a few of his closest aides. He was looking through the five-inch-thick bulletproof window. He knew that in order to get Hillary Clinton off the hook and to put Donald Trump on it he had done things worse than the Watergate break-in. He also knew that, at that very moment, the effort to conceal those deeds was growing a web of semi-transparent lies that was threatening to ensnare him.

If only Hillary Clinton had won, as everyone had expected, he could have ridden the wave that had elected him twice all the way to the beach. He could have stepped off the board directly onto the sand, a free man. The new president would have had his back and her administration would have been composed of the very people who had helped him to put her in office. He would have been out, scot-free.

He closed his eyes and pressed his right temple to the glass. He realized that he was in a war. He would have to fight or he would end up like Nixon, disgraced. Sitting next to him was his long-time advisor, Ben Rhodes. The president turned to him, sighed, and said, “I feel like Michael Corleone. I almost got out.”

The third possibility is not as illogical as the first or as far-fetched as the second. It is this: the president was joking.

Frankly, this is my favorite explanation, in part because it is the least disheartening. No one wants to think ill of the president, do they? And all of that abusing of presidential power for personal gain and self-preservation in the second explanation would make the president seem so grubby, so small. No one wants to believe that possible. People want to think the best of the president, not the worst. Right? I mean, only Vladimir Putin would want the American people to think of their president as a Mafia don.

OK, then. So no one laughed. Maybe Ben Rhodes didn’t get the joke. That’s OK. Apparently, Peter Baker didn’t get it either. But I suspect that if President Obama were asked about it, and he were being perfectly honest, he would admit that he had just been trying to be funny.

Let’s just say.

“Politicians have always lied, but it used to be if you caught them lying they’d be like, ‘Oh man.’ Now they just keep on lying.” — Barack Obama, Nelson Mandela Annual Lecture, July 17, 2018




A FreedomFest Report


FreedomFest, Las Vegas, July 2018: Fewer breakout sessions. Shorter hours. Only one special-event luncheon. What’s going on at FreedomFest? Are we losing it?

Actually, it’s quite the opposite. Too much choice can be daunting. As first-timer Walter Block of the Mises Institute and Loyola University told us, “I attended FreedomFest for the first time in 2018. It was a magnificent experience. Rarely have so many lovers of liberty gathered under one roof. The only ‘problem’ I had with the event was the concurrent sessions. I wanted to attend ALL of them!”

History professor Barry Strauss of Cornell University concurred, saying, “FreedomFest was one of the few conferences that I’ve attended in my professional career of which I could say, ‘I only wish that I could have attended more sessions.’ From start to finish, it was an inspiration.” Imagine the frustration of previous years, when we offered 30% more sessions from which to choose!

Sometimes “less” really is “more.” When presentations are tightened, only the best remain. That’s what we decided to do at FreedomFest this year, reducing the number of concurrent breakout sessions from 13 to 10 and ending each day at 6:30 instead of 8.

We wanted this year’s event to involve our attendees more directly — not just sitting in chairs listening to speakers, but participating actively in the discussion. So we lengthened our Q&A times, reduced the number of breakout sessions, created a scavenger hunt that brought attendees more actively into the exhibit hall, and added “conversation circles” in the evenings where attendees and speakers could discuss thematic topics. We expanded our “FreedomFest after Dark” activities with Karaoke led by “Lady of Liberty” Avens O’Brien and clubbing at a local night spot. The result was a more vibrant, engaged experience for everyone.

Of course, not everything was brand new. Perennial favorite Judge Napolitano was back, reporting on the Constitution and the significance of President Trump’s choice of Brett Kavanaugh to replace retiring Supreme Court Justice Anthony Kennedy. And we followed his speech with a special-event luncheon moderated by Steve Forbes. But most attendees enjoyed the break time by visiting the exhibit hall, viewing one of our lunchtime movies, or buying a sandwich and visiting with other attendees in our lounge areas.

The Mock Trial was back too, this year charging the Public School System with fraud. We even had a hint of scandal in the jury box, when the foreman announced a tie of 6–6, even though the collected ballots were clearly marked 7 to convict, 4 to acquit, and one with both options marked. Was this an example of the New Math? Or the “everybody wins a trophy” mindset? We promise Price Waterhouse wasn’t tabulating the results!

Of course, FreedomFest is never without controversy. Our panel on “The Rise and Triumph of the Angry Voter” led to some testy anger among the panelists, and the debate between Newsmax contributor Wayne Allyn Root and New York Times columnist Ross Douthat over whether Trump is more like Reagan or Mussolini became predictably (for Root) loud. The debate between Douthat and Hugh Hefner biographer Steve Watts on whether FreedomFest should dedicate a room to the late Hugh Hefner was controversial as well — was Hefner a hero who liberated women from Victorian sexual mores, or a lecher who objectified women by turning them into sexual playthings? Interestingly, the debate on “Faith and Reason” between Dan Peterson and Michael Shermer was more popular than the Playboy debate, with standing room only.

First-timer George Will was another keynote speaker, delivering an inspiring speech about the power of entrepreneurship and innovation. Referencing Ted Kennedy’s declaration that “change begins at the ballot box,” Will offered several examples refuting the claim; he reminded the audience that Eli Whitney, John Deere, Alexander Graham Bell, and even Ray Kroc drastically changed the face and future of America, “and it did not begin at the ballot box. It began with the spark of entrepreneurial genius. . . . It began in individualism, which is important to everyone in this audience.”

Financial speakers have always been part of our faculty, and this year attendees enjoyed the new “Fast Money Summit” sponsored by Eagle Publishing, with its shortened 25-minute breakout sessions featuring top financial experts such as Steve Forbes, Mark Skousen, Doug Casey, Jim Rogers, Gena Lofton, Alex Green, Peter Schiff, Keith Fitz-Gerald, Marin Katusa, Jim Woods, and many more. At FreedomFest we believe that financial freedom is just as important as political freedom; money makes it possible to support causes and live a fuller personal life. “One good tip is worth the price of your admission,” was Eagle’s promise.

Others found their way to the Anthem Libertarian Film Festival — and some never left. “I can buy the recordings of the speeches,” one woman told me. “Where else can I watch these great films and meet the directors afterward?” In all modesty, as the director of the world’s only fully juried libertarian film festival, I couldn’t agree more. We had the best films and the best attendance in our eight-year history, with four world premiere films, five SRO screenings, 11 hard-hitting panels, and films that inspired us even as they told stories that outraged us. Libertarian films can be depressing when they’re set in dystopian futures or focus entirely on the hopelessness of big government; what I loved about this year’s lineup is that the films offered hope for a brighter future through greater freedom, greater courage, greater understanding, and greater technology. And the production values of our films this year were top-notch.

Our films focused on themes such as immigration, escape from communism, criminal justice reform, and technology. Their messages were often indirect and compelling. One of my favorites was the Best Comedy winner The Inconsiderate Houseguest (Rob and Letitia Capili), which offers a subtle (Rob claims “unintended”) and unexpected theme about immigration beneath its quirky story about an uptight, rule-oriented roommate. “Subtle” is the key here; messages don’t need to shout if they are presented well. Storytelling can be more powerful than a lecture because of the emotional connection it creates with the audience. In fact, at our Thursday night Master Class for filmmakers, one of the panelists credited the television show Modern Family with changing public opinion, and thus public law, regarding gay marriage because of its likeable gay couple and its reluctantly tolerant and loving family patriarch. “Everyone knows the message of a Michael Moore movie, but almost no one watches his documentaries. They just hear about it on the news,” another panelist observed. Engaging stories with nuanced messages have the power to move hearts and change minds. That’s the main reason we started the Anthem Libertarian Film Festival.

The $2,500 Anthem Grand Prize went to Skid Row Marathon (Mark Hayes, director), an inspiring documentary about L.A. Judge Craig Mitchell who, troubled by the outrageous mandatory sentencing he was forced to impose, started a running club to help former felons regain their self-confidence and restart their lives. Mitchell has taken the club to marathon competitions throughout the world. The club is financed through private donations and teaches the principles of choice and accountability. Club member Rafael Cabrera was on hand for the Q&A following the screening. The film also won the $500 AnthemVault Prize for Best Original Score, featuring music composed by club member Ben Shirley. I defy you to watch this film with a dry eye.

Saber Rock (Matt and Thomas Locastro, directors), about a young Afghan interpreter for the American military who was targeted for assassination by the Taliban when he began teaching children about the principles of freedom, won the Anthem award for Best Short Documentary. The real Saber Rock attended the festival and gave an impassioned opening night speech to the FreedomFest crowd. Rock was a festival favorite, taking selfies with numerous fans throughout the week. He was awarded Anthem’s Special Jury Prize for heroism and received a standing ovation from the audience.

Festival judge Gary Alexander argued at the judges’ meeting that America Under Siege: Antifa was one of the most important films at the festival because it reveals the truth behind the rising violence against free speech. Meanwhile, the gentle tone of Off the Grid with Thomas Massie won the hearts of festival attendees, who awarded it the Audience Choice trophy. Director Matt Battaglia follows the brilliant MIT graduate and inventor around the Kentucky farm that he built and maintains with his own hands as he talks about the priorities in his life and why he went to Congress. In one memorable segment he describes his congressional lapel pin, which garners him deferential treatment wherever he goes in Washington, as “Precious” and describes how difficult it can be to keep “Precious” from corrupting one’s focus and integrity.

A second Audience Choice trophy was awarded to Jimmy Morrison for his film The Housing Bubble, which features interviews with FreedomFest regulars Doug Casey, Peter Schiff, Jim Rogers, Gene Epstein, Tom Palmer, and others. It offers a cogent history of money, interest rates, inflation, and how they affect each one of us. The room was so packed that we had to bring in 50 more chairs, while many leaned against the walls or sat on the floor and at least 20 more brought chairs to sit five-deep in the doorway. The post-screening panel included all of the speakers who were featured in the film. Said director Morrison of the experience, “After all the delays with my movie, I really needed to make a statement with my premiere. I can't thank you enough for all that you did to make last week so successful!” That’s why we do what we do. These libertarian films need a venue. We provide it.

The Anthem Libertarian Film Festival is one of the fastest-growing features of FreedomFest, and also the best kept secret. Film aficionados can purchase a FilmLovers Pass for all four days for just $149, less than a third of the FreedomFest retail price. It includes all the films, plus film panels featuring top FreedomFest speakers and entrance to the exhibit hall. You can’t attend the FreedomFest general sessions or breakout sessions with it, but come on — with films and panels like these, who needs FreedomFest?

My husband, Mark Skousen, who produces FreedomFest, completely disagrees with me on this, of course! “Why would anyone go to a movie when they can hear these great speakers in person?” he often asks me. And he has a point. With nearly 250 speakers and over 200 sessions, it’s hard to choose. A good point, but only one point.

This year, in honor of the 50th anniversary of Reason magazine, FreedomFest hosted six Reason Day breakout sessions, plus the Reason Media Awards at our Saturday night banquet. Reason notables Katherine Mangu-Ward, Nick Gillespie, Matt Welch, Bob Poole, Ronald Bailey, Jacob Sullum, Lisa Snell and others presented the libertarian position on drug policy, gun control, biotechnology, pensions, prison reform, Bitcoin, transportation, and more. It was a libertarian feast, culminating in presenting the Friedlander Prize to Steve Forbes at the Saturday night banquet.

But don’t just take my word for the success of FreedomFest 2018; here’s what Marc Beauchamp, former West Coast bureau chief for Forbes Magazine, foreign correspondent in Tokyo, and trade association executive director in Washington, DC, said about FreedomFest this year:

“For me . . . FreedomFest is where you hear things you don’t hear anywhere else.

“Like the foreign policy panel where it was pointed out that Russia’s economy is smaller than that of Italy or South Korea and Doug Casey said, ‘Russia is a gas station in a wheat field attached to a gun store.’

“You can get pretty glum watching talking heads on cable TV. The antidote is David Boaz’s optimism — that there’s never been a better time to be alive in the United States, and in almost any other country on the planet.

“FreedomFest is a movable feast. You never know what’s on the menu. I enjoyed Skeptic magazine’s Michael Shermer’s breakout session on the scientific search for evidence of an afterlife, and his conclusion that we should focus on living a full meaningful life rather than worrying about what might or might not happen in the afterlife.”

In sum, FreedomFest is an individualist’s dream (though admittedly, for those who arrange it, it can have its nightmare moments). As in those old “Choose Your Own Adventure” novels of the ’70s and ’80s, you can create your own conference as you circle your favorite sessions and decide what you’re going to hear and do.

We can’t wait to see all of our friends at FreedomFest 2019, where our theme is “The Wild West.” Escape the Deep State to Live Free! Come choose your own adventure in Las Vegas, July 17–20. Hats and boots optional. Leave your horse at home.




When Nobody Knew What a Dollar Would Be


The Caxton Press has just published my book, The Panic of 1893, and I can now write for Liberty about it. Its topic is the final economic downturn of the 19th century. For more than three years, my head was in the 1890s — in books, articles, personal and official papers, lawsuits, and, especially, old newspapers, chiefly from my home state. The book’s subtitle is The Untold Story of Washington State’s First Depression.

It is a popular history, not a libertarian book as such. But I have a few thoughts for a libertarian audience.

Many libertarians espouse the Austrian theory of the trade cycle, in which the central bank sets interest rates lower than the market rate, leading to a speculative boom, bad investments, and a collapse. In the 1890s the United States had no central bank. Interest rates before the Panic of 1893 were not low, at least not in Washington. The common rate on a business loan was 10%, in gold, during a period in which the general price level had been gently falling. Washington was a frontier state then, and it needed to pay high interest rates to attract capital from the East and from Europe. Credit standards, however, were low, sometimes appallingly low. Many of Washington’s banks had been founded by pioneers — optimistic patriarchs who lent freely to their neighbors, associates, relatives, and themselves. By a different road from the one the Austrian theory describes, the economy arrived at the same destination: a Hallowe’en house of bad investments.

The dollar was backed by gold, with the US Treasury intending to keep at least $100 million of gold on hand. But in 1890, at the peak of the boom period, Congress passed the Sherman Silver Purchase Act, obligating the Treasury to buy up the nation’s silver output with newly printed paper money. It was a sop to the inflationists, who wanted an increase in the money supply, and to the silver mining interests, who wanted the government to continue buying their silver, which it had been doing to create silver dollars. Politically the Sherman Silver Purchase Act was also part of a deal to pass the McKinley Tariff, which raised America’s already high tariff rates even higher.

The problem with the Sherman Silver Purchase Act was that the new paper money being paid to the silver miners could be redeemed in gold. The prospect of an increase every year in paper claims against the Treasury’s gold alarmed foreign investors, and they began to pull gold out. Two crises abroad also shifted the psychology of lenders and borrowers worldwide: Argentina defaulted on a gold loan from the Baring Brothers in 1890 and a real estate boom in Australia collapsed in 1893. These crises shifted the thoughts of financial men from putting money out to getting it back, from a preference for holding promises to a preference for cash.

By the time Grover Cleveland took office in March 1893, the Treasury’s gold cover had shrunk to $101 million. A run began on the Treasury’s gold — and that triggered the Panic of 1893.

In the Pacific Northwest, the four-year-old state of Washington (pop. 350,000 then) had 80 bank failures in the following four years.

Economists have listed the ensuing depression as the second-deepest in U.S. history. (One estimate: 18% unemployment.) But they don’t know. The government didn’t measure unemployment in the 1890s. And the rate of unemployment may not be the best comparison. America was less wealthy in the 1890s than in the 1930s, and living conditions were harsher. In absolute terms, the bottom of the depression of the 1890s was clearly lower than that of the 1930s.

The Left of the 1890s, the Populists and silverites, wanted cheap money. They blamed the depression on the gold standard. And gold is not an easy taskmaster; libertarians have to admit that.

The silverites wanted a silver standard. Most of them were “bimetallists,” claiming to favor a gold standard and a silver standard at the same time, with 16 ounces of silver equal to one ounce of gold. Their idea was that by using gold and silver the people would have more money to spend.

Free silver was a policy well beyond the Sherman Silver Purchase Act, which compelled the Treasury to buy silver at the market price. In the mid-1890s, silver fell as low as 68 cents an ounce. At that price, a silver dollar had 53 cents’ worth of silver in it and the silver-gold ratio was 30-to-1.
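As a rough check on those two figures, here is a small Python sketch. The one number not given in the text is the silver content of a standard silver dollar, which I am assuming to be about 0.7734 troy ounces (371.25 grains) of pure silver; the official gold price of $20.67 an ounce comes from later in the article.

```python
# Rough check of the "53 cents" and "30-to-1" figures above.
silver_price = 0.68          # US$ per troy ounce of silver, mid-1890s low (from the text)
gold_price = 20.67           # official US$ per troy ounce of gold (from the text, below)
silver_in_dollar = 0.7734    # troy oz of pure silver in a standard silver dollar (assumed: 371.25 grains)

melt_value = silver_price * silver_in_dollar   # market value of the silver in one silver dollar
market_ratio = gold_price / silver_price       # ounces of silver worth one ounce of gold

print(f"silver in a dollar coin: about {melt_value:.2f} dollars")   # ~0.53
print(f"market silver-gold ratio: about {market_ratio:.0f}-to-1")   # ~30-to-1
```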

The bimetallists wanted 16-to-1. That was the ratio Congress had set for U.S. coinage in the 1830s, after the original ratio of the 1790s, 15-to-1, had drifted away from the market price of the two metals. Then came the Civil War, when the U.S. government suspended the gold standard and printed up its first “greenbacks,” the United States Notes.

The United States Notes were effectively a new currency, and traded at a discount from metallic dollars. In September 1896, the Seattle Post-Intelligencer reminded readers of those times:

There never was a time from the beginning of the first issue of greenbacks down to the resumption of specie payments when the greenback dollar was ever accepted on the Pacific Coast for anything more than its market price in terms of gold.

The greenback was discounted, sometimes by 50 to 60%.

In 1873, Congress decided to define the dollar as a certain weight of gold, but not silver. The silver people in the 1890s called this “The Crime of ’73.”

Redemption of paper money under the gold standard began in 1879. To placate the silver interests, Congress had passed a law requiring the government to buy silver at the market price and coin it into dollars — the Morgan dollars prized by collectors today. At the beginning, the silver in a Morgan dollar was worth about a dollar, but by the 1890s, the value of silver had fallen.

In 1890, the silver-dollar law was replaced by the Sherman Silver Purchase Act, which created paper money. The government still coined silver dollars, and by 1896 had more than 400 million of them in circulation.

The law did not require the Treasury to pay out gold for silver dollars, and it hadn’t. But the law declared all the different kinds of dollars (and there were five different kinds of paper money, at that point) to be equally good for everyday use except for taxes on imports. At the amounts an individual was ever likely to have, a silver dollar was as good as a gold dollar.

If you ask why a sane person would have designed a monetary system with gold dollars, silver dollars, Gold Certificates, Silver Certificates, National Currency, Treasury Notes, and United States Notes, the answer is that no one person designed it: Congress did, one variety at a time.

Under the proposal for “free silver,” gold would be kept at the official price of $20.67 and silver set at one-sixteenth that price, or $1.29. Just as the world was free to bring an ounce of gold to the Treasury and take away $20.67 — “free gold” — the world would be free to bring an ounce of silver to the Treasury and take away $1.29. Free silver! The advocates called this the “unlimited coinage” of silver, but the aim was to create dollars, not coins. Most of the silver could pile up in the Treasury and be represented by crisp new pieces of paper.

The gold people argued that for the United States to set up a 16-to-1 currency standard in a 30-to-1 world was nuts. Essentially, the Treasury would be offering to pay out one ounce of gold for 16 ounces of silver. It would be a grand blowout sale on gold, and the world would come and get it until the gold was gone. The Treasury would be left with a Fort Knox full of silver, and the U.S. dollar would become a silver currency like the Mexican peso.
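To make the gold people’s fear concrete, here is a hedged sketch of the arbitrage in Python, using the prices quoted above and ignoring mint charges and shipping costs:

```python
# Illustration of the arbitrage feared under free coinage at 16-to-1 in a 30-to-1 world.
official_gold_price = 20.67    # US$ the Treasury pays out per ounce of gold
market_silver_price = 0.68     # US$ per ounce of silver on the open market

# Under free silver, 16 ounces of silver brought to the Treasury become $20.67 in dollars,
# and those dollars, being as good as gold dollars, could be presented for an ounce of gold.
cost_of_silver = 16 * market_silver_price   # what the arbitrageur pays for the silver
gold_obtained = official_gold_price         # value of the ounce of gold received

profit = gold_obtained - cost_of_silver
print(f"pay {cost_of_silver:.2f} in silver, receive {gold_obtained:.2f} in gold, profit {profit:.2f}")
# pay 10.88 in silver, receive 20.67 in gold, profit 9.79
```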

Surely the gold people were right about that. (And today’s ratio is 78 to 1.)

Milton Friedman argues in his book Money Mischief that two standards, with the cheaper metal defining the dollar in current use, would have worked all right. If the cheap metal got too expensive, the system would flip and the dollar would be defined by the other metal. In theory it makes sense, and apparently before the Civil War it had worked that way. But the financial people didn’t want a system like that.

The Treasury would be left with a Fort Knox full of silver, and the U.S. dollar would become a silver currency like the Mexican peso.

In 1896, America had a watershed election, with the silver people for Bryan, the Democrat, and the gold people for McKinley, the Republican. A third party, the People’s Party, endorsed Bryan. Its followers, the Populists, didn’t want a silver standard. They were fiat-money people. But Bryan was against the gold standard, and that was enough.

In that contest, the silver people were derided as inflationists. They were, to a point. They wanted to inflate the dollar until the silver in dollars, halves, quarters, and dimes was worth the full face value of the coins. The silver people were not for fiat money.

Here is the Spokane Spokesman-Review of October 1, 1894, distinguishing its silver-Republicanism from Populism:

Fiat money is the cornerstone of the Populist faith . . . Silver money is hard money, and the fiatist is essentially opposed to hard money . . . He wants irredeemable paper money, and his heart goes out to the printing press rather than the mint.

The Populists and silverites argued in 1896 that the gold standard had caused the depression, and that as long as gold ruled, the nation would never recover. History proved them wrong. They lost, and the nation recovered; the recovery began once the election settled the monetary question. Investors and lenders knew what kind of money they’d be paid with.

Milton Friedman makes a monetarist point in Money Mischief that starting in about 1890, gold miners had begun to use the cyanide process, which allowed gold to be profitably extracted from lower-grade ore. The result was an increase in gold production all through the decade. I came across a different story in my research. The increase in the supply of gold (about which Friedman was correct) was outstripped by the increase in the demand for gold. Prices in gold dollars declined sharply during the depression of the 1890s, including the prices of labor and materials used in gold mining. It became more profitable to dig for gold. Deflation helped spur a gold-mining boom — in the Yukon, famously, but also in British Columbia, in Colorado, and in South Africa.

The US began a recovery after the election settled the monetary question. Investors and lenders knew what kind of money they’d be paid with.

Under a gold standard, a deflation sets in motion the forces that can reverse it. This is a useful feature, but it can take a long time.

The recovery from the depression of the 1890s began not with a burst of new money but with a quickening of the existing money. What changed after the election was the psychology of the people. They knew what sort of money they held and could expect. The important point wasn’t that it was gold, but that it was certain. If Bryan had been elected and the dollar became a silver currency, people would have adjusted. With gold, they didn’t have to adjust, because it was what they already had.

The writers of the 1890s had a less mechanistic view of the economy than people have today. People then didn’t even use the term “the economy.” They might say “business” or even “times,” as if they were talking of weather conditions. They talked less of mechanisms (except the silver thing) and more of the thoughts and feelings of the people. People today are cynical about politicians who try to manipulate their thoughts and feelings, and think that it’s the mechanisms that matter. And sometimes mechanisms matter, but the thoughts and feelings always matter.

Prices in gold dollars declined sharply during the depression of the 1890s, including the prices of labor and materials used in gold mining. It became more profitable to dig for gold.

Now some observations about the ideas of the 1890s.

The Populists, called by the conservative papers “Pops,” were much like the Occupy Wall Street rabble-rousers of a decade ago: anti-corporate, anti-banker, anti-bondholder, anti-Wall Street, and anti-bourgeois, but more in a peasant, almost medieval way than a New Left, university-student way. Many of the Pops were farmers, with full beards at a time when urban men were shaving theirs off or sporting a mustache only. More than anti-Wall Street, the Pops were anti-debt, always looking for reasons for borrowers not to pay what they owed. On Wikipedia, Populism is labeled left-wing, which it mainly was. It was also rural, Southern, Western, anti-immigrant, and often racist. In Washington state it was anti-Chinese.

In the 1890s traditional American libertarianism was in the mainstream. In the newspapers this is very striking, with the Republican papers championing self-reliance and the Democratic papers championing limited government. Democrats, for example, argued against the McKinley Tariff — which imposed an average rate of more than 50% — as an impingement on individual freedom. Here is Seattle’s gold-Democrat daily, the Telegraph, of September 10, 1893:

If it be abstractly right that the government shall say that a man shall buy his shoes in the United States, why is it not equally right for it to say that he shall buy them in Seattle? . . . Where shall we draw the line when we start out from the position that it is the legitimate and natural function of government to regulate the affairs of individuals . . .

Our idea is that the least government we can get along with and yet enjoy the advantages of organized society, the better.

Here is the silver-Republican Tacoma Ledger of December 3, 1895:

Thoughtful men must perceive that our whole system of civilization is undergoing a revolution in its ideas; and we are in danger of gradually supplanting the old, distinctive idea of the Anglo-Saxon civilization — the ideas of the individualism of the man, his house as his castle, and the family as his little state, which he represents in the confederation of families in the state — by the Jacobinical ideas of . . . continental republicanism . . . The continental republican theory contemplates the individual man as an atom of the great machine called the nation. The Anglo-Saxon considers every man a complete machine, with a young steam engine inside to run it. The continental republican must have a government that will find him work and give him bread. The Anglo-Saxon wants a government only to keep loafers off while every man finds his own work and earns his own bread.

Contrast that with today’s editorial pages.

The Populists were anti-debt, always looking for reasons for borrowers not to pay what they owed.

Here’s a final one I particularly liked. Archduke Franz Ferdinand of Austria-Hungary — the same gent whose assassination 21 years later would touch off World War I — came through Spokane on the train in 1893. Americans, fascinated with him just as they would be a century later with Princess Diana, stood in the rain for hours to get a glimpse of the famous archduke — and they were sore because he never showed himself. On October 9, 1893, here is what the Seattle Telegraph had to say about that:

Why in the name of common sense should the people of this country go out of their way to honor a man simply because he happens to be in the line of succession to a throne . . . The correct thing is to let their highnesses and their lordships and all the rest of them come and go like other people. To the titled aristocracy of Europe there is no social distinction in America.

The America of the 1890s had some unlovely aspects. But in my view, the Telegraph’s attitude toward princes is exactly right. I recalled the Telegraph’s patriotic comment during all the blather over the wedding of Princess Diana’s son.

The 1890s had its blather, but after 125 years, sorting out facts from nonsense is easier. Silly statements, especially wrong predictions, don’t weather well. It makes me wonder what of today’s rhetoric will seem utterly preposterous in the 2100s.






Hip Replacement: Lesson One

 | 

“In a soldier’s stance, I aimed my hand
at the mongrel dogs who teach . . .”
                                      — Bob Dylan, My Back Pages (1964)

English, like every other living language, constantly evolves. Every utterance holds the promise of change: a new take, a fresh twist, an old word with a new meaning, or a neatly turned phrase that nudges the language, and the people who speak it, in a new direction. This is Donald Trump in the ’80s: “You have to think anyway, so why not think big?”

New words are created all the time. The verb “mansplain,” coined less than a decade ago, describes a practice at least twice as old: when a man explains something, a new word, say, to a woman in a condescending manner. And, of course, words just disappear. I haven’t heard “tergiversation” since William Safire died. Some words are like mayflies, here and gone. A word used only once is called an “onanym,” which, appropriately, is one.

As changes accumulate, the distance between the old and new versions of the language grows and the older version gradually becomes dated, then archaic, and, eventually, incomprehensible. Read Beowulf. (Or consider that less than 60 years ago we elected a president named “Dick.”)

And, of course, words just disappear. I haven’t heard “tergiversation” since William Safire died.

The sound of English changes, too. Its phonological components, such as tone, pitch, timbre, and even melody, change. If you learned American English, the North American dialect of Modern English, scores of years ago, as I did, you have heard many such changes and, while you can probably understand the current version, provided the slang isn’t too dense, you probably cannot reproduce its sound.

This, then, is a music lesson of sorts, designed to help you, my fellow older American, replicate the sounds of what we will call Post-Modern English, or PME, the successor to American English. Not the slang, mind you, but the sound of it, the music. If you are wondering why you should bother, reflect on this: you wouldn’t parade around in public wearing the same clothes that you wore as a child, would you? Of course not, because fashion changes and we should change with it, provided that we do it in an unaffected way. Choosing to update the sound of your English is as sensible as hanging up your coonskin cap. One must make an effort to ensure that one’s outfit looks snatched, after all.

The lesson includes a short passage from a radio broadcast I heard that contains many of the phonological changes that American English has undergone during the past several decades. While I managed to jot it down, I couldn’t get a copy of the audio. No matter. You can tune into any pop radio station and listen to the banter of the DJs. They are first-rate role models for Post-Modern English. (Dakota Fanning is not. I heard her interviewed on NPR, and to my ear she sounded like nothing so much as the valedictorian at a finishing school graduation, circa 1940. To be fair, NPR features many guests, and hosts, for that matter, whose mastery of PME is just totally awesome.)

Choosing to update the sound of your English is as sensible as hanging up your coonskin cap.

Ready? There are five parts. The first reveals the essence of Post-Modern English, so that you will know how to comport yourself when speaking it. The second will help you adjust your vocal cords to the proper register. The third comprises ten exercises drawn from the transcript of that radio broadcast. The fourth alerts you to a few problems you may encounter once you have mastered PME, and suggests practical solutions. The fifth and final part will put American English where it belongs: in the rear-view mirror. Just as Professor Higgins taught Miss Eliza Doolittle to drop the cockney and pass as posh, so I will teach you to drop your stodgy American English and sound cool. By the end of this linguistic makeover you will sound like a real hep cat. (The spellchecker just underlined “hep.” Bastards.)

* * *

Part One: The Essence

As French is the language of love and German is the language of Nietzsche, Post-Modern English is the language of wimps.

(Just now, you may have jumped to the conclusion that the word “wimps” was deployed in the previous sentence as a pejorative. It was not. It was chosen because it is the word that best embodies the defining characteristics of Post-Modern English. If you’ll bear with me, I think you’ll come to agree.)

When a French woman explains a line from Also Sprach Zarathustra, she sounds as if she were flirting. When a German man puts the moves on a fräulein in a dimly lit hotel bar, he sounds as if he were explaining how to change the oil in a diesel engine. Let us stipulate that the French woman is not a flirt and the German man is not a mechanic. It doesn’t matter; their languages make them sound that way. And when a fluent speaker of Post-Modern English asks you to move your car because he’s been boxed in, he sounds like a puppy that has just peed on your white carpet. He may not be a wimp, but he sure does sound like one. It is simply the case that each of these languages, at least when heard by Americans of a certain age, creates a vivid impression of the speaker. It is no more complicated than that. So why does the American guy sound like such a wimp?

Post-Modern English is the language of wimps.

At the core of Post-Modern English are two directives that determine not just the attitude but also the moral stance that its speakers assume as they turn to face the oncoming challenges of the modern world. These two directives, sometimes called the Twin Primes, preempt both the laws enacted by governments and the commandments handed down by ancient religions. (Practically, this means that in the event of a conflict between any of those laws or commandments and either of these two directives, it is the latter that will be adhered to, not the laws of God and man, all other things being equal.) You may have heard one or both of the Twin Primes invoked when a speaker of Post-Modern English suspects a violation has occurred in the vicinity.

The First Directive is “Don’t judge.” The Second is “Don’t be a dick.”

How, you may be asking yourself, could two such sensible and straightforward prohibitions make millions of people sound wimpy? Enforced separately, they probably couldn’t, but enforced together, they lay a paradoxical trap that can make even the straightest spine go all wobbly.

When a fluent speaker of Post-Modern English asks you to move your car because he’s been boxed in, he sounds like a puppy that has just peed on your white carpet.

Step by step, now. To judge others is considered rude in Post-Modern English, especially if the judgment is thought to be harsh. A person who judges others in this way and then communicates that judgment to those being judged is often referred to as a dick. If, for example, you saw someone who was judging others and, in a completely sincere attempt to be helpful, you said to that person, “Don’t be a dick,” you would have, in effect, not only made a judgment about that person’s behavior, but also communicated it to that person in a harsh way. By definition, then, by telling this person that he has behaved badly, you yourself would have strayed off the reservation to which PME speakers have agreed to confine themselves, and would have become the very thing that you have judged so harshly: a dick.

Now, Post-Modern English speakers are not stupid. They are aware of this trap and, not wishing to be hoist with their own petards, do what any reasonable person would do. Not only do they rarely call other people “dicks,” but they fall all over themselves to avoid any communication that can be interpreted as passing judgment on others. Simple statements about mundane matters are qualified and watered down so that the likelihood of giving offense is minimized. Explanations are inflected to sound like questions, apologies, or cries for help. Commonplace opinions are framed semi-ironically, often attached to the word “like,” so that they can be retracted at the first sign of disagreement. This feature of the language is called “ironic deniability.” It also allows one to blame irony when the real culprit is stupidity.

As a result, fluent PME speakers, when compared with speakers of earlier forms of American English, sound more uncertain, unassertive, and nonjudgmental. To put it bluntly, they sound more sheepish. Not because they are, you understand, any more than the French woman was flirtatious. It is just that the rules of the language have prodded them, bleating, into the chute that leads inescapably to the waiting tub of dip. In short, to avoid being dicks, they end up being wimps.

By telling this person that he has behaved badly, you yourself would have become the very thing that you have judged so harshly: a dick.

Wake up, old son, and smell the nitro coffee. In this brave new world, wimpiness is cool.

And that, my crusty-but-benign student, is all you need to know. You don’t need a dissertation on the cultural and historical forces that forged this pained linguistic posture; all you need is to imitate its cringe as you complete the lesson ahead and go on to achieve fluency in Post-Modern English. Here’s an aspirational commandment: “Thou shalt be cool.” You can do this. It’s goals.

Part Two: The Vocal Register

Please say, “So, that’s pretty much it, right?” in your normal 20th-century voice. OK? Now say it again, but make the pitch of your voice as low as you can.

How’d it go? When you lowered the pitch, did you hear a sizzle, a popping sound, like bacon frying? No? Try again, as low as it will go. Once you’ve achieved this effect, I’ll give you the backstory.

Ready? That sizzle is the sound of liquid burbling around your slackened vocal cords. As you may have noticed, this register, often called vocal fry, has been growing in popularity during the past few decades.

Fluent PME speakers, when compared with speakers of earlier forms of American English, sound more uncertain, unassertive, and nonjudgmental. To put it bluntly, they sound more sheepish.

In the 1987 movie Made in Heaven, Debra Winger played the archangel Emmett, God’s right-hand man, who was supposed to be a chain-smoker. As Ms. Winger was not, she had to simulate a smoker’s voice for the part, serendipitously producing a pitch-perfect proto-vocal fry. While this early mutation event does not appear to have lodged in the inheritable DNA of the language, it is fascinating in the same way that the Lost Colony of Roanoke is.

Vocal fry’s current run on the linguistic hit parade is more likely to have begun when Britney Spears croaked “Baby One More Time” in 1998, although it is occasionally said that the real patient zero was someone named Kardashian. Whatever.

Women tend to use vocal fry more than men. A wag on TV once said that women are trying to usurp the authority of the patriarchy by imitating the vocal register of the male. This would be in stark contrast to the southern belle or transvestite, both of whom artificially raise the pitch of their voices, sometimes into falsetto, to enhance the appearance of femininity.

Isn’t the theory that these bubbling vocal cords were repeatedly sautéed and baked less likely than the much simpler explanation of demonic possession?

Another theory holds that the phenomenon is simply the result of too much booze and marijuana. For this “Animal House Hypothesis” to be taken seriously, however, it must account for the fact that vocal fry did not present in the ’60s (except briefly in Clarence “Frogman” Henry’s 1956 recording of “Ain’t Got No Home”). Considering that the sound more nearly resembles an audition for the next installment of the Exorcist franchise, isn’t the theory that these bubbling vocal cords were repeatedly sautéed and baked less likely than the much simpler explanation of demonic possession? The smoker’s rasp sounds much drier, anyway.

There has been an effort to dismiss the bubbling as a mere affectation. But ask yourself: what are the odds that a vocalization nearly indistinguishable from Mongolian throat singing will be adopted by millions of young people, simply to strike a pose? I’m just not buying it. The simplest explanation may be best: it was an innocently adopted, thoroughly harmless preteen fad that unexpectedly took root in adolescence and grew into a well-established, widespread adult habit, like picking one’s nose.

Don’t sizzle when you uptalk. You’ll frighten the children.

We may not know where it came from, and we may not know why it came, but we do know that vocal fry, while not quite the sine qua non of Post-Modern English, sends the loud and clear message, to anyone who happens to be within earshot, that standing here is a proud master of the 21st-century version of American English, gargling spit while speaking. (I seem to recall once seeing something similar being done by a ventriloquist.)

Learn the sounds in the lesson below; sing them with the sizzle above, while acting like a sick toy poodle at the vet’s, and your quest will be over. The Holy Grail of this Elders’ Crusade will be yours: PME fluency. (Oh, and remember: don’t sizzle when you uptalk. You’ll frighten the children.)

Part Three: The Exercises

So, in the 2016 election, Clinton was really sure she would sort of capture the states in the rust belt, but she didn’t. I mean, the turnout there was pretty much deplorable, right?

1. Discourse markers, sometimes called fillers, such as those used above (so, really, sort of, I mean, pretty much, and right), while not integral to either the meaning or the music of Post-Modern English, enhance its aesthetics, signal that the speaker is part of the linguistic in-crowd, and help the speaker sound as if his grip on what he’s saying is less than firm. This gives him wiggle room and makes him seem all squirmy: the Daily Double. Placing fillers in a phrase to best effect calls for a keen ear, rigorous practice, and a constant monitoring of how it is being done by the cool kids.

Beginning immediately, insert at least one filler into each sentence you speak. Yes, it requires self-discipline, but don’t worry; in time, it will become habitual and you will be able to dispense with self-discipline entirely.

There are fillers galore. To gain perspective, note that like, actually, and dude, while still heard, have grown slightly stale.

Yes, it requires self-discipline, but don’t worry; in time, it will become habitual and you will be able to dispense with self-discipline entirely.

About ten years ago, like was like ubiquitous. Like it was in like every sentence like three or four times. I mean, it had like metastasized. Then, over the next few years, its rate of use fell by 73%, as though it had gone into remission. Often, when a word or fad becomes a pandemic, it burns itself out. There was a sign on a Mississippi country store: “Live Bait – Nitecrawlers – Cappuccino.” It could be that the overuse of like was deemed uncool by some shadowy teen language tribunal and labeled a bad habit, like smoking tobacco. But as with that addiction, many found it impossible to go cold turkey. You’ve probably heard of Nicorette, a gum used by smokers trying to ease withdrawal. Well, the discourse markers sort of, kind of, you know, I mean, and pretty much have been the linguistic Nicorette to millions of like addicts trying to kick the habit. Some former addicts have resorted to saying kinda-sorta. They are sincere in their belief that this constitutes an evolutionary step forward.

Actually, which often sounds a trifle pompous, has largely been replaced by so in the initial position and right in the final position, as demonstrated in the lesson. It can still be used, but sparingly. Once per minute ought to do it, actually; twice, at most.

In place of dude, try bro, or brah, or bruh, or perhaps you could consider using nothing at all.

In summary, “Actually, I like know what I’m talking about, dude,” compares unfavorably to, “So, that’s pretty much, you know, how it sort of is, brah — I mean, right?” While both sets of words still appear in the lexicon of Post-Modern English, the latter reflects the more gracile stage of linguistic evolution that has been achieved, and is, therefore, preferred. It sounds more woke, too, doesn’t it, or is that just me?

They are sincere in their belief that this constitutes an evolutionary step forward.

2. The first two syllables in the word “election” should be mid-range in pitch, and clearly and crisply enunciated, while the final syllable should be lower pitched and slightly drawn out: “shuuun.” (In other applications, the terminal syllable must be uptalked. This will be covered in Lesson Two.) The increase in duration for the final “shun” is mandatory for all words ending in “-tion.” God knows why. But try it again, with a little sizzle: “elek- shuuun.” Nice.

3. “Clinton” should be pronounced “Cli/en” with a glottal stop entirely replacing the medial “nt” consonant blend. Glottal stops are a thing right now. “Mountain” is “mow/en,” and “important” is “impor/ent,” not to be confused with the mid-Atlantic pronunciation “impordent.” (Note that in the go-to example for glottal stops in American English, “mitten” becoming “mi/en,” it is only the “t” sound that is replaced, as it is in “impor/ent.” Replacing the “nt” seems to be the more recent, bolder approach, and is thus more worthy of imitation.) Practice these glottal stops in front of a mirror. To avoid embarrassment, it’s better to practice when you’re alone than to try them out in public before they’ve been thoroughly polished.

4. The word “sure” should not be pronounced like “shirt” without the “t” but rather as “shore,” rhyming with “snore,” with the long “o” and a strongly vocalized “r.” This pronunciation probably hails from Brooklyn, where it had been successfully detained for decades. Similarly, don’t say “toorist,” say, “toarist.” (By George, you’ve got it.) Again, practice. This is hot stuff. Cutting edge. Hundo P.

To avoid embarrassment, it’s better to practice when you’re alone than to try things out in public before they’ve been thoroughly polished.

5. In the word “capture,” the first syllable, “cap,” should be mid-range in pitch and clipped at the end, with a fraction of a second pause before dropping down to the second syllable, “chur,” which must be at a low pitch and slightly drawn out, so that it sounds like the endearing growl of a small dog.

This rule, first promulgated by anonymous Valley Girls back in the eighties, applies to all multi-syllabic words that end in “-ture” and most words of more than one syllable that end in “r.” The amount of fry used in this application has varied over time, and the appropriate level has been the subject of a lively but inconclusive debate. I take the view that it is a matter of personal taste. Experiment with the sizzle; go ahead. Practice with this list: rapture, juncture, fracture, puncture, rupture. Remember: Start high, go low, go long. Grrrr.

6. In “the rust belt,” “the” should be at mid-register pitch, while both “rust” and “belt” should be about five full notes higher. Yes, this is the famous sound of uptalk. The higher pitch of “rust” and “belt” suggests that a question is being asked. The goal is to create the impression that you are checking to see if the listener knows, as you are pretending to know, exactly what the rust belt is. What is desired is the illusion of a simultaneous, unspoken, empathetic interaction of mutual insecurity, something like, “Are you still with me? Am I doing OK?”, evoking at most an instant, tiny nod from the listener and a silent “Yes, I’m still with you, and you’re doing just fine, I think.” Try not to sound too needy. Aim for a subtle patina of clingy insecurity. It’s more credible. No need to ham it up.

Again, it is the legendary Valley Girls who are credited with this classic innovation. Australia recently filed a suit with the International Court of Justice disputing this claim. As if!

Aim for a subtle patina of clingy insecurity. It’s more credible.

Uptalk, like vocal fry, is used by women more than men, and is frowned upon by some, especially when it is “overused” and “exaggerated.” What crap. When it’s used once or twice per sentence, and the high-pitched words don’t pierce the falsetto barrier too often, uptalk reliably contributes to an authentic-sounding PME fluency. While I’ll grant that it may be something of an acquired taste, with practice and patience you’ll come to find its chirping high notes as precious as I do. Uptalk is cool and is likely to remain so. (I suspect that some men avoid uptalk because it makes their mansplaining hilarious.)

7. Then, after “rust belt,” comes a pause, as though the speaker were waiting for some confirmation of comprehension. This is a faux pause. The pause should not be so long that it gives the listener sufficient time to formulate and initiate an inquiry — in this instance, into the actual membership roster of states or cities in the rust belt. The duration of the pause will vary according to the speaker’s assessment of the listener’s level of expertise. Here, the assessment would involve the fields of (a) voter behavior in 2016 and (b) the deindustrialization of the non-Canadian area around the Great Lakes during the past half-century. To use the faux pause correctly, then, refer to this rule of thumb: Low expertise? Short pause. High expertise? Shorter pause. As always, the primary concern should be style, not substance.

8. The words “but she” should be two full steps lower than “belt” (from the fifth to the third), but “didn’t” should be right back at the same pitch as “belt.” That’s right, another dose of uptalk.

To master the technique, the novice should start by uptalking at least 50 times a day. When I was starting out, I kept a pencil stub and a little note pad in my shirt pocket to tally up my uses of uptalk during the course of the day with neatly crosshatched bundles of five. You might want to give it a try, as it keeps your shoulder to the wheel. I am proud to say that I uptalk effortlessly all the time now, and the surprise and sheer delight on the faces of young people when they hear an older gentleman “talking up” makes all the hours of practice worthwhile. I feel like I’m really making a difference.

While I’ll grant that it may be something of an acquired taste, with practice and patience you’ll come to find its chirping high notes as precious as I do.

A word of caution. When uptalk is employed at a very high frequency, volume, and pitch, and the whole sampler of fillers is tossed in, a critical mass can be achieved that has been known to set off a chain reaction. First your dog, then the neighbors’, then their neighbors’ — before you know it, the whole neighborhood is filled with the sound of a howling canine chorus. Once, when I overdid it, the damned coyotes even joined in. So mix fillers into your uptalk carefully. I’m just saying.

9. The word “didn’t” should be pronounced as a crisp, two-syllable “dident.” The short “e” sound should be clearly heard as in “Polident.” (Think “prissy.”) This same rule applies to “doesn’t,” which becomes “duhzent,” emphasis again on the short “e.” While “couldn’t” and “shouldn’t” also sometimes become “couldent” and “shouldent,” as one might expect, just as frequently they come out as, “coont” and “shoont,” utilizing the short “oo” of “schnook.” (Thinking back, the guys I heard using this pronunciation may have been lit.) Either of these modern variants is acceptable, but eschew the fuddy-duddy standard pronunciations of the original contractions, “could/nt” and “should/nt,” which, oddly, feature glottal stops. (Yesterday, I heard “coo/ent.” Very chill.) Oh, and don’t say “did/nt.” (With all due respect, you’d sound like a cave man.)

10. The final word, “right,” should be pronounced in a way that places it at an equal distance from (a) assuring the listener that what you just said was not only correct, but cool, and (b) seeking assurance from the listener that what you just said was not only correct, but cool. In order to achieve this effect, the coloration of “right” must be subtly blended so as to become a creature of the twilight world between the declarative and the interrogative: not falling, not rising, not whining, and never, ever abrupt. With the proper inflection, “right” will hit this sweet spot, where the listener will wonder, “Wait. What? Is he asking me or telling me?”

Practice these ten exercises. Practice hard, then get out there and commence pussyfooting.

Part Four: Problems and Solutions

As you gain fluency in Post-Modern English, what you seem to lose in self-confidence, you will more than make up for with an increased capacity to appear empathetic. Your use of PME will lower the walls and build new bridges between you and the people around you. Your sacrifice of the ability to assert your will and pass judgment on others will help create a more open, tolerant, and nonjudgmental human community. You will contribute to a world in which nobody will feel the need to say “Don’t judge me,” or “Don’t be a dick,” because there will be no one judging them and no one will be acting like a dick. That’s right: no more judges and no more dicks. It will be a world of greater respect, warmth, and, yes, love.

The bad news is that you’ll have to keep an eye out for three problems that may rear their ugly little heads.

What you seem to lose in self-confidence, you will more than make up for with an increased capacity to appear empathetic.

First, there is backsliding. Although you now sound hip, as you approach your dotage you may find among your life’s baggage a few truths that you feel should be self-evident to everyone. You may even feel a need to share these truths with the people who, sad to say, have not had the pleasure of reading the self-published revisions to your personal Boy Scout Handbook. (You may also feel a constant pain in your lower back. These symptoms often occur together.) Pretending to be wimpy may have grown so taxing that, as therapy, you decide to briefly drop the Post-Modern English charade and revert to your former pre-PME self. But how do you safely remount your high horse?

To avoid unjust accusations of hypocrisy, it is best to choose the venue and target of these code-switching episodes carefully. I’ve heard that a marvelous place to engage in them is on urban motorways. I am told that it is easy to find drivers who are unaware of your exhaustive personal list of driving dos and don’ts. What next?

You may find yourself behind the wheel of a large automobile. Some knucklehead in a little Kia cuts in front of you without even signaling, missing you by mere yards. Gunning it, you pull up next to him. You lower your window. He lowers his. Then you let him have it with both barrels — figuratively, of course. You tell him, in stark Anglo-Saxon terms, in as loud and clear a voice as you can muster, the obscene fate that awaits him and his mother before their imminent and humiliating deaths. After that, spleen thoroughly vented, you brake and swerve onto the exit ramp, switch back to PME, and reassume your Oprah-like pose of nonjudgmental equanimity.

Here are a few tips. Before you switch codes, make absolutely sure that the knucklehead in your crosshairs doesn’t know who you are. Anonymity is crucial. And avoid the rush hour, when traffic sometimes grinds to a halt. Offended knuckleheads have been known to leap from their cars, screaming obscenities and brandishing revolvers. They are, after all, knuckleheads. (Good thing it’s illegal to use a wireless telephone while driving. No one will be able to post your outburst on the internet.)

The best way to keep from backsliding is, obviously, to get a grip on yourself.

Before you switch codes, make absolutely sure that the knucklehead in your crosshairs doesn’t know who you are. Anonymity is crucial.

Second, should you choose to “just say no” to the temptation to backslide, beware of unsuccessful repression. If, in order to achieve PME fluency, you have to repress the wish to lord it up over everybody, and the repression fails to keep that wish contained, you may catch it sneaking out of that darkened back room of your consciousness, where you’ve been keeping it out of public view, and exposing itself in what is popularly known as a “Freudian slip.”

Attending a lovely garden party, you might intend to say, “Oh, you’re so kind. Thank you so much,” only to find yourself saying, “Why don’t you just go fuck yourself.” Remember, you could have said this to the knucklehead who cut you off, but you didn’t want to be seen as a hypocrite.

What then? The best way to avoid Freudian slips is to keep a civil tongue in your head. If you think that you might need professional help to accomplish this, weekly sessions with a competent therapist for a year or two should do the trick. And don’t be put off if the hourly fee is hundreds of dollars. Medicare covers it.

Third, and finally: As bad as that slip would be, there is the possibility of something even more embarrassing. Freud himself believed that a sufficiently strong unfulfilled wish, if locked away in some dark dungeon of the subconscious, could create intolerable internal feelings that were then projected onto an external object in the form of a paranoid delusion of the kind that motivates such modern political extremists as white supremacists and their mirror-twins, the antifas. You may find yourself on the campus of a large university, waving simplistic placards, shouting incoherent platitudes, and trading ineffectual blows with someone very much like yourself, a person who speaks Post-Modern English fluently but finds it difficult to express his opinions nonviolently. Why, he may even lack the most basic linguistic tools that are needed to engage in civil discourse.

You might intend to say, “Oh, you’re so kind. Thank you so much,” only to find yourself saying, “Why don’t you just go fuck yourself.”

The solution? Just pull yourself together, man. Snap out of it, for the love of God.

Given your age, maturity, and ability in archaic English, spotting these pitfalls early on and avoiding them should not be difficult. If, however, you find that you’re experiencing uncontrollable urges to play the pontiff, convert the heathen, or some such, and you feel the need for relief, there is a category of medications called anti-androgens that lower the testosterone levels often associated with judgmentalism. Most of the side effects are limited to the secondary sexual characteristics and are not life threatening. If this sounds right for you, you should consult your health care provider.

Should the medication prove ineffective and your symptoms persist, there is a growing body of evidence indicating that immediate and lasting relief can be achieved through gender reassignment surgery, provided that you are a male. While this has become a relatively safe and routine procedure, boasting a single-digit mortality rate, a small percentage of otherwise qualified candidates hesitate to “go under the knife.” But if you count yourself among these reluctant few, take heart. There is one more glimmer of hope: the experimental treatment protocol called “strategic relocation.” While there is insufficient data to conclusively prove the treatment’s therapeutic efficacy, the available anecdotal evidence suggests that, at the very least, more research is warranted.

Ferris T. Pranz, a postdoctoral fellow in the Department of Applied Metaphysics of Eastern Montana State University at Purdie, has been observing a band of people living with judgmentalism. These people were individually tagged and released over the past decade by the Montana Department of Behavior Management (MDBM) outside Fertin, a farming town near Lake Gombay, just south of the Missouri River. In his unpublished 2017 field notebook, Pranz records his painstaking efforts to gain the trust of this strategically relocated band at their watering hole, a smoke-filled bar called “Grumpy’s.”

There is one more glimmer of hope: the experimental treatment protocol called “strategic relocation.”

Pranz’s observations have raised some eyebrows in the judgmentalism community in Montana. Despite the Fertin band’s characteristically opinionated and aggressive communicational style and constant abuse of both alcohol and tobacco, they seem to share a gruff good humor while playing at pool, darts, and cards. Interestingly, they often refer to themselves as “blowhards,” apparently without shame or irony, and then laugh loudly. When Pranz would ask the group to explain the laughter, they would invariably laugh again, more loudly. Pranz has recommended that further research be conducted to discern the motives behind this laughter, possibly utilizing a double-blind design.

More broadly, Pranz and his colleagues at EMSUP have proposed a major longitudinal study to explore the incongruity of the seemingly upbeat ambience in “Grumpy’s” by designing instruments to quantify (1) the specific characteristics of these Fertin people and the effect that such characteristics may have on their communicational dynamics; (2) the effects of the complete absence of treatment by means of any of the experimentally proven therapies for people living with late-stage degenerative judgmentalism. These effects can then be compared with therapeutic outcomes in matched groups receiving such treatments. Pranz has also recommended that the proposed longitudinal study be completed prior to authorization of an expanded “strategic relocation” program to include areas beyond Fertin. In October of 2017, the Board of Directors of the Friends of Judgmentalism in Bozeman passed a resolution in support of Pranz’s proposal. Pranz plans to apply for a grant through the MDBM in June of 2018.

Part Five: Looking Backward

American English is the language of our past, already dated and quickly becoming archaic. As will be shown, the impression that it makes when spoken is not good. More importantly, it conveys an aggressive smugness that is out of step with today’s world. Even the founding documents of the United States, written in American English, sound absolutist, judgmental, and harsh.

By now, you must have asked yourself: “If French is the language of love, and German is the language of Nietzsche, and Post-Modern English is the language of wimps, then what the heck is American English?” Well?

American English is the language of our past, already dated and quickly becoming archaic. It conveys an aggressive smugness that is out of step with today’s world.

As a native speaker of American English, I am not qualified to answer. To find a place to sit somewhere outside of one’s own language and culture, and then to sit there and listen to one’s language being spoken in order to gather an impression of the speaker, using only the sound of the language, not its meaning, is like trying to street-park a Class A RV on the Upper East Side: while it may be possible, I’ve never seen it done. No, this question should be answered by people who don’t speak the language.

American English began in 1607, when the first British colonist stepped on the shore of the James River. How do you suppose American English sounds to British ears today? I’m told there are three main impressions. First, it is spoken more slowly than British English, giving the impression that the speaker is also a little slow. Second, it is spoken more loudly than British English, and with more emotion. As both of these characteristics are associated with children, the impression is that the speaker is somewhat immature. Third, American English is rhotic, meaning that “r” is pronounced both at the end of a word and before another consonant. As this pronunciation is normally associated with Scotland, Ireland, and remote rural areas, the impression is that the speaker is a bit rustic.

Taken together, then, to British ears American English is the language of dim-witted, childish yokels. One might call it the language of knuckleheads. That is not to say that Americans are knuckleheads. It simply means that our language makes us seem that way.

Post-Modern English, while less given to the glacial John Wayne drawl or the grating Jerry Lewis bray of American English, retains the rhotic accent, even doubling down on it with the vocal fry. Still, in two of the three categories, it constitutes an evolutionary step beyond its parent language. Even British children have begun to pick up Post-Modern English from Netflix, much to the delight and amusement of their parents.

To British ears American English is the language of dim-witted, childish yokels. One might call it the language of knuckleheads.

I was once told by a friend who spoke only the Arabic of the Nejd that French sounded like someone saying, “Loo, loo, loo, loo, loo,” and English sounded like someone saying, “Raw, raw, raw, raw, raw.” That was just one Bedouin’s opinion, of course. It seemed funnier in Arabic, somehow. “Loo, loo, loo.” We had a good laugh.

In 1776, less than 200 years after that first colonist was beached, Thomas Jefferson wrote the Declaration of Independence. What a marvelous symbolic moment in the evolution of English! He had to write it in American English, of course, because the Post-Modern branch wouldn’t emerge for two centuries. While this does not excuse him, it reduces his level of culpability. Listen:

We hold these truths to be self-evident, that all men are created equal.

Can you hear his certainty? Why, the phrase simply drips with self-confidence. To assert that a truth is self-evident is an act of rhetorical bravado that positively swaggers. (“Because I said so.”) Note the absence of fillers to dull the sharp edges. He seems to have missed the lesson that explains how “you have your truths and I have mine.” He seems to be saying that “all truths are not created equal,” pardon my French. And what is this nonsense about “men”?

So Jefferson was sure of himself, and assertive. But was he judgmental? Ask yourself: What is this Declaration of Independence, at its core? Is it a celebratory birth announcement, like a Hallmark card? (“Please welcome…”)

It seemed funnier in Arabic, somehow. “Loo, loo, loo.” We had a good laugh.

Far from it. This is Thomas Jefferson leveling a public and harsh judgment against one King George III. It spells out George’s crimes, like a rap sheet or an indictment. It is clear: Tom is judging George. Tommy is calling Georgie a dick. Listen:

A prince, whose character is thus marked by every act which may define a tyrant, is unfit to be the ruler of a free people.

This white, male, rich, privileged, powerful, slaveholding “founder” of America is writing in the scathingly self-righteous tones of archaic American English. The sound of Jefferson’s voice is clear. He is cocksure and in-your-face. He is your judge, jury, and executioner. The music of his American English is a march being played by a big brass band oompahing down Main Street on the Fourth of July, snare drums rattling like assault rifles. Courage is needed to follow the facts, no matter where they lead. It pains me to have to say it, but Thomas Jefferson was a dick.

Your final assignment is to translate the first of the two fragments above (the one with the “self-evident truths”) from American English into Post-Modern English. You have five minutes. Begin.

OK, time’s up. Answers will vary, of course, but it might be useful to compare your translation with the following:

So, some of us were sorta thinking? that a coupla of these like, ideas? or whatever? we had were, oh, I don’t know, kind of, you know, well, not so bad? I guess, right? And, uh, oh yeah, that all men, I mean, like women, too, kind of like, everybody? I mean, are pretty much, I’m gonna say, created? you know, like, equal? right. or whatever, so...

It sounds vaguely Canadian, eh?

Yes, it is time to put American English out to pasture. Post-Modern English is not just cooler; it is more in keeping with the zeitgeist. It is the language best suited to the more equitable, inclusive, and nonjudgmental world that we are building together.

It pains me to have to say it, but Thomas Jefferson was a dick.

It is time to hang up that coonskin cap.

* * *

All living languages are continuously evolving — as are we, the species that speaks those languages. Do these two forms of evolution influence each other? Of course they do. Through millennia, the evolutionary pas de deux of our species on this earth has been and continues to be shaped by, and to shape, the words and music of our languages. To the extent that there is intent in this choreography, it is ours. We are making ourselves. The changes we make to our languages have consequences beyond the merely aesthetic. They affect the choices we make that determine our destiny. We should, therefore, make changes to our languages with the same caution we exercise in rearranging the molecules of our genome. Are we good?

“. . . Fearing not that I’d become my enemy
in the instant that I preach.”
                          — Bob Dylan, My Back Pages (1964)






Cuba, Race, Revolution, and Revisionism

 | 

When Cuba’s serial and multiple African military interventions began in 1963 with Guinea-Bissau’s war of independence from Portugal, Fidel Castro selected black Cuban soldiers and conscripts to man his liberation regiments. Dead black bodies in Africa were less likely to be identified as Cuban, according to Norberto Fuentes, Castro’s resident writer and — at the time — official biographer, confidant, and a participant in the later Angolan wars.

Cuba’s African — and Latin American — adventures were made possible by agreements reached among the USSR, Cuba, and the United States to end the Cuban Missile Crisis of 1962. One of those protocols was a promise from the US that it would respect Cuban sovereignty and refrain from invading the island. To Castro, this was a green light to build Cuba’s armed forces for the liberation of the world’s downtrodden instead of having to concentrate his resources for the defense of the island.

Ochoa was the only subordinate who could speak uninhibitedly with, and even kid or tease, the humorless, haughty, and overbearing Fidel Castro.

However, when it came to deploying his black brigades, Castro found himself short of black commanders. Enter Arnaldo (“Negro”) T. Ochoa Sánchez.

Ochoa had been part of Castro’s 26th of July Movement ever since its creation, and by March 1957 he had joined Castro’s guerrilla army in the Sierra Maestra, fighting against the Batista dictatorship. It was then that Ochoa and Raúl Castro forged a close friendship, one that also led to a certain intimacy with Raúl’s brother, Fidel. According to Fuentes, in his book Dulces Guerreros Cubanos, Ochoa was the only subordinate he knew who could speak uninhibitedly with, and even kid or tease, Fidel Castro — a humorless, haughty, and overbearing caudillo.

Ochoa, of humble Oriente peasant origins, had distinguished himself in the Revolution and during the Bay of Pigs fiasco, subsequently attending the Matanzas War College and the Frunze Military Academy in the Soviet Union and rising to the Cuban Communist Party’s Central Committee. But he really distinguished himself in the Ethiopia-Somalia conflict. Cuba aided Ethiopia in this USSR vs. China proxy war, since both Ethiopia and Somalia boasted Marxist regimes. Ochoa brilliantly defeated the Somalis in the tank battle of the Ogaden. For that he was dubbed “the Cuban Rommel.”

The problem was that Ochoa wasn’t really “black,” a racial classification that could apply to almost anyone in Cuba, especially if one uses the rule of thumb once common in the United States: that anyone with any black ancestry, no matter how distant or dilute, is black. (This author’s DNA test reveals a 1–3% West African ancestry, a detail not noticeable in his phenotype.) Ochoa was very swarthy, in a Mediterranean sort of way; yet his phenotype failed to show any classic “Negroid” features. It was Raúl Castro who nicknamed him Negro (black) by bestowing on him a promotion to “Black” General. The Armed Forces Minister wanted a black commander for the black troops he sent to Africa because he lacked a qualified, real black general who would realize both his political and his military objectives.

Ochoa brilliantly defeated the Somalis in the tank battle of the Ogaden. For that he was dubbed “the Cuban Rommel.”

Now, Cuba’s armed forces actually did include black commanders, among them General Víctor Schueg Colás (see below) and Juan Almeida Bosque. Almeida was a veteran of the assault on the Moncada Army barracks that launched the 26th of July Movement. Along with the Castros, Almeida was caught, imprisoned, amnestied, and exiled to Mexico after that defeat. He was on the Granma yacht as it landed survivors in Cuba, and he fought against Batista in the Sierra Maestra mountains. Later he was promoted to head of the Santiago Column of the Revolutionary Army. Wikipedia, without any sense of irony, says that “he served as a symbol for Afro-Cubans of the rebellion’s break with Cuba’s discriminatory past.” In his book Como Llegó la Noche, Huber Matos, third in command of the Revolutionary armies after Fidel and Raúl — though later to be purged — describes Almeida as unsuited for military command, a “yes” man. He says that Fidel kept him purely for his loyalty and as a symbol of the Revolution’s inclusiveness of Afro-Cubans. Almeida was the only black commander during the Revolution. He was Fidel Castro’s token black.

Ochoa took the nickname Negro in stride and probably even affectionately, fully understanding the political rationale behind the dubbing. In this author’s opinion, his attitude towards race (and by extension, Fuentes’ attitude) is pretty representative of one general streak of Cuban racial attitudes. Here is my translation of Norberto Fuentes’ description of Ochoa’s reaction to the moniker:

Ochoa, besides being mestizo, was very obstinate. When anyone alluded to Raúl’s reason for the nickname — that the Minister didn’t have any competent, real black generals — Ochoa would begin to vigorously shake his head. And he would continue this stubbornness even when reminded of General Víctor Schueg Colás — el Negro Chué — as he was generally known: a black Cuban general.

Ochoa responded that “el Negro Chué was not a negro who was a general.”

“And what kind of BS is that, Arnaldo?” asked a member of the group.

“He is a general who is black, and that’s not the same thing as a black who is a general.”

For a second I [Fuentes] thought Ochoa was about to write a second volume to Alex Haley’s Roots. My mind reviewed the list of black Cuban generals.

“And what about Kindelán? And Silvano Colás? And Moracén? And Calixto García? And Francis?” I challenged him.

“None of those are either generals or black,” he declared.

“But then what the fuck are they, Arnaldo?”

“Fictions, my friend. Nothing more than nonsense,” he blithely answered.

If you, dear reader, can’t make sense of that, don’t worry. It’s Ochoa’s way of saying that race doesn’t matter, that race is irrelevant, that concerns about race are nonsense. One Cuban-American academic, quoted in Guarione Diaz’ The Cuban American Experience: Issues, Perceptions and Realities, averring that humor is an essential trait of the Cuban personality, describes the archetypal Cuban as “one who jokes about serious matters while taking jokes seriously.” In that vein, there is a deeper intent in Ochoa’s flippancy that Fuentes, in a stream-of-consciousness rant, then goes on to elaborate.

The Castros were recapitulating the trans-Atlantic slave trade in reverse: shackled by the ideological chains of a monomaniacal dictator and sent back to Africa.

His idea is that Ochoa, in his own irreverent way, was seeking redemption for the tragedy of Cuba’s “stoical, forced, brave, sweet and immense blacks” who had to carry — since 1965 — the full brunt of the Revolutionary Armed Forces’ guerrilla campaigns in Africa, because the Castros believed that dead black bodies in Africa couldn’t really be traced back to Cuba. They didn’t contemplate any POWs.

In Fuentes’ view, the Castros were recapitulating the trans-Atlantic slave trade in reverse: two centuries ago, in physical chains across the Atlantic to the Americas; in the late 20th century, shackled by the ideological chains of a monomaniacal dictator and sent back to Africa.

To Ochoa, race was a trivial issue; to the Castros it was an essential component of their revolutionary tool kit in their struggle for universal social justice. When, according to Diaz, Cubans began leaving the island in droves to escape the repressive regime, “the revolutionary government denied exit visas to Blacks more than to Whites to show the international community that Cuban Blacks supported the revolution and did not flee Cuba.”

Castro himself, coming down to Girón, interrogated the black prisoners — just before their sham execution — accusing them of treason both to their country and to their race.

The Castros’ revisionist racial attitude reared its ugly head again during the Bay of Pigs fiasco when the invading members of Brigade 2506 surrendered or were captured. Black prisoners were singled out for extra abuse. They were perceived as traitors since, in the Castro calculus, the Revolution had been fought — in part — for them. Haynes Johnson, in his book, The Bay of Pigs: The Leaders’ Story, adds that “of all prisoners, Negroes received the worst treatment.” They didn’t fit Castro’s Revolutionary narrative, and their presence on the invasion force infuriated him. He himself, coming down to Girón, interrogated them — just before their sham execution — accusing them of treason both to their country and to their race. Osmany Cienfuegos, a Minister in Castro’s government and brother of Revolutionary Commander Camilo Cienfuegos, second in popularity only to Fidel, lined them up against a wall and told them: “We’re going to shoot you now, niggers, then we’re going to make soap out of you.”

One notable exchange during the prisoners’ trial was with Tomás Cruz, a paratrooper of the 1st Battalion. “You, negro, what are you doing here?” Castro asked, reminding Cruz that the Revolution had been fought for people like him, and of the swimming restrictions at some tourist resort hotels before the Revolution (a pathetic concession to attract American tourists).

Cruz, with all the dignity he could muster, responded, “I don’t have any complex about my color or my race. I have always been among the white people, and I have always been as a brother to them. And I did not come here to go swimming.”

Black is White and White is Black

Broadly speaking, in Cuba, race — in this context meaning skin color — is a relatively unimportant issue, on par with other physical traits such as weight, height, pulchritude, hair color, and even disposition. Unlike in the US, where large proportions of black people distinguish themselves from the broader population with distinctive clothing, hair styles, music, linguistic flourishes, political attitudes, and other traits, all kinds of Cubans share cultural values, patois, styles of dress, music, etc. Even religious affiliation, which in the United States often makes a visible difference between the races, tends toward a high degree of syncretism, with ancestral roots and beliefs to the fore instead of any racial overtones — a theme that the Castro regime has falsely exploited by preferential treatment of Santeria over other religions, treating it as compensation to a previously “oppressed” race (in Castro’s revisionist ideology). American hypersensitivity to race is unknown in Cuba.

In Cuba, slaves could marry, own personal property, testify in court, and run businesses.

But how did race virtually disappear as a contentious issue in Cuba, while persisting until modern times in the United States — especially considering that the former eliminated slavery 21 years after the latter?

In spite of the awful conditions of the sugarcane fields, slavery under Spanish colonial rule was nothing like what it had become in the United States by the eve of the Civil War. According to historian Jaime Suchlicki in Cuba: From Columbus to Castro and Beyond, “Spanish law, the Catholic religion, the economic condition of the island, and the Spanish attitude toward the blacks all contributed to aid the blacks’ integration into Cuban society.” After all, the Spanish had lived for centuries under the comparatively tolerant rule of Moors.

In the American south, negritude — to any degree, i.e., the notorious “one drop rule” enacted in several states — equated skin color with a deprivation of rights. In Cuba, slaves could marry, own personal property, testify in court, and run businesses. One 18th-century observer noted that many had become skilled craftsmen, “not only in the lowest [trades] such as shoemakers, tailors, masons and carpenters, but also in those which require more ability and genius, such as silversmith’s craft, sculpture, painting and carving.”

Joining the US became a nonstarter during the US Civil War when Cubans realized how badly Negroes were treated in the South.

Additionally, Spain’s liberal manumission policy “resulted in almost 40% of African-Cubans being free in 1792,” reports Andro Linklater in his book on the evolution of private property, Owning the Earth. The diverging legal and social attitudes toward race in Cuba and in the US presaged future developments in each country. The paradoxical contrasts are striking. Whereas the US, especially after the collapse of Reconstruction, institutionalized policies that had grown more nakedly racist since Independence — equating skin color with the presence or absence of rights and talents — the opposite was true in Cuba. Under the influence of the Catholic Church, the fundamental humanity of Africans was uncontroversially established early on; slavery and skin color were philosophically separated. In the time of Cuba’s Wars of Independence, Antonio Maceo, an Afro-Cuban, became second-in-command of the rebel armies.

At about the time of these wars, a notable segment of Cuban intellectuals favored the Texas model: declare independence from the colonial power and petition the US Congress for admission to the Union. The idea was so popular that the proposed Cuban flag was modeled on the Texas flag: a single star on the left, stripes on the right, and the whole rendered in red, white, and blue. However, joining the US became a nonstarter during the US Civil War when Cubans realized how badly Negroes were treated in the South. It wasn’t just the exploitation of slaves (which also happened in Cuba), but rather the contempt for dark skin color that denied a person’s humanity.

Cuba has always had an amorphous racial climate, one mostly misunderstood or puzzling to Americans. Racism, in the sense of hating or fearing a person for his skin color, is unknown. Skin color was never an impediment to respect. But skin tone snobbery (rarely surpassing trivial tut-tutting or even semi-serious priggishness) was not uncommon. Color gradations, like degrees of body mass index ranging from the skeletal to the morbidly obese, extended into categories of people Americans would consider “white,” with the too-pale also looked at askance, as if they were anemic and rickety.

Fulgencio Batista, while president, was denied membership in the Havana Yacht Club: he was considered too swarthy, although his son, Jorge Luis, was admitted. That he didn’t take the rejection personally and, as a dictator, did not take reprisals, is inconceivable to an American. Instead, the president donated a marina to the Havana Biltmore Yacht & Country Club, a venue just as swanky if not more so, and, voilà! he and his family became members of that club.

Racism, in the sense of hating or fearing a person for his skin color, is unknown in Cuba. Skin color was never an impediment to respect.

This nonchalant — politically-correct Americans might say insensitive — attitude is related to Cubans’ tendency to nickname everyone, even strangers. A person with epicanthic folds will be called Chino, a very black man Negro, a fat person Gordo (my own nickname after immigration), a starkly white-skinned person Bolita de Nieve (Snowball), a skinny woman Flaca, a large-nosed man Ñato, a full-lipped person Bembo (hence, Negro Bembón for a full-lipped black man), a pug-nosed man Chato . . . You get the picture.

But the irreverence also gets manifested post-ironically, in the same vein as Ochoa’s nonchalant whimsy: a very black man might be nicknamed Blanco or Bolita de Nieve, a fat woman Flaca (skinny), and so on.

My favorite example of this is Luis Posada Carriles’ nickname. Posada Carriles, a Cuban exile militant, is considered a terrorist by the FBI. He is generally thought to be responsible for the bombing of Cubana flight 455 in 1976, which killed 73, including 24 members of Cuba’s National Fencing Team. In addition, Posada Carriles is said to have been involved in the planning of six bombings at Havana hotels and restaurants during 1997. His rap sheet is much too long to repeat here. Posada Carriles’ nickname? Bambi.

But I digress. Overtones of Americans’ racial (a term I hesitate to use, as you’ll see below) attitudes are making inroads into the Cuban-American experience. One white Cuban-American informant admitted to being fearful of and avoiding groups of black men after dark in the US, a behavior that had never crossed his mind back in Cuba. Would one call his reaction in the US “racism”? I wouldn’t. I’d call it adaptability based on experience, a phenomenon that black economist Thomas Sowell has explicitly addressed in his writings.

The Color of Culture

Americans, both black and white, are quick to cry racism in any untoward exchange between people of different hues when someone is being a boor or a snob or experiencing a misunderstanding or, more often than not, when mild ethnocentricity is at work. Ethnocentricity . . . a big word that simply means the tendency of most people to exercise a preference for congregating with like-minded, like-speaking, like-dressing and like-looking people — people they can easily “relate to.” Expressed hierarchically, people’s instinctive loyalty is first to their family, then to their clan (extended family), town, state, religion, in-group, political party, culture, nation, etc. One can see this in the popular slogans “buy local” and “buy American.”

Imagine you’re a small business owner looking for a sales rep. You interview two applicants, one black and one white. The white applicant is sloppily dressed, needs a shower, doesn’t speak clearly, and seems distracted. The black applicant, on the other hand, is fully engaged, is dressed smartly, and seems keen to join your operation. It’s a no-brainer — the black applicant has more in common with you; skin color is not a factor.

We all share a tendency to look at other cultures solipsistically: we see through the lens of our own values, evaluating people according to preconceptions originating in our own standards and customs.

Now imagine the opposite scenario: the black applicant displays plumber’s crack, reeks, and is unintelligible, while the white applicant wears a coat and tie, speaks in your local accent, and displays overwhelming enthusiasm. Again, a no-brainer, with skin color again not a factor; instead, it is shared values that determine your choice.

Ethnocentrism does, however, have its extremes, the ones you’ll most often come across in a dictionary, without the nuances of an Anthropology 101 course. The first — and one that we all share to some degree — is a tendency to look at other people and cultures solipsistically: we see through the lens of our own culture and values, evaluating other cultures according to preconceptions originating in the standards and customs of our own milieu. More extreme is the belief in the inherent superiority of one's own ethnic group or culture — an attitude that, taken to an absurd limit, can breed intolerance, chauvinism, and violence.

The Origin of Races

What is race? One doesn’t need to understand race in order to be a racist or accuse someone of racism. Contrary to popular opinion, skin color is not a determining factor of race. H. Bentley Glass and Ching Chun Li were able to calculate from blood group data that North American Negroes have about 31% white ancestry (cited in Stanley M. Garn, ed., Readings on Race [Charles C. Thomas, 1968]). For practical or political reasons, biologists and physical anthropologists are divided as to the validity of the concept.
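For readers who want to see how such an estimate works, here is a minimal sketch of the classical single-marker admixture formula on which calculations of this kind rest. The allele frequencies in it are illustrative assumptions, not the blood-group figures Glass and Li actually used.

```python
# A minimal sketch of the classical (Bernstein-style) admixture estimate.
# The allele frequencies below are illustrative assumptions only.

def admixture_fraction(p_admixed, p_source_a, p_source_b):
    """Fraction of ancestry from source B in an admixed population,
    estimated from the frequency of one marker allele in each group:
        m = (p_admixed - p_source_a) / (p_source_b - p_source_a)
    """
    return (p_admixed - p_source_a) / (p_source_b - p_source_a)

p_west_african = 0.03   # assumed allele frequency in the African source population
p_european     = 0.43   # assumed allele frequency in the European source population
p_admixed      = 0.15   # assumed allele frequency in the admixed population

m = admixture_fraction(p_admixed, p_west_african, p_european)
print(f"Estimated European ancestry: {m:.0%}")   # prints 30%
```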

First, the more practical biologists. In biology, race is equivalent to variety, breed, or sub-species. In a nutshell, it is incipient speciation. According to the Oxford English Dictionary, race is “a group of living things connected by common descent or origin” — a definition as uncontroversial, and as far from the whole picture, as one can dream up. But to understand race one first has to understand species.

Contrary to popular opinion, skin color is not a determining factor of race.

A species is a group of living organisms consisting of similar individuals capable of exchanging genes or interbreeding. The species is the principal natural taxonomic unit, just below genus — yet even this is by no means a simple or clear-cut concept. Think of horses, donkeys, mules, hinnies, zebras and zorses (a horse-zebra hybrid); or dogs, wolves and coyotes. These animals can interbreed, with various rates of fertility success, but do not normally interbreed in the wild. To account for this, the classic definition of species was amended by the addition of a qualifier, that the group of organisms in question must not only be able to interbreed but must also do so regularly and not under extraordinary or artificial circumstances.

To further complicate things (or was it to simplify?), Ernst Mayr, one of the 20th century’s leading evolutionary biologists and taxonomists, formulated the theory of ring species (aka formenkreis) in 1942 to explain a natural anomaly in the distribution of closely related populations. According to Wikipedia, “a ring species is a connected series of neighboring populations, each of which can interbreed with closely sited related populations, but for which there exist at least two ‘end’ populations in the series, which are too distantly related to interbreed, though there is a potential gene flow between each ‘linked’ population.”

The term ‘ring species’ is a vestigial remnant of some of the first ring species identified, but the populations need not be in a ring shape. Examples include the circumpolar Larus herring gull complex, Ensatina salamanders, the house mouse, trumpet fish, Drosophila flies, deer mice, and many other birds, slugs, and butterflies. Most natural populations are bedeviled by such complexities, including our closest relative, Pan troglodytes, among whom the East African subspecies schweinfurthii is separated by the Congo River and half a continent from the West African variant verus.

Gould believed that the concept of "race" had been used to persecute certain human groups to such an extent that it should be eliminated.

So that brings us back to race, or incipient speciation. Charles Darwin, in Origin of Species, identified the speciation process as occurring when a subpopulation of organisms gets separated from the larger group, fails to interbreed with them, and interbreeds strictly with itself. This process magnifies the smaller group’s own genetic peculiarities while reducing — again, within the smaller group — the greater genetic diversity of the parent population. The eventual result may be that the smaller group becomes distinct enough to form a new species. This part of the process is labeled “genetic drift.”

Two other factors usually contribute to speciation: genetic mutation and adaptation (through natural selection) to a new environment or way of life. Here “adaptation” does not carry the sense of individuals “getting accustomed to” a new situation but rather the sense of individuals carrying genes that are detrimental in that situation dying before they procreate — in time deleting those genes from the smaller group. This is called “natural selection.” The term “race” properly applies to the interval after a subgroup separates from the main population and before it becomes a new species.
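A toy simulation can make the drift half of this process concrete. The sketch below is not drawn from any of the authors cited here; the population sizes and the marker frequency are arbitrary assumptions, chosen only to show how chance alone moves gene frequencies faster in a small, isolated group than in a large one.

```python
# Toy illustration of genetic drift: in a small, isolated subgroup, the
# frequency of a neutral allele wanders by chance alone, and can be lost or
# fixed far faster than in the large parent population. All numbers here
# are arbitrary assumptions for illustration.
import random

def drift(pop_size, start_freq, generations, seed=1):
    random.seed(seed)
    freq = start_freq
    for _ in range(generations):
        # Each generation's gene pool is a random sample of the previous one.
        copies = sum(random.random() < freq for _ in range(2 * pop_size))
        freq = copies / (2 * pop_size)
    return freq

print("small isolated group:   ", drift(pop_size=50, start_freq=0.5, generations=200))
print("large parent population:", drift(pop_size=5000, start_freq=0.5, generations=200))
```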

But Darwin understood the limitations:

Certainly no clear line of demarcation has as yet been drawn between species and sub-species — that is, the forms which in the opinion of some naturalists come very near to, but do not quite arrive at the rank of species; or, again, between sub-species and well-marked varieties, or between lesser varieties and individual differences. These differences blend into each other in an insensible series; and a series impresses the mind with the idea of an actual passage.

Of course, a race may never become a new species; it may well, for any number of reasons, reintegrate back into the main population — which brings us back to human races and the more political anthropological concepts.

Some experts, the late Marxist paleontologist Stephen Jay Gould to the fore, believed that race, as applied to humans, was unhelpful, even invalid. He believed that the concept had been used to persecute certain human groups to such an extent that it should be eliminated. And forget “variety” (humans aren’t flowers) and “breed” (they aren’t dogs) and “subspecies” (the Nazis’ use of unter ruined that prefix).

On the other side stand the physical anthropologists (Stanley Garn, Paul T. Baker, Bentley Glass, Joseph S. Weiner, et al.), led by the late Carleton S. Coon, who pioneered the scientific study of human races under the Darwinian paradigm of adaptive and evolutionary processes.

Coon divided Homo sapiens into five races with origins in some distant past, distant enough that genetic and phenotypical differences appeared: the Caucasoid, Congoid, Capoid, Mongoloid and Australoid races. These had diverged not only because of genetic drift, but also as adaptations to their local conditions. The oldest races were the darkest: African Blacks, Australoids and Papuans; while whites, Asians, Pacific Islanders, and American Indians diverged later. Skin color varied according to sun exposure. For example, northern European climates favored fair skin to improve Vitamin D synthesis, while dark skin was a shield from Vitamin D overdose. However, in extremely hot and sunny climes such as the Sahel, too black a skin would tend to heat the body too much, favoring a merely swarthy tone. Along the lands of the upper Nile, tall, lanky bodies helped radiate accumulated heat.

When sickle-cell anemia was discovered in white populations, it clinched the notion that racial adaptations were responses to local environments and independent of adaptations such as skin color.

On the other hand, the Inuit were physically well adapted to extreme cold: compact bodies to conserve heat; little facial hair, to prevent breath condensation that might freeze on the face; lightly protruding noses, to protect them from freezing; epicanthic eye folds, to reduce the area of the eyes exposed to the elements; and yellow or yellow-brown skin. The yellow skin likely evolved as an adaptation to cold temperatures in northern Asia: the color resulted from a thick layer of subcutaneous fat, visible through translucent outer layers of skin.

A more recent adaptation was lactose tolerance, which apparently evolved in whites, permitting adult consumption of milk following the domestication of cattle about 6,000 B.C. But one of the most curious adaptations was sickle cell anemia, a debilitating genetic disease that nonetheless provided partial immunity to malaria to the carrier of one allele. First discovered in black African populations, it was first considered a Negroid feature. However, when it was discovered in white circum-Mediterranean populations, it clinched the notion that racial adaptations were responses to local environments and independent of other adaptations such as skin color — a curious vestigial association from more unenlightened times.
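For the arithmetically inclined, the persistence of so harmful an allele is the textbook case of balancing selection. The sketch below uses made-up fitness penalties, not measured ones, purely to show why an allele that is devastating in double dose can still settle at a stable frequency wherever malaria is endemic.

```python
# Textbook balancing-selection arithmetic behind the sickle cell example.
# Carriers of one copy (AS) resist malaria, so the otherwise harmful S
# allele settles at a stable frequency. The fitness penalties below are
# illustrative assumptions, not measured values.
s = 0.15   # assumed fitness cost of AA genotypes (vulnerable to malaria)
t = 0.80   # assumed fitness cost of SS genotypes (sickle cell disease)

# With relative fitnesses  AA: 1-s,  AS: 1,  SS: 1-t,
# the equilibrium frequency of the S allele is s / (s + t).
q_equilibrium = s / (s + t)
print(f"Equilibrium frequency of the sickle allele: {q_equilibrium:.2f}")   # about 0.16
```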

Coon’s classifications — mostly unbeknownst to him because the later fine points post-dated him — were already a mélange built on a vast diversity of prehistoric Homo: neanderthalensis, sapiens, denisovans, floresiensis, erectus, habilis, etc. Some scholars define these as separate species, others as separate races. I would argue that it is impossible to define an extinct species within a genus from bone remains alone. (Conversely, albeit ironically, modern skeletal remains often yield their race.) DNA researcher Svante Pääbo, one of the founders of paleogenetics and a Neanderthal gene expert, has described the ongoing “taxonomic wars” over whether Neanderthals were a separate species or subspecies as the type of debate that cannot be resolved, “since there is no definition of species perfectly describing the case.”

Human evolution, ignoring all the tedious debates, continues to surprise us.

Luckily, some Neanderthal DNA has been sequenced and it was discovered that Sapiens includes some of those brutes’ genetic material — about 2% — in northern European populations. In our history, studies suggest there may have been three episodes of interbreeding. The first would have occurred soon after modern humans left Africa. The second would have occurred after the ancestral Melanesians had branched off — these people seem to have thereafter bred with Denisovans, whose genetic material makes up several percent of the genome of modern Melanesians. The third would have involved Neanderthals and the ancestors of East Asians only, whose share of Neanderthal genetic material is roughly 20% higher than that of Europeans.

One difficulty with Coon was his overly distinct racial categories. To some degree he realized this, even while recognizing many subraces, racial mixtures, and incipient formenkreis (before the phenomenon had a name). The problem was that these incipient races kept interbreeding at their verges (and even farther afield; consider Vikings, Mongols, and Polynesians), with racial mixture accelerating after 1500, when globalization set human populations interbreeding willy-nilly.

And that, dear reader, is why Gould and others eschew human racial classifications.

Meanwhile, human evolution, ignoring all the tedious debates, continues to surprise us. The April 21 issue of The Economist reports the discovery of a new human racial variant in the Malay Archipelago. The Bajau people spend almost all of their lives at sea. “They survive on a diet composed almost entirely of seafood. And . . . spend 60% of their working day underwater . . . They sometimes descend more than 70 meters (240 feet) and can stay submerged for up to five minutes . . . They have lived like this for at least 1,000 years.” The evidence suggests strongly that these astonishing abilities are genetic, the result of mutations and natural selection.

The Bajau spleen, an organ that acts as an emergency reserve of oxygenated red blood cells, is 50% larger than those of neighboring populations — “a difference unconnected with whether an individual was a prolific diver or one who spent most of his time working above the waves on a boat. This suggests that it is the Bajau lineage rather than the actual activity of diving, which is responsible for a larger spleen,” continues The Economist.

There is nothing in any of this to suggest that race should be used for political purposes by governments and demagogues — Hitler, Castro, and others.

DNA analysis tells a similar story: one Bajau genetic mutation directs blood flow preferentially to oxygen-starved vital organs; another slows the build-up of carbon dioxide in the bloodstream; and a third controls muscle contractions around the spleen.

What to make of all this? Human racial differences, both behavioral and phenotypic, exist and are worth studying: for medicine, forensic science, DNA studies and just for basic scientific knowledge. Genes are not destiny; they encode broad parameters for modification, in the uterine environment, through nurturing, and now through technology (for better or worse). There is nothing in any of this to suggest that race should be used for political purposes by governments and demagogues — Hitler, Castro, and others.

Will Americans in general ever achieve Arnaldo Ochoa’s insouciance about race? We can only hope. After a Civil War, the Emancipation Proclamation, Reconstruction, the Ku Klux Klan, Jim Crow, segregation, and Civil Rights, we’re now experiencing a heightened sensitivity in the finer details of race relations — probably a good indication of the tremendous progress that has been made in the fundamentals.






Clueless in Seattle

 | 

In Seattle, where I have spent most of my life, I often walk around a lake near where I live — Green Lake, which is bordered by a strip of public park. It is the most popular park in the city, hosting walkers, runners, skaters, and bicyclists on the paved path around the water. In this urban idyll a coven of campers lives year-round in RVs and tents or, in good weather, sleeps in the open on the ground. If I drive to the University of Washington, I go past another encampment near the freeway exit. Under an overpass by the University Bridge is a rag-and-cardboard hovel surrounded by stolen Safeway carts and piles of garbage.

We didn’t have this when I was growing up here in the 1960s, or many years afterward. Then you could see alcoholics in downtown Seattle, where they sat on park benches and drank. We called them bums. They were male, and mostly white or Native American. They lived in missions and flophouses. They didn’t pitch tents under overpasses and in city parks, because the city didn’t allow it. Legally it still doesn’t, but legality alone doesn’t matter. Seattle does allow it, which is why people do it.

Seattle’s unemployment rate is close to 3%, which is as low as it ever gets. There is plenty of work.

The mix looks different now. I see modern nylon tents, some of them with bicycles parked next to them, or discarded office chairs. Many of the homeless have electronic devices to play music.

The situation is not at all like the famous picture from the 1930s of the “Hooverville” of squatter shacks with the pointed top of the Smith Tower in the background. That was a time of social emergency, of 20 or 30% unemployment. Today the city is booming. Seattle’s unemployment rate is close to 3%, which is as low as it ever gets. There is plenty of work. As I write, within one block of my house are two “Help Wanted” signs on restaurants. Within five blocks are several building sites where work has been repeatedly stopped, probably because of the shortage of labor.

Nor is the problem that Seattle is nasty to the homeless. Quite the contrary. I refer to The Hungry American, a book self-published in 2004 by Tom McDevitt, an Idaho doctor who went slumming as his retirement project. Of all the cities in which he practiced being a bum — Pocatello, Salt Lake, Phoenix, San Francisco, New York, and Seattle — my city was the most generous. But in none of those cities, he said, did the homeless starve. The hunger he saw in the Sad Sacks around him “was not of the belly kind, but the gnawing hunger for tobacco, alcohol, drugs and relief for a tortured mind,” he wrote. “In America people are homeless because either consciously or subconsciously they want to be homeless.”

To Seattle progressives this is a cold, insensitive, reactionary, and racist point of view. Their view was correctly expressed in the Seattle Times last November by Adrienne Quinn, who earned $188,662 in 2017 as director of the King County (Seattle) Department of Community and Human Services.

When I see people living in tents on the shores of Green Lake, living out of rat’s-nest cars and RVs within a mile of my house, am I really to believe that it is not their fault?

“Homelessness is a symptom of failures in the child-welfare system, racism, wage inequity, the failure to adequately fund mental-health and addiction services, and skyrocketing housing costs,” she wrote. “Not being able to find an apartment for less than $2,000 a month, or being put on waitlists for housing or treatment, or living in foster homes as a child are not individual failings; they are societal failings.”

I have a relative who is adopting two boys from foster care. Before being in foster care they were living on the street and eating out of dumpsters. Their plight was terrible. But it was the fault of individuals, not “society.” The individuals at fault were their parents, who were heroin addicts.

When I see people — white men, mostly — living in tents on the shores of Green Lake, living out of rat’s-nest cars and RVs within a mile of my house, am I really to believe that it is not their fault? The other day I asked one of the Green Lake maintenance men about the camper that has been parked all winter in lower Woodland Park, a few hundred feet from the sign that says, “No Camping.”

“We can’t do anything about that,” he said. “They send social workers to talk to those guys.” The social workers’ job is to convince the homeless to use social services.

That’s Seattle.

The Seattle Times has a special team funded by outside donors — Starbucks, the Seattle Mariners, the Bill and Melinda Gates Foundation, and others — that writes about nothing but the homeless. Recently the Times had a story about the death on April 5 of Sabrina Tate, 27, who had been living out of a camper in a city-sanctioned homeless parking lot the politicians called a “safe zone.”

Economically, Seattle is a stunningly successful city.

Sabrina Tate was from Spokane, a city less transformed by entrepreneurial capitalism than Seattle. She got into the drug lifestyle as a teenager, after her parents divorced, and she eventually moved to the Pacific Northwest’s big city. She became a heroin addict and remained one for some years. In February she had gone back to Spokane and seen her mother, who was alarmed that Sabrina’s legs were swollen and infected as a side effect of drug use. Her mother offered to take her to the hospital to treat her legs and kick the heroin, but Sabrina insisted on returning to the “safe zone” and her camper, where shortly thereafter she was found dead on the floor.

Her parents got back together long enough to come to Seattle to see where their daughter had lived and died. They had never seen it. Inside Sabrina’s RV, wrote Times reporter Vianna Davila:

The place was trashed. Flies buzzed around rotted food. There was hardly any room on the floor, though investigators told them that’s where her body was found. Much of the floor was covered with wet clothes, possibly the result of a leak in the roof. This looked nothing like the picture she had painted for them.

Her parents may never know if this was how Sabrina lived. They were told by police that the RV was quickly ransacked after her death.

The Times reporter recorded the reaction of Sabrina’s father, Tommi Tate. “I’m furious,” he said. Furious with his addict daughter? Furious with himself? Of course not. He was furious with the government.

“This kind of stuff shouldn’t happen and it doesn’t need to happen, and it’s only going to stop if people quit looking the other way and if our governments really, truly care,” he said. “Shame on Seattle.”

Shame on Seattle?

Reading that, I wanted to say, “Hey! You’re her father. Where were you? It wasn’t the government’s job to care about your daughter; it was yours. And at age 27, it was hers, and had been for some time. She made years and years of bad choices to get where she was. It had nothing to do with whether public employees ‘really, truly cared.’”

The median price of a single-family house in Seattle has jumped to $800,000. The median rent on a one-bedroom apartment is pushing $2,000.

This story spoke strongly to me, because I have a son the same age as the dead girl. The difference is, he is healthy and has a career and a home. Why is that? Is it because the city employees here really, truly cared for him?

Enough stupid questions.

Economically, Seattle is a stunningly successful city. Recalling the city I knew as a kid back in the early 1960s, I remember the brick buildings downtown, most of them, like the Smith Tower, built in a burst of investment in the second and third decades of the century. That old downtown has been buried in a forest of glass-and-steel skyscrapers, the latest of which are being built for Amazon. For most of my life, the city population was stuck between 525,000 and 550,000. Suddenly it’s at 700,000. Including Seattle, King County’s population is now 2.1 million.

Among the state’s 39 counties, King County, the largest in population, has the highest average per-capita personal income. Seattle’s figure is $40,868, more than 40% above the U.S. average. King County is home to Boeing’s commercial airplane division, Microsoft, Amazon, Starbucks, Costco and Nordstrom. It is the home of Jeff Bezos, Paul Allen, and Bill Gates.

To the disappointment of the Democrats who run state government in Olympia, Washington does not have a state income tax.

The median price of a single-family house in Seattle has jumped to $800,000. The median rent on a one-bedroom apartment is pushing $2,000. Part of this is because of Seattle’s restrictive zoning code and King County’s growth-management policy, a cause the Left mostly ignores, but the commercial growth is the most important reason.

Politically, King County is the most leftwing county in the state. Here’s the picture from 2016, in which Hillary Clinton easily carried the state of Washington. Statewide, Donald Trump took 37% of the vote. In King County, he got 21%. In Seattle, he got 9%. Since the 1980s, Seattle has been a one-party town. To be identified as a Republican in this city is instant political death.

But we do have a communist on the city council.

Am I “red-baiting?” I suppose so. Councilwoman Kshama Sawant calls herself a socialist and says she’s for democracy. But she has identified herself as a member of Socialist Alternative, which the Internet tells me is a Trotskyist organization — meaning Leon Trotsky, former chief commissar of the Red Army. Sawant’s campaign manager told me she was a Marxist, and in listening to her when she first ran for office, I judged that he was correct. She came out for nationalizing Boeing, for example. If all that doesn’t justify the c-word, then I withdraw it. Sawant is pretty far left, though. She voted against Seattle’s famous $15-an-hour minimum wage law because it wasn’t strong enough.

The rest of Seattle’s city council is all deeply Progressive. And given the view around here of the “root causes” of people sleeping in the park, there should be no surprise at the solution the council has reached.

Raise taxes on business.

Seattle is not a low-tax city. We have a property tax that hits most homeowners between $5,000 and $10,000 a year, and a retail sales tax at the nose-bleeding level of 10.1%. (Buy a car here, and feel the pain.) We have a tax on soda pop and a tax on disposable grocery bags. But to the disappointment of the Democrats who run state government in Olympia, Washington does not have a state income tax. The people voted for one in 1932, but the Washington Supreme Court threw it out, and statewide voters have since rejected it four times. Seattle has tried to impose a city income tax, and has been blocked in the courts.

This is a tax on employment to fund non-employment.

Now to the matter at hand, the head tax. This is how Seattle’s ruling class — its political ruling class — proposes to raise the $75 million it wants for the homeless: a 26-cent-an-hour tax on payrolls of companies with at least $20 million in annual gross sales.

Work out the math. Twenty-six cents an hour is more than $500 per employee per year. This is a tax on employment to fund non-employment. And the only employers obliged to pay it would be the for-profit companies. As I read it, the Bill and Melinda Gates Foundation would not have to pay, nor would Seattle’s big multimillion-dollar medical groups — Swedish, Virginia Mason, and Kaiser Permanente Washington — nor would Recreational Equipment Inc., a membership cooperative. Neither Boeing nor Microsoft nor (except one store) Costco Wholesale would have to pay, because they are not actually based in the city. But Nordstrom and Starbucks would have to pay, as would Amazon, which put itself right where Seattle progressives wanted, near the light-rail line at the north end of downtown. Amazon would be nailed for some $20 million a year.
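For those who want to check it, here is the arithmetic spelled out. The 2,080-hour work year is the standard full-time assumption, and the Amazon headcount is my own round number, not a figure from the city or the company.

```python
# Rough check of the head-tax arithmetic. The 2,080-hour work year is the
# standard full-time assumption; the Amazon headcount is an illustrative
# assumption, not an official figure.
rate_per_hour = 0.26
hours_per_year = 2080                       # 40 hours/week * 52 weeks

per_employee = rate_per_hour * hours_per_year
print(f"Per employee per year: ${per_employee:,.2f}")        # $540.80

assumed_amazon_seattle_headcount = 40_000   # hypothetical round number
print(f"Amazon's yearly bill: ${per_employee * assumed_amazon_seattle_headcount:,.0f}")
# roughly $21.6 million, in line with the ~$20 million figure above
```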

And CEO Jeff Bezos, who has Amazon looking for a second headquarters city already, doesn’t want Amazon to pay. Amazon has announced that it is suspending planning for its next Seattle skyscraper, and that if the head tax is passed, it will build somewhere else.

The push for a head tax has not gone unchallenged. An opposition now coalesces.

I haven’t heard anyone say the company doesn’t mean it. The leaders of the Aerospace Machinists did say that a decade ago when Boeing threatened to open an assembly line for the 787 jet transport in South Carolina unless it got a ten-year no-strike agreement in the labor contract. The union guys didn’t believe the company. They rejected the concessions — and Boeing opened the line in South Carolina, just as it said. People in Seattle also remember when Boeing moved its corporate headquarters to Chicago.

They believe Bezos’ threat.

And the Left’s attitude toward this? Katie Herzog, writer for The Stranger, Seattle’s left-wing entertainment weekly, quotes Bezos on The Stranger’s blog saying that he wants to put his personal billions into space travel. Confusing Bezos’s personal money with Amazon’s corporate money, she writes:

“WHAT THE FUCKING FUCK, JEFF BEZOS??? THE ONLY WAY TO SPEND YOUR MONEY IS SENDING IT TO SPACE???? Please, excuse me for a moment while I go burn my Prime membership. (Just kidding. I use my dad's.) Here's one way Bezos, who has yet to make any significant philanthropic mark on the world, could spend his 130 billion dollars: PAY THE FUCKING HEAD TAX.”

I hear people who are angry — and almost all of them are Democrats.

Socialist Councilwoman Sawant, Herzog writes, should lock Bezos in a room and convince him to “get his shining head out of his ass and start using his wealth to help people other than himself.”

That is the Seattle Left in full, in its ideas and its manners.

The push for a head tax has not gone unchallenged. An opposition now coalesces. It includes the Greater Seattle Chamber of Commerce and other business groups long accustomed to the political culture called Seattle Nice. It includes the Seattle Times editorial page, which is urging Seattle Mayor Jenny Durkan to veto it. (Durkan, whom the Times supported for mayor, was Obama’s US Attorney here.) And when our socialist councilwoman and her groupies held a protest in front of Amazon’s new skyscraper, they faced a counterprotest of union ironworkers — the proletarians who would lose the chance to build Amazon’s new skyscraper.

The final vote is not scheduled until May 14. But whatever happens, much good has come of this. I hear people who are angry — and almost all of them are Democrats. Maybe Seattle will develop a two-party system — not a Republican-and-Democrat system, but some kind of opposition, some kind of choice. If it does, I will vote for whoever carries its flag.






No Cheers for Democracy

 | 

Democracy, the most celebrated religion of both the Left and Right, has spread like wildfire. Zimbabwe has recently fallen for more democracy. Social movements in the Middle East — with the most recent one known as the Arab Spring — are inching the area toward more democracy. Even in reclusive Saudi Arabia democracy is slowly gaining an upper hand. Bangladesh, Myanmar, Pakistan, and Nepal are solidifying their democracies; their military or traditional-religious heads have found it increasingly difficult to assert their will. Many political leaders in Africa now vacate their seats in response to the verdict of their citizens.

Democracy is winning. It is a religion, a faith, which is seen as an objective, universal truth, a truth that cannot be challenged. It is the solution to all ills. It is perfect and cannot be damaged by evidence. When a society does well, the true believers attribute this to an improvement in democracy. When a democratic system does not work — as it doesn’t in Africa, Latin America, the Middle East, and South Asia — the blame must go elsewhere. The true believers always ask for more democracy.

South Africa has continued to become more democratic, with its institutions increasingly reflecting the wishes and the culture of the masses. The political leadership is now openly in support of expropriating farms from the minorities. The masses, quite fallaciously, believe that such acts will improve their lot.

The true believers always ask for more democracy.

In 1994, before the advent of democracy, South Africa had a first-world infrastructure. Today, there are random electrical outages, water supply is in deep crisis, roads are bad, and crime is off the charts. Hate-crime against the minorities, including vicious torture and sadistic rape, is on the rise. For more than two decades the canniest people of South Africa have been emigrating to Canada, Australia, or the US.

The end of apartheid — in 1994 — did not have to begin the rule of the masses, but it did. Democracy has slowly changed the nation’s institutions, adjusting them to the masses’ demands and whims. The minority are whites, so the media and the intellectuals pay little heed to their rights. According to the media’s definition of “racist,” only whites can be that way.

Was South African apartheid a bad policy? Is the Indian caste system regressive? It is easy to say “yes” — and move on. But all changes in social and political systems have their collateral damages. A culture of individualism, decentralization, and the rule of law emerged in Europe to reduce collateral damage. From this point of view, supremacist democracy has been a disastrous regression.

The end of apartheid did not have to begin the rule of the masses, but it did.

South Africa now has apartheid against the whites, one consequence of which has been the destruction of the lives of blacks as well. The white minority — even today — is the intellectual and business spine of South Africa. As the minority loses its grip or emigrates, South Africa is imploding. Can the masses, peasants, and politicians not see what is coming? Apparently they cannot — which is the reason why democracy puts a society in a vicious cycle. Not just South Africa but the emerging democracies of Egypt, Myanmar, Papua New Guinea, and Nepal have been on assured paths to disaster.

Instead of thinking through why democracy might be the reason for the failure of societies, Western intellectuals blame the failures on a made-up recession in the number of democracies. When things go wrong, they attribute the situation to a lack of democracy, even if democracy has been in the ascendancy. If their rationalizations are no longer tenable, through circular reasoning they define and redefine “democracy” to ensure that it stays on the pedestal.

Over the long haul, Turkey and Malaysia have been among the best examples of progress in the third world. Not only have they become increasingly democratic but their GDPs per capita have grown relentlessly, making them middle-class societies. Both also have Muslim majorities; there are likely no other countries in which increasing democracy in a Muslim-majority society has coincided with rapidly rising GDP per capita and the maintenance of stability.

Can the masses, peasants, and politicians not see what is coming? Apparently they cannot.

It was not too far in the past that Turkey was under strict secular control by the army. Then, in 1997, the military asked the then Prime Minister, Necmettin Erbakan, to resign. His fault was that he had mixed religion with politics. Pressure from the US and international organizations meant that Turkey had to become more democratic and distance itself from the rule of the military.

It might be claimed that Turkey improved economically and socially because of this strengthening of its democracy. But Turkey was merely one beneficiary of a general trend of economic growth affecting the third world. The economies of Turkey, Malaysia, Latin America, South Asia, and Africa (in fact, of virtually every country, and particularly of non-democratic China) grew rapidly during the past two decades. None of them grew because of democracy. They grew because of the electronic revolution. Ironically, the growth of non-democratic China changed the economic structure of the world and made it possible for the third world to benefit, as the crumbs fell into its lap. Because it suited their purpose, ideologues credited this all to “democracy.”

But now, as democracy has grown, politics in Turkey and Malaysia increasingly reflect the will of the masses. Masses in the West might care more about hedonism, but it is religion, magical thinking, and the afterlife that occupy the minds of the masses in the third world. Fanaticism — hence totalitarianism and diminishment of the individual — has been growing rapidly in Turkey and Malaysia.

Most people in top positions in the media, the IMF, the World Bank, etc., maintain the usual, regurgitated, and extremely favorable view of democracy and multiculturalism. This has to be the case, for they cannot say (and eventually even think) anything that might be (mis)interpreted as racist, or they will be thrown out of their jobs. The result is that political correctness has absolute control over the institutions of the West.

Fanaticism — hence totalitarianism and diminishment of the individual — has been growing rapidly in Turkey and Malaysia.

Of course, it requires little reflection to notice that democracy isn’t the panacea it is made out to be. Quite to the contrary, it has been an unmitigated disaster for the third world. The Khmer Rouge in Cambodia had massive public support when they took power. During their rule, the guards at the concentration camps soon became the inmates, while the earlier inmates were sent to the killing fields after grotesque torture and dismemberment. Even the topmost “leaders” got caught up in this cycle of brutality. In a period of just over three years, they managed to kill as much as 25% of the population.

What they did in Cambodia is something no sane person, using the lenses of Western culture and political correctness, can understand. But perhaps that is exactly what needs to be understood to see the underpinning problems of democracy. One must understand the psyche of the masses and the peasants.

A vast majority of even the world’s enlightened society is made up of people who have no interest in public policy. While in the West this is often reflected in an expectation of free stuff and the resulting social welfare programs, the counterpart in the third world is usually tribal and superstitious. In the West, the desires of the masses result in a politics of redistribution and envy, a win-lose paradigm that, like a termite from within, slowly destroys the morals and the institutions of society. In poor countries, these desires result in a politics that is increasingly sociopathic and tyrannical, a lose-lose paradigm.

To see the underpinning problems of democracy, one must understand the psyche of the masses and the peasants.

I travel around the world to understand what is happening, without the lenses of political correctness distorting my understanding. One soundbite that I often hear from economic analysts is that if a country wants to keep growing it has to allow entrepreneurialism to take hold, reduce regulations and the size of the state, and do what is right. If that were the way the world worked, in this modern age of technology there would have been no reason for vast areas of the world to suffer from abject poverty. These economists are either politically correct (or else they would be thrown out of their jobs), living in gated communities (real or virtual), or simply naive. In any case, they are paid well to stay ignorant about the problems that democracy is inflicting on the third world, and increasingly on the first.

Why can the masses not see the problems they are creating for themselves by voting to destroy their wealth-generating class, the backbone of their society? Why do they not see that they are creating tyranny for themselves by imposing through their vote fanaticism in their institutions, a contest in which there is no winner? Why cannot the wisdom of the crowds — democracy — provide improvement in governance? Why don’t their collective votes align their economic structures for growth?

For the third world, tribalism and magical thinking are the mental and cultural operating system. While they claim to seek peace and economic growth, there is a long list of other dominant considerations — superstitions, religious dogma, the afterlife, pride in the tribe (which makes the individual impotent), the ever-present fear of Satan, family entanglements, envy, ego, and a conspicuous lack of understanding of the concept of causality. Even if they are keen on economic growth, their irrationality assures that they do more of what created their poverty, in a vain attempt to remove it.

Economists are paid well to stay ignorant about the problems that democracy is inflicting on the third world, and increasingly on the first.

The situation gets rapidly worse as you go down the class hierarchies of these societies and arrive at the people who mathematically are the major voting bloc. The peasants are traditionally tribal, superstitious, and envious. In a democracy, the bottom 51% of a society decides the nature of its institutions. Institutions take a long time to change, but eventually the psychology of the masses, their irrationalities, and their tribalism permeate them.

Many people worry ad nauseam that the USA supports the totalitarian regime in Saudi Arabia. But people from that area know that were Saudi Arabia to become democratic, it would become much more fanatical. While isolated locals might ask for more liberties, and their voices be exaggerated by the Western press, making the Saudi government look like the one remaining province of tyranny, the masses insist on an increase in totalitarianism. While a few isolated women might burn their hijabs, the majority of women insist on them.

And what about other countries?

Quite in contrast to video images of recent protests, and the Western narrative of Iranians asking for liberties, 83% of Iranians favor the use of sharia law. It is a no-brainer that more democracy isn’t going to change Iran in the way romantics in the West think it will.

A rule of, by, and for the peasantry is the maturing of democracy, and it never ends well for anyone, including the peasants.

Syria is nothing but an advanced stage of the Arab Spring, of the movement for democracy. So, mutatis mutandis, is Venezuela, where the culture of the masses and peasants has seeped into the government. With each gyration of democracy, Pakistan has become an increasingly Islamic state, where a word against the holy book results in a death penalty. India, the world’s biggest democracy, is rapidly taking the same course, as its deep-rooted superstitions, tribalism, and magical thinking continue to permeate its institutions.

We must again ask whether any democratic change would increase the rule of law and the culture of individualism — or whether it would be detrimental to both.

A rule of, by, and for the peasantry is the maturing of democracy, and it never ends well for anyone, including the peasants. The peasant revolutions of Mao’s China, Stalin’s Russia, Pol Pot’s Cambodia, and the innumerable civil wars of sub-Saharan Africa have virtually no competitors in causing misery and destruction. Peasants, except in New Age literature, have high time preference; they lack education, critical thinking, and rationality; and they are unskilled in planning. They focus at best on the immediate accumulation of resources. Allowed to feel victimized, allowed to pass responsibility onto others for their predicament, they happily do so.

But haven’t the elite, the intellectuals, the businessmen, the entrenched classes, the feudal lords been exploitative?

In Brazil, India, and Venezuela the middle class is extremely corrupt. In the caste system of India, the lower castes do not even exist as human beings in the minds of the upper castes. The elites are the exploiting class. But when the peasants get into power, there are no limits left for corruption and exploitation. They enable lose-lose tyranny and brutality — pure, unadulterated savagery.

All power structures are exploitative. The question is which one does the most for society.

The state is a totalitarian instrument. Apartheid was the same. The caste system is the same. Among all these systems, the rule of peasants — democracy — is the worst. Their inability to think of the future and understand public policy means that once in control, they rapidly destroy the institutions, enter a phase of hedonism, go into conflicts over resources, or simply destroy the country’s capital, eventually trending society toward Malthusian equilibrium. One has to spend time in backward societies to see how, as if by magic, the masses instinctively destroy any advantages they get from technology and economic growth.

Capital, civilization, and prosperity do not occur in nature. Increasing capital and even maintaining it is the job of the elite — not of masses or peasants. All power structures are exploitative. The question is which one does the most for society and what steps to take to move society toward more liberty. Democracy isn’t that next step forward.






Making It Work

 | 

Libertarian policy proposals are often ridiculed for being too impractical and naively idealistic. This article will put forward practical solutions for implementing libertarian policies in ways that can, and will, work in the real world. Privatization and healthcare, two areas in which libertarian policy is hotly contested, are the focus.

I’ll start with a summary of two objections to freedom, follow each with a solution for overcoming it, and then add details.

First Objection: infrastructure — such as roads and train lines — and utilities cannot be privatized because they are natural monopolies: two operators cannot compete along the same line at the same time.

Most people are aware that public monopolies are often mismanaged by operators who have no accountability to the public.

First Solution: if the right to operate the space, be it the road, the train line, or the power line, were auctioned off for very short periods, at open competitive bidding, it stands to reason that the most efficient private operator would make enough money to place the highest bid at the next round, and would have operated in the best way possible to maximize profits and attract consumers (if consumers cared to listen to reason). In other words, private operators would compete along the vector of time, not space, with the most efficient one winning the highest profit and likely making the highest bid for the next slot of time.

Second Objection: under the present system, which evolved under capitalism, health insurers pay for the healthcare of the people who pay healthcare premiums, the premiums bearing no direct relation to the healthcare actually received. The system would have to work this way, because the whole idea of insurance is that you pay for the risk that you may one day need insurance, not for the actual healthcare you receive thereafter. This system causes a disconnect between the healthcare buyers and the healthcare sellers, enabling the sellers to jack up their prices. Only big government and a bunch of crusty, arrogant, elitist bureaucrats have the power to step in and force prices down to affordable levels by setting or capping prices by law.

Second Solution: to the extent that health insurance as such poses a structural tendency to sever payment from delivery of service, the problem can be solved not by leaning toward big government but by moving toward greater freedom in free-market competition. Require doctors to publish schedules of what services they offer and at what costs, as would be reasonable in any capitalist system in which sellers must be honest about what they are selling. Then drastically deregulate health insurers so that any entrepreneur can start a health insurance company and compete in any state, across state lines. In this ideal world, health insurers would compete in a marketplace — not a fake Obamacare exchange but a real capitalist free market.

What this natural monopoly thought process ignores is that there are many ways for companies to compete, if you think outside the box.

What will naturally evolve from this is a situation in which, to pass along as much cost saving as possible and thereby win as much business as possible, some health insurers will develop a system for the insureds to prepay for the priced services they want, from specific doctors at specific prices. Then, if they get sick and need those services, they will get what they shopped and paid for. The actual payment mechanism would still be the insurer pooling all payments and then paying after the fact for the people who got sick, but price competition would force doctors to lower their prices to competitive levels to get buyers, and this same pricing pressure would force health insurers to pass along the best deal to the buyer. Premiums would be applied after the fact, pro rata, to the healthcare that people chose to buy before the fact. A buyer will compare prices and choose a seller, and buyers and sellers will naturally converge at the equilibrium price point between supply and demand — which, as (smart, sane, rational, libertarian) economists know, is the right antidote for monopolistic price gouging.
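A stripped-down sketch of that pooling mechanism, with invented names and prices, may make the flow of money clearer; nothing in it comes from any actual insurer’s design.

```python
# Minimal sketch of the pooling mechanism described above, with made-up
# names and prices. Each insured prepays toward specific priced services;
# the insurer pools the premiums and pays providers only for those who
# actually needed care, so each premium is applied pro rata to the care
# actually bought.
premiums = {"Ana": 300.0, "Ben": 300.0, "Cara": 300.0, "Dan": 300.0}
claims = {"Ana": ("knee MRI", 900.0)}       # only Ana got sick this period

pool = sum(premiums.values())               # $1,200 collected
paid_out = sum(price for _, price in claims.values())

print(f"Collected: ${pool:.0f}, paid to providers: ${paid_out:.0f}")
print(f"Carried forward against future claims: ${pool - paid_out:.0f}")
for name, (service, price) in claims.items():
    print(f"{name} receives the {service} priced in advance (${price:.0f})")
```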

Details:

Examples of so-called natural monopolies include transit routes, bandwidth, electric utilities and power lines, cable service, garbage collection, and air space for planes or drones.

“Natural monopoly” public infrastructure can be privatized. And it should be privatized. Most people are aware that public monopolies are often mismanaged by operators who have no accountability to the public.

But it is assumed that there can be no competing alternatives, since the land or space simply isn’t there. So let there be a monopoly, but have the government regulate it so it will be forced to sell at price points below the monopoly price. What this natural monopoly thought process ignores is that there are many ways for companies to compete, if you think outside the box.

Competition in running natural monopoly infrastructure can take place along the dimension of time, not of space. When the natural monopolies are privatized, what is sold is essentially a lease, lasting two or three years but no longer. The buyer would have every right to do whatever he likes with the land or infrastructure and monetize and run it as he pleases, but only for the term of the lease, at which point the right to buy the next period of time would be up for open bidding and awarded to the highest bidder. Economic efficiency and capitalist theory dictate that the company that can make the most money from such an enterprise will tend to be both the highest bidder and the company that can continue to run it the best. If a transit route is run badly, sales will flag, profits will drop, and the opportunity will arise for someone better to place a higher bid in the next round. Thus, even with only one owner, there will be competition in the economic sense.
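To make the bidding logic concrete, here is a minimal sketch in Python. The operator names, profit margins, revenue figure, and the rule that each bidder offers 90% of the gross profit it expects to extract are invented for illustration; this is not a design for a real auction.

from dataclasses import dataclass

@dataclass
class Operator:
    name: str
    efficiency: float  # share of the asset's gross revenue this operator can turn into gross profit

def bid(op: Operator, gross_revenue: float) -> float:
    # Assumed bidding rule: offer 90% of expected gross profit, keep 10% as margin.
    return 0.9 * gross_revenue * op.efficiency

def run_auction(operators, gross_revenue, terms):
    # Each term, the lease on the single asset goes to the highest bidder.
    for term in range(1, terms + 1):
        winner = max(operators, key=lambda op: bid(op, gross_revenue))
        winning_bid = bid(winner, gross_revenue)
        margin = gross_revenue * winner.efficiency - winning_bid
        print(f"Term {term}: {winner.name} wins with a bid of {winning_bid:,.0f} "
              f"and keeps an operating margin of {margin:,.0f}")

operators = [Operator("EfficientCo", efficiency=0.30),
             Operator("SloppyCo", efficiency=0.10)]
run_auction(operators, gross_revenue=1_000_000, terms=3)

The 90% figure is arbitrary; in an open auction the margin left to the winner would be whatever the next-best bidder is willing to forgo. The point of the sketch is only that the operator who can run the asset most profitably can also afford the winning bid, term after term.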

Additions to the scheme may need to be made, such as requiring a pro rata portion of an operator’s profits to be paid back to previous owners who invested in long-term durable equipment or improvements from which the current owner benefits. But such additions are not difficult to design. As a bonus, if any contractor commits massive fraud against the consumer, this will be easy to see, because when a competing operator wins the next lease bid and looks at the infrastructure, he will see what the previous operator did to it, and consumers will be protected better than they would be under heavy regulatory scrutiny.
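As an illustration of how simple such an addition could be, here is a sketch of a straight-line payback rule in Python. The dollar figures and the nine-term useful life are invented for the example.

def payback_schedule(improvement_cost: float, useful_life_terms: int,
                     terms_used_by_investor: int) -> list:
    # Straight-line rule: each term of useful life accounts for an equal share
    # of the improvement's cost; terms the original investor did not get to use
    # are reimbursed by whoever holds the lease during them.
    per_term_share = improvement_cost / useful_life_terms
    remaining_terms = useful_life_terms - terms_used_by_investor
    return [per_term_share] * remaining_terms

# A 900,000-dollar signalling upgrade with a nine-term useful life, installed by
# an operator whose lease covers three of those terms: each of the next six
# leaseholders owes the original investor 100,000.
print(payback_schedule(900_000, 9, 3))

Apportioning the cost of a durable improvement across later leaseholders is ordinary arithmetic, not a reason to keep the asset public.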

Today’s economy already proves that this will work. There are hundreds of huge corporations that buy some downstream service from only one seller, for the term of a contract; and there is ample price competition, even though only one seller can hold the deal to be the supplier at any one time. The companies that sell “back end” human resources services (outsourced services such as paychecks and benefits management) to Fortune 500 corporations are an example: a buyer can sensibly go with only one seller at a time, but there is a ton of competition. Another example: places exist where various owners own the rights to different heights above the ground of a single plot of land, so that two companies can compete by owning different floors of the same building, competing along the dimension of height, not of length.

The person who made the original objection to privatization will object again, saying that the rich will bid big to get ownership of the monopoly, charge high prices while offering crappy service, and run away after their lease ends — taking profits derived from forcing people to pay a lot for a service with no alternatives. The operators’ costs would have been low, since they didn’t give a damn about infrastructure investments. But this objection reduces merely to the general argument against free-market capitalism. The Marxists and socialists think that rich people get rich by fleecing their victims. If you believe instead, as smart people do, that money is made in a free society by creating high quality at an affordable price where supply meets demand, then the objection collapses. Specifically, it is wrong because an operator who does a good job will always make more, net, over the long term, than a con artist; hence the good operator will have more money and more motivation to outbid the crooks.

This is not to say that the system can never be abused. No system is perfect. Privatization is certainly no less perfect than a regulated natural monopoly, and ultimately it would be far better. Just ask anyone who rides the subway in New York City: in addition to being a vital means of transportation for millions of New Yorkers, it is also the location that the wonderfully brainless liberal politicians of New York have chosen as the de facto living space for the mentally ill homeless people, just to get them off the streets. The bigger picture is that the economic demand for the subway would justify a rise in fares that is politically unpopular and therefore impossible. So New York City as subway operator does not, and cannot, spend the money it should to maintain the subway service as it deserves and needs. The New York Times even ran a crusade to get more spending for the subways, noting how horrible they are and how many people use them, which crusade did not succeed, and could not succeed. The free market would do better.

I have suggested two or three years as the basic contract period for the operation of natural monopolies. It needs to be short enough to enable consumers to hold bad operators accountable so that better ones can step in. Employees may not want two- or three-year contracts, and they may need to be paid somewhat more on that account. Nevertheless, we need to get away from the labor union mentality, according to which the labor pool only works if employees are chained to their jobs and employers are chained to long-term labor contracts. The United States is becoming "the gig economy," as they say, led by the Uber and Lyft drivers. A lot of industries are moving toward hiring employees for a temporary, shorter duration and away from hiring them for permanent, full-time jobs. Employees with strong professional skills are so valuable that no one who purchased a short-term lease on a natural monopoly would want to get rid of them.

As far as planning goes, there are examples in today's economy of businesses drawing up plans for long-term operations, because that is how they can best succeed, but if their basic contracts are not renewed, they just tear up the plans. In business you need long-term plans, but you also need to face the risk that these plans may fail dramatically, at any time. If you don't get investors in your second year of operation, you just eat the third, fourth and fifth years of your business plan, no matter how great those years might have been.

Now to some details about healthcare. Free market economics doesn’t work if there is a disconnect between the person who pays the money for a benefit and the person who receives the benefit. The disconnect causes prices and costs to skyrocket, because the buyer cannot force the seller down. Many libertarians already know this: one of our objections to government spending is that the government will overspend because there is a disconnect between the taxpayer and the beneficiary. Healthcare is a great example of a buy-sell disconnect: the health insurer pays, the patient receives the treatment without directly paying the doctor, and doctors don’t compete on price for each individual patient.

The problem with health insurance is that, originally, it was in fact insurance that a person bought to mitigate the risk of getting sick, but it has become a behemoth that pays for all medical expenses and then collects exorbitant and arbitrary amounts from the public, with no connection between payments and collections in an individual patient’s case. The problem arises because, by the time people become sick, their medical costs are typically too great for them to pay, so they must have already had insurance to get treatment, and the insurance will then end up paying all costs.

To reform healthcare, first, require doctors, as a condition of receiving their license to practice medicine, or merely by means of laws mandating truth in advertising, to create and publish a schedule of fees for each of their services, and let individual patients receive that care if they pay the fee listed on the schedule. Second, break up the regulations of health insurance companies so that anyone can start one and can compete in every state with a minimum of red tape. Third, require each health insurer to publish the actuarial tables it is using, showing what portion of your payment will pay for what medical treatment in the future from what doctor’s schedule of fees. Fourth, allow the consumer to “buy” his future medical treatment by choosing what portion of his premium to allocate to the doctors’ services that he could potentially get, from the competing doctors’ fee schedules, “through” his health insurance company.

The health insurer would pool the buyers’ payments to make the actual payments to the doctors for the insureds who become sick, but each buyer could take the income he has allotted for health insurance and “spend” it by choosing the slate of healthcare services he would pay for at that price, selecting his doctor from among the competitors. Doctors would compete on price to be chosen by each buyer as he decides how to allot his healthcare premium.
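Here is a minimal sketch of that pooling mechanism in Python. The doctors, fees, premiums, and the single claim at the end are invented for illustration; the point it is meant to make concrete is simply that buyers shop from published fee schedules up front, while the insurer's pool pays the chosen doctor's published fee for whoever actually gets sick.

# Sketch of the pooling mechanism: buyers pre-select treatments from published
# fee schedules, premiums go into a common pool, and the pool pays the chosen
# doctor's published fee for whoever gets sick. All names and numbers are
# illustrative assumptions.

fee_schedules = {
    "Dr. Adams": {"appendectomy": 8_000, "office visit": 90},
    "Dr. Baker": {"appendectomy": 6_500, "office visit": 120},
}

# Ten insureds each pay a 700-dollar premium and pre-select Dr. Baker's cheaper
# appendectomy from the competing schedules.
insureds = {
    f"buyer_{i}": {"premium": 700, "selections": {"appendectomy": "Dr. Baker"}}
    for i in range(1, 11)
}

pool = sum(person["premium"] for person in insureds.values())

def pay_claim(buyer: str, service: str) -> None:
    """Pay the pre-selected doctor's published fee out of the pooled premiums."""
    global pool
    doctor = insureds[buyer]["selections"][service]
    fee = fee_schedules[doctor][service]
    pool -= fee
    print(f"{buyer}: {service} by {doctor} at the published fee of {fee}; pool now {pool}")

# Only one of the ten buyers actually needs the surgery this year.
pay_claim("buyer_3", "appendectomy")

If the buyers had all pre-selected the more expensive schedule instead, the same coverage would have required a higher premium, which is exactly the price pressure on doctors described above.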

This would combine two novel approaches: “shopping” for treatment from the doctor, not the insurer, and expanding competition among health insurers by allowing small startup health insurers, akin to what was done for poor businesses in Asia by the “micro-credit” revolution that enabled any poor woman or man to open a business on a small loan. Thousands of small businesses will pop up to become micro-health insurers and facilitate the trade, between doctor and patient, of treatment for money. This would connect the buyer to the seller and enable massive price competition among doctors, so costs would plummet, because many doctors would seek patients by offering lower prices at decent levels of quality. Obviously this would not lower the quality of healthcare, because the doctors who succeeded would be those who proved they could deliver successful, effective treatments, but at cheaper prices. In today’s world, where everyone finds ratings and reviews online, the doctors with the best value propositions, defined as higher quality at a lower price, would be readily apparent.

The micro-health insurer could also prepay, locking the buyer and seller in at that price while taking its profit up front rather than when the healthcare is delivered. This would keep healthcare costs locked down at the competitive price the buyer chose to pay, and complete the sale for the buyer at the time of purchase, not after the fact when the patient-buyer becomes sick and his very life depends on paying for healthcare. Right now there are maybe a handful of insurers and 20 health insurance plans that compete in any given state Obamacare exchange, but the initiative I have outlined would open the door to thousands of health insurers, and potentially hundreds of thousands of healthcare “menus” and “menu items” available to buyers who prepay doctors, with a pro rata share of the premium, for the treatments they eventually receive.

The analogy of healthcare options to a menu at a restaurant is apropos. People need food. If you don’t have it, you die, just as a sick person who needs medical treatment gets it or dies. This does not enable the farms to jack up the price of food until it is out of sight, as doctors, hospitals, and pharmaceutical makers are doing. Instead, thousands of restaurants and grocery stores compete, buying food from farms and selling it as a selection of options on a menu. People buy what they want, within the limits of their budget. Consumers win, and have tasty meals and full bellies. Yes, poor people may have to eat at cheap fast food stores, but they don’t starve to death (and the food at Dunkin Donuts is not that bad!). If you are willing to make do with less, such as by purchasing vegetables and cooking your food at home, you can eat quite nicely. So, too, could a free-market system work for the benefit of all Americans by introducing price competition into the healthcare industry, which would create affordable options across a range of price points.

The conclusion to draw from this article is that, while the statists object that libertarian policy cannot be implemented in a practical manner, this is simply not true. Thinking outside the box, and being creative and innovative about policy solutions, will meet the challenge of making liberty work for America.




