Butterfly Police


The iconic orange- and black-winged monarch butterfly, one of North America’s insect wonders, is on the path to extinction. Its population has collapsed by 90% since the 1990s.

Each fall, the butterflies travel up to 3,000 miles from their breeding grounds in the US and Canada to their winter sanctuaries in the oyamel fir forests of central Mexico. In late winter, they mate and begin the return trip to the US and Canada, where they lay their eggs on milkweed plants and die. The eggs hatch into caterpillars, which feed exclusively on these host plants before metamorphosing into the butterflies that will eventually fly back to Mexico.


The reported population decline is based on annual estimates of the number of butterflies overwintering in Mexico. That number is, in turn, based on the number of acres occupied by the monarchs. In 2016, ten acres were occupied, compared with 44 acres 20 years ago. The cause of the decline has been attributed to habitat shrinkage, both in Mexico (trees, because of illegal logging) and in the US (milkweed acreage, because of urban sprawl and agriculture). The problem, of course, is anthropogenic: global warming and pesticide use. So says the Center for Biological Diversity, and the solution, of course, is “immediate action to rein in pesticide use and curb global climate change.”

And, of course, there is no real-world connection to either. Regarding devastation of the monarch’s Mexican habitat, environmentalist Homero Aridjis wrote, in 2016, “The Mexican government should be taking measures to mitigate the probable effects of climate change on the [monarch butterfly] reserve.” The operative word is “probable.” In March of that year, Mexico experienced the destruction of 133 acres of forest, in a storm that froze or killed an estimated 6.2 million monarch butterflies. Said monarch expert Lincoln Brower, “Never had we observed such a combination of high winds, rain and freezing temperatures.” According to Weather.com, “this storm was unexpectedly intense, fueled by shifting temperatures due to climate change.” Freezing temperatures in March! In central Mexico! Blamed on global warming!

As to the habitat effects of illegal logging, most of the land occupied by overwintering butterflies is owned by indigenous Mexicans, who must cut the forest to survive. To stave off such habitat devastation, conservationists have tried to convince impoverished landowners that “the forest is worth more to them in terms of tourism when left standing instead of being cut down.” The thinking apparently is that if the conservation pitch is successful, then future tourists will joyously snap memorable pictures of a soaring monarch migration, as it descends onto oyamel fir forests — whose then-dense canopy will hide the waning, forgotten indigenous farm and mountain communities, as they descend into deeper poverty.


But in case destitute locals cannot be persuaded to give up their supplemental logging incomes, “Mexico's government announced it would create a special national police squad to patrol nature reserves and fight environmental crimes.” No announcements have been made as to how the butterfly police will handle the environmental crimes of bark beetles, whose infestations of the monarch sanctuary have no doubt destroyed at least as many trees as has illegal logging.

Not to be outdone by Mexico, the US has concocted measures of equal inanity. For example, the Obama administration proposed a “fly-way” program in which milkweed refuges for the butterflies would be created along highways that follow monarch migration routes. “According to the national strategy plan released by the White House, the fly-way is intended to increase the population to 225 million butterflies by 2020.” Another plan calls for placing the monarch on the Endangered Species List. “Our government must do what the law and science demands, and protect monarchs under the Endangered Species Act, before it’s too late,” scowled George Kimbrell, legal director at the Center for Food Safety. As a resident of Alabama, I pledge, as soon as the insect appears on the list, never to stomp on a monarch that lands in my yard, and to encourage my fellow Alabamians to demonstrate similar restraint. Good God, it’s our state insect.


Unfortunately, what science demands is evidence. And the scientific evidence does not support the climate change or pesticide propaganda. According to an exhaustive study of World Wildlife Fund and citizen scientist butterfly migration data, it is most likely that neither milkweed loss nor herbicides limits the monarch population. “Monarch numbers begin declining at the end of the summer, when the butterflies begin their long migration to Mexico, and the numbers continue to decline as they travel. During this southern migration, adult monarchs do not feed on milkweed,” wrote lead author Anurag Agrawal. “By the time they get to Mexico their numbers are plummeting, but at the end of the summer when they start their migration, their numbers are not down . . . Herbicides are not likely to be the problem, and genetically modified crops that are herbicide resistant are not likely to be the problem for the monarch.”

In their incurious haste to blame the plight of monarchs on the climate change and pesticide boogeymen that they so vividly, and obsessively, imagine, crack US scientists relied on the overwintering counts estimated by crack Mexican scientists. They didn’t think to estimate the number of butterflies that depart the US in the fall. They counted the milkweed loss (up to 6,000 acres of potential habitat a day, because of US land development, says Monarch Watch), but not the monarchs. Monarch Watch counted milkweed instead of monarchs?


Who knows what is happening to the monarch butterfly? Most of its population decline — as any non-environmentalist would guess — seems to be occurring during its arduous 3,000-mile journey to Mexico. Some of the decline in Mexico may be caused by illegal logging, and some by the bark beetle. But even this possibility is suspect. It’s extremely difficult to believe that tenacious monarchs could not find 44 acres of sufficiently dense and healthy fir trees, unassaulted by loggers and bark beetles, somewhere in their 138,379-acre biosphere reserve. And none is caused by the shrinkage of milkweed acreage in the US.

The monarch population had been rebounding in the few years prior to the March 2016 storm. Had that storm not occurred, the headline story might have been the miraculous resurgence of our cherished monarchs. Instead, the storm was used to blame climate change and pesticides for their demise. One can only hope that this silly, condescending, ideological attribution — that millions of monarchs were frozen to death, in the spring of the year, in central Mexico, by global warming — causes a similar decline in the population of braying environmentalists, and the rapid extinction of moronic, politically motivated scientists who come up with ideas such as butterfly highways and butterfly police.






Why Do Economists Disagree?


The influence of economics suffers from the idea that economists disagree to the point of uselessness. George Bernard Shaw supposedly complained that “if all the economists were laid end to end, they'd never reach a conclusion.” A similar old adage says that if you ask the advice of five economists, you will get five different answers, or, if Keynes is one of the five, six answers.

Such talk may be fun, but it is unfair. “The first lesson of economics,” said Thomas Sowell, “is scarcity: there is never enough of anything to fully satisfy all those who want it. The first lesson of politics is to disregard the first lesson of economics.” Why? With characteristic exaggeration, H.L. Mencken observed that “no educated man, stating plainly the elementary notions that every educated man holds about the matters that principally concern government, could be elected to office in a democratic state, save perhaps by a miracle . . . by a combination of miracles that must tax the resourcefulness even of God” (Notes on Democracy, 1926, pp. 103, 106). A politician who understands economics and tries to apply it loses votes. One who understands it but conceals that fact is dishonest. Honest ignorance is an electoral advantage.


Economists agree on the basics of their subject; disagreement on policy has other sources. The following list merely names the main points of agreement. Explaining them would go beyond this note, although, toward the end, it does expand on the most fundamental of them.

  1. Scarcity and the need for choice; opportunity cost.
  2. The division of labor, gains from trade, and comparative advantage.
  3. Marginalism and diminishing marginal returns.
  4. The role of the price system in exploiting the fragmented knowledge and coordinating the productive efforts of millions, even billions, of people in the nationwide and worldwide economy. The task includes allocating resources between the present and the future. The midget economy of the Swiss Family Robinson on its desert island contrasts instructively with the vast capitalist world of diverse resources, abilities, and preferences.
  5. “Economic calculation,” which is more than the mere dovetailing of such activities as automobile production and tire production, suitably proportioned. It refers, further, to producing the chosen amount of each good and service at minimum sacrifice of other desired things. Efforts at such calculation without genuine markets and prices, whether in theory or in the real world, have failed.
  6. Money as an institution that vastly promotes specialization and gains from multilateral trade. Money prices express opportunity costs, convey information and incentives, and ration scarce resources and goods.
  7. Private property, innovation, and entrepreneurship as essential to a thriving economy.
  8. Refutation of fallacies that have contaminated policy for centuries, especially ones relating to international trade and to a supposed self-regulation of money — the “real-bills doctrine” that the money supply will be correct if based on short-term bank loans to finance the production or marketing of real goods.

Shared understanding does not end there. Economists agree that reality has “imperfections” in comparison with an imaginary perfectly working price system. Externalities, monopoly, inflation, recessions, mistakes, inadequate foresight — all do occur. Economists are tempted to damn reality for being real, and they agree on many such matters. Price inflation traces above all to creating too much money.

Recessions are episodes of snowballing impediments to transactions, and economists explain them in various ways. In no field do professionals totally agree. An example in macroeconomics is the opinion of central bankers worldwide, shared by many but not all economists, that 2% inflation — a halving of money’s purchasing power every 36 years — is a proper objective of policy and that lower inflation is a cause for concern. Some technically valid arguments do exist for chronic mild inflation, but they are not decisive. Economists disagree on the weights to be accorded to agreed considerations.
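The halving figure is straightforward compound-interest arithmetic. A quick sketch (the 36-year figure follows the popular rule of 72; the exact value is just over 35 years):

```python
import math

# Years for purchasing power to halve under steady 2% inflation:
# solve (1.02)**t = 2  =>  t = ln(2) / ln(1.02)
halving_time = math.log(2) / math.log(1.02)
print(round(halving_time, 1))  # 35.0

# The rule-of-72 approximation used in the text: 72 / 2 = 36 years
print(72 / 2)  # 36.0
```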


But disagreement makes news while agreement does not. Lack of total agreement parallels what also occurs in the natural sciences: total understanding and consensus never are reached; room always remains for further research. As in other disciplines, economists disagree, when they do, on details and at the “frontiers” of research but not on the basics.

Disagreement on policy traces overwhelmingly to matters other than economics. Economists are not equally bold in predicting the future. They (as well as political scientists) hold differing opinions about how well government and politics function. Scientific issues join in policy disagreement, as about how serious a problem global warming is.

Economists are not equally knowledgeable about history, as about periods of advance and stagnation, crises, recessions, and monetary systems. Historical knowledge is valuable for making judgments about prospective population growth and technical and other innovation, but agreement cannot be expected to the extent that it can be expected on the basics of economics.

Psychology is sometimes at issue. Not all economists have the same understanding of people’s psychological quirks and of whether policy “nudges” might improve their decisionmaking.


Sociological questions arise, such as whether and to what extent welfare programs foster a culture of dependency and undermine the traditional family. So do issues of ethics and social philosophy, as about inequality of wealth and income, concern for future generations, how progressive the tax structure should be, whether the estate tax is fair, and what claims poor people at home and abroad are entitled to make on the more fortunate. “Bleeding-heart libertarians” do exist and have a web site of that name.

Like other people, economists sometimes yield to wishful thinking. An example is the belief that proposed tax-rate cuts will so stimulate economic activity as to increase, not reduce, tax revenues. Such a belief does not mean rejection of economic principles; in rare circumstances, that happy result could occur.

Career advancement can be a factor. Some economists seek distinction in cleverly working on the “frontiers” of research, in deploying impressive mathematics, or in finding exceptions to generally agreed applications of basic principles. Alternatively, some may be paid for rationalizations about policy that selectively emphasize some valid principles while disregarding (though not denying) others.

Some economists, perhaps seeking influence and fame, make compromises by taking account of political feasibility (i.e., votes), endorsing policies other than those they truly consider best. Full honesty would require openly acknowledging what they are doing (see Clarence Philbrook’s eloquent article in the American Economic Review, December 1953).


Not all so-called economists are real ones who have completed graduate studies in the field and try to keep up with and occasionally contribute to the professional literature. It is not enough to hold an economics-related government position or to be prominent on TV. Disagreement among such people shouldn’t be allowed to disparage the professionals.

The most basic economic principles concern scarcity and opportunity cost. The city council of Auburn, Alabama, has voted to build an outdoor ice-skating rink downtown, where it will gobble up scarce parking space, worsen traffic problems, and otherwise inconvenience nonskaters. Evidently the council has not made a full cost-benefit analysis. Might not the money be better spent for other city purposes or left to taxpayers for their own purposes? How intense, anyway, is the demand for ice-skating here in the Deep South, where, by the way, the ice would have to be artificial? If enough demand exists, wouldn’t private enterprise satisfy it, and in a less costly and otherwise more suitable part of town?

I conjecture that the city council simply agreed with someone’s idea that a rink would be a good thing. So why not build it? It is easy to forget to ask how desirable it would be and how great the opportunity cost in the sacrifice of other public or private uses of resources.


Such blitheness about opportunity cost shows up on the big-city and national levels. If a proposed museum would be nice or another overseas military base would seem to be a wise precaution, why not vote for it? A new sports stadium might please the fans, and consultants will conceive of side benefits for nearby restaurants, so why not support it with city money? An individual legislator pays practically nothing himself and might gain some votes.

James L. Payne shows how disregard of opportunity cost supports thinking that government money is somehow “free” (The Culture of Spending, 1991). Lobbyists not only for governors and mayors but also for industries swarm Washington seeking local projects and grants of money. Understandably, witnesses calling for such favors in congressional hearings far outnumber those who dissent. A similar explanation applies to firms and industries seeking protection from competition. But disregard of opportunity cost is disregard for a principle accepted by all economists.

Nothing said here denies that economists have expertise in contributing to policy judgments and that they — and quasi-economists — often disagree. Such disagreement rarely hinges on core principles and does not excuse disregarding them. Specialists cannot and should not have the decisive vote on policy, but that judgment does not excuse neglecting the basic principles that concern everybody and on which economists emphatically do agree.






And Now For Something Completely Different


What can one say about a book on infinity that hasn’t been said before? An infinite number of things, presumably, but I’ll make this brief.

The book, Approaching Infinity, is by philosopher Michael Huemer. Perhaps you’ve heard of him — but why? If you’re a libertarian, but not a philosopher or “into philosophy,” it’s likely because of his well-received book, The Problem of Political Authority (2013).

If you’re a libertarian and, though not a philosopher, are into philosophy, you may also be aware of Huemer’s excellent online-available essays on the right to own a gun and the right to immigrate. (I imagine readers on both the Left and Right are now gnashing their teeth.)


But Huemer is nothing if not prolific. Libertarians who are really into philosophy may even be aware of his criticism of Ayn Rand, his argument that we sometimes have a duty to disregard the law, his argument that attorneys have a moral obligation not to defend unjust causes, his criticism of the US government’s War on Drugs, and his essay on why people are irrational about politics (also a TED talk!).

But — and this is the point I want to stress — even though he’s published much of interest to libertarians, Huemer, like Robert Nozick before him, is clearly better described as a philosopher who is a libertarian than as a “libertarian philosopher.” His first book, Skepticism and the Veil of Perception (2001), dealt with epistemology (the field of study that led to his hiring at the University of Colorado, Boulder); his second, Ethical Intuitionism (2005), focused on ethics. Now, having covered epistemology, ethics, and politics, Huemer, in Approaching Infinity, turns to the philosophy of mathematics (with an occasional nod to some issues in the philosophy of science). Clearly a well-rounded guy, philosophically speaking.

Also an iconoclast:

  • Although most philosophers since Descartes have opposed direct realism (the view that we are directly aware of real, physical objects), Huemer argues for just that point of view.
  • Although most modern philosophers oppose ethical intuitionism, the view that we can have direct knowledge of objective moral truths, Huemer again argues for exactly that.
  • Although most people readily accept political authority, and most philosophers are not anarchists, Huemer argues both against political authority and for a capitalist version of anarchy.

So it should surprise no one that Huemer, in analyzing some foundational issues in mathematics in order to solve various paradoxes of infinity, is willing to advance bold claims.

Almost everyone is familiar with at least some infinity paradoxes. We’ve all heard about Zeno and why that ball coming at you will never reach you, or why Achilles can never catch the tortoise. And you’re probably aware that strangeness results when even simple arithmetic is applied to infinity, e.g., ∞ = ∞ + 1. Subtract ∞ from both sides: 0 = 1.
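The fallacy lies in the subtraction step: infinity has no additive inverse, so “∞ − ∞” is simply undefined. Even IEEE floating-point arithmetic, which includes an infinity value, refuses the move. A minimal Python illustration (my own, not the book’s):

```python
import math

inf = math.inf
# Adding a finite number to infinity leaves it unchanged:
print(inf + 1 == inf)         # True

# But subtracting infinity from both sides is not a valid move:
# IEEE 754 arithmetic yields nan ("not a number") for inf - inf.
print(math.isnan(inf - inf))  # True
```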

But I had no idea there were at least 17 different paradoxes associated with infinity. From Hilbert’s hotel to Gabriel’s horn . . . from Thomson’s lamp to Benardete’s paradox . . . from ancient Greek problems to dilemmas developed only in the past century . . . Huemer describes them all and then starts to develop the background needed to solve the infinity paradoxes. There are discussions of actual and potential infinities, of Georg Cantor’s set theory, of the theory of numbers, of time and space, of both infinity and infinitesimals. Of the metaphysically impossible and the logically impossible. Of the principle of phenomenal conservatism (which Huemer introduced in his epistemology text), and even of the synthetic a priori.


In building the background to handle the infinity paradoxes, Huemer argues that extensive infinities (including the cardinal numbers) can exist but not as specific magnitudes. Thus, the positive integers are infinite, in the sense that for any such number you can find higher positive integers, but not in the sense that there is a number “infinity” that is higher than all the positive integers. You cannot add and subtract “infinity” as I did in the previous paragraph. And he argues that while extensive magnitudes (time, space, volume) can sensibly approach infinity in this understanding, infinite intensive magnitudes (such as temperature, electrical resistance, attenuation coefficient, etc.) are metaphysically impossible. This distinction allows several paradoxes to be solved, or avoided.

A fascinating section of the text discusses various forms of impossibility. Sometimes philosophers note that X is physically impossible, given the laws of the universe as we now understand them, but nonetheless that it could be possible in a similar but slightly different possible world — say, with a slightly different Coulomb constant. But at other times X is deeply physically impossible. Consider these two alternatives described by Huemer:

Compare this pair of questions:

A. If I were to add a teaspoon of salt to this recipe, how would it taste?

B. If I were to add a teaspoon of salt to this recipe in an alternative possible world in which salt is a compound of plutonium and mercury and we are sea creatures who evolved living on kelp and plankton, how would it taste?

Huemer notes that it’s not merely that we have no idea about how to answer B but that, more importantly, even if we could answer B, answering it gives us no intuitions, is of no help in trying to figure out the answer to A. Though Huemer makes this point in the context of determining what counts as a solution to an infinity paradox, it also has direct application to various thought experiments in other areas of philosophy and to what counts as a helpful or unhelpful thought experiment. (On this see my own work, “Experiment THIS!: Libertarianism and Thought Experiments.”)

Related to the paradoxes of infinity are the problems of infinite regress. You may have heard of the problem of the regress of causes: asked what caused A, you explain that it was caused by B. But what caused B? C caused B. But … here is an infinite regress. Does this imply that we never really understand what caused A?

There are other interesting infinite regresses: of reasons, of truths, of resemblances, etc. Huemer offers helpful insights here as well, elaborating various factors that determine whether such infinite regresses are vicious or benign.

Did I mention that Huemer can be iconoclastic? Consider these passages from Approaching Infinity:

  • “There are certain philosophical assumptions that tend to generate strong resistance to my views, and these assumptions are commonly accepted by those interested in issues connected with science and mathematics . . . I have in mind especially the assumptions of modern (twentieth-century) empiricism . . . the doctrine that it is impossible to attain any substantive knowledge of the world except on the basis of observation.”
  • “In the original, core sense of the term ‘number,’ zero is not a number. . . . Why is zero not a number in the original sense? Because a number, in the primary sense, is a property that objects can have, whereas zero is not a property that objects can have.” Huemer extends the concept of number to include zero but explains why such an “extension” does not work for “infinity” as a number.
  • “There are reasons to doubt that sets exist. No one seems to be able to explain what they are, they do not correspond to the ordinary notion of a collection, and core intuitions about sets, particularly the naive comprehension axiom, lead to contradictions.”

In his final chapter, Huemer, taking to heart Nozick’s concerns about coercive philosophy, offers readers his own thoughts about problems that remain: which of his answers leave him concerned or unsatisfied, arguments that are incomplete, areas for further exploration.

As in his earlier books on ethics, epistemology, and politics, Huemer’s style is as easy and enjoyable as his logic is rigorous. Intelligent laypeople who are interested in philosophy can follow his thoughts without difficulty. No Hegel here.

Because I have little background in the philosophy of mathematics, I approached Huemer’s latest effort with trepidation, despite having very much enjoyed his three earlier books. But now that I’ve read it, I highly recommend it. The best news: before finishing Approaching Infinity, you’ll have to read halfway through it, and before that one-quarter of the way, and before that one-eighth, and before that. . . . Yet despite this you can read it through to the very end, and be enthralled on every page.


Editor’s Note: Review of “Approaching Infinity,” by Michael Huemer. Palgrave Macmillan, 2016, 275 pages.





Pandora’s Book


What would you do if you were told that something you believe is not true? It would depend on who was telling you, I guess. It would also depend on how important the belief was to you, and on the strength of the evidence offered, wouldn’t it?

Suppose the belief in question had shaped your career and your view of how the world works. What if you were offered strong evidence that this fundamental belief was just plain wrong? What if you were offered proof?

Would you look at it?

In his 2014 book, A Troublesome Inheritance: Genes, Race and Human History, Nicholas Wade takes the position that “human evolution has been recent, copious, and regional.” Put that way, it sounds rather harmless, doesn’t it? In fact, the book has caused quite a ruckus.


The following is not a review of Wade’s book. It is, instead, more a look at how the book was received and why. There are six parts: a story about Galileo, a summary of what I was taught about evolution in college, a sharper-edged rendering of the book’s hypothesis, an overview of some of the reviews, an irreverent comment on the controversy over Wade’s choice of a word, and, finally, an upbeat suggestion to those engaged in the ongoing nurture vs. nature debate.

1. It is the winter of 1609. In a courtyard of the University of Padua, Galileo Galilei closes one eye and peers at the moon through his recently improved telescope. As he observes the play of light and shadow on its surface, there comes a moment when he realizes that he is looking at the rising and setting of the sun across the mountains and valleys of another world. He is stunned.

Galileo hurries to tell his friend and colleague, Cesare Cremonini, then drags him to the courtyard, urging him to view this wonder. Cesare puts his eye to the scope for just a moment, then pulls his head back, pauses, frowns, and says, “I do not wish to approve of claims about which I do not have any knowledge, and about things which I have not seen . . . and then to observe through those glasses gives me a headache. Enough! I do not want to hear anything more about this.”

What a thing to say.

A little context might help. Cesare taught the philosophy of Aristotle at Padua. Aristotle held that the moon was not a world but a perfect sphere: no mountains, no valleys. Furthermore, the Inquisition was underway, and a tenured professor of philosophy who started rhapsodizing about “another world” would have been well advised to restrict his comments to the Celestial Kingdom. The Pope, you see, agreed with Aristotle. To him, and, therefore, to the Roman Catholic Church, the only “world” was the earth, the immobile center of the universe around which everything else moved. Any other view was taboo. Poor Cesare! Not only did he not want to look through the telescope; he did not want there to be mountains on the moon at all.


It would get worse. Soon Galileo would point his scope at Jupiter and discover its moons, heavenly bodies that clearly weren’t orbiting the earth. Then he would observe and record the astonishing fact that Venus went through phases as it orbited not the earth but the sun. So: Ptolemy was wrong, Copernicus was right, and Cesare Cremonini would go down in history as the epitome of willful ignorance. Galileo, of course, fell into the clutches of the Inquisition and became a hero of the Renaissance.

To be fair to Cesare, the story has been retrospectively streamlined into a sort of scientific morality tale. While the part about Galileo’s discovery is probably more or less right, Cremonini’s remark wasn’t made directly to Galileo. It was reported to him later in a letter from a mutual friend, Paolo Gualdo. The text of that letter is included in Galileo’s work, Opere II. And while those jagged borders of light and dark on the moon, imperfectly magnified, were certainly thought-provoking, to say that the case against Ptolemy was closed on the spot, that night in Padua, would be too neat.

It makes a good story, though, and a nice lens for viewing reactions to scientific breakthroughs. Changing our focus now from the moons of Jupiter to the molecular Rubik’s cube we call the human genome, the question in the present drama is this: who is playing the role of Cremonini?

2. In an undergraduate course, taken decades ago, I was taught that human evolution had more or less stopped when the glaciers retreated about 10,000 years ago. Evolution had been driven primarily by natural selection in response to a changing environment; and, as such changes had, for the time being at least, halted, so too had the evolution of man.

I was taught that races exist only as social constructs, not as meaningful biological categories, and that these constructs are only skin deep. They told me that the social behavior of an individual is not genetic, that behavioral and cognitive propensities just aren’t in our genes.

I was taught that the differences among social organizations are unrelated to the genetic differences of the populations that comprise those various organizations, and that social environments have no influence on human evolution.

3. To show how Wade’s book stirred things up, I will present his central hypothesis with an emphasis on the controversial parts. I’ll avoid scientific jargon, in an effort to make the meaning clearer to my fellow nonscientists.

Wade believes that humanity has been evolving rapidly during the past 30,000 years and continues to evolve rapidly today. It is not just our physical characteristics that continue to evolve. The genes that influence our behavior also evolve. (Yes, that’s what the book says, that our behavior is influenced by our genes.)

He also believes that humanity has evolved differently in different locations, most markedly on the different continents, where the major races evolved. (Yes, the book calls them races.)

These separately evolved genetic differences include those that influence behavior. (Yes, the book says that race is deeper than the skin.)

Furthermore, these genetic differences in behavioral propensities have contributed to the diversity of civilizations. The characteristics of any given civilization, in turn, influence the direction of evolution of the humans who compose it.

Oh, my.

We now know that the earth goes around the sun. But is humanity rapidly evolving? Is there such a thing as race in biological terms? Does the particular set of alleles in an individual’s genome influence how that person behaves? Does the particular frequency of alleles in the collective genetic material of the people who compose a civilization influence the characteristics of that civilization? Do the characteristics of a civilization influence the direction of the evolution of the humans that compose it? Nicholas Wade believes that the answer to all these questions is “yes.” While he does not claim that all of this has been proven, he is saying, in effect, that what I learned in college is not true. Am I now to be cast as Cremonini?

4. There are those who disagree with Wade.

In fact, lots of people didn’t like A Troublesome Inheritance at all. I’ve read about 20 reviews, few of them favorable. Even Charles Murray, writing in the Wall Street Journal, seemed skeptical of some of Wade’s arguments. Most of the others were simply unfavorable, among them reviews in the Washington Post, the New York Review of Books, Scientific American, the New York Times, The New Republic, and even Reason. Slate and The Huffington Post piled on. While Brian Bethune’s review in Maclean’s was gentler than most, it was gently dismissive.

The reactions run from disdain to anger to mockery. Nathaniel Comfort’s satirical review Hail Britannia!, in his blog Genotopia, is the funniest. Donning the persona of a beef-fed, red-faced, pukka sahib at the height of the Raj, he praises Wade’s book as a self-evident explanation of the superiority of the West in general and the British in particular. (I once saw a retired British officer of the Indian Army being told by an Indian government official that he had to move his trailer to a remote area of a tiger preserve to ensure the security of a visiting head of state. He expressed his reluctance with the words, “I’m trying to be reasonable, damn it, but I’m not a reasonable man!”)

There’s some pretty heated language in these reviews, too. That the reviewers are upset is understandable. After all, they have been told that what they believe is not true. And the fellow doing the telling isn’t even a scientist. Sure, Nicholas Wade was a science writer and editor for the New York Times for three decades, but that doesn’t make him a scientist. Several of the reviews charge that Wade relies on so many historical anecdotes, broad-brush impressions, and hastily formed conclusions that it’s a stretch to say the book is based on science at all.

Of course they’re angry. Some of these guys are professors who teach, do research, and write books on the very subject areas that Wade rampages through. If he’s right, then they’re wrong, and their life’s work has been, if not wasted, at the very least misguided.

The consensus is that Wade has made a complete hash of the scientific evidence that he cites to make his case: cherry-picking, mischaracterizing, over-generalizing, quoting out of context, that kind of thing.

Another common complaint is that, wittingly or not, Wade is providing aid and comfort to racists. In fact, the animosity conveyed in some of the reviews may spring primarily from this accusation. In his review in the New York Times, David Dobbs called the book “dangerous.” Whoa. As I said, they don’t like A Troublesome Inheritance at all.

So, is Nicholas Wade just plain wrong, or are his learned critics just so many Cremoninis?

5. While the intricacies of most of the disagreements between Wade and his critics are over my head, one of the criticisms is fairly clear. It is that Wade uses the term “race” inappropriately.

The nub of the race question is that biologists want the word “race” as it applies to humans to be the equivalent of the word “subspecies” as it applies to animals. Because the genetic differences among individual humans and among the different populations of humans are so few, and the boundaries between the populations so indistinct, biologists conclude that there are no races. We are all Homo sapiens sapiens. We are one.

Just south of Flathead Lake in Montana is an 8,000-acre buffalo preserve. One summer day in the mid-’70s, I walked into its visitors center with my wife and father-in-law and asked the woman behind the counter, “Where are the buffalo?” She did not hesitate before hissing, “They’re bison.” Ah, yes: the bison-headed nickel, Bison Bill Cody, and, “Oh, give me a home where the bison roam . . .” You know the critter.

Put it this way: to a National Park Ranger, a buffalo is a bison; to a biological anthropologist, race is a social construct. That doesn’t mean there’s no such thing as a buffalo.

I don’t mean to make light of it. I’ve read the explanations. I’ve studied the printouts that graph and color-code populations according to genetic variation. I’ve studied the maps and charts that show the differences in allele frequencies among the groups. I’ve squinted at the blurry edges of the clusters. I get all that, but this much is clear: the great clusters of genetic variation that correspond to the thousands of years of relative isolation on the various continents that followed the trek out of Africa are real, and because they are genetic, they are biological. In any case, we are not in a biology class; we are in the world, where most people don’t talk about human “subspecies” very often, if ever. They talk about human “races.” To criticize Wade’s use of the term “race” seems pedantic. Whether to call the clusters “races” or “populations” or “groups” is a semantic dispute.

Put it another way: If you put on your “there is no such thing as race” costume for Halloween, you’ll be out trick-or-treating in your birthday suit, unless you stay on campus.

Besides, use any word you want, it won’t affect the reality that the symbol represents. The various “populations” either have slightly different genetic mixes that nudge behavior differently, or they don’t. I mean, are we seeking the truth here or just trying to win an argument?

6. While Wade offers no conclusive proof that genes create behavioral predispositions, he does examine some gene-behavior associations that point in that direction and seem particularly open to further testing. Among them are the MAOA gene and its influence on aggression and violence, and the OXTR gene and its influence on empathy and sensitivity.

What these have in common is that the biochemical chain from the variation of the gene to the behavior is at least partly understood. The chemical agents of the genes in question are monoamine oxidase A and oxytocin, respectively. Because of this, testing would not be restricted to a simple correlation of alleles to overt behaviors in millions of people, though that is a sound way to proceed as well. The thing about the intermediate chemical triggers is that they could probably be measured, manipulated, and controlled for.

The difficult task of controlling for epigenetic, developmental, and environmental variables would also be required but, in the end, it should be possible to determine whether the alleles in question actually influence behavior.

If they do, the next step would be to determine the frequency of the relevant allele patterns in various populations. If the frequency varies significantly, then the discussion about how these genetic differences in behavioral propensities may have contributed to the diverse characteristics of civilizations could be conducted on firmer ground.

If the alleles are proven not to influence behavior, then Wade’s hypothesis would remain unproven, and lots of textbooks wouldn’t have to be tossed out.

Of course, it’s not so simple. The dance between the genome and the environment has been going on since life began. At this point, it might be said that everything in the environment potentially influences the evolution of man, making it very difficult to identify which parts of human behavior, if any, are influenced by our genes. Like Cremonini, I have no wish to approve of claims about which I do not have knowledge.

But the hypothesis that Wade lays out will surely be tested and retested. The technology that makes the testing possible is relatively new, but improving all the time. We can crunch huge numbers now, and measure genetic differences one molecule at a time. It is inevitable that possible links between genes and behavior will be examined more and more closely as the technology improves. Associations among groups of alleles, for example, and predispositions of trust, cooperation, conformity, and obedience will be examined, as will the even more controversial possible associations with intelligence. That is to say, the telescope will become more powerful. And then, one evening, we will be able to peer through the newly improved instrument, and we shall see.

That is, of course, if we choose to look.






Do You Speak Political?


In Alexander Pope’s The Rape of the Lock, several members of the British aristocracy — back when it was an aristocracy — argue about the amorous theft of a lock of hair. A peer of the realm has captured the lock. Sir Plume, another aristocrat, demands that it be returned:

With earnest Eyes, and round unthinking Face,
He first the Snuff-box open'd, then the Case,
And thus broke out — "My Lord, why, what the Devil?
Zounds! — damn the Lock! 'fore Gad, you must be civil!
Plague on't! 'tis past a Jest — nay prithee, Pox!
Give her the Hair” — he spoke, and rapp'd his Box.

“It grieves me much” (reply'd the Peer again)
“Who speaks so well shou'd ever speak in vain.”

I thought of that passage when Drew Ferguson, Liberty’s managing editor, alerted me to the following statement by Timothy M. Wolfe, then president of the University of Missouri, responding to demonstrations about alleged mistreatment of blacks on his campus:

My administration has been meeting around the clock and has been doing a tremendous amount of reflection on how to address these complex matters. We want to find the best way to get everyone around the table and create the safe space for a meaningful conversation that promotes change.

The next day, Wolfe was forced to resign. He had spoken every bit as well as hapless Sir Plume, and yet he spake in vain.

You can see why. If there was ever a meaningless assemblage of bureaucratic buzzwords, Wolfe’s statement was it. “Address complex matters . . . get everyone around the table [query: does that include people like you and me?] . . . safe space . . . meaningful conversation . . . promote change.” It makes you long for just one academic politician to say, “I want a meaningless conversation, so I can get back to my golf game.” That would be honest, at least.

Anyone who speaks this way is either incapable of critical thought or believes that everyone else is. Who among us advocates change without saying what kind of change he means? Who among us wants to have conversations all day, with total strangers, or with people who don’t like us? And who thinks that what university students need is a safe space, as if they were surrounded by ravening wolves, or panzer battalions?

The answer is, I suppose, “the typical college administrator,” supposing that these people can be taken at their word, which on this showing is very hard to do. If you had something sincere and meaningful to say, would you say it like that?

My suggestion is that everyone who speaks that lingo should be forced to resign, no matter what his job and no matter what the occasion. I’ve had it with stuff like that. You’ve had it with stuff like that. I suspect that normal people all over the world have had it with stuff like that. Even members of the official class now faintly sense this fact, and they’re trying to turn the incipient rebellion against meaningless buzzwords into their own new set of meaningless buzzwords.

Before I give an example, I want to say something about the official class or, in the somewhat more common phrase, political class.

For many decades, libertarian intellectuals have engaged in what I call a two-class analysis. Instead of analyzing people’s behavior primarily in terms of economic classes, they think in terms of a political class and a class of everyone else. So, for instance, Bernie Sanders claims to represent the working class, and Hillary Clinton claims to dote on the middle class, but what they really are is people who crave official power and expect to get it from their class affiliation with other such people — politicians of all sorts, czars of labor unions, ethnic demagogues, environmental pooh-bahs, denizens of partisan think tanks, lobbyists for the interests of women who attended Yale Law School, people who share their wisdom with Public Radio, and the like.

The two-class analysis works pretty well at explaining American political culture. But it wasn’t until this year that the phrase political class got into the political mainstream. It happened because the supposed outliers among Republican conservatives started using it. And when such people as Ben Carson used it, it wasn’t a buzzword. It meant something.

But now it has penetrated far enough to produce this:

I’m not gonna be part of the political class in DC. (Jeb Bush to Sean Hannity, October 29, 2015)

Message to the Chamber of Commerce: “Beware! Jeb’s gonna betray you on the immigration issue.” But of course he wouldn’t. He’d just lie about it, as his brother did. The good thing is that for once nobody believed what one of these icons of the official class had to say. The statement was scorned and ignored. Jeb spake in vain.

I suppose he thinks that nobody really understood him. If so, maybe he’s right. He’s used to speaking the language of the political class, and if you do that long enough, you start behaving like people who are trying to speak Spanish and don’t understand that when they think they’re asking where to catch the bus, they’re actually shouting obscenities. They wonder why the audience turns away.

Naturally, the linguistic divide functions in the other way, too. People who speak Political eventually think in Political too, and they can’t comprehend what people who speak a normal human language say or think.

The process of linguistic self-crippling usually starts early. People learn Political in high school or college and soon are astonishing their friends with strange chatter about advocating for change around issues of social justice, or demanding that their college create a safe space for them, or else they’ll shut the m***** f***** down. To understand such comments, people who speak English must laboriously translate them into their own language, a boring process that they seldom complete. The Political speakers then complain that they are not being acknowledged, that they are not, in fact, being listened to. And indeed, they’re not — because they’re not speaking the same language as their audience, or hearing it.

A couple of weeks ago, Neil Cavuto, the business guy on Fox News, interviewed a college student representing the cause currently being advocated for by a nationwide coalition of students who have been speaking out on campuses throughout the country. Their program calls for a $15 an hour minimum wage for all campus workers, free education at all public colleges and universities, and forgiveness of all student loans.

“Who’s going to pay for this?” Cavuto asked.

There was a long silence. The advocate had apparently never heard those words before. Finally she struggled to answer, in her own language. She said that the hoarders would pay.

Now it was Cavuto’s turn to be surprised. He couldn’t understand what she meant by this strange, apparently foreign, word. When English speakers use those two syllables, hoard-ers, they’re referring to people who pile up supplies of some commodity — whether uselessly, out of obsession, or prudentially, to preserve life or comfort in case of emergency. It turned out, however, that in the young woman’s lexicon hoarder meant “the 1% who own 99% of the country’s wealth.” I know, that was somewhat like saying, “The unicorns will pay for it,” but I want to emphasize the linguistic, not the metaphysical, problem. She had obviously come to exist in a monolingual environment in which hoarders means something quite different from what it means to, let’s say, 99% of the population.

No one gets offended by a foreigner’s struggles with the language of a new country. Native speakers may, however, become upset by people who grew up speaking the common language and then suddenly decide to speak something else, to the bafflement of everyone they’re talking to. Or shouting at. Or lecturing, as if from a position of intellectual superiority. And that, I think, is what’s happening now, all over the Western world.

If you want to see the Platonic form and house mother of the political class, try Angela Merkel. It’s not surprising that her constituents are disgusted by her commitment to lecturing them in a foreign language. Responding to criticism that she has precipitated an uncontrolled flood of immigrants into her country, where taxpayers will be expected to support them, Merkel said it is “not in our power how many come to Germany.” This from a woman who runs a welfare society based on the idea of, basically, controlling everything. To make confusion more confusing, she also said that she and her government “have a grip on the situation.” Like other members of the political class, she left it to her listeners to divine the secret meanings of such terms as “power” and “have a grip,” and to discover when certain arrays of sound mean “I’m just kidding you” and when they mean “No, really, I’m telling the truth this time.”

When you’re trying to decipher a foreign language, you’re not just challenged by the vocabulary. You’re also challenged by those sentences in which you think you understand all the individual words, but there’s still just something about them — something about their logic or their assumptions or . . . something — that continues to elude your understanding. (This is especially true of French.) Sigmar Gabriel, Merkel’s Vice Chancellor and Economy Minister, provided a good example when he reproved people who might be alarmed by the terrorist attacks in Paris, in which at least one participant was carrying Syrian asylum-seeker documents. “We should not,” he said, “make them [Syrian migrants] suffer for coming from regions from which the terror is being carried to us.” He appeared to be arguing that because a country generates terrorists we should welcome more people from that country. But that would be ridiculous; he must have meant something else.

Of course, in any language one finds expressions that, one thinks, must be symbolic of broad social attitudes, concepts that are deeply meaningful but that only a native speaker can understand. The difficulty is that there are no native speakers of Political. So when Merkel talks about keeping true to her “vision” and defines that vision by saying, as she said (unluckily) on the day of the Paris attacks, "I am in favor of our showing a friendly face in Germany," her thought remained elusive, even to Germans. What was she talking about? Was she simply babbling to herself?

President Obama’s use of language has long inspired such questions. You know the kind of tourists who inflict themselves on a foreign land, refusing to learn its language, and then get angry at the natives for not understanding them? That’s Obama, and he’s getting worse and worse. On November 21, he visited children in a refugee center in Malaysia and took the occasion to act out his incomprehension of the vast majority of the American populace — the people whom he often, in his own language, denounces as Republicans.

“They [the kids] were indistinguishable from any child in America,” Mr. Obama said after kneeling to look at their drawings and math homework. “And the notion that somehow we would be fearful of them, that our politics would somehow leave us to turn our sights away from their plight, is not representative of the best of who we are.”

More strange Obama statements can be read at the same place in the New York Times.

The repeated somehow (a word to which the president is becoming addicted) signals a profound linguistic divide. Obama marvels at the ordinary language of ordinary Americans. How can they say the things they do? How can they even think them? When they express their fears of such asylum seekers as the Tsarnaev family; when they comment on the many news reports, written in plain English, showing that the vast majority of people now seeking asylum in the West are not little kids from Muslim South Asian families enjoying the hospitality of the officially Muslim South Asian state of Malaysia but young men from the hotbed of Islamic fanaticism, bound for non-Islamic countries; when they reflect that these young men are destined to spend years living on the resentful charity of neighbors who have been forced by their governments to support them — when people speak of these things, Obama interprets all objections, fears, and caveats as the product of a hideous moral deficiency that has somehow insinuated itself into the body politic. Even supposing he’s right on the policy issue — which I don’t think he is — the word somehow is enough to convince most people that he’s no longer speaking their language.

This dawning realization, not just about Obama but about the entire political class, is good news. It means that people are finally thinking about the private language of the political elite. And here’s some more good news, though from an unlikely source.

Last week, I saw an announcement that fellowships are being offered by something called the Center of Theological Inquiry in Princeton, New Jersey. The Center is inviting academics to come and be supported for eight months of research and “conversations” about the “societal” implications of “astrobiology.” The program appears to be supported, at least in part, by those friendly old astrobiologists, NASA.

The announcement begins in this way: “Societal understanding of life on earth has always developed in dialogue with scientific investigations of its origin and evolution.” That’s an assumption that may be questioned. It recalls the typical first sentence of a freshman essay: “Since the beginning of time, humanity has always been troubled by the problem of indoor plumbing.” But the “Societal understanding” sentence goes beyond that — although it’s hard to tell where it’s going, unless one pictures Neanderthals holding scientific seminars about the validity of Darwinism before deciding whether hunting and gathering is a good idea.

Yet the next sentence clearly has a hopeful tendency: “Today, the new science of astrobiology extends these investigations to include the possibility of life in the universe.”

True, the syntax is bad. Investigations don’t include possibilities. But you have to agree with the last part of the sentence: there is some possibility of life in the universe. And I believe that’s a good thing.






Marooned on Mars


The final story in Ray Bradbury’s collection The Martian Chronicles is called “The Million Year Picnic.” In it, an American family escapes the nuclear destruction of the earth and lands on Mars, where the father tells his children, “Tomorrow you will see the Martians.” The next day he takes them on a picnic near an ancient canal, where they look into the water and see their own reflections. Simply by moving there and colonizing, they have become Martians. Mark Watney (Matt Damon) makes a similar point when he is stranded on Mars in Ridley Scott’s The Martian: “They say once you grow crops somewhere, you have officially colonized it. So, technically, I colonized Mars.”

The Martian is a tense, intelligent, and engaging story about an astronaut who is left for dead when his fellow crew members are forced to make an emergency launch to escape a destructive sandstorm. Knocked out rather than killed, he regains consciousness and discovers that he is utterly alone on the planet. Solar panels can provide him with renewable energy, oxygen, heat, and air pressure. But the next mission to Mars isn’t due for another five years, and he has enough food to last just 400 days. What can he do?

There is something fascinating about this storyline of being marooned or abandoned and left entirely to one’s own devices, whether the protagonist be Robinson Crusoe on his desert island; the miners of The 33 (2015), trapped in a Chilean copper mine; Tom Hanks, Cast Away (2000) in the Pacific; the Apollo 13 (1995) crew, trapped in their capsule; Sandra Bullock, lost in space (Gravity, 2013); or even Macaulay Culkin, left Home Alone (1990), just to name a few. These films allow us to consider what we would do in such a situation. Could we survive?

I well remember the time I was left behind at a gas station at the age of ten on the way to a family camping trip. I had been riding in the camper of the pickup truck while my parents and sister rode in the cab. I had stepped out of the camper to tell my mother I was going to the bathroom, but before I could knock on her window, my father shoved the transmission into gear and started driving away. I didn’t know where we were, where we were going, or how I would contact my parents after they left without me. I was even more afraid of strangers than I was of being lost. It would be at least 300 miles before they stopped again for gas, and even then, they might not look into the camper until nighttime, and how would they find me after that? All of this went through my mind in a flash. Then I leapt onto the rear bumper of the truck as it eased past me and clung tightly to the handle of the camper.

I was hidden from sight by the trailer we were pulling behind us. No one would see me there, and if I jumped off or lost my balance, I would be crushed by the trailer. As we approached the freeway and began to pick up speed, I realized I had only one chance for a safe outcome. I managed to pry open the door of the camper, squeeze through the narrow opening, and collapse onto the floor, pulling the door shut behind me. Instead of being frightened by the experience, I was exhilarated by my successful maneuver and problem-solving skills. I could do anything! My only regret was that no one saw my amazing feat.

One of the reasons we enjoy movies like The Martian is that they allow us to participate with the protagonist in solving the problem of survival. Rather than curl up and wait to die, à la Tom Hanks’ character in Cast Away (honestly — five years on a tropical island and he’s still living in a cave, talking to a volleyball? He hasn’t even made a shelter or a hammock?), Watney assesses his supplies and figures out how to survive until the next mission arrives. A botanist and an engineer, he exults, “I’m going to science the shit out of this!” And he does. He makes the difficult decision to cut up some of his precious potatoes for seed, knowing that his only chance for survival is to grow more food. He figures out how to make water, how to extend his battery life, how to deal with the brutally freezing temperatures.

He also keeps a witty video journal, through which he seems to speak directly to the audience. This allows us to remain intensely engaged in what he is doing and avoids the problem encountered in Robert Redford’s 2013 castaway film All is Lost, where perhaps three sentences are uttered in the entire dreary film. We like Watney’s upbeat attitude, his irreverent sense of humor, his physical and mental prowess, and his relentless determination to survive. We try to anticipate his next move.

The visual effects are stunning. Many of them would not have been possible even three years ago, before the innovations created for Alfonso Cuarón’s Gravity (2013). The techniques used to create weightlessness as the astronauts slither through the space station are especially impressive; we simply forget that they aren’t really weightless. The unfamiliar landscape — the red desert of Wadi Rum, Jordan, where the outdoor scenes were filmed — is a bit reminiscent of a futuristic Monument Valley. It contributes to the western-hero sensibility while creating a feeling that we really are on Mars. I’m not sure the science works in the dramatic ending, but I’m willing to suspend my disbelief. The Martian is smart, entertaining, and manages to work without a single antagonist — nary a nasty businessman or greedy bureaucrat can be found. If that’s what our future holds, I’m all for it.


Editor's Note: Review of "The Martian," directed by Ridley Scott. Scott Free Productions, 20th Century Fox, 2015, 142 minutes.





Fakers and Enablers


Last month, a UCLA graduate student in political science named Michael LaCour was caught faking reports of his research — research that in December 2014 had been published, with much fanfare, in Science, one of the two most prestigious venues for “hard” (experimental and quantifiable) scientific work. Because of his ostensible research, he had been offered, again with much fanfare, a teaching position at prestigious Princeton University. I don’t want to overuse the word “prestigious,” but LaCour’s senior collaborator, a professor at prestigious Columbia University, a person whom he had enlisted to enhance the prestige of his purported findings, is considered one of the most prestigious number-crunchers in all of poli sci. LaCour’s dissertation advisor at UCLA is also believed by some people to be prestigious. LaCour’s work was critiqued by presumably prestigious (though anonymous) peer reviewers for Science, and recommended for publication by them. What went wrong with all this prestigiousness?

Initial comments about the LaCour scandal often emphasized the idea that there’s nothing really wrong with the peer review system. The New Republic was especially touchy on this point. The rush to defend peer review is somewhat difficult to explain, except as the product of fears that many other scientific articles (about, for instance, global warming?) might be suspected of being more pseudo than science; despite reviewers’ heavy stamps of approval, they may not be “settled science.” The idea in these defenses was that we must see l’affaire LaCour as a “singular” episode, not as the tin can that’s poking through the grass because there’s a ton of garbage underneath it. More recently, suspicions that Mt. Trashmore may be as high as Mt. Rushmore have appeared even in the New York Times, which on scientific matters is usually more establishment than the establishment.

I am an academic who shares those suspicions. LaCour’s offense was remarkably flagrant and stupid, so stupid that it was discovered at the first serious attempt to replicate his results. But the conditions that put LaCour on the road to great, though temporary, success must operate, with similar effect, in many other situations. If the results are not so flagrantly wrong, they may not be detected for a long time, if ever. They will remain in place in the (pseudo-) scientific literature — permanent impediments to human knowledge. This is a problem.

But what conditions create the problem? Here are five.

1. A politically correct, or at least fashionably sympathetic, topic of research. The LaCour episode is a perfect example. He was purportedly investigating gay activists’ ability to garner support for gay marriage. And his conclusion was one that politically correct people, especially donors to activist organizations, would like to see: he “found” that person-to-person activism works amazingly well. It is noteworthy that Science published his article about how to garner support for gay marriage without objecting to the politically loaded title: “When contact changes minds: An experiment on transmission of support for gay equality.” You may think that recognition of gay marriage is equivalent to recognition of gay equality, and I may agree, but anyone with even a whiff of the scientific mentality should notice that “equality” is a term with many definitions, and that the equation of “equality” with “gay marriage” is an end-run around any kind of debate, scientific or otherwise. Who stands up and says, “I do not support equality”?
2. The habit of reasoning from academic authority. LaCour’s chosen collaborator, Donald Green, is highly respected in his field. That may be what made Science and its peer reviewers pay especially serious attention to LaCour’s research, despite its many curious features, some of which were obvious. A leading academic researcher had the following reaction when an interviewer asked him about the LaCour-Green contribution to the world’s wisdom:

“Gee,” he replied, “that's very surprising and doesn't fit with a huge literature of evidence. It doesn't sound plausible to me.” A few clicks later, [he] had pulled up the paper on his computer. “Ah,” he [said], “I see Don Green is an author. I trust him completely, so I'm no longer doubtful.”

3. The prevalence of the kind of academic courtesy that is indistinguishable from laziness or lack of curiosity. LaCour’s results were counterintuitive; his data were highly exceptional; his funding (which turned out to be bogus) was vastly greater than anything one would expect a graduate student to garner. That alone should have inspired many curious questions. But, Green says, he didn’t want to be rude to LaCour; he didn’t want to ask probing questions. Jesse Singal, a good reporter on the LaCour scandal, has this to say:

Some people I spoke to about this case argued that Green, whose name is, after all, on the paper, had failed in his supervisory role. I emailed him to ask whether he thought this was a fair assessment. “Entirely fair,” he responded. “I am deeply embarrassed that I did not suspect and discover the fabrication of the survey data and grateful to the team of researchers who brought it to my attention.” He declined to comment further for this story.

Green later announced that he wouldn’t say anything more to anyone, pending the results of a UCLA investigation. Lynn Vavreck, LaCour’s dissertation advisor at UCLA, had already made a similar statement. They are being very circumspect.

4. The existence of an academic elite that hasn’t got time for its real job. LaCour asked Green, a virtually total stranger, to sign onto his project: why? Because Green was prestigious. And why is Green prestigious? Partly for signing onto a lot of collaborative projects. In his relationship with LaCour, there appears to have been little time for Green to do what professors have traditionally done with students: sit down with them, discuss their work, exclaim over the difficulty of getting the data, laugh about the silly things that happen when you’re working with colleagues, share invidious stories about university administrators and academic competitors, and finally ask, “So, how in the world did you get those results? Let’s look at your raw data.” Or just, “How did you find the time to do all of this?”
It has been observed — by Nicholas Steneck of the University of Michigan — that Green put his name on a paper reporting costly research (research that was supposed to have cost over $1 million), without ever asking the obvious questions about where the money came from, and how a grad student got it.

“You have to know the funding sources,” Steneck said. “How else can you report conflicts of interest?” A good point. Besides — as a scientist, aren’t you curious? Scientists’ lack of curiosity about the simplest realities of the world they are supposedly examining has often been noted. It is a major reason why the scientists of the past generation — every past generation — are usually forgotten, soon after their deaths. It’s sad to say, but may I predict that the same fate will befall the incurious Professor Green?

As a substitute for curiosity, guild courtesy may be invoked. According to the New York Times, Green said that he “could have asked about” LaCour’s claim to have “hundreds of thousands in grant money.” “But,” he continued, “it’s a delicate matter to ask another scholar the exact method through which they’re paying for their work.”

There are several eyebrow-raisers there. One is the barbarous transition from “scholar” (singular) to “they” (plural). Another is the strange notion that it is somehow impolite to ask one’s colleagues — or collaborators! — where the money’s coming from. This is called, in the technical language of the professoriate, cowshit.

The fact that ordinary-professional, or even ordinary-people, conversations seem never to have taken place between Green and LaCour indicates clearly enough that nobody made time to have them. As for Professor Vavreck, LaCour’s dissertation director and his collaborator on two other papers, her vita shows a person who is very busy, very busy indeed, a very busy bee — giving invited lectures, writing newspaper columns, moderating something bearing the unlikely name of the “Luskin Lecture on Thought Leadership with Hillary Rodham Clinton,” and, of course, doing peer reviews. Did she have time to look closely at her own grad student’s work? The best answer, from her point of view, would be No; because if she did have the time, and still ignored the anomalies in the work, a still less favorable view would have to be entertained.
Oddly, The New Republic praised the “social cohesiveness” represented by the Green-LaCour relationship, although it mentioned that “in this particular case . . . trust was misplaced but some level of collegial confidence is the necessary lubricant to allow research to take place.” Of course, that’s a false alternative — full social cohesiveness vs. no confidence at all. “It’s important to realize,” opines TNR’s Jeet Heer, “that the implicit trust Green placed in LaCour was perfectly normal and rational.” Rational, no. Normal, yes — alas.

Now, I don’t know these people. Some of what I say is conjecture. You can make your own conjectures, on the same evidence, and see whether they are similar to mine.

5. A peer review system that is goofy, to say the least.

It is goofiest in the arts and humanities and the “soft” (non-mathematical) social sciences. It’s in this, the goofiest, part of the peer-reviewed world that I myself participate, as reviewer and reviewee. Here is a world in which people honestly believe that their own ideological priorities count as evidence, often as the determining evidence. Being highly verbal, they are able to convince themselves and others that saying “The author has not come to grips with postcolonialist theory” is on the same analytical level as saying, “The author has not investigated the much larger data-set presented by Smith (1997).”

My own history of being reviewed — by and large, a very successful history — has given me many more examples of the first kind of “peer reviewing” than of the second kind. Whether favorable or unfavorable, reviewers have more often responded to my work on the level of “This study vindicates historically important views of the text” or “This study remains strangely unconvinced by historically important views of the episode,” than on the level of, “The documented facts do not support [or, fully support] the author’s interpretation of the sequence of events.” In fact, I have never received a response that questioned my facts. The closest I’ve gotten is (A) notes on the absence of any reference to the peer reviewer’s work; (B) notes on the need for more emphasis on the peer reviewer’s favorite areas of study.

This does not mean that my work has been free from factual errors or deficiencies in the consultation of documentary sources; those are unavoidable, and it would be good for someone to point them out as soon as possible. But reviewers are seldom interested in that possibility. Which is disturbing.

I freely admit that some of the critiques I have received have done me good; they have informed me of other people’s points of view; they have shown me where I needed to make my arguments more persuasive; they have improved my work. But reviewers’ interest in emphases and ideological orientations rather than facts and the sources of facts gives me a very funny feeling. And you can see by the printed products of the review system that nobody pays much attention to the way in which academic contributions are written, even in the humanities. I have been informed that my writing is “clear” or even “sometimes witty,” but I have never been called to account for the passages in which I am not clear, and not witty. No one seems to care.

But here’s the worst thing. When I act as a reviewer, I catch myself falling into some of the same habits. True, I write comments about the candidates’ style, and when I see a factual error or notice the absence of facts, I mention it. But it’s easy to lapse into guild language. It’s easy to find words showing that I share the standard (or momentary) intellectual “concerns” and emphases of my profession, words testifying that the author under review shares them also. I’m not being dishonest when I write in this way. I really do share the “concerns” I mention. But that’s a problem. That’s why peer reviewing is often just a matter of reporting that “Jones’ work will be regarded as an important study by all who wish to find more evidence that what we all thought was important actually is important.”
Indeed, peer reviewing is one of the most conservative things one can do. If there’s no demand that facts and choices be checked and assessed, if there’s a “delicacy” about identifying intellectual sleight of hand or words-in-place-of-ideas, if consistency with current opinion is accepted as a value in itself, if what you get is really just a check on whether something is basically OK according to current notions of OKness, then how much more conservative can the process be?

On May 29, when LaCour tried to answer the complaints against him, he severely criticized the grad students who had discovered, not only that they couldn’t replicate his results, but that the survey company he had purportedly used had never heard of him. He denounced them for having gone off on their own, doing their own investigation, without submitting their work to peer review, as he had done! Their “decision to . . . by-pass the peer-review process” was “unethical.” What mattered wasn’t the new evidence they had found but the fact that they hadn’t validated it by the same means with which his own “evidence” had been validated.

In medicine and in some of the natural sciences, unsupported guild authority does not impinge so greatly on the assessment of evidence as it does in the humanities and the social sciences. Even there, however, you need to be careful. If you are suspected of being a “climate change denier” or a weirdo about some medical treatment, the maintainers of the status quo will give you the bum’s rush. That will be the end of you. And there’s another thing. It’s true: when you submit your research about the liver, people will spend much more time scrutinizing your stats than pontificating about how important the liver is or how important it is to all Americans, black or white, gay or straight, that we all have livers and enjoy liver equality. But the professional competence of these peer reviewers will then be used, by The New Republic and other conservative supporters of the status quo in our credentialed, regulated, highly professional society, as evidence that there is very little, very very very little, actual flim-flam in academic publication. But that’s not true.





Unfinished Business

Back in the mid-1990s, Wall Street Journal reporter Ron Suskind chronicled the struggles of a poor, black honor student named Cedric Jennings as the latter aspired to get out of an inner-city high school and into a top-notch university. Suskind’s pieces garnered him a Pulitzer Prize and led to a book-length treatment of his subject, A Hope in the Unseen: An American Odyssey from the Inner City to the Ivy League (Broadway Books, 1998, 372 pages).

Cedric, a junior at Washington DC’s Frank W. Ballou Senior High School, has to suffer the slings and arrows of a student body that largely takes a dim view of academic achievement. Part of a small group of accelerated science and math students, he dreams of being accepted into MIT’s Minority Introduction to Engineering and Science (MITES) program, offered the summer before his senior year. Anywhere from one-third to one-half of those successfully completing the program go on to matriculate at MIT, and Cedric has his heart set on being one of them and majoring in mathematics.
Although he makes it into the MITES program, he quickly finds himself outclassed: most of the black students are middle-class, hailing from academically superior suburban high schools and having much higher SATs. Decidedly at a disadvantage, he nonetheless manages to complete the program. But during a meeting with academic advisor Leon Trilling, he is told that his chances of getting into MIT aren’t that good. Particularly telling are his SAT scores, 380 verbal and 530 math, for a combined total of 910 out of a possible 1600. Professor Trilling suggests that he apply instead to the University of Maryland and Howard University, even giving him the names of particular professors to contact. The distraught Cedric will have none of it though, even going so far as to accuse Trilling of being a racist.

If he can’t get into MIT, he’ll prove the critics wrong by getting into an Ivy League school. Pulling his SATs up to 960 from 910, he applies to Brown University because it has an impressive applied mathematics department. He’s accepted, and Suskind chronicles the trials and tribulations of his freshman year. The book came out during Cedric’s junior year, Suskind commenting in the Epilogue, “His major, meanwhile, is in applied math, a concentration that deals with the tangible applications of theorems, the type of high-utility area with which he has always been most comfortable” (364).

Thus concludes the summary of the book published 17 years ago. As the years went by, I wondered how Cedric had fared during the remainder of his Brown experience and after graduation. Every now and then I came across some tidbit of information. Although I was expecting to find him putting his major in applied mathematics to work in that field, I discovered instead that he had gone back to school, earning a master’s in education at Harvard and a master’s in social work at the University of Michigan; he had been involved in social work and then had gone on to become a director of government youth programs. Nothing particularly unusual about that, though; lots of folks get graduate degrees in fields other than their undergraduate major and end up veering off onto other career paths.

But I discovered that a revised and updated edition of A Hope in the Unseen had come out back in 2005, and I was surprised to come across this statement in the Afterword describing Cedric’s graduation from Brown: “Then Cedric proceeded, arm in arm with Zayd, Nicole, and a many-hued host of others, to receive his Bachelor of Arts degree, with a major in education, a minor in applied math, and a 3.3 grade point average” (377). Suskind casually lets slip that Cedric didn’t end up with a major in applied mathematics after all! That he only minored in that field means he didn’t have to take the final upper-level courses required for a major.
Although the book does have Cedric contemplating a second major in education along with his original major in applied mathematics, doubling up in that way just didn’t make much sense. As with his MITES experience, he found himself outclassed at Brown, having to compete with students from academically superior suburban schools, students with SATs hundreds of points higher than his own. He had trouble with some of his freshman courses, even his specialty, having to drop a course in discrete mathematics. Would it not have been more prudent, under those circumstances, simply to focus on one’s original major and on required courses without having to worry about the additional academic load of a new, second major? And if one did take on a second major and then had to scale back on the total number of courses taken, would it not have made more sense to scale back on the second major, getting a minor in that field instead, while going on with the original major? Something just wasn’t adding up here.

Although Brown had been unaware that Cedric was the subject of a series of articles in the Wall Street Journal when he was admitted under Brown’s affirmative action program, the college most certainly would have found out in short order, and it would have been in its best interest that this particular admit not get in over his head. Education is a much “safer” major than applied mathematics, and it is a popular major with many African Americans.

Cedric believed that getting into a top-notch university was a reward of sorts for all that he had to put up with through high school: “I could never dream about, like going to UDC or Howard, or Maryland or wherever . . . It just wouldn’t be worth what I’ve been through” (49). But it appears he may have had to strike a bargain in order to achieve that end. The young man who wanted to major in mathematics at MIT and make mathematics a career instead bailed out of mathematics altogether with just a minor. Why was the motivation behind such a tantalizing shift of academic focus not duly chronicled by Suskind in the Afterword to the revised and updated edition? He offers no explanation whatsoever for Cedric’s stopping short of a full major in applied mathematics, furtively sneaking the fact by as if hoping the reader wouldn’t notice.
Suskind had also made Leon Trilling out to be some kind of Prince of Darkness thwarting the Journey of the Hero, and this is a most ungenerous characterization. In 1995, the mean math SAT score of entering freshmen at MIT was 756 out of a possible 800; Cedric’s score was 530. Dr. Trilling was absolutely correct to wonder whether Cedric was a good fit for MIT at the time. Trilling’s advice to Cedric to apply to the University of Maryland and Howard University was based on the fact that those schools were involved in a project with MIT called the Engineering Coalition of Schools for Excellence in Education and Leadership (ECSEL), a program aimed at underrepresented minorities in the field of engineering. Had Cedric been accepted by either of those schools and majored in engineering, he could have had another shot at MIT as a transfer student if his grades had been good enough and if he had been able to boost his SATs. Trilling was actually trying to keep Cedric’s STEM (science-technology-engineering-math) aspirations alive. Even if Cedric still fell short of getting into MIT, he could have gone on to get an engineering degree from Maryland or Howard and contribute to a STEM field in which blacks are woefully underrepresented relative to such fields as education and social work.

During the drafting of this review, I discussed its content with a friend who urged me to check out chapter three of Malcolm Gladwell’s most recent book, David and Goliath: Underdogs, Misfits and the Art of Battling Giants (Allen Lane, 2013, 305 pages). That chapter was titled, “If I’d gone to the University of Maryland, I’d still be in science.” Caroline Sacks — a pseudonym — is a straight-A “science girl” all the way up through high school in Washington, DC. Applying to Brown University as first choice, with the University of Maryland as her backup choice, she’s accepted by both and of course chooses Brown. But she has to drop freshman chemistry at Brown and take it over again as a sophomore. Then she has trouble with organic chemistry, finally having to leave her STEM track altogether and switch to another major. She achieves an Ivy League degree from Brown, but at the expense of her passion for science. Had she gone to Maryland instead, she believes, she’d still be in science. Had Cedric gone to Maryland (or Howard) instead, would he have gone on to realize his STEM aspirations?

A Hope in the Unseen has become widely assigned classroom reading, even spawning a number of accompanying classroom study guides. Although it is indeed an inspiring story, it’s simply not all that it’s cracked up to be. Legions of readers have assumed as a matter of course that Cedric proved the naysayers wrong by earning a major in applied mathematics at Brown when his dream of earning a major in mathematics at MIT was derailed by his low SATs. In reality, Cedric had to leave applied mathematics at Brown — and had he instead been admitted to MIT and attempted a major in mathematics there, he probably would have had to leave much earlier, perhaps even having to forgo the consolation prize of a minor.

Although many consider Cedric’s experience at Brown an affirmative action success story, his experience actually highlights the problems inherent in affirmative action policies that lower academic standards for minorities.


Editor's Note: Review of "A Hope in the Unseen: An American Odyssey from the Inner City to the Ivy League," by Ron Suskind. Revised and updated edition. Broadway Books, 2005, 390 pages.



© Copyright 2018 Liberty Foundation. All rights reserved.



Opinions expressed in Liberty are those of the authors and not necessarily those of the Liberty Foundation.

All letters to the editor are assumed to be for publication unless otherwise indicated.