And Now For Something Completely Different


What can one say about a book on infinity that hasn’t been said before? An infinite number of things, presumably, but I’ll make this brief.

The book, Approaching Infinity, is by philosopher Michael Huemer. Perhaps you’ve heard of him — but why? If you’re a libertarian, but not a philosopher or “into philosophy,” it’s likely because of his well-received book, The Problem of Political Authority (2013).

If you’re a libertarian and, though not a philosopher, are into philosophy, you may also be aware of Huemer’s excellent online-available essays on the right to own a gun and the right to immigrate. (I imagine readers on both the Left and Right are now gnashing their teeth.)


But Huemer is nothing if not prolific. Libertarians who are really into philosophy may even be aware of his criticism of Ayn Rand, his argument that we sometimes have a duty to disregard the law, his argument that attorneys have a moral obligation not to defend unjust causes, his criticism of the US government’s War on Drugs, and his essay on why people are irrational about politics (also a TED talk!).

But — and this is the point I want to stress — even though he’s published much of interest to libertarians, Huemer, like Robert Nozick before him, is clearly a person better described as a philosopher who is a libertarian than as a “libertarian philosopher.” His first book, Skepticism and the Veil of Perception (2001), dealt with epistemology (the field of study that led to his hiring at University of Colorado, Boulder); his second, Ethical Intuitionism (2005), focused on ethics. Now, having covered epistemology, ethics, and politics, Huemer, in Approaching Infinity, turns to the philosophy of mathematics (with an occasional nod to some issues in the philosophy of science). Clearly a well-rounded guy, philosophically speaking.

Also an iconoclast:

  • Although most philosophers since Descartes have opposed direct realism (the view that we are directly aware of real, physical objects), Huemer argues for just that point of view.
  • Although most modern philosophers oppose ethical intuitionism, the view that we can have direct knowledge of objective moral truths, Huemer again argues for exactly that.
  • Although most people readily accept political authority, and most philosophers are not anarchists, Huemer argues both against political authority and for a capitalist version of anarchy.

So it should surprise no one that Huemer, in analyzing some foundational issues in mathematics in order to solve various paradoxes of infinity, is willing to advance bold claims.

Almost everyone is familiar with at least some infinity paradoxes. We’ve all heard about Zeno and why that ball coming at you will never reach you, or why Achilles can never catch the tortoise. And you’re probably aware that strangeness results when even simple arithmetic is applied to infinity. E.g., ∞ = ∞ + 1. Subtract ∞ from both sides: 0 = 1.
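
Zeno’s puzzle, at least, dissolves once the arithmetic is written out: the infinitely many half-distances the ball must cross sum to a finite total. A standard worked equation (my illustration, not Huemer’s text):

```latex
% The ball must first cross half the distance, then half the remainder, and so on.
% Infinitely many terms, but a finite sum:
\[
\sum_{n=1}^{\infty} \frac{1}{2^n} = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots = 1
\]
% So "infinitely many tasks" need not mean "infinite total distance (or time)."
```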

But I had no idea there were at least 17 different paradoxes associated with infinity. From Hilbert’s hotel to Gabriel’s horn . . . from Thomson’s lamp to Benardete’s paradox . . . from ancient Greek problems to dilemmas developed only in the past century . . . Huemer describes them all and then develops the background needed to solve them. There are discussions of actual and potential infinities, of Georg Cantor’s set theory, of the theory of numbers, of time and space, of both infinity and infinitesimals. Of the metaphysically impossible and the logically impossible. Of the principle of phenomenal conservatism (which Huemer introduced in his epistemology book), and even of the synthetic a priori.


In building the background to handle the infinity paradoxes, Huemer argues that extensive infinities (including the cardinal numbers) can exist, but not as specific magnitudes. Thus the positive integers are infinite, in the sense that for any such number you can find higher positive integers, but not in the sense that there is a number “infinity” that is higher than all the positive integers. You cannot add and subtract “infinity” as I did earlier. And he argues that while extensive magnitudes (time, space, volume) can sensibly be infinite in this sense, infinite intensive magnitudes (such as temperature, electrical resistance, and attenuation coefficient) are metaphysically impossible. This distinction allows several paradoxes to be solved, or avoided.
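
Huemer’s point about adding and subtracting can be put in Cantor’s notation. A sketch of the standard cardinal arithmetic (my illustration, not a quotation from the book): addition involving the infinite cardinal ℵ₀ is well defined, but subtraction is not, which is exactly what blocks the “0 = 1” trick.

```latex
% Cardinal addition with aleph-null is well defined and absorbing:
\[
\aleph_0 + 1 = \aleph_0, \qquad \aleph_0 + \aleph_0 = \aleph_0 .
\]
% But subtraction is undefined: since \aleph_0 + n = \aleph_0 for every finite n,
% "\aleph_0 - \aleph_0" would have to equal 0, 1, 17, ... all at once.
% With no unique value available, the cancellation step in "0 = 1" is illegitimate.
```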

A fascinating section of the text discusses various forms of impossibility. Sometimes philosophers note that X is physically impossible, given the laws of the universe as we now understand them, but nonetheless that it could be possible in a similar but slightly different possible world — say, with a slightly different Coulomb constant. But at other times X is deeply physically impossible. Consider these two alternatives described by Huemer:

Compare this pair of questions:

A. If I were to add a teaspoon of salt to this recipe, how would it taste?

B. If I were to add a teaspoon of salt to this recipe in an alternative possible world in which salt is a compound of plutonium and mercury and we are sea creatures who evolved living on kelp and plankton, how would it taste?

Huemer notes that it’s not merely that we have no idea how to answer B but that, more importantly, even if we could answer B, the answer would give us no intuitions and be of no help in trying to figure out the answer to A. Though Huemer makes this point in the context of determining what counts as a solution to an infinity paradox, it also has direct application to various thought experiments in other areas of philosophy and to what counts as a helpful or unhelpful thought experiment. (On this see my own work, “Experiment THIS!: Libertarianism and Thought Experiments.”)

Related to the paradoxes of infinity are the problems of infinite regress. You may have heard of the problem of the regress of causes: asked what caused A, you explain that it was caused by B. But what caused B? C caused B. But … here is an infinite regress. Does this imply that we never really understand what caused A?

There are other interesting infinite regresses: of reasons, of truths, of resemblances, etc. Huemer offers helpful insights here as well, elaborating various factors that determine whether such infinite regresses are vicious or benign.

Did I mention that Huemer can be iconoclastic? Consider these passages from Approaching Infinity:

  • “There are certain philosophical assumptions that tend to generate strong resistance to my views, and these assumptions are commonly accepted by those interested in issues connected with science and mathematics . . . I have in mind especially the assumptions of modern (twentieth-century) empiricism . . . the doctrine that it is impossible to attain any substantive knowledge of the world except on the basis of observation.”
  • “In the original, core sense of the term ‘number,’ zero is not a number. . . . Why is zero not a number in the original sense? Because a number, in the primary sense, is a property that objects can have, whereas zero is not a property that objects can have.” Huemer extends the concept of number to include zero but explains why such an “extension” does not work for “infinity” as a number.
  • “There are reasons to doubt that sets exist. No one seems to be able to explain what they are, they do not correspond to the ordinary notion of a collection, and core intuitions about sets, particularly the naive comprehension axiom, lead to contradictions.”
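
The contradiction Huemer alludes to in that last passage is Russell’s paradox, which follows in one line from the naive comprehension axiom. The derivation below is the standard one, not a quotation from the book:

```latex
% Naive comprehension: for any property P there is a set of exactly the things with P.
% Take P(x) to be "x is not a member of itself":
\[
R = \{\, x : x \notin x \,\} \quad\Longrightarrow\quad R \in R \iff R \notin R ,
\]
% a contradiction; so naive comprehension must be abandoned or restricted.
```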

In his final chapter, Huemer, taking to heart Nozick’s concerns about coercive philosophy, offers readers his own thoughts about problems that remain: which of his answers leave him concerned or unsatisfied, arguments that are incomplete, areas for further exploration.

As in his earlier books on ethics, epistemology, and politics, Huemer’s style is as easy and enjoyable as his logic is rigorous. Intelligent laypeople who are interested in philosophy can follow his thoughts without difficulty. No Hegel here.

Because I have little background in the philosophy of mathematics, I approached Huemer’s latest effort with trepidation, despite having very much enjoyed his three earlier books. But now that I’ve read it, I highly recommend it. The best news: before finishing Approaching Infinity, you’ll have to read halfway through it, and before that one-quarter of the way, and before that one-eighth, and before that. . . . Yet despite this you can read it through to the very end, and be enthralled on every page.

Editor's Note: Review of "Approaching Infinity," by Michael Huemer. Palgrave Macmillan, 2016, 275 pages.


Pandora’s Book


What would you do if you were told that something you believe is not true? It would depend on who was telling you, I guess. It would also depend on how important the belief was to you, and on the strength of the evidence offered, wouldn’t it?

Suppose the belief in question had shaped your career and your view of how the world works. What if you were offered strong evidence that this fundamental belief was just plain wrong? What if you were offered proof?

Would you look at it?

In his 2014 book, A Troublesome Inheritance: Genes, Race and Human History, Nicholas Wade takes the position that “human evolution has been recent, copious, and regional.” Put that way, it sounds rather harmless, doesn’t it? In fact, the book has caused quite a ruckus.


The following is not a review of Wade’s book. It is, instead, more a look at how the book was received and why. There are six parts: a story about Galileo, a summary of what I was taught about evolution in college, a sharper-edged rendering of the book’s hypothesis, an overview of some of the reviews, an irreverent comment on the controversy over Wade’s choice of a word, and, finally, an upbeat suggestion to those engaged in the ongoing nurture vs. nature debate.

1. It is the winter of 1609. In a courtyard of the University of Padua, Galileo Galilei closes one eye and peers at the moon through his recently improved telescope. As he observes the play of light and shadow on its surface, there comes a moment when he realizes that he is looking at the rising and setting of the sun across the mountains and valleys of another world. He is stunned.

Galileo hurries to tell his friend and colleague, Cesare Cremonini, then drags him to the courtyard, urging him to view this wonder. Cesare puts his eye to the scope for just a moment, then pulls his head back, pauses, frowns, and says, “I do not wish to approve of claims about which I do not have any knowledge, and about things which I have not seen . . . and then to observe through those glasses gives me a headache. Enough! I do not want to hear anything more about this.”

What a thing to say.

A little context might help. Cesare taught the philosophy of Aristotle at Padua. Aristotle held that the moon was not a world but a perfect sphere: no mountains, no valleys. Furthermore, the Inquisition was underway, and a tenured professor of philosophy who started rhapsodizing about “another world” would have been well advised to restrict his comments to the Celestial Kingdom. The Pope, you see, agreed with Aristotle. To him, and, therefore, to the Roman Catholic Church, the only “world” was the earth, the immobile center of the universe around which everything else moved. Any other view was taboo. Poor Cesare! Not only did he not want to look through the telescope; he did not want there to be mountains on the moon at all.


It would get worse. Soon Galileo would point his scope at Jupiter and discover its moons, heavenly bodies that clearly weren’t orbiting the earth. Then he would observe and record the astonishing fact that Venus went through phases as it orbited not the earth but the sun. So: Ptolemy was wrong, Copernicus was right, and Cesare Cremonini would go down in history as the epitome of willful ignorance. Galileo, of course, fell into the clutches of the Inquisition and became a hero of the Renaissance.

To be fair to Cesare, the story has been retrospectively streamlined into a sort of scientific morality tale. While the part about Galileo’s discovery is probably more or less right, Cremonini’s remark wasn’t made directly to Galileo. It was reported to him later in a letter from a mutual friend, Paolo Gualdo. The text of that letter is included in Galileo’s work, Opere II. And while those jagged borders of light and dark on the moon, imperfectly magnified, were certainly thought-provoking, to say that the case against Ptolemy was closed on the spot, that night in Padua, would be too neat.

It makes a good story, though, and a nice lens for viewing reactions to scientific breakthroughs. Changing our focus now from the moons of Jupiter to the molecular Rubik’s cube we call the human genome, the question in the present drama is this: who is playing the role of Cremonini?

2. In an undergraduate course, taken decades ago, I was taught that human evolution had more or less stopped when the glaciers retreated about 10,000 years ago. Evolution had been driven primarily by natural selection in response to a changing environment; and, as such changes had, for the time being at least, halted, so too had the evolution of man.

I was taught that races exist only as social constructs, not as meaningful biological categories, and that these constructs are only skin deep. They told me that the social behavior of an individual is not genetic, that behavioral and cognitive propensities just aren’t in our genes.

I was taught that the differences among social organizations are unrelated to the genetic differences of the populations that comprise those various organizations, and that social environments have no influence on human evolution.

3. To show how Wade’s book stirred things up, I will present his central hypothesis with an emphasis on the controversial parts. I’ll avoid scientific jargon, in an effort to make the meaning clearer to my fellow nonscientists.

Wade believes that humanity has been evolving rapidly during the past 30,000 years and continues to evolve rapidly today. It is not just our physical characteristics that continue to evolve. The genes that influence our behavior also evolve. (Yes, that’s what the book says, that our behavior is influenced by our genes.)


He also believes that humanity has evolved differently in different locations, most markedly on the different continents, where the major races evolved. (Yes, the book calls them races.)

These separately evolved genetic differences include those that influence behavior. (Yes, the book says that race is deeper than the skin.)

Furthermore, these genetic differences in behavioral propensities have contributed to the diversity of civilizations. The characteristics of any given civilization, in turn, influence the direction of evolution of the humans who compose it.

Oh, my.

We now know that the earth goes around the sun. But is humanity rapidly evolving? Is there such a thing as race in biological terms? Does the particular set of alleles in an individual’s genome influence how that person behaves? Does the particular frequency of alleles in the collective genetic material of the people who compose a civilization influence the characteristics of that civilization? Do the characteristics of a civilization influence the direction of the evolution of the humans that compose it? Nicholas Wade believes that the answer to all these questions is “yes.” While he does not claim that all of this has been proven, he is saying, in effect, that what I learned in college is not true. Am I now to be cast as Cremonini?

4. There are those who disagree with Wade.

In fact, lots of people didn’t like A Troublesome Inheritance at all. I’ve read about 20 reviews, few of them favorable. Even Charles Murray, writing in the Wall Street Journal, seemed skeptical of some of Wade’s arguments. Most of the others were simply unfavorable, among them reviews in the Washington Post, the New York Review of Books, Scientific American, the New York Times, The New Republic, and even Reason. Slate and The Huffington Post piled on. While Brian Bethune’s review in Maclean’s was gentler than most, it was gently dismissive.

The reactions run from disdain to anger to mockery. Nathaniel Comfort’s satirical review, “Hail Britannia!,” in his blog Genotopia, is the funniest. Donning the persona of a beef-fed, red-faced, pukka sahib at the height of the Raj, he praises Wade’s book as a self-evident explanation of the superiority of the West in general and the British in particular. (I once saw a retired British officer of the Indian Army being told by an Indian government official that he had to move his trailer to a remote area of a tiger preserve to ensure the security of a visiting head of state. He expressed his reluctance with the words, “I’m trying to be reasonable, damn it, but I’m not a reasonable man!”)

There’s some pretty heated language in these reviews, too. That the reviewers are upset is understandable. After all, they have been told that what they believe is not true. And the fellow doing the telling isn’t even a scientist. Sure, Nicholas Wade was a science writer and editor for the New York Times for three decades, but that doesn’t make him a scientist. Several of the reviews charge that Wade relies on so many historical anecdotes, broad-brush impressions, and hastily formed conclusions that it’s a stretch to say the book is based on science at all.

Of course they’re angry. Some of these guys are professors who teach, do research, and write books on the very subject areas that Wade rampages through. If he’s right, then they’re wrong, and their life’s work has been, if not wasted, at the very least misguided.

The consensus is that Wade has made a complete hash of the scientific evidence that he cites to make his case: cherry-picking, mischaracterizing, over-generalizing, quoting out of context, that kind of thing.

Another common complaint is that, wittingly or not, Wade is providing aid and comfort to racists. In fact, the animosity conveyed in some of the reviews may spring primarily from this accusation. In his review in the New York Times, David Dobbs called the book “dangerous.” Whoa. As I said, they don’t like A Troublesome Inheritance at all.

So, is Nicholas Wade just plain wrong, or are his learned critics just so many Cremoninis?

5. While the intricacies of most of the disagreements between Wade and his critics are over my head, one of the criticisms is fairly clear. It is that Wade uses the term “race” inappropriately.

The nub of the race question is that biologists want the word “race,” as it applies to humans, to be the equivalent of the word “subspecies” as it applies to animals. Because the genetic differences among individual humans and among the different populations of humans are so few, and the boundaries between the populations so indistinct, biologists conclude that there are no races. We are all Homo sapiens sapiens. We are one.


Just south of Flathead Lake in Montana is an 8,000-acre buffalo preserve. One summer day in the mid-’70s, I walked into its visitors center with my wife and father-in-law and asked the woman behind the counter, “Where are the buffalo?” She did not hesitate before hissing, “They’re bison.” Ah, yes: the bison-headed nickel, Bison Bill Cody, and, “Oh, give me a home where the bison roam . . .” You know the critter.

Put it this way: to a National Park Ranger, a buffalo is a bison; to a biological anthropologist, race is a social construct. That doesn’t mean there’s no such thing as a buffalo.

I don’t mean to make light of it. I’ve read the explanations. I’ve studied the printouts that graph and color-code populations according to genetic variation. I’ve studied the maps and charts that show the differences in allele frequencies among the groups. I’ve squinted at the blurry edges of the clusters. I get all that, but this much is clear: the great clusters of genetic variation that correspond to the thousands of years of relative isolation on the various continents that followed the trek out of Africa are real, and because they are genetic, they are biological. In any case, we are not in a biology class; we are in the world, where most people don’t talk about human “subspecies” very often, if ever. They talk about human “races.” To criticize Wade’s use of the term “race” seems pedantic. Whether to call the clusters “races” or “populations” or “groups” is a semantic dispute.

Put it another way: If you put on your “there is no such thing as race” costume for Halloween, you’ll be out trick-or-treating in your birthday suit, unless you stay on campus.

Besides, use any word you want, it won’t affect the reality that the symbol represents. The various “populations” either have slightly different genetic mixes that nudge behavior differently, or they don’t. I mean, are we seeking the truth here or just trying to win an argument?

6. While Wade offers no conclusive proof that genes create behavioral predispositions, he does examine some gene-behavior associations that point in that direction and seem particularly open to further testing. Among them are the MAOA gene and its influence on aggression and violence, and the OXTR gene and its influence on empathy and sensitivity.

What these have in common is that the biochemical chain from the variation of the gene to the behavior is at least partly understood. The proteins encoded by the genes in question are monoamine oxidase A and the oxytocin receptor, respectively. Because of this, testing would not be restricted to a simple correlation of alleles with overt behaviors in millions of people, though that is a sound way to proceed as well. The thing about the intermediate chemical triggers is that they could probably be measured, manipulated, and controlled for.


The difficult task of controlling for epigenetic, developmental, and environmental variables would also be required but, in the end, it should be possible to determine whether the alleles in question actually influence behavior.

If they do, the next step would be to determine the frequency of the relevant allele patterns in various populations. If the frequency varies significantly, then the discussion about how these genetic differences in behavioral propensities may have contributed to the diverse characteristics of civilizations could be conducted on firmer ground.

If the alleles are proven not to influence behavior, then Wade’s hypothesis would remain unproven, and lots of textbooks wouldn’t have to be tossed out.

Of course, it’s not so simple. The dance between the genome and the environment has been going on since life began. At this point, it might be said that everything in the environment potentially influences the evolution of man, making it very difficult to identify which parts of human behavior, if any, are influenced by our genes. Like Cremonini, I have no wish to approve of claims about which I do not have knowledge.

But the hypothesis that Wade lays out will surely be tested and retested. The technology that makes the testing possible is relatively new, but improving all the time. We can crunch huge numbers now, and measure genetic differences one molecule at a time. It is inevitable that possible links between genes and behavior will be examined more and more closely as the technology improves. Associations among groups of alleles, for example, and predispositions of trust, cooperation, conformity, and obedience will be examined, as will the even more controversial possible associations with intelligence. That is to say, the telescope will become more powerful. And then, one evening, we will be able to peer through the newly improved instrument, and we shall see.

That is, of course, if we choose to look.


Do You Speak Political?


In Alexander Pope’s The Rape of the Lock, several members of the British aristocracy — back when it was an aristocracy — argue about the amorous theft of a lock of hair. A peer of the realm has captured the lock. Sir Plume, another aristocrat, demands that it be returned:

With earnest Eyes, and round unthinking Face,
He first the Snuff-box open'd, then the Case,
And thus broke out — "My Lord, why, what the Devil?
Zounds! — damn the Lock! 'fore Gad, you must be civil!
Plague on't! 'tis past a Jest — nay prithee, Pox!
Give her the Hair” — he spoke, and rapp'd his Box.

“It grieves me much” (reply'd the Peer again)
“Who speaks so well shou'd ever speak in vain.”

I thought of that passage when Drew Ferguson, Liberty’s managing editor, alerted me to the following statement by Timothy M. Wolfe, then president of the University of Missouri, responding to demonstrations about alleged mistreatment of blacks on his campus:

My administration has been meeting around the clock and has been doing a tremendous amount of reflection on how to address these complex matters. We want to find the best way to get everyone around the table and create the safe space for a meaningful conversation that promotes change.

The next day, Wolfe was forced to resign. He had spoken every bit as well as hapless Sir Plume, and yet he spake in vain.

You can see why. If there was ever a meaningless assemblage of bureaucratic buzzwords, Wolfe’s statement was it. “Address complex matters . . . get everyone around the table [query: does that include people like you and me?] . . . safe space . . . meaningful conversation . . . promote change.” It makes you long for just one academic politician to say, “I want a meaningless conversation, so I can get back to my golf game.” That would be honest, at least.

Anyone who speaks this way is either incapable of critical thought or believes that everyone else is. Who among us advocates change without saying what kind of change he means? Who among us wants to have conversations all day, with total strangers, or with people who don’t like us? And who thinks that what university students need is a safe space, as if they were surrounded by ravening wolves, or panzer battalions?


The answer is, I suppose, “the typical college administrator,” supposing that these people can be taken at their word, which on this showing is very hard to do. If you had something sincere and meaningful to say, would you say it like that?

My suggestion is that everyone who speaks that lingo should be forced to resign, no matter what his job and no matter what the occasion. I’ve had it with stuff like that. You’ve had it with stuff like that. I suspect that normal people all over the world have had it with stuff like that. Even members of the official class now faintly sense this fact, and they’re trying to turn the incipient rebellion against meaningless buzzwords into their own new set of meaningless buzzwords.

Before I give an example, I want to say something about the official class or, in the somewhat more common phrase, political class.

For many decades, libertarian intellectuals have engaged in what I call a two-class analysis. Instead of analyzing people’s behavior primarily in terms of economic classes, they think in terms of a political class and a class of everyone else. So, for instance, Bernie Sanders claims to represent the working class, and Hillary Clinton claims to dote on the middle class, but what they really are is people who crave official power and expect to get it from their class affiliation with other such people — politicians of all sorts, czars of labor unions, ethnic demagogues, environmental poohbahs, denizens of partisan think tanks, lobbyists for the interests of women who attended Yale Law School, people who share their wisdom with Public Radio, and the like.


The two-class analysis works pretty well at explaining American political culture. But it wasn’t until this year that the phrase political class got into the political mainstream. It happened because the supposed outliers among Republican conservatives started using it. And when such people as Ben Carson used it, it wasn’t a buzzword. It meant something.

But now it has penetrated far enough to produce this:

I’m not gonna be part of the political class in DC. (Jeb Bush to Sean Hannity, October 29, 2015)

Message to the Chamber of Commerce: “Beware! Jeb’s gonna betray you on the immigration issue.” But of course he wouldn’t. He’d just lie about it, as his brother did. The good thing is that for once nobody believed what one of these icons of the official class had to say. The statement was scorned and ignored. Jeb spake in vain.

I suppose he thinks that nobody really understood him. If so, maybe he’s right. He’s used to speaking the language of the political class, and if you do that long enough, you start behaving like people who are trying to speak Spanish and don’t understand that when they think they’re asking where to catch the bus, they’re actually shouting obscenities. They wonder why the audience turns away.

Naturally, the linguistic divide functions in the other way, too. People who speak Political eventually think in Political too, and they can’t comprehend what people who speak a normal human language say or think.


The process of linguistic self-crippling usually starts early. People learn Political in high school or college and soon are astonishing their friends with strange chatter about advocating for change around issues of social justice, or demanding that their college create a safe space for them, or else they’ll shut the m***** f***** down. To understand such comments, people who speak English must laboriously translate them into their own language, a boring process that they seldom complete. The Political speakers then complain that they are not being acknowledged, that they are not, in fact, being listened to. And indeed, they’re not — because they’re not speaking the same language as their audience, or hearing it.

A couple of weeks ago, Neil Cavuto, the business guy on Fox News, interviewed a college student representing the cause currently being advocated for by a nationwide coalition of students who have been speaking out on campuses throughout the country. Their program calls for a $15 an hour minimum wage for all campus workers, free education at all public colleges and universities, and forgiveness of all student loans.

“Who’s going to pay for this?” Cavuto asked.

There was a long silence. The advocate had apparently never heard those words before. Finally she struggled to answer, in her own language. She said that the hoarders would pay.

Now it was Cavuto’s turn to be surprised. He couldn’t understand what she meant by this strange, apparently foreign, word. When English speakers use those two syllables, hoard-ers, they’re referring to people who pile up supplies of some commodity — whether uselessly, out of obsession, or prudentially, to preserve life or comfort in case of emergency. It turned out, however, that in the young woman’s lexicon hoarder meant “the 1% who own 99% of the country’s wealth.” I know, that was somewhat like saying, “The unicorns will pay for it,” but I want to emphasize the linguistic, not the metaphysical, problem. She had obviously come to exist in a monolingual environment in which hoarders means something quite different from what it means to, let’s say, 99% of the population.

No one gets offended by a foreigner’s struggles with the language of a new country. Native speakers may, however, become upset by people who grew up speaking the common language and then suddenly decide to speak something else, to the bafflement of everyone they’re talking to. Or shouting at. Or lecturing, as if from a position of intellectual superiority. And that, I think, is what’s happening now, all over the Western world.

It turned out, however, that in the young woman’s lexicon “hoarder” meant “the 1% who own 99% of the country’s wealth.”

If you want to see the Platonic form and house mother of the political class, try Angela Merkel. It’s not surprising that her constituents are disgusted by her commitment to lecturing them in a foreign language. Responding to criticism that she has precipitated an uncontrolled flood of immigrants into her country, where taxpayers will be expected to support them, Merkel said it is “not in our power how many come to Germany.” This from a woman who runs a welfare society based on the idea of, basically, controlling everything. To make confusion more confusing, she also said that she and her government “have a grip on the situation.” Like other members of the political class, she left it to her listeners to divine the secret meanings of such terms as “power” and “have a grip,” and to discover when certain arrays of sound mean “I’m just kidding you” and when they mean “No, really, I’m telling the truth this time.”

When you’re trying to decipher a foreign language, you’re not just challenged by the vocabulary. You’re also challenged by those sentences in which you think you understand all the individual words, but there’s still just something about them — something about their logic or their assumptions or . . . something — that continues to elude your understanding. (This is especially true of French.) Sigmar Gabriel, Merkel’s Vice Chancellor and Economy Minister, provided a good example when he reproved people who might be alarmed by the terrorist attacks in Paris, in which at least one participant was carrying Syrian asylum-seeker documents. “We should not,” he said, “make them [Syrian migrants] suffer for coming from regions from which the terror is being carried to us.” He appeared to be arguing that because a country generates terrorists we should welcome more people from that country. But that would be ridiculous; he must have meant something else.

Of course, in any language one finds expressions that, one thinks, must be symbolic of broad social attitudes, concepts that are deeply meaningful but that only a native speaker can understand. The difficulty is that there are no native speakers of Political. So when Merkel talks about keeping true to her “vision” and defines that vision by saying, as she said (unluckily) on the day of the Paris attacks, "I am in favor of our showing a friendly face in Germany," her thought remains elusive, even to Germans. What was she talking about? Was she simply babbling to herself?

President Obama’s use of language has long inspired such questions. You know the kind of tourists who inflict themselves on a foreign land, refusing to learn its language, and then get angry at the natives for not understanding them? That’s Obama, and he’s getting worse and worse. On November 21, he visited children in a refugee center in Malaysia and took the occasion to act out his incomprehension of the vast majority of the American populace — the people whom he often, in his own language, denounces as Republicans.

“They [the kids] were indistinguishable from any child in America,” Mr. Obama said after kneeling to look at their drawings and math homework. “And the notion that somehow we would be fearful of them, that our politics would somehow leave us to turn our sights away from their plight, is not representative of the best of who we are.”

More strange Obama statements can be found in the same New York Times report.

The repeated somehow (a word to which the president is becoming addicted) signals a profound linguistic divide. Obama marvels at the ordinary language of ordinary Americans. How can they say the things they do? How can they even think them? When they express their fears of such asylum seekers as the Tsarnaev family; when they comment on the many news reports, written in plain English, showing that the vast majority of people now seeking asylum in the West are not little kids from Muslim Southeast Asian families enjoying the hospitality of the officially Muslim Southeast Asian state of Malaysia but young men from the hotbed of Islamic fanaticism, bound for non-Islamic countries; when they reflect that these young men are destined to spend years living on the resentful charity of neighbors who have been forced by their governments to support them — when people speak of these things, Obama interprets all objections, fears, and caveats as the product of a hideous moral deficiency that has somehow insinuated itself into the body politic. Even supposing he’s right on the policy issue — which I don’t think he is — the word somehow is enough to convince most people that he’s no longer speaking their language.

This dawning realization, not just about Obama but about the entire political class, is good news. It means that people are finally thinking about the private language of the political elite. And here’s some more good news, though from an unlikely source.

Obama marvels at the ordinary language of ordinary Americans. How can they say the things they do? How can they even think them?

Last week, I saw an announcement that fellowships are being offered by something called the Center of Theological Inquiry in Princeton, New Jersey. The Center is inviting academics to come and be supported for eight months of research and “conversations” about the “societal” implications of “astrobiology.” The program appears to be supported, at least in part, by those friendly old astrobiologists, NASA.

The announcement begins in this way: “Societal understanding of life on earth has always developed in dialogue with scientific investigations of its origin and evolution.” That’s an assumption that may be questioned. It recalls the typical first sentence of a freshman essay: “Since the beginning of time, humanity has always been troubled by the problem of indoor plumbing.” But the “Societal understanding” sentence goes beyond that — although it’s hard to tell where it’s going, unless one pictures Neanderthals holding scientific seminars about the validity of Darwinism before deciding whether hunting and gathering is a good idea.

Yet the next sentence clearly has a hopeful tendency: “Today, the new science of astrobiology extends these investigations to include the possibility of life in the universe.”

True, the syntax is bad. Investigations don’t include possibilities. But you have to agree with the last part of the sentence: there is some possibility of life in the universe. And I believe that’s a good thing.


Marooned on Mars


The final story in Ray Bradbury’s collection The Martian Chronicles is called “The Million-Year Picnic.” In it, an American family escapes the nuclear destruction of the earth and lands on Mars, where the father tells his children, “Tomorrow you will see the Martians.” The next day he takes them on a picnic near an ancient canal, where they look into the water and see their own reflections. Simply by moving there and colonizing, they have become Martians. Mark Watney (Matt Damon) makes a similar point when he is stranded on Mars in Ridley Scott’s The Martian: “They say once you grow crops somewhere, you have officially colonized it. So, technically, I colonized Mars.”

The Martian is a tense, intelligent, and engaging story about an astronaut who is left for dead when his fellow crew members are forced to make an emergency launch to escape a destructive sandstorm. Knocked out rather than killed, he regains consciousness and discovers that he is utterly alone on the planet. Solar panels can provide him with renewable energy, oxygen, heat, and air pressure. But the next mission to Mars isn’t due for another five years, and he has enough food to last just 400 days. What can he do?

As we approached the freeway and began to pick up speed, I realized I had only one chance for a safe outcome.

There is something fascinating about this storyline of being marooned or abandoned and left entirely to one’s own devices, whether the protagonist be Robinson Crusoe on his desert island; the 33 miners trapped in a Chilean copper mine (The 33, 2015); Tom Hanks, cast away in the Pacific (Cast Away, 2000); the Apollo 13 crew, trapped in their capsule (Apollo 13, 1995); Sandra Bullock, lost in space (Gravity, 2013); or even Macaulay Culkin, left home alone (Home Alone, 1990), just to name a few. These films allow us to consider what we would do in such a situation. Could we survive?

I well remember the time I was left behind at a gas station at the age of ten on the way to a family camping trip. I had been riding in the camper of the pickup truck while my parents and sister rode in the cab. I had stepped out of the camper to tell my mother I was going to the bathroom, but before I could knock on her window, my father shoved the transmission into gear and started driving away. I didn’t know where we were, where we were going, or how I would contact my parents after they left without me. I was even more afraid of strangers than I was of being lost. It would be at least 300 miles before they stopped again for gas, and even then, they might not look into the camper until nighttime, and how would they find me after that? All of this went through my mind in a flash. Then I leapt onto the rear bumper of the truck as it eased past me and clung tightly to the handle of the camper.

I was hidden from sight by the trailer we were pulling behind us. No one would see me there, and if I jumped off or lost my balance, I would be crushed by the trailer. As we approached the freeway and began to pick up speed, I realized I had only one chance for a safe outcome. I managed to pry open the door of the camper, squeeze through the narrow opening, and collapse onto the floor, pulling the door shut behind me. Instead of being frightened by the experience, I was exhilarated by my successful maneuver and problem-solving skills. I could do anything! My only regret was that no one saw my amazing feat.

One of the reasons we enjoy movies like The Martian is that they allow us to participate with the protagonist in solving the problem of survival. Rather than curl up and wait to die, à la Tom Hanks’ character in Cast Away (honestly — five years on a tropical island and he’s still living in a cave, talking to a volleyball? He hasn’t even made a shelter or a hammock?), Watney assesses his supplies and figures out how to survive until the next mission arrives. A botanist and an engineer, he exults, “I’m going to science the shit out of this!” And he does. He makes the difficult decision to cut up some of his precious potatoes for seed, knowing that his only chance for survival is to grow more food. He figures out how to make water, how to extend his battery life, how to deal with the brutally freezing temperatures.

He also keeps a witty video journal, through which he seems to speak directly to the audience. This allows us to remain intensely engaged in what he is doing and avoids the problem encountered in Robert Redford’s 2013 castaway film All Is Lost, where perhaps three sentences are uttered in the entire dreary film. We like Watney’s upbeat attitude, his irreverent sense of humor, his physical and mental prowess, and his relentless determination to survive. We try to anticipate his next move.

A botanist and an engineer, he exults, “I’m going to science the shit out of this!” And he does.

The visual effects are stunning. Many of them would not have been possible even three years ago, before the innovations created for Alfonso Cuarón’s Gravity (2013). The techniques used to create weightlessness as the astronauts slither through the space station are especially impressive; we simply forget that they aren’t really weightless. The unfamiliar landscape — the red desert of Wadi Rum, Jordan, where the outdoor scenes were filmed — is a bit reminiscent of a futuristic Monument Valley. It contributes to the western-hero sensibility while creating a feeling that we really are on Mars. I’m not sure the science works in the dramatic ending, but I’m willing to suspend my disbelief. The Martian is smart, entertaining, and manages to work without a single antagonist — nary a nasty businessman or greedy bureaucrat can be found. If that’s what our future holds, I’m all for it.

Editor's Note: Review of "The Martian," directed by Ridley Scott. Scott Free Productions, 20th Century Fox, 2015, 142 minutes.



Fakers and Enablers


Last month, a UCLA graduate student in political science named Michael LaCour was caught faking reports of his research — research that in December 2014 had been published, with much fanfare, in Science, one of the two most prestigious venues for “hard” (experimental and quantifiable) scientific work. Because of his ostensible research, he had been offered, again with much fanfare, a teaching position at prestigious Princeton University. I don’t want to overuse the word “prestigious,” but LaCour’s senior collaborator, a professor at prestigious Columbia University, a person whom he had enlisted to enhance the prestige of his purported findings, is considered one of the most prestigious number-crunchers in all of poli sci. LaCour’s dissertation advisor at UCLA is also believed by some people to be prestigious. LaCour’s work was critiqued by presumably prestigious (though anonymous) peer reviewers for Science, and recommended for publication by them. What went wrong with all this prestigiousness?

Initial comments about the LaCour scandal often emphasized the idea that there’s nothing really wrong with the peer review system. The New Republic was especially touchy on this point. The rush to defend peer review is somewhat difficult to explain, except as the product of fears that many other scientific articles (about, for instance, global warming?) might be suspected of being more pseudo than science; despite reviewers’ heavy stamps of approval, they may not be “settled science.” The idea in these defenses was that we must see l’affaire LaCour as a “singular” episode, not as the tin can that’s poking through the grass because there’s a ton of garbage underneath it. More recently, suspicions that Mt. Trashmore may be as high as Mt. Rushmore have appeared even in the New York Times, which on scientific matters is usually more establishment than the establishment.

I am an academic who shares those suspicions. LaCour’s offense was remarkably flagrant and stupid, so stupid that it was discovered at the first serious attempt to replicate his results. But the conditions that put LaCour on the road to great, though temporary, success must operate, with similar effect, in many other situations. If the results are not so flagrantly wrong, they may not be detected for a long time, if ever. They will remain in place in the (pseudo-) scientific literature — permanent impediments to human knowledge. This is a problem.

But what conditions create the problem? Here are five.

1. A politically correct, or at least fashionably sympathetic, topic of research. The LaCour episode is a perfect example. He was purportedly investigating gay activists’ ability to garner support for gay marriage. And his conclusion was one that politically correct people, especially donors to activist organizations, would like to see: he “found” that person-to-person activism works amazingly well. It is noteworthy that Science published his article about how to garner support for gay marriage without objecting to the politically loaded title: “When contact changes minds: An experiment on transmission of support for gay equality.” You may think that recognition of gay marriage is equivalent to recognition of gay equality, and I may agree, but anyone with even a whiff of the scientific mentality should notice that “equality” is a term with many definitions, and that the equation of “equality” with “gay marriage” is an end-run around any kind of debate, scientific or otherwise. Who stands up and says, “I do not support equality”?

The idea in these defenses was that we must see l’affaire LaCour as a “singular” episode, not as the tin can that’s poking through the grass because there’s a ton of garbage underneath it.

2. The habit of reasoning from academic authority. LaCour’s chosen collaborator, Donald Green, is highly respected in his field. That may be what made Science and its peer reviewers pay especially serious attention to LaCour’s research, despite its many curious features, some of which were obvious. A leading academic researcher had the following reaction when an interviewer asked him about the LaCour-Green contribution to the world’s wisdom:

“Gee,” he replied, “that's very surprising and doesn't fit with a huge literature of evidence. It doesn't sound plausible to me.” A few clicks later, [he] had pulled up the paper on his computer. “Ah,” he [said], “I see Don Green is an author. I trust him completely, so I'm no longer doubtful.”

3. The prevalence of the kind of academic courtesy that is indistinguishable from laziness or lack of curiosity. LaCour’s results were counterintuitive; his data were highly exceptional; his funding (which turned out to be bogus) was vastly greater than anything one would expect a graduate student to garner. That alone should have inspired many curious questions. But, Green says, he didn’t want to be rude to LaCour; he didn’t want to ask probing questions. Jesse Singal, a good reporter on the LaCour scandal, has this to say:

Some people I spoke to about this case argued that Green, whose name is, after all, on the paper, had failed in his supervisory role. I emailed him to ask whether he thought this was a fair assessment. “Entirely fair,” he responded. “I am deeply embarrassed that I did not suspect and discover the fabrication of the survey data and grateful to the team of researchers who brought it to my attention.” He declined to comment further for this story.

Green later announced that he wouldn’t say anything more to anyone, pending the results of a UCLA investigation. Lynn Vavreck, LaCour’s dissertation advisor at UCLA, had already made a similar statement. They are being very circumspect.

4. The existence of an academic elite that hasn’t got time for its real job. LaCour asked Green, a virtually total stranger, to sign onto his project: why? Because Green was prestigious. And why is Green prestigious? Partly for signing onto a lot of collaborative projects. In his relationship with LaCour, there appears to have been little time for Green to do what professors have traditionally done with students: sit down with them, discuss their work, exclaim over the difficulty of getting the data, laugh about the silly things that happen when you’re working with colleagues, share invidious stories about university administrators and academic competitors, and finally ask, “So, how in the world did you get those results? Let’s look at your raw data.” Or just, “How did you find the time to do all of this?”

LaCour’s results were counterintuitive; his data were highly exceptional; his funding was vastly greater than anything one would expect a graduate student to garner.

It has been observed — by Nicholas Steneck of the University of Michigan — that Green put his name on a paper reporting costly research (research that was supposed to have cost over $1 million), without ever asking the obvious questions about where the money came from, and how a grad student got it.

“You have to know the funding sources,” Steneck said. “How else can you report conflicts of interest?” A good point. Besides — as a scientist, aren’t you curious? Scientists’ lack of curiosity about the simplest realities of the world they are supposedly examining has often been noted. It is a major reason why the scientists of the past generation — every past generation — are usually forgotten, soon after their deaths. It’s sad to say, but may I predict that the same fate will befall the incurious Professor Green?

As a substitute for curiosity, guild courtesy may be invoked. According to the New York Times, Green said that he “could have asked about” LaCour’s claim to have “hundreds of thousands in grant money.” “But,” he continued, “it’s a delicate matter to ask another scholar the exact method through which they’re paying for their work.”

There are several eyebrow-raisers there. One is the barbarous transition from “scholar” (singular) to “they” (plural). Another is the strange notion that it is somehow impolite to ask one’s colleagues — or collaborators! — where the money’s coming from. This is called, in the technical language of the professoriate, cowshit.

The fact that ordinary-professional, or even ordinary-people, conversations seem never to have taken place between Green and LaCour indicates clearly enough that nobody made time to have them. As for Professor Vavreck, LaCour’s dissertation director and his collaborator on two other papers, her vita shows a person who is very busy, very busy indeed, a very busy bee — giving invited lectures, writing newspaper columns, moderating something bearing the unlikely name of the “Luskin Lecture on Thought Leadership with Hillary Rodham Clinton,” and, of course, doing peer reviews. Did she have time to look closely at her own grad student’s work? The best answer, from her point of view, would be No; because if she did have the time, and still ignored the anomalies in the work, a still less favorable view would have to be entertained.

This is called, in the technical language of the professoriate, cowshit.

Oddly, The New Republic praised the “social cohesiveness” represented by the Green-LaCour relationship, although it mentioned that “in this particular case . . . trust was misplaced but some level of collegial confidence is the necessary lubricant to allow research to take place.” Of course, that’s a false alternative — full social cohesiveness vs. no confidence at all. “It’s important to realize,” opines TNR’s Jeet Heer, “that the implicit trust Green placed in LaCour was perfectly normal and rational.” Rational, no. Normal, yes — alas.

Now, I don’t know these people. Some of what I say is conjecture. You can make your own conjectures, on the same evidence, and see whether they are similar to mine.

5. A peer review system that is goofy, to say the least.

It is goofiest in the arts and humanities and the “soft” (non-mathematical) social sciences. It’s in this, the goofiest, part of the peer-reviewed world that I myself participate, as reviewer and reviewee. Here is a world in which people honestly believe that their own ideological priorities count as evidence, often as the determining evidence. Being highly verbal, they are able to convince themselves and others that saying “The author has not come to grips with postcolonialist theory” is on the same analytical level as saying, “The author has not investigated the much larger data-set presented by Smith (1997).”

My own history of being reviewed — by and large, a very successful history — has given me many more examples of the first kind of “peer reviewing” than of the second kind. Whether favorable or unfavorable, reviewers have more often responded to my work on the level of “This study vindicates historically important views of the text” or “This study remains strangely unconvinced by historically important views of the episode,” than on the level of, “The documented facts do not support [or, fully support] the author’s interpretation of the sequence of events.” In fact, I have never received a response that questioned my facts. The closest I’ve gotten is (A) notes on the absence of any reference to the peer reviewer’s work; (B) notes on the need for more emphasis on the peer reviewer’s favorite areas of study.

This does not mean that my work has been free from factual errors or deficiencies in the consultation of documentary sources; those are unavoidable, and it would be good for someone to point them out as soon as possible. But reviewers are seldom interested in that possibility. Which is disturbing.

I freely admit that some of the critiques I have received have done me good; they have informed me of other people’s points of view; they have shown me where I needed to make my arguments more persuasive; they have improved my work. But reviewers’ interest in emphases and ideological orientations rather than facts and the sources of facts gives me a very funny feeling. And you can see by the printed products of the review system that nobody pays much attention to the way in which academic contributions are written, even in the humanities. I have been informed that my writing is “clear” or even “sometimes witty,” but I have never been called to account for the passages in which I am not clear, and not witty. No one seems to care.

But here’s the worst thing. When I act as a reviewer, I catch myself falling into some of the same habits. True, I write comments about the candidates’ style, and when I see a factual error or notice the absence of facts, I mention it. But it’s easy to lapse into guild language. It’s easy to find words showing that I share the standard (or momentary) intellectual “concerns” and emphases of my profession, words testifying that the author under review shares them also. I’m not being dishonest when I write in this way. I really do share the “concerns” I mention. But that’s a problem. That’s why peer reviewing is often just a matter of reporting that “Jones’ work will be regarded as an important study by all who wish to find more evidence that what we all thought was important actually is important.”

You can see by the printed products of the review system that nobody pays much attention to the way in which academic contributions are written, even in the humanities.

Indeed, peer reviewing is one of the most conservative things one can do. If there’s no demand that facts and choices be checked and assessed, if there’s a “delicacy” about identifying intellectual sleight of hand or words-in-place-of-ideas, if consistency with current opinion is accepted as a value in itself, if what you get is really just a check on whether something is basically OK according to current notions of OKness, then how much more conservative can the process be?

On May 29, when LaCour tried to answer the complaints against him, he severely criticized the grad students who had discovered, not only that they couldn’t replicate his results, but that the survey company he had purportedly used had never heard of him. He denounced them for having gone off on their own, doing their own investigation, without submitting their work to peer review, as he had done! Their “decision to . . . by-pass the peer-review process” was “unethical.” What mattered wasn’t the new evidence they had found but the fact that they hadn’t validated it by the same means with which his own “evidence” had been validated.

In medicine and in some of the natural sciences, unsupported guild authority does not impinge so greatly on the assessment of evidence as it does in the humanities and the social sciences. Even there, however, you need to be careful. If you are suspected of being a “climate change denier” or a weirdo about some medical treatment, the maintainers of the status quo will give you the bum’s rush. That will be the end of you. And there’s another thing. It’s true: when you submit your research about the liver, people will spend much more time scrutinizing your stats than pontificating about how important the liver is or how important it is to all Americans, black or white, gay or straight, that we all have livers and enjoy liver equality. But the professional competence of these peer reviewers will then be used, by The New Republic and other conservative supporters of the status quo in our credentialed, regulated, highly professional society, as evidence that there is very little, very very very little, actual flim-flam in academic publication. But that’s not true.

Their “decision to . . . by-pass the peer-review process” was “unethical.”


Unfinished Business


Back in the mid-1990s, Wall Street Journal reporter Ron Suskind chronicled the struggles of a poor, black honor student named Cedric Jennings as the latter aspired to get out of an inner-city high school and into a top-notch university. Suskind’s pieces garnered him a Pulitzer Prize and led to a book-length treatment of his subject, A Hope in the Unseen: An American Odyssey from the Inner City to the Ivy League (Broadway Books, 1998, 372 pages).

Cedric, a junior at Washington DC’s Frank W. Ballou Senior High School, has to suffer the slings and arrows of a student body that largely takes a dim view of academic achievement. Part of a small group of accelerated science and math students, he dreams of being accepted into MIT’s Minority Introduction to Engineering and Science (MITES) program, offered the summer before his senior year. Anywhere from one-third to one-half of those successfully completing the program go on to matriculate at MIT, and Cedric has his heart set on being one of them and majoring in mathematics.

The young man who wanted to major in mathematics at MIT and make mathematics a career instead bailed out of mathematics altogether with just a minor at Brown. Why?

Although he makes it into the MITES program, he quickly finds himself outclassed: most of the black students are middle-class, hailing from academically superior suburban high schools and having much higher SATs. Decidedly at a disadvantage, he nonetheless manages to complete the program. But during a meeting with academic advisor Leon Trilling, he is told that his chances of getting into MIT aren’t that good. Particularly telling are his SAT scores, 380 verbal and 530 math, for a combined total of 910 out of a possible 1600. Professor Trilling suggests that he apply instead to the University of Maryland and Howard University, even giving him the names of particular professors to contact. The distraught Cedric will have none of it though, even going so far as to accuse Trilling of being a racist.

If he can’t get into MIT, he’ll prove the critics wrong by getting into an Ivy League school. Pulling his SATs up to 960 from 910, he applies to Brown University because it has an impressive applied mathematics department. He’s accepted, and Suskind chronicles the trials and tribulations of his freshman year. The book came out during Cedric’s junior year, Suskind commenting in the Epilogue, “His major, meanwhile, is in applied math, a concentration that deals with the tangible applications of theorems, the type of high-utility area with which he has always been most comfortable” (364).

Thus concludes the summary of the book published 17 years ago. As the years went by, I wondered how Cedric had fared during the remainder of his Brown experience and after graduation. Every now and then I came across some tidbit of information. Although I was expecting to find him putting his major in applied mathematics to work in that field, I discovered instead that he had gone back to school, earning a master’s in education at Harvard and a master’s in social work at the University of Michigan; he had been involved in social work and then had gone on to become a director of government youth programs. Nothing particularly unusual about that, though; lots of folks get graduate degrees in fields other than their undergraduate major and end up veering off onto other career paths.

But I discovered that a revised and updated edition of A Hope in the Unseen had come out back in 2005, and I was surprised to come across this statement in the Afterword describing Cedric’s graduation from Brown: “Then Cedric proceeded, arm in arm with Zayd, Nicole, and a many-hued host of others, to receive his Bachelor of Arts degree, with a major in education, a minor in applied math, and a 3.3 grade point average” (377). Suskind casually lets slip that Cedric didn’t end up with a major in applied mathematics after all! That he only minored in that field means he didn’t have to take the final upper-level courses required for a major.

Although the book does have Cedric contemplating a second major in education along with his original major in applied mathematics, doubling up in that way just didn’t make much sense. As with his MITES experience, he found himself outclassed at Brown, having to compete with students from academically superior suburban schools, students with SATs hundreds of points higher than his own. He had trouble with some of his freshman courses, even his specialty, having to drop a course in discrete mathematics. Would it not have been more prudent, under those circumstances, simply to focus on one’s original major and on required courses without having to worry about the additional academic load of a new, second major? And if one did take on a second major and then had to scale back on the total number of courses taken, would it not have made more sense to scale back on the second major, getting a minor in that field instead, while going on with the original major? Something just wasn’t adding up here.

Although Brown had been unaware that Cedric was the subject of a series of articles in the Wall Street Journal when he was admitted under Brown’s affirmative action program, the college most certainly would have found out in short order, and it would have been in its best interest that this particular admit not get in over his head. Education is a much “safer” major than applied mathematics, and it is a popular major with many African Americans.

Cedric believed that getting into a top-notch university was a reward of sorts for all that he had to put up with through high school: “I could never dream about, like going to UDC or Howard, or Maryland or wherever . . . It just wouldn’t be worth what I’ve been through” (49). But it appears he may have had to strike a bargain in order to achieve that end. The young man who wanted to major in mathematics at MIT and make mathematics a career instead bailed out of mathematics altogether with just a minor. Why was the motivation behind such a tantalizing shift of academic focus not duly chronicled by Suskind in the Afterword to the revised and updated edition? He offers no explanation whatsoever for Cedric’s stopping short of a full major in applied mathematics, furtively sneaking the fact by as if hoping the reader wouldn’t notice.

Suskind had also made Leon Trilling out to be some kind of Prince of Darkness thwarting the Journey of the Hero, and this is a most ungenerous characterization. In 1995, the mean math SAT score of entering freshmen at MIT was 756 out of a possible 800; Cedric’s score was 530. Dr. Trilling was absolutely correct to wonder whether Cedric was a good fit for MIT at the time. Trilling’s advice to Cedric to apply to the University of Maryland and Howard University was based on the fact that those schools were involved in a project with MIT called the Engineering Coalition of Schools for Excellence in Education and Leadership (ECSEL), a program aimed at underrepresented minorities in the field of engineering. Had Cedric been accepted by either of those schools and majored in engineering, he could have had another shot at MIT as a transfer student if his grades had been good enough and if he had been able to boost his SATs. Trilling was actually trying to keep Cedric’s STEM (science-technology-engineering-math) aspirations alive. Even if Cedric still fell short of getting into MIT, he could have gone on to get an engineering degree from Maryland or Howard and contribute to a STEM field in which blacks are woefully underrepresented relative to such fields as education and social work.

During the drafting of this review, I discussed its content with a friend who urged me to check out chapter three of Malcolm Gladwell’s most recent book, David and Goliath: Underdogs, Misfits and the Art of Battling Giants (Allen Lane, 2013, 305 pages). That chapter was titled, “If I’d gone to the University of Maryland, I’d still be in science.” Caroline Sacks — a pseudonym — is a straight-A “science girl” all the way up through high school in Washington, DC. Applying to Brown University as first choice, with the University of Maryland as her backup choice, she’s accepted by both and of course chooses Brown. But she has to drop freshman chemistry at Brown and take it over again as a sophomore. Then she has trouble with organic chemistry, finally having to leave her STEM track altogether and switch to another major. She achieves an Ivy League degree from Brown, but at the expense of her passion for science. Had she gone to Maryland instead, she believes, she’d still be in science. Had Cedric gone to Maryland (or Howard) instead, would he have gone on to realize his STEM aspirations?

A Hope in the Unseen has become widely assigned classroom reading, even spawning a number of accompanying classroom study guides. Although it is indeed an inspiring story, it’s simply not all that it’s cracked up to be. Legions of readers have assumed as a matter of course that Cedric proved the naysayers wrong by earning a major in applied mathematics at Brown when his dream of earning a major in mathematics at MIT was derailed by his low SATs. In reality, Cedric had to leave applied mathematics at Brown — and had he instead been admitted to MIT and attempted a major in mathematics there, he probably would have had to leave much earlier, perhaps even having to forgo the consolation prize of a minor.

Although many consider Cedric’s experience at Brown an affirmative action success story, it actually highlights the problems inherent in affirmative action policies that lower academic standards for minorities.

Editor's Note: Review of "A Hope in the Unseen: An American Odyssey from the Inner City to the Ivy League," by Ron Suskind. Revised and updated edition. Broadway Books, 2005, 390 pages.

Astonishing Life, Astonishing Performance


Stephen Hawking is the most celebrated physicist of our time, not only because of his astounding theory about time, but also because of his personal struggle with amyotrophic lateral sclerosis (ALS). He has spent his career searching for that “once simple, elegant equation that would prove everything.”

If you, too, are looking for clues to Hawking’s elusive equation, The Theory of Everything isn’t the place to look. Although it does contain a few brief and basic conversations about Hawking’s research along the lines of “quantum theory governs subatomic particles; relativity governs the planets,” the film decidedly is not about physics.

Instead, it is an intensely personal film about how a family copes with the day-to-day emotional and physical trauma caused by a debilitating disease. And yet, it’s not about that either. Stephen Hawking has managed to survive for half a century with a disease that kills most people in less than two years. It is a horrifying disease that gradually destroys the body from the outside in. Known variously as “motor neuron disease,” “Lou Gehrig’s disease,” and more recently “ALS,” it prevents the brain from communicating with the muscles, first in the extremities (hands and feet) and finally in the torso, face, and organs. The brain continues to think, but it can’t direct the muscles to move. It is simply devastating, and most people succumb soon after diagnosis.

But not Stephen Hawking. And I want to know why. Fifty years! I want to know something about the medical treatment and the personal regimen that have made the difference for him. Is it because he has such a strong sense of purpose and satisfaction derived from his research? Is it because he doesn’t believe in the “better place” that makes it easier for believers to “shuffle off this mortal coil”? Or is it because he can afford the reported millions it costs each year for round-the-clock healthcare and personal assistance? The film completely ignores these issues, so if you’re looking for a theory, either of astrophysics or of medical physics, you won’t find it.

The Theory of Everything is a love story. It includes the giddiness of first love, the devastation of being rejected, the warm settling in of married life, the trauma of dealing with chronic illness, the addition of children, and even the conflicts of infidelity. Stephen’s wry boyish smile belies the crippling devastation of his body and lights his face with charm and desirability. The emotional connection between Stephen (Eddie Redmayne) and Jane (Felicity Jones) is so raw and so tender that it sometimes feels like an intrusion to watch. The stunning musical score by Jóhann Jóhannsson contributes to the emotion of the film and will keep you in your seat through the final credits.

In short, The Theory of Everything is more Jane’s story than Stephen’s. According to the tag line of the film, “His mind changed our world. Her love changed his.” This should not be surprising, since the screenplay is based on Jane Hawking’s memoirs, Traveling to Infinity: My Life with Stephen (2007) and Music to Move the Stars: A Life with Stephen (1999). But it also very well may be true that her influence helped him continue his research and live, not as an invalid but as a scholar. Hawking himself has said that the film is “broadly true” and said of Eddie Redmayne’s performance, “At times, I thought he was me.”

Indeed, Eddie Redmayne is the reason this film works so well. He studied with therapists and dance instructors to learn how to isolate his muscles and contort them in just the right way so that he never becomes a caricature of Hawking but remains an embodiment of him. He expresses devastating frustration, unending optimism, witty charm, emotional pain, and tender love, all within the confines of a deteriorating body. Despite the pain, his eyes, his mind, and his smile remain bright. Both Hawking and Redmayne are remarkable.

Editor's Note: Review of "The Theory of Everything," directed by James Marsh. Working Title Films, 2014, 123 minutes.

Socialist Science


In his famous 1945 report to President Truman, Science: The Endless Frontier, Vannevar Bush attributed scientific progress to “the free play of free intellects, working on subjects of their own choice, in the manner dictated by their curiosity for exploration of the unknown.” Bush argued that government need only support basic research, and that “freedom of inquiry must be preserved,” leaving “internal control of policy, personnel, and the method and scope of research to the institutions in which it is carried on.”

How did such an abstemious, unfettered funding scheme work out? According to MIT scientist Richard Lindzen, “The next 20 years witnessed truly impressive scientific productivity which firmly established the United States as the creative center of the scientific world. The Bush paradigm seemed amply justified.”

But trouble was brewing. By 1961, President Eisenhower, in his farewell address, observed that "a steadily increasing share [of scientific research] is conducted for, by, or at the direction of, the Federal government" and warned of the day when "a government contract becomes virtually a substitute for intellectual curiosity." More than by the influence of the military-industrial complex, Eisenhower was troubled by the possibility that "public policy could itself become the captive of a scientific-technological elite." His worry was justified. Leftist intellectuals and social activists were already infiltrating the social and behavioral sciences and had, by the early 1970s, crept into influential positions of government, to bring science into a social contract for the common good.

It was no doubt this movement that American physicist Richard Feynman had in mind in 1968, when he observed "a considerable amount of intellectual tyranny in the name of science." In particular, liberal theories, as embodied in the programs of the Great Society, would fail the hypothesis testing of real science — their predicted performance has never been confirmed by observable evidence. The ambitious nostrums about poverty, welfare, education, healthcare, racial injustice, and other forms of socioeconomic worriment were based on what Feynman called Cargo Cult Science. These programs are not supported by scientific integrity; they are propped up by the statistical mumbo-jumbo of scientific wild-ass guesses (SWAG).

The centralized control of research that began in the early 1970s laid the groundwork for the liberal idea of science as a social contract. Under such a contract, the "common good" could not be entrusted to the intuition of unfettered scientists; enlightened bureaucrats would be better suited to the task of managing society's scientific needs. Similarly, normal scientific principles of evidence and proof became subordinate to the vagaries of social concepts such as the precautionary principle, whereby anecdotal and correlative evidence (aka, SWAG) is perfectly adequate for establishing risk to society — the slightest of which (including imaginary risk) is intolerable — and justification for government remedies. Mere suspicion of risk would replace scientific evidence as the basis for regulatory authority. New York state, for example, recently banned fracking, not because of any scientific determination of harm to public health, but because of the uncertainty of such harm.

As the autonomy envisioned by Bush and the integrity demanded by Feynman faded, hypothesis testing became lackadaisical, often not considered necessary at all. And, with the need for sharp "intellectual curiosity" in decline, egalitarian funding of scientific research was put in place. According to a recent New York Times article, agencies such as the National Science Foundation (NSF) and the National Institutes of Health (NIH) award grant money based on criteria other than scientific merit. Preferring "diversity of opportunity" over consequential scientific discovery, administrators now "strive to ensure that their money does not flow just to established stars at elite institutions. They consider gender and race, income and geography." Apparently, enriching our brightest scientists is a vile capitalist concept that diminishes the social value of the funding scheme.

So must it also be with the discovery process, where, as Lindzen observes, "the solution of a scientific problem is rewarded by ending support. This hardly encourages the solution of problems or the search for actual answers. Nor does it encourage meaningfully testing hypotheses." In Lindzen's view, such developments have produced a "new paradigm where simulation and programs have replaced theory and observation, where government largely determines the nature of scientific activity . . ." And now, with the pursuit of scientific truth trumped by the political passions of activist scientists and their funding agencies, "the politically desired position becomes a goal rather than a consequence of scientific research." In this paradigm, science is more easily manipulated by politicians, who cynically scare the public, as H.L. Mencken put it, "by menacing it with an endless series of hobgoblins, all of them imaginary."

Nowhere did this become more prominent than in the environmental sciences. During the 1980s, as socialism began its collapse, distraught western Marxists joined the environmental movement. If the workers of the world would not unite to overthrow capitalism because of its economic harmfulness, then regulators would destroy it because of its environmental damage. Government agencies, most notably the EPA and DOE, became coddling, Lysenkoist homes for activist scientists. By the end of the decade they had penetrated climate science, striking it rich in the gold mine of anthropogenic global warming (AGW). By the early 1990s, the hypothesis that humans had caused unprecedented recent warming, and would cause catastrophic future warming, became self-evident to a consensus of elite activist scientists. The establishment of fossil fuels as the sole culprit behind AGW — and progenitor of an endless series of climate hobgoblins — became the goal of government-funded climate science research.

Science, however, was not up to the task. It could not verify the AGW hypothesis. The existence of the Medieval Warm Period (MWP) was grounds for rejection, as was the nonexistence of the so-called tropical hotspot (the "fingerprint of manmade global warming”) predicted by AGW computer models. Then there is the ongoing warming pause, a stark climatological irony that began in 1998, the very year following the adoption of the Kyoto Protocol to curb the expected accelerated warming. Even when confronted with such nullifying evidence, activist scientists refused to reject the AGW hypothesis. Nor did they modify it, the better to conform with observational evidence. Some simply rejected the science — science that they had come to view as "normal science," no longer suitable for their cause — and switched to Post-normal Science (PNS).

PNS replaces normal science when "facts are uncertain, values in dispute, stakes high, and decisions urgent." Invented by social activists, it is a mode of inquiry designed to advance the political agenda behind such large-scale social issues as pollution, AIDS, nutrition, tobacco, and climate change. PNS provides "new problem-solving strategies in which the role of science is appreciated in its full context of the complexity and uncertainty of natural systems and the relevance of human commitments and values."

In other words, in the face of uncertainty, researchers can use their "values" to shape scientific truth. As the late activist scientist Stephen Schneider counseled, "we have to offer up scary scenarios, make simplified, dramatic statements, and make little mention of any doubts one might have . . . Each of us has to decide what the right balance is between being effective and being honest."

Climate science luminary Mike Hulme believes that scientists (and politicians) are compelled to make tradeoffs between truth and influence. In the struggle between rational truth and emotional value, Hulme advises (in Why We Disagree about Climate Change, sections 10.1 and 10.5), "we need to see how we can use the idea of climate change — the matrix of ecological functions, power relationships, cultural discourses and materials flows that climate change reveals — to rethink how we take forward our political, social, economic and personal projects over the decades to come." Expanding on Schneider's advice: "We will continue to create and tell new stories about climate change and mobilise them in support of our projects.”

One way or another the "projects" (renewable energy, income equality, sustainability, social justice, green economics, etc.) fall under the umbrella of global governance. There is no solution to global warming that does not require global cooperation, in the execution of a global central plan. The "scary stories" of climate catastrophe (storms, floods, droughts, famines, species extinctions, etc.) are the hobgoblins used to coerce acceptance of the socialist remedy, while obscuring its principal side-effect: the elimination of capitalism, democracy, and individual liberty, none of which can coexist with global governance.

Under the old paradigm — the free play of free intellects, guided by skepticism and empirical truth — discoveries were prolific, albeit unpredictable with respect to their nature, significance, and timing. The centralized planning that began in the early 1970s attempted to control such fickleness, by selecting the research areas, the grant money, and, in many cases, the desired research result — all to harness science for the common good, of course.

How has the new paradigm — the circumscribed play of biased ideologues, guided by compliance and consensus — performed relative to the old paradigm? Abysmally. The methods of teaching mathematics and reading cited by Feynman have failed; US public education, the envy of the world in the early 1970s, is, at best, mediocre today. The "War on Cancer" that began in 1971 has failed to find a cure. Similarly, government research grants (substituting diversity and a paycheck for intellectual curiosity) have failed to produce cures for many other diseases (AIDS, Alzheimer's, diabetes, Parkinson's, MS, ALS, to name a few). The NSF website lists 899 discoveries — but these are not discoveries; they are discussions of scientific activity, coupled with self-congratulation and wishful thinking.

Activist scientists would shriek that such evidence of failure is anecdotal and correlative, and therefore illegitimate — and who are better qualified than activists to recognize SWAG when they see it? They would also vehemently assert that it is too difficult to establish a causal relationship between government-planned science and paltry discovery — perhaps as difficult as naming a single invention, technological advance, medical breakthrough, engineering development, or innovative product in use today that is not the result of scientific discoveries made prior to the early 1970s.

This evidence for a causal relationship between increasing government control and declining scientific achievement is no flimsier than the evidence for a causal relationship between increasing levels of atmospheric CO2 and increasing global temperature. Indeed, it is the very lack of such evidence that, to activist science, justifies PNS.

But PNS is a charade. It is hobgoblinology, masquerading as science and used to thwart skepticism about the unverified claims of socialist scientists masquerading as enlightened experts, pushing a political agenda masquerading as the common good. AGW is supported by nothing more than cargo cult science foisted on a fearful, science-illiterate people.

The scary stories, incessantly pronounced as scientific facts, are speculation. They are themselves hypotheses — additional, distinct hypotheses that would have to be verified, even if the parent AGW hypothesis could be established. But false syllogisms are permissible under PNS. The PNS scientist is free to infer scary stories from the unverified AGW hypothesis, provided there is uncertainty in the normal science and virtue in his political values. The scientific method of normal science is replaced by a post-normal scientific method, in which an hypothesis is tested not by empiricism but by scariness — that, and the frequency and shrillness with which it is stated. One could call this socialist science process Scary Hypothesis Inference Testing (SHIT). And one would find a strong causal relationship between SHIT and the aroma of SWAG.


© Copyright 2017 Liberty Foundation. All rights reserved.
