Fakers and Enablers


Last month, a UCLA graduate student in political science named Michael LaCour was caught faking reports of his research — research that in December 2014 had been published, with much fanfare, in Science, one of the two most prestigious venues for “hard” (experimental and quantifiable) scientific work. Because of his ostensible research, he had been offered, again with much fanfare, a teaching position at prestigious Princeton University. I don’t want to overuse the word “prestigious,” but LaCour’s senior collaborator, a professor at prestigious Columbia University, a person whom he had enlisted to enhance the prestige of his purported findings, is considered one of the most prestigious number-crunchers in all of poli sci. LaCour’s dissertation advisor at UCLA is also believed by some people to be prestigious. LaCour’s work was critiqued by presumably prestigious (though anonymous) peer reviewers for Science, and recommended for publication by them. What went wrong with all this prestigiousness?

Initial comments about the LaCour scandal often emphasized the idea that there’s nothing really wrong with the peer review system. The New Republic was especially touchy on this point. The rush to defend peer review is somewhat difficult to explain, except as the product of fears that many other scientific articles (about, for instance, global warming?) might be suspected of being more pseudo than science; despite reviewers’ heavy stamps of approval, they may not be “settled science.” The idea in these defenses was that we must see l’affaire LaCour as a “singular” episode, not as the tin can that’s poking through the grass because there’s a ton of garbage underneath it. More recently, suspicions that Mt. Trashmore may be as high as Mt. Rushmore have appeared even in the New York Times, which on scientific matters is usually more establishment than the establishment.

I am an academic who shares those suspicions. LaCour’s offense was remarkably flagrant and stupid, so stupid that it was discovered at the first serious attempt to replicate his results. But the conditions that put LaCour on the road to great, though temporary, success must operate, with similar effect, in many other situations. If the results are not so flagrantly wrong, they may not be detected for a long time, if ever. They will remain in place in the (pseudo-) scientific literature — permanent impediments to human knowledge. This is a problem.

But what conditions create the problem? Here are five.

1. A politically correct, or at least fashionably sympathetic, topic of research. The LaCour episode is a perfect example. He was purportedly investigating gay activists’ ability to garner support for gay marriage. And his conclusion was one that politically correct people, especially donors to activist organizations, would like to see: he “found” that person-to-person activism works amazingly well. It is noteworthy that Science published his article about how to garner support for gay marriage without objecting to the politically loaded title: “When contact changes minds: An experiment on transmission of support for gay equality.” You may think that recognition of gay marriage is equivalent to recognition of gay equality, and I may agree, but anyone with even a whiff of the scientific mentality should notice that “equality” is a term with many definitions, and that the equation of “equality” with “gay marriage” is an end-run around any kind of debate, scientific or otherwise. Who stands up and says, “I do not support equality”?
2. The habit of reasoning from academic authority. LaCour’s chosen collaborator, Donald Green, is highly respected in his field. That may be what made Science and its peer reviewers pay especially serious attention to LaCour’s research, despite its many curious features, some of which were obvious. A leading academic researcher had the following reaction when an interviewer asked him about the LaCour-Green contribution to the world’s wisdom:

“Gee,” he replied, “that's very surprising and doesn't fit with a huge literature of evidence. It doesn't sound plausible to me.” A few clicks later, [he] had pulled up the paper on his computer. “Ah,” he [said], “I see Don Green is an author. I trust him completely, so I'm no longer doubtful.”

3. The prevalence of the kind of academic courtesy that is indistinguishable from laziness or lack of curiosity. LaCour’s results were counterintuitive; his data were highly exceptional; his funding (which turned out to be bogus) was vastly greater than anything one would expect a graduate student to garner. That alone should have inspired many curious questions. But, Green says, he didn’t want to be rude to LaCour; he didn’t want to ask probing questions. Jesse Singal, a good reporter on the LaCour scandal, has this to say:

Some people I spoke to about this case argued that Green, whose name is, after all, on the paper, had failed in his supervisory role. I emailed him to ask whether he thought this was a fair assessment. “Entirely fair,” he responded. “I am deeply embarrassed that I did not suspect and discover the fabrication of the survey data and grateful to the team of researchers who brought it to my attention.” He declined to comment further for this story.

Green later announced that he wouldn’t say anything more to anyone, pending the results of a UCLA investigation. Lynn Vavreck, LaCour’s dissertation advisor at UCLA, had already made a similar statement. They are being very circumspect.

4. The existence of an academic elite that hasn’t got time for its real job. LaCour asked Green, a virtually total stranger, to sign onto his project: why? Because Green was prestigious. And why is Green prestigious? Partly for signing onto a lot of collaborative projects. In his relationship with LaCour, there appears to have been little time for Green to do what professors have traditionally done with students: sit down with them, discuss their work, exclaim over the difficulty of getting the data, laugh about the silly things that happen when you’re working with colleagues, share invidious stories about university administrators and academic competitors, and finally ask, “So, how in the world did you get those results? Let’s look at your raw data.” Or just, “How did you find the time to do all of this?”
It has been observed — by Nicholas Steneck of the University of Michigan — that Green put his name on a paper reporting costly research (research that was supposed to have cost over $1 million), without ever asking the obvious questions about where the money came from, and how a grad student got it.

“You have to know the funding sources,” Steneck said. “How else can you report conflicts of interest?” A good point. Besides — as a scientist, aren’t you curious? Scientists’ lack of curiosity about the simplest realities of the world they are supposedly examining has often been noted. It is a major reason why the scientists of the past generation — every past generation — are usually forgotten, soon after their deaths. It’s sad to say, but may I predict that the same fate will befall the incurious Professor Green?

As a substitute for curiosity, guild courtesy may be invoked. According to the New York Times, Green said that he “could have asked about” LaCour’s claim to have “hundreds of thousands in grant money.” “But,” he continued, “it’s a delicate matter to ask another scholar the exact method through which they’re paying for their work.”

There are several eyebrow-raisers there. One is the barbarous transition from “scholar” (singular) to “they” (plural). Another is the strange notion that it is somehow impolite to ask one’s colleagues — or collaborators! — where the money’s coming from. This is called, in the technical language of the professoriate, cowshit.

The fact that ordinary-professional, or even ordinary-people, conversations seem never to have taken place between Green and LaCour indicates clearly enough that nobody made time to have them. As for Professor Vavreck, LaCour’s dissertation director and his collaborator on two other papers, her vita shows a person who is very busy, very busy indeed, a very busy bee — giving invited lectures, writing newspaper columns, moderating something bearing the unlikely name of the “Luskin Lecture on Thought Leadership with Hillary Rodham Clinton,” and, of course, doing peer reviews. Did she have time to look closely at her own grad student’s work? The best answer, from her point of view, would be No; because if she did have the time, and still ignored the anomalies in the work, a still less favorable view would have to be entertained.
Oddly, The New Republic praised the “social cohesiveness” represented by the Green-LaCour relationship, although it mentioned that “in this particular case . . . trust was misplaced but some level of collegial confidence is the necessary lubricant to allow research to take place.” Of course, that’s a false alternative — full social cohesiveness vs. no confidence at all. “It’s important to realize,” opines TNR’s Jeet Heer, “that the implicit trust Green placed in LaCour was perfectly normal and rational.” Rational, no. Normal, yes — alas.

Now, I don’t know these people. Some of what I say is conjecture. You can make your own conjectures, on the same evidence, and see whether they are similar to mine.

5. A peer review system that is goofy, to say the least.

It is goofiest in the arts and humanities and the “soft” (non-mathematical) social sciences. It’s in this, the goofiest, part of the peer-reviewed world that I myself participate, as reviewer and reviewee. Here is a world in which people honestly believe that their own ideological priorities count as evidence, often as the determining evidence. Being highly verbal, they are able to convince themselves and others that saying “The author has not come to grips with postcolonialist theory” is on the same analytical level as saying, “The author has not investigated the much larger data-set presented by Smith (1997).”

My own history of being reviewed — by and large, a very successful history — has given me many more examples of the first kind of “peer reviewing” than of the second kind. Whether favorable or unfavorable, reviewers have more often responded to my work on the level of “This study vindicates historically important views of the text” or “This study remains strangely unconvinced by historically important views of the episode,” than on the level of, “The documented facts do not support [or, fully support] the author’s interpretation of the sequence of events.” In fact, I have never received a response that questioned my facts. The closest I’ve gotten is (A) notes on the absence of any reference to the peer reviewer’s work; (B) notes on the need for more emphasis on the peer reviewer’s favorite areas of study.

This does not mean that my work has been free from factual errors or deficiencies in the consultation of documentary sources; those are unavoidable, and it would be good for someone to point them out as soon as possible. But reviewers are seldom interested in that possibility. Which is disturbing.

I freely admit that some of the critiques I have received have done me good; they have informed me of other people’s points of view; they have shown me where I needed to make my arguments more persuasive; they have improved my work. But reviewers’ interest in emphases and ideological orientations rather than facts and the sources of facts gives me a very funny feeling. And you can see by the printed products of the review system that nobody pays much attention to the way in which academic contributions are written, even in the humanities. I have been informed that my writing is “clear” or even “sometimes witty,” but I have never been called to account for the passages in which I am not clear, and not witty. No one seems to care.

But here’s the worst thing. When I act as a reviewer, I catch myself falling into some of the same habits. True, I write comments about the candidates’ style, and when I see a factual error or notice the absence of facts, I mention it. But it’s easy to lapse into guild language. It’s easy to find words showing that I share the standard (or momentary) intellectual “concerns” and emphases of my profession, words testifying that the author under review shares them also. I’m not being dishonest when I write in this way. I really do share the “concerns” I mention. But that’s a problem. That’s why peer reviewing is often just a matter of reporting that “Jones’ work will be regarded as an important study by all who wish to find more evidence that what we all thought was important actually is important.”
Indeed, peer reviewing is one of the most conservative things one can do. If there’s no demand that facts and choices be checked and assessed, if there’s a “delicacy” about identifying intellectual sleight of hand or words-in-place-of-ideas, if consistency with current opinion is accepted as a value in itself, if what you get is really just a check on whether something is basically OK according to current notions of OKness, then how much more conservative can the process be?

On May 29, when LaCour tried to answer the complaints against him, he severely criticized the grad students who had discovered, not only that they couldn’t replicate his results, but that the survey company he had purportedly used had never heard of him. He denounced them for having gone off on their own, doing their own investigation, without submitting their work to peer review, as he had done! Their “decision to . . . by-pass the peer-review process” was “unethical.” What mattered wasn’t the new evidence they had found but the fact that they hadn’t validated it by the same means with which his own “evidence” had been validated.

In medicine and in some of the natural sciences, unsupported guild authority does not impinge so greatly on the assessment of evidence as it does in the humanities and the social sciences. Even there, however, you need to be careful. If you are suspected of being a “climate change denier” or a weirdo about some medical treatment, the maintainers of the status quo will give you the bum’s rush. That will be the end of you. And there’s another thing. It’s true: when you submit your research about the liver, people will spend much more time scrutinizing your stats than pontificating about how important the liver is or how important it is to all Americans, black or white, gay or straight, that we all have livers and enjoy liver equality. But the professional competence of these peer reviewers will then be used, by The New Republic and other conservative supporters of the status quo in our credentialed, regulated, highly professional society, as evidence that there is very little, very very very little, actual flim-flam in academic publication. But that’s not true.

Dishonest Impositions on Business


In “Lying as a Research Tool” (Liberty, April 2013) I cited a study of employers’ possible discrimination by race as suggested by fictitious applicants’ names on fictitious résumés. Because such studies are remote from my own main interests, I was not then fully aware of how numerous and respected they have become.

One new example, not yet published in an academic journal, has received prominent and enthusiastic attention in the Wall Street Journal’s weekend issue of 17–18 May 2014 and in Auburn University’s online media. The researchers responded to job announcements by emailing thousands of phony résumés of recent college graduates. The fictitious applicants differed in college majors, recent employment or unemployment, internships, prestigiousness of home address, and typically white or typically African-American name. One conclusion was that experience as an intern before graduation improved one’s chances of being invited to a job interview.

The Southern Economic Journal of July 2014 published a similarly conducted study of landlords’ possible discrimination according to whether a prospective tenant’s name and writing style suggested (to use the authors’ categories) a white person, a well-assimilated Hispanic, or a recent immigrant from Latin America.

The authors of such studies cite dozens of similar ones, commenting on the particular questions investigated and on the effectiveness of the particular deceptions employed — but little if at all on their dishonesty. I discussed one of the studies mentioned above by email and then in person with one of its coauthors. What happens when an employer offers a job interview? Answer: the fictitious applicant replies that he or she has meanwhile accepted some other job. Apparently unabashed by the lying that pervades the study, the coauthor excused it with the remark that the end justifies the means, using those very words.

Sissela Bok’s Lying (1978) included a chapter on “Deceptive Social Science Research.” Bok expressed dismay at her examples (though not, of course, at the not-yet-familiar deceptions described here). One reason such deceptions are objectionable is that they create noise in the job and rental markets, possibly disadvantaging genuine applicants. They suggest unconcern about the additional burdens, slight in the individual case but significant in the aggregate, imposed on business, especially small business. The authors presumptuously call such studies, done by correspondence or occasionally with hired actors, “audits” (an “audit” being an official or formal investigation of someone’s accounts or activities to uncover possible error or worse).

But why do they consider business firms fair game for such targeting, almost as if they just existed automatically? Actually, no one is obliged to be in business at all and hire employees or offer rental housing, let alone to endure just anyone’s intrusive impositions.

One ground for hope is that such experiments will destroy their own effectiveness if they become familiar enough to arouse the suspicion and noncooperation of the unwitting guinea pigs. By then, sadly, the general presumption of honesty and trustworthiness essential to a free society and market economy will have become further eroded. Many TV ads and the assertions and promises of politicians are already doing damage enough.

At least in their own profession, academic researchers should uphold standards of honesty.

Lying as a Research Tool


Several years ago a journal article reported on a mailing of hundreds of phony job-application résumés to potential employers. Conspicuously African-American-sounding names were assigned to some of the phony applicants. The researchers found a statistically significant degree of support for the differential response that they had conjectured.

Medical researchers convinced psychiatric hospitals to admit them as patients requiring treatment. Their purpose was to test how hard it was to convince physicians that these patients were sane, after all, and so gain release. In one twist, to see how admission procedures would be affected, one hospital was told, untruthfully, that fake patients would be sent its way (Sam Harris, The Moral Landscape, 141–142).

Research reported in NBER Digest, March 2013, involved sending about 12,000 phony résumés to employers who had posted some 3,000 job vacancies. The résumés showed how long a supposed applicant, if unemployed, had been unemployed. Statistics on “call-backs” from the employers supposedly confirmed discrimination against the long-term unemployed.

Such research raises several questions. Might not some of the employers (or hospitals) subjected to these experiments have vaguely sensed something peculiar and have responded or not responded accordingly? Is it fair to force the unagreed role of experimental guinea pig onto employers, wasting their time and imposing costs, all in addition to their ordinary burdens? Most important, is lying a respectable tool of research? Should academics profit from having their own résumés augmented by such deceptions?


© Copyright 2017 Liberty Foundation. All rights reserved.

Opinions expressed in Liberty are those of the authors and not necessarily those of the Liberty Foundation.

All letters to the editor are assumed to be for publication unless otherwise indicated.