“A Big Study on Honesty Turns Out to Be Based on Fake Data”

So reports a very interesting article at BuzzFeed News (Stephanie M. Lee), based on research at Data Colada. One of the original researchers has stated that he is “completely convinced by the analyses provided by Simonsohn, Simmons, and Nelson and their conclusion that the field experiment (Study 3) in Shu, Mazar, Gino, Ariely, and Bazerman (2012) contains fraudulent data; as a result, Shu, Gino, and I contacted PNAS [Proceedings of the National Academy of Sciences] to request retraction of the paper on July 22, 2021.” (Or, wait, what if those are all forgeries? How can anyone really know?)

Here’s an excerpt from the Data Colada post:

In 2012, Shu, Mazar, Gino, Ariely, and Bazerman published a three-study paper in PNAS (.htm) reporting that dishonesty can be reduced by asking people to sign a statement of honest intent before providing information (i.e., at the top of a document) rather than after providing information (i.e., at the bottom of a document). In 2020, Kristal, Whillans, and the five original authors published a follow-up in PNAS entitled, “Signing at the beginning versus at the end does not decrease dishonesty” (.htm). They reported six studies that failed to replicate the two original lab studies, including one attempt at a direct replication and five attempts at conceptual replications.

Our focus here is on Study 3 in the 2012 paper, a field experiment (N = 13,488) conducted by an auto insurance company in the southeastern United States under the supervision of the fourth author. Customers were asked to report the current odometer reading of up to four cars covered by their policy. They were randomly assigned to sign a statement indicating, “I promise that the information I am providing is true” either at the top or bottom of the form. Customers assigned to the ‘sign-at-the-top’ condition reported driving 2,400 more miles (10.3%) than those assigned to the ‘sign-at-the-bottom’ condition.

The authors of the 2020 paper did not attempt to replicate that field experiment, but they did discover an anomaly in the data: a large difference in baseline odometer readings across conditions, even though those readings were collected long before – many months if not years before – participants were assigned to condition. The condition difference before random assignment (~15,000 miles) was much larger than the analyzed difference after random assignment (~2,400 miles) ….

In trying to understand this, the authors of the 2020 paper speculated that perhaps “the randomization failed (or may have even failed to occur as instructed) in that study” (p. 7104).

On its own, that is an interesting and important observation. But our story really starts from here, thanks to the authors of the 2020 paper, who posted the data of their replication attempts and the data from the original 2012 paper (.htm). A team of anonymous researchers downloaded it, and discovered that this field experiment suffers from a much bigger problem than a randomization failure: There is very strong evidence that the data were fabricated.

We’ll walk you through the evidence that we and these anonymous researchers uncovered, which comes in the form of four anomalies contained within the posted data file. The original data, as well as all of our data and code, are available on ResearchBox (.htm).
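To make the anomaly concrete: under genuine random assignment, odometer readings collected before assignment should not differ systematically across conditions, so the check the 2020 authors ran amounts to comparing pre-assignment means between groups. Here is a minimal sketch of that kind of baseline-balance check; the file and column names are hypothetical, not the actual structure of the posted data file.

```python
# Minimal sketch of a baseline-balance check. The filename and the
# column names ("condition", "baseline_odometer") are hypothetical;
# the actual posted data file may be organized differently.
import pandas as pd
from scipy import stats

df = pd.read_csv("study3_field_data.csv")  # hypothetical filename

top = df.loc[df["condition"] == "sign_top", "baseline_odometer"]
bottom = df.loc[df["condition"] == "sign_bottom", "baseline_odometer"]

# Baseline readings predate random assignment, so a large systematic
# difference between conditions is a red flag for the randomization
# (or, as it turned out here, for the data themselves).
diff = top.mean() - bottom.mean()
t, p = stats.ttest_ind(top, bottom, equal_var=False)  # Welch's t-test
print(f"baseline mean difference: {diff:,.0f} miles")
print(f"Welch t = {t:.2f}, p = {p:.4g}")
```

A comparison along these lines is what surfaced the ~15,000-mile baseline gap quoted above; the point is simply that the red flag required nothing more exotic than comparing pre-assignment means across conditions.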

Very interesting, and sobering. Science, whether social, medical, or physical, is tremendously important to sound decisionmaking, both societal and personal. We can’t expect it to be perfect, but when done right, it’s much better than the alternative (which is generally intuition plus limited, poorly remembered, and poorly analyzed observation). But scientists are humans, with all the faults that humans have; and we’ve seen many examples of them committing human errors that have cast serious doubt on a wide range of scientific findings.

It’s not clear that any of the authors were actually complicit in fabricating the data, even if the claims of fraud are correct. But one way or another, it appears that the 2012 study can’t be trusted.

