Of 100 published papers recently tested, only one third could be reproduced. One of the duds had been cited 2,000 times. Replication is supposed to be the bedrock of science, yet in the behavioural sciences it is sand. Data from tiny samples of white American college students (often paid to participate) are massaged until a significant correlation appears, by professors who vote 95 percent Democrat, occasionally cheat, and gain nothing by reporting negative results.
"Over 270 researchers, working as the Reproducibility Project, had gathered 100 studies from three of the most prestigious journals in the field of social psychology. Then they set about to redo the experiments and see if they could get the same results. Mostly they used the materials and methods the original researchers had used. Direct replications are seldom attempted in the social sciences, even though the ability to repeat an experiment and get the same findings is supposed to be a cornerstone of scientific knowledge. It’s the way to separate real information from flukes and anomalies. These 100 studies had cleared the highest hurdles that social science puts up. They had been edited, revised, reviewed by panels of peers, revised again, published, widely read, and taken by other social scientists as the starting point for further experiments. Except . . . Nearly two-thirds of the experiments did not replicate, meaning that scientists repeated these studies but could not obtain the results that were found by the original research team.”
Statistical significance works in large random samples but not in small non-random ones. If you keep sorting for significance, you can always find some connection, like the "sixteen amazing parallels between Abraham Lincoln and John F. Kennedy". It is like the joke that an infinite number of monkeys typing forever would eventually tap out the complete works of Shakespeare.
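The point is easy to demonstrate. A minimal sketch: run a hundred small-sample correlation tests on pure random noise, where there is nothing real to find, and some of them will still clear the p < .05 bar by chance. The sample size, number of tests, and approximate critical value are my assumptions for illustration, not figures from the studies discussed above.

```python
import random
import statistics

def pearson_r(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(1)
N = 20            # a tiny sample, like many psychology studies
TESTS = 100       # number of hypotheses we "sort" through
CRITICAL_R = 0.444  # approx. |r| for p < .05 (two-tailed) at n = 20

hits = 0
for _ in range(TESTS):
    xs = [random.gauss(0, 1) for _ in range(N)]
    ys = [random.gauss(0, 1) for _ in range(N)]  # pure noise, no real effect
    if abs(pearson_r(xs, ys)) > CRITICAL_R:
        hits += 1  # a "significant" finding that is actually a fluke

print(f"{hits} of {TESTS} pure-noise comparisons came out 'significant'")
```

Typically about five of the hundred noise comparisons come out "significant", which is exactly what a 5 percent false-positive rate promises. A researcher who runs many such tests and publishes only the hits will always have something to report.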
More than 70 percent of the world’s published psychology studies are generated in the United States. Two-thirds of them draw their subjects exclusively from the pool of U.S. undergraduates, according to a survey by a Canadian economist named Joseph Henrich and two colleagues. And most of those are students who enroll in psychology classes. White, most of them; middle- or upper-class; college educated, with a taste for social science: not John Q. Public. This is a problem—again, widely understood, rarely admitted. College kids are irresistible to the social scientist: They come cheap, and hundreds of them are lying around the quad with nothing better to do. Taken together, Henrich and his researchers said, college students in the United States make “one of the worst subpopulations one could study for generalizing about Homo sapiens.”
Publication bias, compounded with statistical weakness, makes a flood tide of false positives. “Much of the scientific literature, perhaps half, may simply be untrue,” wrote the editor of the medical journal The Lancet not long ago. The literature, continued the editor, is “afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance.”
The Weekly Standard article is well worth reading in full.
UPDATE: Economics papers don't replicate either. That puts roughly half of that field into fake or slovenly territory as well.
"Economics research is usually not replicable."
That's the conclusion of economists Andrew C. Chang and Phillip Li in a new study released as part of the Finance and Economics Discussion Series at the Federal Reserve. Analyzing research from thirteen top economics journals, Chang and Li were able to replicate the findings of just 29 of the 59 papers they scrutinized, and that was with the assistance of the original authors.