Thursday, September 3, 2015

Study Delivers Bleak Verdict on Validity of Psychology Experimental Results


Vincent J. Curtis

3 Sept 15


In 1932, Jerome Michael and Mortimer J. Adler published a book entitled, Crime, Law, and Social Science.  The University of Chicago, for which they worked, had been offered $5 million to found a school of criminology, and the University commissioned their study into the status of criminology to determine whether that discipline was mature enough to warrant the University founding such a school.  The authors reported adversely, and the University turned down the offer.  That report was later released as the aforementioned book.

The authors concentrated primarily on criminology, but determined that their findings could be extended to the disciplines of sociology and psychology.  The authors found that these disciplines were not sciences at all, for they lacked foundational theories and analyses of their subject matters.  Consequently, the published empirical studies in these disciplines lacked validity and significance.

According to Michael and Adler, the authors in those disciplines seemed to have no idea how to perform controlled statistical experiments; combined with having no idea what made a science a science, this handicapped their work even more.

Crime, Law, and Social Science was republished in 1971, and Adler makes a favorable reference to it in his 1976 autobiography, Philosopher at Large.

The book by Michael and Adler is an extremely heavy read.  Its criticism of sociology, criminology, and psychology was to a limited extent accepted by those disciplines, and better statistical methods are now in use.  However, the fatal flaw of lacking a foundational theory or analysis still bedevils those disciplines, and better statistical methodology has not relieved them of their problems with validity and significance.

A report by the British newspaper The Guardian of 27 Aug 15 covers an article in Science magazine, which ran an analysis of 100 studies published in top-ranking journals in 2008.  It found that 75 % of social psychology experiments and half of cognitive studies could not be replicated.  Some extracts of that story are as follows:

“The study, which saw 270 scientists repeat experiments on five continents, was launched by psychologists in the U.S. in response to rising concerns over the reliability of psychology research.  ‘There is no doubt I would have loved for the effects to be more reproducible,’ said Brian Nosek, a professor of psychology who led the study at the University of Virginia.”

“In the investigation, a whopping 75 % of the social psychology experiments were not replicated, meaning that the originally reported findings vanished when other scientists repeated the experiments.  Half of the cognitive psychology studies failed the same test.”

“Even when scientists could replicate original findings, the sizes of the effects they found were on average half as big as reported the first time around.  That could be due to scientists leaving out data that undermined their hypotheses, and by journals accepting only the strongest claims for publication.”

“Sadly, the picture it paints – a 64 % failure rate even among papers published in the best journals in the field – is not very nice about the current status of psychological science in general, and for fields like social psychology it is just devastating,” said John Ioannidis, professor of health research and policy at Stanford University.

“If I want to get promoted or get a grant, I need to be writing lots of papers.  But writing lots of papers and doing lots of small experiments isn’t the way to get one really robust right answer.  What it takes to be a successful academic is not necessarily that well aligned with what it takes to be a good scientist,” said Marcus Munafo, a co-author of the comparative study and professor of psychology at Bristol University.

All the authors quoted in the story went on to put the best possible gloss on the failure.  “That’s science for you, and we’ve got to do better,” being the general sentiment.  The criticism by Munafo is a little darker since it hints at corruption being at the root of failure.

However, neither optimism nor dark hints at funding driving bad research can or will change the grim outlook for the soft sciences.  These sciences are called soft because they are not yet sciences, and they won’t be until they are founded upon theories and analyses as physics and chemistry are.  (For example, Newton’s three laws of motion are the theoretical basis for kinetics; and every empirical test of those laws bears out the truth of those laws until quantum effects are encountered.) 

These disciplines can never (going beyond Michael and Adler) become sciences as physics and chemistry are because they deal with man and not matter.  They call themselves sciences because they seek the prestige of actual sciences, even though they have failed to deliver on the substance.  So far, they offer promises of delivery.  Soon.

They were put on notice to deliver as long ago as 1932, and they manifestly haven’t.

Man is a far more complex subject to study than matter because matter behaves reproducibly and man does not.  It is easy for physics and chemistry to adopt a philosophy of materialism, putting the power of the form into the matter.  But man’s form is not his matter but his rational soul (to employ terms from Aristotle and Thomas Aquinas) and materialism simply cannot be applied usefully upon a rational object, any more than determinism can be.  Thus the alleged “scientists” in sociology and psychology bark up the wrong tree when they try to apply a philosophy of materialism and determinism to their disciplines.

This is partly the basis for the failures of validity and significance in those disciplines.  The problem of validity arises when measurements are not reproducible.  Two untrained observers producing invalid observations is bad enough; but when two trained observers fail to observe the same thing, there is a serious problem with the data being sought.

Even if valid data is obtained, the next problem is the significance of the observation in the context of the overall theory.  “So what?” is a question that needs to be answered in a non-trivial way.

When you try to apply the philosophy of materialism and determinism to a subject that is not bound by them, it is only a matter of time before studies and observations founded in materialism and determinism begin to go awry.  And so it was found.

Until “scientists” in the fields of sociology and psychology come up with a foundational theory or analysis, their studies will always be worthless because they lack significance, and the variables they analyze will lack validity since they have no comprehensive theoretical basis.
-30-

