OUTSIDE THE SCIENCES
A university is an interesting and
stimulating place. I have essentially lived in one since 1957.
One of the earliest institutions resembling a university was the Great Library of Alexandria, founded around 300 BC.
Something resembling the modern university began to appear and
slowly evolve even before the end of the Middle Ages. For
example, the University of Oxford is the oldest university in
the English-speaking world, with evidence of teaching as early
as 1096. A modern university tends to divide among practical
fields such as engineering, scientific fields such as physics
and chemistry, and the "Humanities," a huge spectrum of "soft"
studies which, while maintaining some scholarly traditions of
honesty and openness, are not considered to be in quest of
knowledge that is valid in a scientific sense. However, one
scientific criterion can be applied to a study of any kind: can
the results of a specific investigation be replicated by
independent researchers? Alas, studies in "the humanities"
notoriously fail this test. So it is worth asking whether we
can or should believe any "social science" result.

The depressing finding of recent surveys is that at least half
of all published study results in "soft" fields cannot be
replicated.
The only academic units with a slightly better track record in
their research studies are colleges of education. Probably the
reason is that new education methods and classroom approaches
are usually tested quickly in actual classrooms. However, my
own observations, which span my entire time at educational
institutions, indicate that these "Ed School Ideas" are treated
as fads: tried for a while, then abandoned for later fads.
During my lifetime, no "new" educational approach has had any
noticeable impact on student learning and mastery of classroom
material. Indeed, quite the contrary! Student performance,
particularly in K-12 education, has been in steady decline
since the late 1940s!

Statistical error problems in the social sciences often stem
from small sample sizes, p-hacking, measurement error, and
confusing correlation with causation. Common issues include
inflating units of analysis (e.g., treating repeated
observations as independent subjects), improper control groups,
and ignoring outliers, all of which lead to biased,
irreproducible, or false-positive research results.
What is usually cited in defense of the validity of such
studies is the p-value, whose proper interpretation and
significance are themselves hotly debated.
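The trouble with leaning on p-values alone can be shown with a
short simulation. This is a minimal sketch, assuming normally
distributed data and a simple z-test (the trial counts and
sample sizes are illustrative, not from any real study): a
researcher who runs twenty comparisons on data containing no
real effect will still find a "significant" result most of the
time.

```python
import math
import random

def two_sided_p(z):
    """Two-sided p-value for a standard-normal test statistic."""
    return math.erfc(abs(z) / math.sqrt(2))

def fake_study(n_tests=20, n=30):
    """Run n_tests comparisons of two groups drawn from the SAME
    N(0,1) population; return True if any test comes out
    'significant' at p < 0.05 despite there being no real effect."""
    for _ in range(n_tests):
        a = [random.gauss(0, 1) for _ in range(n)]
        b = [random.gauss(0, 1) for _ in range(n)]
        # z-test for a difference in means (variance known to be 1)
        z = (sum(a) / n - sum(b) / n) / math.sqrt(2 / n)
        if two_sided_p(z) < 0.05:
            return True
    return False

random.seed(42)
trials = 2000
hits = sum(fake_study() for _ in range(trials))
print(f"Studies with at least one 'significant' result: {hits / trials:.0%}")
# Theory predicts about 1 - 0.95**20, roughly 64%.
```

With twenty independent tests at the 0.05 level, chance alone
delivers a "finding" in roughly two studies out of three, which
is exactly the mechanism behind p-hacking.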
Key Statistical Error Problems:
📉 Measurement Error: Occurs when constructs (e.g., trust,
intelligence) are not accurately measured, creating weak or biased
relationships.
📉 Inflated Units of Analysis: Researchers may mistakenly treat
multiple observations from one subject (e.g., pre/post-tests) as
independent experimental units, artificially increasing degrees of
freedom and boosting the chance of false positives.
📉 P-hacking and Flexibility: Manipulating data or analysis
choices (e.g., picking covariates)
until non-significant results become significant.
📉 Spurious Correlation: Finding patterns that are merely
coincidence or driven by a third, unseen variable.
📉 Small Sample Sizes: Low statistical power makes it difficult to
detect true effects or results in overestimation of effects.
[In my experience, this is the most common of all errors in the
social sciences.]
📉 Ecological Fallacy: Assuming that individual members of a group
have the same attributes as the average of that group.
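The small-sample problem in the list above can be made concrete
with a sketch of the "winner's curse": when only significant
results get reported, small studies systematically exaggerate
the truth. All numbers here (a true effect of 0.2, groups of 15)
are illustrative assumptions.

```python
import math
import random

def small_study(n=15, true_effect=0.2):
    """One tiny two-group study; returns (effect estimate, z score)."""
    a = [random.gauss(true_effect, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    est = sum(a) / n - sum(b) / n
    return est, est / math.sqrt(2 / n)

random.seed(7)
# Keep only the studies that clear p < 0.05 ("publication").
published = [est for est, z in (small_study() for _ in range(5000))
             if math.erfc(abs(z) / math.sqrt(2)) < 0.05]
avg = sum(published) / len(published)
print(f"True effect: 0.20, average published effect: {avg:.2f}")
```

The low-powered studies that happen to reach significance report
effects several times larger than the real one, so a literature
built only from them overestimates everything.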
Common Pitfalls and Data Misinterpretation:
📉 Circular Analysis: Analyzing data to support a hypothesis that
was used to select the data in the first place.
📉 Ignoring Outliers/Errors: Failing to screen data for typos
(e.g., entering 666 instead of 66) or extreme values.
📉 Misinterpreting Non-significant Comparisons: Claiming an
intervention worked in one group but not another without directly
comparing the two groups.
📉 Graph Misinterpretation: Using skewed axes to make small
differences appear large. [This is another very common
error.]
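The "worked in one group but not the other" pitfall above has a
simple numeric illustration. The effect estimates and standard
errors below are hypothetical, chosen only to show that a
significant and a non-significant result need not differ
significantly from each other.

```python
import math

def two_sided_p(z):
    """Two-sided p-value for a standard-normal test statistic."""
    return math.erfc(abs(z) / math.sqrt(2))

# Hypothetical effect estimates with their standard errors:
eff_a, se_a = 0.42, 0.20   # group A: "significant"
eff_b, se_b = 0.26, 0.20   # group B: "not significant"

p_a = two_sided_p(eff_a / se_a)   # about 0.04
p_b = two_sided_p(eff_b / se_b)   # about 0.19

# The correct question: do the two effects differ from each other?
z_diff = (eff_a - eff_b) / math.sqrt(se_a**2 + se_b**2)
p_diff = two_sided_p(z_diff)      # about 0.57 -- no evidence at all
print(p_a, p_b, p_diff)
```

Headlining "it worked for A but not for B" from these numbers
would be exactly the fallacy described above.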
Solutions and Best Practices:
📈 Pre-registration: Clearly defining study hypotheses and
analysis plans before conducting studies, to reduce p-hacking.
📈 Data Visualization: Using clear graphs to identify outliers and
understand distributions.
📈 Rigorous Data Checking: Thoroughly cleaning data to remove
entry errors and typos.
📈 Transparency: Disclosing all statistical decisions and data
analyses performed.
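As a minimal sketch of the data-checking advice, here is the
666-for-66 typo from the list above in action (the ages are
invented). A single entry error wrecks the mean, and a trivial
range check catches it.

```python
# Hypothetical recorded ages; 666 was meant to be 66.
ages = [61, 58, 66, 63, 59, 666, 62, 60]

mean_raw = sum(ages) / len(ages)          # badly inflated by the typo

# Screen for physically impossible values before analyzing.
clean = [a for a in ages if 0 <= a <= 120]
mean_clean = sum(clean) / len(clean)

print(f"raw mean: {mean_raw:.1f}, cleaned mean: {mean_clean:.1f}")
```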
The Infamous Marshmallow Experiment!
MOST OF THE TEXTBOOKS ARE WRONG!
List of statistical fallacies!
Another List!
Yet Another List!