The Scientific Method
"Everyone is entitled to his own opinion, but not his own facts."
-- Daniel Patrick Moynihan (1927-2003), United States Senator from New York
"Facts do not cease to exist because they are ignored."
--Aldous Huxley
"When the facts change, I change my mind. What do you do, sir?"
--attributed to John Maynard Keynes
"It doesn't matter how beautiful your theory is, it doesn't matter how smart you are. If it doesn't agree with the experiment, it's wrong."
--Richard P. Feynman
Before we can proceed, we all must agree that:
- There is an objective reality which is the same for everyone.
- There exist unchanging laws by which the Universe works, and these laws can be discovered (not invented) through experimentation.
This point of view is called objectivism. This is a matter of belief;
we can't prove it to you. But we can justify it. The rules above
lead to progress. They put people on the Moon and robots on Mars and
Titan; they predict solar eclipses centuries in advance; they cure
diseases like smallpox and polio; they give you light at the flick of
a switch and more computing power in your hand than existed in all of
the 12th century.
The alternative to scientific objectivism is called relativism.
Solipsism
Solipsism is the belief that everyone creates their own reality. This approach does not produce any useful results. If your own private reality includes a law of gravity that is different from Newton's, any predictions you make with it are not going to match reality. The solipsistic approach yields nothing useful and does not improve the human condition.
If you think that you are creating your own reality right now, why don't you see if you can make yourself fly? Having trouble?
New-Age or Post-Modern Thinking
There are people today who insist that all points of view are equally valid. Cotton and Scalise are NOT among them. You may think that we are being one-sided or biased because we ignore some "points of view." In science you encounter the disturbing fact that, if your "point of view" does not agree with reality as determined by experiment through the scientific method, then your point of view is simply wrong. Our view of the universe may change as science uncovers more of its secrets, but that change of view will be driven by evidence.
Not only does the relativist viewpoint disagree with observations of the Universe, it is also logically inconsistent. It is self-refuting. See Schick and Vaughn, page 325:
To say that everything is relative is to say that no unrestricted
universal generalizations are true ... but the statement "No
unrestricted universal generalizations are true" is itself an
unrestricted universal generalization. So if relativism in any
of its forms is true, it's false. As a result, it cannot possibly
be true.
The Scientific Method
A way to ensure that you are not fooled by others and that you do not fool yourself.
1. Observation and description of a phenomenon.
e.g. I turned on my desk light, but nothing happened.
2. Formulation of a hypothesis to explain the phenomenon. The hypothesis often takes the form of a causal mechanism or a mathematical relation. This requires creative thinking.
e.g. I think the filament in the bulb is broken; this prevents
     current from flowing through it, so the filament does not
     heat up and glow.
NOT e.g. The light doesn't work because I forgot to say the magic words:
Klaatu barada nikto.
The mechanism should be plausible.
NOT e.g. I think gnomes are eating all the light as fast as it is
     produced.
3. Use of the hypothesis to predict the existence of other phenomena, or to predict quantitatively the results of new observations. This requires critical thinking.
e.g. If the bulb is placed in a fixture known to be
working, then no light will be produced.
4. Performance of experimental tests of the predictions by several independent experimenters using properly performed experiments.
e.g. The bulb did not light in a fixture that
was known to work.
Notice that you have NOT PROVED that the hypothesis is correct. You merely have more confidence in your hypothesis after the test. It still might be wrong.
e.g. The bulb might have a dirty contact which prevents current
     from flowing.
Indeed, you can
NEVER PROVE the hypothesis correct. But if your hypothesis passes test
after test after test, you can be more certain of the hypothesis.
OR e.g. The bulb did work in a second fixture.
This result DISPROVED the hypothesis. Now you can try a different hypothesis and repeat steps 2, 3, and 4. Hypotheses that are not in principle disprovable are not in the purview of science.
e.g. Invisible gnomes that can not be detected
     in any way are eating the light.
This can not be disproved by any test; it is a CONSTRUCT. It is worthless as a hypothesis. You learn nothing about bulbs, nothing about gnomes, nothing about anything.
HYPOTHESIS -> MODEL -> "THEORY" -> LAW -> FACT
The word "theory" in the physical sciences (physics, chemistry, astronomy, etc.) does NOT mean the same as "theorem" in mathematics. A mathematical theorem can be proven deductively to be absolutely true. Theories in the physical sciences have a lot of evidence to back them up, but they may be disproven by a single counter-example at any time.
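The test-and-revise loop in steps 1 through 4 can be sketched as a toy program. This is only an illustration of the logic; the function name and the boolean "test outcome" are hypothetical stand-ins, since a real test is an experiment, not a flag.

```python
# Toy sketch of the hypothesis-testing loop. All names and outcomes are
# hypothetical illustrations, not a real experimental procedure.

def after_test(prediction_confirmed: bool) -> str:
    """Decide what a test result (step 4) means for the hypothesis.

    A passed test increases confidence but NEVER proves the hypothesis;
    a failed test disproves it, sending us back to step 2.
    """
    if prediction_confirmed:
        return "hypothesis survives: more confidence, but still not proved"
    return "hypothesis disproved: formulate a new one and repeat steps 2-4"

# e.g. the bulb failed to light in a known-good fixture, as predicted:
print(after_test(True))
# OR e.g. the bulb DID light in a second fixture:
print(after_test(False))
```

Note the asymmetry the sketch captures: one failed prediction is enough to discard a hypothesis, while no number of passed predictions ever proves it.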
Purpose of the Scientific Method: to construct an accurate, reliable,
self-consistent, non-arbitrary representation of the world.
The procedures are standardized to minimize any prejudice the
experimenter might have when testing the hypothesis.
The Scientific Method is carried out collectively by all researchers. Individual experiments or experimenters can be wrong, but Science is self-correcting. The Scientific Method is the ideal toward which scientists work.
Reference: http://teacher.nsrl.rochester.edu/phy_labs/AppendixE/AppendixE.html
A comment on Atomic Theory as a key example of a testable theory whose fundamental players are not themselves immediately observable
We had a great student question in class about the atomic theory of matter. This is the now well-established theory that the structure of matter and its behavior can be explained by "atoms," tiny building blocks of matter whose properties define the larger properties of materials. This idea was first advanced in India, and in Greece by the thinkers Leucippus and Democritus (the latter being the most famously associated with this idea) around the 5th century BCE. However, like most of their contemporaries, they rejected experimentation as a route to understanding the world. It wasn't until over two millennia later that the idea that experimental tests of the natural world could separate "useful" ideas from "useless" ones was accepted.
The atomic theory is a great example of an explanation that involves things you cannot see. You can see matter. But you cannot see the building blocks of matter. The atoms are too small to see. Indeed, we know now that visible light is an insufficient means by which to perceive the atom; one must use shorter wavelength light, such as x-ray and gamma-ray radiation, or one must employ particles like the electron, which was not discovered until 1897. Yet evidence for the atomic hypothesis began appearing in the 1700s. How was that possible?
The theory of atoms originally required the assumption of unseeable sub-microscopic building blocks whose properties lead to the structure of matter. Sounds crazy! In fact, there was tremendous debate in the scientific community about the "reality" of atoms until the late 1800s and early 1900s. There were even competing scientific theories of matter, such as Benjamin Collins Brodie's "Calculus of Chemical Operations," perhaps one of the best structured alternatives to atoms that turned out to be wrong and is now almost completely forgotten. You can see a discussion of Brodie's role and ideas here: http://www.jstor.org/stable/27757239.
Why did the atomic theory remain while alternatives were discounted? The atomic theory made lots of predictions, and those predictions had observable (testable) consequences that were different from those of, say, the Calculus of Chemical Operations. Atomic theory survived test after test of its predictions. This is what allows a theory to continue without modification - it keeps successfully passing tests. So even though we were not able to literally "see" atoms until the last few decades, the idea that these sub-microscopic building blocks were there had consequences, and those consequences were observed to exist. This is how it is possible to propose (via creative or critical thinking) an explanation of a phenomenon that defies direct observation, while still being testable.
Here are a few more basic concepts about science and critical thinking
that we will need. Be sure you understand these definitions.
The National Academy of Sciences definition of fact:
An observation that has been repeatedly confirmed and for all practical purposes is accepted as true.
"In science, 'fact' can only mean 'confirmed to such a degree that it
would be perverse to withhold provisional assent.' I suppose that
apples might start to rise tomorrow, but the possibility does not
merit equal time in physics classrooms."
--Stephen Jay Gould
e.g. At Standard Temperature and Pressure,
lead is more dense than water.
The National Academy of Sciences definition of theory:
A well-substantiated explanation of some aspect of the natural world that can incorporate facts, laws, inferences, and tested hypotheses.
e.g. Einstein's Special Theory of Relativity
Theories are not easily discarded; new discoveries are first assumed
to fit into the existing theoretical framework. It is only when, after
repeated experimental tests, the new phenomenon cannot be accommodated
that scientists seriously question the theory and attempt to modify it.
A construct is "a non-testable statement to account for a set of observations.
The living organisms on Earth may be accounted for by the statement 'God
made them' or the statement 'They evolved.' The first statement is a
construct, the second a theory. Most biologists would even call
evolution a fact."
--Michael Shermer, Why People Believe Weird Things, page 20
Occam's razor is a logical principle attributed to the mediaeval
philosopher William of Occam (or Ockham) [1285-1349]. The principle
states that one should not make more assumptions than the minimum
needed. This principle is often called the principle of parsimony. It
underlies all scientific modelling and theory building. It admonishes
us to choose from a set of otherwise equivalent models of a given
phenomenon the simplest one. In any given model, Occam's razor helps
us to "shave off" those concepts, variables or constructs that are not
really needed to explain the phenomenon. By doing that, developing the
model will become much easier, and there is less chance of introducing
inconsistencies, ambiguities and redundancies.
The structure of the Solar System is a good example of the application of
Occam's Razor. The geocentric system requires planets circling about empty
points, with epicycles added to account for the non-uniform motions.
Copernicus' heliocentric system solved the problems without the need for epicycles and
the associated assumptions. "Adding epicycles" is modern jargon for
complicating an explanation beyond the point of confidence; it may be time
to stop trying to make the old explanation work and start looking for a new
one.
Occam's Razor is a "heuristic", which means that it does not have a
theoretical base; it is simply something that is usually good to do. It is
important to be aware that heuristics can fail; theoretically derived rules
normally do not.
Has Occam's Razor ever failed?
Sure! Almost every time. The Universe is complicated and the simplest
explanation is probably not correct. Then why use Occam's Razor?
Because one should only add new assumptions when forced to do so by the
evidence, not on a whim. Occam's Razor keeps Science on track by not
allowing it to wander too far afield.
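Parsimony can be sketched as a selection rule: among candidate explanations that account for the data equally well, keep the one with the fewest assumptions. The models and assumption counts below are invented for illustration, not real astronomy.

```python
# Occam's razor as a toy selection rule: of the models that fit the data,
# prefer the one with the fewest assumptions. All numbers are made up.

def occams_razor(models):
    """models: list of (name, assumption_count, fits_data) tuples."""
    adequate = [m for m in models if m[2]]        # first, the model must fit
    return min(adequate, key=lambda m: m[1])[0]   # then, fewest assumptions wins

models = [
    ("geocentric with epicycles", 40, True),
    ("heliocentric",               7, True),
    ("light-eating gnomes",        1, False),     # simplest, but fails the data
]
print(occams_razor(models))  # -> heliocentric
```

Note that simplicity never overrides the evidence: the simplest model is chosen only from among those that survive testing, which is exactly why the razor keeps Science from wandering without forcing it to be right.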
An assumption is something taken to be true without proof. Assumptions are
necessary because nobody knows everything. An assumption is not necessarily
a guess - sometimes an assumption is made based on some knowledge of the situation.
You might call it an educated guess. This is in contrast to what is known as
a WAG (wild-ass guess), in which the guesser really knows nothing and is
simply guessing.
A skeptic asks for evidence before accepting a claim. Anecdotes and "everyone knows it" aren't enough. Skeptics are open-minded enough to look at evidence and decide whether to accept the claim, but not so open-minded that their brains fall out.
Take careful note of the phrase "Extraordinary claims require extraordinary evidence."
It's from Carl Sagan. Whenever someone makes a really far-out claim, DON'T
just take it at face value. Ask for some real evidence in support of the claim.
A skeptic is not the same as a DENIER. For a denier, there is never enough evidence. There are Holocaust deniers, evolution deniers, HIV/AIDS deniers, ...
A cynic questions everybody's motives, figuring all actions are self-interested and/or self-serving. A real cynic is annoying. There are, however, times when a cynical approach is useful: sometimes it will get you the answers you need.
The old saying "If it seems too good to be true, it probably is" is still valid. Take, for example, the download program Kazaa. Millions use it for copying MP3 files on the Web. It was created by a company (profit expected) then given away for free. Questions are appropriate. What's the payoff? What do they get out of this? The answer is that Kazaa contains an unheralded payload - a package from Brilliant Digital that makes your computer part of a large computer processing network. Brilliant can access your computer and use it for their purposes, for which they get paid. They get to sell the use of your computer. That's the payoff.
If an e-mail or telemarketer makes you the most "wonderful" offer that will supposedly make you rich, beautiful, smart, or something, think first. Why are they offering you this? What's their payoff? Telemarketing isn't free; the promoter has to pay for it.