"There are two kinds of scientific progress: the methodical experimentation and categorization which gradually extend the boundaries of knowledge, and the revolutionary leap of genius which redefines and transcends those boundaries. Acknowledging our debt to the former, we yearn, nonetheless, for the latter."
—Academician Prokhor Zakharov, Sid Meier's Alpha Centauri

Ah, the scientific method: hated by The Fundamentalist, loved by just about everyone else. Misrepresenting or outright vilifying it leads immediately into Artistic License territory. The actual Scientific Method is a collection of techniques that make up a cyclic process, of which there are many versions and variations. A simplified version is below:
- Observe conditions. (Start recording.)
- Ask questions based on your observations. (Such as, "What would happen to Z if I change A?")
- Form a hypothesis. (Make a statement. "Since A behaves in X way, if Y is done Z should happen, because G." Or something of that sort.)
- Make testable predictions based on your hypothesis. (This gives you something to test. It also limits the power of your bias to interpret your results as whatever you wanted them to be. For example, if you're testing the effectiveness of a cleaning fluid, you can't just clean a mess with it, inspect the result, and decide afterward that what you see is "pretty clean"; making a specific prediction beforehand, with guidelines for how you'll measure things, forces you to confront what really happened.)
- Test your hypothesis. (How does your hypothesis measure up to the data you gather?)
- Refine your hypothesis. (There's always something inaccurate, imprecise, or just plain wrong with your initial hypothesis.)
- Compare your hypothesis to current theories. (See how the latest version of your hypothesis fits in with the rest of scientific knowledge and develop a theory.)
- Repeatability. Despite popular portrayals, no one experiment can or should single-handedly overturn a theory, or create a new one. After all, experiments involve numerous variables and events; the results from just one won't necessarily mean what you assume they do (and you may have simply made a mistake). By repeating experiments, you get a better sense of whether the phenomenon you're studying occurs at different times of year, with different subjects, and so on. If it keeps happening, the phenomenon is "repeatable", and the more repeatable it is, the more likely it is to be real. (This is one of the hurdles faced by any Einstein Sue whose hypotheses contradict the current consensus.)
- Control. Your experiment may not be very meaningful if you only observe changes in a single subject. It's good to have a "neutral and ordinary" subject, kept as similar to the original as possible, for the sake of comparison. For example, in order to be approved by the United States FDA, a drug has to go through a test wherein one group of people takes the drug and another (unknowingly) takes a fake pill called a placebo. Without the control group, it would be harder to say whether any improvement was actually caused by the drug, as opposed to the Placebo Effect (when people feel better because they think they took the drug) or people with that condition simply getting better over time anyway. Post hoc ergo propter hoc is the fancy Latin term for the fallacy of assuming, "X happened, then Y happened, therefore X caused Y!"; in English we say, "correlation does not equal causation." Fictional science falls prey to this fallacy with depressing frequency, and in real life, control is one means of fighting it and keeping in touch with reality. One everyday situation in which non-scientists use "control subjects" is diagnosing problems with a piece of technology. For example, if a DVD isn't playing properly, you might keep everything the same except for switching the DVD player with a different one, or trying a different DVD, and so on, until you isolate the cause.
- Consilience. Whatever your findings are, they ought to match those of other scientists in other fields, or else something somewhere is askew. A geneticist comparing bird DNA to reptile DNA should reach conclusions consistent with those of a zoologist comparing their bones, because the theory of evolution predicts that these types of data match up. Thus science is more cooperative across disciplines than the popular portrayal of competitiveness suggests (an example of such a portrayal is Hard on Soft Science). When consilience isn't happening, it raises some flags, as with the apparent contradictions between the current models of relativity and quantum mechanics. This can be frustrating, but it also makes many scientists excited for the opportunity to fill a hole in our understanding and maybe win a Nobel prize or two.
- Falsifiability. This one can be counter-intuitive. Falsifiability is not about whether the theory is false, but about whether it could, even in principle, be proven false by some test or observation. For example, a claim that an ancient Japanese coin is buried somewhere in the soil of Mars could hypothetically be proven (if we found the coin), but it is for all practical purposes impossible to disprove ("the soil of Mars" being essentially an infinite resource that we can never finish digging up; the coin could always be just in the next shovelful). This means the claim is not falsifiable, and therefore isn't a question science can answer.
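The "control" and "repeatability" principles above can be made concrete with a toy simulation. Everything here is invented for illustration (the effect sizes are arbitrary, not real medical data): both groups improve a little over time and both feel the placebo effect, only the treatment group gets the drug's extra effect, and the trial is repeated many times instead of trusting any single run.

```python
import random

random.seed(42)

# Hypothetical effect sizes (illustrative only, not real data):
# everyone improves a little over time, the placebo effect adds more,
# and the drug adds a further boost on top of that.
BASELINE_RECOVERY = 0.2
PLACEBO_EFFECT = 0.1
DRUG_EFFECT = 0.15

def run_trial(n_subjects=100):
    """One controlled trial: both groups get the baseline and placebo
    effects; only the treatment group gets the drug effect."""
    def recoveries(extra_effect):
        p = BASELINE_RECOVERY + PLACEBO_EFFECT + extra_effect
        return sum(random.random() < p for _ in range(n_subjects))
    treated = recoveries(DRUG_EFFECT)
    control = recoveries(0.0)
    return treated, control

# Repeatability: a single trial could mislead, so repeat it many times
# and see how often the treatment group actually beats the control.
trials = 50
wins = sum(1 for _ in range(trials)
           if (lambda tc: tc[0] > tc[1])(run_trial()))

print(f"treatment beat control in {wins}/{trials} trials")
```

Without the control group, the baseline recovery and placebo effect would be indistinguishable from the drug's effect; without the repetition, a single unlucky trial could hide a real effect (or a lucky one could invent a fake one).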
The reverse situation, a falsifiable claim that can't exactly be proven, actually is a scientific question. Science isn't often in the business of proving things—that's a job for mathematics. Proof is nice, but if the question is unprovable, a large enough body of evidence will suffice to convince scientists. An example of something that is simply unprovable with science is evolution, though it's almost universally accepted because of overwhelming evidence. An example of something that can be—and has been!—directly confirmed is the existence of gravitational waves, postulated by Einstein and announced by the LIGO collaboration in early 2016.
Statistical questions, like whether video games cause violence, are neither falsifiable nor provable in the strict sense, both because the effects are probabilistic and because you can't ever test a group of people that truly represents all 7 billion of us. In practice, analysts have to pick a "confidence level" rather than declaring the conclusion outright true or false, and they also have to sample the largest, most diverse group possible. You can exploit this. Depending on whether you want to "prove" yes or no, you can pick the smallest, least diverse group likely to guarantee positive results for your opinion, or test a behavior that's guaranteed to show up but has nothing to do with violence. The best part is, once you're in the newspaper, most readers will only look at your headline. See Lies, Damned Lies, and Statistics.