NaimKabir OP t1_j3d3znk wrote

Falsification is just when some other statement incompatible with a theory is "accepted". If you later choose not to accept that statement, the falsification doesn't occur. A falsification is also a single instance you're confident in. One experiment! If you do another experiment, it's not a re-litigation of the previous falsification at time 1; it's just another falsification at time 2. You might choose not to accept Experiment 1's results for some reason, but Experiment 2 could still stand. You just need one instance you accept to falsify a theory.

To verify a theory, you'd need to prove infinitely many cases.

1

NaimKabir OP t1_j3cmvma wrote

Not quite: a falsification doesn't need to generalize; one counterexample is enough. You just need to be confident in that one counterexample.

In the verification scheme, you can never be confident, because you could never test every possible case.

In one case (falsification), confidence is at least possible; in the other, it isn't. That makes falsification strictly better.
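A minimal sketch of that asymmetry, assuming a made-up universal claim `P` (my construction, not from the thread):

```python
# Falsification can terminate on one accepted counterexample;
# verification would have to exhaust an unbounded stream of cases.
from itertools import count

def P(x: int) -> bool:
    """A stand-in universal claim: 'P holds for all x'."""
    return x < 10**6  # holds for a while, then fails

def falsify(theory) -> int:
    for x in count():       # enumerate cases one at a time
        if not theory(x):   # a single counterexample suffices
            return x
    # unreachable: with no counterexample the loop never halts,
    # which is exactly why verification can't terminate with "true"

print(falsify(P))  # -> 1000000
```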

1

NaimKabir OP t1_j3cdv23 wrote

Thankfully, being falsified just once at any point in space and time is enough to say a theory isn't generally correct for all space and time, so you can throw it out.

This asymmetry, between how easy it is to produce a counterexample and how hard it is to verify universally, is why we stick with falsification as the main avenue for scientific progress.

1

NaimKabir OP t1_j3cbv3f wrote

The Ptolemaic model could also have been made to fit, given enough added complexity.

Kuhn:

"Given a particular discrepancy, astronomers were invariably able to eliminate it by making some particular adjustment in Ptolemy’s system of compounded circles."

It's just that the juice isn't worth the squeeze as models grow more complex, so we switched.

1

NaimKabir OP t1_j3adb2h wrote

You can disagree with the premise, but this is the philosophy that underlies most of the scientific method today.

Science is a series of propositions that happen to be useful. Gravity is a name: we can model it in different ways. In one case it's an ever-present force emanating from a mass; in another it's a geodesic in spacetime. These are models that put our observations into simple, elegant pictures.

Reality is composed only of instances of observations: not theories (and so, not forces, laws, particles, etc.). Theories are just a net we throw over observations to give them a gestalt, an overall picture: but the net isn't real, the same way constellations aren't real. It's a picture connecting dots.

1

NaimKabir OP t1_j3a95k2 wrote

Verification of general principles would require us to go through every instance of an event and check that it holds! The idea is that any theory we've got is only assumed generalizable until falsified; it can't be known to be *true* for every domain for all time.

Hume explored it like this: Say we observe A causing B. It happens repeatedly, even when we kick off A ourselves. Is this enough to say A is always followed by B? We might say: yes, because past evidence has pointed at A->B. But why do we think past evidence means the trend will continue? We'd have to say: because past evidence has pointed at continuing trends in the past. But this argument is circular, so it can't work.

A sillier version of this argument: Descartes' evil demon. Let's say an evil demon has been deceiving us with evidence at every turn; it could stop at any time and reveal our generalizations to be poor matches for a non-demon world. We can't be sure theories are always true (we can't do induction from empirical facts); we can only stick with a theory until it's falsified.

1

NaimKabir OP t1_j39bvfc wrote

My point is that we could have made an overly complex theory that perfectly models our solar system geocentrically. In the extreme case, imagine we trained a neural net on geocentric images: the model could have millions of parameters and make perfect predictions. However, we wouldn't call this model true, because it isn't simple. What we call truth always sits at the edge of what is unfalsified and what is simplest, by convention.
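A toy analogue of this, as a sketch (the polynomial stand-in is my own construction; the comment's example is a neural net): a maximally complex model can fit any finite set of observations perfectly, yet we wouldn't call it true on that basis alone.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 10)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.01, size=10)  # data from a simple law

simple = np.polyfit(x, y, deg=1)    # 2 parameters
complex_ = np.polyfit(x, y, deg=9)  # 10 parameters: interpolates the data

print(np.abs(np.polyval(complex_, x) - y).max())  # ~0: "perfect" predictions
print(np.abs(np.polyval(simple, x) - y).max())    # small, but nonzero
```

Both models survive the observations; simplicity is what breaks the tie.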

−2

NaimKabir OP t1_j395krh wrote

That's kind of getting at my thesis: in science, nothing is ever really there. All we have is the ability to falsify statements, given some basic statements we form from sensory information.

There are a vast number of statements we can make that would be unfalsified by sense statements: what we call true are the theories and models that have the highest potential for falsification. Because of some set-theory assumptions I make in the article, the models with the highest potential for falsification are the simplest ones (out of the set of as-yet-unfalsified theories)!
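A toy illustration of that set-theoretic point (my own example, not the article's): a simpler, stricter theory forbids more possible observations, so its set of potential falsifiers is larger.

```python
possible_orbits = [0.0, 0.1, 0.5, 0.9, 1.2]  # hypothetical eccentricities

def circles_only(e):   # stricter theory: only e == 0 is allowed
    return e == 0.0

def any_conic(e):      # looser theory: any eccentricity is allowed
    return e >= 0.0

def potential_falsifiers(theory):
    return [e for e in possible_orbits if not theory(e)]

print(len(potential_falsifiers(circles_only)))  # 4: many ways to refute it
print(len(potential_falsifiers(any_conic)))     # 0: nearly irrefutable
```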

1

NaimKabir OP t1_j38cma2 wrote

That framing makes it sound like the truth is out there and the razor points at it. Rather, reality can be modeled by many combinations of logical statements, and we use the razor to select the one we call true.

Truth is a consequence of us deciding on a set of unfalsified "empirical statements" arranged in a certain way, and one of our requirements is simplicity.

2

NaimKabir OP t1_j38361c wrote

Correct. I didn't say Occam's razor is the sole definer: the other side of the equation is whether your model has been falsified.

But given two competing unfalsified theories, what we call "true" is decided by simplicity considerations. This falls naturally out of Karl Popper's framework in The Logic of Scientific Discovery, and I draw out that logical argument here. It's something Popper puts forward indirectly himself.

−1