This is an interesting post about the formal basis of scientific inquiry, in which it’s proven that for a sufficiently large universe of observable things, the probability of a scientific law being true is zero!
Conjectures and Refutations » Blog Archive » Falsificationism In One Lesson
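The headline result can be made concrete with a standard back-of-the-envelope argument (my gloss, not a quotation from the linked post): treat a universal law as a conjunction over n instances, and assume for simplicity that the instances are independent and each holds with some fixed probability p < 1. Then

$$P(\text{law}) \;=\; P\!\left(\bigwedge_{i=1}^{n} A_i\right) \;=\; \prod_{i=1}^{n} P(A_i) \;=\; p^{\,n} \;\longrightarrow\; 0 \quad \text{as } n \to \infty,$$

so as the universe of observable things grows without bound, the prior probability of any non-trivial universal generalization is driven to zero.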
Even in situations where the range of x is thus restricted, a high degree of falsifiability is a desirable trait in a hypothesis. It allows the hypothesis to be tested more readily and eliminated quickly if false, and a higher degree of falsifiability in a theory corresponds to greater simplicity: there are more ways for a data point to lie off a straight line than off a quartic curve. Given any collection of data points, there's always a more probable (because less falsifiable) function that runs through all of them, however hideously convoluted it may be (think Ptolemaic epicycles). In the limiting case, we can always plead persistent delusion to avoid having to accommodate recalcitrant data, or answer "god did it" to everything, or pile up new assumptions on an ad hoc basis. But what ultimately distinguishes the scientific mindset from the unscientific one is a willingness to put a theory up against critical tests, a desire for elegance, and a strong aversion to ad hockery.

The other counterintuitive upshot of this conclusion is that it's improbability rather than probability which is a virtue in a theory. A priori, it's more logically probable that your data will be scattered all over the place than that they'll all line up along a perfectly straight line; the sketch below makes both points concrete. We should strive for explanations which are both true and simple, but we should also consider it something of an improbable miracle when we manage to find both together in one theory.
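To see the line-versus-quartic point in action, here's a minimal Monte Carlo sketch (my illustration, not from the post): draw random datasets, least-squares-fit both a line and a quartic, and count how often each family can accommodate the data within a fixed tolerance. The point count, tolerance, and uniform noise model are all arbitrary choices made for illustration.

```python
# Minimal sketch: how often can a line (degree 1) vs. a quartic (degree 4)
# accommodate purely random data? The more flexible curve forbids less,
# so it "survives" random data more often -- i.e., it is less falsifiable.
import numpy as np

rng = np.random.default_rng(0)
n_points, n_trials, tol = 8, 10_000, 0.5  # arbitrary illustration parameters

x = np.linspace(0.0, 1.0, n_points)
hits = {1: 0, 4: 0}  # polynomial degree -> datasets accommodated within tol

for _ in range(n_trials):
    # "data scattered all over the place": y-values drawn uniformly at random
    y = rng.uniform(-1.0, 1.0, n_points)
    for degree in hits:
        coeffs = np.polyfit(x, y, degree)      # best-fit line or quartic
        residuals = y - np.polyval(coeffs, x)
        if np.max(np.abs(residuals)) < tol:    # close enough to every point?
            hits[degree] += 1

for degree, count in hits.items():
    print(f"degree {degree}: {count / n_trials:.1%} of random datasets accommodated")
```

On a typical run the quartic accommodates several times as many random datasets as the line does, which is just the Popperian point restated: the quartic forbids less, so it's logically more probable and less falsifiable, while the straight line sticks its neck out.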