[Martin Taylor 2010.08.31.17.28]
Written as dated above, but a loss of my Internet connection
prevented its transmission.
[From Rick Marken (2010.08.31.0830)]
Martin Taylor
(2010.08.30.23.05)–
Rick Marken (2010.08.30.1210)
What if a model that is inconsistent with the laws of
thermodynamics accounts for data more successfully than
one that is consistent with those laws? Who are you going
to believe, the laws of thermodynamics or your lying
eyes? ;-)
It’s very easy to make models that fit any data you want, if
you are able to make the model violate whatever natural law
you want.
Actually, you can do that even without violating "natural
law". With enough parameters you can get almost any model to
fit any data set.
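That claim is easy to make concrete. As an illustrative sketch (the code and names are mine, not from the original exchange): give a polynomial one coefficient per observation and it will pass exactly through any data set, even pure noise generated by no law at all.

```python
# Illustrative sketch only: a model with one parameter per observation
# "explains" any data set perfectly, including lawless noise.
import random

random.seed(0)

def lagrange_fit(xs, ys):
    """Return the unique interpolating polynomial through every (x, y) pair."""
    def p(x):
        total = 0.0
        for i, xi in enumerate(xs):
            term = ys[i]
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

xs = [i / 9 for i in range(10)]            # 10 observations
ys = [random.gauss(0, 1) for _ in xs]      # pure noise, no "law" at all

model = lagrange_fit(xs, ys)               # 10 free parameters
residual = max(abs(model(x) - y) for x, y in zip(xs, ys))
print(residual)                            # 0.0: a perfect "fit"
```

The "model" here accounts for every point exactly, yet it has learned nothing about any underlying law, which is why a perfect fit to one data set proves so little by itself.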
Sure. That's why Occam's Razor is important. But it's irrelevant to
the issue.
But the solution to this apparent problem is simply to do
another experiment to test the model in new circumstances. If
the model still fits the data then the model is not rejected.
You keep doing this – varying experimental circumstances to
see if the same model continues to predict the data – and
as long as the model keeps fitting the data then it is not
rejected, whether the model violates someone’s notion of
“natural law” or not.
"Someone's notion of natural law" is usually a bit more than just
opinion. Our understanding of natural law has developed through huge
numbers of procedures such as you describe, none of which have
suggested, for example, that it is false that for every action there
is an equal and opposite reaction, or true that heat tends to pass
from a cooler to a hotter object when the two are in contact.
“Someone’s notion of natural law” depends on the consistent results
of many thousands of experiments of different kinds.
If the model you describe violates the current understanding of
natural law and yet consistently gives the right answers when
compared to real-world systems exposed to the experimental
conditions used in modelling (as would my model that uses a
“prophetic function” together with a “general curve-fit” function),
you have to look and see whether the particular violation is what
allows the model to fit, and whether other experiments can be
designed to test whether this violation can be shown to occur under
other circumstances. I don’t think I would have much success in
showing that a “prophetic function” could be made to work in general.
Nor would I expect early success in finding such a function in the
natural system I was modelling.
A completely arbitrary model that predicts one data set
will likely be quickly ruled out by a new experiment that is
designed to test that model under different circumstances.
The model will almost certainly fail to account for the data
in the new situation. This is my concept of how science is
done, anyway.
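A minimal sketch of that procedure (hypothetical code, not from the thread): fit a many-parameter model to a first “experiment”, then test it at a condition outside that data set. A small measurement error that the overfitted model absorbed perfectly gets amplified into a large prediction error in the new circumstances, while the lawful model stays close.

```python
# Hypothetical sketch: an overfitted model passes "experiment 1"
# exactly, then fails when tested under new circumstances.

def lagrange_fit(xs, ys):
    """Polynomial through every (x, y) pair: one free parameter per point."""
    def p(x):
        total = 0.0
        for i, xi in enumerate(xs):
            term = ys[i]
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

# Experiment 1: the true law is y = 2x; one reading is off by 0.1.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [2 * x for x in xs]
ys[1] += 0.1                      # small measurement error

model = lagrange_fit(xs, ys)      # 4 parameters for 4 points: exact fit
assert all(abs(model(x) - y) < 1e-9 for x, y in zip(xs, ys))

# Experiment 2: a new condition, x = 5. The lawful prediction is 10.0;
# the overfitted model amplifies the 0.1 error by a factor of 15.
print(model(5.0))                 # approximately 11.5
```

The exact-fit model is ruled out by the first observation it was not fitted to, which is the sense in which varying the experimental circumstances does the work that fit quality alone cannot.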
Creationists use your argument all the time. Their model says that
6014 years ago, God made things the way they are so that we would
think what we do about the data. Absolutely all data fit this model
precisely, and further tests will never reject it. To me, that’s not
science, though it seems to be the way you think science should be
conducted. To me, “God’s will” is not a part of “natural law”,
though God may well want us to use our minds to discover the natural
laws he put in place.
My concept of how science is done includes taking into account the
work of the thousands of scientists whose results are all consistent
with what we know as “natural law”. That doesn’t eliminate the
possibility that some experiment some day will demonstrate that what
we call a natural law is in some way flawed. In fact, it is
generally believed that even though quantum field theory and
general relativity both give exceedingly precise predictions of
observed data, they can’t both be true as currently described.
Natural Law may be immutable, but our understanding of it is not, and
we will never know which aspects of what we believe will some day be
shown to be off the mark.
Even if PCT-based models
happened to fit data very badly, I would say they were
incorrectly structured or had badly judged parameter values,
not that PCT was wrong.
You talk about this in terms of PCT in general. But what about
individual models, like yours and mine, which are both based
on PCT? Can we forgo the experiments and just test them by
seeing which is most consistent with the laws of
thermodynamics?
No. There is no such thing as "most consistent" with natural law.
There is “consistent” and there is “inconsistent”, with no middle
ground. Both our models are consistent, equally so. A model that is
inconsistent with the laws of thermodynamics cannot represent a real
live system and still be considered as belonging to the body of
science. To make that work, you would have to show that the real
system was itself inconsistent with the laws of thermodynamics, and
that would indeed be a Big Deal that would shake the scientific
world.
On the other hand, there might well be an argument for choosing one
model over another on the grounds of thermodynamic efficiency, but
in most cases one would do this only if there is no other ground for
thinking one model more likely than the other. In most cases, it
would be better to seek properly discriminative observations rather
than to choose on the grounds of thermodynamic efficiency.
It would be on grounds of ineffectiveness leading to inefficiency
that I would expect any S-R behaviours to be eliminated over
evolutionary time, not because they are inconsistent with the laws
of thermodynamics. I would be surprised to find any, but S-R
behaviours could, theoretically, exist within a primarily PCT
structure. It’s conceivable that an S-R behaviour could be a stage
in the development of a new control unit. Again, I doubt it, but
it’s possible.
And yes, I am talking about PCT in general, not about HPCT or any
other particular variety of PCT.
Martin
On 2010/08/30 3:10 PM, Richard Marken wrote: