Determinism

[From Bruce Gregory (970130.1630 EST)]

Bruce Abbott (970130.1510 EST)

No, Bruce. On what basis do you conclude that all words would be
meaningless to a determinist?

The meaning of a word, phrase, sentence, or even larger structure resides in
the associations it evokes in the listener.

Mea culpa. I made a new year's resolution not to engage in
philosophical discussions and I broke it. Sorry.

Bruce Gregory

[From Bjorn Simonsen (2005.12.02,09:20 EUST)]
Martin Taylor 2005.10.31.15.55
I am still working with your non-linear parameters in the error calculation
and the effect on errors when inner conflict exists. Later I will present a
spreadsheet example.
Just now I'll spend time on an earlier sentence from you.

To me, tolerance just means putting up with error in a control
system, whether that error is the result of an active attempt by someone to
change the state of something you care about or whether it is a passive
result of a disturbance.

I agree with both Rick and you about tolerance and I agree with your first
subordinate clause.
After re-studying texts about Teleology and parts of PCT, I have the
understanding that PCT is quite independent of any causal relations.
My question is: "Shall we be careful within PCT and not express what an error is the result of?" (cause-effect).
Am I wrong when I say that there is always a disturbance influencing our Input functions and all our experiences are always represented in our brain? It is our consciousness that tells us which part of our brain we experience what we control at a moment. What consciousness is and how it functions, nobody knows.
If I am correct, PCT (all purposes controlled by negative feedback) is a theory that contributes to clearing up the great philosophical problem modern science created when it asserted determinism against freedom of will. PCT tells us that it is neither the one nor the other; in a way it is both.

Bjorn

[Martin Taylor 2005.12.02.10.17]

[From Bjorn Simonsen (2005.12.02,09:20 EUST)]
After re-studying texts about Teleology and parts of PCT, I have the
understanding that PCT is quite independent of any causal relations.

That's not right at all. Every component of each feedback loop is entirely causal. The output of the Perceptual Input Function is determined exactly by its input, and so on all around the loop.

Even considering the loop as a whole, the system is entirely causal. What is not true is that you can determine the system's output if you know its structure, parameters and the disturbance input. This inability exists only because each loop has a second input, the reference signal (except maybe at the very top level if you believe in strict HPCT). (And there may well be other inputs, such as controls on gain, which don't appear in strict HPCT).

My question is: "Shall we be careful within PCT and not express what an error is the result of?" (cause-effect).

No. If you act as an external analyst, you can measure both the disturbance and the reference signal (or postulate them), and you can say explicitly what the error is the result of. "External" here means "outside the loop being analyzed", not necessarily outside the body that incorporates that loop.
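As a minimal sketch of that analyst's-eye view (illustrative only: an identity input function, unit feedback, and made-up gain and slowing constants; no claim about any particular organism), the attribution works like this:

    # One loop, as the external analyst sees it.  Because every signal is
    # observable from outside the loop, an error can be traced to a reference
    # change, to a disturbance, or to both.
    def run_loop(r, d, steps=200, gain=10.0, slow=0.1):
        o = 0.0
        for _ in range(steps):
            p = o + d                     # perception = output effect + disturbance
            e = r - p                     # error, as the analyst computes it
            o += slow * (gain * e - o)    # leaky-integrator output function
        return round(p, 3), round(e, 3), round(o, 3)

    print(run_loop(r=1.0, d=0.0))   # residual error due to the reference alone
    print(run_loop(r=0.0, d=1.0))   # residual error due to the disturbance alone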

Am I wrong when I say that there is always a disturbance influencing our Input functions and all our experiences are always represented in our brain?

Those are two independent assertions, aren't they? The first, that there is always a disturbance influencing the _Perceptual_ input function is probably true, if you recognize that the notion of "disturbance" is meaningful only if the perception in question is being controlled, and that a disturbance with a value of zero nevertheless is a disturbance that exists.

The second assertion: "all our experiences are always represented in our brain" has two possible meanings. If you are saying that we can't experience anything that has no effect on the brain, it's only a statement that we are physical beings. If you are saying that the effects of any experience continue to influence the brain state for the rest of our life, I doubt it. I presume that in the absence of external influences (sensory input), the brain activity would, like that of any physical system, relax toward an attractor. If that's true, then the effects of much prior experience would eventually diminish below the level of quantum uncertainty. Only those experiences that had pushed the brain into different attractor basins would have a permanently retained influence.

If the brain is, after the experience in question, open to further sensory input, those inputs could move the brain into different attractor basins, which would have the effect of eliminating (in the long term) the influence of the earlier experience.

So, taking your second possible meaning, noting the word "always", it is not true that "all our experiences are always represented in our brain".
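A toy illustration of the attractor-basin picture used here (a one-dimensional double-well system, not a brain model; the equation, function name and numbers are only for illustration):

    # dx/dt = x - x^3 has two attractors, at x = -1 and x = +1.  Small kicks
    # decay back toward the attractor the state is already near; only a push
    # across the basin boundary at x = 0 leaves a lasting trace.
    def settle(x, kick=0.0, dt=0.01, steps=5000):
        x += kick
        for _ in range(steps):
            x += dt * (x - x ** 3)
        return round(x, 3)

    print(settle(0.9, kick=0.3))    # stays in the +1 basin:  1.0
    print(settle(0.9, kick=-1.2))   # crosses the boundary:  -1.0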

It is our consciousness that tells us which part of our brain we experience
what we control at a moment.

"Tells us?" What does that mean? Are you expounding a tautology, rephrased as "it is our consciousness that allows us to be conscious of ..."?

And "what we control at a moment" is largely unconscious, isn't it? All we are conscious of is a small subset (which I think is all or largely those perceptions for which we are having problems in control, plus those for which we might relinquish or take on control at that moment).

Do we ever know what part of our brain is "experiencing" anything?

If I am correct, PCT (all purposes controlled by negative feedback) is a theory that contributes to clearing up the great philosophical problem modern science created when it asserted determinism against freedom of will. PCT tells us that it is neither the one nor the other; in a way it is both.

I'm afraid I don't see the connection to the free-will debate.

I do, however, see how PCT contributes to the teleology issue, and how Darwinian evolution implies PCT and "purpose".

Martin

[From Bjorn Simonsen (2005.12.02,23:10 EAST)]
Martin Taylor 2005.12.02.10.17

That's not right at all. Every component of each feedback loop is
entirely causal. The output of the Perceptual Input Function is
determined exactly by its input, and so on all around the loop.

It's like saying that the farmer's wife is causal for sowing the corn, the winter is causal for late growing, the farmer for harvesting, the marketing consultant for the sale, etc. That is OK, but I think it's wrong to say that the winter is responsible for the entire production. I don't think the winter is responsible for the sale either. But maybe I am wrong.

Yes, I agree with your statement, even though I dislike saying "entirely causal". When I said: "I have the understanding that PCT is quite independent of any causal relations", I said it in connection with teleology = "The study of design or purpose in natural phenomena". I don't think it's correct to say that the control of perceptions are dependent of Purposes in the same way the behaviorists say behavior is dependent of stimuli.
Still I think control is something else than determinism.

Even considering the loop as a whole, the system is entirely causal.

If you mean that the system of nerves is causal for human behavior, I of
course agree.

My question is: "Shall we be careful within PCT and not express what an error is the result of?" (cause-effect).

No. If you act as an external analyst, you can measure both the
disturbance and the reference signal (or postulate them), and you can
say explicitly what the error is the result of. "External" here means
"outside the loop being analyzed", not necessarily outside the body
that incorporates that loop.

I am not sure I understand what you say, and I disagree. If you grasp a glass of water, you control those perceptions in one loop (the loop being analyzed). External to this loop you can control another perception (another loop). I can count from 10 to 1 at the same time I grasp a glass of water.

But it is new to me that we are able to perceive the error in the first loop. How do you measure the disturbance and the reference signal when you grasp for a glass of water?
Of course, you can postulate that the distance to the glass is out of reach (disturbance), and you can postulate that you have a really strong wish for the glass of water (reference). And because you know PCT you can postulate that the error must have a great value, so great that you have to take a step forward.
I don't think I will explain PCT in such a cause-effect way. It looks like you will. Here we think differently.

Am I wrong when I say that there is always a disturbance influencing our Input functions and all our experiences are always represented in our brain?

Those are two independent assertions, aren't they? The first, that
there is always a disturbance influencing the _Perceptual_ input
function is probably true, if you recognize that the notion of
"disturbance" is meaningful only if the perception in question is
being controlled, and that a disturbance with a value of zero
nevertheless is a disturbance that exists.

....

If you are saying that the
effects of any experience continue to influence the brain state for
the rest of our life, I doubt it. I presume that in the absence of
external influences (sensory input), the brain activity would, like
that of any physical system, relax toward an attractor. If that's
true, then the effects of much prior experience would eventually
diminish below the level of quantum uncertainty. Only those
experiences that had pushed the brain into different attractor basins
would have a permanently retained influence.

It looks like we agree about the disturbance.
I think that you will, without problems, brush your teeth for the rest of your life. We say that you remember it. There are some loops, and the reference at the level where you control those perceptions has a value. The actual comparator (or comparators) will have their reference values the whole day and the whole night for many years forward. I think the same about the comparator at the level where you control the second derivative of a function. The comparator has its reference value. I think the value can change, but it exists all day and all night for many years forward. I think we can say the same about all perceptions you control.
Writing this and re-reading your text above, I think we agree. I also agree with your last sentence.

If the brain is, after the experience in question, open to further
sensory input, those inputs could move the brain into different
attractor basins, which would have the effect of eliminating (in the
long term) the influence of the earlier experience.

I am not sure I agree. A child becomes able to control her perceptions when she eats her porridge. Later she controls her perceptions when she eats a big soft ice. I don't think the loops that controlled her porridge eating are eliminated. I think we acquire new loops. (Of course, I must be misunderstanding what you said.)

So, taking your second possible meaning, noting the word "always", it
is not true that "all our experiences are always represented in our
brain".

Well, I read your conclusion and I don't agree. I know some people forget
what they have remembered earlier. But if they have forgotten something, it
is no longer an experience, is it? I talk about experiences that exist, of
course.

It is our consciousness that tells us which part of our brain we experience what we control at a moment.

"Tells us?" What does that mean? Are you expounding a tautology,
rephrased as "it is our consciousness that allows us to be conscious
of ..."?

I mean that I am conscious of some of the perceptions I control. Some perceptions I control, I am able to tell you about. I can tell you at what level I control those perceptions.
I can't tell you anything about perceptions I control without being conscious of them at the time.

And "what we control at a moment" is largely unconscious, isn't it?
All we are conscious of is a small subset (which I think is all or
largely those perceptions for which we are having problems in
control, plus those for which we might relinquish or take on control
at that moment).

I think I am conscious when I write this mail. That's what I control at this moment. Of course this writing is part of a "higher" perception I control at the Category level and on the level of Sequences, and quite surely at the Principle level. These are perceptions of which I am unconscious.

Do we ever know what part of our brain is "experiencing" anything?

Yes, when I draw a trapezoid I control the level of Transitions and the level of Configurations. Didn't you answer that question when you said:

If you act as an external analyst, you can measure both the
disturbance and the reference signal (or postulate them), and you can
say explicitly what the error is the result of. "External" here means
"outside the loop being analyzed", not necessarily outside the body
that incorporates that loop.

?

Bjorn

[Martin Taylor 2005.12.02.17.48]

[From Bjorn Simonsen (2005.12.02,23:10 EAST)]
Martin Taylor 2005.12.02.10.17

That's not right at all. Every component of each feedback loop is
entirely causal. The output of the Perceptual Input Function is
determined exactly by its input, and so on all around the loop.

It's like saying that the farmer's wife is causal for sowing the corn, the winter is causal for late growing, the farmer for harvesting, the marketing consultant for the sale, etc. That is OK, but I think it's wrong to say that the winter is responsible for the entire production. I don't think the winter is responsible for the sale either. But maybe I am wrong.

I fail to see the analogy. There are lots of different influences that affect the sowing of the corn, and for all of your "like"s. That's not true of the functions of a control loop. The output of each is completely determined causally by its input and by the function it embodies.

When I said: "I have the understanding that PCT is quite independent of any causal relations", I said it in connection with teleology = "The study of design or purpose in natural phenomena". I don't think it's correct to say that the control of perceptions are dependent of Purposes in the same way the behaviorists say behavior is dependent of stimuli.

It could hardly be, could it? Control of perception (in PCT) is a _process_ that operates on the purpose, whereas in behaviourism, the response is a consequence of the stimulus. To make the ideas at least comparable, you would have to say something like: "I don't think it's correct to say that the outputs of the control of perceptions are dependent of Purposes" and naturally, PCT would say you are correct, because the outputs depend jointly on the purposes (reference values) and the disturbances.

Still I think control is something else than determinism.

I suppose you could introduce magic, but I doubt that many PCT theorists would agree.

To me, determinism means that if you know the current states of all relevant variables, and the functions and processes that act on them, you can forecast their future states.

The fact that chaotic systems diverge over time doesn't mean they are non-deterministic. It just means that if you want to forecast accurately, you have to know the initial states and the processes very accurately. They are just as deterministic as normal linear processes are.
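A standard toy example of that point (the logistic map; not part of this exchange, and the function name is made up): two runs whose starting values differ by 10^-9 follow exactly the same deterministic rule, yet become visibly different within a few dozen iterations.

    # The logistic map at rate 4.0 is chaotic but fully deterministic.
    def trajectory(x, steps=60, rate=4.0):
        values = []
        for _ in range(steps):
            x = rate * x * (1.0 - x)
            values.append(x)
        return values

    a = trajectory(0.2)
    b = trajectory(0.2 + 1e-9)
    for t, (xa, xb) in enumerate(zip(a, b)):
        if abs(xa - xb) > 0.1:
            print("trajectories visibly diverge by step", t)
            break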

>Even considering the loop as a whole, the system is entirely causal.

If you mean that the system of nerves is causal for human behavior, I of
course agree.

No, I'm talking about the canonical PCT loop.

My question is: "Shall we be careful within PCT and not express what an error is the result of?" (cause-effect).

No. If you act as an external analyst, you can measure both the
disturbance and the reference signal (or postulate them), and you can
say explicitly what the error is the result of. "External" here means
"outside the loop being analyzed", not necessarily outside the body
that incorporates that loop.

I am not sure I understand what you say, and I disagree.

From here on, I think your response is at cross-purposes with my comments, so I won't address what you say directly. I think what you didn't understand was "external analyst".

An external analyst is someone who can see the whole circuit and can make measurements on any part of it. I'm not sure whether you were involved in CSGnet when there was a thread on the different observer possibilities, but it's quite important to keep them straight when you are discussing PCT.

One important "observer" is the one in the control loop, in other words, the perceptual input function (the "observation" being the perceptual signal. That observer cannot see the disturbance or the output (or, for that matter, the reference signal).

A quite different observer is one who looks AT the control loop. This is the "external analyst" who can put measuring probes on any component (or all of them). It's the external analyst who can say things like p = P(s) = P(o + d) etc., because all those signals, p, s, o, d, e, r are observables.
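In simulation terms (a sketch; the signal names follow the list above, while the identity input function, gain and slowing constant are assumptions made only for illustration), the difference between the two observers is simply what they can read out:

    # The observer *in* the loop is the perceptual input function: it has
    # access to p and nothing else.  The external analyst can probe everything.
    def probe_step(r, d, o, gain=10.0, slow=0.1):
        s = o + d                      # controlled environmental variable
        p = s                          # P() taken as the identity
        e = r - p
        o = o + slow * (gain * e - o)  # leaky-integrator output function
        return {"r": r, "d": d, "s": s, "p": p, "e": e, "o": o}

    o = 0.0
    for _ in range(100):
        record = probe_step(r=1.0, d=0.5, o=o)
        o = record["o"]
    print(record)         # the analyst's view: r, d, s, p, e, o all observable
    print(record["p"])    # all that is available from inside the loop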

If you grasp a glass of water, you control those perceptions in one loop (the loop being analyzed). External to this loop you can control another perception (another loop). I can count from 10 to 1 at the same time I grasp a glass of water.

Which, as you may now understand, is irrelevant to my comment.

>...in the absence of
>external influences (sensory input), the brain activity would, like
>that of any physical system, relax toward an attractor. If that's
>true, then the effects of much prior experience would eventually
>diminish below the level of quantum uncertainty. Only those
>experiences that had pushed the brain into different attractor basins
>would have a permanently retained influence.

If the brain is, after the experience in question, open to further
sensory input, those inputs could move the brain into different
attractor basins, which would have the effect of eliminating (in the
long term) the influence of the earlier experience.

I am not sure I agree.

How can you not agree? It's a question of simple physics. To say "I am not sure I agree" is like saying "I'm not sure I agree that F = ma". To make sense of "I am not sure I agree" you have to demonstrate that the brain activity would NOT approach any attractor, and so far as I am aware, there is no physically possible system for which this would be true (it would be true, for example, of balls rolling without friction on an absolutely flat surface in a vacuum).

>So, taking your second possible meaning, noting the word "always", it is not true that "all our experiences are always represented in our brain".

Well, I read your conclusion and I don't agree. I know some people forget
what they have remembered earlier. But if they have forgotten something, it
is no longer an experience, is it? I talk about experiences that exist, of
course.

So, you didn't mean "always". You meant "at that moment". A language confusion!

Martin

Martin Taylor replied to Bjorn Simonsen in the following exchange:

>Still I think control is something else
>than determinism.

I suppose you could introduce magic, but I doubt
that many PCT theorists would agree.

On this particular statement I happen to agree with
Bjorn -- and there is no need to introduce magic in
doing so.

Control is not determinism; control is determination.

To me, determinism means that if you know the
current states of all relevant variables, and
the functions and processes that act on them,
you can forecast their future states.

I agree with this conceptualization. Recognizing this
can help us see that determinism is not relied upon in
control systems theory, outside of the determinism
that is incidental to our use of functions for
representation.

A remarkable feature of control systems is that they
allow us to forecast future states when we know rather
little about the current states of variables. This
contrasts sharply with the sort of knowledge that is
required for deterministic projections.

Control theory requires some degree of process
regularity but it does not require the perfect
regularity postulated in determinism. Since control
systems overcome disturbances by their nature, to
emphasize control systems is to attenuate concern for
knowledge of the sort that inspired earlier
scientists. Deterministic presumptions regarding the
nature of reality are not only unnecessary, they
contribute nothing; a world of deterministic
transformations is indistinguishable from a world of
reliable transformations with regard to what happens
when control occurs. Control systems exemplify
effective knowledge of the world. When dealing with
such effective knowledge we need not concern ourselves
with perfect knowledge of the world, even in theory.

I recognize that many advocates of PCT, cybernetics,
etc. are invested in deterministic theories, so let me
assure that I do not mean here to say that work in
systems theory is flawed when reality is so
conceptualized. The big problems with determinism
occur *outside* systems theory, so there need not be
unanimity on this point for good research and theory
to proceed. What I am seeking is agreement that
nondeterministic conceptualizations of reality are
readily compatible with perceptual control theory.

Tracy B. Harms


[Martin Taylor 2005.12.04.12.18]

[Tracy Harms, apparently Sun, 4 Dec 2005 08:26:59 -0800]

Martin Taylor replied to Bjorn Simonsen in the following exchange:

>Still I think control is something else
>than determinism.

I suppose you could introduce magic, but I doubt
that many PCT theorists would agree.

On this particular statement I happen to agree with
Bjorn -- and there is no need to introduce magic in
doing so.

??!??!??!!!

The omitted text that follows does not explain that astounding claim: "Then a miracle happens".

  What I am seeking is agreement that
nondeterministic conceptualizations of reality are
readily compatible with perceptual control theory.

What kind of "non-deterministic conceptualizations of reality" that are compatible with normal science do you have in mind? Obviously, some kind of extension to modern physics (or denial thereof) is involved. But what, if not magic? Or are you allowing for arbitrary miracles imposed by a higher power not subject to physical laws -- in which case, I doubt our puny control systems would have much to say on the matter.

Or are you not talking about the world we live in, but instead are saying that control theory would still work, not just in this world, but also even in an imagined world like the one in the Hitchhiker's Guide to the Galaxy with its maximum improbability drive (or whatever it was)? I think you would have to define a new physics appropriate to such a non-deterministic world before you could substantiate that claim.

To my mind, other than allowing for arbitrary miracles, a nondeterministic conception of reality would require statements something like "2 + 2 = somewhere between 3 and 5" or "The attraction between two bodies of masses M and N is within 3% of G*M*N/R^2". Physics in this world doesn't work that way (so far as is known by 21st century science). In this world, 2 + 2 = 4 exactly, and the attractive force is exactly G*M*N/R^2. Moving to relativistic physics doesn't change the determinism, though I suppose you could argue that some interpretations of quantum mechanics allow some level of non-determinism. The jury is still out on that.

The fact that (as Tracy observed) controlled systems approach their attractors quickly doesn't make them non-deterministic.

Martin

Martin Taylor wrote:

What kind of "non-deterministic conceptualizations
of reality" that are compatible with normal science
do you have in mind?

As an example, I suggest a book by Karl Popper: The
Open Universe, Volume II of the Postscript to The
Logic of Scientific Discovery, Routledge, 1988.

Obviously, some kind of extension to modern physics
(or denial thereof) is involved. But what, if not
magic?

Obviously it would involve not thinking of physics as
you think of it at present. I recognize that you
think determinism is inherent to science, but it looks
to me to be a metaphysical presupposition which may --
but need not -- be applied in a naturalistic world
view that emphasizes scientific understanding. As for
"what" enables its rejection over its acceptance, I
propose that the determinism that *is* a feature of
applied mathematics is *not* a feature of the aspects
of the world we apply this math to measure, model, and
explain. Yes, this implies that there are errors in
theories such as theories of physics, but such an
implication seems wholly uncontroversial.

Or are you allowing for arbitrary miracles imposed
by a higher power not subject to physical laws

Nope. Not interested in anything along that line.

...
Or are you not talking about the world we live in,
...

Our real world is where my interest lies.

Tracy


[From Bjorn Simonsen (2005.12.04,20:50 EUST)]
Martin Taylor 2005.12.02.17.48

I fail to see the analogy. There are lots of different influences
that affect the sowing of the corn, and for all of your "like"s.
That's not true of the functions of a control loop. The output of
each is completely determined causally by its input and by the
function it embodies.

Yes, I knew it was a crude way of saying that I am not enthusiastic about considering the loop as a whole system as entirely causal.
I felt my main point was put into the background. I will try again at the end of this mail.
First, comments on your last mail.

I don't think it's correct to say that the control of
perceptions are dependent of Purposes in the same way the behaviorists say
behavior is dependent of stimuli.

It could hardly be, could it? Control of perception (in PCT) is a
_process_ that operates on the purpose, whereas in behaviorism, the
response is a consequence of the stimulus. To make the ideas at least
comparable, you would have to say something like: "I don't think it's
correct to say that the outputs of the control of perceptions are
dependent of Purposes" and naturally, PCT would say you are correct,
because the outputs depend jointly on the purposes (reference values)
and the disturbances.

This is OK for me. I appreciate your detailed comments.

Still I think control is something else than determinism.

I suppose you could introduce magic, but I doubt that many PCT
theorists would agree.

I'm afraid I don't see the connection from your first subordinate clause to my statement. I don't think I can or will introduce magic; well, it depends on how we define magic.

To me, determinism means that if you know the current states of all
relevant variables, and the functions and processes that act on them,
you can forecast their future states.

OK

The fact that chaotic systems diverge over time doesn't mean they are
non-deterministic. It just means that if you want to forecast
accurately, you have to know the initial states and the processes
very accurately. They are just as deterministic as normal linear
processes are.

OK, but this is a little theoretical for my daily life. I know I ought to be theoretical, neat and thorough when I talk with you, and I like it.
The problem is that in a chaotic system it is impossible to predict in detail what will happen more quickly than events unfold in real time.
Didn't you once answer Bill saying that a dynamic attractor carries no
concept of causality?

One important "observer" is the one in the control loop, in other
words, the perceptual input function (the "observation" being the
perceptual signal. That observer cannot see the disturbance or the
output (or, for that matter, the reference signal).

A quite different observer is one who looks AT the control loop. This
is the "external analyst" who can put measuring probes on any
component (or all of them). It's the external analyst who can say
things like p = P(s) = P(o + d) etc., because all those signals, p,
s, o, d, e, r are observables.

I will remember this when/if we later talk about an external analyst. Thank
you.

Only those experiences that had pushed the brain into different attractor basins would have a permanently retained influence.

If the brain is, after the experience in question, open to further
sensory input, those inputs could move the brain into different
attractor basins, which would have the effect of eliminating (in the
long term) the influence of the earlier experience.

I am not sure I agree.

How can you not agree? It's a question of simple physics. To say "I
am not sure I agree" is like saying "I'm not sure I agree that F =
ma".

Well I agree that "Only those experiences that had pushed the brain into
different attractor basins would have a permanently retained influence".
When a baby has reorganized so she can eat her porridge, she has put
something in her brain into an attractor basin. I guess eating the porridge
is an en-point of behavior.
When she later reorganize so she can eat a soft ice, I guess she has put her
brain into another attractor basin. But this last reorganizing started as a
trial and error from her ability to stretch her arm and bending it as she
did when she ate her porridge.
How do you explain that she still can eat porridge if her earlier experience
would have been eliminated?

Am I wrong when I say that there is always a disturbance influencing our Input functions and all our experiences are always represented in our brain?

>So, taking your second possible meaning, noting the word "always", it is not true that "all our experiences are always represented in our brain".

Well, I read your conclusion and I don't agree. I know some people forget what they have remembered earlier. But if they have forgotten something, it is no longer an experience, is it? I talk about experiences that exist, of course.

So, you didn't mean "always". You meant "at that moment". A language
confusion!

I meant " Am I wrong when I say that there is always a disturbance
influencing our Input functions and all our experiences, those experiences
that are represented in our brain are always ready for work.".
..............
As I said above "I feel my main point is put into the background".

In my first mail in this thread, I said:

If I am correct, PCT (all purposes controlled by negative feedback) is a theory that contributes to clearing up the great philosophical problem modern science created when it asserted determinism against freedom of will. PCT tells us that it is neither the one nor the other; in a way it is both.

Bruce A mentioned Aristotle's final causation. I think Aristotle meant that natural processes were controlled by purpose (causa finalis) and not by preceding causes. This was an explanatory principle.

The modern scientific breakthrough put Aristotle's Purposes aside and introduced Determinism, defined as you said:

To me, determinism means that if you know the current states of all
relevant variables, and the functions and processes that act on them,
you can forecast their future states.

Within moral philosophy many philosophers inclined toward Determinism in a way that denied human will. They said that all human actions and all human decisions were effects of causes that exist in advance within or outside the human being (later, cognitive psychology and behaviorism).

I said then:

If I am correct, PCT (all purposes controlled by negative feedback) is a theory that contributes to clearing up the great philosophical problem modern science created when it asserted determinism against freedom of will. PCT tells us that it is neither the one nor the other; in a way it is both.

Bjorn

[Martin Taylor 2005.12.04.15.16]

[Tracy Harms, apparently Sun, 4 Dec 2005 11:34:57 -0800]

Martin Taylor wrote:

What kind of "non-deterministic conceptualizations
of reality" that are compatible with normal science
do you have in mind?

As an example, I suggest a book by Karl Popper: The
Open Universe, Volume II of the Postscript to The
Logic of Scientific Discovery, Routledge, 1988.

Obviously, some kind of extension to modern physics
(or denial thereof) is involved. But what, if not
magic?

Obviously it would involve not thinking of physics as
you think of it at present.

I guess we have no basis for discussion, if you base your argument on the idea that normal science is an invalid way to examine the real world. "As [I] think of it at present" is using the premises and results obtained by normal science. You assert the indisputable claim that our current understanding is not provably correct. I'd go further and say I believe that there is a non-zero probability that any specific assertion of science is likely to be modified, if not falsified, by future science.

Even though I think most scientists would acknowledge the imperfection of current understanding, if you are to advance our understanding, you have to do better than just say "science is wrong". You have to supply a better science.

> ...

Or are you not talking about the world we live in,
...

Our real world is where my interest lies.

Since you deny the validity of normal science, how would you express that interest and investigate the "real world"?

Martin

[From Marc Abrams (2005.12.04.1526)]

In a message dated 12/4/2005 3:28:22 P.M. Eastern Standard Time, mmt-csg@ROGERS.COM writes:


[Martin Taylor 2005.12.04.15.16]

I guess we have no basis for discussion, if you base your argument on
the idea that normal science is an invalid way to examine the real
world…

Have you read the book? Judging from your response the answer is no.

So I think you are quite correct in your view that there is no basis for a discussion, but only because of the lack of understanding and knowledge you may have about the ideas expressed in that book and in many others.

What I think you fail to realize is that “the jury” will always be out. But that is OK, we all have our own ideas about things, don’t we?

Regards,

Marc


Martin Taylor,

I guess we have no basis for discussion, if you base
your argument on the idea that normal science is an
invalid way to examine the real world.

I do not think that. If I said anything to suggest
that, please help me correct the misunderstanding.

"As [I] think of it at present" is using the
premises and results obtained by normal science.

While determinism is a common premise, it is not the
only premise consistent with genuinely scientific
ideas and efforts. We must select among our premises,
and in this case I'm confident that determinism is
better rejected.

You assert the indisputable claim that our current understanding is not provably correct.

Actually, I did not say that. I happen to agree with
the claim you stated, but I did not bring it up.

What I said is that scientific theories are inaccurate
insofar as the mathematics involved, which is
deterministic, deviates from the way things actually
are.

I'd go further and say I believe that there is a
non-zero probability that any specific assertion of
science is likely to be modified, if not
falsified, by future science.

This still seems too weak. Let's guess that a great
many scientific theories we admire today are false,
and will be marginalized in the future as their
inadequacies are revealed.

Even though I think most scientists would
acknowledge the
imperfection of current understanding, if you are to
advance our
understanding, you have to do better than just say
"science is
wrong". You have to supply a better science.

Your intent here is definitely something I share, and
I know that what I've said here so far is sketchy at
best.

Please note that not everything at stake in this
discussion counts as science. A good deal of what you
and I are drawing on here is philosophy. The claim I
wish to emphasize at present is this: It is neither
necessary nor desirable to understand the most
successful scientific theories of our time from a
deterministic standpoint.

I hope you can notice that this claim does not require
the rejection of, nor even revision to, any scientific
theories. This implies that there is no need for me
to supply a better science in order to successfully
argue my case.

...

Since you deny the validity of normal science, how
would you express that interest and investigate
the "real world"?

I do not deny science any more than you do. I do not
agree that the products of science provide any
endorsement of determinism, that's all.

Tracy B. Harms


[From Erling Jorgensen (2005.12.05 1530 EST)]

Bjorn Simonsen (2005.12.02,09:20 EUST)

After re-studying texts about Teleology and parts of PCT, I have the
understanding that PCT is quite independent of any causal relations.

I don’t think I would express it the way you are above, but I agree that
control processes necessitate a reconceptualization of cause-effect
relations as traditionally expressed in lineal causality. The whole
analysis changes when you close the loop with feedback processes, because
it introduces a different relationship to time. I would almost say that
this feature is constitutive of living processes. They do not just
exist with external causes; because of the circular causality embedded
in negative feedback arrangements, they in some sense cause themselves.

Back in 1999, I posted to CSGNet an essay that tried to partition the
various notions of “cause” as applied to control loops. I think it is
relevant to what you are raising, and I’d like to bring it into the
discussion.

I think the essay might be worth Dag’s posting it at the LCS website,
because I think it deals in a succinct way with some of the philosophical
differentiation between circular vs. lineal causality. I don’t remember
what subject heading it used, and I can’t seem to search the archives
previous to the past year. My own draft document had the date 7/16/99,
so it was sent somewhere around that time.


In any event, here is a reposting of that essay, which we might call,
“Causation and Negative Feedback Control Loops.”

From a systemic standpoint, I’m not sure “cause” is a very
helpful word in talking about control systems. It all depends
on which portion of the system you are considering at the moment.

a) Every snippet of the control loop can be thought of as
propagating a signal, and in that sense it has a (causal)
input and a (resulting) output. To the extent that we in
CSG analyze in this black box fashion, we usually focus on
the “nodes” of the control loop, i.e., the comparator function,
the output function, the perceptual input function (PIF), and
occasionally the environmental feedback function.

Aside: Regardless of how many actual neurons, evoked potentials,
graded potentials, membrane permeability, etc. may actually be
involved, and whether the signals are meeting in ganglia or
other types of neural tissue, the Comparator has been elegantly
modeled as having two net inputs with inverted signs –
reference and perceptual signals respectively – and one net
output, the error signal. In a sense, the interaction of
reference and perception “cause” the error, but that’s not
the best way to think about it.

2nd Aside: Some have spoken of the error signal “causing” the
output or action of a control system, but that again seems to
cut the loop into snippets. The output function has been
powerfully modeled (in the tracking demos) as an integrating
function, requiring not only the reference-minus-perception
input, but a multiplier constant representing gain, a
multiplier constant representing a slowing factor (I think
that’s the “leaky” part of the integrator), and the previous
output of the function as a new input! What’s the “cause”
in all of this, or is that not the right concept to impose
on control systems?
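For readers who have not seen the tracking-model code, the integrating output function described above usually reduces to a single line (a sketch; the function name and constant values are illustrative, not the demos’ actual settings):

    # Leaky-integrator output function:
    #   new output = old output + slowing * (gain * error - old output)
    def output_function(error, prev_output, gain=200.0, slowing=0.01):
        return prev_output + slowing * (gain * error - prev_output)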

3rd Aside: Apart from some weighted sums, and applying some
logical operators, I have seen almost no modeling of different
types of perceptual input functions. If Bill’s suggestion
about hierarchical levels of perception is a useful launching
point (and I think it is), then theoretically there should be
ten or eleven qualitatively distinct ways of modeling PIF’s.
The actual neural computations of perceptions are undoubtedly
incredibly more complex, but for a model all we would need to
begin empirical testing of its concepts is to reproduce some
essential feature of a postulated level of perception. For
instance, the essence of a Transition, to my way of thinking,
is the simultaneous experience of variance mapped against
invariance. An Event is a series of transitions framed –
one might almost say arbitrarily – with a beginning and an
end. So an event control system (again, as I conceive it)
is the one that does “framing”, but to test whether such
perceptions can be constructed and stabilized against
disturbances, we first need measurable models of variants
and invariants mapped against each other. Such complexities
are beyond my current modeling abilities, (not to mention
the point Bill has raised about getting the right dynamic
equations to model the environmental forces on the computer.)
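For concreteness, the two PIF forms that have in fact been modeled, a weighted sum and a crude logical operator, look roughly like this (a sketch only; the names are made up, and neither is offered as a model of any postulated level):

    # A weighted-sum perceptual input function and a simple logical one.
    def weighted_sum_pif(signals, weights):
        return sum(w * x for w, x in zip(weights, signals))

    def logical_pif(signals, threshold=0.5):
        # "fires" when any contributing signal is present above threshold
        return 1.0 if any(x > threshold for x in signals) else 0.0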

b) We can also consider a single control loop “in isolation,”
and ask whether causality is a helpful concept there. The rules
change (and so should the concepts) when you close the loop. In
one sense, every part of the loop is a cause of every other part.
The corollary to this is that every part of a closed loop is a
cause of itself! Some theories like to respond with the idea of
“circular causality,” but I think Rick is right that it often
just amounts to linear causality chasing itself incrementally
around the circle. The fundamental idea of accumulating
integrating functions (with all their ramifications) doesn’t
seem to enter the picture. It seems better to think of the
organization of components itself, not some event occurring
within it, as the effective cause.

c) We can move the zoom focus slightly farther out and consider
a single control loop together with its inputs. As the basic
model now stands, every loop has only two inputs from outside
itself – one from inside the organism, the reference signal
(which, again, can be modeled as the net effect of whatever
neural and chemical processes actually bring it about), and one
from outside the organism, the (net) disturbance. A traditional
view of causality would say that the reference and the disturbance
are the only two candidates for being a “cause,” and in a sense
we in CSG accept that.

But by quantifying the relations in the loop into equations,
Bill et al. have been able to say something much more precise
about these external causes. Only the reference is an effective
cause of the stabilized state of the perceptual input quantity.
Any causal effect from the disturbance on that quantity is
neutralized by the negative feedback action of the loop. The
cost is that the disturbance becomes an effective cause (in
inverted form) of the behavioral output. [This latter point
seems to be what Herbert Simon was referring to in his quote
about the behavior of ants, that was hotly debated awhile back
on CSGNet.]
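The algebra behind those claims can be made explicit for an idealized loop with unit feedback and a purely proportional output of gain G (a sketch, not the tracking-model equations themselves; the numbers are arbitrary):

    # p = o + d,  e = r - p,  o = G*e
    #   =>  p = (G*r + d)/(1 + G),  e = (r - d)/(1 + G),  o = G*(r - d)/(1 + G)
    # As G grows, p -> r (the reference determines the stabilized input) and
    # o -> r - d (the output mirrors the disturbance in inverted form).
    G, r, d = 1000.0, 2.0, 5.0
    p = (G * r + d) / (1 + G)
    o = G * (r - d) / (1 + G)
    print(round(p, 2), round(o, 2))   # ~2.0 and ~-3.0, i.e. p ~ r and o ~ r - d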

d) We can move even further back and look at more of the
hierarchy, as it’s currently proposed to operate. Here almost
every control loop is embedded in a network of control loops
“above” it and “below” it. So in one sense, higher loops
“cause” it to operate by providing changing reference signals,
and it “causes” lower level loops to control by the same
mechanism. I deliberately say “higher loops”, plural, and I
mean it in two senses. For one thing, many loops at the next
higher level can be contributing to the net reference signal
at a loop at the next lower level, so perhaps all those loops
are causal. But we can also speak of proximal and distal causes,
and include each relevant loop all the way up the hierarchy as
a “cause” of a given low-level loop’s operation. This is why I
have no problem considering “attending a meeting” as one (distal)
cause of contracting a given muscle on the way to the garage.
Just as closing the loop changes the notion of causality, so
does embedding everything in a network.
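A minimal two-level sketch of that arrangement (all names and constants are illustrative, and for simplicity the higher-level perception is taken to be the lower-level perception itself): the higher loop's output is not an action on the world but the reference for the lower loop.

    # Two-level cascade: the higher loop "acts" only by adjusting the lower
    # loop's reference; the lower loop acts on the (simulated) environment.
    def step_two_levels(r_high, d, state):
        o_high, o_low = state
        p_low = o_low + d                            # lower-level perception
        p_high = p_low                               # higher level perceives it
        e_high = r_high - p_high
        o_high += 0.05 * (20.0 * e_high - o_high)    # higher output ...
        r_low = o_high                               # ... is the lower reference
        e_low = r_low - p_low
        o_low += 0.1 * (10.0 * e_low - o_low)
        return (o_high, o_low), p_low

    state = (0.0, 0.0)
    for _ in range(500):
        state, p = step_two_levels(r_high=1.0, d=0.3, state=state)
    print(round(p, 2))   # lower-level perception driven toward the higher goal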

e) Sticking with this hierarchical vantage point for one more
iteration (if you’ve stuck with me this far!), it needs to be
emphasized that the interaction between levels does not occur
by intact loops sending signals to other intact loops below
them. Rather, those lower level loops are part of the structure, part of the loop itself, of the higher level.
Remember, all loops are closed through the environment –
(other than the “imagination switch,” if we can figure out a
way to get it to function!) – which means that higher loops
have the longest (and slowest) path to travel to achieve
their control. And they only achieve it if the lower level
loops to which they contribute are achieving sufficient
control of their own variables.

So maybe this reflection has come full circle (sorry about
the pun, but it fits!), in that when higher levels “cause”
lower level perceptions to become stabilized, they are simply
causing their own control to happen. Basically, I think we
have two choices for using causality in a way that reminds
us (instead of deflecting us) about how control loops operate.

  1. Either we allow this reflexive notion of “self-causality”
    to be part and parcel of how we use the term – which means
    processes in the loop are always in a time relationship
    within themselves, as well as always functioning and embedded
    in higher and lower loops.

Or 2) we say causality cannot be determined apart from the
organizational structure that one is considering. In essence,
it is not a relationship among events that pass through the
loop, but rather a property of the organization itself. The
answer to “what’s causing this action?” is the same as to “what’s
causing this perception?” It is the fact that these components
are organized into the functional form of a control system. So
to speak about causes, you can’t stick with the events. You have
to address the question, how does the system (specifically as a
system) bring about its own functioning?

All the best,
Erling


[From Marc Abrams (2005.12.05.1712)]

In a message dated 12/5/2005 3:59:06 P.M. Eastern Standard Time, EJorgensen@RIVERBENDCMHC.ORG writes:


[From Erling Jorgensen (2005.12.05 1530 EST)]

Bjorn Simonsen (2005.12.02,09:20 EUST)

After re-studying texts about Teleology and parts of PCT, I have the
understanding that PCT is quite independent of any causal relations.

I don’t think I would express it the way you are above, but I agree that
control processes necessitate a reconceptualization of cause-effect
relations as traditionally expressed in lineal causality. The whole
analysis changes when you close the loop with feedback processes, because
it introduces a different relationship to time. I would almost say that
this feature is constitutive of living processes. They do not just
exist with external causes; because of the circular causality embedded
in negative feedback arrangements, they in some sense cause themselves.

Back in 1999, I posted to CSGNet an essay that tried to partition the
various notions of “cause” as applied to control loops. I think it is
relevant to what you are raising, and I’d like to bring it into the
discussion.

I think the essay might be worth Dag’s posting it at the LCS website,
because I think it deals in a succinct way with some of the philosophical
differentiation between circular vs. lineal causality. I don’t remember
what subject heading it used, and I can’t seem to search the archives
previous to the past year. My own draft document had the date 7/16/99,
so it was sent somewhere around that time.


In any event, here is a reposting of that essay, which we might call,
“Causation and Negative Feedback Control Loops.”

From a systemic standpoint, I’m not sure “cause” is a very
helpful word in talking about control systems. It all depends
on which portion of the system you are considering at the moment.

I think Martin Taylor addressed this very nicely yesterday. Every component in a control loop is both a cause and effect. I also agree with you that what is ‘cause’ and what are ‘effects’ depend on how and what you are viewing at any given point in time.

I think it might be more useful to think of things as ‘necessary conditions’ and ‘goals’. In other words, given A, what is needed for A to exist or function?

a) Every snippet of the control loop can be thought of as
propagating a signal, and in that sense it has a (causal)
input and a (resulting) output. To the extent that we in
CSG analyze in this black box fashion, we usually focus on
the “nodes” of the control loop, i.e., the comparator function,
the output function, the perceptual input function (PIF), and
occasionally the environmental feedback function.

Yes, but this is not the only way one might look at a control process. It all depends, as you said, on the context, and in this case the level of abstraction. When I observe the negative feedback of a heating system I am not observing ‘signals’, even though at some level you could say I would have to; but are the observable ‘signals’ all the same at each level of abstraction? I don’t think so.

So the PCT view is at but one level of abstraction. I’m not sure how well that level will help us understand higher levels of abstraction any more than understanding what generates a pixel would help us appreciate a work of art.

Aside: Regardless of how many actual neurons, evoked potentials,
graded potentials, membrane permeability, etc. may actually be
involved, and whether the signals are meeting in ganglia or
other types of neural tissue, the Comparator has been elegantly
modeled as having two net inputs with inverted signs –
reference and perceptual signals respectively – and one net
output, the error signal. In a sense, the interaction of
reference and perception “cause” the error, but that’s not
the best way to think about it.

I am not sure why you say ‘regardless’. Are you interested in something approximating the truth or a piece of art?

What good is a model if it does not lead you to a better understanding of what is actually happening?

From my limited knowledge of the physiological control systems that are known, the comparison function is carried out in a number of different ways, none of them very elegant, but all highly practical.

This is not to say that the organization of the control processes of the mind is not in line with current PCT theory. I am simply stating that we just don’t know at this point, and it doesn’t matter how many elegant models are built to support any given theory.

I can provide any number of system dynamics models that support the views of every psych theory ever devised. All very elegant, and all very incomplete.

2nd Aside: Some have spoken of the error signal “causing” the
output or action of a control system, but that again seems to
cut the loop into snippets. The output function has been
powerfully modeled (in the tracking demos) as an integrating
function, requiring not only the reference-minus-perception
input, but a multiplier constant representing gain, a
multiplier constant representing a slowing factor (I think
that’s the “leaky” part of the integrator), and the previous
output of the function as a new input! What’s the “cause”
in all of this, or is that not the right concept to impose
on control systems?

I think it’s a perfectly good and important question to ask. What if I rephrased your question and asked: what are the necessary conditions for behavior to occur?

That seems like a more interesting and answerable question. What do you think?

3rd Aside: Apart from some weighted sums, and applying some
logical operators, I have seen almost no modeling of different
types of perceptual input functions. If Bill’s suggestion
about hierarchical levels of perception is a useful launching
point (and I think it is), then theoretically there should be
ten or eleven qualitatively distinct ways of modeling PIF’s.

I agree, and since this has not been done, why do you still believe it to be a good ‘launching pad’? What do you think has inhibited it from being done?

The actual neural computations of perceptions are undoubtedly
incredibly more complex, but for a model all we would need to
begin empirical testing of its concepts is to reproduce some
essential feature of a postulated level of perception.

Sorry, I think this is grossly simplistic. You can indeed ‘model’ the specific levels in the hierarchy. But modeling the levels is no evidence of their existence in the real world, any more than modeling Alice in Wonderland would be. One of the many problems is understanding how and what each level derives from each of the other ‘levels’.

In a hierarchy you have a very strict line of dependencies from the top to the bottom.

I agree that the HPCT formulation is a very elegant solution. The question, though, is whether it is an accurate one. I’m not convinced it is, and I’m trying to find out why you believe it is. Maybe I am overlooking something? Maybe I am not understanding something? I’m asking for someone to shed some light on my darkness.

This is not a challenge. I am truly interested in learning and I’m willing to put my beliefs on the table for any and all to examine, and I hope someone can come up to me and say: "Marc, I think you are mistaken here and here is why, and here is my data and evidence."

For instance, the essence of a Transition, to my way of thinking,
is the simultaneous experience of variance mapped against
invariance. An Event is a series of transitions framed –
one might almost say arbitrarily – with a beginning and an
end. So an event control system (again, as I conceive it)
is the one that does “framing”, but to test whether such
perceptions can be constructed and stabilized against
disturbances, we first need measurable models of variants
and invariants mapped against each other. Such complexities
are beyond my current modeling abilities, (not to mention
the point Bill has raised about getting the right dynamic
equations to model the environmental forces on the computer.)

There are a number of issues here. First, it is not simply one or two levels you need to concern yourself with, but eleven that are strictly dependent on one another.

You must show that level 11 is fully dependent on the prior 10 levels, and in a precise order. Second, you would then have to show how each level provides the ‘references’ for the levels below, and that all of this accounts for our ‘cognition’.

b) We can also consider a single control loop “in isolation,”
and ask whether causality is a helpful concept there. The rules
change (and so should the concepts) when you close the loop. In
one sense, every part of the loop is a cause of every other part.
The corollary to this is that every part of a closed loop is a
cause of itself! Some theories like to respond with the idea of
“circular causality,” but I think Rick is right that it often
just amounts to linear causality chasing itself incrementally
around the circle. The fundamental idea of accumulating
integrating functions (with all their ramifications) doesn’t
seem to enter the picture. It seems better to think of the
organization of components itself, not some event occuring
within it, as the effective cause.

What does a ‘control loop’ represent in the above formulation? As I noted before, there are many levels of abstraction, and control ‘looks’ quite different at the cellular level than at the organism level of abstraction.

c) We can move the zoom focus slightly farther out and consider
a single control loop together with its inputs. As the basic
model now stands, every loop has only two inputs from outside
itself – one from inside the organism, the reference signal
(which, again, can be modeled as the net effect of whatever
neural and chemical processes actually bring it about), and one
from outside the organism, the (net) disturbance. A traditional
view of causality would say that the reference and the disturbance
are the only two candidates for being a “cause,” and in a sense
we in CSG accept that.

Yes, and I would ask you: what data and evidence have provided you with the understanding to accept all that as a given? As Martin Taylor suggested yesterday, there may be more than just those two ‘inputs’.

But by quantifying the relations in the loop into equations,
Bill et al. have been able to say something much more precise
about these external causes. Only the reference is an effective
cause of the stabilized state of the perceptual input quantity.
Any causal effect from the disturbance on that quantity is
neutralized by the negative feedback action of the loop. The
cost is that the disturbance becomes an effective cause (in
inverted form) of the behavioral output. [This latter point
seems to be what Herbert Simon was referring to in his quote
about the behavior of ants, which was hotly debated a while back
on CSGNet.]
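
A rough numerical sketch of this asymmetry, for illustration only (a toy loop with arbitrary gain and slowing parameters and a unit feedback connection, not a reproduction of any published CSG simulation):

```python
# A single negative-feedback loop, iterated to equilibrium (illustrative parameters).

def run_loop(reference, disturbance, gain=50.0, slowing=0.01, steps=500):
    """Return (perception, output) after the loop settles."""
    output = 0.0
    for _ in range(steps):
        perception = output + disturbance            # input quantity = feedback effect + disturbance
        error = reference - perception               # comparator
        output += slowing * (gain * error - output)  # leaky integration of the error signal
    return perception, output

p, o = run_loop(reference=10.0, disturbance=-3.0)
print(round(p, 2), round(o, 2))   # ~9.75  ~12.75
# The perception settles near the reference (10), so the disturbance's effect on
# the input quantity is almost entirely cancelled; the output settles near
# reference minus disturbance (13), i.e. the disturbance reappears, inverted, in
# the behavioral output.
```

Raising the gain in this sketch pushes the perception still closer to the reference and the output still closer to exactly reference minus disturbance, which is the quantitative sense in which only the reference is an effective cause of the stabilized input quantity.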

Erling, many good scientists have done some wonderful work. If we take control as foundational then everything these behavioral scientists have been studying has to have been aspects of the control loop or process. The key is in understanding what aspects they were and are.

Contrary to the nonsense spewed by both Marken and Powers that we have little to gain and learn by understanding the work of others, I am afraid that this hubris rates right up there with any of the other gaffes Powers and Marken might have made in trying to gain acceptance for PCT.

d) We can move even further back and look at more of the
hierarchy, as it’s currently proposed to operate. Here almost
every control loop is embedded in a network of control loops
“above” it and “below” it. So in one sense, higher loops
“cause” it to operate by providing changing reference signals,
and it “causes” lower level loops to control by the same
mechanism. I deliberately say “higher loops”, plural, and I
mean it in two senses. For one thing, many loops at the next
higher level can be contributing to the net reference signal
at a loop at the next lower level, so perhaps all those loops
are causal.

No ‘perhaps’ about it. A level cannot exist without the levels ‘below’ existing.

But we can also speak of proximal and distal causes,
and include each relevant loop all the way up the hierarchy as
a “cause” of a given low-level loop’s operation. This is why I
have no problem considering “attending a meeting” as one (distal)
cause of contracting a given muscle on the way to the garage.
Just as closing the loop changes the notion of causality, so
does embedding everything in a network.

Again, I find that the question "what are the necessary conditions for level x to exist?" is a much more profitable way of looking at the issue. If you can honestly say and believe that, for a PCT hierarchy and a ‘systems’ level to exist, all those other levels must exist, and in that specific order of dependence, then I would appreciate it if you could explain your reasoning to me: how you have come to accept this as the ‘truth’, with no other possibilities.

e) Sticking with this hierarchical vantage point for one more
iteration (if you’ve stuck with me this far!), it needs to be
emphasized that the interaction between levels does not occur
by intact loops sending signals to other intact loops below
them. Rather, those lower level loops are part of the structure, part of the loop itself, of the higher level.
Remember, all loops are closed through the environment –
(other than the “imagination switch,” if we can figure out a
way to get it to function!) – which means that higher loops
have the longest (and slowest) path to travel to achieve
their control. And they only achieve it if the lower level
loops to which they contribute are achieving sufficient
control of their own variables.
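
To make the nesting concrete, here is a toy two-level arrangement, for illustration only (arbitrary parameters and an invented higher-level perceptual function): the higher loop's "output" is not an action on the environment at all, but the reference signal handed to the lower loop, and it is deliberately slower than the loop it feeds.

```python
# A toy two-level hierarchy (illustration only, not a CSG model). The higher
# loop is slower than the lower one, and its output is the lower loop's
# reference signal; only the lower loop acts on the environment.

def two_level(high_ref=20.0, disturbance=5.0, steps=2000,
              k_high=40.0, k_low=50.0, slow_high=0.005, slow_low=0.01):
    low_ref = 0.0   # written by the higher loop
    low_out = 0.0   # the only quantity that acts on the environment
    for _ in range(steps):
        env = low_out + disturbance    # environmental variable
        p_low = env                    # lower-level perception
        p_high = 2.0 * p_low           # higher perception built from the lower one
        # Higher loop: leaky integration of its error into the lower loop's reference.
        low_ref += slow_high * (k_high * (high_ref - p_high) - low_ref)
        # Lower loop: ordinary control of p_low against the reference it was handed.
        low_out += slow_low * (k_low * (low_ref - p_low) - low_out)
    return p_high, p_low, low_ref

p_high, p_low, low_ref = two_level()
print(round(p_high, 2), round(p_low, 2), round(low_ref, 2))   # ~19.75  ~9.88  ~9.97
# The higher perception ends up near its reference (20), but only because the
# lower loop keeps its own perception near the reference the higher loop set.
```

Setting k_low to zero in this sketch makes the dependence stark: low_ref climbs toward 400 while p_high sits at 10, far from its reference, because the higher loop's only route to its own perception runs through the lower loop's control.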

Ok, sounds reasonable. How did you come to this? Care to share your thoughts behind the belief?

So maybe this reflection has come full circle (sorry about
the pun, but it fits!), in that when higher levels “cause”
lower level perceptions to become stabilized, they are simply
causing their own control to happen. Basically, I think we
have two choices for using causality in a way that reminds
us (instead of deflecting us) about how control loops operate.

Again, the PCT model is very elegant. The question, again, is whether it is accurate. And again, instead of ‘cause’, if we ask what the necessary conditions are for a ‘y’ goal to be in place, it works just the opposite way from a perception (that is, downward): at each level the reference conditions are fully dependent on the levels above for their existence, and in that specific order.

  1. Either we allow this reflexive notion of “self-causality”
    to be part and parcel of how we use the term – which means
    processes in the loop are always in a time relationship
    within themselves, as well as always functioning and embedded
    in higher and lower loops.

Everything occurs through time, although you might sometimes have a difficult time realizing this when working with mathematics.

Erling, you must have had a great deal more free time in 1999. ;-)

Regards,

Marc

[From Dag Forssell (2005 12 05 16:00)]

Erling,

Your post appeared as:

[From Erling Jorgensen (990716.0235 CDT)]

Here are the three subsequent posts for your consideration if you want to revise or expand upon your essay before I format and post it, which I will be happy to do. We can work this one the same way we massaged the Frame Question.

Dag


=============================================
Date: Fri, 16 Jul 1999 08:47:43 -0700
From: Richard Marken <rmarken@EARTHLINK.NET>
Subject: Re: Causality

[From Rick Marken (990716.0850)]

Erling Jorgensen (990716.0235 CDT) --

This was a very nice discussion, Erling. You said many of the
things I was planning to say myself. I will still try to say
them (when I get some time today). But I wanted you to know
that I think you made some very good points.

Best

Rick
--
Richard S. Marken Phone or Fax: 310 474-0313
Life Learning Associates mailto: rmarken@earthlink.net
http://home.earthlink.net/~rmarken

------------------------------

Date: Fri, 16 Jul 1999 12:47:34 -0700
From: Bruce Nevin <bnevin@CISCO.COM>
Subject: Re: Causality

[From Bruce Nevin (990716.1045 EDT)]

Erling Jorgensen (990716.0235 CDT)--

An excellent synopsis! A tour de force, tracing the theme of causation
coherently through the Gordian all-at-onceness of the control hierarchy,
and showing how a model can help us distinguish the multiple ambiguities of
a simple word like "cause."

Another step: the relationship between a neural cell (an autonomous control
system) and a multicelled control system in which it participates. This is
a special case of the relationship between the cellular order and
multicellular orders of organisms. Perhaps there are parallels between
virus-host and parasite-host, prokaryote-cell and symbiote-organism,
primitive multicellular structures and observable social structures. But
your theme is causation. Here, the causal cord is cut, or anyway has a more
accidental cast, as a side effect that is not only unintended but even
beyond the perceptual capacities of the organism effecting it. It seems
clear that cells are indeed autonomous control systems, that they do not
control any variables that are controlled by the organisms that they
constitute (cells do not control neural signals as such), and vice versa
(humans do not control rate of flow of ions across a cell membrane, or
whatever it is that the cell is controlling). And this however much the
control of variables at each level or "order" of organization (e.g.
cellular and human) may *influence* the state of variables controlled at
the other. We may speak of a cancerous tumor in the brain as causing a loss
of eyesight, but that is not a result that is controlled by the cells or by
the tumor; it seems rather that cancer is a side effect of the failure of
some cells to control their "social" relations with other cells in an organism.

Organisms that collectively stabilize their shared environment (which
includes especially each other) survive better than those that don't.
Consequently, on an evolutionary time scale, in each type of organism that
survives there must arise innately some controlled variables, or more
likely some way of controlling variables (input and output functions), that
has as a side effect a tendency of the individuals to stabilize in
higher-order systems. But this is not a discussion of perceptual input
control; it is a discussion of the evolution of innate properties
(variables, values, input functions, output functions) in populations of
control systems. There is causation here too, but even more indirect than
what you have discussed.

         Bruce Nevin

------------------------------

Date: Fri, 16 Jul 1999 11:41:13 -0700
From: Bill Curry <capticom@OLSUSA.COM>
Subject: Re: Causality

Thanks for your work Erling--a beautiful contribution...now filed in my
CSG "Superthreads" folder!

Best,

Bill Curry

[From Erling Jorgensen (2005.12.06 13:50 EST)]
Bjorn Simonsen (2005.12.02,09:20 EUST)

After re-studying texts about Teleology and parts of PCT, I have the
understanding that PCT is quite independent of any causal relations.

I don't think I would express it the way you are above, but I agree that
control processes necessitate a reconceptualization of cause-effect
relations as traditionally expressed in lineal causality. The whole
analysis changes when you close the loop with feedback processes, because
it introduces a different relationship to time. I would almost say that
this feature is constitutive of living processes. They do not just
exist with external causes; because of the circular causality embedded
in negative feedback arrangements, they in some sense cause themselves.

I accept that, and I know it is too superficial to say "PCT is quite
independent of any causal relations". And I appreciated your wonderful
synopsis. (Congratulations on your licensure, Erling!)
There is really no more to say, but I wish to close my "Determinism"
contribution with a few words.

I think it is very important that we (and I too) spell out our definitions
when we use concepts that are somewhat uncommon. Bryan tried to do that, in a
way, in his mails.
We in CSG know that control at the relationship level often meets with
external conflicts. These could be reduced if we defined our concepts
(I guess).
I visited dictionary.com for the concept "causal", and I discovered a lot of
synonyms (actual cause, cause in fact, superseding cause, proximate cause,
procuring cause, producing cause, intervening cause, probable cause, and
more), and I think people are oriented more toward one of these synonyms than
toward the others. And this must lead to disagreements.

I am not sure we have to use the concept of teleology, because we are able to
express PCT using the concept of purpose. But if we are able to delimit the
concept of teleology in a correct way, we can use it among people who are
familiar with teleology but not with PCT.

I also think that we live in a society where people talk about the cause of
the Iraqi war, of advertising, of violence, and more, as if there were a
causal reason. And I think PCT is able to demonstrate that disturbances are
just one part when we discuss behavior. Purpose is the other.

In the end I will turn back to Norbert Wiener et al., who said in their
essay:
"Meaningfulness, as it is defined here, is quite independent of the law of
causation, initial or final. Teleology has principally been discredited
because it implied a cause following in time a given effect. But when this
aspect of teleology was thrown out, unfortunately the coherent insight into
purpose was thrown out with it. Since we consider meaningfulness an essential
concept, necessary for our understanding of different types of behavior, we
claim that teleological studies are advantageous even if they set causal
relations aside and are suitable only for an examination of purpose.
We have restricted the content of teleological behavior by using this term
for meaningful responses controlled by the error of the response, that is,
the difference between the state of the active object at a given moment and
the final state interpreted as the purpose. Teleological behavior is
therefore synonymous with behavior controlled by negative feedback, and it
gains in precision by delimiting the concept sufficiently.
In accordance with this restricted definition, teleology is not opposed to
determinism, but to the non-teleological. Both teleological and
non-teleological systems are deterministic when the behavior in question
belongs to the area where determinism is practically applicable. The concept
of teleology has just one thing in common with the concept of a causal
relation: the time axis. But a causal relation means one single, relatively
irreversible functional situation, while teleology, on the other hand, is
suitable for behavior, and not for functional situations".

But as Martin says, "If we look into the loops, behavior is also a
functional situation."

Bjorn

[From Erling Jorgensen (2005.12.06 1100 EST)]

Dag Forssell (2005 12 05 16:00)

Here are the three subsequent posts for your consideration if you
want to revise or expand upon your essay before I format and post it,
which I will be happy to do. We can work this one the same way we
massaged the Frame Question.

Hello, Dag. I appreciate your willingness to format the Causality
essay, similar to your help with the Frame Problem essay. I also
very much appreciate your providing the additional context of the
subsequent remarks by Bruce Nevin.

I am not sure if it will be an expansion of the original essay,
but I am pondering the implications of Bruce’s comments. He poses
some intriguing possibilities about “organisms that collectively
stabilize their shared environment.” There is an emergent (or
bootstrap) causality here, if I can call it that, tied to mutually
creating the conditions for more successful control.

I am starting to wonder about symbiotic forms of control, like those
postulated by Bruce between cellular and multicellular orders of
organization. It almost seems that successful symbiotic control
could lead to a form of meta-stability that would have enormous
evolutionary (read “not-reorganized-away”) advantage.

The thoughts are still in the germination stage, and I’ll see if I
can pull them together over the next few days or so. I’m not sure
if the final product will be one extended essay, or an additional
one in a slightly different direction. My inclination at this point
is to let them stand independently. I think the original Causality
essay deals with the multiple tiers of meaning one must take into
account with a word like “cause”, depending on one’s vantage point at
the moment when considering a system of closed loops. The essay I
am envisioning here launches off such a base, and considers various
implications when autonomous control systems interact in a myriad of
ways. It specifically would consider the special case situations
of different logical orders of control providing mutually
advantageous conditions for more successful control than either
order could muster on its own.

More later. I’m also pondering some of the comments Marc Abrams
made, and will try to respond to those ideas as well. Thanks for
the input, folks. We’ll see what develops.

All the best,
Erling


[Martin Taylor 2005.12.16,13.00]

[From Erling Jorgensen (2005.12.06 1100 EST)]
I am pondering the implications of Bruce's comments. He poses
some intriguing possibilities about "organisms that collectively
stabilize their shared environment." There is an emergent (or
bootstrap) causality here, if I can call it that, tied to mutually
creating the conditions for more successful control.

I am starting to wonder about symbiotic forms of control, like those
postulated by Bruce between cellular and multicellular orders of
organization. It almost seems that successful symbiotic control
could lead to a form of meta-stability that would have enormous
evolutionary (read "not-reorganized-away") advantage.

The thoughts are still in the germination stage, and I'll see if I
can pull them together over the next few days or so.

When you do that, I'd appreciate your comments on <http://www.mmtaylor.net/PCT/Mutuality/index.html>, which is on this topic. If you have as cogent comments on this as you do on causality, it could be a well worthwhile exercise.

Martin