B:CP Course Study Guide, CH 2 Models & Generalizations

[Rick Marken (2013.07.13.1155)]

And here is the Study Guide for the next Chapter.

Best regards

Rick

Week 3 Study Guide, CH 2 Models & Generalizations.doc (26 KB)

···


Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[Kent McClelland (2013.07.16.1455)]

Some thoughts on “internal” vs. “external” models (B:CP, p. 18):

The distinction that Bill Powers makes in Chapter 2 between “‘internal’ and ‘external’ theories” and the models based on those theories brought to my mind the two kinds of model airplanes that kids used to build, back before the days of computer games
and iPads.

In my distant youth, kids who were interested in airplanes had a choice of two kinds of model airplane kits when they went to a toy store.

The first kind of kit was the molded plastic model. When you opened the box, you found lots of little plastic pieces that had to be glued or snapped together and step-by-step directions for assembling the model plane. When the plane was assembled, you
could paint it or apply decals to make it look more realistic, and the model was ready to play with or (more likely) to sit as a decoration on your shelf.

The second kind was typically made of balsa wood instead of plastic and might require some carving or sanding to make the parts fit together properly, as well as some painting if you wanted to make it look pretty. It also came with directions for assembly,
and it usually had a propeller and an engine of some kind, ranging from a twisted rubber band for the low-end models to an electric or even gas-powered motor for the fancier models. If you were a dedicated hobbyist, you could even buy and build radio-controlled
models that allowed you to send signals to change the direction of the plane in flight.

The first kind of model plane looked much more realistic. Although no one would ever mistake the little plastic model for a full-size 747 or X-15, it simulated all the surface details of the actual plane. Underneath the surface, however, it was just a
hollow plastic shell.

The second kind of plane was more fun to play with, because you could actually fly it. The motor, propeller, wings, fuselage, tail assembly, and landing gear all worked together to put the plane in motion through the air. Had you been so foolish as to
try to fly one of the plastic models, it would have just dropped like a stone and broken into smithereens. Of course, the wooden models often crashed, too, but usually they followed some kind of interesting flight pattern first.

A child playing with plastic models could learn a lot about the different parts of a plane and what different kinds of planes looked like, but the child would learn practically nothing about the mechanics of actually flying a plane.

Playing with the second kind of model, a child could learn a lot about how real planes fly, because physical forces like gravity, friction, and air flow acted on the parts of the model plane in exactly the same way as they did on real planes. Adjusting the wing flaps and tail surfaces of the model planes, for instance, could produce the same kinds of stalls and turns as occur with real planes in flight.

In Bill’s terms, the first kind of model plane might be classified as an “external” model, because it deals only in surface appearances, and it lacks any of the “subsystems” (B:CP, p. 15) that are the essential parts of a functional airplane. The second kind of model plane is an “internal” model that, despite being simplified in many respects compared to a real airplane, has all the subsystems necessary for it to work in the same way.

In Chapter 2, Bill is arguing that all the previous psychological theories listed in the chapter are based on external models, and that the theory he offers in the book is an internal model, a model that is much more revealing about how brains actually work.

On a related topic, I took a look at the paper by Rodgers that Rick Marken placed in his dropbox on the “quiet methodological revolution” in psychology over the last few decades. Rodgers argues that null hypothesis significance testing and the general linear model have been replaced by more sophisticated statistical modeling techniques, like structural equation models, log-linear models, or even nonlinear models, all of which rely on goodness-of-fit testing.

To my mind, this change in statistical modeling techniques is far from revolutionary. Having published a paper myself using structural equation modeling (some 30+ years ago!), I feel confident in asserting that what you get after using all those fancy statistics is essentially no different from the results of using the general linear model: a static picture of the surface appearances of averaged behavior, with no clue to the workings of the underlying subsystems producing the apparent behavior. Rick Marken’s “So You Say You’ve Had a Revolution” article is still the proper response to inflated claims like those Rodgers is making.

Kent


Great analogy Kent, cheers!

Warren

···

Sent from my iPhone


[John Kirkland 2013.07.16]

Creating satisfactory hypotheses about the cause of what’s going on is a synthesising, creative act: one needs to account for everything that’s been observed everywhere (show me – my degree is from Missouri), for emergent generalisations (aka logical consistency), for other accounts (hypotheses often parading as theories), for face validity and statistical validity (as well as discriminant validity, conceptual validity and others), for individual and group variability, and so on and so forth. In effect it is a gigantic exercise in reducing noise as much as possible and finding what’s ‘there’. It’s a tall order for anybody. And presenting straw men and red herrings along the way is a side-show act.

What BP has offered appears to be a series of qualitative steps, with successive ones arranged on different platforms. There’s an implicit nested hierarchy. I suspect there are more steps than these three, too.

Many years ago we started tinkering with various statistical models because, simply put, we could not account for individual variability. It would have been easy enough to ignore these irritations but they would not go away. The upshot is a reworked suite of algorithms that’s taken three decades to develop, test and validate.

The holy grail for our approach is to map the phenomena under investigation and try to identify these maps’ axes. Through no error of our ways, almost every MDS map collapses to three viable orthogonal dimensions. And, yes, we have designed software to view these 3D maps. The good news is that if anybody can make a ‘reasonable’ decision about whatever is under investigation, we can map it. Our range of studies has included most sensory inputs.

I’m consistently amazed at how slow I am on the uptake sometimes. Duh, these axes are possibly the potentially controllable variables. One thing’s for sure, there is not a hope in hell of seeing them without heavy analyses. Though once spotted they stick out a mile. So, I’m a tad reluctant to toss out the data with the analyses (baby and bathwater generalisation).
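For anyone unfamiliar with MDS (multidimensional scaling), here is a toy sketch of the classical (Torgerson) variant in plain Python. This is my own simplified illustration, not the software mentioned above, and it handles only a one-dimensional configuration; the idea is that coordinates, and hence candidate axes, are recovered from nothing but pairwise dissimilarities.

```python
# Toy sketch of classical MDS (Torgerson scaling), recovering a 1-D
# configuration from nothing but a matrix of pairwise dissimilarities.
def classical_mds_1d(dist, iters=500):
    """Return 1-D coordinates whose mutual distances reproduce `dist`."""
    n = len(dist)
    sq = [[dist[i][j] ** 2 for j in range(n)] for i in range(n)]
    row_mean = [sum(r) / n for r in sq]
    grand_mean = sum(row_mean) / n
    # Double-centring: B = -1/2 * J * D^2 * J
    b = [[-0.5 * (sq[i][j] - row_mean[i] - row_mean[j] + grand_mean)
          for j in range(n)] for i in range(n)]
    # Power iteration for the leading eigenvector of B
    v = [1.0] + [0.0] * (n - 1)
    for _ in range(iters):
        w = [sum(b[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    eigval = sum(v[i] * sum(b[i][j] * v[j] for j in range(n)) for i in range(n))
    return [vi * eigval ** 0.5 for vi in v]

points = [0.0, 1.0, 3.0, 7.0]                        # the "true" configuration
dist = [[abs(p - q) for q in points] for p in points]
coords = classical_mds_1d(dist)
# Recovered coordinates reproduce the original distances (up to sign/shift)
print([round(abs(coords[0] - c), 2) for c in coords])  # [0.0, 1.0, 3.0, 7.0]
```

In real applications the eigendecomposition is done properly over all dimensions, and how many eigenvalues remain "viable" is exactly the question of how many axes the map has.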

Here’s an example from our research stable: colour.

Many years ago whilst shooting the breeze we became fascinated with Benham discs. We created a programme to make our own and printed these for pasting onto 45rpm records. Some of you may recall stacker players, where several records could be placed onto a spindle and tripped to fall onto the rotating disc. We asked people to tell us about any colours they might have been noticing. Typically, pastels are reported consistently. These are known as subjective colours. Why subjective? Because the patterned discs only had black printed on white. We then asked ourselves what people with colour-vision deficiency (CVD) would report.

This question led us to a line of research that’s ongoing. We never got around to answering that initial question, though others have done so since. Where it did take us was to CVD assessment. As we aren’t physiologists, we persuaded the (suspicious) staff of an ophthalmology department at another institution to join us. We reanalysed some of their data using our emergent approach and surprised them by obtaining outcomes as good as, if not better than, those from standard analytical methods. This was the start of a string of publications on this topic.

Long story short, we designed a completely new approach for assessing CVD in anybody, severely affected or not. Indeed, my colleague has just co-authored an article reporting the identification of sub-clinical heavy-metal poisoning, as well as type II diabetes, via shifts in colour perception. Not bad outcomes from a little irritating and apparently innocuous question.

The other reason for studying colour perception was its cost: almost nothing. We’ve been most fortunate that many other researchers have shared data sets, enabling us to reanalyse their data. There’s an adage attributed to Nobel Laureate Rutherford (whilst at Manchester, Warren): now we have no money, we’ll have to use our brains.

We extrapolate: for instance, we vary lighting conditions. And, secondly, from data obtained across three generations (affected father, carrier daughter, affected grandson), we were the first to identify otherwise ‘invisible’ carrier status, when the effect is ‘strong’, from the daughter’s behavioural reports.

We abstract: we’ve got a test which seems to be holding up to sustained scrutiny (it’s now used by several researchers in different countries).

And, there’s model building: sub-systems within the system being studied. There is the obvious genetic contribution (let’s see: if there are around 150 members of CSGnet, and let’s say 66% are males, then I’d opine there are at least 10 with CVD ‘problems’, and for about five of them this will be on the more severe side). We can reliably distinguish between heavy smokers and non-smokers (smokers’ eyes are screwed – pun intended), at least from their colour-perception reports, and this difference may well have a ‘cause’; our physiological colleagues can make some pretty shrewd guesses (working hypotheses) about what’s going on with the machinery.

So, why this little story? It’s my view we need all three steps outlined by BP; they are interdependent. But if a researcher chooses to make any of the lower ones a primary platform (and there are many reasons for this limitation), then perhaps we could be patient and consider this as a potential contribution to others’ model building.

We are reasonably confident our take on MDS (multidimensional scaling) is OK, since we’ve a slew of articles on many applications. This is in no way to be taken as an endorsement of our approach; there’s more to be done yet. For instance, in our latest paper we revisited the Points-of-View approach, which was bumped by INDSCAL, a method that, ironically, washed in the very noise through aggregation (the very point BP was making), and within which we are now finding the signal. And, even more startling, the data consist of basic card-sorting, which has a very strong Gibsonian base (aka discrimination).

If I am right in my slow uptake and gradual grasp of PCT, then might it be possible we stumbled across what I’m calling ‘potentially controllable variables’ (BP’s underlying properties; for colour there are two axes, red-green and yellow-blue, orthogonal in the Newtonian colour circle) and now may have a model-building platform?

(As an aside: thanks to Rick for his patience in answering my awkward questions off line).

Finally, I’m finding B:CP an even better read this time around and no less challenging. Others’ contributions to Rick’s curly questions, even if these are not addressed directly, are welcome fodder.


(From Adam Matic, 2013.07.16. 1500 CET)

I’ve been trying to reconcile Bill’s criticism of behaviorist methods with other criticisms of positivist methodology in general, since behaviorism is, according to some, a direct consequence of positivism in social science.

The main parallel I found is one that distinguishes “betting on outcomes in the future” from “scientific prediction”. Betting is trying to figure out, either by extrapolation from past trends or by some other means of statistical prediction, what will happen in the future, in “the real world” (as opposed to a laboratory setting). Scientific prediction, on the other hand, involves, and indeed requires, a model of a limited number of variables, a controlled laboratory setting, and a prediction of the approximate value of a variable within a margin of error.

In physics there is, I believe, some confusion between those two methods. There is experimental physics and there is astronomy. Experiments are clearly done in the strict form of mathematically modeling variables, knowing all along that the prediction will hold only under the stated conditions. Physicists don’t try to predict the future; they just try to find models of causality. They can find out what happens under which conditions, but they don’t try to predict the conditions of the future.

In astronomy, models of planetary movement are at the same time predictions of the future. There are relatively few unexpected occurrences, like comets or asteroids, so their influence can usually be disregarded in a prediction.

When physicists, or people influenced by the success of physics, try to deal with human behavior, they adopt the astronomy model instead of the laboratory, experimental model, and this happens all across the social sciences. Sociologists, psychologists, and economists all extrapolate statistical trends into the future, or look for correlations between past variables and even call those correlations “effects of variable A on variable B”, implying a causal influence. The main problem with this kind of research is the large number of factors influencing any given variable at any given time. Correlations found in one setting usually don’t appear in different settings. Correlations found in the past don’t repeat in the future. Social scientists, when trying to predict something, are effectively just betting on what the average will be in the future, and calling that science. Physicists, when criticizing the social sciences, point out bad uses of statistics, but don’t point out the lack of an underlying model of human behavior.

Perhaps a similar parallel could be made with the kinetic theory of gases. The social sciences try to predict the average movements of a large number of gas particles without having a model of how an individual gas particle behaves (or having only a very, very simple S-R model), and without knowing all the environmental variables that could influence the behavior of the gas particles.
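To make the point concrete, here is a toy simulation of my own (all numbers made up): every individual in a population follows a perfectly deterministic rule, yet the group-averaged analysis finds almost no effect, because the individual rules differ.

```python
import random
import statistics

random.seed(42)

# Hypothetical illustration: every "particle" (individual) follows its own
# perfectly deterministic rule r = a_i * s, but the slopes a_i differ in
# sign and size across the group. A group-averaged analysis then finds
# almost no effect, even though each individual is completely lawful.
n_individuals = 1000
stimuli = [s / 10 for s in range(-50, 51)]           # stimulus levels -5.0 .. 5.0
slopes = [random.choice([-1, 1]) * random.uniform(0.5, 1.5)
          for _ in range(n_individuals)]

# Average response of the whole group at each stimulus level
avg_response = [statistics.mean(a * s for a in slopes) for s in stimuli]

# Apparent group-level slope (least squares through the origin)
group_slope = (sum(s * r for s, r in zip(stimuli, avg_response))
               / sum(s * s for s in stimuli))

print("every individual |slope| is at least 0.5")
print("apparent group slope:", round(group_slope, 3))  # close to zero
```

The group-level statistic is stable and predictable, but it tells you nothing about the mechanism operating in any single individual, which is the point about gas particles above.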

Now, there is a kind of laboratory experiment in psychology, but it doesn’t involve a model of a person. It can show whether a certain event influences a certain variable on average. Take, say, the effect of caffeine on reaction time: on average, people will have shorter RTs on caffeine than without it. That is all an experiment in psychology can say; it does not deal with the mechanisms of caffeine’s influence, nor with the individual variables that moderate the influence. The same can be said of randomized, placebo-controlled trials in fields such as medicine. They can find what works, but not how it works. They can be viewed as a type of observation that comes prior to a real experiment.

What PCT gives are the tools to conduct a real scientific experiment in the field of human behavior. We could say it follows methodological individualism: studying individual people instead of group averages, trying to find how a single gas particle behaves before going for group behavior. It doesn’t predict the future but provides explanations of mechanisms. It doesn’t just say what works, but gives testable models of how things might work.

···

=====================================================

After reading this, I’m not sure it’s very coherent. Any comments are welcome. I’d say the main point is that science is not, or should not be, in the business of predicting the future, but only in the business of providing models of causes and mechanisms, and that is what PCT provides for the study of humans, or indeed of all living things and groups of living things.

Best

Adam

[From Fred Nickols (2013.07.16.1139 EDT)]

My responses are attached.

Regards,

Fred Nickols, CPT

Managing Partner

Distance Consulting LLC

The Knowledge Workers’ Tool Room

Week 3 Study Guide CH 2 Models Generalizations - FWN.doc (30.5 KB)

[Rick Marken (2013.07.18.1310)]

Just wanted to say thanks to those of you who posted replies to the study guide questions for Ch. 2 on Models and Generalizations. As usual I’ll wait until the weekend to give my own summary of the chapter, making an effort to include your ideas in that summary. But if you have time, there is one thing that I would like to get your comments on. In Chapter 2 Bill is making a case for PCT being a very different kind of psychological theory – one that is more like the theories of physics than those of traditional psychology. PCT is a “working model” in the sense that it generates behavior from the properties of, and interactions between, assumed system components. I am wondering whether any of you can identify any current psychological theories that are of the same type as PCT. [By the way I don’t know the answer to this myself; there are some theories that I think of as models in the PCT sense but I’m not sure; I’m asking this just for my own information, to make the discussion of this chapter a little more concrete.]
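As a minimal sketch of what “working model” means here (a toy illustration of my own, not code from B:CP, with arbitrary gain and slowing values), a one-level control loop generates its behavior from the interaction of comparator, output function, and environment:

```python
# Toy illustration (not code from B:CP; gain and slowing values are
# arbitrary): a one-level control loop that generates behavior from the
# interaction of its assumed components rather than from a fit to data.
def run_control_loop(reference, disturbances, gain=50.0, slowing=0.03):
    """Simulate a proportional control system with a slowed output function."""
    output = 0.0
    perceptions = []
    for d in disturbances:
        perception = output + d                       # input function
        error = reference - perception                # comparator
        output += slowing * (gain * error - output)   # leaky output function
        perceptions.append(perception)
    return perceptions

# A constant disturbance is resisted: the perception settles near the
# reference, at (gain*reference + d) / (gain + 1), about 9.86 here.
trace = run_control_loop(reference=10.0, disturbances=[3.0] * 200)
print(round(trace[-1], 2))
```

Nothing in the code stores what the behavior “should” look like; the trajectory emerges from the properties of the components, which is the sense in which the model is a generative one rather than a description of data.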

Best

Rick


Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[John Kirkland 2013.07.19]

One candidate would be ‘attachment theory’, which John Bowlby happened to describe as a working model as well. Attachment theory has been described as the only viable theory of socio-emotional development; with Bowlby’s theorising and Mary Ainsworth’s applications, it combines cognitive, ethological, systems, and dynamic perspectives (in the sense of the past being represented).


[Rick Marken (2013.07.18.1450)]

Thanks John. Now could you try to describe the theory in a bit more detail and explain why it might qualify as a working model, in the sense described in Ch 2 (or just from knowing how the PCT model works).

Best
Rick

···


Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

Nice challenge Rick; sure, I’ll give it a go, but it will take a little while for draft #1 to emerge.


[Rupert Young (2013.07.19 10.00 BST)]

Not particularly au fait with psychological theories, but ...
1. Can you think of two or three examples of psychological theories that are examples of “extrapolation”-type theories, per Powers’ description in that section of Chapter 2?

Psychometric testing, such as Myers-Briggs Type Indicator?

2. Can you think of two or three examples of psychological theories that are examples of “abstraction”-type theories?

Cognitive theories of memory, with classifications of episodic, declarative, short-term memory etc?

3. Can you think of two or three examples of psychological theories that are examples of “model”-type theories?

Not sure. There aren't any? How about Gibson's ecological approach?

4. David pointed me to a paper in American Psychologist that is ostensibly about modeling in psychology.... boring paper (just the kind I like) but if you are willing to read it I would be interested in hearing whether you think the models described in the paper are of the extrapolation, abstraction, or model-type described by Powers in this chapter.

Still slogging my way through it. If only you'd said it was an exciting paper :-)

Leading questions:
1. There isn't one, you can't apply statistical generalisations to discrete real-time events.
2. 1, 9, 10, assuming constant direction and velocity.

Regards,
Rupert

···

On 13/07/2013 19:56, Richard Marken wrote:

[Rick Marken (2013.07.13.1155)]

And here is the Study Guide for the next Chapter.

Best regards

Rick

--
Richard S. Marken PhD
rsmarken@gmail.com
www.mindreadings.com

[Martin Taylor 2013.07.19.11.11]

I have a long-standing problem with Chapter 2. Bill and I argued about it with no resolution some 20 years ago. The problem is that, to use Bill's terminology, his abstract generalizations are not mine, because his view of what constitutes "science" differs from mine. What follows is necessarily unfair, as Bill is not available to correct my interpretation of his viewpoint. However, thinking of this on-line seminar as a stage in the advancement of science, I have resolved my conflict in favour of putting forth our contrasting views as best I can. I hope I will not do Bill an injustice, as I believe he would not accept much of what I have to say.

To frame the problem, both of us see "science" as a means to an end, the end being the better understanding of Nature (the unknowable "real world"). However, Bill sees that understanding as the ability to construct simulations that behave the way the simulated portion of Nature seems to behave. Yet in his terms, to construct those simulations (models) actually provides only an "external" view. The models assume the existence of internal components. Maybe those components have characteristics that can be attested by other means of investigation -- Bill says he tries to make them conform to what is known from neurophysiological investigation. If they do, well and good, but the problem is only regressed to the next level: what is it that allows us to say how these attested components work; how are they simulated to match the way the "real world" entities seem to behave? Under what conditions do the simulations match the observed behaviour, and under what conditions do they not?

My view is different. It is that understanding Nature is a question of the ability to judge what will be observed in conditions other than those already observed. Bill's three "abstract generalizations" (generalization, abstract generalization, and models) with the external-internal contrast for the last two (or maybe just the last) are to me just ways to approach this understanding. If you can predict that under conditions XYZ you will observe ABC, you can say no more about the realm of discourse within which ABC is (or is not) observed. If you can say WHY ABC is observed under conditions XYZ and HOW different components work together to produce observation ABC, you have extended the realm of discourse and you understand a larger chunk of Nature. But then you would want to extend your understanding of the "why and how", to understand under what conditions the different components work as they should and when they do not.

Bill says late in the chapter that what he is going to offer is an organization of known components. That is a definite enhancement of the understanding of Nature over simply collecting a bag of known components. Building working simulations using abstractions of these components increases the precision of understanding, but I do not see it as other than a quantitative advance within any specific domain. Where the major increase of understanding comes from is the ability to "abstractly generalize" the organization to a wide range of life-forms and environmental conditions.

···

----------

I have avoided using the term "model" in Bill's sense, because to me it implies "toy". "Toy" might well be appropriate considering the simplicity of our control simulations as compared to the apparent complexity of any living organism, but it sounds a bit pejorative. A toy train looks as though it behaves like a real one, but the way it works is quite different. Bill's "toy" is supposed to work the way the real one does.

As Rick has often said, the "fact of control" can be directly observed. Life forms can be observed controlling.

They can also be observed not controlling other things we as observers can see. Control is an external fact (using Bill's language). The form of the mechanism that allows control is not. Nor is the mechanism that allows us as observers to determine the conditions under which a particular organism (person) will control one thing rather than another. Those conditions would presumably come under Bill's first abstraction "generalization". We do not have a "model" or even an "abstract generalization" for it.

I know I am getting ahead of our place in the book here, but I think the point is worth making. Applying Occam's razor probably leads to the simple scalar control loop we all know and love, but there are other possibilities. Those possibilities multiply when we deal with multi-level control of simultaneously varying perceptions. All that control requires is that there be a way for the state of the environment to be sensed and a way for the organism to influence what it senses. What happens between is open to a wide range of possibilities, of which the straight-through connection by way of a comparison with the reference value is probably the simplest, and therefore the one to be chosen in the absence of evidence to the contrary. When we get to other aspects of the Nature of living systems, we need more. Physics places limits on what can be controlled, but hardly limits what is actually controlled from within what can be controlled.
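[For readers following along without B:CP at hand: the "simple scalar control loop" mentioned above can be sketched in a few lines. This is only a toy illustration; the reference, gain, slowing, and disturbance values below are arbitrary choices for the sketch, not anything from the book.]

```python
# A toy scalar control loop: perception is compared with a reference,
# and the error drives an output that acts back on the sensed variable.
# All numeric values (reference, gain, slowing, disturbance) are
# illustrative assumptions, not values from B:CP.

def simulate(steps=500, reference=10.0, gain=50.0, slowing=0.02):
    output = 0.0
    perceptions = []
    for t in range(steps):
        disturbance = 5.0 if t > 100 else 0.0   # step disturbance at t = 101
        qi = output + disturbance               # controlled environmental variable
        perception = qi                         # trivial input function
        error = reference - perception
        # leaky-integrator output function; the "slowing" keeps the loop stable
        output += slowing * (gain * error - output)
        perceptions.append(perception)
    return perceptions

trace = simulate()
print(round(trace[-1], 2))   # ~9.9: perception stays near the reference
                             # even after the disturbance arrives
```

The point of the sketch is the "fact of control": the perceived variable is held near the reference despite the disturbance, without the loop ever representing the disturbance itself.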

------------

In sum, I don't see the clean distinction between "generalization", "abstract generalization", and "model" that Bill sees. In the end, they all come down to "generalization".

Martin

[David Goldstein (2013.07.20.07:23)]

[Martin Taylor 2013.07.19.11.11]

Martin,

The three different strategies for generalization in the chapter are: Extrapolation, Abstraction, and Model Building, not the ones that you list. The first seems to involve regression methods of statistical analysis. The second suggests factor analysis, clustering or multidimensional scaling. The last one, model building, because it is talking about mechanism, probably involves experimental approaches and analyses like ANOVA. As Rick points out in his articles, these are all examples of the general linear model, which does not include feedback effects where input changes output, and output changes input.

I guess I don’t really understand why you don’t see these different strategies as different, at least in terms of the data analyses suggested by them.

David

···


[Martin Taylor (2013.07.21.10:45)]

David,

It's not so much that I see no difference, but that I see no clear boundaries between these classes (Extrapolation/Abstraction/Modelling). I didn’t use Bill’s headline words to label them, but words in his text.

Start with the assumption that there is a “real world” out there, and that our perceptions relate to it in some way. In this “real world”, any state or event occurs only once, and it is by Abstraction that we even segregate those elements that we include in the state or event from those we do not. To say that an event recurs is further abstraction, saying, in essence, that secular time is irrelevant. To say that an event could recur is Extrapolation, or, to use the word I prefer, generalization. It is equally Abstraction and Extrapolation to say that if some part of an event structure were to be modified in some way the event would turn out differently. Abstraction and either “generalization” or “Extrapolation” are inextricably mixed. You can’t extrapolate without abstracting what it is over which you are extrapolating, and you can’t abstract without generalizing – saying that for your purposes certain kinds of things are the same and other parts of the Universe don’t matter.

I'm not trying to suggest that there are no differences among the investigative techniques that can be used to investigate the world. What I am arguing is that I would divide the many different methods quite differently than Bill does. I think an important division is into those in which the environment is manipulated and the effects of those manipulations on observations are recorded (e.g. lab experiments), as opposed to those that simply observe the environment (e.g. astronomy). That distinction is orthogonal to Bill’s categories.

Another orthogonal dimension is the distinction between those things studied that respond passively to influences, such as rocks and water pools, and those that actively influence their environment when they are themselves subjected to external influences – meaning living things. You push a rock and either it moves or it doesn’t. You push a dog and you may get bitten, you may get a playful rollover, you may get a growl, or something else may happen.

The third orthogonal dimension of investigation could be characterized as statistical analysis of observation of the world versus statistical analysis of comparison between observation and “models” of the world. All of the statistical techniques you mention could be applied in either case. Models come in two flavours, as Bill says: those that simply produce results that mimic what is observed (“external”), and those that attempt to produce those results by the same mechanisms as are used by the thing observed (“internal”). This dimension thus has at least three possibilities – there are grades between the limit classes of models just described, depending on how the internal mechanisms are treated.

With just these three dimensions we have at least twelve categories, each of which incorporates Bill’s Abstraction and Extrapolation, which I do not see as separable, each requiring the other if it is to be meaningful.

Considering your assignment of different statistical techniques to Bill’s three categories, I don’t think any of them are in principle particularly favoured in any of the twelve of my categories or in any of Bill’s three. Whether they are in practice differentially used is a matter for observational analysis.

In response to your question I wrote pages more on this, going into examples and detail, without so far even getting into models of living things. It might have been sufficiently detailed to obscure the message, so I cut it all. I hope the above is enough to answer your question.

Martin
···

On 2013/07/20 7:35 AM, D GOLDSTEIN wrote:

[David Goldstein (2013.07.21.13:50)]

[Martin Taylor (2013.07.21.10:45)]

[David Goldstein (2013.07.20.07:23)]

[Martin Taylor 2013.07.19.11.11]

Martin,

Thanks for your answer. The purpose of the course is to help us all understand what Bill Powers said. Perhaps the major contribution of the chapter is the emphasis he places on “internal” modeling approaches, which theorize about the actual mechanisms inside the living organism.

David

···

From: Martin Taylor mmt-csg@MMTAYLOR.NET
To: CSGNET@LISTSERV.ILLINOIS.EDU
Sent: Sunday, July 21, 2013 10:45 AM
Subject: Re: B:CP Course Study Guide, CH 2 Models & Generalizations

David,

It's not so much that I see no difference, but that I see no clear

boundaries between these classes
(Extrapolation/Abstraction/Modelling). I didn’t use Bill’s headline
words to label them, but words in his text.

Start with the assumption that there is a "real world" out there,

and that our perceptions relate to it in some way. In this “real
world”, any state or event occurs only once, and it is by
Abstraction that we even segregate those elements that we include in
the state or event from those we do not. To say that an event recurs
is further abstraction, saying, in essence, that secular time is
irrelevant. To say that an event could recur is Extrapolation, or,
to use the word I prefer, generalization. It is equally Abstraction
and Extrapolation to say that if some part of an event structure
were to be modified in some way the event would turn out
differently. Abstraction and either “generalization” or
“Extrapolation” are inextricably mixed. You can’t extrapolate
without abstracting what it is over which you are extrapolating, and
you can’t abstract without generalizing – saying that for your
purposes certain kinds of things are the same and other parts of the
Universe don’t matter.

I'm not trying to suggest that there are no differences among the

investigative techniques that can be used to investigate the world.
What I am arguing is that I would divide the many different methods
quite differently than Bill does. I think an important division is
into those in which the environment is manipulated and the effects
of those manipulations on observations are recorded (e.g. lab
experiments), as opposed to those that simply observe the
environment (e.g. astronomy). That distinction is orthogonal to
Bill’s categories.

Another orthogonal dimension is the distinction between those things

studied that respond passively to influences, such as rocks and
water pools, and those that actively influence their environment
when they are themselves subjected to external influences – meaning
living things. You push a rock and either it moves or it doesn’t.
You push a dog and you may get bitten, you may get a playful
rollover, you may get a growl, or something else may happen.

The third orthogonal dimension of investigation could be

characterized as statistical analysis of observation of the world
versus statistical analysis of comparison between observation and
“models” of the world. All of the statistical techniques you mention
could be applied in either case. Models come in two flavours, as
Bill says: those that simply produce results that mimic what is
observed (“external”), and those that attempt to produce those
results by the same mechanisms as are used by the thing observed
(“internal”). This dimension thus has at least three possibilities
– there are grades between the limit classes of models just
described, depending on how the internal mechanisms are treated.

With just these three dimensions we have at least twelve categories,

each of which incorporates Bill’s Abstraction and Extrapolation,
which I do not see as separable, each requiring the other if it is
to be meaningful.

Considering your assignment of different statistical techniques to

Bill’s three categories, I don’t think any of them are in principle
particularly favoured in any of the twelve of my categories or in
any of Bill’s three. Whether they are in practice differentially
used is a matter for observational analysis.

In response to your question I wrote pages more on this, going into

examples and detail, without so far even getting into models of
living things. It might have been sufficiently detailed to obscure
the message, so I cut it all. I hope the above should be enough to
answer your question.

Martin

  On 2013/07/20 7:35 AM, D GOLDSTEIN wrote:

[David Goldstein (2013.07.20.07:23)]

[Martin Taylor 2013.07.19.11.11]

Martin,

      The three different strategies for generalization in the

chapter are: Extrapolation, Abstraction, and Model Building,
not the ones that you list. The first seems to involve
regression methods of statistical analysis. The second
suggests factor analysis, clustering or multidimensional
scaling. The last one, model building, because it is talking
about mechanism, probably involves experimental approaches and
analyses like ANOVA. As Rick points out in his articles, these
are all examples of the general linear model, which does not
include feedback effects where input changes output, and
output changes input.

      I guess I don't really understand why you don't see these

different strategies as different, at least in in terms of
data analyses suggested by them.

David

From: Martin
Taylor mailto:mmt-csg@MMTAYLOR.NET
To:
CSGNET@LISTSERV.ILLINOIS.EDU
Sent:
Friday, July 19, 2013 11:57 AM
Subject:
Re: B:CP Course Study Guide, CH 2 Models &
Generalizations

          [Martin Taylor 2013.07.19.11.11]



          I have a long-standing problem with Chapter 2. Bill and I

argued about it with no resolution some 20 years ago. The
problem is that, to use Bill’s terminology, his abstract
generalizations are not mine because his view is what
constitutes “science” differs from mine. What follows is
necessarily unfair, as Bill is not available to correct my
interpretation of his viewpoint. However, thinking of this
on-line seminar as a stage in the advancement of science,
I have resolved my conflict in favour of putting forth our
contrasting views as best I can. I hope I will not do Bill
an injustice, as I believe he would not accept much of
what I have to say.

          To frame the problem, both of us see "science" as a means

to an end, the end being the better understanding of
Nature (the unknowable “real world”). However, Bill sees
that understanding as the ability to construct simulations
that behave the way the simulated portion of Nature seems
to behave. Yet in his terms, to construct those
simulations (models), actually provides only an “external”
view. The models assume the existence of internal
components. Maybe those components have characteristics
that can be attested by other means of investigation –
Bill says he tries to make them conform to what is known
from neurophysiological investigation. If they do, well
and good, but the problem is only regressed to the next
level: what is it that allows us to say how these attested
components work; how are they simulated to match the way
the “real world” entities seem to behave? Under what
conditions do the simulations match the observed behaviour
and under what conditions do they not.

          My view is different. It is that understanding Nature is a

question of the ability to judge what will be observed in
conditions other than those already observed. Bill’s three
“abstract generalizations” (generalization, abstract
generalization, and models) with the external-internal
contrast for the last two (or maybe just the last) are to
me just ways to approach this understanding. If you can
predict that under conditions XYZ you will observe ABC,
you can say no more about the realm of discourse within
which ABC is (or is not) observed. If you can say WHY ABC
is observed under conditions XYZ and HOW different
components work together to produce observation ABC, you
have extended the realm of discourse and you understand a
larger chunk of Nature. But then you would want to extend
your understanding of the “why and how”, to understand
under what conditions the different components work as
they should and when they do not.

          Bill says late in the chapter that what he is going to

offer is an organization of known components. That is a
definite enhancement of the understanding of Nature over
simply collecting a bag of known components. Building
working simulations using abstractions of these components
increases the precision of understanding, but I do not see
it as other than a quantitative advance within any
specific domain. Where the major increase of understanding
comes from is the ability to “abstractly generalize” the
organization to a wide range of life-forms and
environmental conditions.

          ----------



          I have avoided using the term "model" in Bill's sense,
because to me it implies “toy”. “Toy” might well be
appropriate considering the simplicity of our control
simulations as compared to the apparent complexity of any
living organism, but it sounds a bit pejorative. A toy
train looks as though it behaves like a real one, but the
way it works is quite different. Bill’s “toy” is supposed
to work the way the real one does.

          As Rick has often said, the "fact of control" can be
directly observed. Life forms can be observed controlling.

          They can also be observed not controlling other things we
as observers can see. Control is an external fact (using
Bill’s language). The form of the mechanism that allows
control is not. Nor is the mechanism that allows us as
observers to determine the conditions under which a
particular organism (person) will control one thing rather
than another. Those conditions would presumably come under
Bill’s first abstraction “generalization”. We do not have
a “model” or even an “abstract generalization” for it.

          I know I am getting ahead of our place in the book here,
but I think the point is worth making. Applying Occam’s
razor probably leads to the simple scalar control loop we
all know and love, but there are other possibilities.
Those possibilities multiply when we deal with multi-level
control of simultaneously varying perceptions. All that
control requires is that there be a way for the state of
the environment to be sensed and a way for the organism to
influence what it senses. What happens between is open to
a wide range of possibilities, of which the
straight-through connection by way of a comparison with
the reference value is probably the simplest, and
therefore the one to be chosen in the absence of evidence
to the contrary. When we get to other aspects of the
Nature of living systems, we need more. Physics places
limits on what can be controlled, but hardly limits what
is actually controlled from within what can be controlled.
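The simplest possibility just described can be sketched in a few lines of code. This is my own minimal illustration of a scalar control loop of the kind Martin mentions, not anything taken from B:CP; the reference, gain, and step-disturbance values are arbitrary assumptions chosen for the demonstration.

```python
# A minimal sketch of the "simple scalar control loop": perception is
# compared with a fixed reference, and the error drives an output that
# influences the sensed environment. All numbers here are illustrative.

def run_control_loop(reference=10.0, gain=0.5, steps=200):
    """Simulate a scalar negative-feedback control loop."""
    output = 0.0          # the organism's output
    history = []          # record of the perception over time
    for step in range(steps):
        disturbance = 5.0 if step >= 100 else 0.0  # an external push
        environment = output + disturbance          # feedback path
        perception = environment                    # identity input function
        error = reference - perception              # comparison with reference
        output += gain * error                      # integrating output function
        history.append(perception)
    return history

history = run_control_loop()
```

With an integrating output function the perception settles at the reference, is knocked away by the disturbance at step 100, and returns; that re-convergence, rather than any particular output trajectory, is what marks the loop as controlling.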

          ------------



          In sum, I don't see the clean distinction between
“generalization”, “abstract generalization”, and “model”
that Bill sees. In the end, they all come down to
“generalization”.

          Martin

[Martin Taylor 2013.07.21.14.47]

David,

If I had appreciated that "The purpose of the course is to help us
all understand what Bill Powers said", I would not have posted my
initial comment, which was based on a 20-year unresolved
disagreement with what Bill said in this chapter (as I said in the
prologue to my posting). I had thought that the purpose of the
course was to help us all to understand PCT better, by means of
critical analysis of what Bill said.

I'm not really interested in pure literary analysis, so I will
refrain from further comment during the course unless I think other
commenters are missing the point of what Bill wrote.

I agree with your comment about the major contribution of the
chapter. That was not the area in which I disagreed with Bill.

Martin

[Rick Marken (2013.07.21.1400)]

[Martin Taylor 2013.07.21.14.47]

David,



If I had appreciated that "The purpose of the course is to help us
all understand what Bill Powers said", I would not have posted my
initial comment, which was based on a 20-year unresolved
disagreement with what Bill said in this chapter

It’s OK Martin. The purpose of the course is, indeed, to understand what Bill said, but we’re not treating what he said as gospel. If you (or anyone) disagree with something said in B:CP, then we would like to hear what you disagree about and why. Hopefully, this is the way we will make progress towards a better understanding of human nature (and the nature of all living things, for that matter).

Best

Rick


[David Goldstein (2013.07.21.13:50)]

[Martin Taylor (2013.07.21.10:45)]

[David Goldstein (2013.07.20.07:23)]

[Martin Taylor 2013.07.19.11.11]

Martin,

      Thanks for your answer. The purpose of the course is to
help us all understand what Bill Powers said. Perhaps the
major contribution of the chapter is the emphasis he places on
“internal” modeling approaches, which theorize about the
actual mechanisms inside the living organism.

David

From: Martin Taylor mmt-csg@MMTAYLOR.NET
To: CSGNET@LISTSERV.ILLINOIS.EDU
Sent: Sunday, July 21, 2013 10:45 AM
Subject: Re: B:CP Course Study Guide, CH 2 Models & Generalizations

David,

              It's not so much that I see no difference, but that I
see no clear boundaries between these classes
(Extrapolation/Abstraction/Modelling). I didn’t use
Bill’s headline words to label them, but words in his
text.

              Start with the assumption that there is a "real world"
out there, and that our perceptions relate to it in
some way. In this “real world”, any state or event
occurs only once, and it is by Abstraction that we
even segregate those elements that we include in the
state or event from those we do not. To say that an
event recurs is further abstraction, saying, in
essence, that secular time is irrelevant. To say that
an event could recur is Extrapolation, or, to use
the word I prefer, generalization. It is equally
Abstraction and Extrapolation to say that if some part
of an event structure were to be modified in some way
the event would turn out differently. Abstraction and
either “generalization” or “Extrapolation” are
inextricably mixed. You can’t extrapolate without
abstracting what it is over which you are
extrapolating, and you can’t abstract without
generalizing – saying that for your purposes certain
kinds of things are the same and other parts of the
Universe don’t matter.

              I'm not trying to suggest that there are no
differences among the investigative techniques that
can be used to investigate the world. What I am
arguing is that I would divide the many different
methods quite differently than Bill does. I think an
important division is into those in which the
environment is manipulated and the effects of those
manipulations on observations are recorded (e.g. lab
experiments), as opposed to those that simply observe
the environment (e.g. astronomy). That distinction is
orthogonal to Bill’s categories.

              Another orthogonal dimension is the distinction
between those things studied that respond passively to
influences, such as rocks and water pools, and those
that actively influence their environment when they
are themselves subjected to external influences –
meaning living things. You push a rock and either it
moves or it doesn’t. You push a dog and you may get
bitten, you may get a playful rollover, you may get a
growl, or something else may happen.

              The third orthogonal dimension of investigation could
be characterized as statistical analysis of
observation of the world versus statistical analysis
of comparison between observation and “models” of the
world. All of the statistical techniques you mention
could be applied in either case. Models come in two
flavours, as Bill says: those that simply produce
results that mimic what is observed (“external”), and
those that attempt to produce those results by the
same mechanisms as are used by the thing observed
(“internal”). This dimension thus has at least three
possibilities – there are grades between the limit
classes of models just described, depending on how the
internal mechanisms are treated.

              With just these three dimensions we have at least
twelve categories, each of which incorporates Bill’s
Abstraction and Extrapolation, which I do not see as
separable, each requiring the other if it is to be
meaningful.

              Considering your assignment of different statistical
techniques to Bill’s three categories, I don’t think
any of them are in principle particularly favoured in
any of the twelve of my categories or in any of Bill’s
three. Whether they are in practice differentially
used is a matter for observational analysis.

              In response to your question I wrote pages more on
this, going into examples and detail, without so far
even getting into models of living things. It might
have been sufficiently detailed to obscure the
message, so I cut it all. I hope the above is
enough to answer your question.

              Martin




                On 2013/07/20 7:35 AM, D GOLDSTEIN wrote:

[David Goldstein (2013.07.20.07:23)]

[Martin Taylor 2013.07.19.11.11]

Martin,

                    The three different strategies for
generalization in the chapter are:
Extrapolation, Abstraction, and Model Building,
not the ones that you list. The first seems to
involve regression methods of statistical
analysis. The second suggests factor analysis,
clustering or multidimensional scaling. The last
one, model building, because it is talking about
mechanism, probably involves experimental
approaches and analyses like ANOVA. As Rick
points out in his articles, these are all
examples of the general linear model, which does
not include feedback effects where input changes
output, and output changes input.
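The point about feedback can be illustrated numerically. The sketch below is my own toy demonstration, not anything taken from Rick's articles: when a scalar control loop cancels a drifting disturbance, the output tracks the disturbance almost perfectly, while the correlation between the sensed "input" and the "response" tells you almost nothing.

```python
import random

def corr(x, y):
    """Pearson correlation, computed directly from the samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

random.seed(0)
gain, reference = 0.5, 0.0
disturbance, output = 0.0, 0.0
ds, outs, percepts = [], [], []
for _ in range(2000):
    disturbance += random.gauss(0.0, 0.1)  # slowly drifting disturbance
    perception = output + disturbance       # sensed state of the environment
    error = reference - perception          # comparison with the reference
    output += gain * error                  # integrating output function
    ds.append(disturbance)
    outs.append(output)
    percepts.append(perception)

print(corr(outs, ds))        # strongly negative: output mirrors the disturbance
print(corr(percepts, outs))  # weak: the "stimulus" barely relates to the "response"
```

A general-linear-model analysis of perception versus output here would find almost no relationship, even though the organism is controlling very well; the strong relationship is between output and the (often unobserved) disturbance.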

                    I guess I don't really understand why you
don’t see these different strategies as
different, at least in terms of data analyses
suggested by them.

David

From: Martin Taylor mmt-csg@MMTAYLOR.NET
To: CSGNET@LISTSERV.ILLINOIS.EDU
Sent: Friday, July 19, 2013 11:57 AM
Subject: Re: B:CP Course Study Guide, CH 2 Models & Generalizations

[Martin Taylor 2013.07.19.11.11]

I have a long-standing problem with Chapter 2. Bill and I argued
about it with no resolution some 20 years ago. The problem is
that, to use Bill’s terminology, his abstract generalizations are
not mine, because his view of what constitutes “science” differs
from mine. What follows is necessarily unfair, as Bill is not
available to correct my interpretation of his viewpoint. However,
thinking of this on-line seminar as a stage in the advancement of
science, I have resolved my conflict in favour of putting forth
our contrasting views as best I can. I hope I will not do Bill an
injustice, as I believe he would not accept much of what I have
to say.

To frame the problem, both of us see “science” as a means to an
end, the end being the better understanding of Nature (the
unknowable “real world”). However, Bill sees that understanding
as the ability to construct simulations that behave the way the
simulated portion of Nature seems to behave. Yet in his terms, to
construct those simulations (models) actually provides only an
“external” view. The models assume the existence of internal
components. Maybe those components have characteristics that can
be attested by other means of investigation – Bill says he tries
to make them conform to what is known from neurophysiological
investigation. If they do, well and good, but the problem is only
regressed to the next level: what is it that allows us to say how
these attested components work; how are they simulated to match
the way the “real world” entities seem to behave? Under what
conditions do the simulations match the observed behaviour, and
under what conditions do they not?


[David Goldstein (2013.07.21.19:23)]

[Martin Taylor 2013.07.21.14.47]

Martin,

Thanks for understanding. Remember that our ultimate goal is to create a PCT course online which people unfamiliar with it can take, maybe for a very small fee. Any money from this activity will be channeled into a research fund for students doing PCT related work.

David
