PCT vs Free Energy (Specification versus Prediction)

Hi Rick, thanks for this elaboration and crystal-clear explanation of PCT. Just a minor point. I used the term ‘specify’ as a synonym for ‘define’, and not as a synonym for a reference value of what a perceptual variable ‘should’ be; if I had wanted to do that I would have just used the term ‘should be’ or ‘reference value’. Personally, I would not regard ‘specify’ as a synonym for ‘should be’. So, I agree with everything you have said, but I would prefer you to state it as it is, rather than to make your own interpretation of what you think I meant. I do appreciate that you have a much clearer way of explaining PCT, which is why I always appreciate your responses!

I’m sorry. I’m so used to thinking of the reference signal as a specification, in the engineering requirements sense, that I didn’t see the possibility that you were using “specify” to mean “define”. Indeed, I called this thread "PCT vs Free Energy (Specification versus Prediction)" because I think it is important to figure out a way to describe the functional difference between the FEP Bayesian probability signal (a prediction) and the PCT reference signal (a specification). According to FEP the brain is a prediction machine; according to PCT the brain is a specification machine. I want to be able to make that distinction clear, hopefully with a demo of some sort.

Thanks.

No worries. Could PCT be a specification and control machine? I think this sounds better than a ‘definition and specification’ machine, or a ‘definition and control’ machine…

I think you’re right about that. PCT establishes as fact that some variable is controlled by an entity, using the TCV (Test for the Controlled Variable), and then models from there. FEP models are based on “theoretical assumptions” (“just math!”). I feel hesitant about calling it “just math” because I don’t understand the FEP derivations well enough - “just math” can be unreasonably effective. In any case, I would argue that the motivations behind the FEP models, in this case, might be a little irrelevant. That’s because I imagine that one could, for example, run a TCV test and then make an Active Inference POMDP model that casts the identified controlled variable (and its value) as a “preference distribution”, effectors/actions as “policies”, errors as magnitudes of “variational free energy” (perception) and “expected free energy” (action), and reorganization as “minimizing both variational and expected free energy”.
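To make the TCV half of that comparison concrete, here is a minimal sketch in code (entirely my own toy example; the variable names, gain, and noise level are illustrative and not part of any published PCT or FEP model). The logic of the Test is simply that a disturbance applied to a genuinely controlled variable barely shows up in that variable:

```python
import random

def corr(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sxy / (sx * sy)

def run(controlled):
    """Simulate q = o + d; if 'controlled', output o acts to keep q at r = 0."""
    random.seed(1)                    # same disturbance sequence either way
    r, o, d = 0.0, 0.0, 0.0
    k, dt = 50.0, 0.01                # loop gain and time step (illustrative)
    d_hist, q_hist = [], []
    for _ in range(5000):
        d += random.gauss(0, 0.05)    # slowly drifting disturbance
        q = o + d                     # environment: output and disturbance add
        if controlled:
            o += k * (r - q) * dt     # output integrates the error r - q
        d_hist.append(d)
        q_hist.append(q)
    return corr(d_hist, q_hist)

print(run(controlled=False))   # uncontrolled: q tracks d, correlation is 1.0
print(run(controlled=True))    # controlled: correlation is far smaller
```

Per the TCV, a near-zero disturbance-variable correlation is evidence that the variable is under control; an Active Inference model built afterwards would then have to reproduce that same low correlation.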

If we’re going to make an Active Inference model with which to compare PCT models then we cannot get hung up on the language here. Under the FEP, speaking in terms of “inference” is just a way of talking about how we can model the behavior of organisms. We do not have to reify it and truly believe that the actual physical system we are modeling is engaging in the high-level cognitive act (associated with higher levels in the hierarchy, yeah?) that we typically think of as inference. Another way of saying it is that the FEP says that if we believe that modeling organisms with dynamical systems is a good way to model (we do, right?), then we can also model organisms using Bayesian mechanics. They’ll be different languages, but will systematically vary with each other in a particular way. So, we can go from the language of solving differential equations to the language of minimizing variational free energy, and still be describing the same behavior. At least, that’s what the FEP suggests.

But again, we don’t have to worry about whether we believe the FEP for these purposes. We just have to worry about what concepts in Active Inference models map onto concepts in a PCT model so that we can compare and contrast properly. I’m not AT ALL well versed in the mathematics of PCT. Does it have a particular set of mathematical tools that it uses to build models, like “Bayesian Mechanics”? If so, where can I read about it?

Hopefully, you can see here that the FEP doesn’t say “the brain is a prediction machine”; rather, it says something more like, “the brain is a dynamical system that can be cast as a prediction machine”.

Edit: Perhaps you could lay out the mathematical model for the outfielder problem and I/others could try to convert that into Active Inference so that we can compare?

The idea of writing something about the brain being a “specification machine” came to me after seeing a recent article in Psychology Today titled “The Brain as a Prediction Machine”. I wanted to say to the author “Yes, the brain predicts what will happen but, more fundamentally, it specifies what should happen, including whether or not prediction should happen.” If (or when) I write a reply to this article I would call it “The Brain as a Specification Machine” because that is what PCT says it is.

Specification is the central feature of the controlling done by a brain that is a component of a closed loop system. It is also the central feature of that system’s autonomy because the value of the specification is determined completely by the system itself. A predictive brain is not really autonomous because the value of the prediction ultimately depends on what is currently being perceived and what has been perceived in the past.

The behavior of a system with a predicting brain is ultimately driven by the perceptions that are the basis of those predictions. The behavior of a system with a specifying brain ultimately consists of driving perceptions to the states set by those specifications.

I agree fully, but I still think we need a term to describe what an input function does, and how it works, to define a variable aspect of the environment.

Hi Ty

It’s nice to have someone here who actually understands FEP.

I like your suggestion of comparing PCT to FEP. The only way to do that in a way that makes sense to me is to compare the models in terms of their ability to account for the same data. You suggest doing this in terms of the “outfielder problem”. I don’t have any ball catching data handy but I do have quite a bit of object interception data (which is essentially the same problem, but where the object to be intercepted (caught) doesn’t have a trajectory as simple as the parabolic trajectory of a flyball). But I suggest we start with something simpler: Let’s see how well FEP does at accounting for the behavior in a simple tracking task, like this one.

The mathematics of the PCT model of the tracking task are straightforward, and I’ll take care of the modeling using PCT. All you have to do is produce a working FEP model of the same task. I’m not much of a mathematician myself but I would be happy to try to help you develop the FEP model.
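For anyone following along, a compensatory tracking model in the PCT style can be sketched in a few lines (my own illustrative parameters and disturbance, not Rick’s published model): the perception is the cursor-target distance, the reference for it is zero, and the output integrates the error:

```python
import math

k, dt = 8.0, 0.02              # output gain and time step (illustrative)
h = 0.0                        # handle position (system output)
errs = []
for i in range(3000):
    t = i * dt
    d = math.sin(0.7 * t)      # disturbance acting on the cursor
    target = math.sin(0.3 * t) # moving target
    c = h + d                  # environment: cursor = handle + disturbance
    p = c - target             # perception: cursor-target distance
    r = 0.0                    # reference: zero distance
    h += k * (r - p) * dt      # output: integrate the error
    errs.append(abs(p))

# Mean absolute tracking error after the loop settles stays small even
# though the disturbance alone would swing the cursor by +/-1.
print(sum(errs[1000:]) / len(errs[1000:]))
```

An FEP model of the same task would have to reproduce this disturbance-resisting behavior from the same inputs.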

Best, Rick

Sounds like an exciting prospect!

FYI, attached is the complicated FEP solution generated for a Watts Governor…

isal_a_00288.pdf (889.9 KB)

Thanks. I think it will take me some time to process this but I’ll try to have something for you ASAP. In the meantime maybe Ty could take a crack at it.

The simplest unit of information is a difference that makes a difference. The making of the second difference (the transmission of information) requires energy in any non-imaginary system. Information is talked about as though separate from physical systems, or even as underlying physics (where it might not be noticed that ‘underlying’ is a form of ‘separate’). It is separate only because it is a measure, a quantified observation, that is, a perception controlled by the observer who is quantifying information.

This energy is expended by the observed system, and necessarily so. It reflects properties of a physical system, not of information conceived as distinct from the physical system in which changes in amount of information are observed. Why is this not obvious?


Hi Rick and Warren, how about this data?

The Baltieri, Buckley & Bruineberg paper (henceforth BB&B) was tough going, especially the FEP part, but I think I got the gist of their two models of the Watt flyball (centrifugal) governor. I’ll call the first model the “standard” model and the second the FEP model. I’ll try to compare both models to a PCT model although BB&B compared only the FEP model to PCT.

First a little about the flyball governor, shown here:
[Image: diagram of a centrifugal (flyball) governor]

This device was (and still is, I suppose) used on steam engines to control the speed of the engine, w, keeping it constant despite variations in load, G. A shaft (the vertical rod in the diagram) connects the governor to the engine (not shown) and rotates at a speed proportional to w. The two flyballs at the end of arms rotate along with rotation of the shaft, causing a centrifugal force that lifts the arms in proportion to w. Thus, the arms attached to the flyballs form an angle, psi, with respect to vertical that increases with increases in w.

Because of the way the arms are attached to the small levers above them, an increase in psi pulls the left end of the horizontal bar (top of the diagram) down, thus lifting the lever that closes the throttle valve. The size of the throttle valve opening, tv, affects the speed of the engine, w; tv is positively related to engine speed and, therefore, negatively related to psi.

Standard Analysis of the Flyball Governor

BB&B’s standard analysis takes advantage of the relationships described above to give a mathematical description of the physics of the flyball governor. Here’s a simplified PCT version of their mathematical analysis. The analysis requires two sets of equations, one that defines the behavior of the System (the governor) and the other that defines the system’s Environment.
In this analysis, italicized variables correspond to those used in BB&B; non-italicized variables are added for clarity of correspondence to PCT.

System

  1. psi = p(w)
  2. tv = f(r - psi)

Equation 1 says that flyball arm angle, psi, is a function of engine speed, w. I call the function p() because in the BB&B analysis, psi corresponds to the PCT perceptual variable. Equation 2 says that the throttle valve opening, tv, is a function of the difference between psi and the reference, r, for psi, which is essentially the reference for the speed of the engine. When engine speed decreases, psi goes below the reference, r - psi goes positive and the throttle valve opening, tv, increases, which increases engine speed; when engine speed increases, psi goes above the reference, r - psi goes negative and the throttle valve opening decreases, which decreases engine speed.

Environment

  3. w = g(tv, G)

Equation 3 is the function relating the effect of system output, tv, and external disturbances, G (which BB&B define as “torque induced by engine load”) on engine speed.

The controlled variable in this system is engine speed, w. A dynamic analysis of the system defined by equations 1-3 gives what BB&B call the equilibrium state of the system and what PCT calls the reference state of w. In both cases this state of w is proportional to the system’s reference specification for the state of the perception of w. In PCT, this reference specification is r so the reference state of w is proportional to r: w ~ k*r.
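The claim that w settles at a value proportional to r can be checked with a toy linearised simulation of equations 1-3 (the gains and the integrating valve are my own illustrative choices, not BB&B’s actual dynamics):

```python
# Toy linearised version of equations 1-3 (illustrative gains, not BB&B's):
#   psi = k_p*w,  tv integrates k_o*(r - psi),  w = k_e*tv - k_d*G.
def settle(r, G, steps=2000):
    k_p, k_o, k_e, k_d, dt = 1.0, 5.0, 1.0, 1.0, 0.01
    w, tv = 0.0, 0.0
    for _ in range(steps):
        psi = k_p * w                 # eq. 1: arm angle tracks engine speed
        tv += k_o * (r - psi) * dt    # eq. 2: valve integrates the error
        w = k_e * tv - k_d * G        # eq. 3: speed from valve opening and load
    return w

# Doubling the reference doubles the settled speed (w ~ k*r), and the
# settled speed barely moves when the load G changes.
print(settle(r=1.0, G=0.0), settle(r=2.0, G=0.0), settle(r=1.0, G=0.5))
```

With the integrating valve, a constant load G is fully opposed, so the settled speed is determined by the reference alone: the reference state of w.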

I knew that the same must be true in the BB&B “standard analysis” but it took me a while to figure out that it is g, gravitational acceleration, that functions as the reference specification in that analysis. The proportional relationship between w and g – the equivalent of the proportional relationship between w and r in the PCT analysis – can be found by solving equation 4 of BB&B for w (you’ll find that w is proportional to the square root of g).
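For readers who want to see where the square root comes from, the textbook conical-pendulum balance for the flyball arms gives it directly (a standard physics sketch; BB&B’s actual equation 4 may differ in detail):

```latex
% Flyball arm of length L, ball mass m, angle \psi from vertical,
% shaft angular speed \omega. For the tension T along the arm:
%   horizontal:  T \sin\psi = m\,\omega^2 L \sin\psi  \implies  T = m\,\omega^2 L
%   vertical:    T \cos\psi = m g
% Combining the two:
\cos\psi = \frac{g}{\omega^2 L}
\qquad\Longrightarrow\qquad
\omega = \sqrt{\frac{g}{L\cos\psi}} \;\propto\; \sqrt{g}.
```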

My discovery that g in the BB&B analysis is equivalent to r in the PCT analysis made it clear that the difference between the two analyses is simply one of perspective. The BB&B analysis is done from the engineering perspective – the perspective of the user of the flyball governor – while the PCT analysis is done (or should be done) from the system perspective – the perspective of the governor itself. This difference led to my making a mistake in my PCT analysis and catching one in the BB&B analysis. The mistake is taking psi – the angle of the flyball arm relative to vertical – as the controlled perceptual variable.

In a control system the perceptual and reference variables must be of the same type. You can’t compare apples and oranges, and the basis of control is the continuous comparison of perceptual and reference values. Being a control engineer, Powers understood this when he applied control theory to the behavior of living systems. In PCT, both perceptual and reference variables are assumed to be neural currents. Not only are they the same type of variable, but we understand the physiology that allows them to be compared to each other by subtraction (because there are excitatory and inhibitory connections between neurons).

If g is the reference specification in a flyball governor then the perception to which it is compared must be of the same type – a force. And, sure enough, it is. The actual perception that is controlled by the flyball governor is the centrifugal force, Fc, produced by the spinning flyballs. Like psi, the angle of the arms attached to the flyballs, Fc is proportional to w, the speed of the motor. When the motor is running at a speed such that Fc = g, then Fc - g = 0 and there is no need to change the throttle opening, tv; the motor is running at the reference speed. When Fc is greater than g the motor is going too fast and the flyball arms will pull the horizontal lever down, causing the throttle valve to close. When Fc is less than g the motor is going too slow and the flyball arms will pull the horizontal lever up, causing the throttle valve to open. The angle of the flyball arms, psi, can now be seen as analogous to the error signal in PCT since it is connected to the lever that varies the throttle opening (system output).

Mistaking psi for the actual controlled perception is not a particularly big deal when working from the engineering perspective unless one is interested in designing a governor with a variable reference. Since the force of gravity is a constant, producing a variable reference requires the ability to vary the restoring force acting against Fc. This can be done by inserting a spring of variable stiffness at the appropriate point on the shaft of the governor.

FEP Analysis of the Flyball Governor

The FEP model is explicitly done from the system’s perspective. Indeed, citing Powers (1973), they say that they take the “…rather unusual reading of the engine-governor coupled system: an agent trying to stabilize its observations, i.e., the perceived angle of the flyball arms”. This perspective seems to have allowed them to avoid making the mistake of having the controlled perception and the reference for the state of that perception be of different types. In this case it seems that both are the same type: both are angles. The perception is the observed angle, psi, and the reference is the predicted angle, x. The “error” in this system is apparently the free energy, F, which is proportional to the conditional probability of psi given x (equation 7 in BB&B). The system continuously takes action, a, based on F, presumably as the means of keeping F close to 1.0.

The FEP model of the flyball governor includes functions that the governor cannot and does not carry out. Most obviously, it can’t and doesn’t compute the conditional probability of psi given x. The authors of the model acknowledge this by saying that their FEP model is an “as if” model; the governor behaves as if it is acting to maintain a high conditional probability of psi given x. So one wonders why BB&B wanted to show that you could model the behavior of a flyball governor using the FEP model. It can’t be because they wanted to show what you could learn about the governor from the model. The FEP model explains nothing about how the governor actually works and, indeed, is very misleading.

I think the same applies to the application of FEP to the controlling done by living systems. How, for example, does the nervous system come up with the “true value” (x in the example) of the variable being controlled? How does it represent that variable in order to compare it to the observed value of the variable (psi in the example)? How does it compute and then store the probabilities necessary to determine the conditional probability of the observation given the true (or predicted?) state of the world? For example, in order to compute P(psi| x) per Bayes’ theorem the system would have to know P(x| psi), P(psi) and P(x) for every x that is to be controlled. The idea that organisms do this in order to control the variables they need to control seems highly implausible and, per PCT, completely unnecessary.
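A toy discrete example (entirely hypothetical states and numbers, just to make the bookkeeping visible) shows what a system would need to store and compute to do this kind of Bayesian updating: a full prior over hidden states and a full likelihood table for every possible observation:

```python
# Toy illustration (my own, not from BB&B) of the bookkeeping Bayesian
# inference requires: a discrete prior over hidden states x and a full
# likelihood table P(obs | x) must both be known in advance.
states = ["slow", "ok", "fast"]                           # hypothetical x
prior = {"slow": 0.2, "ok": 0.6, "fast": 0.2}             # P(x)
likelihood = {                                            # P(obs | x)
    "slow": {"low": 0.7, "mid": 0.2, "high": 0.1},
    "ok":   {"low": 0.2, "mid": 0.6, "high": 0.2},
    "fast": {"low": 0.1, "mid": 0.2, "high": 0.7},
}

def posterior(obs):
    """P(x | obs) via Bayes' theorem; P(obs) is the normalising sum."""
    unnorm = {x: likelihood[x][obs] * prior[x] for x in states}
    z = sum(unnorm.values())            # P(obs)
    return {x: v / z for x, v in unnorm.items()}

post = posterior("high")
print(post)   # "fast" becomes the most probable state
```

Even in this three-state toy, nine likelihood entries and three priors must be known in advance; the point in the paragraph above is that nothing in a flyball governor is doing this bookkeeping.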

I think the BB&B paper shows very clearly that the FEP model is just an “as if” model of the controlling done by living systems, and a very misleading one at that. Even though FEP is popular and trendy, I think the best course for those who want to actually understand the behavior of living systems is to ignore the “as if” theory that is FEP and stick with the “like this” theory, which is PCT.


Hi Rick, you’ve done so much work on this, and it’s so helpful to see it laid out like this. It is fascinating that, in the process of trying to improve on a classic control theory explanation by shifting up to the system perspective, the FEP approach adds a set of assumptions concerning probabilities that are evidently unnecessary. Are you happy with your summary and critique as it stands? Otherwise I can think of other options, including potentially modelling the governor using the PCT equations and showing it ‘work’ dynamically; showing that this does indeed behave ‘as if’ probabilities are being calculated, in the full knowledge that they are not. This would seem to be publishable IMO. Alternatively, we could share your critique as it stands with the authors, or as part of a larger article on PCT vs FEP, or as a possible adversarial collaboration? Or just leave it here. What would be your preference, Rick?

Great. It was helpful for me too.

I don’t think FEP was aimed at improving classic control theory. In the paper, BB&B were just trying to show that a theory that was developed to explain the behavior of living systems (FEP) could also account for the behavior of a physical system that exhibits behavior like that of a living system (the flyball governor). The BB&B paper shows (unintentionally) that a model of behavior (FEP) that describes functions that couldn’t possibly be carried out by the system being modeled can still appear (mathematically) to explain the behavior of that system. The paper shows that it is just as ridiculous to apply FEP to the behavior of a flyball governor as it is to apply it to the behavior of living systems. For example, there is no way for the flyball governor to determine the prior probability of the physical cause of an observation, and the same is true for a living system.

I could probably do a more detailed job. But I think it’s good enough.

It would just lead to a lot of fighting, confusion and wasted time, as in the power law debate. People who are into FEP (like people who are into the power law) are not going to be convinced of PCT by just showing them that their theory is wrong. I’m going to try – or, like Bart Simpson, I’m going to try to try – to focus on doing research that shows what’s right about PCT – what it can do – and try to ignore all the trendy crap that’s currently popular (like FEP) that seems to be similar to PCT (they never are, except superficially and misleadingly).

I’d just leave it there for now. I’m saving it for a possible paper on how prediction fits into PCT. Such a paper would include actual data and models that explain the data.

Good plan!

Happy New Year all! I hope you’ve had a great holiday.

@rsmarken - That was a fantastic breakdown of the BB&B paper and comparison between these three different types of models. Great job and thank you for taking the time to do that. I would concur that we should stick with a “like this” theory, but I’d like to know more about the actual mathematics entailed in constructing such “like this” models.

I suppose that the nice thing about FEP, or “as if” theories, is that they allow you to very quickly get predictive power over the thing that you are studying. For example, if I have a dataset generated by some organism, then I could fit a neural network to that data. However, the problem here is that it’s not clear how the nodes in the neural network correspond to the actual mechanisms the organism uses to generate that data. Like you’re saying, I think FEP models very much suffer the same fate. However, FEP models are built explicitly with certain concepts in mind. In other words, unlike neural networks, which have a sort of “assumptionless” structure that then fits some data, FEP models won’t fit data unless the assumptions are “correct” in the sense that the causal structure of those assumptions is capable of “solving the problem”. The problem is then that these assumptions are written in terms of beliefs, making them “as if” rather than “like this”. However, it should be mentioned that there are associated neural process theories that apparently allow one to make predictions about actual neurophysiological responses. See section 5 of: A step-by-step tutorial on active inference and its application to empirical data

It’s also worth noting that, in the sense that it’s ridiculous to use FEP to model a Watt Governor, it would be just as ridiculous to use PCT to model one. You’d want to use “control theory”, I suppose. That being said, could it not make sense to use FEP to describe certain appropriate behaviors? Perhaps it’s interesting to think about where FEP and PCT make sense to apply and how those domains overlap.

Are there any particular mathematical tools that one uses in PCT to construct their models, or is it just, “Do it like control theory, but make your control system diagrams/mathematics PCT appropriate?” - A good example is your conversion of the BB&B “Standard” Model into a PCT appropriately labeled version.

I want to say something like: “as if” models have less explanatory power than “like this” models, but it doesn’t seem clear to me how to get to “like this” models without “as if” models. Furthermore, given that we must label the parts in our model like you did in the PCT-converted “Standard” Model - what lets us know that it’s really “like this” and not just “as if”? Obviously, we appeal to the physics in PCT, but at the end of the day we know that even our physics is incomplete and so is only “as if” as well. I feel like what we ultimately want is a principled way of constantly refining our “as if” models.

This brings me back to “Constructor Theory”, Popperian Epistemology, or “principles first” sort of approaches. And, to creating multi-scale models of “things” that obey those principles such that we can zoom into their details and see if their details match the details of the “thing” we’re studying in reality at every level of scale. Only then can we be certain that it’s “like this”.

Thanks. And the same to you!

And thank you again! What a nice way to start my New Year!

Yes, big problem!

My main problem with the assumptions of FEP is that many of them are obviously wrong because it is impossible for them to be true, such as the assumption that the behaving system can know the a priori probability of external event x or the conditional probability that an observation was caused by x, p(obs|x).

Thanks for this reference. I’m looking forward to seeing what kind of data they are fitting the model to. But I have a feeling the data is the typical input/output data collected in conventional psychology – data that ignores the possible existence of controlled variables. Therefore, no matter how plausible the assumptions made by the model about the mental/behavioral mechanisms that produced the data, the fit to the model, even if nearly perfect, will be spurious because any observed relationship between input and output variables will be a behavioral illusion. But I will read the paper and see if that is the case.

I disagree. PCT applies quite sensibly to the behavior of the flyball governor. Every PCT function and variable exists as an actual function or variable in the governor. The controlled variable is engine speed, w. The perceptual function is the physical law that converts w into the centrifugal force, Fc, exerted by the spinning flyballs. The perceptual variable is Fc. The reference for the state of that variable is the force of gravity, g. The error signal is the angle of the flyball arms, which is proportional to g - Fc. And the output function is the connection of the flyball arms to the lever that varies the size of the throttle valve opening. The output of that function is variation in the size of the opening of the throttle valve, o, which determines the rate at which steam is delivered to the engine. And, completing the loop through the environment, engine speed, w, is a function of variations in system output, o, as well as disturbances, which are mainly variations in the load on the engine, G; so, w = o + G.
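That mapping can be put into a small simulation (the constants are made up for illustration; the environment equation w = o + G is taken literally from the paragraph above):

```python
# Sketch of the mapping above with made-up constants: perception Fc is
# proportional to engine speed w, the reference is g, the error (arm angle)
# is proportional to g - Fc, and output o integrates that error.
def simulate(steps=4000, step_at=2000):
    k_f, k_o, g, dt = 1.0, 4.0, 9.8, 0.01
    o, history = 0.0, []
    for t in range(steps):
        G = 0.0 if t < step_at else 2.0   # load disturbance switched on mid-run
        w = o + G                          # environment: w = o + G, as above
        Fc = k_f * w                       # perceptual function: centrifugal force
        err = g - Fc                       # comparator: arm angle ~ g - Fc
        o += k_o * err * dt                # output: valve opening integrates error
        history.append(w)
    return history

h = simulate()
# Speed settles near the g-determined reference both before and after the
# load steps in; the disturbance is opposed, not obeyed.
print(h[1999], h[-1])
```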

Powers was a control engineer, so he knew how to build a control system and he also knew how to reverse engineer an existing one, such as a living organism. He knew how each of the functions and variables in an artefactual control system, like the flyball governor, maps to the functions and variables in the nervous system and environment of a living organism. So PCT can be mapped properly to any control system, whether it is made of metal or cells.

The math of PCT can be very simple, involving just algebra, if you ignore the dynamics of control (which requires some calculus) and deal with it as a steady state process. The main thing to understand about PCT is what the functions and variables in the model correspond to in an organism. I think it’s that mapping that gives people the most problems with understanding PCT.
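As an example of that steady-state algebra, take a generic three-equation loop (a textbook PCT sketch with illustrative gains, not a specific published model): p = k_f*q, q = o + d, o = k_o*(r - p). Solving the three equations simultaneously gives p as a weighted blend of r and d, and with high loop gain p sits almost exactly at r:

```python
# Steady-state algebra of a proportional control loop (textbook sketch):
#   p = k_f*q,  q = o + d,  o = k_o*(r - p)
# Substituting and solving for p gives:
#   p = (k_f*k_o*r + k_f*d) / (1 + k_f*k_o)
def steady_state_p(r, d, k_f=1.0, k_o=1000.0):
    return (k_f * k_o * r + k_f * d) / (1 + k_f * k_o)

# With high loop gain, p sits near r almost regardless of the disturbance d.
print(steady_state_p(r=5.0, d=0.0))    # close to 5
print(steady_state_p(r=5.0, d=3.0))    # still close to 5
```

This is the algebraic version of the claim that a control system keeps its perception at the reference value regardless of disturbances; the dynamics (how fast it gets there, and whether it is stable on the way) are what require the calculus.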

Just ignore the “as if” models. That’s easy for me to say but I know it’s very hard to do, especially for people who have a lot invested in one of these models. So I wouldn’t advise doing it until one is well established in their career.

Because, when control theory is correctly mapped to the functions and variables in the system (of whatever kind), we can manipulate those variables and see that they behave as described by the model. I am looking forward to reading about how the FEP model of human behavior is tested but I suspect that, because of the bizarre way FEP is mapped to behavior, there will be some variables that are left out of any test that is done (mainly the possible controlled variable) so that the results will be as spurious as any of the results collected using standard behavioral science methods (see my book The Study of Living Control Systems). But we shall see.

I’m afraid I don’t understand this. But I am in favor of you doing what works best for you, in terms of your work and your career.

Sorry for the delay, Ty. I have now looked over the “step by step” article by Friston et al. It describes an “auditory mismatch” experiment that they use as a basis for making FEP predictions. It’s a pretty standard psychophysical or cognitive task, but with the DV being an EEG pattern (called an ERP) rather than an overt response. Therefore, this is a completely open-loop task, since the ERP has no effect on any perception (controlled variable) that might be affected by the sound stimulus. So the results of this study bear little relationship to PCT, which is a theory of control. Studies of perception are certainly relevant to understanding control in living systems. But this kind of research doesn’t help me understand why FEP is considered compatible with PCT.

It’s true that FEP models cannot always be reified. One cannot always point to the physiological structures that correspond to these “beliefs”, nor can one always point to the mechanisms for processing and updating those beliefs. My main point was only that it is also true that FEP models can still have the causal structure that allows for capturing the behavior of the “thing” they are modeling, as in the case of the Watt Governor. I think being able to replicate behavior at least means that your assumptions about how to put the POMDP model, or the necessary variational free energy equations, together are correct. The actual mathematical model, independent of the labels that people try to reify, is what I refer to as the “causal structure” of a model.

I look forward to seeing your analysis!

Okay, I agree, but I thought the whole point was: should it be? Should we talk about the behavior of the Watt Governor as having a perception? Similarly, as with FEP, should we talk about the Watt Governor as having “beliefs”? - It’s all only helpful if we understand this mapping between our model and the “thing” we’re studying.

This mapping problem is what I’d generally call the problem of modeling. I could state it as follows:
Given that our model fits the data, how do the pieces of our model correspond to the pieces of the organism we’re studying?

PCT makes this easy because it uses TCV to ensure that variables in the model have a measurable correspondence to begin with.

I suppose that with FEP models, one could imagine measuring a variable, converting that into a belief representation, processing it via Bayesian inference, and then converting it back into the proper variable to measure. Perhaps that’s how they use FEP models to predict neurophysiological data. I have not looked into it in detail.

In any case, independent of the mapping between model and organism, in order to fit data your model has to have the right causal structure, and that’s generally hard to do. Neural networks make this easy, as they automate the generation of a causal structure that fits the data, but then we don’t have this mapping, which is arguably where true understanding lies.

Correctly mapped is the key phrase. I want to conjecture that, given that one’s model fits the data, then it should always be possible to identify variables, or collections of variables, in the model that correspond to some measurable aspect of your system. No? - I’m not saying that this would be easy, the problem of finding such a mapping for any fitted model is probably “NP” - But it feels like such a mapping “should” always exist. It feels related to the “unreasonable effectiveness of mathematics” and the realization that mathematical structures (causal structures) used in one context can be effective/useful in another, seemingly unrelated, context. Here’s a snippet from that paper:

There is a story about two friends, who were classmates in high school, talking about their jobs.
One of them became a statistician and was working on population trends. He showed a reprint to his
former classmate. The reprint started, as usual, with the Gaussian distribution and the statistician
explained to his former classmate the meaning of the symbols for the actual population, for the
average population, and so on. His classmate was a bit incredulous and was not quite sure whether
the statistician was pulling his leg. “How can you know that?” was his query. “And what is this
symbol here?” “Oh,” said the statistician, “this is pi.” “What is that?” “The ratio of the
circumference of the circle to its diameter.” “Well, now you are pushing your joke too far,” said the
classmate, “surely the population has nothing to do with the circumference of the circle.”

Thank you! I imagine the confusing part is about multi-scale models of “things” that obey conjectured principles. As you’ve pointed out, PCT models are “like this” models - a “like this” model is a model whose variables have a mapping to some measurable part(s) of the system being modeled. An “as if” model, by contrast, is a model whose model-system mapping is not known, or only known incompletely.

However, if a model captures only one level of scale, like describing how an organism intercepts an object, but not also how, say, the organs of that organism contribute to making object interception work, then the model is only “like this” at one level of scale. So, what we would like are multi-scale models that capture as much detail about every scale (molecules, organelles, cells, tissues, organs, organisms, societies) as possible. Unfortunately, making a multi-scale model only through observation/probing/empirical measurements can be very difficult or infeasible. Thus, one would like to have a way of generating multi-scale models so that we can reduce our need to do physical work as much as possible.

To me, generating multi-scale models necessitates the application of principles that are scale-free and can describe “why” certain behavior happens independently of the scale. “Behavior as the control of perception” is one such principle, serving as a guide on how to generate multi-scale models of organisms essentially by saying “If your model is an open-loop feedforward model, then it cannot properly capture the behavior of organisms”. This is akin to the conservation of energy suggesting that “If your model creates or destroys energy, then it cannot properly capture the nature of this universe”. Conjecturing the right set of constraints allows one to search only a subspace of the space of possible multi-scale models. Automating the search of these subspaces using “Eleatic games” is the goal of my research project.

As things move along I’ll have nicer demonstrations of this, and a paper too! Ideally I can get it out there this year.

All things in good time! Thank you for investigating that; it makes sense to me. I said a bit above about what I think about PCT/FEP compatibility, but I’ll try to say it here in perhaps a different way. Given that FEP and PCT models of a Watt governor are both able to capture its behavior, it stands to reason that the “causal structure” of both models is correct: they both contain negative feedback loops. However, because of the way we label a PCT model, each label corresponds to a measurable variable. This may not be the case in FEP models, but it is possible to get this sort of labeling scheme there too, as I’ve said here:
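To make the label-to-measurable-variable point concrete, here is a toy sketch (my own illustration; the dynamics are crude first-order approximations and the gains and constants are invented) of a Watt governor written so that every variable in the model is annotated with the measurable part of the device it corresponds to:

```python
# Hedged sketch: a PCT-style labeling of a Watt governor, where each model
# variable names a measurable part of the device. Not a faithful physical
# model; constants are made-up illustration values.

def run_governor(steps=6000, dt=0.01):
    speed = 0.0        # controlled variable: engine shaft speed
    valve = 0.0        # output quantity: steam valve opening
    ref_speed = 100.0  # reference: set mechanically by flyball weights/spring
    gain = 2.0         # output gain: the valve linkage
    load = 0.3         # environment: frictional load on the engine
    for _ in range(steps):
        perception = speed              # flyball angle tracks shaft speed (sensor)
        error = ref_speed - perception  # comparison done by the linkage geometry
        valve += gain * error * dt      # output function: integrate the error
        valve = max(0.0, valve)         # the valve can't open negatively
        speed += (valve - load * speed) * dt  # engine: steam in, load out
    return speed, valve
```

The point is that `perception`, `error`, `valve`, and `speed` are not just convenient internal names: each one picks out something you could put a gauge on. In an “as if” model, the internal variables carry no such commitment.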

I think you might be interested in this podcast, where Vicente Raja, who wrote the paper “The Markov Blanket Trick: On the scope of the free energy principle and active inference”, discusses this notion of “as if” and the general confusion about when reifying FEP models is appropriate.

Thank you for engaging with me! I appreciate the opportunity to discuss these things!

Hi Ty

I am not ignoring you. I’ve just ended up with a lot of balls in the air and I suck at juggling (which I am actually trying to learn to do, with almost no success so far).

I want models to be properly mapped to the observable functions and variables in the system under study. The aim of this is not reification. The aim is to have a valid model that is testable. PCT is both; FEP is neither.

That was not the point I meant to be making with my PCT vs FEP analysis of the Watt governor. I meant to show that the PCT model maps to the actual functions and variables in ANY control system; FEP doesn’t. So PCT is the model you want to use if you are trying to reverse engineer a living control system, which is what I want to do. FEP isn’t.

I think that’s exactly backwards. I think we should know how the pieces of a model (functions and variables) correspond to the observable or known functions and variables of the organism we’re studying before we test to see if the model fits the behavioral data.

PCT is based on existing knowledge about the causal structure of the nervous system and the world outside that system.

No. Because I think this is still backwards. You should have identified “variables, or collections of variables, in the model that correspond to some measurable aspect of your system” before you even tested the model.

I haven’t listened to it yet but I’ll try to get to it ASAP. But I’ve got to tell you that I am really not a fan of FEP and find the idea that it has anything to contribute to PCT (or vice versa) ridiculous at best and counter-productive at worst.