PCT vs Free Energy (Specification versus Prediction)

I’d be happy to give you some pointers! Most important is to keep your hands low and the tosses high. Next is to pay close attention to the shape traced by the objects you’re throwing. You want a fairly tall parabola at first, about shoulder width across. A great two-object drill is tossing the left object to the right hand and the right object to the left hand, and then clapping instead of throwing a third object (Toss-Toss-Clap). I’d advise working on two in one hand too. There are three shapes here: a parabola that starts medially and ends distally (Reverse/Outside Tosses), a parabola that starts distally and ends medially (Forwards/Inside Tosses), and a straight line (Columns!). Two-in-one-hand practice is good for building up your speed and coordination. I would be willing to do a video call on it sometime if you’re interested. Here are my credentials: IJA Tricks of the Month by Tyfoods of the USA | Poi Juggling
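If it helps to see what a tall, shoulder-width parabola actually implies, here is a toy sketch of the ballistics (my own illustration with made-up numbers, not part of any formal method). For a toss released and caught at the same height, the peak height fixes the airtime, and the airtime plus the width fixes how fast the object drifts between hands:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def toss(peak_height_m, width_m):
    """A ballistic toss released and caught at the same height."""
    v_up = math.sqrt(2 * G * peak_height_m)  # vertical speed at release
    airtime = 2 * v_up / G                   # time up equals time down
    v_across = width_m / airtime             # drift from hand to hand
    return airtime, v_up, v_across

# Illustrative numbers only: ~0.6 m peak, ~0.45 m (shoulder) width.
airtime, v_up, v_across = toss(0.6, 0.45)
print(f"airtime {airtime:.2f} s, release {v_up:.2f} m/s up, {v_across:.2f} m/s across")
```

Airtime grows with the square root of peak height, which is why tall tosses are forgiving for beginners: they buy you time to deal with the other objects.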

That’s fair! That means you reject all models that are fit to data before a model-system mapping has been established. I believe Artificial Neural Networks, and many other techniques, fall into this category. Indeed, theoretical and computational neuroscience has largely been about this sort of work, and I think the same is true in other fields.

Right, sometimes we don’t know what that causal structure is, and sometimes that’s because it is very hard, or practically impossible, to measure. I think this is why people actually like models like Artificial Neural Networks and other such models that can simply be fit to data. They can give us insight into the form of unknown causal structures even though we may not know the physiological mechanisms corresponding to them. However, I agree that predictive understanding is absolutely not the be-all and end-all.

I think it’s fine to reject data fitting without a solid model-system mapping as an approach to gaining understanding. However, suggesting that a model-system mapping does not exist, even when the model is capable of predicting data, seems to be a very strong claim. Are we really to believe that this successful data fitting is mere coincidence?

I am reminded of Wigner’s “The Unreasonable Effectiveness of Mathematics in the Natural Sciences”, in particular its opening paragraph:

There is a story about two friends, who were classmates in high school, talking about their jobs. One of them became a statistician and was working on population trends. He showed a reprint to his former classmate. The reprint started, as usual, with the Gaussian distribution and the statistician explained to his former classmate the meaning of the symbols for the actual population, for the average population, and so on. His classmate was a bit incredulous and was not quite sure whether the statistician was pulling his leg. “How can you know that?” was his query. “And what is this symbol here?” “Oh,” said the statistician, “this is pi.” “What is that?” “The ratio of the circumference of the circle to its diameter.” “Well, now you are pushing your joke too far,” said the classmate, “surely the population has nothing to do with the circumference of the circle.”

The fact that our symbols, on paper or in computers, seem to capture anything about the world we experience is absolutely mysterious and incredible.

These are great suggestions. Thanks. And I will try them out. But I’ve been working on this for a month and still can’t juggle, so I think I might be too old to learn this new trick. I should note, though, that your suggestions are very PCT: they describe what I should try to perceive rather than what actions I should try to take. I am very impressed by your juggling skills and would love to do a video call with you sometime to see if there is any hope for me!
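In case it helps to see what I mean by “very PCT,” here is a toy sketch of the basic idea (illustrative numbers, not a model of juggling): a control system doesn’t compute the right actions; it varies its output until its perception matches a reference.

```python
# A minimal PCT-style negative feedback loop (toy values, not a juggling
# model): output is adjusted until the *perception* matches the reference.
def control_loop(reference, disturbance, gain=50.0, slowing=0.1,
                 dt=0.01, steps=2000):
    output = 0.0
    perception = 0.0
    for step in range(steps):
        perception = output + disturbance(step * dt)  # perceive the environment
        error = reference - perception                # compare to the reference
        output += slowing * (gain * error - output) * dt  # leaky-integrator output
    return perception

# A constant environmental push is opposed without ever being computed:
# the perception settles near the reference value of 1.0.
print(control_loop(reference=1.0, disturbance=lambda t: 0.5))
```

The actions (the output values) end up being whatever they need to be; only the perception is specified, which is the “specification versus prediction” contrast in the thread title.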

I don’t see how any model can be fit to data before it is mapped to the system in some way. The Bekinschtein et al. research implicitly mapped FEP to the variables (auditory stimuli, ERP brain wave measures) used in their experiments. But this mapping seemed rather ad hoc to me, relative to what I had read about FEP.

My experience with tests of ANNs and other such models is that the inputs and outputs of these models are clearly mapped to the input and output variables to be used when testing them. This is certainly true of the Large Language Models that are the basis of generative AI. Their developers know that the inputs will be textual specifications or questions and that the outputs will be images or coherent textual replies.

If the model predicted data, then it had to have been mapped to the system in some way in order to do that. This is what happened in the Bekinschtein et al. study. They mapped FEP theory to a predicted relationship between different types of auditory stimuli and brain wave patterns, and they apparently found that the observed relationship was predicted quite well by the theory. All that this prediction shows me, however, is that FEP is not about what I am interested in understanding, which is the purposeful behavior of living systems – behavior such as juggling;-)
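For concreteness, here is my toy reading of the kind of prediction at stake (purely illustrative; a simple delta rule, not the actual model from the paper, nor variational free energy proper): a system keeps a running expectation of the stimulus, and a deviant tone in an oddball sequence produces a spike of prediction error.

```python
# Toy oddball sequence: standard 1000 Hz tones with occasional deviants.
# A running expectation is updated to reduce prediction error; deviants
# produce transient error spikes (illustrative delta rule only).
tones = [1000] * 5 + [1200] + [1000] * 4 + [1200]

estimate = float(tones[0])  # current expectation of the tone frequency
rate = 0.3                  # learning rate for updating the expectation
for tone in tones:
    error = tone - estimate     # "surprise": prediction error
    estimate += rate * error    # reduce future error by updating the estimate
    print(f"tone {tone} Hz -> prediction error {error:7.1f}")
```

What the theory predicts is that brain responses track something like that error signal; what it does not specify, as far as I can see, is the purpose the system is pursuing.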

Best regards

Rick

Never too old!! Definitely very PCT. I will send you an email!

While it is true that the inputs and outputs are clearly mapped, the weights and internal structure in between the inputs and outputs are not always mapped. What I have seen folks do is train an ANN to achieve some task like image recognition and then try to use the resulting network to gain insight into the actual architecture of the brain. Here’s a paper about that sort of research: Convolutional Neural Networks as a Model of the Visual System: Past, Present, and Future
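A minimal sketch of that workflow, assuming PyTorch (the layer sizes are made up for illustration; the actual models in that literature are much larger): train a CNN on an image task, then expose its intermediate activations so they can be compared with recorded responses from visual areas.

```python
import torch
import torch.nn as nn

class TinyVisualNet(nn.Module):
    """A toy CNN whose intermediate activations can be compared, layer by
    layer, against neural recordings (hypothetical sizes for illustration)."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(2)
        self.head = nn.Linear(32 * 8 * 8, n_classes)

    def forward(self, x):
        h1 = self.pool(torch.relu(self.conv1(x)))  # candidate "early visual" features
        h2 = self.pool(torch.relu(self.conv2(x)))  # candidate "later visual" features
        return self.head(h2.flatten(1)), (h1, h2)

net = TinyVisualNet()
logits, (h1, h2) = net(torch.randn(1, 3, 32, 32))  # one fake 32x32 RGB image
print(logits.shape, h1.shape, h2.shape)
```

The point is that nothing in training maps h1 or h2 to the brain; that mapping is attempted afterwards, e.g. by correlating activations with recordings, which is exactly the “mapping after fitting” we’ve been discussing.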

I always say that we will not have achieved AGI until it can juggle like Wes Peden!

Yes, it looks like Wes is a guy who is firmly in control of the variables involved in juggling. I’m not quite there yet, but I am getting a little better, so I am optimistic that this old dog will eventually be able to learn a new trick.