I am working on the development of a Python library for implementing PCT control hierarchies.
Python is a very popular programming language that is easy to pick up, and you can write something useful quickly. It also has an enormous range of sophisticated functionality, thanks to the huge number of additional libraries that can be installed.
If people are interested in using it I will get my skates on and get it ready for the conference in October and, perhaps, do a workshop. Let me know.
I’m relatively interested. I would like to know how to use it for interactive demos/experiments, and whether or how to attach the output to a data analysis program like Mathematica or MATLAB (neither of which I know how to use ;-)
RY: Do you mean demos that would be online, so others can run them, in a browser?
RM: Yes, that would be best, if it were possible. But I would also like it if people could run them and download the data for themselves and apparently that is difficult to implement (due to security concerns) from a server.
RY: Python has many data analysis packages. Are there particular functions you are interested in? I will check on those.
RM: Some multivariate analysis packages would be nice. Multiple regression, of course. Maybe some discriminant analysis and multidimensional scaling, such as factor analysis. And easy-to-use graphics would be nice too: graphics that support good visual data description, for example.
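RY: For what it's worth, here is a small illustration of multiple regression in Python using NumPy's least-squares routine. The data here are invented purely for demonstration; for fuller statistical output and graphics, real packages such as statsmodels and Matplotlib would be the usual choices.

```python
import numpy as np

# Invented example data: y depends on two predictors x1 and x2.
rng = np.random.default_rng(0)
x1 = rng.uniform(0, 10, 50)
x2 = rng.uniform(0, 10, 50)
y = 2.0 * x1 - 0.5 * x2 + 3.0 + rng.normal(0, 0.1, 50)

# Design matrix with a column of ones for the intercept.
X = np.column_stack([x1, x2, np.ones_like(x1)])

# Ordinary least-squares fit: coef holds [slope1, slope2, intercept].
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
```

With this synthetic data the recovered coefficients come out close to the generating values (about 2.0, -0.5, and 3.0).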
I looked over your analysis of two methods for optimizing control-system parameters: the ecoli method and gradient descent. It was not clear to me how you implemented ecoli reorganization; moreover, your implementation of it seemed to work poorly, especially compared to gradient descent. Could you describe the ecoli algorithm for me, as used in this test?
Basically, at each iteration the two regression parameters are changed by dW and db. If the loss decreases, dW and db are kept the same. If the loss does not decrease, two new random values of dW and db are chosen.
I would expect ecoli to work poorly compared to gradient descent, since it will sometimes head in the wrong direction. Even a change that does reduce the loss may do so very slowly; for example, it could be at 89 degrees to the optimum direction.
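A minimal Python sketch of the random-perturbation method described above (the data, step sizes, and function name are invented for illustration):

```python
import random

def ecoli_fit(x, y, steps=20000, scale=0.01, seed=0):
    """Random-perturbation ('ecoli') search for y ~ W*x + b.

    Keep the current step (dW, db) while the loss keeps falling;
    'tumble' to a new random step when it rises.
    """
    rng = random.Random(seed)
    W, b = 0.0, 0.0
    dW = rng.uniform(-scale, scale)
    db = rng.uniform(-scale, scale)
    loss = sum((W * xi + b - yi) ** 2 for xi, yi in zip(x, y))
    for _ in range(steps):
        W_try, b_try = W + dW, b + db
        new_loss = sum((W_try * xi + b_try - yi) ** 2
                       for xi, yi in zip(x, y))
        if new_loss < loss:
            W, b, loss = W_try, b_try, new_loss   # keep the same direction
        else:
            dW = rng.uniform(-scale, scale)       # tumble: new random deltas
            db = rng.uniform(-scale, scale)
    return W, b

# Invented data generated from y = 2x + 1.
x = [0, 1, 2, 3, 4]
y = [1, 3, 5, 7, 9]
W, b = ecoli_fit(x, y)
```

Because the random step can point almost anywhere, progress per iteration is much slower than following the gradient, which is the behavior noted above.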
Thanks! If I understand that correctly, then the ecoli method you describe above is not the method Bill Powers called by that name, which he used to tune the parameters of a control-system model such as that implemented in the “TrackAnalyze” demo that came with LCS III.
Bill’s method seems closer to gradient descent. Here is the code from the Analysis unit of the program:
OldRMS := ErrFit;
Param := pMin;
Range := (pMax - pMin);
BestParam := Param;
DeltaP := Range/2.0;
MinErr := 1e6;
Count := 0;
repeat
  if OldRMS < ErrFit then DeltaP := -DeltaP/5.0;
  Param := Param + DeltaP;
  OldRMS := ErrFit;
  { the model is run here with the new Param, updating ErrFit;
    the call itself is elided in this excerpt }
  if ErrFit < MinErr then
  begin
    MinErr := ErrFit;
    BestParam := Param;
  end;
  Inc(Count);
until (Abs(DeltaP) <= 0.001) or (Count > 20);
Param := BestParam;
A parameter value is initialized to half-way between its minimum and maximum allowed values. The model is then run, OldRMS is initialized to the RMS error from that run, and DeltaP is initialized to Range/2. On each pass through the loop the model is run again and the new RMS error is compared to its previous value (OldRMS). If the new error is larger, the sign of DeltaP is reversed and its size is divided by 5. DeltaP is then added to the parameter’s current value. The process continues until the absolute value of DeltaP is <= 0.001 or there have been 20 iterations of the loop.
So, this is a hill-descending process: it moves in the direction of decreasing least-squares error until the error starts to increase, then reverses direction while cutting the step to 1/5th its previous size, slowing the rate of change. The process oscillates around the low point of the least-squares error at ever smaller step sizes until the error reaches a local minimum or the number of attempts exceeds 20. (It has the defect that it may settle into a local minimum, depending on the starting point of the search.)
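To make the logic concrete, here is a rough Python equivalent of that one-parameter search. The error function, bounds, and function name are invented for illustration; `err(p)` stands in for running the model at parameter value p and returning its RMS error:

```python
def powers_search(err, p_min, p_max, tol=0.001, max_iter=20):
    """Sketch of the TrackAnalyze-style hill-descending search.

    Reverses and shrinks the step to 1/5 whenever the error rises.
    """
    param = p_min
    delta = (p_max - p_min) / 2.0
    rms = err(param)                 # initial model run
    old_rms = rms
    best_param, min_err = param, 1e6
    count = 0
    while True:
        if old_rms < rms:            # last step made things worse:
            delta = -delta / 5.0     # reverse and shrink the step
        param += delta
        old_rms = rms
        rms = err(param)             # run the model at the new value
        if rms < min_err:
            min_err, best_param = rms, param
        count += 1
        if abs(delta) <= tol or count > max_iter:
            break
    return best_param

# Invented quadratic "error surface" with its minimum at p = 3.
best = powers_search(lambda p: (p - 3.0) ** 2, 0.0, 10.0)
```

The first pass lands the parameter half-way between the bounds, and the step then oscillates around the minimum at ever smaller sizes, mirroring the description above.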
I think Bill calls the method from TrackAnalyze “a version of Newton’s method of steep descent”, though I can’t find where he said that. The E. coli method necessarily involves random tumbling when the error is increasing, just as the bacterium tumbles when environmental conditions seem to be getting worse.
Yes. I checked Bill’s description of E. coli reorganization in LCS III Chapter 7, and you are correct! The E. coli method of reorganization is used in the "ThreeSys" demo. The output weights of three control systems are initialized at random, along with a randomly chosen set of reorganization weights that can vary between -1 and +1. The control-system model is run for a few iterations and the errors are used to compute the least-squares error. The reorganization weights are multiplied by a factor proportional to the least-squares error; consequently, as the error decreases, the size of the change in output associated with a given reorganization weight decreases. If the least-squares error increases, a new set of reorganization weights is selected at random. Half the time, on average, this random selection will produce a new reorganization weight having the same sign as the previous one, although differing in magnitude.
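As I read that description, a rough Python sketch of the ThreeSys-style procedure might look like this (the loss function, rate constant, targets, and function name are all invented for illustration; they are not Bill's code):

```python
import random

def ecoli_reorganize(loss, w, rate=0.05, steps=4000, seed=1):
    """Sketch of E. coli reorganization as described for ThreeSys.

    w is a list of output weights; loss(w) stands in for the
    least-squares error of a short model run.  Reorganization
    weights in [-1, 1] give the direction of change; the step size
    scales with the current error, and a 'tumble' picks a new random
    set of reorganization weights when the error rises.
    """
    rng = random.Random(seed)
    direction = [rng.uniform(-1.0, 1.0) for _ in w]
    err = loss(w)
    for _ in range(steps):
        # Change each weight in proportion to the current error.
        trial = [wi + rate * err * di for wi, di in zip(w, direction)]
        new_err = loss(trial)
        if new_err > err:
            # Tumble: choose new reorganization weights at random.
            direction = [rng.uniform(-1.0, 1.0) for _ in w]
        else:
            w, err = trial, new_err
    return w

# Invented example: drive three weights toward targets (1, -2, 0.5).
targets = [1.0, -2.0, 0.5]
loss = lambda w: sum((wi - ti) ** 2 for wi, ti in zip(w, targets))
w = ecoli_reorganize(loss, [0.0, 0.0, 0.0])
```

Note how the step size shrinks automatically as the error falls, which is the feature distinguishing this method from the fixed reverse-and-divide-by-5 rule in TrackAnalyze.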
This method is consistent with E. coli except for the dependence of the change in output weight on the size of the squared error. It differs from the method used in TrackAnalyze, where each “tumble” reverses the sign of the parameter step and divides it by 5, rather than selecting a new weight at random and scaling its effect on the output by the size of the least-squares error.
I hadn’t noticed that the reorganization algorithm differed between TrackAnalyze and ThreeSys; thanks for pointing that out, I learned something!