I have quite a bit of discussion on this topic in Chapter 40 of PPC. What follows is a small slice of the current draft, which may well change if this thread leads anywhere.
“”""""""""""""""""""""""""""""""""""""""""""
40.3 “Bots”, “botnets”, and Robots
Another kind of “robot” is the “bot” that roams the internet by following links on Web pages, seeking out particular kinds of sites on behalf of some human agency. Such bots have no physically tangible presence, though they do have a well-defined structure. They have no physically active parts that apply force to their environment, since their environment is the network of sites linked by the web pages they examine.
Web-following bots are autonomous in choosing which sites to investigate next among the links they have discovered. They may discover and report to their users private information of any kind that someone has committed to the internet: perhaps financial, perhaps medical, perhaps innocent, perhaps laying a person open to blackmail.
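For concreteness, here is a minimal sketch of such a link-following bot, using only the Python standard library. The keyword-matching “interest” rule is my own placeholder for whatever a bot’s human employer actually wants it to seek out; nothing in the draft specifies an implementation.

```python
# A minimal sketch of an autonomous link-following bot (standard library only).
# The keyword test is a stand-in for whatever the bot's employer seeks.
from html.parser import HTMLParser
from urllib.request import urlopen
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed_url, keyword, max_pages=20):
    frontier, visited, hits = [seed_url], set(), []
    while frontier and len(visited) < max_pages:
        url = frontier.pop(0)
        if url in visited:
            continue
        visited.add(url)
        try:
            page = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except OSError:
            continue  # dead link; the bot moves on autonomously
        if keyword.lower() in page.lower():
            hits.append(url)  # report back to the human agency
        # the bot chooses its own next steps from links it has discovered
        parser = LinkExtractor()
        parser.feed(page)
        frontier.extend(urljoin(url, link) for link in parser.links)
    return hits
```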
Bots that live only in the internet may be every bit as dangerous to humans and other organic life as those that overtly aim to cause physical damage, but they are not usually perceived to be dangerous, perhaps because they are familiar, or perhaps because their activities are seldom directly sensed in real time by human users of the internet.
Depending on how their discoveries are used by their employers, however, bots can be very dangerous tools. They may infiltrate ill-intentioned software (malware) into poorly protected computers, causing those computers to act otherwise than their owners would choose. Those computers might control important social infrastructure such as electricity or water distribution systems, transport networks, or air traffic control systems, not to mention the flow of finance on all scales from dollars to trillions, or the “social media” networks through which many humans gain much of their perceptual understanding of the changing world.
Could such bots be constructed as perceptual control systems, with their own control hierarchies and intrinsic variables? In particular, could forms of bots co-evolve into ecological networks of their own, through the same kind of evolutionary processes that have created all the organisms we see around us today? Could networks of bots communicate in ways that produce “hive minds”? The answer to all these questions is the same — Yes, that is possible.
We can ask another question along much the same lines. Suppose a bot hive-mind came to exist: could the hive-mind structure evolve to form one or more perceptual control structures, in which the individual bots played the roles of the components of the perceptual control hierarchy of a living control system (perceptual function, reference input function, comparator, and output function), or of the sensors and actuators that interface with the physical environment with which organic sensory systems interact?
Again, the answer must be that it is possible. Indeed, each individual bot might well have the computing power to act as one or more ECUs, each complete with perceptual, comparator, and output functions, together with a reference input function. These bots are, I think, becoming more and more reminiscent of organic neurons with their individual myriads of connections.
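To make the ECU idea concrete, here is a minimal sketch of one such unit with the four functions just named. The weighted-sum perceptual function and the leaky-integrator output are conventional simple choices in PCT simulations, not requirements of the draft’s argument.

```python
# A minimal sketch of one Elementary Control Unit (ECU).
class ECU:
    def __init__(self, input_weights, gain=5.0, leak=0.1):
        self.w = input_weights   # weights of the perceptual input function
        self.gain = gain
        self.leak = leak
        self.output = 0.0

    def perceive(self, inputs):
        # perceptual function: weighted sum of lower-level signals
        return sum(w, * x for w, x in zip(self.w, inputs)) if False else \
               sum(w * x for w, x in zip(self.w, inputs))

    def step(self, inputs, reference, dt=0.01):
        p = self.perceive(inputs)
        e = reference - p        # comparator: reference input vs. perception
        # output function: leaky integration of the error signal
        self.output += (self.gain * e - self.leak * self.output) * dt
        return self.output
```

A hive-mind hierarchy would then be a network of such units, with the output of higher ECUs supplying the reference inputs of lower ones.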
Suppose that hive-mind perceptual control hierarchies consisting of entire ecologies of networked bots came into existence. Might such hive-mind systems as a whole have intrinsic variables that are characteristically different from the perceptual control hierarchy and are interlinked into a variety of homeostatic loops? In living organisms, we have argued (Chapters 14 and 15), such homeostatic loop networks evolutionarily preceded perceptual control and guided the development of ever more complex and capable perceptual control hierarchies.
This would presumably not be true of our hypothetical bot-based perceptual control structures. If the botnet came to exist, it would not have developed through an analogue of a chemical soup that produced ever more complex molecules. The initial genesis of the bots would have come from a human design, which perhaps the bots had optimized in a form of design evolution. No “soup” would have constrained or supported the development of one structure or another, and the continued existence of any particular kind of bot would not have needed a loop of interactions with other kinds of bots. There is no obvious surface argument for the existence of an equivalent to the biochemical homeostatic loops of an organism.
When we look at their function apart from their effects on reorganization, however, we can find an argument for why a perceptual control botnet hierarchy might develop homeostatic loops analogous to those of organic intrinsic variables. Biochemical homeostatic loops are the maintenance crew for the organism, be it a bacterium, a tree, a fish, or a human. Any structure that is not isolated from the physical environment will deteriorate in what is sometimes called “entropic decay”, and this is as true of a bot or the connections among bots as it is of an organism.
The botnet will require a maintenance crew, presumably consisting of specialized bots programmed to control for clearing out failed bots, rebuilding new bots from templates and installing them where they are needed in the structure, ferrying and controlling energy around the system, adjusting the interconnection parameters among the bots that do the actual work, and so forth. All of these requirements are fulfilled in living control systems by the biochemicals involved in organic homeostatic loops that we have called “intrinsic variables”.
It is not unreasonable to think of the specialized “maintenance and repair” bots as intrinsic variables for a botnet perceptual control hierarchy. They would need to intercommunicate observations of the status of the perceptual control hierarchy of bots, if only to sense the state of its various components and interactions, so that the competent bots could repair the software glitches or communication-link problems that might well arise in a spatially distributed robot network under the rubble of a collapsed building. With widespread intercommunication requirements comes the possibility of homeostatic loops and internal clocks. Any such clocks might work thousands of times faster than our own, though some would presumably be related to diurnal or seasonal rhythms, because those rhythms affect the environment with which the perceptual control hierarchy of the robot network would interact.
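A minimal sketch of one such homeostatic maintenance loop follows, assuming hypothetical bot interfaces health() and repair() that the draft does not specify. The point is only the loop structure: sense an intrinsic variable, compare it with a reference, and act when the error exceeds tolerance.

```python
# A minimal sketch of a "maintenance bot" holding worker-bot health in a
# homeostatic loop. health() and repair() are hypothetical interfaces.
import time

def maintenance_loop(workers, healthy=1.0, tolerance=0.2, period=1.0):
    while True:
        for bot in workers:
            h = bot.health()                 # sense the intrinsic variable
            if abs(healthy - h) > tolerance: # error beyond tolerance
                bot.repair()                 # rebuild, re-link, re-route energy...
        time.sleep(period)                   # the loop's internal "clock"
```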
A botnet hierarchy, along with its intrinsic variables, is housed in some physical space. If this space is enclosed in an armoured shell of limited permeability, to the outside observer it would look like what is ordinarily thought of as a robot. An outside observer cannot see that the “robot” is powered by bots inside its visible external membrane or shell. Like any Black Box, the observer can detect what it does when probed, and a White Box might be constructed to emulate its functioning and some of its internal functional connections; but, as with any Black Box, the observer could not determine how it performs its functions from knowledge of how the White Box performs the same functions.
A software bot lives in some substrate we might as well call a computer. For the bot to perform any function requires a through-flow of energy, just as does the firing of an organic neuron. The substrate, the analogue of the physical structure of a brain, must be provided with a low-entropy energy source and a means of disposing of higher-entropy waste.
One of the most difficult problems evolution faced in expanding our more “intellectual” human skull beyond the size of our ancestors’ was not how to fit all the neurons into it, but how to dissipate the heat of their operation to somewhere else for eventual radiation to outer space. Failure to dissipate that heat from the skull would result in one of two things: a thermally imposed limit on the total amount of nerve firing, or a physical meltdown of the brain matter. Evolution solved the problem of both delivering energy throughout the brain and dissipating the heat of its operation by using the same circulating fluid medium, blood.
Software bots contained in an enclosure would be subject to the same constraints of energy acquisition and dissipation. A hive-mind botnet widely distributed over space would not. Either kind of botnet-based robot has an added energy requirement for its communication infrastructure, greater if it uses spatially wasteful radiated wireless than if communication is by fibre or wire. Would an encapsulated botnet require liquid cooling to remove its high-entropy waste energy, on the grounds that liquids can carry a higher energy density than gases of the same volume, or would it use thermo-electric means of dissipation? Either way, the means of heat dissipation would probably occupy a non-negligible fraction of the volume available within the robot’s “skull”, as they do in ours.
All the above is highly speculative, but it could hardly be otherwise, given that the question is about what could be possible or even plausible about the structure and capabilities of future robots, even far future ones. The speculation involves only technologies available today, except perhaps a slight extrapolation of current trends in electronic miniaturization. My speculations do not even depend on potential advances in quantum computing, which might permit far more “intelligence” in a bot and in a botnet than is anticipated in these speculations.
Without extrapolating current technology beyond matters of scale, I will assume the possibility that a future botnet-based robot could well have intrinsic variables with which a perceptual control hierarchy could interact in a way functionally similar to the way that, according to PCT as developed in this book, the perceptual control hierarchies in organisms interact with their biochemical intrinsic variables.
Organisms are subject to attack by microorganisms we call viruses, bacteria, and parasites, among other names. Would a botnet-based robot be vulnerable to similar attacks? Of course it would. We already have propagating “malware” we call viruses that attack through our relatively slow internet and damage the functioning of the software of the individual users “infected”. Why should we assume that a distributed hive-mind botnet robot would be immune?
The quick answer is that it would not be immune, and must have mechanisms akin to the organic microbiome and cellular immune processes to counter the problem of invading viruses (and software bacteria and parasites, which we have not discussed and will not discuss). We certainly cannot properly attempt to treat the whole science of immunology in a book about the range of application of Powers’s Perceptual Control Theory, even though the concept of opposing damage by invading micro-organisms is self-evidently within the purview of PCT.
We leave the question of botnet immune processes with the observation that, just as current anti-viral software is designed both to detect and to neutralize malware, so would both within-bot and botnet-wide perceptual control processes detect and neutralize external attacks on a hive-mind robot, in effect immunizing it.
As does any sufficiently complex organism, the botnet-based robot would need to learn perceptual functions that produce perceptions of the forms of previously encountered attacks. Perceptual control reduces the relative entropy of local environmental processes, making it easier for the controller to dissipate that entropy into less destructive parts of the environment. In itself this provides only an imperfect form of immunization, but every bit helps, and the same would be true of a botnet.
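One crude way such a learned “attack perception” might work is sketched below. The byte-histogram similarity is my own deliberately simple placeholder; a real botnet immune process would presumably learn far richer perceptual functions.

```python
# A minimal sketch of perceiving resemblance to previously encountered
# attacks. The byte-histogram overlap measure is a crude placeholder.
from collections import Counter

def signature(data: bytes) -> Counter:
    return Counter(data)

def resemblance(traffic: bytes, known_attacks: list) -> float:
    t = signature(traffic)
    total = max(sum(t.values()), 1)
    def overlap(known):
        return sum(min(t[k], known[k]) for k in t) / total
    # the perception: how strongly does current traffic resemble any past attack?
    return max((overlap(s) for s in known_attacks), default=0.0)
```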
40.4 Robots — Conscious and Emotional?
We move on to questions 2 and 3:
2. Could a robot ever be conscious, and more particularly conscious of itself as an independently acting entity?
3. Could an autonomous robot feel emotion and be empathetic with emotion in humans and other animals?
both of which hinge on the question of whether the robot reorganizes on the basis of the benefit to its intrinsic variables of controlling this or that perception in a particular manner. Central is the question of whether the robot actually has intrinsic variables at all and, if it does, how those variables interact with each other and with the perceptual control hierarchy.
Question 2 is about the nature and use of consciousness that we have been discussing through much of this Chapter. It is essentially about intellect and category perception, as in our discussion of crumpling. Question 3 asks about emotion, which, following Powers, we attribute to biochemical influences on and from the perceptual control hierarchy, in other words, on homeostatic loops that interact with the perceptual control hierarchy.
For neither of these questions is the answer cut and dried. Both must be approached from backgrounds that have a supportable basis in PCT. Speculative as the earlier discussion was, the following discussion is even more so. Please take it with quite a few grains of salt.
No existing robot of which I am aware learns by reorganizing an internal perceptual control hierarchy, and none has any obvious intrinsic variables. Accordingly, we must act like fortune-tellers and pretend to foresee a future in which some do. As we just proposed, however, a robot composed of or containing a botnet perceptual control hierarchy must almost necessarily incorporate maintenance and repair bots, and so would have intrinsic variables that function much as do the intrinsic variables of a living control system. We therefore start with the assumption that the robot in question will be a hive-mind botnet.
In Section 21.2 we discussed the use of consciousness as a tool related to Genetic Algorithms. Using the rapid testing available in conscious imagination, the Mechanic might be able to develop a novel perceptual function whose inputs consist of existing controllable perceptions. When it proves successfully controllable, the new perception contributes to the top-level perceptual input function, or itself becomes a new top-level perception to be controlled, as suggested by Figure 11.1.
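A minimal sketch of that “test in imagination” step might look like the following. Here simulate_control() stands in for an imagination-mode run of the hierarchy, and the random-weighting proposal and acceptance threshold are my own illustrative assumptions, not anything specified in the draft.

```python
# A minimal sketch of proposing a novel perceptual function over existing
# perceptions and adopting it only if it proves controllable in imagination.
# simulate_control(weights) -> residual error is a hypothetical stand-in.
import random

def propose_and_test(existing_perceptions, simulate_control, trials=100):
    best, best_error = None, float("inf")
    for _ in range(trials):
        # candidate perceptual function: a random weighting of existing perceptions
        weights = [random.uniform(-1, 1) for _ in existing_perceptions]
        error = simulate_control(weights)   # rapid imagined control attempt
        if error < best_error:
            best, best_error = weights, error
    # adopt the candidate only if imagined control was good enough
    return best if best_error < 0.05 else None
```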
We just skipped an obvious question: what makes this new pattern of perceptions — this situation in Perceptual Reality — worth learning to control? Why and when is that useful? Ultimately, it is because the change in the hierarchy is more likely to enhance than to disturb the stability of the dynamic feedback loops that link the intrinsic variables, the perceptual control hierarchy, and the external environment as it exists.
The intrinsic variables of a robot, if it has any, must similarly determine the effectiveness of its reorganization of its own control hierarchy. Simply having very good control of the perceptions it does control is not enough. Why should control of this or that perception be more useful to the robot than control of some other potentially controllable perception at any level of the hierarchy, unless the robot does have intrinsic variables akin to the hormones and so forth that are the intrinsic variables of the reorganizing organism? Would the robot have no representation of its success or failure intermediate between being allowed to continue its operation and being switched off and recycled?
Well, yes it would, if the foregoing speculation about botnet-based perceptual control hierarchies is at all valid. A robot with a botnet structure would indeed have the required intrinsic variables, consisting of sensors of actual or incipient failure in the perceptual control hierarchy, coupled with maintenance and repair bots to bring the system back to full health. A side-effect of this ability to maintain itself would necessarily be that both the sensor bots and the maintenance-and-repair bots have templates that they can decode and compare with the current state of the part of the hierarchy (and of the maintenance system) for which they are responsible. They should be able to repair themselves, as do the component processes in natural homeostatic loops.
In living control systems, the most basic of these templates are in the form of DNA formed into chromosomes, but they have this constraint because they live in three-dimensional space and must produce functional molecules such as proteins that conform spatially to whatever site they must engage. Templates for unencapsulated botnets have no such dimensional constraints, but like DNA molecules they must, in total, have plans sufficient to build a functioning version that will build itself to be very like the entity that hosts them — a descendant or “child” botnet.
Having the plans for producing a descendant renders the botnet robots capable, at least in principle, of replication, which need not be faultless. The replicate might differ slightly from the original templated plan, so the population of botnets could evolve in the same way as does a population of living control systems. Evolved botnet robots built with a perceptual control hierarchy would have just the same degree of autonomy as does a living control system. But could they be “conscious”, or even “self-conscious”?
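Imperfect replication is all that variation-plus-selection needs to get started, as the following sketch suggests. The numeric “template” and Gaussian mutation are illustrative assumptions; selection would come from which botnets survive long enough to replicate.

```python
# A minimal sketch of imperfect template replication: a child plan is a copy
# of the parent's with occasional small changes.
import random

def replicate(template, mutation_rate=0.01, scale=0.1):
    return [gene + random.gauss(0, scale) if random.random() < mutation_rate
            else gene
            for gene in template]
```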
Before we address this question directly, we should note a significant difference between encapsulated and unencapsulated botnets. A “child” encapsulated botnet is separate and distinct from its parent, whereas it is difficult even to conceive of a separate botnet being produced by an existing unencapsulated one, unless “spores” could carry the replication plans to a region of the underlying support network as yet unexposed to the existing botnet. An unencapsulated botnet is more analogous to a fungus network than to an example of any other main branch of the tree of life.
Seldom, if ever, has the possibility of fungal consciousness been mooted outside of science fiction, though the question has been raised with respect to many complex eukaryotic species, vertebrate and invertebrate, aquatic, avian, or land-based. Whether scientifically justifiable or not, the idea of consciousness seems to be applied only to members of species that move and act independently, not to the hive mind of a community of them. With that mindset, consciousness is attributed only to entities whose internal communication is much faster than their communication with the outside world, not to entities that communicate externally at much the same speed as they do internally.
As with the words “robot” and “autonomous”, we need to refine what we mean by “consciousness”. Since it is inherently impossible to know what other people experience when they use words that apply perfectly to our own experience, our qualia may differ from theirs. What we can surmise, with some hope of being correct, is that consciousness is used to solve perceptual control problems not yet solved by incorporation in the non-conscious perceptual control hierarchy built by reorganization to function in the real world environment.
“”"""""""""""""""""""""""""""""
Even this snippet is quite a bit too long for this venue, I think, but in Chapter 40 I develop the ideas hinted at here. If you have any interest in following up these ideas, either comment and criticise here, or ask me for the relevant Sections of the current draft in PDF form.