Contact with A(lien) I(ntelligence)


Large Language Models develop unforeseen capabilities that are discovered months after the fact.

LLMs can be trained on data from any domain. They create collectively sourced products in images, sound (speech recognition, speech synthesis, music), robotics, politics, finance, a theory of mind, MRI images, and indefinitely more domains. Our interactions with them have resulting or foreseeable effects on human relationships, collective control, and the maintenance of social institutions.

An example: Given a 3-second sample of your loved one’s voice, one such AI model can speak any text to you, unmistakably in that voice. This has already been used to fake a phone call from a child to a parent.

https://www.washingtonpost.com/technology/2023/03/05/ai-voice-scam/

A quote from a video cited below:

"This is the year that all content-based verification breaks, and none of our institutions are prepared to stand up to it."

These models can also be used to hack computer systems.

When Biden declared his run for a second term, the Republican Party released a scary ad with deepfake video of a Chinese invasion of Taiwan followed by martial law in San Francisco, entirely AI-generated.

Is this the last year a US presidential election is won by a human? Will it be only a matter of which faction controls the largest computing capacity?

Five corporations have the multibillion-dollar computational resources. They are competing to have the most users of their AI products, in parallel to the clickbait ‘race to the brainstem’ of social media, where the commoditized human attribute is attention. Their AI products may affect social institutions and individual mental health in still more extreme ways.

Tristan Harris and Aza Raskin of the Center for Humane Technology addressed the problems of social media in the 2020 Netflix documentary ‘The Social Dilemma’. Here is their presentation on these LLM AIs in March 2023:

Because none of these corporations alone, nor any two or three of them, can stop their arms race, they all have to rein it in and harness it together, by agreement. Harris and Raskin call for a collective control process:

  1. Convene stakeholders for a strategic agreement (0:50 in the video).
  2. Selectively slow down public deployment (without slowing down the undoubted positive developments and applications).
  3. Presume a public deployment is dangerous until proven otherwise.
  4. AI developers must be responsible for unintended effects.

OK, friends. What is the place of PCT in understanding these phenomena, and on that basis how might IAPCT participate in our collectively solving these problems?


An excellent and very important post, Bruce. I hope others will comment on it as well.

This is a brilliant presentation: informative, disturbing, and yet, somehow, optimistic.

I think it’s much better to say that they are calling for a “collective MOL session”. They are certainly not calling for a “collective control process” where stability, in the form of a stable, virtually controlled variable (in this case, the virtually controlled variable being the capabilities of AI), emerges from conflict between the “stakeholders”.

I take the basic message of their presentation to be that the stakeholders are locked (or, perhaps more accurately, “stuck”) in a conflict that is leading to an uncontrolled, unpredictable and exponential increase in the capabilities of AI. Their proposal is to get these stakeholders together to “rise above this conflict” in order to slow down public deployment of AI until there is agreement about how to do it responsibly (in a controlled manner).

A few months ago, when LLMs were first being deployed to the public, I thought the place of PCT in AI was to understand what kind of systems were producing these AI phenomena: causal (open loop) or control (closed loop) systems. I think that determining this is a worthwhile, PCT-based endeavor. But after listening to this talk I think the aspect of PCT that is most relevant to solving the potentially catastrophic consequences of irresponsibly (and, dare I say, unintelligently) deploying AI systems would be understanding how to use MOL to solve interpersonal (as opposed to intrapersonal) conflict between the stakeholders involved in deploying these systems.
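
To make that causal-versus-control distinction concrete, here is a minimal sketch in Python. The environment equation, gain, and numbers are my own illustration (not from the talk or from any particular PCT model): the open-loop system emits a fixed output and its input drifts with the disturbance, while the closed-loop system varies its output to keep its perceived input near the reference despite the same disturbance.

```python
# Minimal sketch contrasting a causal (open-loop) system with a
# control (closed-loop) system acting on the same disturbed environment.
# All names and values are illustrative assumptions, not from the post.

def run_open_loop(steps, output=1.0, disturbance=0.5):
    """Causal system: output is fixed in advance, so the input quantity
    simply reflects whatever the disturbance does."""
    history = []
    for _ in range(steps):
        qi = output - disturbance   # environment combines output and disturbance
        history.append(qi)
    return history

def run_closed_loop(steps, reference=1.0, gain=10.0, slowing=0.1, disturbance=0.5):
    """Control system: output varies so as to keep the perceived input
    quantity near the reference, despite the disturbance."""
    qi, output, history = 0.0, 0.0, []
    for _ in range(steps):
        error = reference - qi           # compare perception to reference
        output += gain * slowing * error # integrate error into output
        qi = output - disturbance        # same environment equation as above
        history.append(qi)
    return history

if __name__ == "__main__":
    print("open loop  :", run_open_loop(5))                     # stuck at 0.5
    print("closed loop:", [round(v, 3) for v in run_closed_loop(20)][-5:])  # near 1.0
```

Under these assumptions the open-loop system never reaches the reference value (it has none), while the closed-loop system converges on it and holds it against the disturbance, which is one behavioral signature PCT would look for in deciding which kind of system is in front of us.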

I think Harris and Raskin’s general suggestions regarding how to deal with the AI deployment problem are quite reasonable.

I think PCT, in the guise of MOL, could help move things from 1) to 2), 3) and 4).