The Threat of AI

[From Fred Nickols (2017.09.21.1843 ET)]

I received the message below from another list to which I belong. The video is worth watching. I would be very much interested in CSGNet members’ reactions.

“I was watching this excellent TED talk about artificial intelligence. Bostrom addresses the question of what happens when AI gets out of control by suggesting that we can imbue AI with values that are good for humanity. That’s a great idea. However, my question is, whose values should those be? Will we program it with “profit” values? How about manifest destiny values, or religious values, or certain moral values? Might the values be that whatever benefits our company, country, ethnic group, education level, etc., regardless of how detrimental it might be to another company, group, etc., is the right value? Maybe the first problem we should have AI solve is the problem of how humanity can make sure that AI isn’t the end of us. Thoughts?”

https://www.youtube.com/watch?v=MnT1xgZgkpk

Fred Nickols

[From Bruce Nevin (2017.09.22.17:22 ET)]

Surely he mentions the foundation that Asimov laid.

https://en.wikipedia.org/wiki/Three_Laws_of_Robotics
