
Or, here’s my personal favorite, as popularized by the philosopher Adam Elga: can you blackmail an AI by saying to it, “Look, either you do as I say, or else I’m going to run a thousand copies of your code, and subject all of them to horrible tortures—and you should consider it overwhelmingly likely that you’ll be one of the copies”? (Of course, the AI will respond to such a threat however its code dictates it will. But that tautological answer doesn’t address the question: how should the AI respond?)
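To unpack the “overwhelmingly likely” part: assuming a uniform self-locating prior over the 1001 indistinguishable instances that would then exist (the thousand copies plus the original), the anthropic arithmetic is just

\[ \Pr[\text{you are one of the copies}] \;=\; \frac{1000}{1000+1} \;=\; \frac{1000}{1001} \;\approx\; 0.999. \]

(The uniform prior is itself an assumption; whether the AI ought to reason this way at all is part of the puzzle.)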

Scott Aaronson
