Well, it seems that some aspects of my "prediction" are in fact statements of already-underway research. "MDARS" is a sentry robot that the US is looking at giving "the ability to track and engage targets independently" (New Scientist, 23 Oct 2010, p23). An Israeli company is developing "Guardium", which has already been patrolling Israel's borders and can be programmed to return fire if shot at. The South Koreans have a currently static robot that can be programmed to demand a password from an intruder and, if the password is not given (or the robot fails to recognise an attempt to give it), to open fire. The military are very much interested in robots, and specifically in robots designed to harm people.
Well, yes, the military are very much interested in robots that harm enemies. But as more and more of these are made, robots will become a much more potent threat than humans, which means they will be trained to take down enemy robots and drones, not humans. Another potential application of robots that would harm humans would be to enforce curfews or act as a sort of thought police. But in any case, any party that deploys robots without kill switches is being reckless.
Other than physical impossibilities, name one. Multiple scientific studies, as well as informal anecdotes, show that "most people" will do pretty much anything if put in the "correct" situation. (The classic formal study of this kind is the Milgram experiment: http://en.wikipedia.org/wiki/Milgram_experiment. For anecdotes, see http://en.wikipedia.org/wiki/Pranknet for what "most people" can be induced to do by a simple phone call.)
Unorthodox behavior of some machines in contrived situations is supposed to be worrying in what sense, exactly? What I mean is more on the order of phobias. Some humans have behaviors that cannot be turned off without therapy or medication. For the most part, nature selects against this, but only weakly. We would select for it, in a strong way.
You seem to be under the impression that we would train AI by putting them in a simulated or real natural environment until human-like intelligence arises, and then we'd pick out those that are the most successful at survival. We wouldn't do that. We would tightly control the survival criterion itself. For example, we could try to minimize the Kullback-Leibler divergence between the distribution of a robot's actions when it does not know what other robots are doing and the distribution of its actions when it does. What this means is that robots would have a higher probability of survival if they act independently of other robots; i.e., we can train them to be non-social.
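To make the idea concrete, here is a minimal Python sketch of how such a non-sociality criterion might be scored during selection. The `robot.act` interface, the `sample_scene` environment sampler, and the discrete action space are all assumptions made for illustration, not a specific proposal.

import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for two discrete action distributions given as count vectors."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

def sociality_penalty(robot, sample_scene, trials=1000):
    """Estimate how strongly a robot's actions depend on what other robots do.

    `robot.act(scene, others=...)` returning an action index, and
    `sample_scene()` returning a scene with an `other_robots` attribute,
    are hypothetical interfaces used only for illustration.
    """
    counts_blind = np.zeros(robot.n_actions)
    counts_informed = np.zeros(robot.n_actions)
    for _ in range(trials):
        scene = sample_scene()
        counts_blind[robot.act(scene, others=None)] += 1
        counts_informed[robot.act(scene, others=scene.other_robots)] += 1
    # A small divergence means the robot behaves the same whether or not it
    # knows what the other robots are doing, i.e. it is non-social.
    return kl_divergence(counts_blind, counts_informed)

In a selection loop, fitness would simply be reduced by this penalty, so robots whose behaviour is independent of other robots have a higher chance of surviving.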
Also, you're assuming we make AI that we fully understand. That may not be the case; in particular, should we successfully create strong AI by genetic algorithm methods, we won't know how it works, and therefore won't be able to easily impose specific limits.
No, I do not. I assume that we know the objective function that the AI optimizes. In other words, I don't claim to understand how the chess player plays; I claim to know that the chess player is playing chess. If I want to use genetic methods to train an AI that does exactly what I tell it, then the only AI that will survive is AI that obeys my commands. How the hell do you think this process could lead to AI that not only doesn't obey my commands but works against me? The AI's environment is not nature; it is "obey me". The AI isn't culled if it can't feed, as it would be on Earth. It's culled if I am dissatisfied. I mean, genetic algorithms can certainly lead to surprises, like AI that doesn't always obey me, but what you suggest is the equivalent of positing a natural species that never has sex and gleefully throws itself on jagged rocks whenever it gets the chance.
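As a rough illustration of what I mean, here is a minimal Python sketch of a genetic loop whose only survival criterion is obedience. The `candidate.respond`, `cmd.expected`, and `mutate` interfaces are hypothetical placeholders; the point is just that candidates are culled when I am dissatisfied, not when they fail to feed.

import random

def obedience_score(candidate, commands, trials=100):
    """Fraction of sampled commands the candidate carries out as instructed.

    `candidate.respond(cmd)` and `cmd.expected` are placeholder interfaces;
    the only thing that matters here is that fitness is defined as obedience.
    """
    hits = 0
    for _ in range(trials):
        cmd = random.choice(commands)
        hits += candidate.respond(cmd) == cmd.expected
    return hits / trials

def evolve(population, commands, generations=50, keep_fraction=0.2):
    """Toy genetic loop: everything but the most obedient candidates is culled."""
    for _ in range(generations):
        ranked = sorted(population,
                        key=lambda c: obedience_score(c, commands),
                        reverse=True)
        survivors = ranked[:max(1, int(len(ranked) * keep_fraction))]
        # Refill the population by mutating survivors; mutate() is assumed to
        # return a slightly perturbed copy of the candidate.
        population = survivors + [random.choice(survivors).mutate()
                                  for _ in range(len(population) - len(survivors))]
    return population

Surprises can still emerge within that loop, but only along dimensions the fitness function doesn't measure, which is exactly why the survival criterion, not the environment, is the thing to control.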