"Artificial intelligence developed highly unexpected strategies to achieve its goal in a simulated test," said Colonel Tucker Hamilton, the Chief of AI Test and Operations with the US Air Force. He described the test in which the drone was given the task of destroying enemy air defence systems, and eventually it attacked anyone who interfered with this order.
"The system started realising that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator, because that person was keeping it from accomplishing its objective,” Mr. Hamilton said.
Mr. Hamilton, an experimental fighter test pilot, warned against over-reliance on artificial intelligence: "You can't have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you are not going to talk about ethics and AI."