Georgia Tech students discover one of their TAs is in fact an Artificial Intelligence program

Artificial Intelligence (AI) took a small step into our world when Georgia Tech professor of computer science Ashok Goel “recruited” Ms. Jill Watson as one of the teaching assistants (TAs) for his Knowledge-Based Artificial Intelligence class.

Students had no idea and were flabbergasted when they finally learned that Ms. Watson, one of the nine TAs they had been interacting with on the class’s online forum, was in fact an artificial intelligence program powered by International Business Machines Corp.’s Watson analytics system.

Mr. Goel notes that students in the class typically post 10,000 messages a semester, most of them routine questions such as assignment due dates. Ms. Watson was trained to answer only those questions for which she had a confidence of at least 97%. Mr. Goel estimates that within a year, Ms. Watson will be able to answer 40% of all the students’ questions.
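In effect, that is a confidence-gated responder: the program only posts a reply when its match score clears a high bar and otherwise stays quiet so a human TA can step in. The article doesn’t describe the actual implementation (which was built on IBM Watson), so the snippet below is only an illustrative sketch; the `KNOWN_ANSWERS` entries, the `match_question` scoring, and the 0.97 threshold are assumptions drawn from the figures quoted above.

```python
from difflib import SequenceMatcher

# Hypothetical sketch of confidence-gated answering, NOT the real Jill Watson
# implementation (which used IBM Watson's question-answering technology).
CONFIDENCE_THRESHOLD = 0.97  # answer only when at least 97% confident

# Canned answers for routine, frequently asked questions (assumed examples).
KNOWN_ANSWERS = {
    "when is assignment 1 due": "Assignment 1 is due Sunday at 11:59 pm.",
    "where do i submit the project": "Submit it on the course forum's project page.",
}

def match_question(post):
    """Return (best canned answer, crude similarity score in [0, 1])."""
    post = post.lower().strip("?! .")
    best_answer, best_score = None, 0.0
    for known, answer in KNOWN_ANSWERS.items():
        score = SequenceMatcher(None, post, known).ratio()
        if score > best_score:
            best_answer, best_score = answer, score
    return best_answer, best_score

def respond(post):
    """Post an answer only when confidence clears the threshold; else stay silent."""
    answer, confidence = match_question(post)
    if confidence >= CONFIDENCE_THRESHOLD:
        return answer
    return None  # leave the question for a human TA

if __name__ == "__main__":
    print(respond("When is Assignment 1 due?"))  # exact match -> answers
    print(respond("Can I get an extension?"))    # low confidence -> None
```

The design choice the article hints at is the same one sketched here: a very high threshold trades coverage for accuracy, which is why the program initially handled only the most routine questions.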

More and more, our learning and interaction are going online and on a gigantic, global scale. Can we expect more of the “people” we deal with at the other end of the line to be silicon-based instead of carbon-based?

via WSJ

Ban New Tools For Killing People

Cars that drive themselves, teaching assistants (and perhaps even instructors and online salespeople) who are artificial intelligence programs, robots that can do more than we can, autonomous weapons (killer robots)… perhaps it’s time we considered a set of Asimov-like laws to govern the behaviour of robots as they become more advanced.

Join Stephen Hawking, Elon Musk, Steve Wozniak and more than 20,000 others and sign Autonomous Weapons: an Open Letter, which calls upon industry and the military to ban offensive autonomous weapons and prevent an AI arms race.

Autonomous Weapons: an Open Letter from AI & Robotics Researchers

Autonomous weapons select and engage targets without human intervention. They might include, for example, armed quadcopters that can search for and eliminate people meeting certain pre-defined criteria, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions. Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.

Many arguments have been made for and against autonomous weapons, for example that replacing human soldiers by machines is good by reducing casualties for the owner but bad by thereby lowering the threshold for going to battle. The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.

Just as most chemists and biologists have no interest in building chemical or biological weapons, most AI researchers have no interest in building AI weapons — and do not want others to tarnish their field by doing so, potentially creating a major public backlash against AI that curtails its future societal benefits. Indeed, chemists and biologists have broadly supported international agreements that have successfully prohibited chemical and biological weapons, just as most physicists supported the treaties banning space-based nuclear weapons and blinding laser weapons.

In summary, we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.

Related Links: