So . . . the Elon Musk anti-AI hype cycle has started up again.
Worse, we have Stuart Russell's movie, Slaughterbots.
screen-grab from Slaughterbots
Actually, Slaughterbots is exceptionally well-done and really should be mandatory viewing for everyone. If you haven't seen it, go watch it now -- it's less than eight minutes long, "entertaining" and scary AF.
The first problem is that what the film depicts could be built today by most computer science graduates on a fairly low budget. The only real obstacle is obtaining the shaped explosive -- and the internet provides all sorts of opportunities and work-arounds for that.
screen-grab from Slaughterbots
The second problem is that "slaughterbots" are too effective and too "clean" a solution. Land mines were outlawed for a variety of reasons beyond civilian casualties. Cruise missiles and fully autonomous systems like the Israeli Harpy (used by nine countries and operational since 2008) have generated a lot of protest but are too useful for countries to give up -- and they are getting better and better.
But the biggest problem, and what I want to rant about, is the completely muddled framing of all of this as a single "AI problem". In reality, there are four very different and non-overlapping AI problems. First, there is weaponry that uses software developed as part of AI research but that is not itself truly "autonomous". Second, there is the fear of truly autonomous killer robots (aka Arnold Schwarzenegger). Third, there are the already-existing problems of humans either intentionally using AI to harm others and sway elections, or unintentionally causing harm through bias and other "black box" shortcomings. Fourth, there is the rapidly increasing problem of AI replacing humans.
I have argued for years with Noel Sharkey and the International Committee for Robot Arms Control about their rhetorical tactic of conflating the current, entirely pre-programmed (and still fairly stupid) weapons with future self-improving, fully autonomous robots. Stuart Russell is only backing into the autonomous-weaponry debate because he is deathly afraid of future super-intelligent AI. And indeed, most of what the average citizen hears in the news from Elon Musk, Nick Bostrom, the Machine Intelligence Research Institute (MIRI), the Future of Life Institute (FLI) and others is actually weaponized narrative designed to ensure that their fears are honored.
What we haven't seen is any effective collaborative action. We've seen several "ethics" boards formed -- but membership has been strictly limited and there have been almost no published results. MIRI and FLI have sunk a lot of money into one very specific line of research -- one that precisely repeats the errors that prevented AI from making progress for decades. Instead of partisan fear-mongering and calls for regulation with absolutely no details, we need to divide the problem into rational pieces and start proposing rational solutions COLLABORATIVELY.
image provided courtesy of Metric Media
The first step is to separate the problems where humans are responsible (i.e., ALL the current problems) from the problems where machines are responsible (the FUTURE ones). We need to stop nonsensical fear-mongering proclamations like Elon Musk's claim that humanity has only a 5-10% chance of surviving artificial intelligence. And we need to start investigating ALL avenues together.
Machines with at-least-human intelligence are coming whether we like it or not. There are already many clear and present dangers from the limited AI that we already have. Let's stop the screaming and get down to business. The future of humanity is on the line.
Let The Game BEGIN!