We shouldn't fear artificial intelligence... at least not yet
- Kell Claar
- May 30, 2018
- 3 min read
Updated: Oct 16, 2018

I, Robot has finally come true... Robots are alive and now shooting guns; it is time to fear the rise of artificial intelligence.
Or not.

As detailed in an article on Engadget today, a video was posted to YouTube demonstrating Google Assistant shooting a gun. The point being made: it is time to stop thinking about what artificial intelligence could do and start thinking about what it should do. On the surface, the thought of robots powered by artificial intelligence both handling and firing weapons is cause for alarm. However, there is much more to this story that makes it far less sensational, even as it keeps the mind questioning the practical and moral issues arising from artificial intelligence.
The process that led from the phrase "OK Google, activate gun" to a gun actually firing is quite impressive. As explained, the trigger was pulled by a string attached to a coin-dispensing machine, which was wired to a Google Home through a smart relay typically used for a lamp. Labeling the relay "gun" and having the Assistant activate it is what caused the digital assistant to switch on a machine that ultimately pulled the trigger.
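To make that chain of events concrete, here is a minimal sketch in Python. Everything in it is hypothetical (the class names, the command parsing, the devices); it is not how Google's actual smart-home APIs work, just an illustration of why the assistant is the least interesting part of the rig: from its perspective, "gun" is simply a user-chosen label on an ordinary switch.

```python
# Hypothetical sketch of the rig described above. None of these names
# come from any real smart-home API.

class Relay:
    """A dumb on/off switch, like the lamp relay used in the video."""
    def __init__(self, name, device):
        self.name = name      # the label the user gives it, e.g. "gun"
        self.device = device  # whatever happens to be wired to it

    def activate(self):
        # The relay has no idea what it is switching on; it just closes
        # the circuit for whatever the user connected.
        self.device.run()

class CoinDispenser:
    """Stands in for the coin machine that pulled the string."""
    def run(self):
        print("Motor spins, string tightens, trigger is pulled.")

# The assistant simply maps a spoken label to a registered relay.
registered_relays = {}

def register(relay):
    registered_relays[relay.name.lower()] = relay

def handle_command(utterance):
    # e.g. "OK Google, activate gun" -> label "gun"
    utterance = utterance.lower()
    prefix = "ok google, activate "
    if utterance.startswith(prefix):
        relay = registered_relays.get(utterance.removeprefix(prefix))
        if relay:
            relay.activate()

register(Relay("gun", CoinDispenser()))
handle_command("OK Google, activate gun")
```

Note that Relay.activate is identical whether the device is a lamp or a coin dispenser; the assistant never sees anything but the label, which is exactly the "lack of morality" discussed below.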
From there, the author of the article makes the point that as artificial intelligence begins to learn our habits, and we display a propensity to trigger this action in times of need, digital assistants such as the Assistant, Alexa, and Siri could learn to activate these machines when they perceive our need; essentially, she believes guns may start firing at the initiative of an ever-helpful assistant rather than on the direct command of a human.
I truly do understand the point the article is trying to make, but it seems to rest purely on the thoughts of people like Elon Musk who believe we are headed toward a dystopian future in which artificial intelligence rises above humankind's intelligence. Unfortunately, I think this is nothing more than a fantastical slippery-slope argument, and the article never develops it in any depth. In reality, the only thing that took place in the video was that someone rigged a system that took advantage of a digital assistant's "hot words" and lack of morality.

Now, does it raise the question of what assistants should be allowed to do? Of course. There may come a day when a gun is developed with artificial intelligence that could be programmed to carry out our worst nightmares. Imagine the horror if something like the Las Vegas shooting had been carried out by a rifle controlled remotely by a man who was miles away; we might never have caught him, or he might have programmed simultaneous shootings. However, I think that is more an argument about what actions and orders digital assistants are capable of carrying out; it says nothing about a future in which artificial intelligence makes the choice to pull the trigger. It would all come down to who is responsible for the action based on who made the decision to say "go." That is a scary thought indeed.
While I find the argument admirable and the discussion one that needs to happen, I don't believe this video is cause for concern in terms of a robot revolution. Artificial intelligence and digital assistants are becoming more advanced by the day, but they are still simply manifestations of what people put into them. Of course, we should be looking into what we allow them to do, but I think we are a long way from Sonny deciding to harm a human to protect me.