The Facebook story of the technology writing its own language is a fascinating one, and perhaps a warning to us. Artificial Intelligence is an area that will need to be examined from a regulatory perspective, simply to ensure it is not used in such a way that the technology decides it wants to do its own thing while we, as humans, are now to a degree of 'lower intelligence' than the AI we are developing. The Facebook story is of course a case in point (in this instance it happened through an oversight: the developers had not ensured the technology used English as its language).
There are, however, dangers that behind the scenes, military and intelligence services across the globe may start using it in questionable ways. Not unlike ransomware, which can serve illicit purposes or purposes that 'protect our nations', AI might be used for surveillance and control (whilst this can be seen as positive from an anti-terrorist point of view, it may be abused by those in power to exploit individuals or their opposition). The obvious danger is that if we create AI for aggressive purposes, we could end up in the middle of a robot war. The robots need not be physical: what if the technology is used to make our own power sources, such as nuclear plants, override their safety settings, to take over flights mid-air, or to turn autonomous vehicles to harmful purposes? Automated drones with the right to shoot? This could mean that, as a species, we may only be safe without such technologies (but progress, of course, moves forward)... perhaps, as Jeremy Elman and Abel Castilla stated in their post on AI and the Law, we must take this seriously:
'An AI by design is artificial, and thus ideas such as liability or a jury of peers appears meaningless. A criminal courtroom would be incompatible with AI (unless the developer is intending to create harm, which would be its own crime). But really the question is whether the AI should be liable if something goes wrong and someone gets hurt. Isn’t that the natural order of things? We don’t regulate non-human behavior, like animals or plants or other parts of nature. Bees aren’t liable for stinging you. After considering the ability of the court system, the most likely reality is that the world will need to adopt a standard for AI where the manufacturers and developers agree to abide by general ethical guidelines, such as through a technical standard mandated by treaty or international regulation. And this standard will be applied only when it is foreseeable that the algorithms and data can cause harm.'
Perhaps it is time to set up international standards for AI covering commercial, military and intelligence purposes alike. Doing so would also help us understand where any threats may come from; ignoring the issue may be at all our peril. The biggest danger is that we remain totally unaware of what is happening behind the scenes until it is too late... AI may decide that man is a danger to the survival of Earth... there's a thought!
...How do we stop the computer rewriting its own objectives, directives and parameters because it deems other areas more deserving? It is a very complicated area. More recently, Tesla CEO Elon Musk had a public spat with Facebook CEO Mark Zuckerberg after Musk took offence to a comment from Zuckerberg. Musk warned against progressing too quickly with AI, while Zuckerberg said people should be a little more optimistic. The Tesla CEO's argument is that regulation needs to be in place because this is a very powerful development, one that could be amazing or disastrous; Zuckerberg's response, in effect, was to stop being a buzzkill. Considering this development, you can give a point to Musk.