What really is AI?
Civilization on Earth is, so far, largely a product of intelligence, built through countless experiments and discoveries. What we explore today is Artificial Intelligence: the amplification of natural intelligence with machine intelligence. It has the potential to help civilization flourish like never before, provided we learn to keep the technology beneficial.
Artificial Intelligence (AI) is currently progressing rapidly, whether in the form of Siri, self-driving cars, robotics, sensory systems, neural computing, or many other applications. While science fiction often portrays AI as robots with human-like characteristics, in practice AI can encompass anything from Google's search algorithms to IBM's Watson to autonomous weapons.
Why do we need AI safety research?
Keeping AI's impact on society beneficial motivates more and more research in many areas, from economics and law to technical topics such as defense, verification, and security.
The AI systems designed and used so far are weak (narrow) AI, because they are applied to narrow tasks such as driving a car, face recognition, data manipulation, and solving equations.
Researchers are continually striving to create strong AI, which would be intended to outperform humans at nearly every cognitive task.
Given the long-term nature of this research and the possibility of its success, an important question arises: what will happen if an AI system becomes better than humans at all cognitive tasks?
Designing smarter AI systems is itself a cognitive task, so such a system could potentially undergo recursive self-improvement, triggering an intelligence explosion that leaves human intellect far behind. By inventing revolutionary new technologies, such a superintelligence might help us eradicate war, disease, and poverty, and so the creation of strong AI might be the biggest event in human history. Some experts have expressed concern, though, that it might also be the last, unless we learn to align the AI's goals with ours before it becomes superintelligent.
There are some who question whether strong AI will ever be achieved, and others who insist that the creation of superintelligent AI is guaranteed to be beneficial.
Researchers recognize both of these possibilities, but they also recognize the potential for an artificial intelligence system to intentionally or unintentionally cause great harm.
Most researchers agree that a superintelligent AI is unlikely to exhibit human emotions like love or hate, and that there is no reason to expect AI to become intentionally benevolent or malevolent.
How can AI be dangerous?
Instead, when considering how AI might become a risk, experts think two scenarios are most likely:
- The AI is programmed to do something devastating:
The risk of devastating behavior exists even with weak AI, but it grows as levels of AI intelligence and autonomy increase. Autonomous weapons are artificial intelligence systems that are programmed to kill. In the wrong hands, these weapons could easily cause mass casualties. Moreover, a race to win such an advantage could lead to an AI arms war, a next phase of mass destruction, much as happened years ago with nuclear weapons.
- The AI is programmed to do something beneficial, but its method of achieving the goal proves destructive:
There is also the possibility that an AI, programmed to attain its goal at any cost, may create new problems along the way.
For instance, if we ask an obedient intelligent car to take us somewhere within a specific time, it might get us there chased by helicopters and breaking things along the way, just to meet the deadline.
In another instance, if an intelligent system is tasked with an ambitious geoengineering project, it might wreak havoc with our ecosystem as a side effect, and view human attempts to stop it as a threat to be met.
Why are we concerned about AI safety NOW? Is it necessary?
Though strong AI has long been considered science fiction, and many researchers believe the quest for it could take centuries to succeed, it is also seriously considered that such superintelligence may become a reality within our lifetime.
On the other hand, most AI researchers estimate that it may happen sometime around 2050-2060. Given this possibility, it is prudent to begin safety research now, since the study may take decades to complete and we must prepare for the unexpected consequences of strong AI.
We have no sure-fire way of predicting how a strong AI will behave, because it has the potential to become more intelligent than any human. Past technological developments cannot help us make a reliable prediction either, because we have never created anything with the ability to outperform us, wittingly or unwittingly. Until now, humans have controlled the planet, the satellites, and machines too small to see with the naked eye, not because we are the strongest or fastest, but because we are the smartest; if anything unpredicted happens, we are confident enough that we will remain in control. But if an intelligence surpasses our own, its behavior becomes impossible to predict, and, as matters stand today, we do not know how to bring it under control.