Dangers of Artificial Intelligence You Need to Know
Artificial Intelligence (AI) has a long history. Research has seen many setbacks, and many expectations have gone unmet, but it remains a constantly growing and changing field.
In the twenty-first century, AI techniques became an essential part of the technology industry, helping to solve many challenging problems in computer science. Today, fueled by Big Data and the new possibilities it has opened up, machine learning has become popular once again.
An Evil Power?
Even though AI has made a lot of progress recently, it is still far from comparable to human intelligence. Although it has already surpassed us at certain specialized tasks – often with astonishing speed and precision – researchers have not yet managed to create an AI that is as good a general problem solver as we are.
But there is a possibility that this vision will become reality in the not-so-distant future. There has been much speculation about how this would affect our lives, most notably in movies such as 2001: A Space Odyssey, The Matrix and, more recently, Ex Machina and Her.
All of these movies feature the motif of an AI taking control and starting to dominate us.
As soon as somebody invents an artificial intelligence that is superior to us and able to improve and copy itself, the result will be unfathomable change – a scenario known as the technological singularity. If this happens, I doubt we would stand a chance of surviving.
Let’s assume we manage to invent an AI that is superior to us and that it does not destroy us. Even then, somebody evil might misuse the power that comes with controlling such an intelligence.
So, no matter how we handle it, will the outcome always be bad?
What Makes Us Human
Let’s assume handling AI poses no difficulty. Let’s also assume evil people never gain access to that power. Even then, would a technological singularity lead to a desirable outcome?
If machines support us in every possible task, nobody will ever need to work again. Humans will lose their value, because they can be replaced at any time.
Nobody can say when exactly technology will be advanced enough to make a superintelligence possible.
Some believe it will never happen. Many forecasts place it around 2040.
But maybe it will happen much sooner?
I wish to live in a world where humans matter. Paradoxically, I am researching AI anyway.