Elon Musk (left) and Bill Gates (right) have both raised concerns about artificial intelligence.
But today, both are terrified of the same thing: artificial intelligence.
In a February 2015 Reddit AMA, Gates said, "First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern ... and I don't understand why some people are not concerned."
In a September 2015 CNN interview, Musk went even further. He said, "AI is much more advanced than people realize. It would be fairly obvious if you saw a robot walking around talking and behaving like a person... What's not obvious is a huge server bank in a vault somewhere with an intelligence that's potentially vastly greater than what a human mind can do. And its eyes and ears will be everywhere, every camera, every device that's network accessible... Humanity's position on this planet depends on its intelligence, so if our intelligence is exceeded, it's unlikely that we will remain in charge of the planet."
Gates and Musk are two of technology's most credible thinkers. They have not only put forward powerful new ideas about how technology can benefit humanity, but have also put those ideas into practice with products that make things better.
And still, their comments about AI tend to sound a bit fanciful and paranoid.
Are they ahead of the curve and able to understand things that the rest of us haven't caught up with yet? Or, are they simply getting older and unable to fit new innovations into the old tech paradigms that they grew up with?
To be fair, others such as Stephen Hawking and Steve Wozniak have expressed similar fears, which lends credibility to the position that Gates and Musk have staked out.
What this really boils down to is that it's time for the tech industry to put guidelines in place to govern the development of AI. Such guidelines are needed because the technology could be developed with altruistic intentions and still eventually be co-opted for destructive purposes, in the same way that nuclear technology was weaponized and spread rapidly before it could be properly checked.
In fact, Musk has drawn that comparison directly. In 2014, he tweeted, "We need to be super careful with AI. [It's] potentially more dangerous than nukes."