Two Silicon Valley titans recently crossed swords on social media. Tesla co-founder Elon Musk and Facebook chief executive Mark Zuckerberg disagreed on the potential dangers that artificial intelligence (AI) could pose to humanity. Mr Musk has argued that AI poses a danger to Homo sapiens, and he is certainly not alone in holding that view; Stephen Hawking is one of many public intellectuals who have expressed similar fears. Mr Zuckerberg's take is that these doomsday scenarios are needlessly negative and, in some ways, "pretty irresponsible". In response, Mr Musk dismissed Mr Zuckerberg's position, saying "Mark's understanding of AI is pretty limited".
Mr Musk’s fears run as follows. Eventually, AI will be smarter than its creators and capable of replicating and upgrading itself by designing new hardware and developing new skills. This overtaking of the human race is sometimes referred to as the “singularity”. It is true that AI may take a long time to develop a specific skill. For example, computers took some 40 years to start beating world champions at chess, and even longer to understand idiomatic language. But once a specific skill is acquired, it can be copied perfectly and transmitted indefinitely.
What is more, unlike biological species, computers are immortal — programming, memory and data can be passed on forever. As AI takes over more vital tasks, it is becoming indispensable. In effect, AI will be a new species (or several new species), albeit a species of non-biological origin. The goals of a super-intelligent new species may not coincide with humanity’s goals, and humanity will, by then, be incapable of preventing AI from superseding human beings at the top of the evolutionary pyramid, should the new species choose to do so.
Interestingly, Mr Zuckerberg has been programming his own do-it-yourself AI, “Jarvis”, to serve as a domestic assistant. But in his world view, even if AI does eventually become more intelligent than its creators, this is likely to occur a millennium down the line, so it is not worth worrying about now. For the moment, as the Facebook CEO has blogged, it takes a great deal of painstaking programming to get AI to contextualise even simple tasks.
Both may be right and wrong at the same time. AI is already handling tasks that require ethical decisions, and that leads into a moral minefield. Mr Musk’s Tesla builds self-driving cars, for instance. Insurers need to know how autonomous vehicles tackle the well-known “trolley problem” in real life. Let us say a self-driving car must choose between hitting a pedestrian and risking its own passenger’s life by taking dangerous evasive action. What does it do?
Similarly, let us say AI managing a smart hospital diagnoses a short-circuit that could cause a devastating fire. But if it switches off the power supply, patients on life-support systems may die. What does it do? There are more nuanced ethical decisions for AI as well. For example, a social media network may use AI to moderate content. How does AI uphold free-speech guidelines while removing hateful content?
There are still other issues. Automation, increasingly driven by AI, has already displaced many manufacturing jobs, and AI is now taking over white-collar tasks such as providing advice on finance and tax planning. This will force a tectonic shift as workforces re-skill, quite possibly with the help of AI itself.
One member of the “Zuckerberg camp” recently remarked that doomsday fears of human extinction were akin to worrying about overpopulation on Mars. But it is also true that, in the near term, AI will have to learn to make ethical judgements, and humans will have to learn new skills to stave off obsolescence.