Why Elon Musk's warning about AI's risk to civilisation should be heeded: Deb Hetherington
The old parable has it that a frog dropped into boiling water will immediately jump out. If, however, the frog is placed in tepid water that is gradually heated to boiling point, it will be blissfully unaware of the danger and boil to death.
AI has leapt forward considerably in recent years, driven by huge waves of accessible data, machine learning, and large language models.
But what does that actually mean for us humans? Cast your mind back a decade: you'd jump in a taxi, tell the driver where you wanted to go, and they would simply drive you there.
They wouldn’t ask for the postcode, or type the details into their phone.
They wouldn’t have the address already programmed into their GPS system from your request via an app; they would simply have memorised their local area to the point where they could confidently just drive.
Now that’s a relatively small step on the road to autonomous AI, but it’s an important step.
Uber collects tens of terabytes of data every single day, much of it from sensors on its vehicles, with the primary purpose of developing the driverless car.
Tesla similarly collects mounds of data to continue innovating its products towards the driverless car, and beyond. These incremental innovations take us closer and closer to a world in which human involvement is less and less required.
Artificial intelligence is currently used narrowly, but it has the potential to be used far more widely and start to learn from itself.
Artificial general intelligence, often discussed alongside the singularity, is the point at which machines have the ability to do whatever a human can, and this scenario requires far more attention through a regulatory lens than it is currently being afforded. Governments need to make tough economic and societal decisions now in order to lay the moral foundation for increased growth in AI.
My fear around the onset of AI, or more specifically AGI, is that if we do reach the point where machines can do what humans can do both physically and mentally, where does that leave us?
And yes, there has always been the argument that innovation has displaced jobs and created new ones, but the difference here is autonomy.
Many theorists within the field of AGI consider singularity to be a realistic outcome as early as 2050. Others see AGI as a far-off possibility that may never come to fruition.
The question I continue to come back to is this: if there is even a 1 per cent chance that machines can reach full autonomy at some point in the future, and if our daily engagement with connected devices continues to add data that improves AI to this end, then shouldn’t those who have the power and responsibility to protect humanity from this potential inevitability be considering the ethical and regulatory principles that need to be put in place now?
If not, then we are currently sitting in very warm water, and by 2050 we may find ourselves no longer able to jump out.
Deb Hetherington is an Innovation & Ecosystems specialist