Machine learning can be invaluable but flawed inputs could prove costly - Andy Brown

There has been a lot of talk recently about whether artificial intelligence is at risk of getting out of control and how we might manage it better. Which is strange, because to date no machine has been invented that is actually capable of conscious, intelligent thought.

Machine learning is already a common feature of software, and it is often a very good thing. When I was operated on for prostate cancer, the surgeon sat at a machine, interacting with a computer that guided the highly sensitive tools that worked on me. The surgeon never touched a knife, and each time the operation is done both the doctor and the machine learn and improve.

The consequences of such machine learning for the patient are amazing. Instead of a brutal operation that takes months to get over, prostate cancer surgery is now astonishingly delicate: I was left with a scar too small to notice and was able to leave hospital the next day. Much the same gains have been made in breast cancer surgery.


There are many other areas of life where we are gaining from modern technology and should be cheering on the development of machine learning rather than worrying about it. Anyone flying in a plane should be very glad if the machine has the ability to draw on the experience of every other plane that has flown the same route and knows what works best in different weather conditions.

A departure board at Heathrow Airport amidst disruption from air traffic control issues. PIC: Lucy North/PA Wire

There are, however, limits. Machine learning is usually accompanied by machine decision-making. That is fine most of the time, but I wouldn’t want to sit in that plane if the pilot was completely incapable of overriding the computer in any circumstance. There is only so much that can be pre-programmed or learned from predicted circumstances. Sometimes unforeseen events arise and there is a need for some unexpected thinking. Which is why genuinely self-driving cars are proving so hard to create.

A good example of what can go wrong is the 2008 financial crash. Back then, stock exchange trading programs had been invented that were incredibly clever at spotting trends in microseconds and making decisions to buy or sell financial products quickly enough to make a lot of money. That was fine when markets were operating normally and in the way that the programmers expected.

Then financial institutions like Northern Rock started to go to the wall and markets started to worry about how much banks had spent on buying highly complex packages of loans. That was bad enough, but prices went down in ways that told the very clever self-learning computers that they needed to sell quickly. This then contributed to a self-reinforcing scramble to sell which put a lot more banks and financial institutions at risk.


We are still paying the price for the chaos that resulted from the tendency of financial markets to boom and bust, and of badly designed software to exaggerate the problem. A large chunk of our national debt is a direct result of the money that was channelled into banks to avoid something worse than the Great Depression of the 1930s.

The risks of over-complex systems making bad decisions are also evident when you consider the fiasco of the recent air traffic control failure. It would be impossible for a human mind to hold all the complexity of flight movements across Europe and to make real-time decisions about which flights ought to be allowed to go to which airports. It proved all too easy for the complex self-learning system to be thrown out of kilter by one unexpected action.

It is at this point that alarm bells start to ring. If a small error can result in serious disruption to the control of movements of commercial flights, how can we be fully confident that nothing can take place which threatens passenger safety? If mistakes can happen in civilian life, can they also happen in the military?

Right now, rather a lot of drones are being fired by Putin, and there is increasing retaliation from inside Ukraine. Those machines make a lot of decisions for themselves under the guidance of self-learning systems.


The good news is that this should enable responsible generals to target their weapons at genuine military targets more effectively. The bad news is not just that there are a lot of irresponsible generals and irresponsible politicians making decisions about what to target, but that the machines have been designed to make some of their own decisions when they get close to their target.

It isn’t just people with a liking for paranoid conspiracy theories who worry about the possibility that this could result in some very bad things happening. Can we be confident that no drone will end up mistaking a nuclear weapons silo for an enemy drone launch site?

Any such mistake would not be the fault of the machine but of the team that wrote its program.

Andy Brown is the Green Party councillor for Aire Valley in North Yorkshire.