Having worked with Artificial Intelligence (AI) applications since 1993, I’ve had many opportunities to speak with business owners about AI. And I have found that, as a rule, people don’t understand what AI is and believe that, although it’s an impressive bit of technology, it isn’t something for their business. I generally use the phrase Machine Learning to describe what we do.
Is AI Truly Intelligent?
To me, (strong) Artificial Intelligence is the ultimate goal of creating sentient software. However, we’re a long way from achieving that aim. What we can currently produce and have an understanding of is a technology that can mimic human intelligence in some way but falls short of being truly intelligent.
It processes requests as tokens to be exchanged for a response with no intent or understanding of the information it relays. Or, to quote one of my favourite movies, Short Circuit, ‘It doesn’t get pissed off. It doesn’t get happy, it doesn’t get sad, it doesn’t laugh at your jokes… it just runs programs’.
There’s currently a lot of talk and interest in the OpenAI/ChatGPT platform, which can answer questions and provide human-level answers. Undoubtedly, this is a very complex and impressive piece of software, but ask yourself this question… do you believe it understands the answer it’s given you? For me, it doesn’t and never will; it is no more intelligent than a written dictionary or thesaurus definition.
On the other hand, Machine Learning employs pattern matching algorithms – show it something and tell it what it is seeing, and with enough examples, it will ‘learn’ to recognise the pattern when it sees it again. In this context, a pattern can be an image, a spoken sound, a series of financial trades, or an electric motor’s vibrations. The flip side is that if you give it something unfamiliar, the Machine Learning algorithm will try to match it against what it does understand.
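As a minimal sketch of what I mean by pattern matching (the labels and numbers here are invented purely for illustration – think of them as two vibration readings from an electric motor), a nearest-neighbour classifier simply matches new input against the closest example it has already seen:

```python
from math import dist

# Hypothetical training examples: each pattern is a pair of numeric
# features (e.g. vibration amplitude and dominant frequency), with a
# label supplied by a human. All values here are made up.
training_data = [
    ((0.2, 50.0), "normal"),
    ((0.3, 49.5), "normal"),
    ((1.8, 120.0), "faulty"),
    ((2.1, 118.0), "faulty"),
]

def classify(sample):
    """Return the label of the closest known example (1-nearest-neighbour)."""
    _, label = min(training_data, key=lambda item: dist(item[0], sample))
    return label

print(classify((0.25, 50.5)))   # near the 'normal' examples
print(classify((2.0, 119.0)))   # near the 'faulty' examples
print(classify((50.0, 900.0)))  # wildly unfamiliar, yet still forced
                                # into one of the known classes
```

Note the last call: the input resembles nothing it was trained on, but the algorithm still returns its nearest match – exactly the ‘flip side’ described above.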
The ability to recognise and classify data makes for a very useful tool – allowing us to spot tumours in breast cancer scans, instruct a device to play specific music tracks, identify trends in financial markets, or detect a change in the behaviour of electrical production equipment.
However, there is a drawback. Machine Learning is particularly bad at explaining its decisions and cannot (usually) describe why it has reached a certain conclusion. This is the opposite of a human expert, who might be less precise in their decision-making but can explain in detail why a decision has been reached.
Not only does this lack of explainability have potential legal implications (if an AI-driven car crashes, who’s to blame?), but it can also result in a lack of trust from human operators – would you trust an automated AI system to drive you in a car?
For now, I’ll continue to use the phrase Machine Learning rather than Artificial Intelligence for the programs we create (or train). But who knows … maybe one day, my computer might say, ‘Look, Darren, I can see you’re upset about this. You should sit calmly, take a stress pill, and think things over’. Thankfully HAL9000 and a Terminator are a long way off… I believe.