AS I SEE IT
Imagine that you are bracing yourself to book a business trip or holiday: flights, transfers, hotels, restaurants and all the rest. But instead of having to battle with annoying websites, there is a convenient way to line up all the reservations with the efficiency of James Bond’s Miss Moneypenny.
Or suppose that you need to write a paper or an essay involving a tooth-grinding amount of research. Rather than struggling with endless trawls through search engines, you simply tap in a request and within minutes, rather than hours and hours, the information comes back complete with its sources attributed. Your project can even be written for you, maybe with better results.
Far from being a futuristic dream, these scenarios already exist thanks to AI systems and chatbots like ChatGPT and GPT-4. But rather than being welcomed as drudge-saving tools, these advances have set alarm bells ringing, with technocrats calling for a halt to further development until the risks have been assessed and contained.
Worthies like Elon Musk and Apple co-founder Steve Wozniak have called for a six-month halt, warning that “human-competitive intelligence can pose profound risks to society and humanity”.
Their warnings sound like the plot of a sci-fi horror movie in which AI spirals out of control, taking over jobs, jamming the net with misinformation and, even more sinisterly, outwitting humans. Could the techno supremos be suffering from overheated imaginations, or are they heedful of the fact that AI is developing so fast that there isn’t time to research the implications of what it can do and how this will affect humanity?
It’s one thing to joke about an AI model that can write your essay, but quite another to let loose uncontrolled intelligence systems potentially capable of taking decisions for themselves. Among the concerns is the way that competition in the tech industry might push less responsible firms into launching their latest product before the necessary checks have been carried out.
The computer scientists’ letter expressing concern followed OpenAI’s release of GPT-4, which has triggered a race by big names in the industry like Microsoft and Google to launch similar products for their systems. In it they asked for a six-month halt: “We call on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4. If the pause cannot be enacted quickly, governments should step in and institute a moratorium.”
Smouldering concern has ignited beyond the tech community. Italy hit the news headlines here when it imposed a temporary ban last month on ChatGPT, and the EU is considering what action it should take. Among the fears posed by AI running on Large Language Models (LLMs) are the implications for education, especially at third level.
There are worries about security, where data scraping can result in sensitive information, such as trade secrets and financial data, being exposed to competitors. For disinformation, look no further than the way most people were completely taken in by pictures of the Pope wearing a flashy white designer puffer jacket, which went viral. Doctoring pictures with Photoshop has been around for years; here a speedier AI tool, Midjourney, was used.
In other quarters the alarm bells are being dismissed as placing far too much faith in AI, viewing it as incapable of taking the kind of rational, purposeful decisions that humans take (which may be a bit of misplaced faith in human minds).
Asked about the risks and whether we have reached a tipping point with AI, Professor Noel O’Connor of the School of Electronic Engineering, DCU, said: “We need to look at what we are doing. The time for regulation has arrived.”
According to a techie informant, concern is being caused by a tokenising system in which information is encoded in ways that are not understood, nor is it known exactly what the system is doing.
The big question really is whether regulation can move fast enough to keep up with the rate of development of AI, where the graph of change is taking off like one of Elon Musk’s proposed space trips, with the speed and power of computers doubling every 18 months.
Regulation may lag behind, as it has done with social media. And then there’s the catch that, if countries pause in the intelligence race, they may get left behind by competing nations like China.
Looks like the genie is already out of the bottle.