AI set to get crazier and crazier
AI expert Emad Mostaque has warned that large AI models will only get “crazier and crazier” unless greater regulation and control are introduced. As the EU AI Act progresses, what challenges and priorities should businesses be on guard for?
Emad Mostaque, CEO of Stability AI, explained that one necessary step in making AI safe is to remove explicit or harmful images from the training data. He argues that continuing to train large language models like OpenAI’s ChatGPT and Google’s LaMDA on what is effectively the entire internet is making them too unpredictable and potentially dangerous.
"The labs themselves say this could pose an existential threat to humanity," said Emad.
The head of OpenAI, Sam Altman, told the United States Congress that the technology could “go quite wrong” and called for regulation. He joins a growing number of tech leaders voicing concern, including Dr Geoffrey Hinton, who recently left his position at Google. Dr Hinton warned of the dangers of AI chatbots, adding his voice to the growing chorus of critics who say companies are racing toward danger with their aggressive push to build products based on generative AI.
Neil Murphy, Chief Sales Officer at intelligent automation company ABBYY, explains what we can expect from the Act.
“The progression of the EU AI Act is necessary for the benefits of AI to be realised in an ethical and sustainable way.
“There’s a lot that falls under the overall banner of AI, and ChatGPT is one technology that is still in a very experimental phase. At the moment, people are very willing to upload text or other information that could be considered confidential. Many businesses are banning employees from accessing ChatGPT due to uncertainty about where that data is going or how it’s being used. These laws will help change that.
“However, at the rate AI is developing, organisations should continue assessing the risks before deploying new technologies regardless of current regulation, as these technologies could impact critical processes across the organisation.
“At ABBYY, we use different types of AI within our products, each already covered by our own governance around how data is used, and we’re committed to compliance. We go through a lot of data security checks with our customers to make sure they understand how we’re using that data. So, when we talk about AI, it’s a very general term, and regulators will have to take that into account.
“In terms of transparency, there must be disclosure with customers and employees. Tech leaders must be transparent and explain what the technology is, what the objectives are, what capabilities it brings, and the impact they expect it to have on the organisation or end-user. These are the standards we can expect from the AI Act, but we shouldn’t wait for it to pass before considering all the ethical, legal, and business repercussions.”
Tim Wright, Tech and AI Regulatory Partner at Fladgate, comments: “US-based AI developers will likely steal a march on their European competitors, given news that EU parliamentary committees have green-lit the groundbreaking AI Act, under which AI systems will need to be categorised according to their potential for harm from the outset.
“The US tech approach (think Uber) is typically to experiment first and, once product-market fit is established, to retrofit to other markets and their regulatory frameworks. This approach fosters innovation, whereas EU-based AI developers will need to take note of the new rules and develop systems and processes that may take the edge off their ability to innovate.
“The UK is adopting a similar approach to the US, although the proximity of the EU market means that UK-based developers are more likely to fall into step with the EU ruleset from the outset. However, the potential to experiment in a safe space (a regulatory sandbox) may prove very attractive.”
AI is advancing faster than anyone expected, and as Dr Hinton has explained, the competition between Google, Microsoft, and other tech giants will escalate into a global race that won’t stop without some form of regulation.