Misusing artificial intelligence (AI) can have any very clear and costly consequences. movie studio Lionsgate late joined a long list of organisations discovering that quotations and citations from generative AI (GenAI) systems request to be verified like any another source; Microsoft is being sued by a German writer after Bing Copilot suggested he had committed crimes he had alternatively reported on; and a US telecoms service is paying a $1m fine for simply transmitting automated calls featuring a fake AI voice mimicking president Biden.
Enterprise enthusiasm for adopting GenAI remains high, meaning organisations are busy putting various governance, hazard and compliance protections in place around their usage in different jurisdictions. While the main reason for restrictions on AI usage is frequently data privacy and safety concerns, regulation and copyright concerns are besides high on the list.
Part of the problem for chief information officers (CIOs), however, is knowing precisely which regulations apply to AI, from the legal basis of utilizing individual data to train AI models, to questions of transparency and discrimination erstwhile utilizing AI systems.
Many organisations focus on upcoming government specifically designed to set rules for those developing and deploying AI systems, alongside a mix of regulations and voluntary guidelines for AI that can be individually useful, but make up what United Nations secretary-general António Guterres rather politely called a “patchwork” of possibly inconsistent rules.
But the impact of fresh laws hasn’t yet been felt, and changes in government in the UK and US make it harder to foretell what future government will dictate, especially for UK businesses caught between the US and the European Union (EU).
Meanwhile, existing regulations that don’t explicitly mention AI already apply – and are being applied. This summer, the Brazilian data processing authority temporarily forced Meta to stop utilizing “publicly available” information collected from its users to train AI models – on the basis of legitimate interests allowed under Brazilian government akin to the General Data Protection Regulation (GDPR). The company had to notify users in advance and supply easy ways to opt out.
To safely navigate this web of regulations and laws – both upcoming and existing – that cover different stages of the improvement and deployment of AI, enterprises must so urgently get to grips with the direction of travel and appetite for enforcement in the countries they operate in.
Evolving UK priorities
Although there is likely to be an AI Bill in the UK, neither of the two private member’s bills making their way through the home of Lords are a reliable guide to what future government might look like.
The government seems unlikely to take precisely the same “pro-innovation” approach to AI regulation as the erstwhile one, especially as it’s signed up to the Council of Europe’s Framework Convention on AI and Human Rights (which covers the usage of AI by governments and another public bodies).
Currently a investigation organisation, it’s possible the UK AI Safety Institute may get a fresh function as an additional regulator, alongside the Information Commissioner’s Office (ICO), Ofcom, the Financial Conduct Authority (FCA) and the Competition and Markets Authority (CMA).
The government study on Assuring a liable future for AI envisions a commercial ecosystem of “AI assurance” tools to guide businesses utilizing AI to mitigate risks and harms. It promises an AI Assurance Platform with a toolkit based on standards specified as ISO/IEC 42001 (covering AI management systems), the EU AI Act and the NIST AI hazard Management Framework. Indeed, existing GRC tools specified as Microsoft Purview Compliance manager are already introducing reporting templates that cover these regulations.
As for AI providers, the minister for the future digital economy and online safety, Baroness Jones, told the planet Trade Organization’s AI conference the government will shortly “bring forward highly targeted binding regulation on the fistful of companies developing the most powerful AI systems”.
Similarly, Ofcom late issued an open letter to online service providers reminding them that the Online Safety Act (which imposes additional duties on search engines and messaging services starting in December 2024 with more beginning next year) besides applies to GenAI models and chatbots. That’s aimed at the large AI platforms, but businesses utilizing services to make chatbots, for example, for client service, will want to test them thoroughly and make certain content safety guardrails are turned on.
Disclosure requests
The ICO has already requested fairly detailed disclosures from large platforms specified as LinkedIn, Google, Meta, Microsoft and OpenAI about the data utilized to train their GenAI systems. Again, that’s covered by the Data Protection Act 2018, the UK implementation of GDPR. “Substantially right now, GDPR is regulating AI,” Lilian Edwards, manager at Pangloss Consulting and prof. of technology law at Newcastle University, told Computer Weekly.
While the upcoming Data (Use and Access) Bill doesn’t include the sweeping reforms to data protection rights proposed by the erstwhile government, it besides doesn’t supply any extra clarity about the impact of UK GDPR on AI, beyond the existing guidance from the ICO, which makes it clear that elder management needs to realize and address the complex data protection implications of the technology.
“The definition of individual data is now very, very wide: data that relates to a individual who can be made identifiable,” warned Edwards. “Any AI company is almost certainly, if not intentionally, processing individual data.”
More generally, she cautioned chief information officers not to dismiss government that doesn’t name AI specifically, noting that all of the compliance and hazard management processes already in place apply to AI. “There are plenty of laws affecting companies that have been around for donkeys’ years that people aren’t as excited about: all the average laws that apply to businesses. Discrimination and equality: are you in any way breaking the laws that are enforced by the Equality and Human Rights Commission? Are you breaking consumer rights by putting terms and conditions into your contracts?”
Edwards further warned that AI systems delivering hallucinations can breach wellness and safety laws (like Amazon selling AI-generated books that misidentify poisonous mushrooms).
Diya Wynn, liable AI lead at Amazon Web Services, said: “We all have been very acquainted and accustomed to having an knowing of sensitivity of data, whether it’s confidential, PII or PHI. That awareness of data and protecting data inactive is foundational, whether you’re utilizing AI or not, and that should underpin interior policies. If you would not share PII or confidential or delicate information normally, then you absolutely don’t want to do that in in in AI systems that you’re building or leveraging as well.”
International implications
Despite (or possibly due to the fact that of) being home to the major AI suppliers, the US has no comprehensive, AI-specific national laws. Both the consequence of the election and the highly fragmented legal and regulatory setup (with multiple authorities at both state and national level) make it unclear what, if any, government will emerge.
The executive order on AI issued by president Biden called for frameworks, best practices and future government to guarantee AI tools don’t break existing laws about discrimination, individual rights or how critical infrastructure and financial institutions handle risk.
Although ostensibly broad, it actually applied more narrowly to government services and organisations that supply the US government, including assigning responsibilities for developing guardrails to NIST, which runs the US AI Safety Institute.
As Paula Goldman, Salesforce chief ethical and humane usage officer and a associate of the national AI advisory committee that advises the US president, noted: “In a policy context, there are real, legitimate conversations about any of these bigger questions, like national safety hazard or bad actors. In a business setting, it is simply a different conversation.”
That conversation is about data controls and “a higher level of attention to good governance, hygiene and documentation that can be discussed at a board level”.
The executive order did specifically mention GenAI models trained on very large systems, but there are no existing systems with the specified level of resources.
Many individual states have passed laws covering AI, any implementing the principles in the executive order, others regulating decisions in critical or delicate areas made utilizing GenAI without crucial human engagement (again, GDPR already includes akin principles). California did introduce a sweeping scope of legislation covering areas specified as deepfakes and AI-generated “digital replicas”, of peculiar interest to Hollywood.
The Republican run talked about replacing the executive order with AI built on free speech principles and the somewhat mysterious “human flourishing”. It’s unclear what that would look like in practice – especially given the conflicting interests of various donors and advisors – and organisations doing business in the US will request to deal with this patchwork of regulation for any time to come.
The EU may set the tone
On the another hand, the EU is the first jurisdiction to pass AI-specific legislation, with the EU AI Act being both the most comprehensive regulation of AI and the 1 that suppliers are actively preparing for. “We are specifically building into our products, in as much as we can, compliance with the EU AI Act,” Marco Casalaina, vice-president of product at Azure AI Platform, told Computer Weekly.
That’s something Goldman expects businesses to welcome. “We have definitely heard a desire from companies to make certain that these regimes are interoperable, so there’s an overarching set of standards that can apply globally,” she said. “Honestly, it’s a lot to keep track of.”
The Council of Europe’s AI Convention follows akin principles, and if this leads to global government aligning with it in the same way that GDPR influenced government worldwide, that will simplify compliance for many organisations.
That’s by no means certain (there are already AI laws in place in Australia, Brazil and China), but any business operating in the EU, or with EU customers, will surely request to comply with it. And unlike the mostly voluntary compliance approach of the executive order, the act comes with the usual penalties – in this case, of up to 3% of global turnover.
The first thing to remember is that the fresh government applies in addition to a wide scope of existing laws covering intellectual property, data protection and privacy, financial services, security, consumer protection, and antitrust.
“It fits into this giant pack of EU laws, so it only covers truly rather a tiny amount,” said Edwards.
The act doesn’t regulate search, social media, or even recommender systems, which are meant to be dealt with by government enforcing the Digital Services Act.
Rather than focusing on circumstantial technologies, the EU AI Act is about demonstrating that AI products and services available in the EU comply with product safety requirements, which include data safety and user privacy, in much the same way that the CE mark acts as a passport for selling physical products in the EU market.
The act covers both conventional and generative AI; the second provisions are clearly a work in advancement mainly intended to apply to AI providers, who gotta registry in an EU database. As in the US executive order, general intent models with “systemic risks” (like disinformation or discrimination) are categorised by the capacity of the strategy they were trained on, but at the level of the training systems already in usage by providers specified as Microsoft and OpenAI, alternatively than future systems yet to be built.
The most oversight (or outright bans) applies to circumstantial higher-risk uses, which are predominantly in the public sector, but besides include critical infrastructure and employment.
“If you are building a chatbot for vetting candidates for hiring,” Edwards warned, that would fall into the high-risk category. AI systems with limited hazard (which includes chatbots) require transparency (including informing users about the AI system): those with minimal or no risk, specified as spam filters, aren’t regulated by the act.
These high-risk AI systems require conformity assessments she described as “a long list of very sensible device learning training requirements: transparency, hazard mitigation, looking at your training and investigating data set, and cyber security. It’s got to be trained well, it’s got to be labelled well, it’s got to be representative, it’s got to be not full of errors.”
AI providers gotta disclose at least a summary of their training set as well as the risks active in utilizing these AI systems, “so you can see what risks have been mitigated and which ones have been flagged as not entirely mitigated”, said Edwards.
The first intent of the act was to regulate conventional AI systems (not models) with a specific, intended usage (like training, education or judicial sentencing) and making certain they stay fit for that intent with tracking of updates and modifications. That makes little sense for GenAI, which is seldom designed for a single purpose, but the rule remains. Organisations deploying GenAI systems have less responsibilities than AI providers under the act, but if they make crucial adequate modifications for these to number as a fresh strategy (including putting their name or trademark on it), they will besides request a conformity assessment if they fall into a high-risk category.
Code of practice
It’s not yet clear whether fine-tuning, RAG or even changing from 1 large language model to another would number as a crucial modification, as the code of practice for “general purpose” GenAI systems is still being written.
Until that guidance clarifies the position for businesses deploying GenAI (and besides the situation for open weight models), “it’s not certain how far this will apply to everyday individuals or businesses”, Edwards told Computer Weekly.
The act comes into force gradually over the next fewer years, with any provisions not in force until the end of 2030. Even if the EU takes the same approach as with GDPR (of enforcing the regulations very publically on any larger suppliers to send a message), that may take any time – although multinational businesses may already be reasoning about whether they usage the same AI tools in all jurisdictions.
But 1 thing all organisations utilizing AI covered by the act will request to do very quickly, is make certain staff have adequate training in liable AI and regulatory compliance to have “AI literacy” by the February 2025 deadline.
“It’s not just for your lawyers,” Edwards warns: anyone dealing with the operation and usage of AI systems needs to realize the deployment of those systems, and the opportunities and risks involved.
That kind of awareness is crucial for getting the most out of AI anyway, and getting ahead of the game alternatively than waiting for regulation to dictate safety measures makes good business sense.
As Forrester vice-president Brandon Purcell put it: “Most savvy legal and compliance teams know that if there’s a problem with an experience that involves your brand, you’re going to be culpable in any way, possibly not in litigation, but surely in the court of public opinion.”