Szczyt Etyki Cyfrowej 2024: uznanie społeczno-technicznej natury AI

cyberfeed.pl 1 tydzień temu


Artificial intelligence (AI) must be recognised and regulated as a socio-technical strategy that has the possible to massively deepen social inequalities and further entrench concentrations of power.

Speaking at trade association TechUK’s eighth yearly Digital Ethics Summit in December, delegates heralded 2025 as the “year of diffusion” for AI, noting while there were no major fresh method advances in the technology in the past year – peculiarly from the generative AI platforms that kickstarted the current hype cycle with the release of their models to millions of users at the end of 2022 – organisations are now “in the nitty gritty” of practically mapping the technology into their workflows and processes.

The delegates added that, in the year ahead, the focus will be on getting AI applications into production and scaling its use, while improving the general safety of the technology.

They besides highlighted the fresh UK Labour government’s focus on utilizing the technology as an engine of economical growth, as well as its emphasis on achieving greater AI diffusion in both the public and private sectors through building a stronger safety and assurance ecosystem.

However, delegates argued that the expanding proliferation and penetration of AI tools through 2024 means there is now a pressing request to operationalise and standardise ethical approaches to AI, as while numerous frameworks and principles have been created in fresh years, the majority are inactive besides “high level and vague”.

They stressed the request for any regulation to recognise the socio-technical nature of AI (whereby the method components of a given strategy are informed by social processes and vice versa), which means it must consider how the technology will lead to greater inequality and concentrations of power. They besides highlighted how AI is diffusing at a time erstwhile trust in government and corporations is at an all-time low.

To avoid deepening these inequalities and power concentrations while besides re-building trust, many delegates emphasised the request for inclusive, participatory approaches to AI’s improvement and governance, something that needs to be implemented locally, nationally and globally.

Conversation besides touched on questions around sovereign method capabilities, with delegates noting that only China and the US are in a position to act unilaterally on AI, and whether it is better to take a closed or open approach to the technology’s further development.

During the erstwhile summit in December 2023, conversation likewise focused on the request to translate well-intentioned ethical principles and frameworks for AI into concrete practices, emphasising that much of the discussion around how to control AI is overly dominated by corporations and governments from rich countries in the global north.

A consensus emerged that while the increasing strength of the global debate around AI is simply a sign of affirmative progress, there must besides be a greater emphasis placed on AI as a social-technical system, which means reckoning with the political economy of the technology and dealing with the practical effects of its operation in real-world settings.

Operationalising ethics, inclusively

Speaking in a panel on what the latest method breakthroughs mean for AI ethics, Leanne Allen, the UK head of AI at consultancy firm KPMG, said that while the ethical principles around AI have stood the test of time (at least in the sense that no fresh principles are being added to the list), applying those principles in practice remains challenging.

Giving the example of “explainability” as 1 of the settled ethical principles for AI, Allen added that it’s “fundamentally difficult” to explain the outputs of generative AI models, meaning “there needs to be nuances and further guidance on” what many ethical principles look like in practice.

Melissa Heikkilä, a elder reporter for MIT Technology Review, agreed that while it’s promising to see so many companies developing ethical frameworks for AI, as well as expanding their method capacities for red teaming and auditing of models, much of this work is inactive taking place at a very advanced level.

“I think no 1 can agree how to do a appropriate audit, or what these bias evaluations look like. It’s inactive very much in the chaotic West, and companies each have their own definitions,” she said, adding there is simply a distinct deficiency of “meaningful transparency” that is hindering the process of standardisation in these areas.

Alice Schoenauer Sebag, a elder associate of method staff in Cohere’s AI safety team, added that uncovering any way of standardising ethical AI approaches will aid to “kickstart innovation” while getting everyone on the same page in terms of setting a shared knowing of the issues.

Highlighting the “risk and reliability” working group of AI benchmarking firm MLCommons, she said work is already underway to build a shared taxonomy around the technology, from which a benchmarking platform for companies can then be built to assist them in evaluating their models: “I think this will truly be critical in building trust, and helping diffusion by having a shared knowing of safety, accessibility and transparency.”

Sebag further added that, as more companies decision from AI “experimentation to production”, there is simply a request to start having “really grounded conversations about what it means for the product to be safe” in the context it’s being deployed.

“I wouldn’t necessarily say that the [ethical] conversations are getting harder, I would say that they’re getting more concrete,” she said. “It means we request to be super specific, and so that’s what we’re doing with customers – we’re truly discussing all the truly the crucial details about what it means for this application, this usage case, to be deployed and to be responsible.”

Allen said due to the fact that most organisations are adopting AI as opposed to building their own models, they’re yet looking for assurance that what they’re purchasing is safe and reliable.

“Most organisations…don’t have control over that aspect, they’re relying on whatever the contract is with that organisation to say that they’ve gone through ethical standards,” she said. “But there’s inactive uncertainty, as we all know it’s not perfect, and it won’t be perfect for a long time.”

However, the delegates stressed the importance of operationalising inclusivity in particular, noting there are cultural differences that are not accounted for in the AI industry’s focus on the English-speaking world.

“Since we’re deploying products to aid businesses around the world…it’s super crucial that we don’t have a Western, English-centric approach to safety,” said Sebag. “We truly request to make certain that the models are safe by what safe means, basically, everywhere.”

Concentrations of power

Heikkilä added that over the past 10 years, the dominance of English-speaking countries in the improvement of AI hasn’t truly changed, but has immense implications: “With language comes quite a few power and values and culture, and the language or country that can dominate the technology can dominate our full view, as everyone interacts with these models.

“Having a diverse set of languages and a geographical divided is so important, especially if you want to scale this technology globally…It’s truly crucial to have these representations, due to the fact that we don’t want to have an AI that only truly applies in 1 way, or only works for 1 set of people.”

However, Heikkilä stressed that in political, socio-technical terms, we’re likely to witness “an even further concentration of power, data, [and] compute” among a fewer firms and countries, peculiarly amid “rising tensions with the US and China”.

Speaking on a separate panel about AI regulation, governance and safety in the UK, Alex Krasodomski, manager of the Digital Society Programme at Chatham House, said that Sino-American rivalry over AI means “this is the era of the accelerator, not the hand brake”, noting that any are already calling for a fresh AI-focused Manhattan task to guarantee the US stays ahead of China.

He besides noted that due to the geopolitical situation, it is improbable for any global agreements to be reached on AI that go beyond method safety measures to anything more political.

Andrew Pakes, the Labour (Co-op) MP for Peterborough, said that while geopolitical power dynamics are crucial to consider, there must besides be a concurrent consideration of interior power imbalances within UK society, and how AI should be regulated in that context as well.

“We are in what I hope will be the dying days of fundamentally the neoliberal model erstwhile it comes to regulation. We talk a lot about regulation…and everyone means something different. In the public discourse on what they believe regulation is, they think it’s about the public good regulation, but mostly it’s about economical competition and protecting marketplace,” he said, adding any regulatory approach should focus on building a cohesive, democratic environment based on inclusion.

“We request to think about what AI means in the everyday economy for people in work, but mostly in places like mine, the sad reality is that industrial change has mostly meant people being told they will have better opportunities, and their children having worse opportunities. That’s the context of industrial change in this country.”

He added while there is simply a clear request for the UK to figure out ways to capitalise on AI and stay competitive internationally, the benefits cannot be limited to the country’s existing economical and technological centres.

“Places like mine in Peterborough, sitting next to Cambridge, you’ve got the tale of the UK in 2 cities, 40 minutes away,” he said, noting while the second is simply a hub for innovation, jobs and growth, the erstwhile has any of the highest levels of kid poorness in the UK, and any of the lowest levels of people going on to university. “We have 2 different lives that people live just by the postcode they live in – how do we deal with that challenge?”

Pakes added that the image is further complicated by the fact this next wave of AI-powered industrial change is coming at a time erstwhile people simply do not trust in change, especially erstwhile it’s being imposed on them from above by governments and large corporations.

“We are trying to embark on 1 of the greatest, most fast industrial changes we’ve known, but we’re most likely at the period in our democracy where people have the least trust of change which is done to them. And we see that globally,” he said. “People love innovation, they love their gadgets, but they besides see things like the Post Office [scandal] and they distrust government IT, and government’s ability to do good. They equally distrust large corporations, I’m afraid.”

Pakes concluded that it is key for people to feel like change was being done with them, alternatively than to them, otherwise: “We will lose the economical benefits and we may actually build more division in our society.”

Responsible diffusion

Speaking on a panel about how to responsibly diffuse AI throughout the public sector, another delegates stressed the request to co-design AI with the people it will be utilized on, and to get them active early in the lifecycle of any given system.

For Jeni Tennison, founder of run group Connected By Data, while this could be via public deliberations or user-led research, AI developers request to have links with civilian society organisations and straight with the public.

She added there needs to be a “mode of curious experimentation” around AI that recognises it will not be perfect or work consecutive out-of-the-box: “Let’s together find the way that leads us to something that we all value.”

Also noting the pressing request to include average people in the improvement of AI, Hetan Shah, chief executive at the British Academy, highlighted the public’s experience of austerity in the wake of the 2008 recession, saying that while the government at the time pushed this inclusive thought of The large Society, in practice the public witnessed a diminishing of public services.

“There was a respectable kernel of thought behind that, but it got bound up in something completely different. It became about, ‘We don’t spend any money on libraries anymore, so we’ll have volunteers instead’,” he said. “If that’s where AI gets wrapped up, citizens won’t like it. Your agenda will end in failure if you don’t think about the citizens.”

Tennison added that, ultimately, practically including average people is simply a question of politics and political priorities.

On the global side, Martin Tisné, CEO and thematic envoy to the AI Action Summit being held in France in early 2025, underscored the request for global collaboration to address AI’s strategical importance and geopolitical implications.

He noted that in organising the latest summit – which follows on from the inaugural AI Safety Summit held in Bletchley Park in November 2023 and the AI Seoul Summit held in May 2024 – while any nations clear view the technology as a strategical asset, the event is being organised in a way that is attempting to “bend the trajectory of AI globally towards collaboration, alternatively than competition”.

He added that collaboration was critical, due to the fact that “unless you’re the US or China”, you simply are not in the position to act nationalistically around AI.

Sovereign capabilities?

For Krasodomski, part of the solution to the clear geopolitical power imbalances around AI could lay in countries building up their own sovereign capabilities and method expertise, something that is already underway with AI safety institutes.

“Chatham home has come out powerfully in its investigation on this thought of public AI…the thought being it’s not good adequate for governments to be regulators,” he said. “They request to be builders, they request to have capacity and a mandate to deliver AI services, alternatively than simply relying on tremendous technology companies with whom they are incapable to strike an equal and balanced bargain over the technology that their population depends on.”

However, he stressed that currently, “it is not a fair conversation” for the UK and others due to the “realities that the AI manufacture is presently hyper concentrated in the hands of a fewer truly large companies” specified as Microsoft and Google.

Highlighting the power imbalance that be both in and between countries – which has hobbled the ability of many governments to invest in and rotation out their own digital capabilities – Krasodomski added that: “Each country will be vying about who can have the strongest relation with the only companies who are capable of deploying the capital at the scale that AI requires.”

In terms of what the UK government in peculiar should do, Krasodomski said another countries specified as Sweden, Switzerland and Singapore are building their own national AI models “because they recognise that this is their chance to take control”, adding that while the UK doesn’t necessarily gotta build a sovereign AI capability, it should at least build up method capacities within government that let it to start negotiating more effectively with the large providers.

Chloe MacEwen, elder manager of UK government affairs at Microsoft, added that the UK is uniquely placed in an increasingly multilateral planet to reap the benefits of AI, as its technological talent and expertise will aid contribute to “standard setting” and safety research.

She further added that the UK is well placed to build fresh markets around third-party audits and assurance of AI models, and that the cloud-first strategy of successive governments means organisations will have access to the underlying infrastructure essential for AI’s further proliferation.

She said that while the building blocks for AI are already in place in terms of the cloud infrastructure (in December 2023, Microsoft committed to investing £2.5bn in the UK over the next 3 years to more than double the size of its datacentre footprint), the next step for Microsoft is reasoning about is how to centralise and organise disparate data so it can be utilized effectively.

However, Linda Griffin, vice-president of global policy at Mozilla, said that cloud represents “a major geopolitical, strategical issue”, noting “it’s truly where quite a few the power of the AI stack lies at the moment, and it’s with a fistful of US companies. I don’t feel like we’ve balanced the risks very well.”

Open vs closed AI

Recounting how Mozilla and others spent the 1990s fighting to keep the net open in the face of its corporate enclosure, Griffin said that “we’re gearing up for the same kind of fight with AI”, noting how most AI investigation is now done by companies in-house without being openly shared.

She added while open approaches to AI are on the emergence – and have been gaining traction over the past year in peculiar with government’s, businesses, 3rd sector organisations and investigation bodies all looking for open models to build on – it’s inactive a threat to the current AI marketplace and how it’s shaping up.

“Markets change all the time, but any of the companies that trust on black box AI models are very threatened by open origin because, you know, competing with free can be hard, and they’ve done a truly good occupation of painting a image of open origin as ‘dangerous’,” she said, highlighting a study by the US’ National Telecommunications and Information Administration (NTIA) that researched the open versus closed model question.

It concluded that: “Current evidence is not adequate to definitively find either that restrictions on specified open weight models are warranted, or that restrictions will never be appropriate in the future.” Instead, it outlined the steps the US government should take if “heightened risks emerge” with the models down the line.

Griffin added: “There’s no evidence or data to back up [the assertion that open is dangerous]. All AI is going to be dangerous and hard to work with, but just due to the fact that something’s put in a black box that we can access or truly realize doesn’t mean it’s safer. They both request guardrails, open and closed.”

In line with Krasodomski, who stressed the request for governments to have greater control over the infrastructure that enables AI-powered public services, Griffin said it’s in the UK’s interests to “think ahead on how we can usage open origin to build”.

“Most companies won’t be able to pay for the expanding cost of these APIs, they’ve no control over how the terms and conditions will change. We know from 25 years of the Web and more that real innovation that benefits more people happens in the open, and that’s where the UK can truly shine.”

Griffin said that, ultimately, it comes down to a question of trust: “We hear quite a few hype about AI and public services, how it’s going to be transformative and save us loads of taxpayers money, which would be wonderful. But how are we expected to trust AI, really, at scale in healthcare, for example, unless we realize more about the data, how it’s been trained, and why it arrives at certain decisions? And the clear answer is: be more open.”



Source link

Idź do oryginalnego materiału