
AI Regulation Takes Another Step Forward

Prag Jaodekar

Technology Director - CTO Office, Synechron UK


The British Standards Institution (BSI), the national standards body of the United Kingdom, last week announced a first-of-its-kind AI management system standard designed to enable the safe, secure and responsible use of Artificial Intelligence (AI) across society, following its research suggesting that 61% of people surveyed want global guidelines for the technology.

The international standard (BS ISO/IEC 42001) is intended to assist firms in the responsible use of AI. It addresses issues such as non-transparent automated decision-making, the use of machine learning rather than human-coded logic in system design, and continuous learning. The BSI produces technical standards for a wide range of products and services, and also supplies certification and standards-related services to businesses.

UK government rules likely to follow

This follows recent reports that the UK government is set to publish a series of tests that would need to be met before it passes new laws on AI, an approach driven by an apparent resistance to calls for a tougher regulatory regime for the technology. These criteria, due to be published by ministers in the coming weeks, will set out the circumstances in which the government would curtail powerful AI models created by leading AI providers.

It is reported that among the “key tests” that would trigger an intervention is if the systems put in place by the UK’s new AI Safety Institute — a government body composed of academics and machine learning experts — fail to identify risks around the technology. Another scenario that could trigger legislation is if AI companies fail to uphold voluntary commitments to avoid harm.

Movement in Washington

And these talks are progressing elsewhere in the world. Back in September 2023, a delegation of technology leaders including Sundar Pichai, Elon Musk, Mark Zuckerberg and Sam Altman met with US senators in Washington in a closed-door meeting to discuss the rise of AI and how it should be regulated. This “AI safety forum” was one of several scheduled meetings between Silicon Valley, researchers, labor leaders and US government officials. President Biden also last year issued an executive order on safe, secure, and trustworthy AI, designed to ensure that “America leads the way in seizing the promise and managing the risks of AI.”

Other senators have said they will introduce a bipartisan bill on artificial intelligence, including rules for licensing and auditing AI, liability rules around privacy and civil rights, as well as data transparency and safety standards. Plans have also been set out to create an AI oversight office for regulation.

And EU regulation is solidifying

And this news follows the EU announcement in December 2023 that the Council presidency and the European Parliament’s negotiators had reached a provisional agreement on harmonized rules on AI – the so-called ‘artificial intelligence act’. This draft regulation aims to ensure that AI systems placed on the European market and used in the EU are safe and respect fundamental rights and EU values. The proposal also aims to stimulate AI investment and innovation in Europe. Meanwhile, G7 leaders have recently agreed to create a voluntary AI code of conduct. And as far back as 2018, the Monetary Authority of Singapore (MAS) released a similar set of principles to promote fairness, ethics, accountability and transparency (FEAT) in the use of AI and data analytics in finance.


Synechron says

Establishing standards for safe and regulated artificial intelligence is hugely important. AI, and Generative AI in particular, holds immense transformational potential for businesses. But leaders need to ensure that pace and innovation are balanced with caution, by focusing on AI security, compliance and ethics. We use AI to power innovative and practical applications that augment human capabilities and enhance our clients' operations, without compromising on safety. As regulatory compliance catches up with AI technology, it is imperative that AI systems are developed with security, compliance and ethics in mind from the very beginning, to mitigate retrospective compliance costs. Responsible AI policy and governance frameworks must be built into firms’ operating models as they start their journey towards AI adoption.

Find out more about our AI solutions →

The Author

Prag Jaodekar

Technology Director - CTO Office

Prag Jaodekar is Technology Director in the CTO Office of Synechron, based in the UK. He supports Synechron’s business units and clients with strategy, architecture and engineering expertise that spans Synechron’s business and technology capabilities. Prag has more than 18 years of experience as a technology consultant and application architecture specialist, creating IT strategies and delivering solutions for many top-tier banks across the financial services industry.

To learn more about Synechron’s perspective and capabilities across these mainstay and emerging technologies, or to learn how we can advise your company on ways to deploy them for business optimization, please reach out to: cto-office@synechron.com
