Embracing the new AI regulations – Setting the wheel in motion

Artificial Intelligence (AI) has emerged as an indisputable force in every industrial sector striving to stay aligned with ever-changing business dynamics. The AI market was valued at $62.35 billion in 2020 and is expected to grow at a CAGR of 40.2% between 2021 and 2028. The technology has seen rapid adoption in recent years, owing to its measurable benefits. It is not surprising, then, that AI has found applications across the legal industry, a sector traditionally regarded as conventional.

As technology evolves, it has become a global norm to regulate its application to address risks and ensure ethical use. In view of AI’s rapid development, the European Union (EU) has pioneered regulation with the publication of its “Proposal for a Regulation laying down harmonized rules on Artificial Intelligence and amending certain Union Legislative Acts”. While the EU has set this regulation in motion, it heralds the beginning of worldwide rule-making: lawmakers in the US and Australia have already begun work on their own AI-specific regulations. As a precursor, the five largest US federal financial regulators requested banks to share information on their AI systems in March 2020.

The EU’s AI regulation proposal is awaiting approval from the European Parliament and the Council of the European Union. Once approved, companies will be given time to transition existing systems to comply with the new guidelines. At first glance, there appears to be considerable time left for companies to implement the regulations, including extraterritorial companies working with clients and customers in the EU market. However, the wiser strategy is to begin the groundwork now, so that you are not caught by surprise, as many were with the adoption of the GDPR guidelines.

This article covers some of the key areas that companies building and leveraging AI should focus on to ensure a smooth transition and full compliance with the new EU AI regulations.

A dedicated team to interpret the new act: Working through every clause of the act is a substantial effort, but missing even one piece of information can lead to serious compliance issues and penalties. The non-compliance penalties are steep, with fines of up to €30 million or 6 percent of global revenue. With the law expected to come into force by 2023, you can assemble a team now to clearly understand what is expected of you as a business leveraging AI systems. For instance, the act classifies AI systems into risk categories, ranging from minimal to unacceptable risk, and the requirements for continued use vary by risk level. To navigate such nuances, you need experts who can guide your AI systems through a successful transition.

List your AI applications and their objectives: This is an inventory of your current and proposed AI systems, detailing how and where each is deployed, the purpose it serves, and so on. The list can then be mapped against the requirements stated in the act to evaluate which category each system belongs to. You can also extend this into fuller documentation covering training datasets, the general scope of application, performance attributes, and more. Over time, this becomes the go-to source for understanding how each of your AI systems functions.

Evaluate compliance with the proposed act: A cross-functional team comprising legal, technology, risk management, and data experts should assess all your AI systems against the requirements defined by the EU AI Act. Involving a multi-disciplinary team ensures a 360-degree view of your AI’s compliance. The team can also build a comprehensive risk assessment strategy for the product team to follow. If required, you can engage an external auditing team to confirm that a resilient compliance mechanism is in place.

Deploy a risk assessment plan: As you prepare to navigate uncharted waters, a risk assessment and mitigation plan is essential. It acts as a compass through the transition phase, helping you gauge potential threats and build safeguards to avert a crisis. While building the plan, document the risks your systems might encounter, any potential violations of rights, and unforeseen scenarios. With a cross-functional team involved, you can devise a comprehensive AI operational risk assessment plan that accounts for every possible facet.

Build a system for timely AI assessments: Ensuring compliance is not a one-time effort; it should be an ongoing process, with audits at defined intervals. Your compliance team can monitor industry trends and benchmarks specific to the regulations and measure your existing approach against them, ensuring that your AI systems meet the defined requirements and deliver the best experience to users.

With 2023 as the projected timeline for the regulations to come into effect, you might feel you have ample time in hand. But starting the actual work only as the deadline nears would be chaotic and error-prone, and would sap your team’s morale. The best bet is to get started now in a phased manner. This gives you time to review your internal systems and assess how the newly implemented change will affect existing processes, so you can make well-informed decisions and prepare your team to embrace the change.

Note: This is an expert piece by an AI/ML Training Services specialist at Cenza, a Managed Legal Services provider. As AI technology enters a critical phase of its journey with regulation on the horizon, this article aims to assist companies and AI providers as they prepare for the big transition.

About Cenza

We are a Managed Legal Services Provider with over 20 years of experience working with leading corporations, law firms, in-house legal departments, and financial institutions worldwide. AI and Machine Learning training is one of our prime services, leveraged by legal AI companies to enhance their tools’ efficiency and accuracy. What sets us apart is our lawyer-in-the-loop training, which accelerates the learning curve of AI platforms and helps them deliver the expected outcomes.

Raja

Raja leads Sales and Marketing at Cenza. His role involves creating strategy for day-to-day sales and marketing activities, with a focus on optimizing sales for Contract Migration, AI and ML training, Lease Management, and other managed legal services. He also helps CLM providers with cost-effective solutions for their contract extraction and migration needs.