AI: Benefits, Examples, and (Legal) Implications

Good morning, Legal People – January 25, 2024, with speakers:

Anny Ho – Manager Data Privacy and Protection Europe at Lucid Motors

Arash Rahmani – Board Member at the ISACA NL Chapter and currently interim Chief Information Security Officer (CISO) at the Nederlandse Zorgautoriteit


AI: An Introduction

Over the past 20 years, Arash Rahmani has seen many new technologies emerge. A significant portion of these were “shiny objects”—novelties that received excessive attention and generated a high level of hype but ultimately did not deliver the major business or societal impact that was expected. Technologies such as blockchain, big data, the metaverse, and VR fall into this category.
On the other hand, there are technologies that truly cause a paradigm shift—a fundamental change in perspective or approach that leads to transformation. Examples include the computer itself or mobile telephony. The development we are witnessing now is one of those transformative moments: the rise of Artificial Intelligence (AI).

Rahmani’s first encounter with AI was in 2015, during a project on pattern recognition in video footage. Since then, he has seen AI become ubiquitous, fully integrated into everything we do. The chatbot in an online store uses AI, as does the active noise cancellation in video conferencing applications.

What we now call AI is, in that sense, not actually new. Technological change happens gradually, but most people experience it as rapid. AI has been around for a long time and is already embedded in many systems, but only with the development of accessible applications like ChatGPT has it truly entered everyday life.
Over the coming years, the integration of AI into our daily lives will increase significantly.

Risks

There are fantastic applications of AI, but there are also risks involved. For example, the actual intelligence of AI is often overestimated. AI excels at pattern recognition and can generate new content based on those patterns. However, those patterns are drawn from existing data, analyzed at incredible speed through training and reward systems. The GPT in ChatGPT stands for Generative Pre-trained Transformer: the model is pre-trained. One of the risks is that AI can be trained incorrectly, which Rahmani calls "garbage in, garbage out." There have been cases where a social media chatbot quickly turned racist because it was flooded with dubious content. Additionally, the output of generative AI depends on the input, meaning it can be manipulated into deviating from its intended purpose.
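The "garbage in, garbage out" effect can be illustrated with a deliberately simplified sketch (not drawn from the talk; the data and labels are hypothetical): a toy word-count classifier trained on skewed examples inherits that skew in its output, just as a far larger model inherits the biases of its training data.

```python
# Minimal sketch (hypothetical data, not a real AI model): a toy
# word-count "sentiment" classifier illustrating "garbage in,
# garbage out". If the training data is skewed, the learned
# patterns are skewed too.
from collections import Counter

def train(examples):
    """Count how often each word appears per label."""
    counts = {"positive": Counter(), "negative": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def classify(counts, text):
    """Pick the label whose training words best match the input."""
    words = text.lower().split()
    scores = {label: sum(c[w] for w in words) for label, c in counts.items()}
    return max(scores, key=scores.get)

# "Garbage in": a hypothetical training set in which every mention
# of "chatbot" happens to be labelled negative.
biased_data = [
    ("the chatbot gave a wrong answer", "negative"),
    ("the chatbot was rude", "negative"),
    ("the support team was helpful", "positive"),
    ("great service and fast reply", "positive"),
]

model = train(biased_data)

# "Garbage out": a neutral sentence about a chatbot is classified
# as negative purely because of the skewed training data.
print(classify(model, "the chatbot answered my question"))  # -> negative
```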

AI and the Legal Field

As in many other fields, AI is making its way into the legal world. Key applications to watch include Harvey AI and Pre/dicta. Harvey AI assists lawyers by accelerating manual tasks, improving efficiency in legal work. Pre/dicta analyzes data to predict judicial decisions and assess how a judge’s ruling might be influenced.
Beyond assisting legal professionals, AI will also impact the course of legal proceedings. Historically, much attention has been given to the reliability and availability of information. However, AI—through technologies like voice cloning—is making it harder to verify the origin and integrity of information.
Video and audio content can now be AI-generated, making it more difficult to use as evidence in court.

European Legislation: The AI Act

With the AI Act, Europe is introducing legislation on AI. Anny Ho notes that the legislation not only imposes legal obligations but also provides practical guidance for managing the impact of AI. The AI Act defines AI broadly, meaning that many applications can fall within its scope. Moreover, the law applies not only to those who develop AI but also to those who bring it to market or use it.

The AI Act follows a risk-based approach: the higher the risk, the greater the obligations.
Some applications are classified as "unacceptable risk", such as social scoring or emotion recognition in the workplace or in schools; these will be banned. Many of the AI systems organizations use, however, fall into the "high-risk" category. This includes AI-driven hiring tools that analyze applicants and resumes, workforce management systems, and applications that determine access to essential services, such as credit approvals or insurance.
According to Ho, the AI Act provides strong legal frameworks. This gives guidance, but it also limits flexibility in developing AI solutions. Unlike the GDPR, this legislation appears less open to interpretation in terms of how obligations must be fulfilled.
At a time when everyone wants to implement AI, the EU is focusing on ethical and responsible usage.

The high-risk category carries the most obligations for organizations, requiring measures such as an adequate risk management system, data governance and management, security measures, and robustness.

For lower-risk AI applications, only transparency obligations may apply, such as informing users that AI is involved. Although it is still unclear which authority in the Netherlands will oversee enforcement, compliance is crucial: non-compliance can lead to market access restrictions or fines of up to €35 million. The final text of the AI Act is expected this spring, with the ban on "unacceptable risk" applications taking effect later this year and broader obligations following by mid-2025.

Since most organizations will use AI systems that fall into the high-risk category, it is advisable to assess the impact of AI usage as early as possible. Organizations should establish a multidisciplinary AI task force including legal, IT, and security experts, and ensure that AI Act obligations are reflected in contracts. Additionally, they should draw up an inventory of their AI systems, identify risks and responsibilities, and develop an AI compliance plan.
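As a rough illustration of how such an inventory and the Act's tiered structure could fit together, consider the sketch below. The tier names follow the Act's risk-based approach, but the example systems and the obligation summaries are hypothetical and non-exhaustive; they are not legal advice or the Act's actual text.

```python
# Illustrative sketch only: tier names follow the AI Act's risk-based
# structure, but the obligation summaries are simplified and the
# inventory entries are hypothetical.
OBLIGATIONS = {
    "unacceptable": ["prohibited: may not be placed on the EU market"],
    "high": [
        "risk management system",
        "data governance and management",
        "security and robustness measures",
    ],
    "limited": ["transparency: inform users that AI is involved"],
    "minimal": ["no specific AI Act obligations"],
}

# Hypothetical output of the AI task force's inventory and risk
# assessment described above.
INVENTORY = {
    "social scoring tool": "unacceptable",
    "resume screening tool": "high",
    "customer service chatbot": "limited",
    "spam filter": "minimal",
}

for system, tier in INVENTORY.items():
    print(f"{system} ({tier} risk):")
    for duty in OBLIGATIONS[tier]:
        print(f"  - {duty}")
```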

In Conclusion

AI developments affect all organizations, with the AI Act being just one part of a much broader regulatory landscape surrounding digital transformation.
In the boardroom, leaders must look beyond immediate concerns, which can be especially challenging for older organizations with little competition, where board positions are filled on the basis of networks rather than expertise. Every organization should have its own AI governance task force to manage the impact of AI, as there is no one-size-fits-all approach here.