It can feel like an endless race to keep up with technological developments, particularly in AI. It has been said, more than once, that life in the fifth industrial revolution (a world inhabited by ‘intelligent’ machines) is outpacing Moore’s Law. Time will tell, but one thing is certain: the pace of change is staggering.
This pace affects us as individuals and as professionals. To a large degree, AI has already embedded itself quietly into our lives, without fanfare and without the spectre of the Terminator! If you use streaming services, facial recognition, smart home devices, or, of course, ChatGPT, you’re using AI.
There’s much talk of whether AI will replace humans in the workplace. There may be a lack of consensus on the answer, but what is clear is that AI is already changing how we work, and we need to think carefully about its impact on us as individuals and as businesses.
Whilst there are powerful examples of ‘AI for good’ (healthcare screening, for instance), there are also more malign uses, such as deepfakes and huge-scale scams. The question vexing many governments, therefore, is what regulation, if any, needs to sit around the use of AI.
There seems to be little appetite for comprehensive regulation in the US, where so many of the ‘tech giants’ reside. But another big player in this area is the EU, which has taken a very different view.
In June 2024, the EU adopted the world’s first comprehensive rules on AI. The AI Act establishes a risk-based approach to regulating AI systems within the EU, with the aim of ensuring safety, protecting fundamental rights, and fostering trustworthy AI.
It classifies AI systems into four risk categories (a brief illustrative sketch in code follows the list):
- Unacceptable risk: AI systems that threaten fundamental rights are prohibited, including government-run social scoring systems, certain predictive policing applications, and other manipulative AI.
- High risk: Systems that significantly impact health, safety, or fundamental rights (e.g., AI in recruitment and selection or critical infrastructure) must meet strict requirements for market access, such as data quality and human oversight.
- Limited risk: AI systems with a risk of manipulation or deceit, such as chatbots and generative AI systems, must clearly indicate that users are interacting with an AI system, and any deepfakes should be identified as such.
- Minimal risk: Most other AI systems, such as AI-enabled spam filters, don’t have any restrictions or mandatory obligations; however, it’s recommended to follow general principles such as human oversight, non-discrimination, and fairness.
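For those who like to see structure expressed as code, here is a minimal, purely illustrative Python sketch of the four-tier scheme. The names (`RiskTier`, `EXAMPLE_SYSTEMS`, `headline_obligation`) are hypothetical, the tier assignments and headline obligations simply restate the article’s summary above, and none of this is a compliance tool or legal advice.

```python
from enum import Enum

class RiskTier(Enum):
    """The four tiers described above (illustrative labels, not legal definitions)."""
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict market-access requirements (e.g. data quality, human oversight)"
    LIMITED = "transparency duties (disclose AI interaction, label deepfakes)"
    MINIMAL = "no mandatory obligations; general principles recommended"

# Hypothetical mapping of the article's own examples to tiers.
EXAMPLE_SYSTEMS = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "certain predictive policing applications": RiskTier.UNACCEPTABLE,
    "recruitment and selection AI": RiskTier.HIGH,
    "critical infrastructure AI": RiskTier.HIGH,
    "customer-facing chatbot": RiskTier.LIMITED,
    "deepfake generator": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def headline_obligation(system: str) -> str:
    """Return the headline consequence recorded for a known example system."""
    tier = EXAMPLE_SYSTEMS.get(system)
    if tier is None:
        raise KeyError(f"No tier recorded for: {system}")
    return f"{system}: {tier.name} risk -> {tier.value}"

if __name__ == "__main__":
    for name in EXAMPLE_SYSTEMS:
        print(headline_obligation(name))
```

Running the sketch simply prints each example system alongside its tier and headline consequence.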
It’s interesting to note that the narrative around its implementation focuses heavily on promoting AI that is human-centric, safe, transparent, non-discriminatory, and environmentally friendly, with the objective of fostering innovation and public trust. The Act is seen by many as a key pillar of economic health and growth: regulation, done well, can foster the positives whilst putting guardrails in place to protect against the negatives.
Ultimately, AI is heavily reliant upon data and, as such, will often be caught by existing legislation, such as data protection law. So, whilst the Channel Islands do not appear to have an appetite for AI-specific legislation, it’s worth getting familiar with the EU Act, because it sits neatly within the existing legislative framework and encourages us to think about our own engagement (or not!) with AI and associated technologies.
The Act creates uniform rules for AI systems, ensuring consistent implementation across all EU member states. Whilst we may sit outside the reach of this legislation, we will no doubt be impacted by the way in which businesses and organisations across the EU are required to conduct themselves.
Taking the time to survey the landscape in this area is good for us all, and it’s important for businesses to engage proactively with both the opportunities and the risks that life in this AI era presents.
This isn’t a question of ‘if’ AI becomes relevant to you and your business, but rather a question of ‘Now that AI is embedded into our lives, how are we exploiting those opportunities and mitigating those risks?’. None of us can afford to leave that deliberation to others.
At the very least, you should:
- Understand what AI is and, importantly, what it isn’t.
- Have an accurate picture of which tools are being used in your organisation. For example, are your staff using ChatGPT or other chatbots at work?
- Understand and manage the risks.
- Ask what you want the technology to do, rather than simply what it can do (they may well be different things!).
AI is not just a question of technology; it’s a question of business practice, professional ethics, culture, values, and so much more. True intelligence does not lie in the circuitry of a machine; it lies within each one of us, and we need to engage it now more than ever.
Key timeline of the EU AI Act
- 21 April 2021 – EU Commission proposed the AI Act
- 13 March 2024 – EU Parliament approved the draft law
- 1 August 2024 – AI Act enters into force (start of 24-month transition period)
- 2 February 2025 – Ban on AI systems posing ‘unacceptable risk’ takes effect
- 2 August 2025 – Governance rules and obligations for general-purpose AI models apply
- 2 August 2026 – Obligations for high-risk systems come into effect (end of 24-month transition period)
- 2 August 2027 – Full application of the Act, including obligations for high-risk AI embedded in regulated products