Industries braced as the EU AI Act starts to prohibit AI practices

As one of the world's most comprehensive AI regulations, the European Union's Artificial Intelligence Act (the EU AI Act) officially came into force in August 2024, regulating activities across the full AI lifecycle to ensure fair and ethical usage, safeguard fundamental rights and create a transparent AI ecosystem.

The Act itself becomes generally applicable two years later, in August 2026; however, it is already shaping the AI landscape, as the first tranche of key provisions took effect on 2nd February 2025.  This milestone marks the beginning of prohibitions on certain AI systems, with ramifications for many industry sectors, including educational establishments and workplaces.

Prohibited AI practices

From February 2025, the EU started to enforce bans on AI systems deemed to pose an "unacceptable risk" to individuals' rights and safety.  These include AI systems designed to manipulate people's behaviour in ways that cause harm, AI tools that exploit vulnerabilities such as age or disability, and government or private-sector social scoring systems that rank individuals based on behaviour or characteristics.

More directly, real-time facial recognition and biometric surveillance using AI are now banned in public spaces, with limited exceptions for serious crimes or security threats.  These prohibitions aim to ensure that AI development aligns with ethical principles and fundamental rights while preventing potential misuse in both public and private sectors.

Emotion recognition in workplaces and places of education is also covered by the prohibition, as are biometric categorisation systems that infer sensitive characteristics.

Impact on educational establishments and workplaces

The education sector must now carefully consider the deployment of AI systems used for admissions, grading, and student monitoring.  Schools, universities, and training institutions using AI tools must ensure that decision-making remains fair and transparent, reducing biases in grading and admissions.  Students and teachers must be informed about how AI is used and what decisions it could influence, ensuring that there is clarity in its application.  By enforcing transparency and fairness, the EU seeks to prevent AI from creating disadvantages in educational opportunities and ensure that students are not unfairly assessed by automated systems.

In workplaces, employers must disclose the use of AI in hiring and promotions, ensuring that applicants and employees understand how AI influences decisions.  Employers can no longer use AI tools that infer staff emotions, closing off a potential avenue for misuse of automated monitoring in offices.  As in other sectors, employers must implement transparency measures that enable employees to contest AI-driven decisions affecting their employment status.  With these regulations, the EU aims to strike a balance between leveraging AI for efficiency and protecting workers' rights.

Impact on other sectors

Beyond education and workplaces, the Act has far-reaching implications for other sectors.  In healthcare, for example, AI applications used in medical diagnosis, treatment recommendations, and patient monitoring must meet strict transparency and safety requirements, and AI systems deployed in medical settings must now undergo rigorous testing to ensure accuracy and reliability.  In finance, AI-driven risk assessment and credit-scoring models must be free from bias and provide clear explanations for automated decisions that affect consumers' access to financial services.  The public sector will also see increased regulation, with AI systems used in law enforcement and public administration needing to comply with fairness and accountability standards.

Looking ahead

The February 2025 deadline marks a major step in the EU's effort to create a safe, transparent, and human-centric AI landscape.  The next major milestone arrives in May 2025, when the EU AI Office begins facilitating the development of codes of practice covering the obligations on providers of general-purpose AI.

Beyond that, August 2026 marks the date when obligations on high-risk AI systems come into effect, namely those used in biometrics, critical infrastructure, education, employment, access to essential public and defined private services, law enforcement, immigration, and the administration of justice.

By enforcing these initial restrictions, the EU is setting a global precedent for responsible AI regulation.  Organisations must now begin to comply with these rules or face strict penalties, under a regime designed to ensure that AI serves society fairly and ethically.


About the author

Simon Forrest

As Principal Technology Analyst for Futuresource Consulting, Simon is responsible for identifying and reporting on transformational technologies that have the propensity to influence and disrupt market dynamics. A graduate in Computer Science from the University of York, he has expertise extending across broadcast television and audio, digital radio, smart home, broadband, Wi-Fi and cellular communication technologies.

He has represented companies across standards groups, including the Audio Engineering Society, DLNA, WorldDAB digital radio, the Digital TV Group (DTG) and Home Gateway Initiative.

Prior to joining Futuresource, Simon was Director of Segment Marketing at Imagination Technologies, promoting the development of wireless home audio semiconductors, and Chief Technologist at Pace plc (now CommScope), responsible for technological advancement within the pay-TV industry.
