EU AI Act's Impact on Chat Services

Mar 27, 2024


The EU AI Act will apply to chatbots and chat services powered by AI. At the same time, automated customer engagement is becoming more of a competitive advantage every day. Now that the European Parliament has approved the EU AI Act, it's time to consider how it will affect your operations.

The European Union sets a global precedent with its comprehensive approach to regulating artificial intelligence (AI). The proposed AI Act aims to safeguard EU citizens against the potential risks posed by AI technologies while fostering innovation and trust in digital services. The legislation outlines new standards for AI applications, including chat services. Let's explore the EU AI Act's applicability, status, operational framework, and best practices for chatbots to ensure transparency and trustworthiness.

What is the EU AI Act in a Nutshell?

The EU AI Act is the world's first binding, comprehensive regulation of artificial intelligence, setting a common framework for the use and supply of AI systems within the EU. It classifies AI systems using a risk-based approach, with distinct requirements and obligations tailored to each category: AI systems presenting 'unacceptable' risks, 'high-risk' AI systems, systems with limited risk mainly tied to a lack of transparency, and systems posing minimal risk. The regulation also sets out specific rules for general-purpose AI models, especially those with 'high-impact capabilities' that could pose systemic risks.

Source: Digital Strategy European Commission

Who does the AI Act Apply To?

The AI Act has broad applicability, affecting many stakeholders within the EU and beyond. Primarily, it applies to providers and deployers of AI systems who are either established in the EU or place their AI systems on the EU market. Furthermore, it extends to providers and users of AI systems established outside the EU, provided the output of their systems is used within the Union. This wide-ranging applicability ensures that the act covers virtually all AI systems used within the EU, irrespective of where they are developed or deployed.

Has the EU AI Act been Passed?

As of the latest updates, the European Parliament's plenary approved the AI Act in March 2024, after EU member states reached a political agreement on it in December 2023. The act must still be formally endorsed by the Council and published in the EU's Official Journal. It then enters into force twenty days after publication and becomes fully applicable 24 months later, with staggered timelines depending on the level of risk: prohibitions on 'unacceptable-risk' practices apply after six months, rules for general-purpose AI models after twelve months, and obligations for certain high-risk systems only after up to 36 months.

How does the EU AI Act Work?

The EU AI Act introduces a comprehensive and nuanced approach to AI regulation, focusing on a risk-based categorization of AI systems. This framework ranges from outright prohibiting certain AI practices deemed as posing 'unacceptable risks' to imposing strict requirements on 'high-risk' AI systems and setting transparency obligations for AI systems that interact with humans, such as chatbots. The act mandates that high-risk AI systems undergo rigorous testing and compliance procedures, including risk management and technical documentation, before being introduced to the market. Furthermore, it envisions creating a European Artificial Intelligence Board to oversee and facilitate consistent application of the act across member states.

Risk levels for chat services under the EU AI Act

No specific risk level is assigned to chat services as such; it depends on what the automated conversation is used for. Most likely, your chatbot will be a limited-risk application of AI, subject to a limited set of transparency obligations.

Your chatbot would be outright banned if it is used for harmful AI practices that threaten people's safety, livelihoods, or rights, such as manipulation, exploitation of vulnerable groups, social scoring by public authorities, or certain types of biometric identification.

Your AI chat service qualifies as a high-risk system if it profiles individuals, for example through automated handling of personal information to assess employment performance, financial status, health conditions, trustworthiness, or location. Certain industries (banking, insurance, healthcare, …) will have to pay special attention to the use cases automated by their chat conversations. Annex III of the act gives an overview of use cases considered high-risk that could apply to automated chats.

Using your chat service for emotion recognition, biometric categorisation, or generating or manipulating image, audio, or video content (such as deepfakes) also brings additional obligations: the former two are generally treated as high-risk, while AI-generated content such as deepfakes must at a minimum be clearly disclosed as artificially generated.

EU AI Act Best Practices for Chatbots

Chat services that use AI to simulate conversations with human users receive special mention in the EU AI Act, particularly regarding transparency and trustworthiness. Here are some best practices for companies to align their chat services with the act:

  • Transparency and Disclosure: Companies must ensure that users are aware they are interacting with a chatbot. This involves clearly disclosing the AI nature of the service upfront, allowing users to make informed choices about their engagement.

  • Data and Privacy Protection: Following GDPR and other relevant data protection laws is crucial. Chatbot operators and developers must ensure that personal data collected through chat services is handled securely and ethically.

  • Human Oversight: Incorporate mechanisms for human oversight, allowing human intervention in decisions made by chatbots. This ensures that a human can review and correct decisions impacting users’ rights and safety if necessary.

  • Accuracy and Reliability: Regularly monitor and evaluate the chatbot's performance to ensure accuracy and reliability. Update and train your AI systems with diverse and representative data sets to minimize biases and inaccuracies.

  • User Feedback: Implement channels for user feedback on chatbot interactions. This feedback can be invaluable for improving AI performance, enhancing user experience, and identifying areas where human intervention may be necessary.

  • Ethical Considerations: Beyond legal compliance, consider the ethical implications of chatbot interactions. This includes respecting user autonomy, avoiding manipulative practices, and ensuring fairness and non-discrimination in automated decisions.
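To make the first three practices concrete, here is a minimal, purely illustrative Python sketch of a session wrapper for a chat service. All names (`ChatSession`, `AI_DISCLOSURE`, the `"human"` escalation keyword) are hypothetical assumptions, not part of any real framework or of the AI Act itself; the sketch simply shows upfront AI disclosure, a human-escalation path, and logging of each exchange for review.

```python
from dataclasses import dataclass, field

# Hypothetical disclosure text shown before the first automated reply.
AI_DISCLOSURE = (
    "You are chatting with an automated assistant. "
    "Type 'human' at any time to reach a person."
)

@dataclass
class ChatSession:
    """Illustrative session wrapper: transparency, oversight, record-keeping."""
    transcript: list = field(default_factory=list)
    disclosed: bool = False
    escalated: bool = False

    def respond(self, user_message: str, bot_reply: str) -> str:
        # Transparency: disclose the AI nature before the first reply.
        prefix = ""
        if not self.disclosed:
            prefix = AI_DISCLOSURE + "\n"
            self.disclosed = True
        # Human oversight: let the user escalate out of the automated flow.
        if user_message.strip().lower() == "human":
            self.escalated = True
            reply = "Connecting you with a human agent."
        else:
            reply = bot_reply
        # Record-keeping: log the exchange for later review and feedback.
        self.transcript.append((user_message, reply))
        return prefix + reply
```

In a real deployment the disclosure wording, escalation triggers, and retention rules would need legal review; the point is only that each obligation maps to a small, testable piece of the conversation flow.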

If your chat service is considered a high-risk AI system, more stringent rules apply to its deployment, such as maintaining a risk management system, implementing robust data governance, and keeping technical documentation and records.

Beyond Regulation: Fostering Trust in AI Chat

The EU AI Act sets a new benchmark for the responsible use of AI technologies, particularly in services directly interacting with consumers, like chatbots. Understanding and applying these principles is not just about regulatory compliance; it's about creating a future where AI enhances human interactions without compromising trust and privacy. By embracing these best practices, companies can not only align with the upcoming EU AI Act but also lead the way in establishing a more ethical, transparent, and user-centric AI landscape.

For chat services, the implications are clear: a move towards greater transparency, enhanced user rights, and stringent oversight to ensure that AI-driven interactions promote rather than undermine user trust and safety. Companies that lead in implementing these practices not only position themselves favorably within the regulatory landscape but also differentiate their services in an increasingly competitive market where consumers value ethical services.


© Requesty Ltd 2024