AI is playing an increasingly important role in the competition for technological pioneering roles. Companies are increasingly using AI assistants to automate processes or answer customer enquiries via chatbots, for example. However, the first AI agents are also finding their way into our everyday lives and are designed to take over complex actions and make decisions for us as “personal assistants”.
These new AI helpers have many advantages – but what risks and legal issues should companies take into consideration?
What are AI assistants and AI agents?
AI assistants perform specific tasks in response to direct input from the user: chatbots in online shops answer questions about orders and returns, voice assistants such as Alexa or Siri play music, smart tools suggest formulations in email programmes or translate texts in real time.
In contrast, AI agents act autonomously. They analyse their environment, make decisions independently and carry out complex actions without being dependent on direct input. A current example of AI agents is the operator in the research preview of ChatGPT Pro (OpenAI), which uses image recognition to learn user behaviour in the web browser and then independently performs actions on websites, e.g. entering payment information, placing orders and sending messages.
What are the benefits and risks of AI assistants and agents?
AI assistants are geared towards specific user commands or requests and primarily solve clearly defined tasks. AI agents, on the other hand, act more independently and make decisions within a predefined framework or actively initiate processes. While an AI assistant primarily provides support, an AI agent takes on a more proactive role with a higher degree of autonomy.
AI assistants and AI agents therefore open up a wide range of opportunities for companies to develop technologically and hold their own against the competition. However, these systems are dependent on a lot of data, are susceptible to manipulation from outside and the risk of their use increases with their degree of autonomy. It is therefore important to clarify the legal framework at an early stage. This is the only way to minimise risks and establish sustainable, future-proof AI solutions.
What legal aspects need to be considered?
As with all AI applications, the use of AI assistants and AI agents is subject to the applicable legal requirements. In addition to issues relating to data protection and data security, IT security and liability risks, copyright and consumer rights also play a key role, as do the (future) requirements of the AI Act. Contractual agreements – whether with customers, developers or cooperation partners – can also play a decisive role.
-
Data protection and data security
With regard to personal data, such as names, addresses, but also IP addresses and other data that can be assigned to a natural person, the requirements of the GDPR also apply when using AI assistants and AI agents. Companies must ensure that they have a valid legal basis (Art. 6 GDPR) for each processing operation. Possible legal bases include consent (for example, when using a personalised AI chatbot on a website), contract fulfilment (for example, if the AI assistant is to provide the agreed support service) or the company’s legitimate interest (for example, data processing to optimise a chatbot to improve customer service).
In addition, the other requirements of the GDPR must also be observed, in particular measures for data minimisation and data security (e.g. encryption, access restrictions) must be implemented and data subject rights (e.g. information and erasure) must be observed. Last but not least, data processing by AI systems is regularly assumed to be a particularly high-risk application, so that a data protection impact assessment (DPIA) will generally be required, at least for AI agents.
-
Cybersecurity
AI assistants and AI agents can be an attractive target for external attacks due to their networking and access to extensive databases. Manipulated inputs, so-called prompt attacks or malware infiltration, can lead to the AI system providing false information or unauthorised access to sensitive data. There is also a risk that automatically generated content could be misused for phishing emails or other illegal activities.
Companies should therefore invest in robust defence mechanisms at an early stage: In addition to established security standards such as encryption or access restrictions, regular penetration tests, clearly defined incident response processes and continuous security updates are all part of a solid cybersecurity strategy.
-
Liability and responsibility
Companies that use AI in their own name are generally liable for faulty AI. However, companies are also often responsible for damage caused by copyright infringements, the unintentional disclosure of trade secrets, breaches of competition or unwanted contracts and incorrect advice from AI assistants and agents: if an AI chatbot results in a contract with the user, for example, the company is generally bound by it.
Further information can be found in our article “AI and liability in practice – an overview“.
-
Copyright and intellectual property
When developing and implementing AI assistants and AI agents, companies must carefully check whether and how protected works are used. This also includes clear documentation of training data and prompting processes in order to minimise liability risks. IP law is also relevant with regard to output: aI assistants and AI agents often create texts, images or other content. As a rule, these are not eligible for protection as they lack human intellectual creation. However, there can be a risk of copyright infringement if the output is too closely modelled on protected works or adopts parts of them. Companies must therefore ensure that no unauthorised use of third-party content takes place – especially if the content created by the AI assistants is published or passed on to third parties.
-
AI Regulation (AI Act)
AI assistants and AI agents must also comply with the requirements of the AI Regulation – at least in the near future. The EU’s AI Act provides for a tiered system of obligations based on the risk class of the AI application and the respective role of the company along the value chain:
-
Prohibited AI systems (Unacceptable Risk)
Prohibited are AI assistants or agents that use subliminal manipulative techniques, exploit the vulnerability of certain groups or carry out a social assessment that discriminates against people (social scoring). Targeted manipulation of users of AI voice assistants, targeted deception in company chatbots or the use of deepfake technology to persuade children or elderly people to take actions that they would not otherwise have taken are therefore prohibited without exception.
-
High-risk AI systems (High Risk)
AI assistants and AI agents can constitute high-risk AI systems if they fall under one of the laws listed in Annex I of the AI Regulation as a product or safety-relevant component, e.g. if they are used in machines, lifts or medical devices.
The obligations for high-risk AI systems also apply if the small AI assistants are used as intended in one of the high-risk areas listed in Annex III, for example in critical infrastructure, in education or training or in access to essential services. This is particularly the case if the AI agent can make significant automated decisions. A different conclusion can only be reached if AI is only used for preparatory purposes or with little influence on decisions – this is more likely to be the case with AI assistants. In the case of AI agents, which are characterised by decisions that are as autonomous and far-reaching as possible, it is more likely to come to the conclusion that the specific agent represents a high-risk AI system. In particular, if natural persons are profiled in such areas, a high risk must be assumed.
-
Systems with limited or low risk
Both AI assistants and AI agents in the high-risk area and those without a high risk will regularly have to fulfil the transparency obligations under Art. 50 of the AI Regulation in practice. Among other things, these explicitly apply to AI systems for direct interaction with humans, which is already the case with classic AI chatbots and should also regularly apply to AI agents.
-
Contractual implementation
Finally, anyone integrating AI assistants or AI agents into their business model should set up the contractual constellations properly. This includes contracts with external service providers, e.g. the processing of personal data (as part of order processing), regulations on availability, response times and update cycles in service level agreements (especially for cloud-based AI solutions (“AI SaaS”), clear licence and usage conditions as well as liability and warranty regulations.
Conclusion
AI assistants and AI agents open up numerous opportunities for companies to organise processes more efficiently and improve customer interactions. However, it is important to clearly recognise the different features: AI assistants provide users with targeted support for clearly defined tasks and require direct input, while AI agents can make decisions independently and control processes autonomously. This greater autonomy of AI agents increases legal challenges, particularly in the areas of data protection, data security, in accordance with the AI Regulation and liability issues.
Companies must observe the specific legal requirements, particularly those of the GDPR and the AI Act, at an early stage and make clear contractual agreements, supplemented by comprehensive but pragmatic documentation and internal compliance structures.
Would you like to build your AI projects on a solid, legally compliant foundation and optimally utilise the potential of AI assistants and AI agents? We would be happy to advise you – contact us!