
Generative AI, especially current Large Language Models (LLMs) like ChatGPT, is on everyone’s lips. The “gold rush” is in full swing. Individuals and companies alike are trying to incorporate this revolutionary technology into their everyday lives and workflows in order to increase their productivity. However, Samsung’s recent example illustrates that the use of ChatGPT in a professional environment is fraught with risks.

In this article, we highlight the legal risks of AI use in the context of the German Trade Secrets Protection Act and present possible solutions.

Samsung and ChatGPT

The Economist reported that the South Korean technology manufacturer Samsung had allowed engineers in its semiconductor business to use ChatGPT to support their work. In the course of this use, some employees entered confidential information into ChatGPT. The data included source code from proprietary applications that was fed into the AI for debugging purposes. In addition, confidential internal meeting notes were used to create presentations with the help of ChatGPT. OpenAI, the provider of ChatGPT, states in its terms of use that content entered by users in the chat can be stored and used for the further development and improvement of the service. This means that Samsung’s confidential information is now held by OpenAI.

Samsung has recently put the use of ChatGPT and similar generative AI models by its employees on hold in order to create internal company safeguards to ensure the secure use of generative AI. According to the reports, Samsung is now also considering developing its own AI solution to prevent such issues.

ChatGPT and the German Trade Secrets Protection Act

Samsung’s case is unlikely to be the only one. More and more companies and employees will resort to the latest AI applications and integrate them into their workflows and business processes. For companies in Germany, this raises the question of whether using ChatGPT can lead to a violation of the German Trade Secrets Protection Act (GeschGehG).

What constitutes a trade secret is defined in Section 2 No. 1 GeschGehG:

“For the purposes of this Act […] a trade secret is information,
a) which is not generally known or readily accessible, either in its entirety or in the precise arrangement and composition of its components, to persons in the circles which normally deal with this type of information and is therefore of economic value; and
b) which is the subject of measures of secrecy appropriate in the circumstances, taken by its rightful owner; and
c) for which there is a legitimate interest in maintaining confidentiality.”

With respect to the information to be protected, the adoption of “appropriate secrecy measures” by the owner of the trade secret is required. According to Section 2 No. 2 GeschGehG, the owner of a trade secret is “any natural or legal person who has lawful control over a trade secret.” Usually, this will be the employer.

Possible confidentiality measures to be considered here include, in particular:

  • contractual measures (confidentiality agreements),
  • organisational measures (definition of responsibilities, security concept) and
  • technical and physical safeguards (firewall, safe, password protection).

In companies that take the necessary measures, especially prototypes, source code, pricing structures, financing information and other highly sensitive information will often fall under the concept of trade secrets. Employees are also increasingly subject to specific confidentiality obligations under employment contracts. In addition, the conclusion of non-disclosure agreements (NDAs) is also part of everyday business life in the B2B sector. Any person who has access to the content that is to be classified as a trade secret is obliged to protect this secret and is not permitted to disclose it to unauthorized third parties.

If a person who is obligated to maintain a trade secret enters it into the chat field of ChatGPT, he or she may qualify as an “infringer” within the meaning of Section 2 No. 3 GeschGehG. Likewise, OpenAI might be considered an infringer, since it can use the entered information (trade secrets) to improve its own services. Below, we examine whether the legal framework actually supports these classifications.

1. User as an Infringer

Persons entrusted with the trade secret who use ChatGPT (“users”) may be classified as infringers within the meaning of Section 2 No. 3 GeschGehG as a result of their use. Accordingly, an “infringer is any natural or legal person who unlawfully obtains, uses or discloses a trade secret in violation of Section 4”, as long as there is no exception according to Section 5 GeschGehG.

The use of ChatGPT can be considered disclosure within the meaning of Sections 2 No. 3, 4 (2) No. 3 GeschGehG. Disclosure means making protected information available to an unauthorized third party. Pursuant to Section 3 lit. (c) of its Terms of Use, OpenAI stores all content that is not entered via the API and reserves the right to use this content to improve its own services. Accordingly, entering trade secrets into the chat window of ChatGPT constitutes a disclosure.

Section 2 No. 3 GeschGehG further requires that the disclosure of the trade secret is unlawful. If users of ChatGPT disclose trade secrets although they are obliged to maintain confidentiality, this constitutes a violation of Section 4 (2) No. 3 GeschGehG and indicates the unlawfulness of the disclosure. However, the disclosure may not be unlawful if the owner of the trade secret has consented to it. A general, unrestricted permission to use ChatGPT could be interpreted as such consent. Against this speaks the argument that a general permission of use does not automatically amount to consent to use with respect to trade secrets. If the user has entered into a confidentiality agreement with the owner of the trade secret, it must be assumed that a general permission to use ChatGPT, without an explicit statement that trade secrets may also be entered into the chat field, does not constitute consent.

In the case of a dispute, it will most likely come down to a weighing up of the intent of the trade secret owner in permitting the use on the one hand, and the manner of communication of this permission on the other.

One thing is clear: if the trade secret owner has not permitted the use of ChatGPT, the disclosure must be assumed to be unlawful. The person who entered the trade secret into ChatGPT is then an infringer within the meaning of Section 2 No. 3 GeschGehG. An exception under Section 5 GeschGehG will presumably not apply in typical cases of ChatGPT use.

If a user is an infringer, the owner of the trade secret may be entitled in particular to information, injunctive relief and damages. If the user is an employee of the owner of the trade secret, the consequences may range from a formal warning to termination without notice, depending on the severity of the violation of the GeschGehG.

If the user has disclosed the trade secret for his own benefit, for the benefit of a third party or with the intention of causing harm to the owner of the secret, he may be liable to criminal prosecution in accordance with Section 23 GeschGehG.

2. OpenAI as an Infringer

To state the conclusion up front: OpenAI does not qualify as an infringer. Pursuant to Section 4 (3) Sentence 1 GeschGehG, classification as an infringer requires that OpenAI “has obtained the trade secret via another person and knows or should know at the time of obtaining, using or disclosing that this person has used or disclosed the trade secret contrary to Paragraph 2”. These requirements are not met. Users of ChatGPT are advised upon use that conversations with the AI may be viewed and evaluated by AI trainers to improve OpenAI’s systems. Users are specifically asked not to disclose sensitive information in their conversations with ChatGPT. OpenAI has no control over what information users enter into the program. The company also cannot know which information qualifies as a trade secret or whether users share that information in the chat history with or without the permission of the trade secret owner.

To address these and other concerns, OpenAI recently gave users the option to opt out of storing chat history and using that data to train the AI model, even as part of normal ChatGPT use (i.e., even without using the API). In addition, the company announced a business version for the coming months, which is intended to address the legal concerns surrounding professional use.

3. Measures for the Protection of Trade Secrets

Given that trade secrets can be disclosed through the use of ChatGPT against the will of the trade secret owner, the question arises as to how the use of popular LLMs can be implemented securely in everyday professional life.

A business owner who allows the use of ChatGPT needs to ensure that there is an internal categorization of information and data that allows employees to clearly understand which information is classified as a trade secret. Particularly in larger companies, precise labeling of the respective information is necessary so that users are relieved of the decision of whether content may be shared. A system in which information categorized as a trade secret is explicitly labeled as such, e.g., “Secret,” “Confidential,” etc., is conceivable. In this case, it is important that the confidential status is clearly identifiable.
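Such a labeling scheme can also be enforced technically. As a minimal sketch (the label names and the idea of a pre-submission filter are our own illustration, not a requirement of the GeschGehG or a feature of any OpenAI tooling), a company could screen text for confidentiality markers before it is passed on to an external chat service:

```python
# Hypothetical pre-submission filter: refuses text that carries an
# internal confidentiality label before it reaches an external LLM.
# The label names ("Secret", "Confidential", ...) are illustrative only.

CONFIDENTIALITY_LABELS = ("Secret", "Confidential", "Internal Only")

def is_cleared_for_llm(text: str) -> bool:
    """Return True only if the text carries no confidentiality label."""
    lowered = text.lower()
    return not any(label.lower() in lowered for label in CONFIDENTIALITY_LABELS)

def submit_to_chat(text: str) -> str:
    """Gatekeeper that refuses labeled content instead of forwarding it."""
    if not is_cleared_for_llm(text):
        raise PermissionError("Refused: text is labeled as a trade secret.")
    # In a real deployment, the cleared text would be forwarded to the
    # chat service here; this sketch simply returns it unchanged.
    return text
```

A filter like this is of course only as reliable as the labeling practice behind it, which is precisely where the risks of over- and under-marking come in.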

If such a categorization is not already in place within the company, it is necessary to take the appropriate steps and adapt workflows so that content is categorized. If such a categorization already exists, it is advisable to implement specific in-house guidelines when allowing the use of ChatGPT and similar LLMs. In the case of ChatGPT in particular, use should only be permitted on the condition that employees enable the new data controls and thereby exclude the storage and use of chat histories for model training.

However, there are risks in the actual implementation of this approach. On the one hand, overcorrection may occur, so that all information (even irrelevant information) is marked as “Confidential” in an inflationary manner. On the other hand, critical information that should be classified as a trade secret may be overlooked and therefore not marked as “Confidential” in accordance with the internal specifications, or the data settings may not be used correctly. In these cases, the protection of secrets can be undermined. The larger the company, and thus the larger the group of users, the higher the risk associated with use is likely to be.

A complete ban on the use of ChatGPT for professional purposes remains the most secure option. If a company issues such a ban, it can enforce it by blocking access to ChatGPT locally.
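One coarse but simple way to implement such a local block is DNS-based: mapping the service’s hostnames to an unroutable address, for example via the hosts file of managed machines. A sketch that generates such entries (the hostname list is our assumption and would need to be maintained; a proxy- or firewall-level block is more robust in practice):

```python
# Sketch: generate hosts-file entries that sinkhole ChatGPT hostnames.
# The hostname list is an assumption and would need maintenance;
# determined users can bypass hosts-file blocks, so this is only a
# baseline measure, not a complete enforcement mechanism.

BLOCKED_HOSTS = ("chat.openai.com", "chatgpt.com")
SINKHOLE = "0.0.0.0"  # unroutable address

def hosts_file_entries(hosts=BLOCKED_HOSTS, sinkhole=SINKHOLE) -> str:
    """Return hosts-file lines redirecting each host to the sinkhole."""
    return "\n".join(f"{sinkhole} {host}" for host in hosts)
```

An administrator would append the output to /etc/hosts (or the Windows equivalent) on managed machines via the usual configuration-management tooling.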

The company will have to weigh the risks and the possible benefits from the use of ChatGPT and take the appropriate measures. Among other things, this will depend on which persons are to be allowed to use it and which improvements in workflow or even in the company’s own business model the company expects to see as a result of its use.

Conclusion

The use of ChatGPT and similar LLMs for professional purposes can lead to a significant increase in productivity. In serious cases, however, it can also lead to a violation of the Trade Secrets Protection Act. In these cases, the trade secret owner is entitled to injunctive relief and damages against the infringer; however, the infringer may have no way of recovering the disclosed information from the AI operator or having it deleted.

Companies that wish to permit the use of ChatGPT are advised to draft the authorization of use in such a way that the permitted forms of use can be clearly identified. In particular, it is advisable to include a disclaimer prohibiting the entry of trade secrets and confidential information and to point out the risks of use. In addition, companies should make usage contingent on employees selecting the settings that exclude the storage of the input and use thereof for training purposes. This can be in the form of internal company AI usage guidelines.

If companies can be patient, it may make sense to delay the introduction of ChatGPT until the launch of ChatGPT Business. OpenAI has not yet revealed what that version will look like. However, it can be assumed that it will allow companies to manage their own users and to apply security settings that preclude disclosure of the content entered into the chat.