epi Guidelines: Use of Generative AI in the Work of Patent Attorneys


Introduction

These Guidelines have been prepared by the Professional Conduct Committee of epi to assist Members who use generative AI in their work as European Patent Attorneys. Artificial Intelligence pervades all aspects of modern life, including the patent profession, from word processing auto-suggestions, through translations, to patent searching strategies. Experimentation with, and use of, generative AI by patent attorneys is increasing rapidly.

“Generative AI” is generally understood to be the type of text- or image-based AI tool that generates (supposedly) original outputs following the inputting of task-specific prompts. The outputs draw on (typically, extensive) training data sets in ways determined by the architectures, algorithm forms and weightings of the models.

“Non-generative AI” usually refers to the kinds of AI tool that e.g. compare texts, check for spelling errors, offer language/grammar suggestions and identify patterns in subject texts and images. Many language translation and interpreting tools also fall into the category of non-generative AI products.

Some generative AI tools amount to private or confidential systems. When using such systems, parts of the guidance given below may not be applicable. It is however incumbent on Members to assure themselves of the security and confidentiality of AI tools that are stated to be confidential or private.

Despite the increasing use of generative AI, the operation of this type of tool is often poorly understood. Such misunderstandings can severely and adversely affect the correctness of the work of patent attorneys, and they can cause detriment to clients and instructing principals. Moreover, even when generative AI is well understood, its use can give rise to matters of professional standards.

Non-generative AI tools are less likely to give rise to the same problems, although it remains the case that care is required in the use of such tools. Many of these Guidelines, such as those concerning confidentiality and responsibility for the work product, are relevant even when using non-generative AI tools.

The following guidance is intended to minimise the risks, to practitioners, clients and instructing principals, of using generative AI models in patent work. Generative AI models, however, are developing at speed, with the result that these Guidelines may not apply to every situation encountered by Members. In view of this, the Professional Conduct Committee recommends adoption of the Overarching Principles set out below.

Furthermore, as intellectual property professionals, Members of epi are encouraged to minimise the risk, when using generative AI tools, of infringing the IP rights (such as copyright) owned by third parties.

In the following, the term “client” (and its derivatives) is intended to include, e.g., clients of private practice attorneys and instructing principals responsible for instructing Members who are employed in-house.

Overarching Principles

When using AI of any kind in professional work, a Member must adopt the highest possible standards of probity; must take all reasonable steps to maintain confidentiality when this is required; and at all times must put the interests of clients first as required by Article 1 of the epi Code of Conduct.

Guidelines and Explanatory Notes

Guideline 1: Members should inform themselves about both the general characteristics of generative AI models and the specific attributes of any model(s) employed in their professional work, in terms of (at least) the key aspects of prompt confidentiality and (to the extent this can be known) the likelihood of hallucinations.

Explanatory note: As noted, there are frequent misunderstandings concerning the features and characteristics of generative AI models. Indeed, some facets of generative AI models are not understood even by those who develop the models. It is however an essential characteristic of professionalism that Members do their utmost to inform themselves about the weaknesses of any generative AI model used in their professional work. A common weakness is the so-called hallucination, i.e. an AI-generated response that contains false or misleading information presented as fact.

It goes without saying that Members should regularly update their understanding of relevant aspects of any generative AI tools used in their professional work.

Guideline 2a: Members when using generative AI must, to the extent called for by the circumstances, ensure adequate confidentiality of training datasets, instruction prompts and other content transmitted to AI models. If there is doubt that confidentiality will be maintained to a level that is appropriate to the prevailing context the AI model in question should not be used.

Explanatory note: Members should take active steps to establish whether a chosen AI model assures the confidentiality of material fed to it.

Several generative AI products do not assure the confidentiality of material supplied as training datasets or instruction prompts. Some other AI models are unclear or uninformative about the confidentiality of material fed to them. Such models should not be used.

Members should be aware that even public domain information can, by reason of the context of its use (e.g. an association with an enquiry in the name of a particular entity), acquire confidentiality.

The need for confidentiality in the work of patent attorneys of course varies depending on the prevailing circumstances.

Guideline 2b: In ensuring adequate confidentiality, Members must inform themselves about the likelihoods and modes of non-confidential disclosures deriving from use of specific AI models.

Explanatory note: It is not sufficient for Members to exhibit “wilful blindness” with regard to the confidentiality offered by specific AI models. On the contrary, Members must actively seek information on this aspect. If such information is not available in relation to a particular generative AI model, that model should not be used for any work calling for confidentiality.

Guideline 3a: Members remain at all times responsible for their professional work, and cannot cite the use of generative AI as any excuse for errors or omissions.

Guideline 3b: Members must check any work product produced using generative AI for errors and omissions. The checking process must ensure that the work product is at least of the same standard as if it had been produced by a competent human practitioner.

Explanatory note: A professional is responsible for the quality of his/her work output, regardless of the means used to generate it. This obligation gives rise to the checking requirement specified in the Guideline.

It is accepted that one reason for using generative AI is to reduce the time taken for certain tasks. This benefit, however, does not supersede the need for accuracy or correctness. In view of this, Members should be prepared to explain to clients that the checking requirements associated with the use of generative AI may not result in net savings of time in specific instances.

Guideline 4: Members must in all instances establish, in advance of using generative AI in their cases, the wishes of their clients with regard to the use of generative AI.

Explanatory note: Members cannot assume knowledge of the wishes of clients with regard to generative AI. Some clients may encourage the use of generative AI, whereas some others may oppose its use in their cases.

It is recommended that Members keep accurate records of enquiries sent to clients in order to establish their preferences with regard to generative AI, and the responses to those enquiries.

Guideline 5a: Members are free to state, e.g. in websites and similar publications, that their work is produced using AI tools. Any such statement should be accurate, fair and dignified; and should not give rise to or promote discrimination between members.

Explanatory note: Members of course should have regard to the possible effect of any statements, concerning the use of AI tools, on readers. Statements about the use of AI tools should not exaggerate the extent to which AI tools are employed, nor should they exaggerate or mis-state their effects on the quality of the work output in question. In particular, Members should take care to avoid overstating the benefits deriving from the use of AI tools.

Guideline 5b: Members are not required to state, in communications with the European Patent Office and Unified Patent Court, that generative AI has been used in the production of work, unless they are obliged to do so by any binding statute, rule, order or client instruction. Any statement given in relation to the use of generative AI should be accurate, fair and dignified; should not disparage any party to proceedings; and should not give rise to or promote discrimination between members.

Explanatory note: At present (November 2024) there is no obligation to indicate to the EPO or UPC when generative AI has been used. Members however should keep themselves up to date on any changes in relevant statutes, rules and legal precedents in this regard, and in the event of any change should adapt their practices accordingly.

Guideline 6: In view of the risk that, in some AI models, training prompts pertaining to one client may be transferred to the work of another client, Members must, if this is warranted by the nature of confidentiality in the model employed, establish mutually independent user accounts for the work of respective clients.

Explanatory note: Some inadequacies in the confidentiality of generative AI models can be mitigated through the use of rigorously independent user accounts. Members should however understand that doing so may not compensate for all defects of a chosen AI model. A Member may therefore remain non-compliant with this Guideline even if independent accounts are used.

Guideline 7a: Members must be aware of relevant legislation impacting the use of generative AI models, and ensure that they comply with the relevant provisions.

Explanatory note: One may expect increasing amounts of legislation, at both national and European regional levels, to be enacted in the forthcoming months and years. At the very least, EU-based Members must have regard to, and apply the provisions of, the European Artificial Intelligence Act 2024 (Regulation (EU) 2024/1689) insofar as these impact practices. The provisions of this Act become applicable in stages beginning in early 2025.

More generally, these Guidelines are not to be taken as justification for non-compliance with any statutory or otherwise legally enforceable obligation.

Guideline 7b: Members should in addition have regard to any restrictions, obligations or reporting requirements, imposed by external organisations, that may impact the extent to which or the ways in which generative AI models may be used.

Explanatory note: This is a reminder that external organisations such as, but not limited to, national associations of patent attorneys, professional regulators and professional indemnity insurers may impose restrictions on the practices of patent attorneys, or they may impose notification obligations. Members should observe the requirements of such organisations, to the extent they relate to the use of generative AI tools.

Guideline 8: When determining fees to be charged for work products generated using AI tools, Members must at most charge fees that fairly reflect the amount of time and effort and/or the degree of difficulty and/or the degree of risk involved. Members may charge, at levels fairly reflecting the difficulty or extent of the task, for setting up or training of AI tools, AI tool subscription fees and for checking AI-generated work.

Explanatory note: It is recommended that Members keep accurate records of all aspects of setting up, training and checking work, including the levels of experience or professional education of those responsible for achieving an accurate output.

Adopted at C98 Council on 16 November 2024

