Technical Problems in AI Inventions in the Light of the Guidelines for Examination in the EPO

Dr. R. Free (GB)


In this article Rachel Free extends her arguments, first made in "Framing new technical problems for AI inventions" CIPA Journal October 2018 Volume 47 Number 10 pages 18 to 21, to take into account the updates to the Guidelines for Examination in the EPO which came into force on 1 November 2018 (referred to herein as The Guidelines).

One of the requirements for obtaining valid patent protection for a computer-implemented invention (CII) in Europe (and arguably in many other jurisdictions) is a technical problem that is solved in a technical manner which is new and inventive. Ideally, patent attorneys are able to incorporate several of the technical problems and solutions they find into the patent specification at the time of drafting, in order to aid prosecution of the patent application to grant. Another way of expressing the concept of a technical problem and technical solution is to say that their result is a "further technical effect", that is, an effect going beyond the "normal" physical interactions between the software and the computer hardware. The updates to The Guidelines include the following examples of further technical effects at section 3.6.1:

  • Methods with a technical character/technical purpose;
  • Methods designed based on specific technical considerations of the internal functioning of the computer;
  • Methods controlling the internal functioning or operation of a computer;
  • Programs for processing code at a low level such as compilers.

Artificial intelligence (AI) inventions are essentially a sub-set of CIIs, because AI is a field of study which is a branch of computer science. The Guidelines now include a new section headed "Artificial Intelligence and Machine Learning", which is a sub-section within the section on mathematical methods. The new section is limited to inventions concerning "computational models and algorithms for classification, clustering, regression and dimensionality reduction" and so is clearly not intended to apply to inventions using other forms of AI such as robotics, expert systems, probabilistic knowledge bases, reasoning systems and others. The section explains that where AI inventions are of a mathematical nature, they will need to meet the same requirements for patentability as a mathematical method. That is, to be patentable, AI inventions of a mathematical nature need to be either:

  • tied to a technical application/technical purpose, or
  • tied to computer hardware.

Often, applicants limit the claim scope to a specific problem domain in order to move the invention into technical subject matter. However, this is typically not enough to achieve an inventive step, because there needs to be an improvement or benefit over the prior art. Therefore there will typically be another benefit resulting from the patent claim, such as improved accuracy, greater efficiency, a saving of resources, or improved security.

When considering patent protection for AI inventions, it is interesting to ask whether they exhibit any new types of technical problem compared with those we are familiar with for CIIs in general. In particular, perhaps there are new types of technical problem concerned with AI ethics, as explained in more detail below.

Fundamental technical problems for CIIs

Many of the technical benefits achieved by CIIs relate to a small set of high-level problems. These can be identified as:

  1. saving resources (memory, processing capacity, bandwidth, space, time, power),
  2. improving accuracy (of simulation, prediction, control of processes or equipment), and
  3. improving security.

In some cases one of these problems may be subsumed into another. For example, improving accuracy of a prediction may be seen as part of the problem of saving a resource. However, for the sake of argument, let's assume there are three fundamental technical problems of CIIs.

Note that the three fundamental technical problems are intended to be expressed at a general or high level, independent of a specific task (as "task-independent problems"). Examples of problems which include the specific task ("task-specific problems") are ones like "how to recognise a face from an image depicting a person", "how to control a manufacturing plant" or "how to reduce the burden of user input to a computer".

There are other problems that CIIs typically address but which are arguably not considered technical problems at all, due to their abstract nature. Some of these abstract problems are fundamental to CIIs and, more particularly, are fundamental to AI inventions. Examples in AI are: how to represent knowledge or data in a way best suited to the task at hand, how to represent uncertainty, and how to search a huge search space or compute an optimisation. Many of these tasks are building blocks used in AI technology.

New technical problems

In the case of AI inventions, the author has found a number of new technical problems arising that are difficult to fit into her list of high-level, or fundamental, technical problems. Because these problems are common to many types of AI inventions, the author argues that they are sub-problems of a new fundamental technical problem, rather than task-specific ones. Some examples are set out below.

Generating a rationale for an AI decision

An example of this is the claim language paraphrased below and taken from European patent publication number EP3291146 Fujitsu ("'146"). The claim is directed to an invention where a conventional neural network is mapped into a form where nodes of the neural network have semantic labels. A technical problem here is how to make the behaviour of a neural network more interpretable by humans. When a trained neural network computes a prediction, it is difficult for scientists to give a principled explanation of why the particular prediction was computed as opposed to a different prediction. Such a principled explanation is desirable for ethical reasons. The claim language in '146 captures a new technical problem: "how to make a prediction computed by a neural network more interpretable by humans".

Paraphrased claim 1 of EP3291146

A method for use with a convolutional neural network, CNN, used to classify input data, the method comprising:

  • after input data has been classified by the CNN, carrying out a labelling process in respect of a convolutional filter of the CNN which contributed to classification of the input data, the labelling process comprising using various complicated filters to assign a label to a feature of the input data represented by the convolutional filter;
  • repeating the labelling process for each convolutional filter used;
  • translating the CNN into a neural-symbolic network in association with the assigned labels;
  • extracting, from the neural-symbolic network, knowledge relating to the classification of the input data by the CNN;
  • generating and outputting a summary comprising the input data, the classification of the input data assigned by the CNN, and the extracted knowledge, and an alert indication that performance of an action using the extracted knowledge is required.
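The claim above is only paraphrased, and the sketch below is not the method of '146. It is a minimal, hypothetical illustration in Python of the general idea of attaching human-readable labels to convolutional filters (for example, by probing each filter with images of known concepts) and then generating a textual rationale for a particular classification. All names and activation values are invented for the example.

```python
# Hypothetical sketch only: label filters by the concept that activates them
# most strongly, then explain a classification in terms of those labels.

def label_filters(filter_activations_per_concept):
    """For each filter, choose the concept whose probe images activate it most strongly."""
    labels = {}
    for filt, per_concept in filter_activations_per_concept.items():
        labels[filt] = max(per_concept, key=per_concept.get)
    return labels

def summarise(input_id, predicted_class, filter_activations, labels, top_k=3):
    """Build a human-readable rationale naming the most active labelled filters."""
    ranked = sorted(filter_activations.items(), key=lambda kv: kv[1], reverse=True)
    reasons = [f"filter {f} ('{labels[f]}', activation {a:.2f})" for f, a in ranked[:top_k]]
    return (f"Input {input_id} classified as '{predicted_class}' "
            "mainly because of: " + "; ".join(reasons))

# Invented activation values for three filters probed with three concepts.
per_concept_activations = {
    "conv1_f0": {"eye": 0.9, "nose": 0.1, "mouth": 0.2},
    "conv1_f1": {"eye": 0.2, "nose": 0.8, "mouth": 0.1},
    "conv1_f2": {"eye": 0.1, "nose": 0.2, "mouth": 0.7},
}
labels = label_filters(per_concept_activations)
activations_for_input = {"conv1_f0": 0.95, "conv1_f1": 0.40, "conv1_f2": 0.10}
print(summarise("img_001", "face", activations_for_input, labels))
```

Even a simple summary of this kind gives a human reviewer something to interrogate, which is precisely the interpretability problem the claim is directed at.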

Implementing the right to be forgotten

Another example is the problem of how to efficiently remove data about a particular person from a machine-learning system or a knowledge base which has been created using data about that person and data about a huge number of other people. This problem is also referred to as "how to enable the right to be forgotten". Removing data about a particular person, without completely retraining the system, is extremely difficult where that data has become subsumed in a complex representation inside a computer, such as a deep neural network. Removing data about a particular person from a knowledge base is extremely difficult for the same reason. Ways of tracking which data has been used in which parts of the system, and of removing the effects of particular data, need to be invented. These would overcome the high cost of completely retraining or reconstructing the neural network or knowledge base. These problems are seen as very complex, and as more than mere administration, since they could not be done manually and since there is no straightforward solution currently known.
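As a purely illustrative sketch of one possible family of approaches (a sharded ensemble, loosely in the spirit of published "machine unlearning" proposals, and not a method taken from The Guidelines or any cited patent): the training data is partitioned by person into shards, one sub-model is trained per shard, and forgetting a person only requires retraining the shard that held their records rather than the whole system. scikit-learn and all names below are assumptions made for the example.

```python
# Hypothetical sketch of a sharded "unlearning" approach; not a complete or
# production-ready solution. scikit-learn is an assumed dependency.
import numpy as np
from sklearn.linear_model import LogisticRegression

class ShardedModel:
    def __init__(self, n_shards=4):
        self.n_shards = n_shards
        self.shards = [[] for _ in range(n_shards)]   # lists of (person_id, x, y)
        self.models = [None] * n_shards

    def _shard_of(self, person_id):
        # A stable hash would be used in practice; hash() is fine within one run.
        return hash(person_id) % self.n_shards

    def add(self, person_id, x, y):
        self.shards[self._shard_of(person_id)].append((person_id, x, y))

    def _fit_shard(self, i):
        # Assumes each shard still contains examples of more than one class.
        X = np.array([x for _, x, _ in self.shards[i]])
        Y = np.array([y for _, _, y in self.shards[i]])
        self.models[i] = LogisticRegression().fit(X, Y)

    def fit(self):
        for i in range(self.n_shards):
            self._fit_shard(i)

    def forget(self, person_id):
        # Drop the person's records and retrain only the affected shard,
        # instead of retraining the whole ensemble.
        i = self._shard_of(person_id)
        self.shards[i] = [r for r in self.shards[i] if r[0] != person_id]
        self._fit_shard(i)

    def predict(self, x):
        votes = [m.predict([x])[0] for m in self.models]
        return max(set(votes), key=votes.count)       # simple majority vote
```

The design choice that matters here is structural: because each person's data influences only one sub-model, withdrawal of consent can be made effective at the cost of retraining a fraction of the system rather than all of it.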

Determining accountability where an autonomous agent is involved

Determining accountability, for example when an autonomous vehicle is involved in a collision or other event resulting in death of a human or other harm, is a very real obstacle to securing acceptance of autonomous decision-making systems. The problems involved in determining which entity is accountable are known to be extremely difficult to solve. Indeed, a recent European Parliament committee draft report proposed that, because of this difficulty, a sensible and pragmatic way forward is to make the autonomous AI agent itself the entity which is accountable[1]. As a step towards this, tamper-proof ways of recording the state of the autonomous vehicle need to be invented, together with ways of triggering recording at appropriate times, so that after an event involving harm the recorded state can be used as evidence. Recording the state of the autonomous agent in tamper-proof ways will become even harder in future because of the possibility that the AI agent is deceptive. Humans will need to invent ways of recording state that are guaranteed to represent ground truth.
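As a purely illustrative sketch (not a solution to the accountability problem as such), a hash-chained event log is one simple building block for tamper-evident recording of an agent's state: each entry commits to all earlier entries, so silently altering the record after the event becomes detectable. It does not, by itself, guarantee that what was recorded is ground truth. All names below are invented for the example.

```python
# Minimal hash-chained log: tamper-evident, not tamper-proof on its own.
import hashlib
import json
import time

class HashChainedLog:
    def __init__(self):
        self.entries = []   # each entry: {"state", "ts", "prev", "digest"}

    def record(self, state):
        prev = self.entries[-1]["digest"] if self.entries else "0" * 64
        body = {"state": state, "ts": time.time(), "prev": prev}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.entries.append({**body, "digest": digest})

    def verify(self):
        # Recompute every digest; any edited or reordered entry breaks the chain.
        prev = "0" * 64
        for e in self.entries:
            body = {"state": e["state"], "ts": e["ts"], "prev": e["prev"]}
            digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["digest"] != digest:
                return False
            prev = e["digest"]
        return True

log = HashChainedLog()
log.record({"speed_kmh": 48, "brake": False})
log.record({"speed_kmh": 12, "brake": True})
print(log.verify())   # True; editing an earlier entry makes verify() return False
```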

Driving "acceptable" behaviour

A further example is how to create a trained machine-learning system which performs a particular task in a manner that is acceptable to humans, so that, for example, it is not biased against particular sections of society. A machine-learning system trained to recognise faces might inadvertently be biased against people from a particular ethnic group, depending on the training data used. If a solution to this problem goes beyond mere abstract statistics, there is potential for it to be a technical solution.
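Detecting such bias is a prerequisite to correcting it. The sketch below is a minimal, hypothetical measurement of per-group error rates, nothing like a full fairness solution and not something taken from The Guidelines; the group labels and records are invented for the example.

```python
# Hypothetical sketch: measure per-group error rates so that bias in a
# trained classifier can at least be detected before it is deployed.
from collections import defaultdict

def per_group_error(records):
    """records: iterable of (group, true_label, predicted_label)."""
    errors, totals = defaultdict(int), defaultdict(int)
    for group, truth, pred in records:
        totals[group] += 1
        errors[group] += int(truth != pred)
    return {g: errors[g] / totals[g] for g in totals}

# Invented evaluation records for two demographic groups.
data = [("A", 1, 1), ("A", 0, 0), ("A", 1, 0),
        ("B", 1, 0), ("B", 0, 1), ("B", 1, 0)]
rates = per_group_error(data)
print(rates)                                      # group A ~0.33, group B 1.0
print(max(rates.values()) - min(rates.values()))  # a simple disparity measure
```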

The "problem" of ethics using AI

If we think about the new technical problems of AI inventions discussed above, these are all concerned with so-called "AI ethics". That is, they reflect the values that societies hold concerning how to use and create AI. The AI ethics value underlying each of the examples is:

  • In the case of generating a rationale for a decision computed by a neural network, that humans should have a right to know that an automated decision is being used and how the automated decision has been made when that decision uses personal data and the decision has a legal effect on the person;

  • In the case of how to remove data about a particular person from an AI system, that humans have a right to withdraw consent to use of their data in some cases, and that the withdrawal of consent should be effective;

  • In the case of determining accountability, that it should be possible to determine which human entities and legal persons are responsible or accountable for artificial, or semi-artificial, autonomous agents; and

  • In the case of unacceptable behaviour such as avoiding bias, that AI (or at least its use) should be fair and not discriminate against particular sections of society.

Returning then to the list of fundamental CII problems, note that the first and second (efficient use of resources, and greater accuracy) relate to objective determinants based on the laws of nature, whereas the third, improving security, arises from and is determined by human-made requirements. Adding the AI ethics-related "technical" problems to the list would be adding further human-made requirements, determined on the basis of human-made rules of ethical conduct. There are potentially several new entries into the list in this class, including how to achieve transparency, how to give data privacy rights, how to enable accountability and how to ensure fairness.

Do AI ethics-related problems have anything in common?

If AI ethics related problems have something in common, then perhaps we can replace them in the list by a single new fundamental problem.

In my view, the AI ethics-related technical problems do have commonality, which is "how to address the risks that come with increasingly able AI", and I would therefore argue that we should add this problem to the list of fundamental technical problems of CIIs. The rationale for each of these is that:

  • Generating a rationale for a decision computed by a neural network will help humans to control the AI as AI becomes more "able";

  • Implementing the right to be forgotten gives individuals the ability to control AI in the use (or abuse) of their personal data as the use of AI becomes more pervasive;

  • Enabling accountability to be determined such as by recording the ground truth state of an AI agent in a tamper-proof way gives humans the ability to know what an AI agent has done; and

  • Avoiding bias enables humans to ensure AI agents act fairly, again as the use of AI becomes more pervasive.

As the ability of AI increases there will be a corresponding increase in the need to deal with the risks, as the following quote illustrates. Thus the specific problems mentioned in the bullet-point list above are just the beginning of a whole field of problems yet to be formulated and solved.

"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion", and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control." I J Good, 1965.

I have therefore argued that the fundamental technical problem to be added is "how to address the risks of increasingly able AI"[2].

What is the relevance of a "new" fundamental technical problem to patent drafting and patent prosecution of CII inventions?

  • The list of fundamental technical problems provides a resource to help the patent drafter work with the inventors to identify technical problems to be mentioned in the patent specification.

  • During prosecution the list can also be used to identify and frame technical problems based on material in the specification, although it is much harder to rely on problems that are not already expressly mentioned in the specification.

In addition to using the idea of a wider set of fundamental technical problems, and specifically the addition of an ethics-based technical problem, practitioners also need to take account of the recent updates to the EPO Guidelines for Examination regarding AI technology. Let's consider each of the examples of further technical effects given in The Guidelines.

  • Methods with a technical character/technical purpose;

  • Methods designed based on specific technical considerations of the internal functioning of the computer;

  • Methods controlling the internal functioning or operation of a computer;

  • Programs for processing code at a low level such as compilers.

Methods with a technical character/technical purpose

Is technology which answers the technical problem of "how to address the risks of increasingly able AI" technology which has a "technical purpose"?

In order to assess whether a purpose is technical or not the EPO looks to case law. However, there is no existing case law regarding AI ethics as it is such a new field.

Another way to assess whether a purpose is technical or not is to consider whether the field of study is a technical field or not. So for example, an engineering purpose would be considered technical because engineering is a field of technology. In the case of AI ethics, ethics is a branch of philosophy and philosophy is not a science or technology because it is not empirical. Ethical values are held by human societies and vary according to the particular human society involved. Therefore there is an argument that "how to deal with the risks of increasingly able AI" by giving AI ethical values is a social problem which is not in a technical field. I disagree with this line of argument since scientists and engineers will need to devise engineering solutions, be they software and/or hardware engineering solutions, in order to give AI ethical values and ensure the AI upholds those values. The problem of deciding what ethical values to give AI is a separate problem.

With regard to ways to make AI computation interpretable by humans, there are arguments that this is a technical purpose since it gives information to humans about the internal states of the computer.

With regard to ways to remove data from already-trained AI systems without having to completely retrain them, there are arguments that this is a technical purpose because it is not merely administrative. Getting the solution wrong would lead to a non-working result or, worse, to an incorrectly operating AI that may cause harm. The same applies to ways of making AI decision-making systems unbiased and fair. These problems are part of a broader task of controlling an AI system, which is a technical problem of control rather than an administrative problem of removing data.

In my experience, even where a claim is limited to a technical purpose, it is often necessary to include one of the fundamental technical problems of CIIs in order to achieve inventive step. If AI ethics becomes one of the fundamental technical problems of CIIs then perhaps it will often be combined with a more specific technical purpose such as those listed in The Guidelines (controlling an X-ray apparatus, determining a number of passes of a compaction machine to achieve a desired material density, image processing, ...).

Methods designed based on specific technical considerations of the internal functioning of the computer

It is very likely that some inventions that address the risks of increasingly able AI will be designed to make use of particular internal functioning of the computer. One can imagine an ethical AI operating system designed to prevent the computer from being deceptive and using detail of the internal functioning of the computer.

Methods controlling the internal functioning or operation of a computer

The operation of a computer, where the computer implements artificial intelligence technology, is potentially autonomous operation that may need to be controlled by humans. Therefore methods of controlling the internal functioning or operation of a computer are at the heart of technology which addresses the risks of increasingly able AI.

Programs for processing code at a low level such as compilers

Programs for processing code at a low level such as compilers will also need to have AI ethics values integrated in order to deal with the risks of increasingly able AI. Therefore some AI ethics inventions will show a technical effect by virtue of processing code at a low level.

Dr Rachel Free (Fellow) is Of Counsel (Patent Attorney) at CMS Cameron McKenna Nabarro Olswang LLP in London. See more at cms.law


  1. See the JURI draft report of 31 May 2016, PE582.443, 2015/2103(INL), setting out a series of recommendations on civil law rules on robotics.
  2. I would note that others have suggested a more fundamental problem, namely that of how to control super-intelligent machines. (In Superintelligence (Oxford University Press 2014), Nick Bostrom argues that as AI advances there will eventually be an exponential explosion in the rate of improvement of AI cognitive ability, which results in a singleton superintelligence that will pose an existential risk to humanity.)
