Assessment of Assertions of Synergy as a Basis for Inventive Step in Compositions Comprising Mixtures

F. De Corte (BE), Head Intellectual Property Crop Protection Syngenta
K. Ward (UK), Head of Biometrics, Crop Protection Research, Syngenta


Abstract

Patent claims for inventions predicated on the existence of synergistic action of mixtures are common, yet often the evidence provided on which the assertion of synergy is based is not as compelling as it may first appear. Here we provide guidance to both the professional representative and the patent examiner on points to consider when submitting or assessing evidence for synergy, and make recommendations as to how the credibility of the process could be improved.

Introduction

“Plausibility” has made a firm entry into the patent law vocabulary. Although it seems common sense that a patent applicant needs to show in a credible manner that the invention actually works over the claimed range, the patent examiner often faces a (legal) problem in translating doubts about the plausibility of the invention into a substantiated reason to refuse certain claims. Especially in the field of so-called synergistic mixtures, there is often a disproportionality between what has been credibly demonstrated and the extent of the exclusivity granted to the patent applicant. Although case law appears to be shifting towards requiring greater plausibility, some perspectives on data analysis could be of use to patent examiners.

This led to the authors making a presentation to European patent examiners in November 2016 on issues pertinent to the assessment of patent claims for inventions predicated on the existence of synergistic action. This article is essentially a transcript of what was presented.

Many scientific papers have been published on methods for determining the presence of synergistic action. However, this article does not seek to explore the different methodologies in detail. Instead, the intention is to provide guidance to both the professional representative and the patent examiner on points to consider when submitting or assessing evidence for synergy. Although the article has been assembled in the context of mixtures of agrochemicals, the content has broader applicability.

Definition of Synergy

There is some debate as to the precise definition of synergy. A common definition, to be found, for example, in the Collins English Dictionary and adopted either explicitly or implicitly in a number of patent cases, is that synergy occurs when the combined action of two or more agents is greater than the sum of their individual effects. (In the context of agrochemical research, the term “agent” would normally refer to a particular substance at a particular dose.) But use of the word “sum” is too rigid and leads to obvious problems: if agent A alone has a 60% effect and agent B alone has a 70% effect, then according to this definition the predicted effect of A+B would be 130%, which is clearly nonsense in most situations. Moreover, while the idea that the predicted effect of a mixture should be equal to the sum of the individual effects might seem intuitively reasonable, the reality is that this would only be the case under a very particular set of circumstances that would rarely, if ever, occur in practice.

A more realistic and scientifically justifiable definition is as follows: synergy is said to occur when the combined action of two or more agents is greater than could have been predicted based on the performance of the agents when used alone.

Demonstrating Synergy

In our opinion, synergy will have been demonstrated if it has been convincingly shown that the performance of a mixture is indeed better than could justifiably have been predicted. This gives rise to two challenges: (i) how to calculate the predicted response, and (ii) how to interpret the difference between observed and predicted responses in the knowledge that both are subject to the effects of random variation.

Deciding How the Predicted Response should be Calculated

There is no single method of deriving predicted responses that is appropriate in all cases. Strictly, the choice of method should be dictated by the researcher’s understanding of how the agents in question would be expected to act together in the absence of any synergistic effect. In practice, however, the choice of method often appears to be somewhat arbitrary, with no reference to mode of action (MOA). In fact, many interested parties appear to be unaware that there is a choice at all or appear not to understand the circumstances under which each method is appropriate.

Different methods of calculating predicted responses can sometimes result in very different outcomes. This point is crucial because it means that the interpretation as to whether a particular mixture is synergistic can change depending on which method is used. The fact that the issues surrounding the choice of method are not widely understood no doubt means that patent applications are sometimes filed in which the observed response to a particular mixture looks somewhat better than the predicted response for no other reason than the latter has been calculated using an inappropriate method. Consider a hypothetical case involving substance S1 at 200g, substance S2 at 200g, and the combination of these two agents, in which both agents alone gave an observed response of 60%. Based on the well-known “Colby” method, the predicted response for a mixture of two agents each giving 60% response is 60 + 60 - 60×60/100 = 84%. But suppose S1 and S2 were actually the same substance; under this scenario, S1 at 200g + S2 at 200g is nothing other than the same substance at 400g, and the likely response to this “mixture” is entirely dependent on the slope of the dose response relationship for the substance in question. With a fairly steep slope as shown in Figure 1a, the likely response to the mixture would be around 92%, which is somewhat bigger than the response predicted using the Colby method and thus implies that the substance in question is synergistic with itself. Conversely, with a fairly shallow slope as shown in Figure 1b, the likely response to the mixture would be around 75%, which is somewhat less than the response predicted using the Colby method and thus implies that the substance in question is antagonistic with itself.
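
To make the arithmetic concrete, the sketch below reproduces both calculations in a few lines of Python. The dose response parameters are purely illustrative assumptions, chosen only so that each curve passes through roughly 60% at 200g, as in the hypothetical example above.

```python
def colby(*responses):
    """Colby-style prediction: the agents are assumed to act independently."""
    unaffected = 1.0
    for r in responses:
        unaffected *= 1.0 - r / 100.0
    return 100.0 * (1.0 - unaffected)

def log_logistic(dose, ed50, slope):
    """Percent response from a simple two-parameter log-logistic dose response curve."""
    return 100.0 / (1.0 + (ed50 / dose) ** slope)

print(colby(60, 60))  # 84.0 -> the Colby prediction for two agents each giving 60%

# But if S1 and S2 are the same substance, 200g + 200g is simply 400g, and the
# likely response depends entirely on the slope of the dose response curve.
steep = dict(ed50=175.0, slope=3.0)    # like Figure 1a: ~60% at 200g
shallow = dict(ed50=133.0, slope=1.0)  # like Figure 1b: ~60% at 200g
for label, pars in (("steep", steep), ("shallow", shallow)):
    print(label, round(log_logistic(200, **pars)), round(log_logistic(400, **pars)))
# steep:   60% at 200g -> ~92% at 400g (above Colby's 84%, i.e. "synergy" with itself)
# shallow: 60% at 200g -> ~75% at 400g (below Colby's 84%, i.e. "antagonism" with itself)
```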

While this concept is most easily demonstrated by assuming S1 and S2 are the same substance, exactly the same principle applies to any mixture in which one substance essentially behaves like a serial dilution of the other, such that either can be substituted for the other in fixed proportion depending on the relative activities of the substances in question. This might be the case, for example, when mixing substances which have the same, or similar, MOAs.

 


Figures 1a and 1b. Likely response at 400g for a substance that gives a response of 60% at 200g, assuming either a steep dose response slope (Figure 1a, left) or a shallow dose response slope (Figure 1b, right).

 

To be clear, this is not a criticism of the Colby method per se. The Colby method (which is also variously attributed to Abbott, Bliss, Limpel and others) is, in fact, entirely appropriate in cases where, in the absence of any synergistic effect, the agents are expected to act independently. The problem is that this method is simply not appropriate in other circumstances. In cases where it is reasonable to assume that the MOAs of the agents in question are so similar that, in the absence of any synergistic effect, one agent will behave exactly like a simple dilution of the other, an entirely different approach to calculating predicted responses is called for – one that is based on dose response modelling using, for example, methodology advocated by Wadley (and others).
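
For readers unfamiliar with the dose addition approach, one common formulation (often attributed to Wadley) computes the expected ED50 of the mixture from the ED50s of its components; a synergy ratio above 1 (expected ED50 divided by observed ED50) would then point towards synergy. The sketch below is illustrative only, and the ED50 values are invented.

```python
def wadley_expected_ed50(dose_a, ed50_a, dose_b, ed50_b):
    """Expected ED50 of an a:b mixture under dose addition (i.e. no synergy)."""
    return (dose_a + dose_b) / (dose_a / ed50_a + dose_b / ed50_b)

# Example: two substances mixed 1:1, with individual ED50s of 150g and 300g.
print(wadley_expected_ed50(1, 150.0, 1, 300.0))  # 200.0 -> expected ED50 of the mixture, in g
```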

The problem of choosing an appropriate method of calculating predicted responses is further exacerbated by the fact that, often, mixture studies are conducted at a time when the researcher may be unaware of the respective MOA of each agent. In these circumstances it might be impossible to identify which method of calculating predicted responses is most appropriate.

Taking Into Account the Impact of Random Variation

All biological experiments are impacted by the effects of natural variation. A key consequence of this is that, even if a mixture was neither synergistic nor antagonistic, the observed response would not be expected to be identical to the corresponding (appropriately derived) predicted response. Instead, observed and predicted responses would be expected to differ to some extent, according to the laws of random variation. In practice, this means that there is a 50% chance that the observed response to any given mixture will be numerically greater than its predicted response simply due to random variation alone. Of course, in such cases the magnitude of the difference between observed and predicted responses may be relatively small but this does not necessarily preclude an assertion of synergy from being made, or a patent predicated on such an assertion from being granted.

Disclosures of synergy rarely include enough detail of the experimental design and levels of underlying variation to allow the reader (even one with a high level of statistical expertise) to estimate what size of deviation between observed and predicted responses could be expected simply due to random variation alone. Moreover, given all of the factors that can vary between one dataset and another, such as the nature of the recorded response and the level of replication, it is impossible to come up with a “rule of thumb” in this respect. Having said this, in the absence of convincing evidence to the contrary, it is reasonable to assume that the size of deviation attributable to random variation alone could be quite large, and assertions of synergy based on relatively small differences between observed and predicted responses should be interpreted with this in mind. This concern applies regardless of the method of estimating predicted responses.
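
As a rough illustration of how large such chance deviations can be, the sketch below simulates an experiment in which the mixture behaves exactly as the Colby method predicts, i.e. with no synergy whatsoever. The noise level (a standard deviation of 8 percentage points per plot) and the level of replication (3 replicates) are arbitrary assumptions chosen only to demonstrate the principle.

```python
import random

random.seed(1)

def colby(a, b):
    return a + b - a * b / 100.0

def observed_mean(true_value, sd=8.0, n_reps=3):
    """Mean of replicate measurements subject to Gaussian plot-to-plot noise."""
    return sum(random.gauss(true_value, sd) for _ in range(n_reps)) / n_reps

true_a, true_b = 60.0, 70.0
true_mix = colby(true_a, true_b)  # 88%: by construction the mixture is not synergistic

diffs = []
for _ in range(10_000):
    obs_mix = observed_mean(true_mix)
    pred_mix = colby(observed_mean(true_a), observed_mean(true_b))
    diffs.append(obs_mix - pred_mix)

diffs.sort()
print(f"95% of chance deviations lie between {diffs[250]:+.1f} and {diffs[9750]:+.1f} points")
```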

It could be argued that any assertion of synergy should be backed up with an appropriate statistical test – one that assesses the probability of obtaining a difference between observed and predicted responses of a magnitude equal to or greater than that presented, simply due to random variation alone. In practice, however, standard statistical tests are rarely appropriate for synergy studies, and deriving a bespoke test that takes proper account of all relevant sources of random variation is in most cases far from straightforward. Also, the structure of the statistical test itself would need to vary from one case to another depending on issues such as the precise experimental design and the nature of the response. While some scientific papers relating to synergy include statistical tests, no particular test is widely accepted by the scientific community. Moreover, at least some of the tests discussed in the published literature have subsequently been shown to be flawed. There are exceptions. For example, if two agents are to be mixed and it is known that only one shows activity when used alone, the question can be simplified to “is A+B better than A alone”, in which case it is possible to design a study in such a way that the resulting data can be analysed using standard statistical methodology. (Although some would argue that the concept of synergy does not apply in cases where only one agent shows activity when used alone.) But in general, reliance on a statistical test does not seem to be a viable way forward.

Cherry Picking of Results

Often, the design of a synergy study will include an entire matrix of different treatments (e.g. five doses of substance X and four doses of substance Y plus all possible combinations). Under this scenario, if there was in reality no synergy and no antagonism whatsoever, we would expect around half the mixtures to have an observed response that is numerically greater than their respective predicted response, and half to have an observed response that is numerically less. If this were indeed the case then by looking at the entire set of results it should be apparent that the deviations between observed and predicted responses are no different to what we might reasonably expect by chance. But if the applicant chose to submit only those results which showed the biggest deviation in the positive direction, the evidence for there being a synergistic response would appear more compelling than was actually the case. Such “cherry-picking” of results is therefore misleading and disclosure of all relevant results should be encouraged.

 


Table 1a. Results as might be submitted in support of a synergy claim

 


Table 1b. Full Results. Responses are in percentages. For mixture treatments, table shows observed response, predicted response (in italics) and difference between observed and predicted. Cherry-picked results corresponding to Table 1a shown in bold font.
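
The effect of selective reporting is easy to demonstrate by simulation. In the sketch below there is, by construction, no synergy and no antagonism in any of the twenty mixtures; the doses, responses and noise level are all invented for illustration. Roughly half the deviations come out positive, yet reporting only the largest of them would look like evidence of synergy.

```python
import random

random.seed(2)

def colby(a, b):
    return a + b - a * b / 100.0

x_true = [30, 45, 60, 75, 85]   # substance X alone at five doses (true responses, %)
y_true = [25, 40, 55, 70]       # substance Y alone at four doses (true responses, %)
noise = lambda: random.gauss(0, 6)

deviations = []
for xt in x_true:
    for yt in y_true:
        observed_mix = colby(xt, yt) + noise()             # mixture behaves exactly as predicted
        predicted_mix = colby(xt + noise(), yt + noise())  # prediction built from noisy single-agent data
        deviations.append(round(observed_mix - predicted_mix, 1))

deviations.sort(reverse=True)
print("All twenty deviations:", deviations)
print("A cherry-picked 'top five':", deviations[:5])
```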

 

How Independent are the Results?

In cases where multiple results have been submitted, it is important to take into account the extent to which different results are truly independent from one another. In many synergy studies, the data generated for any given agent when used alone contributes to the calculation of predicted responses for a number of different mixtures, and so if the response to a particular agent happens to be somewhat lower than expected this could result in all of the corresponding mixtures appearing to be better than predicted. An example of this is shown in Table 2.

 


Table 2. Responses are in percentages. For mixture treatments, table shows observed response, predicted response (in italics) and difference between observed and predicted. Note response for substance Y alone at 40ppm is much lower than would be expected based on the results of the other three doses, resulting in inaccurate predicted responses which in turn leads to the false impression that all four mixtures involving substance Y at 40ppm are synergistic.
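
The knock-on effect described in Table 2 can also be sketched in a few lines. In the example below (all numbers hypothetical), a single unrepresentative reading for substance Y at 40ppm, 30% observed where 50% would have been typical, is enough to make every mixture involving that dose appear synergistic under the Colby method.

```python
def colby(a, b):
    return a + b - a * b / 100.0

x_alone = {25: 40.0, 50: 55.0, 100: 70.0, 200: 82.0}  # substance X alone: dose -> response (%)
y40_typical, y40_observed = 50.0, 30.0                # substance Y at 40ppm: one low reading

for dose_x, rx in x_alone.items():
    observed_mix = colby(rx, y40_typical)    # how the (non-synergistic) mixture actually behaves
    predicted_mix = colby(rx, y40_observed)  # prediction calculated from the low Y reading
    print(f"X {dose_x} + Y 40ppm: observed ~{observed_mix:.0f}%, "
          f"predicted {predicted_mix:.0f}%, apparent difference {observed_mix - predicted_mix:+.0f}")
```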

 

Generality

If synergy between two (or more) substances does indeed exist, then it is probable that the synergistic relationship is specific to certain doses or, more likely, to certain ratios of doses. So, if synergy has been convincingly demonstrated for a single mixture (e.g. substance X at 50g + substance Y at 10g), it is not clear how broad a claim this one result should allow. In order to substantiate an assertion that the substances in question are synergistic at a specific ratio of doses, say 5:1, it does not seem unreasonable to expect the applicant to submit data demonstrating a consistent synergistic effect at several different dose combinations in the same 5:1 ratio. Similarly, to justify a claim covering a range of ratios, it does not seem unreasonable to expect the applicant to submit data demonstrating synergistic effects across those different ratios. Extending a patent claim to cover doses and/or ratios far outside the range for which synergy has been demonstrated is difficult to justify.

Synergy Factors

Often, results of synergy studies are presented in terms of “synergy factors”, calculated as the ratio of observed response to predicted response. If, for a given mixture, the observed response is identical to the predicted response this would result in a synergy factor of 1, while any factor greater than one would typically be presented as evidence of synergy. There are two concerns here. Firstly, factors close to 1 might deviate from 1 simply due to the effects of random variation (as explained earlier). Secondly, such ratios inadvertently give more emphasis to results at the low end of the response range than at the high end. For example, if the observed response = 30% and the predicted response = 20% then the difference of 10% equates to a synergy factor of 1.5, whereas if the observed response = 90% and the predicted response = 80% then the difference of 10% equates to a synergy factor of 1.125. Moreover, such factors can become very unstable when the predicted response is very low, and of course they make no sense at all if the predicted response is zero.
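
These properties are easy to verify with a few hypothetical numbers:

```python
def synergy_factor(observed, predicted):
    return observed / predicted

# The same 10-point difference gives very different factors at the two ends of the range.
print(synergy_factor(30, 20))   # 1.5
print(synergy_factor(90, 80))   # 1.125

# Very low predicted responses make the factor unstable...
print(synergy_factor(5, 1))     # 5.0, from a difference of only 4 percentage points
# ...and a predicted response of zero makes it undefined (division by zero).
```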

Conclusions

Currently, patent claims which cover an invention predicated on the synergistic action of mixtures of agrochemicals are of variable scientific quality, which in itself is not surprising given that there are no guidelines in this respect. Moreover, the complexities of the science are such that it can be difficult for the examiner to critically assess the data and arguments provided by the applicant in support of those claims. Together, these factors undermine the credibility of the process. Although the complexities will not go away, the credibility of the process could be improved if the following three recommendations were adhered to:

  1. the method of calculating predicted responses should be justified and should be based on the applicant's understanding of how the agents in question would be expected to act together in the absence of any synergistic effect;
  2. observed and predicted responses are expected to deviate to some extent simply due to the effects of random variation, and synergy-based inventions should be assessed with this in mind, especially in cases where differences are small and/or inconsistent;
  3. the practice of “cherry-picking” of results should be discouraged and, instead, disclosure of all relevant results should be seen as the norm.
