Some companies and some US authorities haven’t learned their lesson from the Bt10 maize case. I was not very surprised. But of course, there are some differences here. In the Bt10 case, it seems that the company Syngenta had erroneously selected a Bt10 line to produce what they thought was Bt11 seed, and actively produced and sold seeds of the wrong GM event. With Bayer’s LL601 rice, the problem seems to have been that the GM rice spread from field trials. The two cases thus differ with respect to the companies’ internal quality control. But both serve to demonstrate that the protein-based screening methods applied in the US fail to provide the information needed to determine whether a product may be sold, simply because these methods do not distinguish between events.
In Europe we have strongly promoted the use of event-specific detection methods for many years. While these should allow us to distinguish between Bt10 and Bt11 maize, or between LL62 - which is authorised in the US - and LL601 rice, we also have to acknowledge that our traditional application of event-specific methods would fail to detect Bt10 and LL601.
Because we would not have the event-specific methods at hand. Without a legal provision, companies have only very rarely provided information, biological or genetic material, or detection methods. This provision for event-specific methods is unique to the EU, and follows from Regulation (EC) No 1829/2003. But it only applies to events submitted for authorisation within the EU. No requirements are in place for events at the developmental stage, or for events being tested in field trials. Given the large number of events at these stages, we are facing a major challenge! Of course, it would be possible to establish an international system for information exchange, in which sufficient information could be uploaded to a database to allow the development of targeted detection methods. This information would then have to include detailed sequence information on every genetic construct used. But it would also mean that the companies would have to disseminate information that they may currently keep confidential. That is certainly not in line with their policy on intellectual property.
The US government has always promoted a policy with few restrictions, and the industrial lobby is very powerful. The US government is always very careful when it comes to introducing regulations that may affect the US agricultural industry negatively. It gives a lot of freedom and responsibility to the companies, and - in these cases - the companies have failed to live up to the government’s expectations. I expect that the US government will hesitate, and I am hoping not to see more similar stories. We don’t know what is going on in the corridors in Washington, but my presumption is that the US Department of Agriculture is making it clear to the industry that it needs to improve if it wants to avoid stricter regulations. The US also has a much tougher regime when it comes to liability. Farmers who are no longer able to sell their harvests can sue the biotech companies, and the financial consequences may be much more severe than the fines enforced by the USDA. On the other hand, there is a long tradition in the US of applying rapid screening technology with high - meaning, inferior - detection limits. The entire agricultural industry in the US is unified in this matter: they do not want more sensitive and complex detection methods. It is a matter of cost, and of throughput. Anyone familiar with the size and structure of the agricultural production chains can understand why. So the US government may well permit small amounts of unauthorised GM events in the supply chain, under certain restrictions. Safety aspects will necessarily be part of these considerations. Since the US tradition is to focus on the trait, the main issue will probably be whether the GM crop in question is modified to express a "safe" protein.
It is tricky in any case, because they may then have to establish a threshold and, in the end, this would mean having to use quantitative detection methods. But we should also be aware of the consequences in Europe. Zero tolerance can never be fully implemented. Why? Because the only way to ensure absolute absence is to test everything - that is, every single grain, every single gramme of flour, and so on. Of course, you don’t need to be a statistician to see the socio-economic consequences of this: there would be nothing left for consumption, and the costs of testing would be astronomical. The problem for stakeholders can be illustrated by another example. Say that an importer tests extensively, and nothing is found. The product is then sold and later tested by another stakeholder. What happens if, this time, the test demonstrates presence, although at a very low level? Who is then responsible for the legal and economic consequences? Even zero tolerance means that we must define criteria for testing - and that if these criteria are followed and nothing is found, then everything is OK, even if the product is later shown to contain some of the GMO.
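The point that absolute absence cannot be verified by sampling can be made concrete with a simple binomial model - an illustrative sketch only, with invented numbers, not a figure from the interview:

```python
# Illustrative sketch of why zero tolerance cannot be verified by sampling,
# assuming a simple binomial model: if a GM event is present in a lot at
# grain fraction p, and n grains are drawn and tested individually, the
# whole sample is negative with probability (1 - p)**n.

def miss_probability(p: float, n: int) -> float:
    """Probability that all n randomly drawn grains test negative."""
    return (1.0 - p) ** n

# At 0.01 % contamination, even a 3000-grain sample misses the GM material
# about 74 % of the time; certainty would require testing every grain.
print(round(miss_probability(0.0001, 3000), 2))  # -> 0.74
```

Under this model, the only way to drive the miss probability to zero is to let n equal the whole lot - which is exactly the "test every single grain" absurdity described above.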
Having said all this, of course the major problem remains: how to be able to detect in the first place. This brings us back to what I said earlier: an international system for storing and accessing information on the genetic constructs and events may be the ideal solution to help to develop appropriate detection methods in time. Since the companies can patent specific DNA sequences, and since crude genetic maps are already regularly disseminated, I don’t see why the companies need - or should be permitted - to keep the DNA sequence information confidential. Instead, such an information system would facilitate the development of analytical tools that could contribute to the improvement of confidence for all stakeholders. In addition, this system might facilitate dialogue concerning the safety of the genetic constructs, because much more of the information would pass through one central information node. Retrieving and inputting relevant information would be facilitated. Otherwise, there will always be the risk that relevant information is ignored - because of being stored in decentralised information systems that, for example, are inaccessible to competent authorities abroad.
I have already said a bit about this. We can only detect what we already know. And we have to consider resource availability. Testing for everything can easily take weeks and cost thousands of euros for a single sample. Of course, stakeholders find this unacceptable, so we have to limit our efforts. In the EU, most laboratories do a DNA-based screening for a few genetic elements, and only if one or more of these elements is detected do we continue with more specific, and quantitative, analyses. The latter currently means event-specific real-time PCR quantification, which can only be done for one event per test reaction. So if a sample is to be tested for 10 different events, we may have to perform 10 separate test reactions, usually in duplicate and with two or more replicates - in total, at least 40 reactions per sample. Without going into detail, this strategy has very clear limitations, as the number of events on the global market is rapidly growing. We are already at the point where this system is challenged by stakeholder demands and by the capacity of analytical laboratories. In the US and other large-scale producing countries, on the other hand, they usually apply protein-based screening for selected traits. This is much cheaper and faster than DNA-based testing, but also much more prone to error. Some events do not express their traits in the harvested product, some events cannot be detected by available methods, and in most cases the protein-based methods are unable to discriminate between authorised and non-authorised events carrying the same trait. On top of this, these methods are usually less sensitive than DNA-based methods, and may therefore fail to detect GM levels exceeding those defined in contracts or legislation.
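The two-stage EU strategy and the reaction arithmetic described above can be sketched as follows. P-35S and T-NOS are common real screening targets, but the event-to-element mapping below is simplified and partly hypothetical, chosen only to illustrate the logic:

```python
# Minimal sketch of the two-stage testing strategy described above: screen
# for a few common genetic elements first, and run the expensive
# event-specific real-time PCR only for events consistent with the screen.
# The event-to-element mapping is simplified for illustration.

SCREENING_ELEMENTS = {"P-35S", "T-NOS"}   # common promoter/terminator targets

EVENT_ELEMENTS = {                        # elements each known event contains
    "Bt11": {"P-35S", "T-NOS"},
    "LL62": {"P-35S"},
}

def plan_event_tests(detected: set) -> list:
    """Return the event-specific PCR tests warranted by a screening result."""
    if not detected & SCREENING_ELEMENTS:
        return []                         # screen negative: no further testing
    return sorted(e for e, elems in EVENT_ELEMENTS.items() if detected & elems)

def reactions_needed(events: int, duplicates: int = 2, replicates: int = 2) -> int:
    """Reaction count: one event per reaction, in duplicate, with replicates."""
    return events * duplicates * replicates

print(plan_event_tests(set()))            # -> []
print(plan_event_tests({"P-35S"}))        # -> ['Bt11', 'LL62']
print(reactions_needed(10))               # -> 40, as in the example above
```

The sketch makes the scaling problem visible: the screening stage is cheap, but the event-specific stage grows multiplicatively with the number of events that must be tested.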
The solution that we are looking for is, of course, a rapid and cheap detection method that covers all sorts of GMOs and that will allow us not only to assess if GM material is present, but also which trait and event the material is derived from, and what the quantity is. Along with our current efforts to develop such tools - for example, based on the application of DNA microarray technology - we also are looking at other ways to rationalise the analytical work, such as by using methods based on decision trees.
Yes, I believe so. I have to say, though, that there is a fundamental difference between testing a sample derived from a single unprocessed plant specimen and testing a sample derived from a blend of several plant specimens, unprocessed or processed. There is also a fundamental difference between looking for something which is quite similar to a previously known event, and a GMO where practically everything that has been inserted is different from anything used in previously known events.
I realise that this is becoming quite technical and complex.
Metaphorically, GMO detection may be compared to finding new text elements in a text on a computer, using a text search tool. Let’s imagine that the DNA of a certain crop species corresponds to the text of Shakespeare’s "Julius Caesar". This text is available in numerous editions, versions and translations.
The “genetic” modification we want to introduce is a paragraph of text from a completely different source, e.g. an agronomy textbook. All text contains some terms that are found ubiquitously in any type of text. The typical, known “text elements” of GMOs have a structure that is basically comparable with the contents of paragraphs about herbicides, viral plant diseases, insect diseases and insecticides. If you only search for terms typical of literature on herbicides, insecticides, and virus and insect diseases, then of course you are most likely to find the new genetic elements - but you are unlikely to detect other elements if anyone inserted a paragraph on geology, unless you read the entire text.
Furthermore, if somebody took the agronomic paragraph and inserted it in a different position in another edition of Julius Caesar, then it might take a long time before anyone realised that it was not just a reproduction of an earlier modified edition.
Yes, a construct-specific detection method may be compared with the method of looking only for the paragraph that was inserted. An event-specific method, on the other hand, may be compared to a method looking only for the junction between the original text of Julius Caesar, and the inserted paragraph [see illustration below]. The computer’s search tool can only find perfect matches. So the challenge to detect a completely new GMO event here is to be able to find a more or less perfect match of the insert, and a completely new insert junction.
Yes, indeed. Analysing a sample consisting only of a certain GM plant is comparable with searching in a single copy of a single edition of Julius Caesar. You either find perfect matches of both, or, if you only find the insert but not the junction, you may conclude that the modification is different. But if you have to search in a text consisting of hundreds of fragmented copies of different editions of Julius Caesar simultaneously - comparable with analysing a processed food consisting of a mixture of raw materials from different sources - both text objects may be present and found, but the new modification may also be present. How would you find it, without having to read all the text? This is where we try to be smart.
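The metaphor translates directly into a toy string search. All sequences below are invented for illustration; real methods use PCR primers and probes, not substring matching, but the logic of construct-specific versus event-specific detection is the same:

```python
# Toy sketch of the metaphor above (all sequences invented): a construct-
# specific method searches for the insert alone; an event-specific method
# searches for the junction spanning host DNA and insert, which pins down
# where the insert sits in the genome.

HOST = "ACGTACGTTTGACC"                  # the unmodified genome ("Julius Caesar")
INSERT = "GGGCCCAAA"                     # the construct (the "agronomic paragraph")

event_a = HOST[:7] + INSERT + HOST[7:]   # known event: insert at position 7
event_b = HOST[:3] + INSERT + HOST[3:]   # new event: same insert, new position

JUNCTION_A = HOST[3:7] + INSERT[:4]      # sequence spanning event A's junction

def construct_specific(sample: str) -> bool:
    return INSERT in sample              # hits any event carrying this construct

def event_specific_a(sample: str) -> bool:
    return JUNCTION_A in sample          # hits only the known event A

# Insert found, but junction missing: the modification is a different event.
print(construct_specific(event_b), event_specific_a(event_b))  # -> True False
```

This is exactly the single-plant reasoning described above: finding the insert without the expected junction tells you the modification is not the known event.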
LL601 rice and Bt10 maize were both created by transformation of the recipient plant with a genetic construct also used to transform other GM events (i.e., LL62 rice and Bt11 maize, respectively). If we detect the genetic construct from Bt11, for example, but not the expected event-specific sequence motif of Bt11, then logically we must conclude that we are confronted with a GMO which is not the known event (Bt11), but which has been transformed with the known genetic construct. In a single plant, that conclusion may be drawn directly. Some of the multiplex screening tools that we are developing, for example DNA arrays, may allow us to make this type of observation. In a blended product, both Bt11 and the other GMO (Bt10) may be present at the same time. Then, comparison of quantities may lead to the conclusion that more of the construct is present than can be explained by the known event (Bt11). So here, we may also have to use conventional real-time PCR methods, or sufficiently reliable quantitative multiplex methods. This is what we call a differential display approach. We may also use anchor PCR profiling, a type of DNA fingerprinting, and sequence the resulting unexpected fragment or fragments. This could reveal that the fragment comes from a different insertion site - a strong indicator that we are confronted with a different transformation event.
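For a blended product, the differential display reasoning above boils down to a quantitative comparison. The percentages below are invented for illustration:

```python
# Hedged sketch of the quantitative logic behind the differential display
# approach described above (all percentages invented): if the
# construct-specific signal exceeds the sum of the known event-specific
# signals, some unaccounted-for GM material carrying the same construct
# (e.g. Bt10 alongside Bt11) must be present in the blend.

def unexplained_pct(construct_pct: float, known_event_pcts: list) -> float:
    """GM percentage carrying the construct but not matching any known event."""
    return max(0.0, construct_pct - sum(known_event_pcts))

# Construct quantified at 1.2 % of the sample, but event-specific PCR
# attributes only 0.8 % to Bt11: roughly 0.4 % is unexplained.
print(round(unexplained_pct(1.2, [0.8]), 2))  # -> 0.4
```

In practice such a comparison only flags a discrepancy; identifying the unknown event then requires the fingerprinting and sequencing steps mentioned above.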
The technology described in the Nesvold et al. paper can only be applied directly to single, unprocessed plant specimen samples. However, with this technology we may screen for up to several million different sequence motifs simultaneously. If these motifs are selected or designed appropriately, the result of a screening may allow us to infer enough about the structure of the insert to proceed very rapidly towards its isolation, sequencing and functional characterisation. If someone came to me with a plant specimen that was highly suspected of being genetically modified, yet did not respond to any of the available tests for GM materials, this tool might allow us to give an answer - if we succeed with its development. In this case, the test will be expensive - so it is not something you would use for routine analysis. However, I believe that it is absolutely necessary that we develop the tool. On the day the tool is really needed, it is too late to start the development. You may ask yourself: will we ever need the tool? I hope not, but we have to prepare for a situation where we do. In my opinion, failing to stop a really harmful GMO just because we failed to prioritise the development of the tool to detect it is not an option.
Detection of unauthorised events is possible with existing technology, but not under the current testing regimes. Since the EU has had in place for several years a labelling requirement in its legislation, there is a tradition of prioritising quantitative analyses to detect authorised events. As the diversity of tests required to comply with the regulatory requirements increases, so do the costs. Several of the novel approaches that we develop within the Co-Extra project will become available during the project. However, the approach described by Nesvold et al. is extremely complicated. Availability can mean different things. If you mean a proof of concept, then I believe it will be completed within the duration of the project. If you mean a tool to support the stakeholders, then we need more time and funding.
Multiplex screening tools being developed in the project will result in cheaper and, possibly, faster testing. They will also cover more events than before. After all, most of the partners working on this are already doing GMO testing routinely. So - probably better than any other stakeholders - we can perceive the potential benefit of more rational testing methods, and also the limitations of present-day approaches. However, unexpected events may add significant complexity to the picture. The Nesvold et al. approach is not going to lower testing costs, simply because it is a tool to be used in a field of testing where previously nothing could be, or has been, done. It is a tool to be applied when suspicion is sufficiently strong, when the material allows for testing, and when the potential negative consequences of not testing outweigh the expenditures of testing.