What is the future of GMO detection? A freely speaking scientist's opinion.

Drawing lessons from illegal GM rice in the European market

The discovery of traces of the unapproved GM rice LLrice601 in US exports has rekindled the debate on the proper handling of GM crops and their derived products. The rice is believed to have escaped during field trials in the USA in 2001. US authorities declared the rice to be entirely safe for humans and the environment. The European Food Safety Authority came to a similar assessment – however, that is not the point at issue. If it is not guaranteed that unwanted GM products can be efficiently excluded from the European market, consumer trust in the regulatory system will be lost.

In the future, the number of experimentally and commercially grown GM crops worldwide is expected to increase significantly, complicating the detection of unapproved GM crops within the supply chain. Consequently, Health and Consumer Protection Commissioner Markos Kyprianou has recently asked his staff to come up with improved tools to prevent the entry of unauthorised GMO material into the food and feed chain.

We asked Dr. Arne Holst-Jensen, a Norwegian researcher in the Co-Extra project and an expert in GMO detection methods: What is the future of GMO detection?


Arne Holst-Jensen investigates novel GMO detection methods.
Dr. Holst-Jensen, what was your first thought when you learned of the discovery of unapproved GM rice in Europe?

Some companies and some US authorities haven’t learned their lesson from the Bt10 maize case. I was not very surprised. But of course, there are some differences here. In the Bt10 case, it seems that the company Syngenta had erroneously selected a Bt10 line to produce what they thought was Bt11 seed, and actively produced and sold seeds of the wrong GM event. With Bayer’s LL601 rice, the problem seems to have been that the GM rice spread from field trials. There is a difference with respect to the internal quality control of the companies. But both cases demonstrate that the protein-based screening methods applied in the US fail to provide the information needed to determine whether a product should be sold or not, simply because the events are not distinguished.

So which kind of detection methods would fit better?

In Europe we have strongly promoted the use of event-specific detection methods for many years. While these should allow us to distinguish between Bt10 and Bt11 maize, or between LL62 - which is authorised in the US - and LL601 rice, we also have to acknowledge that our traditional application of event-specific methods would fail to detect Bt10 and LL601.

Why would this be the case?

Because we would not have the event-specific methods at hand. Without a legal provision, companies have only very rarely provided information, biological or genetic material, or detection methods. This provision for event-specific methods is unique to the EU, and follows from Regulation EC 1829/2003. But it only applies to events submitted for authorisation within the EU. No requirements are in place for events that are still in development or being tested in field trials. Given the large number of events in these stages, we are facing a major challenge! Of course, it would be possible to establish an international system for information exchange, where sufficient information could be uploaded to a database to allow for the development of targeted detection methods. This information should then include detailed sequence information on every genetic construct used. But this would mean that the companies would also have to disclose information that they currently may keep confidential. That is certainly not in line with their policy on intellectual property.
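To make the idea of such an information system more concrete, here is a minimal sketch in Python of what a single entry in a hypothetical international registry might hold. The class name GMOEventRecord and all field names are illustrative assumptions; no such system currently exists.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class GMOEventRecord:
    """Hypothetical entry in an international GMO information exchange system.
    All fields are illustrative assumptions, not a description of any existing registry."""
    event_name: str                 # unique event identifier
    crop_species: str               # host species of the transformation event
    developer: str                  # company or institute responsible
    development_stage: str          # e.g. "field trial", "authorised", "withdrawn"
    construct_sequences: List[str] = field(default_factory=list)  # DNA sequences of the inserted constructs
    junction_sequences: List[str] = field(default_factory=list)   # insert/host junction sequences
    confidential: bool = False      # information the developer wishes to restrict

def can_design_event_specific_method(record: GMOEventRecord) -> bool:
    """An event-specific detection method needs at least one insert/host junction sequence."""
    return bool(record.junction_sequences)
```

The point of the sketch is simply that event-specific method development hinges on access to the insert/host junction sequences, which is exactly the kind of information companies may currently keep confidential.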

How will the US government presumably handle that problem?

The US government has always promoted a policy with few restrictions, and the industrial lobby is very powerful. The US government is always very careful when it comes to introducing regulations that may affect the US agricultural industry in a negative way. They give a lot of freedom and responsibility to the companies, and - in these cases - the companies have failed to live up to the government’s expectations. I expect that the US government will hesitate, and I am hoping not to see more similar stories. We don’t know what is going on in the corridors in Washington, but my presumption is that the US Department of Agriculture is making it clear to the industry that they need to improve if they want to avoid stricter regulations. In the US, they also have a much tougher regime when it comes to liability. Farmers who are no longer able to sell their harvests can sue the biotech companies, and the financial consequences may be much more severe than the fines enforced by the USDA. On the other hand, there is a long tradition in the US of applying rapid screening technology with high - meaning, inferior - detection limits. The entire agricultural industry in the US is unified in this matter: they do not want more sensitive and complex detection methods. It is a matter of cost, and of throughput. Anyone familiar with the size and structure of the agricultural production chains can understand why. So the US government may well end up permitting small amounts of unauthorised GM events in the supply chain, under certain restrictions. Safety aspects will necessarily be part of these considerations. Since the US tradition is to focus on the trait, the main issue will probably be whether the GM crop in question is modified to express a "safe" protein.

Is the US request to establish a new Codex Alimentarius working group related to those recent issues?

Well, related yes, but only to a limited extent. The purpose of that working group is to develop recommendations to the Codex Task Force on performing safety assessments in situations of low-level presence, in which the recombinant-DNA plant has already been found to be safe and authorised for commercialisation for food by one or more countries. This means cases where a GMO is authorised in one country, for example the USA, but not in another, for example the EU. The LL601 case, and the Bt10 case before that, were different, since the GMOs in question were not authorised in any country. So I think this proposal from the US is likely to have emerged more as a consequence of US frustration over the “asynchronous” - US terminology - or “asymmetric” authorisation processes globally, which they may see as de facto trade barriers. The working group does not cover the problem of completely unauthorised GMOs in the food chain, and that is, after all, what the LL601 and Bt10 cases are about.

If the US authorities would allow a certain amount of unauthorised GM events to enter the food and feed chain, what would be the consequences?

It is tricky in any case, because they may then have to establish a threshold and, in the end, this would mean having to use quantitative detection methods. But we should also be aware of the consequences in Europe. Zero tolerance can never be fully implemented. Why? Because the only way to ensure absolute absence is to test everything - that is, every single grain, every single gramme of flour, and so on. Of course, you don’t need to be a statistician to see the socio-economic consequences of this. There would not be anything left for consumption, and the costs of testing would be astronomic. The problem for stakeholders can be illustrated by another example. Say that an importer is testing extensively, and nothing is found. The product is then sold and later tested by another stakeholder. What happens if, this time, the test demonstrates presence, although at a very low level? Who is then responsible for the legal and economic consequences? Even zero tolerance means that we must define criteria for testing, and that if these criteria are followed, and nothing is found, then everything is OK - even if the product is later shown to have some of the GMO in it.
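To illustrate why absolute absence can never be verified by sampling, here is a back-of-the-envelope calculation in Python. The contamination level and sample sizes are invented for illustration only and are not taken from the interview.

```python
def prob_miss(true_frequency: float, grains_sampled: int) -> float:
    """Probability that a random sample of `grains_sampled` grains contains no
    GM grain at all, when GM grains occur at `true_frequency` in the lot."""
    return (1.0 - true_frequency) ** grains_sampled

# Invented example: a true contamination level of 0.01 % (1 grain in 10,000).
for n in (1_000, 10_000, 100_000):
    print(f"sample of {n} grains: chance of missing the GMO entirely = {prob_miss(1e-4, n):.2%}")
# -> roughly 90 %, 37 % and 0.005 % respectively
```

Even a sample of ten thousand grains would miss a 0.01 % contamination more than a third of the time, which is why testing criteria, rather than literal zero tolerance, have to define when a lot is considered compliant.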

However, to be on the safe side, the testing procedures applied in Europe should be very robust, to avoid such problems for traders, processors and retailers.

Having said all this, of course the major problem remains: how to be able to detect in the first place. This brings us back to what I said earlier: an international system for storing and accessing information on the genetic constructs and events may be the ideal solution to help to develop appropriate detection methods in time. Since the companies can patent specific DNA sequences, and since crude genetic maps are already regularly disseminated, I don’t see why the companies need - or should be permitted - to keep the DNA sequence information confidential. Instead, such an information system would facilitate the development of analytical tools that could contribute to the improvement of confidence for all stakeholders. In addition, this system might facilitate dialogue concerning the safety of the genetic constructs, because much more of the information would pass through one central information node. Retrieving and inputting relevant information would be facilitated. Otherwise, there will always be the risk that relevant information is ignored - because of being stored in decentralised information systems that, for example, are inaccessible to competent authorities abroad.

Obviously, the current practice of GMO detection has some serious limits. Could you summarise them?

I have already said a bit about this. We can only detect what we already know. And we have to consider resource availability. Testing for everything can easily take weeks and cost thousands of Euros for a single sample. Of course, stakeholders find this unacceptable, so we have to limit our efforts. In the EU, most laboratories do a DNA-based screening for a few genetic elements, and only if one or more of these elements is detected do we continue with more specific, and quantitative, analyses. The latter currently means event-specific real-time PCR quantification, which can only be done for one event per test reaction. So if a sample is to be tested for 10 different events, we may have to perform 10 separate test reactions, usually in duplicate and with two or more replicates – in total, at least 40 reactions per sample. Without going into detail, this strategy has very clear limitations, as the number of events on the global market is rapidly growing. We are already at the point where this system is challenged by stakeholder demands and by the capacity of analytical laboratories. On the other hand, in the US and other large-scale producing countries, they usually apply protein-based screening for selected traits. This is much cheaper and faster than DNA-based testing, but is also much more prone to error. Some events do not express their traits in the harvested product, some of the events cannot be detected by available methods, and in most cases the protein-based methods are unable to discriminate between authorised and non-authorised events carrying the same trait. On top of this, these methods are usually less sensitive than DNA-based methods, and therefore may fail to detect GM levels exceeding those that may be defined in contracts or legislation. The solution that we are looking for is, of course, a rapid and cheap detection method that covers all sorts of GMOs and that will allow us not only to assess whether GM material is present, but also which trait and event the material is derived from, and what the quantity is. Along with our current efforts to develop such tools - for example, based on the application of DNA microarray technology - we are also looking at other ways to rationalise the analytical work, such as by using methods based on decision trees.
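The reaction count mentioned above scales linearly with the number of events. A minimal sketch in Python, assuming the "duplicate with two replicates" set-up described in the answer:

```python
def reactions_per_sample(n_events: int, duplicates: int = 2, replicates: int = 2) -> int:
    """Minimum number of event-specific real-time PCR reactions for one sample,
    assuming each event is tested in duplicate on two replicates of the sample."""
    return n_events * duplicates * replicates

# The example from the interview: 10 events -> at least 40 reactions per sample.
print(reactions_per_sample(10))   # 40
# With, say, 50 events on the market, the same strategy already needs 200 reactions,
# which illustrates why the approach is being challenged.
print(reactions_per_sample(50))   # 200
```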

With the prospect of an ever-increasing number of GM organisms, won’t the possibility of detecting unexpected events approach null sooner or later? What can the Co-Extra project contribute to the solution of this problem?

We do not detect unexpected events with the methods routinely used by analytical laboratories today. Most laboratories in Europe struggle with the workload of looking for the EU-authorised events, and in other parts of the world the situation is more or less the same, even if they use other analytical methods. To be able to detect unexpected events, you need to redesign your analytical strategy - to expect the unexpected. Clever design may bring us from a very low probability of detection to a very high one. But, of course, it will also compete for the available resources for analytical testing. As long as the stakeholders ask for quantification of authorised events, that is what the laboratories will test for. If the stakeholders ask for detection of non-authorised events, then the laboratories will try to deliver. The Co-Extra project acknowledges that the toolbox for the detection of unexpected events needs to be upgraded, so we are exploring different alternatives. Some are rather simple, and some are extremely complex. Notably, they may find different domains of application.

What is the most promising method to detect an unknown GMO, even if the GMO contains absolutely new and unpredictable genetic elements?

As a scientist, I find this an extremely intriguing question. How do you find, characterise and risk-assess an object when you have absolutely no idea about its nature? We asked ourselves this question a couple of years ago, discussed the possibility of using high-density microarray technology, and contacted experts in bioinformatics at the University of Oslo. Together, we started a journey in which we have gradually moved from scepticism to optimism. Recently, we [Nesvold et al., 2005] published a paper in the scientific journal Bioinformatics based on the results of computer-simulation studies of the problem. The conclusion was that - at least from a theoretical point of view - the approach could work. So, in the Co-Extra project we have initiated experimental work on real biological materials. While there is a long way to go, and still many pits, barbed-wire fences and minefields to cross before we reach the real battlefield, I see this approach as surprisingly promising.

Can you foresee if the methods under development will safely prevent incidents such as the LLrice601 case?

Yes, I believe so. I have to say, though, that there is a fundamental difference between testing a sample derived from a single unprocessed plant specimen and testing a sample derived from a blend of several plant specimens, unprocessed or processed. There is also a fundamental difference between looking for something which is quite similar to a previously known event, and a GMO where practically everything that has been inserted is different from anything used in previously known events.

I realise that this is becoming quite technical and complex.

Indeed, this sounds very complex and challenging. Could you explain that more practically?

Metaphorically, GMO detection may be compared to finding new text elements in a text on a computer, using a text search tool. Let’s imagine that the DNA of a certain crop species corresponds to the text of Shakespeare’s "Julius Caesar". This text is available in numerous editions, versions and translations.

The “genetic” modification we want to introduce is a paragraph of text from a completely different source, e.g. an agronomy textbook. All texts contain some terms that are found ubiquitously in any type of text. The typical, known “text elements” of GMOs have a structure that would basically be comparable with the contents of paragraphs about herbicides, viral plant diseases, insect diseases and insecticides. If you only search for terms typical of literature on herbicides, insecticides, and viral and insect diseases, then of course you are most likely to find those new genetic elements, but you are unlikely to detect anything else - say, a paragraph on geology that someone has inserted - unless you read the entire text.

Furthermore, if somebody took the agronomic paragraph and inserted it in a different position in another edition of Julius Caesar, then it might take a long time before anyone realised that it was not just a reproduction of an earlier modified edition.

OK, you are talking about the difference between construct-specific detection and event-specific detection. The former detects only a certain new genetic element, but this element can be contained in several GM events. If you want to know which GM event is in your sample, you have to use an event-specific method.

Yes, a construct-specific detection method may be compared with looking only for the paragraph that was inserted. An event-specific method, on the other hand, may be compared with looking only for the junction between the original text of Julius Caesar and the inserted paragraph [see illustration below]. The computer’s search tool can only find perfect matches. So the challenge in detecting a completely new GMO event is to find a more or less perfect match of the insert, together with a completely new insert junction.
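The analogy can be turned into a small illustrative sketch in Python, where toy text strings stand in for DNA sequences. A construct-specific search looks only for the inserted "paragraph"; an event-specific search looks for the junction between the host text and the insert. The strings, positions and function names are invented for illustration.

```python
# Toy illustration of the text-search analogy: the strings stand in for DNA.
host_text = "friends romans countrymen lend me your ears"   # the unmodified "Julius Caesar" text
insert    = "spray the field with herbicide"                 # the inserted "agronomic paragraph"

# Two modified "editions": same insert, different insertion points (= different events).
event_A = host_text[:15] + insert + host_text[15:]
event_B = host_text[:30] + insert + host_text[30:]

def construct_specific_hit(sample: str) -> bool:
    """Construct-specific detection: does the sample contain the insert at all?
    This cannot tell event_A and event_B apart."""
    return insert in sample

def event_specific_hit(sample: str, junction_length: int = 10) -> bool:
    """Event-specific detection designed for event_A: look for the short stretch
    spanning the boundary between host text and insert (15 is where the insert
    was placed in event_A)."""
    junction = event_A[15 - junction_length:15 + junction_length]
    return junction in sample

print(construct_specific_hit(event_A), construct_specific_hit(event_B))  # True True
print(event_specific_hit(event_A), event_specific_hit(event_B))          # True False
```

Both modified "editions" contain the same insert, so the construct-specific search cannot tell them apart, while the junction search only matches the event it was designed for - which mirrors why event-specific methods developed for Bt11 or LL62 do not detect Bt10 or LL601.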


Is it even more challenging to find unexpected events in a mixture of different raw materials, or in processed food?

Yes, indeed. Analysing a sample consisting only of a certain GM plant is comparable to searching in a single copy of a single edition of Julius Caesar. You either find perfect matches of both insert and junction, or, if you only find the insert but not the junction, you may conclude that the modification is different. But if you have to search simultaneously in a text consisting of hundreds of fragmented copies of different editions of Julius Caesar - comparable to analysing processed food consisting of a mixture of raw materials from different sources - both text objects may be present and found, but the new modification may also be present. How would you find it, without having to read all the text? This is where we try to be smart.

What does this mean in practice?

LL601 rice and Bt10 maize were both created by transformation of the recipient plant with a genetic construct also used to transform other GM events (i.e. LL62 rice and Bt11 maize, respectively). If we detect the genetic construct from Bt11, for example, but not the expected event-specific sequence motif of Bt11, then logically we must conclude that we are confronted with a GMO which is not the known event (Bt11) but which has been transformed with the known genetic construct. In a single plant, that conclusion may be drawn directly. Some of the multiplex screening tools that we are developing, for example DNA arrays, may allow us to make this type of observation. In a blended product, both Bt11 and the other GMO (Bt10) may be present at the same time. Then, a comparison of quantities may lead to the conclusion that there is more of the construct present than can be explained by the known event (Bt11). So here, we may also have to use conventional real-time PCR methods, or sufficiently reliable quantitative multiplex methods. This approach is what we call a differential display approach. We may also use anchor PCR profiles, which are a type of DNA fingerprinting approach, and sequence the resulting unexpected fragment or fragments. This approach could reveal that the fragment comes from a different insertion site - which is a strong indicator that we are confronted with a different transformation event.
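A minimal sketch in Python of the quantitative reasoning behind such a differential comparison in a blended product. The function name, the percentages and the tolerance threshold are invented for illustration; in practice the inputs would be real-time PCR quantification results with properly characterised measurement uncertainty.

```python
def unexpected_event_suspected(construct_pct: float,
                               known_event_pct: float,
                               tolerance: float = 0.1) -> bool:
    """Illustrative differential comparison in a blended product.

    construct_pct   : quantity measured with a construct-specific method
                      (e.g. the Bt11-type construct), as % of the ingredient.
    known_event_pct : quantity measured with the event-specific method for the
                      known, authorised event (e.g. Bt11), same units.
    tolerance       : allowance for measurement uncertainty (assumed value).

    If clearly more construct is present than the known event can explain,
    another event carrying the same construct (e.g. Bt10) is suspected.
    """
    return construct_pct > known_event_pct * (1.0 + tolerance)

# Invented example values: 0.9 % construct but only 0.5 % Bt11 -> suspicion raised.
print(unexpected_event_suspected(0.9, 0.5))   # True
# 0.52 % construct vs 0.5 % Bt11 is within the assumed uncertainty -> no suspicion.
print(unexpected_event_suspected(0.52, 0.5))  # False
```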

How can your new DNA-chip technology help to raise the probability of finding unknown GMO events?

The technology described in the Nesvold et al. paper can only be applied directly to single, unprocessed plant specimens. However, with this technology we may screen for up to several million different sequence motifs simultaneously. If these motifs are selected or designed appropriately, the result of a screening may allow us to infer enough about the structure of the insert to proceed very rapidly towards the isolation, sequencing and functional characterisation of the insert. If someone came to me with a plant specimen and said that it was highly suspected of being genetically modified, yet did not respond to any of the available tests for GM material, this tool may allow us to give an answer - if we succeed with its development. In that case, the test will be expensive - so it is not something you would use for routine analysis. However, I believe that it is absolutely necessary that we develop the tool. On the day the tool is really needed, it will be too late to start its development. You may ask yourself: will we ever need the tool? I hope not, but we have to prepare for a situation where we do. In my opinion, failing to stop a really harmful GMO just because we failed to prioritise the development of the tool to detect it is not an option.
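As a very rough, purely conceptual illustration of motif screening - not the actual method of Nesvold et al. - the following toy Python sketch checks which probes from a set of short sequence motifs are present in a sample sequence. All sequences, names and the probe length are invented.

```python
def screen_motifs(sample_sequence: str, probe_motifs: set, k: int = 8) -> set:
    """Toy version of multiplexed motif screening: report which of the probe
    motifs (length-k subsequences "on the chip") occur in the sample sequence.
    The real microarray approach screens up to millions of motifs in parallel."""
    observed = {sample_sequence[i:i + k] for i in range(len(sample_sequence) - k + 1)}
    return probe_motifs & observed

# Invented toy data: a short "genome" with an inserted foreign stretch.
host   = "ATGCCGTAGGCTTACGATCGGATCCTAGCTAGGCTA"
insert = "GGGTTTAAACCCGGGTTTAAA"
sample = host[:18] + insert + host[18:]

# A probe set built from known construct elements (here: 8-mers of the insert).
probes = {insert[i:i + 8] for i in range(len(insert) - 7)}
hits = screen_motifs(sample, probes)
print(f"{len(hits)} of {len(probes)} probes hit -> insert structure can be inferred")
```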

Do you have an idea of when the new DNA-chip method will be available in practice?

Detection of unauthorised events is possible with existing technology, but not under the current testing regimes. Since the EU has had a labelling requirement in its legislation for several years, there is a tradition of prioritising quantitative analyses to detect authorised events. As the diversity of tests required to comply with the regulatory requirements increases, so do the costs. Several of the novel approaches that we are developing within the Co-Extra project will become available during the project. However, the approach described by Nesvold et al. is extremely complicated, and “availability” can mean different things. If you mean a proof of concept, then I believe it will be completed within the duration of the project. If you mean a tool to support the stakeholders, then we need more time and funding.

What about the costs of such methods? Do they raise or lower the expenditures required for controlling the supply chain?

Multiplex screening tools being developed in the project will result in cheaper and, possibly, faster testing. They will also cover more events than before. After all, most of the partners working on this are already doing GMO testing routinely. So - probably better than any other stakeholders - we can perceive the potential benefit of more rational testing methods, and also the limitations of present-day approaches. However, unexpected events may add significant complexity to the picture. The Nesvold et al. approach is not going to lower testing costs, simply because it is a tool to be used in a field of testing where previously nothing could be, or has been, done. It is a tool to be applied when suspicion is sufficiently strong, when the material allows for testing, and when the potential negative consequences of not testing outweigh the expenditures of testing.




Contacts: