The debate about whether, what, when and how to feed back incidental findings (IFs) from whole genome sequencing continues to rage on both sides of the Atlantic following the American College of Medical Genetics and Genomics’ controversial recommendations on reporting IFs, released last month. In an unexpected twist, the authors of the guidance have now written “a clarification” in response to the many criticisms that have been raised, including here on Genomes Unzipped. The clarification covers five points – autonomy, children, labs, communication and interpretation.
By now, we’re probably all familiar with Niels Bohr’s famous quote that “prediction is very difficult, especially about the future”. Although Bohr’s experience was largely in quantum physics, the same problem is true in human genetics. Despite a plethora of genetic variants associated with disease – with frequencies ranging from ultra-rare to commonplace, and effects ranging from protective to catastrophic – variants for which we can accurately predict severity, onset and clinical implications are still few and far between. Phenotypic heterogeneity is the norm even for many rare Mendelian variants, and despite the heritable nature of many common diseases, genomic prediction is rarely good enough to be clinically useful.
The breadth of genomic complexity was really brought home to me a few weeks ago while listening to a range of fascinating talks at the Genomic Disorders 2013 conference. Set against a policy backdrop that includes the recent ACMG guidelines recommending opportunistic screening of 57 genes, and ongoing rumblings in the UK about the 100,000 NHS genomes, the lack of predictability in genomic medicine is rather sobering. For certain genes and diseases, we can or will be able to make accurate and clinically useful predictions; but for many, we can’t and won’t. So what’s the problem? In short, context matters – genomic, environmental and phenotypic. Here are six reasons why genomic prediction is hard, all of which were covered by one or more speakers at Genomic Disorders (I recommend reading to the end – the last one on the list is rather surprising!):
One of the major bioethical debates in clinical genetics and genomics research is the issue of what to do with incidental or secondary findings (IFs) unrelated to the original clinical or research question. Every genome contains thousands of rare variants, including a surprising number of loss of function variants, as well as hundreds of variants associated with common disease and dozens linked with recessive conditions. As whole genome or exome sequencing is used more routinely in non-anonymised cohorts – such as the 100,000 patient genomes to be sequenced by the UK NHS – these variants will be uncovered and linked to an increasing number of individuals. What should we do with them?
Robert Green of Brigham and Women’s Hospital in Boston, who co-chairs the American College of Medical Genetics (ACMG) working group on secondary findings, was quoted in a Nature blog last year saying, “we don’t think it’s going to be a sustainable strategy for the evolving practice of genomic medicine to ignore secondary findings of medical importance”. But just saying it doesn’t make it so. There are still numerous questions that need to be addressed – you can be part of the debate by participating in the Sanger Institute’s Genomethics survey.
The recent announcement that the UK Government has earmarked £100 million to “sequence 100,000 whole genomes of NHS patients at diagnostic quality over the next three to five years” raises a number of questions, with which the Department of Health are no doubt grappling as I write. I’ve previously discussed the thorny issue of using targeted versus whole genome sequencing to maximize diagnostic yield and benefit patients. However, one of the great achievements of next generation sequencing technologies is to make the assay – actually sequencing the genome (or some portion of it) – one of the easier parts of clinical genomics. Although laboratories will have to be suitably equipped, staffed and flexibly managed to deal with high sample throughput and ever-changing scientific specifications, the biggest challenge will be to implement genomic knowledge in the clinic.
On 10th December 2012, UK Prime Minister David Cameron launched a Report on the Strategy for UK Life Sciences One Year On by announcing that the Government has earmarked £100 million to “sequence 100,000 whole genomes of NHS patients at diagnostic quality over the next three to five years”. This ambitious initiative – which will focus initially on cancer, rare diseases and infectious diseases – aims to train a new generation of genetic scientists, stimulate the UK life sciences industry and “revolutionise” patient care.
There is no doubt that this investment offers a major opportunity for the UK to firmly establish itself as a world leader in medical genomics. However, deciding how best to use the £100M to maximise patient benefit will be a challenge. There are numerous implementation issues, outlined in the PHG Foundation’s response to the announcement. Not least of these is the urgent need for informatics provision to facilitate storage, processing, annotation, interpretation and secure access to both genomic and phenotypic data. This will involve determining appropriate ethical and operational standards across a broad range of questions.
But there is one particularly crucial question that needs to be answered early on: what is the most appropriate assay to use for clinical implementation? All the literature released by the Government, and quoted extensively by the press, states quite categorically that the money will be used for “sequencing whole genomes”. Surely this can’t really be true? (I certainly hope it’s just coincidence that if you multiply a £1000 genome by 100,000 patients you reach the magic figure of £100 million…) If it is the case, there are several major problems.
About a year ago on this site, I discussed a model for addressing some of the major problems in scientific publishing. The main idea was simple: replace the current system of pre-publication peer review with one in which all research is immediately published and only afterwards sorted according to quality and community interest. This post generated a lot of discussion; in conversations since, however, I’ve learned that almost anyone who has thought seriously about the role of the internet in scientific communication has had similar ideas.
The question, then, is not whether dramatic improvements in the system of scientific publication are possible, but rather how to implement them. There is now a growing trickle of papers posted to pre-print servers ahead of formal publication. I am hopeful that this is bringing us close to dispensing with one of the major obstacles in the path towards a modern system of scientific communication: the lack of rapid and wide distribution of results.*
Continue reading ‘The first steps towards a modern system of scientific publication’
The recent announcement of a new journal sponsored by the Howard Hughes Medical Institute, the Max Planck Society, and the Wellcome Trust generated a bit of discussion about the issues in the scientific publishing process it is designed to address—arbitrary editorial decisions, slow and unhelpful peer review, and so on. Left unanswered, however, is a more fundamental question: why do we publish scientific articles in peer-reviewed journals to begin with? What value does the existence of these journals add? In this post, I will argue that cutting journals out of scientific publishing to a large extent would be unconditionally a good thing, and that the only thing keeping this from happening is the absence of a “killer app”.
Disclaimer: Genomes Unzipped received 12 free kits from Lumigenix for review purposes, and Dan Vorhaus has provided legal advice to the company. We plan to release a full review of the Lumigenix service in early July.
Last month three direct-to-consumer (DTC) genetic testing companies opened their mailboxes to find a slightly ominous but entirely expected letter from the FDA. The three recipients (Lumigenix, American International Biotechnology Services and Precision Quality DNA) received substantively equivalent letters, with the FDA warning each company that its genetic testing service “appears to meet the definition of a device as that term is defined in section 201(h) of the Federal Food Drug and Cosmetic Act,” and that the agency would like to meet with company representatives “to discuss whether the service [they] are promoting requires review by FDA and what information [they] would need to submit in order for [their] product to be legally marketed.”
Translated from bureaucratese, that means that the FDA views these services as ones that may need to be formally reviewed by the agency and either approved or cleared before they can be legally sold. The FDA letter asks each company to describe its service and to explain either (1) why it does not require FDA approval or (2) how the company plans to pursue such approval.
This is a strategy that the FDA has pursued with a growing cadre of DTC service providers. These letters (currently 23 and counting) represent the only public and company-specific actions the agency has taken to date with respect to DTC genetic testing. While many DTC letter recipients are engaged in dialogue with the FDA, those conversations have occurred beyond the public’s view. Until now.
[Editor's Note: This guest post is contributed by Blaine Bettinger. Blaine is the author of The Genetic Genealogist, a blog that examines the intersection of genetics and ancestry, and a patent attorney at Bond, Schoeneck & King in Syracuse, NY.]
As you may have heard, I recently made my 23andMe and Family Tree DNA autosomal testing results available for download online at “mygenotype,” and dedicated the information to the public domain (if dedicating DNA sequence to the public domain is even possible – I’m currently doing some research in this area and expect to write more in the future). [Editor's Note: see additional comments on personal genomics data in the public domain at the end of this post.]
At “mygenotype” you can download the following:
My Family Tree DNA Results:
- Affymetrix Autosomal DNA Results (2010)
- Affymetrix X-Chromosome DNA Results (2010)
- Illumina Autosomal DNA Results (2011)
- Illumina X-Chromosome DNA Results (2011)
My 23andMe Results:
- V2 Results (2008)
- V3 Results (2010)
- Y-DNA Results (2010)
- mtDNA Results (2010)
You can also find my SNPedia Promethease reports:
- Promethease Report using an early version of Affymetrix Family Finder DNA Results
- Promethease Report using V2 23andMe DNA Results
- Promethease Report using pooled 23andMe and Family Finder DNA Results
A Challenge To YOU
Now that the information is out there, available to anyone who might be interested, it remains to be seen who will actually use it.
This week has seen another FDA meeting seeking guidance on how to regulate direct-to-consumer (DTC) genetic tests in the US. The meeting itself has been covered by GNZ bloggers Daniel at Genetic Future and Dan at Genomics Law Report, and its apparent outcome has sparked furious debate elsewhere. The discussion among the “independent” panel convened at the meeting appeared to converge on the proposal that all health-related genomic tests should be ordered and reported through physicians. However, the outcomes of the meeting in terms of FDA policy remain unclear, and one FDA official has indicated that decisions about the availability of genetic tests will be made on a test-by-test basis.
There is no doubt that the appropriate regulation of personal genomics tests is a complex issue, and there is a diversity of opinion about how best to achieve it within GNZ (as there is throughout the genomics community). However, there are several points we agree on:
- Individuals have a fundamental right to access information about themselves, including genetic information. While it is important to also consider the accuracy, interpretation, validity and utility of tests, this underlying principle should guide policy.
- There is currently no evidence that DTC genetic tests pose a danger to consumers. A recent study of over 2,000 participants in DTC testing concluded that “testing did not result in any measurable short-term changes in psychological health”. In the absence of any evidence of harm there is no justification for restricting individual autonomy.
- DNA does not have magical powers, and does not require special treatment simply by virtue of being DNA. Genetic exceptionalism – the idea that genetics must be treated as special under the law – is an inappropriate basis for policy-making. Tests should be regulated appropriately based on their predictive power, utility and potential for harm, all of which are related concepts.
- As DNA sequencing becomes cheaper, the line between medical and non-medical testing will continue to blur. Excessive regulation of health-related genetic tests could also unnecessarily hinder the ability of people to access their entire genome sequences for other purposes (such as genetic genealogy).
- Most clinicians do not have the appropriate knowledge to interpret genomic tests, particularly in healthy individuals. This point is almost universally agreed, even by the FDA, and has certainly been the experience of some of the GNZ members upon taking our genetic results to doctors. Physicians in general are therefore a strange choice for ‘guardians of the genome’.
- Most early adopters of DTC genetic tests are sufficiently well-informed to understand the implications of a genomic test and interpret the results correctly. Putting a general physician between these informed individuals and their own genomes is paternalistic and unnecessary.
While the outcome of the FDA’s deliberations remains uncertain, it is clear that there will be intensive lobbying against any attempt at excessive regulation. In the worst case scenario, the fledgling and innovative personal genomics market could be crushed by the FDA. However, there is still plenty of room for a measured approach that enforces test accuracy, punishes false claims and promotes informed choices by consumers, without reducing the ability of responsible companies to continue to operate and innovate.
We urge others in the genomics community to make their voices heard on these issues. Let the FDA – and, if you’re based in the USA, your political representatives – know that regulation of genetic testing should be based on evidence, not fear, and that any attempt to unreasonably restrict your access to your own genetic information is unacceptable.