This is a guest post from Mari Niemi at the Wellcome Trust Sanger Institute. Mari is a graduate researcher whose work combines the results of human genetic studies with zebrafish models to study human disease.
The turn of the year 2012/13 saw the emergence of a new and exciting – and some may even say revolutionary – technique for targeted genome engineering, namely the clustered regularly interspaced short palindromic repeat (CRISPR) system. Harboured within the cells of many bacteria and archaea, in the wild CRISPRs act as an adaptive immune defence system, chopping up foreign DNA. However, they are now being harnessed for genetic engineering in several species, most notably in human cell lines and the model animals mouse (Mus musculus) and zebrafish (Danio rerio). This rapid genome editing lets us study the function of genes and mutations, and may even help improve the treatment of genetic diseases. But what makes this technology better than what came before, what are its downsides, and how revolutionary will it really be?
Genetic engineering – then and now
Taking a step backward, the ability to edit specific parts of an organism’s genetic material is certainly not a novel practice. Over the last decade or two, zinc finger nucleases (ZFNs) and, more recently, transcription activator-like effector nucleases (TALENs) made it possible to delete and introduce genetic material at desired sites, from larger segments of DNA down to single base-pair point mutations. ZFNs and TALENs are now fairly established methods, yet constructing these components and applying them in the laboratory can be extremely tedious and time-consuming due to the complex ways in which they bind to DNA. Clearly, there is much room for improvement, and a desire for faster, cheaper and more efficient techniques if genome engineering is ever to be applied to the treatment of human disease.
Continue reading ‘How emerging targeted mutation technologies could change the way we study human genetics’
Last week, the FDA sent a sternly-worded letter to the personal genomics company 23andMe, arguing that the company is marketing an unapproved diagnostic device. Many have weighed in on this, but I’d like to highlight a thoughtful post by Mike Eisen.
Eisen makes the important point that interpreting the genetics literature is complicated, and a company (like 23andMe) that provides this interpretation as a service could potentially add value. I’d like to add a simple point: this is absolutely not limited to genetics. In fact, there are already many software applications that calculate your risk for various diseases based on standard (i.e. non-genetic) epidemiology. For example, here’s a (NIH-based) site for calculating your risk of having a heart attack:
And here’s a site for calculating your risk of having a stroke in the next 10 years:
And here’s one for diabetes. And colorectal cancer. And breast cancer. And melanoma. And Parkinson’s.
I don’t point this out because it leads to an obvious conclusion; it doesn’t. But all of the scientific points made about risk prediction from 23andMe (the models are not very predictive, they’re missing a lot of important variables, there are likely errors in measurements, etc.) of course apply to traditional epidemiology as well. Ultimately, I think a lot rides on the question: what is the aspect of 23andMe that sets them apart from these websites and makes them more suspect? Is it because they focus on genetic risk factors rather than “traditional” risk factors (though note several of these sites ask about family history, which of course implicitly includes genetic information)? Is it the fact that they’re a for-profit company selling a product? Is it something about the way risks are reported, or the fact that risks for many diseases are presented on a single site? Is it because some genetic risk factors (like BRCA1) have strong effects, while standard epidemiological risk factors are usually of small effect? Or is it something else?
This is a guest post by Danny Wilson from the University of Oxford. Danny was recently awarded a Wellcome Trust/Royal Society fellowship at the Nuffield Department of Medicine, and in this post he tells us why you cannot understand human genetics without studying the genetics of microbes. If you are a geneticist who finds this post interesting, he is currently hiring.
Never mind about sequencing your own genome. Only 10% of cells in your “human” body are human anyway; the rest are microbial. And their genomes are far more interesting.
For one thing, there’s a whole ecosystem out there, made up of many species. Typically a person harbours 1,000 or more different species in their gut alone. For another, a person’s health is to a large part determined by the microbes that live on their body, whether that be as part of a long-term commensal relationship or an acute pathogenic interaction.
With 20% of the world’s deaths still attributable to infectious disease, the re-emergence of ancient pathogens driven by ever-increasing antibiotic resistance, and the UK’s 100K Genome Project – many of which will have to be microbial genomes from patients rather than the patients’ own genomes, given its budget – pathogen genomics is very much at the top of the agenda.
So what do pathogen genomes have to tell us? Continue reading ‘Guest post: Human genetics is microbial genomics’
This guest post from Daniel Howrigan, Benjamin Neale, Elise Robinson, Patrick Sullivan, Peter Visscher, Naomi Wray and Jian Yang (see biographies at end of post) describes their recent rebuttal of a paper claiming to have developed a new approach to genetic prediction of autism. This story has also been covered by Ed Yong and Emily Willingham. Genomes Unzipped authors Luke Jostins, Jeff Barrett and Daniel MacArthur were also involved in the rebuttal.
Last year, in a paper published in Molecular Psychiatry, Stan Skafidas and colleagues made a remarkable claim: a simple genetic test could be used to predict autism risk from birth. The degree of genetic predictive power suggested by the paper was unprecedented for a common disease, let alone for a disease as complex and poorly understood as autism. However, instead of representing a revolution in autism research, many scientists felt that the paper illustrated the pitfalls of pursuing genetic risk prediction. After nearly a year of study, two papers have shown how the Skafidas et al. study demonstrates the dangers of poor experimental design and biases due to important confounders.
The story in a nutshell: the Skafidas paper proposes a method for generating a genetic risk score for autism spectrum disorder (ASD) based on a small number of SNPs. The method is fairly straightforward – analyze genetic data from ASD case samples and from publicly available controls to develop, test, and validate a prediction algorithm for ASD. The stated result – Skafidas et al. claim successful prediction of ASD based on a subset of 237 SNPs. For the downstream consumer, the application is simple – have your doctor take a saliva sample from your newborn baby, send in the sample to get genotyped, and get a probability of your child developing ASD. It would be easy to test fetuses and for prospective parents to consider abortions if the algorithm suggested high risk of ASD.
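A risk score of this kind is typically an additive sum of weighted genotypes. The sketch below is purely illustrative (it is not Skafidas et al.'s actual algorithm, and the weights and intercept are made up): each SNP contributes its risk-allele count times a per-SNP weight, and the total is mapped to a probability with a logistic link.

```python
import math

def risk_score(genotypes, weights):
    """Weighted sum of risk-allele counts (0, 1 or 2) across the scored SNPs."""
    return sum(g * w for g, w in zip(genotypes, weights))

def risk_probability(score, intercept=-2.0):
    """Map a score to a probability with a logistic link (intercept is illustrative)."""
    return 1.0 / (1.0 + math.exp(-(intercept + score)))

# Toy example: three SNPs with made-up per-SNP log odds ratios.
genos = [2, 0, 1]           # risk-allele counts for one individual
wts = [0.3, 0.1, -0.2]      # illustrative weights, e.g. from a training cohort
score = risk_score(genos, wts)   # 2*0.3 + 0*0.1 + 1*(-0.2) = 0.4
prob = risk_probability(score)
```

The critique that follows turns on how such weights are chosen and validated: if SNP selection or weighting leaks information from the test set, or is confounded by ancestry, the apparent predictive power evaporates.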
The apparent simplicity is refreshing and, from the lay perspective, the result will resonate above all the technical jargon of multiple-testing correction, linkage disequilibrium (LD), or population stratification that dominates our field. This is what makes the paper all the more dangerous, because lurking beneath the appealing results are flawed methodology and design, as we describe below.
We begin our critique with the abstract from Skafidas et al. (emphasis added):
Continue reading ‘Guest post: the perils of genetic risk prediction in autism’
Barbara Prainsack is at the Department of Social Science, Health & Medicine at King’s College London. Her work focuses on the social, regulatory and ethical aspects of genetic science and medicine.
More than seven years ago, my colleague Gil Siegal and I wrote a paper about pre-marital genetic compatibility testing in strictly orthodox Jewish communities. We argued that by not disclosing genetic results at the level of individuals but exclusively in terms of the genetic compatibility of the couple, this practice gave rise to a notion of “genetic couplehood”, conceptualizing genetic risk as a matter of genetic jointness. We also argued that this particular method of genetic testing worked well for strictly orthodox communities but that “genetic couplehood” was unlikely to go mainstream.
Then, last month, a US patent awarded to 23andMe – which triggered heated debates in public and academic media (see here, here, here, here and here, for instance) – seemed to prove this wrong. The most controversial part of the patent was a claim to a method for gamete donor selection that could give clients of fertility clinics a say in what traits their future offspring would be likely to have. The fact that these “traits” include genetic predispositions not only to diseases but also to personality or physical and aesthetic characteristics unleashed fears that a Gattaca-style eugenicist future is in the making. Critics have also suggested that consideration of the moral content of the innovation could or should have stopped the US Patent and Trademark Office from awarding the patent.
Continue reading ‘Guest post: 23andMe’s “designer baby” patent: When corporate governance and open science collide’
The UK’s ambitious plan to sequence 100,000 whole genomes of NHS patients over the next 3-5 years, announced by the UK Prime Minister in December last year, sparked interest and curiosity throughout the UK genetics community. Undeterred by the enormity of the task, a new company, Genomics England Limited (GeL), was set up in June of this year by the Department of Health, tasked with delivering the UK100K genome project. Yesterday, they held what I’m sure will be the first of many ‘Town Hall’ engagement events, to inform and consult clinicians, scientists, patients and the public on their nascent plans.
So what did we learn? First, let’s be clear on the aims. GeL’s remit is to deliver 100,000 whole genome sequences of NHS patients by the end of 2017. No fewer patients, no less sequence. At its peak, GeL will produce 30,000 whole genome sequences per year. There’s no getting away from the fact that this is an extremely ambitious plan! But fortunately, the key people at GeL are under no illusions about the fact that theirs is a near impossible task. Continue reading ‘Genomics England and the 100,000 genomes’
Dr. Tuuli Lappalainen is a postdoctoral researcher at Stanford University, where she works on functional genetic variation in human populations and specializes in population-scale RNA-sequencing. She kindly agreed to write a guest post on her recent publication in Nature, “Uncovering functional variation in humans by genome and transcriptome sequencing”, which describes work done while she was at the University of Geneva. -DM
In a paper published online today in Nature we describe results of the largest RNA-sequencing study of multiple human populations to date, and provide a comprehensive map of how genetic variation affects the transcriptome. This was achieved by RNA-sequencing of individuals that are part of the 1000 Genomes sample set, thus adding a functional dimension to the most important catalogue of human genomes. In this blog post, I will discuss how our findings shed light on genetic associations to disease.
As genome-wide studies are providing an increasingly comprehensive catalog of genetic variants that predispose to various diseases, we are faced with a huge challenge: what do these variants actually do in the cell? Understanding the biological mechanisms underlying diseases is essential to develop interventions, but traditional molecular biology follow-up is not really feasible for the thousands of discovered GWAS loci. Thus, we need high-throughput approaches for measuring genetic effects at the cellular level, which is an intermediate between the genome and the disease. The cellular trait most amenable to such analysis is the transcriptome, which we can now measure reliably and robustly by RNA-sequencing (as shown by our companion paper in Nature Biotechnology).
In this project, several European institutes of the Geuvadis Consortium sequenced mRNA and small RNA from lymphoblastoid cell lines from 465 individuals who are part of the 1000 Genomes sample set. The idea of gene expression analysis of genetic reference samples is not new (see e.g. papers by Stranger et al., Pickrell et al. and Montgomery et al.), but the bigger scale and better quality enables discovery of exciting new biology.
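The core statistical idea behind this kind of map is the expression quantitative trait locus (eQTL) test. The minimal sketch below (not the Geuvadis pipeline, which involves normalization, covariate correction and permutation-based significance testing) regresses one gene's expression on genotype dosage at one nearby SNP and returns the least-squares slope, i.e. the estimated effect of each extra allele on expression:

```python
from statistics import mean

def eqtl_effect(genotypes, expression):
    """Least-squares slope of expression level on genotype dosage (0/1/2)."""
    gm, em = mean(genotypes), mean(expression)
    cov = sum((g - gm) * (e - em) for g, e in zip(genotypes, expression))
    var = sum((g - gm) ** 2 for g in genotypes)
    return cov / var

# Toy data: expression rises by ~0.5 units per copy of the alternative allele.
genos = [0, 0, 1, 1, 2, 2]
expr = [1.0, 1.1, 1.5, 1.6, 2.0, 2.1]
beta = eqtl_effect(genos, expr)  # close to 0.5
```

In a real study this test is repeated for every gene against every common variant in a window around it, which is why multiple-testing correction dominates the analysis.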
Continue reading ‘Uncovering functional variation in humans by genome and transcriptome sequencing’
Now the dust has settled, I’ve been reflecting on the controversial recommendation from the American College of Medical Genetics and Genomics (ACMG) that all clinical genomes should be screened for a specific set of conditions. Following the release of the guidelines, the European Society of Human Genetics (ESHG) published its more conservative recommendations, and vigorous debate has continued internationally regarding the wisdom of introducing genomic screening. While I still have some major reservations about the policy (outlined in previous posts), upon reflection there are certainly some aspects that make a lot of sense…
Continue reading ‘Further reflections on genomic screening’
The ongoing debate about whether, what, when and how to feed back incidental findings (IFs) from whole genome sequencing continues to rage on both sides of the Atlantic following the American College of Medical Genetics and Genomics’ controversial recommendations on reporting IFs, released last month. In an unexpected twist, the authors of the guidance have now written “a clarification” in response to the many criticisms that have been raised, including here on GenomesUnzipped. The clarification covers five points – autonomy, children, labs, communication and interpretation.
Continue reading ‘ACMG guidelines on IFs – responding to the response…’
This is a guest post by Graham Coop and Peter Ralph, cross-posted from the Coop Lab website.
We’ve been addressing some of the FAQs on topics arising from our paper on the geography of recent genetic genealogy in Europe (PLOS Biology). We wanted to write one on shared genetic material in personal genomics data but it got a little long, and so we are posting it as its own blog post.
Personal genomics companies that type SNPs genome-wide can identify blocks of shared genetic material between people in their databases, offering the chance to identify distant relatives. Finding a connection to someone else who is an unknown relative is exciting, whether you do this through your family tree or through personal genomics (we’ve both pored over our 23&me results a bunch). However, given the fact that nearly everyone in Europe is related to nearly everyone else over the past 1000 years (see our recent paper and FAQs), and likely everyone in the world is related over the past ~3000 years, how should you interpret that genetic connection?
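A rough back-of-envelope calculation helps put these connections in perspective. The sketch below follows the general logic of this kind of argument (it is not Coop and Ralph's exact formula; the chromosome count, a genetic map length of ~35 Morgans, and the simple segment-survival approximation are all assumptions): between two k-th cousins there are 2(k+1) meioses, an ancestor's genome is broken into roughly C + L·m segments after m meioses, and each segment descends to both cousins with probability about 2^(1-m) (the factor of 2 because full cousins share a couple of ancestors).

```python
def expected_shared_blocks(k, chromosomes=22, map_length=35):
    """Very rough expected number of genomic blocks shared by two k-th cousins."""
    meioses = 2 * (k + 1)                       # meioses separating the cousins
    segments = chromosomes + map_length * meioses  # pieces of the ancestral genome
    return segments * 2 ** (1 - meioses)        # pieces surviving in both cousins

# First cousins (k=1): on the order of twenty shared blocks.
# Sixth cousins (k=6): the expectation falls well below one, so most
# sixth-cousin pairs share no detectable block at all.
```

This is why a detected shared block tells you that you have *some* common ancestor, but usually cannot tell you which of your astronomically many distant cousins the match actually corresponds to.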
Continue reading ‘Identification of genomic regions shared between distant relatives’