Author Archive for Guest Author

A rare variant in Mexico with far-reaching implications

This guest post was contributed by Karol Estrada, a postdoctoral research fellow in the Analytic and Translational Genetics Unit at Massachusetts General Hospital and the Broad Institute of MIT and Harvard. It is dedicated to the memory of Laura Riba.

Genome-wide association studies (GWAS) of common variants have successfully implicated more than 70 genomic regions in type 2 diabetes, revealing new biological pathways and potential drug targets. However, most large studies have examined genetic variation only in northwestern European populations, despite the rich genetic diversity in other populations around the world. Most studies have also been limited in their ability to detect variants present in fewer than 5 percent of people. Much remains to be learned.

In this post, we discuss our new paper, published in the Journal of the American Medical Association, on a low-frequency missense variant in the gene HNF1A that raises the risk of type 2 diabetes five-fold and was seen only in Latinos. It was the only rare variant to reach genome-wide significance in an exome sequencing study of almost 4,000 people, the largest such study to date. We explain the ramifications for sample sizes of rare-variant studies, note the importance of studying populations outside of northwestern Europe, and caution against simplistic dichotomous interpretations of disease as either complex or monogenic. Finally, we note that a low-frequency or rare variant might guide modification of therapy.
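To give some intuition for the sample-size point, here is a back-of-the-envelope power calculation. This is our illustration, not the paper's own power analysis: it assumes a simple two-sided two-proportion test on allele frequencies, a genome-wide significance threshold of 5e-8, and round numbers for the variant frequency and odds ratio.

```python
# Back-of-the-envelope sample size for detecting a low-frequency risk variant.
# Illustrative only: a simple two-proportion test on allele counts, not the
# paper's own power analysis.
from scipy.stats import norm

def case_freq(p0, odds_ratio):
    """Risk-allele frequency in cases implied by control frequency p0 and an allelic odds ratio."""
    odds = (p0 / (1 - p0)) * odds_ratio
    return odds / (1 + odds)

def n_per_group(p0, odds_ratio, alpha=5e-8, power=0.80):
    """Individuals needed per group; each individual contributes two alleles."""
    p1 = case_freq(p0, odds_ratio)
    pbar = (p0 + p1) / 2
    z_alpha = norm.isf(alpha / 2)   # two-sided significance threshold
    z_beta = norm.isf(1 - power)
    n_alleles = ((z_alpha * (2 * pbar * (1 - pbar)) ** 0.5
                  + z_beta * (p0 * (1 - p0) + p1 * (1 - p1)) ** 0.5) ** 2
                 / (p1 - p0) ** 2)
    return n_alleles / 2

# A 2% variant with a five-fold odds ratio versus the modest effects
# typical of common-variant GWAS hits:
for odds_ratio in (5.0, 2.0, 1.5):
    print(f"OR {odds_ratio}: ~{n_per_group(0.02, odds_ratio):.0f} cases "
          f"and as many controls")
```

The pattern to notice: at a five-fold odds ratio a 2% variant is detectable with a few hundred cases and controls, but the modest odds ratios typical of complex disease push the requirement into the tens of thousands.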
Continue reading ‘A rare variant in Mexico with far-reaching implications’

Guidelines for finding genetic variants underlying human disease

Authors: Daniel MacArthur and Chris Gunter.

New DNA sequencing technologies are rapidly transforming the diagnosis of rare genetic diseases, but they also carry a risk: by allowing us to see all of the hundreds of “interesting-looking” variants in a patient’s genome, they make it easy for researchers to spin a causal narrative around genetic changes that have nothing to do with disease status. Such false positive reports can have serious consequences: incorrect diagnoses, unnecessary or ineffective treatment, and reproductive decisions (such as embryo termination) based on spurious test results. To minimize such outcomes, the field needs to agree on clear statistical guidelines for determining whether a variant is truly causally linked to disease.

In a paper in Nature this week, we report the consensus statement from a workshop sponsored by the National Human Genome Research Institute on establishing guidelines for assessing the evidence for variant causality. We argue for a careful two-stage approach: first assessing the overall support for a causal role of the affected gene in the disease phenotype, and then assessing the probability that the variant(s) carried by the patient do indeed play a causal role in that patient’s disease. We argue for the primacy of statistical genetic evidence for new disease genes, which can be supplemented (but not replaced) by additional informatic and experimental support; and we emphasize the need for all forms of evidence to be placed within a statistical framework that considers the probability of any of the reported lines of evidence arising by chance.
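To make that last point concrete, here is a minimal sketch of the kind of calculation such a framework entails. This is our illustration rather than a procedure from the paper: it asks how surprising a cluster of de novo variants in one gene is, given a per-gene mutation rate (the rate below is hypothetical), and applies a crude genome-wide multiple-testing correction.

```python
# Probability of a cluster of de novo variants in one gene arising by chance,
# as a minimal example of putting one line of evidence in a statistical
# framework (illustrative; the mutation rate below is hypothetical).
from scipy.stats import poisson

def denovo_pvalue(k, n_trios, mu_gene, n_genes_tested=20000):
    """P(k or more de novo hits) in a gene with per-copy, per-generation
    mutation rate mu_gene, plus a crude Bonferroni genome-wide correction."""
    lam = 2 * n_trios * mu_gene          # each trio transmits two copies
    p = poisson.sf(k - 1, lam)           # P(X >= k)
    return p, min(1.0, p * n_genes_tested)

# Three de novo loss-of-function variants in 1,000 trios, for a gene with an
# assumed loss-of-function mutation rate of 1e-6:
p, p_genomewide = denovo_pvalue(3, 1000, 1e-6)
print(f"nominal p = {p:.1e}, genome-wide corrected p = {p_genomewide:.1e}")
```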

The paper itself is open access, so you can read the whole thing – we won’t attempt a complete summary here. However, we did want to discuss the back story and expand on a few issues raised in the paper.
Continue reading ‘Guidelines for finding genetic variants underlying human disease’

The undiscovered chromosome

This guest post was contributed by Taru Tukiainen, a postdoctoral research fellow in the Analytic and Translational Genetics Unit at Massachusetts General Hospital and the Broad Institute of MIT and Harvard.

The X chromosome contains around 5% of the DNA in the human genome, but it has remained largely unexplored in genome-wide association studies (GWAS) – to date, roughly two-thirds of GWAS have thrown X-chromosomal data out of their analyses. In a paper published in PLOS Genetics yesterday we dig into X chromosome associations and demonstrate why this stretch of DNA warrants particular attention in genetic association and sequencing studies. This post will focus on one of our key results: the possibility that some X-chromosome loci contribute to sexual dimorphism, i.e. biological differences between men and women.
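One concrete reason the X demands special treatment in association testing: males carry a single X, so the usual 0/1/2 genotype coding does not apply to them directly, and the appropriate male coding depends on what one assumes about X-inactivation. The sketch below shows the standard coding convention; it is a generic illustration, not code from our analysis pipeline.

```python
# Sex-aware genotype coding for X-chromosome association tests (a standard
# convention, not our paper's pipeline). Females are coded 0/1/2 as on the
# autosomes; hemizygous males are coded 0/2 under full X-inactivation, so a
# male carrier counts like a female homozygote, or 0/1 if the variant is
# assumed to escape inactivation.
import numpy as np

def code_x_genotypes(alt_allele_counts, is_male, full_inactivation=True):
    """alt_allele_counts: 0/1/2 for females, 0/1 for males."""
    g = np.asarray(alt_allele_counts, dtype=float)
    male = np.asarray(is_male, dtype=bool)
    return np.where(male & full_inactivation, 2 * g, g)

# Three females (0, 1, 2 copies) and two males (0, 1 copy):
print(code_x_genotypes([0, 1, 2, 0, 1], [False, False, False, True, True]))
# -> [0. 1. 2. 0. 2.] under full inactivation
```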
Continue reading ‘The undiscovered chromosome’

Pertinent and Non-pertinent Genomic Findings

About Guest Co-Author: Dr Ewan Birney is Associate Director of the EMBL European Bioinformatics Institute and a fellow blogger.

The ACMG recommendations on clinical genomic screening released earlier this year generated quite a storm. Criticisms broadly related to:

  • the principle of whether we are ready and able to offer genomic screening to people undergoing exome/genome sequencing (the topic of this post!);
  • to whom the recommendations should apply;
  • whether individuals have a right to refuse genomic screening results; and
  • the exact content of the list of genes/variants to be screened.

In the UK, this debate has come into sharp focus following the launch of the NHS 100,000 Genomes Project, where details of data interpretation and data sharing are still rather hazy. The central policy question is clear: in the context of clinical practice, how should we be using genomic data, and with whom, in order to maximise its benefits for patients? (In the context of research, sharing as broad as possible, consistent with patient consent, is most desirable.) Last month, we published a paper in the BMJ – along with a number of genetic scientists, clinical geneticists and other health specialists – advocating an evidence-based approach that places the emphasis on targeted diagnosis in the short term, and on gathering evidence for possible broader uses in future.

Continue reading ‘Pertinent and Non-pertinent Genomic Findings’

How emerging targeted mutation technologies could change the way we study human genetics


This is a guest post from Mari Niemi at the Wellcome Trust Sanger Institute. Mari is a graduate researcher who combines the results of human genetic studies with zebrafish models to study human disease.

The turn of the year 2012/13 saw the emergence of a new and exciting – some may even say revolutionary – technique for targeted genome engineering: the clustered regularly interspaced short palindromic repeat (CRISPR) system. Harboured within the cells of many bacteria and archaea, in the wild CRISPRs act as an adaptive immune defence, chopping up foreign DNA. They are now being harnessed for genetic engineering in several species, most notably in human cell lines and the model animals mouse (Mus musculus) and zebrafish (Danio rerio). This rapid genome editing is letting us study the function of genes and mutations, and may even help improve the treatment of genetic diseases. But what makes this technology better than what came before, what are its downsides, and how revolutionary will it really be?
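To make “targeted” concrete: the commonly used Cas9 from Streptococcus pyogenes is directed by a ~20-nucleotide guide sequence and cuts only where the matching genomic site is immediately followed by an “NGG” motif (the PAM). The toy scan below is our illustration of that constraint; it checks only the forward strand and does none of the off-target scoring that real guide-design tools perform.

```python
# Toy scan for candidate SpCas9 target sites: a 20-nt protospacer followed by
# an NGG PAM. Forward strand only; no off-target scoring (illustrative).
import re

def find_spcas9_sites(seq, protospacer_len=20):
    seq = seq.upper()
    sites = []
    for m in re.finditer(r"(?=([ACGT]GG))", seq):   # every NGG, overlaps allowed
        start = m.start() - protospacer_len
        if start >= 0:
            sites.append((start, seq[start:m.start()], m.group(1)))
    return sites   # (position, protospacer, PAM) triples

example = "ATGCGTACCGTTAGCATCGATCGGATCCGTAGCTAGCTAAGGTCC"
for pos, protospacer, pam in find_spcas9_sites(example):
    print(pos, protospacer, pam)
```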

Genetic engineering – then and now

Taking a step backward, the ability to edit specific parts of an organism’s genetic material is certainly not a novel practice. Over the last decade or two, zinc finger nucleases (ZFNs) and, more recently, transcription activator-like effector nucleases (TALENs) made it possible to delete and introduce genetic material at desired sites, from large segments of DNA down to single base-pair point mutations. ZFNs and TALENs are now fairly established methods, yet constructing them and applying them in the laboratory can be extremely tedious and time-consuming because of the complex ways in which they bind DNA. Clearly, there is much room for improvement, and a desire for faster, cheaper and more efficient techniques, particularly given the prospect of applying genome engineering to the treatment of human disease.

Continue reading ‘How emerging targeted mutation technologies could change the way we study human genetics’

Guest post: Human genetics is microbial genomics

This is a guest post by Danny Wilson from the University of Oxford. Danny was recently awarded a Wellcome Trust/Royal Society fellowship at the Nuffield Department of Medicine, and in this post he tells us why you cannot understand human genetics without studying the genetics of microbes. If you are a geneticist who finds this post interesting, he is currently hiring.

Never mind sequencing your own genome. Only 10% of the cells in your “human” body are human anyway; the rest are microbial. And their genomes are far more interesting.

For one thing, there’s a whole ecosystem out there, made up of many species. Typically a person harbours 1,000 or more different species in their gut alone. For another, a person’s health is to a large part determined by the microbes that live on their body, whether that be as part of a long-term commensal relationship or an acute pathogenic interaction.

With 20% of the world’s deaths still attributable to infectious disease, the re-emergence of ancient pathogens driven by ever-increasing antibiotic resistance, and the UK’s 100,000 Genomes Project – many of whose genomes, given its budget, will have to come from patients’ microbes rather than the patients themselves – pathogen genomics is very much at the top of the agenda.

So what do pathogen genomes have to tell us? Continue reading ‘Guest post: Human genetics is microbial genomics’

Guest post: the perils of genetic risk prediction in autism

This guest post from Daniel Howrigan, Benjamin Neale, Elise Robinson, Patrick Sullivan, Peter Visscher, Naomi Wray and Jian Yang (see biographies at end of post) describes their recent rebuttal of a paper claiming to have developed a new approach to genetic prediction of autism. This story has also been covered by Ed Yong and Emily Willingham. Genomes Unzipped authors Luke Jostins, Jeff Barrett and Daniel MacArthur were also involved in the rebuttal.

Last year, in a paper published in Molecular Psychiatry, Stan Skafidas and colleagues made a remarkable claim: a simple genetic test could be used to predict autism risk from birth. The degree of genetic predictive power suggested by the paper was unprecedented for a common disease, let alone for a disease as complex and poorly understood as autism. However, far from seeing a revolution in autism research, many scientists felt that the paper illustrated the pitfalls of pursuing genetic risk prediction. After nearly a year of study, two papers have shown how the work of Skafidas et al. demonstrates the dangers of poor experimental design and of bias due to important confounders.

The story in a nutshell: the Skafidas paper proposes a method for generating a genetic risk score for autism spectrum disorder (ASD) based on a small number of SNPs. The method is fairly straightforward – analyze genetic data from ASD case samples and from publicly available controls to develop, test, and validate a prediction algorithm for ASD. The stated result – Skafidas et al. claim successful prediction of ASD based on a subset of 237 SNPs. For the downstream consumer, the application is simple: have your doctor take a saliva sample from your newborn baby, send it in for genotyping, and get back a probability of your child developing ASD. It would even be possible to test fetuses, with prospective parents considering abortion if the algorithm suggested a high risk of ASD.
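To make the nutshell concrete, here is a minimal sketch of an additive SNP score of this general kind. The SNP identifiers and weights are invented for illustration; this is emphatically not the 237-SNP classifier from the paper.

```python
# A generic additive SNP risk score (hypothetical SNPs and weights:
# NOT the 237-SNP classifier from Skafidas et al.).
RISK_WEIGHTS = {           # SNP id -> weight per copy of the risk allele
    "rs0000001": 0.30,
    "rs0000002": -0.15,
    "rs0000003": 0.22,
}

def risk_score(genotypes):
    """genotypes: SNP id -> risk-allele count (0, 1 or 2)."""
    return sum(w * genotypes.get(snp, 0) for snp, w in RISK_WEIGHTS.items())

def classify(genotypes, threshold=0.5):
    return "high risk" if risk_score(genotypes) > threshold else "low risk"

print(classify({"rs0000001": 2, "rs0000002": 0, "rs0000003": 1}))
```

The catch, as we describe below, is that a score like this is only as good as the data used to train and validate it: when cases and controls differ systematically in ancestry or genotyping platform, the weights capture those artifacts rather than disease risk.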

The apparent simplicity is refreshing and, from the lay perspective, the result will resonate above all the technical jargon of multiple-testing correction, linkage disequilibrium (LD), or population stratification that dominates our field. This is what makes the paper all the more dangerous, because lurking beneath the appealing results are flawed methods and design choices, as we describe below.

We begin our critique with the abstract from Skafidas et al. (emphasis added):
Continue reading ‘Guest post: the perils of genetic risk prediction in autism’

Guest post: 23andMe’s “designer baby” patent: When corporate governance and open science collide

Barbara Prainsack is at the Department of Social Science, Health & Medicine at King’s College London. Her work focuses on the social, regulatory and ethical aspects of genetic science and medicine.

More than seven years ago, my colleague Gil Siegal and I wrote a paper about pre-marital genetic compatibility testing in strictly orthodox Jewish communities. We argued that by disclosing genetic results not at the level of individuals but exclusively in terms of the couple’s genetic compatibility, this practice gave rise to a notion of “genetic couplehood”, conceptualizing genetic risk as a matter of genetic jointness. We also argued that this particular method of genetic testing worked well for strictly orthodox communities, but that “genetic couplehood” was unlikely to go mainstream.
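The disclosure logic is simple to state, and the schematic below captures it. This is our illustration of the principle, not any testing organisation’s actual protocol: each partner is screened for a panel of recessive disease genes, individual carrier results are never reported, and the couple receives only a joint verdict, which is “incompatible” only if both carry a pathogenic variant in the same gene.

```python
# Schematic of couple-level disclosure: individual carrier status stays
# hidden; only the joint verdict is returned (illustrative, not any testing
# organisation's actual protocol).
def couple_compatibility(partner_a, partner_b):
    """Each input maps gene -> True if that partner carries a pathogenic
    recessive variant in it. Output: one verdict, no per-person results."""
    shared = ({g for g, carrier in partner_a.items() if carrier}
              & {g for g, carrier in partner_b.items() if carrier})
    return "incompatible" if shared else "compatible"

a = {"HEXA": True,  "CFTR": False}   # hypothetical carrier panels
b = {"HEXA": False, "CFTR": True}
print(couple_compatibility(a, b))    # "compatible": no gene shared by both
```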

Then, last month, a US patent awarded to 23andMe – which triggered heated debates in public and academic media (see here, here, here, here and here, for instance) – seemed to prove this wrong. The most controversial part of the patent was a claim to a method for gamete donor selection that could give clients of fertility clinics a say in which traits their future offspring would be likely to have. The fact that these “traits” include not only genetic predispositions to disease but also personality and physical or aesthetic characteristics unleashed fears that a Gattaca-style eugenicist future is in the making. Critics have also suggested that consideration of the moral content of the innovation could, or should, have stopped the US Patent and Trademark Office from awarding the patent.
Continue reading ‘Guest post: 23andMe’s “designer baby” patent: When corporate governance and open science collide’

Uncovering functional variation in humans by genome and transcriptome sequencing

Dr. Tuuli Lappalainen is a postdoctoral researcher at Stanford University, where she works on functional genetic variation in human populations and specializes in population-scale RNA-sequencing. She kindly agreed to write a guest post on her recent publication in Nature, “Uncovering functional variation in humans by genome and transcriptome sequencing”, which describes work done while she was at the University of Geneva. -DM

In a paper published online today in Nature we describe the results of the largest RNA-sequencing study of multiple human populations to date, and provide a comprehensive map of how genetic variation affects the transcriptome. This was achieved by RNA-sequencing individuals who are part of the 1000 Genomes sample set, thus adding a functional dimension to the most important catalogue of human genomes. In this blog post, I will discuss how our findings shed light on genetic associations to disease.

As genome-wide studies provide an increasingly comprehensive catalog of genetic variants that predispose to various diseases, we are faced with a huge challenge: what do these variants actually do in the cell? Understanding the biological mechanisms underlying disease is essential for developing interventions, but traditional molecular-biology follow-up is simply not feasible for the thousands of GWAS loci discovered so far. Thus, we need high-throughput approaches for measuring genetic effects at the cellular level, an intermediate between the genome and the disease. The cellular trait most amenable to such analysis is the transcriptome, which we can now measure reliably and robustly by RNA-sequencing (as shown by our companion paper in Nature Biotechnology).
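At its core, an expression quantitative trait locus (eQTL) test is just a regression of a gene’s expression level on genotype dosage at a nearby variant, repeated across many gene-variant pairs. The sketch below shows that core on simulated data; it is a schematic, not the Geuvadis pipeline, which adds expression normalisation, covariate correction and permutation-based significance testing.

```python
# Core of a cis-eQTL test: regress expression on genotype dosage
# (schematic on simulated data, not the Geuvadis pipeline).
import numpy as np
from scipy import stats

def eqtl_test(dosage, expression):
    """dosage: 0/1/2 alt-allele counts; expression: per-sample values."""
    result = stats.linregress(dosage, expression)
    return result.slope, result.pvalue

rng = np.random.default_rng(0)
dosage = rng.integers(0, 3, size=465)     # 465 samples, as in Geuvadis
expression = 5.0 + 0.4 * dosage + rng.normal(0.0, 1.0, size=465)
slope, p = eqtl_test(dosage, expression)
print(f"effect per allele = {slope:.2f}, p = {p:.1e}")
```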

In this project, several European institutes of the Geuvadis Consortium sequenced mRNA and small RNA from lymphoblastoid cell lines of 465 individuals in the 1000 Genomes sample set. The idea of gene expression analysis of genetic reference samples is not new (see e.g. papers by Stranger et al., Pickrell et al. and Montgomery et al.), but the bigger scale and better quality enable the discovery of exciting new biology.
Continue reading ‘Uncovering functional variation in humans by genome and transcriptome sequencing’

Identification of genomic regions shared between distant relatives

This is a guest post by Graham Coop and Peter Ralph, cross-posted from the Coop Lab website.

We’ve been addressing some of the FAQs on topics arising from our paper on the geography of recent genetic genealogy in Europe (PLOS Biology). We wanted to write one on shared genetic material in personal genomics data but it got a little long, and so we are posting it as its own blog post.

Personal genomics companies that type SNPs genome-wide can identify blocks of genetic material shared between people in their databases, offering the chance to find distant relatives. Finding a connection to a previously unknown relative is exciting, whether you do it through your family tree or through personal genomics (we’ve both pored over our 23andMe results a bunch). However, given that nearly everyone in Europe is related to nearly everyone else over the past 1,000 years (see our recent paper and FAQs), and likely everyone in the world is related over the past ~3,000 years, how should you interpret that genetic connection?
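A back-of-the-envelope calculation, in the spirit of our FAQs, shows what the numbers look like. With round constants (a sex-averaged genetic map of roughly 34 Morgans, 22 autosomes, and two shared ancestors for full cousins), the expected number of shared blocks falls off steeply with each degree of cousinship, so that by third or fourth cousins many true relatives share no detectable blocks at all.

```python
# Expected IBD blocks shared between full cousins, with round constants
# (~34 Morgans of sex-averaged genetic map, 22 autosomes); a rough sketch
# in the spirit of our FAQs, not the careful calculation from the paper.
import math

MAP_MORGANS = 34.0
N_AUTOSOMES = 22

def expected_shared_blocks(degree):
    """degree 1 = first cousins, 2 = second cousins, ..."""
    meioses = 2 * (degree + 1)          # meioses on the path between cousins
    p_ibd = 2 * 0.5 ** meioses          # two shared ancestors
    # Transmission through m meioses breaks a genome into roughly
    # MAP_MORGANS * m + N_AUTOSOMES pieces.
    return p_ibd * (MAP_MORGANS * meioses + N_AUTOSOMES)

for degree in (1, 2, 3, 4):
    blocks = expected_shared_blocks(degree)
    p_any = 1 - math.exp(-blocks)       # Poisson approximation
    print(f"degree-{degree} cousins: ~{blocks:.1f} blocks, "
          f"P(share any) ~ {p_any:.0%}")
```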

Continue reading ‘Identification of genomic regions shared between distant relatives’

