Archive for the 'Guest Posts' Category

Guidelines for finding genetic variants underlying human disease

Authors: Daniel MacArthur and Chris Gunter.

New DNA sequencing technologies are rapidly transforming the diagnosis of rare genetic diseases, but they also carry a risk: by allowing us to see all of the hundreds of “interesting-looking” variants in a patient’s genome, they make it all too easy for researchers to spin a causal narrative around genetic changes that have nothing to do with disease status. Such false positive reports can have serious consequences: incorrect diagnoses, unnecessary or ineffective treatment, and reproductive decisions (such as embryo termination) based on spurious test results. To minimize such outcomes, the field needs to agree on clear statistical guidelines for deciding whether or not a variant is truly causally linked with disease.

In a paper in Nature this week we report the consensus statement from a workshop, sponsored by the National Human Genome Research Institute, on establishing guidelines for assessing the evidence for variant causality. We argue for a careful two-stage approach to assessing evidence: first weighing the overall support for a causal role of the affected gene in the disease phenotype, and then assessing the probability that the variant(s) carried by the patient do indeed play a causal role in that patient’s disease state. We argue for the primacy of statistical genetic evidence for new disease genes, which can be supplemented (but not replaced) by additional informatic and experimental support; and we emphasize the need for all forms of evidence to be placed within a statistical framework that considers the probability of any of the reported lines of evidence arising by chance.
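
As a toy illustration of what placing one line of evidence within such a statistical framework can mean in practice, the sketch below asks how surprising a count of de novo mutations in a single gene would be under a simple Poisson model. All of the numbers (mutation rate, cohort size, observed count) are invented placeholders rather than values from the paper, and a real analysis would also need to account for the roughly 20,000 genes examined genome-wide.

```python
from math import exp, factorial

def prob_at_least(k, lam):
    """P(X >= k) for X ~ Poisson(lam): the chance of seeing k or more events."""
    return 1.0 - sum(exp(-lam) * lam**i / factorial(i) for i in range(k))

# Placeholder numbers, for illustration only (not figures from the paper):
per_gene_mutation_rate = 1.5e-5   # de novo rate per chromosome per generation
n_trios = 500                     # sequenced parent-offspring trios
observed_de_novos = 3             # de novo variants seen in this gene

# Expected number of de novo events in this gene across the whole cohort
expected = 2 * n_trios * per_gene_mutation_rate
p_chance = prob_at_least(observed_de_novos, expected)
print(f"Expected by chance: {expected:.3f}; P(>= {observed_de_novos}) = {p_chance:.2e}")
```

Even a very small probability from a calculation like this would still need to be weighed against the number of genes tested before declaring a new disease gene.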

The paper itself is open access, so you can read the whole thing – we won’t rehash a complete summary here. However, we did want to discuss the back story and expand on a few issues raised in the paper.
Continue reading ‘Guidelines for finding genetic variants underlying human disease’

The undiscovered chromosome

This guest post was contributed by Taru Tukiainen, a postdoctoral research fellow in the Analytic and Translational Genetics Unit at Massachusetts General Hospital and the Broad Institute of MIT and Harvard.

The X chromosome contains around 5% of the DNA in the human genome, but has remained largely unexplored in genome-wide association studies (GWAS) – to date, roughly two-thirds of GWAS have thrown the X-chromosomal data out of their analyses. In a paper published in PLOS Genetics yesterday we dig into X chromosome associations and demonstrate why this stretch of DNA warrants particular attention in genetic association and sequencing studies. This post will focus on one of our key results: the possibility that some X chromosome loci contribute to sexual dimorphism, i.e. biological differences between men and women.
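
As a concrete example of why the X chromosome needs special handling in association studies, the short sketch below shows one widely used convention for coding X-linked genotypes as allele dosages: females are coded 0/1/2 as on the autosomes, while hemizygous males are coded either 0/1 or 0/2, the latter reflecting an assumption of complete X inactivation. The function is purely illustrative; established analysis tools expose this choice as an option.

```python
def x_dosage(allele_count, is_male, male_model=2):
    """Allele-dosage coding for an X-chromosome SNP.

    allele_count: copies of the effect allele actually carried
                  (0/1/2 for females, 0/1 for hemizygous males).
    is_male:      True for males.
    male_model:   1 keeps male carriers at 1; 2 scales them to 2,
                  which assumes complete X inactivation in females.
    """
    if not is_male:
        return allele_count              # females: standard 0/1/2 dosage
    return allele_count * male_model     # males: 0/1 rescaled to 0/1 or 0/2

# The same hemizygous carrier male under the two conventions:
print(x_dosage(1, is_male=True, male_model=1))  # 1
print(x_dosage(1, is_male=True, male_model=2))  # 2
```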
Continue reading ‘The undiscovered chromosome’

Pertinent and Non-pertinent Genomic Findings

About Guest Co-Author: Dr Ewan Birney is Associate Director of the EMBL European Bioinformatics Institute and a fellow blogger.

The ACMG recommendations on clinical genomic screening released earlier this year generated quite a storm. Criticisms broadly related to:

  • the principle of whether we are ready and able to offer genomic screening to people undergoing exome/genome sequencing (the topic of this post!);
  • to whom the recommendations should apply;
  • whether individuals have a right to refuse genomic screening results; and
  • the exact content of the list of genes/variants to be screened.

In the UK, this debate has come into sharp focus following the launch of the NHS 100,000 Genomes Project, where details of data interpretation and data sharing are still rather hazy. The central policy question is clear: in the context of clinical practice, how should we be using genomic data, and with whom, in order to maximise its benefits for patients? (In the context of research, sharing that is as broad as possible, consistent with patient consent, is most desirable.) Last month, we published a paper in the BMJ – along with a number of genetic scientists, clinical geneticists and other health specialists – advocating an evidence-based approach that places the emphasis on targeted diagnosis in the short term and on gathering evidence for possible broader uses in the future.

Continue reading ‘Pertinent and Non-pertinent Genomic Findings’

How emerging targeted mutation technologies could change the way we study human genetics

Mari Niemi

This is a guest post from Mari Niemi at the Wellcome Trust Sanger Institute. Mari is a graduate researcher who combines the results of human genetic studies with zebrafish models to study human disease.

The turn of the year 2012/13 saw the emergence of a new and exciting – some may even say revolutionary – technique for targeted genome engineering: the clustered regularly interspaced short palindromic repeat (CRISPR) system. Harboured within the cells of many bacteria and archaea, in the wild CRISPRs act as an adaptive immune defence system, chopping up foreign DNA. However, they are now being harnessed for genetic engineering in several species, most notably in human cell lines and the model animals mouse (Mus musculus) and zebrafish (Danio rerio). This rapid genome editing lets us study the function of genes and mutations, and may even help improve the treatment of genetic diseases. But what makes this technology better than what came before, what are its downsides, and how revolutionary will it really be?
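
For readers new to the system, the first step in using CRISPR for genome editing is choosing a target: for the commonly used SpCas9 enzyme, a 20-base protospacer must sit immediately upstream of an NGG PAM motif. The sketch below scans the forward strand of a sequence for such sites; it is a deliberately minimal illustration that ignores the reverse strand, off-target effects and everything else a real guide-design tool would consider, and the example sequence is made up.

```python
import re

def find_spcas9_targets(seq, protospacer_len=20):
    """Return (start, protospacer, PAM) tuples for NGG PAMs on the forward strand."""
    seq = seq.upper()
    targets = []
    # Any base followed by GG, with enough sequence upstream for the protospacer
    for m in re.finditer(r"(?=([ACGT]GG))", seq):
        pam_start = m.start()
        if pam_start >= protospacer_len:
            protospacer = seq[pam_start - protospacer_len:pam_start]
            targets.append((pam_start - protospacer_len, protospacer, m.group(1)))
    return targets

example = "TTGACCTGAAGCTGACCGGTAAGCTTTGGCACGTACGATCGATCGTAGCTAGCGG"
for start, protospacer, pam in find_spcas9_targets(example):
    print(start, protospacer, pam)
```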

Genetic engineering – then and now

Taking a step backward, the ability to edit specific parts of an organism’s genetic material is certainly not a novel practice. In the last decade or two, zinc finger nucleases (ZFNs) and, more recently, transcription activator-like effector nucleases (TALENs) made it possible to delete and introduce genetic material at desired sites, from larger segments of DNA down to single base-pair point mutations. ZFNs and TALENs are now fairly established methods, yet constructing these components and applying them in the laboratory can be extremely tedious and time-consuming because of the complex ways in which they bind DNA. Clearly, there is much room for improvement and a desire for faster, cheaper and more efficient techniques if genome engineering is to be applied to the treatment of human disease.

Continue reading ‘How emerging targeted mutation technologies could change the way we study human genetics’

Guest post: Human genetics is microbial genomics

This is a guest post by Danny Wilson from the University of Oxford. Danny was recently awarded a Wellcome Trust/Royal Society fellowship at the Nuffield Department of Medicine, and in this post he tells us why you cannot understand human genetics without studying the genetics of microbes. If you are a geneticist who finds this post interesting, he is currently hiring.

Never mind about sequencing your own genome. Only 10% of the cells in your “human” body are human anyway; the rest are microbial. And their genomes are far more interesting.

For one thing, there’s a whole ecosystem out there, made up of many species. Typically a person harbours 1,000 or more different species in their gut alone. For another, a person’s health is in large part determined by the microbes that live on their body, whether as part of a long-term commensal relationship or an acute pathogenic interaction.

With 20% of the world’s deaths still attributable to infectious disease, the re-emergence of ancient pathogens driven by ever-increasing antibiotic resistance, and the UK’s 100K Genome Project – many of whose genomes will have to come from patients’ microbes rather than from the patients themselves, given its budget – pathogen genomics is very much at the top of the agenda.

So what do pathogen genomes have to tell us? Continue reading ‘Guest post: Human genetics is microbial genomics’

Guest post: the perils of genetic risk prediction in autism

This guest post from Daniel Howrigan, Benjamin Neale, Elise Robinson, Patrick Sullivan, Peter Visscher, Naomi Wray and Jian Yang (see biographies at end of post) describes their recent rebuttal of a paper claiming to have developed a new approach to genetic prediction of autism. This story has also been covered by Ed Yong and Emily Willingham. Genomes Unzipped authors Luke Jostins, Jeff Barrett and Daniel MacArthur were also involved in the rebuttal.

Last year, in a paper published in Molecular Psychiatry, Stan Skafidas and colleagues made a remarkable claim: a simple genetic test could be used to predict autism risk from birth. The degree of genetic predictive power suggested by the paper was unprecedented for a common disease, let alone for a disease as complex and poorly understood as autism. However, rather than hailing it as a revolution in autism research, many scientists felt that the paper illustrated the pitfalls of pursuing genetic risk prediction. After nearly a year of study, two papers have now shown how the Skafidas et al. analysis demonstrates the dangers of poor experimental design and of biases due to important confounders.

The story in a nutshell: the Skafidas paper proposes a method for generating a genetic risk score for autism spectrum disorder (ASD) based on a small number of SNPs. The method is fairly straightforward – analyze genetic data from ASD case samples and from publicly available controls to develop, test, and validate a prediction algorithm for ASD. The stated result – Skafidas et al. claim successful prediction of ASD based on a subset of 237 SNPs. For the downstream consumer, the application is simple – have your doctor take a saliva sample from your newborn baby, send the sample in to be genotyped, and get back a probability of your child developing ASD. It would also be easy to test fetuses, and for prospective parents to consider abortion if the algorithm suggested a high risk of ASD.
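
For readers unfamiliar with how such a score is constructed, a SNP-based genetic risk score is, at its core, just a weighted sum of effect-allele counts across the chosen SNPs, as in the minimal sketch below. The SNP identifiers and weights here are invented placeholders, not the 237 SNPs or weights from Skafidas et al.; the critique that follows is precisely about how such SNPs and weights were selected and validated.

```python
# Toy SNP-based risk score: a weighted sum of effect-allele counts.
# The SNP IDs and weights are invented placeholders, not values from
# Skafidas et al.
weights = {
    "rs0000001": 0.12,
    "rs0000002": -0.08,
    "rs0000003": 0.30,
}

def risk_score(genotypes, weights):
    """genotypes: dict mapping SNP ID -> effect-allele count (0, 1 or 2)."""
    return sum(weights[snp] * genotypes.get(snp, 0) for snp in weights)

person = {"rs0000001": 2, "rs0000002": 1, "rs0000003": 0}
print(round(risk_score(person, weights), 3))  # 0.16
```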

The apparent simplicity is refreshing and, from the lay perspective, the result will resonate above all the technical jargon of multiple-testing correction, linkage disequilibrium (LD), and population stratification that dominates our field. This is what makes the paper all the more dangerous: lurking beneath the appealing results are flawed methodology and design, as we describe below.

We begin our critique with the abstract from Skafidas et al. (emphasis added):
Continue reading ‘Guest post: the perils of genetic risk prediction in autism’

Guest post: 23andMe’s “designer baby” patent: When corporate governance and open science collide

Barbara Prainsack is at the Department of Social Science, Health & Medicine at King’s College London. Her work focuses on the social, regulatory and ethical aspects of genetic science and medicine.

More than seven years ago, my colleague Gil Siegal and I wrote a paper about pre-marital genetic compatibility testing in strictly Orthodox Jewish communities. We argued that by not disclosing genetic results at the level of individuals, but exclusively in terms of the genetic compatibility of the couple, this practice gave rise to a notion of “genetic couplehood”, conceptualizing genetic risk as a matter of genetic jointness. We also argued that this particular method of genetic testing worked well for strictly Orthodox communities but that “genetic couplehood” was unlikely to go mainstream.
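
Purely to make that disclosure model concrete (this is a schematic sketch, not the actual protocol of any testing programme), couple-level reporting can be thought of as returning only a joint compatibility call while the individual carrier results stay hidden:

```python
def couple_compatibility(carrier_status_a, carrier_status_b, conditions):
    """Return only a couple-level result, never the individual carrier statuses.

    carrier_status_a/b: dicts mapping condition name -> True if that person
    carries a pathogenic variant for the (recessive) condition.
    """
    shared = [c for c in conditions
              if carrier_status_a.get(c) and carrier_status_b.get(c)]
    # Only the joint outcome is disclosed, echoing the "genetic couplehood" idea.
    return "incompatible" if shared else "compatible"

conditions = ["condition_X", "condition_Y"]  # placeholder condition names
print(couple_compatibility({"condition_X": True}, {"condition_X": True}, conditions))
print(couple_compatibility({"condition_X": True}, {"condition_Y": True}, conditions))
```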

Then, last month, a US patent awarded to 23andMe – which triggered heated debates in public and academic media (see here, here, here, here and here, for instance) – seemed to prove this wrong. The most controversial part of the patent was a claim to a method for gamete donor selection that could give clients of fertility clinics a say in which traits their future offspring would be likely to have. The fact that these “traits” include genetic predispositions not only to diseases but also to personality or physical and aesthetic characteristics unleashed fears that a Gattaca-style eugenicist future is in the making. Critics have also suggested that consideration of the moral content of the innovation could, or should, have stopped the US Patent and Trademark Office from awarding the patent.
Continue reading ‘Guest post: 23andMe’s “designer baby” patent: When corporate governance and open science collide’

Identification of genomic regions shared between distant relatives

This is a guest post by Graham Coop and Peter Ralph, cross-posted from the Coop Lab website.

We’ve been addressing some of the FAQs on topics arising from our paper on the geography of recent genetic genealogy in Europe (PLOS Biology). We wanted to write one on shared genetic material in personal genomics data, but it got a little long, so we are posting it as its own blog post.

Personal genomics companies that type SNPs genome-wide can identify blocks of shared genetic material between people in their databases, offering the chance to identify distant relatives. Finding a connection to a previously unknown relative is exciting, whether you do this through your family tree or through personal genomics (we’ve both pored over our 23andMe results a bunch). However, given that nearly everyone in Europe is related to nearly everyone else over the past 1,000 years (see our recent paper and FAQs), and likely everyone in the world is related over the past ~3,000 years, how should you interpret that genetic connection?
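
To make “blocks of shared genetic material” concrete, here is a deliberately simplified sketch of how SNP-chip data can flag candidate shared segments: a long run of consecutive SNPs with no opposing homozygotes (one person AA where the other is BB) is consistent with the two people sharing a stretch of chromosome. Real methods, including those used by personal genomics companies, are considerably more sophisticated, accounting for genotyping error, segment length in centimorgans and population haplotype frequencies; the threshold and toy data below are placeholders.

```python
def candidate_shared_segments(geno1, geno2, min_snps=400):
    """Find runs of SNPs with no opposing homozygotes between two people.

    geno1, geno2: sequences of genotypes coded as 0, 1, 2 (allele counts),
    aligned to the same SNPs. Returns (start, end) index pairs for runs of at
    least min_snps SNPs; these are only candidates for identity by descent.
    """
    segments, run_start = [], 0
    for i, (g1, g2) in enumerate(zip(geno1, geno2)):
        opposing = (g1 == 0 and g2 == 2) or (g1 == 2 and g2 == 0)
        if opposing:
            if i - run_start >= min_snps:
                segments.append((run_start, i))
            run_start = i + 1
    if len(geno1) - run_start >= min_snps:
        segments.append((run_start, len(geno1)))
    return segments

# Toy usage with a tiny threshold just to show the output format:
print(candidate_shared_segments([0, 1, 2, 2, 0], [2, 1, 2, 2, 2], min_snps=3))
```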

Continue reading ‘Identification of genomic regions shared between distant relatives’

Learning more from your 23andMe results with Imputation

This is a guest post by Peter Cheng and Eliana Hechter from the University of California, Berkeley.

Suppose that you’ve had your DNA genotyped by 23andMe or some other DTC genetic testing company. Then an article shows up in your morning newspaper or journal (like this one) and suddenly there’s an additional variant you want to know about. You check your raw genotypes file to see if the variant is present on the chip, but it isn’t! So what next? [Note: the most recent 23andMe chip does include this variant, although older versions of their chip do not.]

Genotype imputation is a process used for predicting, or “imputing”, genotypes that are not assayed by a genotyping chip. The process compares the genotyped data from a chip (e.g. your 23andMe results) with a reference panel of genomes (supplied by big genome projects like the 1000 Genomes or HapMap projects) in order to make predictions about variants that aren’t on the chip. If you want a technical review of imputation (and the program IMPUTE in particular), we recommend Marchini & Howie’s 2010 Nature Reviews Genetics article. However, the following figure provides an intuitive understanding of the process.
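
As a complement to that figure (which appears in the full post), here is a deliberately oversimplified sketch of the same idea in code: it finds pairs of reference haplotypes consistent with a person’s genotypes at the typed SNPs, and then “imputes” an untyped SNP by averaging its dosage over those pairs. Real tools such as IMPUTE use hidden Markov models over phased haplotypes rather than this exhaustive matching, so treat the sketch, and its tiny made-up reference panel, purely as an illustration.

```python
from itertools import combinations_with_replacement

def impute_dosage(typed_genotypes, reference_haplotypes, target_index):
    """Toy imputation: average the target-SNP dosage over all pairs of
    reference haplotypes whose summed alleles match the typed genotypes.

    typed_genotypes: dict mapping typed-SNP index -> genotype (0/1/2).
    reference_haplotypes: list of haplotypes, each a list of 0/1 alleles
    over all SNPs (typed and untyped).
    target_index: index of the untyped SNP to impute.
    """
    dosages = []
    for h1, h2 in combinations_with_replacement(reference_haplotypes, 2):
        if all(h1[i] + h2[i] == g for i, g in typed_genotypes.items()):
            dosages.append(h1[target_index] + h2[target_index])
    return sum(dosages) / len(dosages) if dosages else None

# Five SNPs; index 2 is untyped on the chip. A made-up panel of 4 haplotypes.
reference = [
    [0, 1, 1, 0, 1],
    [0, 1, 1, 0, 0],
    [1, 0, 0, 1, 0],
    [1, 0, 0, 1, 1],
]
typed = {0: 1, 1: 1, 3: 1, 4: 1}   # the person's chip genotypes
print(impute_dosage(typed, reference, target_index=2))  # 1.0
```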

Continue reading ‘Learning more from your 23andMe results with Imputation’

Response to “Exaggerations and errors in the promotion of genetic ancestry testing”


Following the Genomes Unzipped post entitled “Exaggerations and errors in the promotion of genetic ancestry testing”, we received a request to reply from Jim Wilson. Jim Wilson is the chief scientist of BritainsDNA. He is not the one who gave the BBC interview that prompted the Genomes Unzipped post, but he is a key contributor to the science behind BritainsDNA. We are keen to tell both sides of this story, and this post is an opportunity for BritainsDNA to state their arguments and motivation. -VP

I saw Vincent Plagnol’s post here on Genomes Unzipped about the promotion of genetic ancestry testing and felt compelled to respond. While I did not give the interview that was the subject of the post, I am the chief scientist at BritainsDNA and I feel that the post was biased in presenting only one side of the story and thus misrepresenting the situation. Perhaps I can offer another perspective for readers.

Continue reading ‘Response to “Exaggerations and errors in the promotion of genetic ancestry testing”’

