Tag Archive for 'missing heritability'

The genome hasn’t failed

On Monday, the Guardian published an article by plant geneticist Jonathan Latham entitled “The failure of the genome”. Ironically, given that this is an article criticising allegedly exaggerated claims made about the power of the human genome, Latham does not spare us his own hyperbole:

Among all the genetic findings for common illnesses, such as heart disease, cancer and mental illnesses, only a handful are of genuine significance for human health. Faulty genes rarely cause, or even mildly predispose us, to disease, and as a consequence the science of human genetics is in deep crisis.

[...] The failure to find meaningful inherited genetic predispositions is likely to become the most profound crisis that science has faced. [emphasis added]

The claim that human genetics is in crisis is not novel. Latham made an extended version of this argument in a blog post at the Bioscience Resource Project in December last year, which Daniel critiqued at length at the time, and which contained a schoolboy statistical error corrected by Luke. And Latham is by no means the only genome-basher out there: the 10 year anniversary of the sequencing of the human genome triggered a spate of “genome fail” pieces (see Nicholas Wade, Andrew Pollack, Matt Ridley, and a particularly horrendous example from Oliver James, for instance).

We suspect for most of our readers Latham’s rather hysterical critique will fall on deaf ears, but it is part of a bizarre and disturbing trend that needs to be publicly countered. Here are several of the places where Latham’s screed gets it patently wrong:

Continue reading ‘The genome hasn’t failed’

Are synthetic associations a man-made phenomenon?

Early last year David Goldstein and colleagues published a provocative paper claiming that many GWAS associations are driven not by common variants of modest effect (the canonical common disease – common variant hypothesis underpinning GWAS) but instead by a local cluster of lower-frequency variants that have much bigger effects on disease risk. They dubbed this hypothesized phenomenon “synthetic association”, and the term quickly became a genetics buzzword. The paper was widely discussed in both the specialist and mainstream media, and caused quite a stir among academic statistical geneticists.
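The intuition behind a synthetic association is easy to demonstrate by simulation. Below is a minimal sketch (all allele frequencies and risk figures are invented for illustration, not taken from the Goldstein paper): a rare, large-effect causal variant arises only on chromosomes carrying one allele of a common variant, and the common variant then shows a modest, entirely “synthetic” disease association despite having no effect itself:

```python
import random

random.seed(42)
N = 200_000  # individuals (one haplotype each, for simplicity)

# A common tag variant at 10% frequency; a rare causal variant (~0.5%
# overall) arises only on chromosomes carrying the common allele, so
# the two are in strong linkage disequilibrium.
people = []
for _ in range(N):
    common = random.random() < 0.10
    rare = common and (random.random() < 0.05)
    # The rare variant confers a large risk (OR ~ 5); baseline risk 5%.
    p_disease = 0.20 if rare else 0.05
    people.append((common, random.random() < p_disease))

def odds_ratio(data):
    """Case/control odds ratio at the common (non-causal) variant."""
    a = sum(1 for carrier, case in data if carrier and case)
    b = sum(1 for carrier, case in data if carrier and not case)
    c = sum(1 for carrier, case in data if not carrier and case)
    d = sum(1 for carrier, case in data if not carrier and not case)
    return (a * d) / (b * c)

# A modest association shows up at the common variant, even though it
# does nothing: far smaller than the rare variant's OR, but detectable
# in a sample of this size.
print(f"OR at common variant: {odds_ratio(people):.2f}")
```

The argument in the Perspectives is not over whether this can happen in principle, but over how often it actually accounts for the modest odds ratios observed at real GWAS loci.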

That debate has been re-opened today by a set of Perspectives in PLoS Biology: a rebuttal by us (Carl & Jeff) and our colleagues at Sanger, a rebuttal by Naomi Wray, Shaun Purcell and Peter Visscher, a rebuttal to the rebuttals by David Goldstein and an editorial by Robert Shields to tie it all together.

Continue reading ‘Are synthetic associations a man-made phenomenon?’

Estimating heritability using twins

Last week, a post went up on the Bioscience Resource Project blog entitled The Great DNA Data Deficit. This is another in the long string of “Death of GWAS” posts that have appeared over the last year. The authors claim that because GWAS has failed to identify many “major disease genes”, i.e. high-frequency variants with a large effect on disease, it was therefore not worthwhile; this is old ground that I have covered elsewhere (see also my “Standard GWAS Disclaimer” below). In this case, the authors argue that the genetic contribution to complex disease has been massively overestimated, and that in fact genetics does not play as large a part in disease as we believe.

The one genuinely new thing about this article is that the authors actually examine the foundation of our beliefs about missing heritability: the studies of identical and non-identical twins from which we get our estimates of the heritability of disease. I approve of this: I think everyone interested in the genetics of disease should be fluent in the methodology of twin studies. However, in this case the authors come to the rather odd conclusion that heritability measures are largely useless, based on a small statistical misunderstanding of how such studies are done.

I thought I would use this opportunity to explain, in relative detail, where we get our estimates of heritability from, why they are generally well measured and robust, and what real issues need to be considered when interpreting twin study results. This post contains a little bit of maths, but don’t worry if that scares you: you only really need to get the gist.
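To give a flavour of the maths involved, the classical estimator (Falconer's formula) compares trait correlations between identical (MZ) and non-identical (DZ) twin pairs: MZ twins share essentially all of their segregating genetic variation, DZ twins on average half of it, so twice the gap between the two correlations estimates the heritability. A minimal sketch, with made-up correlations rather than figures from any real study:

```python
def falconer_ace(r_mz, r_dz):
    """Decompose trait variance with the ACE model from twin correlations.

    h2 = 2 * (r_MZ - r_DZ)   additive genetic variance (heritability)
    c2 = r_MZ - h2           shared (common) environment
    e2 = 1 - r_MZ            unique environment + measurement error
    """
    h2 = 2 * (r_mz - r_dz)
    c2 = r_mz - h2
    e2 = 1 - r_mz
    return h2, c2, e2

# Invented example: MZ pairs correlate at 0.74, DZ pairs at 0.45.
h2, c2, e2 = falconer_ace(r_mz=0.74, r_dz=0.45)
print(f"h2={h2:.2f}, c2={c2:.2f}, e2={e2:.2f}")  # h2=0.58, c2=0.16, e2=0.26
```

Modern twin studies fit this ACE decomposition by maximum likelihood rather than with the simple formula, but the logic is identical: the genetic signal lives in the gap between MZ and DZ resemblance.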
Continue reading ‘Estimating heritability using twins’

Friday Links

At the risk of turning Friday Links into a self-trumpet-blowing occasion, we are happy to report that a number of GNZ contributors (Jeff, Carl and Luke) are authors on a new Crohn’s disease GWAS meta-analysis of 6,000 patients that came out in Nature Genetics this week. The study brings the number of Crohn’s disease associations up to 71, of which 30 are novel, and the proportion of heritability explained up to about 24%. It is also worth noting that all of the associations from the previous meta-analysis were replicated in this one, showing how the cross-platform independent replication experiments that are now standard have largely eliminated false positives from GWAS. There were also 5 loci that showed evidence of a second, independent signal, which I think is a promising sign of things to come.

Continue reading ‘Friday Links’

Friday Links

Over at Your Genetic Genealogist, CeCe Moore talks about investigating evidence of low-level Ashkenazi Jewish descent in her 23andMe data. What I like about this story is how much digging CeCe did; after one tool threw up a “14% Ashkenazi” result, she looked for similar evidence in 23andMe’s tool. She then ran the same analysis on her mother’s DNA, finding no apparent Ashkenazi heritage, and to top it all off she got her paternal uncle genotyped, which showed even greater Ashkenazi similarity. [LJ]

A paper out in PLoS Medicine looks at the interaction between genetics and physical activity in obesity. The take-home message is pretty well summarized in the figure to the left: genetic predispositions are less important in determining BMI for those who take frequent physical exercise than for those who remain inactive. This illustrates the importance of including non-genetic risk factors in disease prediction; not only because they are very important in their own right (the paper demonstrates that physical activity is about as predictive of BMI as known genetic factors), but also because information on environmental influences allows better calibration of genetic risk. [LJ]
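To make the calibration point concrete, here is a toy illustration (all effect sizes are invented, not taken from the PLoS Medicine paper): if the per-allele effect on BMI shrinks in physically active people, a genetic predictor that ignores activity will misstate risk in both strata:

```python
def predicted_bmi(risk_alleles, active, baseline=25.0):
    """Toy gene-environment interaction: the per-allele effect on BMI
    is assumed to be attenuated in physically active individuals."""
    per_allele = 0.05 if active else 0.15
    return baseline + per_allele * risk_alleles

# The same genotype implies quite different predictions in the two
# strata; a predictor that ignored activity would be miscalibrated
# for both groups at once.
print(predicted_bmi(20, active=True))   # 26.0
print(predicted_bmi(20, active=False))  # 28.0
```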

Trends in Genetics have published an opinion piece in their most recent issue outlining the types of genetic variants we might expect to see for common human diseases (defined by allele frequency and risk), and how exome and whole-genome sequencing could be used to find them. They give a brief, relatively jargon-free overview of gene-mapping techniques that have been previously used, and discuss how sequencing can take this research further, particularly for the previously less tractable category of low-frequency variants that confer a moderate level of disease risk. [KIM]

More Sanger shout-outs this week: Sanger Institute postdoc Liz Murchison, along with the rest of the Cancer Genome Project, has announced the sequencing of the Tasmanian Devil genome. The CGP is interested in the Tasmanian Devil because of a rare, odd and nasty facial cancer, which is passed from Devil to Devil by biting. In fact, all the tumours are descended from the tumour of a single individual; 20 or so years on, 80% of the Devil population has been wiped out by the disease. As well as a healthy genome, the team also sequenced two tumour genomes, in the hope of learning more about which mutations made the cells turn cancerous, and what makes this cancer so unusual.

I have to say, this isn’t going to be an easy job: assembling a high-quality reference genome for an under-studied organism is a lot of work, especially using Illumina’s short-read technology, and identifying and making sense of tumour mutations is equally difficult. Add to this the fact that the tumour genome comes from a different individual than the healthy genome, and it all adds up to a project of unprecedented scope. On the other hand, the key to saving a species from extinction could rest on this sticky bioinformatics problem, and if anyone is in a position to deal with it, it’s the Cancer Genome Project. [LJ]

Tasmanian Devil image from Wikimedia Commons.

Setting the record straight

The current issue of Cell has some important correspondence in response to an essay published by Jon McClellan and Mary Claire King in April. Daniel covered the original piece and hosted a guest post from Kai Wang which detailed some of the more obvious flaws in their argument. Now, Wang and his colleagues from Philadelphia have published an official response in Cell, in parallel with a similar letter from Robert Klein and colleagues from New York. Accompanying these is a further reply from McClellan and King. Read on for an overview of three contentious statements made in the original piece, and the rebuttals to each.

Continue reading ‘Setting the record straight’

