An illustration of a DNA sequence: examining genetic data doesn’t reveal “genes for” certain conditions “in any meaningful sense”. © Phanie Agency/REX

There is no "gene for" everything

It's delusional to believe that everything can be explained by genetics
November 13, 2014

You might think it’s sheer bad luck to be struck by lightning. But some of us are cursed with a struck-by-lightning (SBL) gene. Sure, as with many genetic conditions, if you have the SBL gene it doesn’t mean you will be struck by lightning, just that your chances are higher (by a factor of about three or four) than those without it. But that seems a fairly big risk factor to me—and I should know, because I’ve got the gene.

Yet no one is working on a genetic remedy. Scandalous? Not really, because SBL can be identified as the gene better known as SRY, the sex-determining gene on the Y chromosome, which makes an embryo develop into a male. Yes, men get hit by lightning more often, because their behaviour—rushing about on golf courses and football pitches in the rain, that sort of thing—makes it more likely. Call it stereotyping, but the statistics don’t lie.

The geneticist Steve Jones has used this example to point to the absurdity of the concept of a “gene for.” If we knew nothing else about what SRY does, and it fell out of a statistical search for genetic associations with being hit by lightning, we might indeed conclude that it warranted the label SBL. But the association with lightning strikes is merely a side-product of the way the gene’s effects play out in a particular environment. SRY could equally well be mislabelled as a gene for criminality, murder, baldness or watching Top Gear.

“The most dangerous word in genetics is ‘for’,” Jones has said. “Only 15 years ago people expected that they would find genes for cancer, heart disease or diabetes. But medicine’s big secret is that we haven’t found them. And we haven’t found them because they are not there.” Compare that with Bill Clinton promising, while standing next to a group of smiling scientists in 2000, that the decoding of the human genome means “doctors increasingly will be able to cure diseases like Alzheimer’s, Parkinson’s, diabetes and cancer by attacking their genetic roots.”

What does this mean for the much vaunted age of “personalised medicine”—of healthcare tailored to our individual genome, which can now be decoded for a few thousand dollars and might soon be as common a feature as blood group and cholesterol index on everyone’s health records? The answer is complicated. Genetic data do reveal a lot about our inherent predispositions to certain medical conditions. But that doesn’t necessarily mean we have the “genes for” those conditions in any meaningful sense—genes that can be considered to lie at the “roots.”

The tendency to assign genes the responsibility for well defined personal attributes doesn’t just muddy the waters of post-genomic medicine. It distorts the whole public discourse around genetics, and arguably around the way genomes are shaped by natural selection. And it takes us down some dark avenues, from the notorious history of eugenics to the recurring question of how genes are influenced by race. The furore over the views expressed by former New York Times science reporter Nicholas Wade in his book A Troublesome Inheritance: Genes, Race and Human History is just the latest skirmish in this ongoing saga. Wade suggests that some differences in social behaviour and characteristics among human societies may be genetically encoded. It’s an old argument, although expressed less crudely than in the anthropology of yore: the intellectual and economic hegemony of western culture is down to innate biological differences. Scientists have lined up to savage Wade’s book, but the contentious questions it asks—are differences in, say, intelligence, rationality and social cohesion down to our genes?—won’t go away. Nor should they. But we’re not going to make much headway with them until we get to grips with the distinctions between what genes do and what genes are “for.”

Geneticists now gnash their teeth at the bad journalism that proclaims the discovery of a “gene for.” But the burden of guilt for this trope lies with the research community itself. It’s not hard to find both implicit and explicit references to “genes for” in the literature or pronouncements of biologists. They are not always as ill-judged as DNA pioneer James Watson’s suggestion that genetic testing for “gay genes” could offer a woman the opportunity to abort a child that carried them. But the implication that traits such as personality and susceptibility to disease are necessarily determined by one or a few genes permeates the field. Without that sort of functional autonomy, for example, it is hard to see how the notion of selfish genes (in Dawkins’s sense) can be coherent. During the Human Genome Project, references to blueprints, lists of parts and instruction manuals carried the same baggage. 

It’s easy to understand how this habit began. As the modern era of genetics dawned and it became possible to probe the effects of particular genes by mutating, adding or disabling them (the latter being called “knockout” experiments) in flies, mice and other laboratory animals, researchers began to find clear links between the presence or absence of a gene variant—for brevity I’ll follow the sloppy convention and just say “gene”—in an organism’s genome and certain traits of the whole organism. Surely it stands to reason that, if you see a particular trait in the presence of a gene but not in its absence, that gene is in some sense a gene “for” the trait?

Well, yes and no. So-called coding genes contain the instructions for making particular proteins: the enzymes and structural molecules that make up the body’s biomolecular machinery and fabric. That’s the only thing they are really “for.” Spiders have a “gene for silk”; humans have a “gene for digesting the milk sugar lactose.” Mutations of these genes can be responsible for inheritable conditions.

But the lack or malfunction of a particular enzyme due to a genetic mutation can have complex knock-on effects in the body. What’s more, most of our DNA is non-coding: it doesn’t encode proteins, but much of it regulates the activity of genes, creating complex networks of gene interactions. Most human traits arise out of these networks, which blurs the “genes for” picture. As the spurious “SBL gene” shows, it’s then wrong to infer causation from correlation. That’s not just a difficulty of finding the right genes within the network. For some traits, even if they are genetically encoded, it can be inappropriate to talk of causative mechanisms and explanations at the genetic level.

Indeed, gene knockout studies tended to undermine the “gene for” picture more than they confirmed it. Time and again geneticists would find that, if they knocked out a gene apparently “for” a feature indispensable to an organism’s vitality, the organism hardly seemed to bat an eyelid. We now know that this is at least partly because of the immense complexity of gene networks, which have redundancy built in. If there’s a failure in one part of the network then, just as with closing a station on the London Underground, there may be an alternative route to the same goal.

Nothing here would surprise engineers. They know that such redundancy and fail-safe mechanisms are an essential part of the robustness of any complex system, whether it is a chemicals plant or a computer. Nor need it have surprised geneticists, who have known since the 1960s that genes work in self-regulating networks. All the same, I sat through countless editorial meetings at Nature in the early 1990s in which a newly accepted paper would be described breathlessly as showing that a gene thought to do this had now been shown to do that too. The language remained resolutely that of “genes for”: such genes were just multi-tasking. One of the most notorious episodes of “genes for” from that period was a 1993 study by a team of geneticists at the United States National Cancer Institute, who published in the leading journal Science the claim that with “99.5 per cent certainty there is a gene (or genes) in [a particular] area of the X chromosome that predisposes a male to become a homosexual”—in effect, a “gay gene.”

Anyone interested in genes was already primed to accept that idea. Geneticists had been talking about a genetic basis for homosexuality since the 1970s, and in his 1982 book The Extended Phenotype, Richard Dawkins used the possibility (“for the sake of argument”) to explore the notion of how a gene might exert different effects in different environments. For Dawkins, this environmental influence shows only that we must recognise a contingency about what a gene is “for,” not that the whole idea of it being “for” a particular trait or behaviour may be meaningless.

This complexity in the emerging view of what genes do is tellingly, and perhaps inadvertently, captured in Matt Ridley’s book Genome, published in 1999 as the completion of the Human Genome Project was about to be announced. Ridley offered little portraits of inheritable traits associated with each of the 23 human chromosomes. He began with a confident description of how the gene associated with Huntington’s disease was tracked down. Here, surely, is a “gene for”—if you are unlucky enough to have the particular mutation, you’ll develop the disease.

But then Ridley gets to asthma, intelligence, homosexuality and “novelty-seeking.” All do seem to have an inherited component. “In the late 1980s, off went various groups of scientists in confident pursuit of the ‘asthma gene’,” Ridley writes. By 1998 they had found not one, but 15. Today some researchers admit that hundreds might be involved. In the other cases, Ridley admitted, the jury was still out. It no longer is: today, all the candidate “genes for” that he mentioned in relation to intelligence, homosexuality and novelty-seeking have been ruled out. Isn’t this odd? There was Ridley, an unusually well-informed science writer, declaring the futility of quests for specific genes “for” complex personality traits, yet finding himself compelled to report on geneticists’ efforts to find them. So who was to blame?

On genes for intelligence, Ridley mentioned the work of Robert Plomin, who in 1998 reported an association between IQ and a gene called IGF2R. The fact that the gene was known to encode a protein responsible for a very routine and mundane cell function might have been a clue that the connection was at best indirect. That the gene had previously been associated with liver cancer might have been another. Still, Ridley said, we’ll have to see. In 2002 we saw: Plomin and others reported (to scant press attention) that they had not been able to replicate the association of IGF2R with IQ. “It doesn’t look like that has panned out,” he said in 2008. 

“Anybody who gets evidence of a link between a disease and a gene has a duty to report it,” Ridley wrote. “If it proves an illusion, little harm is done.” Isn’t that just the way science works, after all? Surely; but whether innocent errors and false trails cause harm depends very much on how they are reported. Studies like Plomin’s are well motivated and valuable, and he has deplored the “genes for” picture himself. But there is little hope that such research will escape the “genes for” framing unless biologists do a better job of correcting the deeply misleading narrative about what genes do, a narrative that has flourished amid their often complacent attitude towards explaining it.

If you want to see the hazards of illusory gene associations, take the recent claim by Michael Gove’s former education adviser Dominic Cummings that findings on the inherited, innate aspect of intelligence (in particular, the work of Plomin) are being ignored. For a start, the very mention of genetics seemed to send rational argument out of the window. Some on the left sucked their teeth and muttered darkly about eugenics, or declared the idea “incendiary” and outrageous without bothering to explain why. That’s why the psychologist Jill Boucher, writing in Prospect (“Don’t deny the evidence”, December 2013), had a point when she excoriated the “politically correct” attacks on Cummings’s comments. But unless Boucher can point to an educationalist or teacher who denies that children differ in their innate abilities, or who regards them all as potential Nobel laureates, she is erecting something of a straw man.

The real problem with what Cummings said was not that he attributed some of our characteristics to our genes, but that he gave the impression of genetics as a fait accompli: if you don’t have the right genes, nothing much will help. This goes against the now accepted consensus that genes exert their effects in interaction with their environment. While IQ is often quoted as being about 50 per cent inheritable, the association with genetics is much weaker in children from poor backgrounds: good genes won’t help you much if circumstances are against you.

Nonetheless, there’s nothing wrong in principle with Cummings’s suggestion that research to identify “high IQ” genes should be encouraged. But if he were to look a little more deeply into what it has already discovered (and sometimes un-discovered again), he might wonder what it offers education policy. A 2012 study pointed out that most previous claims of an association between intelligence and specific genes don’t stand up to scrutiny. Nor is there much encouragement from those that do. In September an international consortium led by Daniel Benjamin of Cornell University in New York reported on a search for genes linked to cognitive ability using a new statistical method that overcomes the weaknesses of traditional surveys. The method cross-checks putative associations against a “proxy phenotype”—a trait that can “stand in” for the one being probed. In this case the proxy for cognitive performance was the number of years that the tens of thousands of test subjects spent in education.
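
To make the logic of that cross-check concrete, here is a minimal sketch in Python (an illustration using simulated data and hypothetical variable names, not the consortium’s actual pipeline). Variants are first screened against the proxy trait in a large sample; only the survivors are then re-tested against the target trait in a smaller one, so far fewer statistical hypotheses have to survive correction for multiple testing.

```python
# Minimal sketch of the proxy-phenotype idea, with simulated data and
# hypothetical names; with purely random data nothing is expected to pass,
# whereas real genotypes would carry genuine signal.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_discovery, n_target, n_snps = 20_000, 5_000, 500

# Simulated genotypes (0, 1 or 2 copies of each variant) and phenotypes.
geno_disc = rng.integers(0, 3, size=(n_discovery, n_snps))
geno_targ = rng.integers(0, 3, size=(n_target, n_snps))
years_edu = rng.normal(13, 2, size=n_discovery)   # proxy: years of education
cognition = rng.normal(100, 15, size=n_target)    # target: cognitive score

# Stage 1: screen every variant against the proxy trait in the big sample,
# keeping those that clear a Bonferroni-corrected threshold.
proxy_p = np.array([stats.pearsonr(geno_disc[:, j], years_edu)[1]
                    for j in range(n_snps)])
candidates = np.flatnonzero(proxy_p < 0.05 / n_snps)

# Stage 2: re-test only the surviving candidates against the target trait.
for j in candidates:
    r, p = stats.pearsonr(geno_targ[:, j], cognition)
    if p < 0.05 / max(len(candidates), 1):
        print(f"variant {j}: r = {r:.3f}, p = {p:.2e}")
```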

Of the several intelligence-linked genes claimed in previous work, only three survived this scrutiny. More to the point, those three could account for only a tiny fraction of the inheritable differences in IQ. Someone blessed with two copies of all three of the “favourable” gene variants could expect a boost of just 1.8 IQ points relative to someone with none of them. As the authors themselves admitted, the three gene variants are “not useful for predicting any particular individual’s performance because the effect sizes are far too small.”
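
To spell out the arithmetic behind that number (a rough, back-of-envelope reading that assumes the three variants act additively, an assumption made here purely for illustration):

\[
3 \text{ variants} \times 2 \text{ copies each} = 6 \text{ allele copies}, \qquad
\frac{1.8 \text{ IQ points}}{6 \text{ copies}} \approx 0.3 \text{ IQ points per copy},
\]

to be set against a population standard deviation of about 15 IQ points.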

Perhaps, then, the media would be best advised not to call these “IQ genes.” But you could forgive them for doing so, for they’d only have been echoing one of the paper’s authors, the influential cognitive scientist Steven Pinker of Harvard University. The proper response to a study showing that most existing candidates for gene-intelligence associations were wrong, and that the few that weren’t contribute almost negligibly to inheritability, surely isn’t “Here they are at last,” but “Jesus, is this all there is?”

Where, then, is the remainder of the inherited component? It must presumably reside among a host of genes whose effects are too subtle to be detected by current methods. Those genes will surely be involved in other physiological functions, their effects on intelligence being highly indirect. They are in no meaningful sense “genes for intelligence,” any more than SRY is a gene for being struck by lightning.

It’s not clear, then, pace Cummings, what this kind of study adds to the conventional view that some kids are more academically able than others. It’s not clear why it should alter the goal of helping all children achieve what they can, to the best of their ability. Such findings offer very dim prospects for Plomin’s hope, laudable in principle, that education might be tailored to the strengths and weaknesses of individual pupils’ genetic endowment. 

So, to Wade’s claims that genetics causes racial differences in traits such as the propensity for violence. As Wade’s book has shown, the issue of race and genes remains as contentious as ever. On the one hand, of the total genetic variation between random individuals, around 90 per cent is already present in populations on a single continent—Asia, say—and only 10 per cent more would accrue from pooling Europeans, Africans and Asians together. Some biologists argue that this makes the notion of race biologically meaningless. Yet ancestry does leave an imprint in our genomes: for example, lactose intolerance is more common in Africa and Asia, sickle-cell anaemia in people of African origin, and cystic fibrosis in white northern Europeans. That’s why the concept of race is useful as a proxy for medical risk assessment and diagnosis. Besides, arguments about statistical clusters of gene variation don’t alter the fact that culturally conventional indicators of race—pigmentation and eye shape, say—are genetically determined.

What you choose to emphasise and ignore in this matter is largely a question of ideology, not science. But arguments like those Wade puts forward draw their strength from simplistic notions of how genes relate to phenotype. We know that what we can reasonably call cystic fibrosis or sickle-cell genes (because the conditions derive from a single gene mutation) differ in incidence among racial groups. We also know that genetic variation, while gradual, is not geographically uniform. Might it not be that those variations could encompass genes for intelligence, say?

Yet if the genetic constitution of such traits is really so dispersed in the genome, this is a little like grabbing a hundred Scrabble tiles from some huge pile and expecting them to spell out this sentence. Ah, but such random grabs are then filtered into meaningful configurations by natural selection, Wade argues: genes producing a predisposition to capitalism or tribalism might be more useful in some populations than others. This relies on the assumption that every inheritable trait can be selected for, because it stems from genes “for” that trait. That’s precisely the fallacy that once supported eugenic arguments for the betterment of the human race: that we can breed out genes for criminality, stupidity and mendacity. 

While it has been reassuring to watch Wade’s thesis be comprehensively dismantled by knowledgeable scientists, it’s hard not to contrast their response with that to Watson’s claim in 2007 that the idea that all races share “equal powers of reason” is a delusion. Despite the fact that Watson adduced as “evidence” only the alleged experience of “people who have to deal with black employees,” some defended him as the victim of a witch-hunt by an “illiberal and intolerant thought police.” Even though it is hard to disentangle genuine prejudice from habitual liberal-baiting in Watson’s remarks, all we are really seeing here is one natural endpoint of the “genes for” and “instruction book” mentality underpinning the Human Genome Project that Watson helped establish and initially led.  

The dispersed, “polygenic” nature of inheritable intelligence is likely to be the norm in genetics, at least for many traits we care about. Much the same applies to many inheritable medical conditions, such as schizophrenia and multiple sclerosis: like asthma, they seem to arise from the action of many, perhaps even hundreds, of genes, and there is no one gene, or even a small number, that can be identified as the main culprit. This “missing heritability,” sometimes called the “dark matter” of the genome, is one of the biggest challenges to the promised personalised medicine of the post-genome era. But it should also be seen as challenging our understanding of genetics itself. Jones, who has been energetic about puncturing the worst misunderstandings of the “genes for” picture, admits that he wouldn’t now attempt to explain how genetics really works, as he did in his brilliant The Language of the Genes (1994), because the field has got so damned complicated.

Yet the linguistic analogy—with genes as words and genomes as books—might remain a serviceable one, if only it were taken more seriously. Combing the genome for genes for many (not all) complex traits seems a little like analysing Hamlet to look for the words in which Hamlet’s indecision resides. Sure, there’s a lot riding on the cluster “To be or not to be,” but excise it and his wavering persists. Meanwhile, “to” does a lot of other work in the play, and is in no meaningful sense an “indecisive Hamlet” word. 

The irony is that a study like the latest “IQ genes” report led by Daniel Benjamin, while showing yet again the inadequacy of the “gene for” picture, is likely to perpetuate it. As Jones has pointed out, such work has the unfortunate side effect of feeding our fascination with the putative genetic basis of social problems such as discrimination or differences in educational achievement, about which we can do rather little, while distracting us from the often more significant socio-economic causes, about which we could do a great deal.