Science plc

Science and business are more entangled than ever before. This is good for economic growth, but can be bad for scientific authority
August 19, 1999

It has become a commonplace of our age that debate about science and technology is elbowing out the old themes of high politics. Godfrey Hodgson, elsewhere in this issue of Prospect, attributes this to the end of a "grand narrative" of two world wars and a cold war in which everyone's future, at least in Europe, was directly bound up with events on the international political stage. Elemental anxieties about survival have not, however, disappeared. Rather, they have shifted ground from armed conflict to the big scientific controversies about modern life. Is over-population about to cause a cataclysm of disease and famine? Are pesticides going to give us all cancer? What caused the hole in the ozone layer? Is there a cure for HIV? Does biodiversity matter? Is Britain about to suffer an epidemic of CJD? Is the greenhouse effect real or not? Dare we eat genetically modified foods?

The rise and rise of scientific controversy is not just a media phenomenon. Nearly half of the bills now put before the US Congress have a substantial scientific component, which is not matched by the level of scientific literacy among congressmen. This is increasingly true of other democracies, too. The issues are usually too complex for politicians and public to do anything but defer to the scientific consensus (if one ever emerges). We are increasingly dependent upon scientists for their advice-and anxious about whether we can, in fact, trust them.

The problem of trust is compounded by a change in the nature of modern science. Over the past 20 years, science and business have become much more closely entangled. Of course science has always served business. The white-coated scientists conjured up in Harold Wilson's "white heat of the scientific revolution" in the 1960s were just as likely to work for the private sector as the public sector. Indeed, in Britain since 1995 the private sector has actually employed more scientists than the public sector. But, until recently, the dominant ethos of science remained a public sector one-it was something done by disinterested boffins for the public good.

This public sector ethos also reflected the public funding of science. Impressed by the success of science in the second world war (which gave us penicillin, radar and the atom bomb), governments invested heavily in science up until the 1970s. But public funding has since been in decline, and much of the financial slack has been taken up by the private sector. Academic scientists are increasingly unlikely to win public funds for research proposals (the success rate is below 30 per cent) and are usually only too pleased to work for reputable businesses with funding offers.

Individual scientists are also beginning to appreciate the commercial value of what they do. The market for academic expertise-whether for research consultants or expert witnesses-is lucrative. From selling expertise, it has been a short step to selling intellectual property, and, thence-a short step again-to founding companies. Where once this would have been considered an inappropriate distraction for the serious academic, the spin-off company has become a status symbol. A number of British academics-although still far fewer than in the US-have become millionaires in this way. Successful companies founded by academic scientists in Britain include Oxford Asymmetry, Powderject, Roslin Biomed (now merged with Geron), Cambridge Display Technology and Cantab Pharmaceuticals.

Even public-funded research is increasingly serving private business. The recent move of the office of science and technology into the department of trade and industry is indicative of this. There are now a number of awards (such as the Realising Our Potential Awards) which are open only to research groups funded by business. And the government's University Challenge Fund, worth £45m, is described as "a unique fund to help turn university research into real business opportunities."

The privatisation of science is not usually thought to have a bearing on questions of trust or public interest. On the contrary, the shift from public to private funding and the greater awareness of commercial realities within the public sector is regarded as good for economic growth and for the financing of universities. Had Oxford University in the 1930s had its current policy towards commercial exploitation, it might today be completely self-funded by the proceeds from penicillin. In the US, one explanation for the long economic upswing is the burst of technological innovation, which owes much to the convergence between academic and business interests, and the development of entrepreneurial enthusiasms among researchers and their students. Several American universities have established a steady stream of income from the royalties and share options derived from faculty researchers' discoveries. And the American public sector has hitherto filed more patents on human DNA than all the world's multinationals.

In an economy driven by technology and innovation, academic laboratories are surrounded by commercial opportunities and it is quite proper that they should want to exploit them. But the new opportunities to profit from science also create new opportunities for conflicts of interest, accusations of partiality, and suspicions of untrustworthiness.

Conflicts of interest are by no means new to scientists and they take somewhat different forms in the main arenas of scientific employment: academia, business and government. But almost all scientific fraud arises from a conflict between the desire for short-term personal gain (whether in status or in wealth) and the requirements of truth. Theft of scientific ideas can also be viewed in this light. There are also conflicts of interest arising from personal loyalties. In the peer review of grant proposals and papers submitted for publication, scientists have to try to pass objective judgement on people they know well.

But, with some exceptions, these problems have been contained by the checks and balances in place within the scientific community, public or private. The effect of personality in peer assessments is minimised by group appraisal; theft is reviled; and peer review and the replication of experiments will eventually catch up with attempts to tamper with results. For a scientist tempted to make up results, the short-term rewards will rarely outweigh the long-term risks of being shown up by one's peers.

But what if the short-term rewards are high enough? Do long-term consequences begin to lose their deterrence? The share price of the Californian biotech company, Geron, soared by 44 per cent last year, two days before the scheduled publication date of a paper written by a group of Geron scientists for the journal Science. The paper highlighted the role of a particular enzyme in cellular ageing, and showed that it was possible to stall the ageing process. When a scientific result can make such a rapid impact on a stock market, it is only a question of time before stock markets start to make a similar impact on scientific results. There was nothing amiss in the Geron case: someone simply leaked the document before publication. But the example illustrates the scale of temptation facing the academic scientist with one foot in the commercial world.

Another example of the potential conflict between scientific practice and commercial interest comes from a recent clinical trial involving thalassemic patients in Toronto, Canada. Thalassemia is an inherited disorder in which individuals are unable to produce enough haemoglobin, with the result that many sufferers are dependent on blood transfusions. Frequent transfusions, however, can overload the body with iron. It would therefore be beneficial if some oral compound could be found which would "mop up" the excess iron in the blood streams of patients.

In the late 1980s, Nancy Olivieri, director of the Toronto Haemoglobinopathies Programme at the Hospital for Sick Children, Toronto, and a faculty member of the University of Toronto, identified a candidate compound called deferiprone. Her research team received funding from the Medical Research Council of Canada to initiate clinical trials to assess its efficacy and safety.

In 1993, Apotex Inc, a Canadian pharmaceutical company, became interested in the research, and provided additional funding to complete the trials. In 1995, short-term results looked promising. However, between 1996 and 1998, the trials appeared to be taking a turn for the worse. Far from having their iron levels reduced, the thalassemic patients on deferiprone were (according to Olivieri) beginning to show very high levels of iron, suggesting a risk of cardiac disease and early death. Although the trials were not yet complete, Olivieri thought they should be halted.

Apotex, however, was not of the same mind, and threatened Olivieri with legal action if she went public on the matter-which she did. In a surprising and disturbing turn of events, the university and hospital supported the company and removed Olivieri from her position as director of the Toronto Haemoglobinopathies Programme. In an account of the incident for the Lancet, David Weatherall, Regius Professor of Medicine at Oxford, commented: "Although understandable in economic terms, the increasing pressures on scientists by government and funding bodies to develop links with industry and to pursue work that is likely to be of commercial value are generating a research environment that is fraught with pitfalls."

Olivieri has since been reinstated, but the incident raises a number of questions. Did Olivieri have a duty to disclose her concerns? Presumably, she is not now looking forward to a rosy future of research funded by Apotex. A less scrupulous scientist might have been tempted to let self-interest override conscience. Would the situation have been different had Olivieri been a director or employee of Apotex? Might the temptation to ride the thing through have been irresistible?

For the past two centuries, scientists have cultivated the image of disinterested people willing and able to screen out their personal desires and ambitions in the quest for truth. That Olivieri was able to conform to the stereotype, under duress, reinforces this view. But other scientists have not been so resilient. I have heard several stories of young up-and-coming researchers at British universities being prepared to manipulate their results in order to keep a consulting contract alive. Whether such tales are true is difficult to say; but in the context of increasing competition for public grants and the growing rewards of private collaboration, they do not seem unbelievable.

The incidence of researchers falsifying results, or of companies applying financial muscle in an unethical way, may be rising, but it is likely to remain small. There is a bigger issue, however, relating to the status and objectivity of scientists as expert witnesses and government advisers in the new world of commercial science. In recent years, there have been a number of high-profile court cases in which scientific advice was of paramount importance: tobacco, asbestos, breast implants, OJ Simpson, disputes between drug companies, the Microsoft anti-trust trial. In the US, expert witnesses can earn up to $2,000 a day, and it seems that for the right price an expert can always be found to give the desired evidence. Because many cases involve grey areas of science, as well as grey areas of the law, it is often credible for scientists to take either side on a controversial topic without risking their academic integrity. No doubt most expert witnesses genuinely believe the testimony they give. But the fact that the American Bar Association Journal contains four pages of classified advertisements promoting the services of expert witnesses, and that thousands of scientists, consultants and doctors have registered their names on a website called expertpages.com, many of them advertising things like a "quick turnaround," makes one wonder whether some are not just guns for hire.

Clear-cut evidence of scientific experts putting their academic integrity at risk in exchange for money was found in the course of recent American litigation on tobacco smoking. In the late 1980s, the cigarette company, Philip Morris, was buying up European scientists to act as consultants, in order to challenge the claim that passive smoking really was dangerous. It was a strategy the company believed to be very successful: so much so, that at one point Philip Morris actually considered founding its own European version of the World Health Organisation.

Reluctance to give evidence can be as damning as the willingness to sell it. In 1969, when Union Oil's offshore well sprang a massive leak in the Santa Barbara Channel, the state of California took Union Oil and three other oil companies to court-but struggled to find a single petroleum engineer at a Californian university prepared to testify against the industry. The state's chief deputy attorney general blamed the close relationship between industry and the universities' petroleum engineers, who did not want to risk losing grants and consulting retainers. What are the implications, 30 years on, when so many other scientific and technical disciplines show similar patterns of crossover between universities and business? With academic salaries and publicly funded research grants at a low ebb, it is not surprising that scientists find themselves dragged into messy commercial disputes.

The problem extends to scientists advising government. Today, Whitehall is more dependent on outside academics for scientific advice than on its own in-house scientists. This raises two problems. First, government may discover, at an inopportune moment, that it has no national expertise on which to draw. To some extent, this was the problem when the BSE crisis broke. The first BSE working party put together by the ministry of agriculture, fisheries and food, in 1988, was made up of three civil servants and three scientists, none of whom had active research experience with spongiform encephalopathy. The problem of lack of expertise was compounded by the willingness of government to manipulate research to justify an already established policy. Indeed, two prominent British researchers have alleged that the ministry and the Agricultural and Food Research Council blocked funding for BSE research at the height of the crisis, fearing that it might contradict government policy on BSE.

The second problem is that the expertise, when it is available, may suffer or appear to suffer from conflicts of interest. The government takes advice from dozens of non-departmental scientific committees which play a regulatory role on issues such as food and drug safety, pollution, and the environment. The Committee on Safety of Medicines (CSM), for example, advises the British government on the approval of new drugs. Its recommendations can be crucial for business and public alike. A government decision that a drug is unsafe for sale can cost a drug company hundreds of millions of pounds in development costs, and deny it billions more in expected revenues. On the other hand, to let a drug pass too readily could lead to a repeat of the Thalidomide disaster, as a result of which 10,000 children were born with severe limb deformities in the early 1960s.

Because the stakes can be so high, a few advisory committees have gained considerable public attention. The Advisory Committee on Releases to the Environment (Acre), which advises the Department of the Environment, has found itself embroiled in the row over GM foods. The Spongiform Encephalopathy Advisory Committee (SEAC), part of the ministry of agriculture, has spent the best part of a decade making headlines. Others, such as the Committee on Mutagenicity of Chemicals in Food, Consumer Products and the Environment (Com), or the Committee on Toxicity of Chemicals in Food, Consumer Products and the Environment (Cot), may follow.

Scientific committees are mainly filled with academic scientists, an arrangement which seems both appropriate and popular. In a recent Mori survey on public attitudes towards risk, most of those surveyed specified independent scientists as the most reliable source of advice on BSE. For the purpose of the survey, "independent scientist" was equated with "university professor." But in an increasingly privatised academic world, it is difficult to find a university professor who is actually completely independent.

A glance at the annual reports of advisory committees, such as Acre, CSM and Cot, demonstrates that a large proportion of scientists have links to industry in the areas they are trying to assess. These links can take the form of shareholdings, consultancies, grants, pensions, and even company directorships. Last year, Friends of the Earth identified eight out of the 13 members of Acre as being directly involved with the biotech industry, claiming that six members had also been involved with companies that had been granted permission by Acre to start trials of GM crops. Friends of the Earth called for the committee to resign. In May this year, the government, anxious about criticisms that it might be favouring particular interest groups, announced that it was going to appoint a new committee. Ten of the 13 members were due to stand down anyway, as the government has recently introduced a time limit for serving on an advisory committee. However, Michael Meacher, the environment minister, has made it clear that there will be no room on the new committee for business interests (there were two representatives), or for members of environmental lobby groups such as Friends of the Earth (there was one). So a search has begun for truly independent academics.

John Beringer, the current chairman of the committee, is not optimistic. "If we are not going to allow people into committees because they have industrial connections, then we are going to denude our committees of the top scientists," he says. Citing the pressure on academics to raise joint funding with business for equipment grants, for supporting industrial research students, and for commercialising their inventions even when they don't especially want to, he says: "Almost every good academic department will have industrial funding now." It seems that complete independence is impossible.

This is not to imply that the decisions being made by Acre are unreasonable. There is almost no way of telling whether someone has been unduly influenced in the advice they give. As with expert witnesses in court, much of the scientific advice to governments occurs in the fuzzy areas of science-areas where the evidence is unclear. This means that, even if you could demonstrate that a scientist was offering advice that contradicted work he had published previously, or a stance he had previously taken, he could legitimately claim to have changed his mind. The most substantial charge levelled at Acre was that of passing every application that was made to it. These included applications to conduct field trials and to market genetically modified oilseed rape, beet, maize and potatoes. Many of those applications, however, were accepted only after they had been amended, so presumably the committee served some public good.

Just because an adviser is a consultant to an industry he is trying to assess, or because his research is funded by a business in that industry, does not mean that he cannot make decisions with integrity and even objectivity. Most members of advisory committees no doubt believe that they leave their biases at the door. Some argue that deliberate biasing of decision-making is, in any case, next to impossible in committees of 20 people. Moreover, there is a feeling that some degree of disinterestedness can be established simply by making declarations of interest, and by enforcing a rule that members should leave meetings when matters of direct relevance to their interests are discussed.

But even if we can trust scientific advisers to act with integrity, this still leaves two problems. One is that the appearance of conflicts of interest may, in the long run, prove damaging to the public's acceptance of scientific authority. The second is that an implicit bias is likely to exist, simply through long association and shared ways of thinking in a particular sector.

A scientist who works regularly for an industry is more likely, over the course of a career, to start seeing things in the same way that the industry does. Many of the most difficult assessments scientific advisory committees have to make involve an element of subjectivity. Any calculation of the risks involved in eating or growing GM foods, for example, will be coloured by intuition, experience and opinion, in addition to what there is by way of hard, scientific fact. Scientists used to working in this particular area are likely to support the production of GM crops. But their opinions are formed not from grubby self-interest-not because they consult for this company or because their research is funded by that company-but because they are excited about the science, because they believe in the science, and because, being accustomed to the science, they have acquired a different perspective on the potential risks from that held by a confused public.

Perhaps this is why it is important to hold on to a substantial, independent, scientific community. Imagine if Imperial College was mainly funded by Monsanto. Should this prospect make us feel uneasy? In South Korea, several reputable universities, such as the 600-year-old SungKyunKwan University (SKK) in Seoul, and the Pohang Institute of Science and Technology (Postech) in southern Korea, are largely dependent on a single company for their financial support (Samsung in the case of SKK; the Pohang Steel Company in the case of Postech). There is no reason why such a system cannot work-both SKK and Postech are reputable institutions, and both are, effectively, intellectually independent of their sponsors.

But to those of us from a culture that has learned to see universities as free, not just in practice but also in principle, the Korean model is undesirable. Would the research at an Imperial funded by Monsanto be any different? Who knows? Would the public be less likely to believe experts from the university in their appraisal of GM foods? Almost certainly yes.

A public which loses faith in the ability of its scientists to be impartial, whether that loss of faith is warranted or not, is likely to gradually lose faith in science itself. Public attitudes towards science are already tinged by more scepticism than in the past. Most people are becoming aware that there is a degree of subjectivity implicit in the process, that experiment is seldom completely pure, and that scientists have a tendency, shared with the rest of humanity, to see what they expect to see. But there remains a shared confidence that, in the long run, the scientific process tends to get things right, to make accurate predictions and to find sensible answers.

This belief, however, depends on the perceived integrity of scientists. And that integrity is subject to constant test in a world of commercialised science. Many scientists resent the fact that they have been pushed into collaborations with business, only to find themselves accused of undermining their academic integrity. "As an academic you are trapped in just about every direction," says John Beringer.

Public confidence depends on proper rules to minimise conflicts of interest. But it also depends on the scrupulousness of individual scientists like Nancy Olivieri. There will be a steadily increasing flow of great scientific-political debates in the future-perhaps about cellular phones and brain tumours, cancer and the contraceptive pill, or the undiscovered dangers of gene therapy. In a commercialised academic world, they will all be tests of scientific character.