Friday 15 November 2013

Placebo and nocebo: conscious thought not required

I went to a fascinating New Scientist Live event earlier this week. They're launching a new book literally about Nothing -- from the number zero to the state of mental nothingness induced by anaesthetic. Helen Pilcher was there talking about the nocebo effect, the "evil twin" of the placebo effect. Most of you are undoubtedly familiar with the placebo effect. This effect, also known as homeopathy (in my humble opinion), is when sick people actually do get better by taking pills containing no medicinal ingredients. It's a fascinating effect and seems to happen to about 1/5 of people in any given trial. For years I thought of the placebo effect as a 'mind over matter' type thing, where positive feelings cause the release of neurotransmitters that subsequently make the person actually feel better. My favourite fact about the placebo effect, though, is that even people who know the pills they're taking contain nothing but sugar still feel better when taking them. Mind over matter has nothing to do with it. Disappointing, really-- I've always had a soft spot for the idea of a positive attitude enabling us to overcome anything. Science once again fails to bend to my will.

The nocebo effect is the placebo effect inverted. People experiencing the nocebo effect have deteriorating health because they're told they're going to get sick. Helen spoke about witch doctors placing curses, about patients in clinical trials overdosing on sugar pills, and about the reluctance of surgeons to operate on people who think they're going to die during surgery. An unusually large number of those who think they're going to die during surgery indeed do, ruining our poor surgeon's statistics. So is the nocebo effect, unlike the placebo effect, actually a 'mind over matter' phenomenon?

Some fascinating work from researchers in Boston suggests that consciousness isn't required for the nocebo effect either. First, volunteers were conditioned to associate high and low pain with two different faces. When the volunteers were shown face 1, they were simultaneously given a painful heat stimulus. When they were shown face 2, they were simultaneously given a mild heat stimulus. Then they were shown those same faces while an intermediate level of heat was applied, and as you'd expect, the volunteers reported feeling more pain while they were looking at face 1. Now comes the interesting part. The experimenters then showed the faces so quickly that the volunteers were unable to consciously recognise them. Again, face 1 elicited more pain than face 2, despite identical heat stimuli.

So much for the conscious mind over matter there, too. There's something much more primal going on to cause these two effects. Perhaps patients who are genuinely at an increased risk of dying during surgery can somehow feel that something is just a bit off. Unlike Helen, I don't think these patients die from believing that the disease will kill them rather than from the disease itself. There's something else going on here. Belief has nothing to do with it.

Tuesday 16 October 2012

Pushing boundaries

I have a soft spot for men who are willing to throw themselves out of balloons (I professed my undying love for Joseph Kittinger in my Day at the Science Museum post). A few things impressed me most about the recent Baumgartner records:

1. He found himself in an out-of-control spin and pulled himself out of it. What amazing presence of mind and calmness under pressure.

2. He broke the sound barrier and the record for the highest freefall. Definitely cool.

3. He didn't hit a bird. At least none that we know of.

4. He captured the imagination of the entire world. Eight million people watched his live stream on YouTube.

We can't help it, we all love people who push boundaries. Have you looked at the X prizes lately? They're super cool. The X prizes are a series of multi-million dollar competitions designed to push boundaries. They have prizes for putting robots on the moon (governments need not apply), sequencing genomes with incredible speed and accuracy, and cleaning up disastrous oil spills in oceans and seas. They try to identify "Grand Challenges" and design prizes to spur innovation and development in these areas. Go on, take a look. Maybe you should enter!

Monday 17 September 2012

The Old Man and the G (G to A transition, that is)

When I was a kid, my dad used to make the same joke every time my mom's birthday came around. "Twenty-nine again, eh?" he'd say. We'd laugh a bit at my mom's expense, but at that age I didn't really understand why adults, particularly women, cared so much about their age. Women dye their hair and buy "rejuvenating" creams. Female models over 30 are relegated to Dove ads. Men, on the other hand, compare themselves to fine wines. The Bernie Ecclestones of the world outnumber the Duchesses of Alba.

Much of this concern reflects the genuine importance of maternal age to fetal health. Pregnant women over the age of 35 are routinely screened for chromosomal abnormalities in their fetuses. Meiosis, the process through which oocytes (eggs) and sperm are generated, is very different in men and women. Men produce sperm on-the-go from germ cells with a virtually unlimited production capacity. Those germ cells spring into action whenever they're needed, and men can produce viable sperm from puberty 'til death. Women's germ cells are already half-way to being oocytes by the time they're born. Women don't have an unlimited capacity to produce oocytes because their germ cells don't self-renew. Biology can be a bit quirky, and oogenesis is a particularly odd example. The partially mature oocytes in a newborn baby girl are stuck half-way through a cell division, with their chromosomes aligned in the centre of the cell. One of the problems with this is that large chromosomal abnormalities, such as those seen in Down syndrome, can occur more easily when the DNA is coiled and lined up side-by-side for a long time. The only place I can think of where this happens is in oocytes. The longer the cells remain with their DNA lined up ready for division, the greater the chance that things will go wrong when meiosis resumes. Hence the routine screening for women over 35. When things go wrong they go really wrong, with big chunks of one chromosome getting stuck on another chromosome. You don't need to look very closely at the DNA to see the abnormalities. You need a microscope, but not a DNA sequencer.

So women have all their oocytes with them when they're born and they don't produce any more. But sperm production is ongoing, and it relies on continued cell division in the adult. Each cell division carries its own risks. DNA must be reliably copied, checked for mutations, packaged and sent off to make a new cell. Since most mutations occur during DNA replication, every cell division presents an opportunity for mutations to arise. Sperm are no exception. Recent work on Icelandic parent-offspring trios (mom, dad, baby) shows that the number of new mutations in a baby is strongly correlated with the age of the father at the time of conception. The age of the mother has no detectable impact. The child of a 20-year-old father has an average of 25 mutations, while the child of a 40-year-old father has about 65. That translates to about 2 additional mutations for each year of paternal age, a surprisingly linear relationship, although an exponential fit to the same data has the count doubling every 16.5 years. Many mutations have no obvious consequences, but others can give rise to diseases ranging from autism to cancer predisposition syndromes. Many diseases, particularly those associated with impaired brain function such as autism, schizophrenia, dyslexia and reduced intelligence, are caused by multiple mutations working together and are associated with paternal age. Increased paternal age increases the probability of having enough mutations to make a difference in a child's overall health.
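
If you like to see the arithmetic spelled out, here's a quick back-of-the-envelope script comparing the two descriptions. The anchor point of 25 mutations at a paternal age of 20 comes from the figures above; the rest is just me playing with the numbers, not the study's actual model.

```python
# Rough comparison of the two ways the paternal-age effect is usually quoted:
# a linear increase of ~2 de novo mutations per year, and an exponential fit
# in which the count doubles roughly every 16.5 years. The anchor point
# (about 25 mutations at paternal age 20) comes from the figures quoted above;
# everything else here is back-of-the-envelope, not the study's actual model.

BASE_AGE = 20
BASE_MUTATIONS = 25.0
LINEAR_RATE = 2.0        # extra mutations per year of paternal age
DOUBLING_TIME = 16.5     # years, for the exponential description

def linear_model(age):
    return BASE_MUTATIONS + LINEAR_RATE * (age - BASE_AGE)

def exponential_model(age):
    return BASE_MUTATIONS * 2 ** ((age - BASE_AGE) / DOUBLING_TIME)

if __name__ == "__main__":
    print("father's age   linear   exponential")
    for age in range(20, 51, 5):
        print(f"{age:>12}   {linear_model(age):>6.0f}   {exponential_model(age):>11.0f}")
```

Over the 20-to-50 range the two curves barely differ, which is why both descriptions get used more or less interchangeably.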

A French teacher of mine had an amusing way to remember the gender of disaster words. Un problème, c'est masculin. To really make a mess you need the feminine: une catastrophe. The same idea seems to apply to DNA. Massive damage to the DNA comes from ageing mothers, while ageing fathers provide multiple smaller problems. Catastrophic DNA damage rarely makes it into viable babies; such embryos don't usually make it past the first few weeks of a pregnancy. A fetus with a collection of DNA "problems" is much more likely to make it through gestation. So next time your charming partner makes a crack about your age, you can do what my mom used to do and tell him to put a cork in it. Even fine wine ages poorly without one. And you might need to double your order of hair dye.

Tuesday 12 June 2012

The rise of scientific activism

A couple of weeks ago I heard Mark Henderson speak about his new book, The Geek Manifesto. He has recently been appointed Head of Communications at the Wellcome Trust, but was previously the science editor for the Times. Henderson's central thesis is that science should play a bigger role in politics, and that those of us who are scientifically-minded should become more political to ensure this. We should be writing to our MPs to push the scientific agenda. Scientific process should be used to determine policy. The scientific consensus should be presented as "expert evidence", and ill-informed and incorrect science should not be. He proposes the establishment of an Office for Scientific Responsibility, an independent body which would hold MPs to account for the claims about scientific evidence they make in the House of Commons.

I've never written a letter to an MP. Even after hearing Mark speak I don't plan to. I think there are better ways to push a scientific agenda, many of which have been growing in the last decade. Henderson complains that only 1 in 650 MPs has a scientific background. But what he doesn't mention is that the House of Lords is a different story. Of the 825 peers, about 700 are there because of their achievements outside the House. This includes accomplished medics and scientists along with the expected collection of lawyers, politicians and business-people. Many Lords are crossbenchers, and therefore do not expressly support any political party. The Lords, like the House of Commons, has a Science and Technology Committee. Unlike the House of Commons committee, the Lords committee contains distinguished lecturers and scientific minds, including John Krebs (a zoologist), Alec Broers (an engineer), Narendra Patel (an obstetrician), and Martin John Rees (a former president of the Royal Society). I'm against Lords reform for precisely this reason. I want people like this to look at every bill and decide if it passes muster before being passed into law. MPs are often chest-thumping, highly politicized line-toers, but the Lords are not. They are the measured voice of reason. The Lords is full of smart people who have genuine political power. Let's keep it that way.

The government also gets scientific advice from independent science advisers. Last week Radio 4 had an interview with Robert May, the Chief Scientific Adviser to the UK government from 1995-2000. He's exactly the kind of person we, as scientists, want to have an influence on politics. I've written about some of his work in a previous post, but his contributions to the fields of ecology, mathematics, and theoretical physics are outstanding. His job as Chief Scientific Adviser was, in his own words, to "speak truth to power". He advised the government through the first death from variant CJD, the human disease caused by the prions found in cows with BSE. He then advised the government through the public uprising against genetically modified foods. Since the GM debate took off shortly after the first CJD death, I don't think GM ever had much of a chance in the UK. People were just too worried that their food was going to kill them. But during his time there he set up a protocol for giving scientific advice to the government. When something important comes along, the government should seek the best scientists in that field to give advice. They should deliberately include dissenting voices. They should do it in the open, and they should emphasize the uncertainties. That protocol was excellent advice from an outstanding scientist. By the way, he's also a Lord.

Perhaps due to issues such as BSE and GM crops, scientific activism and public engagement in science are on the rise. The "lay summary" required by virtually all granting bodies is becoming more and more important. In the last month, scientists protested against the "death of British science" in a rather over-the-top and uncharacteristically childish march to Downing Street. I don't think it was particularly productive and I don't support these types of protests, but I do think it's a sign that scientists are getting more political. More importantly, anti-science protesters who were trying to dig up an important GM research site at Rothamsted were stopped by a group of pro-science counter-protesters. The Rothamsted site is purely a research site, not a commercial one. It has been measuring the effects of agriculture since 1843, and is therefore one of the longest-running agricultural and environmental experiments in existence. The scientists' appeal to the protesters on YouTube has over 30,000 hits. Organizations like Sense about Science, which provides scientific advice to anyone who's looking for it, are among our best tools in the pro-science movement. They are independent and respected, and look for ways to expand public understanding through targeted campaigns as well as by answering individual questions. Scientists should support them. They provide a means for us to promote libel reform, engage with protesters in a productive way, and ensure our voices get heard. Organizations with strong public support such as the Royal Society, the Medical Research Council, and the Wellcome Trust should follow their example. And we should all get behind them.

Friday 18 May 2012

The mind-robot connection

I admit it, I cry sometimes when I watch movies. But this is the first time a movie in the supplemental figures of a paper has brought tears to my eyes. This video shows a tetraplegic woman using a robotic arm controlled by an implant in her brain to lift her coffee and take a sip for the first time in 15 years. The smile on her face at the end is amazing.
It's an outstanding medical achievement, too. The researchers implanted microelectrodes in the motor cortex of two patients rendered tetraplegic and anarthric (unable to speak) as a result of a brainstem stroke. They then asked the patients to imagine moving objects, and looked to see which motor cortex cells were activated. This information was used in the next trial, where the patients controlled a robotic arm with their minds and used it to grasp balls in 3-dimensional space. That was an immense achievement, and is the focus of the paper. But what really got me was the idea of giving this woman, who has been unable to physically control her environment for 15 years, a touch of independence. What an accomplishment for the researchers and their subject alike.

http://www.nature.com/nature/journal/v485/n7398/extref/nature11076-s5.mov

For those of you without Nature subscriptions, you can watch a shortened version of it here:

http://www.nature.com/news/mind-controlled-robot-arms-show-promise-1.10652

Friday 20 April 2012

How we hear

More stories for Cosmos: how turning your head can cause you to lose "streams" of sounds (like conversations). http://www.cosmosmagazine.com/news/5527/hearing-readjusts-after-head-movements

Friday 13 April 2012

Let them eat mud

The hygiene hypothesis has always appealed to me. I like dirt. I like playing in dirt. I’ll believe just about anything that gives me a good reason to do something that feels a bit naughty. I love the concept of “good fat”. I’m sure ice cream is full of it (although I make a point of never checking).

The hygiene hypothesis states that children need exposure to infectious agents early in life to ensure the normal development of their immune systems, and that without this exposure they will become atopic. Atopy refers to the inappropriate activation of the immune system that can cause allergies, eczema, and asthma. The hygiene hypothesis was originally proposed to explain why children from larger families have fewer allergies. The theory is that children from large families are exposed to more infectious agents through their siblings, and these good and necessary immune stimulations prevent the bad and unnecessary immune stimulations later in life known as allergies. It’s as though the immune system gets bored if it has nothing to do and starts attacking anything it can get its dirty little hands on. Get a few more colds as a kid and you won’t have allergies. Let your kids play in the dirt. Sounds like a good idea. Given my partner’s family history of allergies and my daughter’s infantile eczema and milk sensitivities, I particularly liked the idea of pro-actively preventing future allergies in my children. I’ve never actually looked into the science of it, and it’s about time.

The immune system is fascinating and dynamic. It is composed of T cells, B cells, macrophages and a few others. The B cells produce antibodies. The macrophages eat things like parasites, bacteria and viruses. The T cells just help. They recognize the infection and can either help the macrophages (in what’s known as a Th1 response) or the B cells (Th2 response). Allergies are all Th2 since they involve the production of IgE from B cells, which then causes histamine release from mast cells (mast cells fall into my broad category of “other” immune cells).

Evidence for the hygiene hypothesis comes from 2 sources: epidemiology and mouse experiments. There are a number of interesting mouse models of atopic diseases. A few weeks ago, a paper in Science argued that mice raised in a germ-free environment were more prone to allergic asthma. Exposure to germs as a newborn could reverse this, while exposure during adulthood did little. The mouse evidence is quite nice, and definitely supports the hygiene hypothesis.

Epidemiology doesn’t establish causes, but it’s about the only way to find correlations in human populations. Epidemiology showed us the correlation between smoking and lung cancer. It can produce powerful information. The epidemiological data supporting the hygiene hypothesis are as follows:

1. Children from large families are less likely to have hay fever and eczema
2. Allergies and asthma have been increasing dramatically in developed countries in recent decades, where hygiene standards have also been improving

There are a couple of glaring issues with the epidemiological data. Asthma can have a variety of underlying pathologies and is more like a bunch of different diseases which all look the same. Some types of asthma are immune-related, some are not. It’s about a 50:50 split. Changes in the incidence of asthma are therefore inaccurate indicators of atopy. Furthermore, the incidence of asthma in the developed world is now on the decline (with the notable exception of inner-city African Americans). Despite the popularity of the hygiene hypothesis, I don’t think we’re any dirtier than we were 10 years ago. Given the number of handbag-sized hand sanitizers on offer at my local drugstore, it might be the opposite. A recent world-wide WHO study showed a U-shaped relationship between GDP and asthma, further undermining the hygiene hypothesis as an explanation for increasing asthma in developed countries. The poorest and the richest countries tend to have more asthma and wheezing than those in the middle. Even if we take asthma out of the picture altogether, the evidence for increasing atopy of any kind in the developed world is unclear. Some countries are experiencing declines, others are not. The epidemiological evidence for the hygiene hypothesis is sketchy at best, even though the hypothesis originated there.

In its original form the hygiene hypothesis argued that Th1 responses early in life (the macrophage-stimulating ones) could prevent subsequent inappropriate Th2 responses (the antibody-producing ones). But there’s a bit of an adaptation of that original hypothesis that seems to hold more weight. It’s not as much about exposure to infectious agents as it is about repeated, low-dose exposure to the allergens themselves. It’s more about inducing tolerance than it is about immune-skewing. Epidemiology can help here, too. Farming and early exposure to pets are associated with lower incidences of allergies and asthma, and that data is relatively robust. Recent allergy-prevention strategies involve low-dose shots of allergen in an attempt to induce immune tolerance. They don’t cause disease and they don’t induce immune responses.

Perhaps the hygiene hypothesis should be renamed. I’m not convinced that my kids need to come into contact with every infectious agent that causes the outpouring of fluids from noses, mouths and bums, but I’m glad to see that there is some evidence that letting them play in the mud and pet strange dogs might help them avoid future allergies. And if current allergy-prevention strategies are a good indicator, perhaps low-dose exposures even as an adult can induce tolerance. Yay for dirt.

Tuesday 28 February 2012

Will the Y chromosome disappear completely? Take a look at my recent article for Cosmos magazine: http://www.cosmosmagazine.com/news/5328/extinction-men-put-hold

Monday 13 February 2012

Go on, take a shot

A mathematical model suggests NBA players should be shooting earlier

Depending on which side of the Atlantic you call home, basketball is either riveting or coma-inducing. Basketball players are both athletes and actors. For sports enthusiasts, much of the excitement comes from waiting for the right scoring opportunity to arise. But a recent study suggests that NBA players may be waiting too long before shooting, and that shooting earlier could add about 4.5 points per game.
For mathematicians, the high scores and frequent shots found on basketball scoresheets give robust data sets. With only 5 players per team and a limited playbook, the interactions between players can be examined using classic models such as game theory. Most sports require quick decisions, and the results of those choices determine the score at the end of the game.
Shot-selection in basketball falls under the broad category of “optimal stopping problems”, the most famous of which is the so-called secretary problem. In the secretary problem, an administrator wishes to hire the best secretary out of n applicants. The applicants are interviewed one-by-one, in random order, and the outcome of the interview is determined immediately. Each applicant can therefore only be ranked relative to those already interviewed. How can the administrator maximize the probability of selecting the best candidate? The secretary problem has a surprisingly simple solution. The best strategy is to interview about the first third of the candidates (n/e of them, to be more precise), reject all of them, and then offer the job to the first subsequent applicant who is better than everyone in that unlucky first group.
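If you don't believe that skipping roughly the first third really is the best you can do, a quick simulation makes the point. This is my own sketch of the classic problem, not anything from the basketball paper:

```python
import math
import random

def hired_the_best(n, skip_fraction):
    """One secretary-problem trial: interview and reject the first chunk of
    applicants, then hire the first one better than everyone seen so far.
    Returns True if that hire turns out to be the best of all n applicants."""
    ranks = list(range(n))              # higher number = better applicant
    random.shuffle(ranks)
    k = int(n * skip_fraction)          # size of the 'look but do not hire' group
    best_skipped = max(ranks[:k]) if k else -1
    for r in ranks[k:]:
        if r > best_skipped:            # first applicant better than the whole skipped group
            return r == n - 1
    return ranks[-1] == n - 1           # nobody beat the skipped group: stuck with the last applicant

if __name__ == "__main__":
    n, trials = 100, 20_000
    for frac in (0.10, 0.25, 1 / math.e, 0.50, 0.75):
        wins = sum(hired_the_best(n, frac) for _ in range(trials))
        print(f"skip the first {frac:.0%}: best applicant hired {wins / trials:.1%} of the time")
```

Skipping 1/e of the applicants (about 37%) wins roughly 37% of the time, and any other cut-off does worse.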
Deciding when to shoot the basketball presents a similar problem, but the solution is more complicated. By shooting early, a team forfeits any shots that would have arisen later in that possession. On the other hand, teams waiting too long pass up opportunities and instead take low-percentage shots in the dying seconds. Brian Skinner of the University of Minnesota constructed a model of the “shoot or pass up the shot” decision. In his model, the optimum time to shoot depends on three factors: the probability that a shot will go in, the distribution of shot quality that the offense will generate in the future, and the time remaining (the NBA allows each team 24 seconds before they have to either take a shot or surrender the ball). The resulting model states, unsurprisingly, that only high-quality shots should be taken early, and that the cut-off for shot quality decreases as the clock ticks down. But NBA players seem to take this too far; with 15 seconds left on the clock, the optimal model predicts about 3 times more shots than are actually taken. NBA players prefer to shoot in the dying seconds.
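Skinner's actual model is continuous and fit to NBA data, but the shape of the answer, a shot-quality threshold that falls as the clock runs down, drops out of even a toy version. In the sketch below, the one-opportunity-per-second rate and the uniform distribution of shot quality are my own simplifying assumptions, not his:

```python
# Backward induction on a toy shoot-or-wait decision. Assumptions (mine, for
# illustration): one shot opportunity per second, shot quality q (probability
# the shot scores) drawn uniformly from [0, 1], and a possession worth nothing
# if the 24-second clock expires without a shot. The optimal rule is to shoot
# the current opportunity only if its quality beats the expected value of
# waiting, and that threshold falls as the clock runs down.

def continuation_values(seconds=24):
    """V[t] = expected value of the possession with t seconds left, playing
    optimally. Shoot the current opportunity if and only if q >= V[t-1]."""
    V = [0.0] * (seconds + 1)           # V[0] = 0: clock expired, possession wasted
    for t in range(1, seconds + 1):
        c = V[t - 1]
        # For q ~ Uniform(0, 1): E[max(q, c)] = c*c + (1 - c*c)/2 = (1 + c*c)/2
        V[t] = (1 + c * c) / 2
    return V

if __name__ == "__main__":
    V = continuation_values()
    print("seconds left   shoot if shot quality is at least")
    for t in (24, 15, 10, 5, 2, 1):
        print(f"{t:>12}   {V[t - 1]:.2f}")
```

With a full clock the toy team should hold out for excellent shots; with one second left it should shoot anything.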
The model makes a number of assumptions, the most controversial of which is that shot opportunities arise randomly in time. In reality it takes time for a team to set up its offence, and break-away plays may not use the same decision-making process, so shots from the first 7 seconds of ball possession were discounted; after 7 seconds, a team’s offence should be in place. In support of the assumption, there was little correlation in the NBA data between average shot time and the probability that a shot would score.
Under-shooting could be a sign of over-confidence. Players may be unwilling to take moderate-quality shots early in their possession, believing they’ll generate better scoring opportunities in the near future. They may also be underestimating the probability of a turnover and therefore overestimating the time remaining. This model advises ‘ballers to do less acting and more shooting. After all, smug grins are best worn by winning teams.

Monday 12 December 2011

Saving the pharmaceutical industry

Sometimes I feel sorry for the big pharmaceutical companies. In popular culture they’re part of an axis of evil that includes other satan-worshipers like oil companies and banks. Have you seen The Constant Gardener? Not a particularly sympathetic portrayal of the industry.

Things have not been looking good for the pharmaceutical industry lately. The health of each company depends on how many drugs they have under patent, and the size of the market for each of those drugs. In 2005, the 9 largest pharma companies had 9 new molecular entities (drugs, vaccines, etc) approved by the FDA. In 2010, they had 2. Many of them face expiring patents with little to fill the gap. Lipitor, Pfizer’s blockbuster cholesterol-lowering drug, lost its patent protection at the end of November. This drug alone accounts for 1/6 of Pfizer’s income, and they are in a battle to hold market share against their new generic competitors. While consumers, the NHS, and other health insurers around the world are ecstatic, we should be cautious about the graves we dance on. Pharmaceutical companies have discovered and developed drugs to treat a myriad of human diseases. Sales fund research. After years of increases, R&D spending fell by almost 3% last year. In 2011 both Novartis and Pfizer closed their major UK R&D sites. Big pharma isn’t finding new treatments and time is running short.

What’s going wrong? Drug discovery is a long and expensive process (see my human genome post). It takes an average of 13 years for a drug to reach the clinic and can cost upwards of $1bn to develop. A much bigger problem is the attrition from drug target identification to FDA approval. After preclinical development, a drug goes through clinical trials (phase I, II and III), gets registered with the FDA and finally becomes an approved drug. For every approved drug there are 24 drugs in preclinical development. Drugs fail at every stage of development, but the biggest drop is after phase II clinical trials. Phase I clinical trials aim to find the best dose, phase II trials examine the efficacy of the drug, and phase III trials compare the new drug with existing treatment regimens. Approximately half of the phase II trial failures are because the drug doesn’t work.
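
To get a feel for how quickly modest per-stage attrition compounds into that 24-to-1 ratio, here's a toy funnel. The individual pass rates below are illustrative guesses of mine, chosen only to show the compounding, not published industry figures:

```python
# Toy attrition funnel for drug development. The per-stage success rates below
# are illustrative guesses chosen to show how modest attrition at each stage
# compounds to roughly the 1-in-24 figure quoted above; they are not real
# industry statistics.

STAGES = [
    ("preclinical -> phase I", 0.35),
    ("phase I -> phase II",    0.60),
    ("phase II -> phase III",  0.35),   # the biggest single drop
    ("phase III -> approval",  0.60),
]

def funnel(candidates=24.0):
    remaining = candidates
    print(f"{'stage':<26}{'survivors':>10}")
    print(f"{'preclinical candidates':<26}{remaining:>10.1f}")
    for name, rate in STAGES:
        remaining *= rate
        print(f"{name:<26}{remaining:>10.1f}")
    return remaining

if __name__ == "__main__":
    approved = funnel(24.0)
    print(f"\nroughly {approved:.1f} approval from 24 preclinical candidates")
```

No single stage looks catastrophic on its own, which is exactly why the overall attrition is so easy to underestimate.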


Pharmaceutical companies have responded by making significant strategic and structural changes. Many of them have cut early-stage in-house research in favour of mining biotechs and academia for drugs and drug targets. Many have fostered increased cooperation between industry and academia. These changes are probably a good thing both for pharma and for drug development in general. Pharma companies get to outsource the risky early stages of drug development, and budding biotechs have someone to sell their product to. Academics can publish their results in interesting journals even if they don’t have obvious and immediate therapeutic value. Increased competition amongst the biotechs should foster creativity.


There is, of course, a caveat to all of this. Industry experts have always known that results are not always reproducible from one lab to another. It’s generally thought that about half of drug targets don't validate. It turns out that this may be a dramatic underestimation of the problem. In fact, Bayer scientists could only validate about a quarter of drug targets found in the academic literature. According to Reuters, drugs that originate in-house are 20% more likely to make it to the market. What R&D budgets have saved on in-house programs they'll have to spend on target validation and intellectual property acquisition.

This strategic change may be good for a different reason. While half of phase II trials fail due to inefficacy of the drug, 29% fail for "strategic reasons" (one common translation: Pharma B has a better drug that Pharma A's drug can't compete with). Phase II trials are time-consuming and costly, and overlap is not particularly constructive. Decreased reliance on in-house programs should make the early stages of drug development more open. Small biotech companies with good products will peddle their wares to multiple different pharmas, so even if Pharma A doesn't buy a given drug they still know that the drug exists and is being developed by Pharma B.

GlaxoSmithKline has taken a different approach. Three years ago they separated their R&D into Discovery Performance Units, each of which should perform as an independent biotech. Drugs coming out of these units should be as reliable as previous in-house drugs. GSK will be at a distinct advantage: they will have reliable drugs and access to information from biotechs, but won’t have to share information on their own drug development program. Not necessarily good for the industry, but good for GSK.

Strategic changes can help the industry, but they cannot save it. Over half of phase II trials still fail because the drug doesn’t work. They need to find a way to choose better targets. A recent Nature Chemical Biology paper by Mark Bunnage, a Pfizer medicinal chemist, outlined a number of ways in which target selection can be improved. He encourages target selection based on a number of hallmarks of target quality, including human genetic data and the existence of robust endpoints.

In my mind, the purpose of the pharmaceutical industry is to find new cures to diseases. In reality, big pharmas spend twice as much on marketing as they do on R&D. Biotech companies spend about 70% of their revenues on R&D, pharmas spend about 13%. Different companies, different priorities. And different outputs. I’m not saying the pharmaceutical industry is full of saints, but the research that has happened on their dime has improved the lives of millions. I hope they find a way to continue finding drugs to sell.


Tuesday 30 August 2011

The human genome: we’re just getting started

When the human genome was sequenced over a decade ago, it was a momentous scientific breakthrough. The human genome is enormous. The genome is about 3 billion DNA bases lined up one after the other along chromosomes (which are conveniently broken up into 23 parts). It contains all our genes as well as all the information about when those genes should be switched on and off. Many diseases are caused by genetic changes, so by comparing your or my genome to the average we should be able to see what diseases await us. It was as though a crystal ball had been dropped into our laps. All we had to do was look into it and see everything from our next colds to our eventual deaths. Really, by now there should be an iPhone App for it. So what happened?

As with many scientific discoveries, the sequencing of the human genome was over-hyped. It was a scientific breakthrough, but not a medical one. It takes a long time for scientific discoveries to become medicines that affect the lives of patients. A decade or more usually passes from the time a treatment is thought up to the time the first patient is treated, and most drugs don’t work and therefore never make it into patients at all. One of the most important things that scientists have used the genome data for is genome-wide association studies. In these studies the genomes of healthy people are compared with the genomes of people with diseases like heart disease, diabetes, cancer and autoimmunity. Scientists have found a number of mutations in people with those diseases, but knowing that a mutation is there is only the first step. The next steps are to see what that mutation does, try to develop drugs to fix the problem, and then see if those drugs are safe. These discoveries will take time. But without the genome data there in the first place, we wouldn’t even have a starting point. There are over 500 genetic diseases from cystic fibrosis to hemophilia. We can test for most of these. Now we need to develop ways to treat them.
 

Another important change has occurred over the last ten years. DNA sequencing has become cheaper and faster. Any two human genomes differ slightly (by roughly 0.1% at the single-letter level), so we need to have a better idea of what “normal” is. The only way to do this is to collect a bunch of normal samples and see how they differ from one another. The Human Genome Project, the publicly-funded effort to sequence the human genome, cost about £1.5 billion and took 11 years to complete. Sequencing a genome now would cost closer to £15,000 and take a couple of months. The X-prize Foundation currently has a $10 million prize for anyone who can sequence 100 human genomes in 10 days for less than $10,000 per genome. We’re not there yet, but we’re not far off. The competitive spirit has been part of sequencers’ ethos since the very beginning. The race to publish the genome itself was nail-biting, including a photo finish between the Human Genome Project and a splinter biotech company founded by a maverick scientist out to show us all how it should be done. Who says scientists are boring?
 

Anyone with internet access and a penchant for staring at repetitive things can look at the human genome for themselves ( http://genome.ucsc.edu/ has a good browser for this). The Human Genome Project and the scientific journals have been instrumental in ensuring that all the data is publicly available. Before the human genome it was difficult to convince another scientist to show you their data unless you showed them yours. Anyone with little to show was left in the dark. Having easy access to data means that scientific discoveries happen faster. Genomes are being sequenced faster and faster, and that data is available to anyone who wants it. DIY biologists are starting up companies in their garages. Making DNA is becoming faster and cheaper. Bacteria with synthetic genomes have been created. Biology is accelerating.
 

As Isaac Newton once said, “If I have seen further it is only by standing on the shoulders of giants”. The sequencing of the first human genome was a gigantic accomplishment. It will take some time before we can use this information to improve our health, but as discoveries start happening faster and faster it’s only a matter of time before the era of genetic medicine is upon us. These are exciting times, and they will yield exciting results. One day we will be able to sequence a person’s genome, know what diseases they’re likely to get, and then prevent those diseases from happening. It will, however, take time. Patience, patients.

Tuesday 9 August 2011

Higgs vs Jupiter: a modern-day David vs Goliath

Physics is about extremes. Even by Newton's time we had figured out the rules governing most things we can see with our eyes, so physicists for the last 200 or so years have been left with the task of investigating things that are either too small, too far away, or too hard to detect with our meagre five senses. The first half of the 20th century was devoted to small things. Thomson discovered the electron, Rutherford discovered the atomic nucleus, Marie Curie pioneered the study of radioactivity, nuclear bombs were made. Bohr's and Schrodinger's atomic models remain largely unchanged today. Nuclear physics was born, space exploration was still a fantasy. It was all about the small guys.

Tides turned when the Cold War started. The space race captured the imagination of big and little kids everywhere. Astronauts became the coolest people on the planet. Men went into space and walked on the moon. Space stations orbited the earth. When we were little, my dad made a set of bookshelves for my brother where the endpieces were shaped like rocketships launching into space. Go figure, my brother grew up to be a space physicist and spends his time launching things into space (although not bookshelves). NASA and its counterparts in Japan (JAXA) and Europe (ESA) have successfully sent probes to every planet, some of their moons, and a handful of comets, asteroids and dwarf planets. There's still a lot more to be learned about these bodies, but the tides have turned once again.

On July 21, NASA's space shuttle program came to a controlled stop at the end of the Kennedy Space Center's runway. As the Atlantis landed for the last time, the reins of human space flight were turned over to the likes of Richard Branson and friends until the International Space Station de-orbits in 2020 and humans come back to earth. Since its founding in 1958, NASA has spent $470 billion, at an average of 1.2% of the US annual budget. That's a serious commitment to looking at big, far-away things. The knock-on effects of NASA spending were huge and impossible to quantify, but it unquestionably inspired two generations of scientists, engineers and other dreamers in the US and beyond. NASA really did boldly go where no man had gone before. NASA's most recent mission, the Juno probe's trip to Jupiter, successfully launched last Friday. The Juno probe will take a polar orbit to look at the biggest planet in our solar system, a huge gas planet that resembles the sun except for the obvious lack of fire. An interesting mission, but we are entering the post-astronaut era. The "wow" factor has waned. Although they strapped a couple of smiling Lego people to the probe in an attempt to attract a younger audience, Lego people are simply too big. Our imaginations have moved on.

On the other end of the size spectrum, the Higgs boson and other particles currently being sought by the Large Hadron Collider (LHC) have attracted an astounding amount of media attention since the accelerator was turned on in September 2008. Even on the subatomic front there has been considerable rivalry between the big guys and the small guys. There’s more than one way to look for subatomic particles. Colliders such as the LHC make protons move really, really fast and then crash them into each other, hoping that not only does the hubcap pop off, but that the seat leather comes off too. These theoretical, subatomic particles should also exist in space, and probes outside the earth’s atmosphere can look at radiation from distant objects that would be absorbed or destroyed by the time it reaches the earth. So we should also be able to detect Higgs in space, as Miss Piggy has known from the start. NASA’s Fermi satellite is currently doing just that. The race is on. Even people who traditionally focus on big things are investigating subatomic structure. The coming decades will push the limits of our understanding of all things small. I’d better start building some atomic structure bookshelves.

Friday 5 August 2011

The problem with science careers is sample size

Science is an attractive career for many reasons. On the surface, academics have no real boss, flexible working hours, and job-for-life stability. They spend their time poking around, collecting tidbits of data on whatever catches their eye, and self-aggrandizing to passers-by in the hallways. Sounds like a pretty enjoyable career. An undergraduate science student looking to extend her jean-wearing, coffee-guzzling days into retirement could be easily fooled into thinking this was for her (that’s right, over half the undergraduate science students at most universities are female).

As you might guess from the title of this blog, the reality is very different. In fact, the statistics are rather appalling. One in ten biologists has a professor/assistant professor position 10 years after completing her PhD. Admittedly, some of those have left science of their own volition, but many more have been driven out by a lack of opportunity. Theoretically, if everyone wants to become an academic, a 10% success rate should mean that the best 10% of scientists get positions while the rest do something else, which isn't that different from a lot of other careers. Surely we want the best scientists to lead their own research programs. That's the problem. I've seen people in that top 10% get academic jobs, and I've seen people in that top 10% leave science altogether. Same for the other 90%. It all comes down to a problem of iterations.

Let’s say a person can get an academic job if she publishes in one Holy Trinity journal (Cell, Science, Nature; make sure to cross yourself as you say these) during her PhD/post-doc. If a young scientist publishes a total of 4 first author papers during this time, she’s done well. The papers that make it into the Holy Trinity are there because they’re interesting. And they’re interesting because they’ve asked timely questions and gotten useful and sometimes unexpected results. Some of this comes down to outstanding experimental design and skillful execution, but in equal measure it comes down to luck. Even outstanding scientists don’t publish exclusively in the Holy Trinity. Some great ideas simply don’t pan out, or the answer to a key question was “no” rather than “yes”. Biology can’t be bent to the experimenter’s desires. The answer doesn’t change the quality of the work, but it changes the interest factor and therefore the impact factor of the resulting paper. That “yes” or “no” answer often comes at the end of a body of work, when the scientist has already invested 2-3 years in the project, is running out of time and money and needs to publish or perish. Out of 10 great ideas, perhaps 1 or 2 will result in a Holy Trinity paper. Ensuring that 1 in 4 early-career papers gets into a Holy Trinity journal is as much luck as it is skill. In order to gauge scientific ability instead of luckiness, scientists need to have more iterations before having their CVs scrutinized. If a paper took 6 months of full-time work, an early-stage scientist could put out at least 10 before applying for independent funding. Three-month projects would give her 20. Then there would be enough data points to assess the quality of the candidate. The more data points there are, the smaller the role that factors such as luck will play. As scientists and statisticians, we should know this better than anyone.
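
A quick simulation shows how big the luck factor is at these sample sizes. The 15% chance that any single project turns into a Holy Trinity paper is a number I made up for an equally talented pool of scientists; the point is what happens as the number of papers per person changes:

```python
import random

# Every scientist in this simulation is equally good: each paper has the same
# (made-up) 15% chance of landing in a top journal. The only variable is the
# number of papers each person gets to write, i.e. the number of rolls of the dice.

P_HIT = 0.15           # illustrative probability that any one paper is a 'hit'
SCIENTISTS = 100_000   # identically skilled scientists in the pool

def fraction_with_a_hit(papers_per_scientist):
    """Fraction of identically skilled scientists with at least one top-journal paper."""
    lucky = sum(
        any(random.random() < P_HIT for _ in range(papers_per_scientist))
        for _ in range(SCIENTISTS)
    )
    return lucky / SCIENTISTS

if __name__ == "__main__":
    for n_papers in (4, 10, 20):
        frac = fraction_with_a_hit(n_papers)
        print(f"{n_papers:>2} papers each: {frac:.0%} look brilliant, "
              f"{1 - frac:.0%} look like they should leave science")
```

With 4 papers each, roughly half of an identical cohort clears the bar and half doesn't; with 20 papers each, almost everyone does, and the criterion stops being a coin toss.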

Unfortunately, I can’t imagine science moving in that direction. Today’s papers have much more information in them than papers from 10 years ago. A knock-out mouse model used to be a paper in itself; now it’s Figure 1a. The amount of time it takes to do the experiments, however, has remained unchanged. A PhD still produces 1-2 papers, same for a post-doc. Time seems to be constant.

Tuesday 21 June 2011

Everyone loves stem cells

Stem cells have been THE hot topic for a number of years now. In theory, stem cells can be turned into any other cell type and could therefore be used to repopulate a damaged organ with healthy, normal cells. Sounds cool. "Stem cell" refers to a number of different cell types: some can become any cell in the body and some can become only a small subset of cells. Bone marrow, which contains blood stem cells, has been successfully used to repopulate blood after chemotherapy since the 1950s. Half a century later, some recent papers suggest that stem cells could also be used to repopulate damaged hearts and livers, but there have also been some troubling reports about the nature of stem cells, particularly induced pluripotent stem cells.

Stem cells come in three flavours: embryonic stem cells (ES cells), induced pluripotent stem cells (iPSCs), and resident stem cells. ES cells are more of a research tool than a potential therapeutic tool. They can be used to study the normal processes which turn a stem cell into all the different cells in the body. They have their much-debated ethical pitfalls, and ES research continues to be plagued by government restrictions and the threat thereof. The advantage of ES cells is that they can be turned into literally any cell, whereas iPSCs and resident stem cells are more restricted. An iPSC might become a heart or liver cell, but not a brain cell. The downfall of ES cells is immune incompatibility. When foreign cells are injected into a patient, the patient's immune system will recognize them as foreign and attack them. Bone marrow and other organ donations are matched as closely as possible to the patient, but even then most patients are on immunosuppressant drugs to prevent rejection. ES cells, since the embryo is destroyed in order to get the cells, will never be genetically identical to a prospective patient and immune incompatibility will always be an issue.

iPSCs, on the other hand, are made from the patient's own cells so shouldn't be rejected. Cells taken from a person's skin (for example) are grown in dishes and turned into iPSCs through a variety of different protocols including genetic modification or drug treatment. Recently, cells from a mouse's tail have been turned into iPSCs and used to repopulate its damaged liver. iPSCs have their own problems: most iPSCs have multiple, large mutations. Putting mutant cells into someone is not exactly the best idea; not only would they be unlikely to work properly, they'd also potentially form cancers. The second major problem is that iPSCs are also rejected by the host's immune system. This was quite unexpected, since iPSCs are theoretically genetically identical to their host. Changes to the cells that occur during their transformation into iPSCs seem to be recognized by the immune system, and the iPSCs are rejected. So the iPSC field now has two enormous hurdles to overcome: it must find cells that are both genetically stable and not rejected by the host's immune system. The two might have a similar solution, but iPSCs are a long way from the clinic. The tail-becoming-liver experiment is still promising, but it used genetic modification with some nasty genes in order to perform its feat. No tumours were found in the mice after 2 months, but the long-term effects remain to be determined.

Resident stem cells are perhaps the best prospect for stem cell therapies. Many of our organs have the capacity to regenerate themselves, at least partially. A person can have a big chunk of their liver removed and the resident stem cells will help it to grow back. Bone marrow repopulates blood. Resident stem cells are specific to each organ but are already present in the body. The question is how to get them to grow when needed. Livers and blood regenerate themselves without needing to be stimulated; hearts and brains don't. Interestingly, a recent paper shows that resident stem cells in the mouse heart can grow and repopulate a damaged heart when the mouse is injected with a growth factor cocktail. The key to using resident stem cells will be finding the right cocktail for each organ. Some organs may not have stem cell populations that are inducible. It will take a lot of trial and error to find the right mix. The possibility of stimulating a population that's already in place is attractive since it circumvents the problems that arise when the cells are grown outside the body or genetically modified. Repopulating an organ from resident stem cells is a new idea and there will undoubtedly be problems along the way. Therapeutically it could only be used with partially damaged organs, since organs which are heavily damaged or removed completely wouldn't have the necessary stem cells. Some organs may not have resident stem cell populations, or those populations may not respond to growth cocktails. Neurons, for example, are particularly difficult to make. Things that work in mice don't always work in humans. And of course putting molecules into a human which stimulate growth could theoretically cause other inappropriate growth-related diseases (i.e. cancers).

Few topics in biology have been as over-hyped as stem cells. They are a potentially powerful tool. Let's see what resident stem cell researchers come up with in the next few years.

Sunday 5 June 2011

Sorry, it's been a while...

Yes, it's been almost a month since my last post. And I have to make one small correction: there was technically a meltdown at the Fukushima Daiichi plant. But I still stand by what I said.

I'm going to do a bit of recycling right now, so here's a little tidbit on oil droplets I wrote about a year ago. I thought you might find it interesting. For some more of my thoughts over the last month, check out:

http://www.economist.com/blogs/babbage/2011/05/controlling_illegal_fishing

In the meantime, enjoy this bit about oil drops.

Like lipids through a maze

Oil droplets may be used to solve complex network problems (from 05.06.10)

The maze is a long-standing test of problem-solving and learning skills. From rats looking for cheese to children running through a labyrinth, finding the end usually requires a trial and error approach. The successful maze solver must correct a few wrong turns along the way, staying focused enough on the end goal to not get disoriented and distracted licking one’s own paws.
Now it seems that lipid droplets laced with acid have moved into the ranks of successful maze navigators. Bartosz Grzybowski and colleagues at Northwestern University found that lipid droplets can successfully navigate mazes, and can even turn back when they encounter dead ends. In this case the “cheese” is an acid which diffuses through the maze to create a pH gradient. The acid-laced droplets sit in this gradient, so the side of a droplet facing the exit becomes more acidic while the side facing the start of the maze remains more basic. This difference in acidity creates a difference in surface tension across the droplet, which propels the droplet towards the finish line.
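For the programmers in the audience, here's a cartoon of that mechanism in code: the acid diffusing from the exit is stood in for by a shortest-path distance field, and the "droplet" simply steps towards whichever neighbouring cell is most acidic. It's an illustration of gradient-following, not the actual chemistry:

```python
from collections import deque

# A cartoon of the droplet experiment. The acid at the exit sets up a gradient
# through the maze (modelled here as breadth-first distance from the exit, a
# stand-in for the real diffusion profile), and the 'droplet' repeatedly moves
# to whichever neighbouring open cell is most acidic (lowest value).

MAZE = ["#########",
        "#S..#...#",
        "#.#.#.#.#",
        "#.#...#E#",
        "#########"]

def diffusion_field(maze, exit_pos):
    """Breadth-first 'diffusion' from the exit: lower value = more acidic."""
    field = {exit_pos: 0}
    queue = deque([exit_pos])
    while queue:
        r, c = queue.popleft()
        for nbr in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if maze[nbr[0]][nbr[1]] != "#" and nbr not in field:
                field[nbr] = field[(r, c)] + 1
                queue.append(nbr)
    return field

def run_droplet(maze):
    find = lambda ch: next((r, c) for r, row in enumerate(maze)
                           for c, cell in enumerate(row) if cell == ch)
    start, exit_pos = find("S"), find("E")
    field = diffusion_field(maze, exit_pos)
    pos, path = start, [start]
    while pos != exit_pos:
        r, c = pos
        # step towards the most 'acidic' neighbour; walls get an infinite value
        pos = min(((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)),
                  key=lambda p: field.get(p, float("inf")))
        path.append(pos)
    return path

if __name__ == "__main__":
    print(" -> ".join(str(p) for p in run_droplet(MAZE)))
```

The real droplets are cleverer than this sketch in one respect: as described below, they sometimes wander into dead ends and have to back out, something a pre-computed distance field never does.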
Two types of acid-laced droplets were used, based either on mineral oil or on dichloromethane, an organic solvent. Dichloromethane releases the acid faster than mineral oil, and the two lipids displayed different properties. The mineral oil always chose the shortest possible route. More interestingly, the faster-moving dichloromethane behaved like a cab driver encountering unexpected roadworks; it didn’t always choose the shortest route but was able to correct itself when it found a dead end. In some situations this required the droplet to backtrack for a period of time before resuming its path. When two droplets were simultaneously introduced into the maze, they rarely got in each other’s way.
This system could be useful in a number of ways. On a practical level, the movement of acid-laced droplets could be used as a micropump in equipment such as medical diagnostic tools or DNA microchips. If the system is scalable, the maze could also be used to solve more complex network problems. Tracing the paths of different droplets attracted to different targets may serve as a model for the flow of traffic through roads or websites. Robotics and plant and facility layouts could also be modeled using oil drops. The dichloromethane drop’s ability to correct errors could show what happens when slower-moving regions are introduced into the system. At what point will the drop change from a slower but more direct route to a longer but faster route?
There are two types of maze-solving experiments, testing spatial navigation or learning respectively. The oil drop experiment examines spatial navigation, where the maze-runner has no previous knowledge of the maze. To examine learning, the maze runner is placed in the same maze repeatedly; the time needed to complete the maze decreases as the runner learns. Lipid droplets can navigate, but living organisms still seem to have the edge on learning.