Monday 12 December 2011

Saving the pharmaceutical industry

Sometimes I feel sorry for the big pharmaceutical companies. In popular culture they’re part of an axis of evil that includes other Satan-worshippers like oil companies and banks. Have you seen The Constant Gardener? Not a particularly sympathetic portrayal of the industry.

Things have not been looking good for the pharmaceutical industry lately. The health of each company depends on how many drugs it has under patent and on the size of the market for each of those drugs. In 2005, the 9 largest pharma companies had 9 new molecular entities (drugs, vaccines, etc) approved by the FDA. In 2010, they had 2. Many of them face expiring patents with little to fill the gap. Lipitor, Pfizer’s blockbuster cholesterol-lowering drug, lost its patent protection at the end of November. This drug alone accounts for 1/6 of Pfizer’s income, and the company is in a battle to hold market share against its new generic competitors. While consumers, the NHS, and other health insurers around the world are ecstatic, we should be cautious about the graves we dance on. Pharmaceutical companies have discovered and developed drugs to treat a myriad of human diseases. Sales fund research. After years of increases, R&D spending fell by almost 3% last year. In 2011 both Novartis and Pfizer closed their major UK R&D sites. Big pharma isn’t finding new treatments, and time is running short.

What’s going wrong? Drug discovery is a long and expensive process (see my human genome post). It takes an average of 13 years for a drug to reach the clinic and can cost upwards of $1bn to develop. An even bigger problem is attrition on the way from drug target identification to FDA approval. After preclinical development, a drug goes through clinical trials (phases I, II and III), gets registered with the FDA and finally becomes an approved drug. For every approved drug there are 24 drugs in preclinical development. Drugs fail at every stage of development, but the biggest drop is after phase II clinical trials. Phase I trials aim to find the best dose, phase II trials examine the efficacy of the drug, and phase III trials compare the new drug with existing treatment regimens. Approximately half of the phase II failures happen because the drug doesn’t work.
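To make the 24-to-1 odds concrete, here's a back-of-the-envelope funnel in Python. The per-stage pass rates are my own illustrative guesses, chosen only so the numbers multiply out to roughly one approval per 24 preclinical candidates; they are not industry figures.

```python
# Back-of-the-envelope attrition funnel. The pass rates below are
# illustrative assumptions, picked so the figures multiply out to
# roughly the 24-to-1 odds quoted above; they are not industry data.
pass_rates = [
    ("preclinical -> phase I", 0.50),
    ("phase I -> phase II",    0.55),
    ("phase II -> phase III",  0.30),  # the biggest drop in the pipeline
    ("phase III -> approval",  0.50),
]

candidates = 24.0  # drugs in preclinical development per eventual approval
for stage, rate in pass_rates:
    candidates *= rate
    print(f"{stage:24} {candidates:5.2f} drugs remain")
# The funnel ends with roughly one drug standing.
```
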


Pharmaceutical companies have responded by making significant strategic and structural changes. Many of them have cut early-stage in-house research in favour of mining biotechs and academia for drugs and drug targets. Many have fostered increased cooperation between industry and academia. These changes are probably a good thing both for pharma and for drug development in general. Pharma companies get to outsource the risky early stages of drug development, and budding biotechs have someone to sell their product to. Academics can publish their results in interesting journals even if those results don’t have obvious and immediate therapeutic value. Increased competition amongst the biotechs should foster creativity.


There is, of course, a caveat to all of this. Industry experts have always known that results are not always reproducible from one lab to another. It’s generally thought that about half of drug targets don't validate. It turns out that this may be a dramatic underestimation of the problem: Bayer scientists could only validate about a quarter of the drug targets found in the academic literature. According to Reuters, drugs that originate in-house are 20% more likely to make it to market. What R&D budgets have saved on in-house programs they'll have to spend on target validation and intellectual property acquisition.

This strategic change may be good for a different reason. While half of phase II trials fail due to inefficacy of the drug, 29% fail for "strategic reasons" (one common translation: Pharma B has a better drug that Pharma A's drug can't compete with). Phase II trials are time-consuming and costly, and overlap is not particularly constructive. Decreased reliance on in-house programs should make the early stages of drug development more open. Small biotech companies with good products will peddle their wares to multiple different pharmas, so even if Pharma A doesn't buy a given drug it still knows that the drug exists and is being developed by Pharma B.

GlaxoSmithKline has taken a different approach. Three years ago it separated its R&D into Discovery Performance Units, each of which is meant to perform like an independent biotech. Drugs coming out of these units should be as reliable as previous in-house drugs, so GSK will be at a distinct advantage: it will have reliable drugs and access to information from biotechs, but won’t have to share information on its own drug development program. Not necessarily good for the industry, but good for GSK.

Strategic changes can help the industry, but they cannot save it. Over half of phase II trials still fail because the drug doesn’t work. The industry needs to find a way to choose better targets. A recent Nature Chemical Biology paper by Mark Bunnage, a Pfizer medicinal chemist, outlines a number of ways in which target selection can be improved. He encourages selecting targets according to a set of hallmarks of target quality, including supporting human genetic data and the existence of robust endpoints.

In my mind, the purpose of the pharmaceutical industry is to find new cures for diseases. In reality, big pharmas spend twice as much on marketing as they do on R&D. Biotech companies spend about 70% of their revenues on R&D; pharmas spend about 13%. Different companies, different priorities. And different outputs. I’m not saying the pharmaceutical industry is full of saints, but the research that has happened on their dime has improved the lives of millions. I hope they find a way to continue finding drugs to sell.


Tuesday 30 August 2011

The human genome: we’re just getting started

When the human genome was sequenced over a decade ago, it was a momentous scientific breakthrough. The human genome is enormous: about 3 billion DNA bases lined up one after the other along chromosomes (which conveniently break it up into 23 parts). It contains all our genes as well as all the information about when those genes should be switched on and off. Many diseases are caused by genetic changes, so by comparing your or my genome to the average we should be able to see what diseases await us. It was as though a crystal ball had been dropped into our laps. All we had to do was look into it and see everything from our next colds to our eventual deaths. Really, by now there should be an iPhone app for it. So what happened?

As with many scientific discoveries, the sequencing of the human genome was over-hyped. It was a scientific breakthrough, but not a medical one. It takes a long time for scientific discoveries to become medicines that affect the lives of patients. A decade or more usually passes from the time a treatment is thought up to the time the first patient is treated, and most drugs don’t work and therefore never make it into patients at all. One of the most important things scientists have used the genome data for is genome-wide association studies. In these studies the genomes of healthy people are compared with the genomes of people with conditions like heart disease, diabetes, cancer and autoimmunity. Scientists have found a number of mutations in people with those diseases, but knowing that a mutation is there is only the first step. The next steps are to work out what that mutation does, try to develop drugs to fix the problem, and then see if those drugs are safe. These discoveries will take time. But without the genome data there in the first place, we wouldn’t even have a starting point. There are over 500 genetic diseases, from cystic fibrosis to hemophilia. We can test for most of these. Now we need to develop ways to treat them.
 

Another important change has occurred over the last ten years: DNA sequencing has become cheaper and faster. Since any two human genomes differ slightly from each other, we need a better idea of what “normal” is, and the only way to get one is to collect a bunch of normal samples and see how they differ from one another. The Human Genome Project, the publicly-funded effort to sequence the human genome, cost about £1.5 billion and took 11 years to complete. Sequencing a genome now would cost closer to £15,000 and take a couple of months. The X Prize Foundation currently has a $10 million prize for anyone who can sequence 100 human genomes in 10 days for less than $10,000 per genome. We’re not there yet, but we’re not far off. The competitive spirit has been part of sequencers’ ethos from the very beginning. The race to publish the genome itself was nail-biting, including a photo finish between the Human Genome Project and a splinter biotech company founded by a maverick scientist out to show us all how it should be done. Who says scientists are boring?
 

Anyone with internet access and a penchant for staring at repetitive things can look at the human genome for themselves ( http://genome.ucsc.edu/ has a good browser for this). The Human Genome Project and the scientific journals have been instrumental in ensuring that all the data is publicly available. Before the human genome, it was difficult to convince another scientist to show you her data unless you showed her yours, and anyone with little to show was left in the dark. Having easy access to data means that scientific discoveries happen faster. Genomes are being sequenced faster and faster, and that data is available to anyone who wants it. DIY biologists are starting up companies in their garages. Making DNA is becoming faster and cheaper. Bacteria with synthetic genomes have been created. Biology is accelerating.
 

As Isaac Newton once said, “If I have seen further it is only by standing on the shoulders of giants”. The sequencing of the first human genome was a gigantic accomplishment. It will take some time before we can use this information to improve our health, but as discoveries start happening faster and faster it’s only a matter of time before the era of genetic medicine is upon us. These are exciting times, and they will yield exciting results. One day we will be able to sequence a person’s genome, know what diseases they’re likely to get, and then prevent those diseases from happening. It will, however, take time. Patience, patients.

Tuesday 9 August 2011

Higgs vs Jupiter: a modern-day David vs Goliath

Physics is about extremes. Even by Newton's time we had figured out the rules governing most things we can see with our eyes, so physicists for the last 200 or so years have been left with the task of investigating things that are either too small, too far away, or too hard to detect with our meagre five senses. The first half of the 20th century was devoted to small things. Thomson discovered the electron, Rutherford the atomic nucleus, Marie Curie pioneered the study of radioactivity, nuclear bombs were made. Bohr's and Schrödinger's atomic models remain largely unchanged today. Nuclear physics was born while space exploration was still a fantasy. It was all about the small guys.

Tides turned when the Cold War started. The space race captured the imagination of big and little kids everywhere. Astronauts became the coolest people on the planet. Men went into space and walked on the moon. Space stations orbited the earth. When we were little, my dad made a set of bookshelves for my brother where the endpieces were shaped like rocket ships launching into space. Go figure, my brother grew up to be a space physicist and spends his time launching things into space (although not bookshelves). NASA and its counterparts in Japan (JAXA) and Europe (ESA) have successfully sent probes to every planet, some of their moons, and a handful of asteroids, comets and dwarf planets. There's still a lot more to be learned about these bodies, but the tides have turned once again.

On July 21, NASA's space shuttle program came to a controlled stop at the end of the Kennedy Space Center's runway. As Atlantis landed for the last time, the reins of human space flight were turned over to the likes of Richard Branson and friends until the International Space Station de-orbits in 2020 and humans come back to earth. Since its founding in 1958, NASA has spent $470 billion, at an average of 1.2% of the US annual budget. That's a serious commitment to looking at big, far-away things. The knock-on effects of NASA spending were huge and impossible to quantify, but it unquestionably inspired two generations of scientists, engineers and other dreamers in the US and beyond. NASA really did boldly go where no man had gone before. NASA's most recent mission, the Juno probe's trip to Jupiter, launched successfully last Friday. Juno will take up a polar orbit to look at the biggest planet in our solar system, a huge gas planet that resembles the sun except for the obvious lack of fire. An interesting mission, but we are entering the post-astronaut era. The "wow" factor has waned. Although they strapped a couple of smiling Lego people to the probe in an attempt to attract a younger audience, Lego people are simply too big. Our imaginations have moved on.

On the other end of the size spectrum, the Higgs boson and other particles currently being sought by the Large Hadron Collider (LHC) have attracted an astounding amount of media attention since the accelerator was turned on in September 2008. Even on the subatomic front there has been considerable rivalry between the big guys and the small guys. There’s more than one way to look for subatomic particles. Colliders such as the LHC make protons move really, really fast and then crash them into each other, hoping that not only does the hubcap pop off, but that the seat leather comes off too. These theoretical, subatomic particles should also exist in space, and probes outside the earth’s atmosphere can look at radiation from distant objects that would otherwise be degraded by the time it reached the earth’s surface. So we should also be able to detect Higgs in space, as Miss Piggy has known from the start. NASA’s Fermi satellite is currently doing just that. The race is on. Even people who traditionally focus on big things are investigating subatomic structure. The coming decades will push the limits of our understanding of all things small. I’d better start building some atomic-structure bookshelves.

Friday 5 August 2011

The problem with science careers is sample size

Science is an attractive career for many reasons. On the surface, academics have no real boss, flexible working hours, and job-for-life stability. They spend their time poking around, collecting tidbits of data on whatever catches their eye, and self-aggrandizing to passers-by in the hallways. Sounds like a pretty enjoyable career. An undergraduate science student looking to extend her jean-wearing, coffee-guzzling days into retirement could be easily fooled into thinking this was for her (that’s right, over half the undergraduate science students at most universities are female).

As you might guess from the title of this blog, the reality is very different. In fact, the statistics are rather appalling. One in ten biologists has a professor/assistant professor position 10 years after completing her PhD. Admittedly, some of the rest have left science of their own volition, but many more have been driven out by a lack of opportunity. Theoretically, if everyone wants to become an academic, a 10% success rate should mean that the best 10% of scientists get positions while the rest do something else, which isn't that different from a lot of other careers. Surely we want the best scientists to lead their own research programs. That’s the problem. I’ve seen people in that top 10% get academic jobs, and I’ve seen people in that top 10% leave science altogether. Same for the other 90%. It all comes down to a problem of iterations.

Let’s say a person can get an academic job if she publishes in one Holy Trinity journal (Cell, Science, Nature; make sure to cross yourself as you say these) during her PhD/post-doc. If a young scientist publishes a total of 4 first-author papers during this time, she’s done well. The papers that make it into the Holy Trinity are there because they’re interesting, and they’re interesting because they’ve asked timely questions and gotten useful and sometimes unexpected results. Some of this comes down to outstanding experimental design and skillful execution, but in equal measure it comes down to luck. Even outstanding scientists don’t publish exclusively in the Holy Trinity. Some great ideas simply don’t pan out, or the answer to a key question was “no” rather than “yes”. Biology can’t be bent to the experimenter’s desires. The answer doesn’t change the quality of the work, but it changes the interest factor and therefore the impact factor of the resulting paper. That “yes” or “no” answer often comes at the end of a body of work, when the scientist has already invested 2-3 years in the project, is running out of time and money, and needs to publish or perish. Out of 10 great ideas, perhaps 1 or 2 will result in a Holy Trinity paper. Ensuring that 1 in 4 early-career papers gets into a Holy Trinity journal is therefore as much luck as it is skill. To gauge scientific ability instead of luck, scientists need more iterations before having their CVs scrutinized. If a paper took 6 months of full-time work, an early-stage scientist could put out at least 10 before applying for independent funding; three-month projects would give her 20. Then there would be enough data points to assess the quality of the candidate. The more data points there are, the smaller the role that factors such as luck will play. As scientists and statisticians, we should know this better than anyone.
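Here's a quick simulation of the iterations problem. The success probabilities are invented for illustration: scientist A is genuinely twice as good as scientist B, and we count how often B nonetheless looks at least as good on paper.

```python
import random

# Toy model of the "iterations" argument: scientist A is genuinely better
# (each project has a 20% shot at a top journal) than scientist B (10%).
# With only a handful of papers each, how often does B look at least as
# good as A on paper? (Both probabilities are illustrative assumptions.)
P_A, P_B = 0.20, 0.10
TRIALS = 100_000

def top_papers(p: float, n: int) -> int:
    """Number of top-journal papers out of n attempts with hit rate p."""
    return sum(random.random() < p for _ in range(n))

for n in (4, 10, 20):
    upsets = sum(top_papers(P_B, n) >= top_papers(P_A, n)
                 for _ in range(TRIALS))
    print(f"{n:2d} papers each: B matches or beats A "
          f"{upsets / TRIALS:.0%} of the time")
```

With 4 papers each, luck overturns skill a large fraction of the time; with 20, the better scientist almost always comes out ahead.
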

Unfortunately, I can’t imagine science moving in that direction. Today’s papers have much more information in them than papers from 10 years ago. A knock-out mouse model used to be a paper in itself; now it’s Figure 1a. The amount of time it takes to do the experiments, however, has remained unchanged. A PhD still produces 1-2 papers, same for a post-doc. Time seems to be constant.

Tuesday 21 June 2011

Everyone loves stem cells

Stem cells have been THE hot topic for a number of years now. In theory, stem cells can be turned into any other cell type and could therefore be used to repopulate a damaged organ with healthy, normal cells. Sounds cool. "Stem cell" refers to a number of different cell types: some can become any cell in the body, while others can become only a small subset of cells. Bone marrow, which contains blood stem cells, has been successfully used to repopulate blood after chemotherapy since the 1950s. Half a century later, some recent papers suggest that stem cells could also be used to repopulate damaged hearts and livers, but there have also been some troubling reports about the nature of stem cells, particularly induced pluripotent stem cells.

Stem cells come in three flavours: embryonic stem cells (ES cells), induced pluripotent stem cells (iPSCs), and resident stem cells. ES cells are more of a research tool than a potential therapeutic tool. They can be used to study the normal processes which turn a stem cell into all the different cells in the body. They have their much-debated ethical pitfalls, and ES research continues to be plagued by government restrictions and the threat thereof. The advantage of ES cells is that they can be turned into literally any cell, whereas iPSCs and resident stem cells are in practice more restricted. An iPSC might become a heart or liver cell, but not a brain cell. The big drawback of ES cells is immune incompatibility. When foreign cells are injected into a patient, the patient's immune system will recognize them as foreign and attack them. Bone marrow and other organ donations are matched as closely as possible to the patient, but even then most patients are on immunosuppressant drugs to prevent rejection. Because ES cells come from embryos (which are destroyed in order to get the cells), they will never be genetically identical to a prospective patient, so immune incompatibility will always be an issue.

iPSCs, on the other hand, are made from the patient's own cells, so they shouldn't be rejected. Cells taken from a person's skin (for example) are grown in dishes and turned into iPSCs through a variety of different protocols, including genetic modification or drug treatment. Recently, cells from a mouse's tail have been turned into iPSCs and used to repopulate its damaged liver. iPSCs have their own problems: most iPSCs carry multiple, large mutations. Putting mutant cells into someone is not exactly the best idea; not only would they be unlikely to work properly, they'd also potentially form cancers. The second major problem is that iPSCs are also rejected by the host's immune system. This was quite unexpected, since iPSCs are theoretically genetically identical to their host. Changes that occur in the cells during their transformation into iPSCs seem to be recognized by the immune system, and the iPSCs are rejected. So the iPSC field now has two enormous hurdles to overcome: it must find cells that are both genetically stable and not rejected by the host's immune system. The two might have a similar solution, but iPSCs are a long way from the clinic. The tail-becoming-liver experiment is still promising, but it relied on genetic modification with some nasty genes to perform its feat. No tumours were found in the mice after 2 months, but the long-term effects remain to be determined.

Resident stem cells are perhaps the best prospect for stem cell therapies. Many of our organs can regenerate themselves, at least partially. A person can have a big chunk of their liver removed and the resident stem cells will help it grow back. Bone marrow repopulates blood. Resident stem cells are specific to each organ but are already present in the body; the question is how to get them to grow when needed. Livers and blood regenerate themselves without needing to be stimulated, hearts and brains don't. Interestingly, a recent paper shows that resident stem cells in the mouse heart can grow and repopulate a damaged heart when the mouse is injected with a growth factor cocktail. The key to using resident stem cells will be finding the right cocktail for each organ, which will take a lot of trial and error. The possibility of stimulating a population that's already in place is attractive since it circumvents the problems that arise when cells are grown outside the body or genetically modified. Repopulating an organ from resident stem cells is a new idea, and there will undoubtedly be problems along the way. Therapeutically it could only be used on partially damaged organs, since organs which are heavily damaged or removed completely wouldn't have the necessary stem cells. Some organs may not have resident stem cell populations, or those populations may not respond to growth cocktails; neurons, for example, are particularly difficult to make. Things that work in mice don't always work in humans. And of course putting growth-stimulating molecules into a human could theoretically cause other inappropriate-growth diseases (i.e. cancers).

Few topics in biology have been as over-hyped as stem cells. They are a potentially powerful tool. Let's see what resident stem cell researchers come up with in the next few years.

Sunday 5 June 2011

Sorry, it's been a while...

Yes, it's been almost a month since my last post. And I have to make one small correction: there was technically a meltdown at the Fukushima Daiichi plant. But I still stand by what I said.

I'm going to do a bit of recycling right now, so here's a little tidbit on oil droplets I wrote about a year ago. I thought you might find it interesting. For some more of my thoughts over the last month, check out:

http://www.economist.com/blogs/babbage/2011/05/controlling_illegal_fishing

In the meantime, enjoy this bit about oil drops.

Like lipids through a maze

Oil droplets may be used to solve complex network problems (from 05.06.10)

The maze is a long-standing test of problem-solving and learning skills. From rats looking for cheese to children running through a labyrinth, finding the end usually requires a trial and error approach. The successful maze solver must correct a few wrong turns along the way, staying focused enough on the end goal to not get disoriented and distracted licking one’s own paws.
Now it seems that lipid droplets laced with acid have moved into the ranks of successful maze navigators. Bartosz Grzybowski and colleagues at Northwestern University found that lipid droplets can successfully navigate mazes, and can even turn back when they encounter dead ends. In this case the “cheese” is an acid which diffuses through the maze to create a pH gradient. Since the laced droplets themselves slowly release acid, the side of the droplet facing the exit becomes more acidic while the side facing the start of the maze becomes more basic. This difference in acidity sets up a difference in surface tension across the droplet, which propels the droplet towards the finish line.
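
The trick maps neatly onto an algorithm. Here's a toy sketch of the principle (my own illustration, not the authors' model): a distance field computed from the exit stands in for the diffusing acid, and the "droplet" simply steps downhill in that field until it reaches the goal.

```python
from collections import deque

# Toy illustration (not the paper's physics): acid diffusing from the
# exit sets up a gradient; a droplet that always steps toward the exit
# side of the gradient traces out the shortest path.
MAZE = [
    "#########",
    "#S..#...#",
    "##.#.#.##",
    "#..#.#..#",
    "#.##.##.#",
    "#......E#",
    "#########",
]

def solve(maze):
    grid = [list(row) for row in maze]
    cells = {(r, c) for r, row in enumerate(grid)
             for c, ch in enumerate(row) if ch != "#"}
    start = next(p for p in cells if grid[p[0]][p[1]] == "S")
    exit_ = next(p for p in cells if grid[p[0]][p[1]] == "E")

    # "Diffusion": breadth-first distance from the exit plays the role
    # of the pH gradient.
    dist, queue = {exit_: 0}, deque([exit_])
    while queue:
        r, c = queue.popleft()
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if nb in cells and nb not in dist:
                dist[nb] = dist[(r, c)] + 1
                queue.append(nb)

    # "Chemotaxis": from the start, always step down the gradient.
    path, pos = [start], start
    while pos != exit_:
        r, c = pos
        pos = min((nb for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1))
                   if nb in dist), key=dist.get)
        path.append(pos)
    return path

print(len(solve(MAZE)) - 1, "steps from S to E")
```
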
Two types of acid-laced droplets were used, based either on mineral oil or on dichloromethane, an organic solvent. Dichloromethane releases the acid faster than mineral oil, and the two droplet types behaved differently. The mineral oil droplets always chose the shortest possible route. More interestingly, the faster-moving dichloromethane droplets behaved like a cab driver encountering unexpected roadworks: they didn’t always choose the shortest route but were able to correct themselves when they found a dead end. In some situations this required a droplet to backtrack for a while before resuming its path. When two droplets were introduced into the maze simultaneously, they rarely got in each other’s way.
This system could be useful in a number of ways. On a practical level, the movement of acid-laced droplets could be used as a micropump in equipment such as medical diagnostic tools or DNA microchips. If the system is scalable, the maze could also be used to solve more complex network problems. Tracing the paths of different droplets attracted to different targets could serve as a model for the flow of traffic through roads or websites. Robot routing and plant or facility layouts could also be modelled using oil drops. The dichloromethane drop’s ability to correct errors could show what happens when slower-moving regions are introduced into the system: at what point will the drop switch from a slower but more direct route to a longer but faster one?
There are two types of maze-solving experiments: one tests spatial navigation, the other learning. The oil drop experiment examines spatial navigation, where the maze-runner has no previous knowledge of the maze. To examine learning, the maze-runner is placed in the same maze repeatedly; the time needed to complete the maze decreases as the runner learns. Lipid droplets can navigate, but living organisms still seem to have the edge on learning.

Tuesday 3 May 2011

Go nuclear

Energy will always be a politically-charged topic. Growing up in a white-collar town dominated by oil companies, I understand what impact energy has on the economy. In Calgary everyone drives Porsches when oil prices are high, and they trade those Porsches for Fords when times are tough. The rest of the world does precisely the opposite; when oil prices are high it costs us more to drive, heat our homes, and manufacture goods. Inflation goes up. Food is more expensive. The difference between countries that produce their own energy and those that don't is stark.

I was living in the UK in 2009 when the Russians and the Ukrainians started spitting at each other and the Russians turned off the natural gas pipeline. The knock-on effects (both real and potential) were felt throughout Europe, with 18 European countries reporting major drops or complete cuts in their gas supplies. It was a bit of a wake-up call for me; I'd never really understood the importance of energy self-sufficiency before. Last winter, at the most bitter point of the cold spell, Norway (which supplies an ever-increasing fraction of the UK's gas) had to shut down one of its gas processing centres, leaving the UK with only 7 days' worth of gas. Not exactly reassuring.

The UK produces energy from a number of sources. Approximately 40% of the UK's power comes from gas, 33% from coal, 20% from nuclear and 7% from renewables (mainly wind). Coal is dirty, and many of the coal plants are scheduled to be shut down in accordance with EU objectives. The UK aims to have 20% of its power come from renewable sources by 2015, so renewables are certainly not poised to produce the majority of the UK's electricity in the next decade. That leaves gas and nuclear to make up the remaining 80% once the coal-powered stations are shut down.

When the double-punch earthquake and tsunami hit Japan on 11 March, it was a once-in-a-lifetime test for the nuclear community. The forty-year-old Fukushima power station was the 15th largest nuclear power station in the world. What shocked me was that there wasn't a meltdown. The footage of the greenhouses being flattened or of the enormous ships being pushed around like toys highlights the power of water. The Fukushima station did not escape unscathed, but there was no mushroom cloud either. Japan is in a seismically active region. The largest nuclear power station in the world, the Kashiwazaki-Kariwa Nuclear Power Plant, was shut down in 2007 after a nearby earthquake shook it more than it should have. Fortunately no radiation leaked that time, and the plant restarted 21 months later. The Fukushima station was not as lucky, and radiation has certainly left the site. It will be years before we can assess the impact on the health of those living near the site, but nearby residents showed no immediate signs of radiation poisoning. The Fukushima power station shows that a nuclear power station can withstand a severe beating and not melt down. Well done.

The problems of waste disposal and storage still exist. But I hope the UK will continue to recognize the importance of nuclear power as a source for safe, green electricity.

Thursday 28 April 2011

The age-old question: should I have my genome sequenced?

In the early days of the post-genomic era, some scientists were predicting a boom in individuals having their genomes sequenced. For about £1000, you too can have the coding portion of your genome (about 1-2% of it) sequenced. Fewer people have been willing to fork out for this information than many scientists expected. We all know we shouldn’t smoke or drink too much and that we should get regular exercise. With few exceptions, knowing the precise sequence of your DNA won’t give you many more insights than that. Having your genome sequenced can only bring bad news: you’re more likely than most to get disease A, B or C. Perhaps we should look at the genomes of people who have lived extraordinarily long and disease-free lives. If I thought that having my genome sequenced would give me license to eat chocolate with impunity, I might consider it.

Tuesday 12 April 2011

Seeing is believing

Biology is beautiful. Living organisms have symmetries, colours and shapes that are aesthetically pleasing. Our eyes can only appreciate this at centimetre or millimetre resolutions, but the same is true on much smaller scales. It's an obvious thing to say, but computers (and increasingly inexpensive data storage) have changed the way we can see biological events. Videos containing gigabytes of high-resolution data are easy to generate, and can give us a four-dimensional view of development and other cellular processes. Erik Sahai's lab always showed beautiful videos of migrating cells during their seminars, and it made me want to study migration. If you're a YouTube junkie like me, here are a couple of videos that are worth watching:

Dividing cells:
http://www.youtube.com/watch?v=m73i1Zk8EA0&feature=channel_video_title

From a textbook publisher with some great videos including audio explanations of what you're seeing:
http://www.youtube.com/user/garlandscience#p/a (check out the zebrafish development video)

The development of the eye itself is a complex, multi-step process. The eye starts as a ball of cells, part of which then folds in on itself to form a two-layered structure called the optic cup; imagine taking the air out of a volleyball or soccer ball and pushing in one side until it's folded in half. The lens of the eye then sits at the opening of this cup. A fascinating new publication from Yoshiki Sasai's lab shows that these first few stages of eye development can happen in mouse embryonic stem cells growing ex vivo (i.e. in a dish). Naturally, there are also great videos of this process.

Differentiation of organs ex vivo is both a goal and a tool for developmental biologists. If organs such as the retina could be grown in dishes it would reduce the need for organ donations where demand always outstrips supply. It would also allow for custom organs to be grown, making organ rejection less likely. Growing organs ex vivo also marks an important point in our understanding of how that organ develops. A mouse (or any other organism) starts out as a single cell and ends up with many different kinds of cells including heart cells, lung cells, and muscle cells. Two identical cells side-by-side will grow and divide and change into very different cells by the time development is complete. Numerous signals from neighbouring cells and the rest of a cell's environment help to ensure that each cell chooses the correct fate for its time and place. To recapitulate this in a dish is no small feat. Luckily for early eye development, the requirements for differentiation are minimal and the optic cup develops spontaneously from balls of cells. Most organs will probably need a precisely engineered environment that will be defined over many years through trial and error, but the optic cup system is a good start.

Tuesday 1 March 2011

The Shark Tank

Some papers feel like they were written over drinks at the pub one night. So it is for a recent Nature paper co-authored by an Oxford ecologist and a Bank of England economist (doi:10.1038/nature09659). What these two were doing at the same pub remains unclear, but the result is an interesting analysis of the banking system's inherent fragility using established food web models.

Increased globalization of the banking system in recent decades has resulted in significant interdependency. As much as two-thirds of the growth in banks' balance sheets is accounted for by banks lending to banks and to other financial institutions. The collapse of Lehman Brothers in the autumn of 2008 caused a global financial crisis as waves of banks, each dependent on banks in the preceding wave, found themselves in financial trouble.

Interdependency is a common theme in ecology. Species interactions range from relationships which benefit both species (mutualistic interactions) to those in which one species eats the other (predatory interactions), but all depend on the population dynamics of interacting species. Predator and prey population sizes depend on each other. If rabbit food is scarce, rabbit populations decrease, and fox populations follow closely behind. Similarly, if bank #1 fails, then bank #2 which lent bank #1 money now has debts that won't be repaid, and if bank #2 fails then bank #3 which lent money to bank #2 now has the same problem. These "financial ecosystems" can be modelled with banks replacing species in standard food web interaction models. In this model each bank has assets (interbank loans and external assets such as mortgages or bonds) and liabilities (interbank borrowing and deposits from customers). The difference between these two must be positive or the bank fails. Each bank must also keep a fraction of its money as a reserve. This reserve insulates banks from shocks. If the bank's customers decide they want to take their money out, the bank has a reserve of money so that the customers can be paid immediately. If those reserves aren't big enough the bank then has to find money in other ways, either through selling off its assets or by borrowing money from another bank. If a fraction of the bank's assets are wiped out by a shock, the bank fails if it does not have sufficient capital reserves. This paper looks at how an initial failure is propagated through the financial ecosystem, and how the size of the reserve affects this propagation.
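
As a flavour of how simple such a model can be, here's a minimal sketch (my own toy version with invented parameters, not the paper's calibration): each bank gets a capital buffer and a web of interbank loans, one bank's external assets are wiped out, and the failure propagates to any creditor whose buffer can't absorb the loss.

```python
import random

# Toy banking food web (parameters invented for illustration; this is
# not the paper's calibration). Each bank lends to several others and
# holds a capital buffer. A bank fails when its losses exceed its
# buffer, and its failure costs each creditor the loan made to it.
random.seed(1)
N_BANKS, LINKS_PER_BANK = 50, 5
INTERBANK_SHARE, TOTAL_ASSETS = 0.20, 1.0
LOAN = INTERBANK_SHARE * TOTAL_ASSETS / LINKS_PER_BANK  # size of one loan

# lenders[b] = the banks that have lent to bank b.
lenders = {b: random.sample([x for x in range(N_BANKS) if x != b],
                            LINKS_PER_BANK) for b in range(N_BANKS)}
# Heterogeneous capital buffers: some banks can absorb a loss, some can't.
capital = {b: random.uniform(0.02, 0.08) for b in range(N_BANKS)}

failed, wave = {0}, {0}          # the initial shock wipes out bank 0
while wave:
    nxt = set()
    for bank in wave:
        for creditor in lenders[bank]:
            if creditor in failed:
                continue
            capital[creditor] -= LOAN    # the loan will not be repaid
            if capital[creditor] <= 0:   # buffer exhausted: failure
                failed.add(creditor)
                nxt.add(creditor)
    wave = nxt

print(f"{len(failed)} of {N_BANKS} banks fail after the initial shock")
```

Note that LOAN shrinks as LINKS_PER_BANK grows while INTERBANK_SHARE stays fixed, which is exactly the attenuation-through-connectivity effect in the first scenario described below.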

Three types of shocks were examined, and each had a different outcome. When a single shock hits a single bank, all other banks are affected only through their interbank loans. Increased connectivity attenuates risk, and fewer banks fail in the second wave. In a second situation, a generalized decrease in market prices causes bank #1 to fail. Bank #2 now has two problems: the same generalized decrease in market prices, plus outstanding loans to bank #1 which won't be repaid. The shock amplifies as more and more banks fail, and connectivity propagates risk. The third situation attempts to describe the most recent financial crisis: interbank lending decreases following an initial shock, and affected banks follow suit. This liquidity-hoarding shock does not attenuate as the second and third generations of banks are affected.

Some interesting observations emerge. If all banks do the same thing and hold a similar mix of assets, each bank individually is less likely to fail if the value of one of those assets decreases. The system as a whole, however, is much more volatile, since it behaves like a single bank. One big shock could wipe out the whole banking system. If regulators want to decrease systemic risk, they should encourage diversification. They should also encourage modularity, so that failures from one type of financial activity do not contaminate banks engaged in unrelated activities. The United States has already proposed the Volcker rule to do precisely that.

As the authors point out, the banking system is not quite as simple as the model they used. One major difference is that in reality there tend to be a few large, well-connected banks and many more smaller banks. The smaller banks are especially well-connected to the big banks, since the big banks have a proportionally big share of the banking market. Models of infectious-disease spread are similar to food-web models. In the epidemiology of infectious diseases, people with lots of interpersonal connections (the analogue of big banks) are known as "super-spreaders", and a web with super-spreaders maximizes the number of infected individuals. Regulations aimed at reducing systemic risk, as opposed to bank-by-bank risk, should require super-spreader banks to hold larger reserves than other banks.

This is not the first time the financial world has turned to science for answers. The Black-Scholes model used for pricing derivatives is, at its core, a heat diffusion equation. Perhaps interbanking webs will be better than these much-scapegoated derivatives at identifying and reducing systemic risk.

Tuesday 15 February 2011

A day at the science museum

A couple of weeks ago we took our little girls to the Science Museum. It was absolutely rejuvenating. My almost-two-year-old fell in love with the rockets. I explained to her how rockets launch things into space and then fall back to earth. She then ran around pointing at them, and kept telling me "rockets fall down"! She looked at the models with such intensity; she was truly amazed. Her excitement was contagious, and when we later saw the Apollo 10 landing capsule and a 1 million volt particle accelerator from the 1930s, I felt amazed too. I love that feeling. Lately we make lots of play-doh rockets, probably because I want to remind us both what awe feels like.

A friend of mine was recently accused of being a geek for wondering how much a person's head weighed. I don't think geek is the right word. The accused is one of the best scientists I know, probably because she spends her spare time wondering about things like the weight of her head. Scientists must be inherently curious people, as scientific discovery doesn't take a straight path and discoveries are often fortuitous. One of my favourite examples is restriction enzymes. Restriction enzymes are used in the lab to cut DNA at specific sequences so that the pieces can be glued back together in a different order. They are the cornerstone of the molecular biologist's toolkit. They weren't discovered by someone looking to cut DNA into pieces, but rather by a scientist studying the effects of radiation on bacteria. He received the Nobel Prize for this discovery. In his autobiography he writes, "When I started investigations on the mechanisms of host-controlled modification, I did not of course imagine that this sidetrack would keep my interest for many years. Otherwise I might not have felt justified to engage in this work because of its lack of direct relevance to radiation research." There's a message somewhere in there for those who fund scientists. Luckily for the rest of us, his curiosity was piqued by this mechanism.

If you're ever looking for inspiration and don't have easy access to the Apollo 10 landing capsule, check out First Man in Space - Skydiving From The Edge Of The World on YouTube. It's a video of Joseph Kittinger skydiving out of a helium balloon from 100,000 feet. As a reference, trans-Atlantic flight paths are around 35,000 feet. Performed in 1960, Kittinger's dive pushed the limits of human experience and of our expectations of what's possible. There aren't many people with enough courage to get into a helium balloon in a space suit and wave goodbye, but I'm certainly glad those people exist. Since there's almost no atmosphere up there, he fell fast enough to approach the speed of sound. Amazing.

Lately I've been lulled into routine and into the mundane. Maybe it's winter. Spring is on its way though, and I want to be amazed. Since I'm too much of a chicken to skydive from 100,000 feet, I'm going to go try to weigh my head.

Wednesday 19 January 2011

Uncertainty is everywhere

It is impossible to determine whether or not my six-month-old is asleep in her cot without altering her state of wakefulness. The Heisenberg uncertainty principle is everywhere.

Tuesday 18 January 2011

The MMR vaccine and the motivational powers of fear

Last week the scientific community once more denounced the work of Andrew Wakefield, the lead author of the now-infamous Lancet paper which falsely linked the measles, mumps and rubella (MMR) vaccine to autism. Previous investigations into his work demonstrated unethical behaviour in his data collection; in the most distasteful example, he passed out £5 notes at a kids' birthday party in exchange for blood samples. There were also substantial and unreported conflicts of interest. While investigating the possible link between the MMR vaccine and autism, he was paid as an expert witness by lawyers preparing a case against the manufacturers of the vaccine itself. If he had found no link, Dr. Wakefield wouldn't have been a particularly useful witness. Moreover, he had filed patents for individual vaccines, which would have been the obvious alternative if the triple vaccine were unsafe.

So we knew that Dr. Wakefield employed questionable practices and was motivated by questionable and undisclosed funding. He behaved unethically. But the more important question is, was he right? Is there a link between the MMR vaccine and autism?

Subsequent work from numerous labs has failed to reproduce his data. It is important to note that he was drawing his conclusions from a patient sample of 12. Statistical anomalies happen, especially with small sample sizes. A 95% confidence level (p < 0.05) is generally acceptable for publishing an association between two medical conditions. This means that if there were no real link, an apparent association this strong would arise by chance only 5% of the time; the converse is that as many as 1 in 20 published associations could be a coincidence. Theoretically, he could have observed the association and reported it in good faith, not knowing that he saw these conditions together merely by chance. Was Dr. Wakefield the unfortunate victim of coincidence? Was his reputation sullied by fate?
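
A quick simulation shows how easily this happens with small samples. Here I run thousands of pretend "studies", each comparing two groups of six drawn from exactly the same population, and count how many cross the p < 0.05 bar by luck alone. The setup is illustrative, not a reconstruction of the Lancet paper's actual analysis.

```python
import numpy as np
from scipy.stats import ttest_ind

# Simulated null studies: both groups are drawn from the same
# distribution, so any "significant" difference is pure chance.
rng = np.random.default_rng(0)
N_STUDIES, GROUP_SIZE = 10_000, 6   # twelve subjects split in two

false_positives = 0
for _ in range(N_STUDIES):
    group_a = rng.normal(size=GROUP_SIZE)   # same distribution for
    group_b = rng.normal(size=GROUP_SIZE)   # both groups: no real effect
    if ttest_ind(group_a, group_b).pvalue < 0.05:
        false_positives += 1

print(f"{false_positives / N_STUDIES:.1%} of null studies hit p < 0.05")
```

As expected, roughly 5% of studies with no real effect look "significant" anyway.
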

Part of me had hoped that to be true. With great power comes great responsibility. Those in positions of authority, from politicians to medical professionals, have a great responsibility to promote the public interest. Dr Wakefield broke that trust. He fabricated data to fit his predictions. He was a liar. Of the twelve cases reported in his original paper, eleven were irreconcilable with the hospital's health records. The Lancet paper describes twelve children who were developmentally normal until they received the MMR vaccination, and then developed autism. According to the hospital records, only one child actually had regressive autism, and five of them were developmentally abnormal before receiving the MMR jab. Dr. Wakefield was not the victim of coincidence; he was a fraud.

Why did his findings have such an enormous impact on public health, and how can the damage be repaired? Insurance companies can tell you the answer. Horrible but unlikely events are such stuff as nightmares are made on. It's terrifying to think that vaccinating your infant could cause him to become autistic, and correspondingly immunization rates in the UK fell below 80% in the early noughties. This has now made another horrible event increasingly likely: a fatal outbreak of measles, mumps or rubella. The fatality rate from measles for otherwise healthy people in developed countries is 3 deaths per thousand cases. In the last two years outbreaks of measles have occurred in Wales, New York, San Diego, France, and Germany. It is only a matter of time before an unvaccinated child dies from measles and parents start rushing back to their GPs to have their children vaccinated. Fear is a powerful motivator.

Tuesday 11 January 2011

If immune systems could talk

Throughout our lives, we are exposed to a variety of pathogens, and these exposures result in immune memory. A one-year-old gets every cold that comes her way; her parents are likely to be immune to many of these viruses and will therefore not get sick every time. Disorders of the immune system, from allergies to multiple sclerosis, occur when the immune system misidentifies something normal as being abnormal and attacks it. Each immune disorder should theoretically have its own set of diagnostic antibodies: antibodies which recognize the thing that they shouldn't.

Many other diseases, such as cancers and neurodegenerative diseases, cause physiological changes that are recognized by the immune system. Alzheimer's disease, while not a disease of the immune system, is associated with the accumulation of antibodies which recognize the brain damage. These diseases should also have a set of diagnostic antibodies. Recently, a group in Florida has described a new way to look for antibodies in patient blood samples. They were able to find antibodies in both human Alzheimer's disease and in a mouse model of multiple sclerosis that were abnormal and therefore potentially diagnostic.

Using antibodies to detect these diseases could be very useful. The detection and diagnosis of neurodegenerative diseases is often difficult, and MRI scans are expensive; taking blood is not. Diagnosis of these diseases through antibody screening of blood samples could provide a cheap and reliable alternative. For diseases such as cancers, early detection is key: if the antibodies can be detected early enough, many cancers could be treated before they spread.

Wisdom comes with experience. Many of our experiences have been witnessed by our wizened immune systems. Perhaps we now have a way to let them talk.

Thursday 6 January 2011

Me in a nutshell

Welcome!

Let me start with a brief introduction. My name is Megan, and I have a problem.

Two years ago, I was enjoying my postdoc in a cancer research institute in London. My days consisted mainly of staring at unconscious flies under a microscope, pipetting dilute solutions of nasty chemicals from one tube to another, and learning French swear words from my benchmate. I had the standard plans to start my own lab and live happily ever after within the Ivory Tower. Then something terrible happened. I came to the realization that I didn't actually want to be a scientist when I grew up. A career in science is kind of like a career in acting. It's great if you're Angelina Jolie, but waiting tables in Hollywood while being recognized as "that girl in the Colgate ad" isn't very satisfying. Unfortunately, I'm no Angelina. And I'm a lousy waiter. So I threw in the proverbial pipetteman and chose a new path.

Do I miss being at the bench doing experiments? No. Not a bit. Okay, sometimes I do. But not too much, and not for too long. I miss the "woo hoo!" moment. Anyone who's had one knows what I'm talking about. It's the bubbling excitement you get when you're first looking at the results of an experiment that really tests your theory, and everything is clean and clear and the answer is staring back at you from the film as you pull it out of the developer. At that moment, there's nothing to say other than "woo hoo!". Unfortunately, in the decade I spent doing research I can count my "woo hoo!"s on one hand. Were those moments worth all the time and effort? Did my work change our fundamental understanding of health and disease, or even our understanding of a single subset of a single disease? If the answers were yes, I'd probably still be slugging away.

Science, however, is a bit addictive. I don't miss pipetting. What I really miss is reading about and discussing new ideas. Here's where the blog comes in. A blog is the perfect way for me to get my fix, without having to devote my entire life to a lab. So come and check out my posts for some ideas and discussions about discoveries, politics, and a few quirks and quarks. Comments are always very welcome.

Enjoy!