Can the cervical cancer vaccine cause cervical cancer instead? The info has been around various Indonesian WhatsApp groups lately, but I don’t know how to fact check.




Given how you framed your question, first a small digression seems necessary. Responding with skepticism to information circulating on local social media shows good judgment IMO. Following up by asking a question on a Q&A forum like Quora is even better.

Social media are excellent tools for personal communication (‘I got home safely’, ‘Looks like my flight is delayed so I’m waiting in front of the boarding gate’) but extremely poor at information sharing. This is because many who use them to share information are ill-informed themselves, poorly equipped to discern rumor from fact or trustworthy from untrustworthy news sources, and all too glib and careless about what they share.

Scientific and medical information from original, scientifically peer-reviewed sources or from a credentialed entity, such as a government agency authorized to provide it, is more reliable.

No, the cervical cancer vaccines can’t cause cervical cancer. Currently (as of Jan 2018),

  • Approved cervical cancer vaccines (HPV vaccines – Wikipedia) are Gardasil – Wikipedia, made by Merck (1), and Cervarix – Wikipedia, made by GlaxoSmithKline (2).
  • Essentially a mixture of proteins, they contain no live organism so they’re non-infectious and thus can’t induce a chronic infection, which research suggests is necessary for cervical cancer.

At present, epidemiology suggests ~90% of cervical cancer, though not all, is caused by HPV (Human papillomavirus infection – Wikipedia). Specifically, by now numerous epidemiological studies the world over have clearly linked chronic infection with specific types of HPV to cervical cancer (3, 4, 5, 6, 7, 8, 9). HPV is a non-enveloped, double-stranded DNA virus. Thus, the currently approved cervical cancer vaccines target HPV.
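For the quantitatively inclined, the ~90% figure above is an attributable fraction, a standard epidemiological quantity. Here is a minimal sketch of one common case-based formula (Miettinen’s); the numbers are made up for illustration, not taken from the cited studies:

```python
def population_attributable_fraction(p_cases_exposed: float,
                                     relative_risk: float) -> float:
    """Miettinen's case-based formula: PAF = p_c * (RR - 1) / RR,
    where p_c is the proportion of cases exposed (here, HPV-positive)
    and RR is the relative risk of disease given exposure."""
    return p_cases_exposed * (relative_risk - 1.0) / relative_risk

# Hypothetical illustration only: if 95% of cervical cancer cases are
# HPV-positive and chronic HPV infection carries a relative risk of ~20,
# roughly 90% of cases are attributable to HPV.
paf = population_attributable_fraction(0.95, 20.0)
print(f"{paf:.0%}")  # → 90%
```

The intuition: the larger the relative risk, the closer the attributable fraction among exposed cases gets to 100%, so with nearly all cases HPV-positive the population figure lands near 90%.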

The US ACIP (Advisory Committee on Immunization Practices – Wikipedia) provides detailed information (background, recommendations, safety, efficacy and adverse effects) on the cervical cancer HPV vaccines (see below from 10). Key detail to note? The vaccines are composed of capsid proteins from different types of HPV, assembled into virus-like particles, not the whole virus itself. The cervical cancer HPV vaccines are thus non-infectious and can’t themselves cause either a chronic infection in the short term or cervical cancer in the long term.

An abundance of data thus far from epidemiological follow-up studies in vaccinated populations also suggests these vaccines are both safe and effective. Since the first HPV vaccine was approved in 2006, a steady stream of scientific studies from various countries has confirmed their safety and efficacy, specifically lower rates of HPV spread in the form of lower rates of genital warts, and lower rates of cervical tissue abnormalities (11, 12, 13, 14, 15, 16, 17).




3. Smith, Jennifer S., et al. “Human papillomavirus type distribution in invasive cervical cancer and high‐grade cervical lesions: a meta‐analysis update.” International journal of cancer 121.3 (2007): 621-632.…

4. Koshiol, Jill, et al. “Persistent human papillomavirus infection and cervical neoplasia: a systematic review and meta-analysis.” American journal of epidemiology 168.2 (2008): 123-137.…

5. Ciapponi, Agustín, et al. “Type-specific HPV prevalence in cervical cancer and high-grade lesions in Latin America and the Caribbean: systematic review and meta-analysis.” PloS one 6.10 (2011): e25493.…

6. Chan, Paul KS, et al. “Meta-analysis on prevalence and attribution of human papillomavirus types 52 and 58 in cervical neoplasia worldwide.” PLoS One 9.9 (2014): e107573.…

7. Guan, Peng, et al. “Human papillomavirus types in 115,789 HPV‐positive women: a meta‐analysis from cervical infection to cancer.” International journal of cancer 131.10 (2012): 2349-2359.…

8. Hammer, Anne, et al. “Age‐specific prevalence of HPV16/18 genotypes in cervical cancer: A systematic review and meta‐analysis.” International journal of cancer 138.12 (2016): 2795-2803.…

9. Clifford, Gary M., Stephen Tully, and Silvia Franceschi. “Carcinogenicity of Human Papillomavirus (HPV) Types in HIV-Positive Women: A Meta-Analysis From HPV Infection to Cervical Cancer.” Clinical Infectious Diseases 64.9 (2017): 1228-1235.…

10. Markowitz, Lauri E., et al. “Human papillomavirus vaccination: recommendations of the Advisory Committee on Immunization Practices (ACIP).” MMWR Recomm Rep 63.RR-05 (2014): 1-30.…

11. Brotherton, Julia ML, et al. “Early effect of the HPV vaccination programme on cervical abnormalities in Victoria, Australia: an ecological study.” The Lancet 377.9783 (2011): 2085-2092.…

12. Howell-Jones, Rebecca, et al. “Declining genital warts in young women in England associated with HPV 16/18 vaccination: an ecological study.” The Journal of infectious diseases 208.9 (2013): 1397-1403.…

13. Mesher, D., et al. “Reduction in HPV 16/18 prevalence in sexually active young women following the introduction of HPV immunisation in England.” Vaccine 32.1 (2013): 26-32.…

14. Drolet, Mélanie, et al. “Population-level impact and herd effects following human papillomavirus vaccination programmes: a systematic review and meta-analysis.” The Lancet infectious diseases 15.5 (2015): 565-580.…

15. Bollerup, Signe, et al. “Significant reduction in the incidence of genital warts in young men 5 years into the Danish human papillomavirus vaccination program for girls and women.” Sexually transmitted diseases 43.4 (2016): 238-242.

16. Ferris, Daron G., et al. “4-Valent Human Papillomavirus (4vHPV) Vaccine in Preadolescents and Adolescents After 10 Years.” Pediatrics (2017): e20163947.

17. Kjaer, Susanne K., et al. “A 12-year follow-up on the long-term effectiveness of the quadrivalent human papillomavirus vaccine in 4 Nordic countries.” Clinical Infectious Diseases (2017).


Does playing in the dirt help kids develop a strong immune system?




Rather than playing in dirt per se, growing up in ‘natural’ environments helps kids develop a balanced immune response, while ‘strength’ is an unhelpful gauge of immune system function. This answer clarifies

  • How exposure to dirt became associated with immune system functioning.
  • Why whether an immune response is orderly and well-controlled, rather than its strength, is the more appropriate gauge for assessing immune response outcome.
  • Why it’s more accurate to consider ‘playing in the dirt’ not literally but rather as a metaphor for more consciously ‘natural’ living.

How did the notion of playing in the dirt get linked to immune system function? Steadily through the 20th century and continuing to date, epidemiologists noticed a sharp uptick in rates of a variety of chronic inflammatory disorders, such as allergies and autoimmunities, in developed countries (Developed country – Wikipedia), meaning throughout Western Europe, North America and Australia.

Such abrupt onset and spread of non-infectious diseases within a mere generation or two made purely genetic causes such as mutations less likely. The timeline instead fingered as-yet unidentified environmental factors. What could they be?

In 1989, the epidemiologist David Strachan observed a tendency for younger siblings within large families to be much less predisposed to allergies such as hay fever and eczema. Specifically, the record suggested protection for younger siblings who developed a series of childhood infections (1). This implied a role for childhood infections in resistance to allergies.

Around the same time, multiple studies started correlating resistance to allergy and autoimmunity with childhood exposure to farms or farm living (2). The link between farm living and protection against chronic inflammatory disorders brought into focus environmental factors associated with farms, obviously dirt and plenty of it as well as poop and plenty of it (see below from 2).

The link with poop coincided with the discovery of the importance of Microbiota – Wikipedia in human physiology, which thus expanded the ambit from infections to microbes in general.

In recent decades such observations thus helped narrow down the environmental factors to microbiota. Specifically, ‘Western’ living typically entails,

  • Few people living on farms.
  • Indoor sanitation and piped, chlorinated drinking water as standard.
  • Widespread and frequent exposure to antibiotics and a wide variety of antiseptic agents (cleaners, wipes, lotions).
  • Hospitalized births as standard, and increasingly those births are C-sections.

Thus, excessive hygiene, especially in childhood, leads to an abrupt and sharp decline in natural exposure to all sorts of microbes. The gist of such observations became codified as the Hygiene hypothesis – Wikipedia: the idea that the ‘Western’ lifestyle increasingly, and automatically, undermines natural exposure to microbes. This in turn fundamentally alters how the immune system gets ‘trained’ during formative years and thus increases the risk for inflammatory disorders. How exactly this happens remains the focus of intense research.

What is a strong immune system? Strength of the immune system is inadequate to the task of properly assessing immune function since someone with autoimmunity has a demonstrably strong immune system. Similarly, someone with allergy makes demonstrably strong immune responses to antigens (Antigen – Wikipedia) that others perceive as innocuous. Rather than strength, disorder or dysregulation is what sets apart immune system function in inflammatory disorders. Thus, it’s more accurate and appropriate to focus on how the immune system functions, orderly or disorderly, rather than on its strength.

Why playing in the dirt is better considered a metaphor for more natural living. Since time immemorial, humans have interacted with the natural world, and that included children playing in the dirt. As outlined above, the Industrial Revolution – Wikipedia created a schism, making possible a way of living drastically different in kind from that past. However, it did so at the expense of the environment, with clean air, water and soil its prominent casualties. Industrialization ended up desecrating and polluting vast tracts of the land all around us, especially urban lands. In many places, the land has become polluted with toxic products of industrial runoff, lead being a case in point. As a 2015 report notes (see below from 3),

‘Urban soil has become severely lead contaminated, especially in inner cities (Filippelli and Laidlaw 2009; Mielke et al. 2013).’

Thus, when it comes to playing in the dirt, discretion becomes the better part of valor with parents and caregivers taking care to ensure the ‘dirt’ children play in is really wholesome dirt and not soil that in the past was subjected to industrial activity. Deliberately choosing several other options and activities would also help ensure children develop the ability to make balanced immune responses.

  • Reducing exposure to unnecessary antibiotics and antiseptic agents.
  • Choosing C-section only when medically necessary, not as an elective.
  • Frequent exposure to farm environments, for example visiting petting zoos and keeping pets at home.
  • Spending more time outdoors in natural environments, the more pristine the better, national parks being a case in point.
  • Feeding a diet largely composed of fruits and vegetables with plenty of natural fiber while avoiding processed foods as much as possible, which would help children develop and sustain the type of diverse microbiota associated with health.


1. Strachan DP (1989) Hay fever, hygiene, and household size. BMJ 299, 1259–1260.…

2. Kabesch, Michael, and Roger P. Lauener. “Why Old McDonald had a farm but no allergies: genes, environments, and the hygiene hypothesis.” Journal of leukocyte biology 75.3 (2004): 383-387.…

3. Mielke, Howard W. “Soils and Health: Closing the Soil Knowledge Gap.” Soil Horizons 56.4 (2015).…

What can we do to make a greater fraction of studies reproducible?



This answer focuses on US biomedical research, though similar forces at play the world over make its observations broadly generalizable.

In the normal thrum of how science operates in a typical lab, re-testing something from a publication is very much the norm though not explicitly done to confirm an entire study. More often, the lab boss will nudge their graduate students, post-docs or technicians to add some new controls to their experiments, one or more of which might be key aspects of a hot new study. Ambitious trainees may even take it upon themselves to do so on their own initiative.

Bits and pieces of an entire study thus get confirmed over time by bits and pieces in many other studies done by various labs and groups in service of their own research priorities and ideas, not as efforts to confirm another’s results. That, in a nutshell, is the strength of scientific methodology, and that it works as it’s supposed to is the very reason issues with the reproducibility of published studies have come to light in recent years.

However, increasing reports of irreproducibility also suggest something’s awry. Akin to blips across a radar screen, now and then articles in leading scientific journals as well as the general news media light up about this issue. In recent years, psychology and biomedical research have figured prominently. A bit of grumbling, teeth gnashing and hand-wringing later, the scientific status quo settles back into place, little changed, meaning we haven’t yet reached a tipping point.

How could it be otherwise when change entails changing so much? Not just how but also why scientists do their work as well as the metrics used to assess the quality of their work, and to reward and promote them. Rather than pretend the problem of study reproducibility is reducible to an easily digestible list of nostrums especially when it encompasses such different areas as basic, translational and clinical research, this answer briefly outlines

  • Key questions for everyone in biomedical research from practitioners and gatekeepers to decision-makers.
  • How the unforeseen consequences of incentives buttress the current system and give short shrift to reproducibility.

Where reproducibility in biomedical research is concerned the key questions remain,

  • Will top-tier scientific journals publish reproducibility studies at a similar clip to those deemed novel? A recent dust-up involving The New England Journal of Medicine – Wikipedia (NEJM) inadvertently revealed hidden disdain for data sharing and independent data analysis. Arguably one of the world’s most prestigious medical journals, an opinion piece in 2016 by the journal’s chief editors about the pros and cons of data sharing referred to the concern (see below from 1)

‘among some front-line researchers that the system will be taken over by what some researchers have characterized as “research parasites.”’

This article was greeted by howls of derision by some, led to the brief trending of the hashtag #IamAResearchParasite, and became beset by accusations of ‘paternalistic arrogance’ (2). A year later, Jeffrey M. Drazen – Wikipedia, the journal’s editor-in-chief, did a 180, now calling for (see below from 3),

‘“culture change” in the scientific community about clinical trials: Instead of solely glorifying researchers who author papers, scientists should also bestow reverence upon those who generate high-quality data sets for others to analyze.’

Whether that will happen is anyone’s guess.

  • Will grant funders and employers reward similarly (promotions, tenures, prestige, etc.) those who share data, or re-analyze, curate or annotate others’ data as they do those who publish original studies?

These are just the obvious changes necessary to enhance reproducibility’s priority and prestige in biomedical research. However, they address what, not why as in why has it even become necessary to ask such questions?

Unforeseen consequences of incentives buttress the current system and give short shrift to reproducibility

Obviously, policies and regulations alone won’t suffice here because this is about changing unspoken norms in scientific culture. In a way, the buzzy reproducibility crisis may be a sign of biomedical research becoming a victim of its own success. Scientific methodology has long proven reliable. No matter the biases that might plague individual scientists, no matter the false starts and dead ends, the idea goes, facts eventually shake out because scientific methodology prevails.

Even Thomas Kuhn – Wikipedia, whose most controversial work, The Structure of Scientific Revolutions – Wikipedia, pre-eminent philosophers of science such as Karl Popper – Wikipedia considered quite the poke in the eye, essentially believed in the primacy of scientific methodology. If it prevails no matter what, reproducibility shouldn’t even be an issue, right? However the reproducibility crisis suggests not, bringing us back to how biomedical research may have become a victim of its own success.

Farnam Street is a blog maintained by Shane Parrish. Its tagline says it all: ‘Mastering the best of what other people have already figured out’. A thought-provoking article in May 2016 about the unintended and often corrosive consequences of incentives (4) does such an excellent job of illustrating the issue with a couple of historical examples that I quote the first one below in its entirety,

‘During British colonial rule of India, the government began to worry about the number of venomous cobras in Delhi, and so instituted a reward for every dead snake brought to officials. In a wonderful demonstration of the importance of second-order thinking, Indian citizens dutifully complied and began breeding venomous snakes to kill and bring to the British. By the time the experiment was over, the snake problem was worse than when it began. The Raj government had gotten exactly what it asked for.’

The second, downright grisly, example concerns King Leopold II of Belgium and how (see below from 4),

‘Looking to bolster an economy of rubber, Leopold II got an economy of severed hands. Like the British Raj, he got exactly what he asked for.’

Similarly, unforeseen consequences of incentives that privilege not process but certain outcomes help explain biomedical research’s reproducibility crisis, though obviously these outcomes aren’t as grisly for humans, especially if we choose to ignore outcomes for mice and other experimental animal models.

How to assess scientific excellence? The process currently in place is largely the outcome of empiricism initiated in the early years of the 20th century by researchers such as James McKeen Cattell – Wikipedia, who, coining the phrase Homo Scientificus Americanus (5), attempted to measure the scientific ‘productivity’ of ‘men of science’. Eventually this notion of productivity coalesced around output, i.e., the number of scientific papers (6, 7).

But how to deem a submission worthy of publication? A predictable response was to ask whether the work was novel. As Scientometrics – Wikipedia got codified, eventually every decision maker in the US biomedical enterprise began to prioritize and reward novelty above all. Scientific publications, promotions, grants, tenure, each of the badges necessary for a successful scientific career consider novelty of the work a necessary criterion.

What began as a surrogate to suss out excellence has, over time, through a combination of expediency and complacency, become enshrined as the centerpiece that consumes all the proverbial oxygen in the decision-making process used to identify scientific quality, even at the expense of other essential attributes such as reproducibility. Meanwhile, assessment became synonymous with measurement; specifically, quantity became a surrogate for quality.

As such tendencies spread the world over, additional metrics were contrived to measure quantity, *cough* quality, the number of times a given paper is cited, i.e., the Citation index – Wikipedia, being a case in point. Meantime, academic journals jumped to differentiate themselves from the pack using metrics such as the Impact factor – Wikipedia. Academic journals mushroomed in the internet era, a clear sign that the American approach to assessing scientific quality had globalized. However, that publication in top journals doesn’t guarantee study reliability only underscores the low priority accorded reproducibility (8).
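For concreteness, the two-year impact factor mentioned above is just a ratio. A minimal sketch, with hypothetical numbers (not any real journal’s):

```python
def two_year_impact_factor(citations: int, citable_items: int) -> float:
    """JCR-style impact factor for year Y: citations received in year Y
    to items published in years Y-1 and Y-2, divided by the number of
    citable items published in those same two years."""
    return citations / citable_items

# Hypothetical journal: 1200 citations in 2017 to its 2015-2016 papers,
# of which there were 400 citable items.
print(two_year_impact_factor(1200, 400))  # → 3.0
```

The point of the sketch is how crude the measure is: it counts citations, not whether any of the cited findings held up on replication.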

Meantime, other decades-old sociological trends in biomedical research further entrenched novelty, with recent generations of biomedical scientists getting trained to value and single-mindedly focus on it even as the larger social forces shaping scientific enterprise have contrived to minimize the value of reproducibility.

  • One is the intensified competition for jobs and research funds. Competition influences the practice of biomedical research in many consequential ways.
    • It places greater reliance on conveniently quantifiable metrics to assess scientific excellence and, over the course of the 20th century, scientific publications increasingly became the expedient peg to hang that hat on.
    • It amplifies Publish or perish – Wikipedia even as it truncates project timelines, with the Great Recession bringing such existing constraints to a boil by sharply curtailing research funding.
  • Tenured staff stay longer at their jobs even as US universities churn out ever increasing numbers of PhDs without expanding tenured positions enough to absorb them, the ensuing glut intensifying competition as well as conveniently feeding publish-or-perish by providing a ready-made army of well-trained, relatively cheap labor to do the work necessary to keep biomedical research humming along.
  • Disproportionate focus on publication worthiness reinforces the reproducibility problem, with negative data tending not to see the light of day. Robert Rosenthal (psychologist) – Wikipedia first alluded to this phenomenon, now called Publication bias – Wikipedia, as the file-drawer problem all the way back in 1979. Data not supporting previously published work don’t get submitted for publication at all, an expression of self-censorship.
  • The much shorter lifespan of technical methods feeds the tendency to minimize reproducibility. So rapid is the current rate of change of many lab techniques that some even turn over with the turnover of lab staff such as graduate students and post-docs. Such newer trends put reproducibility as a priority even more out of reach.
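Rosenthal’s 1979 file-drawer paper mentioned in the list above also proposed a way to quantify the problem: the ‘fail-safe N’, the number of unpublished null-result studies that would have to sit in file drawers to wash out a set of published significant results. A minimal sketch of the standard Stouffer-combination form; the example numbers are hypothetical:

```python
def fail_safe_n(z_scores, z_crit: float = 1.645) -> float:
    """Rosenthal's fail-safe N: the number X of unpublished studies
    averaging null results (Z = 0) needed so that the Stouffer-combined
    Z of k published studies, sum(Z) / sqrt(k + X), drops to z_crit
    (one-tailed p = .05). Solving gives X = (sum Z)^2 / z_crit^2 - k."""
    k = len(z_scores)
    return (sum(z_scores) ** 2) / (z_crit ** 2) - k

# Five published studies, each with Z = 2.0: it would take roughly 32
# null results hidden in file drawers to erase their combined significance.
print(round(fail_safe_n([2.0] * 5)))  # → 32
```

Small literatures thus need only a handful of file-drawered studies to be misleading, which is why the bias matters most for fields built on few, small studies.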


1. Longo, Dan L., and Jeffrey M. Drazen. “Data sharing.” (2016): 276-277.…

2. ProPublica, Charles Ornstein, April 5, 2016. Amid Public Feuds, A Venerated Medical Journal Finds Itself Under Attack — ProPublica

3. STAT news, Ike Swetlitz, April 4, 2017. Journal editor calls for ‘culture change’ around clinical trial data

4. Incentives gone wrong: Cobras, severed hands, and shea butter. Farnam Street blog, May, 2016. Incentives Gone Wrong: Cobras, Severed Hands, and Shea Butter

5. Cattell, J. McKeen. “Homo scientificus americanus.” Science 17.432 (1903): 561-570.

6. Godin, Benoît. “From eugenics to scientometrics: Galton, Cattell, and men of science.” Social studies of science 37.5 (2007): 691-728.…

7. Godin, Benoît. “The value of science: changing conceptions of scientific productivity, 1869 to circa 1970.” Social Science Information 48.4 (2009): 547-586.…

8. Tirumalai Kamala’s answer to What’s great and terrible about this draft paper on top journals publishing the least reliable science?

What have we learned about T cell biology from Jurkat cells?


A more accurate reformulation would be ‘What have we learned from Jurkat cells that’s applicable to T cell biology, and what’s not?’. Unfortunately, the answer isn’t that easy to untangle from archival data.

Much like the tumor cell line HeLa – Wikipedia before it, from the 1980s through the 1990s, Jurkat cells – Wikipedia became a heavyweight tool to understand human T-cell receptor – Wikipedia (TCR) biochemistry (1). Later, Jurkat also became a popular tool to model human T cell infection by HIV. However, very much a Curate’s egg – Wikipedia, Jurkat’s perceived benefits remained disproportionately the focus during this time while its harms have been considered only much more recently. Meantime, 21st century technological advances have pretty much eliminated reliance on such inherently problematic transformed (tumor) cell lines.

This answer outlines

  • A brief history of the Jurkat T cell line, highlighting how the imperatives of prevailing technical limitations drove its past popularity.
  • How Jurkat’s inherent limitations in decoding TCR signaling are predicated on the fact that it is a tumor cell, has mutations and is potentially contaminated, features that cast doubt on the validity of some historical results using it.

Brief History of the Jurkat T cell line

In 1977, a cell line was derived from T cells isolated from a 14-year-old boy with Acute lymphoblastic leukemia – Wikipedia (ALL) (2). This cell line was eventually called Jurkat.

Back then, T cells remained very much a mystery; there was little or no consensus on how to maintain them in culture for long periods of time, a basic requirement for dissecting their essential properties, especially their biochemistry.

The experimental mouse model, today the backbone of immunology research, was still in its infancy as was molecular biology. Today’s technological mainstays such as gene targeting to create T cell transgenic mice and T cell transgenics with attached reporter genes to facilitate their monitoring were advances decades in the future.

Thus in this vacuum, an immortalized cell line such as Jurkat capable of being maintained in culture in perpetuity became an extremely valuable tool that in hindsight arose at the moment when most needed.

In its early years of use, Jurkat was thus used to delineate a great deal of the signal transduction pathway and molecules triggered by T-cell receptor – Wikipedia (TCR) signaling (1).

Hindsight also suggests a readily available human T cell line made such research far simpler and much cheaper. No need to draft complicated study protocols, get them reviewed and approved by Institutional review board – Wikipedia (IRB) in order to gain permission to bleed people to isolate their T cells in order to study them. Jurkat was thus a convenient tool to study aspects of human TCR biochemistry.

Limitations of Jurkat T cell line

Immortalized cell line versus primary, normal cell. During its early years of use, methods to culture human primary T cells didn’t exist. Even today, primary T cells simply can’t be maintained indefinitely in culture. Their long-term study requires stimulation, expansion, cloning and then immortalization (hybridization [mechanical fusing] with a partner tumor cell).

Yet, Jurkat being a tumor cell, do results from it apply to normal human T cells in general? That’s simply unknowable since it’s impossible to know exactly what T cell stage Jurkat represents, considering it got established and began to be used back when little was known about T cell development, activation, effector differentiation and memory formation. While papers routinely refer to Jurkat as a Lymphoblast – Wikipedia, at best that’s just a tenuous guess.

Mutations in many key molecules involved in the TCR signaling pathway. Tumor cells replicate uncontrollably, having broken free of biological control. Mutations in cell cycle checkpoints make such liberation possible. During its first two decades of use, how Jurkat’s mutations might affect its TCR functioning wasn’t a focus. After all, there is an inherent tautology to unraveling signaling defects in cell lines being used to identify signaling pathways in the first place. Signaling pathways need to be comprehensively deciphered first to determine if a particular cell line has a signaling defect or two or however many the case may be.

As T cell biochemistry advanced apace by the late 1990s, some TCR signaling pathways identified using mouse T cells, T cell lines or other human T cell lines didn’t accord with those found in Jurkat. It turns out mutations in Jurkat accounted for such discrepancies.

  • Today, signaling of the Phosphoinositide 3-kinase – Wikipedia (PI3K) family is known to be a central feature downstream of TCR signaling. Yet, in the late 1990s-early 2000s, two key molecules that mediate PI3K signaling were found to be missing in Jurkat cells (1). Such a fundamental signaling defect raised questions about the validity of using Jurkat as a tool to understand (human) TCR signaling (1, 3).
  • A 2017 analysis uploaded to the preprint server, bioRxiv, comprehensively collates the various defective pathways and key signaling molecules missing in Jurkat (3).
    • In addition to PI3K, it points out Jurkat doesn’t express other molecules such as SHIP1 (INPP5D – Wikipedia), CTLA-4 – Wikipedia and Syk – Wikipedia, all now known to be important components of the TCR signaling pathway.
    • The authors also suggest a potentially ingenious use of such Jurkat defects, namely, to use them in reconstitution experiments to validate functionality of a given molecule in a particular pathway.
  • Notwithstanding such grave TCR signaling defects, even today hundreds of papers using Jurkat continue to be published annually.
  • Use of Jurkat as a model system to study HIV infection in human T cells yields a similarly confusing story, with many discordances in observations between Jurkat and primary CD4 T cells (4).

Microbial Contamination. A 2008 study reported a batch of Jurkat cells to be contaminated with a retrovirus belonging to the family of Gammaretrovirus – Wikipedia (5). Today this virus is designated as Xenotropic murine leukemia virus-related virus – Wikipedia or XMLV. Note that this study sourced its Jurkat from ATCC (company) – Wikipedia (ATCC), a major global supplier of cell lines. Open questions remain,

  • Given ATCC’s Jurkat was found to be contaminated with XMLV in 2008, how many previous published studies on Jurkat used such contaminated cells?
  • Are Jurkats stored in other cell bank repositories and maintained by labs around the world similarly infected?
  • When did Jurkat become infected? In the 1990s, when XMLV is suspected to have arisen during xenograft studies, or later?
  • How does this infection influence historical results from Jurkat? Clearly, comparisons of ‘clean’ and ‘contaminated’ Jurkats are needed to figure out if and what effect this has on their TCR signaling.


1. Abraham, Robert T., and Arthur Weiss. “Jurkat T cells and development of the T-cell receptor signalling paradigm.” Nature Reviews Immunology 4.4 (2004): 301-308.…

2. Schneider, Ulrich, Hans‐Ulrich Schwenk, and Georg Bornkamm. “Characterization of EBV‐genome negative “null” and “T” cell lines derived from children with acute lymphoblastic leukemia and leukemic transformed non‐Hodgkin lymphoma.” International journal of cancer 19.5 (1977): 621-626.

3. Gioia, Louis, et al. “A Genome-wide Survey of Mutations in the Jurkat Cell Line.” bioRxiv (2017): 118117.…

4. Markle, Tristan J., Philip Mwimanzi, and Mark A. Brockman. “HIV-1 Nef and T-cell activation: a history of contradictions.” Future virology 8.4 (2013): 391-404.…

5. Takeuchi, Yasuhiro, Myra O. McClure, and Massimo Pizzato. “Identification of gammaretroviruses constitutively released from cell lines used for human immunodeficiency virus research.” Journal of virology 82.24 (2008): 12585-12588. Identification of Gammaretroviruses Constitutively Released from Cell Lines Used for Human Immunodeficiency Virus Research

Why are flu and flu shots such a big deal in the US? Is the human body not capable of dealing with the flu without any preventive medication?


‘Is the human body not capable of dealing with the flu without any preventive medication?’

Flu (influenza) is a seasonal disease, typically prevalent in winter in the US. Many among the unvaccinated contract and survive the flu each year, suggesting many humans are capable of dealing with it without preventive medicine. However, flu strains tend to differ from year to year, and strains circulating one year can be more deadly than those in other years.

  • The 1918 flu pandemic – Wikipedia is estimated to have killed at least 50 million.
  • While not as deadly, subsequent flu pandemics (Influenza pandemic – Wikipedia), such as those in 1957, 1968 and 2009, also killed many.
  • Even today, according to the WHO, seasonal flu leads to an estimated 3 to 5 million global cases of severe illness with ~250,000 to 500,000 deaths each year (1).
  • Typically, flu lethality disproportionately affects the very young, the very old and the already ill, the 1918 and 2009 pandemics being exceptions in disproportionately felling those between 20 and 40 years of age.
  • Already, the major flu strain circulating in 2017, the influenza A strain H3N2, has apparently led to the headline-grabbing death of an unvaccinated 20-year-old mother of two in Arizona (2).
  • Further, seasonal flu is consistently a bigger problem for older people in the US, emerging as the major cause of death among those aged 65 or older, often not directly but as a result of pneumonia from secondary bacterial infections, speculatively an outcome of a weakened immune system (3, 4).

‘Why are flu and flu shots such a big deal in the US?’

Different countries recommend vaccines for different diseases based on their region-specific disease profiles and economic capability. In the US, vaccine recommendations are made by the Advisory Committee on Immunization Practices – Wikipedia (ACIP) which publishes annual flu vaccine recommendations.

Flu vaccines were licensed in the US in 1968 and only began to be included in the pediatric schedule (specifically for those aged 6 to 24 months) in 2004 (5). Starting in 2000, ACIP began incrementally expanding its annual vaccination recommendations to include ~84% of the US population by 2009. In 2010, the ACIP expanded its influenza vaccine recommendation further to all US residents >6 months of age (6), the rationale being the 2009 pandemic H1N1 flu outbreak, where those with greater risk for complications or more severe infections were found to include

  • Adults <50 years of age (7).
  • Those with obesity (8, 9).
  • Specific ethnicities (10, 11).
  • Postpartum women (12, 13, 14, 15).

A Bigger-Picture Look at Current Flu Shots: A Sub-optimal Solution to a Real Problem

The push for flu vaccines is predicated on two notions: that they

  • Engender milder symptoms compared to those in the unvaccinated.
  • Reduce risk of spread to vulnerable groups (the very young, the elderly or already ill), a consequence of herd immunity.

The problem with current flu vaccines is that they are hit-or-miss: their efficacy varies greatly from year to year, depending on how well the strains used in the vaccine match those dominating circulation in a given year (see below from 16, emphasis mine).

‘The cornerstone of influenza prevention and epidemic control is strain-specific vaccination. Since influenza viruses are subject to continual antigenic changes (“antigenic drift”), vaccine updates are recommended by the WHO each February for the Northern Hemisphere and each September for the Southern Hemisphere. This guidance relies on global viral surveillance data from the previous 5 to 8 months and occurs 6 to 9 months before vaccine deployment. In addition, there are always several closely related strains circulating; therefore, experts must combine antigenic and genetic characterization and modeling to predict which strains are likely to predominate in the coming season.’

See below from 17, emphasis mine.

‘Seasonal influenza outbreaks predictably occur each year and cause an estimated 250,000 to 500,000 annual deaths worldwide (WHO, 2008). Pandemics are highly unpredictable, but pose an even greater threat when they occur. There have been 4 distinct pandemics in the 20th and into the 21st century: 1918, 1957, 1968, and 2009. The worst of these, the 1918 H1N1 influenza pandemic, resulted in 50–100 million deaths globally (WHO, 2014). Despite this substantial disease burden, licensed vaccines provide suboptimal protection against seasonal influenza (typically ranging from 10% to 60%), need to be updated each year, and provide little or no protection against new pandemic influenza strains (CDC, 2017).

A universal flu vaccine that could protect against most seasonal flu strains would be a far better option. However, substantial hurdles range from vaccine design to what represents protective immunity and how to assess it to how to produce such a vaccine.’

Obviously, a universal flu vaccine would be a better solution. Hurdles in the way include figuring out optimal vaccine design, specifically which antigens to include; research on, and agreement about, the types of immune response that best reflect protection, i.e., correlates of protection; and production methods that preserve the vaccine’s capacity to drive robust, long-lasting, infection-like immunity while remaining safe. Greater public support, more funding for flu research and development, better ideas and more creativity are all needed to improve this sub-optimal status quo.


1. World Health Organization. “Barriers of influenza vaccination intention and behavior: a systematic review of influenza vaccine hesitancy 2005–2016.” (2016).…

2. A mother got the flu from her children — and was dead two days later

3. Thompson, William W., et al. “Mortality associated with influenza and respiratory syncytial virus in the United States.” Jama 289.2 (2003): 179-186.…

4. Matias, Gonçalo, et al. “Estimates of hospitalization attributable to influenza and RSV in the US during 1997–2009, by age and risk status.” BMC public health 17.1 (2017): 271.…

5. Harper, Scott A., et al. “Prevention and control of influenza: recommendations of the Advisory Committee on Immunization Practices (ACIP).” Morbidity and Mortality Weekly Report: Recommendations and Reports 54.8 (2005): 1-41.…

6. Grohskopf, Lisa A., et al. “Prevention and control of seasonal influenza with vaccines: recommendations of the Advisory Committee on Immunization Practices—United States, 2017–18 influenza season.” American Journal of Transplantation 17.11 (2017): 2970-2982.…

7. Fiore, Anthony E., et al. Prevention and control of influenza with vaccines: recommendations of the Advisory Committee on Immunization Practices (ACIP), 2010. Department of Health and Human Services, Centers for Disease Control and Prevention, 2010.…

8. Louie, Janice K., et al. “A novel risk factor for a novel virus: obesity and 2009 pandemic influenza A (H1N1).” Clinical Infectious Diseases 52.3 (2011): 301-312.…

9. Morgan, Oliver W., et al. “Morbid obesity as a risk factor for hospitalization and death due to 2009 pandemic influenza A (H1N1) disease.” PloS one 5.3 (2010): e9694.…

10. Castrodale, L., et al. “Deaths related to 2009 pandemic influenza A (H1N1) among American Indian/Alaska Natives-12 states, 2009.” Morbidity and Mortality Weekly Report 58.48 (2009): 1341-1344.

11. Wenger, Jay D., et al. “2009 Pandemic influenza A H1N1 in Alaska: temporal and geographic characteristics of spread and increased risk of hospitalization among Alaska Native and Asian/Pacific Islander people.” Clinical Infectious Diseases 52.suppl_1 (2011): S189-S197.…

12. Siston, Alicia M., et al. “Pandemic 2009 influenza A (H1N1) virus illness among pregnant women in the United States.” Jama 303.15 (2010): 1517-1525.…

13. Creanga, Andreea A., et al. “Severity of 2009 pandemic influenza A (H1N1) virus infection in pregnant women.” Obstetrics & Gynecology 115.4 (2010): 717-726.

14. Jamieson, Denise J., et al. “H1N1 2009 influenza virus infection during pregnancy in the USA.” The Lancet 374.9688 (2009): 451-458. http://med-fom-apt.sites.olt.ubc…

15. Louie, Janice K., et al. “Severe 2009 H1N1 influenza in pregnant and postpartum women in California.” New England Journal of Medicine 362.1 (2010): 27-35.…

16. Paules, Catharine I., et al. “Chasing Seasonal Influenza—The Need for a Universal Influenza Vaccine.” New England Journal of Medicine (2017).…

17. Paules, Catharine I., et al. “The Pathway to a Universal Influenza Vaccine.” Immunity 47.4 (2017): 599-603.

Is there a database that contains the frequency of different anti-HLA antibodies among the US population?



Having antibodies against Human leukocyte antigen – Wikipedia (HLA) proteins, i.e., sensitization, is a sign of anti-HLA immune response and remains a major hurdle in the transplant setting.

This answer explains why a national database on the frequency of different anti-HLA antibodies among the US population is unlikely,

  • HLA genes are the most polymorphic (diverse) in the human genome. Since HLA typing of each and every individual isn’t a routine procedure to start with, the even more technically complicated anti-HLA antibody typing is all the more unlikely to be performed population-wide.
  • Anti-HLA antibody typing has been constantly tweaked since its first report in 1969 and never more so than in the 21st century, with the adoption of a new standard called Calculated Panel-Reactive Antibody (CPRA), which supplants the previous standard, PRA (1). Given that newer tests are far more sensitive and accurate, how to reconcile results from older tests? Clearly, such a database would have to include only results that used the newer standard based on newer tests, a standard which only came into place in 2010.
  • Rather than being a population-wide issue, anti-HLA antibodies seem more a result of prior transplants. One study of all kidney transplants between 1997 and 2014 in the United Network for Organ Sharing – Wikipedia (UNOS) database (Matching organs. Saving lives.) explored blood transfusion, pregnancy and prior transplant as possible triggers for immune responses against HLA and found evidence for all three, with strongest support for prior transplant (2).
  • Since an antibody test is akin to a snapshot, even if an anti-HLA antibody database existed, a past negative test would have little value since the individual concerned could have developed anti-HLA antibodies at any point thereafter. Because an individual with anti-HLA antibodies has a much higher risk of rejecting organs and tissues of the HLA type they respond to, fresh tests are necessary prior to transplant, a requirement that negates the long-term value of any such database.

Thus, a population-wide anti-HLA antibody database has value only if updated by periodically re-testing each and every individual on it, a prohibitively expensive proposition even for a wealthy country such as the US.
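As an aside for the quantitatively minded, the CPRA standard mentioned above boils down, conceptually, to a single calculation: the percentage of a reference donor pool carrying at least one HLA antigen against which the candidate has antibodies. A minimal sketch in Python; the four-donor pool and antigen names here are hypothetical toy examples (the real CPRA calculator draws on national HLA frequency data):

```python
from typing import List, Set

def cpra(unacceptable_antigens: Set[str], donor_typings: List[Set[str]]) -> float:
    """Illustrative CPRA: percentage of a reference donor pool carrying
    at least one antigen the candidate has antibodies against."""
    if not donor_typings:
        return 0.0
    incompatible = sum(1 for donor in donor_typings
                       if donor & unacceptable_antigens)
    return 100.0 * incompatible / len(donor_typings)

# Hypothetical toy donor pool; real calculators use population frequency tables.
donors = [
    {"A1", "A2", "B8", "B44"},
    {"A2", "A3", "B7", "B35"},
    {"A11", "A24", "B27", "B51"},
    {"A1", "A3", "B8", "B7"},
]
print(cpra({"B8"}, donors))  # 50.0: two of the four donors carry B8
```

A candidate with a CPRA of 50 would thus be expected to be incompatible with roughly half of potential donors, which is why higher sensitization so drastically shrinks the effective donor pool.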

HLA: Most Polymorphic Genes in the Human Genome

Seeking a database that lists frequency of antibodies against specific HLA proteins is far from a trivial matter. HLA, cell surface proteins that present peptides to T cells, are the most polymorphic (diverse, in layperson terms) in the entire human genome (3). An indication of such extreme variability of HLA genes is the fact that ~5000 alleles have been described so far for HLA-B alone (see the table of HLA polymorphism in 4).


Anti-HLA Antibody Typing: How To Leverage To Streamline Organ Matching

Unlike blood typing, HLA typing of each and every person isn’t a routine procedure; otherwise there would be no need for organizations such as UNOS to work so strenuously to match donors and recipients. Since HLA typing itself is far from routine, anti-HLA antibody typing is also far from routine; both are a priority largely for those needing transplants and for donors volunteering to provide tissues and organs, not for rank-and-file individuals within a population.

Instead, sporadic reports estimate the presence of anti-HLA antibodies among individuals on transplant waiting lists. Two such studies in the US population, one on wait-listed adult kidney transplant candidates (n=161308) (5) and one on heart transplant patients (n=12858) (6), reported finding anti-HLA antibodies in ~25%, quite a high percentage that may simply reflect that those on transplant waiting lists are more likely than the general population to have previously received transplant(s).

Rather than a database, some researchers developed an algorithm called the Virtual Crossmatch (vXM) using anti-HLA antibody data and the newer CPRA standard, which they claim streamlines organ matching by triaging donors for wait-listed patients (see below from 7).
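Conceptually, a virtual crossmatch is a set intersection: if the donor’s HLA typing shares any antigen with the recipient’s list of antibody specificities, the crossmatch is predicted positive (incompatible) from typing data alone, without a physical cell-based test. A minimal illustrative sketch; the antigen names are hypothetical examples, not from the cited study:

```python
from typing import List, Set, Tuple

def virtual_crossmatch(donor_antigens: Set[str],
                       recipient_antibodies: Set[str]) -> Tuple[str, List[str]]:
    """Predict crossmatch outcome from typing data alone: positive
    (incompatible) if the donor carries any HLA antigen against which
    the recipient has a listed antibody."""
    hits = donor_antigens & recipient_antibodies
    return ("positive", sorted(hits)) if hits else ("negative", [])

print(virtual_crossmatch({"A2", "B7", "DR15"}, {"B7", "DR4"}))
# ('positive', ['B7'])
```

The triage value is that predicted-positive pairs can be skipped up front, reserving physical crossmatch testing for donors predicted compatible.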


1. Cecka, J. M. “Calculated PRA (CPRA): the new measure of sensitization for transplant candidates.” American journal of transplantation 10.1 (2010): 26-29.…

2. Redfield, Robert R., et al. “The mode of sensitization and its influence on allograft outcomes in highly sensitized kidney transplant recipients.” Nephrology Dialysis Transplantation 31.10 (2016): 1746-1753.…

3. 1000 Genomes Project Consortium. “A map of human genome variation from population-scale sequencing.” Nature 467.7319 (2010): 1061-1073.…

4. Kransdorf, Evan P., et al. “HLA Population Genetics in Solid Organ Transplantation.” Transplantation 101.9 (2017): 1971-1976.

5. Sapir-Pichhadze, Ruth, et al. “Immune sensitization and mortality in wait-listed kidney transplant candidates.” Journal of the American Society of Nephrology (2015): ASN-2014090894.

6. O’connor, Matthew J., et al. “Changes in the methodology of pre‐heart transplant human leukocyte antibody assessment: an analysis of the United Network for Organ Sharing database.” Clinical transplantation 29.9 (2015): 842-850.

7. Gebel, Howard M., and Robert A. Bray. “Alloantibodies, Sensitization and Virtual Crossmatching.” Textbook of Organ Transplantation (2014): 443-451.


Should doctors prescribe placebos?



Should doctors prescribe placebos?

Recent research suggests symptom benefit in certain conditions/diseases from open-label placebos, i.e., when patients know they are getting placebos. This is a sea change from traditional understanding of placebos as ‘inert’ or ‘ineffective’ ‘sugar’ pills given by doctors to mollify demanding patients, a change whose long-term implications for the practice of medicine are profound, though still unclear.

Placebos: From Wilful Deception (Medieval Times) to Potentially Helpful Adjuncts (Current Era Placebo Research)

Reviews state that medical dictionaries began including the word Placebo in the late 18th and early 19th centuries, defining it as (see below from 1),

an epithet given to any medicine adapted more to please than to benefit the patient.

However, in recent years, research exploring how placebos can help alleviate certain symptoms of certain diseases is changing the age-old understanding of this medical mainstay. Such research reveals the extent to which complex mind-body interactions strongly influence physiology, so much so that the noted placebo researcher Fabrizio Benedetti suggests (2):

words and drugs may use the very same mechanisms and the very same biochemical pathways.

Clearly, placebos tap into cognitive, even emotional and sensory, pathways whose power science-based medicine has hitherto tended to under-appreciate. Benedetti showed this in rather dramatic fashion by giving opioid pain medication to patients either openly or covertly, with clear evidence of increased benefit among those who knew they got the pain medication compared to those who didn’t know (3). In the years since, Benedetti has shown similar therapeutic benefits in other conditions such as anxiety and Parkinson’s.

Note that in such open-hidden studies as Benedetti pioneered, all the patients got the same medication, even the same dose. The ‘placebo’ in these studies was merely the act of revealing to the patient that they were getting a medication, i.e., engaging their expectations. Such studies uncovered how the perception of care itself carries a therapeutic element.

Parsed in this manner, we could thus split traditional placebos into two groups, placebos that entail deceit versus those that don’t, the so-called open-label placebos.

While the effect of giving placebos openly was studied as far back as 1963 (4), systematic efforts to study how placebos work their effects are of much more recent vintage. Pioneered by prominent placebo researcher Ted Kaptchuk, a handful of studies on open-label placebos reveal a symptom benefit in conditions with a substantial psychosomatic component, such as pain (5), itch (6), asthma symptoms (7), and pain experienced in GI tract-related problems such as Irritable bowel syndrome (IBS) (8).

One of the biggest caveats of open-label placebo studies, however, is one least discussed, namely, the extent to which a Clever Hans effect is in play. Specifically, how to ensure that placebo-taking participants aren’t fudging their symptom improvement in order to appease the caregivers conducting these trials? Such a consideration is all the more relevant given that in the asthma study, participants knowingly taking a placebo inhaler reported similar symptom improvement, yet objectively assessed lung function improved only in those who got albuterol (7). Did the asthma patients who knowingly got the placebo inhaler really feel better even though they still couldn’t breathe better? Clearly, some placebo effects would be on stronger ground if they withstood scrutiny by objective tests.

Nevertheless, research into placebos expands the definition of placebos from mere things, inert or ineffective pills, to processes such as the treatment ritual, the patient’s expectations and the quality of the doctor-patient communication, all part of behavioral conditioning (see below from 9).

Do placebos have the potential to revolutionize medicine by reliably delivering tangible benefits to patients by capitalizing on such intangibles? That’s the proverbial x$ question. These are still early days in placebo research. We are far from understanding the molecular mechanisms at play. Indeed, Benedetti believes different placebos use different mechanisms (10).

However, if such effects could be harnessed reproducibly, it could transform the very practice of medicine. For instance, we could allay drug toxicity by reducing dose or frequency and yet gain similar benefit. Not only would this benefit patient health, it would also reduce economic cost. Sham surgeries with their inherently lower collateral biological costs could accelerate healing, not to mention, again, reduce economic costs. Whether such effects are capable of being exploited in a predictable manner remains to be seen.

But is it ethical for doctors to prescribe them [placebos]?

Prescribing placebos is ethically inherently problematic since it involves deceiving patients. We are reminded of this nowhere more than in the very meaning of the word Placebo, Latin for ‘I will please’, which in Medieval English appears to have been used as a synonym for sycophant. Consider the sycophantic nature of the character Placebo in The Merchant’s Tale by Geoffrey Chaucer (11).

In the US, the American Medical Association has published unambiguous guidelines that clearly indicate when and how physicians can and should prescribe placebos (see below from 12, emphasis mine).

Physicians may use placebos for diagnosis or treatment only if the patient is informed of and agrees to its use. A placebo may still be effective if the patient knows it will be used but cannot identify it and does not know the precise timing of its use. A physician should enlist the patient’s cooperation by explaining that a better understanding of the medical condition could be achieved by evaluating the effects of different medications, including the placebo. The physician need neither identify the placebo nor seek specific consent before its administration. In this way, the physician respects the patient’s autonomy and fosters a trusting relationship, while the patient still may benefit from the placebo effect.

A placebo must not be given merely to mollify a difficult patient, because doing so serves the convenience of the physician more than it promotes the patient’s welfare. Physicians can avoid using a placebo, yet produce a placebo-like effect through the skillful use of reassurance and encouragement. In this way, the physician builds respect and trust, promotes the patient physician relationship, and improves health outcomes.

Based on these guidelines, Charlotte Blease, Luana Colloca and Ted Kaptchuk argue (see below from 13),

open placebos fulfil current American Medical Association guidelines for placebo use, and propose future research directions for harnessing the placebo effect ethically.


1. De Craen, Anton JM, et al. “Placebos and placebo effects in medicine: historical overview.” Journal of the Royal Society of Medicine 92.10 (1999): 511-515.

2. Benedetti, Fabrizio. “The placebo response: science versus ethics and the vulnerability of the patient.” World Psychiatry 11.2 (2012): 70-72.…

3. Colloca, Luana, et al. “Overt versus covert treatment for pain, anxiety, and Parkinson’s disease.” The Lancet Neurology 3.11 (2004): 679-684.…

4. Park, Lee C., and Lino Covi. “Nonblind placebo trial: an exploration of neurotic patients’ responses to placebo when its inert content is disclosed.” Archives of General Psychiatry 12.4 (1965): 336-345.

5. Carvalho, Cláudia, et al. “Open-label placebo treatment in chronic low back pain: a randomized controlled trial.” Pain 157.12 (2016): 2766.…

6. Meeuwis, Stefanie H., et al. “Placebo Effects of Open-label Verbal Suggestions on Itch.”…

7. Wechsler, Michael E., et al. “Active albuterol or placebo, sham acupuncture, or no intervention in asthma.” New England Journal of Medicine 365.2 (2011): 119-126.…

8. Kaptchuk, Ted J., et al. “Placebos without deception: a randomized controlled trial in irritable bowel syndrome.” PloS one 5.12 (2010): e15591.…

9. Schedlowski, Manfred, et al. “Neuro-bio-behavioral mechanisms of placebo and nocebo responses: implications for clinical trials and clinical practice.” Pharmacological reviews 67.3 (2015): 697-730.…

10. Benedetti, Fabrizio. “Placebo effects: from the neurobiological paradigm to translational implications.” Neuron 84.3 (2014): 623-637.…

11. Elliott, David B. “The placebo effect: is it unethical to use it or unethical not to?.” Ophthalmic and Physiological Optics 36.5 (2016): 513-518.…

12. Bostick, Nathan A., et al. “Placebo use in clinical practice: report of the American Medical Association Council on Ethical and Judicial Affairs.” Journal of Clinical Ethics 19.1 (2008): 58. http://academicdepartments.musc….

13. Blease, Charlotte, Luana Colloca, and Ted J. Kaptchuk. “Are open‐Label Placebos Ethical? Informed Consent and Ethical Equivocations.” Bioethics 30.6 (2016): 407-414.…

How does breast milk affect the infant gut microbiome?



‘How does breast milk affect the infant gut microbiome?’ Historically, breast milk has been assumed to perform two critical functions for newborns and infants: a source of nutrition as well as of protection, the latter in the form of Passive immunity – Wikipedia, the mother-to-newborn transfer of readymade Antibody – Wikipedia whose protective qualities are well-known.

  • However, breast milk provides nutrition not just for the newborn but also for its microbiota since its nutritive components mold their composition.
  • In fact, recent research suggests human breast milk may be specialized to preferentially promote the colonization of specific microbial species (Bifidobacteria, Lactobacilli) over others.
  • Further, breast milk’s nutritive components are also capable of protection, either by directly binding to pathogens by acting as ‘decoys’ or by shaping the local pH to be unfavorably low and thereby restricting their growth.

Human milk oligosaccharide – Wikipedia (HMO): Source of Nutrition for Newborn Microbiota

Breast milk contains both Prebiotic (nutrition) – Wikipedia and Probiotic – Wikipedia components. Prebiotics in this instance include specific complex sugars or oligosaccharides (carbohydrate polymers) termed HMOs, while its probiotic components include organisms such as Bifidobacteria.

  • After lactose and lipids, HMOs are the third most abundant component in human milk, at as much as 5 to 15 grams per liter (1). These sugars show great diversity, consisting of >200 different HMOs differing in size, sequence and charge (1, 2).
  • A 2015 study (3) compared gut microbe colonization patterns of infants fed breast milk by mothers who were either capable (n=32) or not (n=12) of secreting 2′-fucosylated glycans, a type of complex sugar, in their breast milk. Secretors, individuals capable of doing so, have active FUT2 – Wikipedia.
    • Infants fed by such secretors got colonized much earlier by a specific bacterium in the colon, Bifidobacterium longum, compared to infants nursed by FUT2 non-secreting mothers, who were instead colonized by B. breve.
    • Infants nursed by FUT2 secretor mothers had lower fecal levels of lactate which suggests they may be capable of better utilizing milk sugars.
  • A 2016 study found similar results, greater abundance of B. longum in fecal samples of infants breastfed by FUT2 secretor mothers (4).
  • Further, infants breastfed by FUT2 secretors have been observed to have reduced risk for diarrheal diseases (5, 6).
  • More recent studies have found higher levels of Bifidobacteria in breastfed versus formula-fed infants (7, 8).

Thus, breast milk components such as HMOs appear to create an ecological niche that preferentially promotes colonization by specific microbial species such as Bifidobacteria which are surmised to be very beneficial for infants (9, 10, 11, 12, 13).

Human milk oligosaccharide – Wikipedia (HMO): Protective Role Against Pathogens

  • Studies have found HMOs mimic gut epithelial cell receptors (act as ‘decoys’) that preferentially bind pathogenic microbes and thereby prevent them binding to gut epithelial cells (14, 15).
  • Studies in the 1970s found fecal pH of breastfed infants to be between 5 and 6 (low) while that of formula-fed ones ranged from 8 to 9 (high). The much lower pH of the large intestine in breastfed infants is surmised to be important in restricting growth of bacteria such as Bacteroides, Clostridia, Enterobacteria while favoring the growth of those such as Bifidobacteria and Lactobacilli considered beneficial for infants (16).

Human breast milk is thus a complex mix of nutrients relentlessly shaped by evolutionary selection pressure (1, 17) to not just function as a direct source of protection and nutrition for the infant but also as a prebiotic source that actively molds the infant gut microbiota composition.


1. Zivkovic, Angela M., et al. “Human milk glycobiome and its impact on the infant gastrointestinal microbiota.” Proceedings of the National Academy of Sciences 108.Supplement 1 (2011): 4653-4658.…

2. Ninonuevo, Milady R., et al. “A strategy for annotating the human milk glycome.” Journal of agricultural and food chemistry 54.20 (2006): 7471-7480.

3. Lewis, Zachery T., et al. “Maternal fucosyltransferase 2 status affects the gut bifidobacterial communities of breastfed infants.” Microbiome 3.1 (2015): 1. https://microbiomejournal.biomed…

4. Smith-Brown, Paula, et al. “Mothers Secretor Status Affects Development of Childrens Microbiota Composition and Function: A Pilot Study.” PloS one 11.9 (2016): e0161211.…

5. Davidson, Barbara, et al. “Fucosylated oligosaccharides in human milk in relation to gestational age and stage of lactation.” Protecting Infants through Human Milk. Springer, Boston, MA, 2004. 427-430.

6. Newburg, David S., et al. “Innate protection conferred by fucosylated oligosaccharides of human milk against diarrhea in breastfed infants.” Glycobiology 14.3 (2003): 253-263.…

7. Yatsunenko, Tanya, et al. “Human gut microbiome viewed across age and geography.” Nature 486.7402 (2012): 222-227.…

8. O’Sullivan, Aifric, Marie Farver, and Jennifer T. Smilowitz. “The influence of early infant-feeding practices on the intestinal microbiome and body composition in infants.” Nutrition and metabolic insights 8.Suppl 1 (2015): 1.…

9. Sela, D. A., et al. “The genome sequence of Bifidobacterium longum subsp. infantis reveals adaptations for milk utilization within the infant microbiome.” Proceedings of the National Academy of Sciences 105.48 (2008): 18964-18969.…

10. Sela, David A., and David A. Mills. “Nursing our microbiota: molecular linkages between bifidobacteria and milk oligosaccharides.” Trends in microbiology 18.7 (2010): 298-307.…

11. Smilowitz, Jennifer T., et al. “Breast milk oligosaccharides: structure-function relationships in the neonate.” Annual review of nutrition 34 (2014): 143. http://lebrilla.faculty.ucdavis….

12. Allen-Blevins, Cary R., David A. Sela, and Katie Hinde. “Milk bioactives may manipulate microbes to mediate parent–offspring conflict.” Evolution, medicine, and public health 2015.1 (2015): 106-121.…

13. Yamada, Chihaya, et al. “Molecular insight into evolution of symbiosis between breast-fed infants and a member of the human gut microbiome Bifidobacterium longum.” Cell Chemical Biology 24.4 (2017): 515-524.

14. Newburg, David S., Guillermo M. Ruiz-Palacios, and Ardythe L. Morrow. “Human milk glycans protect infants against enteric pathogens.” Annu. Rev. Nutr. 25 (2005): 37-58.…

15. Chichlowski, Maciej, et al. “Bifidobacteria isolated from infants and cultured on human milk oligosaccharides affect intestinal epithelial function.” Journal of pediatric gastroenterology and nutrition 55.3 (2012): 321.…

16. Heavey, Patricia M., and Ian R. Rowland. “The gut microflora of the developing infant: microbiology and metabolism.” Microbial Ecology in Health and Disease 11.2 (1999): 75-83.…

17. Mueller, Noel T., et al. “The infant microbiome development: mom matters.” Trends in molecular medicine 21.2 (2015): 109-117.…

How does the immune system interact with the nervous system?


‘How does the immune system interact with the nervous system?’ Denoting the study of interactions between the immune and nervous systems, the word Psychoneuroimmunology – Wikipedia is quite the mouthful. Though decades old, its sparse track record of signal accomplishments is a major reason the field remains niche and not yet mainstream. Ironically though, substantial data in recent years show immune-nervous system interactions to be integral to maintaining normal health. Consider two prominent examples,

Vagus nerve: A Master Communicator Between Immune & Nervous Systems?

In 2000, a team led by Kevin J. Tracey – Wikipedia showed they could reverse what was assumed to be irreversible septic shock simply by stimulating the vagus nerve in a timely manner after injecting mice with a lethal dose of Lipopolysaccharide – Wikipedia (1). Seventeen years later, how neural circuits might control both the degree and type of inflammation (see below from 2) is beginning to attract major research and pharma interest, so much so that therapies based on vagus nerve stimulation may even become realistic for a variety of inflammatory and autoimmune disorders at some point in the future.

Placebos: Some Examples of Effectiveness in Inflammatory Disorders & Infections

Inflammatory disorders such as Irritable bowel syndrome – Wikipedia (IBS), Inflammatory bowel disease – Wikipedia (IBD), and allergies such as asthma are increasingly a bane in developed countries. Clearly, immune system dysfunction is a major component of such inflammatory disorders. While less immediately obvious, nervous system involvement is also increasingly undeniable. For example, over the past decade, a series of increasingly ingenious collaborative studies by the prominent placebo effect researcher Ted Kaptchuk – Wikipedia, as well as studies by others, have shown a prominent role for placebos in alleviating symptoms of inflammatory disorders. Since the brain is a key mediator of the placebo effect, this implicates nervous system function in influencing the course of inflammatory disorders.

  • A 2008 study (3) on 262 randomly assigned IBS patients compared the effect of two different types of placebo to No Rx (evaluation and observation alone) for 3 weeks. The placebos were sham acupuncture given by a practitioner who either interacted little or was warm and attentive. Symptom relief ranged from 28% in the No Rx group to 44% (limited interaction) and 62% (warm and attentive) in the two placebo groups.
  • A 2010 study from the same group, again with IBS patients, tried a groundbreaking tack, an open-label placebo described as (see below from 4),

‘inert or inactive pills, like sugar pills, without any medication in it’

with the patients being told (see below from 4)

‘placebo pills, something like sugar pills, have been shown in rigorous clinical testing to produce significant mind-body self-healing processes’.

Patients in the placebo group thus knew they were getting placebos with the Rx provider explaining (see below from 4),

‘(1) the placebo effect is powerful, (2) the body can automatically respond to taking placebo pills like Pavlov’s dogs who salivated when they heard a bell, (3) a positive attitude helps but is not necessary, and (4) taking the pills faithfully is critical’

As many as 75% on average experienced symptom relief (n=37), compared to 28% in the No Rx group (n=43).

  • A 2011 study (5), again led by Kaptchuk, this time on asthma patients, compared patients who got a bronchodilator inhaler, a placebo inhaler, placebo (sham) acupuncture or no treatment. While only the bronchodilator inhaler yielded objective symptom improvement (~50%), namely, better forced expiratory volume, patients who got the two placebo Rx reported subjective symptom improvement (~21%) over the No Rx group.
  • A 2009 study (6) on 350 patients with the common cold found shorter duration and less severe symptoms in those patients who perceived their doctors to be highly empathic.
  • A 2011 open-label study on the common cold divided patients into 4 groups (see below from 7),

‘(1) those receiving no pills, (2) those blinded to placebo, (3) those blinded to echinacea, and (4) those given open-label echinacea’

and the authors concluded (see below from 7),

‘Participants randomized to the no-pill group tended to have longer and more severe illnesses than those who received pills. For the subgroup who believed in echinacea and received pills, illnesses were substantively shorter and less severe, regardless of whether the pills contained echinacea.’


1. Borovikova, Lyudmila V., et al. “Vagus nerve stimulation attenuates the systemic inflammatory response to endotoxin.” Nature 405.6785 (2000): 458-462.…

2. Fox, Douglas. “The electric cure.” (2017): 20-22.…

3. Kaptchuk, Ted J., et al. “Components of placebo effect: randomised controlled trial in patients with irritable bowel syndrome.” Bmj 336.7651 (2008): 999-1003.…

4. Kaptchuk, Ted J., et al. “Placebos without deception: a randomized controlled trial in irritable bowel syndrome.” PloS one 5.12 (2010): e15591.…

5. Wechsler, Michael E., et al. “Active albuterol or placebo, sham acupuncture, or no intervention in asthma.” New England Journal of Medicine 365.2 (2011): 119-126.…

6. Rakel, David P., et al. “Practitioner empathy and the duration of the common cold.” Family medicine 41.7 (2009): 494.…

7. Barrett, Bruce, et al. “Placebo effects and the common cold: a randomized controlled trial.” The Annals of Family Medicine 9.4 (2011): 312-322.…

What are the limits of using the skin microbiome in forensic identification?


The question refers to:

  • Back in 2010, one of the first papers to moot the idea of using the skin microbiome in forensic identification compared microbiome signatures recovered from nine computer mice with those from the palm surfaces of their owners’ dominant hands and found a statistical correlation (1).
  • A more recent study on 25 to 105 samples from different body sites also mooted the forensic idea (2).
  • Meanwhile, the latest study referenced in the question sampled a mere 12 healthy individuals (3).

Could a case for forensic application of the skin microbiome be made on such slim pickings? Available evidence suggests not: the skin microbiome varies both across body sites and over time, while microbiome analysis itself remains unstandardized, and method standardization is an essential attribute of forensics.

Skin Microbiome Varies Across The Body & With Time & Age, A Crucial Limitation For Forensic Application

Forensic signatures require individual differences to be consistent, considerable and reproducible. We are far from such certainties when it comes to the human microbiome. Where the skin is concerned, studies show the skin microbiome varies across different parts of the body as well as with time, age and infection (4, 5, 6, 7, 8, 9, 10). This implies full-body sampling of an individual would be necessary to match a microbiome found on a given surface. How practical and feasible is such a proposition in routine evidence analysis, which would likely not be limited to what’s detected on computer mice?
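To make the matching problem concrete, here is a minimal sketch, with entirely invented taxa sets (none drawn from the cited studies), of how a simple set-overlap (Jaccard) comparison of microbiome profiles behaves when the same person is sampled at different times or body sites:

```python
# Hypothetical illustration of why skin microbiome "matching" is fragile.
# The taxa sets below are invented for illustration; real profiles are far
# larger and measured with methods that are not yet standardized.

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity: shared taxa divided by total distinct taxa."""
    return len(a & b) / len(a | b)

# Same person, same palm, sampled weeks apart (taxa drift over time).
palm_week1 = {"Staphylococcus", "Propionibacterium", "Corynebacterium", "Streptococcus"}
palm_week8 = {"Staphylococcus", "Propionibacterium", "Micrococcus", "Lactobacillus"}

# Same person, different body site (site-to-site variation).
forearm = {"Propionibacterium", "Micrococcus", "Acinetobacter", "Pseudomonas"}

print(jaccard(palm_week1, palm_week8))  # ~0.33: same site, different time
print(jaccard(palm_week1, forearm))     # ~0.14: different site, same person
```

Even in this toy example, a sample from the same palm weeks later overlaps with the reference profile only partially, and a different body site of the same person overlaps less still, which is why site- and time-dependent variation undermines a stable forensic signature.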

Thus, merely examining the research portion of the question yields a near-insurmountable obstacle. Zooming out to the larger perspective, two other intrinsically intertwined issues also loom large, making clear that forensic microbiome analysis may still be firmly rooted in the realm of science fiction.

Both Forensics & Microbiome Analysis Still Lack a Basic Attribute of the Scientific Method, Rigorous Quality Control

One, popular TV shows notwithstanding, forensics is not a science but rather a motley bag of tools shaped not by the dictates of the scientific method but by the imperative of problem-solving as mandated by the justice system, with the most critical proviso being, in the words of Nathan J. Robinson in the Boston Review (see below from 11),

‘Forensic science works when prosecutions are successful and fails when they are not.’

The very basis for forensics thus clashes with the very premise of science itself, namely, (see below from 11),

‘neutral, open-ended inquiry’

Seen in such a light (see below from 11),

‘…one of the key problems with evaluating forensic science. The measures of its success are institutional: we see the failures of forensics when judges overturn verdicts or when labs contradict themselves. There is a circularity in the innocence cases, where the courts’ ability to evaluate forensic science is necessary to correct problems caused by the courts’ inability to evaluate forensic science. At no point, even with rigorous judicial review, does the scientific method come into play. The problem is therefore not that forensic science is wrong, but that it is hard to know when it is right.’

Recognizing this to be a potentially fatal weakness, the US National Academy of Sciences – Wikipedia (NAS) has long advocated the need for basic quality control across all types of forensics (12). However, such warnings have gone largely unheeded even as some of the scientifically more dubious forensic techniques, such as hair or bite-mark analysis, have recently lost their patently thin and brittle sheen of impartial, rigorous evidence analysis. As Robinson points out in his 2015 critique (see below from 11; also Google FBI hair analysis if interested in other contemporaneous news reports),

‘This past April, the FBI made an admission that was nothing short of catastrophic for the field of forensic science. In an unprecedented display of repentance, the Bureau announced that, for years, the hair analysis testimony it had used to investigate criminal suspects was severely and hopelessly flawed.’

Similar high-profile reversals have tarnished the validity of bite-mark analysis (13, 14, 15).

Even techniques such as DNA analysis, presumed to rest on an arguably more scientifically rigorous basis, haven’t escaped serious and valid criticism (16; also see below from 11, emphasis mine),

‘DNA failures can border on the absurd, such as an incident in which German police tracked down a suspect whose DNA was mysteriously showing up every time they swabbed a crime scene, from murders to petty thefts. But instead of nabbing a criminal mastermind, investigators had stumbled on a woman who worked at a cotton swab factory that supplied the police. That case may seem comical, but a 2012 error in New York surely doesn’t. In July of that year, police announced that DNA taken off a chain used by Occupy Wall Street protesters to open a subway gate matched that found at the scene of an unsolved 2004 murder. The announcement was instantly followed by blaring news headlines about killer Occupiers. But officials later recanted, explaining that the match was a result of contamination by a lab technician who had touched both the chain and a piece of evidence from the 2004 crime. Yet the newspapers had already linked the words “Occupy” and “murder.” The episode demonstrates how the consensus surrounding DNA’s infallibility could plausibly enable government curtailment of dissent. Given the NYPD’s none-too-friendly disposition toward the Occupiers, one might wonder what motivated it to run DNA tests on evidence from protest sites in the first place.’

Systematic analysis of even a historical mainstay of forensics such as fingerprint analysis is still a rarity and, when undertaken, shows a clear need for improvement (see below from 17, emphasis mine),

‘The interpretation of forensic fingerprint evidence relies on the expertise of latent print examiners. The National Research Council of the National Academies and the legal and forensic sciences communities have called for research to measure the accuracy and reliability of latent print examiners’ decisions, a challenging and complex problem in need of systematic analysis. Our research is focused on the development of empirical approaches to studying this problem. Here, we report on the first large-scale study of the accuracy and reliability of latent print examiners’ decisions, in which 169 latent print examiners each compared approximately 100 pairs of latent and exemplar fingerprints from a pool of 744 pairs…Five examiners made false positive errors for an overall false positive rate of 0.1%. Eighty-five percent of examiners made at least one false negative error for an overall false negative rate of 7.5%. Independent examination of the same comparisons by different participants (analogous to blind verification) was found to detect all false positive errors and the majority of false negative errors in this study. Examiners frequently differed on whether fingerprints were suitable for reaching a conclusion.’
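To put the quoted error rates in perspective, here is a back-of-envelope sketch. The workload numbers are hypothetical (not from the study), and it assumes the study’s overall rates transfer unchanged to that workload:

```python
# Rough reading of the latent fingerprint error rates quoted above (17).
# Assumptions: a hypothetical caseload of 10,000 non-mated and 10,000 mated
# print comparisons, with the study's overall rates applying as-is.

false_positive_rate = 0.001   # 0.1% of non-matching pairs wrongly declared a match
false_negative_rate = 0.075   # 7.5% of true matches wrongly excluded

non_mated = 10_000  # hypothetical comparisons where the prints do NOT match
mated = 10_000      # hypothetical comparisons where the prints DO match

expected_false_positives = non_mated * false_positive_rate
expected_false_negatives = mated * false_negative_rate

print(f"Expected false positives: {expected_false_positives:.0f}")  # 10
print(f"Expected false negatives: {expected_false_negatives:.0f}")  # 750
```

Under these illustrative assumptions, a 0.1% false positive rate still implies roughly ten innocent "matches" per ten thousand non-matching comparisons, while a 7.5% false negative rate means hundreds of true matches missed, underscoring why the quoted study calls for blind verification.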

Two, the intrinsically intertwined counterpart of forensic quality control: procedures to detect and analyze the microbiome remain unstandardized (18), a key reason why results from different microbiome studies routinely contradict each other. Obviously, forensic application of any tool is impossible without method standardization, which is still far from the norm for microbiome studies, including those sampling skin (19, 20).


1. Fierer, Noah, et al. “Forensic identification using skin bacterial communities.” Proceedings of the National Academy of Sciences 107.14 (2010): 6477-6481.…

2. Franzosa, Eric A., et al. “Identifying personal microbiomes using metagenomic codes.” Proceedings of the National Academy of Sciences 112.22 (2015): E2930-E2938.…

3. Schmedes, Sarah E., August E. Woerner, and Bruce Budowle. “Forensic human identification using skin microbiomes.” Applied and Environmental Microbiology 83.22 (2017): e01672-17.…

4. Grice, Elizabeth A., et al. “Topographical and temporal diversity of the human skin microbiome.” science 324.5931 (2009): 1190-1192.…

5. Costello, Elizabeth K., et al. “Bacterial community variation in human body habitats across space and time.” Science 326.5960 (2009): 1694-1697.…

6. Ursell, Luke K., et al. “The interpersonal and intrapersonal diversity of human-associated microbiota in key body sites.” Journal of Allergy and Clinical Immunology 129.5 (2012): 1204-1208.…

7. Oh, Julia, et al. “Biogeography and individuality shape function in the human skin metagenome.” Nature 514.7520 (2014): 59-64.…

8. Flores, Gilberto E., et al. “Temporal variability is a personalized feature of the human microbiome.” Genome biology 15.12 (2014): 531.…

9. Van Rensburg, Julia J., et al. “The human skin microbiome associates with the outcome of and is influenced by bacterial infection.” MBio 6.5 (2015): e01315-15.…

10. Perez, Guillermo I. Perez, et al. “Body site is a more determinant factor than human population diversity in the healthy skin microbiome.” PloS one 11.4 (2016): e0151990.…

11. Forensic Pseudoscience


13. California Supreme Court Overturns Murder Conviction Based on Flawed Bite-Mark Evidence

14. How the Flawed Science of Bite-Mark Analysis Imprisoned a Man for Murder

15. Jailed Texas man free after 28 years as bite evidence thrown out in murder case

16. The Surprisingly Imperfect Science of DNA Testing

17. Ulery, Bradford T., et al. “Accuracy and reliability of forensic latent fingerprint decisions.” Proceedings of the National Academy of Sciences 108.19 (2011): 7733-7738.…

18. Tirumalai Kamala’s answer to Is 16s rRNA sequencing a sound approach to studying bacterial communities?

19. Meisel, Jacquelyn S., et al. “Skin microbiome surveys are strongly influenced by experimental design.” Journal of Investigative Dermatology 136.5 (2016): 947-956.…

20. Kong, Heidi H., et al. “Performing skin microbiome research: a method to the madness.” Journal of Investigative Dermatology (2017).…