What are the limits of flow cytometry analysis of cell viability using propidium iodide staining?



Many dyes can assess cell viability when using Flow cytometry – Wikipedia. Classic dyes such as Propidium iodide – Wikipedia (PI) and 7-Aminoactinomycin D – Wikipedia (7AAD) are

  • Cell membrane-impermeant, only entering cells whose membranes have become more permeable, a sign of membrane-damaging distress, dying or death, where they intercalate with double-stranded nucleic acids, DNA in particular.
  • Cheap.
  • Added at the end of the staining protocol, thus requiring little by way of extra time or steps.

An inexpensive, positively charged dye that cannot cross an intact plasma membrane, PI has fairly distinct excitation and emission spectra: it is typically excited at 488 nm (excitation maximum 535 nm) and emits red fluorescence within the 570 to 630 nm range.

Full disclosure: For reasons explained below, I preferred 7AAD to PI for flow cytometry cell viability assessment and switched away from both to amine dyes as soon as they arrived on the scene.

PI Disadvantages

  • PI’s emission spectrum overlaps a little with that of FITC (Fluorescein isothiocyanate – Wikipedia) and a lot with that of PE (Phycoerythrin – Wikipedia), and compensating for these overlaps is much more challenging with PI than with 7AAD (1). Having been successfully conjugated to thousands upon thousands of antibodies, FITC and PE are two of the most versatile, proven workhorses in flow cytometry, so PI’s overlap with them severely limits the scope of a multi-staining flow cytometry antibody panel.
  • Both PI and 7AAD require a dead-cell compensation control, usually made by heat-killing (70°C for 30 minutes) an aliquot of the cells being stained, which adds an additional variable to the experiment.
  • PI could be genotoxic/mutagenic to cells (2).
  • PI can intercalate with RNA as well (3), a property long under-appreciated in conventional flow cytometry, since morphological assessment isn’t its strength. OTOH, the newer imaging flow cytometry, which combines flow cytometry with fluorescence microscopy, shows that such cytoplasmic PI staining can lead to higher false-positive rates when using standard flow cytometry protocols (4, 5).
  • PI’s RNA binding is the reason RNase is used alongside PI in microscopy (6). However, treating samples with RNase before running them on a flow cytometer isn’t a widespread practice (5) since RNase doesn’t penetrate live cells; getting it inside requires fixation, which is incompatible with PI.
  • This brings us to cell fixation and the fact that PI only works on live, not fixed, cells, a major drawback since the ability to run fixed cells is a major advantage of high-throughput flow cytometry. Cells are typically fixed after surface staining in order to permeabilize their membranes and additionally stain intracellular proteins. Since PI binds DNA non-covalently, dye bound to dead cells’ DNA can dissociate after fixation and even stain live cells’ DNA; after all, fixation destroys cell membrane integrity.
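To make the compensation point above concrete: spillover correction is, at its core, a linear-algebra operation. Observed detector signals are the true dye signals mixed through a spillover matrix, and compensation inverts that mixing. Below is a minimal sketch with made-up spillover values (not measured FITC/PI coefficients), only to illustrate the arithmetic:

```python
import numpy as np

# Hypothetical 2-detector example: FITC and PI channels.
# S[i][j] = fraction of dye i's signal read in detector j.
# These numbers are illustrative only, not real spillover coefficients.
S = np.array([
    [1.00, 0.20],   # FITC: some of its signal spills into the PI detector
    [0.45, 1.00],   # PI: a lot of its signal spills into the FITC detector
])

# True per-dye signals for a few cells (arbitrary units, invented)
true_signals = np.array([
    [1000.0,    0.0],   # FITC-only cell
    [   0.0,  800.0],   # PI-bright (dead) cell
    [ 500.0,  400.0],   # double-positive cell
])

# What the detectors actually see: observed = true @ S
observed = true_signals @ S

# Compensation inverts the spillover mixing
compensated = observed @ np.linalg.inv(S)
print(np.round(compensated, 1))
```

The larger the off-diagonal spillover terms, the more the inversion amplifies measurement noise, which is one practical reason heavily overlapping dye pairs such as PI/PE are avoided in panel design.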

Being able to use them on both live and fixed cells is a major reason for the shift over the past 10 years away from PI and 7AAD towards amine dyes. Amine dyes such as the Molecular Probes (Invitrogen-Thermo Fisher Scientific) Live/Dead® dye combinations

  • Are also cell membrane-impermeant, meaning they work on the same principle, only entering cells with compromised plasma membranes.
  • However, rather than binding DNA, they covalently bind the amine groups of cellular proteins. Such dyes thus bind only the few amines present on the surface of live cells but far more on intracellular proteins within cells with compromised membranes, resulting in a marked increase in fluorescence in distressed/dying/dead cells.
  • Covalent binding renders amine dyes impervious to cell fixation, meaning they remain bound only to amines of intracellular proteins within cells that were already dead to start with and won’t leak out to enter previously live, now fixed cells. This is how they can be used to assess viability even on fixed cells.
  • Are available in a wide range of excitation and emission profiles.
  • Many vendors offer amine-reactive beads to use as dead cell marker compensation control.
  • While greatly outweighed by their benefits, the disadvantages of amine-reactive dyes are the extra time and steps they add when staining fixed cells:
    • cells need to be stained with them first, before being fixed and permeabilized for the main staining protocol.
    • the amine-dye staining step needs to be preceded by a saline wash to remove free protein from the suspension medium; otherwise the dye is wasted on non-specific staining of protein in solution, leaving far less of it available to bind amines of intracellular proteins within membrane-compromised cells.
  • Since amine-reactive dyes are washed out before staining is complete, they flag cells that died earlier in the experiment but not those that die during sorting, when flow cytometry is used to sort out subsets of live cells. For this reason, and because of the extra steps, classic DNA-binding dyes such as PI and 7AAD are still preferred for flow cytometry sorting.
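Whichever dye is used, viability gating in the analysis reduces to thresholding the viability-dye channel, with the gate placed using controls such as the heat-killed aliquot mentioned earlier. A toy sketch with simulated, invented intensity distributions (log-scale arbitrary units):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated log-scale viability-dye intensities (made-up distributions):
# live cells stain dimly (few accessible amines), dead cells brightly.
live = rng.normal(loc=2.0, scale=0.3, size=900)
dead = rng.normal(loc=4.0, scale=0.3, size=100)
sample = np.concatenate([live, dead])

# Gate placed between the dim (live) peak and the bright peak of an
# all-dead control, e.g. a heat-killed aliquot.
threshold = 3.0

pct_dead = 100.0 * np.mean(sample > threshold)
print(f"{pct_dead:.1f}% of events above the dead-cell gate")
```

With well-separated peaks, as here, the exact gate position matters little; in real data with overlapping populations, the controls determine where the gate can defensibly be drawn.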


1. Telford, William, Karen Tamul, and Jolene Bradford. “Measurement and Characterization of Apoptosis by Flow Cytometry.” Current Protocols in Cytometry (2016): 9-49.

2. The Molecular Probes Handbook

3. Deitch, Arline D., Horatio Law, and R. deVere White. “A stable propidium iodide staining procedure for flow cytometry.” Journal of Histochemistry & Cytochemistry 30.9 (1982): 967-972. http://journals.sagepub.com/doi/…

4. Rieger, Aja M., et al. “Conventional apoptosis assays using propidium iodide generate a significant number of false positives that prevent accurate assessment of cell death.” Journal of immunological methods 358.1 (2010): 81-92.

5. Rieger, Aja M., and Daniel R. Barreda. “Accurate assessment of cell death by imaging flow cytometry.” Imaging Flow Cytometry: Methods and Protocols (2016): 209-220.

6. Fried, Jerrold, Amaury G. Perez, and Bayard D. Clarkson. “Flow cytofluorometric analysis of cell cycle distributions using propidium iodide. Properties of the method and mathematical analysis of the data.” The Journal of cell biology 71.1 (1976): 172-181. http://jcb.rupress.org/content/j…



How do people who study correlations using metagenomic differences between gut flora avoid accidentally p-hacking?


The very phrase suggests that expediency and sleight of hand, rather than the scientific method, guide the scientific process: p-hacking refers to the widespread practice of evaluating many associations in the data generated but reporting only those found to be statistically significant, specifically those with a p-value <0.05 (1).

A symptom of the major pressure in academia to publish, p-hacking is but one tool in the tool-kit of the notorious tendency aptly named Data dredging – Wikipedia, which is itself part of the far more pervasive and consequential Publication bias – Wikipedia, a well-recognized and pronounced tendency for positive, rather than negative or inconclusive, results to get published (2).

Obviously, selective publication of data thoroughly distorts understanding (3),

‘If white swans remain unpublished, reports of black swans cannot be used to infer on general swan color. In the worst case, publication bias means according to Rosenthal (1979) that the 95% of studies that correctly yield nonsignificant results may be vanishing in file drawers, while journals may be filled with the 5% of studies committing the alpha error by claiming to have found a significant effect when in reality the null hypothesis is true.’

As a result, the number of studies found to be irreproducible only piles up over time, making this a chronic problem.

p-hacking and other data dredging efforts have become commonplace in biological research because

  • Inherently flawed thinking has come to dominate biological tests, specifically a preoccupation with statistical rather than biological significance, and the notion that results are or should be binary, either significant (accepted) or nonsignificant (rejected).
  • Even though the original intent was that they would aid scientists in decision making and/or risk analysis, p-values have come to occupy a central position in biological research because they lend themselves to such artificial dichotomy.


‘Fisher offered the idea of p-values as a means of protecting researchers from declaring truth based on patterns in noise. In an ironic twist, p-values are now often manipulated to lend credence to noisy claims based on small samples” (Gelman & Loken, 2014). And the manipulation can happen completely unintentionally, without the researcher performing any conscious procedure of fishing through the data’


‘In my opinion p-values are one of many systems for looking at data (Senn, 2001). In part their limitations stem from the fact that they are all too often used to summarize a complex situation with false simplicity.’
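The mechanics behind these critiques are easy to demonstrate: test enough associations in pure noise and a predictable ~5% of them will clear p < 0.05. A minimal simulation using two-sided z-tests on simulated null data (all numbers invented):

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(42)

def p_value(z):
    """Two-sided p-value for a z statistic under the null hypothesis."""
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))

# 1000 associations tested in data where NO real effect exists:
# under the null, the z statistics are just standard-normal noise.
n_tests = 1000
z_stats = rng.standard_normal(n_tests)
p_vals = np.array([p_value(z) for z in z_stats])

# A p-hacked report keeps only the "significant" results.
significant = p_vals < 0.05
print(f"{significant.sum()} of {n_tests} null associations reach p < 0.05")
```

Roughly 50 of the 1000 null associations come out "significant" by construction; reporting only those, as p-hacking does, turns guaranteed false positives into an apparent result.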

These are general critiques and observations of experimental biological research. Some of these issues are amplified in microbiota metagenomics studies because

  • Many gut microbiota studies do an extremely poor job of estimating error, a consequence of two common experimental design flaws: small sample sizes, i.e., too few biological replicates, compounded by non-existent or too few technical replicates, i.e., the number of times different aliquots of the same sample are run A-to-Z through the same technique. Biological and technical replicates are important because they yield measures of inter- and intra-individual variation, respectively.
  • Metagenomics is inherently extremely sensitive, so discriminating signal from noise is already a challenge. Poorly powered studies simply exacerbate this difficulty, where Statistical power – Wikipedia is defined as the probability that a given test rejects the null hypothesis when an alternative hypothesis is true.
  • Since reliably detecting true effects requires larger sample sizes, many gut microbiota studies, with their small sample sizes, artificially lend themselves to p-hacking and other data dredging efforts.
  • The reality is that most microbiota studies are still observational in nature, simply comparing microbiota composition between two sets of people or animals. They are exploratory rather than confirmatory, and thus inherently incapable of supporting weighty inferences, which end up being drawn anyway. This tendency, Law of small numbers – Wikipedia, specifically Hasty generalization – Wikipedia, is partly responsible for ludicrously lofty conclusions.
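Since a single gut microbiota comparison can span hundreds of taxa, one widely used partial safeguard (a mitigation, not a cure for the design flaws above) is to control the false discovery rate across all tests rather than report raw p-values. A minimal sketch of the Benjamini-Hochberg step-up procedure, run on invented p-values standing in for per-taxon comparisons:

```python
import numpy as np

def benjamini_hochberg(p_vals, alpha=0.05):
    """Return a boolean mask of discoveries at FDR level alpha."""
    p = np.asarray(p_vals)
    m = len(p)
    order = np.argsort(p)
    ranked = p[order]
    # Find the largest k with p_(k) <= (k/m) * alpha, then reject
    # the k smallest p-values.
    below = ranked <= (np.arange(1, m + 1) / m) * alpha
    mask = np.zeros(m, dtype=bool)
    if below.any():
        k = int(np.max(np.nonzero(below)[0]))
        mask[order[: k + 1]] = True
    return mask

# Toy p-values for 10 hypothetical taxa comparisons (made up).
p = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216]
print(benjamini_hochberg(p))
```

Note how the three raw p-values just under 0.05 (0.039-0.042) do not survive correction: only the two smallest are declared discoveries, which is exactly the kind of result a p-hacked report would never show.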

Thus, p-hacking isn’t so much a bug or glitch to be eliminated through tweaks but rather an essential and inevitable feature of a flawed approach to experimental science (6). How then could the study of microbiota metagenomics, or of any other biological phenomenon for that matter, be improved? Should there be fewer but larger studies or more but smaller ones? Since each study has issues of inherent bias and relative, not absolute, precision, what matters or should matter (7)

‘is not replication defined by the presence or absence of statistical significance, but the evaluation of the cumulative evidence and assessment of whether it is susceptible to major biases’

How could this goal be accomplished though? (3)

‘We should design, execute, and interpret our research as a ‘prospective meta-analysis’ (Ioannidis, 2010), to allow combining knowledge from multiple independent studies, each producing results that are as unbiased as possible’

However, ‘combining knowledge from multiple independent studies‘ is relatively difficult for microbiota metagenomics studies since most of the steps involved in analyzing gut microbiota haven’t yet been standardized (8, 9, 10, 11, 12, 13). These include

  • Optimal methods to collect samples (lots of confounders to account for including age, gender, diet).
  • How to process and store samples (e.g., feces or biopsy, aerobic or anaerobic).
  • Optimal choice for DNA extraction.
  • Consensus on fool-proof approaches to minimize contamination of reagents, disposables.
  • Consensus on controls to assess contamination during DNA extraction and amplification.
  • Method to analyze microbiome DNA: shotgun metagenomics or 16S rRNA sequencing.
  • If 16S rRNA, then which variable region(s), which primers and how many PCR cycles to run.
  • What sequencing technology to use.
  • What bioinformatics tool(s) to use to analyze the data. Type of taxonomic classification, clustering techniques, functional analyses.

And these are only the technical issues! There is also the scientific issue of study design (9, 11, 13, 14, 15): how many experimental groups, how many samples per group, repeated or one-time sampling. Lack of standardization is a major reason Meta-analysis – Wikipedia is still in its infancy in microbiota studies. Too many variables and confounders differ between studies to allow for meaningful data comparison. Which techniques lend themselves to particular kinds of biases in resulting data, and how best to minimize such biases, still remain to be determined (16).

Such critiques aren’t merely academic but have real-world costs. Consider for example Crohn’s disease, where published studies vary widely in their results (17). Microbiota differences between obese and lean individuals have also been difficult to replicate (18). This is also the case with Inflammatory bowel disease – Wikipedia (IBD) where microbiota metagenomic data alone aren’t yet sufficient to discriminate between healthy and specific IBD states but rather serve to complement other types of diagnostics (18).


1. Farcomeni, A. “Contribution to the discussion of the paper by Stefan Wellek: ‘A critical evaluation of the current p-value controversy’.” Biometrical Journal (2017).

2. Smaldino, Paul E., and Richard McElreath. “The natural selection of bad science.” Royal Society open science 3.9 (2016): 160384. http://rsos.royalsocietypublishi…

3. Amrhein, Valentin, Fränzi Korner-Nievergelt, and Tobias Roth. The earth is flat (p > 0.05): Significance thresholds and the crisis of unreplicable research. No. e2921v1. PeerJ Preprints, 2017. https://peerj.com/articles/3544.pdf

4. Gelman A, Loken E. 2014. The statistical crisis in science. American Scientist 102:460-465. http://www.stat.columbia.edu/~ge…

5. Senn, Stephen. “Contribution to the discussion of ‘“A critical evaluation of the current p‐value controversy”’.” Biometrical Journal (2017).

6. Wasserstein, Ronald L., and Nicole A. Lazar. “The ASA’s statement on p-values: context, process, and purpose.” (2016). http://www.scaillet.ch/risk_mngt…

7. Goodman, Steven N., Daniele Fanelli, and John PA Ioannidis. “What does research reproducibility mean?.” Science translational medicine 8.341 (2016): 341ps12-341ps12. https://pdfs.semanticscholar.org…

8. Weiss, Sophie, et al. “Tracking down the sources of experimental contamination in microbiome studies.” Genome biology 15.12 (2014): 564. https://genomebiology.biomedcent…

9. Bik, Elisabeth M. “Focus: microbiome: the hoops, hopes, and hypes of human microbiome research.” The Yale journal of biology and medicine 89.3 (2016): 363. https://www.ncbi.nlm.nih.gov/pmc…

10. Weiss, Sophie, et al. “Correlation detection strategies in microbial data sets vary widely in sensitivity and precision.” The ISME journal 10.7 (2016): 1669-1681. https://www.nature.com/ismej/jou…

11. Kim, Dorothy, et al. “Optimizing methods and dodging pitfalls in microbiome research.” Microbiome 5.1 (2017): 52. https://microbiomejournal.biomed…

12. Vandeputte, Doris, et al. “Practical considerations for large-scale gut microbiome studies.” FEMS Microbiology Reviews 41.Supp_1 (2017): S154-S167.

13. Claesson, Marcus J., Adam G. Clooney, and Paul W. O’Toole. “A clinician’s guide to microbiome analysis.” Nature reviews. Gastroenterology & hepatology (2017).

14. Laukens, Debby, et al. “Heterogeneity of the gut microbiome in mice: guidelines for optimizing experimental design.” FEMS microbiology reviews 40.1 (2016): 117-132. https://pdfs.semanticscholar.org…

15. Debelius, Justine, et al. “Tiny microbes, enormous impacts: what matters in gut microbiome studies?.” Genome biology 17.1 (2016): 217. https://genomebiology.biomedcent…

16. Lozupone, Catherine A., et al. “Meta-analyses of studies of the human microbiota.” Genome research 23.10 (2013): 1704-1714.

17. Gevers, Dirk, et al. “The treatment-naive microbiome in new-onset Crohn’s disease.” Cell host & microbe 15.3 (2014): 382-392. http://www.cell.com/cell-host-mi…

18. Walters, William A., Zech Xu, and Rob Knight. “Meta‐analyses of human gut microbes associated with obesity and IBD.” FEBS letters 588.22 (2014): 4223-4233. http://onlinelibrary.wiley.com/d…


How is the epitope of a therapeutic monoclonal antibody determined?



A Monoclonal antibody – Wikipedia (mAb) is the product of a clonal population of B cells, specifically the secreted portion of the B cell receptor (BCR). B cells bind native, unprocessed antigens, a process very different from that of T cells, which bind small linear peptides (epitopes) presented within the groove of MHC (HLA) molecules on cell surfaces after antigens are processed (chopped up) inside an antigen presenting cell.

Mapping B cell or mAb epitopes is trickier because they can be of two very different types. The B cell epitope, the portion of the antigen that binds the B cell receptor, can be either linear (continuous) or conformational (discontinuous), Conformational epitope – Wikipedia (1). Consisting of sequential amino acids of a particular portion of a protein antigen, linear epitopes are easier to identify than conformational epitopes, where disparate portions of a protein, brought together by how it folds into its native structure, constitute the B cell receptor’s epitope (2). Though the terms linear and conformational lend themselves to the misunderstanding that linear (continuous) epitopes are conformation-independent, this is not true, since even they can adopt one or more conformations. Whether or not a B cell or mAb epitope is conformational can be established fairly easily: if the antibody binds by Western blot even after SDS-PAGE, i.e., after the antigen has been denatured, its epitope is unlikely to be conformational.

Methods to identify B cell or mAb epitopes are either structural or functional, with different advantages and disadvantages (3).

Whether the epitope is linear or conformational, X-ray crystallography of antigen-antibody complexes, a structural approach in which the antibody bound to its antigen is first crystallized and the complex then subjected to X-ray diffraction, represents the gold standard for Epitope mapping – Wikipedia (1, 4, 5). This is because it can directly identify ‘the set of atoms of the antigen that make contact with residues of the antibody. Usually a contact between two residues is said to occur if the interatomic distance between their atoms is less than 4 Å‘ (1). It thus provides a high-resolution picture of the binding interactions between antibody and epitope, spanning both strong and weak interactions.
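The 4 Å contact criterion quoted above translates directly into a distance computation over the atoms of a solved structure. A toy sketch with invented residue names and coordinates (real ones would come from a PDB file):

```python
import numpy as np

# Toy atom coordinates (Å) for an antigen and an antibody; all values
# and residue labels are invented for illustration.
antigen_atoms = [
    ("GLU45", np.array([0.0, 0.0, 0.0])),
    ("LYS46", np.array([3.0, 0.0, 0.0])),
    ("SER90", np.array([20.0, 5.0, 1.0])),
]
antibody_atoms = [
    ("TYR33", np.array([2.0, 2.0, 1.0])),
    ("ASP52", np.array([18.0, 30.0, 2.0])),
]

CONTACT_CUTOFF = 4.0  # Å, the criterion quoted above

# An antigen residue belongs to the structural epitope if any of its
# atoms lies within the cutoff of any antibody atom.
epitope = set()
for ag_res, ag_xyz in antigen_atoms:
    for ab_res, ab_xyz in antibody_atoms:
        if np.linalg.norm(ag_xyz - ab_xyz) < CONTACT_CUTOFF:
            epitope.add(ag_res)

print(sorted(epitope))
```

Note that such a contact list is purely geometric: as discussed below, it says nothing by itself about which contacts actually drive binding specificity.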

The problem is that crystallizing antigen-antibody complexes is quite expensive, requires the antibody structure to be known, and entails large quantities of highly purified antigen and antibody as well as generation of sufficient quantities of diffraction-quality crystals, a task immeasurably more difficult for membrane protein antigens.

Other structural approaches such as NMR and SPR have similar drawbacks in terms of relative cost and expertise required, with the additional limitation, in the case of NMR, that antigens be <60 kDa in size.

Though it has lower resolution than X-ray crystallography and NMR, Mass spectrometry – Wikipedia (MS) has become a popular B cell epitope mapping method (6) for a variety of reasons: greater efficiency, high sensitivity, the ability to directly sequence the epitope, identification of both linear and conformational epitopes, and amenability to automation.

Amide Hydrogen/Deuterium exchange (HDX), an MS variant, is a protein foot-printing approach that compares the amount of deuterium incorporated by the antigen when bound versus unbound to the mAb (5, 7), the idea being that the exchange rate differs measurably between easily accessible hydrogens and those in the portion of the antigen in contact with the antibody.

A problem with structural methods, including computational (in silico) ones, is that they require knowledge of the antigen’s tertiary structure. In any case, computational approaches aren’t stand-alone, requiring experimental validation using other structural and/or functional methods (4). Despite its exquisitely high resolution, even a structural approach such as X-ray crystallography has a weakness as a stand-alone method: contact between specific antibody and antigen (epitope) amino acid residues isn’t by itself sufficient to conclude that the binding is specific. Only painstaking amino acid-by-amino acid mutation analysis of the epitope can do that.

OTOH, functional epitope mapping methods such as Dot blot – Wikipedia, ELISA – Wikipedia and Western blot – Wikipedia entail processes such as antigen fragmentation, antigen modification through mutagenesis (8), competition, and peptide libraries/synthetic peptides (peptide microarrays, Pepscan, surface display). Such functional methods rely on detecting antibody binding to its antigen following interventions such as competition with other antibodies or antigenic modifications such as mutation, i.e., they functionally demonstrate that specific sequences of the antigenic epitope are critical for antibody binding. ELISA and Western blot are old workhorses among these methods, with the great advantage that they don’t require knowledge of the antigen’s tertiary structure.

Given their relative advantages and disadvantages, and balancing cost, time, effort and skill levels required, a mix of structural and functional approaches is required to convincingly decipher a B cell or mAb epitope.


1. Van Regenmortel, Marc HV. “What is a B-cell epitope?.” Epitope Mapping Protocols: Second Edition (2009): 3-20.

2. Forsström, Björn. Characterization of antibody specificity using peptide array technologies. Diss. KTH Royal Institute of Technology, 2014. http://www.diva-portal.org/smash…

3. Ahmad, Tarek A., Amrou E. Eweida, and Salah A. Sheweita. “B-cell epitope mapping for the design of vaccines and effective diagnostics.” Trials in Vaccinology 5 (2016): 71-83. http://ac.els-cdn.com/S187943781…

4. Sivalingam, Ganesh N., and Adrian J. Shepherd. “An analysis of B-cell epitope discontinuity.” Molecular immunology 51.3 (2012): 304-309. https://www.researchgate.net/pro…

5. Clementi, Nicola, et al. “Characterization of epitopes recognized by monoclonal antibodies: experimental approaches supported by freely accessible bioinformatic tools.” Drug discovery today 18.9 (2013): 464-471. https://www.researchgate.net/pro…

6. Opuni, Kwabena FM, et al. “Mass spectrometric epitope mapping.” Mass spectrometry reviews (2016). https://www.researchgate.net/pro…

7. Clementi, Nicola, et al. “Epitope mapping by epitope excision, hydrogen/deuterium exchange, and peptide-panning techniques combined with in silico analysis.” Monoclonal Antibodies: Methods and Protocols (2014): 427-446. https://www.researchgate.net/pro…

8. Davidson, Edgar, and Benjamin J. Doranz. “A high‐throughput shotgun mutagenesis approach to mapping B‐cell antibody epitopes.” Immunology 143.1 (2014): 13-20. http://onlinelibrary.wiley.com/d…


Why does the CDC recommend that 100% of the Baby Boom generation get tested for Hepatitis C?

Since 1997, the US has had a risk-based screening strategy for Hepatitis C virus (HCV) (1). Effective risk-based screening requires accurately identifying those at high risk and getting them tested serologically to determine whether or not they are HCV infected, a tricky balance.

The tricky balance consists of both the healthcare provider and the patient getting it right. On the one hand, the primary healthcare provider needs to obtain from a given patient a complete and accurate risk-factor history covering, e.g., blood transfusions from unscreened donors, unsafe medical procedures, injection drug use and needle sharing, a process at least one study found to be far from comprehensive (2). OTOH, the patient needs to be honest and forthcoming about past behaviors that increase their HCV infection risk, again a process at least one study found to be far from reliable (3).

How effectively then does risk-based HCV screening identify those infected in the US? While there have long been concerns about how accurately it captures the true prevalence of HCV in the US, those fears were realized when a 2012 study suggested that >50% of all HCV-infected individuals may be unaware of their infection status (4). Turns out those most at risk for HCV are the 1945 to 1965 US birth cohort aka baby boomers, presumably for a couple of reasons,

  • Higher risk of HCV-infected blood transfusions. Blood transfusions from unscreened blood donors are a particular problem for older generations since universal (anti-HCV antibody) blood donor screening only started in 1992. As many as ~3 million baby boomer Americans are estimated to have chronic HCV, some of which could be attributed to their pre-1992 transfusion history (5).
  • Much higher HCV prevalence. For as-yet unknown reasons, the 1945 to 1965 birth cohort has a 5X higher HCV prevalence compared to other cohorts (6, 7). A retrospective study of 110,223 past or present HCV cases from 2004 to 2010 found 68% (74,578) were born from 1945 through 1965, with only 25% (27,312) born after 1965 and only 7% (8,066) before 1945 (8), i.e., close to 3 out of 4 of the HCV-infected were born between 1945 and 1965 (7).
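The cohort percentages quoted from the surveillance study (8) can be checked directly from the reported counts; note that the three birth cohorts fall slightly short of the study total, presumably reflecting cases without a recorded birth year:

```python
# Cohort breakdown reported by the 2004-2010 surveillance study cited above (8).
total = 110_223
born_1945_1965 = 74_578
born_after_1965 = 27_312
born_before_1945 = 8_066

for label, n in [("born 1945-1965", born_1945_1965),
                 ("born after 1965", born_after_1965),
                 ("born before 1945", born_before_1945)]:
    # Reproduces the rounded percentages quoted in the text: 68%, 25%, 7%
    print(f"{label}: {100 * n / total:.0f}%")

# Remainder not assigned to any of the three cohorts
print("unaccounted:", total - (born_1945_1965 + born_after_1965 + born_before_1945))
```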

Thus, given that

  • HCV can be asymptomatic for long periods of time
  • The risk-based screening strategy seems to miss 1 out of 2 HCV infected
  • Boomers have a markedly higher HCV risk

the CDC expanded ‘its HCV testing guidelines to include a onetime HCV test to all persons born between 1945 and 1965‘ (9) ‘regardless of whether HCV risk factors have been identified‘ (6).


1. Alter, Miriam J., and Harold S. Margolis. “Recommendations for prevention and control of hepatitis C virus (HCV) infection and HCV-related chronic disease.” (1998).

2. Navarro, Victor J., Thomas E. St Louis, and Beth P. Bell. “Identification of patients with hepatitis C virus infection in New Haven County primary care practices.” Journal of clinical gastroenterology 36.5 (2003): 431-435.

3. Schuckman, Hugh, et al. “A validation of self-reported substance use with biochemical testing among patients presenting to the emergency department seeking treatment for backache, headache, and toothache.” Substance use & misuse 43.5 (2008): 589-595.

4. Denniston, Maxine M., et al. “Awareness of infection, knowledge of hepatitis C, and medical follow‐up among individuals testing positive for hepatitis C: National Health and Nutrition Examination Survey 2001‐2008.” Hepatology 55.6 (2012): 1652-1661. http://onlinelibrary.wiley.com/d…

5. Armstrong, Gregory L., et al. “The prevalence of hepatitis C virus infection in the United States, 1999 through 2002.” Annals of internal medicine 144.10 (2006): 705-714. https://www.researchgate.net/pro…

6. Chung, Raymond T., et al. “Hepatitis C guidance: AASLD-IDSA recommendations for testing, managing, and treating adults infected with hepatitis C virus.” Hepatology 62.3 (2015): 932-954. http://onlinelibrary.wiley.com/d…

7. https://www.cdc.gov/knowmorehepa…

8. Mahajan, Reena, et al. “Indications for testing among reported cases of HCV infection from enhanced hepatitis surveillance sites in the United States, 2004–2010.” American journal of public health 103.8 (2013): 1445-1449. https://www.ncbi.nlm.nih.gov/pmc…

9. Rosen, Hugo Ramón. “Hep C, where art thou”: What are the remaining (fundable) questions in hepatitis C virus research?.” Hepatology 65.1 (2017): 341-349. http://onlinelibrary.wiley.com/d…


What are the effects of mycoplasmic contamination on cell cultures?



Mycoplasma: some basic facts

Contamination with Mycoplasma – Wikipedia, previously known as pleuropneumonia-like organisms or PPLO, is a common bane in cell cultures, with studies routinely finding many cell lines contaminated with it (1, 2).

Though sterilizing filters with 0.2 μm or even 0.1 μm pores are normally recommended for filtering culture media constituents such as FBS/FCS (Fetal Bovine/Calf Serum) to exclude mycoplasma as well as other micro-organisms (3), mycoplasma are extremely small, 300 to 800 nm in diameter, so filtering alone may be insufficient to exclude them (4, 5). Many mycoplasma strains are also unresponsive to commonly used cell culture antibiotics. ATCC, Bionique, BioReliance, ECACC and Mycoplasma Experience are among the companies that provide mycoplasma screening services (5). Unfortunately, in vitro culture of eukaryotic cells appears to favor mycoplasma growth (6), especially given careless hands and lack of attention to detail.

Routine testing of cell lines for mycoplasma contamination is the only way to mitigate and minimize its harmful effect on biomedical research, something well-recognized as far back as 1994 (see below from 1).

‘100% of the cultures from labs without mycoplasma testing programs were contaminated, but only 2% of the cultures from labs that tested regularly.’

Mycoplasma: cell culture effects

Stealthy by nature and insidious in effect, mycoplasma contamination is much more difficult to discern than contamination by bacteria or fungi, whose effects are easily detectable through microscopy or through pH changes (Phenol Red-containing culture media turn yellow as the pH rapidly drops) or turbidity.

Old is often gold when it comes to cell culture resources, and one of the best descriptions of the effects of mycoplasma contamination on cell cultures is from a 1971 review (see below from 4)

‘Gross macroscopic changes in tissue cells infected with Mycoplasma sp. can range from inapparent or minimal unsuspected alterations to cytopathology and cell destruction reminiscent of viral infections. Macroscopic changes in morphology can be so minimal that they are not suspected even though a culture can reveal high titers of Mycoplasma. Cytopathic changes may be related to depletion of arginine in the medium by the Mycoplasma…Effects of cytopathic Mycoplasma sp. have been confused with virus infections…studies of cytopathic Mycoplasma strains have emphasized the risk involved in attributing all cytopathic effects to viruses…Many properties are shared by Mycoplasma and viruses: (a) filtrability: a number of strains of Mycoplasma (78) produced cells capable of passing through a 0.22 μm Millipore filter, and it has been found (79) that the size of the smallest filtrable unit varies under different conditions of culture; (b) electron microscopic morphology: certain pleomorphic forms of rickettsiae, ornithosis virus, and Mycoplasma can be confused with one another (80); (c) sensitivity to ether (75); (d) ability to hemagglutinate (81, 82); (e) ability to cause hemadsorption (83); (f) resistance to some antibiotics; (g) inhibition of growth by antiserum (84, 85); and (h) induction of chromosomal aberrations (86, 87).’

A 1994 review (1) summarizes mycoplasma contamination effects on cell cultures as,

‘the ability to alter their host culture’s cell function, growth, metabolism, morphology, attachment, membranes, virus propagation and yield, interferon induction and yield, cause chromosomal aberrations and damage, and cytopathic effects including plaque formation ‘

Thus, mycoplasma contamination typically

  • Affects the rate of cell proliferation (1, 4, 7).
  • Induces morphological changes (4, 7, see below from 1).
  • Causes chromosome aberrations (1, 7, see next one below from 8).
  • Influences amino acid and nucleic acid metabolism (1, 7).
  • Induces cell transformation (7).
  • Yields poorly reproducible results from cell lines (9, 10).
  • Activates primary immune cells (11).
  • Changes gene expression patterns (see next one below from 8), with one study (12) suggesting as many as 10% of gene expression studies, including some published in leading science journals, showed evidence of mycoplasma contamination.


1. Ryan, John A. Understanding and managing cell culture contamination. Corning Incorporated, 1994. https://pdfs.semanticscholar.org…

2. Uphoff, Cord C., and Hans G. Drexler. “Comparative PCR analysis for detection of mycoplasma infections in continuous cell lines.” In Vitro Cellular & Developmental Biology-Animal 38.2 (2002): 79-85. https://www.dkfz.de/gpcf/fileadm…

3. Clarke, Sue, and Janette Dillon. “The Cell Culture Laboratory.” Animal Cell Culture: Essential Methods (2011): 1-31.

4. Fogh, Jørgen, Nelda B. Holmgren, and Peter P. Ludovici. “A review of cell culture contaminations.” In vitro 7.1 (1971): 26-41. A review of cell culture contaminations

5. Davis, John M. “Basic techniques and media, the maintenance of cell lines, and safety.” Animal Cell Culture: Essential Methods (2011): 91-151.

6. Razin, Shmuel, and Leonard Hayflick. “Highlights of mycoplasma research—an historical perspective.” Biologicals 38.2 (2010): 183-190. https://www.researchgate.net/pro…

7. Thraves, Peter, and Cathy Rowe. “The Quality Control of Animal Cell Lines and the Prevention, Detection and Cure of Contamination.” Animal Cell Culture: Essential Methods (2011): 255-296.

8. Chernov, V. M., O. A. Chernova, and J. T. Sanchez-Vega. “Mycoplasma contamination of cell cultures: vesicular traffic in bacteria and control over infectious agents.” Acta Naturae 6.3 (22) (2014). https://pdfs.semanticscholar.org…

9. Callaway, Ewen. “Contamination hits cell work: Mycoplasma infestations are widespread and costing laboratories millions of dollars in lost research.” Nature 511.7511 (2014): 518-519. https://www.nature.com/polopoly_…

10. Gedye, Craig, et al. “Mycoplasma infection alters cancer stem cell properties in vitro.” Stem Cell Reviews and Reports 12.1 (2016): 156-161.

11. Heidegger, Simon, et al. “Mycoplasma hyorhinis-contaminated cell lines activate primary innate immune cells via a protease-sensitive factor.” PloS one 10.11 (2015): e0142523. http://journals.plos.org/plosone…

12. Assessing the prevalence of mycoplasma contamination in cell culture via a survey of NCBI’s RNA-seq archive. https://pdfs.semanticscholar.org…


What factors go into choosing a cell culture medium type?


Appropriateness for species, research purpose, cell history/provenance, and cell type are among the most important factors to consider when choosing a cell culture medium.

Appropriateness for species. The fifty-year-old Roswell Park Memorial Institute medium – Wikipedia, or RPMI 1640 (1) for short, was specially formulated for human cell culture. OTOH, Norman Iscove formulated Iscove’s Modified Dulbecco’s medium, or IMDM, specifically for mouse cell culture (2). These definitions aren’t set in stone, but some knowledge of the history of the most commonly used cell culture media helps make better choices, especially when trouble-shooting.

Research purpose spans basic animal or preclinical and basic clinical on the one hand, and therapeutic on the other hand.

Serum is a common cell culture medium component because most culture media, simple mixtures of amino acids, salts, sugars and vitamins, are just too meager to support cell culture on their own. This is why the history of cell culture is dotted with additives such as extracts from embryos, amniotic fluid, milk, colostrum, lymph, plasma and serum. Its ready availability in quantity and ease of storage meant that serum, specifically from fetal calves, eventually came to dominate cell culture.

While serum, especially fetal calf/bovine serum (FCS/FBS), is a common ingredient in basic research cell culture, it’s usually a strict no-no in therapeutic uses. Being especially rich in proteins, hormones and numerous other biochemical molecules makes serum a rich source of nutrition in cell cultures used in basic research. That same advantage, however, renders it a handicap for therapeutic purposes by offering many allogeneic/xenogeneic targets for immune responses when injected into patients, something that would nullify therapeutic benefits. Serum also presents considerable safety issues in the form of potential viral or prion contaminants. Thus, therapeutic use usually entails customized or proprietary serum-free culture media such as the X-VIVO series, AIM-V, etc.

Cell history/provenance, in terms of primary or culture-adapted, is a key difference, with the former requiring richer culture media. Culture-adapted cells, which could be anything from cell lines to clones to transformed cells, are by contrast hardier and capable of growing in more minimal media.

Cell type is another important consideration. For example, 2-Mercaptoethanol – Wikipedia is essential for lymphocyte culture. So is whether cells are adherent or free-floating. If the former, culture also entails coating materials, usually proteins such as fibronectin, or pre-coated culture-ware to make the cells stick (adhere) to the culture surface.

Better to look up the published literature as well as reference repositories such as ATCC (company) – Wikipedia rather than rely on word of mouth or sales pitches, which likely lack relevant history, let alone critical nuances pertaining to specific cell types and cultures.


1. Moore, George E., Robert E. Gerner, and H. Addison Franklin. “Culture of normal human leukocytes.” Jama 199.8 (1967): 519-524.

2. Iscove, N. N., and F. Melchers. “Complete replacement of serum by albumin, transferrin, and soybean lipid in cultures of lipopolysaccharide-reactive B lymphocytes.” Journal of Experimental Medicine 147.3 (1978): 923-933. http://jem.rupress.org/content/j….



Even given Wakefield’s fraud, given immunological anomalies in autistic individuals, shouldn’t autism affect how people respond to vaccinations?

In other words, assuming they have underlying immune debilities which predispose them to infections and/or poor infection control, do autistic individuals have higher mortality rates from vaccine-preventable diseases compared to the general population? A quick look at recent literature suggests no. Rather, disproportionate to controls, the major causes of mortality in individuals with ASD (Autism Spectrum Disorders) appear to be diseases of the nervous system, suicide and epilepsy. In other words, ASD-associated immune anomalies, which seem linked to gut microbiota disturbances (Dysbiosis – Wikipedia) (1), likely do not entail frank/overt immunodeficiency.

Until recently, relatively few studies compared long-term mortality rates between ASD and the general population (2), mainly because developmental issues in children remain the major research focus while autism rates started increasing dramatically only from the 1990s, too short a time period to draw categorical conclusions about life expectancy.

Nevertheless, in recent years, a few clinical cohort and population-based studies have concluded ASD carries a two- to six-fold increased risk of premature mortality (see below 3).

What are the causes of such premature mortality? One of the largest such studies, a matched case-cohort study based on nationwide Swedish population-based registers, found it to be not infection but rather diseases of the nervous system and suicide (see below from 3).

Drilling down further into sub-categories brought up pancreatic cancer and mesothelioma in addition to epilepsy. Again, infectious diseases did not show up as disproportionate causes of mortality among ASD (see below from 3).

Regardless of how large it is, one study obviously isn’t the last word on a subject. It remains to be seen what other studies report.


1. Tirumalai Kamala’s answer to What is the role of bacteria in the gut?

2. Bishop-Fitzpatrick, Lauren, and Amy JH Kind. “A Scoping Review of Health Disparities in Autism Spectrum Disorder.” Journal of Autism and Developmental Disorders (2017): 1-12. https://www.researchgate.net/pro…

3. Hirvikoski, Tatja, et al. “Premature mortality in autism spectrum disorder.” The British Journal of Psychiatry 208.3 (2016): 232-238. Premature mortality in autism spectrum disorder


What is the Missing Self hypothesis of organ transplant rejection?



What is the Missing Self Hypothesis?

MHC class I molecules (human MHC is called HLA) are typically expressed on the cell surface by all body cells save some specific cell types such as sperm and eggs. In the 1980s, the Swedish immunologist Klas Kärre – Wikipedia noticed that reduced (down-regulated) cell-surface MHC class I expression was fairly common following viral infections, cancerous transformation and other types of cellular stress, and that precisely such lack of surface MHC class I seemed to make such cells a target for cytotoxic killing by NK cells (Natural killer cell – Wikipedia). Kärre postulated that some as-yet-unknown NK receptors were scanning cell surfaces for the presence or absence of MHC class I molecules, getting activated and killing those that lacked cell-surface MHC class I, i.e., Missing Self (1, 2). This is a very different process from how cytotoxic CD8 T cells engage antigen-derived epitopes presented within MHC class I molecules.

Circa 2017, Missing Self is much more complicated, the outcome of as-yet incompletely understood interactions of a dizzying array of NK cell activating and inhibitory receptors called KIRs.

Thirty years on, the Missing Self idea is clearly not so simple. NK cells themselves seem far more complex than just innate immune cells with invariable germline-encoded receptors. Instead, parallel to and complementing T and B cells, NK cells seem to have evolved a highly complex and, more pertinently, highly specific process for recognizing their target cells (3), a process strikingly different from the one used by T cells, namely somatically rearranged cell-surface receptors binding MHC-bound peptides on presenting cells.

Rather, NK cells express a wide array of cell-surface activating and inhibitory receptors, the KIRs (Killer-cell immunoglobulin-like receptor – Wikipedia), and the outcome depends on the balance of what they bind, namely MHC class I as well as as-yet-unknown ligands expressed on target cells (see below from 4).

NK cells appear to get activated when multiple activating KIRs are engaged, which overrides inhibitory KIR binding. There is as yet no consensus on how KIRs help shape NK cell ‘education’ during their development in the bone marrow, especially how they learn self-tolerance. A variety of models, Arming, Disarming, Confining (5, 6), emphasize different aspects of the process, while the Rheostat model favored by the Swedish group tries to encompass them (see below from 7).
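As a cartoon of this balance idea (not any specific education model; the activating receptor names below are real ones, but the signal weights are invented purely for illustration):

```python
# Toy sketch of the NK cell signal-balance idea: a target is killed when
# engaged activating receptors outweigh engaged inhibitory ones.
# The weights here are illustrative, not measured values.

def nk_response(engaged_activating, engaged_inhibitory,
                activating_weight=1.0, inhibitory_weight=1.5):
    """Return 'kill' if net activating signal exceeds net inhibition."""
    net = (len(engaged_activating) * activating_weight
           - len(engaged_inhibitory) * inhibitory_weight)
    return "kill" if net > 0 else "spare"

# Healthy cell: normal MHC class I engages inhibitory KIRs -> spared.
print(nk_response(engaged_activating=["NKG2D"],
                  engaged_inhibitory=["KIR2DL1", "KIR3DL1"]))  # spare

# 'Missing self': MHC class I down-regulated, inhibition lost -> killed.
print(nk_response(engaged_activating=["NKG2D", "NKp46"],
                  engaged_inhibitory=[]))  # kill
```

The point of the cartoon is only that the same activating input produces opposite outcomes depending on whether inhibitory receptors find their MHC class I ligands.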

More recent studies complicate matters further by suggesting NK KIRs are sensitive to peptides presented by HLA class I molecules (8). Also worth noting that most of these models are based on data generated using circulating (blood) NK cells, and who knows whether or how relevant such models are to tissue-resident NK cells such as uterine NK (uNK) cells, whose proper functioning is known to be critical for healthy pregnancies through KIR engagement by placental trophoblast HLA-C (9), and which also extensively remodel uterine tissue through cytokine secretion to help it quickly adapt to the increased vascular supply needed for the growing fetus (10).

Could mismatch between NK cell KIRs and tissue KIR ligands influence transplant rejection?

Since most transplants are allogeneic, i.e., between genetically non-identical individuals (Allotransplantation – Wikipedia), HLA matching is done to reduce the scope of T cell-mediated rejection. Do transplants need NK cell KIR and tissue KIR ligand matching as well? Available data are somewhat confusing, as would be expected given that the triggers that activate or inhibit NK cells remain not fully defined.

Nevertheless, since the early 2000s, a steady drip of scientific articles suggests KIR-KIRL (KIR ligands, including HLA) matching could improve long-term kidney transplant survival, while more recent studies suggest such matching may matter for other transplants such as liver as well. OTOH, in what surely sounds surprising, KIR-HLA mismatching might also improve graft tolerance by killing donor antigen-presenting cells, which would reduce direct antigen presentation by the graft.

Thus, even with HLA-compatible transplants, recipient NK cells expressing an inhibitory receptor could be activated by allograft cells that lack the HLA class I ligands relevant for that particular inhibitory receptor (11).
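A deliberately simplified sketch of such a ‘missing ligand’ check, using the commonly cited inhibitory KIR to HLA ligand-group assignments (KIR2DL1 to C2, KIR2DL2/3 to C1, KIR3DL1 to Bw4); real KIR/HLA typing is considerably more nuanced than this:

```python
# Simplified recipient-vs-donor KIR-ligand mismatch check. The mapping of
# inhibitory KIRs to HLA class I ligand groups below is the commonly used
# textbook simplification, not a complete clinical typing scheme.

INHIBITORY_KIR_LIGANDS = {
    "KIR2DL1": "C2",
    "KIR2DL2": "C1",
    "KIR2DL3": "C1",
    "KIR3DL1": "Bw4",
}

def missing_ligands(recipient_kirs, donor_ligand_groups):
    """Ligand groups a recipient inhibitory KIR expects but the donor lacks.

    Each such 'missing self' ligand could, in principle, leave that
    inhibitory KIR unengaged on graft cells and tilt recipient NK cells
    toward attacking the graft.
    """
    expected = {INHIBITORY_KIR_LIGANDS[k]
                for k in recipient_kirs if k in INHIBITORY_KIR_LIGANDS}
    return sorted(expected - set(donor_ligand_groups))

# Recipient carries KIR2DL1 (expects C2) but the donor is C1/C1 homozygous:
print(missing_ligands(["KIR2DL1", "KIR2DL3"], ["C1"]))  # ['C2']
```

This is the kind of incompatibility the kidney studies cited below test for, over and above conventional HLA matching.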

Human kidney transplants in particular suggest NK cell mismatch in terms of NK cell inhibitory KIR receptors and missing HLA class I can adversely affect long-term renal allograft survival.

While at least one study (12) that retrospectively analyzed KIR ligand mismatches in 608 cadaveric kidney grafts didn’t find significant differences in 10-year graft survival rates, many other studies (13, 14, 15, 16, 17, 18) support the idea that KIR matching in addition to HLA matching would improve long-term renal graft survival.

One study found KIR-KIRL matching to affect liver rejection rates with higher acute rejection for mismatches (19).

Finally, it’s also worth remembering that NK cells could also be activated to damage allografts through ADCC (Antibody-dependent cell-mediated cytotoxicity – Wikipedia), a process dependent on their antibody-binding Fc receptor – Wikipedia, not on KIRs (4).


1. Kärre, Klas, et al. “Selective rejection of H–2-deficient lymphoma variants suggests alternative immune defence strategy.” Nature 319.6055 (1986): 675-678. https://www.researchgate.net/pro…

2. Ljunggren, Hans-Gustaf, and Klas Kärre. “In search of the ‘missing self’: MHC molecules and NK cell recognition.” Immunology today 11 (1990): 237-244.

3. Yawata, Makoto, et al. “MHC class I–specific inhibitory receptors and their ligands structure diverse human NK-cell repertoires toward a balance of missing self-response.” Blood 112.6 (2008): 2369-2380. http://www.bloodjournal.org/cont…

4. Rajalingam, Raja. “The impact of HLA class I-specific killer cell immunoglobulin-like receptors on antibody-dependent natural killer cell-mediated cytotoxicity and organ allograft rejection.” Frontiers in immunology 7 (2016). https://www.ncbi.nlm.nih.gov/pmc…

5. Höglund, Petter, and Petter Brodin. “Current perspectives of natural killer cell education by MHC class I molecules.” Nature reviews. Immunology 10.10 (2010): 724.

6. He, Yuke, and Zhigang Tian. “NK cell education via nonclassical MHC and non-MHC ligands.” Cellular and Molecular Immunology 14.4 (2017): 321. https://www.ncbi.nlm.nih.gov/pmc…

7. Kadri, Nadir, et al. “Dynamic regulation of NK cell responsiveness.” Natural Killer Cells. Springer International Publishing, 2015. 95-114. https://www.researchgate.net/pro…

8. Carrillo-Bustamante, Paola, Rob J. de Boer, and Can Keşmir. “Specificity of inhibitory KIRs enables NK cells to detect changes in an altered peptide environment.” Immunogenetics (2017): 1-11. https://link.springer.com/conten…

9. Hiby, Susan E., et al. “Combinations of maternal KIR and fetal HLA-C genes influence the risk of preeclampsia and reproductive success.” Journal of Experimental Medicine 200.8 (2004): 957-965. http://jem.rupress.org/content/j…

10. Rätsep, Matthew T., et al. “Uterine natural killer cells: supervisors of vasculature construction in early decidua basalis.” Reproduction 149.2 (2015): R91-R102. https://www.researchgate.net/pro…

11. Rajalingam, Raja. “Variable interactions of recipient killer cell immunoglobulin-like receptors with self and allogenic human leukocyte antigen class I ligands may influence the outcome of solid organ transplants.” Current opinion in organ transplantation 13.4 (2008): 430-437.

12. Tran, T. H., et al. “No Impact of KIR‐Ligand Mismatch on Allograft Outcome in HLA‐Compatible Kidney Transplantation.” American Journal of Transplantation 13.4 (2013): 1063-1068. No Impact of KIR‐Ligand Mismatch on Allograft Outcome in HLA‐Compatible Kidney Transplantation

13. van Bergen, Jeroen, et al. “KIR‐ligand mismatches are associated with reduced long‐term graft survival in HLA‐compatible kidney transplantation.” American Journal of Transplantation 11.9 (2011): 1959-1964.; http://onlinelibrary.wiley.com/d…

14. Rajalingam, R., and H. M. Gebel. “KIR‐HLA Mismatching in Human Renal Allograft Transplantation: Emergence of a New Concept.” American Journal of Transplantation 11.9 (2011): 1771-1772. http://onlinelibrary.wiley.com/d…

15. Kunert, Kristina, et al. “KIR/HLA ligand incompatibility in kidney transplantation.” Transplantation 84.11 (2007): 1527-1533. KIR/HLA Ligand Incompatibility in Kidney Transplantation : Transplantation

16. Vampa, Maria Luisa, et al. “Natural killer-cell activity after human renal transplantation in relation to killer immunoglobulin-like receptors and human leukocyte antigen mismatch1.” Transplantation 76.8 (2003): 1220-1228. Natural killer-cell activity after human renal… : Transplantation

17. Nowak, Izabela, et al. “Killer immunoglobulin-like receptor (KIR) and HLA genotypes affect the outcome of allogeneic kidney transplantation.” PloS one 7.9 (2012): e44718. http://journals.plos.org/plosone…

18. Littera, Roberto, et al. “KIR and their HLA Class I ligands: Two more pieces towards completing the puzzle of chronic rejection and graft loss in kidney transplantation.” PloS one 12.7 (2017): e0180831. http://journals.plos.org/plosone…

19. Legaz, Isabel, et al. “KIR gene mismatching and KIR/C ligands in liver transplantation: consequences for short-term liver allograft injury.” Transplantation 95.8 (2013): 1037-1044. https://www.researchgate.net/pro…


What are all of the commercially available liquid biopsy tests for cancer?


There are two separable aspects to commercially available liquid biopsy cancer tests: one, who offers them, and two, who offers tests backed by externally and/or independently verified data.

Medical News, Journals, and Free CME (Healio.com) lists some of the major commercial players currently in the liquid biopsy cancer test market (see below from 1).

However, not all such tests on the market have proven clinical utility. In the US, a loophole in the relevant law, CLIA (Clinical Laboratory Improvement Amendments – Wikipedia), means that CLIA-accredited clinical testing labs can directly market so-called LDTs (Laboratory Developed Tests) to consumers without ever publishing data on these tests in peer-reviewed journals or sharing such data with regulators, provided they maintain their annual CLIA accreditation; this is the same loophole Theranos famously sought to exploit (2).

  • This is how, for example, in 2015 Pathway Genomics started selling a blood-based cancer test, CancerIntercept™ Detect, for what it characterized as ‘high-risk’ but asymptomatic individuals. This caught the attention of the FDA, which in short order sent them a letter seeking a response within 15 days, demanding, among other things, the test’s specificity and sensitivity, and an explanation of the basis for selling this test to the general public (undiagnosed patients) when their own web-site hosted a White Paper showing the test was based on results from those already diagnosed with cancer (3, 4). Pathway Genomics had previously received an FDA warning letter in 2010 (5) for peddling a home-use saliva collection kit, and it continues to offer this liquid biopsy cancer test (6) since the FDA hasn’t yet issued its final guidelines on LDTs.
  • In 2015, The Verge reported (7) that Admera Health and Strand Life Sciences were also selling direct-to-consumer cancer tests (not clear if liquid biopsy or not) without publishing any test data in peer-reviewed journals.

Liquid biopsy cancer test companies associated with either peer-reviewed publications or FDA approvals

Companies that have collaborated with academic researchers and published liquid biopsy cancer test data in peer-reviewed journals include

  • Guardant Health Inc. (8, 9).
  • Illumina (10).
  • Personal Genome Diagnostics (11, 12).
  • Roche Molecular Systems (13).

Note that the tests these companies may currently be selling may or may not be related to the data in any of these publications. This is also not intended to be a comprehensive publication list. For example, Baltimore-based Personal Genome Diagnostics is named in several peer-reviewed publications on liquid biopsy cancer assays done in collaboration with the Johns Hopkins University School of Medicine.

While Foundation Medicine (ovarian) and Illumina (colorectal) have FDA-approved oncology tests for formalin-fixed, paraffin-embedded tissue samples, those with FDA-approved liquid biopsy cancer tests as of August 2017 are

  • Roche Molecular Systems, Inc.’s cobas® EGFR Mutation Test v2 for defined Epidermal Growth Factor Receptor (EGFR) mutations in circulating free tumor DNA (cfDNA) in peripheral blood (14) for NSCLC (non-small cell lung cancer), approved June 2016.
  • Myriad Genetics’ BRACAnalysis CDx, for BRCA1 and BRCA2 variants commonly associated with breast cancer (15), approved December 2014.

Given the existing regulatory loophole around LDTs, where the FDA kicked the can down the road as recently as January 2017 (16), choosing to go the FDA route is obviously choosing to meet a somewhat higher bar.


1. Select Commercially Available Liquid Biopsy Assays.

2. Tirumalai Kamala’s answer to How do employees at Quest Diagnostics or LabCorp feel about the recent Theranos issues?

3. https://www.fda.gov/downloads/me…

4. The FDA thinks Pathway Genomics’ cancer blood test could be harmful

5. https://www.fda.gov/downloads/Me…

6. Pathway Genomics’ Response to FDA Letter on CancerIntercept™ Detect

7. Theranos isn’t the only diagnostics company exploiting regulatory loopholes

8. Kim, Seung Tae, et al. “Prospective blinded study of somatic mutation detection in cell-free DNA utilizing a targeted 54-gene next generation sequencing panel in metastatic solid tumor patients.” Oncotarget 6.37 (2015): 40360. https://www.ncbi.nlm.nih.gov/pmc…

9. Lanman, Richard B., et al. “Analytical and clinical validation of a digital sequencing panel for quantitative, highly accurate evaluation of cell-free circulating tumor DNA.” PloS one 10.10 (2015): e0140712. http://journals.plos.org/plosone…

10. Forshew, Tim, et al. “Noninvasive identification and monitoring of cancer mutations by targeted deep sequencing of plasma DNA.” Science translational medicine 4.136 (2012): 136ra68-136ra68. https://pdfs.semanticscholar.org…

11. Tie, Jeanne, et al. “Circulating tumor DNA analysis detects minimal residual disease and predicts recurrence in patients with stage II colon cancer.” Science translational medicine 8.346 (2016): 346ra92-346ra92. https://www.researchgate.net/pro…

12. Direct detection of early-stage cancers using circulating tumor DNA

13. Scherer, Florian, et al. “Distinct biological subtypes and patterns of genome evolution in lymphoma revealed by circulating tumor DNA.” Science translational medicine 8.364 (2016): 364ra155-364ra155. https://pdfs.semanticscholar.org…

14. cobas® EGFR Mutation Test v2 – P150047

15. Premarket Approval (PMA)

16. https://www.fda.gov/downloads/me…


Given that vaccines cause immunity, then why are unvaccinated people considered a danger to the vaccinated ones?

The unvaccinated could be a danger both to the population at large and to the vaccinated. Understanding how requires understanding the roles of Herd immunity – Wikipedia (1) and waning immunity.

Public health risk of an infection is its cost in terms of potential morbidity (illness cost) and/or mortality (fatality cost). In such a situation, vaccines benefit

  • Individual (direct) as well as population (indirect) health.
  • Vaccinated (direct) as well as unvaccinated (indirect).

Indirect benefit or the value, indeed the concept, of Herd Immunity emerged in the wake of mass vaccinations against smallpox. Starting sometime in the 16th century, many European countries experienced rolling waves of smallpox epidemics, often as frequently as every 2 to 5 years, that left tens of thousands dead in each cycle. Sweden made smallpox vaccination compulsory for children in 1816 (2), with mass vaccination reducing smallpox infections by >100-fold already by 1822 (2).

Thus, though the phrase ‘Herd Immunity‘ itself was coined much later, in 1923 (3), the concept was appreciated already in the early 19th century, when such dramatic effects at the population level revealed that mass vaccination could not only protect an individual but could also prevent infection in the population at large, including among those not vaccinated, simply due to the overall reduction in the number of those infected. In other words, Herd Immunity signifies the reduction of infection transmission within a population, i.e., of the chance of a susceptible person coming in contact with someone infected.

Some of the most important factors that influence the risk from those unvaccinated for a specific infection include

  • Relative proportion of unvaccinated within a population.
  • Infectiousness of a given infection, i.e., number of people directly infected from one infected person.
  • Effectiveness of vaccine in reducing infection transmission.
  • Length of immunity among those vaccinated or immune from past natural infection(s).
  • Whether humans alone are the sole reservoir for the infection or not.

Since some vaccines are simply more effective than others, each of these factors differentially influences the risk for any given infection that the unvaccinated pose to the rest of the population. Understanding how Herd Immunity within a given population provides protection for those choosing to not get vaccinated (see below from 4) helps better situate this risk.
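The textbook (homogeneous-mixing) simplification shows how two of these factors, infectiousness and vaccine effectiveness, combine: an infection with basic reproduction number R0 stops spreading once a fraction 1 - 1/R0 of the population is immune, and an imperfect vaccine raises the required coverage accordingly. A minimal sketch (the R0 and effectiveness values below are illustrative, taken from commonly quoted ranges for measles):

```python
# Standard herd immunity threshold under homogeneous mixing:
# immunity in a fraction 1 - 1/R0 of the population blocks sustained
# transmission; with vaccine effectiveness VE < 1, required vaccination
# coverage rises to (1 - 1/R0) / VE.

def herd_immunity_threshold(r0):
    """Fraction of the population that must be immune."""
    return 1 - 1 / r0

def critical_coverage(r0, vaccine_effectiveness):
    """Fraction of the population that must be vaccinated."""
    return herd_immunity_threshold(r0) / vaccine_effectiveness

# Measles is often quoted with R0 ~ 12-18; take 15 and 97% effectiveness:
print(round(herd_immunity_threshold(15), 3))   # 0.933
print(round(critical_coverage(15, 0.97), 3))   # 0.962
```

Note that a coverage above 1.0 would mean herd immunity is unattainable with that vaccine alone, which is why highly infectious diseases leave so little room for the unvaccinated.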

Outbreaks/epidemics/pandemics reveal Herd Immunity‘s value in limiting infection transmission when enough people within a population happen to be immune to an infectious agent, either because they developed resistance after being already previously exposed (natural infection) or from being prophylactically vaccinated. A 1968 Rhode Island measles outbreak (5) illustrates this property.

By 1968, an intensive state-wide measles eradication program begun in January 1966 had sharply reduced measles transmission. A measles outbreak began in the Fox Point neighborhood of the city of Providence in September, 1968, mostly concentrated in two urban communities dominated by newly arrived Portuguese immigrants, who were mainly from Azores and Cape Verde islands, and most importantly, were largely unvaccinated against measles. Spanning 7 elementary schools, during the outbreak, a total of 1300 largely unvaccinated Portuguese and 1000 largely vaccinated non-Portuguese children had the opportunity to mingle with each other and spread the measles virus. Yet of the 91 children who got sick, only 3 were non-Portuguese. In other words, in a real-world chance experiment where all had similar exposure, measles rates were vastly higher in those non-vaccinated, 88 of 1300 (~6.8%), compared to those vaccinated, 3/1000 (0.3%), a textbook illustration of the epidemic theory that ‘the progress of an epidemic is regulated by the number of susceptibles and the rate of contact between infectious cases and susceptibles’ (6).
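The outbreak arithmetic can be recomputed directly from the numbers above:

```python
# Attack rates and relative risk from the 1968 Rhode Island measles
# outbreak figures quoted above: 88/1300 largely unvaccinated Portuguese
# children vs 3/1000 largely vaccinated non-Portuguese children.

unvaccinated_cases, unvaccinated_total = 88, 1300
vaccinated_cases, vaccinated_total = 3, 1000

ar_unvax = unvaccinated_cases / unvaccinated_total   # ~0.068
ar_vax = vaccinated_cases / vaccinated_total         # 0.003
relative_risk = ar_unvax / ar_vax

print(f"{ar_unvax:.1%} vs {ar_vax:.1%}, relative risk ~{relative_risk:.0f}")
# -> 6.8% vs 0.3%, relative risk ~23
```

A roughly 23-fold higher attack rate among the unvaccinated, despite similar exposure, is what makes this outbreak such a textbook illustration.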

Over the course of a lifetime, immunity against any given infectious agent usually wanes, regardless of whether the impetus for immunity was natural infection or vaccination. Since waning immunity is a function of time and immune health, the longer the time post-infection or -vaccination, the higher the transmission risk among those previously infected/vaccinated. Thus, in a given community, the aging/aged, the temporarily immunodeficient (due to radiation/chemo for cancer or post-transplant immunosuppressants, those on steroids for a variety of debilities, etc.) and those with any of a variety of immune disorders represent population subsets disproportionately vulnerable to the infection risk posed by the healthy who choose to remain unvaccinated.

The important take-away is that Herd Immunity isn’t a binary property (7), either absent or present, but rather a probability of transmission risk. Unless an infectious disease has been entirely eliminated, as smallpox has, transmission risk is never zero. The total number of unvaccinated within a community, combined with the reality of waning immunity, dictates that the greater the number of unvaccinated, the greater the transmission risk of a given infection within the entire community, including among the vaccinated. This is why the unvaccinated problem is also referred to as the Free-rider problem – Wikipedia.


1. Fox, John P., et al. “Herd immunity: basic concept and relevance to public health immunization practices.” American Journal of Epidemiology 94.3 (1971): 179-189. http://physwww.mcmaster.ca/~higg…

2. Fenner, Frank, et al. “Smallpox and its eradication.” (1988). http://apps.who.int/iris/bitstre…

3. Topley, W. W. C., and G. S. Wilson. “The Spread of Bacterial Infection. The problem of herd-immunity.” Epidemiology & Infection 21.3 (1923): 243-249. https://www.ncbi.nlm.nih.gov/pmc…

4. Metcalf, C. J. E., et al. “Understanding herd immunity.” Trends in immunology 36.12 (2015): 753-755. http://kilpatrick.eeb.ucsc.edu/w…

5. SCOTT, H. DENMAN. “The elusiveness of measles eradication: insights gained from three years of intensive surveillance in Rhode Island.” American journal of epidemiology 94.1 (1971): 37-42.

6. Serfling, Robert E. “Historical review of epidemic theory.” Human biology 24.3 (1952): 145-166.

7. Jamrozik, Euzebiusz, Toby Handfield, and Michael J. Selgelid. “Victims, vectors and villains: are those who opt out of vaccination morally responsible for the deaths of others?.” Journal of medical ethics (2016): medethics-2015. http://jme.bmj.com/content/medet…