How long will it be before humans no longer have to worry about organ donation to save lives? What can be done until then to mitigate the problem?



A perceived organ shortage has mistakenly seeped into the collective consciousness as a major healthcare issue, when the actual problem is an unsustainable increase in demand. Since transplants became a viable medical option, organ supply, far from being inadequate, gradually increased and now remains steady, while organ demand has kept increasing much more rapidly over the same period.

  • A great deal of organ demand is rooted in non-medical, lifestyle-related risk factors (type II diabetes, cardiovascular disease, obesity, metabolic syndrome). Reducing organ demand through sustained lifestyle changes (diet, exercise, no smoking, moderate alcohol intake) is thus more a socio-political than a medical problem.
  • Sustainably increasing organ supply requires solving a biotechnology problem: regenerative medicine delivering on its promise of growing tissues and organs in vitro from a patient’s own cells.

Actual Organ Donation Problem: Unsustainable Increase In Organ Demand & Not Inadequate Organ Supply

Chronic kidney disease offers an illustrative example. It eventually leads to ESRD (end stage renal disease), which requires kidney transplantation, and its underlying causes differ greatly between poor and affluent countries.

  • Poor countries tend to have fairly stable levels of urological diseases and glomerulonephritis, which are typically not lifestyle related.
  • On the other hand, chronic lifestyle-related diseases such as type II diabetes, cardiovascular disease (hypertension) and metabolic syndrome (obesity) increase dramatically as a country becomes more ‘developed’ (see below from 1).

When kidney transplants became a viable medical option in the late 1960s to early 1970s, kidney donations initially increased in response to demand. However, demand in affluent countries began increasing unsustainably in the mid-1980s and has stayed high ever since (see below figures on the situation in Europe, left, and the US, right, respectively, from 2 and 3). Organ supply simply could not keep pace with such a frenzied increase in demand.

In affluent countries such as the US as well as across much of Europe, decades of unhealthy lifestyles have led to dramatic increases in ‘diseases of affluence’ or ‘Western diseases’ (4), such as cardiovascular disease, type II diabetes, and metabolic syndrome (5).

Such chronic health conditions ultimately lead to end stage organ diseases such as ESRD (end stage renal disease), ESLD (end stage liver disease) and ESPD (end stage pulmonary disease), which can often only be alleviated by organ or tissue transplants.

Rather than being some kind of inevitability, the increasing prevalence of such ‘diseases of affluence’ is associated with modifiable lifestyle-related risk factors, whose reduction would reduce the need for transplants.


1. Nugent, Rachel A., et al. “The burden of chronic kidney disease on developing nations: a 21st century challenge in global health.” Nephron Clinical Practice 118.3 (2011): c269-c277.…


3. Knauf, Felix, and Peter S. Aronson. “ESRD as a window into America’s cost crisis in health care.” Journal of the American Society of Nephrology 20.10 (2009): 2093-2097.

4. Ezzati, Majid, et al. “Rethinking the “diseases of affluence” paradigm: global patterns of nutritional risks in relation to economic development.” PLoS medicine 2.5 (2005): e133.…

5. Forouzanfar, Mohammad H., et al. “Global, regional, and national comparative risk assessment of 79 behavioural, environmental and occupational, and metabolic risks or clusters of risks, 1990–2015: a systematic analysis for the Global Burden of Disease Study 2015.” The Lancet 388.10053 (2016): 1659-1724.…


Can the immune system fight infections without antibodies?


Defects in antibody production ensue from defects in the B cells that secrete them. Such defects don’t just compromise the body’s ability to fight off infections but can also lead to autoimmunity.

Do antibodies play a non-redundant role in controlling specific pathogens? Infecting experimental animal models genetically engineered to lack antibodies and observing the outcome is one way to find out. However, whether such results apply to the human condition remains an open question.

On the other hand, natural experiments in the form of humans born with primary antibody deficiencies (PAD) offer direct evidence of how well we can contain infections in their absence (1, 2).

Such studies reveal greater susceptibility not to infections per se but only to select ones, where B cells and antibodies appear to have specific, non-redundant defense functions (see examples of respiratory infections in the table below from 3). This means that even in their absence, other components of the immune system can effectively control other infections.

Note that the heavy-hitter respiratory infections, influenza and TB, are missing from this list, meaning even people lacking antibodies are capable of controlling them.


1. Vale, Andre M., and Harry W. Schroeder. “Clinical consequences of defects in B-cell development.” Journal of Allergy and Clinical Immunology 125.4 (2010): 778-787.…

2. Yazdani, R., et al. “Infectious and Noninfectious Pulmonary Complications in Patients With Primary Immunodeficiency Disorders.” Journal of investigational allergology & clinical immunology 27.4 (2017): 213-224.…

3. Jesenak, Milos, et al. “Pulmonary manifestations of primary immunodeficiency disorders in children.” Frontiers in pediatrics 2 (2014): 77.

Where do the source datasets come from for DNA-based heritage tests like Ancestry or 23andMe? Whose DNA was used to define each heritage or ancestry group?



As far as the procedure for getting results from DNA-based heritage tests is concerned, the formula’s down pat these days: send back spit or a cheek swab to a DNA testing company, get back results of one’s ‘ancestry’. However, in many cases the results are unlikely to even pertain to a customer’s ancestry, and it’s not by accident that details of the reference datasets these companies use to infer a customer’s various ancestral origins from their DNA remain far from clear.

The website Autosomal DNA testing comparison chart outlines what major commercial DNA testing companies do, but note the meager detail on the breadth (geographic range of populations) and depth (number of individuals from each geographic region) of the reference populations being used to infer ancestry.

Given the eagerness with which some prominent consumer genetic testing companies purvey ads that egregiously and unscientifically conflate genes with culture, such unexpected modesty is by design, not accident.

What those reference sets actually entail also remains a closely guarded secret. After all, they are the key to realizing just how difficult it is to infer one’s ancestry from DNA tests and just how tenuous such conjectures actually are. Why would these companies share such self-destructive information with the public?

Despite the lack of publicly available information, we can infer that different companies use different reference datasets (as well as different statistical models), since the same sample sent to different companies yields different results (1, 2).
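The dependence of ancestry calls on the reference panel can be sketched with a toy model. Everything below is made up for illustration: the group labels, the allele frequencies and the four-SNP genotype are hypothetical, and real companies use hundreds of thousands of markers with far more elaborate statistical models. The principle, however, is the same: the very same genotype can be assigned to different groups depending on whose DNA was used to define each group.

```python
import math

def log_likelihood(genotype, freqs):
    """Log-likelihood of a genotype (0/1/2 alt alleles per SNP) under
    per-SNP alt-allele frequencies, assuming Hardy-Weinberg proportions."""
    ll = 0.0
    for g, p in zip(genotype, freqs):
        if g == 0:
            ll += math.log((1 - p) ** 2)      # homozygous reference
        elif g == 1:
            ll += math.log(2 * p * (1 - p))   # heterozygous
        else:
            ll += math.log(p ** 2)            # homozygous alternate
    return ll

sample = [1, 2, 0, 1]  # one customer's genotype at 4 SNPs (hypothetical)

# Two companies, same group labels, different reference individuals,
# hence different estimated allele frequencies per group.
panel_a = {"Group1": [0.1, 0.8, 0.2, 0.3], "Group2": [0.5, 0.5, 0.5, 0.5]}
panel_b = {"Group1": [0.5, 0.4, 0.6, 0.5], "Group2": [0.2, 0.9, 0.1, 0.4]}

call_a = max(panel_a, key=lambda k: log_likelihood(sample, panel_a[k]))
call_b = max(panel_b, key=lambda k: log_likelihood(sample, panel_b[k]))
print(call_a, call_b)  # the same sample gets two different ancestry calls
```

Under panel A the sample's genotype is most likely under Group1's frequencies; under panel B, Group2's. Nothing about the customer changed, only the reference data did, which is exactly why undisclosed reference panels matter.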

For a company to credibly claim it can use DNA tests to distinguish one population from others, the underlying reference sets would minimally require either

  • DNA samples from endogamic populations that remained and remain geographically isolated for hundreds or even thousands of years, each population breeding only within itself.
  • Sufficient amounts of historical DNA samples from discrete endogamic populations, one set of samples for each such population, spanning different periods of time and different regions.

Clearly, it is practically impossible to obtain such reference samples. Apart from remote and isolated Amazon tribes, island populations and the like, most humans alive at present aren’t living where their ancestors lived; rather, humans migrated far and wide while interbreeding relatively freely. Human history also features plenty of wars, famines and epidemics that stoked further migrations over the hundreds of years some of these companies claim their results span. That makes it difficult to impossible to source reference sets that could accurately ascertain ancestry.

Ancestry frankly admits these unavoidable limitations in sourcing reference samples in its white paper (see below from 3),

‘We first create a list of candidate samples to include in the reference panel. Under perfect circumstances, we would construct our reference panel using ancient DNA samples of the true ancestors for each person likely to be an AncestryDNA customer. For example, since many of our customers have ancestors from the United Kingdom, we would prefer to have samples in our reference panel from the Angles and Saxons, who represent the historical populations present in northwestern Europe.

Unfortunately, it is not possible to sample historical populations. We must instead rely on DNA samples collected from individuals alive today who can trace their ancestry to a single geographic location. When asked to trace familial origins, most of us can only reliably trace one to five generations back in time, making it difficult to find individuals with knowledge about distant ancestry. This is because as we go back in time, historical records become sparse, and the number of ancestors we must follow doubles each generation.’

One could thus reasonably infer that, rather than ancestry, commercial DNA test results represent the current geographic distribution of various population groups, living wherever they happened to be living when the companies collected their samples. A customer’s DNA matches inform them not of where their ancestors might have come from in the past, i.e., their ancestry, but rather of the current geographic distribution of similar patterns of the DNA bits each company happens to probe for. Is that DNA-based heritage or ancestry, i.e., hints of the ancestral people customers may have descended from? It sounds far from it.


1. How DNA Testing Botched My Family’s Heritage, and Probably Yours, Too

2. Comparing admixture results from AncestryDNA, 23andMe and Family Tree DNA


Why doesn’t the mother’s antibody IgG attack the fetus? If the antibody is suppressed, then how does it provide immunity to the fetus? Why are immune suppression drugs given in second delivery?



Given that the fetus is antigenically different from the mother and should provoke a rejecting immune response from her immune system, how the mother immunologically tolerates the fetus over the course of a ~9 month pregnancy remains a central, as-yet not fully resolved mystery of the immune system.

Since the 1950s, two common arguments for how the mother’s immune system tolerates a semi-allogeneic fetus have asserted that either the mother’s immune system is suppressed for the duration of the pregnancy or the uterus and the growing fetus it carries are immune privileged.

A body that’s immunologically unprotected and unprotectable for ~9 months would, in evolutionary terms, be essentially open sesame for pathogens to exploit, in which case we couldn’t possibly have evolved. Fortunately, though ever so slowly, such evolutionarily untenable arguments are headed the way of the dodo.

An abundance of data instead shows that, rather than being immunosuppressed, the immune systems of pregnant women function normally against a variety of immune stimuli, including pathogens (1). In fact, data now suggest that inadequate recognition of fetal antigens by maternal natural killer (NK) cells may even lead to failed pregnancy (2).

Further, passive transfer of antibodies from mother to fetus (passive immunity) is a well-recognized feature of early life immunity. However, transfer of such protective antibodies represents only one side of a finely tuned double-edged sword, the other side being transfer of fetal antigen-specific antibodies such as IgG that could even be lethal for the fetus.

Why doesn’t the mother’s antibody IgG attack the fetus? Why are immune suppression drugs given in second delivery?

As for why a mother’s IgG antibodies don’t attack the fetus: antibodies don’t attack randomly. A hallmark of the adaptive immune system’s T and B cells is that they express somatically rearranged cell-surface receptors that bind specific antigens. Such somatic rearrangement is the unique feature that endows the adaptive immune system with the capacity for anticipatory defense, such that the body has circulating T and B cells capable of binding a seeming universe of antigens it hasn’t previously encountered.

Antibodies are secreted by B cells; in this case, the antibody in question would be a maternal antibody specific for a fetal antigen. Several steps are necessary to get to fetal antigen-specific maternal IgG.

For a mother’s B cell to bind a fetal antigen, it first needs to come in contact with it. That would entail abnormal physiological changes such as fetomaternal or placental hemorrhage causing fetal red blood cells to leak into maternal blood, such physical contact being the trigger necessary to get the initial immune response going.

Well-known examples of situations where fetal antigens provoke a strong maternal immune response pertain to red blood cell antigens, such as those of the Rh blood group system, expressed by the fetus but not the mother.

In any case, the initial antibody response is IgM. For the B cell to instead secrete IgG antibodies against that antigen, it needs to undergo class switch recombination (immunoglobulin class switching), which requires help from a T cell that bound a piece of the same antigen the B cell bound. This process is called cognate T cell help, where the help consists of specific receptor-ligand pairs binding on the surface of the interacting T and B cells as well as specific cytokines secreted by the T cell.

The requirement for cognate T cell help to make IgG antibody dramatically reduces the probability that a first encounter with a triggering fetal antigen would yield an antibody titer high enough to cause damage, since T and B cells of any given specificity, i.e., specific for any given antigen, typically occur at frequencies on the order of 1 in a million. On the other hand, repeated triggers (immunizations) from successive pregnancies across a well-known blood type difference can amplify the initial immune response and the associated anti-fetal antigen IgG antibodies above the threshold capable of causing damage (see figure below from 3).

This pattern is the reason for medical interventions with subsequent pregnancies in the case of such blood type mismatches.


1. Arck, Petra C., and Kurt Hecher. “Fetomaternal immune cross-talk and its consequences for maternal and offspring’s health.” Nature medicine 19.5 (2013): 548.…

2. Moffett, Ashley, Olympe Chazara, and Francesco Colucci. “Maternal allo-recognition of the fetus.” Fertility and sterility 107.6 (2017): 1269-1272.

3. Kumpel, Belinda M., and M. S. Manoussaka. “Placental immunology and maternal alloimmune responses.” Vox sanguinis 102.1 (2012): 2-12.…

Are there any negative consequences from using a non-fully matched blood transfusion?


Not-fully matched blood transfusions yield adverse consequences if and when such individuals later need transplants, namely antibodies as well as allospecific memory T cells, both of which target products of the HLA locus (aka the MHC locus) (1). The risk of such sensitization is greater when the blood used for transfusion isn’t depleted of leukocytes.

With more confirmed disease associations than any other region, HLA genes are the most polymorphic in the human genome (4), and the crucial role of their diversity in conferring resistance to a broader range of pathogens suggests a clear role for balancing selection at this locus (5).

On the other hand, transplantation is a human innovation, not a selection factor that shaped human evolution, which makes HLA matching one of its central fulcrums, since HLA differences such as those in not-fully matched blood transfusions end up triggering strong adaptive (T and B cell) immune responses (6).

In the transplant setting, individuals with anti-HLA antibodies and/or allospecific memory T cells are called ‘sensitized’. Having such antibodies or cells is a problem in transplants because they (7, 8, 9, 10),

  • Dramatically reduce the scope for a safe donor match.
  • Increase the risk for antibody-mediated rejection of transplant(s).


1. Scornik, J. C., and H‐U. Meier‐Kriesche. “Blood transfusions in organ transplant patients: mechanisms of sensitization and implications for prevention.” American Journal of Transplantation 11.9 (2011): 1785-1791.

2. Dausset, Jean. “Leuco-agglutinins. IV. Leuco-agglutinins and blood transfusion.” Vox Sang 4 (1954): 190-198.

3. Dausset, Jean. “Iso-leuco-anticorps.” Acta haematologica 20.1-4 (1958): 156-166.

4. 1000 Genomes Project Consortium. “A map of human genome variation from population-scale sequencing.” Nature 467.7319 (2010): 1061.…

5. Karlsson, Elinor K., Dominic P. Kwiatkowski, and Pardis C. Sabeti. “Natural selection and infectious disease in human populations.” Nature Reviews Genetics 15.6 (2014): 379.…

6. Kransdorf, Evan P., et al. “HLA Population Genetics in Solid Organ Transplantation.” Transplantation 101.9 (2017): 1971-1976.

7. Mehra, Mandeep R., et al. “Allosensitization in heart transplantation: implications and management strategies.” Current opinion in cardiology 18.2 (2003): 153-158.

8. Gebel, Howard M., Robert A. Bray, and Peter Nickerson. “Pre‐transplant assessment of donor‐reactive, HLA‐specific antibodies in renal transplantation: contraindication vs. risk.” American Journal of Transplantation 3.12 (2003): 1488-1500.

9. Terasaki, Paul I., and Junchao Cai. “Human leukocyte antigen antibodies and chronic rejection: from association to causation.” Transplantation 86.3 (2008): 377-383.

10. Mehra, N. K., and A. K. Baranwal. “Clinical and immunological relevance of antibodies in solid organ transplantation.” International journal of immunogenetics 43.6 (2016): 351-368.

What are the signs that the BCG vaccine was well administered?


Perhaps the trickiest injection route, in that it takes longer than other injection routes and requires specific training and skill, the BCG vaccine is licensed for intradermal (ID) administration. The sign of correct BCG administration is formation of a bleb (wheal).

Knowledge of human skin anatomy and especially thickness of various skin layers is essential for correct ID injection.

The human epidermis is typically ~0.2mm thick, except in the palms and soles where it can range from 0.8 to 1.4mm. Below the epidermis, the dermis ranges from 1.5 to 3mm thick. Below the dermis is subcutaneous fat tissue, the hypodermis, whose thickness varies considerably in different parts of the body and, obviously, between individuals (see below from 1).

Too shallow, and the injection breaks through the epidermis and the material spills out. At the wrong angle and deeper, the material ends up subcutaneous (SC); deeper still and, in rare cases depending on age, weight and site, it could even end up intramuscular (IM) (see below from 2).

Procedure for proper ID injection:

  • For ID injection, the needle is inserted into the skin at an angle parallel to the surface, typically the volar surface of the forearm or deltoid surface of the upper arm.
  • Usually a short-bevel, fine gauge needle such as 27 gauge is used; the bevel faces up on insertion and the needle goes in at a ~5 to 15 degree angle into slightly stretched skin.
  • The entire bevel of the needle should penetrate the skin.
  • Once the bevel penetrates the skin all the way through, ~3mm in, the material is injected, causing a bleb or wheal to form as the injected fluid stretches the epidermis and its basement membrane above the injection site.
  • If material leaks onto the skin, penetration was too shallow. If a bleb doesn’t form, penetration was too deep and the material likely ended up SC.
  • A bleb indicates an injection went in ID rather than SC (see below from 3).

This YouTube video shows ID injection into the volar forearm: How To Do an Intradermal Injection


1. Lambert, Paul Henri, and Philippe E. Laurent. “Intradermal vaccine delivery: will new delivery systems transform vaccine administration?.” Vaccine 26.26 (2008): 3197-3208.…

2. Route of administration

3. Kis, Elsa E., Gerhard Winter, and Julia Myschik. “Devices for intradermal vaccination.” Vaccine 30.3 (2012): 523-538.

Why are people with hearing loss more prone to dementia?



The hearing loss-dementia link is only recently recognized, and whether age-related hearing loss causes dementia remains unclear. Regardless of the nature of their association, since most age-related hearing loss is peripheral, not central, the link has immense practical importance given the rapidly increasing life expectancies and aging populations the world over. If simple, relatively low-tech interventions such as hearing aids help prevent or mitigate aspects of cognitive decline, an economical yet tremendous medical benefit could be close at hand.

This answer* summarizes

  • Conceptual models currently used to explain the link between age-related hearing loss and cognitive decline.
  • Research on whether hearing aids help reverse cognitive decline associated with age-related hearing loss.
  • History of research on the age-related hearing loss-cognitive decline link.
  • Epidemiological data supporting the age-related hearing loss-cognitive decline link.

* For clarity, except where necessary, the answer uses the term cognitive decline (process), not dementia (outcome).

Conceptual Models Currently Used to Explain Age-related Hearing Loss-Cognitive Decline Link

A currently prevailing hypothesis presumes age-related hearing loss forces the person to expend an increasing amount of mental effort not only in straining to listen and understand what others around them are saying but also in trying to discern signal from noise in everyday life (social events, TV/radio/online, public announcements in airports, etc., ambient sounds and the like).

Such sustained unnatural effort is considered to set up a vicious cycle where certain parts of the brain get strained while working memory is weakened.

While there is as yet no definitive evidence supporting either tack, the main difference between the two frameworks linking age-related hearing loss and cognitive decline is whether (see below, first, from 1) or not (see below, next, from 2) the two processes have a common trigger.

Could Hearing Aids Help Reverse Some Types of Cognitive Decline Associated with Age-Related Hearing Loss?

While a couple of studies have shown hearing aids deployed for age-related hearing loss improved working memory (3, 4), definitive evidence for hearing restoration stemming or stopping cognitive decline is as yet (circa 2018) lacking.

In a first, a randomized clinical trial, begun on January 4, 2018 and running through May 2022, is recruiting ~850 cognitively normal 70 to 84-year-old individuals with hearing loss to be (see below from 5)

‘randomized 1:1 to the hearing intervention (hearing needs assessment, fitting of hearing devices, education/counseling) or successful aging intervention (individual sessions with a health educator covering healthy aging topics). Post baseline, participants will be followed semi-annually for 3 years’

History of Research on Age-related Hearing Loss-Cognitive Decline Link

The idea that a ‘peripheral sensory defect’ such as sight or hearing might be linked to age-related cognitive decline was apparently first proposed in 1964 (6), briefly explored in the 1980s (7, 8, 9, 10) and then seemingly forgotten (11), until a series of groundbreaking collaborative observational studies by scientists at Johns Hopkins School of Medicine and the National Institute on Aging (NIA) (12, 13, 14, 15) between 2011 and 2013 revived interest in this idea.

Epidemiological Data Supporting Age-related Hearing Loss-Cognitive Decline Link

The gist from a recent flurry of epidemiological studies is that the relative risk of dementia from hearing loss is higher than that from any other individual risk factor (16, 17, 18; see below from 19).

‘The risk of hearing loss for dementia in the meta-analysis of three studies,65–67 which we did for this Commission (pooled RR 1·94, 95% CI 1·38–2·73; figure 3), is not only higher than the risk from other individual risk factors, but it is also pertinent to many people because it is highly prevalent, occurring in 32% of individuals aged older than 55 years.91 Its high RR and prevalence explains the high PAF [population attributable fraction]. We have used the prevalence of hearing loss in individuals older than 55 years to calculate PAF because this age was the youngest mean age in which presence of hearing loss was shown to increase dementia risk.67 Hearing loss is therefore grouped with the midlife risk factors, but evidence suggests that it continues to increase dementia risk in later life.’
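The population attributable fraction (PAF) quoted above follows from Levin's formula, which combines a risk factor's relative risk (RR) with its prevalence (p). As a rough back-of-the-envelope check, plugging in the Commission's pooled figures for hearing loss (RR 1.94, prevalence 32%), and ignoring any adjustment for overlap with other risk factors, gives a PAF of roughly 23%:

```python
def levin_paf(prevalence: float, relative_risk: float) -> float:
    """Population attributable fraction via Levin's formula:
    PAF = p(RR - 1) / (1 + p(RR - 1))."""
    excess = prevalence * (relative_risk - 1)
    return excess / (1 + excess)

# Pooled figures for hearing loss from the Lancet Commission quote above
paf = levin_paf(0.32, 1.94)
print(f"PAF ≈ {paf:.1%}")  # roughly 23%, before any adjustment for risk-factor overlap
```

This illustrates the quote's point: a modest relative risk combined with high prevalence yields a large attributable fraction.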

However, mutually antagonistic factors complicate accurately assessing the extent to which hearing loss and dementia are linked, with one set of factors contributing to underestimating the extent of hearing loss in the population at large, and another to overestimating the link between hearing loss and cognitive decline.

  • Hearing loss underestimation: Hearing loss may be considerably underestimated since it tends to be under-diagnosed and under-treated (20, 21). People also often do not use prescribed hearing aids (22), which may exacerbate the propensity for cognitive decline.
  • Hearing loss-cognitive decline link overestimation: There is a catch-22 aspect to measuring cognition and hearing loss, since tests assessing the former frequently rely on intact hearing. Such tests may thus overestimate the degree of cognitive impairment in those with hearing loss (as assessed by pure tone audiometry, the gold standard clinical measure), especially tests that require auditory function (2).


1. Lin, Frank R., and Marilyn Albert. “Hearing loss and dementia–who is listening?.” (2014): 671-673.…

2. Wayne, Rachel V., and Ingrid S. Johnsrude. “A review of causal mechanisms underlying the link between age-related hearing loss and cognitive decline.” Ageing research reviews 23 (2015): 154-166.

3. Doherty, Karen A., and Jamie L. Desjardins. “The benefit of amplification on auditory working memory function in middle-aged and young-older hearing impaired adults.” Frontiers in psychology 6 (2015): 721.

4. Qian, Zhen Jason, et al. “Hearing aid use is associated with better mini-mental state exam performance.” The American Journal of Geriatric Psychiatry 24.9 (2016): 694-702.

5. Aging and Cognitive Health Evaluation in Elders (ACHIEVE) – Full Text View

6. Kay, D. W., M. Roth, and P. Beamish. “Old age mental disorders in Newcastle upon Tyne. II. A study of possible social and medical causes.” The British journal of psychiatry: the journal of mental science 110 (1964): 668.…

7. Herbst, Katia Gilhome, and Charlotte Humphrey. “Hearing impairment and mental state in the elderly living at home.” Br Med J 281.6245 (1980): 903-905.…

8. Weinstein, Barbara E. “Hearing loss and senile dementia in the institutionalized elderly.” Clinical Gerontologist 4.3 (1986): 3-15.

9. Peters, Christie A., Jane F. Potter, and Susan G. Scholer. “Hearing impairment as a predictor of cognitive decline in dementia.” Journal of the American Geriatrics Society 36.11 (1988): 981-986.

10. Uhlmann, Richard F., et al. “Relationship of hearing impairment to dementia and cognitive dysfunction in older adults.” Jama 261.13 (1989): 1916-1919.

11. Valentijn, Susanne AM, et al. “Change in sensory functioning predicts change in cognitive functioning: Results from a 6‐year follow‐up in the Maastricht Aging Study.” Journal of the American Geriatrics Society 53.3 (2005): 374-380. https://cris.maastrichtuniversit…

12. Lin, Frank R., et al. “Hearing loss and incident dementia.” Archives of neurology 68.2 (2011): 214-220.…

13. Lin, Frank R., et al. “Hearing loss and cognition in the Baltimore Longitudinal Study of Aging.” Neuropsychology 25.6 (2011): 763.…

14. Lin, Frank R., et al. “Hearing loss prevalence and risk factors among older adults in the United States.” Journals of Gerontology Series A: Biomedical Sciences and Medical Sciences 66.5 (2011): 582-590.…

15. Lin, Frank R., et al. “Hearing loss and cognitive decline in older adults.” JAMA internal medicine 173.4 (2013): 293-299.…

16. Thomson, Rhett S., et al. “Hearing loss as a risk factor for dementia: A systematic review.” Laryngoscope investigative otolaryngology 2.2 (2017): 69-79.

17. Loughrey, David G., et al. “Association of age-related hearing loss with cognitive function, cognitive impairment, and dementia: a systematic review and meta-analysis.” Jama Otolaryngology–head & Neck Surgery 144.2 (2018): 115-126.…

18. Ford, Andrew H., et al. “Hearing loss and the risk of dementia in later life.” Maturitas 112 (2018): 1-11.

19. Livingston, Gill, et al. “Dementia prevention, intervention, and care.” The Lancet 390.10113 (2017): 2673-2734.…

20. Davis, Adrian, et al. “Acceptability, benefit and costs of early screening for hearing disability: a study of potential screening tests and models.” HEALTH TECHNOLOGY ASSESSMENT-SOUTHAMPTON- 11.42 (2007).…

21. Amieva, Hélène, et al. “Self‐reported hearing loss, hearing aids, and cognitive decline in elderly adults: A 25‐year study.” Journal of the American Geriatrics Society 63.10 (2015): 2099-2104.…

22. Hartley, David, et al. “Use of hearing aids and assistive listening devices in an older Australian population.” Journal of the American Academy of Audiology 21.10 (2010): 642-653.

How does a Western diet cause inflammation? How is this linked to cancer, and which immune cells are mediators, and which human organs are most susceptible?

In a nutshell, food components and additives in the ‘Western diet’ actively change the gut microbiota, apparently for the worse, not the better, which leads to inappropriate and/or inappropriately persisting inflammation. Since microbial antigens and metabolites can access the circulation and hence reach every part of the body, systemic, not merely local, adverse consequences such as metabolic syndrome ensue. Risk for diseases such as cancer increases as a consequence, the balance tilting from health to disease during the inevitable trade-offs that are so much a part of the processes that define life, growth, survival and reproduction.

This answer briefly summarizes

  • How proper understanding of inflammation is essential for understanding its role in human health.
  • Definition of Western diet.
  • Illustrative examples of how Western diet ingredients appear to negatively impact human gut microbiota.


Inflammation is already a contested word in the immunology lexicon, so it is no wonder that once it seeped out, it became one of the most misunderstood words in popular usage as well. Some clarifications about inflammation are thus necessary first.

  • One, inflammation is an essential part of normal immune system function.
  • Two, inflammation is a process, not an outcome.

Inflammation becomes a problem when the body activates it inappropriately, manages it inappropriately and/or allows it to persist inappropriately, all of which occur as a result of underlying physiological problems.

Accidentally cut a finger with a knife and inflammation is a necessary part of the healing process. If the body manages this healing process inappropriately as a consequence of underlying health problems (e.g., diabetes), the wound could get infected, forcing other types of inflammatory mediators to engage. The process might then persist longer than is beneficial, and the infection might even spread, leading to even more serious health consequences.

Rather than a problem itself, inappropriate or inappropriately persisting inflammation is more a beacon indicating underlying health problems. In the case of diet-associated inflammation, the underlying issue is how diets shape gut microbiota and the consequences thereof.

Absent specific context, the overused marketing cliché ‘anti-inflammatory!’ that overwhelms so much of advertising is pretty much hot air, especially when used in isolation in service of a single ingredient or a few. Diets, rather than individual ingredients, enhance or diminish the propensity for persistent inflammation.

‘Western diet’

In a sign that bad habits also tend to be among the most addictive (1), even as the phrase ‘Western diet’ has taken on the attributes of a four-letter word in recent years, it only seems to globalize, relentlessly marching onward year after year and colonizing country after country: Chile (2), India (3), Malaysia (4), Mexico (5), the list goes on.

Rather than inevitable, much of this unsound status quo is the outcome of lifestyle changes that increasingly prioritize convenience, yoked to relentless, careful, decades-long, carpet-bombing-style marketing by the food industry. The broad tacit overlap between this industry’s goals and the economic considerations of policy makers helps further cement its business model. That US hospital food is largely processed is irony personified, and a sign of the extent to which processed food permeates American culture, presumably with the population’s tacit if not overt acquiescence.

Convenience, shelf-life and resistance to spoilage being its chief hallmarks, ‘Western diet’ is today used as shorthand for food with

  • A surfeit of empty calories (sugar and alcohol).
  • A high content of total, saturated and animal fat.
  • A low content of complex carbohydrates and dietary fiber.

In practical terms: a lot of red meat, especially processed meat; refined grains; high-fat dairy; a variety of highly processed foods, drinks and sweets with a lot of added sugar; and any number of food additives (preservatives, dietary emulsifiers, Thickening agent – Wikipedia, Leavening agent – Wikipedia, artificial sweeteners and the like) euphemistically deemed Generally recognized as safe – Wikipedia (GRAS), all while being low in whole grains, fruits and vegetables (see below from 6).

How Western diet Ingredients Appear Problematic: Negative Impact on Human Gut Microbiota

The illustrative example of GRAS ingredients helps reveal how Western diets tend to be harmful in the long run.

Largely ignorant of the importance of microbiota and how our diets influence them, the modern food industry assembled itself over the course of the 20th century, using the GRAS loophole to exponentially expand the ingredients it adds to processed foods to enhance certain flavors, tastes, and textures, and to prolong shelf-life and stability (see below from 7).

‘In the past five decades, the number of food additives has skyrocketed — from about 800 to more than 10,000. They are added to everything from baked goods and breakfast cereals to energy bars and carbonated drinks.’
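As a rough sanity check on the figures in that quote, the implied growth rate takes only a couple of lines to compute (a back-of-envelope sketch: the ~800 and >10,000 endpoints and the 50-year span are all approximations from the quote, not precise counts):

```python
# Back-of-envelope growth rate implied by the quoted figures:
# ~800 additives rising to >10,000 over roughly five decades.
start_count, end_count, years = 800, 10_000, 50

# Compound annual growth rate over the assumed 50-year span
cagr = (end_count / start_count) ** (1 / years) - 1
print(f"Implied compound growth: {cagr:.1%} per year")
```

That is, roughly a 5% per-year compound increase sustained for half a century.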

The food industry’s rules of thumb regarding the nutritive value and safety of various foodstuffs may have been wholly inadequate to the task at hand simply because knowledge of the importance and influence of microbiota didn’t emerge in mainstream science until the 2000s. Such research increasingly reveals how the diets we consume actively shape the microbes that inhabit our GI tracts.

A heavy reliance on GRAS food components is emblematic of Western diets. Analyze at random the ingredient list of some packaged food and one or more additives such as the polysaccharides carboxymethyl cellulose, carrageenan, maltodextrin, pectin and xanthan gum, or the emulsifiers soy lecithin and polysorbate 80, is sure to pop up, while non-nutritive sweeteners are pervasive in beverages and baked goods (see some examples below from 8).

An illustrative example of how GRAS entails a lot more than meets the eye: as an immunologist, my first introduction to Carrageenan – Wikipedia was not as a food ingredient deemed GRAS but as a reagent immunologists in the 1970s used to inhibit or deplete experimental rodents’ macrophages, an immune cell type that acts as a clearing house for dead and dying cells (9, 10, 11).

While the doses used for such biological effects would obviously be much higher than the amounts allowed in food, this is but one example to counter the inherently problematic notion that attributes of food additives deemed GRAS can or should be taken at face value.

Since modern food impacts everyone, motive matters. Rather than anything malign or nefarious, inadvertent use fueled by ignorance of their biological potential helps explain how processed foods chock-full of myriad food additives have come to occupy such an extensive and expansive space within the Western food landscape since the mid-20th century.

While some have fixated on the toxic potential of many food additives, their real influence may be something far more insidious and sweeping, namely, large-scale modification of our microbiota, especially of those inhabiting our GI tract, which is how they may link Western diets with inappropriate or inappropriately persisting inflammation.

Many GRAS ingredients have long been presumed inert, with our own cells deemed incapable of metabolizing them. However, where microbiota are concerned, the relatively recent explosion in research steadily reveals otherwise. Study after study, ingredient after ingredient is now being shown to be subject to metabolism by specific microbes (12, 13, 14, 15, 16, 17, 18).

One of the most recent of such studies revealed how virulent strains of Clostridium difficile outcompete their more benign counterparts in utilizing Trehalose – Wikipedia, a sugar that until the 1990s was hard to manufacture at scale but is now prevalent in all manner of processed foods since it improves shelf-life and texture (19, 20). The timeline of industrial trehalose use coincides with that of C. diff outbreaks.

That many diet-microbiota studies rely on experimental mouse models, which imply correlation, not causation, is an important caveat. Nevertheless, the emerging narrative plausibly argues that Western diets appear to engender residence within our GI tracts of microbiota species whose harmful propensities outweigh their benefits.


1. Opinion | What Cookies and Meth Have in Common

2. In Sweeping War on Obesity, Chile Slays Tony the Tiger

3. One Man’s Stand Against Junk Food as Diabetes Climbs Across India

4. In Asia’s Fattest Country, Nutritionists Take Money From Food Giants

5. A Nasty, Nafta-Related Surprise: Mexico’s Soaring Obesity

6. Hu, Frank B. “Dietary pattern analysis: a new direction in nutritional epidemiology.” Current opinion in lipidology 13.1 (2002): 3-9.…

7. Why the FDA doesn’t really know what’s in your food

8. Dar, H. Y., et al. “Immunomodulatory Effects of Food Additives.” Int J Immunother Cancer Res 3.1 (2017): 019-031.…

9. Catanzaro, Phillip J., Howard J. Schwartz, and Richard C. Graham Jr. “Spectrum and possible mechanism of carrageenan cytotoxicity.” The American journal of pathology 64.2 (1971): 387.…

10. Sawicki, John E., and Phillip J. Catanzaro. “Selective macrophage cytotoxicity of carrageenan in vivo.” International Archives of Allergy and Immunology 49.5 (1975): 709-714.

11. Rumjanek, V. M., S. R. Watson, and V. S. Sljivić. “A re-evaluation of the role of macrophages in carrageenan-induced immunosuppression.” Immunology 33.3 (1977): 423.…

12. Nickerson, Kourtney P., and Christine McDonald. “Crohn’s disease-associated adherent-invasive Escherichia coli adhesion is enhanced by exposure to the ubiquitous dietary polysaccharide maltodextrin.” PLoS One 7.12 (2012): e52132.…

13. Arthur, Janelle C., and Christian Jobin. “The complex interplay between inflammation, the microbiota and colorectal cancer.” Gut microbes 4.3 (2013): 253-258.…

14. Chassaing, Benoit, et al. “Dietary emulsifiers impact the mouse gut microbiota promoting colitis and metabolic syndrome.” Nature 519.7541 (2015): 92.…

15. Cani, Patrice D. “Metabolism: Dietary emulsifiers—sweepers of the gut lining?.” Nature Reviews Endocrinology 11.6 (2015): 319.

16. Chassaing, Benoit, et al. “Dietary emulsifiers directly alter human microbiota composition and gene expression ex vivo potentiating intestinal inflammation.” Gut (2017): gutjnl-2016.…

17. Statovci, Donjete, et al. “The impact of Western diet and nutrients on the microbiota and immune response at mucosal interfaces.” Frontiers in immunology 8 (2017): 838. The Impact of Western Diet and Nutrients on the Microbiota and Immune Response at Mucosal Interfaces

18. Roca-Saavedra, Paula, et al. “Food additives, contaminants and other minor components: effects on human gut microbiota—a review.” Journal of physiology and biochemistry (2017): 1-15.

19. Opinion | The Germs That Love Diet Soda

20. Collins, J., et al. “Dietary trehalose enhances virulence of epidemic Clostridium difficile.” Nature (2018).

How does a bone marrow transplant work? What is the cost?



Bone marrow is the source of blood-derived cells including Hematopoietic stem cell – Wikipedia, erythrocytes, megakaryocytes and platelets, as well as various types of innate and adaptive immune cells such as basophils, neutrophils, eosinophils, macrophages, dendritic cells, natural killer cells, and T and B cells.

Bone marrow transplant emerged from the recognition of Acute radiation syndrome – Wikipedia as a devastating effect of ionizing radiation from atomic weapons especially on blood-derived cells.

After mouse model studies suggested the spleen could somehow provide some protection against such harmful effects (1), further work implicated specific bone marrow-derived cells, now recognized as pluripotent hematopoietic stem cells, in such protection. These stem cells are mobilized to proliferate and differentiate into different types of mature blood-derived cells after strong doses of ionizing radiation kill off the latter en masse.

BMT plays a similar healing role when used for treating various blood disorders. Interest in BMT or Hematopoietic stem cell transplantation – Wikipedia as therapy took off after cumulative research showed donor-derived transplants, i.e., allogeneic transplants (Allotransplantation – Wikipedia), to be effective in treating acute leukemia (2). Today, BMT is also done for chronic myeloid leukemia, severe aplastic anemia, thalassemia and other types of blood disorders.

BMT Process: Recipient

The process entails preparing the recipient for transplant (conditioning regimen) followed by BMT.

The classical conditioning regimen is called myeloablative and requires ablation of the recipient’s own bone marrow. Allogeneic transplants usually use Cyclophosphamide – Wikipedia or Busulfan – Wikipedia plus Total body irradiation – Wikipedia. This drastic procedure serves several purposes:

  • Makes space for the donor bone marrow.
  • Reduces recipient immunocompetent cells to minimize the risk they would attack and destroy donor bone marrow-derived cells.
  • Gets rid of most of the bone marrow-derived cells and thus most of the leukemia as well.

If some leukemia cells are still left, allogeneic donor immune cells usually finish them off following transplantation through what’s called the graft-versus-leukemia effect.

The conditioning regimen is toxic to normal cells as well, so clinical decisions about chemotherapy and irradiation doses are forced to walk a delicate balance between killing as many tumor cells as possible while killing as few normal cells as possible. BMT for acute leukemia is usually done when the patient is assumed to be in complete remission, a situation presumed to represent minimal disease burden.

Non-myeloablative conditioning regimens emerged as an alternative with a lower risk of mortality from toxicity-related side effects. Here, less drastic chemotherapy and irradiation doses are used to deplete the recipient’s bone marrow-derived cells, while more effort goes into immunosuppression using purine analogues (Fludarabine – Wikipedia, Cytarabine – Wikipedia, Cladribine – Wikipedia) and Anti-thymocyte globulin – Wikipedia (3, 4).

Such gentler approaches reduce toxicity and expand the range of patients who can undergo BMT to include those older or with other health problems (5).

Sometimes some recipient hematopoietic cells survive the treatment, resulting in mixed hematopoietic chimerism after transplant, i.e., the patient’s own and the donor’s hematopoietic cells exist side by side in the patient’s body. Over time, most patients convert to complete donor chimerism.

BMT Process: Donor

Ideally the donor would be a blood relative with the same Human leukocyte antigen – Wikipedia (HLA) A, B, C and DR types. Transplants can also be autologous, where the recipient is their own donor.

Bone marrow is harvested from the donor’s large bones, such as the pelvis, under general anesthesia, using a large needle to directly access the center of the bone.

A much less invasive approach, more prevalent in recent years, uses mobilized peripheral blood stem cells. Donor stem cells are mobilized from the bone marrow by injecting donors with growth factors such as Granulocyte colony-stimulating factor – Wikipedia or Granulocyte-macrophage colony-stimulating factor – Wikipedia, which spur hematopoietic stem cell spillover into blood. Such mobilized cells are then harvested from blood using Apheresis – Wikipedia: blood is drawn from the donor into a separator device, which retains the fraction needed for the transplant and returns the rest to the donor.

Cord blood – Wikipedia is a third option. Its main limitation is a small volume of only ~100 ml, which makes it a viable option mainly for children.

BMT: Cost (US-specific)

In the US, BMT cost varies greatly depending on hospital and physician reimbursement arrangements. Assuming full insurance coverage, a 2017 report summarized average total costs for allogeneic and autologous BMT (see below from 6).


1. Jacobson, L. O., E. K. Marks, and M. J. Robson. “Effect of spleen protection on mortality following x-irradiation.” (1949): 1538-1543.

2. Thomas, E. Donnall, et al. “Bone-marrow transplantation.” New England Journal of Medicine 292.17 (1975): 895-902.

3. Kassim, A. A., et al. “Reduced-intensity allogeneic hematopoietic stem cell transplantation for acute leukemias:‘what is the best recipe?’.” Bone marrow transplantation 36.7 (2005): 565.…

4. Pingali, S. R., and R. E. Champlin. “Pushing the envelope—nonmyeloablative and reduced intensity preparative regimens for allogeneic hematopoietic transplantation.” Bone marrow transplantation 50.9 (2015): 1157.…

5. Alyea, Edwin P., et al. “Comparative outcome of nonmyeloablative and myeloablative allogeneic hematopoietic cell transplantation for patients older than 50 years of age.” Blood 105.4 (2005): 1810-1814.…


Does bee pollen help with allergies?


On the contrary, bee pollen could be dangerous for those with allergies. This answer briefly summarizes

  • What bee pollen is.
  • Allergy sources and content in bee pollen.
  • Case reports of serious health consequences for the allergic from consuming bee pollen.
  • Paltry scientific record on purported health benefits of bee pollen.

Bee Pollen

Bees accumulate a wide variety of pollen as granules, bee pollen, in pollen sacs on their hind legs as they flit from flower to flower sipping nectar. Beekeepers collect and sell these granules as health food, using screens at hive entrances to force the granules out of the pollen sacs when bees reenter hives.

According to one study, bee pollen gained popularity as a health food after Finnish marathon runners credited it with their successful performances in the 1972 Munich Olympics (1). In 1977, the Chicago Tribune and the United Airlines Mainliner magazine published reports touting bee pollen health benefits (2).

Allergy Sources & Content in Bee Pollen

Plants can be pollinated by wind, Anemophily – Wikipedia, insects, Entomophily – Wikipedia or animals (both invertebrate and vertebrate), Zoophily – Wikipedia.

Airborne pollen from wind-pollinated plants such as grasses and weeds (ragweed, mugwort, etc.) is a major source of respiratory allergy. Marketing bee pollen as health food relies on the misconception that it contains pollen from only the less allergenic insect-pollinated plants. However, bee pollen sources are actually far more diverse, and bee pollen contains pollen from wind-pollinated plants such as ash, oak, willow and poplar, often the source of allergens for those with allergic rhinitis (3).

  • Pollinating mechanisms are obviously far more porous in practice than imagined in theory. Wind-pollinated trees serve as major sources of pollen for honeybees in early spring for instance, a time when insect-pollinated plants aren’t a major source of pollen (4).
  • Structural similarities, Cross-reactivity – Wikipedia, between pollen from wind- and insect-pollinated plants render such distinctions moot, making the latter capable of triggering allergy episodes in some allergic people.

With no international standard, bee pollen products are highly variable, with guidelines or regional standards available only from Australia–New Zealand, Brazil, Bulgaria, Poland and Switzerland as of 2015 (3). With variability an inherent feature of the ways bees collect, store and process bee pollen, as well as of their sources, habitat and even season, it’s anyone’s guess what, if anything, could be done to standardize it.

The sheer amount of pollen in bee pollen could be why it’s reported to trigger strong allergic reactions and even anaphylaxis in those with common inhalant allergies such as hay fever. A single bee pollen pellet might contain as many as 2 million pollen grains, while one teaspoon of bee pollen is estimated to contain >2.5 billion grains. Specifically, one study estimated 1 gram of bee pollen to contain ~0.4 to 6.4 million plant pollen grains, amounts sizable enough to trigger serious allergy attacks (1). Bee pollen is estimated to contain thousands-fold more pollen than honey, which helps explain its allergic potential (1).
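To put those numbers side by side (a rough arithmetic sketch using only the estimates quoted above, which come from different sources and so should be treated as order-of-magnitude figures):

```python
# Rough comparison of the two pollen-count estimates quoted above.
grains_per_pellet = 2_000_000        # "as many as 2 million" per pellet
grains_per_teaspoon = 2_500_000_000  # ">2.5 billion" per teaspoon

# The teaspoon estimate corresponds to this many pellets' worth of pollen
pellets_equivalent = grains_per_teaspoon / grains_per_pellet
print(f"~{pellets_equivalent:,.0f} pellets' worth per teaspoon")
# prints: ~1,250 pellets' worth per teaspoon
```

Either way, a casual spoonful delivers pollen on a scale that dwarfs typical environmental exposure.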

Apart from pollen, bee pollen also contains bacteria, fungi, bee fecal material and insect body parts (1). Some bee pollen supplements have been found to have as much as 6% fungi such as Aspergillus and Cladosporium species (5, 6).

Case Reports of Serious Allergies from Bee Pollen

Fungi in bee pollen were found to cause anaphylaxis in patients with IgE sensitization to such molds (5, 6).

A review concluded bee pollen ingestion can be dangerous to allergic children (7). Bee pollen can cause immediate systemic allergic reactions (1, 2, 3, 6, 8) and even anaphylaxis (5, 9, 10) in people with a history of allergy.

A study that tried to figure out what patients may be reacting to in bee pollen found ~67% of those with atopy and IgE sensitization to olive tree, grass and mugwort pollen reacted positively to bee pollen skin prick tests, implying the bee pollen they tested contained pollen from all these allergy-associated plants (1).

Scientific Record of Bee Pollen Health Benefits is Paltry to Non-existent

Few scientific studies have rigorously examined bee pollen health benefits, and the few randomized clinical trials that exist failed to substantiate athletic or health benefits (11, 12).


1. Pitsios, Constantinos, et al. “Bee pollen sensitivity in airborne pollen allergic individuals.” Annals of Allergy, Asthma & Immunology 97.5 (2006): 703-706.

2. Cohen, Steven H., et al. “Acute allergic reaction after composite pollen ingestion.” Journal of Allergy and Clinical Immunology 64.4 (1979): 270-274.…

3. Shahali, Youcef. “Allergy after ingestion of bee-gathered pollen: influence of botanical origins.” Annals of Allergy, Asthma & Immunology 114.3 (2015): 250-251.

4. Keller, Irene, Peter Fluri, and Anton Imdorf. “Pollen nutrition and colony development in honey bees—Part II.” Bee World 86.2 (2005): 27-34.…

5. Greenberger, Paul A., and Michael J. Flais. “Bee pollen-induced anaphylactic reaction in an unknowingly sensitized subject.” Annals of allergy, asthma & immunology 86.2 (2001): 239-242.

6. Popescu, Florin-Dan, and Mariana Vieru. “The presence of aeroallergens in food products: a potential risk for the patient with allergic rhinitis.” Romanian Journal of Rhinology 8.29 (2018): 11-15.…

7. Martin-Munoz, M. F., et al. “Bee pollen: a dangerous food for allergic children. Identification of responsible allergens.” Allergologia et immunopathologia 38.5 (2010): 263-265.…

8. Cohen, Steven H., et al. “Acute allergic reaction after composite pollen ingestion.” Journal of Allergy and Clinical Immunology 64.4 (1979): 270-274.…

9. Geyman, John P. “Anaphylactic reaction after ingestion of bee pollen.” The Journal of the American Board of Family Practice 7.3 (1994): 250-252.

10. Choi, Jeong-Hee, et al. “Bee pollen-induced anaphylaxis: a case report and literature review.” Allergy, asthma & immunology research 7.5 (2015): 513-517.…

11. Steben, Ralph E., and Pete Boudreaux. “The effects of pollen and protein extracts on selected blood factors and performance of athletes.” The Journal of sports medicine and physical fitness 18.3 (1978): 221.

12. Ulbricht, Catherine, et al. “An evidence-based systematic review of bee pollen by the Natural Standard Research Collaboration.” Journal of dietary supplements 6.3 (2009): 290-312.…