What is the rationale behind the push to vaccinate adults against pertussis to protect babies as research is showing that the acellular version doesn’t stop transmission or carriage nor does it provide herd immunity? Is there a flaw in this research?

The bacterium Bordetella pertussis is the most common cause of pertussis, or whooping cough, and spreads as airborne droplets from an infected person's coughs and sneezes.

Some, though not all, countries that switched from whole-cell (wP) to acellular (aP) pertussis vaccines (Australia, Canada, Ireland, Spain, the UK, the US) saw in subsequent decades (1),

  • Protection that turned out to be both weaker and of shorter duration in adolescents.
  • Resurgence of pertussis infections, even among those previously vaccinated, i.e., poor herd immunity.

Not only does aP appear to induce weak immunological memory against pertussis in these countries, it even appears to be counter-productive in some age groups. Italy, Japan and Sweden are reported exceptions to these trends (1).

Why did some countries decide to switch from wP to aP in the first place?

In the US, that decision was not rooted in scientific rationale but was instead a knee-jerk reaction by vaccine manufacturers and government regulators to lawsuits in the 1980s.

Developed in the 1930s, wP is simply killed Bordetella pertussis bacteria, and it does cause injection site reactions. This was enough for some US parents in the 1980s to blame it for encephalopathy-associated febrile seizures and intellectual disability in their children. The resulting lawsuits drove most US pertussis vaccine manufacturers out of the market and made it urgent for the US to find a 'safer' vaccine alternative.

Simply removing endotoxin, the Bordetella pertussis cell wall component presumed responsible for the injection site reactions, presented itself as the easy solution, never mind that it wasn't the source of the febrile seizures, whose cause(s) remained unidentified. The eventual approved product, aP, contained some purified pertussis antigens but not the endotoxin.

Meantime, recent discoveries suggest switching from wP to aP was based on a fallacy, since the causal link between wP and febrile seizures seems to have been mistaken.

An exhaustive retrospective 2010 analysis concluded that children who had developed such seizures coincident with wP instead had Dravet syndrome – Wikipedia, caused by de novo mutations in the sodium channel gene SCN1A (2). This conclusion was strengthened by the observation that children who got wP before their first seizure had clinical outcomes similar to those who got it after, when the consequences of pre-seizure wP should have been worse if it indeed played a role in the syndrome.

Confirmation bias may thus have played a role in implicating wP in these febrile seizures. After all, wP is still standard for large swaths of the world's population, such as India, where febrile seizures haven't been reported after children get wP.

Nevertheless, naysayers would argue such retrospective studies include small numbers of patients, that they rely on previously recorded clinical data and thus may be subject to recall bias, and that they lack an unvaccinated control group.

Point is, the horse is already out of the barn and these days, parents in countries such as the US are less likely or even unlikely to agree to switch back to wP even as data accumulate that it does indeed protect better against pertussis infection than aP.

Constituents of effective human anti-pertussis immunity are still unknown

Being usually intended for use among the healthy population at large makes vaccines a much more expensive proposition than drugs and other medical interventions. Optimal scientific proof that a vaccine indeed prevents a given infection would require comparing infection rates for many years, maybe even lifetimes, in two groups, one that got the vaccine and one that didn't, an unimaginably expensive proposition that would make it impossible to get any vaccine approved.

Vaccinologists bridge this gap by developing protection measurements assumed to be reliable surrogates. Correlates of immunity/correlates of protection – Wikipedia are immunological assays that hopefully measure the relevant vaccine-induced immune response(s).

Countries such as the US relied on such pertussis-related correlates of protection when deciding to switch from wP to aP. Specifically, they concluded that aP induced immunity equivalent to wP's since both induced equivalent antibody titers against the pertussis antigens present in aP.

Contrary to anti-vaxxer conspiracy theories, vaccines are actually loss-leaders rather than moneymaking bonanzas for pharma companies. It isn’t by accident that at ~US $17 billion, vaccines represent barely 3% of US pharmaceutical sales (3).

Problem is, the rodent animal models commonly used in pertussis research are poorly predictive of human infection and immunity. Baboons may be a better model but using them is both prohibitively expensive and ethically problematic.

That perverse incentives fuel scientific research doesn’t help matters either, with knowledge about the mouse immune system leaps and bounds ahead of its human counterpart.

Thus, pertussis vaccinologists have been making decisions in the dark, knowing neither exactly which human immune responses are relevant and/or critical for preventing infection nor which pertussis antigens are necessary and sufficient to recapitulate effective immunity against the whole organism. Today, decades later, such ignorance is proving costly since data show that antibody titers against aP antigens, assumed to be a reliable correlate of protection, are unable to distinguish between effective (wP-driven) and ineffective (aP-driven) human anti-pertussis immune responses.

What then lies behind the push to vaccinate adults with aP to protect babies?

Maternal pertussis vaccination effectively protects against infant death from pertussis (4, 5), especially when it is given in the 2nd trimester (6).

Since the US switch to aP largely occurred in the late 1990s and early 2000s, the idea is that today's mothers, as well as older adults, were more likely to have been primed (originally vaccinated) with wP, meaning they should have a more robust and effective pre-existing memory immune response to pertussis. Meantime, studies have shown that even a single aP boost can effectively reactivate memory immune responses initially induced by wP (7).

This is the basis for the rationale that expectant mothers boosted with even a less than optimal aP may passively transfer sufficient levels of anti-pertussis antibodies to vulnerable infants to protect them from pertussis.

Problem is the window of opportunity for such an approach is fast closing as aP-primed girls grow up and become mothers. Given the poor ability of aP to prime strong and effective anti-pertussis immunity, it’s unclear whether an aP boost given during pregnancy to aP-primed mothers would work as well.

Coda

At their core, suspicions about vaccines represent a profound failure of communication and breakdown of trust between scientists and those who harbor such suspicions. Since vaccines affect public health, entire populations, not just those individuals, pay the price.

Perversely, science’s successes – not just vaccines but also hygiene and sanitation – set the stage for current doubts about vaccines among some individuals in affluent countries. After all, at least one or more generations of people in affluent countries such as the US have now grown up without facing the scourge of epidemics caused by pathogenic microbes. Such an embarrassment of riches can foster unreasonable expectations.

In the case of vaccines, that unreasonable expectation expresses itself as entitlement to paramount safety and zero risk, notwithstanding that current knowledge of human biology, being very far from complete, could never hope to meet such a lofty expectation.

In response, vaccine makers and regulators alike feel pressured to make the more risk-averse and biologically impossible decision of prioritizing safety over immunogenicity when designing new vaccines, the decision to switch from wP to aP being a case in point. This ends up violating an essential biological principle since immunogenicity, the ability to drive strong and effective immune responses, requires ‘dirt’.

Needing to make up for this lack of natural 'dirt', scientists add adjuvants, Adjuvant – Wikipedia, to sub-unit vaccines composed of purified antigens. However, how adjuvants work is still largely a black box, which means outcomes remain unpredictable, especially at a population level. The wP to aP switch embodies these drawbacks.

For more details on the why and consequences of switching from wP to aP: Tirumalai Kamala’s answer to Why is the pertussis vaccine not protecting those vaccinated for pertussis?

Bibliography

1. Gill, Christopher, Pejman Rohani, and Donald M. Thea. “The relationship between mucosal immunity, nasopharyngeal carriage, asymptomatic transmission and the resurgence of Bordetella pertussis.” F1000Research 6 (2017). The relationship between mucosal immunity, nasopharyngeal carriage, asymptomatic transmission and the resurgence of Bordetella pertussis

2. McIntosh, Anne M., et al. “Effects of vaccination on onset and outcome of Dravet syndrome: a retrospective study.” The Lancet Neurology 9.6 (2010): 592-598. Effects of vaccination on onset and outcome of Dravet syndrome: a retrospective study

3. U.S. Vaccine Market – Industry Analysis, Size, Share, Growth, Trends and Forecast, 2012 – 2018

4. Amirthalingam, Gayatri, et al. “Effectiveness of maternal pertussis vaccination in England: an observational study.” The Lancet 384.9953 (2014): 1521-1528. http://sys.91sqs.net/mobilenews/…

5. Dabrera, Gavin, et al. “A case-control study to estimate the effectiveness of maternal pertussis vaccination in protecting newborn infants in England and Wales, 2012–2013.” Clinical Infectious Diseases 60.3 (2014): 333-337. https://pdfs.semanticscholar.org…

6. Eberhardt, Christiane S., et al. “Maternal immunization earlier in pregnancy maximizes antibody transfer and expected infant seropositivity against pertussis.” Clinical Infectious Diseases 62.7 (2016): 829-836. Maternal Immunization Earlier in Pregnancy Maximizes Antibody Transfer and Expected Infant Seropositivity Against Pertussis | Clinical Infectious Diseases | Oxford Academic

7. Huang, Li-Min, et al. “Immunogenicity and reactogenicity of a reduced-antigen-content diphtheria-tetanus-acellular pertussis vaccine in healthy Taiwanese children and adolescents.” Journal of Adolescent Health 37.6 (2005): 517-e1. https://www.jahonline.org/articl…

https://www.quora.com/What-is-the-rationale-behind-the-push-to-vaccinate-adults-against-pertussis-to-protect-babies-as-research-is-showing-that-the-acellular-version-doesn-t-stop-transmission-or-carriage-nor-does-it-provide-herd-immunity/answer/Tirumalai-Kamala

How is the lymphatic system involved in metastasis?

Like the blood vessel system, the body's lymphatic system is an extremely dense network of vessels that offers tumors a rich source of conduits for spread. Feast your eyes below on reproductions (1) of some glorious 19th century illustrations of the human lymphatic system by Nicolas-Henri Jacob, made in collaboration with Jean-Baptiste Marc Bourgery – Wikipedia.

However, until recently, tumor spread via blood, hematogenous metastasis, drew the disproportionate share of research attention while the biological importance of lymphatic metastasis remained under-explored.

In 1960, Ernest Gould developed the concept of the Sentinel lymph node – Wikipedia in tumor biology, the idea being that if a lymph node closest to a suspect tissue appeared normal and cancer-free then other lymph nodes downstream from that one would likely be as well (1).

A sentinel lymph node (SLN) is the first one that drains a primary tumor and the one most likely to harbor a metastasis (1). SLN analysis is today routine in oncology, both for diagnosis and for cancer staging.

Currently, scientists speculate that tumor cells could directly invade lymphatic vessels draining the tissue where they are growing, or that tumors could develop such vessels themselves, Lymphangiogenesis – Wikipedia (2, 3, 4, 5, see below from 6).

Developing its own chaotic lymphatic vessels could benefit a tumor in two ways.

  • One, it could enable the tumor to recruit cells that help it grow.
  • Two, these newly created conduits could help carry tumor cells to distant sites.

Bibliography

1. Loukas, Marios, et al. “The lymphatic system: a historical perspective.” Clinical Anatomy 24.7 (2011): 807-816. http://www.academia.edu/download…

2. Achen, Marc G., Bradley K. McColl, and Steven A. Stacker. “Focus on lymphangiogenesis in tumor metastasis.” Cancer cell 7.2 (2005): 121-127. https://www.cell.com/cancer-cell…

3. Harrell, Maria I., Brian M. Iritani, and Alanna Ruddell. “Tumor-induced sentinel lymph node lymphangiogenesis and increased lymph flow precede melanoma metastasis.” The American journal of pathology 170.2 (2007): 774-786. Tumor-Induced Sentinel Lymph Node Lymphangiogenesis and Increased Lymph Flow Precede Melanoma Metastasis

4. Tammela, Tuomas, and Kari Alitalo. “Lymphangiogenesis: molecular mechanisms and future promise.” Cell 140.4 (2010): 460-476. Lymphangiogenesis: Molecular Mechanisms and Future Promise

5. Swartz, Melody A., and Amanda W. Lund. “Lymphatic and interstitial flow in the tumour microenvironment: linking mechanobiology with immunity.” Nature Reviews Cancer 12.3 (2012): 210. http://www.ohsu.edu/xd/education…

6. Stacker, Steven A., et al. “Lymphangiogenesis and lymphatic vessel remodelling in cancer.” Nature Reviews Cancer 14.3 (2014): 159. https://www.researchgate.net/pro…

https://www.quora.com/How-is-the-lymphatic-system-involved-in-metastasis/answer/Tirumalai-Kamala

What does the latest research show about the connections between obesity and gut microbiota as a cause of obesity? What do we know now and what do we suspect?

Back in the 1960s, mouse model research led by the renaissance scientist René Dubos – Wikipedia uncovered intriguing and groundbreaking connections between gut microbiota composition and body size and weight. These associations were then forgotten for several decades. Meantime, obesity rates started skyrocketing the world over within just the last few decades, too short a period of time for human genetic changes such as mutations alone to be the sole or main impetus.

Recent years have thus witnessed renewed research interest in the link between gut microbiota composition and metabolic issues such as obesity. If the environment shaped human gut microbiota more so than host genetics, recent changes in it could explain much of this recent propensity for weight gain. However, whether gut microbiota composition is a cause or an effect of obesity, i.e., the nature of their association, remains unclear.

  • Back in 2005, a mouse model study from a prominent microbiota research lab at Washington University in St. Louis showed that a certain strain of genetically obese mice had a greater proportion of Firmicutes and a decreased proportion of Bacteroidetes bacteria compared to their wild type (normal) counterparts (1).
  • In short order, the same group showed similar trends in obese versus lean humans, and even suggested weight loss through dietary changes increased Bacteroidetes and decreased Firmicutes in obese individuals (2).

Alas, 'increase Bacteroidetes, decrease Firmicutes' turned out to be no magic weight loss formula.

Later studies simply didn’t replicate this association between obesity and relative proportions of these two bacterial phyla.

Final nails in the coffin were meta-analyses showing that variations between different studies exceeded whatever differences were observed between obese and lean individuals in a given study (3, 4). Confounding factors could explain such dissonance,

  • Molecular biology techniques currently used to study human gut microbiomes are so sensitive, and details of the methods used by different groups vary so much, that it is presently difficult if not impossible to even sift signal from noise.
  • Unlike inbred mouse strains used in biomedical research, humans are an outbred, genetically heterogeneous species.
  • To name just a couple of obvious differences, lifestyle and diets vary tremendously between humans.

Studies on twins discordant for obesity, where one twin was obese while the other had normal weight, showed (5, 6),

  • The twins had different gut microbiota compositions. Indeed, studies find twin gut microbiomes become more dissimilar the longer the twins live apart (7, 8).
  • Transferring stools from such individuals into germ-free mice replicated their metabolic features, i.e., germ-free mice that got stools from obese individuals developed more fat mass.
  • Co-housing such mice led to the dominance of the lean phenotype. Mice are coprophagic (they eat poop), so the idea is that co-housing allowed obese mice to get colonized by bacteria from lean mice while the reverse was not observed, suggesting that, at least in such reductionist mouse models, gut microbiota associated with leanness dominated.

Preliminary studies suggest Fecal microbiota transplant – Wikipedia may be helpful in reversing some aspects of obesity such as impaired glucose tolerance (9).

Beyond such generalities, nothing much can be asserted with certainty at present (late 2018). Claims that specific bacterial species in the form of probiotics can help anyone lose weight are outright misrepresentations.

Bibliography

1. Ley, Ruth E., et al. “Obesity alters gut microbial ecology.” Proceedings of the National Academy of Sciences 102.31 (2005): 11070-11075. Obesity alters gut microbial ecology

2. Ley, Ruth E., et al. “Microbial ecology: human gut microbes associated with obesity.” Nature 444.7122 (2006): 1022. https://www.researchgate.net/pro…

3. Walters, William A., Zech Xu, and Rob Knight. “Meta‐analyses of human gut microbes associated with obesity and IBD.” FEBS letters 588.22 (2014): 4223-4233. Meta‐analyses of human gut microbes associated with obesity and IBD

4. Sze, Marc A., and Patrick D. Schloss. “Looking for a signal in the noise: revisiting obesity and the microbiome.” MBio 7.4 (2016): e01018-16. http://mbio.asm.org/content/7/4/…

5. Ridaura, Vanessa K., et al. “Gut microbiota from twins discordant for obesity modulate metabolism in mice.” Science 341.6150 (2013): 1241214. https://pdfs.semanticscholar.org…

6. Goodrich, Julia K., et al. “Human genetics shape the gut microbiome.” Cell 159.4 (2014): 789-799. Human Genetics Shape the Gut Microbiome

7. Xie, Hailiang, et al. “Shotgun metagenomics of 250 adult twins reveals genetic and environmental impacts on the gut microbiome.” Cell Systems 3.6 (2016): 572-584. https://www.sciencedirect.com/science/article/pii/S2405471216303234

8. Rothschild, Daphna, et al. “Environment dominates over host genetics in shaping human gut microbiota.” Nature 555.7695 (2018): 210. https://genie.weizmann.ac.il/pub…

9. Jayasinghe, Thilini N., et al. “The new era of treatment for obesity and metabolic disorders: evidence and expectations for gut microbiome transplantation.” Frontiers in cellular and infection microbiology 6 (2016): 15. The New Era of Treatment for Obesity and Metabolic Disorders: Evidence and Expectations for Gut Microbiome Transplantation

https://www.quora.com/What-does-the-latest-research-show-about-the-connections-between-obesity-and-gut-microbiota-as-a-cause-of-obesity-What-do-we-know-now-and-what-do-we-suspect/answer/Tirumalai-Kamala

How do anti-bacterial and germ resistant coated surfaces work?

Common non-antibiotic anti-bacterials used on surfaces are typically compounds containing the halogen chlorine (Halogen – Wikipedia). Among such compounds, Triclosan – Wikipedia and Triclocarban – Wikipedia are the most prevalent.

According to a 2011 white paper from the Alliance for Prudent Use of Antibiotics (1),

‘Triclosan works by blocking the active site of the enoyl-acyl carrier protein reductase enzyme (ENR), which is an essential enzyme in fatty acid synthesis in bacteria (11). By blocking the active site, triclosan inhibits the enzyme, and therefore prevents the bacteria from synthesizing fatty acid, which is necessary for building cell membranes and for reproducing. Since humans do not have this ENR enzyme, triclosan has long been thought to be fairly harmless to them. Triclosan is a very potent inhibitor, and only a small amount is needed for powerful antibacterial action.’

A 1999 Nature paper (2) confirmed this presumed mechanism of action.

Such an effect doesn’t discriminate between environmental bacteria, human-associated commensal bacteria and pathogens, which could be quite problematic.

In the US, triclosan use has exploded in recent decades, progressively making its way into increasing numbers of consumer products, with the public largely unaware of this happening and, even more consequentially, largely unaware of the implications of such widespread exposure.

The range of products that contain triclosan these days is not only eyebrow- but also hair-raising,

  • Everyday use items such as cosmetics, deodorants, detergents, hand soaps, hand lotions, mouthwashes, shampoos, toothpastes.
  • Household staples such as tablecloths, kitchen cutting boards, furniture, toys, school supplies, sports equipment and even shoe insoles.
  • Hospital disinfectants, surgical scrubs, surfaces (plastics and other durables).

These products present several problems,

  • There is little convincing evidence they actually do a better job compared to the tried and true method of hand washing with soap and water.
  • They could stoke drug resistance in bacteria.
  • They could sensitize children to common allergens such as mold and animal dander.
  • Animal model studies suggest they could interfere with endocrine function.
    • Block aspects of thyroid and testosterone function.
    • Enhance functions of estrogen.

An older answer of mine delves in a bit more detail about triclosan-containing products and the problems they pose: Tirumalai Kamala’s answer to Does anti-bacterial soap do more harm than good?

Increasing evidence of such problems in recent years has led various national regulatory agencies to restrict or ban triclosan use in specific products (see below from 3).

Bottom line, consumers should pay attention to whether the common daily use and household products they choose contain triclosan, and avoid them as much as possible.

Bibliography

1. Alliance for Prudent Use of Antibiotics, 2011 white paper. https://www.google.com/url?sa=t&…

2. Levy, Colin W., et al. “Molecular basis of triclosan activity.” Nature 398.6726 (1999): 383. https://www.researchgate.net/pro…

3. Goodman, Michael, Daniel Q. Naiman, and Judy S. LaKind. “Systematic review of the literature on triclosan and health outcomes in humans.” Critical reviews in toxicology 48.1 (2018): 1-51. https://www.tandfonline.com/doi/…

https://www.quora.com/How-do-anti-bacterial-and-germ-resistant-coated-surfaces-work/answer/Tirumalai-Kamala

Why does our immune system produce IgM but not IgG antibodies against red blood cell antigens?

The human immune system can produce both IgM and IgG antibodies against red blood cell antigens (see summary below for the ABO histo-blood group antigens from 1).

This is why there are regulatory requirements in place to minimize risk from Immunoglobulin therapy – Wikipedia, IVIG, which entails injecting a mix of antibodies to treat various maladies (2).

IgG antibodies against the Kell blood group antigens may also be involved in hemolytic transfusion reactions and Hemolytic disease of the newborn – Wikipedia (3).

Some bacteria, parasites and plants can also express antigens either identical or strikingly similar to the human histo-blood group A and B antigens (1).

  • Exposure to such sources of environmental antigens through the GI or respiratory tracts could also trigger anti-A or -B IgG antibodies, meaning exposure to blood or blood products alone isn’t necessary to trigger the production of such antibodies.

Bibliography

1. Branch, Donald R. “Anti‐A and anti‐B: what are they and where do they come from?.” Transfusion 55.S2 (2015): S74-S79.

2. Thorpe, S. J., et al. “International collaborative study to evaluate candidate reference reagents to standardize haemagglutination testing for anti‐A and anti‐B in normal intravenous immunoglobulin products.” Vox sanguinis 97.2 (2009): 160-168.

3. Denomme, G. A. “Kell and Kx blood group systems.” Immunohematology 31.1 (2015): 14-19. https://www.researchgate.net/pro…

https://www.quora.com/Why-does-our-immune-system-produce-IgM-but-not-IgG-antibodies-against-red-blood-cell-antigens/answer/Tirumalai-Kamala

I have heard some people claim to have no trouble with eating grains of any kind in Europe when in the U.S. they need to be gluten or even grain free. What are your thoughts on that?

At a prevalence of ~1% of the population, celiac disease, which entails specific T cell immune responses to gluten components, still remains relatively rare, though even its rates have been increasing since the 1950s. However, in recent years, starting from at least 1978 (1), a far larger proportion of people began to report non-celiac gluten/wheat sensitivity, reaching ~10% in countries ranging from Australia to Italy, Mexico, the UK and the US (2).

Celiac disease can be diagnosed using precise tests. However, with no clear diagnostic test, non-celiac gluten/wheat sensitivity is largely self-diagnosed and thus accurate numbers are hard to come by.

That preamble out of the way, a difference in reaction to (wheat) grains when in the US versus when in Europe could be attributed to

  • Differences in gluten content of wheats grown in US versus some European countries, or
  • Differences in the total gluten present in US versus some European diets, or other diet components (additives, emulsifiers, FODMAP – Wikipedia) difficult to identify separately as the trigger(s).

Differences in gluten content of wheats grown in US versus Europe

Grain gluten differences likely don't explain recent increases in non-celiac gluten/wheat sensitivity in the US since reports of such sensitivity are increasing the world over, including in several European countries (Italy and the UK, for example). Though such differences have become a favored explanation in the burgeoning popular media narratives about this increase (3), actual research does not substantiate it (see below from 4, emphasis mine)

‘There are few pertinent papers that address the question of whether or not the protein content of the U.S. wheat crop has increased over time in the 20th century…The hard spring wheats, grown mostly in the Northern Plains, are considered to be highly desirable for bread baking and tend to have protein contents that in general exceed the usual protein contents of winter wheats by about 2 percentage points… The North Dakota Wheat Commission reported13 that, in 2009, the hard red spring wheat crop “ …yielded an average of 13.1 percent (which) was well below the traditional level of more than 14%,” and these protein contents are fairly typical of late 20th century crops for the hard spring wheat region. Various studies have compared the protein contents of wheat varieties from the early part of the 20th century with those of recent varieties.14,15 When grown under comparable conditions, there was no difference in the protein contents. Although nitrogen fertilization can have strong effects on protein content for some wheat varieties,16 the data do not seem to be in accord with the likelihood that recent fertilization protocols have had a strong effect on the protein contents of wheat grown in the United States

Interpretation of protein data is complicated by occasional major deviations from the more usual range. In 1938, the protein content of spring wheat was exceptionally high (Table 1), averaging close to 19%; these years of exceptionally high protein (or low protein) occur occasionally and are likely to result mainly from environmental factors, rather than nitrogen fertilization or wheat breeding. To maintain a uniformity of quality characteristics from year to year, flour mills usually blend wheat flour that is intended for commercial use by specific customers, for example, bakeries. Very high protein content would usually be unsuitable for direct use, and so high protein wheat flours would usually be blended with lower protein grain to achieve a more normal protein level before reaching the consumer.

Differences in total gluten and/or other diet components present in US versus some European diets

Diets have changed dramatically over the course of the 20th century as industrialized mechanization processes were brought to bear in scaling up and streamlining food production. These days, home bread making from scratch is largely a niche hobby and many if not most people purchase commercial breads and other wheat-containing baked goods as a matter of course, not to mention that a great deal of home bread making itself relies on commercially blended products.

Industrial baking and the ingredients it uses are streets removed from home baking, speed and efficiency their hallmarks. Where traditional bread baking uses traditional leavening agents, requires long, slow fermentation, and takes ~16 hours, a cocktail of synthetic ingredients including extra yeast, additives and emulsifiers enables industrial bread making to turn out finished loaves in ~2 hours (3).

  • Vital Gluten is a vital part of this sped-up process. It is usually labeled ‘wheat protein’ (see below from 4),

‘Gluten fractionated from wheat flour by washing starch granules from a dough (sometimes called vital gluten) is often added to food products to achieve improved product characteristics.’

    • Given its capacity to emulsify and to increase cohesiveness, viscosity, elasticity, gelation and foaming (5), vital gluten is today an essential ingredient in industrial baking.
    • Research estimates US vital gluten consumption has tripled from ~136 grams per person per year in 1977 to ~408 grams per person per year in 2013 (see below from 4),

‘…it appears that vital gluten consumption has tripled since 1977…Changes in the per capita intake of wheat and gluten might play a role; both increased during the period in question…’

    • Ironically, some consumption pattern changes such as switching to whole wheat products for perceived health reasons may not help but could instead add to this problem of increased hidden gluten consumption (see below from 4),

‘It may be noted that whole wheat products, which are increasing in consumption for health reasons (especially the higher fiber content), often have vital gluten added to them to compensate for the negative effects of the ground whole grain on quality factors, such as loaf volume in breadmaking.’

  • Careful, rigorous, double-blind, placebo-controlled crossover trials suggest gluten accounts for only ~17% of non-celiac gluten/wheat sensitivity while the rest can be attributed to fructans, part of FODMAPs, or to nocebo (6, 7). This may be why individuals with self-diagnosed non-celiac gluten/wheat sensitivity who switch willy-nilly to a gluten-free diet usually get hit-or-miss results (6, 7).
  • Going gluten-free absent proper tests and diagnosis can also backfire (see below from 3),

‘But relying on gluten-free alternatives could be counterproductive. The vast majority of gluten-free creations touted as “tummy friendly” contain the same questionable enzymes and additives that food technologists use in the standard, gluten containing industrial equivalent. In addition, they also rely on hi-tech food manufacturing ingredients to provide their architecture. These include xanthan gum, a strong, glue-like substance also used in the oil industry to thicken drilling mud, hydroxypropyl methyl cellulose, also used in the construction industry for its water-retaining properties in cement, and tapioca starch, a nutritionally depleted, chemically modified starch from the cassava root.’

In sum, differences in total gluten and/or other diet components between US and some European diets could explain why some people react to (wheat) grains when in the US but not when in certain European countries.

Bibliography

1. Ellis, A., and B. D. Linaker. “Non-coeliac gluten sensitivity?.” The Lancet 311.8078 (1978): 1358-1359.

2. Aziz, Imran. “The Global Phenomenon of Self-Reported Wheat Sensitivity.” (2018): 1. The Global Phenomenon of Self-Reported Wheat Sensitivity

3. Not just a fad: the surprising, gut-wrenching truth about gluten

4. Kasarda, Donald D. “Can an increase in celiac disease be attributed to an increase in the gluten content of wheat as a consequence of wheat breeding?.” Journal of Agricultural and Food Chemistry 61.6 (2013): 1155-1159.

5. Cabrera-Chávez, F., and AM Calderón de la Barca. “Trends in wheat technology and modification of gluten proteins for dietary treatment of coeliac disease patients.” Journal of cereal science 52.3 (2010): 337-341. http://www.sistemanodalsinaloa.g…

6. Molina-Infante, Javier, and Antonio Carroccio. “Suspected nonceliac gluten sensitivity confirmed in few patients after gluten challenge in double-blind, placebo-controlled trials.” Clinical Gastroenterology and Hepatology 15.3 (2017): 339-348.

7. Skodje, Gry I., et al. “Fructan, rather than gluten, induces symptoms in patients with self-reported non-celiac gluten sensitivity.” Gastroenterology 154.3 (2018): 529-539.

https://www.quora.com/I-have-heard-some-people-claim-to-have-no-trouble-with-eating-grains-of-any-kind-in-Europe-when-in-the-U-S-they-need-to-be-gluten-or-even-grain-free-What-are-your-thoughts-on-that/answer/Tirumalai-Kamala

How does inhibition of TNF alpha differ between monoclonal antibodies (infliximab, adalimumab, golimumab, certolizumab) and receptor fusion proteins (etanercept)?

The cytokine TNF-alpha became an attractive therapeutic target since it

  • Is abundantly expressed in many inflammatory conditions.
  • Attracts leukocytes (neutrophils, eosinophils, lymphocytes, monocytes, macrophages) to sites of inflammation.
  • Appears to regulate other cytokines such as IL-1 beta and IL-6, as well as other inflammatory mediators such as CRP.

The idea was that blocking TNF-alpha would block these downstream domino inflammatory effects. TNF-alpha’s biological effects ensue from its binding to two different receptors, the p55 type I and p75 type II TNF receptors, which exist both as soluble and as cell surface-bound molecules.

TNF-alpha blockers thus attempt to mitigate the symptomatology and not the root causes of various inflammatory disorders.

How infliximab, adalimumab, golimumab, certolizumab and etanercept inhibit TNF alpha is a function of their very different structures (see below from 1) as well as their different administration routes and doses, which influence their pharmacology.

Similarity: These blockers bind soluble (free) TNF-alpha, which prevents it from binding its receptors. Doing so could deprive some immune cells involved in the inflammation of survival signals through their p55 type I TNF receptor and thereby cause their apoptosis.

Differences (1, also see below from 2):

  • The mAbs infliximab, adalimumab and golimumab also bind membrane-bound TNF-alpha. This can induce CDC, Complement-dependent cytotoxicity – Wikipedia, or ADCC, Antibody-dependent cell-mediated cytotoxicity – Wikipedia, as well as apoptosis or cell cycle arrest.
  • Certolizumab pegol isn’t a classical mAb but a partial antibody since it consists of a TNF-alpha-specific Fab fragment attached to PEG (polyethylene glycol). Since it lacks the Fc portion of an antibody, it cannot induce apoptosis, ADCC or CDC.
  • Etanercept is a fusion protein consisting of those portions of the TNF receptor p75 that bind to TNF-alpha attached to the Fc portion of an antibody. The Fc piece increases its half-life compared to its natural TNF receptor counterpart. It can induce ADCC but not CDC (3).

These TNF-alpha blockers have different administration routes and doses, which influence their pharmacology (2).

  • Infliximab: intravenous.
  • Adalimumab, certolizumab pegol, golimumab and etanercept: subcutaneous.

The much higher infliximab levels achieved through intravenous dosing could explain its higher effectiveness in some diseases.

In sum, differences in structure and pharmacology explain different effects and side-effects of approved TNF-alpha blockers (see below from 4).

Bibliography

1. Bortolato, Beatrice, et al. “The involvement of TNF-α in cognitive dysfunction associated with major depressive disorder: an opportunity for domain specific treatments.” Current neuropharmacology 13.5 (2015): 558-576. https://pdfs.semanticscholar.org…

2. Rigby, William FC. “Drug Insight: different mechanisms of action of tumor necrosis factor antagonists—passive-aggressive behavior?.” Nature Reviews Rheumatology 3.4 (2007): 227.

3. Mitoma, Hiroki, et al. “Mechanisms for cytotoxic effects of anti–tumor necrosis factor agents on transmembrane tumor necrosis factor α–expressing cells: comparison among infliximab, etanercept, and adalimumab.” Arthritis & Rheumatism 58.5 (2008): 1248-1257. Mechanisms for cytotoxic effects of anti-tumor necrosis factor agents on transmembrane tumor necrosis factor α-expressing cells: Comparison among infliximab, etanercept, and adalimumab – Mitoma – 2008 – Arthritis & Rheumatism – Wiley Online Library

4. Brenner, Dirk, Heiko Blaser, and Tak W. Mak. “Regulation of tumour necrosis factor signalling: live or let die.” Nature Reviews Immunology 15.6 (2015): 362. https://www.researchgate.net/pro…

https://www.quora.com/How-does-inhibition-of-TNF-alpha-differ-between-monoclonal-antibodies-infliximab-adalimumab-golimumab-certolizumab-and-receptor-fusion-proteins-etanercept/answer/Tirumalai-Kamala

What is the clinical value of using IgM therapeutic antibodies over IgG?

By now around a hundred human or chimeric (human + mouse) mAbs, Monoclonal antibody – Wikipedia, have been approved for therapeutic use in humans, yet none is IgM; most are IgG1, a few IgG4 and even fewer IgG2 (1). A few are even purely mouse antibodies or antibody fragments.

However, IgM antibodies do offer some advantages for use as therapeutics,

  • Secreted IgM is pentameric, meaning each IgM molecule has ten antigen-binding sites (2, see below from 3). Thus, even though IgM antibodies typically have considerably lower affinity compared to class-switched antibodies such as the monomeric IgG (2 antigen-binding sites), their pentameric structure enables higher Avidity – Wikipedia binding (see the toy calculation after this list).
  • They bind complement with ~1000X higher affinity compared to IgG, which makes them initiate Complement-dependent cytotoxicity – Wikipedia much more efficiently (3).
  • Unlike proteins, carbohydrate antigens usually elicit T cell-independent B cell immune responses, which are dominated by IgM rather than class-switched isotypes (IgG, A, E). This makes IgM a preferred choice of molecule when the target is, for example, a tumor-associated carbohydrate antigen.
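
To make the avidity point concrete, here is a minimal back-of-the-envelope sketch in Python. It is a toy model under a stated assumption, not established immunochemistry: each antigen-binding site is treated as engaging independently with the same per-site occupancy, whereas real avidity also depends on geometry, antigen density and rebinding kinetics.

```python
# Toy avidity model: probability that an antibody molecule stays attached
# via at least one of its n antigen-binding sites, assuming (purely for
# illustration) that each site engages independently with occupancy p.

def fraction_attached(p: float, n_sites: int) -> float:
    """P(at least one of n independent sites is bound)."""
    return 1 - (1 - p) ** n_sites

# Low per-site affinity: each site occupied only 20% of the time.
p = 0.20
for name, n in [("monomeric IgG (2 sites)", 2), ("pentameric IgM (10 sites)", 10)]:
    print(f"{name}: {fraction_attached(p, n):.2f}")
# monomeric IgG (2 sites): 0.36
# pentameric IgM (10 sites): 0.89
```

Even at identical, low per-site affinity, the ten-site pentamer stays attached far more often, which is the intuition behind IgM’s high avidity despite low affinity.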

These are the reasons some researchers have started exploring the therapeutic potential of IgM antibodies (4).

Such uses include not just developing mAbs but also using pooled natural IgM as an alternative to IVIG, Immunoglobulin therapy – Wikipedia, which usually consists of polyclonal IgG (5). Polyclonal means multiple antigenic specificities as opposed to a mAb, which has a single antigenic specificity. In practical terms, this means that a mAb binds a single specific antigen while a pool of polyclonal antibodies binds multiple different antigens.

Bibliography

1. Monoclonal Antibodies Approved by the EMA and FDA for Therapeutic Use

2. B Cells and Antibodies

3. Ehrenstein, Michael R., and Clare A. Notley. “The importance of natural IgM: scavenger, protector and regulator.” Nature Reviews Immunology 10.11 (2010): 778. https://www.researchgate.net/pro…

4. Wootla, Bharath, et al. “Naturally occurring monoclonal antibodies and their therapeutic potential for neurologic diseases.” JAMA neurology 72.11 (2015): 1346-1353. https://www.researchgate.net/pro…

5. Grönwall, Caroline, and Gregg J. Silverman. “Natural IgM: beneficial autoantibodies for the control of inflammatory and autoimmune disease.” Journal of clinical immunology 34.1 (2014): 12-21. https://www.ncbi.nlm.nih.gov/pmc…

https://www.quora.com/What-is-the-clinical-value-of-using-IgM-therapeutic-antibodies-over-IgG/answer/Tirumalai-Kamala

Why would an association be found in case-control studies but not in cohort studies?

Case–control study – Wikipedia and Cohort study – Wikipedia designs are analytical in nature and therefore capable of determining association between exposure and outcome. This is how both differ in kind from descriptive observational studies (1, see below from 2).

However, key design differences mean they arrive at associations differently. Each study type has specific strengths and weaknesses that make each appropriate for different kinds of questions (see below from 3).

Case-control studies begin with an outcome (colorectal cancer referenced in the question) in hand and track backward to measure exposure (coffee intake referenced in the question).

  • Inherently retrospective in nature, they lend themselves to trawling through existing databases in which both exposures and outcomes, whatever those might be, have already been documented. This feature makes them optimal for studying outcomes that develop over many years or those which are rare.
  • Control groups are key for the success of case-control studies. Choosing appropriate controls makes or breaks them.
  • Since they are retrospective and therefore rely on memory, Recall bias – Wikipedia can easily gum up their works: it stands to reason that cases, more so than controls, would recall their exposures.
  • Case-control studies cannot estimate disease incidence rates and relative or attributable risks since the proportion of patients with the outcome is determined by the numbers of cases and controls enrolled (the study denominators) and not by the actual outcome frequency in the target population at large (the real denominator).
  • The absence of real denominators means case-control studies use odds ratios as their measure of association (see the worked sketch after this list).
  • Case-control studies established the links between risk factors such as smoking and asbestos exposure and diseases such as lung cancer as well as the long-term effect of radiation exposure on leukemia.
  • Food-borne and shipboard outbreaks are examples that lend themselves to case-control studies.
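
To make the denominator point above concrete, here is a minimal sketch in Python of how an odds ratio falls out of a case-control 2x2 table. The counts are purely hypothetical, chosen only to show the arithmetic.

```python
# Hypothetical case-control 2x2 table (illustrative numbers only).
# Because the investigator fixes how many cases and controls are enrolled,
# incidence cannot be computed; the odds ratio is the valid measure of
# association in this design.

def odds_ratio(a: int, b: int, c: int, d: int) -> float:
    """a = exposed cases, b = exposed controls,
    c = unexposed cases, d = unexposed controls."""
    return (a * d) / (b * c)

# Say 40 of 100 cases were exposed versus 20 of 100 controls.
print(round(odds_ratio(a=40, b=20, c=60, d=80), 2))  # (40*80)/(20*60) = 2.67
```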

Cohort studies start by measuring exposure and track forward to assess effect on outcome.

  • Cohort studies assign exposure status when patients enter the study and outcomes are tracked forward in time.
  • Cohorts with known past exposure can be assembled using existing datasets and tracked forward to the present to assess outcome (retrospective) or they can be assembled in the present and tracked into the future to do so (prospective).
  • Randomized controlled trials – Wikipedia are essentially cohort studies, except that the exposure/intervention/treatment is randomly allocated.
  • Starting with exposure status is an advantage that allows cohort studies to calculate true incidence rates, relative risks, and attributable risks (2) (see the sketch after this list).
  • The key drawback of cohort studies is that they become prohibitively expensive when studying outcomes that develop over many years or those which are rare.
  • Transplant and genetic epidemiology studies are usually cohort studies.
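
By way of contrast with the odds-ratio sketch above, here is a matching hypothetical cohort calculation in Python; again the counts are invented solely to illustrate why following exposure groups forward yields measures a case-control design cannot.

```python
# Hypothetical cohort study (illustrative numbers only). Following exposure
# groups forward yields true incidence, and with it the relative and
# attributable risks that case-control designs cannot provide.

def cohort_measures(cases_exp: int, n_exp: int,
                    cases_unexp: int, n_unexp: int) -> dict:
    risk_exp = cases_exp / n_exp        # incidence among the exposed
    risk_unexp = cases_unexp / n_unexp  # incidence among the unexposed
    return {
        "relative_risk": round(risk_exp / risk_unexp, 3),
        "attributable_risk": round(risk_exp - risk_unexp, 3),
    }

# Say 30 outcomes among 1,000 exposed versus 10 among 1,000 unexposed people.
print(cohort_measures(30, 1000, 10, 1000))
# {'relative_risk': 3.0, 'attributable_risk': 0.02}
```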

There is no obvious reason why an association would be found only in case-control but not in cohort studies. Famous longitudinal cohort studies are rightly famous, as their results are now part of common public health knowledge.

  • The Framingham Heart Study – Wikipedia, in place since 1948, has provided much of the bedrock of epidemiological knowledge about heart disease risk factors such as diet and exercise, as well as the effect of aspirin.
  • The Nurses’ Health Study – Wikipedia has clarified the effect of risk factors such as alcohol, diet, oral contraceptives and smoking on a variety of diseases such as cardiovascular disease and cancers.

Both types of studies are subject to the biases that attend clinical research, Selection bias – Wikipedia, Information bias (epidemiology) – Wikipedia and Confounding – Wikipedia.

Case-control but not prospective cohort studies being able to show an inverse association between coffee intake and colorectal cancer risk could be explained several ways,

  • Case-control studies could be complicated by participants’ recall bias relating to their coffee intake as well as selection bias relating to controls.
  • A case of apples and oranges in that the two types of studies explored different kinds of associations between coffee intake and colorectal cancer risk.
    • Case-control studies examine the link between coffee intake and colorectal cancer proximal to diagnosis while cohort studies examine their link over a longer time-frame.
    • Authors of the review referenced in the question indeed find a similar inverse correlation between coffee intake and colorectal cancer in cohort studies that had short, not long, follow-ups.

Bibliography

1. Workplace Wellness Programs Don’t Work Well. Why Some Studies Show Otherwise.

2. Grimes, David A., and Kenneth F. Schulz. “An overview of clinical research: the lay of the land.” The Lancet 359.9300 (2002): 57-61. https://pdfs.semanticscholar.org…

3. Schold, Jesse D., and S. Joseph Kim. “Clinical Research Methods and Analysis in Organ Transplantation.” Textbook of Organ Transplantation (2014): 1607-1621.

https://www.quora.com/Why-would-an-association-be-found-in-case-control-studies-but-not-in-cohort-studies/answer/Tirumalai-Kamala

Why have scientists not been able to correct a malfunctioning immune system, and cured many diseases that arise from immune system disorders?

A malfunctioning immune system could imply anything from a death sentence (inherited immune system disorders) to lifelong malaise (allergies, autoimmunities and chronic inflammatory disorders).

In recent decades, scientists have made giant strides in treating inherited immune system disorders.

In 1971, David Vetter – Wikipedia was the second child born to a Houston couple. His older brother suffered repeated infections and died a few months after birth, shortly after he was diagnosed as having SCID, Severe combined immunodeficiency – Wikipedia.

Back then a child born with such a severe immunodeficiency lived an unimaginably difficult life and faced imminent death. The limited treatment options were merely palliative and only extreme physical cocooning minimized the risk of catching infections. Also born with SCID, David was placed into a sterile plastic isolator as soon as he was delivered by C-section, and lived in permanent isolation for the rest of his brief and unbelievably surreal life.

By the time he died at the age of 12 in 1984, David had spent his entire life inside a plastic, germ-free chamber, never having physically touched anyone nor been touched, becoming famous as the original Bubble Boy (see below from 1). According to his mother, Carol Ann Demaret, one of David’s dreams had been to ‘walk barefoot in the grass‘ (2). Obviously, he never did.

Contrast that with today, when highly effective and even curative treatments such as bone marrow and hematopoietic cell transplants (HCTs) have transformed the lives of those born with an otherwise incurable and serious immunodeficiency such as SCID. At this point, >500 transplant centers around the world have carried out >1 million HCTs for a variety of disorders, including such immune system defects.

At the same time, IVIG, or Immunoglobulin therapy – Wikipedia, has emerged as standard therapy for treating B cell and antibody deficiencies, while gene therapy is making a comeback as a viable treatment option for many inherited immune system disorders.

Meantime, scientists continue to identify new genes involved in primary immunodeficiencies (PIDs). At the last count, >120 genes and >150 PIDs (3).

A remarkable pace of progress considering that even something as simple as diagnosing PIDs wasn’t simple and most doctors weren’t adequately trained to do so. After all, even the cheat-sheet called ‘Ten Warning Signs of Primary Immunodeficiency‘ was only developed in the early 1990s (see below from 4).

However, there’s a long way to go with allergies, autoimmunities and chronic inflammatory disorders (e.g., inflammatory bowel disease).

Prevalence of these conditions has increased dramatically in recent years, especially in older industrialized countries, even as their accurate diagnosis and treatment lag behind. For long, only glucocorticoids, other immunosuppressives and non-specific painkillers were the mainstay Rx.

In recent years, many biologics such as mAbs, Monoclonal antibody – Wikipedia, and small molecules that target specific enzymes and cytokines have rapidly gained regulatory approval. However, such therapies only better target symptoms and don’t address root causes, even as they’re found to work optimally only in subsets of patients.

Poorly predictive animal models and poor clinical definitions are the major hurdles that prevent faster progress. For example, it’s entirely possible that diseases such as MS, Multiple sclerosis – Wikipedia, and SLE, Systemic lupus erythematosus – Wikipedia, are more accurately syndromes, the upshot being that subsets of patients could differ in key disease features. In the case of most autoimmunities, even basic questions such as the identity of the target antigen(s) remain unanswered.

  • Not knowing the antigen(s) that are the target(s) of the autoimmune responses makes it difficult to precisely treat an autoimmune disease in such a way as to ensure minimal collateral damage.
  • As for allergies and chronic inflammatory disorders, more effective treatments can only come when their underlying causes are better understood, which is not yet the case.

Bibliography

1. Chinn, Ivan K., and William T. Shearer. “Severe combined immunodeficiency disorders.” Immunology and Allergy Clinics 35.4 (2015): 671-694.

2. David’s Story. William T. Shearer, Carol Ann Demaret. Etzioni, Amos, and Hans D. Ochs, eds. Primary Immunodeficiency Disorders: A Historic and Scientific Perspective. Academic Press, 2014, pages 313-326.

3. Picard, Capucine, et al. “Primary immunodeficiency diseases: an update on the classification from the International Union of Immunological Societies Expert Committee for Primary Immunodeficiency 2015.” Journal of clinical immunology 35.8 (2015): 696-726. Primary Immunodeficiency Diseases: an Update on the Classification from the International Union of Immunological Societies Expert Committee for Primary Immunodeficiency 2015

4. 10 Warning Signs

https://www.quora.com/Why-have-scientists-not-been-able-to-correct-a-malfunctioning-immune-system-and-cured-many-diseases-that-arise-from-immune-system-disorders/answer/Tirumalai-Kamala