The idea of herd immunity to manage the coronavirus should ring alarm bells
And yet, many prominent epidemiologists, public health researchers and physicians are pushing back against the concept. NIH Director Francis Collins condemned coronavirus herd immunity-based responses, calling them “fringe” and “dangerous,” while World Health Organization Director General Tedros Adhanom Ghebreyesus called the approach “scientifically and ethically problematic.” Currently, the theory of herd immunity is applied in public health settings when immunity can be acquired through a vaccine, as with measles or polio, not when it requires people to contract a disease to develop immunity.
While herd immunity is the theory behind vaccine programs, the concept originated in veterinary medicine and livestock management in the late 19th and early 20th centuries. This matters because in that setting, economics rather than ethics served as the guiding force. In some cases, it was cheaper to slaughter diseased animals, or those suspected of infection, to prevent the rest from getting sick than to expose an entire herd to a disease that could kill livestock or reduce their value. While this approach may have helped halt damaging animal diseases, it would be unacceptable for human public health programs. Revisiting the history of managing the spread of animal disease explains why the theory of herd immunity, absent a vaccine, is a deeply troubling approach to managing the spread of the coronavirus.
At the end of the 19th century, the United States had more than 1.5 million livestock farms, with billions of dollars’ worth of cattle, swine, sheep, poultry and goats. In 1884, concerned that deadly infections such as contagious bovine pleuropneumonia and foot-and-mouth disease threatened the livelihood of farmers and American food security, Congress and President Chester A. Arthur established the Bureau of Animal Industry (BAI) at the USDA.
The new bureau was tasked with researching animal diseases and granted regulatory authority to prevent, contain or eradicate them in livestock. Keeping livestock healthy ensured a steady supply of meat, milk and eggs for Americans, and protected producers’ incomes.
Not all livestock diseases killed infected animals or rendered them unusable for food production. New York dairy farmers first documented an infectious disease now known as brucellosis in the 1850s. They noted that the disease would roll through communities every few years, causing pregnant heifers and cows to lose their calves. This led to a decrease in milk production, but most infected cows recovered and returned to normal production for the rest of their lives. Owners worried about their bottom line at that moment but did not want to slaughter potentially productive animals. Instead, they hoped to prevent the disease through sanitary measures and treatments.
Cases of brucellosis were reported across the country in dairy cattle and in a growing number of range herds. By establishing herds for observation and testing, researchers identified the bacterial cause of the disease and developed a test for exposure by the early 20th century, but neither a vaccine nor a treatment had been discovered.
And so, researchers and farmers offered advice about how to minimize the impact of brucellosis on cattle. At an American Veterinary Medical Association meeting in 1916, a Kansas veterinarian and a BAI researcher presented on recent research and noted in the conclusion that, in herds with the disease, there was a “tendency toward herd immunity.”
They and others recommended separating animals based on their brucellosis status and preventing crossover between the two populations. Cattle that had been exposed to brucellosis were placed in brucellosis herds, while those that hadn’t been were placed in brucellosis-free herds. In theory, the animals born into herds with brucellosis developed immunity at a young age and were less likely to suffer severe symptoms. And animals born into brucellosis-free herds would never be exposed to the disease. Separated herds allowed owners to plan for and manage financial losses due to the disease.
But divided herds alone did not eliminate brucellosis in American livestock. To ensure brucellosis-free herds, the federal government reimbursed owners who chose to slaughter brucellosis-positive cattle. Unlike with other animal disease control programs at the time, slaughter was not mandatory. Producers could choose it if it was economically beneficial.
The approval of a USDA-developed vaccine in the 1930s also helped create herds free of brucellosis. Because the vaccine caused a weakened version of the disease, it was used to aid the development of herd immunity; after World War II, federal policy prohibited owners from using it in certified brucellosis-free herds.
As a result, the rate of brucellosis dropped from about 11.5 percent in 1935 to less than 1 percent in 1973. But the policy of maintaining both immune and disease-free herds created a different problem. Imperfect testing allowed immune and nonimmune cattle to intermix, occasionally sparking outbreaks in nonimmune herds and economic losses for owners.
This risk eventually made the split program of diseased and disease-free herds untenable. In the 1970s, renewed debates questioned whether the disease should be allowed to run its course, creating nationwide herd immunity, or whether the push to eradicate it should continue.
In 1978, the Agriculture Department chose eradication of the disease. And so, it implemented stricter movement guidelines and more effective testing and tracing, mandated slaughter of infected animals and developed a new vaccine. It worked, and in 2000, the USDA declared American livestock free of the disease. However, brucellosis still exists in non-livestock animals, so the potential for outbreaks remains a problem that must be monitored.
There is a difference between herd immunity acquired through disease exposure, as with brucellosis, and modern public health programs that use vaccines to create immunity, such as polio and measles.
Establishing herds of cattle with disease-acquired immunity was an acceptable practice in the early 20th century because brucellosis was not deadly to livestock and there were initially no effective treatments or vaccines. Moreover, because managing the disease was driven by economic considerations, slaughtering infected animals was an acceptable part of creating or maintaining immune herds.
Unlike brucellosis in cattle, people die of covid-19, the disease caused by the coronavirus, and we do not know the long-term impact on those who recover from it. Allowing it to spread unchecked through a population exposes individuals to unknown and unacceptable risk of death and injury. And of course, slaughter cannot be part of a public health response to the coronavirus.
Also, herd immunity alone did not successfully control or eradicate the disease. Brucellosis remained a health and economic threat for decades because herds with immunity served as a reservoir of infection that could spread to disease-free animals with no immunity. Coronavirus herd immunity policies could create similar problems. The disease could persist in the general population as an endemic illness, posing an ongoing threat to isolated high-risk people for years to come.
Disease-acquired herd immunity may be an appealing option to borrow from animal health because it would allow the majority of people to return to work and school while waiting for a vaccine. But humans are not livestock. As Ghebreyesus observed, the ethics of allowing a disease to spread to build immunity, while requiring the long-term isolation of high-risk individuals, are problematic.
But the fight against brucellosis also shows what works. Federal investment in vaccine development and better testing technology, along with a national eradication program built on testing and tracing, helped eliminate brucellosis and did not raise the same ethical problems as disease-acquired herd immunity.