Benjamin Rush, the most famous physician of the newly independent United States, dosed hundreds of patients with mercury during Philadelphia’s yellow fever outbreak of 1793. Rush also treated patients exhibiting signs of mental illness by blistering, the same procedure used on Washington in extremis. One 1827 recipe for a “blistering plaster” should suffice:
Take a purified yellow Wax, mutton Suet, of each a pound; yellow Resin, four ounces; Blistering flies in fine powder, a pound. [The active ingredient of powdered “flies” is cantharidin, the highly toxic irritant secreted by many blister beetles, most famously Lytta vesicatoria, the Spanish fly.] Melt the wax, the suet, and the resin together, and a little before they concrete in becoming cold, sprinkle in the blistering flies and form the whole into a plaster. . . . Blistering plasters require to remain applied [typically to the patient’s neck, shoulder, or foot] for twelve hours to raise a perfect blister; they are then to be removed, the vesicle is to be cut at the most depending part . . .
Benjamin Rush was especially fond of using such plasters on his patients’ shaven skulls so that “permanent discharge from the neighborhood of the brain” could occur. He also developed the therapy known as “swinging,” strapping his patients to chairs suspended from the ceiling and rotated for hours at a time. Not for Rush the belief that nature was the most powerful healer of all; he taught his medical students at the University of Pennsylvania to “always treat nature in a sick room as you would a noisy dog or cat.”
When physicians’ only diagnostic tools were eyes, hands, tongue, and nose, it’s scarcely a surprise that they attended carefully to observable phenomena like urination, defecation, and blistering. As late as 1862, Dr. J. D. Spooner could write, “Every physician of experience can recall cases of internal affections [sic] which, after the administration of a great variety of medicines, have been unexpectedly relieved by an eruption on the skin.” To the degree therapeutic substances were classified at all, it wasn’t by the diseases they treated, but by their most obvious functions: promoting emesis, narcosis, or diuresis.
Heroic medicine was very much a creature of an age that experienced astonishing progress in virtually every scientific, political, and technological realm. The first working steam engine had kicked off the Industrial Revolution in the first decades of the eighteenth century. Between 1750 and 1820, Benjamin Franklin put electricity to work for the first time, Antoine Lavoisier and Joseph Priestley discovered oxygen, Alessandro Volta invented the battery, and James Watt the separate condenser. Thousands of miles of railroad track were laid to carry steam locomotives. Nature was no longer a state to be humbly accepted, but an enemy to be vanquished; physicians, never very humble in the first place, were easily persuaded that all this newfound chemical and mechanical knowledge was an arsenal for the conquest of disease.
And heroic efforts “worked.” That is, they reliably did something, even if the something was as decidedly unpleasant as vomiting or diarrhea. Whether in second-century Greece or eighteenth-century Virginia (or, for that matter, twenty-first-century Los Angeles), patients expect action from their doctors, and heroic efforts often succeeded. Most of the time, patients got better.
It’s hard to overstate the importance of this simple fact. Most people who contract any sort of disease improve because of a fundamental characteristic of Darwinian natural selection: The microorganisms responsible for much illness and virtually all infectious disease derive no long-term evolutionary advantage from killing their hosts. Given enough time, disease-causing pathogens almost always achieve a modus vivendi with their hosts: sickening them without killing them.* Thus, whether a doctor gives a patient a violent emetic or a cold compress, the stomachache that prompted the intervention is likely to disappear in time.
Doctors weren’t alone in benefiting from the people-get-better phenomenon, or, as it is known formally, “self-limited disease.” Throughout the eighteenth and early nineteenth centuries, practitioners of what we now call alternative medicine sprouted like mushrooms all over Europe and the Americas: Herbalists, phrenologists, hydropaths, and homeopaths could all promise to cure disease at least as well as regular physicians. The German physician Franz Mesmer promoted his theory of animal magnetism, which maintained that all disease was due to a blockage in the free flow of magnetic energy, so successfully that dozens of European aristocrats sought his healing therapies.*
The United States, in particular, was a medical free market gone mad; by the 1830s, virtually no license was required to practice medicine anywhere in the country. Most practicing physicians were self-educated and self-certified. Few ever attended a specialized school or even served as apprentices to other doctors. Prescriptions, as then understood, weren’t specific therapies intended for a particular patient, but recipes that druggists compounded for self-administration by the sick. Pharmacists frequently posted signs advertising that they supplied some well-known local doctor’s formulations for treating everything from neuralgia to cancer. Doctors didn’t require a license to sell or administer drugs, except for so-called ethical drugs—the term was coined in the middle of the nineteenth century to describe medications whose ingredients were clearly labeled—which were compounds subject to patent and assumed to be used only for the labeled purposes. Everything else, including proprietary and patent medicines (just to confuse matters, like “public schools” that aren’t public, “patent” medicines weren’t patented), was completely unregulated, a free-for-all libertarian dream that supplemented the Hippocratic Oath with caveat patiens: “Let the patient beware.”*
—
The historical record isn’t reliable when it comes to classifying causes of death, even in societies that were otherwise diligent about recording dates, names, and numbers of corpses. As a case in point, the so-called Plague of Athens that afflicted the Greek city in the fifth century B.C.E. was documented by Thucydides himself, but no one really knows what caused it, and persuasive arguments for everything from a staph infection to Rift Valley fever are easily found. That such a terrifying and historically important disease outbreak remains mysterious to the most sophisticated medical technology of the twenty-first century underlines the problem faced by physicians—to say nothing of their patients—for millennia. Only a very few diseases even had a well-understood path of transmission. From the time the disease first appeared in Egypt more than 3,500 years ago, no one could fail to notice that smallpox scabs were themselves contagious, and contact with them was dangerous.* Similarly, the vector for venereal diseases like gonorrhea and syphilis—which probably originated in a nonvenereal form known as “bejel”—isn’t a particularly daunting puzzle: Symptoms appear where the transmission took place. Those bitten by a rabid dog could have no doubt of what was causing their very rapid death.
On the other hand, the routes of transmission for some of the most deadly diseases, including tuberculosis, cholera, plague, typhoid fever, and pneumonia, were utterly baffling to their sufferers. Bubonic plague, which killed tens of millions of Europeans in two great pandemics, one beginning in the sixth century A.D., the other in the fourteenth, is transmitted by the bites of fleas carried by rats, but no one made the connection until the end of the nineteenth century. The Italian physician Girolamo Fracastoro (who not only named syphilis, in a poem entitled “Syphilis sive morbus Gallicus,” but, in an excess of anti-Gallicism, first called it the “French disease”) postulated, in 1546, that contagion was a “corruption which . . . passes from one thing to another and is originally caused by infection of imperceptible particles” that he called seminaria: the seeds of contagion. Less presciently, he also argued that the particles only did their mischief when the astrological signs were in the appropriate conjunction, and preserved Galen’s humoral theory by suggesting that different seeds have affinities for different humors: the seed of syphilis with phlegm, for example. As a result, Fracastoro’s “cures” still required expelling the seeds via purging and bloodletting, and his treatments were very much part of a medical tradition thirteen centuries old.
However, while seventeenth-century physicians (and “natural philosophers”) failed to find a working theory of disease, they were no slouches when it came to collecting data about the subject. The empiricists of the Age of Reason were all over the map when it came to ideas about politics and religion, but they shared an obsessive devotion to experiment and observation. Their worldview, in practice, demanded the rigorous collection of facts and experiences, well in advance of a theory that might, in due course, explain them.
In the middle of the seventeenth century, the English physician Thomas Sydenham attempted a taxonomy of different diseases afflicting London. The haberdasher turned demographer John Graunt detailed, in 1662, the number and—so far as they were known—the causes of every recorded death in London, constructing the world’s first mortality tables.* The French physician Pierre Louis examined the efficacy of bloodletting on different populations of patients, thus introducing the practice of medicine to the discipline of statistics. The Swiss mathematician Daniel Bernoulli even analyzed smallpox mortality to estimate the risks and benefits of inoculation (concluding that the gain in population survival outweighed the fatality risk of the procedure itself). And John Snow famously established the route of transmission for London’s nineteenth-century cholera epidemics, tracing them to a source of contaminated water.
But plotting the disease pathways, and even recording the traffic along them, did nothing to identify the travelers themselves: the causes of disease. More than a century after the Dutch draper and lens grinder Anton van Leeuwenhoek first described the tiny organisms visible in his rudimentary microscope as “animalcules” and the Danish scientist Otto Friedrich Müller used the binomial categories of Carolus Linnaeus to name them, no one had yet made the connection between the tiny creatures and disease.
The search, however, was about to take a different turn. A little more than a year after Napoleon’s death in 1821, a boy was born in the France he had ruled for more than a decade. The boy’s family, only four generations removed from serfdom, were the Pasteurs, and the boy was named Louis.
—
The building on rue du Docteur Roux in Paris’s 15th arrondissement is constructed in the architectural style known as Henri IV: a steeply pitched blue slate roof with narrow dormers, walls of pale red brick with stone quoins, square pillars, and a white stone foundation. It was the original site of, and is still a working part of, one of the world’s preeminent research laboratories: the Institut Pasteur, whose eponymous founder opened its doors in 1888. As much as anyone on earth, he could—and did—claim the honor of discovering the germ theory of disease and founding the new science of microbiology.
Louis Pasteur was born to a family of tanners working in the winemaking town of Arbois, surrounded by the sights and smells of two ancient crafts whose processes depended on the chemical interactions between microorganisms and macroorganisms—between microbes, plants, and animals. Tanners and vintners perform their magic with hides and grapes through the processes of putrefaction and fermentation, whose complicity in virtually every aspect of food production, from pickling vegetables to aging cheese, would fascinate Pasteur long before he turned his attention to medicine.
For his first twenty-six years, Pasteur’s education and career followed the conventional steps for a lower-class boy from the provinces heading toward middle-class respectability: He graduated from Paris’s École Normale Supérieure, then undertook a variety of teaching positions in Strasbourg, Ardèche, Paris, and Dijon. In 1848, however, the young teacher’s path took a different turn—as, indeed, did his nation’s. The antimonarchical revolutions that convulsed all of Europe during that remarkable year affected nearly everyone, though not in the way that the revolutionaries had hoped.
Alexis de Tocqueville described the 1848 conflict as occurring in a “society [that] was cut in two: Those who had nothing united in common envy, and those who had anything united in common terror.” It seems not to have occurred to the French revolutionaries who replaced the Bourbon monarchy with France’s Second Republic that electing Napoleon Bonaparte’s nephew as the Republic’s first president might not work out as intended. Within four years, Louis-Napoleon replaced the Second Republic with the Second Empire . . . and promoted himself from president to emperor.
France’s aristocrats had cause for celebration, but so, too, did her scientists. The new emperor, like his uncle, was an avid patron of technology, engineering, and science. Pasteur’s demonstration of a process for transforming “racemic acid,” an equal mixture of right- and left-hand isomers of tartaric acid, into its constituent parts—a difficult but industrially useless process—both won him the red ribbon of the Légion d’honneur and earned him the attention of France’s new leader. The newly crowned emperor Napoleon III was a generous enough patron that, by 1854, Pasteur was dean of the faculty of sciences at Lille University—significantly, in the city known as the “Manchester of France,” located at the heart of France’s Industrial Revolution. And the emperor did him another, perhaps more important, service by introducing the schoolteacher-turned-researcher to an astronomer and mathematician named Jean-Baptiste Biot. Biot would be an enormously valuable mentor to Pasteur, never more so than when he advised his protégé to investigate what seemed to be one of the secrets of life: fermentation.
At the time Pasteur embarked on his fermentation research, the scientific world was evenly divided over the nature of the process by which, for example, grape juice was transformed into wine. On one side were advocates for a purely chemical mechanism, one that didn’t require the presence or involvement of living things. On the other were champions of the biological position, which maintained that fermentation was a completely organic process. The dispute embraced not just fermentation, in which sugars are transformed into simpler compounds like carboxylic acids or alcohol, but the related process of putrefaction, the rotting and swelling of a dead body as a result of the dismantling of proteins.
[Photograph: Louis Pasteur, 1822–1895. Credit: National Institutes of Health/National Library of Medicine]
The processes, although distinct, had always seemed to have something significant in common. Both are, not to put too fine a point on it, aromatic; the smell of rotten milk or cheese is due to the presence of butyric acid (which also gives vomit its distinctive smell), while the smells of rotting flesh come from the chemical process that turns amino acids into the simple organic compounds known as amines, in this case, the aptly named cadaverine and putrescine, which were finally isolated in 1885. But did they share a cause? And if so, what was it? The only candidates were nonlife and life: chemistry or biology.
The first chemical analysis of fermentation—from sugar into alcohol—was performed in 1789 by the French polymath Antoine Lavoisier, who called the process “one of the most extraordinary in chemistry.” Lavoisier described how sugar is converted into “carbonic acid gas”—that is, CO2—and what was then known as “spirit of wine” (though he wrote that the latter should be “more appropriately called by the Arabic word alcohol since it is formed from cider or fermented sugar as well as wine”). In fact, the commercial importance of all the products formed from fermentation—wine, beer, and cheese, to name only a few—was so great that, in 1803, the Institut de France offered the prize of a kilogram of gold for describing the characteristics of things that undergo fermentation. By 1810, in another industrial innovation, French food manufacturers figured out how to preserve their products by putting them into closed vessels they then heated to consume any oxygen trapped inside (thereby inaugurating the canning industry). Since the oxygen-free environment retarded fermentation, and, therefore, spoilage, it was believed that fermentation was somehow related to the presence of oxygen: simple chemistry.
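In modern notation, which of course postdates Lavoisier (he reasoned in weights rather than molecular formulas), the overall balance he was describing is conventionally summarized as

\[ \mathrm{C_6H_{12}O_6} \;\longrightarrow\; 2\,\mathrm{C_2H_5OH} \;+\; 2\,\mathrm{CO_2} \]

that is, one molecule of glucose yielding two molecules of alcohol and two of carbon dioxide.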
However, another industrial innovation, this time in the manufacture of optical microscopes, pointed toward a rival theory. In the 1830s, the Italian astronomer Giovanni Amici discovered how to make lenses that magnified objects more than five hundred times, which allowed observers to view objects no wider than a single micron: a thousandth of a millimeter. The first objects examined were the ones most associated with commercially important fermentation: yeasts.* In 1837, the German scientist Theodor Schwann looked through Amici’s lenses and concluded that yeasts were, in fact, living things.
As with many such breakthroughs, Schwann’s findings didn’t convince everyone. To many, including Germany’s preeminent chemist, Justus von Liebig, this smacked of a primitive form of vitalism. It seemed both simpler and more scientific to attribute fermentation to the simple chemical interaction of sugar and air. The battle would go on for decades,* until Pasteur summed up a series of experiments with what was, for him, a modest conclusion. “I do not think,” he wrote, “there is ever alcoholic fermentation unless there is simultaneous organization, development, and multiplication of” microscopic animals. By 1860, he had demonstrated that fermenting microorganisms were responsible for spoilage—turning milk sour and grape juice into wine. And by 1866, Pasteur, by then professor of geology, physics, and chemistry in their application to the fine arts at the École des Beaux-Arts, published, in his Studies on Wine, a method for destroying the microorganisms responsible for spoiling wine (and, therefore, milk) by heating to subboiling temperatures—60°C or so—using a process still known as pasteurization. He even achieved some success in solving the problem of a disease that was attacking silkworms and, therefore, putting France’s silk industry at risk.
The significance of these achievements is not merely that they provide evidence of Pasteur’s remarkable productivity. More important, they were, each of them, a reminder of the changing nature of science itself. In an era when national wealth was, more and more, driven by technological prowess rather than by the acreage of land under cultivation, the number of laborers available, or even the pursuit of trade, industrial chemistry was a strategic asset. France was Europe’s largest producer of wine and dairy products and the weaver of a significant amount of the world’s silk; anything that threatened any of these “industries” had the attention of the national government.