Wednesday, December 30, 2020

What Thomas Jefferson Could Never Understand About Jesus


Jefferson revised the Gospels to make Jesus more reasonable, and lost the power of his story.

By Vinson Cunningham, The New Yorker


Even when young, Jefferson bridled at the metaphysical claims of Christianity.

In the early months of 1803, perhaps the most consequential period of Thomas Jefferson’s Presidency—if not, for him, the busiest—American envoys were in France, Jefferson’s old ambassadorial stomping ground, negotiating the terms of what would later be called the Louisiana Purchase. Jefferson, meanwhile, was mulling a book project. He imagined it as a work of comparative moral philosophy, which would include a survey of “the most remarkable of the ancient philosophers,” then swiftly address the “repulsive” ethics of the Jews, before demonstrating that the “system of morality” offered by Jesus was “the most benevolent & sublime probably that has been ever taught.” This sublimity, however, would need to be rescued from the Gospels, which were—as Jefferson put it in a letter to the English chemist, philosopher, and minister Joseph Priestley—written by “the most unlettered of men, by memory, long after they had heard them from him.” Jefferson pushed Priestley to write the treatise, and, by the following January, seemed to think that he would. But Priestley died in February, and Jefferson decided to do the salvage work, at least. He got a copy of the Bible, cut out some choice passages, glued them onto blank pages, and called the volume “The Philosophy of Jesus of Nazareth: extracted from the account of his life and doctrines as given by Matthew, Mark, Luke, & John. Being an abridgement of the New Testament for the use of the Indians unembarrassed with matters of fact or faith beyond the level of their comprehensions.”


One of Jefferson’s aims seems to have been to demonstrate—to himself, if to no one else—that, contrary to the claims of his political adversaries, he was not anti-Christian. As Peter Manseau, a curator at the National Museum of American History, points out in “The Jefferson Bible: A Biography” (Princeton), the puzzling reference to “Indians” in the subtitle may be a joke about the Federalists, and their apparent inability to grasp Jefferson’s true beliefs. His opponents often labelled him a “freethinker,” or an outright atheist; milder observers came closer to the mark, pegging him as a deist who largely thought of God as a noninterventionist. But Jefferson did not openly claim the deist label. “I am a Christian,” he insisted in a letter to the educator and politician Benjamin Rush, “in the only sense in which he wished any one to be; sincerely attached to his doctrines, in preference to all others; ascribing to himself every human excellence, & believing he never claimed any other.” In order to establish that this was the actual limit of Jesus’ claims, one had to carefully extricate him from the texts that contain nearly all we know about his life and thought. That might sound like impossible surgery, but, to Jefferson, the fissures were obvious. What was genuinely Christ’s was “as easily distinguishable as diamonds in a dunghill,” he wrote in a letter to John Adams. Jesus, in the Gospel of John, says, “My sheep hear my voice, and I know them, and they follow me.” Jefferson was no lamb, and no follower, but he considered himself a good hearer.

Manseau opens his study with an anecdote from earlier in Jefferson’s life, which Jefferson recounts in “Notes on the State of Virginia.” As a young man, he went digging through one of the “barrows”—huge mounds of earth, covered in grass—that mysteriously dotted the Virginia landscape. “That they were repositories of the dead has been obvious to all: but on what particular occasion constructed, was matter of doubt,” Jefferson wrote. The solution, for Jefferson, was to get a shovel. He travelled to what had once been a Native American community and got to work on a mound, quickly finding “collections of human bones.” There were arm bones and loose jaws, vertebrae and several skulls. Manseau writes:

It was the skulls that most drew Jefferson’s interest. Some were “so tender,” he noted, that they fell apart at the touch, leaving him with a handful of teeth that were considerably smaller than others. At least one section of the mound seemed to include children—a suspicion reinforced by the discoveries that followed: “a rib and a fragment of the under jaw of a person about half grown; another rib of an infant, and part of the jaw of a child, which had not yet cut its teeth.”

Manseau adds, “Today the image of Jefferson rummaging through the bones of Native Americans would likely be regarded by many as an obvious desecration, while in his own day it would have been praised as a purely scientific inquiry.” Manseau uses this unsettling anecdote to illustrate the desacralizing impulse in Jefferson—the impulse that would lead to his cut-and-paste Bible. Jefferson had seen Monacans go in groups to visit the mounds, but the knowledge of their reverence and the ardency of their devotion didn’t satisfy him. He was deeply impatient with myth, ritual, and mystery. He had to see the bones.

Even in his youth, Jefferson had bridled at the core metaphysical claims of classical Christianity. Jefferson had no use for original sin, or salvation by grace alone, or the insistence that Christ—or anyone else; stand down, Lazarus—had risen from the dead. He didn’t even care to affirm the most fundamental doctrine: that, by some mystery of history and providence, Jesus was of the same essence as God, indeed was God. One of Jefferson’s first, and most lasting, points of dissent with Christian orthodoxy had to do with the Trinity, the doctrine affirming that although there is only one God, the godhead is identified as three distinct but inseparable “persons”: the Father, who is the creator; the Son, who appeared on earth in order to reconcile humanity to the Father; and the Holy Spirit, who is the breath of love between Father and Son, and who invisibly knits all believers together, creating the society called the Church. To Jefferson, this was all too fuzzy to be true in any real sense—an “incomprehensible jargon.” Jefferson was a follower of Jesus in more or less the way that Plato was a follower of Socrates: he found his morals high, his wisdom excellent, his philosophy sound, his observations true.

This is a vision of Jesus as a Great Man, a mover of history and a moral tinkerer, whose work has been marred by friends who were his lessers. Jefferson tended, in his letters, to portray Jesus as a modernizer, more clarifier than Christ; he called him a “great reformer of the vicious ethic and deism of the Jews,” a formulation that marries anti-Semitic tropes with a rereading of Christianity’s roots through the logic of the Reformation. For Jefferson, Jesus was to Judaism what Luther was to the Catholic Church. And Jefferson, in turn, after digging through Christianity’s burial heap, would rescue those of its tenets which accorded with reason—his reason—from the vicious ethic that had grown up around it.

At the College of William & Mary, Jefferson fell under the tutelage of a professor named William Small, who introduced him to John Locke, Francis Bacon, and Isaac Newton, shining paragons of Enlightenment thought. Jefferson considered them “the three greatest men that have ever lived, without any exception.” They confirmed for him that “the world was eminently knowable,” Manseau writes, and modelled the mental mode that would characterize the rest of his life: interested more in science than in faith, more in reason than in emotion, more in minute inspection than in intuition or revelation. In a real and profound way, the Enlightenment seems to have been the creed in which Jefferson most deeply believed. (In this respect, the most Jeffersonian politician currently in power might be the French President, Emmanuel Macron, who, in justifying a crackdown on Islam after a pair of recent terrorist attacks in France, said, “We believe in the Enlightenment.”) Locke, Bacon, and Newton were “a new trinity to replace the old,” Manseau writes. And Jefferson’s relationship to them was more like that of the apostles to Jesus than he may have realized. In his correspondence and his speeches—and, most dramatically, in the Declaration of Independence—he was America’s chief interpreter of the Enlightenment generation. Jefferson in the colonies was like Paul at the Areopagus: a true believer spreading the Word of his teachers, subtly tweaking it so that the locals could understand.

Another youthful influence on Jefferson was the English parliamentarian Henry St. John, Viscount Bolingbroke, who wrote witheringly of the God of the Scriptures, in both the Old and the New Testaments. Bolingbroke argued that, at most, “short sentences” culled from the Bible might add up to a plausible but not especially coherent system of ethics and morals. For Jefferson—who, in his journals, copied long passages of Bolingbroke’s religious criticism—the only God worth serving was one whose powers accorded precisely with the powers on display in the visible world. Later, in the Declaration, Jefferson insisted that all people were “created” equal, but he also made sure to invoke “the Laws of Nature and Nature’s God,” a favorite phrase of the deists of his day. The urge to independence hadn’t come down from a mountain, etched on tablets, but was, instead, the logical end point of a long process of looking, and of thought. God was sovereign only so far as you could track his moves, like an animal leaving footprints in snow.

“The Philosophy of Jesus” did not survive; the only evidence we have for it is in Jefferson’s correspondence. But, in the eighteen-tens, after he had left the White House and had withdrawn almost totally from public life, Jefferson began working on what was, essentially, a new edition, incorporating not only the English of the King James Version but also columns of translation. This version bears a slightly shorter title: “The Life and Morals of Jesus of Nazareth Extracted Textually from the Gospels in Greek, Latin, French & English.” He had tried, once again, as he put it in a letter to a young acolyte, to separate “the gold from the dross.” Jefferson’s Jesus is born in a manger, but there are no angels, and no wise men; at age twelve, he speaks to the doctors in the temple, and everyone is impressed, but he doesn’t say that he is “about my Father’s business.” When Jefferson’s Jesus suddenly has disciples, it is not clear why they have decided to follow him. Jefferson includes Jesus’ encounter with a man with a “withered” hand, and his argument about whether it is “lawful to heal on the sabbath days”—the gold in this story, apparently, is the idea that “the sabbath was made for man, and not man for the sabbath.” The dross is the part where Jesus turns to address the poor man directly, like a real person instead of a prop for conjectural argument, and heals his hand.

Even at this late date, some who knew Jefferson believed that publishing such a text would tarnish his name. The Virginia minister Charles Clay, upon hearing about the idea, warned him that “it may effect your future character & Reputation on the page of history as a Patriot, legislator & sound Philosopher.” Jefferson finished “The Life and Morals” in 1820, and, according to acquaintances, he read from it often before going to sleep. But, when he died, six years later, only a few of his friends were aware that it existed. Nearly a century passed before the “wee-little book,” as Jefferson once called it, came fully into public view.

Manseau’s story skips ahead to that discovery—a thrilling mixture of accident, fine timing, and diligent public-museum curation—but it’s worth pausing, for a moment, at the time in between. There’s something appropriate about the fact that the book sat in obscurity, all but forgotten among library acquisitions, throughout the nineteenth century. Those resonant years were as consequential for the country’s many versions of Christianity as they were for its politics; Americans warred as much over the meaning of God as over the particulars of freedom. To the extent that America has a recognizable civic religion, it would be permanently shaped by what took place while Jefferson’s Jesus sat waiting to be retrieved from his tomb.

The interim’s most Jeffersonian voice, at least when it came to Christ, may have been Ralph Waldo Emerson, who began his controversial address to Harvard’s Divinity School, in 1838, not with a recitation of Scripture but with an invocation of nature. Emerson goes on, at length, about the “refulgent summer” that year in Cambridge—“the buds burst, the meadow is spotted with fire and gold in the tint of flowers”—as though engaging in high-flown small talk, breaking the ice by chatting about the weather. But there is a subtle assertion in it: whatever you want to know about God, you can best find by way of nature and your own good sense. “The word Miracle, as pronounced by Christian churches, gives a false impression; it is Monster,” Emerson said. When he relays a little juxtapositional parable, of a preacher speaking feebly as a snowstorm rages outside, full of the real force of nature, you can picture Jefferson nodding in agreement. “Once leave your own knowledge of God, your own sentiment, and take secondary knowledge, as St. Paul’s,” Emerson said, “and you get wide from God with every year this secondary form lasts.”

Emerson’s neighbor Nathaniel Hawthorne saw a darker god in the American landscape—in the forests and uncharted lands that had been the constant horror of the early Pilgrims and Puritans, and whose mysteries their descendants tried to tame by endless expansion and by a campaign of elimination against Native peoples. Not everybody, Hawthorne’s novels and stories suggest, could so easily do away with mystery, or with Christ as a figure who might inspire not just admiration but holy terror. Hawthorne’s friend Herman Melville likewise seemed to have little interest in a dispassionate, cerebral Jesus. In “Benito Cereno,” a novella published in 1855, Melville staged the true story of the meeting of two ships, one American and sunnily Protestant and the other from Catholic Spain and ostentatiously Gothic and baroque. There’s a mystery on board the Spanish ship, a slave vessel, and the American captain, who has a personality like a Labrador retriever’s—all happy certainty, all reliance on the senses—can’t quite figure it out. The transatlantic trade in human beings, Melville seems to say, couldn’t be understood, or justified, or, in the end, rebuked by way of simple common sense. Something of the spirit, a demon or an avenging angel, had to come to bear. The Old World, and the old pre-Reformation religion, might still have a lesson to teach.

In the years before emancipation, the best arguments against slavery were also arguments about God. Throughout “The Narrative of the Life of Frederick Douglass,” Douglass emphasizes the vulgarity and seeming godlessness of the overseers, slave breakers, and masters of the South. He shows them cursing and drinking, which, he knew, would horrify the largely temperate, highly religious abolitionists of the North. “I love the pure, peaceable, and impartial Christianity of Christ: I therefore hate the corrupt, slaveholding, women-whipping, cradle-plundering, partial and hypocritical Christianity of this land,” Douglass wrote. “Indeed, I can see no reason, but the most deceitful one, for calling the religion of this land Christianity.” But Douglass’s Jesus is not Socrates; he is, as Douglass wrote in “My Bondage and My Freedom,” the “Redeemer, Friend, and Savior of those who diligently seek Him.” Douglass did not wish to remove Christ from the Gospels, or to separate the New Testament from the Old, finding truth in Jeremiah and Isaiah as he did in Matthew, Mark, Luke, and John. One of the few lines from Jefferson that Douglass quoted in his speeches was a famous but arguably atypical remark from “Notes on the State of Virginia.” Jefferson, after meditating on the institution of slavery, wrote, “I tremble for my country when I reflect that God is just; that his justice cannot sleep forever.” Douglass added, “Such is the warning voice of Thomas Jefferson. Every day’s experience since its utterance until now, confirms its wisdom, and commends its truth.”

Abraham Lincoln once wrote that Jefferson “was, is, and perhaps will continue to be, the most distinguished politician in our history.” But, in some ways, Lincoln treated Jefferson as Jefferson had treated Christ. In arguing for the end of slavery, Lincoln exalted Jefferson’s Declaration, and praised Jefferson as “the man who, in the concrete pressure of a struggle for national independence by a single people, had the coolness, forecast, and capacity to introduce into a merely revolutionary document, an abstract truth, applicable to all men and all times.” He glided past the particulars of Jefferson’s own relationship to the practice of slavery. In centering the Declaration as the cornerstone of “the new birth of freedom” represented by the Civil War, Lincoln had cut the contradictory dross out of Jefferson’s life and emphasized what had value for a new age.

Lincoln’s Second Inaugural Address clarifies his differences with Jefferson on the matter of God—and sets the stage for many religious clashes to come, suggesting how they might, in time, be settled. Both sides of the Civil War “read the same Bible and pray to the same God and each invokes His aid against the other,” Lincoln wrote; in the end, neither interpretive system could fully win the day. “The Almighty has His own purposes,” Lincoln added—purposes that, presumably, aren’t entirely knowable, even by the most capable reader. We see only so far as “God gives us to see the right.” This was the dawning of a new and fragile postbellum pluralism, grounded not in pure reason but in mutual détente. Jefferson’s Declaration, as reimagined by Lincoln, was less a fleshed-out American Gospel than a pathway to tenuous agreement—not a statement of natural fact but a metaphysical horizon toward which the country, fractured though it was, could travel together.

“The Life and Morals of Jesus of Nazareth” was brought to public attention in 1895, by Cyrus Adler, an observant Jew from Arkansas, who was a librarian and a curator of religious items at the Smithsonian. Nearly a decade earlier, as a doctoral student searching the private library of a wealthy family, Adler had happened upon a set of Bibles that Jefferson had owned, with key passages of the Gospels snipped from their pages. Now, charged with mounting an exhibition on American religion and still mulling over that discovery, Adler finally figured out where the missing passages had gone: into Jefferson’s little book, which was hidden away in the library of Carolina Ramsey Randolph, Jefferson’s great-granddaughter. Adler bought the book from Randolph for four hundred dollars and promptly put it on display in the Capitol, where, in Jefferson’s time, it would almost certainly have been a scandal. Now it was met mostly with affectionate enthusiasm, as another example of Jefferson’s wide-ranging brilliance. In 1904, the Government Printing Office made the first official set of reproductions, one of which was to be given to each U.S. congressperson. “By the 1920s, there were five editions in circulation, both as cheap pocket-sized books and as collectors’ items,” Manseau notes.

America’s national ambitions were going global. After the Spanish-American War, the country had seized possession of Puerto Rico, Guam, and the Philippines. If Jefferson needed a Jesus who could fit the imperatives of republicanism and westward expansion, Teddy Roosevelt—later to become Jefferson’s neighbor on Mt. Rushmore—needed to christen a budding empire. The new attitude was evident even in the nation’s architecture: the National Mall, for which Jefferson, in 1791, had sketched a plan of “public walks,” was reimagined as a site of Romanesque splendor. Eventually, the Jefferson Memorial was laid on the bank of the Tidal Basin, just across from the Mall, and among the documents placed under the cornerstone were the Declaration and “The Life and Morals.”

There’s a photograph of that monument taken by Henri Cartier-Bresson, in 1957, during the heat of the Black struggle for civil rights. Two Black boys, facing in opposite directions, dawdle just across the Tidal Basin from the memorial. A gentle row of trees and the dome dedicated to Jefferson loom just above their heads. The photograph is a reminder that, science and reason notwithstanding, Jefferson’s laconic Jesus, full of wisdom and bereft of spiritual power, never persuaded him to forfeit the slaves he owned. The boys in the photograph could be Jefferson’s kids; as Americans, they sort of were.

Since 2011, a monument to Martin Luther King, Jr., has sat across the water from the Jefferson Memorial, almost engaging it in a staring contest. The result is a rich spatial symbolism: two ways of seeing Christ duking it out. King saw Jesus in much the way that Douglass did: as a savior, a redeemer, and a liberator sorely degraded by those who claimed his name most loudly. During the Montgomery bus boycott, King reportedly carried a copy of “Jesus and the Disinherited,” a short, beautiful book by the minister and writer Howard Thurman. Thurman had travelled to India, where he made sure to meet Gandhi, whose doctrine of nonviolence he admired; he took what he learned from him back to America, planting an important intellectual seed that would blossom during the civil-rights movement. In his preaching and writings, Thurman reoriented what he called “the religion of Jesus,” pointing out what it might mean for those who had lived for so long under the thumb of the likes of Jefferson. Jefferson’s Jesus is an admirable sage, fit bedtime reading for seekers of wisdom. But those who were weak, or suffering, or in urgent trouble, would have to look elsewhere. “The masses of men live with their backs constantly against the wall,” Thurman wrote. “What does our religion say to them?”

Thurman’s Jesus was a genius of love—a love so complete and intimate that it suggested a nearby God, who had grown up in a forgotten town and was now renting the run-down house across the street. That same humble deity, in the course of putting on humanity, had obtained a glimpse of the conditions on earth—poverty, needless estrangement, a stubborn pattern of rich ruling over poor—and decided to incite a revolution that would harrow Hell. “The basic fact is that Christianity as it was born in the mind of this Jewish teacher and thinker appears as a technique of survival for the oppressed,” Thurman wrote. This is a Jesus that Jefferson could never understand.

In a world as compromised as ours, a soul so exalted was always destined for the Cross. Jefferson’s Bible ends before the Resurrection, with Jesus crucified by the Roman occupiers, as the Gospels tell us he was. Jefferson’s austere editing turns the killing almost into an afterthought—a desiccated reiteration of Socrates’ final encounter with hemlock, the simple consequence of having offended the wrong people. For Thurman, the Crucifixion was an emphatic lesson in creative weakness: by sticking out his neck and accepting the full implications of his own vulnerability, Christ had radically identified himself with the worst off. Those societal castoffs who could never get a break now had a savior, and a champion, and a model. This, for Thurman, is as great a teaching as anything that Jesus merely said. Where death, for Jefferson’s Jesus, is an ending, for Thurman’s it is a necessary precondition—just a start. ♦



Published in the print edition of the January 4 & 11, 2021, issue, with the headline “Personal Jesus.”

Monday, December 28, 2020

Inside Trump and Barr’s Last-Minute Killing Spree


Private executioners paid in cash. Middle-of-the-night killings. False or incomplete justifications. ProPublica obtained court records showing how the outgoing administration is using its final days to execute the most federal prisoners since World War II.

by Isaac Arnsdorf, ProPublica

In its hurry to use its final days in power to execute federal prisoners, the administration of President Donald Trump has trampled over an array of barriers, both legal and practical, according to court records that have not been previously reported.

Officials gave public explanations for their choice of which prisoners should die that misstated key facts from the cases. They moved ahead with executions in the middle of the night. They left one prisoner strapped to the gurney while lawyers worked to remove a court order. They executed a second prisoner while an appeal was still pending, leaving the court to then dismiss the appeal as “moot” because the man was already dead. They bought drugs from a secret pharmacy that failed a quality test. They hired private executioners and paid them in cash.

The unprecedented string of executions is often attributed to Attorney General William P. Barr, and his role was instrumental: It was Barr’s signature that authorized the use of a new lethal injection drug, his quotes that trumpeted the execution announcements and his position as attorney general that holds the ultimate authority in capital cases. (Barr is resigning effective Wednesday.)

But a ProPublica review of internal government records shows that Barr did not act alone. The push to resume federal executions for the first time since 2003 long predates Barr, with groundwork beginning as far back as 2011 and accelerating after Trump took office in 2017. It could not have happened without the help of Justice Department lawyers; officials at the Bureau of Prisons; two professors who endorsed the government’s injection method; conservative Supreme Court justices who dismissed final appeals; and Trump himself, who encouraged the executions and declined to commute them.


Trump and his surrogates don’t shy away from this. Throughout the campaign they highlighted the executions as a contrast to Joe Biden’s opposition to the death penalty, reinforcing Trump’s “law and order” message. White House Press Secretary Kayleigh McEnany even invoked the execution of Daniel Lee, who fell in with skinhead groups as a teenager and renounced those beliefs decades ago, to defend Trump after he declined to disavow white supremacists in the first debate.

“The activation of the death penalty and appearance of being tough on crime played into the administration’s political strategy — the same political strategy that pushed for separating children and parents and using force against peaceful demonstrators,” said Robert Dunham, executive director of the Death Penalty Information Center, a nonprofit that tracks executions. “An administration which is concerned about the rule of law and which respects the Constitution would have allowed court challenges to proceed and would not have attempted to carry out executions under a procedure that could be declared unlawful.”

The Justice Department has killed 10 people since July, with three more executions scheduled before Biden’s inauguration. Nearly every federal agency is rushing to wrap up unfinished business, cementing policy objectives in ways that will make them harder for the incoming president to unwind. But the Justice Department’s pressing forward with executions, even after the election of a new president who opposes them, is uniquely irreversible.

The White House and BOP declined to comment. In a statement, the Justice Department said: “Seeking the death penalty and carrying out capital sentences is not a political issue, nor have political considerations influenced the department’s decisions. The death penalty is a law enforcement and public safety issue, and the department is obligated to carry forward these sentences regardless of who is the president or the attorney general.”

Trump Has Executed More Prisoners Than the Last 10 Presidents Combined

The Trump administration has executed more federal prisoners than any presidency since Franklin Delano Roosevelt’s. Roosevelt was president for 12 years, and his total includes six saboteurs who were tried by a military commission.

“Death Penalty All the Way”

A slim and shrinking majority of Americans support capital punishment, according to public polling. But it remains popular with Republicans, especially white evangelicals. That coincides with the strongest base of support for Trump, who, in 1989, famously bought full-page ads in New York newspapers demanding the death penalty for five young Black and Latino men who were wrongly accused of attacking a white female jogger in Central Park.

“Death penalty all the way,” Trump said at a February 2016 campaign event in New Hampshire. “I’ve always supported the death penalty. I don’t even understand people that don’t.”

Until this year, the Justice Department hadn’t executed anyone since 2003. This long interruption was as much practical as legal. A drug that most states and the federal government used in lethal injections, a sedative called sodium pentothal, became unavailable because the sole American manufacturer stopped making it. The drug shortage thwarted the Obama administration’s plan to execute convicted murderer Jeffery Paul. States began using a similar drug called pentobarbital, and in 2011 federal prison officials observed several state executions, according to court records.

Shortly after Trump’s presidency began, his first attorney general, Jeff Sessions, wanted to resolve these issues so that BOP could resume executions. Associate Deputy Attorney General Brad Weinsheimer said in a January 2020 deposition that Sessions began “conversations with staff and BOP to move forward on that.”

Matthew Whitaker was involved in the Justice Department’s efforts to find a new lethal injection drug in 2017, according to court records. Whitaker later became acting attorney general. (Chip Somodevilla/Getty Images)

One of the staffers involved was Matthew Whitaker, according to Weinsheimer. Whitaker, who briefly led the Justice Department between Sessions and Barr, didn’t respond to requests for comment.

BOP made plans to use pentobarbital. But it had also become scarce as manufacturers shunned its use in executions. States resorted to using “compounding pharmacies,” which mix ingredients for custom-made drugs.

BOP planned to import powdered pentobarbital from a “foreign FDA-registered facility” but later turned to a domestic bulk manufacturer. It also hired a compounding pharmacy to create an injectable solution. The government has guarded vendor identities, since public scrutiny could pressure them to back out.

A sample of the compounding pharmacy’s solution failed a quality test by an outside lab. But according to Weinsheimer, BOP said the problem was the lab, not the compound itself, and sent a new batch to a different lab.

BOP also explored using a different drug: the opioid fentanyl. In a March 2018 memo, then-BOP Director Mark S. Inch said BOP found a fentanyl supplier but warned “there may be negative publicity associated with using a drug to which so many Americans are addicted.”

For unclear reasons, BOP planned to have the executions carried out by two private contractors, rather than government employees. The government won’t disclose the contractors’ names or profession, and it pays them in cash. “If we didn’t pay them in cash,” a BOP lawyer said in a deposition, “they probably wouldn’t participate.”

“Killing Is Not a Treatment”

BOP officials knew their new drug choice would face resistance in court; lawyers have argued that pentobarbital would flood prisoners’ lungs with froth and foam, inflicting pain and terror akin to a death by drowning. BOP worked to fend off those concerns with expert witnesses who would say the drug was humane.

Finding these experts was challenging because most doctors consider it unethical to have anything to do with executions. The American Medical Association and other professional groups prohibit any participation, including the “rendering of technical advice.”

“Doctors are experts in unkilling, we are not experts in killing,” said Dr. Joel Zivot, an anesthesiologist at Emory University who has testified that lethal injection of pentobarbital simulates death by drowning. “This is why lethal injection is so problematic. It impersonates a medical act, but it’s not about medicine at all. Killing is not a treatment. An execution chamber is not an operating room.”

The Justice Department would later claim that “BOP consulted with medical professionals” (plural). That is not exactly true. BOP engaged two expert witnesses. The first, Craig W. Lindsley, is a professor of chemistry and pharmacology at Vanderbilt University. He is not a physician or licensed care provider; he has a Ph.D., not an M.D. In May 2017, Lindsley wrote a two-page report for BOP stating that pentobarbital will take effect so rapidly the prisoner wouldn’t feel a thing. He concluded, “Of all the available options and protocols in use today, I believe this protocol to be the most humane.”

Lindsley declined to be interviewed, citing Justice Department instructions. He did not disclose his compensation, but he was hired through a contract with a consulting firm called Elite Medical Experts that the Justice Department paid $22,000 in the same month as Lindsley’s report. The company’s CEO, Dr. Burton Bentley II, did not respond to requests for comment.

BOP’s second expert witness was a medical doctor: retired California anesthesiologist Joseph F. Antognini. Antognini has said he personally opposes the death penalty as a Catholic. But he also said he believes states have a right to his advice, comparing it to criminal defendants’ right to a lawyer.

Antognini has not addressed how he squares his testimony supporting executions with his Hippocratic oath. He did raise ethical considerations when he was asked to compare lethal injection to poison gas (a comparison between methods, like the one Lindsley made). “Recommending one method of execution over another, I guess that’s an ethical issue for me,” he said in a deposition.

Antognini’s rare position as a doctor vouching for lethal injection has made him a valuable witness in capital cases, including a Missouri case that later reached the Supreme Court. Antognini charges $400 an hour, $2,000 for a deposition, $4,000 per day in court and $2,000 per travel day.

In a deposition in the Missouri case, which involved the same lethal injection drug, Antognini testified that pentobarbital would make the prisoner unconscious within 30 seconds and people can’t suffer while they’re unconscious. “Can you explain to me how you would have suffering in somebody who is unconscious?” Antognini said. “I don’t see how that can happen based on my understanding of how all this works.”

Yet just a few minutes earlier Antognini had acknowledged, “We don’t know how anesthetics work.” Scientists understand how the drugs act in the brain on a cellular level, he explained, but not how they produce unconsciousness.

The Supreme Court accepted Antognini’s pentobarbital testimony. Other courts have been skeptical, giving his views “little or no weight.”

Reached by phone, Antognini said he was busy and agreed to talk at a later time. When that time came, Antognini declined to comment. “I wish you all a very merry Christmas,” he said, and hung up.

“Some Objective Factor”

By the summer of 2019, BOP determined that its drug supply was secure and it was ready to schedule executions. The agency gave Barr a list of 14 prisoners, out of about 60 on death row, who had exhausted their appeals. The Justice Department has refused to disclose this list. (Court records include a list from 2017 with 10 names, but the two lists cannot overlap entirely, because two of the prisoners whom Barr chose in 2019 were not on the earlier one.)

 

As deputy attorney general, Jeffrey Rosen assisted Barr in choosing five prisoners to execute from a list of 14, according to court records. Rosen is set to become acting attorney general when Barr steps down Wednesday. (Photo by Olivier Douliery-Pool/Getty Images)

Barr decided whom to execute with the help of the then-deputy attorney general, Jeffrey Rosen (set to become the acting attorney general), and aides including Paul Perkins and Timothy Shea (who later became acting U.S. attorney in Washington and now leads the Drug Enforcement Administration). The officials mulled executing all 14 but decided to start with five.

They chose the five, according to Weinsheimer’s deposition, for the same reason that Barr would publicly announce on July 25, 2019: They were “convicted of murdering, and in some cases torturing and raping, the most vulnerable in our society — children and the elderly.”

“There was an effort to find some objective factor in looking at the 14,” Weinsheimer said. “This was an aggravating factor that seemed to apply.”

In fact, that wasn’t true. Barr’s summary of the case of Daniel Lee incorrectly said he “murdered a family of three, including an 8-year-old girl.” The undisputed evidence was that Lee refused to kill the girl, so his co-defendant did. The co-defendant was sentenced to life in prison, while Lee was sentenced to death.

Weinsheimer said he didn’t know if there were other death row inmates who murdered children or the elderly. There were, according to a review by ProPublica.

Barr’s announcement also justified the executions on the basis that “we owe it to the victims and their families.” This also was not true in Lee’s case: Family members of Lee’s victims have publicly come out against executing him (as have the prosecutor and judge). The families of the four other prisoners’ victims supported the executions. In any event, the Justice Department didn’t consult victims’ family members when deciding who to kill, Weinsheimer said in his deposition.

The Justice Department also didn’t review the prisoners’ physical or mental health as part of its selection, according to Weinsheimer. One of the five, Alfred Bourgeois, had an IQ between 70 and 75, and his lawyers argued he is intellectually disabled. Another, Wesley Purkey, suffered from schizophrenia, dementia and Alzheimer’s disease, which his lawyers said made him unable to understand the reason for his execution.

Weinsheimer said the Justice Department decided to schedule the executions in the order that the prisoners were convicted, with the oldest first. However, they were not the oldest capital convictions, according to ProPublica’s review.

As for the executions’ timing, Barr’s announcement did not explain why, after a 17-year hiatus, the first three executions were scheduled within one week of each other. BOP officials voiced concern that these back-to-back executions would put more stress on their staff, the agency’s lawyer said in a deposition. (After a 2014 lethal injection in Oklahoma went gruesomely awry, the state’s investigation concluded that one contributing factor was having two executions scheduled that day.) Nevertheless, BOP’s lawyer said the agency booked three executions in one week because of “guidance from the attorney general.” Weinsheimer denied that Barr gave a “direction” on how to schedule the executions.

“Moot”

Even prisoners who have exhausted their post-conviction appeals can go back to court to try to stop their execution once a date is set. A federal judge in Washington put the executions on hold in November 2019. The appeals court disagreed in April. Barr swiftly rescheduled the executions for Lee, Purkey and Dustin Honken, in a span of four days in July. In this announcement, Barr cut two from the original five (Bourgeois and Lezmond Mitchell), and added a new prisoner, Keith Nelson. The unifying theme, he said, was “murdering children,” repeating his inaccurate summary of Lee’s crime.

On the day Lee was supposed to be executed, the judge in Washington ordered a new injunction. The appeals court declined to intervene, so the Justice Department went to the Supreme Court.

The Supreme Court’s conservative majority has been consistently hostile to last-ditch reprieves in capital cases. “Courts should police carefully against attempts to use such challenges as tools to interpose unjustified delay,” Justice Neil Gorsuch wrote in a 2019 case (the Missouri execution that featured Antognini’s testimony).

 


The conservatives have been equally unsympathetic to objections to lethal injection, considering that the court never found a constitutional problem with “traditionally accepted methods” such as hanging, electrocution and firing squad. “The Eighth Amendment does not guarantee a prisoner a painless death,” Gorsuch wrote in the Missouri case, “something that, of course, isn’t guaranteed to many people, including most victims of capital crimes.”

Early on July 14, five justices ruled that Lee’s execution could go ahead, saying that he had “not made the showing required to justify last-minute intervention.” Technically, Lee’s death warrant had expired at midnight, but the government issued a new same-day notice and went ahead with the execution around 4 a.m. Lee’s lawyers protested that there was still a separate court order that the Supreme Court hadn’t addressed, so officers left Lee on the gurney while government lawyers worked to wipe out that last obstacle. “That cautious step, taken to ensure undoubted compliance with court orders, is irreconcilable with the suggestion that the department ‘rushed’ the execution or disregarded any law,” Rosen, the deputy attorney general, wrote in a July op-ed. Less than an hour after a federal appeals court granted the government’s request, Lee was dead.

“Today, Lee finally faced the justice he deserved,” Barr said in a statement.

Later that day, at a White House press conference, Trump referred to Lee’s execution as part of his attack on the Democratic Party platform. “Abolish completely the death penalty,” he said. “You know what happened today with regard to the death penalty.”


Trump’s campaign was more explicit in an email blast the next day. “President Trump ensured total justice for the victims of an evil killer,” the campaign told supporters. “With the Trump administration slated to administer total justice to three more child murderers and rapists in the coming weeks, Biden should explain why they should be protected from paying the ultimate price for their evil, horrific crimes.”

That same day, July 15, Purkey was scheduled to die. Again after 2 a.m., a sharply divided Supreme Court lifted the outstanding court orders. Purkey’s lawyers rushed to federal district court for a new emergency stay on the basis that his Alzheimer’s and schizophrenia left Purkey unable to understand his sentence. But the Justice Department made clear that it would not wait to let that petition play out.

In a “courtesy notice” emailed to one of Purkey’s lawyers at 2:03 a.m. on July 16, a senior Justice Department official said the execution would go forward at 4 a.m., despite the new court filing. “Your colleague asked me whether the govt would delay the execution to allow the judge … to consider the stay application,” Hashim Mooppan, counselor to the solicitor general, said in the email. “In light of the Supreme Court’s orders today and on Tuesday morning, the government will not delay the execution further. Absent a court order barring the execution, the govt intends to proceed.”

The judge, in fact, did say the execution should halt while he considered the motion, but he swiftly denied it, and chastised Purkey’s lawyers for “procedural gamesmanship.”

“Despite the risk of irreparable harm to Mr. Purkey, the balance of equities do not weigh in his favor,” wrote the judge, James R. Sweeney II, who was appointed by Trump in 2017. Sweeney’s brief order did not specify what equities he weighed, but the government had argued that Purkey’s sentence had already been upheld multiple times.

As Purkey’s lawyers rushed to appeal, BOP went ahead with placing an IV in Purkey’s arm. He was dead before the appeals court made its ruling. The court later dismissed Purkey’s appeal as “moot” because he was already dead.

Honken died the next day, the third that week.

“A Personal Interest”

As prisoners desperately fought the government’s execution plans in court, they argued that overdosing on pentobarbital would be so excruciating that even death by firing squad would be less painful. Justice Department lawyers chafed at the suggestion. In response, they said firing squads were “more primitive” than lethal injection, and reintroducing them would be a “regressive change.”

Two weeks later, however, the agency took steps to do just that. The Justice Department proposed a regulatory change to authorize execution methods besides lethal injection, including firing squads, which remain legal in three states. “This proposed rule would provide the federal government with greater flexibility to conduct executions in any manner allowed by federal law,” the agency said. “The proposed rule would therefore forestall potential future arguments by prisoners in litigation.”

The proposal made other tweaks to the department’s regulations to address issues raised in litigation — not exactly admitting error, but tacitly acknowledging cracks in the government’s legal foundation.

While the proposal was formally signed by Barr, its point person was Laurence E. Rothenberg, a deputy assistant attorney general in the Office of Legal Policy. Though Rothenberg is a career employee, his LinkedIn profile picture shows him standing proudly with Barr, and he has staked out a public position supporting the death penalty.

Laurence Rothenberg was the point person on revising the federal government’s execution regulations. His profile picture on LinkedIn shows him with Barr holding a Justice Department award.

In a 2004 law journal article (also published by the conservative Federalist Society), Rothenberg described the death penalty as “intrinsically just.” He also defended executing juvenile offenders against claims that doing so violates international law.

In another article, from 2006, Rothenberg and a Justice Department colleague attacked common critiques of the death penalty. “The extent of racial disparities in capital cases in the United States has been vastly exaggerated,” they wrote.

Rothenberg has said that his criminal justice views are shaped by a family tragedy. “I also have a personal interest in, and commitment to, this work, as the son of a murder victim,” he said in 2009 congressional testimony about a victims’ rights law. In 1974, Rothenberg’s parents were shot, his father fatally, on a trip to the Virgin Islands. The shooter was convicted of murder and sentenced to life in prison.

The regulation became final the day after Thanksgiving.

“It Didn’t Go Well”

For the next round of executions, Barr’s announcement simply said that the two prisoners, William LeCroy and Christopher Vialva, were “convicted of murder.” He gave no other reason or explanation for their selection.

Earlier this month, the Justice Department executed Vialva’s co-defendant, Brandon Bernard. Bernard was 18 at the time and did not pull the trigger. The prosecutor and five of the nine surviving members of the all-white jury that convicted Bernard, who is Black, have since said his life should be spared. The reality star Kim Kardashian tried unsuccessfully to convince Trump to commute his sentence.

At the federal death row facility in Terre Haute, Indiana, the inmates are allowed to leave each other bequests, according to The New York Times. Alfred Bourgeois received his friend Bernard’s wristwatch for the single day before it was his turn to die.

Bourgeois was strapped to a gurney in the middle of a green-tiled room, an IV in his arm. As the pentobarbital flowed, Bourgeois’ stomach heaved and popped, according to George Hale, a public radio reporter who witnessed the execution. The apparent gasping for breath was consistent with how lawyers have described the drowning sensation that the injection could cause.

Bourgeois’ death took 28 minutes, almost twice as long as Bernard’s. Hale said, “If Alfred Bourgeois was suffering that night, he suffered for a long time.”

There are three more federal executions scheduled in January — eight, six and five days before Biden’s inauguration.

Update, Dec. 24, 2020: This story was updated to add a Trump campaign email and an email from a Justice Department official to one of Wesley Purkey’s lawyers.

Lexi Churchill, Derek Willis, and Lydia DePillis contributed reporting.

 

Wednesday, December 23, 2020

Ancient DNA Shows Humans Settled Caribbean in 2 Distinct Waves


Millions of people living on the islands today inherited genes from the people who made them home before Europeans arrived.

Taíno ceramic vessels from the eastern Dominican Republic, circa A.D. 1400. Credit: Kristen Grace


 

By Carl Zimmer, The New York Times

When Dr. Juan Aviles went to school in Puerto Rico, teachers taught him that the original people of the island, the Taino, vanished soon after Spain colonized it. Violence, disease and forced labor wiped them out, destroying their culture and language, the teachers said, and the colonizers repopulated the island with enslaved people, including Indigenous people from Central and South America and Africans.

But at home, Dr. Aviles heard another story. His grandmother would tell him that they were descended from Taino ancestors and that some of the words they used also descended from the Taino language.

“But, you know, my grandmother had to drop out of school at second grade, so I didn’t trust her initially,” said Dr. Aviles, now a physician in Goldsboro, N.C.

Dr. Aviles, who studied genetics in graduate school, has become active in using it to help connect people in the Caribbean with their genealogical history. And recent research in the field has led him to recognize that his grandmother was onto something.

A study published Wednesday in the journal Nature, for example, shows that about 14 percent of people in Puerto Rico can trace their ancestry back to the Taino. Smaller numbers of people in Cuba (4 percent) and the Dominican Republic (6 percent) can say the same.

These results, and others like them based on DNA found in ancient Caribbean skeletons, are providing new insights into the history of the region. They show, for example, that the Caribbean islands were populated in two distinct waves from the mainland and that the human population of the islands was also smaller than once believed. But those living on the islands before colonial contact were not fully extinguished; millions of people living today inherited their DNA, along with traces of their traditions and languages.

Before the advent of Caribbean genetic studies, archaeologists provided most of the clues about the origins of people in the region. The first human residents of the Caribbean appear to have lived mostly as hunter-gatherers, catching game on the islands and fishing at sea while also maintaining small gardens of crops.

Archaeologists have discovered a few burials of those ancient people. Starting in the early 2000s, geneticists managed to fish out a few tiny bits of preserved DNA in their bones. Significant advances in recent years have made it possible to pull entire genomes from ancient skeletons.

“We went from zero full genomes two years ago to over 200 now,” said Maria Nieves-Colón, an anthropological geneticist at the University of Minnesota who was not involved in the new study.

The genes of the oldest known residents of the Caribbean link them with the earliest populations that settled in Central and South America.

“It’s a Native American population, of course, but it’s a very distinctive deep lineage,” said David Reich, a co-author of the study and a geneticist at Harvard Medical School.

But it’s not yet clear exactly from where on the mainland those early Indigenous Americans set sail in dugout canoes to reach the Caribbean islands.

“I don’t think we’re as close as we thought we’d be to an answer,” said Dr. Nieves-Colón, a co-author of another large-scale genetic study published in July.

Part of the problem is that scientists have yet to find ancient DNA in the Caribbean that is more than 3,000 years old. The other problem is that ancient DNA is still scarce on the Caribbean coast of the mainland. “There’s a lot we can’t see because we don’t have old DNA,” Dr. Nieves-Colón said.

About 2,500 years ago, the archaeological record shows, there was a drastic shift in the cultural life of the Caribbean. People started living in bigger settlements, intensively farming crops like maize and sweet potatoes. Their pottery became more sophisticated and elaborate. For archaeologists, the change indicates the end of what they call the Archaic Age and the start of a Ceramic Age.

Dr. Nieves-Colón and other researchers have found that the DNA of Caribbean islanders also shifted at the same time. The skeletons from the Ceramic Age largely shared a new genetic signature. Their DNA links them to small tribes still living today in Colombia and Venezuela.

It’s possible that the migrants from the Caribbean coast of South America brought with them the languages that were still being spoken when Columbus arrived 2,000 years later. We don’t know a lot about these languages, although some words have managed to survive. Hurricane, for example, comes from hurakán, the Taino name for the god of storms.

These words bear a striking resemblance to words from a family of languages in South America called Arawak. The DNA of the Ceramic Age Caribbeans most closely resembles that of living Arawak speakers.

In the Ceramic Age record, it becomes hard to find people with much Archaic ancestry. They seem to have survived in a few places, like western Cuba, until they vanished about 1,000 years ago. The people bearing Ceramic Age ancestry came to dominate the Caribbean, with almost no interbreeding between the two groups.

“It seems like the Archaics were just overwhelmed by the Ceramics,” said William Keegan, an archaeologist at the Florida Museum of Natural History and a co-author of the new study.

Dr. Keegan, who has been studying Caribbean archaeology for over three decades, said the new DNA findings had surprised him in many ways, giving him a host of new questions to investigate.

Over the course of the Ceramic Age, for example, strikingly new pottery styles emerged every few centuries. Researchers have long guessed that those shifts reflect the arrival of new groups of people in the islands. The ancient DNA doesn’t support that idea, though. There’s a genetic continuity through those drastic cultural changes. It appears that the same group of people in the Caribbean went through a series of major social changes that archaeologists have yet to explain. 

Dr. Reich and his fellow geneticists also discovered family ties that spanned the Caribbean during the Ceramic Age. They found 19 pairs of people on different islands who shared identical segments of DNA — a sign that they were fairly close relatives. In one case, they found long-distance cousins from the Bahamas and Puerto Rico, separated by over 800 miles.

That finding flies in the face of influential theories from archaeology.

“The original idea was that people start in one place, they establish a colony someplace else, and then they just cut all ties to where they came from,” Dr. Keegan said. “But the genetic evidence is suggesting that these ties were maintained over a long period of time.”

Rather than being made up of isolated communities, in other words, the Caribbean was a busy, long-distance network that people regularly traveled by dugout canoe. “The water is like a highway,” Dr. Nieves-Colón said.

The genetic variations also allowed Dr. Reich and his colleagues to estimate the size of the Caribbean society before European contact. Christopher Columbus’s brother Bartholomew sent letters back to Spain putting the figure in the millions. The DNA suggests that was an exaggeration: the genetic variations imply that the total population was only in the tens of thousands.

Colonization delivered a huge shock to the Caribbean world, drastically changing its genetic profile. But the Ceramic Age people still managed to pass on their genes to future generations. And now, with a population of about 44 million people, the Caribbean may contain more Taino DNA than it did in 1491.
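
That counterintuitive claim is simple arithmetic: a small average fraction of Taino ancestry, multiplied across tens of millions of people today, adds up to more copies of the ancestral genome than a population in the tens of thousands could have carried. A quick hypothetical check, where the 4 percent figure is an assumption for illustration and 50,000 stands in for "tens of thousands":

```python
population_today = 44_000_000
assumed_taino_fraction = 0.04   # hypothetical 4% average ancestry, for illustration
population_1491 = 50_000        # "tens of thousands," per the DNA estimate

genome_equivalents_today = round(population_today * assumed_taino_fraction)
print(genome_equivalents_today)                    # 1760000, about 1.76 million
print(genome_equivalents_today > population_1491)  # True
```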

“Now we have this evidence to show that we weren’t extinct, we just mixed, and we’re still around,” said Dr. Aviles.

His fascination with the research on Caribbean DNA led him recently to help found the Council of Native Caribbean Heritage. The organization helps people find their own links to the Caribbean’s distant past. Dr. Aviles and his colleagues have consulted with Dr. Reich and other researchers, both to discuss the direction of the research and to use it to understand their own histories.

Dr. Aviles and his colleagues have uploaded the ancient Caribbean genomes to a genealogical database called GEDMatch. With the help of genealogists, people can compare their own DNA to the ancient genomes. They can see the matching stretches of genetic material that reveal their relatedness.

Sometimes Dr. Aviles imagines explaining all this to his late grandmother. “But first I would apologize for not believing her,” he said, “because she was spot on.”

Friday, December 18, 2020

Recovering Old Age

Covid has laid bare our warehousing attitude toward the elderly. Have we forgotten what aging is for?

Joseph E. Davis and Paul Scherz

The test of a people is how it behaves toward the old.

– Rabbi Abraham Joshua Heschel

As the outbreak of coronavirus spread this past spring, the world of biomedical ethics exploded with journal articles, consensus statements, and blog posts arguing over the proper criteria for rationing ventilators and other scarce medical resources. The flashpoint came from some of the earliest pandemic guidelines, which appeared to promote discrimination against the elderly — the most likely to die from the disease.

In a widely cited statement, published online in the New England Journal of Medicine in late March, bioethicist Ezekiel Emanuel of the University of Pennsylvania and colleagues argued for a strategy for allocating medical resources that would maximize benefits by both “saving more lives and more years of life.” In practice, rationing on the basis of life-years strongly favors young people, who have more years left to live than the elderly and people with disabilities. Given “limited time and information” in an emergency situation, the authors suggested, saving the greatest number of patients who have “a reasonable life expectancy” is more important than improving length of life for those who do not. The overall effect of this strategy would be “giving priority” to those “at risk of dying young and not having a full life.”

In response to proposals like this, and to even more directly discriminatory rationing strategies that recommended age-based cutoffs for certain treatments, the U.S. Department of Health and Human Services published a bulletin declaring that rationing based on age or disability would be illegal for any HHS-funded health programs, including Medicare and Medicaid. Similarly, rightly fearing that a focus on maximizing life-years would reinforce cultural bias that values the lives of the young over the old, many conservative bioethicists spoke out against age-based criteria. For instance, according to a joint statement issued by the Witherspoon Institute, all lives should be treated equally, for all are of “inherent, equal, and indeed incalculable value.” A policy preferential to the young would be unethical: it would send the message that society views the lives of its seniors as less valuable and less worth living, and it would lead to further devaluation and inequity.

But while allocation issues put elderly people on the Covid-19 bioethical agenda right from the start, aging itself, as a critical part of the human experience, has hardly been engaged at all. In a pandemic, “difficult and heart-wrenching decisions may have to be made,” observes the Witherspoon statement, and guidance in medical matters will reflect not only the immediate demands of care for the aged but also the “sort of society we want ours to be.” But what sort of society is it that properly regards the elderly?

Our response to the pandemic does in fact reveal something critical about our society and how we understand aging — specifically how we refuse to acknowledge the unique circumstances of older adults and to grant them true moral agency. In a time of pandemic and a rapidly aging population, we find ourselves profoundly impoverished. If we want our society to be one that protects older adults and treats them as full social members, then we need not only ethical policies but ethical frameworks in the fullest sense — guides to social practices and family relations and ways of life — that will cease to exclude the old, and reestablish our ties with them.

Statistical Lives and Autonomous Choice

While the debate over ventilator distribution is no longer urgent (ventilator use has declined sharply for Covid-19 treatment), it illustrates the limitations of current ethical approaches, and it remains relevant as shortages of hospital beds or medications may again arise.

Consider the limitations of the two approaches above to deriving maximum benefit in the event of a shortage. Emanuel and colleagues in the NEJM statement define the greatest benefit in terms of a balancing of lives versus life-years, whereas the conservative ethicists simply define it in terms of lives; yet “lives” here, unmarked in any way, verges on a similarly statistical meaning. In neither model do the ethicists explicitly accord meaningful importance to the life course. Neither acknowledges old age as a specific stage of life with particular vulnerabilities and advantages, characteristic shortcomings and virtues, responsibilities and obligations. Benefit maximization in both models ultimately hinges on the proper measure for abstraction and equalization, whether lives per se or lives qualified by life-years. While striking life-years from the equation avoids bias against the elderly — a just and laudable correction — it does not account for what is distinctive about their lives. The equal persons remain generic, defined from the side of caregivers and bureaucratic calculation. In this framing of the debate, the lives of care receivers are effectively reduced to moral passivity and an empty chronicity, unmarked by social or biological rhythms.

For many, the only quality that marks the aged is their declining capacity to exercise choice. Emanuel and coauthors encourage patients to choose against ventilator treatment when it would contravene the “future quality of life” they would find acceptable. Talk of “quality of life” sounds innocuous, but it is more than a way to speak about the right of a patient to refuse treatment. It has replaced the older formulation of “sanctity of life,” and it conveys moral assumptions about the burden of disability, efficient resource allocation, and the sort of life that is worth living.

A choice between life and death is not unique to the pandemic; it occurs in many other areas of contemporary medicine. Most obviously, the choice is presented through the increasing availability of assisted suicide. Yet, as the anthropologist Sharon Kaufman has shown in her 2015 book Ordinary Medicine, everyday practice can also confront older adults with a choice between life and death. When discussing possible transplants or implantable medical devices with elderly patients, for instance, Kaufman finds that doctors frequently frame the decision in terms of added years of life. The surgery, a doctor might say, will give you three more years. In such cases, the patients seem to be offered a choice, albeit one fashioned to indicate that the right decision is medical intervention. Such choices present “improvements in length of life,” to borrow a phrase from the NEJM statement, as the highest good one could choose for oneself, as an end in itself. The meaning of health in a well-lived life is narrowed and replaced with an ad hoc utilitarian calculus of life-years maximization.

The freedom to decide about treatment, to have one’s choices honored, is clearly essential. But without frameworks of ethical reasoning and social practice that can inform the content of the aging person’s choices, the vaunted autonomy rings hollow, even coercive. And such frameworks for an ethical life — frameworks that can speak to the sense of finitude, desire for wholeness, and ultimate concerns of the aged, as well as address their responsibility and obligations to others, to the common good, and to the transcendent — have gone missing.

Lost Images of Aging Well

At other times and in other places, traditional ways of life, social classification, and metaphysical order gave shape and coherence to the course of life, providing a picture of aging well. Each period of life had its activities, duties, and forms of flourishing.

The periods of aging, decline, and the approach of death were especially critical. They involve some of the most complex and unsettling aspects of human experience, and so the need for a strong community to provide direction and meaning is most acute. Many social and cultural practices, such as kinship cohorts, rites of generational transition, filial duties to ancestors, hierarchies that honor wisdom, social customs that guide in grieving, and arts of suffering and dying, provided support for this time of life.

Such guidance has not necessarily meant that old people have been given special respect or honor, or that old age has been treated as a social category deserving of special treatment or public concern. Social histories tell an ambiguous and varied story. But norms and practices of aging and dying did provide direction and shared expectations about how to live. And while the ideal form of character in the face of aging and death has varied by time and school of thought, the social philosophies, religious communities, and civic cultures helped guide people in a process of preparing for aging and dying “well.” Despite much variation, the goals were broadly similar.

From antiquity up through the Renaissance, for instance, the different ages of life, including old age, possessed characteristic virtues and vices, rights and obligations. The aged were actively involved in the life of society as responsible agents with important roles. In these societies, largely agrarian, owners would maintain control over their property into old age unless physical or mental decline made it impossible. Even then, they would reside on the property, sometimes within a relationship of explicit contractual obligation with their heir. This time of life was not segregated or marginal or culturally unstable; older people were prepared and knew their obligations and their place in the cycle of generations.

The Four Seasons of Life: Old Age, “The Season of Rest” by Charles R. Parsons and Lyman W. Atwater (1868)
U.S. Library of Congress

In his treatise On Old Age, Cicero responds to those who lament old age and its constriction of activities and pleasures. They grieve, he argues, only because they value the often illusory pleasures enjoyed by youth over those proper to old age, such as reflection, leisured study, and conversation. And they neglect the important services to the community provided by the aged, like advising in the Senate, directing affairs on their farms, and instructing the young in virtue. In fact, care for future generations is what most distinctly marks the moral duties and virtues of the aged. As Cicero remarks,

And if you ask a farmer, however old, for whom he is planting, he will unhesitatingly reply, “For the immortal gods, who have willed not only that I should receive these blessings from my ancestors, but also that I should hand them on to posterity.”

Rituals and spiritual disciplines that brought the necessity of sorrow, infirmity, and death into everyday life nurtured the distinct experience of growing old. For example, the memento mori, the reflection on mortality, has been an important practice across many cultures. Anthropologist Robert Desjarlais, in his 2016 book Subject to Death, describes how children in a Nepalese Buddhist community play at death, initiating themselves in a preparation for old age and dying that continues throughout their lives. Greek philosophy in both its Stoic and Platonic forms took the preparation for death as its main activity, and many strands of Christian spirituality took up this philosophical meditation on death, looking to one’s end.

In none of these traditions did such practices encourage a morbid fascination with death or a depressing fixation on nothingness. Quite the contrary: remembering death as an unbending reality fostered a sense of generational solidarity and freed people to accept their contingency and live moral lives in the present without fear of the future. This consciousness encouraged all, and especially the old, to eschew fleeting, temporal concerns, like wealth or ambition, for more enduring goods such as knowledge, virtue, and strong relationships.

The connection of the elderly to community and the economy would undergo radical changes in the era of industrialization, from shifts in the occupational system to alterations of family and kinship structure to the rise of welfare schemes and age-graded pension plans. Beginning in the nineteenth century, the life course would become increasingly fixed by chronological boundaries, with the transition to old age marked by the end of productive work in either the factory or the household. Increasing geographical mobility began to undermine stable relationships to place and the integration of grandparents into the daily life of the extended family. Rather than defining the aged as community members with agency and social responsibility, social policy increasingly defined them as persons in need of care. Social science did its part, offering accounts such as “disengagement theory,” which defined as natural the increasing marginalization and sequestration of the old and the need of society to move on without them.

In all these ways and others, and despite greatly improved economic and medical conditions, those advanced in years lost the compelling forms of social standing and engagement that once entailed their duties toward the common good.

Our Deficit Stories

In our fluid, shifting culture, common symbols and shared traditions of old age have weakened or disappeared altogether. One need only consider the many current debates that frame aging as a problem to be solved — with “successful aging,” anti-aging interventions, “engineered negligible senescence,” or assisted suicide — to see our communal and ethical quandary. We have precious few resources for even thinking about the enduring questions of a good old age, its meaning as a distinctive stage in life’s journey, and how we might prepare for and embrace the twilight years of life. For the old, aging has become more private, less externally oriented, less bound to pre-established ties to others, and less deeply rooted in a field of ultimate meanings.

What has largely replaced a shared narrative of the life cycle is an autonomous individualism that blurs all age distinctions. While our dominant image of persons as free and unencumbered agents, as masters of choice, is inadequate at every stage of life, it is especially detrimental in the last. Much of what we mean by “autonomy” is to live so as to repress and deny many features of the human condition, such as our dependence on the care of others and the vulnerability of our bodies. Fostered by an illusion of control, we imagine that we are independent of those who sustain us. We recoil from terms that express boundaries, limitations, frailty, or the need for help as “ageist,” and we reject virtually all criteria to inform our choices beyond individual beliefs and preferences. We trap ourselves in a conception of the good and a mode of self-fulfillment that works against any positive conception of living in older age.

Reflecting the cultural valuation of autonomy as the preeminent good, old age is typically depicted in either of two contrasting deficit stories. The first is a deprecating and frightening story of growing old, of our steady deterioration, loss of control and dignity, and then ending, virtually imprisoned, with a medicalized death. A good example is provided by the same Ezekiel Emanuel, writing in the Atlantic in 2014. He wants to die at seventy-five, he says, because the “simple truth” is that “living too long is also a loss.” Old age, he explains,

renders many of us, if not disabled, then faltering and declining…. It robs us of our creativity and ability to contribute to work, society, the world. It transforms how people experience us, relate to us, and, most important, remember us. We are no longer remembered as vibrant and engaged but as feeble, ineffectual, even pathetic.

The evening of life is a shameful defeat, alien to our creative and productive selves and antithetical to the way we want to be related to and remembered. There is reason to believe that this common story, in light of both an aging population and proliferating anti-aging interventions, has intensified and increased negative stereotypes of old age over time.

The second story is an upbeat account of an ageless adulthood characterized by a continuation of self-reliance, productivity, and good health throughout old age, followed by a brief decline and death. This story about “successful aging,” told both in the popular press and in academic papers, is often contrasted with the decline story and presented as a liberating debunking of its negative and stereotypical myths of helplessness and decay. It is appealing because it accurately reproduces what the psychologist Erik Erikson called our “world-image” of “a one-way street to never ending progress,” such that “our lives are to be one-way streets to success — and sudden oblivion.” But far from being anti-ageist, this story of perpetual middle age also devalues the later years as an unfortunate time of life, without value in itself and much inferior to youth. It, too, confuses physical infirmity with moral failing, offers no positive guidance for engaging dependence or vulnerability, and retains the same cultural antagonism to the aging body and approaching death.

These two stories do not exhaust the possibilities, and there have been many important efforts in recent years to envision and live a more authentic and affirming old age. On the scholarly level, the prodigious work of Harry Moody, Thomas R. Cole, and Martha Holstein comes first to mind, but the contributions in literature, film, spiritual texts, and other areas are significant and growing. Yet the deficit stories remain culturally and commercially dominant and dovetail with far-reaching social and economic changes that have destabilized the practices of preparing people for old age — once a central cultural and philosophical task.

A Social Ethics of Aging

In our moment, we urgently need an ethics of aging that centers on the question of what a good life in its later years looks like — an old age that is lived well and that goes well. Toward this ethics there are currently only scattered contributions. Much of the bioethics literature might best be described as ethical reflections on issues that predominantly involve older people. As with the issue of rationing ventilators and other medical resources, this is an abstract ethics focused on patient rights, caregiver duties, and decision-making in the context of formal institutions. However necessary at times, it is not an ethics of everyday life for those navigating their twilight years. A substantive, normative ethics of aging must address old persons in all their complexity. It must go beyond a concern with autonomy, non-exploitation, and even caregiving. It must also consider the obligations and reciprocal responsibilities of the elderly.

While speaking of the responsibilities of the aged might seem inappropriate and even callous, it is in fact essential. We can make no real progress until we do. In a 1985 paper on “The Virtues and Vices of the Elderly” — still one of the few contributions on the subject — the distinguished ethicist William F. May provided the reason why. Excusing the elderly from moral responsibility or judgment, he wrote, “may subtly remove them from the human race” by treating them “condescendingly” and, in effect, as “moral nonentities.” A crucial “step toward reentry into community with the aged,” he observed, will therefore be taken “when we are willing to attend to them seriously enough as moral beings to approve and reprove their behavior” — when we are willing, in other words, to relate to them with genuine respect. Any worthy ethics requires a “positive conception of the soul,” to quote Iris Murdoch, “the purification and reorientation of which must be the task of morals.” We can’t talk about that process, about being good, about the cultivation of virtue unless we also acknowledge the characteristic ways in which the aged might fall short, beginning with the frequently self-defeating attitude of the old to being old.

Moral agency, community integration, and an ethics of mutual obligation are just what the older traditions sought to foster and maintain. They are what we so desperately need now. Again, the pandemic sheds a glaring light on our flawed orientation to the elderly and their complete marginalization. Our failure to care for, protect, and serve those in nursing homes and other residential care facilities, where about one percent of the U.S. population lives, has been widely documented and discussed. Covid has raced through such facilities, claiming, by a recent estimate, some 40 percent of all those who have died so far from the disease. Additionally, over the course of the pandemic, tens of thousands of older adults have died and will die not from the coronavirus itself, but from the consequences of our response to it. Those suffering from Alzheimer’s and dementia have been particularly hard hit. They often depend on consistent routines and close care from family members, and the disruptions from the unprecedented stay-at-home orders and the visitor restrictions have had devastating — and entirely foreseeable — consequences.

This recent damage from lockdowns only scratches the surface. Social isolation and loneliness are common in the older population and contribute substantially to poor health outcomes and mental distress. A survey of nursing facilities in 2017 found that some 40 percent of residents reported depressive symptoms. In the face of this silent pain, exhortations ring out to build strong connections and find community. Yet in the name of health and safety, the very sources of resilience and connection that for many make life worth living have been severed. Through these disease-prevention efforts we have been bringing about our seniors’ worst fears — not a fear of death but a fear of dying alone; and before that, of spending their final months languishing in near-complete social isolation. Great numbers, far beyond the institutionalized, have been denied the support and consolation of loved ones and the small comforts of everyday routine.

In our unilateral actions we see the profound limits of “autonomy” and the radically deficient understanding of the social nature of the person. Did it ever occur to officials to consult the elderly about their situation, or when “listening to the science” to seek the views of gerontologists and care professionals? With older adults sequestered, consigned to the past, and excluded from responsibility for the common good, we returned with a vengeance to the strong paternalism that the autonomy principle was supposed to abolish. Our actions revealed that autonomy only applies to very particular decisions surrounding life and death and does not extend to the institutional control to which the person must submit. Persons living in care facilities are already confined, already suffer multiple losses, are already often shelved and neglected. “What we owe the old is reverence,” Rabbi Abraham Joshua Heschel teaches us in a famous address, “but all they ask for is consideration, attention, not to be discarded and forgotten.” Not to be abandoned, as the Psalmist puts it, “when my strength fails.” To foster a genuine interdependence, procedural ethics and autonomy won’t help us. We need an ethics with moral content, one from which the common good has not been jettisoned.

On the one hand, such an ethics, rooted in our shared potentiality and vulnerability, and buttressed by a robust set of cultural concepts and practices, might once again center on the care of future generations. Already in the pandemic, older persons have volunteered to forgo scarce resources. Early in the crisis, for instance, popular news stories reported cases like that of the ninety-year-old Belgian woman who died after refusing a ventilator for the sake of access by younger patients. The authors of the joint Witherspoon statement recommend such generosity: “some patients should themselves consider — not out of legal or even strict moral duty, but rather as [an] … act of generosity — giving up access to something to which they are entitled so that someone else might have it.” However, in the absence of a larger ethical framework of aging, this recommendation appears as little more than an appeal to personal altruism. Within a renewed framework of generational care, by contrast, the courageous forgoing of treatment could reflect an acceptance of mortality, a recognition of gifts received, a readiness to make sacrifices for the common good.

On the other hand, and critically, such an ethics requires that younger generations again become present to and reincorporate those who are in the evening of life. The exclusion of the old has deep roots in our social and economic practices and is becoming ever more untenable as the population ages. Progress will begin with a broader, deeper recognition of our own mortality, dependence on others, and bodily vulnerability, a recognition that would begin to free the young from the fear that drives so many away from engaging with and making a place for the old. The Buddhist and Christian practices of meditation on mortality mentioned above both envisioned aging and dying as social processes requiring the assistance and prayers of family and friends. Being with the elderly is an essential part of one’s own preparation for moving through the life course and for learning the virtues essential to reckoning with the ordeals and finding the joys of old age. At its best, being with the old teaches us readiness for the unexpected, the importance of our attachments, and the value of presence even when presence might increase a risk to health.

Such would be an ethics in which care extends from the young to the old and from the old to the young. It would be an ethics framed not in terms of some controlling principle or single model, but in an approach to life in old age that recognizes it as a specific stage of life, embraces its distinctive features, and weaves it back into the social fabric from which it has been torn. Anything else, however unintentionally, leads to abandonment.

Joseph E. Davis is research professor of sociology at the University of Virginia and the director of the Picturing the Human project at the university’s Institute for Advanced Studies in Culture. He is the author of Chemically Imbalanced: Everyday Suffering, Medication, and Our Troubled Quest for Self-Mastery (Chicago, 2020) and the co-editor with Paul Scherz of The Evening of Life: The Challenges of Aging and Dying Well (Notre Dame, 2020). Paul Scherz is an associate professor of moral theology and ethics at the Catholic University of America and a visiting fellow at the Institute for Advanced Studies in Culture. He is the author of Science and Christian Ethics (Cambridge, 2019).