The United Nations has ended a campaign featuring Wonder Woman as an ambassador for women and girls, two months after the announcement was met with protests and a petition complaining that the fictional superhero was an inappropriate choice to represent female empowerment…. “A large-breasted white woman of impossible proportions, scantily clad in a shimmery, thigh-baring body suit with an American flag motif and knee-high boots” is not an appropriate spokeswoman for gender equity at the United Nations, the petition said.
—The New York Times, December 13, 2016
Perhaps the greatest service that the director Patty Jenkins does her protagonist in Wonder Woman, the Warner Brothers blockbuster released this June, is to give her a new set of clothes. The female superhero has been charged with various ideological impurities over the years—jingoism, a too-cozy relationship with America’s military-industrial complex, an excessively heteronormative lifestyle—but by far the most frequent complaints have been about her man-pleasing, bondage-inflected get-up. Those go-go boots! Those bracelets of submission! That quivering embonpoint! It’s hard to be taken seriously as a feminist icon when the only thing you’ve got to wear to work is a star-spangled corset.
The costume worn by Wonder Woman’s star, the Israeli actress and former beauty queen Gal Gadot, is altogether more stern. The kinky boots have been replaced by a pair of gladiatorial thigh-highs; the body suit, constructed out of some cunning alloy of spandex and bronze, is, if not quite armor, at least armor-themed. The outfit isn’t much less revealing, and only marginally more practical, than the old one. (It’s still strapless and her legs must still get rather chilly when she’s stalking villains in cold climates.) But it does at least communicate some martial ferocity and menace. Thus attired, Wonder Woman might plausibly intimidate even her haters at the UN.
Sadly, whatever fresh potency she has acquired from the wardrobe department is offset by the film’s anxious insistence on demonstrating the femininity that lies beneath her breastplate. (Both Jenkins and Gadot have acknowledged that their great goal was to avoid making Wonder Woman look like “a ballbuster.”) Au fond, we are repeatedly assured, Wonder Woman is a very simple, soft, “relatable” lady. She adores babies and ice cream and snowflakes. She is sweetly oblivious to her own beauty and its devastating effects on those around her. She has absolutely no problem with men. She loves men! In fact, once she’s left her Amazon family behind, she barely bothers talking to another woman for the rest of the movie. Gadot has real presence and charm as an actress—one longs to see her in something worthier of her talent. But the imperative to eradicate any hint of bossiness or anger from her character weighs heavily on the film, threatening to turn it into one long, dispiriting exercise in allaying male fears about powerful women.
There are some pleasures to be found in its 141 minutes—most notably, in the opening depiction of Wonder Woman’s Amazon childhood. The scenes set on the Amazons’ island home of Themyscira—envisioned here as a sort of second-century Canyon Ranch for lesbian separatists—have the enjoyably campy feel of a 1960s sword-and-sandal epic. All of the Amazons are blessed with excellent bone structure and deportment, and speak in the solemn, “for tomorrow we rise at dawn” locutions of Hollywood-style antiquity. They have been entrusted by Zeus with the task of defending the world against his rebellious son, Ares, and they spend their days honing their military skills for this purpose. (Their proud and elegant form of female aggression requires a lot of leaping through the air in the postures of avenging angels and hanging at half-mast from galloping steeds.)
Wonder Woman—or Princess Diana, as she is known to her people—is the only child in this happy island gynocracy, and her protective mother, Queen Hippolyta (Connie Nielsen), who claims to have created her by carving her out of clay and getting Zeus to breathe life into her, does not want her to become a warrior. Diana’s aunt, Antiope, played by Robin Wright and her phenomenal cheekbones, is tougher-minded: she knows that it is Diana’s unavoidable destiny to one day save the no-goodnik patriarchy from Ares. She has appointed herself Diana’s personal trainer and life coach, and is always exhorting her, in the manner of a Homeric-era Sheryl Sandberg, to aim higher and work harder:
You have greater powers than you know…. You expect the battle to be fair; the battle will never be fair…. Be careful in the world of men, Diana. They do not deserve you.
Alas, our sojourn with the fabulous ladies of Themyscira ends all too soon. One day, a World War I German fighter plane comes zooming through the magical force field that surrounds the island. The man in it, Steve Trevor (played by Chris Pine), isn’t really a German but an American spy who has just stolen a chemical weapon formula from an evil German scientist, and is now being pursued by a platoon of enemy soldiers. (Something incorrigibly twenty-first-century in Pine’s bearing keeps him from being entirely persuasive in this role; you can put this man in a foxhole on the Western Front and he still looks like someone on his way to the Coffee Bean and Tea Leaf for a mocha skim latte.) After a beachfront battle between the Amazons and the Germans (during which noble Antiope is fatally wounded), Diana uses her lasso of truth to find out Steve’s real identity and mission. On hearing his account of “the war to end all wars,” she becomes convinced that the time has come for her to go out and conquer Ares and, with Steve, she sets sail for England.
Our departure from Themyscira is sad for many reasons, not least because it marks the last time we will see any sun. Dominated by purplish-gray lighting, relentlessly louring skies, and lugubrious, CGI-enhanced battle scenes, the next two hours nicely simulate the experience of being trapped in a windowless video game arcade. “It’s hideous,” Diana says on first glimpsing the smoke-wreathed cityscape of London—a comment that might safely be applied to the rest of the movie.
Rumors of the heat and wit of the Diana–Steve partnership have been somewhat exaggerated. The comedy of their relationship is generated largely by her ignorance of early-twentieth-century manners, particularly as they pertain to relations between the sexes. Unlike the Diana of the comic book, who arrived in America already au fait with the social mores and politics of the place, this Diana is a stranger in a strange land, perpetually and adorably perplexed by the ways of men. (The film’s screenwriter, Allan Heinberg, apparently took his inspiration for her fish-out-of-water predicament from Disney’s Little Mermaid.) She doesn’t know that it is improper to ask a man questions about his anatomy while gazing coolly at his naked form in the bath; she is unaware that an invitation to sleep with someone means something more than sleeping next to them.
Diana is also oblivious to the fact that in the London of 1918, her sex radically limits her freedoms. She cannot see why she would be barred from attending an all-male parliamentary meeting, or why she would be expected to constrain her waist with a corset. (A bit rich, this, given that the Wonder Woman costume performs much the same function.) When Captain Steve’s secretary, Etta Candy, explains that her job involves doing whatever her boss asks her to, Diana frowns and remarks, “Where I come from, we would call that slavery.”
This—a sly reference to the ignominious moment in comic book history when Wonder Woman was relegated to being secretary of the Justice League—is a feminist joke of sorts. But it’s not a joke that Diana gets. Unlike the comic book Diana, who was always dashing about giving pep talks to abused wives (“Get strong! Earn your own living!”) and reporting back to her mother on the progress of women’s rights, Diana remains blissfully ignorant of the women’s cause. The male sidekicks who accompany her and Steve to the war in Europe teach her about racial prejudice, the plight of Native Americans, and even the horrors of PTSD, but somehow the news that women don’t have the vote evades her.
Maintaining her ignorance is of course a quite deliberate maneuver—part of the film’s scrupulous endeavor to keep any hint of ball busting at bay. (“It was important to me,” Gadot told Entertainment Weekly, “that my character would never come and preach about how men should treat women. Or how women should perceive themselves.”)
A similar effort to avoid having Diana become too domineering is evident in the careful way that she and Steve are presented as equal partners in their mission. One good man, apparently, is equal to an Amazon demigoddess. (Gadot: “We didn’t want to make Steve the damsel in distress.”) If Diana has the muscle, it’s Steve who has the tactical sense and the job of mansplaining the true nature of their mission. (She’s under the impression that if she manages to kill Ares, she will end war forever.)
Steve is also, it turns out, the person who gives her the correct moral position on man’s inhumanity to man. At the climax of her final, set-piece battle with Ares (who is not the German general she had initially fingered, but a British politician posing as her and Steve’s friend), Ares tries to persuade her that human beings are too corrupt and nasty to deserve her help—an idea initially proposed by Antiope. But Steve’s selfless actions and the power of their newly blossomed love have taught her to reject such cynicism. “It’s not about deserve, it’s about what you believe,” she says. “And I believe in love.”
The exact meaning of this homily is somewhat obscure. It has a Clintonian “Love Trumps Hate” ring to it, certainly. But why it compels her to take mercy, at the last minute, on Dr. Poison, the crazed German scientist who has been plotting to kill thousands with a lethal poison gas, is a mystery. What is the moral equation of ruthlessly dispatching hundreds of German grunts, only to spare the architect of the war’s most dastardly tactics? Never mind. The important thing is that it is Steve and the lightning strike of romantic love that has given her this wisdom.
An astonishing number of women critics have reported being moved to tears by Gadot’s performance. They have hailed Wonder Woman as an inspiring vision of female strength; a landmark in pop-cultural depictions of woman; an exultant portrait of pussy power in excelsis, perfectly timed to rouse our spirits in the dark era of Trump. But the film is far too cautious and focus group–tested an enterprise to be any of these things. Like so many recent girl-power extravaganzas that seek to celebrate what a long way we’ve come, baby, it ends up illustrating precisely the opposite.
Wonder Woman in the comics was famously enfeebled during the 1950s when a set of new writers took over and turned her into a fashion model, a babysitter, an agony aunt. Wonder Woman does nothing so crude. It allows its heroine all the trappings of free, courageous, independent womanhood. It even cheers her on when she bashes up men. It merely propagates the unhelpful myth that if a woman is nice enough, pretty enough, feminine enough, she can do such things without ever causing offense, or being called a bitch. Really, if you want feminist inspiration, you’re better off skipping Wonder Woman and going back to watch the wiseacre heroines of the 1940s: the ones played by Bette Davis, Katharine Hepburn, Rosalind Russell, and Barbara Stanwyck. They were wittier and gutsier and not half as worried about busting balls.
Though the collapse of the seven-year Republican effort to kill off the Affordable Care Act came in one of the most dramatic moments in US Senate history, deeper currents had been running against the Republicans’ serial attempts to repeal Obamacare. The failure was a testament to what can happen when the party taking control of the government seeks to overturn a major advance by the prior administration without any coherent idea of what it will do instead. In their determination to repeal a law greatly expanding the federal government’s commitment to help people obtain decent health care, the Republicans had gotten out of touch with the opinion of the people.
The Republicans had pledged to repeal the Affordable Care Act from the moment it was signed into law by Barack Obama in March 2010, and they’d voted again and again to get rid of it—when it didn’t count, since Obama was president and had a veto. But when a Republican—one who had promised to undo the ACA—won the presidency, with both the Senate and the House also in Republican hands, Congress had to deliver. In trying to muscle through a momentous change in the law in ways that would affect tens of millions of people, Republican leaders disregarded the norms of democratic lawmaking. As the August recess beckoned and amid some particularly odd legislative proceedings, the Senate voted down a series of proposals to severely cut back on what had been given to the people seven years before. In the end, the ailing John McCain cast the deciding vote and, along with two other Republicans and a united Democratic Party, delivered a colossal blow to President Trump.
Trump badly needed a victory. Six months into his presidency, he hasn’t had a single major legislative achievement. Other Trump priorities—revising the tax code, raising the debt ceiling, passing a budget—have lingered while the health care bill preoccupied Congress. Trump’s infrastructure program, such as it was, has more or less disappeared, despite the administration’s “Infrastructure Week” in early June, which was largely overtaken by fired FBI director James Comey’s testimony on Capitol Hill. Trump’s White House is a shambles, with its internal warfare increasingly spilling into open ferocity. Moreover, the FBI investigation into his and his campaign’s dealings with Russia in connection with the 2016 election is growing more menacing; the noose is tightening. The distracted president has been hurling insults at his attorney general and hatching plots to get rid of the investigation—a highly dangerous thing for him to try to do.
Numerous critics have said that the White House was unwise to begin its congressional efforts on such a divisive issue as health care. But the Republicans have promised to repeal Obamacare in election after election; no issue has been more useful in stirring up their base. Trump himself had promised to begin his presidency with repeal. Republican congressional leaders also needed the repeal of taxes on the wealthy that would be part of their proposals to cripple Obamacare so that they could use that money to cut taxes later in what they chose to call a “tax reform” bill. The tax cuts to be proposed in that bill mattered most to the Republican leaders, in particular House Speaker Paul Ryan, who though he had a shaky relationship with the president—or perhaps because of it—would exercise great influence on the policies of the new president.
The Republicans’ implacable determination to put an end to Obama’s proudest legislative achievement has had to do with disdain for our first black president as well as resistance to such an expansion of government. Thus “Obamacare” was intended as a derogatory nickname. But they didn’t reckon on two things: that the program would become popular once a large number of people signed on to it; and that after two terms Obama would end up one of our most liked presidents.
The Republicans are particularly adept at injecting truisms into the ethos that aren’t true. One example is their insistence that Obamacare had been “rushed through Congress,” had been “shoved down our throats.” In fact, the passage of the bill came after more than a year of deliberation and was the subject of dozens of hearings in both houses and lengthy consideration in several committees. Republicans also complained that the ACA had been passed on a “partisan” basis, but that was because Senate Republican leader Mitch McConnell had insisted that Republicans not compromise with the Democrats or put their imprimatur on the bill. This was of a piece with the leadership’s overall strategy of opposing every Obama proposal. (McConnell, in a rare slip, said that his intent was that Obama be “a one-term president.”)
Over time it became clear that Republicans’ flat-out opposition to the ACA was leaving a great part of the public dissatisfied: too many people were enjoying the benefits of Obamacare. If Republicans were to continue to call for repeal, they had to at least appear to intend to offer some sort of substitute. So the party’s rhetoric shifted. The new motto was “repeal and replace” and we no longer heard about “government control of your health care.”
At a rally in February 2016, Trump promised, “Obamacare is going to be repealed and replaced. You’re going to end up with great health care for a fraction of the cost and that’s gonna take place immediately after we go in. Okay? Immediately. Fast. Quick.” The following month, he pledged on his website: “On day one of the Trump Administration, we will ask Congress to immediately deliver a full repeal of Obamacare.” (This was later deleted.) According to one count, Trump has promised to repeal and replace Obamacare at least sixty-eight times, often pledging that action would commence on that poor, overworked day one. It has bothered Trump mightily that Obama was far more popular and had achieved a great deal more at this point in his presidency than Trump has. Trump’s aides have tried to cheer him up by telling him he’s doing great, and it’s possible he believes them. Trump creates his own realities. In June, he claimed, “Never has there been a president, with few exceptions—case of FDR, he had a major Depression to handle—who has passed more legislation and who has done more things than what we’ve done.” Such statements are of a piece with Trump’s supposed larger inauguration audience than Obama’s and his miscount of the number of times he’s been on the cover of Time. The extravagant hyperbole obviously fills a need on Trump’s part.
One lesson of the Republicans’ entanglement with health care is that you can’t legislate a slogan. For nearly seven years, the Republicans appealed to their base by promising to get rid of the ACA, thereby raising money from unsuspecting followers. Now they needed a new line of attack. They simply declared Obamacare a failure. This has taken various forms—the program is in a “death spiral”; or this or that county doesn’t have any insurance companies that want to participate in its health care exchange. Trump himself has routinely deemed the ACA “dead.” The problem with the Republicans’ arguments, as Ezra Klein pointed out in a searing article in Vox in March, is that they aren’t true. For example, the respected Kaiser Family Foundation has reported that all of thirty-eight counties out of 3,143 nationwide—around 1 percent—are at risk of starting out in 2018 without health care exchanges for lack of participants.
The Republicans had trapped themselves. Despite various promises, beginning with Trump’s announcement during the transition that his own health care plan would be ready any day now, he failed to come up with one, and so it fell to the Republican leaders of each chamber to provide it. When in March a House bill that had White House backing had to be pulled from the House floor for lack of votes, Trump maintained, “I never said ‘repeal it and replace it within 64 days.’ I have a long time. But I want to have a great health care bill and plan—and we will and it will happen.”
Sometimes party discipline has gone overboard. Trump himself threatened a Republican congressman that if he didn’t support the Republican health care plan he would be “primaried”—faced with an opponent from the right for the nomination for reelection. Dean Heller of Nevada, the one Republican senator believed to be in real danger of losing his seat to a Democrat in next year’s midterm election, said he opposed the leadership plan; he was confronted with ads by a PAC backing Trump—until livid Republican Senate leaders told them to knock it off. These roughhouse tactics rarely work—targeted politicians don’t want to come off as supine—yet they live on in the minds of operatives blinded by partisanship or ideology, or both.
With 20 million people having signed up for Obamacare, and numerous governors favoring the program, especially its expansion of Medicaid, the Republican leaders’ proposals stopped short of completely eliminating the 2010 law. Nonetheless, the Congressional Budget Office estimated that the House-passed bill would cut off health care coverage to 23 million people over the next ten years, and a bill drawn up under McConnell for the Senate to consider would do so for 22 million. The greatest outrage was stirred by the provisions of both bills for deep cuts in Medicaid to make up for a repeal of taxes on the wealthy. This proposed transfer of benefits from the poor to the rich had numerous Republican members of Congress hiding from their constituents during recesses.
Fearful of the public reaction to their proposals, and not wanting to allow groups time to mobilize against them, Speaker Ryan and Senate Majority Leader McConnell most unusually drafted the bills in secret and tried to rush them to a vote in both the House and Senate. No committee hearings, no airing of the proposals to see how they stood up to criticism and challenge; the very committee system by which Congress normally functions was deemed irrelevant. The departures from standard legislative process not only failed to prevent vigorous protests; if anything, with some organizing groups on watch, the protests grew stronger. There was widespread fear that benefits people had come to depend on would be taken away—including benefits for the elderly, pregnant women, people with disabilities or needing nursing home care, all of which were enacted in Medicaid and Medicare legislation in 1965 and expanded by the ACA. The House and Senate bills had won the approval, respectively, of 12 and 17 percent of the electorate.
And then an unexpected thing happened: in June, a Kaiser Foundation poll found that for the first time a majority of Americans—51 percent—supported Obamacare. Additionally, according to the poll, a majority opposed deep cuts in Medicaid. More than seventy million Americans are helped by a combination of Medicaid and the Children’s Health Insurance Program (CHIP), which was passed with strong bipartisan support during the Bill Clinton administration after its hopelessly complicated health insurance program collapsed in the Congress. The rapid growth of Medicaid made the program a target for conservatives. In March of this year, after the Republican-controlled House passed a bill with huge cuts in Medicaid, a jubilant Paul Ryan, a supposed budget guru, told National Review editor Rich Lowry, “We’ve been dreaming of this…since you and I were drinking out of kegs.”
The Republicans’ insistence on “killing” the ACA (even though their proposals didn’t completely do that) made it impossible for Democrats to negotiate with them. Republicans hypocritically complained that the Democrats wouldn’t “come to the table”; but there was no table. Loose talk by political observers and commentators about how nice it would be if the two parties worked toward a bipartisan solution neglected this basic reality. It’s not that Democrats have been blind to certain problems in the ACA; for example, premiums have risen more than people had expected, and shoppers for coverage are finding fewer options. Some of the issues arise from the premises behind the law’s initial design. Obama wanted to get a bill through Congress and he deemed the political system not ready to absorb one of two alternatives: government-provided health insurance (the “public option”), or a form of Medicare coverage for everyone, which was backed by the late Edward Kennedy (a “single payer” plan). President Obama at first proposed offering people the alternative of a public option where the insurance exchanges were weak, or as an incentive to get them to work better, but dropped it under pressure from the insurance industry. Similarly, to keep the drug companies—organized in the powerful PhRMA lobby—from fighting the Obama plan, the White House agreed not to demand that they negotiate their drugs’ prices with the federal government or to allow the import of less expensive medications from Canada. These concessions created certain realities—in particular outrageous prices for medications, or higher increases in insurance premiums—for which the ACA is blamed.
Numerous proposals have been floating around to fix some of these problems, and it’s likely that the next great health care debates will be about alternative ways to provide government assistance for health care. But as long as one side insists that the ACA must be eliminated—until they drop the pretense that that’s what they’re trying to do, stop using the issue as a partisan rallying cry, and cease pushing legislative proposals to significantly undermine it—there can be no serious attempt to address these issues.
Congress hasn’t been the only arena for the battle over the fate of Obamacare. The Trump administration has taken executive actions to try to undermine the program, and has the right personnel in place to do so: Tom Price, a former congressman who was a fierce opponent of the ACA, serving as the secretary of the Department of Health and Human Services, and Mick Mulvaney, a founder of the Freedom Caucus, heading the powerful Office of Management and Budget. Trump himself has sometimes suggested that the government cost-sharing fees wouldn’t be paid to the insurance companies, as a way of forcing Obamacare to collapse—but then he’d back off out of fear of getting the blame. Such threats have created uncertainty about the program’s future and frightened some insurance companies out of participating. The Trump administration recently shut down the centers in major cities that help people sign up for Obamacare and shortened by half the time to shop for coverage in 2018. Trump has said several times that he would like to “let Obamacare fail” and blame the Democrats—presumably for backing the program in the first place.
While having one of its own in the White House presumably gives the majority party in Congress a tremendous advantage in legislative struggles, Trump’s participation in the health care fight if anything made things worse.
During the House debate this spring, Trump held meetings with members at the White House and tried to persuade reluctant ones, but it turned out that he was also an easy mark. (This was much noticed about Trump at the time and later it showed up in some of his foreign dealings.) Trump sometimes made offers to congressmen that mucked up the Republican leadership strategy. It was evident that the president didn’t much care what the bill contained: he just wanted to sign it. It quickly also became clear to Republican legislators that the president was unfamiliar with the details and evinced little interest in learning them. Word of this spread quickly. Trump is the least informed president in modern history.
After the House bill passed in early May, a buoyant Trump led a celebration of House Republicans, who were bussed to the White House for the event—a scene that may well turn up in Democrats’ ads in the future. (“Hey, I’m President!” the triumphant Trump exclaimed.) But soon after that he threw away a large amount of this bonhomie by saying he thought the House bill was “mean.” There’s no more effective way for a president to make his party’s politicians wary of casting any risky votes out of so-called loyalty. When in June it came time for the Senate to take up the health care legislation, McConnell asked the president to please stay out of it. With the exception of a few misfiring tweets and a White House lunch with Republican senators this suited Trump fine; aides said he’d become “bored” dealing with the legislation. Governing doesn’t interest him.
The successful passage of the House bill depended on the support of Freedom Caucus members, who were appeased by measures that, among other things, rolled back Obama’s Medicaid expansion, eliminated the mandate that provided the pillars for Obamacare, and loosened the protections for those with pre-existing conditions. The House bill went so far that a number of Senate Republicans dismissed it out of hand, thus complicating McConnell’s mission of passing not just a bill but one that might provide the basis for a final agreement with the House. But first McConnell had to win a majority of Senate votes to begin the consideration of health care legislation by adopting a “motion to proceed.” Opponents of the House-passed bill, the legislation pending before the Senate, and the marginally less severe substitute that McConnell was planning to offer feared, not without reason, that if the motion to proceed was successful the Senate would pass a bill undoing a significant portion of Obamacare and negotiations with the House would produce a bill for Trump to sign into law.
Because of his party’s narrow majority in the Senate, McConnell could afford to lose the votes of only two Republicans on the motion to proceed and, if that succeeded, on a series of amendments that would be offered as substitutes for the House-passed bill. Vice President Mike Pence could break the resulting tie. By Monday, July 17, two Republicans had already said they’d oppose the motion to proceed; then in the course of that day two more said so (which kept either one of them from being blamed for casting the decisive vote), and that appeared to put an end to the Republicans’ effort to replace Obamacare with something more to their liking. And then, two days later, came the awful news that McCain had an aggressive brain tumor.
The Republican effort to undermine Obamacare looked moribund going into the following weekend. But it’s not a good idea to underestimate McConnell’s determination and resourcefulness. In fact, he was in a position to move around some funds in his pending proposal to satisfy the complaints of various Republicans whose votes to undo much of Obamacare weren’t assured. (Or who, for their own purposes, had indicated that their votes weren’t assured.) Leaders of grass-roots groups fighting to protect Obamacare began to get a sinking feeling over that weekend that McConnell might pull it off. And then came word that McCain, still recuperating from the surgery that disclosed his illness, would fly back to Washington. McCain had voted against Obamacare more than once and it stood to reason that he wouldn’t be returning simply to cast a vote to save the program.
McCain received a hero’s welcome from all the senators and Senate staff members on the floor when he arrived on Tuesday afternoon, and the chamber was dead quiet as he delivered what was possibly his last address to the Senate. McCain is loved by many of his colleagues, including some Democrats, and respected by virtually everyone in the body—he’s been through unimaginable experiences—though along the way his crusty side has irritated more than a few of them. His speaking style is typically unoratorical and unadorned. But he tends to speak of things that people who know him understand come from a part of him that goes very deep and that has set McCain apart as one of the most striking political figures of this age.
This imperfect man has a deep reservoir of principle. Among the things that have offended him are distortions and degradations of the political process. Thus he went against his party in the early 2000s, after losing the presidential nomination to George W. Bush, and backed campaign finance reform—and prevailed. Now, standing by his Senate seat, he railed against the forces that have led our politics to a new low of hyper-partisanship—for which he blamed both parties—and he criticized the secretive methods by which the issue before them had been handled. He asked, “Why don’t we try the old way of legislating in the Senate?” On Tuesday, when McCain cast his vote for the motion to proceed to debate the Republican health care legislation, some of his fans were let down and the cynics who had never quite got past their doubts about him felt vindicated.
McConnell’s victory on the motion to proceed didn’t carry over to the various proposals for replacing or at least seriously undermining Obamacare. One by one, alternative plans were voted down. Then it was learned that McConnell was working up a “skinny” repeal bill—a stripped-down package of cynicism that would repeal some parts of Obamacare but was designed to win fifty votes, with Pence casting the tiebreaking fifty-first vote in its favor. There was ample reason to fear that if the Senate passed the skinny proposal the House might agree to it, and that would be the nation’s new health care law. Even if the House leaders took the more conventional route of conferring with the Senate to arrive at a compromise (it wouldn’t be easy), the basis would be two bills to significantly roll back Obamacare.
There was also good reason to expect McConnell to prevail once more. The only Republicans expected to oppose this last attempt to radically change Obamacare were Susan Collins of Maine and Lisa Murkowski of Alaska. The Trump administration’s lack of finesse in trying to persuade holdouts made itself apparent when Interior Secretary Ryan Zinke phoned Murkowski and threatened to retaliate against the state of Alaska—the department’s policies on minerals and the development of energy resources have a large impact on the state—if she voted against replacing Obamacare. (Zinke made the mistake of taking on a committee chairman, and Murkowski, who heads the Committee on Energy and Natural Resources, threatened retaliation of her own by holding up the nomination of Zinke’s deputy. One can’t help but think that Zinke, a former congressman from Montana, knew better but was under pressure from the White House.)
McConnell stalled action while he tried to obtain a sufficient number of votes. The vice president made a rare excursion to the Senate floor to work on McCain, though Pence should have known better than to think that, if McCain was holding out, persuasion would change his mind at that point. When McConnell at last let the vote on the skinny repeal proceed, McCain’s decision remained unknown to the public until after the first round of names was called. And then the old warrior entered the floor. McCain’s most dramatic vote was cast most undramatically. Not for him the Jimmy Stewart theatrics, the calling attention to himself. When his name was called he turned a thumb down—to some audible gasps in the chamber—and without a glance at his colleagues he quietly returned to his seat.
As usual with McCain, there was a lot more subtlety to his act than has been imputed to him. Democratic leader Chuck Schumer told a reporter for The Guardian afterward that he and McCain had spoken “three or four times” a day for the past few days, and one subject was the secrecy with which the Senate had proceeded. (Schumer knew whom he was talking to.) A very few other Republicans were also troubled by what the Senate was about to do—among them McCain’s closest Senate friend, Lindsey Graham. But by casting the deciding vote McCain offered them protection from the fury of the base had they themselves voted against changing Obamacare. And there was another thing: candidate Trump had delivered a particularly low blow to McCain by saying that he had greater respect for military personnel who weren’t captured. He also charged McCain with not helping veterans. McCain doesn’t forget such things.
After the vote, McConnell, in a sour speech, accused the Democrats of “celebrating” and rehearsed the familiar litany of charges against Obamacare. But having discharged that duty, McConnell, a practical man undoubtedly eager to put the long-fought issue behind him, said, “It’s time to move on.” The biggest loser of the fight was of course Donald Trump, who now has little besides his executive orders (and of course his one Supreme Court appointment) to show for his record so far. And so Congress has recessed for August with many of its members as well as political observers concerned that Trump might create chaos by trying to stamp out the Russia investigation, and nervously wondering how the tempestuous president’s fractured and faltering administration, even with a new chief of staff, would perform in an international crisis.
Over five days in May, Donald Trump’s Iran policy—of monumental importance to the future of the Middle East and to US security—began to come into focus. On May 17, the president quietly agreed to continue to waive sanctions against Iran, a step that was required to keep the Iran nuclear deal in force. Two days later Iran held presidential elections with a landslide result in favor of the moderate incumbent, Hassan Rouhani; and two days after that the United States’ new Middle East policy, built around a Saudi-US-Israel axis, was unveiled in the president’s speech in Riyadh.
It had long seemed clear that Trump was not going to “rip up” what he had called in the campaign “the dumbest deal…in the history of deal-making.” The State Department had confirmed repeated findings by the International Atomic Energy Agency (IAEA) that Iran was meeting its nuclear commitments. But the May 17 waiver was the first time that an affirmative action on the deal had to be taken in the president’s name.
Iran’s election pitted President Rouhani, the architect of the deal and a proponent of reengaging Iran with the world, against a conservative, nationalist cleric, Ebrahim Raisi, who ran with the backing of the Revolutionary Guard and other hard-line forces. Had Raisi won, the deal’s future in Iran would have been very much in doubt. Instead, Rouhani had a resounding victory with high voter turnout. Though few Iranians have yet felt any economic benefit from the deal and the end to international isolation it promises, there is little doubt that, for now, they overwhelmingly favor sticking with it.
In Saudi Arabia, where he was making the first stop of his first trip abroad as president, Trump ignored that positive outcome. His speech was a full-throated embrace of the Saudi view of Iran as the region’s chief malefactor and cause of its troubles. Trump’s reference to Tehran as the Middle East power that has “for decades…fueled the fires of sectarian conflict and terror” is a more accurate description of the Saudi kingdom, with its long record of exporting an unforgiving brand of Wahhabi Islam to madrasas and mosques around the world. His assurance of unquestioning friendship with Riyadh is new in American policy. Washington will ignore the failure of Saudi Arabia and other Sunni states to enact needed political and economic reforms, and their repression of Shia minorities, in exchange for their help against ISIS and promotion of Israeli–Palestinian peace. All nations, Trump declaimed, “must work together to isolate Iran.”
The new US policy has layers of contradictions. By not rejecting the nuclear deal the administration tacitly acknowledges that it’s working, yet senior officials continue to harshly criticize it. This extreme distaste for an agreement that has removed—at least for a decade—a nuclear threat that a few years ago raised the specter of another war in the Middle East is even odder when set against the standoff with North Korea. If anything were needed to underline how much safer the Iran deal has made the United States, the menace of North Korea’s nuclear development surely qualifies.
The new policy’s anti-Iran stance reflects the real reason that Israel and the Gulf states oppose the deal: they fear an Iran released from the international penalty box to which it was relegated for the nearly twenty years that Tehran pursued—and lied about—its weapons program. Many in the region remember that it was not very long ago that Iran and the US were close allies. They are far more comfortable with Iran’s being indefinitely excluded from the region’s commerce and diplomacy. Hence the particular words “isolate Iran.” Now that a weapons program is no longer the primary concern, the rationale for isolation has shifted to Iran’s activities in Iraq, Syria, Yemen, and elsewhere. Yet such geopolitical differences, no matter how profound, are never resolved by avoiding dialogue; rather, they deepen.
Setting aside the unwisdom of taking sides in the region’s Sunni–Shia divide, the low probability that a partnership linking Saudi Arabia, Israel, and the US will help achieve an Israeli–Palestinian peace, and the dubious assumption that conservative Sunni states will make the defeat of ISIS, al-Qaeda, and other Sunni terrorist groups a top priority, the new policy raises important questions about the nuclear deal itself. What has happened in the two years since it was agreed to? To what degree is it contributing to US national security? Can it be sustained in the face of unrelenting enmity from the US?
Since the deal was concluded in 2015, Iran has gotten rid of all of its highly enriched uranium. It has also eliminated 98 percent of its stockpile of low-enriched uranium, leaving only three hundred kilograms, less than the amount needed to fuel one weapon if taken to high enrichment. The number of centrifuges maintained for uranium enrichment is down from 19,000 to 6,000. The rest have been dismantled and put into storage under tight international monitoring. Continuing enrichment is limited to 3.67 percent, the accepted level for reactor fuel. All enrichment has been shut down at the once-secret, fortified, underground facility at Fordow, south of Tehran. Iran has disabled and poured concrete into the core of its plutonium reactor—thus shutting down the plutonium as well as the uranium route to nuclear weapons. It has provided adequate answers to the IAEA’s long-standing list of questions regarding past weapons-related activities.
Iran has accepted around-the-clock supervision by IAEA inspectors, cameras, and monitoring equipment at its nuclear facilities. There have been no problems with access. These inspections include some places, like uranium mines and centrifuge rotor production facilities, that have never previously been subjected to international oversight in other countries. Their inclusion makes it much harder to operate a covert program. Iran has adhered to allowed limits on R&D, and an innovative mechanism to track sensitive imports has been created.
Two years ago critics in the United States were deeply skeptical that these steps would be carried out. Today they are facts. Most of the commitments extend for ten or fifteen—and in a few cases twenty-five—years. Iran remains a party to the Non-Proliferation Treaty (North Korea withdrew in 2003), and several of the deal’s enforcement provisions strengthen the treaty by serving as models for application elsewhere.
In this light, Secretary of State Rex Tillerson’s recent description of the agreement as “the same failed approach…that brought us to the current imminent threat that we face from North Korea” is simply bizarre, betraying either ignorance of the facts or a willingness to wholly distort them. A “failure” like this would be an unimaginable success in North Korea.
It is dangerously easy now to forget, as Tillerson seems to have done, the trajectory of US–Iranian relations a few years ago. In September 2010, a well-sourced article by Jeffrey Goldberg in The Atlantic asserted that Israel was on the verge of bombing Iran. A technical “point of no return” in Iran’s pursuit of a nuclear weapon would be reached within a few months, Goldberg wrote, and Israel would not allow that to happen. Washington knew this would be a war that Israel could start but not finish. The US would be dragged into the conflict to aid Israel—strategically and politically a terrible outcome. Over the following two years there was more and more discussion in Washington of the US taking the military initiative.
At that time the prospect of serious negotiations between two countries steeped in mutual distrust seemed beyond reach. Iran and the US had not spoken for more than thirty years and the venomous Mahmoud Ahmadinejad was still Iran’s president. Only two options looked likely: that Iran would continue to build centrifuges until it could produce enough highly enriched uranium for a nuclear arsenal; or war—against a country more than three times the size of Iraq.
The story of how dogged diplomacy and some good luck took us from that low point to a deal that few could have imagined is one worth telling. Trita Parsi, president of the National Iranian American Council, who had the advantage of access to high-level participants on both sides, tells it well in his new book, Losing an Enemy: Obama, Iran, and the Triumph of Diplomacy. Crucial events and decisions are traced in great detail, supported by an unusual wealth of on-the-record interviews. The book generally gives the Iranian view of the more controversial issues, especially regarding the part played by sanctions. But the insight thereby provided is useful if the bias is understood.
Opponents of the deal raise three issues: that its provisions aren’t tough enough; that Iran will inevitably cheat; and that the deal should have covered nonnuclear issues. The last of these is the thinnest. No deal spanning all of the issues that divide the US and Iran, much less all seven parties to the talks (those two plus Russia, China, the UK, France, and Germany) could possibly have been agreed to; this argument amounts to rejecting negotiation entirely. And who could possibly prefer no agreement at all to one that has dealt with only the single most dangerous issue? As regards cheating, Iran has certainly done so before. While not watertight, the deal’s technical provisions are strong enough that any attempt to evade them would almost certainly be quickly detected. The technical measures are reinforced by political protections, notably the right of any single permanent member of the UN Security Council to demand that sanctions be “snapped back” if a disagreement arises over compliance.
But is the deal tough enough? Critics insist that it should have banned enrichment entirely. I felt this way in 2005. But a negotiated agreement is a reflection of what can be achieved at a given moment. In 2003 the US rejected a deal that would have capped Iranian centrifuges at an unthreatening three thousand. The decade that elapsed between then and 2013, when Iran was on the verge of nuclear breakout, did not work in the West’s favor. Technology consistently outpaced faltering diplomacy. As one official involved in the negotiations later noted, “We were constantly chasing the deal we could have gotten two years earlier.”
Yet there was a good reason why the US refused for so long to consider a deal that allowed enrichment. The difficulty lay in figuring out Iran’s real intentions. If Iran did not want nuclear weapons, as Iranian leaders insisted, why was it building centrifuge capacity so far in excess of its conceivable civilian needs? And indeed, why enrich at all when reactor fuel can be bought on the commercial market far more cheaply?
Parsi’s answer is domestic politics. Because of what he dubs the Supreme Leader’s “incentive structure”—by which he presumably means the policies Ayatollah Ali Khamenei favored and hence rewarded politically—Tehran convinced itself that “the nuclear issue ultimately was a pretext the West used to pressure Iran, to deprive it of access to science, and to deny it the ability to live up to its full potential.” This would keep Iran from being able to challenge US domination of the region. The right to enrich uranium became a symbol of national pride, technological prowess, international standing—and fairness. How could the great Persian nation be denied the right to do something that eight other nonnuclear weapons states were doing? At the least, having given up so much else, drawing the line at enrichment was a way for Tehran to keep the deal from looking, and feeling, like a defeat.
This nonnefarious explanation is much easier to take seriously now that an agreement has been reached and adhered to. In truth, the US still does not know what Tehran’s nuclear intentions were and how they may have evolved. Iranians’ views on critical questions are no less divided than are Americans’. Some members of Tehran’s leadership may have wanted Iran to be a nuclear weapons state. Others may have wanted to get just to the brink without crossing over—the so-called Japan option. A definitive choice may never have been made. US intelligence concluded in 2007, and reaffirmed twice thereafter, that Iran had abandoned its weapons program some years earlier. Perhaps nuclear weapons were the goal until the price imposed by worldwide sanctions got too high.
As reluctant as President Trump and his team are to acknowledge it, the nuclear deal has removed a major danger, allowing him to focus on other Iranian policies, especially in Syria, where US and Iranian interests are likely to clash as ISIS is progressively weakened there.* The range of threats to US national security—and indeed to global security—looks entirely different than it did in 2012, when there was a real prospect of a nuclear-armed Iran that could in turn provoke nuclear proliferation across the unstable Middle East—in Saudi Arabia, Egypt, and Turkey in particular. While the deal is not perfect, Iran has thrown away tens of billions of dollars and decades of work on weapons-related materials and facilities, has taken, in the most pessimistic outlook, a ten- to fifteen-year hiatus in its pursuit of nuclear weapons, and remains a party to the Non-Proliferation Treaty. Are there lessons from this success that might be applied to the growing nuclear threat in Asia?
North Korea is years beyond the nuclear “breakout” the US so fears in Iran. Pyongyang’s first nuclear test was more than a decade ago. Four more have followed with yields up to twice the size of the Hiroshima bomb. The country is believed to have around twenty fission bombs and to be progressing along the path to a much larger hydrogen bomb. Moreover, the regime is consistently making faster progress on missile technology than US intelligence has expected, including the stunning July 4 test of what appears to be a bona fide intercontinental ballistic missile (ICBM). North Korea’s shorter-range missiles can now be fired from mobile launchers rather than fixed sites, and fueled with solid rather than liquid fuel. Both of these advances make preparation for a missile launch much quicker and harder to detect. The crucial remaining unknowns are how long it will take Pyongyang to perfect an ICBM capable of reaching the continental US and to miniaturize nuclear weapons so that they can be delivered atop a missile.
The differences with Iran are obvious, but there are also similarities that suggest how US policy toward North Korea should be shaped. In both cases there is a nearly bottomless well of distrust—in Pyongyang, even of its Chinese ally. Americans and Iranians so feared each other that they needed the help of a middleman, Sultan Qaboos of Oman, to get close enough even to begin negotiating. There is no person or country that can play that part for North Korea, but the absence of trust must somehow be reckoned with in US strategy.
Similarly, in both Iran and North Korea, though for different cultural and historical reasons, “respect” and “dignity” carry a weight that is very hard for Americans to appreciate, but which has to be understood. In a somber video message to clarify Tehran’s positions recorded in November 2013, Javad Zarif, foreign minister and chief negotiator, opens with the surprising words: “What is respect? What is dignity?” Summarizing their detailed study of the North Korean situation, Sung Chull Kim and Michael D. Cohen, editors of a valuable new volume of scholarly essays, write: “For North Korea, the sensitive nerve of Kim Jong-un’s legitimization—the so-called dignity—is apparently one of the most vulnerable parts of the regime.”
Pyongyang and Tehran share a third unusual characteristic that must influence US policy. In both capitals regime survival has often been more important to those in power than the national interest. The recent fates of Muammar Qaddafi, after he gave up his nuclear program, and of Saddam Hussein make this anxiety even more acute. Repeated US talk of regime change will be just as counterproductive in dealing with North Korea as it was with Iran.
Above all, in neither country is there an attractive military option. North Korea is capable of inflicting millions of casualties on South Korea with conventional heavy artillery before those guns could be silenced. Negotiation is therefore unavoidable. This means that a winner-take-all goal (comparable to the zero-enrichment position vis-à-vis Iran) is unachievable. Time spent pursuing one will be wasted.
Instead, as with Iran, what can be achieved has to be calibrated against present circumstances. In view of Pyongyang’s large nuclear arsenal and advanced missile delivery systems, the long-standing US insistence that North Korea agree to complete denuclearization as a precondition to talks is far out of date and must be dropped.
How much can sanctions help? As Iran demonstrated, they can raise the cost of undesired behavior, but they will not halt it so long as the country in question is willing to suffer the consequences—something North Korea is clearly willing to do. Moreover, over long periods of time, sanctions lose an edge. External pressure unites those subjected to it and economies adapt, creating black markets that perversely produce a class of people who profit from sanctions and want them prolonged.
Parsi goes so far as to assert that the sanctions regime imposed on Iran “ultimately proved only that sanctions do not work,” but this is the Iranian line and it is false. He admits elsewhere in the book that the sanctions created substantial leverage for Iran’s opponents through the economic pain and international isolation they inflicted. Sanctions are also essential to demonstrating international resolve. Yet North Korea’s extremely closed economy and iron-fisted autocracy make it the least susceptible of any country on earth to such pressure. Sanctions have to be maintained but, short of posing a mortal threat to North Korea’s regime, they are not a solution.
China could, but won’t, create that mortal threat—by withholding oil and food. Trump is not the first American president to hope that if only the US leans heavily enough on China, China will lean hard enough on North Korea to force it to back down. Beijing fears both the internal chaos and the flood of refugees that would follow a collapse of North Korea’s government. But the principal reason why it will not force regime change is a deeply held strategic fear of a united Korea allied to the US, which would put American forces on its own border. Pressure from Washington won’t alter China’s assessment of its national interest.
Ultimately, then, the only approach that might work is one that has not yet been tried: a joint effort by the US and China. As an eventual outcome, both sides’ interests would be met by a unified, denuclearized, neutral Korea. While this end state is not hard to define, the process of getting there would be tortuous and require a degree of mutual trust between Washington and Beijing that does not now exist. Small, confidence-building steps would be needed over a long period. North and South Korea would have to find an acceptable basis for reunification—overcoming mountains of difficulty in bringing together a dictatorship that is nothing without its weapons and a democracy whose economy is more than one hundred times larger. North–South agreements signed in 1991 and 2000 point to a confederation between the two states as the means of starting the process.
The effort would take years. In the meantime, the US and the world will have to depend on a determined defense and, more importantly, deterrence. Rhetorical bluster and military gestures—like firing off missiles in response to North Korean tests—only confirm the regime’s paranoia and undermine US credibility. Pyongyang will not be frightened into changing direction at this late date. Washington can and should tighten sanctions on Chinese banks and companies trading with North Korea, and continue to pressure Beijing into taking a tougher stance. But it would be a huge mistake to make this issue the sole test of the US–China relationship, as President Trump repeatedly suggests he will do. That would be to trade one strategic threat for two.
Meanwhile, the US must preserve the Iran deal—which cannot be taken for granted. The deal’s greatest weakness is not to be found in its provisions but in the hostility of those in Tehran, Washington, and Jerusalem who, for mostly political reasons, would like to see it die. In the US, through more than thirty years of frozen nonrelations, Iran became a two-dimensional cartoon of evil that too many members of Congress, especially, and leading officials in the present administration, including the president, still believe in. And though Israel’s top general called the deal a “strategic turning point,” Prime Minister Benjamin Netanyahu’s opposition, which began long before a deal was actually negotiated, hasn’t ebbed.
Such caricatures don’t survive direct exposure. Parsi quotes a German diplomat who makes the point:
Germany has normal diplomatic relations, which makes a huge difference in our understanding of Iran. Just relying on intelligence, as the US is forced to do, can distort things. It becomes all about drama, doom and gloom, and never about the normal things. Till this day, the US still has an unnatural relationship with Iran.
He’s right, of course. Our continuing lack of diplomatic relations does not make it any easier to maintain the nuclear agreement in the face of profound geopolitical strains. The onus for this to change is on Tehran.
The administration and opponents of the deal in Congress—nearly all of them Republicans—need to update their rhetoric. Contrary to what they expected, the deal is being honored and continuing denunciations are not cost-free. They undermine the working relationship with the Iranian government needed to keep the deal in force—technical and financial issues crop up and must be managed—and they encourage dangerous mischief on Capitol Hill by members who want to score what seem to be cheap political points or even see the deal collapse. Provocations from Washington will be instantly responded to by Tehran—especially as the US escalates its military activity in Syria, Yemen, and Iraq. And the criticisms raise expectations among Iran’s opponents in the Middle East that the US cannot meet without throwing away what has been achieved.
It may be too much to hope that the Trump administration will come to recognize that pariah status does not improve any nation’s behavior, and that the Iran deal is the starting point from which other issues the US has with Iran, beginning with the future of Syria, can be addressed. But we should at least be able to expect that the administration is capable of recognizing the boon to national security it has inherited and that it can exercise the discipline and focus necessary to maintain it.
—July 12, 2017
See Joost Hiltermann, “Syria: The Hidden Power of Iran,” NYRDaily, April 13, 2017. ↩
July 26, 2017, was a personal anniversary for me: one year earlier I had written a piece in which I argued for setting aside the idea of a Trump-Russia conspiracy (yes, this idea was with us a year ago) for the much more important task of imagining what a Trump presidency might bring. I wrote that Trump would unleash a war at home and while it was difficult to predict the target, “my money is actually on the LGBT community because its acceptance is the most clear and drastic social change in America of the last decade, so an antigay campaign would capture the desire to return to a time in which Trump’s constituency felt comfortable.” This was a thought exercise; even as I made an argument that I believed to be logical, I could not believe my own words. On Wednesday of this week, one year to the day since I made that prediction, President Trump announced, by tweet, that transgender people would no longer be allowed to serve in the US military—a policy reversal that would directly and immediately affect thousands of people.
Many commentators immediately branded this move a distraction, an attempt to draw attention away from the Russian-conspiracy story, the health care battle, or anything else they deem more important than the president’s declaration that a group of Americans are second-class citizens. This is not only a grievous insult to transgender people but a basic failure to understand the emotional logic of Trumpism. This is a logic that Trump shares with most modern-day strongmen, and it was this logic that made his attack on LGBT rights so predictable, even while he was literally draping a rainbow flag over his body last year.
Trump got elected on the promise of a return to an imaginary past—a time we don’t remember because it never actually was, but one when America was a kind of great that Trump has promised to restore. Trump shares this brand of nostalgia with Vladimir Putin, who has spent the last five years talking about Russian “traditional values,” with Hungarian prime minister Viktor Orbán, who has warned LGBT people against becoming “provocative,” and with any number of European populists who promise a return to a mythical “traditional” past.
With few exceptions, countries that have grown less democratic in recent years have drawn a battle line on the issue of LGBT rights. Moscow has banned Pride parades and the “propaganda of nontraditional sexual relations,” while Chechnya—technically a region of Russia—has undertaken to purge itself of queers. In Budapest, the Pride march has become an annual opposition parade: many, if not most, participants are straight people who use the day to come out against the Orbán government. In Recep Tayyip Erdoğan’s Turkey, water cannons were used to disperse an Istanbul Pride parade. Narendra Modi’s India has re-criminalized homosexuality (though transgender rights have been preserved). In Egypt, where gays experienced new freedoms in the brief interlude of democracy after the 2011 revolution, they are now, under Abdel Fattah el-Sisi’s dictatorship, subjected to constant harassment and surveillance, and hundreds have been arrested.
Benjamin Netanyahu’s Israel is a telling exception to the rule: the government has touted its record on LGBT rights precisely to assert its otherwise tattered democratic credentials—a tactic the writer Sarah Schulman has termed “pinkwashing.” In other words, queer rights are anything but a distraction: they are a frontier, sometimes the frontier in the global turn toward autocracy.
The appeal of autocracy lies in its promise of radical simplicity, an absence of choice. In Trump’s imaginary past, every person had his place and a securely circumscribed future, everyone and everything was exactly as it seemed, and government was run by one man issuing orders that could not and need not be questioned. The very existence of queer people—and especially transgender people—is an affront to this vision. Trans people complicate things, throw the future into question by shaping their own, add layers of interpretation to appearances, and challenge the logic of any one man decreeing the fate of people and country.
One can laugh at the premise of the Russian ban on “homosexual propaganda”—as though the sight of queerdom openly displayed, or even the likeness of a rainbow (this claim has been made), can turn a straight person queer. At the same time, in Russia queer people make an ideal target for government propaganda because the very idea of them—of people freely choosing and expressing their sexual orientation—serves as a convenient stand-in for an entire era of liberalization that is now shunned. Before the collapse of the Soviet Union in 1991, queerdom was unthinkable. Afterward, it became possible along with so many other things: the world became complicated, full of possibility and uncertainty. It also grew frightening—precisely because nothing was certain any longer.
This fear cuts across geographic borders; it feels much the same in countries that were never Communist and in societies that were never apparently closed. The precipitous loss of economic security, the disappearance of lifelong careers, the rising sense of a world transformed by the movement of people across borders have all coincided with the growing visibility of LGBT people. In America, too, the sight of a queer person can conjure the fear of change.
Trump’s campaign ran on the word “again,” the promises to “take back” a sense of safety and “bring back” a simpler time. When he pledged to build the wall or to fight a variety of non-existent crime waves (urban, immigrant) he was promising to shield Americans from the strange, the unknown, the unpredictable. Here, too, queers can serve as convenient shorthand. By tweeting that he has decided to ban transgender people from the military, Trump shows that he is the autocrat that he was elected to be: he can control people by issuing an order. The order juxtaposes the military—the symbol of Americans’ security—with transgender people, who make so many Americans feel so anxious.
Looking at a person who embodies choice—the possibility of being or becoming different—can be like staring into the abyss of uncertainty. In this sense, seeing a Pride march or a trans person can make a person feel very queer: it demonstrates possibility, making the world frightening. It speaks to the modern predicament the social psychologist Erich Fromm wrote about in his book about the rise of Nazism, Escape from Freedom: the ability to reinvent oneself in almost every way. One is no longer born a tradesman or a peasant, or the lifelong resident of a particular quarter, or a man or a woman. This freedom can feel like an unbearable burden. No wonder the most notorious piece of American anti-transgender legislation—the North Carolina bathroom bill—focused on the birth certificate as the most important document. In mandating that people use public bathrooms in accordance with the sex assigned at birth, the law created a situation where some people who looked, acted, smelled like—who identified and lived as—women were required to use the men’s bathroom, and vice versa—but it established that one’s position in the world was set from birth.
For the last half-century, the American LGBT movement has bent to accommodate the belief that a person’s identity is already present at birth. “Born this way” has been the mantra that has enabled many of the political advances and much of the cultural acceptance for LGBT people, even as it has pushed out of view many queer people’s lived experience of choice. But no amount of reassurance that LGBT people “can’t help it” can alleviate the anxiety brought on by the spectacle of people transgressing gender roles. This is the kind of anxiety Trump addressed as a candidate and has addressed again with his apparent promise to purge transgender people who are already serving in the military. This is no distraction: it is the very heart of Trumpism.
I write to give credit where credit is due in one paragraph of Bryan Stevenson’s outstanding essay “A Presumption of Guilt” [NYR, July 13]. It was not the NAACP but rather the International Labor Defense (ILD), the legal arm of the Communist Party USA, that launched the international campaign to save the “Scottsboro Boys” from Alabama’s electric chair. NAACP officials believed that the defense should be conducted quietly, in the courts. The defendants and their parents chose the Communists, and the NAACP played only a peripheral role in the case.
James Goodman
New York City
Bryan Stevenson replies:
James Goodman is correct about the International Labor Defense. He’s the author of a terrific book on Scottsboro, Stories of Scottsboro (1994), and it was the ILD that provided legal assistance to the Scottsboro teens, primarily by getting to the families of the young men before the NAACP. It’s also true that the Communist Party did the most effective organizing around the case in the years immediately following the trial. However, it was the NAACP that ultimately won support in the black community and framed what happened in Scottsboro as part of a broader effort at confronting Jim Crow and racial violence against black people. By the end of World War II, the ILD had lost influence and it was the NAACP, especially the work of Walter White, that shaped the narrative about the legacy of lynching and its impact that would fuel the activism of the civil rights movement and the efforts of the Legal Defense Fund in particular.
In her essay “The Abortion Battlefield” [NYR, June 22], Marcia Angell writes that “women couldn’t vote in the United States until 1920.” This is incorrect. The Nineteenth Amendment, ratified in 1920, did grant the universal right of women to vote. However, for decades before this federal constitutional amendment American women voted in certain towns, cities, counties, and states. The principle of federalism granted these governments the right to enact legislation concerning voting rights and, increasingly in the late nineteenth and early twentieth century, elected officials yielded to women’s demands for the vote. In some instances, particularly in the 1870s and 1880s, this might only involve school ballots or local taxation. Still, in the nineteenth century, three states granted full suffrage to women and prior to ratification of the Nineteenth Amendment, an increasing number of states joined them (for example, California in 1911 and New York in 1917).
Jill Norgren
Pawling, New York
Marcia Angell replies:
Jill Norgren is correct that women could vote in some localities before 1920, but it was not until ratification of the Nineteenth Amendment to the Constitution that women were granted the same right to vote as men throughout the country.
In May 1801, Thomas Jefferson sent the Marines to Tripoli and Tunis to battle “Barbary pirates” who were menacing American merchants off the coast of North Africa, thereby launching the young United States’ first overseas military venture. Over the two centuries since, the list of foreign countries invaded by US forces has grown to include some 70 nations (not including the “first nations” on what became US territory itself). Some of these have become metonyms for their eras—Vietnam, Iraq. Most, though, dwell in Americans’ minds only as flickering features of news cycles from the past. One such is the small Caribbean nation of Grenada: an island that few Americans knew about before October 1983, when TV screens filled, for some days that fall, with images of paratroopers dropping between tropical palms.
Grenada is located at the base of the Windward Antilles, about one hundred miles off the coast of Venezuela, and is famed for its nutmeg. When Ronald Reagan ordered the 82nd Airborne there, he said the choppers bellowing over its beaches had come to safeguard several hundred American medical students from the civil unrest that had gripped the island after the apparent implosion, a few days before, of its government. Before 1983, Grenada was best known to West Indians for producing, along with its famous spice, the great calypso singer Mighty Sparrow. Afterward, it was known for the traumas left by this sad episode of the cold war whose legacy, for our current political era, is the subject of a welcome new documentary film, The House on Coco Road, directed by Damani Baker.
Reagan’s invasion, of course, had more to do with geopolitics than med students. Grenada had until that month been led by a charismatic and capable socialist, Maurice Bishop, whose admiration for Castro’s Cuba—and his acceptance of Cuba’s help to build a new airport for his island—bothered the US government. Reagan claimed that Bishop’s new airport—a project also backed by countries like Britain and Canada, and whose completion would enable jets carrying needed cargo and tourists to land there—was in fact meant to turn this tranquil island into “a Cuban-Soviet colony” and “a major military bastion to export terror and undermine democracy.” US soldiers had rehearsed the invasion for months on the US-Navy-held island of Vieques. The actual landing in Grenada was quickly and roundly condemned at the UN as illegal, but Reagan was determined to maintain the Caribbean basin as an “American lake.” Sabotaging Grenada’s experiment in socialism, which had begun when Bishop’s party seized power in 1979, was the example he needed to make.
Invading Grenada worked out pretty well for the Gipper: Reagan won reelection in a landslide a year later, and would eventually see himself charitably recalled as a victor of the Cold War itself. But in the Caribbean, the one-sided battle in which US bombs killed dozens of innocents, and buried a political experiment that had inspired people across the region, is remembered rather differently.
In his film, Baker examines Grenada’s revolution through a highly personal perspective. Baker’s mother, Fannie Haughton, was an activist and educator in California who came of political age in the late 1960s amid the heady rise of the Black Panthers and the birth of university programs in Black Studies. She was a close confidante and comrade of Angela Davis, and by the early 1980s she had also become a mother concerned about rising crime in Oakland. After a trip to Grenada, Haughton impulsively decided to move her young family from the Bay Area to Bishop’s island. When they landed there her son Damani was ten years old. Thirty-four years later, Baker has made an absorbing film that’s framed as a record of his quest to understand, as an adult, what happened in Grenada—both in the life of his family and in the broader light of history.
Baker builds his story with grainy home movies of island life, and interviews with his mother and her friends (including Maurice Bishop’s winning mum). Their stories are supplemented by archival and news footage, and by Baker’s own narration as he visits sites in Grenada where Bishop’s New Jewel Party thrived. In 1983 Reagan’s bombs arrived to kill, among others, twenty-one patients in a nearby mental hospital. Baker and his sister took shelter under a bed for three days, until his mom got them off the island on a military plane that Reagan had sent to rescue those benighted med students in flip-flops.
This is the film’s climax. But to explain it, The House on Coco Road—which takes its title from Baker’s mother’s childhood home in Louisiana—reaches into the past. Baker recounts how his mother’s parents, seeking a safe place in which to raise black kids, fled southern racism for California in the 1950s. His ambition is to connect what happened in Grenada to the saga of African-American struggles for freedom reaching back to the toil of his sharecropper forebears in Dixie and forward to the Black Lives Matter movement today. This expansive approach has its hazards, and makes his film as much a personal tribute to the women who raised him as a historical narrative. But Baker’s intimate approach certainly lends human force to his rendering of Grenada’s revolution and its catastrophic end.
We see Ronald Reagan building a rhetorical case against the island and then proclaiming, after invading, “We got there just in time.” His words are jarring alongside footage of nothing more threatening than smiling people in the sun, a “popular education brigade” teaching peasants to read, and black women exulting in Bishop’s declaration that henceforth in Grenada “equal pay for equal work” would be law. Reagan’s mien is also strikingly juxtaposed with that of an afro-ed Angela Davis, who is shown smiling in gap-toothed exultation as she visits Grenada in 1982. But among all of Baker’s revelatory footage, it is perhaps that of the Grenada revolution’s leader that most impresses.
Tall and striking, with an excellent beard, Bishop was an eloquent barrister who favored guayabera shirts and exuded the confidence of his London education. With his patient smile and light-brown skin, he was Fidel Castro with humility and Bob Marley without dreadlocks, as loved by Grenada’s peasants as by the academics and activists who crowded to his speeches in New York. His New Jewel movement (its name was an acronym: the New Joint Endeavor for Welfare, Education, and Liberation) was born in 1974. Its aim was to fight the corrupt regime of the man who’d become the island’s first prime minister after the longtime British colony won its independence.
Eric Gairy was a petty despot who quashed dissent with a secret-police force whose slain victims included Maurice Bishop’s father. The younger Bishop’s party was at first forcibly repressed by Gairy’s police, then defeated at the ballot box in a dubious election, before its members took more drastic steps. One morning in 1979, while Gairy was away at the UN, Grenadans awoke to a smooth low voice on their radios: “Brothers and Sisters, this is Maurice Bishop.” That dawn, a few dozen New Jewel men had peacefully seized control of the island’s army barracks and its main radio station. “The dictator Gairy is gone,” Bishop intoned. “This revolution is for work, for food, for decent housing and health.”
And that—food, housing, health—is what his revolution fought for. A drowsy old sugar island whose slaves’ descendants were now mostly farmers and fisher-folk became vibrant with people crowding revolutionary rallies to dance and chant slogans that sounded like reggae songs and were affixed to brightly colored signs around the island: “Forward Ever, Backward Never”; “It takes a revolution, to make a solution”; “Not a second, without the people.” Their language may have been perfectly suited for V.S. Naipaul to ridicule, in an incisive but typically ungenerous appraisal of the “revo’s” shtick. But its aims meant rather more to the legions of admirers of Bishop’s movement, from the West Indies and beyond, who came to celebrate and support it. An early review of Baker’s film, recently quoted on Twitter by Ava DuVernay (whose distribution company, Array Now, picked it up), described Grenada in this period as “a functioning paradise for and by black people.” One of those black people was Damani Baker’s mother. And her personal backstory, which we learn as the film progresses, becomes important to how she—and her son’s film—narrates the Grenada revolution’s end.
The basic facts of what happened in Grenada, in the fateful weeks before the US invasion, are clear enough. That fall, the party’s central committee was struggling under the weight of all the projects their revolution had taken on, and debating how best to tackle them. Some of its members proposed that responsibility for the party’s leadership be split between Bishop and his erstwhile deputy, Bernard Coard. Bishop at first agreed. But then he went on a long-planned trip to Eastern Europe and, upon returning, informed his comrades that he no longer felt that power-sharing was in their revolution’s interest. They replied, with the help of the party’s security forces, by placing him under house arrest and announcing that Bernard Coard was now in charge.
Coard, though a devoted party man, was as uncharismatic as Bishop was loved; his wife, Phyllis Coard, although a prominent New Jewel minister, was also unpopular (perhaps mostly because she was Jamaican). A rumor spread that the Coards were planning to kill Bishop. Thousands took to the streets. Hundreds marched on Bishop’s house and succeeded in springing him loose, and bringing him to the island’s old colonial fort that overlooked St. George’s Bay. But soon soldiers arrived—the army had deposed Coard and declared itself in charge. Someone gave an order, or didn’t. Either way, Bishop and eight loyal colleagues were lined up against a wall and shot by men who until a few days before had been under his command. Their bodies were never found.
This chaotic, violent finale has always been somewhat mysterious. In Grenada and on nearby islands, pedants and scholars have long been occupied with arguing over and apportioning blame for what happened. (These debates also occupied a more purely expository documentary about Grenada’s New Jewel years by the Trinidadian filmmaker Bruce Paddington, called Forward Ever: The Killing of a Revolution.) Baker, for his part, doesn’t get into these arguments. To him, the basic reason for what happened is clear. And it’s to be found not in Grenada but in 1969 in Los Angeles, where Angela Davis was fired by then-governor Reagan because of her membership in the Communist Party.
UCLA was also roiled that year by the killing on campus of two young members of the Black Panther Party by rival activists. It later emerged that these murders, which threw the Panthers’ local chapters into turmoil, were at least partly precipitated by COINTELPRO—the FBI’s illegal program to infiltrate and destabilize subversive groups. Agents had fostered conflict between the Panthers and their rivals by sending fake letters between them. And Baker, when it comes time to explain Grenada’s tragedy, essentially points to this example. He spent happy months as a boy playing in the yards of both Bishop and the Coards; he’d known both as fast friends of his mother. “Why,” he asks, “were two friends who’d built the revolution together now fighting?” He answers that question with a definition: “‘Destabilization’: to cause a government to be incapable of function. Done successfully, it can happen without a trace.”
The moment reveals how Baker’s intimacy with his story may hinder its recounting. He’s the son of a woman for whom the word “Grenada” means, above all, “a loss of friends, and the loss of a utopia.” As she reads a fond letter sent to her from Phyllis Coard in prison (both of the Coards, along with sixteen other people, were convicted of Bishop’s killing), it seems the possibility of actual discord between her friends is unthinkable. One understands this, emotionally—and knows, too, that Fannie Haughton is a woman with deep knowledge of the US government’s capacity for harming those it deems threats. It’s also no doubt true that the US executed an avid propaganda campaign against Bishop’s government, in the Caribbean and beyond, and that US agents sought and likely found other ways to unsettle its leadership. But neither of those truths means that Grenada’s revolution wasn’t also beset by genuine internal tensions.
Naipaul had a point when he wrote that “the Revolution was a revolution of words”: painting slogans for “the people” is much easier than actually running their economy—especially when you’ve nationalized important industries like the nutmeg trade, as Bishop’s government did, with a plan for growth composed largely of “making the new man and woman.” The revolution had to work out how to thrive, as its heady hero phase began to wane, and debates over how to do so weren’t simple. “The leadership had to rock back,” is how Selwyn Strachan, Bishop’s Minister of Mobilization and Labor at the time, recently explained it to me in Grenada. “To prioritize, analyze, and rationalize.” It was through those discussions that the party’s central committee reached a decision to divide the prime minister’s duties with the aim, as Strachan put it, “of marrying Maurice and Bernard’s respective strengths, and leaving their weaknesses behind.”
Strachan should know. He was a party stalwart as close to its leaders as anyone: when Bishop returned from his trip in October 1983, it was Strachan who went to pick up his friend at the airport, and whom Bishop first informed of his reservations about the power-sharing agreement. Strachan was later convicted for his putative part in Bishop’s ensuing death, and spent twenty-six years in prison. He’s now a dignified man with a salt-and-pepper beard who was only released, after years of appeals, in 2009 (Bernard Coard was released the same year). In 2015, I sat on a porch near the troublesome tarmac that’s now named the Maurice Bishop International Airport, as he reflected on the missteps and triumphs of a small group of people who—surely facing huge pressure, not least from a superpower waiting to swoop in as soon as things went wrong—made errors and had disagreements of a sort quite understandable among people of good will. And Strachan was never more animated than when he insisted that—despite the pain of his friends’ deaths and despite his own decades in jail—he had no regrets: “The Revolution is the greatest thing that ever happened to this country.” His eyes flashed as he recounted how he and his colleagues raised the island’s literacy rate to 99 percent, and he argued that support for the revolution on his island has never waned. “A black revolution, in the West Indies. It was earth-shattering.”
The House on Coco Road concurs with this sentiment, and so did the theater in Brooklyn full of rapt and cheering Grenadans with whom I saw this affecting film. It’s also the sort of movie that you may leave, especially at this polarized moment, wishing that its director, rather than preaching to a choir, had tried a bit harder to prove his case to strangers and render this complex story with greater nuance.
But what’s inarguable is the mendacity behind Ronald Reagan’s treatment of this island’s people. And in that there are warnings for us all as a new performer-turned-president stalks the White House, perhaps hoping for a war that might boost his own agenda and image—and as he looks, frighteningly, at foes far more formidable than little Grenada.
New York City is in the throes of a humanitarian emergency, a term defined by the Humanitarian Coalition of large international aid organizations as “an event or series of events that represents a critical threat to the health, safety, security or wellbeing of a community or other large group of people.” New York’s is what aid groups would characterize as a “complex emergency”: man-made and shaped by a combination of forces that have led to a large-scale “displacement of populations” from their homes. What makes the crisis especially startling is that New York has the most progressive housing laws in the country and a mayor who has made tenants’ rights and affordable housing a central focus of his administration.
The tide of homelessness is only the most visible symptom. There are at least 61,000 people whose shelter is provided, on any given day, by New York’s Department of Homeless Services. The 661 buildings in the municipal shelter system are filled to capacity nightly, and Mayor Bill de Blasio recently announced plans to open ninety new sites, many of which are already being ferociously resisted by neighborhood residents. A packed meeting I attended this winter in Crown Heights, Brooklyn, about a proposed shelter for 104 men over the age of fifty, quickly devolved into a cacophony of ire. “You dump your garbage on us because you think we’re garbage!” shouted a black woman to a city official. The official seemed stunned, and police watched anxiously as the meeting broke up.
The revulsion against the homeless seemed linked to a deep suspicion of “the powers that be, whoever they may be,” as one attendee put it. There were already several shelters in the area. The de Blasio administration’s argument that the homeless should be placed in the neighborhoods they come from so they can renew connections and have a better chance of getting back on their feet only compounded the insult. Were the local residents “connected” to the homeless—those on the lowest social rung? When the city changed eligibility for the shelter to men sixty-two and older, residents opposing it were not assuaged: a neighborhood association filed a lawsuit that blocked the shelter from opening for nearly two months, until it was dismissed by a judge in late May.
The case is indicative of what New York faces as it tries to cope with its housing emergency. Last year more than 127,000 different men, women, and children slept in the shelters. And in 2015, though the city managed to move 38,000 people from shelters to more permanent housing, the number of homeless increased. The administration’s most optimistic forecast sees no significant decrease in homelessness over the next five years; the aim is merely to keep it from growing.
New York is the only city in the United States to have taken on the legal obligation of providing a bed for anybody who asks for one and has nowhere else to sleep. This came about after advocates for the homeless argued, in a series of lawsuits in the 1970s, that shelter was a fundamental right, not just a social service. To establish this they pointed to an article in the New York State Constitution that implies public responsibility for “the aid, care and support of the needy.” The legal battle culminated in an enforceable consent decree to shelter the homeless—the Callahan decree—that Mayor Ed Koch’s administration voluntarily signed in 1981. Three years later Koch said of the signing, “We made a mistake, and I am the first one to say it.” No one at the time imagined the future extent of homelessness and the enormous municipal effort that would be required to deal with it.
The Callahan decree is the reason that the vast majority of New York’s homeless are out of sight, more of a news story than a daily reality that might jolt us into a pressing awareness of the human suffering the crisis entails. The number of identifiably homeless who live on the street—in train tunnels, under expressways, in basements and crawl spaces, and on tenement roofs—is fairly stable. No one claims to know how many of them there actually are, but for years a variety of estimates have put the number at about 3,000 to 4,400 in winter and 5,000 to 7,000 during the summer.
In fact, 75 percent of New York’s homeless are families with children, and at least a third of the adults in these families have jobs. The bank teller, the maintenance worker, the delivery person, the nanny, the deli man, the security guard—any number of people we cross paths with every day—may be living, unbeknownst to us, in a shelter. A full-time postal worker I know lives with her two daughters in a shelter because, after losing her apartment of fourteen years, she has been unable to find housing she can afford.
Every day city employees struggle to provide emergency quarters for those they have no space for in shelters, cramming parents and their children into hotel rooms in every borough. In February 2016, when the number of homeless hotel dwellers reached 2,600—and a mother and two of her children were murdered in a Staten Island hotel where they had been placed by the city—Mayor de Blasio vowed to reduce the practice; despite his best efforts, by December the number had swollen to 7,500. There have been predictable “scandals” about the Department of Homeless Services scrambling at the last minute to put up a few dozen families for the night in expensive Manhattan hotels. But the vast majority are clustered in the outer reaches of the outer boroughs, where rundown cinderblock motels along expressways and elevated railroad tracks supplement city shelters. The ninety new shelters de Blasio plans to open are meant to ease the need for these measures, but there is no guarantee they will. One might reasonably imagine what New York would be like without the Callahan decree, with nearly 70,000 men, women, and children wandering the streets with no place to stay.
And these are only the officially counted homeless. Many others don’t show up in the statistics: people living temporarily with relatives or friends or fleeing the city altogether, not because they failed to pay rent or violated the terms of their leases, but because their landlords found a way to wrest their apartments from the rules of “rent stabilization” and take advantage of their soaring market value. The doubling and tripling up of evicted families has led in some neighborhoods to “severe overcrowding,” defined as more than 1.5 occupants per room. Citywide, the number of severely overcrowded households increased by 18 percent from 2014 to 2015. Often the situation becomes untenable after a while and the “couch surfers” move to municipal shelters.
The system of rent stabilization is another development peculiar to New York, with its history of overpopulated slums, tenant activism, and crusaders for social reform. No other American city provides legal protection to tenants at anywhere near New York’s level. Housing shortages after World Wars I and II, protests (and sometimes riots) against price gouging and substandard conditions, and a huge voting bloc of renters with shared interests have led, over the past hundred years, to an evolving series of state-enforced regulations.
In 1969 rent stabilization was established by the state legislature, covering older buildings, for the most part, with six or more units and a history of tenant leases. Though legislators have tinkered with the laws almost annually ever since—weakening protections during some periods, strengthening them in others—the basic system remains intact today: landlords can increase rents for stabilized apartments only at or below a rate set by the city’s Rent Guidelines Board, all of whose members are appointed by the mayor. In recent years increases have ranged from 3.75 to 4.5 percent for one-year leases. In 2015 and 2016, the Rent Guidelines Board froze rents to provide relief for tenants. Tenants in these apartments are also guaranteed the right to renew their leases.
Currently almost half of the rental apartments in New York City are stabilized—about 990,000 units, with 2.6 million people living in them.1 Three quarters of these units were built before 1947. They are found in late-nineteenth- and early-twentieth-century tenements, pre-war towers, and U-shaped apartment blocks, and they are among the city’s most precious resources, as critical to its well-being, I would argue, as its transit system and public parks. In view of this extraordinary level of regulation, it may seem surprising that New York faces a crisis in affordable housing. But rent-stabilized apartments are disappearing at an alarming rate: since 2007, at least 172,000 apartments have been deregulated. To give an example of how quickly affordable housing can vanish, between 2007 and 2014, 25 percent of the rent-stabilized apartments on the Upper West Side of Manhattan were deregulated.
A major reason for this is that once the monthly rent of an apartment exceeds $2,700, the owner may charge a new tenant whatever the market will bear—which, because of the exceptional pressures on New York real estate, may be thousands of dollars more. Not long ago a rent-stabilized building would sell for ten or at most twelve times its rent roll—the amount of money, before expenses, that it generates in a year. Today, it sells for perhaps thirty or forty times that amount, or ten times what the rent roll would be after regulated tenants have been dislodged. The clearing out of rent-stabilized tenants has become such a common real estate practice that it is added to a building’s value even before the fact. Landlords have found enough loopholes in tenant protection laws to make widespread displacement a viable financial strategy. A building in Crown Heights with one hundred stabilized units and a rent roll of $1.2 million might now fetch $40 million or more—and every tenant must be forced out for the investment to be recouped.
The buyers at these prices are, more often than not, private equity funds that manage pools of investors’ money: a typical participant in the Central Brooklyn market describes itself as an asset investment firm that specializes in the “repositioning” of multifamily buildings. The aggressive entry of hypercapitalized investors into the working- and lower-middle-class real estate market has struck Central Brooklyn—and the South Bronx, and East Harlem, and Washington Heights, and practically every New York neighborhood with a concentration of rent-stabilized buildings—like a thunderclap in the span of just a few years. They are a new type of owner in the outer boroughs, one that can afford patient, relentless eviction proceedings and tenant buyouts in a way that most previous owners, who were often individual slumlords working with a different set of profit margins, could not.
The supply of higher-paying renters driving the new real estate market appears to be strong, if not exactly inexhaustible. New York has become one of a handful of big cities (London and Hong Kong are among the others) preferred by a global financial elite—not just the super-rich buyers of $50 or $75 million condominiums in the heart of Manhattan, but the “ordinary” rich as well, from places like China, Germany, Brazil, India, Russia, and the wealthy suburbs of the United States itself. Relatively crime-free Brooklyn has acquired the luster of an international brand. The well-off don’t comprise the entire new real estate market by any means, but there are enough of them to keep pushing up prices and to put pressure on New Yorkers of moderate means.
The effect has been catastrophic. A woman I know—call her S—who lived on Schenectady Avenue in Crown Heights for twenty-three years and raised her eighteen-year-old daughter there told me she was recently presented with a new lease in which the rent went from $1,017 to $2,109 per month. The hike was perfectly legal. Over the years, the landlord had not passed on the annual increases granted by the Rent Guidelines Board and was thus able to add all of them to the lease at once. Realtors call this “gentrification insurance”; the Rent Guidelines Board calls it “preferential rent.” Tenants in at least 250,000 rent-stabilized apartments pay preferential rents, which gives an idea of how many New Yorkers are in immediate danger of losing their homes as a result of drastic increases when their leases come up for renewal. When real estate companies began to market Crown Heights as a “newly discovered,” desirable urban frontier, S’s landlord levied the accumulated increase without warning. Shortly after, he sold the building.
S’s daughter, who was studying to become a dental hygienist, took on extra hours at a retail clothing chain where she worked. But they still missed rent payments, and late fees were piling up, adding to the burden. S seemed locked in a nightmare when I saw her one morning begging for a fare at the Utica Avenue subway station so she could get to her job as a home nursing aide in Manhattan. She had become impoverished overnight, paying close to 70 percent of her income in rent, and saw no recourse other than to accept her new landlord’s offer of $45,000 to move out and sign away any lingering legal claim she might have to renew her lease at the stabilized rate.
“I put up with these streets when you had to be half-crazy to go out to the bodega for a quart of milk after dark,” said S. “I got rid of a rat infestation four years ago myself.” She and other tenants once pooled money to install a new hot water heater when the old one broke down. “We watched over this street, we cleaned it up. Why should we have to leave?” S and her daughter were shuttling between various relatives and friends—paying for a couch here, a spare bed there—when I lost touch with them.
Forty-five thousand dollars seemed like a lot of money when it was offered, and it did alleviate some of S’s immediate financial worries, but in New York’s housing market it wasn’t nearly enough to replace what she and her daughter had lost. They were unlikely to find a comparable home they could continue to afford after money from the buyout ran out: most vacant rent-stabilized apartments become more expensive as landlords act to push them toward deregulation.
From the point of view of S’s landlord, the buyout was a sound investment that would pay itself back in increased rent in little more than a year, while adding substantially to the value of the building should the new owners decide to sell it. With the apartment empty, they were able to add a 20 percent vacancy bonus to the next lease, bringing the rent to $2,528. According to S, who stayed in touch with her former neighbors in the building, a renovation that involved throwing up a sheetrock wall to create a second bedroom, replacing a few kitchen cabinets and appliances, and installing a wine refrigerator and a stacked washer/dryer comfortably pushed the rent over the deregulation limit of $2,700; the law permits landlords to add to the rent 2.5 percent of the cost of “major capital improvements.” There is no effective oversight of the amount landlords claim to have spent on improvements, while there is every incentive to inflate the costs.
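The mechanics of pushing a vacated apartment past the deregulation threshold can be sketched with the figures reported above (the computed vacancy-bonus rent comes out a few dollars above the $2,528 the landlord actually charged; the point is the shape of the arithmetic, not the exact figure):

```python
# A sketch of how a vacated stabilized apartment is pushed past the
# $2,700 deregulation threshold, using the figures reported in the text.

rent_after_preferential_removal = 2_109   # S's last legal rent
vacancy_bonus = 0.20                      # bonus allowed on a new lease
dereg_threshold = 2_700
mci_rate = 0.025   # rent add-on: 2.5% of "major capital improvement" cost

rent_new_lease = rent_after_preferential_removal * (1 + vacancy_bonus)
print(f"Rent with vacancy bonus: ${rent_new_lease:,.0f}")

# Claimed improvement spending needed to clear the threshold -- with no
# effective oversight of what landlords say they spent:
gap = dereg_threshold - rent_new_lease
cost_needed = gap / mci_rate
print(f"Claimed improvement cost needed to deregulate: ${cost_needed:,.0f}")
```

On these numbers, a claimed renovation cost of well under $10,000 is enough to lift the rent out of stabilization entirely, which is why a sheetrock wall and a few appliances "comfortably" did the job.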
The conversion of a one-bedroom apartment to a cramped two-bedroom allowed the landlord to lease to a group of three young roommates who split the new monthly rent of $4,300. These new tenants are the supposed “gentrifiers” of Brooklyn: they may be Web designers, fund-raisers, editorial assistants, fashion industry aspirants, musicians with a couple of floating bartender gigs, line chefs, elementary school teachers, film or TV crew workers, or online journalists—people with the kinds of jobs that New York abundantly generates. Priced out of the borough’s more expensive neighborhoods—like Williamsburg, DUMBO, Fort Greene, and Park Slope (not to mention Manhattan)—they are beckoned by rental agencies that specialize in introducing young, single renters to the deeper territories of Brooklyn.
When a landlord embarks on a campaign to “unlock value” in his building, it becomes a consuming psychological torment for renters. “Landlord harassment is practically all anyone I know talks about,” a beleaguered tenant named Nefertiti Macaulay told me. “When it comes, it’s like a bomb’s gone off in your living room.” After an equity firm bought her building and began pressuring tenants to leave, Nefertiti tried, with mixed results, to organize a rent strike. Amiable and proper, with a tattoo on her shoulder of the famous bust of the Egyptian queen who bears her name, Nefertiti has lived her entire life in Brooklyn. After her experience with her landlord she became a housing advocate and currently works as a community liaison for Diana Richardson, who represents Crown Heights in the New York State Assembly. She told me of a seventy-one-year-old man and his ninety-year-old mother who have lived in the same apartment in another building for forty years. “The new owner wants to give them $60,000 to move, and they think they have to take it because the landlord says so. They’re more than likely to end up at the mercy of the [Department of Homeless Services], at an annual cost to the city of $43,000 per person. I see it happen all the time.”
One of the tactics owners employ is to hold rent checks without cashing them and then sue tenants for nonpayment. Delores, who has lived on Eastern Parkway for twenty-five years, found herself embroiled in this scheme. Between 2013 and 2015 her building was flipped twice. “We don’t even know who the owners are. When we call, no one answers. And when they do answer, they’re very disrespectful. They tell us they’re going to relocate us to East New York. Where in East New York? It’s like we’re bad inventory they want to off-load to some warehouse so we’re not in the way anymore.”
Some landlords bring tenants to court for putting up bookshelves (which may violate the letter of a lease that prohibits renters from drilling into walls) or for having a roommate or, in one case I know of, a pet canary. “Most people here don’t believe in the courts because they’re used to it working against them,” said Nefertiti. “That’s what landlords count on.” Many renters are unaware of the laws protecting them and have little knowledge of how New York’s intricate housing bureaucracy works, so they are easily intimidated by determined owners. A court date is also a missed day at work. Landlords don’t expect to win all of these skirmishes, but the barrage of lawsuits helps set the stage for a buyout: financially and emotionally ground down, the tenant agrees to relinquish his rights and depart.
An artist I know in South Williamsburg took flight after her landlord paid a homeless man to sleep outside her door, defecate in the hallway, invite friends in for drug-fueled parties, and taunt her as she entered and left the building. In East New York a mother tells of a landlord who, after claiming to smell gas in the hallway, gained entry to her apartment and then locked her out. In January, a couple with a three-month-old baby in Bushwick complained to the city because they had no heat. In response, the landlord threatened to alert the Administration for Children’s Services that they were living with a baby in an unheated apartment. Fearful of losing their child, they left, leaving the owner with what he wanted: a vacant unit.
What might be a welcome development under different circumstances—the sale of a neglected building and its renovation under a new owner—today provokes immediate panic. Any effort at “improvement,” many tenants suspect, is probably the first salvo in what will be a protracted assault on their homes. A group called the Association for Neighborhood and Housing Development, with the help of the Ford Foundation and the Mertz Gilmore Foundation, has assembled a Displacement Alert Map that identifies residential properties where tenants are vulnerable to harassment and illegal evictions. Using public data, it assigns risk scores to buildings with rent-stabilized units that have sold for more than the average price in the neighborhood and whose owners have applied for work permits from the city’s Department of Buildings. Of 96,400 properties on the map, 24,766 had the highest risk of displacement. The map gives tenants of these buildings, and their advocates, a way to keep track of landlords’ plans and to prepare, if necessary, an early defense against eviction.
Costa lives in Central Brooklyn, in the type of pre-war building you might find in any part of New York. His place consists of a small misshapen living room, clearly carved from a larger apartment, with a makeshift kitchen wedged against a wall. The bedroom is just big enough for a mattress. He has been living there since he was discharged from the Marines sixteen years ago.
In 2014, a management company purchased the building and set out to get rid of as many rent-stabilized tenants as possible. Over the course of a year, they were able to push out about a third. “They offered me $50,000,” said Costa, “a sum they could make up in rent in two years. I told them I needed half a million.”
The new owners began renovating the vacant apartments. According to Costa, they didn’t obtain work permits but photocopied old ones and taped them to the doors. “They worked at all hours, especially at night, on weekends, on holidays, around the clock, dust everywhere, a hell of rubble, you couldn’t sleep or hardly breathe. The workers cursed at us, as if they’d been instructed to treat us like crap.”
Costa and other residents obtained an order to stop construction. After a brief pause, however, it started again. Apartments were flooded. A neighbor’s ceiling collapsed; another’s wall caved in. Costa recorded some of the illegal work on his cell phone. A few days later, an employee of the management company showed up with police, who arrested Costa for threatening behavior: the crew foreman claimed he had brandished the phone in anger. Costa was taken, in handcuffs, to Kings County Hospital, where he was dressed in a gown and held “for psychiatric evaluation.” He had never been arrested or treated for a psychiatric condition and was released after twelve hours with a “deferred diagnosis.” In February 2016, less than two years after they bought the building, the owners sold it for almost twice what they had paid.
As astonishing as Costa’s experience was, even more shattering was that of the tenants of a building on New York Avenue, whose owners sent construction crews into occupied apartments, claiming they had come to fix structural problems. They ripped out walls, shut off water, and then abruptly ceased work, leaving occupants with piles of dust and debris. One woman had to be freed by the Fire Department after workers nailed her front door shut from the outside with plywood.
Stories like these move through the city like an underground stream. I repeat them not because they are extraordinary, but because they are a fact of life for thousands of New Yorkers. For the most part they go unnoticed. The displaced slink away, crouched into their private misfortune, seeking whatever solution they can find. Many experience displacement as a personal failure; they dissolve to the fringes of the city, forced to travel two or three hours to earn a minimum wage, or out of the city altogether, to depressed regions of Long Island, New Jersey, or upstate New York. If they have roots in the Caribbean, as some residents of Central Brooklyn do, they may try to start again there. Or they may join the growing number of people who are officially homeless, dependent on the city for shelter.
Mayor de Blasio is keenly aware of the pressures bearing down on what, as a candidate in 2013, he called “the other New York”—that vast sector of the city’s population that lost considerable economic ground during the twelve-year mayoralty of his predecessor, Michael Bloomberg. De Blasio has tried to blunt the hardships, but he also concedes that the forces responsible for the city’s housing emergency are beyond his control. At a town hall meeting I attended at a Bedford-Stuyvesant elementary school on March 9, the mayor told his worried audience not to “think the city is all-powerful. This is about something called money.” He urged renters to think twice before succumbing to landlords offering to buy them out of their stabilized leases, while tacitly acknowledging that thirty or forty or even fifty thousand dollars for someone accustomed to living week-to-week may be difficult to turn down. People had to figure out for themselves whether their leases and the rights that went with them should be put up for sale. “Sometimes, it’s a personal choice,” he said, with resignation.
The core of de Blasio’s housing plan, announced in 2014, is to “build or preserve” 200,000 affordable rental units throughout the five boroughs by 2024. The preservation part of the plan aims to keep 120,000 units that are already affordable from passing into the unregulated market. Often the administration’s efforts involve buildings that landlords allowed to fall into decrepitude and then forfeited, on account of unpaid taxes in the 1970s and 1980s. The city arranged financing for builders to renovate them and either keep existing tenants or, if the properties had become uninhabitable, give affordable leases to new ones. These arrangements usually last for twenty to thirty years—the time it takes the builders to repay their loans—at which point the affordability requirement expires, and they have the right to assume full control over the properties. The de Blasio administration has been stepping in, negotiating an extension with these owners to keep their buildings affordable.
A typical example is the sixty-three “senior apartments” at Monsignor Alexius Jarka Hall on Bedford Avenue in Williamsburg, Brooklyn. The residence, owned by a nonprofit organization called the People’s Firehouse, recently received $19 million from the city to fix the roof and remain affordable for another thirty-five years. The city has struck hundreds of such deals, and while they are of critical importance, they do not add to the pool of affordable housing or protect tenants in the vast number of rent-stabilized buildings for which the government has no negotiating leverage to ease the threat of eviction.2
The “build” part of de Blasio’s build-or-preserve housing plan gives private developers tax breaks to include a total of 80,000 affordable rental units in newly constructed market-rate buildings. The tax break, known by its legislative code number, 421-a, dates back to 1971, when the city’s economy was collapsing and its white working- and middle-class population was fleeing to nearby suburbs or to the Sunbelt, after rising energy costs encouraged the migration of jobs from the Northeast. At that time the challenge was not to create affordable housing but to keep bankrupt landlords from abandoning properties to scavengers and squatters.
Today, the tax break’s main purpose is to encourage large developers to build. Under 421-a, owners are exempt from paying the increase in property taxes that would normally result from new construction: if a building worth $200 million is erected on a lot valued at $10 million, the owner will not be taxed for the $200 million enhancement. In exchange, developers must set aside 20–30 percent of the units at below-market rates for tenants who are chosen by city officials in an income-based lottery. The apartments remain affordable for the duration of the tax exemption period, which in April was extended from twenty-five to thirty-five years.
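The scale of the exemption in the example above depends on the property-tax rate; a rough sketch, with an assumed 4 percent effective rate purely for illustration (actual New York rates vary by property class), looks like this:

```python
# Rough sketch of the 421-a exemption described above. The 4% effective
# tax rate is an assumption for illustration only; actual NYC effective
# rates vary by property class.

land_value = 10_000_000       # lot value before construction
building_value = 200_000_000  # the new, untaxed "enhancement"
tax_rate = 0.04               # assumed effective rate

tax_without_exemption = (land_value + building_value) * tax_rate
tax_with_exemption = land_value * tax_rate  # only pre-existing value taxed

annual_savings = tax_without_exemption - tax_with_exemption
print(f"Annual tax forgiven: ${annual_savings:,.0f}")
```

Compounded over a thirty-five-year exemption period, forgiveness on this scale is what makes the affordable set-aside worth it to developers.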
De Blasio defends the program as the fastest and most practical way to provide a significant number of apartments for people in need. “My plan offers volume,” he said at the town hall meeting in March. “And in housing, volume matters.” Eighty percent of the volume, however, consists of high-cost market-rate rentals, far more of them than would have been built without the enticement of 421-a. Measured purely by volume, there is no housing shortage in New York—the upper end of the market is glutted with apartments, partly because tax exemptions and rezoning laws have made construction so attractive to developers. In a ten-block area of Downtown Brooklyn, nineteen residential towers with nearly seven thousand rental units are either under construction or have recently been completed. If we include the area immediately around Downtown Brooklyn, a total of 15,200 apartments have been built or had their plans approved since 2011, almost all of them with 421-a exemptions.
As more and more apartments have come onto the market, landlords have had to offer “sweeteners” to attract tenants. Posing as a prospective renter, I recently toured one of these buildings and was offered two months’ free rent on a two-year lease for a $5,400-per-month apartment. The lease came with a complimentary health club membership, concierge service, and common areas that included a sun deck and party rooms that tenants were “invited to share,” but the apartment itself was a narrow cookie-cutter two-bedroom. Turnover is high. “With so much to choose from at $5,000 there’s really no reason for tenants to stick around,” a broker told me.
The 421-a exemptions cost New York $1.4 billion in uncollected property tax in 2016, and de Blasio’s housing plan is now expected to cost at least $10 billion in exemptions by 2024. The city appears to be getting relatively little affordable housing for the money. In 2016 it managed to squeeze 6,844 new affordable units out of developers, as construction projects that had broken ground in 2014 were completed—a numerical victory, but only 35 percent of those apartments were for households making less than $40,000, the income level facing the most relentless pressure of eviction from older, “undervalued,” rent-stabilized buildings. Citywide, de Blasio’s program provides far more affordable units for households making $63,000 to $143,000. (The government deems housing affordable when a household spends no more than 30 percent of its income on rent.)
Yet the program appears to redefine what low and moderate income means. At 382 Lefferts Avenue in Brooklyn, for example, new subsidized one-bedroom apartments rented for $2,047 in May 2015, $400 more than the neighborhood average. According to the most recent data, the median annual household income in Brooklyn is $44,850; to be eligible for a one-bedroom at 382 Lefferts Avenue a tenant would have to earn at least $82,000 a year.
At 7 Dekalb Avenue, a gleaming zinc-skinned centerpiece of the residential skyscrapers that are rapidly rising in Downtown Brooklyn, three quarters of the subsidized units are for individuals earning at least $57,000 (for studio apartments) and families making up to $142,000 (for two-bedroom apartments). The poor aren’t forgotten, but the de Blasio plan appears to convey the belief that in the growing, privatized, global supercity that New York has become, families of four with incomes as high as $150,000 are in danger of being priced out without some form of assistance.
At the town hall meeting, the mayor, trying to explain why he hasn’t set aside more units for those near the poverty line, said, “There are swamps of people who make less than $40,000 a year. People who make $50,000 need help, too.” To a renter in the audience anxious about her future, he admitted, with a touch of sadness, that his housing policy “may not help you personally. New York may not be exactly the same city you’ve known.” But he claimed that he was doing all that was realistically within his power “to protect the character of New York.”
Much has been made of how difficult it is to win the lottery for one of these affordable apartments: between 2013 and 2015, 2.9 million applicants entered the lottery for 4,174 units, a 700 to 1 ratio. The number suggests a stampede for subsidized housing across the eligible income bands. But when the pool of applicants is looked at more closely, a revealing disparity emerges. To give an example, at 535 Carlton Avenue in the Prospect Heights section of Brooklyn, a neighborhood that has experienced a dramatic increase in property values in recent years, 92,743 households entered the lottery for 297 affordable apartments. But only 2,203—less than 3 percent of the applicants—applied for the 148 units (almost half the total) that had been set aside for households earning six figures. (The monthly rent for these units ranged from $2,680 to $3,716, depending on their size.)
By contrast, nearly 67,000 households—more than 70 percent of the applicants—vied for ninety units for tenants with incomes of between $21,566 and $38,100.3 So few applied for the more expensive apartments because New Yorkers at that income level have enough options at similar prices in the unregulated rental market. What they lack are homes they can afford to buy, a very different problem. In a rush to rack up “affordable” units and get to the 80,000 he promised, de Blasio appears to have stocked the program with housing for upper-middle-income tenants who don’t need it. It costs more to subsidize the poor because they can pay so little themselves; the logical fiscal alternative is to subsidize those who can pay more.
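The disparity in the 535 Carlton Avenue lottery, using the figures reported above, can be made explicit:

```python
# The 535 Carlton Avenue lottery figures, as reported in the text.

total_applicants = 92_743
six_figure_units = 148        # units reserved for six-figure households
six_figure_applicants = 2_203
low_income_units = 90         # units for incomes of $21,566-$38,100
low_income_applicants = 67_000  # "nearly 67,000"

print(f"Six-figure band: {six_figure_applicants / six_figure_units:.0f} "
      f"applicants per unit "
      f"({100 * six_figure_applicants / total_applicants:.1f}% of the pool)")
print(f"Low-income band: {low_income_applicants / low_income_units:.0f} "
      f"applicants per unit")
```

Roughly fifteen applicants per unit in the six-figure band against more than seven hundred per unit in the low-income band: the headline 700-to-1 ratio conceals almost all of the actual demand.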
In any event, developers are likely to prefer—and insist upon—filling their mandatory affordable units with tenants in the higher income bands. Benjamin Dulchin, executive director of the Association for Neighborhood and Housing Development, worries that the city isn’t tough enough with developers. “It’s all in the details, how the city applies its considerable power to shape the market,” he told me. “If a developer says, ‘I don’t like your affordable allotment, I’m not going to build right now,’ the city should tell him, ‘Fine, then wait two or three years,’ instead of caving and giving away too much of the public interest.”
De Blasio’s plan is predicated on the rezoning of fifteen neighborhoods to allow for higher residential density, as urbanists call it. This means the construction of large apartment buildings designed to attract much wealthier tenants than have previously lived in those neighborhoods.
In April 2016, East New York in Central Brooklyn, one of the poorest districts in the city, became the first to be rezoned under de Blasio’s plan. Two years earlier, investment groups, having learned of the impending change, began buying up older, rent-stabilized buildings and engaging in the familiar pattern of “unlocking” value through tenant harassment and eviction. Prices shot up. Short-term speculators flipped buildings for an average return of 125 percent, the highest appreciation in all of New York in 2016.
The city has promised to spend $257 million on schools, parks, street repair, high-speed Internet service, and other improvements in a neighborhood whose residents have spent decades pleading for basic services. Subway stations will be renovated, buses will run more frequently, and police on foot patrol will give the streets a protected, reassuring air. As has happened during the early stages of gentrification in other Brooklyn neighborhoods, East New York will be more racially integrated—for a time.
If all goes to plan, three thousand new affordable apartments will be created in East New York by 2024. It is possible, however, that just as many older stabilized units will be lost to predatory investors, putting the city in the impossible position of promoting affordable housing with one hand and working against it with the other. Five Central Brooklyn neighborhoods suffered a net loss of 5,496 rent-stabilized apartments between 2008 and 2015, even after newly constructed affordable units were counted.4 When I posed this conundrum to an official in the Department of Housing Preservation and Development, he said, “Gentrification is going to happen anyway. At least this puts us in the game.” I wondered if this were true for East New York: without the city’s invitation to developers and the influx of new residents that it will bring, the neighborhood’s manic transformation—and the displacement that goes with it—seems unlikely to occur anytime soon.
Fear of displacement has reached such a pitch in New York that for many the very idea of rezoning has become synonymous with eviction. In June, when Community Board 11 in East Harlem held a meeting to vote on the city’s proposed rezoning of a ninety-six-block swath between 104th and 132nd Streets that would allow for residential towers as high as thirty-five stories, more than one hundred protesters showed up, and a violent shoving match erupted. One East Harlem resident called the rezoning plan “ethnic cleansing.” Another compared it to “a Trojan horse” that would “come out at night to do us in.” Still another called it “a criminal act against our people.”
Rejected outright by protesters was the possibility that residents and their representatives could negotiate an agreement with the city that would provide more affordable units and stricter protection for rent-stabilized tenants. The level of distrust toward the city was remarkable, but not surprising. The de Blasio administration would do well to examine its disconcerting decision to rezone mainly in poor neighborhoods where displacement is most acute.
The hard fact is that behind the wildfire of new construction, new restaurants, retail outlets, bars, music halls, cafés, tech and media start-ups, and nearly full employment, real poverty in New York is on the rise. Wages have gone up, but housing costs have made many people poorer. The median rent-to-income ratio shows that New York tenants (excluding those living in public housing projects and other financially assisted buildings) spent 65.2 percent of their total income on rent in 2016, up almost six points from the already alarmingly high figure of 59.7 percent in 2015. The median can be a misleading measurement, but in this case it provides a telling portrait of the city’s evolving predicament. By comparison, nationwide, in 2015, Americans earning the country’s median annual income of $55,589 could expect to spend no more than 30 percent on rent.
The de Blasio administration’s current policy seems to acknowledge, and to some extent concede, that the economy of New York leaves little room for the poor. The public housing projects, built with federal money between the mid-1930s and late 1960s, are quickly becoming the last relatively secure refuge for lower-income families in New York. They consist of 176,066 low-income apartments with 400,000 “authorized residents” (leaseholders and members of their immediate family), a mere 4.7 percent of the city’s population. (When “off-lease” residents are counted, some estimates put the number at 600,000.) The average family income in the projects is $24,366, and the average monthly rent is $509. There are currently 255,143 families on the waiting list, and the vacancy rate is close to zero percent. With the steady, seemingly inexorable decline in the number of older rent-stabilized apartments, it is possible to foresee a future in which the public housing projects and municipal shelters are home to New York’s only remaining poor.
Obviously the situation calls for reform. Most crucial would be to eliminate the point—currently $2,700 per month—at which rent-stabilized apartments revert to market rates. History shows that as long as landlords have a path to the unregulated market, they’ll find a way to reach it. A 3 percent increase, say, from the Rent Guidelines Board would raise a rent of $2,700 per month to $2,781, still a manageable amount for a family of three with an income of $100,000, precisely the group that many of de Blasio’s new affordable units are aimed at. But most rent-stabilized tenants pay much less than that: of the 990,000 regulated apartments, 471,694 have rents of $1,000–$1,499; an additional 120,076 rent for $800–$999. The vast majority of these cheaper apartments are in New York’s poorest neighborhoods where incomes are well below the city’s average. If rent-stabilized apartments were required to stay in the system, no matter their cost, the outsize financial reward that landlords now reap for driving poorer tenants out of their homes would disappear.
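The compounding the paragraph describes is modest: with the deregulation threshold eliminated, a stabilized rent would rise only by the Rent Guidelines Board's annual increase. A quick sketch, assuming the 3 percent rate used above:

```python
# With the $2,700 deregulation threshold eliminated, a stabilized rent
# simply compounds at the Rent Guidelines Board's annual increase
# (3% assumed here, per the example in the text).

rent = 2_700.0
for year in range(1, 4):
    rent *= 1.03
    print(f"Year {year}: ${rent:,.2f}")
# Year 1 matches the $2,781 figure in the text.
```

Under this regime the landlord's return grows steadily but slowly, which is precisely why removing the threshold would also remove the outsize reward for emptying a building.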
But no such reform will occur as long as legislators in Albany control the city’s housing laws. From 2000 to 2016, New York City developers contributed $83 million to state assembly and senate campaigns, more than any other economic group. Much of that money went to upstate and Long Island candidates with no regulated housing in their districts. In exchange, these legislators, risking not a single vote among their own constituents, block pro-tenant bills from reaching the floor; on the rare occasion that one does make it to the floor—such as a 2010 bill requiring landlords to justify rent increases for apartments that are about to be deregulated—they band together to ensure its defeat.
The chair of the Senate Housing Committee, Catharine Young, is a Republican who represents a district near Lake Erie that is closer to Cleveland than to New York City. Young regularly sponsors pro-landlord bills, and in one case she introduced a bill involving a single building—Independence Plaza North in Manhattan—that would have vacated a court decision in favor of 3,500 tenants.5 (Young’s bill passed the Senate but died in the Assembly.) Two thirds of New Yorkers are renters. Urgently needed is some kind of referendum that would give the city control over its housing laws.
But New York’s crisis begs for a more definitive solution. In November, Los Angeles voters passed a half-cent sales tax increase to fund the most ambitious mass transit expansion in that city’s history. In essence, Angelenos collectively agreed to pay for a vast, decades-long project to solve their most intractable urban issue: gridlock traffic and the pollution it causes. Shouldn’t New Yorkers be given the chance to vote on a similar measure to fund affordable housing? What New York desperately needs is newly constructed buildings entirely devoted to households with incomes of $35,000 to $80,000, something a half-cent sales tax would abundantly provide.
There’s no doubt that this proposal would encounter a great deal of resistance. Some might argue, for instance, that a special transit tax to repair the subway system is more equitable because it would directly benefit every New Yorker, not just those in need of affordable housing. But the MTA has access to large amounts of capital through fare hikes and the issuance of municipal bonds. In addition, a commuter sales tax of 0.375 percent that helps fund the subway already exists. A majority of New Yorkers, I believe, recognize the importance of the housing emergency.
The city can expect no help from the federal government, which largely stopped building affordable public housing during the Nixon administration more than forty years ago. Ben Carson, President Trump’s secretary of housing and urban development, has expressed strong opposition to housing assistance for the poor, and the modest amount of federal money still directed to it will likely be cut even further when the Republican-controlled Congress resumes budget talks later this year.
The New York City Housing Authority, the government agency in charge of the city’s public housing, has a $17 billion deficit, the amount needed to repair and maintain its 2,462 buildings, some of which are more than seventy years old. In 2015, de Blasio implemented an effective program to address the deficit, but Trump’s proposed budget cuts, if approved, would, in the words of a senior housing policy analyst, “set [it] back by fifteen years.” Rarely has local funding been more imperative.
Imaginative low-cost housing is one of the most exciting branches of contemporary architectural design. There is no reason why New York cannot participate, and even be a leader, in this movement, instead of surrendering its skyline to a monotony of tinted-glass-clad towers with a handful of lower-cost units thrown in as a necessary concession to the city. One need look no further than at the ranks of luxury high-rises on the Williamsburg waterfront or in Downtown Brooklyn, or at the self-replicating piles clustered around the Queensboro Bridge in Long Island City, to understand the new “blight of dullness,” to borrow Jane Jacobs’s famous phrase, that is overtaking New York.
The city already has a few architectural examples of innovative housing to draw from, such as the felicitous gray, blue, yellow, and red apartment building on Boston Road in the Bronx, with 154 units that the firm Breaking Ground was able to construct for $47 million6; and the spectacular Via Verde, also in the Bronx, with its pleasantly leaf-filled 6,000-square-foot courtyard, solar panels, and roofs planted with garden plots and fruit trees. Both are for low-income tenants and were built on city-owned land, which made their construction less expensive.
With a sales tax devoted to housing, affordable buildings needn’t be confined to land the city already owns; enough money would be available to purchase lots all over the five boroughs, not just in poorer districts. The buildings could be woven into the fabric of the city, rather than clumped together in self-enclosed enclaves that promote a kind of psychological as well as physical segregation. New affordable housing would no longer be contingent on giving tax exemptions to the builders of private, market-rate projects: luxury developers would be free to charge whatever the market will bear for all of their units, not just 70 or 80 percent of them, and the city, in turn, could collect from these developers the billions in property taxes that it now forfeits under 421-a. Housing built with money from a special tax fund would be 100 percent affordable. Over time homelessness would decrease—especially among low-wage working families—as would the amount (currently about $1.6 billion per year) that the city spends on homeless services.
I have focused mainly on Brooklyn, partly because its 306,374 rent-stabilized apartments are the most in any borough, but also because Brooklyn is emblematic of New York’s housing emergency, with the hyperinvestment its real estate has been attracting since 2011, when the credit freeze brought on by the 2008 financial crisis began to thaw. These past six years have seen an extraordinary amount of displacement, and the majority of the displaced have been African-American. Seven of Central Brooklyn’s most vulnerable neighborhoods have a combined population of 940,000, 82 percent of which was black in 2010. It is the largest concentration of African-Americans (and Afro-Caribbeans) in the United States.7
In his revelatory book The Color of Law: A Forgotten History of How Our Government Segregated America (2017), Richard Rothstein shows the extent to which explicit federal policy restricted blacks from buying homes, effectively barring them from the surest path of entering the middle class, paying for higher education for their children, and accumulating wealth. The policy lasted from 1934, when the Federal Housing Administration (FHA) was established, until Congress passed the Fair Housing Act in 1968. By then, the damage had been done: working-class whites in government-subsidized suburban subdivisions, with guaranteed mortgages from the FHA, were benefiting from the increased value of their homes, while blacks were consigned to live as renters in depopulated cities, without equity in homes and often without jobs.
Rothstein documents how mortgage covenants and deeds imposed by the FHA not only prohibited developers from selling to blacks, but prohibited buyers from reselling to them. “Incompatible groups,” the FHA manual said, could not be financed. The policy structured the economies of America’s major municipalities in ways that are still felt today: now that New York (and a select number of other cities) has become desirable to live in again, families that in the twentieth century had been kept poor in places like Brooklyn and Harlem are being pushed out of their homes. We speak nowadays with contrition of redlining, the mid-twentieth-century practice by banks of starving black neighborhoods of mortgages, home improvement loans, and investment of almost any sort. We may soon look with equal shame on what might come to be known as bluelining: the transfiguration of those same neighborhoods with a deluge of investment aimed at a wealthier class.
Under de Blasio some positive emergency measures have been proposed. In February the mayor announced that the city would guarantee legal representation for tenants who are facing eviction and earn less than $50,000 per year, roughly 90 percent of whom appear in housing court without an attorney. Landlords tend to drop spurious cases when there’s counsel on the other side, and the number of illegal evictions has already begun to fall.
This will give immediate help to people in the direst circumstances. Still, neither the city nor the state has yet committed fully to the protection of New York’s renters where they need it most: in their existing affordable apartments. Under de Blasio’s plan, well intentioned though it may be, the housing crisis is almost certain to worsen. To what extent should a renter who fulfills the terms of his lease be shielded from the vagaries of real estate markets with their speculative booms and busts? More broadly, what kind of city do New Yorkers want to live in? What responsibility, if any, do we bear to make sure that our most besieged citizens are not pushed out by our current urban prosperity? These are critical questions that New York, and other cities profiting from a surge of private real estate capital, must answer.
This does not include the 176,066 low-income apartments—“the projects”—managed by the New York City Housing Authority or the 45,312 Mitchell-Lama units for people of moderate and middle income.
The highly publicized agreement that the de Blasio administration signed with the owners of Stuyvesant Town and Peter Cooper Village in 2015, to keep five thousand apartments rent-stabilized for the next twenty years, was unusual in almost every respect. They are the only regulated units left in a development with 11,232 apartments that had been built, in the mid-1940s, with an enormous amount of public financial assistance. The fraught history of Stuyvesant Town reveals a great deal about the interdependent relationship between the city and big developers. See Rachael A. Woldoff, Lisa M. Morrison, and Michael R. Glass, Priced Out: Stuyvesant Town and the Loss of Middle-Class Neighborhoods (NYU Press, 2016); and Charles V. Bagli, Other People’s Money: Inside the Housing Crisis and the Demise of the Greatest Real Estate Deal Ever Made (Dutton, 2013).
I am indebted to Norman Oder’s impeccably researched report “The Real Math of an Affordable Housing Lottery: Huge Disconnect Between Need and Allotment,” City Limits, April 19, 2017.
The five neighborhoods are Bedford-Stuyvesant, Crown Heights, Prospect Lefferts Gardens, Flatbush, and East Flatbush.
For more on the subject of real estate money and the state legislature see “The Inside Story of How 421a Developers Sway Albany,” an excellent investigative piece by Will Parker of The Real Deal and Cezary Podkul and Dereck Kravitz of ProPublica, The Real Deal, December 30, 2016.
See Martin Filler, “A Higher Form of High-Rise,” NYR Daily, October 4, 2016.
The neighborhoods are Bed-Stuy, Brownsville, Canarsie, East Flatbush, Prospect Lefferts, East New York, and Fort Greene.
In 1803, the guillotine was a common children’s toy. Children also had toy cannons that fired real gunpowder, and puzzles depicting the great battles of England. They went around chanting, “Victory or death!” Do childhood games influence character? We have to assume that they do, but let’s set aside such heartbreaking speculations for a moment. War—it’s not even a proper game—leaves influenza in its wake, and cadavers. Do childhood games typically leave cadavers behind in the nursery? Massacres in those little fairy-dust minds? Hoist the banners of victory across the table from the marzipan mountain to the pudding! It’s perhaps a dreadful thought, but we’ve seen clear evidence that both children and adults have a taste for imitation. Certainly, such questions should be explored, and yet let us allow that there is a purely metaphysical difference between a toy guillotine and war. Children are metaphysical creatures, a gift they lose too early, sometimes at the very moment they learn to talk.
John Keats (1795-1821) was seven years old and in school at Enfield. He was seized by the spirit of the time, by a peculiar compulsion, an impetuous fury—before writing poetry. Any pretext seemed to him a good one for picking a fight with a friend, any pretext to fight.
Fighting was to John Keats like eating or drinking. He sought out aggressive boys, cruel boys, but their company, as he was already inclined to poetry, must have provided some comic and burlesque treats. For mere brutality—without humor, make-believe, or whimsy—didn’t interest him. Which might lead a person to extrapolate that boys aren’t truly brutal. Yes, they are, but they have rules and an aesthetic. Keats was a child of action. He’d punched a yard monitor more than twice his size, and he was considered a strong boy, lively and argumentative. When he was brawling, his friend Clarke reports, Keats resembled Edmund Kean at theatrical heights of exasperation. His friends predicted a brilliant future for him in the military. Yet when his temper defused, he’d grow extremely calm, and his sweetness shone—with the same intensity as his rage had. The scent of angels. His earliest brushes with melancholy were suddenly disrupted by outbursts of nervous laughter. Moods, vague and tentative, didn’t settle over him so much as hurry past like old breezes.
A year before leaving Enfield—the Georgian-style school building would later be converted into a train station and then ultimately be demolished—John Keats discovered Books. Books were the spoils left by the Incas, by Captain Cook’s voyages, Robinson Crusoe. He went to battle in Lemprière’s dictionary of classical myth, among the reproductions of ancient sculptures and marbles, the annals of Greek fable, in the arms of goddesses. He walked through the gardens, a book in hand. During recreation breaks, he read Elizabethan translations of Ovid. Scholars have made a habit of pointing out that the poet didn’t know Greek. So what? Even Lord Byron insinuated that Keats hadn’t done anything more than set Lemprière to verse. In the same way that the translation errors from Greek don’t at all invalidate Hölderlin’s Der Archipelagus, Keats’s own transposed Greek perhaps allowed him to tear up the fields of Albion with the shards of classical ruins. He revealed to no one that he was an orphan. The tutors were glued to his side. He forgot his birthday and decided to study medicine. He learned how to leech, pull teeth, and suture. He observed cadavers on the dissection table that had been purchased off the resurrection men for three or four guineas each. The naked bodies were delivered in sacks.
Keats took notes and in the margins sketched skulls, fruit, and flowers. He felt alone. The “blue devils” settled along with him into the damp room. He frequented the Mathew family, his cousins, Ann and Caroline, who had a righteous horror of the frivolities of youth. They picked out piano arias from Don Giovanni and the young men danced the quadrille. It’s said that John Keats’s very first passion was for a stranger he’d seen for half an hour. He was waiting for her to smile at him but she never did. John Spurgin wanted to make a Swedenborgian of him. Keats’s friend Charles Cowden Clarke procured his books. Clarke was a massively tall man with bushy hair; eight years older than Keats, he had a great interest in cricket, about which he wrote a handbook. He would also write about Chaucer and Shakespeare. Keats played cricket too.
His appearance was transformed in a single afternoon in 1813 at a lecture about Spenser. Seeming suddenly both large and potent, he emerged from his diminutive stature while reciting the verses that had struck him. He devoured books, he copied, translated sections, he became the scribe and secretary to his mind. He informed his friends at Guy’s Hospital that poetry was “the only thing worth the attention of superior minds.” And it would become his sole ambition. He dressed like a poet, collar turned up and tied with a black ribbon. For a short time he grew a mustache. When exam day arrived, everyone was sure that he wouldn’t pass, what with those poetic airs. He did earn his diploma and would be able to work as an apothecary. But he chose to leave medicine. He was only twenty years old when he saw his own poem, “To Solitude,” published in the Examiner.
It was impossible for his talent not to draw the attention of many people. Leigh Hunt, imprisoned for having libeled the king, protected Keats as long as Keats let him. John Hamilton Reynolds thought of him as a brother. Joseph Severn perceived ecstasy in his face and about his features—but then, Severn was a painter. He observed that his head was too small for his broad shoulders, observed the intensity of his gaze that blazed like a flame when crossed but when calm glittered like a lake at dusk, and noted a cold lethargy. They visited museums together. He saw Brown, Dilke, Bailey, Hazlitt. Things were lukewarm with Shelley. Benjamin Haydon showed him the Elgin Marbles from the Parthenon. Keats didn’t have the money to travel the world but made a long walking tour of Scotland. He wore a sack on his back filled with old clothes and new socks, pens, paper, ink, Cary’s translation of the Divine Comedy, and a draft of Isabella. His traveling companion was the clerk and writer Charles Armitage Brown, a practical and energetic man. Keats returned home ragged and feverish, his jacket torn and his shoes missing, but he had scaled a mountain, the Ben Nevis. He was poor, according to W. B. Yeats, and couldn’t build a Gothic castle as Beckford had, which inclined him instead toward the pleasures of the imagination. Yeats also said that Keats was malnourished, of weak health, and had no family. But aren’t all poets the heralds of Heaven?
According to the testimony of friends, Keats was of small stature, though rather muscular, with a broad chest and broad shoulders (almost too broad); his legs were underdeveloped in proportion to his torso. He gave off the impression of strength. His chestnut hair was abundant and fine. He parted it with a ruler and it fell across his face in heavy silken curls. He had a high, rather sloped, forehead. His nose was beautiful but his mouth—they were specific on this point—was big and not intellectual. His lower lip was pronounced, giving him a combative aspect, which diminished his elegance a bit, yet served, they were quick to add, to animate his physiognomy. His face was oval and there was something feminine about his wide forehead and pointy chin. Despite his disproportionate mouth, Keats, they’d concede, was handsome. Sometimes he had the look in his eyes of a Delphic priestess on the hunt for visions.
According to Haydon, he was the only one who knew him—with the sole exception of Wordsworth, who’d predicted great acclaim for him based on his looks.
He was brilliant socially, loved wordplay, and his eruptions of laughter were noisy and extended. People found him irresistibly funny when he did impressions. If he didn’t like the conversation, he’d retreat to a window corner and look out into the void. His friends respected that corner as if it were his by law.
If a face, as Johann Gottfried Herder says, is nothing more than a Spiegelkammer of the spirit, then we should be a little frightened of Keats’s variety of expressions. Even doubt insinuates itself. When Keats wrote, “I thought a lot about Poetry,” we can’t see in that a mirror reflection of Keats. The mirror is empty, uninhabited. The idea has no facial features and could look like anything, but theologically it’s more beautiful empty. Keats is unable to contemplate himself. His gift is not knowing how to reconcile himself. The identity of a person who is in the room with him presses in and cancels his own out in a flash. When Keats speaks, he’s not sure that he’s the one talking. When he dreamed of bobbing in the turbine in Canto V of Dante’s Inferno, it was one of the great joys of his life.
Joseph Severn’s portrait is described by some as a lie drawn from truth: friends found it too effeminate, the trembling mouth, and yet the eyes were right, even radiant. The painting’s three-quarter view makes the eyes seem even bigger, more remarkable. His focus rests above the earth yet not in the sky—fixed on a murky horizon. His pupils are slightly enlarged, trained perpendicularly on the suspended thought. Even his gaze is indolent, sensual, consciously engrossed, and like a veil shifting across his brow, there is a flash of charming zealotry. He looks like a girl, and if we think of him as a girl, the femininity of his features evaporates and he seems stubborn and volatile, the constant surveyor of his own visions.
One day in Haydon’s study, Keats recited “Hymn to Pan.” Wordsworth was there; he kept his left hand tucked into his waistcoat. “With reverence” was the way he’d inscribed a book of his poems for Keats and he was truly reverent about poetry. Wordsworth’s wife was once heard to say, “Mr. Wordsworth is never interrupted.” Keats dared open his mouth anyway. He recited his verse in that singsong way of his while pacing up and down the room. In the space between his voice and the paintings on the wall there was a plastic silence. “A very pretty piece of paganism,” said Wordsworth, his left hand still tucked into his waistcoat. Haydon was distressed by Wordsworth’s utter tactlessness and angered by his use of the word “paganism.” And yet we read in Meister Eckhart that through their virtue, the pagan masters had ascended higher even than Saint Paul, and that experience was what had brought them as high as the apostles had come through grace.
There were women Keats didn’t dislike. Miss Cox, an Anglo-Indian heiress, had a theatrical Asian beauty and was therefore despised by the Reynolds sisters. She kept him awake one night the way a Mozart piece might. “I speak of the thing as a passtime and an amuzement than which I can feel none deeper than a conversation with an imperial woman the very ‘yes’ and ‘no’ of whose Lips is to me a Banquet.”
Isabella Jones was a few years older than Keats and had read “Endymion.” They met when she was staying with an elderly Irish relative in the village of Bo Peep near Hastings. Biographers have questions about her—the two took walks, took tea together in the garden, and played whist late into the night—was this a summer fling or an initiation? The prevailing view is that it was an initiation.
What took the form of a young woman who’d moved in nearby was almost a matter of sorcery. For some time, Keats didn’t want anyone to utter her name. Her mere existence was secret. Fanny Brawne was descended from knights, monks, and lawyers. Her mother had married for love against her parents’ wishes—like Keats’s own mother who’d married the stable boy at the Swan and Hoop Inn. Fanny acquired Beau Brummell as a cousin when her mother’s sister married. From her paternal ancestors who’d performed at the Garrick, Fanny inherited a proclivity for the theater. Grandfather Brawne had supported the liberation of women. It was said about Fanny that she wasn’t very beautiful, but undoubtedly elegant. Her nostrils were too thin, her face too long, the nose aquiline, and her pallor chronic. Her cheeks were never rosy, not even after a six-mile walk. The history of female beauty is almost always told in the negative. Even the Brontë sisters were talked about as plain, as was Emily Dickinson. Spiritual sex appeal does not seem to generate chivalry. Fanny was the same height as Keats, just over five feet tall. His nickname for her was “Millamant.” From the moment she met him, Fanny was taken with his conversation. Generally, she found men to be fools. Was compelled to describe herself as “not timid or modest in the least.” She conversed in French with the émigrés at the Hampstead “colony.” She danced with officers at the St. John’s Wood barracks. She had an eighteenth-century way about her, her hair curled in the style of the court of Charles II. Fanny had a “fire in her heart.” Her mother made inquiries about Keats with the neighbors. They were engaged. Keats signed his letters to her with the emblem of a Greek lyre with four broken strings and the motto: Qui me néglige me désole. Walking on the heath, Keats came across a being with a strange light in its eyes, a rumpled archangel—he recognized Coleridge. They walked together and spoke of nightingales and dreams.
“That drop of blood is my death-warrant. I must die,” pronounced Keats calmly on the third of February 1820. He seemed intoxicated. His future was not predicted by a Sibyl, but by the medical student himself, the poet whose verses describe beauty flooded by a mortal estuary. With the intensity he’d once applied to his anatomy studies, he scrutinized the blood on his handkerchief. He felt like he was suffocating and only managed to fall asleep after hours of despotic insomnia. On the third day he was well enough to receive visitors and read news of George III’s death. Doctor Rodd came to see him. His lungs were not compromised but the doctor recommended mental rest. They determined that the hemorrhage was simply the body trying to fight off the recent bout of cholera that his brother George had suffered. They soothed him with currant jellies and compotes, some of which dripped onto a Ben Jonson first edition. This extreme diet provoked strong palpitations. Doctor Bree, a specialist, was summoned. They could find no ailments in his lungs or other organic causes. Keats’s illness “is in his head,” they concluded. For a day, he was tormented by Fanny’s specter, which appeared to him dressed as a shepherdess and then in a ball gown. She was a joyful simulacrum dancing and giggling in the void.
The morning of June 22, he had light bleeding. In the afternoon he went to the Hunts for tea. They talked about an Italian tenor. There was a lady there who was particularly interested in bel canto and was amazed that the young gentleman was the author of “Endymion.” The bleeding got worse over the course of the evening. He spends the twenty-third laid out in a room, far from Fanny, staring at flowers on a table. Speech is difficult. He indicates the verses he favors in a volume of Spenser he wants to give to Fanny. The doctor Darling prescribes a trip to Italy. Keats’s hands are like those of an old man, veins swollen; his features, Severn reports, have taken on the same cast his brother Tom’s did when he was dying of consumption. The evanescent hand furiously traced an oblique line over the first copy of his book. In a preface, the publisher apologized for the unfinished “Hyperion.” It is the first of July. There is a metal taste in his mouth. “If I die,” he tells Brown, “you must ruin Lockhart.” For he was the one who’d written an insulting article about Keats that touted gossip and personal details. Unsigned—yet Keats applied his sleuthing talents and located an inside source to identify that enemy of literature.
Keats considered going just anywhere in order to die alone. Then he wanted Brown to go with him. But he was to leave for Rome with Severn. On the twentieth of August he started coughing blood again. His friends began to say their farewells. Farewells to dying people are often awkward. Haydon started off the ceremony. By way of comfort, he began to speak about life after death—the last thing that Keats wanted to hear. Angered, Keats answered that if he didn’t get better right away he’d rather kill himself. John Hamilton Reynolds was unable to take his hand. He wrote to John Taylor that he was happy about Keats’s departure, that he should be running from Leigh Hunt’s vain and cruel company. As for Fanny, Keats only benefited from the absence of the poor thing—to whom he was so incomprehensibly bound. Fanny wrote in her diary: “Mr. Keats leaves Hampstead.” Keats gave her the Severn miniature, a copy of Dante, a copy of Spenser, and his Shakespeare folio. They exchanged locks of hair and rings. Fanny sewed a silk lining into his traveling hat and also gave him a journal and a knife. Woodhouse also took a lock of his hair. He wanted to be Keats’s Boswell. The Maria Crowther set sail. It was a small two-rigger and when the sea got rough it disappeared beneath the waves.
It had one cabin intended for six people. There was the Captain, a good man; Lady Pidgeon, plump and pleasant; and Mistress Cotterell who was gracious though in an advanced state of consumption. But then there was a typhoid epidemic in London, the ship was quarantined, and it was October 31 by the time that ended and Keats was twenty-five years old. When Mistress Cotterell disembarked in Naples she asked, a little too loudly, after the moribund youth. They arrived in Rome on the fifteenth of November. Doctor Clark was waiting for them. His bedside manner had been acclaimed by the King of Belgium and Queen Victoria. He was a Scot. While attending Keats, he had only minor concerns about what was afflicting the heart and lungs and said that the more serious trouble was in his stomach. Mental exertions were the source of the trouble. The doctor recommended fresh air and moderate exercise. He had Keats throw all his medicines to the dogs. He suggested horseback riding and rented a horse for him at six pounds sterling a month. The landlady, Anna Angeletti, asked five pounds sterling in rent. Keats desired a piano and so that was rented as well. Doctor Clark lent him several pieces of music, throwing in a Haydn sonata as well. The food was fetid. On one occasion, Keats threw it out the window after tasting it. Shortly thereafter he was brought an excellent meal.
He started reading Alfieri’s Tragedie but had to stop after the first few pages—not being able to contain his emotions. He wrote a last letter to Brown, attempting an awkward bow and a grand farewell. On the tenth of December after vomiting blood, he asked Severn for laudanum. The attacks over the next week were violent. He suffered from hunger. Clark rationed his food severely because of the ruined state of Keats’s digestive apparatus; one anchovy on toast a day. Keats begged for more food. He couldn’t sleep. He suspected that someone back in London had poisoned him. The servants didn’t dare come into his room because they feared he was contagious. On Christmas Day, Severn perceived in his friend’s desperation that Keats was “dying in horror.” As a good Christian, Severn tried to convince Keats that there was redemption in pain. Keats dictated a list of books that he wanted to read: Bunyan’s Pilgrim’s Progress, Jeremy Taylor’s Holy Living and Holy Dying, and Madame Dacier’s translation of Plato. Three letters arrived that day. The letter from Fanny remained unopened.
At the end of December the landlady reported Keats’s illness to the police. Severn didn’t go out to sketch ruins but stayed at Keats’s side instead. Keats was overcome by sleep and Severn drew a portrait of Keats’s head on his pillow, eyes closed, face hollowed, a few curls glued to his forehead with cold sweat. Then transcribed Keats’s words, his last testimony. Severn was in the presence of a great poet. He may have been already thinking that one day he would be buried beside him. He’d been to visit the Protestant cemetery near the Pyramid of Cestius; its grounds were glazed over with violets and it seemed that Keats liked the spot. He said he would feel the flowers grow over him. Severn knew that violets were Keats’s favorite flower. He plucked for him a just budded rose, a winter rose. Keats received it darkly and said, “I hope to no longer be alive in spring.” He wanted what he called in his last letter a “posthumous existence” to come to an end. Inscribed on his gravestone: “Here lies one whose name was writ in water.” His words are set into the stone as if on a mirror, touching everything and not touched by anything—strange asymmetry.
Stretched out on his bed, he gazed up at the rose pattern in the blue ceiling tiles. His eyes grew glassy. He spoke for hours in a lucid delirium. He never lost his faculties. He prepared Severn for his death. He wondered whether he’d ever seen anyone die before. He worried about the complications that might come up. He consoled Severn and told him that it wouldn’t last long and that he wouldn’t have convulsions. He longed for death with frightening urgency. On the twenty-third of February he worried about his friend Severn’s breathing, how it pressed on him like ice. He tried again to reassure him: “It will be easy.” Dusk entered the room. From when Keats said that he was about to die, seven hours passed. His breath stopped. Death animated him in the last moment. After the autopsy, Clark said that he couldn’t understand how Keats had survived so long. Fanny’s last letters, never read by anyone, were sealed in his coffin. After the funeral service, the police took possession of the apartment on Piazza Spagna. They stripped the walls and floor and burned all of the furniture.
From These Possible Lives by Fleur Jaeggy, translated by Minna Zallman Proctor, which will be published by New Directions tomorrow.
For Henry James’s seventieth birthday in 1913 a group of his admirers commissioned John Singer Sargent to paint him; and Sargent’s own birthday gift was to waive his fee. The novelist sat some ten times in the artist’s London studio, and the painter always asked him to bring some friends along—“animated, sympathetic, beautiful, talkative friends,” as James put it, whose conversation would break the “gloom in my countenance by their prattle.” That was Sargent’s usual practice, and the evidence of its success sits this summer at the entrance to “Henry James and American Painting,” a compact but wonderfully heterogeneous show at the Morgan Library.
The portrait presents James full-faced and with his baldness fringed by gray. His head tilts just a bit to the right, his eyes are slightly hooded, and his expression looks shrewdly confident and skeptical, judging us far more than we would dare judge him. He’s wearing his usual winged collar and a bowtie, and seeing it here—its regular home is London’s National Portrait Gallery—I was struck by the fullness of his lips and the warm tones with which Sargent has painted his face. In 1914 the painting went on display at the Royal Academy and was slashed with a hatchet by a suffragette, not because she had anything against either James or Sargent per se, but simply because it looked like a picture of masculine prominence. It was expertly patched and to my untrained eye the damage isn’t visible; but a picture taken at the time shows a gash at the temple and another across the mouth.
The Morgan’s exhibit includes a comprehensive selection of Jamesian portraits along with other paintings of and by his friends. His brother William had planned to become a painter before deciding in 1861 to take up science instead, and worked for almost two years in the Newport studio of William Morris Hunt. But in the end it was Henry who spent the most time in artists’ rooms, and got the most from it. He too had gone to Hunt, and put in his hours with charcoal and ink, though where William and his fellow pupil John La Farge drew from life, Henry merely copied plaster casts. Still, it was enough to give him a taste for the painter’s world, the portrait painter’s in particular. It was a sociable existence, its easy chat mixed with the purposeful work of the hands, and the solitary writer was drawn to it as he would later be to the drawing room or the dinner party. One consequence was the frequency with which he used the studio as a setting for his fiction, whether in the early Roderick Hudson (1875) or a tale from his maturity like “The Real Thing.” And another was that he himself was often painted or drawn or photographed.
He liked sitting, and the exhibition includes a round dozen of his many portraits, more probably than have ever been gathered in one place before. In one, a marvelous 1911 charcoal head by Cecilia Beaux, the novelist’s eyes are fierce, his baldness emphasized and egg-like. It’s matched by Abbott Handerson Thayer’s elaborately stippled 1881 crayon drawing, a three-quarters view that suggests the strength of James’s nose. And that nose figures as well in the earliest image of him here, a profile in oil that La Farge made in 1862. James was just nineteen, and his hair looks a burnt red; he seems pensive and unhappy—uncertain too—and the background has a touch of storm in it.
The show’s other La Farge was new to me, a half-length portrait of William James from 1859, a palette in one hand and the other extended beyond the frame, and with the canvas as a whole dominated by the white sleeves of his shirt. It’s again in profile, and taken together these images mark the two as brothers: the nose and the lips match, and so does the tilt of the head. But William is active not contemplative, he’s doing something; and looking at them together it seems that La Farge got at not only their similarity, but also the essential difference between them. Maybe that’s an overstatement—maybe I’m projecting what I know about that difference onto these pictures of teenagers. On a different day La Farge could well have done them differently, capturing William’s youthful irresolution and Henry’s sense of purpose instead. Only he didn’t, and it’s a shame these two portraits normally hang separately, William at the National Portrait Gallery in Washington, and Henry at the Century Association.
The exhibit’s pair of Whistlers is more familiar. So are its other Sargents, though there are some real treasures among them, including the Royal Academy’s 1899 An Interior in Venice, where the stiff luxury of its setting is undercut by the looseness of his brush. The freshest thing about this show is a set of paintings connected with some of James’s friends and experiences in Italy. The Villa Castellani sits at the top of a Florentine hill on the south side of the Arno, a structure with a “long, rather blank-looking” front that forms one side of a “little grassy, empty, rural piazza.” The words come from The Portrait of a Lady, and they describe the building in which the novel’s Gilbert Osmond keeps an apartment. The grass is gone now, but James’s account is otherwise exact to the place: an easy walk from the city’s center, for all that his characters complain about the climb, and with each turn of the road offering a new view, the olive-shrouded hills, the Duomo too large for human conception. The villa dates from the fifteenth century, and by James’s day was divided into flats “mainly occupied by foreigners of random race long resident in Florence.” One of them was a family friend named Francis Boott, a widower who lived on the proceeds of a Lowell textile mill along with his carefully educated daughter, Elizabeth.
As individuals the Bootts had almost nothing in common with The Portrait’s Gilbert and Pansy Osmond. Nevertheless James drew on their situation—their closeness, their expatriation—in imagining his characters; and he also returned to them, as Colm Tóibín writes in the catalogue to this exhibit, in developing the Ververs of The Golden Bowl. Boott wrote music and Lizzie painted, and in 1879 she began to study with a good-natured but rough-mannered artist from Cincinnati named Frank Duveneck. James had already written admiringly of his work and he had a reputation as a skilled teacher; Lizzie fell in love with him, and over her father’s objections they married. Duveneck had just completed a full-length portrait of her when in 1888 she caught pneumonia and died. She stands against a gray wash of a background, all in brown—gloves and muff, dress and hat—and so impressively corseted that one can almost hear the whalebone creak. She had worn the same dress for their wedding, but her eyes here are steady and searching and sad; she’d just had a baby and her expression made me think, irresponsibly, about post-partum depression. The canvas is highly finished, indeed polished; but then so was Elizabeth Boott, and this painting’s honesty and pain seem unforgettable.
The show’s catalogue includes three highly suggestive essays by its co-curators. Tóibín’s is the most intensely biographical, slipping with practiced ease between James’s work and life, and tracing in that work the ghostly presence of the painters and sculptors he knew. Marc Simpson, formerly of Williamstown’s Clark Institute, writes exceptionally well about James as an art critic, noting both his puffery—he often reviewed his friends—and his blind spots. James wrote interestingly about the early Winslow Homer, admiring his skill while dismissing his subjects as “suggestive of a dish of rural doughnuts and pie.” But the list of American painters he ignored is a long one, Thomas Eakins among them. The Morgan’s own Declan Kiely details the library’s holdings of James’s manuscripts and letters, and the piece seems at first an outlier, though the exhibition does include several vitrines of his papers. But Kiely predicates the essay on James’s own ambivalence about such archival survivals, letters in particular; the result is sharp and alive and includes a glimpse of some still-unpublished correspondence with the expatriate American doctor William Baldwin. He was based in Florence, and his patients included everyone from Queen Victoria to Mark Twain, as well as James himself—who would have hated the idea of our looking over the doctor’s shoulder at the details of his health and diet.
James explored such invasions of privacy in stories like “The Aspern Papers” and “The Abasement of the Northmores,” and he tried to forestall posterity, knowing he would fail, by burning great piles of his own papers in 1909 and again in 1915. Among them, presumably, were the letters he’d gotten from Sargent, with whom in Simpson’s words he had an “apparently abundant correspondence.” They first met in 1884 and a few years later James encouraged the painter to settle in London, writing him up for the magazines and introducing him to other painters. They were guests at the same tables and depicted the same world; indeed some critics complained about their resemblance, as though they merely illustrated each other. The one surviving letter between them shows that they used first names—and yet James burned his files and Sargent simply didn’t save things. In writing to others the novelist rarely refers to Sargent the person as opposed to the painter, and we have never known, and never will, very much about their long friendship.
One portrait here that James mentioned as “very sure and charming” is that of a writer with whom both he and Sargent were friends—or a double portrait rather, an 1885 oil, Robert Louis Stevenson and His Wife. Fanny Stevenson lounges at one edge of the canvas, wrapped in a shawl—a few splashes of yellow and red—and with her feet bare. Meanwhile the writer paces, pulling at his mustache, his long body impossibly thin, and with the dimly Jamesian recess of a doorway swinging open between them. Sargent’s surface is rougher than in his society portraits, and some critics at the time thought it awkward. So it is, deliberately, and Stevenson loved it. He stands off-center, and looks out at us, looking too as if he’s about to leave the room, the frame. As the past, or other people, are apt to when you try to pin them down.