
Egypt: The New Dictatorship

Abdel Fattah el-Sisi; drawing by James Ferguson

On July 3, 2013, General Abdel Fattah el-Sisi, chief of staff of the Egyptian Armed Forces, appeared on national television. Clad in a military uniform and black beret, he announced that he was acting on “a call for help by the Egyptian people” and seizing power from the Muslim Brotherhood. Since winning parliamentary elections in 2011 and the presidential election the following year, the Brotherhood—a grassroots movement founded in Egypt in the 1920s—had stacked the government with Islamists, failed to deliver on promises to improve the country’s deteriorating infrastructure, and attempted to rewrite Egypt’s constitution to reflect traditional religious values. These moves had provoked large demonstrations and violent clashes between supporters and secular opponents.

Sisi declared the Muslim Brotherhood a terrorist group and jailed its leadership—including the president he had deposed, Mohamed Morsi. Six weeks later, on August 13, he ordered the police to clear Brotherhood supporters from protest camps at two squares in Cairo: al-Nahda and Rabaa al-Adawiya. According to official health ministry statistics, 595 civilians and forty-three police officers were killed in exceptionally violent confrontations with the protesters, but the Brotherhood claims that the number of victims was much higher.

That fall, Sisi launched a sweeping crackdown on civil society. Citing the need to restore security and stability, the regime banned protests, passed antiterrorism laws that mandated long prison terms for acts of civil disobedience, gave prosecutors broad powers to extend pretrial detention periods, purged liberal and pro-Islamist judges, and froze the bank accounts of NGOs and law firms that defend democracy activists. Human rights groups in Egypt estimate that between 40,000 and 60,000 political prisoners, including both Muslim Brotherhood members and secular pro-democracy activists, now languish in the country’s jails. Twenty prisons have been built since Sisi took power.

In October 2013, President Barack Obama demonstrated his disapproval of the violent crackdown on Muslim Brotherhood supporters by suspending military aid to Egypt. The aid—including a dozen F-16 fighter jets, twenty Harpoon missiles, and up to 125 US Abrams M1A1 tank kits—was restored eight months later. By that point, Sisi had shed his military uniform and become Egypt’s civilian president, winning more than 95 percent of the vote in a stage-managed May 2014 election. But Obama kept his distance, refusing to invite Sisi to the White House.

Donald Trump, who has spoken bluntly about “radical Islamic terrorism” and appears to share Sisi’s view that the Muslim Brotherhood is involved in such activity, quickly signaled his support for the military government. Sisi was the first Arab leader with whom Trump spoke after his inauguration, and in April the US president invited him to the White House for what was described as a cordial private meeting. According to reports, Trump did not broach the subject of human rights violations, and observers believe that his embrace may embolden the Egyptian leader to extend his repressive policies.

But recent events in Egypt have raised the question of whether the tradeoff Sisi has offered the Egyptian public—keeping them safe in exchange for an authoritarian state and far-reaching restrictions on civil society—is working. In the northern Sinai Peninsula, an Islamic State–affiliated group called Sinai Province has launched an alarming number of attacks on security forces in recent months. The group has claimed to have killed 1,500 people—including security forces and “collaborators”—since the beginning of 2016. (Egyptian military officials say that number is wildly exaggerated.)

International peacekeepers describe the fighting in Sinai as starting to resemble the conflict in Afghanistan, with a committed army of religious fundamentalists, rocket and sniper attacks on foreign military observers, and defections by government troops angered by the state’s persecution of Islamists. “They are globally inspired local insurgents,” Major-General Denis Thompson, the Canadian former commander of the peacekeeping force, said in a recent interview. “And their effort is really to use the [ISIS] brand to attract recruits, and locally they’re trying to redress many long-standing grievances they have with the Egyptian government.” Abuses by the military may also be drawing more local men to the ISIS cause. In late April, Human Rights Watch urged the US government to suspend military aid to Egypt after a video surfaced showing troops executing eight captured insurgents, then planting rifles next to their corpses to make it look as if they were killed in combat.

Meanwhile, a previously unknown militant group called Lewaa El-Thawra (Revolution Brigade) has taken the Islamist insurgency to more populous parts of the country. At dawn on a Saturday morning last October, a senior Egyptian army officer who commanded forces in the Sinai was shot dead by members of the group outside his home in an affluent Cairo suburb. In early April, the group injured a dozen policemen in an attack on a training academy in the Nile Delta. “The current regime has destroyed the people’s revolution, killed its members, and imprisoned others,” the brigade declared in a video released last fall, announcing that it was going to war to avenge the Rabaa al-Adawiya and al-Nahda killings. “Our message to the Interior Ministry’s mercenaries is that you all will be fired upon soon.”

Far more worrisome for Egypt’s stability, however, has been a series of large-scale attacks on the country’s Coptic Christian minority. Copts, who make up about 10 percent of Egypt’s 90 million people, have been repeatedly attacked since the 2011 revolution, and numerous churches have been bombed. Many Christians blamed the Muslim Brotherhood for this violence and supported the coup that brought Sisi to power.

But the most recent attacks have caused many Christians to question that support. In December, a suicide bomber blew himself up inside a chapel beside Egypt’s main Coptic cathedral in Cairo, killing twenty-five. Two months later, ISIS released a video that called Christians the jihadists’ “favorite prey” and vowed that the Cairo bombing was “only the beginning” of a campaign to “kill every infidel.” On Palm Sunday, two attackers detonated explosive vests within hours of each other at crowded Coptic churches in Alexandria and the Nile Delta. The coordinated bombings, for which the Islamic State again assumed responsibility, killed forty-five people and injured more than one hundred. It was the deadliest day of attacks against Christians in modern Egyptian history.

Some Egyptian intelligence officials believe that jihadists, facing pressure in other parts of the Middle East, are intent on opening a new front in Egypt. Many of the six hundred Egyptians believed to have fought with the Islamic State in Syria and Iraq have apparently abandoned the conflict in recent months and drifted home. With its erratic security forces, proximity to other jihadist battlefields, large Christian minority, repression of Islamists, and large population of young Muslims unmoored and angered by the authoritarian rule of Sisi, Egypt may present a rich opportunity for jihad.

Ayman Abdelmeguid, a member of the now outlawed April 6 Youth Movement, a secular opposition group that helped launch the Egyptian revolution, spent several weeks locked in a small cell with dozens of Muslim Brotherhood members last year after his arrest for violating the protest law. Many of these young men, who faced indefinite incarceration without trial, had been drawn to jihadism, he told me, by their experience in Sisi’s prisons. “The guys who started to shift toward violence had the sole idea of revenge and breaking the regime,” he said. “They argued that the regime deliberately killed, tortured, raped, and imprisoned them and their families and friends and hence deserved an eye for an eye and a tooth for a tooth.” If these men were released, Abdelmeguid told me, they would be ripe candidates for recruitment by jihadist groups.

After the Palm Sunday attacks, Sisi ordered the seizure of copies of a private newspaper critical of the regime and declared a three-month state of emergency, the first he had imposed since the aftermath of the violence in Rabaa al-Adawiya and al-Nahda in 2013. The law allows him to dispatch civilians to State Security Emergency courts, where no appeals are permitted; overrule court decisions that aren’t to his liking; monitor and intercept all forms of communication and correspondence; censor and confiscate publications; impose a curfew; shut down businesses; and seize property.

On May 8, an Egyptian court sentenced the Muslim Brotherhood’s spiritual leader, Mohammed Badie, and two deputies to life in prison for “planning violent attacks” following the Rabaa al-Adawiya killings. The public prosecutor’s office had charged the men, along with three dozen other Brotherhood members, with “preparing an operations room to confront the state and create chaos in the country” and “planning to burn public property and churches.”

Sisi has meanwhile created three permanent regulatory bodies to monitor the press: the Supreme Council for Media Regulation, the National Press Authority, and the National Media Authority. Composed of panels of journalists and government officials, the new bodies can fine or suspend publications, broadcasters, and individual journalists—including the foreign media. Democracy activists I talked to, who were already chafing under a dictatorship that one called “far worse than the Mubarak era,” say there now appear to be few, if any, checks on Sisi’s power.


Supporters of deposed president Mohamed Morsi at a rally outside the Rabaa al-Adawiya Mosque, where hundreds of protesters were killed the following month in a crackdown on the Muslim Brotherhood ordered by President Sisi, July 2013 (Moises Saman/Magnum Photos)

How did Egypt reach this point? In The Egyptians: A Radical History of Egypt’s Unfinished Revolution, Jack Shenker, a former correspondent for The Guardian in Cairo, examines the brief period of hope that followed Mubarak’s downfall—and the unraveling that led to Sisi’s police state and the crushing of the country’s democratic aspirations. As Shenker tells it, Sisi’s primary interest has been to safeguard the military’s hold on power and the vast network of financial interests—land holdings, corporate investments, and businesses—it has accumulated over six decades. He has used the threat of terror to justify a clampdown on any kind of dissent.

Shenker draws a straight line from Sisi back to Gamal Abdel Nasser, who took power following a military coup in 1952. Under the stringent terms of a bargain that Nasser struck with his citizens, writes Shenker,

a new nationalist government would ensure healthcare, education and employment was available to all. But…there was no room for anti-regime protest or democratic participation by the masses; those who tried to intrude upon the realm of governance would be cast out from the national family as unpatriotic and dangerous, and face punishment.

After Nasser died of a heart attack in 1970, his successor, Anwar Sadat, kept the police state intact but took away the safety net that had guaranteed Egyptians employment and subsidized basic commodities. Islamist army officers assassinated Sadat in 1981, an event that brought Hosni Mubarak to power. Also under the guise of fighting terror, Mubarak imposed a state of emergency immediately after Sadat’s assassination, stifled political activity, jailed thousands of Muslim Brotherhood members, and unleashed his state security forces to keep the population in line. Meanwhile, his National Democratic Party (NDP) served as a patronage machine for a coterie of businessmen-politicians who, in later years, gathered around Mubarak’s son and heir apparent, Gamal Mubarak. Public utilities and other state-owned assets were sold off for a song to Mubarak’s NDP cronies, who often plundered them, laid off thousands of workers, and then resold them for huge profits.

By the late 2000s, Shenker writes, “unemployment had risen so sharply that one in four Egyptians was out of work; among the millions who had been born since 1981 and knew no other leader than Mubarak, the jobless figure was estimated at over 75 per cent.” On the surface, Mubarak’s Egypt was stable, secular, and welcoming to tourists, but few of those who came to gaze at the pyramids and cruise down the Nile had any sense of the corruption, police brutality, and gross disparities of wealth that were breeding discontent among the population.

Shenker identifies several causes of the 2011 revolution: the rise of social media, which offered an alternative to the self-censored press of the Mubarak era; the stirrings of an organized opposition during a political opening caused by the US invasion of Iraq and President George W. Bush’s quixotic determination to democratize the Middle East; pockets of activism such as Mahalla, an industrial town in the Nile Delta that, in 2008, became the setting for a lengthy strike that attracted wide support; and the excesses of Mubarak’s thuggish security forces. The tipping point may have come in June 2010, when Khaled Said, a young man who had posted photos online of police engaging in illegal activity, was arrested in a Cairo Internet café, dragged into an adjoining building, and beaten to death. When photos surfaced of Said in the morgue, his face bloody and disfigured, a protest page was started on Facebook that attracted hundreds of thousands of followers.

Months later, in December 2010, a wave of protests erupted against Tunisia’s president, Zine El Abidine Ben Ali, forcing him to flee soon after and further mobilizing a generation of Egyptians fed up with stagnation, powerlessness, and state-sanctioned violence. Beginning on January 25, 2011, hundreds of thousands of Egyptians gathered in Tahrir Square, starting the uprising that less than three weeks later brought down Mubarak.

After Mubarak stepped down on February 11, 2011, power passed to a military body called the Supreme Council of the Armed Forces (SCAF), which was determined to protect its interests and stop the revolution in its tracks. In The Battle for Egypt: Dispatches from the Revolution (2011), an enthralling account of the eighteen days of protests that led to Mubarak’s fall, originally published on The New York Review’s website, Yasmine El Rashidi captures the sense of foreboding that took hold as the SCAF tightened its grip. “Everyone I have spoken to over the past few days is concerned about the current situation,” she wrote on February 23, 2011:

There is general unease about the army and its growing power. We have become accustomed to tanks rolling through our streets; most of the soldiers are young, and in many ways just like us. But while the military leadership has arrested former business leaders and ministers, and corruption cases are now being reviewed, it is also becoming much more assertive about curfews, and activists have been alarmed by reports that people detained during the revolt were tortured.

About a week after Mubarak stepped down, two young protest leaders, Ahmed Maher, the cofounder of the April 6 Youth Movement, and Wael Ghonim, were taken to meet Sisi, then head of military intelligence. As Maher recalled when I met him in Cairo in February, Sisi told him:

“You are heroes, you did miracles, you brought down Mubarak, you did something we failed to do for years, but now we need you to stop demonstrating.” I told him, “The revolution is not complete. We need to change the structure of the government.” I met Sisi three times after that, and he said the same thing: “We need to be united, stop demonstrating.” Sisi hated the protests.

As street battles continued between security forces and protesters, resulting in hundreds of deaths, SCAF searched for a way to end the impasse. The challenge facing the generals was to appear to bow to popular pressure without sacrificing their power. “The military needed a political settlement that combined procedural democracy—the Egyptian people would clearly not be sated by anything less—with practical autocracy,” Shenker argues, “and to that end they needed a new partner in the ruling enterprise. That partner was the Muslim Brotherhood.”

Other close observers of the jockeying for power after Mubarak’s fall, including El Rashidi, have argued that the SCAF was simply bowing to the inevitable: the Muslim Brotherhood was by far the best organized political movement outside the fallen regime and was particularly popular in rural Egypt, largely because of its extensive network of charities and the spread of conservative Islam. According to this reading, military leaders saw little to be gained by actively opposing it. “Military leaders view the Brotherhood as the devil they know,” El Rashidi wrote at the time about the March nationwide referendum that led to parliamentary elections; “even in the event of a large Islamist representation in parliament, they would understand what they were getting and how to deal with it.”

As Shenker presents it, however, a behind-the-scenes bargain was struck that seemed to offer both sides advantages: the Muslim Brotherhood would let the military keep its assets and control the crucial ministries of Interior and Defense. The generals would cede to the Muslim Brotherhood day-to-day governance and allow it to write a new constitution. Yet the Morsi government lasted barely a year before Sisi overthrew it, jailed Morsi, and began reconstituting the police state.

Why did the revolution fail? In the four years since the military coup, journalists and historians have offered a number of explanations. According to some, the military cabal set out to sabotage the elected government from the start, blocking fuel supplies and creating electricity shortages to undermine popular support. Shenker places the blame squarely on the Brotherhood. “Once he had the tools of the authoritarian state at his disposal, Morsi turned upon the revolution,” he argues, “breaking strikes, beating protesters,…defending the security apparatus against popular demands for reform.”

The essays collected in Egypt and the Contradictions of Liberalism: Illiberal Intelligentsia and the Future of Egyptian Democracy, edited by Dalia F. Fahmy and Daanish Faruqi, single out a different culprit: the country’s liberal elite. In an essay about the Muslim Brotherhood, Mohamad Elmasry, an Egyptian-American analyst of Arab media, argues that Morsi was set up as a bogeyman by secular democrats who had initially embraced his electoral victory as expressing the will of the people but subsequently recoiled from his Islamist vision.

In late 2012, Morsi was engaged in a battle with Mubarak-appointed judges, who had already dissolved parliament and were threatening to break up the constitutional assembly and reverse Morsi’s decree keeping the military out of politics. Morsi issued a controversial new edict granting himself, for a limited period, sweeping powers and shielding his decisions from judicial oversight. That same day, the opposition leader Mohamed ElBaradei tweeted: “Morsi today usurped all state powers and appointed himself Egypt’s new pharaoh.” Tens of thousands gathered outside the presidential palace demanding that he withdraw the order, and violent clashes broke out between anti-Morsi and pro-Morsi factions. Elmasry argues:

The decree’s negative ramifications were grossly exaggerated in the Egyptian media and political circles. Disagreeing with Morsi’s decree—which was mishandled on a number of levels—was politically legitimate. Claiming that Morsi had turned into a dictator, however, represented a gross exaggeration, and fed an already existing myth about the Muslim Brotherhood’s alleged dictatorial, anti-democratic fantasies.

Some of the country’s leading secular democrats joined Tamarod, a grassroots campaign—allegedly orchestrated by the military—that collected millions of signatures in an effort to force early elections and drive Morsi from office. In the aftermath of Sisi’s seizure of power, Faruqi and Fahmy note in their introductory essay, prominent liberals lined up behind him. Alaa al-Aswany, the popular novelist who had taken part in the protests in Tahrir Square, praised the general as a “national hero”; Saad Eddin Ibrahim, one of the Arab world’s most respected pro-democracy reformers, lent “his enthusiastic support to the overthrow of Morsi, going so far as to support then General Sisi’s presidential ambitions”; and the respected journalist Ibrahim Eissa, a “champion of liberal values,” transformed himself into a “political reactionary” who applauded “the arrest of the April 6th Youth Movement founder Ahmed Maher, questioning the movement’s patriotism.” Maher would end up spending three years in the notorious Tora Prison, mostly in solitary confinement.

“There is little doubt that Egypt’s intelligentsia betrayed the revolution that they claimed to celebrate and support,” writes Khaled Abou El Fadl, a scholar of Islam at UCLA, in a harsh polemic, “Egypt’s Secularized Intelligentsia and the Guardians of Truth.” What they got instead was a police state far worse than any previous regime. Shenker writes:

In an effort to shut down Revolution Country, the state pressed Egyptians to turn in on themselves. A microbus passenger turned provocateur spoke of rebellion on a journey; when a fellow traveller agreed with her criticisms of Sisi, she hauled him off the bus and denounced him as a terrorist to the security forces. Schoolchildren were detained for sporting potentially seditious stickers on their pencil cases. A man who named his donkey “Sisi” was thrown into prison.

Today Egypt’s former revolutionaries are quiet, dispirited, and fearful. During two visits to Cairo in November 2016 and February 2017, I tracked down a dozen members of the April 6 Youth Movement, which a judge outlawed in 2014. Most had spent time in jail during the last four years. They were among the lucky ones: other members were still serving prison terms of up to twenty years, convicted by pro-Sisi judges and prosecutors of a raft of trumped-up offenses including assault, blocking roads, and “thuggery,” a catchall term for troublemaking introduced by the SCAF in 2011. Ahmed Maher was now under around-the-clock surveillance and, according to the terms of his release, was obliged to spend every night for the next three years at a local police station. “Even when I was in prison I had more freedom than I have now to criticize the regime,” he told me. He had frequently smuggled out eloquent critiques of the Sisi dictatorship, published in the Egyptian media and in The Washington Post and The Huffington Post, and sharp denunciations of the conditions at Tora. “I have to be very careful now, I don’t want to end up in prison again.”

Nearly everyone I talked to in Egypt believed that Sisi’s authoritarianism would only breed more violence and terror. One unseasonably cold afternoon in February, I visited an old acquaintance, Gamal Eid, a lawyer and the head of the Arabic Network for Human Rights Information, in his office in Maadi, near the Nile. Eid has defended many political prisoners in recent years, including the prominent photojournalist Mahmoud Abu Zeid, known as Shawkan, who was arrested while covering the August 2013 crackdown on Muslim Brotherhood protesters at Rabaa al-Adawiya. Charged with murder, Shawkan has been sitting in prison, awaiting trial, for nearly 1,400 days. “The general prosecutor can extend detention as long as he wants. It’s outside the law,” Eid told me. “Many times we find a person after a few months, [held] in a secret prison. It often means that he was kidnapped, tortured.”

Khaled Dawoud, a prominent journalist and leader of a small liberal opposition party, is among many in Egypt’s intelligentsia who supported Sisi’s removal of Morsi—he still refuses to call it a “military coup.” But he believes that Sisi’s position is more fragile than it appears. In Dawoud’s view, the dictator has staked his legitimacy on effectively fighting terror and turning around an economy that collapsed after the 2011 revolution; he has failed on both counts. The economy remains stagnant, with tourism down, inflation high, and huge, failing infrastructure projects such as a $9 billion expansion of the Suez Canal sucking up the country’s hard currency. Meanwhile, Sisi’s repression, Dawoud argues, has done little but foment anger. The Internet, he said, was the only free space left, “and they are chasing us there. People have been arrested for administering Facebook pages.”

When I talked to him in February, Dawoud predicted more violence and extremism in the months to come. “Libya is in shambles, and hundreds of fighters are coming back here intent on blowing things up,” he told me. “Egyptians who go to Syria are coming back to Egypt, having learned [to make bombs], and they’re screwing us. How can you solve this? By giving people political space.” Sisi has shown no inclination to do that, however, and with a new friend in the White House, he seems likely instead to shrink this space even further.

—May 10, 2017


Saul Steinberg’s View of the World


Saul Steinberg: New York Moonlight, 1974–1981 (The Saul Steinberg Foundation)

Like many children of the 1970s, I first encountered Saul Steinberg’s drawings on the cover of The New Yorker. Or, to be more precise, I first saw printed reproductions of his drawings on New Yorker covers plastered all over the walls of my family’s bathroom in Omaha, Nebraska. Like many bathrooms of the era, ours had become a do-it-yourself decorating project for my mother, for which New Yorkers—and, apparently, reproductions of nineteenth-century Sears-Roebuck catalog pages—were deemed de rigueur sometime during the years of the Ford administration. I would spend extended sessions puzzling over the pictures, which towered not only above my child-sized perspective, but also beyond the limits of my understanding. (I think my mother put the antique whalebone corset and uterine syringe advertisements near the ceiling for a reason.)

But it was the “View of the World from Omaha, Nebraska” poster framed in our den that most fascinated me. Its title, typeset in the legitimizing New Yorker font, and its curious, childlike cartoon map of familiar downtown buildings disappearing into a pastureland of distant pimples labeled with names like “Pittsburgh,” “Philadelphia,” and “New York” before rolling off into the ocean absolutely captivated me with the idea that I could be living in such an important city as Omaha—especially given that The New Yorker had seen fit to highlight the fact on a sheet of paper four times the usual size of the magazine. After all, Nebraska is more or less traditionally considered the geographic center of the United States—and is actually labeled as such in the real View of the World from 9th Avenue, drawn by Steinberg, which appeared on the March 29, 1976, cover of The New Yorker. The original did not, unfortunately, appear on our bathroom wall, so when I first saw the genuine image years later as a teenager, I still felt a lingering security within its strange loop of place-time—even if only then was I getting the actual joke.


Saul Steinberg: Riverfront, 1969 (The Saul Steinberg Foundation)

Historically speaking, View of the World from 9th Avenue was a cartoon nuclear reaction, smashing together what New York thought of itself with what the world thought of New York, all on the cover of The New Yorker itself. It spawned countless city-centered rip-offs that spiraled their particle trails through 1970s dens across the nation, including mine. To this day it remains the magazine’s most famous cover not featuring its unofficial mascot, Eustace Tilley. Yet the thieving of Steinberg’s easily thieved premise rankled him for the rest of his life, the most visible sign of his success legitimizing yet also blurring the importance of his contributions to cartooning, to say nothing of twentieth-century art. A new exhibition at the Art Institute of Chicago, “Along the Lines: Selected Drawings by Saul Steinberg,” gives some sense of his electrifying work.

As a cartoonist myself, I am dismayed that there’s little in the show I can steal, the crossover in the Venn diagram of the image-as-itself versus as-what-it-represents being depressingly slim. I am painfully aware that in comics, stories generally kill the image. But Steinberg’s images grow and even live on the page; somewhere in the viewing of a Steinberg drawing the reader follows not only his line, but also his line of thought. Describing himself as “a writer who draws,” Steinberg could just as easily be considered an artist who wrote; as my fellow cartoonist Lynda Barry puts it, his “drawing went not from his mind to his hand but rather from his hand to his mind.” Or as Steinberg himself declared at the beginning of a 1968 television interview, “[my hand explains] to myself what goes on in my mind.”


Saul Steinberg: The Museum, 1972 (The Saul Steinberg Foundation)

One can’t overstate the importance of Steinberg’s working for reproduction, of his creating drawings to be disseminated to the mailboxes, laps, and, I guess, bathroom walls, of receptive readers and not, at least initially, to museum walls. The Museum turns on an eminently Steinbergian tool—the rubber stamp—and, as a lithograph, manipulates the idea of reproduction while pictorially lampooning and dissembling it. Identical figures are plunked out to represent visitors and viewers of (what else?) official stamps of approval; over the museum’s horizon, stamps rise like suns, the entire composition grounded and buttressed by illegible signatures and, of course, more stamps. Given Steinberg’s early life as a visa-seeking émigré, his fascination with legal seals is easily understandable. Riverfront and Certified Landscape pivot on the objectively ridiculous but fundamentally necessary imprimatur of government made corporeal, territorially imprinted as a skein of walls and fences. Steinberg quietly added his own signature directly into the rather unaccommodating landscapes—are they farms, factories, or concentration camps?—rather than putting it in the traditional antiseptic nonspace outside the pictorial “border.” But in The Museum, Steinberg bundles the stamp’s sanctioning power and aesthetics into the frame of the art itself, stamping his own authorizing red imprimatur in that expected nonspace outside the image, along with his signature (legible, one notes) and, as a digestif, a blind stamp (a stamp without ink, visible by the impression it leaves on the page), just to snuff out any lingering doubt about the drawing’s authenticity and, by proxy, the artist’s own legitimacy.


Saul Steinberg: Untitled (Rush Hour), circa 1969 (The Saul Steinberg Foundation)

Even a seemingly dashed-off stamp-and-doodle drawing such as Untitled (Rush Hour) rewards the viewer with a fizz of epiphany: all of the figures and cars are made from impressions of the same four rubber stamps, so that the flow of the urban workforce is made clear only in relation to the perspective of the building into which they rush and from which they leave, and all this is captured graphically with the very clerical tools that grant the city its life. Even the seemingly random zig-zag gestures of the stamped taxicabs’ bumpers synaesthetically combine to create the sound of traffic in the reader’s eye. Konak and Untitled (Table Still Life with Envelopes) are similarly constructed around office ephemera—an official invoice, a postal envelope—but within the deliberate strictures of Analytical Cubism. For Steinberg, Cubism wasn’t only a metaphysical investigation but an immigrant’s observation: “As soon as I arrived in New York, one of the things that immediately struck me was the great influence of Cubism on American architecture…the Chrysler Building, the Empire State Building, jukeboxes, cafeterias, shops, women’s dresses and hairdos, men’s neckties—everything was created out of Cubist elements.” New York Moonlight appears observed by alien eyes, the spiky Chrysler Building looking more like an Aztec totem or butterfly genitalia than a skyscraper. Steinberg does not resort to the cliché of lit windows stretching into the sky; instead, his buildings sink into the horizon, not so much looking like Manhattan in the moonlight as feeling like the metallic, acidic impression of wet moonlit pavement.


Saul Steinberg: The South, 1955 (The Saul Steinberg Foundation)

Sometime in the 1970s, Steinberg’s work took a turn for the observed, typified in the Art Institute’s collection by the lovely Breakfast Still Life. Steinberg’s wife, the artist Hedda Sterne, criticized this “realistic” direction, but Breakfast Still Life is hardly realistic, with its pencil purples and greens cast against the usual metaphysical Steinberg white, capturing in reverse-thermal snapshot the stuff of the artist’s morning—black coffee, bread, cornflakes, butter, jam, Chianti bottle, a newspaper—which Steinberg sets up in alienating opposition to the tableau most humans seek as a daily reassurance. Seemingly finding it freeing to leave the artificial atmosphere of his earlier work and return to the pleasure of observed drawing, Steinberg remarked, “in drawing from life I am no longer the protagonist, I become a kind of servant, a second-class character.”


Saul Steinberg: Breakfast Still Life, circa 1974 (The Saul Steinberg Foundation)

Of all the drawings in the Art Institute exhibition, The South stands out for the simple genius of its rough construction. As our gaze passes over it, moving from right to left (and we have no choice, as the rightmost word BOOKS is the first thing we see—Steinberg knew that one always reads before one sees), the stuffed toy and guitar in the bookstore’s window plant the first seeds of suspicion. What sort of bookstore sells toys? This prompts further investigation across darkened shops and postbellum buildings, ending at a Confederate monument and a courthouse before one is dumped into a confused, crosshatched tangle of black vegetation. In a single drawing, Steinberg has “read” a southern town and taken the reader backward in time and space to the mechanisms and history behind it—all without depicting directly what the South itself was trying to conceal: the legacy of slavery. Not that he was averse to more direct tactics: later works make free use of a disturbing Day of the Dead–like Mickey Mouse–type character, which Steinberg considered inherently racist: “Mickey Mouse was black…half-human, comic, even in the physical way he was represented with big white eyes.”

Steinberg’s later work adopts an increasingly dyspeptic view of the nation in which he had taken up residence. Untitled (Citibank) and Untitled (Fast Food) are prescient condemnations of corporate America and the ketchup-and-mustard trickle-down effect of prioritizing appetite over ethics. The artist pulled no punches on this subject, lamenting, “Gastronomy in America, the restaurant, the taste of the nation are governed by the tastes of children.” Like hundreds of Steinberg’s drawings, these two employ a shot-from-the-hip, up-skirt, underfoot perspective of an outsize world: huge legs, skyscraper tops, big shoes. His friend and fellow New Yorker writer Ian Frazier noted in a posthumous reminiscence that Steinberg said “he always tried to draw like a child…the goal was to draw like a child who never stopped drawing that way even as he aged and his subject matter became not childish.”


Saul Steinberg: Untitled (Citibank), 1986 (The Saul Steinberg Foundation)

Really, if one thinks about it, it’s a child’s perspective that grants View of the World from 9th Avenue its power. Ironically, it’s also what most appealed to me as a child, even in the knock-off “Omaha” version I initially encountered. As embarrassing as it is to admit now, growing up in those Reagan years I enjoyed a cultivated blindness to America’s place in our post-war planet, and I think it’s fair to say that I was not alone in this if the television programs of the era are any indication.

Steinberg knew that we are all the functional centers of our own universes. Beginning with an airless blank of empty white, every time Steinberg set his pen to paper, a cosmos exploded through the mnemonic mimesis of his line; not surprisingly, all the works in this exhibition also act in some way as universes unto themselves. While the artist may have preferred, at least early in his career, to see his work in reproduction first and in memory second (which is, really, how we spend the majority of our time with those works of art that most surprise us: thinking about them), each of these drawings also offers a single, signature proof that yes, Saul Steinberg the person really at one point did exist, and, most importantly, that he offered us a view of the world that was both comically unique yet disquietingly universal.


Saul Steinberg: Untitled (Table Still Life with Envelopes), 1975 (The Saul Steinberg Foundation)

Adapted from Chris Ware’s essay in the catalog for “Along the Lines: Selected Drawings by Saul Steinberg,” on view at the Art Institute of Chicago from May 27 to October 29. 


Trump: The Presidency in Peril

If Donald Trump leaves office before four years are up, history will likely show the middle weeks of May 2017 as the turning point. Chief among his mounting problems are new revelations surrounding the question of whether Trump and his campaign colluded with Russia in its effort to tip the 2016 election. If Trump has nothing to hide, he is certainly jumpy whenever the subject comes up, and his evident worry about it has caused him to make some big mistakes. The president’s troubles will continue to grow as the investigators keep on investigating and the increasingly appalled leakers keep on leaking.

Donald Trump
Donald Trump; drawing by Pancho

Two especially damaging disclosures occurred on Friday, May 19, the day Trump departed on his first foreign trip. That afternoon, while Air Force One was in the air, The Washington Post broke an ominous story that law enforcement investigators had under scrutiny a “person of interest” on the White House staff, described as “close to the president.” No longer was the focus on a small number of people at some distance from Trump, such as his former campaign chairman Paul Manafort, longtime adviser and political troublemaker Roger Stone, or Carter Page, briefly a foreign policy adviser to the campaign. The indications are that the “person of interest” is Jared Kushner, the president’s son-in-law.

Though younger and more composed, Kushner is a lot more like Trump than is generally understood. Both of them moved their fathers’ businesses from the New York periphery to Manhattan. Like his father-in-law, Kushner came to Washington knowing a lot about real estate deals but almost nothing about government. Both entered the campaign and the White House unfamiliar with the rules and laws and evidently disinclined to check them before acting. Thus, Kushner has reinforced some of Trump’s critical weaknesses. Trump has thrust project after project upon him (the only top aide he could trust), and Kushner, who has a high self-regard, has taken on a preposterous list of assignments. He was able somehow (likely through his own leaks) to gain a reputation—along with his wife, Ivanka Trump—as someone who could keep the president calm and prevent him from acting impulsively or unwisely.

In the days before Trump’s foreign trip, however, others on the White House staff, by now not fans of Kushner, leaked that he had encouraged Trump to make the shortsighted decision in early May to fire FBI Director James Comey. By getting rid of the man who was overseeing the investigation into the Trump campaign’s relationship with the Russian government, the president stirred widespread outrage and reinforced suspicions that he had something to hide. (Richard Nixon, who was a lot smarter than Trump is, similarly misread the way the public would react when he arranged for the firing of his special prosecutor, Archibald Cox.) One concrete and dangerous result was that Trump was quickly confronted with something worse: a special counsel—Robert Mueller, Comey’s predecessor as FBI director—who is respected by both parties and, unlike Comey, can focus on this one assignment and will be much harder to fire.

But the widely applauded decision to name a special counsel won’t resolve some momentous matters raised by the Russia affair. Mueller’s investigation is limited to considering criminal acts. His purview doesn’t include determining whether Trump should be held to account for serious noncriminal misdeeds he or his associates may have committed with regard to his election, or violations of his constitutional duties as president. The point that largely got lost in the excitement over the appointment is that there are presidential actions that aren’t crimes but that can constitute impeachable offenses, which the Constitution defines as “treason, bribery, or other high crimes and misdemeanors.”

When it was considering the impeachment of Richard Nixon, the House Judiciary Committee concluded that “high crimes” meant something broader than offenses listed in the criminal code. The concept of impeachment was largely lifted by the Founders from English law, which Edmund Burke explained to Parliament meant that “statesmen, who abuse their power” will be accused and tried by fellow statesmen “not upon the niceties of a narrow jurisprudence, but upon the enlarged and solid principles of state morality.”1

Among the crimes that the Watergate defendants were convicted of and that might be applicable to the more recent misadventure are bribery, subornation of perjury, criminal obstruction of justice, money laundering, tax evasion, witness tampering, and violations of election laws including campaign finance laws. Other crimes that might have occurred in the Russia affair are violations of the foreign agent registration laws and the Foreign Corrupt Practices Act, perjury itself (including lying to federal investigators), plus espionage and even treason.

Unlike ordinary crimes, impeachable offenses are “political” questions—ones that deeply affect the polity. Alexander Hamilton said that impeachable offenses were political, “as they relate chiefly to injuries done immediately to the society itself.” For example, of the three articles of impeachment adopted by the Judiciary Committee against Richard Nixon in 1974, the most important was for “abuse of power.” The critical holding by the committee was that a president can be held accountable for the acts of subordinates as well as for actions that aren’t, strictly speaking, crimes. In the end, an impeachment of a president is grounded in the theory that the holder of that office has failed to fulfill his responsibility, set out in Article II of the Constitution, to “take care that the laws be faithfully executed.” Unless a single act is itself sufficiently grave to warrant impeachment—for example, treason—a pattern of behavior needs to be found. That could involve, for example, emoluments or obstruction of justice.

This concept of accountability is critical to preventing a president from setting a tone in the White House, or dropping hints that can’t be traced, that lead to a pattern of acts by his aides that amount to, as in the case of Watergate, a violation of constitutional government. Many of what seemed disparate acts—well beyond the famous break-in in the Watergate complex and the cover-up—were carried out in order to assure Nixon’s reelection in 1972, and they amounted to the party in power interfering with the nominating process of the opposition party. That way lay fascism.

Similarly, in the case of the Russia affair, even if the president’s fingerprints aren’t found on any single act, misdeeds committed by Trump’s aides and close associates could amount to an impeachable offense on the part of the president. By definition, impeachable offenses would appear to concern conduct only during a presidency. But a number of constitutional law scholars, including the Harvard Law professor Laurence Tribe, who was dubious at first, believe that if a president or his associates working on his behalf acted corruptly and secretly to rig the election, then the preinaugural period should be included.

Michael T. Flynn
Michael T. Flynn; drawing by James Ferguson

Mike Flynn, Trump’s former campaign adviser and dismissed national security adviser, is obviously a problem for the president, who has acted toward him in a most bizarre way. Trump ignored the warnings of Obama and Chris Christie not to hire Flynn. Then he resisted firing him even though, six days after the inauguration, then-acting attorney general Sally Yates warned the White House that Flynn had been “compromised” by Russia, and that Flynn had lied to Pence about his conversations with the Russian ambassador, Sergey Kislyak, in late December 2016. Yates also alluded to what she called Flynn’s “underlying conduct.”

Trump asked for Flynn’s resignation only on February 13, after stories about Yates’s warning appeared in the press—and then, two days after he fired him, the president called Flynn “a wonderful man.” Ignoring admonitions not to be in touch with someone under investigation, Trump has stayed in contact with Flynn and, weirdly, recently told aides that he’d like to have him back in the White House. Trump’s conduct has the unmistakable ring of a man concerned about what the other man has on him.

More recently, the McClatchy news organization reported that Flynn, in conversations with outgoing national security adviser Susan Rice during the transition, asked that the Obama administration hold off on its plan to arm Kurdish forces to help the effort to retake Raqqa, the ISIS capital in Syria. Since Flynn was a paid lobbyist for the Turkish government, which strongly opposed the plan, this action could possibly lead to a charge of treason.

In late May, it was reported that Flynn had told Kislyak that it would be preferable if Russia didn’t retaliate against sanctions imposed by the Obama administration in response to Russia’s meddling in the election. Flynn was leading the Russians to believe that they’d receive much better treatment under a President Trump, and the Russians went along. (They’ve been disappointed because once Russia’s behavior in the election became known it was clear that Congress wouldn’t allow Trump to lift the sanctions.) A big question is whether Flynn discussed such important policy matters with the Russians without the knowledge of the president-elect. Once it became clear that Russia wasn’t retaliating, Trump tweeted: “Great move on delay (by V. Putin)—I always knew he was very smart!”

Another major question is how far the Russians got in recruiting allies in the Trump campaign. Recently, former CIA director John Brennan testified that last summer he’d become concerned about the number of contacts between Russians and people involved in the campaign, so much so that he told a bipartisan group of congressional leaders, including House Speaker Paul Ryan and Senate Majority Leader Mitch McConnell, neither of whom has yet shown any sign of being perturbed. (But they are people to watch closely for any sign of movement away from Trump.)

Brennan said he was worried that the Russians may even have recruited some Americans to cooperate with their effort to tilt the election. Intelligence analysts picked up conversations by Russians in which they bragged that they’d cultivated Flynn and Manafort and believed they would be useful for influencing Trump. (This doesn’t prove guilt on the part of either man.) According to CNN, some Obama administration officials viewed Flynn as a security risk.

While Mueller’s investigation could preempt some congressional inquiries, it still leaves them important work to do. It doesn’t fall to the special counsel to consider the enormous and pressing question of how to prevent a foreign power from interfering in our elections again. It’s up to Congress to determine what new laws to write to deal with that. Conflicts are likely to arise between what Mueller says he needs by way of secrecy and not subjecting witnesses to self-incrimination, and the committees’ desire to remain involved; these will have to be negotiated.

Laurence Tribe is gathering what he believes are impeachable offenses committed by Trump.2 Going back to the first days of the Trump presidency and continuing up to the present, Tribe sees Trump flouting the constitutional ban on accepting “emoluments”—payments by foreign governments that might compromise the president’s presumably undivided commitment to US interests. Examples include accepting money paid by foreign governments to Trump’s luxury hotel just down the street from the White House in order to curry favor with its owner, and Trump’s failure to cut himself off from ownership of a business that has projects all over the world.

Also, Trump may be held to have attempted to impede the FBI’s Russia investigation. In addition to his request to Comey that he “let…go” his investigation of Flynn, this could include Trump’s firing of Comey for, as he ultimately admitted, “this Russia thing.” Or Trump’s saying to Russia’s foreign minister Sergey Lavrov and to Ambassador Kislyak, of firing Comey: “I faced great pressure because of Russia. That’s taken off.” Collectively, these acts could amount to the impeachable offense of covering up other potential, substantive misdeeds. There were also Trump’s efforts very early in the administration to get Comey to pledge “loyalty” to him (Comey dodged, saying he’d give him his “honesty”). In another form of pressure, Trump asked Comey when the FBI would announce that he wasn’t under investigation. Comey didn’t respond.

When it was revealed that Comey had taken notes of their conversations, there came Trump’s not-very-veiled threat that Comey “better hope that there are no ‘tapes’ of our conversations.” Whether this was a feint or Trump had actually taped some conversations is as yet unknown, but by now Trump’s habitual lying has put him in a difficult spot when it is his word against Comey’s—or pretty much anyone’s. Whether or not Trump has recognized it—after all, he deals in threats—the revelation that Comey had notes of Trump asking him to drop the Flynn investigation was a clear sign that Comey wasn’t going to simply go away.

Where are all the leaks coming from? Many Republicans want to make this the issue rather than what the leaks reveal, but the fact that they keep coming is a sign of the state of near collapse of the White House staff. It’s not an exaggeration to say that Trump has the most unhappy staff ever, with some feeling a higher duty to warn the public about what they see as a danger to the country.

From the stories that emanate from 1600 Pennsylvania Avenue the impression one gets is that Trump is a nearly impossible person to work for: he screams at his staff when they tell him something he doesn’t want to hear; he screams at them as he watches television news for hours on end and sees stories about himself that he doesn’t like, which is most of them. Some White House staff are polishing their résumés. Leaks are also being made by the intelligence community, many of whom see Trump as a national menace.

People who have been to the Oval Office have come away stunned by Trump’s minimal attention span, his appalling lack of information, his tendency to say more than he knows. (Intelligence officials have been instructed to put as much of his daily briefing as possible in the form of pictures.) Aides have been subjected to public embarrassment by his propensity for changing his story.

Trump sullies the reputation of people who have signed on with him. The respected general H.R. McMaster, now the national security adviser, humiliated himself by trying—presumably under orders—to combat the Washington Post story on May 15 that Trump had revealed highly classified intelligence about ISIS to Lavrov and Kislyak. What made this even worse was that the intelligence had been passed on to the US by Israel under a strict international concordat that classified information shared between allies is not to be revealed to anyone else. McMaster has yet to recover his reputation from having emphatically denied things the Post story didn’t say. Over and over, McMaster characterized the president’s passing along to the Russian officials the most sensitive information as “wholly appropriate.”

Trump’s reckless act is believed to have endangered the life of an Israeli intelligence asset who had been planted among ISIS forces, something extremely hard to pull off. Trump’s mishandling of the intelligence provoked dismay in Washington. During his visit to Jerusalem on May 22, Trump claimed that the press stories about it were wrong because he hadn’t mentioned Israel; but the reports didn’t say he did.

That same day, The Washington Post disclosed that Trump had asked the heads of two major intelligence agencies to announce that there had been no collusion between his campaign and Russia. Both declined. Some Trump defenders will argue that he didn’t know enough to understand that he shouldn’t have made those calls, or to try to get Comey to back off investigating Flynn—what might be called the ignorance defense. But while ignorance of the facts might be an acceptable defense in criminal or impeachment proceedings, ignorance of the law isn’t.

The particular challenges of serving in the Trump administration have led some people to make compromises that outsiders are prone to judge. In very short order, the same person can be almost rapturously admired as a hero and then scorned as a coward and a loser. Consider Rod Rosenstein, a career government prosecutor with a reputation for integrity who became deputy attorney general in April. Within a couple of weeks Rosenstein found himself summoned to a meeting with Trump and Attorney General Jeff Sessions (who had supposedly recused himself from any dealings on the campaign and the Russia matter) and asked to write a memo expressing his own strong negative views of how Comey had handled Hillary Clinton’s e-mail case. The choices before Rosenstein were to write the report, knowing that Comey was going to be fired anyway, or refuse to and resign or be fired. Then what use could he be?

Trump had reportedly thought that Democrats, still unhappy over Clinton’s loss, would be pleased with his firing of Comey if his rationale was Comey’s handling of her case. But that made no sense; the timing was inexplicable; Democrats were incredulous that Trump was now suddenly sympathetic to Clinton. While Trump was within his legal rights to fire Comey, his doing so risked politicizing the FBI and set a terrible precedent.

Now Rosenstein was the scapegoat. But despite numerous Democrats’ harsh condemnation of it, Rosenstein’s memo reads as if it had been written by any number of the Democrats or experienced prosecutors appalled by Comey’s behavior in the Clinton case. The memo set forth views widely expressed at the time that Comey had made a number of prosecutorial misjudgments. These included his tough public comments about Clinton’s handling of classified material even though he said there weren’t grounds for prosecuting her—this isn’t done—and his letter to Republican committee chairmen, which he had to know would be made public, eleven days before the election, saying that the inquiry into her handling of classified e-mails was being reopened, breaking a long-standing rule that prosecutors don’t comment on the status of continuing cases.

Comey’s problem was that in trying to protect his reputation he kept doing things that further damaged it. In his testimony before the Senate Judiciary Committee on May 3 he spoke melodramatically of his anguish in having to decide between two choices: to “speak” or to “conceal.” But many observers believed that he had a third choice: quietly to get a warrant and check out some of the e-mails that had traveled from Clinton’s laptop to her close aide Huma Abedin’s to that of Abedin’s then-husband Anthony Weiner before reopening an investigation, much less announcing one and perhaps affecting the outcome of the election. Comey’s testimony also angered Democrats by wildly exaggerating the number of Clinton’s e-mails that had landed on Weiner’s laptop—“hundreds and thousands,” he said, when actually there had been just a handful. Comey’s comment that the thought that his actions may have affected the election made him “mildly nauseous” enraged Trump.

Trump summoned Sessions and Rosenstein and demanded the report on Comey. Rosenstein was at the least naive if he didn’t understand that his report would be used as the rationale for the firing, but when that ensued, drawing intense criticism of him, he indicated he might quit. That Trump changed his story two days later, now saying that when he fired Comey he was thinking about “this Russia thing,” showed how exasperating and even damaging it could be to work for him. Everyone who hewed to the White House line that the firing had been based on Rosenstein’s memo, including Pence, was now embarrassed and lost credibility with the press and the public. And then Rosenstein was the hero again when just over a week later he appointed Mueller as special counsel.

The survival of Trump’s presidency may depend most of all on congressional Republicans. Unless the Democrats take both chambers in the midterms, the Republicans will decide his fate. At what point might their patience with Trump be exhausted? How will they respond if high presidential associates or even the president himself are indicted and he chooses to fight it out rather than resign? Is it possible that a Congress in which the Republicans control both or even one chamber would consider impeaching Trump? The impeachment proceedings against Nixon were accepted by the country because they were bipartisan and considered fair. Too many different unknowns are in play to predict the outcome of the midterms, though the respected Cook Report anticipates substantial Republican losses in the House. Republicans are starting to panic.

Their challenge is how to overcome the twin blights of Trump’s chaotic governing and his lack of achievements on Capitol Hill (the exception is the confirmation of the very conservative Neil Gorsuch to the Supreme Court). Trump’s sole substantive accomplishment thus far is the House’s approval of a health care overhaul that required all but a few of them to vote to throw tens of millions of people off of health insurance. (It was followed by a grand celebration at the White House.)

The Republicans are in a bit of a spot: they don’t particularly like Trump and to them he’s an interloper. One reason many of them, especially Ryan, allied themselves with Trump was that they thought he would get their programs, especially tax cuts, through Congress, but prospects for major legislation are receding. And there’s no reason to think that a President Mike Pence wouldn’t back the same programs.

The problem with much of the predicting about what will or might happen in Washington is that it proceeds from an assumption of stasis—as if things won’t happen that could change the politicians’ calculations. When it comes to how long Trump will remain in office, one possibility often discussed is that things might get so bad for him that he decides to return to his much easier life in New York. But he insists that he’s not “a quitter.” (There’s also a question about the corpulent Trump’s health, but that’s not considered a proper topic of conversation.)

Politicians are pragmatists. Republican leaders urged Nixon to leave office rather than have to vote on his impeachment. Similarly, it’s possible that when Trump becomes too politically expensive for them, the current Republicans might be ready to dump him by one means or another. But the Republicans of today are quite different from those in the early 1970s: there are few moderates now and the party is the prisoner of conservative forces that didn’t exist in Nixon’s day.

Trump, like Nixon, depends on the strength of his core supporters, but unlike Nixon, he can also make use of social media, Fox News, and friendly talk shows to keep them loyal. Cracking Trump’s base could be a lot harder than watching Nixon’s diminish as he appeared increasingly like a cornered rat, perspiring as he tried to talk his way out of trouble (“I am not a crook”) or firing his most loyal aides as if that would fix the situation. Moreover, Trump is, for all his deep flaws, in some ways a cannier politician than Nixon; he knows how to lie to his people to keep them behind him.

The critical question is: When, if ever, will Trump’s voters realize that he isn’t delivering on his promises, that his health care and tax proposals will help the wealthy at their expense, that he isn’t producing the jobs he claims? His proposed budget would slash numerous domestic programs, such as food stamps, that his supporters have relied on heavily. (One wonders if he’s aware of this part of his constituency.)

People can have a hard time recognizing that they’ve been conned. And Trump is skilled at flimflam, creating illusions. But how long can he keep blaming his failures to deliver on others—Democrats, the “dishonest media,” the Washington “swamp”? None of this is knowable yet. What is knowable is that an increasingly agitated Donald Trump’s hold on the presidency is beginning to slip.

—May 25, 2017

  1. Discussed in my book Washington Journal: Reporting Watergate and Richard Nixon’s Downfall (Overlook, 2014).

  2. See Laurence H. Tribe, “Trump Must Be Impeached. Here’s Why,” The Washington Post, May 13, 2017.


What Gets Called ‘Civil War’?


Benjamin West: Death on the Pale Horse, 1817 (Pennsylvania Academy of the Fine Arts, Philadelphia)

The end of the world is on view in Philadelphia. Hurtling across a twenty-five-foot-wide canvas in the Pennsylvania Academy of the Fine Arts are the Four Horsemen of the Apocalypse. Together, Death, Pestilence, Famine, and War ravage the earth amid blood-red banners and what looks like cannon smoke. Warriors fall before their swords and spears, and women, children, and babies are slaughtered.

Benjamin West completed this version of Death on the Pale Horse in 1817, two years after the Battle of Waterloo. It is tempting therefore to see in the painting not only the influence of the book of Revelation, and perhaps the elderly West’s intimations of his own imminent mortality, but also a retrospective verdict on the terrible catalog of death and destruction that had been the Napoleonic Wars. Yet West’s original inspiration seems to have been another conflict. He first sketched out his ideas for Death on the Pale Horse in 1783, the concluding year of the American War of Independence. Bitterly divisive on both sides of the Atlantic, the war imposed strains on West himself. Pennsylvanian born and bred, he was a supporter of American resistance.

But in 1763 he migrated to Britain, and he spent the war working as a historical painter at the court of George III. So every day he served the monarch against whom some of his countrymen were fighting, knowing all the while that this same king was launching his own legions against Americans who had once been accounted British subjects. It was this tension that helped to inform West’s apocalyptic vision. More viscerally than most, he understood that the American Revolution was also in multiple respects civil warfare.

Tracing some of the histories of the idea of civil war, and showing how definitions and understandings of this mode of conflict have always been volatile and contested, is the purpose of this latest book by David Armitage. Like all his work, Civil Wars: A History in Ideas is concise, wonderfully lucid, highly intelligent, and based on a confident command of a wide range of printed sources. It is also ambitious, and divided into three parts in the manner of Julius Caesar’s Gaul. This seems appropriate since Armitage roots his account in ancient Rome. It was here, he claims, between the first century BCE and the fifth century CE, that lethal conflicts within a recognized society, a common enough experience in earlier eras and in other regions, began to be viewed and categorized as a distinctive form of war: bellum civile.

How this came to pass is the subject of Part One of the book. In Part Two, Armitage switches to the early modern era, which is here defined mainly as the seventeenth and eighteenth centuries, and shows how elite male familiarity with classical texts encouraged Europeans and some of their overseas colonizers to interpret the civil commotions of their own times very much in Roman terms. Part Three takes the story from the nineteenth century to the dangerous and precarious present. Whereas the incidence of overt conflicts between major states has receded during the post-1945 “long peace,” civil wars have proliferated, especially in parts of Eastern Europe, Asia, the Middle East, and Africa. The “shadow of civil war,” Armitage contends, has now become “the most widespread, the most destructive, and the most characteristic form of organized human violence.”

But why ancient Rome to begin with? Armitage attributes its centrality to evolving Western conceptions of civil warfare partly to this culture’s marked success in establishing and stabilizing the idea of a distinct citizenry and political community. “Civil War could, by definition, exist only after a commonwealth (civitas) had been created.” More significant, as far as perceptions in later centuries were concerned, were the writings and careers of two brilliant Romans, each of whom in different ways was caught up in the rivalry between Julius Caesar and Pompey and destroyed by the violence of their warring successors.

Cicero, an opponent of Caesar, is the earliest-known writer to have used the term “civil war.” He employed it in a speech that he delivered at the Forum in 66 BCE, close to the spot where his severed head and hands would be put on display twenty-three years later, as punishment for his activism and his words. In the following century, the youthful poet Lucan completed a ten-book masterwork, De Bello Civile, on how, under Caesar, “Rome’s high race plunged in her [own] vitals her victorious sword.” Lucan dedicated his saga to Nero, the emperor who later forced him to commit suicide.

The writings and the gory fates of these men helped to foster and perpetuate the idea that civil warfare was a particularly nasty variant of organized human violence. It is in part this reputation, Armitage contends, that has made the subject of civil war a more impoverished field of inquiry than inter-state conflict. Given that the English, American, and Spanish civil wars have all long been historiographical cottage industries, I am not sure this is wholly correct. But it is the case, and he documents this powerfully throughout, that the ideas and negative language that have accumulated around the notion of “civil war” have resulted in the term’s use often being politically driven in some way. As with treason, what gets called civil war, and becomes remembered as such, frequently depends on which side eventually prospers.

At times, the term has been deliberately withheld for fear of seeming to concede to a set of antagonists even a glimmer of a claim to sovereignty in a disputed political space. Thus the royalist Earl of Clarendon chose in his history to describe the English Parliament’s campaigns against Charles I after 1642 not as a civil war, but as a rebellion. In much the same way, an early US official history of the Union and Confederate navies described their encounters between 1861 and 1865 as a “War of the Rebellion,” thereby representing the actions of the Southern states as a mere uprising against an indisputably legitimate government.

For Abraham Lincoln at Gettysburg in 1863, by contrast, it was essential to insist that America was undergoing a civil war. He wanted to trumpet in public more than simply the rightness of a particular governing regime. Since its survival was still in doubt, he needed as well to rally support for the Union itself, that “new nation, conceived in liberty” as he styled it: “Now we are engaged in a great civil war, testing whether that nation, or any nation so conceived, and so dedicated, can long endure.”

Of course, had the American Civil War ended differently, it might well not have been called a civil war at all. Later generations might have remembered it as a “War of Southern Independence,” or even as a “Southern Revolution.” As Armitage points out, when major insurrections break out within a polity, they almost invariably start out as civil wars in the sense that the local population is initially divided in its loyalties and responses. But if the insurrectionists eventually triumph, then—as in Russia after 1917, or China after 1949—it has increasingly been the case that the struggle is redescribed by the victors as a revolution. Partly because of the continuing influence of the ancient Roman cultural inheritance, “revolution” possesses far more positive connotations than the more grubby and ambivalent “civil war.”


Joseph Eid/AFP/Getty Images
Syrian civilians in the rebel-held al-Shaar neighborhood of Aleppo, which was recently recaptured by government forces, March 2017

As a searching, nuanced, and succinct analysis of these recurring ideas, linguistic fluctuations, and shifting responses over a dramatic span of time, and across national and continental boundaries, Armitage’s account is a valuable and suggestive one. But as he admits, it is hardly comprehensive. This is not simply because of the scale of his subject matter, but also because of his chosen methodologies.

In dealing with civil wars he practices what, in an earlier work, he styled “serial contextualism.” This means that he offers detailed snapshots of a succession of discrete moments and of particular intellectual, political, and legal figures spread out over a very long stretch of time. The strategy is sometimes illuminating, but one has to mind the gaps. Most obviously, there are difficulties involved in leaping, as he does, almost immediately from ancient Rome to the seventeenth century. By the latter period, for instance, England’s “Wars of the Roses” were sometimes viewed and described in retrospect as civil wars. But at the time, in the 1400s, commentators do not seem to have resorted to medieval Latin phrases such as bella civilia or guerre civiles to describe these particular domestic and dynastic conflicts. Although classical texts such as Lucan’s De Bello Civile were known to medieval scholars, the impress of this ancient Roman inheritance on contemporary interpretations of fifteenth-century England’s internal wars does not appear to have been a vital one.

Why might this have been? The question could be rephrased. Why should it be imagined that language and concepts drawn from the ancient Roman past supplied the only or even the dominant ideas and methods for subsequent Westerners wanting to make sense of the experience of large-scale civil contention and slaughter? After all, in the medieval era and long after, most men and even more women possessed no direct knowledge of the Roman classics. Multitudes in Europe and everywhere else could not even read, never mind afford books. Yet in the past as now, it was precisely these sorts of “ordinary” people who were often the most vulnerable to the chaos and bloodshed of civil warfare, and so had little choice but to work out some ideas about it. What were these ideas?

A practitioner of intellectual history from the so-called Cambridge School of that discipline, Armitage barely touches on such questions. More international in range than many of his fellow scholars, he shares some of this school’s leading characteristics: its fascination with the long-term impact of Aristotelian and Roman republicanism, its overwhelming focus on language and on erudite elite males, and its comparative neglect of religious texts. It is partly this deliberately selective approach to the past and its sources that allows Armitage to venture on such an enormous topic over such a longue durée. But again, there is a mismatch between this methodology and the full extent and vital diversity of his subject.

To be sure, many of the impressive individuals who feature in his book were much more than desk-bound intellectuals or sheltered and austere political players. One of the most striking segments in Civil Wars is Armitage’s treatment of the multiple roles of the Prussian-born American lawyer Francis Lieber, who provided Lincoln with a legal code for the conduct of the Civil War. Lieber had fought at Waterloo and was left for dead on the battlefield. During the 1860s, he also had to bear the death of one of his sons who fought for the South, even as two others were fighting for the North. As he remarked: “Civil War has thus knocked loudly at our own door.” The fact remains, however, that most men caught up in civil wars throughout history have not been educated, prosperous, and high-achieving souls of this sort. Moreover—and this has a wide significance—civil wars have often been viewed as having a particular impact on women.

In harsh reality, even conventional warfare has usually damaged non-combatants, women, children, the elderly, and the infirm. Nonetheless, the idea long persisted that war was quintessentially a separate, masculine province. But civil wars were seen as taking place within, and cutting across, discrete societies. Consequently, by their very nature, they seemed likely to violate this separation of spheres, with women along with children and the old and frail all patently involved. This was a prime reason why civil warfare was so often characterized in different cultures not just as evil and catastrophic, but as unnatural. In turn, this helps to explain why people experiencing such conflicts have often resorted, far more avidly than to any other source of ideas, to religious language and texts for explanations as well as comfort.

The major holy books all contain allusions to civil warfare and/or lines that can be read as addressing its horrors. “I will set the Egyptians against the Egyptians,” declares the King James version of the book of Isaiah: “and they shall fight every one against his brother, and every one against his neighbour.” It was often the Apocalypse, though, as demonstrated by Benjamin West’s great canvas, that Christians mined for terrifying and allusive imagery. Such biblical borrowings sometimes crowded out references to the Roman classics as a means of evoking and explaining civil war altogether, as seems often to have happened in medieval England.

At other times, religious and classical imagery and arguments were combined. Thus, as Armitage describes, the English poet Samuel Daniel drew on Lucan’s verses on the Roman civil war when composing his own First Fowre Bookes of the Civile Wars between the Two Houses of Lancaster and Yorke in 1595, a work plundered for its plots and characters by William Shakespeare. But it is also easy to see in portions of Daniel’s text the influence of the Apocalypse:

Red fiery dragons in the aire doe flie,
And burning Meteors, poynted-streaming lights,
Bright starres in midst of day appeare in skie,
Prodigious monsters, gastly fearefull sights:
Straunge Ghosts, and apparitions terrifie,
…Nature all out of course to checke our course,
Neglects her worke to worke in us remorse.

It was never just Christians who turned to holy books and religious pieties so as to cast some light on the darkness of civil war. Unlike allusions to the Roman past, such responses seem to have been universal. Indeed, I suspect that the only way that a genuinely trans-continental and socially deep history of civil warfare could conceivably be written would be through an examination of how civil wars have been treated by the world’s various religions, and how such texts and interpretations have been used and understood over time. In particular, the idea that Samuel Daniel hints at in the passage quoted above—that civil war was a punishment for a people’s more than usually egregious sins—has proved strikingly ecumenical as well as persistent.

Thus for Sunni Muslims, the idea of civil war as fitna has been central to understandings of the past. But fitna in this theology connotes more than civil warfare. The term can evoke sexual temptation, moral depravity—once again, sin. The First Fitna, for instance, the war of succession between 656 and 661, is traditionally viewed by Sunnis as marking the end of the Rightly Guided Caliphs, the true followers of Muhammad.

As Tobie Meyer-Fong has shown, the civil wars that killed over twenty million Chinese in the 1850s and 1860s, the so-called Taiping Rebellion, were also often interpreted as divine retribution for immoral, decadent, or irreligious behavior.* Confucian, Daoist, and Buddhist commentators on all sides rationalized the carnage and disorder in these terms. Poor, illiterate Chinese caught up in this crisis seem also to have regularly turned to religion to make sense of it, and not simply out of faith, or as a means to explain apparently arbitrary horrors. By viewing civil war as punishment for Chinese society’s sins in general, they could also secure for themselves a strategy and a possible way out, even if only in spiritual terms. They could make extra and conscious efforts to follow a moral pathway, and hope thereby to evade heaven’s condemnation.

Analogous responses and patterns of belief continue today, and understandably so. As the ongoing civil warfare in Syria illustrates all too terribly, vulnerable people caught up in such ordeals can easily be left feeling that no other aid is available to them except a deity, and that the only alternative is despair. David Armitage concludes his book with a discussion of how the “long-term decline of wars between states” (a decline that should not be relied on) has been “accompanied by the rise of wars within them.” As in his previous book, The History Manifesto (2014), co-written with Jo Guldi, he also insists that historians have a duty—and a particular capacity—to address such large and recurrent features of human experience:

Where a philosopher, a lawyer, or even a political scientist might find only confusion in disputes over the term “civil war,” the historian scents opportunity. All definitions of civil war are necessarily contextual and conflictual. The historian’s task is not to come up with a better one, on which all sides could agree, but to ask where such competing conceptions came from, what they have meant, and how they arose from the experience of those who lived through what was called by that name or who have attempted to understand it in the past.

Certainly, a close reading of Civil Wars provides a deeper understanding of some of the semantic strategies that are still being deployed in regard to this mode of warfare. Thus President Bashar al-Assad and his supporters frequently represent Syria’s current troubles as the result of rebellion, revolt, or treason; while for some of his Russian allies, resistance in that country is to be categorized as terrorism.

But historians can illumine the rash of civil warfare that has characterized recent decades more deeply than this. Whereas Armitage focuses here on the making and unmaking of states, it is the rise and fall of empires that have often been the fundamental precipitants of twentieth- and early-twenty-first-century civil wars. At one level, the decline and demise of some old, mainly land-based empires—Austrian, Ottoman, and Soviet—have contributed to a succession of troubles in Eastern Europe. At another, the old maritime empires that invaded so much of Asia, Africa, and the Middle East frequently imposed new boundaries and yoked together different peoples in those regions in ways that were never likely to endure, and stoked up troubles for the future. In these and other respects, Armitage is right to insist that history can equip men and women with a better understanding of the past and of the troubled present. It always has done this. But only when its practitioners have been willing to adopt broad and diverse and not just long perspectives.

* Tobie Meyer-Fong, What Remains: Coming to Terms with Civil War in 19th Century China (Stanford University Press, 2013).


The Pleasures of Pessimism


Henri Cartier-Bresson/Magnum Images
Emil Cioran, Paris, 1984

Why do we read writers who are profoundly pessimistic? And what sense are we to make of their work in our ordinary, hopefully not uncheerful lives?

I am not speaking about the sort of pessimism concerned with the consequences of our electing this or that president, or failing to respond to world famine or global warming, but what in Italy came to be called il pessimismo cosmico. The term was coined in response to the work of the nineteenth-century poet and thinker Giacomo Leopardi, who at the ripe old age of twenty-one decided that “all is nothing, solid nothing” and he, in the midst of nothing, “nothing myself.” The only reasoned and lucid response to the human condition, Leopardi decided, was despair: hence all positive action and happiness must always have the quality of illusion.

This is existential pessimism of the most uncompromising kind. Who needs it? What could possibly be the attractions?

Toward the end of my graduate course in literary translation I introduce the students to Samuel Beckett, in particular Arsene’s speech in the novel Watt. Watt has just arrived at Mr. Knott’s house and since when one servant arrives another must depart, Arsene is leaving. Before he does so, he gives Watt the benefit of a lifetime’s disillusionment in a twenty-page monologue. This is the passage I offer my students:

Personally of course I regret everything. Not a word, not a deed, not a thought, not a need, not a grief, not a joy, not a girl, not a boy, not a doubt, not a trust, not a scorn, not a lust, not a hope, not a fear, not a smile, not a tear, not a name, not a face, no time, no place, that I do not regret, exceedingly. An ordure from beginning to end. And yet, when I sat for Fellowship, but for the boil on my bottom… The rest, an ordure. The Tuesday scowls, the Wednesday growls, the Thursday curses, the Friday howls, the Saturday snores, the Sunday yawns, the Monday morns, the Monday morns. The whacks, the moans, the cracks, the groans, the welts, the squeaks, the belts, the shrieks, the pricks, the prayers, the kicks, the tears, the skelps, and the yelps. And the poor old lousy old earth, my earth and my father’s and my mother’s and my father’s father’s and my mother’s mother’s and my father’s mother’s and my mother’s father’s, and my father’s mother’s father’s and my mother’s father’s mother’s and my father’s mother’s mother’s and my mother’s father’s father’s and my father’s father’s mother’s and my mother’s mother’s father’s and my father’s father’s father’s and my mother’s mother’s mother’s and other people’s fathers’ and mothers’ and fathers’ fathers’ and mothers’ mothers’ and fathers’ mothers’ and mothers’ fathers’ and fathers’ mothers’ fathers’ and mothers’ fathers’ mothers’ and fathers’ mothers’ mothers’ and mothers’ fathers’ fathers’ and fathers’ fathers’ mothers’ and mothers’ mothers’ father’s and fathers’ fathers’ fathers’ and mothers’ mothers’ mothers’. An excrement.

The students’ collective response is always the same: at first perplexity, faint smiles, frowns, widening eyes as the long list of “mother’s” and “father’s” begins, and finally a blend of giggles and incredulity: is “prof” really going to read that list to the end? So the passage becomes an exercise in showing how the most negative of visions can be smuggled into our minds almost without our noticing, so distracted are we by the form. On my computer the autocorrect function of Word has underlined much of the passage in blue: “avoid repetition,” it suggests.

Not all pessimists have the same fondness for bizarre comedy. To read Thomas Hardy’s Jude the Obscure, Joseph Conrad’s Lord Jim, J. M. Coetzee’s Disgrace, or indeed many other fine novelists, is to feel at times that any optimism we might unwisely entertain is being systematically ground into the dirt; anything that can go wrong will. All the same, these works differ from Beckett’s in that unhappiness is the result of adverse circumstance, or a combination of particular character and particular situation. There is, that is, in these novelists, a denunciation of the customs of their times, customs that contribute to their characters’ downfalls. Jude and Sue would not have ended up so badly if people had had a more lenient view of unmarried couples. Jim would never have wound up as he did without the race discrimination which underlies so much of what happens in the book. David Lurie’s story could only happen in modern South Africa. So the reader is permitted to think that such disasters occur to certain people in certain situations, but not of absolute necessity. Precisely the feeling that the happy life is possible, yet has been missed out on, intensifies the distress, but prevents the story from becoming a general, existential condemnation. The reader can close the book with a grim smile, and a “there, but for the Grace of God…”

Pessimistic essayists and philosophers may not cast the same narrative gloom as fiction writers, but the implications of their work tend toward the universal. Indeed, to believe that unhappiness was merely a question of immediate circumstance and particular character might be seen as a crass form of optimism. “Our chief grievance against knowledge is that it has not helped us to live,” observes Emil Cioran, dismissing the whole Enlightenment enterprise in a few dry words. Or again: “No one saves anyone; for we save only ourselves, and do so all the better if we disguise as convictions the misery we want to share, to lavish on others.” Or again, “Being busy means devoting oneself to the fake and the sham.” And: “Trees are massacred, houses go up—faces, faces everywhere. Man is spreading. Man is the cancer of the earth.”

Here there is no question of a certain person making certain mistakes in certain circumstances. Here we have an across-the-board dismissal of the very idea of progress or improvement, or engineered happiness. So why do we, or some of us, read such material, and read it with appetite? Is it perhaps a perverse form of indulgence? Self-pity even? Leopardi noted,

the pleasure the mind takes in dwelling on its downfall, its adversities, then picturing them for itself, not just intensely, but minutely, intimately, completely; in exaggerating them even, if it can (and if it can, it certainly will), in recognizing, or imagining, but definitely in persuading itself and making absolutely sure it persuades itself, beyond any doubt, that these adversities are extreme, endless, boundless, irremediable, unstoppable, beyond any redress, or any possible consolation, bereft of any circumstance that might lighten them; in short in seeing and intensely feeling that its own personal tragedy is truly immense and perfect and as complete as it could be in all its parts, and that every door toward hope and consolation of any kind has been shut off and locked tight…

This certainly rings a bell, and the very accuracy of the description brings with it a certain pleasure and relief. How absurd that we do this! “Our pleasures like our pains,” Cioran comments, pushing the disillusionment a step further, “come from the undue importance we attribute to our experiences.”

Perhaps the best way to understand our engagement with pessimism is to observe those occasions when it does not attract us, when we put it aside with distaste or boredom. In novels this occurs when we feel the author is merely piling on the pain, without our feeling there was anything necessarily fatal about the combination of character and circumstance. A car accident occurs at the point when someone is happiest. Or our hero contracts a fatal disease. So what? We know that there are people who have interminable bad luck. Why torture us with it? We can all forgive, or at least condone, an unconvincing happy ending—David Copperfield, for example—for the ambiguous relief it brings, but not an unconvincing unhappy ending, or an ending that seeks to generalize distress from the merest individual accident. We have been made to suffer for nothing.

Recently I went to see Edward Bond’s 1971 play Lear, a reworking of Shakespeare’s story that presents a king obsessed with building a wall to protect his kingdom and (in this version) his two daughters, who are intent on marrying the rulers on the other side of the wall. The play amounts to a long denunciation of political violence and subterfuge, and offers no character with whom the spectator might remotely sympathize. People change position constantly but always repeat old mistakes that bear obvious resemblances to the horrors of twentieth-century Europe. Most spectators will be in wholehearted agreement with the playwright’s thesis from the beginning; but there is no pleasure either in the quality of expression (it is unwise to encourage comparison with Shakespeare), or in watching scenes of rape, torture, and execution. The literary symbolism and interminable allusions are heavy-handed. One leaves the theater exhausted and disgruntled. Mulling over this response, I realized that what is positive about Jude, or Lord Jim, or Disgrace, or indeed Shakespeare’s King Lear, is that the lives and feelings of the individual characters do seem important, and the trajectories of the stories told, however unhappy, are clear and convincing.

For essayists and philosophers, what we cannot forgive is, first, the suspicion that our writer has a personal axe to grind, and second, perhaps even worse, dullness, a lack of panache. The slightest feeling that facts are being manipulated in order to support a position in which, for some spoilsport reason, the author has a personal investment, is fatal. The reader, that is, must recognize that a genuine truth is being acknowledged. Beckett can get away with his long list of “father’s” and “mother’s” because it tells an undeniable truth: mine really is the same earth that all my ancestors walked, the same life all my forebears lived. And it is true, unavoidably, that as one goes backward in time so one’s forebears multiply—two parents, four grandparents, eight great-grandparents, sixteen great-great-grandparents—so that one’s own life becomes steadily less significant and could be construed as mere repetition.

But why is dullness a problem, if what we care about is the truth? Why does it matter that a pessimist deliver his or her message with brio? Here I think we are approaching the key to an aesthetic of pessimism, particularly in essay form.

Modern society, as a whole, tends toward a sort of institutional optimism, espousing Hegelian notions of history as progress and encouraging us to believe happiness is at least potentially available for all, if only we would pull together in a reasonable manner. Hence the kind of truth pessimists tell us will always be a subversive truth. All the quotations I chose from Cioran, almost at random, could be understood as rebuttals of the pieties we were brought up on: that knowledge is a vital acquisition, that we must work to help and save each other, that it is positive to be industrious and healthy, that freedom is supremely important, and so on.

Such a radical deconstruction may be alarming, yet when carried out with panache, zest, and sparkle, it nevertheless creates a moment’s exhilaration, and with it, crucially, a feeling of liberty. Reading Leopardi or Cioran or Beckett, one is being freed from the social obligation to be happy. Here is Schopenhauer:

There is not much to be got anywhere in the world. It is filled with misery and pain; and if a man escapes these, boredom lies in wait for him at every corner. Nay more; it is evil that generally has the upper hand, and folly that makes the most noise. Fate is cruel and mankind pitiable.

Espousing this kind of vision might seem like madness, but elsewhere Schopenhauer explains its usefulness:

If you accustom yourself to this view of life you will regulate your expectations accordingly, and cease to look upon all its disagreeable incidents, great and small, its sufferings, its worries, its misery, as anything unusual or irregular; nay, you will find that everything is as it should be, in a world where each of us pays the penalty of existence in his own peculiar way.

Cioran pushes the notion to extremes, and makes it more exciting:

The only way of enduring one disaster after the next is to love the very idea of disaster: if we succeed, there are no further surprises, we are superior to whatever occurs, we are invincible victims.

Invincible victims! Here is a curious optimism lurking at the very heart of pessimism. And notice again how important form is. Life is chaos, a long sequence of uncontrollable disasters, but this idea is expressed with great control and elegance, suggesting heroic adaptation, appropriation even, rather than capitulation; in the midst of disasters we can formulate witty sentences. “No future in this,” observes Beckett’s narrator in Worstward Ho. And proceeds: “Alas yes.” With even greater virtuosity, Robert Lowell, in “Her Dead Brother,” creates a punchline by omission when he gives us: “All’s well that ends.” With these flashes of creativity it’s as if a turbulent seascape were fleetingly illuminated by lightning; we are shown our shipwreck brilliantly.

The pleasure detonated by these clever devices does not last, of course, which is why one is never enough. Aphorisms of the negative kind are addictive. To read Cioran’s Cahiers is to see a man obsessed with transforming his negative intuitions into these splendid little firecrackers, repeating and honing and refining one after another until they achieve the maximum effect in the most concise formulation, the brilliance becoming a kind of anesthetic that actually makes it a pleasure to feel the knife turn in an old wound. The form is a triumph over pain.

“Do you believe in the life to come?” Clov asks Hamm in Beckett’s Endgame. And Hamm replies, “Mine was always that.”


The Vitality of the ‘Berlin Painter’


Musée du Louvre/RMN-Grand Palais/Art Resource Red-figure bell-krater showing Ganymede, described in the Iliad as the most beautiful of mortal men, attributed to the Berlin Painter, circa 500-490 BC

Only twice in modern times have museums surveyed the career of a single Greek vase painter, and both shows were at major international institutions (the Metropolitan Museum of Art in 1985 and Berlin’s Staatliche Museen in 1990-1991). Thus it is a marvel that the more modest Princeton University Art Museum has assembled a vast selection of the works of the master referred to as the Berlin Painter, who lived in Athens in the early fifth century BC. Curated by J. Michael Padgett, the show charts the development, over some four decades, of an artist whose name, nationality, and even gender remain unknown, but whose distinctive and confident illustration in the red-figure style stands out as clearly as any signature.


The Metropolitan Museum of Art, Fletcher
Red-figure amphora showing a musician with his head tilted back in song, attributed to the Berlin Painter, circa 495-485 BC

In his pioneering research on Attic vase painting, the Oxford art historian Sir John Beazley devised the label “Berlin Painter” in 1911 in honor of a large lidded amphora decorated by this artist that is housed in Berlin’s Antikensammlung. He assigned thirty-seven other works to the same artist on the basis of the unique line they shared, which he described as “thin, equable, and flowing,” and various features of the depiction of the human form. By now several hundred vases have been attributed, more or less confidently, to this artist’s hand, many recovered from the graves of wealthy Etruscans in western Italy. More than fifty can be seen in the Princeton show, along with pots by the equally talented Kleophrades Painter—who, because of the similarity of their styles, is thought to have been the Berlin Painter’s teacher—and by other, later artists who clearly took their inspiration from these two masters.

The Berlin Painter began working at the end of the sixth century BC, when the red-figure technique of vase painting—in which black glaze fills the background, leaving silhouettes of unglazed red ceramic to form the image—was just starting to replace its inverse, the black-figure style that had prevailed earlier. The possibilities offered by this new medium clearly intrigued the artist, who began to expand the black background and diminish the red subject to a single, static figure—a lyre-playing singer with his head thrown back in musical ecstasy, a young athlete holding a discus. These figures seem to float, anchored to the physical world only by the short geometric band on which they plant their feet. In some cases, even this tiny hint of landscape disappears. 

The first phase of the Berlin Painter’s career coincided with the birth of democracy in Athens, and the early works—which portray ordinary people caught in simple moments of daily life in much the same way that other vase painters treated gods and heroes—demonstrate the humanism of that political evolution. The vitality that the Berlin Painter gave to these portraits attests to the new social consciousness emerging in Athens that would soon culminate in the great drama, history, and political works of the later fifth century.    

In the later works, the Berlin Painter tried more crowded and kinetic scenes, divine processions and mythic combats. The results are disappointing compared with the earlier works. One of the fascinations of the Princeton show is that it illustrates the arc of his (or her) career, from youthful exuberance and innovation to a less confident, more conventional mature phase. Even as his early work attracted imitators and rivals, the Berlin Painter moved toward more traditional subjects and developed a clumsier, less graceful line. This shift fits in with a general decline in painting standards during the upheaval following the Persian destruction of Athens in 480 BC, according to Padgett.

Even from the Berlin Painter’s late phase, however, sparks of brilliance emerge. An Athenian state commission was granted to the artist, probably in the 470s BC, to produce the amphorae that were given out as prizes in the quadrennial Panathenaic athletic games. Two surviving examples, out of the thousand-plus that the commission required, are on display in the Princeton exhibition. Both are breathtaking. The design of the Panathenaic vase was fixed by conventions: it had to be decorated in the old black-figure style, and to bear a stock image of Athena on one side, a freeze frame on the other depicting the action of the sport in question. Long-distance runners are vividly portrayed on one of the amphorae in the show, their muscles in vigorous motion as they approach a turning post. One older man, his body still lithe but his hair shaggy and receding, trails behind three rivals, perhaps conserving strength for a final kick. What was, in lesser hands, a staid genre is here infused with personality and drama.      


Gregory Callimanopulos Collection, New York. Black-figure Panathenaic prize amphora, attributed to the Berlin Painter, circa 480-470 BC



“The Berlin Painter and His World” is at the Princeton University Art Museum through June 11. The exhibit will be on view at the Toledo Museum of Art from July 8 through October 1. The accompanying book, which includes a catalog and is edited by J. Michael Padgett, is published by Yale University Press.


The Art of Difference

Diane Arbus: In the Park


The Estate of Diane Arbus. Diane Arbus: A young man and his girlfriend with hot dogs in the park, N.Y.C. 1971

In the mid-1990s, when The New Yorker’s offices were a stone’s throw from the main branch of the New York Public Library, I worked down the hall from Joseph Mitchell. The great writer was in his eighties then. It had been decades since he’d published anything, but it was thrilling to discover his old pieces in the bound volumes that lined the magazine’s library shelves: back issues made the past feel present. When Mitchell became a staff writer in 1938, he introduced The New Yorker’s readers to a world I recognized as not too far removed from the Manhattan I knew—a universe peopled by characters in sour barrooms, a bearded lady or two, black men who wore their self-protective reserve like a second suit. Indeed, Mitchell’s portraits—snapshots—of outsiders spoke to me of difference in a way that strenuously “queer” literature of the time did not.

The longest conversation I ever had with Mitchell was not about writing, though. It was about the photographer Diane Arbus, and took place around the time The New Yorker published my review of Untitled, her third posthumous book of photographs. (She died by her own hand in 1971.) Both he and Arbus used the word “freaks” to describe their subjects (a word I found disparaging and objected to, albeit silently in his presence). But Arbus’s subjects were unlike Mitchell’s: her photographs showed them pursuing their otherness with a fierce velocity that had little in common with his ultimately more assimilated characters, seen through the skein of his elegant and sometimes ironical prose.

Arbus’s photographs were elegant, too—classically composed and cool—but they were on fire with what difference looked like and what it felt like as seen through the eyes of a straight Jewish girl whose power lay in her ability to be herself and not herself—different—all at once. The story she told with her camera was about shape-shifting: in order to understand difference one had to not only not dismiss it, but try to become it. “I don’t like to arrange things,” Arbus once said. “If I stand in front of something, instead of arranging it, I arrange myself.”

When my review of Untitled appeared in The New Yorker Mitchell stopped me in “our” hall to say that Arbus had first telephoned him in 1960, after she read his work. She wanted to talk about his subjects—the “freaks” that he had described on the page and that she was attempting to describe in her photos. He told Patricia Bosworth for her Diane Arbus: A Biography (1984):

I urged Diane not to romanticize freaks. I told her that freaks can be as boring and ordinary as so-called “normal” people. I told her what I found interesting about Olga, the bearded lady, was that she yearned to be a stenographer and kept geraniums on her windowsill.

Mitchell said that Arbus phoned him several times after that first conversation. They would always talk for at least an hour, and he jotted down some of the topics they discussed: Franz Kafka, James Joyce, Walker Evans, Grimms’ Fairy Tales.

While Arbus’s genius found its fullest expression in photography, she was also an astute reader and writer whose letters, journals, and other writings deserve space on any serious reader’s shelf. She began keeping extensive notebooks at the time she started taking pictures in earnest, in 1956, and much of her available writing is collected in Diane Arbus: A Chronology (a treasure trove of a book whose design is vexing: the type is too small). Indeed, a number of her best-known images, ranging from Russian midget friends in a living room on 100th Street, N.Y.C. 1963 to Hermaphrodite and a dog in a carnival trailer, Md. 1970, feel like tales of the fantastic—stories she might have been tempted to write down if making images didn’t claim her attention first, last, and always.

Still, language was important to Arbus. “Another thing I’ve worked from is reading,” she once said. “It happens very obliquely. I don’t mean I read something and rush out and make a picture of it. And I hate that business of illustrating poems.” But it was the camera that not only gave her license to “go where I’ve never been before” but also, as her more recent biographer Arthur Lubow suggests in his Diane Arbus: Portrait of a Photographer, allowed her to be looked at in return, especially after she started using a square-format Rolleiflex, a camera that allowed her to confront more directly those she photographed, since it didn’t obscure her face. She wanted her queer subjects—all those “other” self-created people whom she memorialized in Manhattan, her wrecked, magical city—to see her difference, too.

That difference was something she felt nearly from the beginning. Born in 1923, she was the second child of David and Gertrude Nemerov. (Her brother, Howard, born three years before, would go on to become a noted writer. Her sister, Renée, an artist, was born in 1928.) Nemerov supported his family as the merchandising director at Russeks, one of the city’s leading fur emporia; the company had been founded in the late nineteenth century by Frank Russek, Gertrude’s father, along with two of his brothers. Families that go into business with one another often live in a peculiar world defined by power, an uneasy closeness based on trade and profit, and the isolation that can come with wealth. Throughout her life Arbus was drawn to other closed societies.

In a slide show of her work in 1970 (a Japanese student, unsure of his English, recorded her lecture so he could play it back later), Arbus tells the audience that she grew up “kind of rich,” and when they laugh, she doesn’t laugh with them. Her silence feels like a sign of an injury. But by virtue of the education that her class and money made possible, she was able to articulate in her writing what her difference felt like, while her “freaks” could only display theirs, and hope for the best. Writing to her close friend Marvin Israel in 1960, Arbus said:

I remember the special agony of walking down that center aisle, feeling like the princess of Russek’s: simultaneously privileged and doomed. The main floor was always very empty like a church and along the way were poised the leeringest manikins ever whose laps and bosoms were never capacious enough for refuge and all the people bowed slightly and smiled like the obsequies were seasoned with mockeries. It seemed it all belonged to me and I was ashamed.

Another cause for shame or secrecy, perhaps, was Arbus’s relationship with her brother: Lubow suggests that it was incestuous, beginning when they were children, cosseted by nannies but emotionally neglected by their beautiful, depressed mother and remote father. He goes on to say that based on material provided by Arbus’s therapist, the affair lasted until shortly before her death. Whatever the circumstances, incestuous coupling can be viewed as a kind of twinning—a game you can explore with someone who is you and not you, all at once. Throughout her career Arbus returns, again and again, to that feeling of twinning and difference. It’s there in her application in 1971 for an Ingram Merrill grant (she didn’t get it):

The sign of a minority is The Difference. Those of birth, accident, choice, belief, predilection, inertia. (Some are irrevocable: people are fat, freckled, handicapped, ethnic, of a certain age, class, attitude, profession, enthusiasm.) Every Difference is a Likeness too.

Like Howard before her, Arbus went to Fieldston, the Riverdale campus of the Ethical Culture School. One classmate recalls that “she came full-blown with her mature privacy intact.” Direct, shy, secretive, and charming, her writing was advanced. A 1940 paper about Chaucer is detailed, questioning, and specific:

Chaucer seems to be very sure and whole and his attitude toward everything is so calm and tender because he was satisfied and glad that he was himself…. The pleasure he gets from meeting [people] is part physical, part spiritual. He seems to love physical things, even obscene ones, and from looking at them, he gets a contact with the other person. His way of looking at everything is like that of a newborn baby; he sees things and each one seems wonderful, not for its significance in relation to other things, but simply because it is unique and because it is there.

Arbus’s uniqueness was heralded right away, but the praise disturbed her. During her first years at Fieldston, she met Allan Arbus, who was working in the advertising department at Russeks. (David Nemerov’s partner, Max Weinstein, was Allan’s uncle by marriage.) Five years Diane’s senior, he had dropped out of City College. The two quickly became allies, according to Allan, and began meeting in secret on weekends, vowing to marry. Sometimes they were mistaken for siblings. That he was already a part of her family’s professional life when they met, a brother who was not a brother, might have added to his appeal, too.

Still, there was high school to finish and the awful weight that came with being “gifted.” At Fieldston, Arbus’s literary skills were considered equal to her gifts as a painter. The late screenwriter Stewart Stern, a fellow student, recalled:

When she picked up her brush for the first time she was simply not doing what anybody else did. We were all trying to be representational and she had no interest in that, except as a kind of satire.

For Arbus the question was: What realities does reality represent? And yet she couldn’t bear making art that was “art”; like all those Russeks furs, painting belonged to a moneyed class, the world of connoisseurship. Was she talented, she wondered, or was she encouraged to make art because a girl of her background was supposed to? For her senior class assignment, Arbus produced her “Autobiography,” in which she wrote:

Everyone suddenly decided I was meant to be an artist and I was given art lessons and a big box of oils and encouragement and everything. I painted and drew every once in a while for 4 yrs. with a teacher without admitting to anyone that I didn’t like to paint or draw at all and I didn’t know what I was doing. I used to pray and wish often to be a “great artist” and all the while I hated it and I didn’t realize that I didn’t want to be an artist at all. The horrible thing was that all the encouragement I got made me think that I really wanted to be an artist and made me keep pretending that I liked it and made me like it less and less until I hated it because it wasn’t me that was being an artist.

Who was that “me”? Despite her horror of painting—“I remember I hated the smell of the paint and the noise it would make when I put my brush to the paper. Sometimes I wouldn’t really look but just listen to this horrible squish squish squish,” she told the journalist Studs Terkel—Arbus’s teachers thought they were encouraging her true self, or a self she wanted to be. But she knew she was acting, and felt herself a fraud. Like most reasonably self-aware, polite, socially adept young girls of the time, Arbus might have considered herself the problem when, in fact, what she was really questioning was the world’s authenticity, which turns equally on the fake and the real.

This was the 1940s and she was a girl and even though power belonged to the world of men, there were questions. What if she was smarter and more talented than Howard? Than Allan? Than her father, who loved the Impressionists and became a Sunday painter because he wanted to, and could? What would that make her? Howard’s son Alexander Nemerov—he never met his aunt—saw what Diane Arbus no doubt tried to hide but could not:

Arbus had the courage not only to bend photography over backward but to bend her own written eloquence backward, too…. The world for my father responded only to his intelligence…. Arbus, by contrast, could see the world as it was without her. She simply gave it the chance to be as it was. What she saw, in one sense, was the ardency and joy of a world relieved of the burden—this is how I would put it—of having to be intelligent for her, of having thereby to mirror her own intelligence, of being required to give that intelligence back to her in a genuine way, ever-present, all the time, that must have been exhausting to the person of such expectations, like going to the school of your own mind twenty-four hours a day.1

Arbus’s search for real feeling—she suffered from manic depression her entire life, just as her mother had before her—was not an unintelligent reaction to the closed society she had grown up in, one that placed a great deal of importance on appearances, and the silences that permeate decorum. Arbus could rail against all that fakery in photography, and in her writing, without disturbing her sitters’ shell of secrecy, of being known only to themselves—and then to her. Together, they would tell the truth while sending up the “hierarchy” of art, the world her parents inhabited, and that Howard followed them into. (His 1965 book, Journal of the Fictive Life, is, in part, a condemnation of photography as a form of pornography. Alexander Nemerov recalls his father holding one of Arbus’s most famous images, Identical twins, Roselle, N.J. 1967, the only one of her pictures he had, like something that stank.)

A naked man being a woman N.Y.C. 1968, for example, is a kind of joke on Botticelli’s Birth of Venus. In the picture a naked man stands center frame. His genitals are tucked between his upper thighs. His left hand rests sweetly on his left hip; his right hand on his right thigh. His right foot is arched. The manufactured pudendum, the powdered face and penciled eyebrows and rouged lips do not obscure his “real” self—his wide, shaved chest, his long male feet—but add to the reality of his dream of a self, being-as-a-wish. Arbus wanted in on that deeply private exchange with the self. “I have learned to get past the door, from the outside to the inside,” she wrote in a fellowship application in 1964. “I want to be able to follow.”

It took Arbus a long time to become Arbus. Shortly after her marriage to Allan in 1941, they began collaborating on fashion photography. (They had two children, in 1945 and 1954.) Their first client was Russeks—more family business. Arbus styled the shoots and Allan, always more technically adept, shot the photographs. By 1956 Arbus decided she had to end her collaboration with Allan or lose her mind.

She marked her transition from commercial photographer to artist when she began studying with Lisette Model. Born in Vienna in 1901, Model, like Arbus, had grown up rich—a world she rejected in her black-and-white pictures, in which one feels the spiritual fatigue or complacency behind the idle and pampered. What Arbus needed from Model was her permission to be herself, as a photographer. Model recalls:

I said, “Originality means coming from the source….” And from there on, Diane was sitting there and—I’ve never in my life seen anybody—not listening to me but suddenly listening to herself through what was said.

Arbus’s first 35-millimeter images were, she remembered in a class she taught in 1971, “very grainy things. I’d be fascinated by what the grain did because it would make a kind of tapestry of all these little dots and everything would be translated into this medium of dots.” She did not like being a painter, but she could certainly see and speak in the language of one. The photographs in the exhibition “In the Beginning” at the Met Breuer last fall, curated by Jeff Rosenheim, began in 1956 when Arbus was making those dots more specific. These pictures are important to our understanding of her work: despite the fact that many of them might qualify as street photography, they’re markedly different from those of her contemporaries—Sid Grossman, Saul Leiter. They are not about Manhattan as a clichéd swirl of taxis and people but, instead, the city as a hitherto unseen terrain containing faces and bodies and most importantly souls that haunt it at odd angles and on empty streets.

At the same time Arbus was working those dots out. Looking at the prints at the Met Breuer (early on Arbus used the popular and lightweight Leica; Rosenheim’s show ends in 1962, when she switched to the Rolleiflex) was like reading draft after draft of a first great poem as it comes into focus: you know that once the writer finishes it, that poem will be the bridge to other great poems. Arbus knew her apprentice work was pivotal to clarifying what she needed to say with the camera:

But when I’d been working for a while with these dots, I suddenly wanted terribly to get through there. I wanted to see the real differences between things. I’m not talking about textures. I really hate that, the idea that a picture can be interesting simply because it shows texture…. It really bores the hell out of me. But I wanted to see the difference between flesh and material, the densities of different kinds of things: air and water and shiny.

Whatever her tools, Arbus generally recognized what she wanted to photograph—people and relationships that were queer, or that queered our idea of the “normal.” Arbus was particularly attuned to postures that connote shame, the horror of avoidance as played out by so-called normal-looking people. In a picture like Woman with white gloves and a pocket book, N.Y.C. 1956, the figure looks slightly rattled, as if recoiling from the memory of an emotional pummeling that nevertheless, and miraculously, left hair and makeup in place (see illustration below). The same year, Arbus took a picture titled Mother contemplating her toddler, N.Y.C., in which a hefty mother in her winter coat looks at her offspring with nothing approaching maternal concern. It’s as though she can’t decide if he’s a bad dream, or why he isn’t a dream, and Arbus captures this complication in the most primal of relationships.


The Estate of Diane Arbus. Diane Arbus: Woman with white gloves and a pocket book, N.Y.C. 1956

Those two feel like a curtain raiser to the devastating Woman and her son, N.Y.C. 1965, in “Diane Arbus: In the Park” at Lévy Gorvy Gallery. In this image, the two figures are physically similar; the boy is overweight, the better to be “like” his mother—or to withstand her psychological weight, the mouth that keeps going, even during the sitting. Both mother and child pictures feature the drama of interaction no matter how distanced or cruel: the story couldn’t happen without that tormented or tormenting other.

As Arbus went on, though, she became more and more interested in the drama of the self as it appeared not only to her through her lens (her magic portal) but to her subject. No visual artist of the twentieth century has described with more accuracy the enormous pride her characters, certainly in the early pictures, feel at having risked all to become themselves—selves they could not lock up, or hide, or resist being recorded despite the pain of being marginalized in their daily life.

In A very thin man in Central Park, N.Y.C. 1961, in the Lévy Gorvy show, the subject resembles an elongated Raymond Massey; he has a movie star’s interest in his effect—his verticality of form, his costuming, the spats that emphasize the thinness of his legs, a “defect” that no doubt contributes to his unblinking pleasure in his own self-worth. And because he’s proud of his look, he’s interested in his effect—his power as, potentially, an erotic object. The picture is a record of a come-on—what can he give Arbus in exchange for her having been interested in him? His mouth is slightly open with the questions, with desire.

Arbus once said that when she went to photograph someone she heard herself saying, “‘How terrific,’ and there’s this woman making a face.” She continued:

I really mean it’s terrific. I don’t mean I wish I looked like that. I don’t mean I wish my children looked like that. I don’t mean in my private life I want to kiss you. But I mean that’s amazingly, undeniably, something. There are always two things that happen. One is recognition and the other is that it’s totally peculiar. But there’s some sense in which I always identify with them.

The Lévy Gorvy show helps open up just how much the Rolleiflex expanded Arbus’s view. It’s like going from a 16-millimeter screen to VistaVision: the enlarged format allows her to take in her subjects’ surrounding worlds. In early pictures she took with the new camera, such as Lillian and Dorothy Gish in Central Park, N.Y.C. 1964, the actresses are huddled against not only the cold but against the white loneliness of the surrounding landscape, a landscape redolent of the frozen wastes Lillian struggled to survive in the silent film Way Down East (1920), except the east now is not Maine but New York in winter.

Unlike Garry Winogrand or Robert Frank, Arbus made pictures that grew out of and described the loneliness we are all taught to be ashamed of and should try to “fix” through conventional connections—marriage, children, and so on.2 Arbus’s “I”—the eye behind her camera—was unabashed loneliness, looking to connect, if only because she understood what it felt like not to. She wanted to see the world whole, which meant seeing and accepting the fractures in those connections, too, along with all that could not be fixed. When she started taking pictures of drag queens and interracial couples, homosexuality was illegal, and miscegenation was still met with violence or derision.

While the figures in the Lévy Gorvy show sometimes look like creatures you’d expect to find at night, all of the photos were taken during the day when Arbus trolled for subjects in Washington Square Park and Central Park. That only adds to their boldness—and surreality. What world is this? A world we turn away from as we jog and cycle through leaf-dappled public spaces, ignoring our mortality or troubled self as it takes form in that madwoman’s eyes or those depressed kids holding hot dogs. About her work in Washington Square Park, Arbus once recalled:

I could become a million things. But I could never become that, whatever all those people were. There were days I just couldn’t work there, and then there were days I could…. I hung around a lot. They were a lot like sculptures in a funny way. I was very keen to get close to them, so I had to ask to photograph them. You can’t get that close to somebody and not say a word, although I have done that.

The former painter was never far from the photographer.

Once, while working with Model, Arbus said she was ashamed of what she saw—that it was evil. Her elder daughter and executor, Doon, took exception to this, writing in 1972:

I think what she meant was not that it was evil, but that it was forbidden, that it had always been too dangerous, too frightening, or too ugly for anyone else to look on. She was determined to reveal what others had been taught to turn their backs on. As far as I know, it was her first description of the territories she wanted to make her own, those that would attest to her daring.

Arbus’s daring separated her from Joseph Mitchell, in the end. When it was revealed after his death that he had put some if not a lot of fiction into his later journalism, my first thought was less about the nagging contemporary issue of “truthiness” than about Arbus. Some journalists make things up when they’re naturally reticent, tired, and can no longer bear to do the journalist’s essential work, which is to listen to one’s subject. I cannot say why Mitchell relied on his imagination more and more as his career went on, but perhaps he felt he needed his imagination to help round his stories out. Maybe he got tired of listening to what other people had to say, but was too afraid to rely on his own voice to write fiction.

Arbus did not have to make her subjects up. And she never tired of listening. As Doon Arbus points out in her 1972 piece, her mother told her subjects things she never told her friends or family; a sitting was also an exchange of secrets. Just as her diary entries and letters often describe a life in flux—the girl becoming a woman, the novice becoming an artist—Arbus’s photographs are about the act of transformation, too—a man becoming a woman; a pig once alive, now dead.

She and Mitchell were wrong to call their subjects freaks, a sixteenth-century word that originally meant “sudden turn of mind.” Arbus’s mind was deliberate, not sudden. She did not photograph freaks but characters, citizens in her Manhattan, a city that gets relatively little attention in her work, even though it is everywhere she turned, including inward. “I am not ghoulish am I?” she asked Marvin Israel in a 1960 note:

I absolutely hate to have a bad conscience, I think it is lewd…There was a lady stretched out on the ground…fallen, I think, yesterday weeping and saying to the cops please help me with one shoe off and covered with a blanket waiting for an ambulance which came, on lexington ave and 57th Street. Is everyone ghoulish? It wouldn’t anyway have been better to turn away, would it?

  1. Alexander Nemerov, Silent Dialogues: Diane Arbus and Howard Nemerov (Fraenkel Gallery, 2015), p. 92.

  2. In the wall text to “Arbus Friedlander Winogrand: New Documents, 1967,” the exhibition at the Museum of Modern Art that introduced Arbus to a larger public, John Szarkowski, then head of the museum’s department of photography, wrote of the trio’s pictures: “They are anti-news—or at least, non-news—things as they are rather than things as they should be, could be, or thought to be. Their photographs are not visual ‘no comment’ but rather records of real events offered to an audience who may not always believe the events are that way.” Arbus Friedlander Winogrand: New Documents, 1967, edited by Sarah Hermanson Meister, with an essay by Max Kozloff (Museum of Modern Art, 2017).


More Dangerous Than Trump


Mike Blake/TPX/Reuters. Attorney General Jeff Sessions, San Diego, April 21, 2017

Tangled in self-inflicted chaos, President Donald Trump has been unable to accomplish much during his first four months in office. His signature executive orders have been stymied by the courts; his legislative efforts have stalled; and now he faces a special counsel investigating him over the Russia affair. But Trump’s attorney general, Jeff Sessions, is another story. Even amid the scandal of the firing of FBI director James Comey—an action in which Sessions himself had a central part—Sessions has quietly continued the radical remaking of the Justice Department he began when he took the job. 

On May 20, Sessions completed his first hundred days as attorney general. His record thus far shows a determined effort to dismantle the Justice Department’s protections of civil rights and civil liberties. Reversing course from the Obama Justice Department on virtually every front, he is seeking to return us not just to the pre-Obama era but to the pre-civil-rights era. We should have seen it coming; many of his actions show a clear continuity with his earlier record as a senator and state attorney general.

Sessions has been especially focused, and particularly retrograde, on criminal justice. In the Senate, he was to the right of most of his own party, and led the charge to oppose a bipartisan bill, cosponsored by Republicans Charles Grassley and Mike Lee, that would have eliminated mandatory minimums and reduced sentences for some drug crimes. As attorney general, he has rescinded Eric Holder’s directive to federal prosecutors to reserve the harshest criminal charges for the worst offenders. Sessions has instead mandated that the prosecutors pursue the most serious possible charge in every case. Prosecutors ordinarily have wide latitude in deciding how to charge a suspect—they can select any of a number of possible crimes to charge, decline to pursue charges altogether, or support a diversion program in which the suspect avoids any charges if he successfully completes treatment or probation. Not all crimes warrant the same response, and prosecutorial discretion makes considered justice possible. Yet Sessions has ordered prosecutors to pursue a one-size-fits-all strategy, seeking the harshest possible penalty regardless of the circumstances.

At the same time, Sessions has promised to reduce the Justice Department’s critical oversight of policing. Under previous administrations of both parties, the Justice Department’s Civil Rights Division has responded to reports of systemic police abuse in cities like Los Angeles, Cincinnati, New Orleans, Chicago, Baltimore, and Ferguson by investigating, reporting, and entering “consent decrees”—court-enforceable agreements with local police departments—designed to reduce or eliminate abuse. Before his confirmation, Sessions condemned such consent decrees as “dangerous” and an “end run around the democratic process.” As attorney general, he has ordered a review of all such decrees, expressing concern that they might harm “officer morale,” about which he seems to care more than about the constitutional rights of citizens. In April, Justice Department lawyers voiced skepticism about and sought to delay court approval of a consent decree that had been fully negotiated and agreed to by the city of Baltimore and the Justice Department before Sessions became attorney general. The court rejected the Justice Department request and enforced the consent decree. But such decrees require active monitoring by the Justice Department, and given the attorney general’s outspoken opposition to these agreements, no one should expect him to live up to that responsibility.

When Sessions was a senator, he opposed extending hate crimes protections to women and gays and lesbians, explaining that “I am not sure women or people with different sexual orientations face that kind of discrimination. I just don’t see it.” One of his first acts as attorney general was to withdraw a guidance document protecting transgender students from discrimination. The withdrawal led the Supreme Court to vacate a lower court opinion protecting the right of Gavin Grimm, a transgender boy, to use the boys’ restroom at his Virginia high school. Reportedly Education Secretary Betsy DeVos opposed rescinding the guidance, but was overruled by Sessions.

Sessions was practically the only senator to come to Trump’s defense when, in November 2015, he proposed banning all Muslims from the country. At the time, Sessions actively opposed a resolution introduced by Senator Patrick Leahy that simply affirmed that religious discrimination has no part in immigration enforcement. (The resolution passed 96-4.) Sessions called Islam, a religion practiced by more than a billion people across the world, “a toxic ideology.” Sessions’s involvement in drafting Trump’s first travel ban is unclear; it was issued shortly before he was confirmed as attorney general. But it’s Sessions’s Justice Department that has defended both the first and the second travel bans, arguing that courts should defer blindly to the executive branch and ignore all the anti-Muslim statements that Trump made in connection with the initiatives. Sessions has also warned “sanctuary cities” that if they decline to enforce federal immigration law, as they have the right to do under the Tenth Amendment to the Constitution, he will revoke their federal funding. Thus far, federal courts have declared both the travel bans and the attempt to revoke federal funding from sanctuary cities unconstitutional, but Sessions’s Justice Department is appealing those decisions.

As a US attorney in Alabama, Sessions prosecuted black civil rights activists for helping to get out the vote. The judge dismissed many of the charges even before getting to trial; the jury acquitted the defendants on the rest. When the Supreme Court in 2013 gutted the Voting Rights Act by invalidating a provision requiring states with a history of discriminatory voting practices to prove that any changes they sought to make to voting law would not undermine minority voting opportunities, Sessions called it “good news…for the South.” As attorney general, his Justice Department took the extraordinary step of withdrawing its claim, already fully litigated and developed in trial court, that Texas had adopted a voter ID law for racially discriminatory reasons. The court nonetheless ruled that Texas had in fact engaged in intentional race discrimination. It refused to close its eyes to evidence of racial intent, even if the new Justice Department was willing to do so.

And then there is the matter of ethics. As Alabama attorney general, Sessions oversaw the filing of a 222-count criminal indictment against TIECO, a competitor of US Steel, at a time when US Steel and its attorney were contributors to Sessions’s Senate campaign. Every single count was dismissed, many for prosecutorial misconduct. The judge wrote that “the misconduct of the Attorney General in this case far surpasses in both extensiveness and measure the totality of any prosecutorial misconduct ever previously presented to or witnessed by the Court.” In his confirmation hearings, Sessions committed another ethical infraction when he falsely denied that he had met with Russian officials during the Trump campaign. That lie, once exposed by the press, compelled him to recuse himself from the ongoing investigation of Russia’s meddling in the election and the Trump campaign’s potential collusion. Yet Sessions did not recuse himself when his boss asked for his assistance in firing James Comey, the man overseeing the investigation, shortly after Comey had sought more resources for it. 

The attorney general is the nation’s top law enforcement officer. He is responsible for investigating federal crimes, advising on the appointment of judges and the constitutionality of bills, defending federal government programs, and enforcing the civil rights laws. It’s an awesome responsibility in any administration. But perhaps never before has it been so important, given President Trump’s lack of interest in the rule of law, ignorance of constitutional law and norms, and hostility to basic civil rights and civil liberties. What’s needed at the Justice Department is a strong, independent, and thoughtful leader who can exert some restraint on the president. Instead, we have Jeff Sessions, a man who, when asked whether Trump’s grabbing women by the genitals would constitute sexual assault, replied, “I don’t characterize that as a sexual assault. I think that’s a stretch.”

That’s our attorney general: willing to throw the book at drug offenders and undocumented immigrants, but unwavering in his defense of a president who brags about assaulting women and targeting Muslims. Together, Trump and Sessions pose a profound threat to our most basic freedoms. And because, unlike Trump, Sessions has been able to implement major changes to the agency charged with protecting the rights of all Americans, the attorney general may actually be the more dangerous of the two.              


A Better Way to Choose Presidents


Thomas Dworzak/Magnum Photos
Supporters of Emmanuel Macron celebrating his victory in the French presidential election, Paris, May 2017

Our recent essay “The Rules of the Game: A New Electoral System” [NYR, January 19] provoked thoughtful responses from many readers—in letters to The New York Review, in blog postings and columns, and in private communications. We are grateful to the Review for giving us the chance to reflect on some of the ideas that came up, and also to say something about the French presidential election.

Our essay proposed two improvements to US presidential elections. First, in both presidential primaries and the general election, we would replace plurality rule (in which each voter chooses a single candidate, and the candidate with the most votes wins, even if he or she falls short of 50 percent) with majority rule (in which voters rank candidates, and the candidate preferred by a majority to each opponent wins). Second, we would reform the Electoral College so that nationwide vote totals rather than statewide totals determine the winner.

Currently, all but two states rely on both plurality-rule voting and a winner-take-all system to award Electoral College votes: the candidate with the most votes, no matter how far short of a majority, wins the state and gets all of its electoral votes. By contrast, two states, Maine and Nebraska, use plurality-rule voting but a district-based system to award Electoral College votes. In either case, however, plurality-rule voting is seriously vulnerable to vote-splitting, which arises when candidate A would defeat candidate B in a one-on-one contest, but if candidate C (who appeals to some of the same voters as A) also runs, then A splits the vote with C, giving B the victory.

Vote-splitting has had a profound influence on many presidential elections, for example, in 2000, when Ralph Nader took votes from Al Gore, enabling George W. Bush to win; in 1992, when Ross Perot cut into George H.W. Bush’s support, allowing Bill Clinton to prevail; and in 2016, when Republican candidates such as Marco Rubio, John Kasich, and Ted Cruz divided the mainstream Republican vote in the early primaries and thus gave outsider Donald Trump a path to the nomination.

In view of the unhappy history of plurality rule, some readers have suggested instead using runoff voting, another well-known voting system. Under runoff voting, each voter again chooses a single candidate, but if no candidate gets a majority, the two top vote-getters face each other in a second round. This is the method used for electing presidents in France, but as French history shows, it too is highly subject to vote-splitting.

On April 23, Emmanuel Macron and Marine Le Pen finished first and second in the first round of the French election, and as a result faced each other in the May 7 runoff. However, most available evidence shows that if the third-place finisher, François Fillon, had faced Le Pen head-to-head, he would easily have won (even the fourth-place finisher, Jean-Luc Mélenchon, would quite possibly have beaten her one-on-one). Thus the fact that Macron faced a runoff against Le Pen, as opposed to against Fillon or Mélenchon, seems anti-democratic. (And Le Pen’s post-election claim that she is the main opposition to Macron is clearly inaccurate.) As an extremist, she had been able to “divide and conquer” her way into the final round.

Macron, who was elected president decisively in the second round with 66 percent of the vote, seems likely to be the true majority winner; one-on-one, he defeated Le Pen and probably would have done the same against the other candidates. But French elections don’t always produce a winner who has the most overall support among voters. In 2002, for example, Socialist candidate Lionel Jospin failed to advance to the runoff because he split the left-wing vote with several others and finished third, while incumbent president Jacques Chirac and National Front leader Jean-Marie Le Pen (Marine’s father) came in first and second, respectively. Chirac handily defeated Le Pen in the second round, but the shocking thing was that Le Pen was in the runoff rather than Jospin. Not only would Jospin have easily defeated Le Pen in a two-man race, but he might have beaten Chirac head-to-head as well. There’s a good chance that the wrong man—in this case, Chirac—was elected president.

By contrast, majority rule avoids such vote-splitting debacles because it allows voters to rank the candidates and candidates are compared pairwise: if a majority of voters rank candidate A ahead of B, this ranking holds whether or not C runs too, and so there is no sense in which C can take votes away from A. Several readers have suggested going a step further by having voters grade candidates (say, on a scale of 1 to 5) and electing the candidate with the highest average score—much as gold medals are awarded in Olympic diving. But there is a big difference between grading in the Olympics—where standards are clear and judgments reasonably impartial—and grading in politics, where criteria are highly variable and personal. Thus we doubt that grading schemes could work successfully in political elections: grades would have no common meaning, and voters would have strong incentives to distort the grades they award candidates.
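The pairwise logic of majority rule can be made concrete in a few lines of code. The sketch below is ours, not the essay authors’; the candidate names and vote counts are invented to illustrate how vote-splitting lets a plurality “winner” lose every head-to-head contest:

```python
def majority_rule_winner(ballots, candidates):
    """Return the candidate whom a majority prefers to every rival
    (the Condorcet winner), or None if no such candidate exists,
    as can happen when majority preferences cycle.
    Each ballot ranks candidates from most to least preferred."""
    def beats(a, b):
        # a beats b if a majority of ballots rank a ahead of b
        a_ahead = sum(1 for ballot in ballots if ballot.index(a) < ballot.index(b))
        return a_ahead > len(ballots) / 2

    for c in candidates:
        if all(beats(c, rival) for rival in candidates if rival != c):
            return c
    return None

# Vote-splitting illustration: A and C appeal to an overlapping 60% of
# voters, so under plurality rule B "wins" with 40% of first choices,
# even though A beats both B and C head-to-head.
ballots = (
    [["A", "C", "B"]] * 35 +
    [["C", "A", "B"]] * 25 +
    [["B", "A", "C"]] * 40
)
print(majority_rule_winner(ballots, ["A", "B", "C"]))  # prints A
```

Here A is ranked ahead of B on 60 of 100 ballots and ahead of C on 75 of 100, so A wins every pairwise contest regardless of whether C runs, which is exactly the sense in which ranking removes vote-splitting.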

The most obvious rationale for reforming the Electoral College is to make it conform to the principle of “one citizen, one vote” (as one reader put it). The Electoral College under current rules violates this principle; a vote by a Californian doesn’t count the same as one by an Ohioan. A number of other readers have pointed out, however, that there is a more subtle reason for reforming the Electoral College, one connected to majority rule.

Because it reduces vote-splitting, majority rule would encourage more major candidates to run in the general election. For example, under the existing system, Michael Bloomberg and Bernie Sanders had a powerful disincentive to run as independent candidates in the general election last fall because of the overwhelming likelihood that they would have split Hillary Clinton’s vote and handed the election to Donald Trump. But under majority rule, they could have run without this fear.

There is a risk that the presence of additional major candidates might prevent any one of them from getting 270 votes in the Electoral College. This could be avoided by amending the Electoral College system so that the winner is the candidate who wins the nationwide vote under majority-rule voting. Such a change could be instituted, for example, by revising the National Popular Vote Interstate Compact initiative, in which a state pledges to award its electoral votes to the winner of the national popular vote as long as states totaling at least 270 electoral votes make the same pledge. (The compact has already accumulated states worth 165 electoral votes.)

Specifically, we suggest that the national popular vote winner be defined as the national majority-rule winner (not the plurality-rule winner). Such a winner can be said to truly reflect voters’ preferences. In our view, this is the most important reform to aim at.


The Achievement of Chinua Achebe


Eliot Elisofon/National Museum of African Art/Smithsonian Institution
Chinua Achebe at his house in Enugu, Nigeria, 1959

The genius of Chinua Achebe, like all genius, escapes precise analysis. If we could explain it fully, we could reproduce it, and it is of the nature of genius to be irreproducible. Still, there has been no shortage of attempts to explain his literary achievement, an achievement that starts with the fact that Things Fall Apart (1958), the first of the novels in his “African trilogy,” defined a starting point for the modern African novel. There are, as critics are quick to point out, earlier examples of extended narrative written in and about Africa by African writers. Some of them—Amos Tutuola’s The Palm-Wine Drinkard (1952), Cyprian Ekwensi’s People of the City (1954), to name but two also written by Nigerians—remain eminently worth reading. But place them beside the work of Achebe and you will see that in his writing something magnificent and new was going on.

One reason for this, which often passes without notice, is that Achebe solved a problem that these earlier novels did not. He found a way to represent for a global Anglophone audience the diction of his Igbo homeland, allowing readers of English elsewhere to experience a particular relationship to language and the world in a way that made it seem quite natural—transparent, one might almost say. Achebe enables us to hear the voices of Igboland in a new use of our own language. A measure of his achievement is that Achebe found an African voice in English that is so natural its artifice eludes us.

The voice I am talking about is, first of all, the narrative voice of the novel. Consider the scene, early on, when Okonkwo, a young man whose father has left him no inheritance, has come to ask for the seed yams he needs to begin his career as a farmer. Custom requires a general conversation before Okonkwo can turn to his business, and in the course of it someone tells an amusing story about a palm-wine tapper whose father, like Okonkwo’s, was poor. “Everybody laughed heartily,” Achebe writes, “except Okonkwo, who laughed uneasily because, as the saying goes, an old woman is always uneasy when dry bones are mentioned in a proverb. Okonkwo remembered his own father.” The point of view here is Igbo, but Achebe has allowed us to inhabit it.

This invocation of shared proverbial wisdom is also found in the direct speech of the characters. Okonkwo’s father, who is always greatly in debt, explains a little earlier in the novel why he cannot repay a loan to a friend who needs his money back. “Our elders say that the sun will shine on those who stand before it shines on those who kneel under them. I shall pay my big debts first.” As someone who has struggled over the years to translate proverbs from my father’s Asante language, I know how hard it is to make this proverbial way of speaking, this traditional form of argument, available in English. In these novels, both in the direct speech of Igbo characters and in the voice of the novel itself, we come to understand, appreciate, and accept the naturalness of this mode of speech and of thought. This allows us to enter an unfamiliar world as if it were our own. As James Baldwin put it: “When I read Things Fall Apart which is about…a society the rules of which were a mystery to me, I recognized everybody in it.” It is a mark of Achebe’s success that many of the African writers who followed him took up his way of representing the speech-world of their own societies.

Achebe was always clear that he saw the task of the African writer in his day as providing a counterblast to the misrepresentation of Africa in the European writings about the continent he had studied in his English literature classes in college. What was missing in all of them, he thought, was a recognition of Africans as people with projects—lives they were leading, aspirations they were striving for—and a rich existing culture, exemplified in the proverbs and the religious traditions that are threaded through these novels. He was writing, as he often said, against the Africa of Joseph Conrad’s Heart of Darkness. In one of his lively polemics against Conrad, Achebe comments on a few sentences from that book:

This passage, which is Conrad at his best, or his worst, according to the reader’s predilection, goes on at some length through “a burst of yells,” “a whirl of black limbs,” “of hands clapping,” “feet stamping,” “bodies swaying,” “eyes rolling,” “black incomprehensible frenzy,” “the prehistoric man himself,” “the night of first ages.” And then Conrad delivers the famous coup de grâce. Were these creatures really human?

Writing in Nigeria at the beginning of a new period of independence, Achebe believed that the writer’s contribution was to give his or her people a usable past, to recover their dignity in the face of a colonial culture that deprived them, in moments like these, of a decent self-respect. He wanted not to deny that colonization had changed his homeland deeply and irrevocably but to claim that, despite all this, there were profound continuities with the precolonial past to draw on.

The Igbo encounter with the British had begun in the 1870s, only three decades before formal colonization. British administration of Nigeria was imposed at the start of the twentieth century and ended with independence in 1960, so Achebe’s birth, in 1930, came almost exactly halfway through the colonial period. In his trilogy, he explores three periods in almost a century of the Anglo-Igbo encounter: the first arrival of the British in Things Fall Apart; the period of established colonial rule around the time of his own birth, in Arrow of God; and the last days of empire in No Longer at Ease, in each case through the eyes of Igbo protagonists.

One central strand of Achebe’s recovery of the past in these novels is an Igbo philosophy that is expressed in a proverb he offered up in No Longer at Ease: “Wherever something stands, something else will stand beside it.” Achebe often used this proverb in discussing his work, and he explained its significance once in an interview: “It means there is no one way to anything….If there is one God, fine, there will be others as well…. If there is one point of view, fine. There will be a second point of view.” The characters of his novels get into trouble in large measure because they fail to acknowledge this pluralistic vision.

Okonkwo’s crises in Things Fall Apart reflect his rigid adherence to a view of Igbo tradition that fails to recognize its supple flexibility. Though the arrival of Christian missionaries and colonial authority plays a part in the novel—and especially in its denouement—the dramas of his life depend largely on his refusal to recognize the proper place of the feminine virtues, as Igbo tradition conceives them: peace, patience, and gentleness. All these, along with fertility, are attributes of the Earth goddess. And it is an offense against her that begins his tragic descent.

Ezeulu, the Chief Priest who is the main character of Arrow of God, is also inflexible in his pursuit of the commands of Ulu, the god he serves, as he understands them. Here again, though the novel’s final episode involves an encounter between Ezeulu and colonial authority, the central struggle in the book is between two forces within Igbo society, the new Christians and the servants of the old gods. And once again, we can say that Ezeulu falls because he does not recognize that “wherever something stands, something else stands beside it.”

It is only in No Longer at Ease that the opposition between the world of colonialism and older Igbo values takes center stage. Its main character, Obi, is the grandson of Okonkwo. The people of his hometown have banded together to send him to England for an education. When Obi returns to a job in the colonial administration, they expect him to share with them the fruits of his education. His alienation from their world precipitates a series of crises, as he tries to balance his obligations to them with his own, rather different values. But in this novel, too, Achebe represents the duality of Igbo society through the tensions between the new Christianity, represented by his father, and the traditions of Igbo narrative, which he learns from his mother. And Obi falls in the end in part because he sees an either-or in a situation that demands a both-and.

T. S. Eliot (whose poem “The Journey of the Magi” provided the title of No Longer at Ease) once said he doubted “whether a poet or novelist can be universal without being local too.” I can think of no literary work that more persuasively confirms this judgment than Chinua Achebe’s trilogy, which evokes for us the local world of Igboland while exploring themes that are recognizable to us all. Achebe, by inviting us into his world, expands our own.


Adapted from the foreword to Chinua Achebe: The African Trilogy, published in a new edition by Penguin Classics.
