The Candidates Laid Bare

Donald Trump and Hillary Clinton during the debate at Hofstra University, Hempstead, New York, September 26, 2016 (Rick Wilking/Reuters)

Donald Trump’s performance in the first presidential debate Monday night left many commentators perplexed. He was sufficiently ill-prepared, dishonest, petulant, and finally out of gas to have sunk a normal candidate in a normal year. He showed us the lazy and arrogant Trump, Trump the bully, the Trump of the short attention span. Clinton, on the other hand, was polished and prepared—but not, as some of her followers had feared, over-prepared. She was unrattleable. Aware when Trump was speaking that the camera was trained on her as well, she kept her facial expressions under control and mainly looked bemused. When she was speaking, he made faces of scorn and irritation; and he often interrupted her and even talked over the moderator, Lester Holt, which isn’t done. Some commentators withheld judgment at first about how the debate went over with the public, even though they believed that Trump had done very badly, because so many of them had gotten it wrong in the primary debates: most press observers thought he’d behaved horribly in the South Carolina debate but then he won South Carolina by a large margin. It should have been evident, though, that the voters in the general election aren’t like the ones in the Republican primaries—and that’s Trump’s challenge now.

Trump did badly Monday night with focus groups of undecided voters in Ohio, Florida, and Pennsylvania, and a poll released Wednesday by Politico/Morning Consult showed Clinton gaining three points and Trump losing one—just what she needs. Fourteen percent said they were undecided. In an NBC/SurveyMonkey poll, 52 percent of respondents said that Clinton had won the debate, compared to just 21 percent for Trump. As the week went on, new polls were consistently showing gains for Clinton, putting her ahead of Trump in national measurements by two to seven points. This doesn’t tell us what the effect was on the swing states, though the trends in those states would tend to reflect what’s happening at the national level.

In the first half hour (when he was relatively coherent), Trump pitched his comments to his supporters in the rust belt, slamming Clinton’s (and her husband’s) record on free trade agreements and the moving of plants and therefore jobs overseas. As usual, he proceeded from some faulty assumptions—US jobs aren’t fleeing overseas to the extent he says—and offered only hazy solutions. It’s highly unlikely, to say the least, that existing trade agreements can be renegotiated to make them more favorable to the United States. Trade plus immigration have been the principal rationales of his campaign from the beginning, but the strong support of blue-collar white men without college degrees isn’t sufficient to get him elected; Trump has to appeal to women, but it’s hard to see what he said in the debate that would cause them to want to support him. He did get Clinton somewhat on the defensive on trade, her opposition to trade agreements having come lately. Perhaps Trump’s most effective line, repeated from time to time throughout the debate, was that Clinton had been in government for thirty years, so how come she didn’t get the things done that she’s now advocating? The line made no literal sense but it served to underscore one of his main attractions to voters, that she’s the insider and he represents change.

While Trump was clearly winging it after the beginning, Clinton had an effective plan that she executed flawlessly. Her strategy was based on her belief that she could defeat Trump in the election on the basis of his character and personality. Her aides said that the goal was to try to jack up the enthusiasm of her supporters and also reach out to uncommitted suburban women and millennials who have yet to back her. First she unnerved Trump by questioning his business prowess (Trump, like many braggarts, has a notoriously thin skin), pointing out that his father had loaned him $14 million to start his own business. Trump repeated that he’d been given a “small loan.” Then she went at the way he’d “stiffed” small business people whom he’d contracted to work on his buildings, his “long record of engaging in racist behavior,” and his derogation of women. She told the stories of real people whom he’d mistreated.

To be frank, Trump didn’t seem very smart in the way he handled the debate. He walked into every trap that Clinton set for him. Perhaps he’s so unself-aware that he thought he was doing just fine. He let Clinton lead him into insulting women once again—out of nowhere he denigrated Rosie O’Donnell, whom he’d already insulted for no apparent reason in the first debate in the primaries. But the event that was to live on after the debate was Clinton’s summoning up the story of twenty years ago when Trump—who, Clinton made a point of saying, liked to sponsor and hang around beauty pageants—had insulted Miss Universe, Alicia Machado (“Donald, she has a name”), for having gained weight after the pageant, calling her “Miss Piggy” and also “Miss Housekeeping”—an apparent reference to the fact that she’s Latina (from Venezuela).

Trump’s reaction was puzzlement—“Where did you find that?” But, as is his wont, Trump made things worse for himself, just as he’d done with the Khans, the Muslim parents of a US army captain killed in Iraq, and with Judge Gonzalo Curiel, an American of Mexican heritage, by going further. The next morning on Fox and Friends, he attacked Machado again, saying she’d gained “a massive amount of weight” and had been “the worst contestant” ever, “a real problem.” He thereupon invited the wrath of women everywhere who’d ever had weight problems and probably their parents, as well. And then there are the Hispanics, who can’t have liked this. As the Khans did, Machado, who is now an American citizen, is making the rounds of the talk shows.

Trump not only walked into traps; he gave the Clinton campaign fresh material. His interpolations as Clinton made some of her charges would prove particularly useful. The Clinton campaign plans to hammer Trump over his unintended admission that he hadn’t paid any taxes last year, made when he butted in with the comment that that was “smart,” and over his interjection that his cheering on the impending housing crisis in 2008 was “business.” As for Clinton’s mention that she’d invited to the debate the architect of a clubhouse on one of his golf courses who hadn’t been paid, Trump employed the standard excuse, “Maybe he didn’t do a good job.”

Clinton spoke to millennials and suburban women when she charged that Trump had called climate change a hoax invented by the Chinese. (Trump lied in denying that he’d done that.) To African-Americans Clinton dwelt on the implications of Trump using the “racist” charge about Obama’s birth to get his start in national politics—and then pressing this “birther” myth for five years. Trump’s abrupt concession on September 16 that “President Obama was born in the United States” did him little good. Trump’s turnabout, which his aides had urged him to take care of before the debate, came in a very brief press conference during which he showed off his new hotel in Washington; it only reminded people how extensively he’d pushed the baseless rumor to delegitimize the first black president, rumor-mongering that is understood to have very much bothered the usually unflappable Obama. Trump’s lie about Clinton having started the birther rumor was the last straw for a press corps already frustrated by Trump’s constant lying. It was in articles about this press conference that the word “lie” began to appear, and in time major outlets—The New York Times, The Washington Post, Politico, and The Los Angeles Times—started to publish running accounts of Trump’s lies. Trump bragged during and after the debate that he’d forced Obama to release his birth certificate; at the time Trump had questioned whether what Obama released was real. Perhaps Clinton’s single most effective line in the debate—clearly a rehearsed one—was, “Donald, I know you live in your own reality.”

In the debate Trump showed once again that he doesn’t understand the purpose and benefits of our international alliances or nuclear policy or how the Federal Reserve works. But, typically, Trump was incapable of admitting that he hadn’t done such a good job—he claimed victory—and had to offload the blame onto others.

In his appearance on Fox and Friends, Trump complained that his mic hadn’t worked well and was scratchy—perhaps a conspiracy against him—though no one said they heard such noises or had trouble hearing him in the hall. He accused Holt, the moderator, of asking him the harder questions. Trump was at his smarmiest in suggesting toward the end of the debate that he was going to say something “extremely rough to Hillary, to her family, and I said to myself, I can’t do it.” Afterward, in comments in the “spin room” and wherever he could the next day, he was more explicit, saying that he’d held back from bringing up Bill Clinton’s past affairs because Chelsea, a friend of Ivanka’s, was in the hall. (Though presumably Chelsea would have heard it on television had she been elsewhere.) Like a chorus, Trump’s surrogates, RNC chairman Reince Priebus and Rudy Giuliani, praised him for being so restrained about bringing up a very sensitive matter, with Giuliani, seemingly Trump’s id, going further in talking about Bill’s affairs, mentioning Monica Lewinsky—if anyone needed reminding. After the debate and also on Morning Joe the next day campaign manager Kellyanne Conway praised Trump’s saintly restraint. On Wednesday, Trump’s son Eric said his father had shown “courage” in not bringing up Bill Clinton’s sex life. How all this would help Trump with undecided voters was unclear.

Among the few actual substantive arguments between Clinton and Trump was one over stop-and-frisk policing, in particular in New York. The tactic had been instituted by Giuliani during his tenure as New York mayor and Giuliani had convinced Trump to back it. But a federal judge had ruled that the process amounted to racial profiling and while Giuliani’s successor, Mike Bloomberg, appealed the ruling, Bill de Blasio dropped it when he came into the mayoralty. De Blasio maintains that after the practice was stopped violent crime in New York continued to decrease.

In the commentary after the debate, to mask how terribly Trump had come off—unlike anything I’d ever seen in six decades of presidential debates—Conway raised the bar by saying that Clinton had failed to knock him out. It wasn’t clear how this was supposed to happen: Trump throwing up his hands and saying “I give up?” The boxing metaphor for debates is one of the things wrong about them. My note to self afterward was to never assume what Trump will do on a major occasion. During the Republican convention I was certain that Trump would act like a statesman when he delivered his acceptance speech and I was sure that in the first debate he would be low-key, “presidential,” and even a bit sensitive toward others. Never mind. He was himself and that’s just as well. The public saw the real Trump. Anyway, is he a good enough actor and sufficiently disciplined that he can play “presidential”?

Patrons at McGregor’s Bar and Grill during the first presidential debate, San Diego, California, September 26, 2016 (Sandy Huffaker/TPX/Reuters)

The debate took place against the backdrop of an essentially tied presidential race, with the press having declared that Trump had the “momentum.” By the third week of September, battleground states where Clinton had been seen as safely ahead—Colorado, Pennsylvania—were suddenly tied. (Ohio was tilting toward Trump.) The Clinton campaign had greatly outspent Trump on advertising, almost all of it on ads challenging Trump’s character and fitness to be president. During August, confident that Clinton would carry Virginia and Colorado, her campaign ceased advertising in those states. The dead heat in Pennsylvania was particularly worrisome to the Clinton camp since they had counted on carrying the state in order to prevent Trump from reaching 270 electoral votes; the shift in Colorado was alarming because voters there largely fit the profile of Clinton supporters: younger, more educated, and a high percentage of them Hispanics. Clinton’s performance on Monday brought considerable relief to her supporters.

Clinton’s problem is that she hasn’t been attracting new followers in large numbers and many of those she already has have been lukewarm. Low turnout in November could be a big problem for her. Something about Hillary Clinton just doesn’t sell. While she’s widely accused of being an inveterate liar, I’ve said before that her lies in this campaign have been about its “damn spot”—the private email server. Her behavior over the server reminded people of her evasiveness in her years as First Lady. Clinton can be cold and off-putting, but she can also be very warm; of late she’s been much less packaged and more spontaneous. But first impressions tend to stick. One’s reaction to her depends on which Hillary one knows. (I’ve met both over the years.) There’s no doubt that the “tough woman” puts some people off, but what else do they want in a president?

Clinton also has a couple of political problems that aren’t her fault: after Bernie Sanders spent months during the primaries attacking her as a tool of Wall Street and part of a corrupt system, he has yet to convince a great many of his followers to support her. (Now Trump is picking up Sanders’s sly demand that she release the transcripts of her speeches to Goldman Sachs: Sanders knew full well that anyone who gives a speech to a group—no matter what it’s paying—is likely to flatter them at the outset; the demand encouraged the naïve thought that a politician would cut a deal with a group in a speech attended by numerous people.) Sanders’s recent appearances on her behalf—he did an event with her Wednesday in New Hampshire to talk about the plan she adapted from his of offering free public college education and lowering student debt—will test whether he can persuade many of them to back her. In the end, Sanders and Clinton come from different political places so it’s not at all clear that he can. Sanders had a vision, while, puzzlingly, Clinton has yet to offer one. Her campaign’s slogan, “Stronger Together,” isn’t exactly a vision. But she badly needs a hefty portion of the millennials who are Sanders’s major constituency.

A second problem is that Clinton is running as the nominee of the party that’s controlled the White House for eight years; voters in this country have a pattern of electing the other party after two terms—a pattern even more pronounced when the incumbent is a Democrat. And there’s an unknown factor that should be mentioned: no one can tell now how many blacks and Hispanics, as well as elderly people and students, will be blocked from casting a ballot by the voter ID laws that are still popular in Republican-governed states.

A further question is how much staying power the third-party candidates—Gary Johnson, of the Libertarian Party, and Jill Stein, of the Green Party—will have. While Stein is but a blip, scoring at most three points in important states, Johnson, with the better-known Bill Weld as his running mate, is on the ballot in all fifty states and could make the difference in such states as Colorado, New Mexico, Nevada, and New Hampshire. Johnson is known as a bit of an odd duck. His stunning appearance on Morning Joe, when he was asked what, as president, he would do about Aleppo and he drew a blank, was just one sign of an unserious figure whose real role is to muck up the presidential race. When confronted on Meet the Press with the fact that he couldn’t win but could affect the outcome, he replied with insouciance, “Some parties need wrecking.”

The reckless egotism that leads some people to put themselves in a position to distort the outcome of a presidential race—as Ralph Nader did in 2000—is quite remarkable if not very admirable. What’s most disturbing is that by offering the illusion that they can affect policy, which they’re not strong enough to do, they can draw younger people into a hopeless crusade and end up increasing their cynicism. According to a recent New York Times/CBS poll, over a third of voters aged eighteen to twenty-nine said that they’d vote for Johnson or Stein, and 10 percent said that if the choice was only Clinton or Trump they wouldn’t vote. This was twice as many as in any other age group. It’s widely thought that Johnson’s numbers will go down as people get closer to actually casting a vote and realize that they could be helping elect Trump. Nader cost Al Gore the 2000 election, one of the most fateful ones in our history—perhaps to be eclipsed by the current one. In a Times account, several millennial voters told reporters they were too young to remember Nader.

The week leading up to the debate showed how issues can whang into an election campaign, dominate the discussion and coverage for a few days—until the next one occurs. First we had, on September 17, the bombings and attempts at more of them in New York City and New Jersey. Terrorism! And on the eve of the UN General Assembly in Manhattan, in the communications capital of the world. And then a few days later, this was supplanted by the police shootings of black men under ambiguous circumstances in Tulsa and Charlotte. A third such shooting occurred on Wednesday, in San Diego.

The assumption—wrong in the event—that Trump would go low-key in the debate was encouraged by his response to the Tulsa shooting of a black man with his hands up. Trump addressed this in a soft voice and with evident sympathy for the victim. But the fabulist in him took it further, offering a character assessment of a man he didn’t know. “He looked like a really good man.” This appeared to be the first sign of what promised to be, or so we were given to think, the Great Softening. Unless one counts Trump’s generic statement in August that he “regretted” if he’d said the wrong words (about whomever) and had “caused personal pain” (to whomever). Trump’s sudden turn, such as it was, was believed to have come especially at the urging of Kellyanne Conway, whose mission was to make him more acceptable to white women. (No one—despite his rhetoric, not even Trump—expects him to pick up the votes of minorities in significant numbers.)

But while Tulsa produced an unaccustomed softness in Trump, the demonstrations in reaction to the Charlotte shooting produced his more bombastic side, the one that appeals to white supremacists. In his comments about Charlotte, Trump was summoning up his Nixon “law and order” routine, first introduced last summer by his former campaign chairman Paul Manafort and a major theme at the Republican convention.

Claiming without evidence that the Charlotte demonstrators were using drugs, he blamed the riots on Obama and Clinton. Obama had shown “weakness,” Trump said, while Clinton “shared directly in the responsibility for the unrest” by criticizing the police. Criticism of law enforcement, actual or implied, had been a divisive point in our politics almost from the outset of the Obama presidency, when the president took the police in Cambridge, Massachusetts to task for arresting Harvard professor Henry Louis Gates for trying to enter his own house. Ever since, Obama has trod carefully in matters involving the police and blacks; even when he attended the memorial for five Dallas police officers shot by a lone gunman, he was criticized by some for his mention of blacks murdered by police and his reference to slavery.

Now the first presidential debate dominates the discussion; it’s often the case that the aftermath is as important as the debate itself but this time the result wasn’t ambiguous. Only Trump, being Trump, and a few flunkies declared outright victory. (Inexplicably, Conway had built up expectations before the debate, calling him “the Babe Ruth of debaters.”) Trump’s show on Monday night gave the lie to the notion of how deftly he’d dispatched sixteen skilled politicians in the primaries.

Trump’s closest aides and advisors know that they have a problem. Leading congressional Republicans hid out from the press rather than comment on his handling of the debate. The word has gone out from the Trump camp that the next one will be different. There won’t be some dozen people briefing him, as was the case this time; Roger Ailes, who couldn’t get Trump to rehearse the situation by standing at a podium and responding to someone playing Clinton, will take a more commanding part, though the next “debate” is in the form of a town hall. But will Trump’s attention span suddenly grow? Could he somehow come across as well informed? His camp agreed that in the first debate he let some big subjects go by—he didn’t know enough to bring something up even if he hadn’t been asked about it. But the efficacy of the subjects they listed is questionable: Benghazi (which seven congressional committees had investigated and come up empty); the email server (the public seems tapped out on that subject); Obamacare (more promising). If Trump really thinks that he’ll be greatly aided by bringing up Bill Clinton’s affairs, as he says he intends to, well, what can one say?

By rights the results of a debate—one event lasting about an hour and a half—shouldn’t supplant months of campaigning by the candidates. Small and sometimes inconsequential things that happen in this kind of forum can lead to large conclusions: Richard Nixon’s perspiring; Michael Dukakis’s mechanical response to a hypothetical question positing that his wife had been raped and murdered; George H. W. Bush’s looking at his watch (perhaps he just wanted to know how much time was left to make certain points); Al Gore’s sighs; Reagan’s canned one-liners (“There you go again”). But what was different about the first debate of this election, with an audience of eighty-five million—the most-watched presidential debate ever—is that it was more revealing about character and characteristics of the two nominees than any debate in modern history.

Part of Elizabeth Drew’s continuing series on the 2016 election.

In Saudi Arabia: Can It Really Change?

Saudi Defense Minister and Deputy Crown Prince Mohammed bin Salman, right, with Omani Defense Minister Badr bin Saud al-Busaidi and US Secretary of Defense Ashton Carter at the US–Gulf Cooperation Council summit in Riyadh, April 2016 (Fayez Nureldine/AFP/Getty Images)

Until the Wahhabi conquest of the Arabian peninsula at the turn of the last century, the mixture of sects there was as diverse as it was anywhere in the old pluralist Middle East. In its towns there lived, among others, Sufi mystics from the Sunni branch of Islam, members of the Zaidi sect, which is linked with the Shia branch of Islam, Twelver Shia traders, and seasonal Jewish farmhands from Yemen.

From the eighteenth century onward, successive waves of warriors from the Wahhabi revivalist movement, formed from Sunni tribesmen in the hinterland, have struggled to enforce a puritanical uniformity on the cosmopolitan coast. Toby Matthiesen recounts in The Other Saudis that, a few years after taking the eastern shores of the peninsula from the reeling Ottomans in 1913, Wahhabi clerics issued a fatwa obliging local Shias to convert to “true Islam.” In Hijaz, the western region that includes Mecca, Medina, and Jeddah, militant Wahhabi clerics and their followers ransacked the treasuries of the holy places in Mecca, lopped the dome off the House of the Prophet in Medina, and razed myriad shrines.

But their success was only partial. In 1930, when the Wahhabi Brethren began raiding Iraq and Jordan and upsetting the region’s British overlords, Abdulaziz al-Saud, the modern state’s founder, reined them in, slaughtering the zealots by the hundred.

Afterward, the peninsula regained much of its old tempo. Shia clerics applied their versions of Islamic law in the east. Jeddah’s newspapers continued to publish listings of Western as well as Islamic New Year’s Eve celebrations, cinema screenings, and concerts. Then, in 1979, apparently inspired by the Iranian overthrow of the Shah and the establishment of an Islamic republic earlier that year, Islamic militants stormed Mecca’s Grand Mosque, the holiest place in Islam, and declared a new order under a leader who proclaimed himself the Mahdi—the redeemer—and sought to replace the Saudi monarchy. Wahhabi forces loyal to the monarchy counterattacked, saved the al-Sauds, and retook the mosque. But a crucial deal was made: loyalist clerics approved the removal of the militants by force; but in return demanded that Saudi royals cede them power to strictly control personal behavior. The last cinemas and concert halls shut down. Women were obliged to shroud themselves in black.

Thirty-five years later, foreign descriptions of Saudi Arabia remain for the most part remarkably bleak. The writers of all four books under review examine the domination of the al-Saud dynasty with the fascination with which a zoologist might regard a black widow snaring its prey. Pascal Menoret describes young men whose only escape from Riyadh’s Islamist social strictures is the homoerotically charged practice of joyriding down the city’s grim highways. Matthiesen describes the often difficult lives of two million Shias in eastern Saudi Arabia—many of them employees of oil companies—whose right to practice their form of Islam contracts and expands according to royal whim. Paul Aarts and Carolien Roelants describe the suppression of Saudi women, who still need a man to study, work, travel, or open bank accounts. Simon Ross Valentine is appalled and fascinated by the power of Wahhabi clerics; he stays behind after a clumsy public decapitation to watch a mosque steward hose down the blood. Yet through all of these recent books comes a nagging question: If Saudi Arabia really is the wellspring of ISIS and if it imposes, as it often does, an orthodox conformity, how, a century after its creation, does the kingdom these authors describe remain, as they also make clear, such a heterogeneous and nuanced place?

Each of the authors acknowledges the gap between the totalitarian ideal and the looser reality. “Wherever I lived in [the Kingdom of Saudi Arabia],” writes Valentine in a chapter entitled “Serpents in Paradise,” “I was not only offered drugs and alcohol, but also ‘woman, for good time.’” Aarts is surprised by a portrait gallery violating sharia injunctions against figurative art. There are plenty of censors, but the Internet and satellite TV, he found, have made them obsolete. Menoret records how the joyriders have turned the uniform urban grids into an escape route from state planners and authoritarian governors as they speed down the streets.

Most striking of all is Matthiesen’s meticulous portrayal of contemporary Shiism. He describes how the Shia residents of the Eastern Province are treated as second-class citizens; but he makes it clear that they are also able to stage Shia ritual processions through the streets, and how their ayatollahs maintain networks of close relations with one another and with their Iranian counterparts that “allow for a certain independence from the state.” Some have opened hawzat, or theological colleges, including one for women. Such open displays of Shia religiosity and autonomy make many a Wahhabi cleric writhe. But they survive nevertheless.


In January, I went with my editor in chief from The Economist to Saudi Arabia to meet Mohammed bin Salman, a young, previously little-publicized royal, known to his courtiers as MbS. Upon his aging father Salman’s accession to the throne in January 2015, he rose to become deputy crown prince, minister of defense, and de facto ruler. We met with him at an inauspicious time. He had marked the New Year by executing forty-seven people—including forty-three Sunni jihadists and four Shias—the kingdom’s largest group of executions since the crackdown that followed the retaking of Mecca’s Grand Mosque in 1979. Throughout the meeting, the young prince watched reports of the executions on a large television screen—seeming to confirm the caricature of himself on social media as a teenager who played at brutal statecraft as if it were a video game. Iranian protesters had stormed the Saudi embassy in Tehran, and in response MbS promptly severed relations with Iran. “We try as hard as we can not to escalate anything further,” he told us at dinner, while his acerbic foreign minister, Adel al-Jubeir, portrayed his masters as valiantly defending against the Persian Empire’s march west.

Mohammed bin Salman’s treatment of domestic affairs seemed as headstrong as his treatment of foreign ones. Apparently in return for sanctioning the youngster’s accumulation of power, the clerical establishment secured the dismissal of the country’s first female minister, appointed in laxer times by Abdullah, the late king. Religious police resumed their raids on private premises. A young female accountant told us how they had detained a male colleague sharing her office, in violation of their codes. A spring festival in the south was shut down after prepubescent girls joined in a folkloric dance. McDonald’s revamped its fast-food franchises, and renovated signs segregating their counters and seating areas by sex.

At literary salons, writers recounted stories of people jailed for blaspheming. Some were fed watermelon to fill their bladders, they said, and then had their penises tied. In November 2015 Ashraf Fayadh, a Palestinian poet raised in Saudi Arabia, was sentenced to death for voicing religious doubts. “I am Hell’s experiment on the Planet Earth,” he had written in his offending volume of poems. (After much international protest and a worldwide reading of his poems, a panel of judges upheld the verdict of apostasy but commuted the sentence to eight years in prison and eight hundred lashes.) “For the first time in my life, I’m truly afraid,” a news editor told me. The dearth of names in this review is testimony to how nervous even prominent figures have become.

Having proven his conservative and repressive capabilities, MbS tacked leftward. Earlier this year, after the executions, he stripped the special unit of the morality police of its powers to arrest people and locked up popular preachers who dared challenge this change. Among them was Abdul-Aziz al-Tarifi, a prominent televangelist, who sneered, “There are some rulers who think that renouncing their religion to satisfy infidels will put an end to the pressures on them.” News of his arrest soon after was tweeted 22,000 times.

Similarly dismissive of tradition, Mohammed bin Salman pointedly gave his first on-record interview to my editor in chief, an unveiled Western woman who rejected the black abaya our minders wanted her to wear. He received her in the living room of his out-of-town rest house in a renovated desert fortress. The daggers of old battles hung from the wattle-and-stucco walls above them. iPads lay strewn on the coffee table in front of them. He expressed views in favor of reform. Curing the kingdom’s oil “addiction” would require diversifying its economy, he said, which in turn might require modernizing its rigid hierarchies. Women should have a more productive part in the kingdom’s economy. Migrants should have the rights of residents. (“All nationalities?” an alarmed interviewer on Al-Arabiya, a Saudi-owned Arabic-language channel, asked him later. “Without a doubt,” replied MbS without a twitch.)

A relaxation of the social code would have economic advantages. To discourage his citizens from frittering away their earnings on trips to Dubai or Beirut (both capitals where women can drive and people freely drink), the Saudi kingdom, he said, should construct its own tourist resorts, to keep the money at home. As part of his $5 billion plan to develop the country’s entertainment sector, he told us, he would build theme parks and resorts on the kingdom’s untouched islands in the turquoise Red Sea. Saudi pop stars—“the best in the Arab world”—who performed in the kingdom in his father’s youth might soon be allowed to perform there again, perhaps before the year’s end. Footage from Mecca during his grandfather’s reign showed women riding on camels, beating drums, and selling wares in the marketplace. They might yet do so again.

MbS’s new education minister, an academic whose book Wahhabi censors had banned for criticizing clerical control over curricula, spoke of breaking the preachers’ stranglehold by opening branches of American universities in the kingdom. An Information Ministry official showed me architects’ drawings for a Royal Arts Complex that he said would wean the kingdom off “ISIS values.” Saudi Arabia had closed its last public cinema in the 1970s, but the new complex would have both a movie theater and an opera house. One day, the official said, it might stage La Bohème. “We want to break the social resistance that prevents women driving, provide an alternative to the conservatives, and work gradually to eliminate extremism,” he told me. Another official added that MbS’s recent acquisition of a $3.5 billion stake in Uber, the cell phone app for ordering taxis, would give women greater freedom of movement.

An adviser to Mohammed bin Salman compared the young prince to Sheikh Mohammed bin Rashid al-Maktoum, who turned his sleepy creek of Dubai into a libertine metropolis. But for all his talk of theme parks, the only one near completion is Diriya, the reconstructed town outside Riyadh where his forefathers and the founder of the Wahhabis, Ibn Abdel Wahhab, sealed their pact in 1744. In the 1980s, Saudi Arabia’s King Fahd built an opera house that never opened because of religious objections and remains a gleaming white elephant on the outskirts of Riyadh. Advisers who had anticipated an announcement that women would be allowed to drive sounded glum when MbS dismissed the idea.

There are private beaches where local women can wear bikinis, but the kingdom seems unprepared for mass domestic tourism on a scale that proliferates elsewhere in the Middle East. Such a development, a Jeddah hotelier told me, would happen “only over the graves of the religious establishment.” As MbS attempts to placate both camps, he risks satisfying none.

Each year, Jeddah, Saudi Arabia’s second city, hosts a festival recalling pre-Wahhabi times. Under the seemingly innocuous slogan Kunna Kidda, “That’s How We Were,” charitable associations funded by local businessmen evoked memories of a more pluralist past. From the glistening white square where the regime stages its executions, I joined the crowds floating through the arched gates into the Old City. In blown-up sepia photographs lining the port city’s historic alleys, religious buildings flattened when the al-Sauds and their conquering puritans descended from the desert highlands in 1925 rose again. Beneath lattice balconies, families stopped to marvel at Sufi lodges and the domed shrine of Eve, the first woman—toppled by zealots who frown on saint-worship. Inside a glass case running the length of a house, mannequins flaunted the colorful capes women wore before the Wahhabi sheikhs mandated that they wear black. Recalling a time when Jeddah was Arabia’s diplomatic capital, spotlights illuminated the whitewashed buildings that were once the American and British consulates, as well as the residence of the Ottoman caliph, the steps of which were so shallow that a camel could plod to the fifth floor.

Excited girls, outnumbering the men in their segregated stands, cheer comedians on an open-air festival stage. Between acts, a DJ spins discs, defying the ban on music. “Suck me,” screech its English hip-hop lyrics. After the show, the more adventurous of both sexes then mingle onstage, taking group selfies. “The festival is our answer to the desert tribes who disparage our cosmopolitan port city ways,” one of the organizers tells me.

His remarks underlined just how much resistance Wahhabis face in a peninsula relandscaped as their own. Mecca’s ancient hill has been laid low and its old town leveled to make way for sixteen towering apartment hotels, and shrines to the Prophet’s descendants, historically venerated by Sunnis and Shia alike, have been bulldozed. “The crime has been committed,” says a Jeddah art curator and conservationist, who on his office wall has a painting of a group of bland hotels looming over the Kaaba—the inner sanctum of Mecca’s Grand Mosque—and shrouding its black sanctity in shadow. “Our task is to salvage what remains.”

But unlike the Islamic State, which in two years of depredation has purged its territory of Muslim and non-Muslim nonconformists, Wahhabis have failed to suppress the peninsula’s many cultures and sects, despite a century of rule. Zaidis, in their wan-colored adobe houses beneath the shadow of Yemen’s mountains, and the Nakhawila, Medina’s indigenous Shias, continue to visit the graveyards where the shrines of the Prophet’s family once stood. On Thursday nights, their Sufi neighbors recite their zikr, or mystical incantations—deemed profanities by the Wahhabis—to the beat of the daf, or traditional drum. In Medina, erudite advocates of conservation, drawn primarily from the shurafa, the pre-Wahhabi nobility of Hijaz, successfully lobbied the authorities not to let the Wahhabis demolish the Prophet Muhammad’s house as part of their expansion.

Some even detect a growing acceptance of other religions and a reexamination of the Wahhabi doctrine—cited by a senior royal—that non-Muslim worship should not be allowed in the entire peninsula. (“If Saudi Arabia had lands in Africa, we would undoubtedly have opened a church there,” he said.) The kingdom includes perhaps the largest and fastest-growing Christian community in the Middle East. Despite the formal ban on non-Muslims in the holy cities of Mecca and Medina, non-Muslim domestic servants and drivers live and work in the shadow of Mecca’s Grand Mosque. Though Christians are forbidden from worshiping publicly, congregations at weekly prayer meetings on foreign compounds can be several hundred strong. “A generation ago we pretended Christmas didn’t exist,” says a Saudi businessman who has two live-in Christian servants from the Philippines. “Now we give them Christmas presents, and host Winter Festival receptions for our Christian employees.”

Students at Effat Women’s University, Jeddah, Saudi Arabia, 2009; photograph by Olivia Arthur from her 2012 book, Jeddah Diary (Magnum Photos)

At one of Riyadh’s universities, a minor Saudi prince who studied Hebrew in Boston teaches Jewish studies. And with their non-Arabic signs and street food, whole stretches of southern Riyadh feel more Bengali, Keralite, and Afghan than similar parts of London. That Saudi Arabia tries to conceal such diversity from the outside world underscores its deference to its clerical establishment, but for a journalist raised on books like those listed here the kingdom’s tentative foray into multiculturalism can be jarring.

One morning I went to Riyadh’s modernist off-white Criminal Court. I had been told that anyone hoping for a fair hearing should grow an unkempt beard, showing piety. In Courtroom 39, a clean-shaven taxi driver and father of six from the poor southeastern part of the capital was pleading for mercy from a young, bearded judge in white Wahhabi garb who was sentencing him to eighty lashes for drinking whisky. “But the police said that you would let me off with a warning if I confessed,” the taxi driver protested. “No man can tamper with the punishment God has prescribed [in the Koran],” the judge said, in a tone that suggested he wished he could. Glancing at the conspicuous foreigner in his courtroom, he placed a Koran under his armpit and reenacted a mercifully limp-wristed whipping. “The police can only use their lower arm,” he said, interrupting proceedings to tell me that patriarchal tradition, not the Koran, was to blame for excesses and that he favored letting women drive.

In his dilapidated house on the eastern outskirts of Riyadh, I talked with Hassan Farhan al-Maliki, now an active protester but formerly a well-paid bureaucrat whose needs were all met by the state. As a clerk in the Education Ministry, he told me he had distributed cassettes of Bin Baz, the chief mufti who preached that the world was flat. But a trip to Afghanistan and the predominance of Saudis involved in the suicide attacks of September 11, 2001, he says, induced a change of heart. Dar al-Razi, a publishing house in Amman, Jordan, published his book, Preacher Not Prophet, a refutation of the “corrupting” tenets of Wahhabism’s founder, Ibn Abd al-Wahhab. He was dismissed from the ministry and sent, twice, to prison. Unrepentant, he emerged to denounce the kingdom’s application of God’s law that he told me “was hard on the people and soft on the rulers.”

The elite, al-Maliki argued, bypassed sharia in their beach clubs and mansions equipped with cinemas and bars, but the poor had no such retreats. “The clerics serve the regime by banning protests and freedom of expression, and exonerating all its corrupt acts,” he said as we had tea. The Prophet himself, he said, lived peacefully among kuffar, or nonbelievers, in Mecca. Why couldn’t their self-proclaimed successors?

Al-Maliki is unusual in his determination to withstand the regime’s pressures and temptations, but he is not alone. When I visited the small town of Awamiya, near Dammam, the Eastern Province capital and large oil center, I found that it had been taken over by Shia insurgents. Activists had used a bulldozer to dig up and block the one-lane road leading into the town. Snipers were said to lurk in the date palms, waiting for security vehicles. Locals celebrated their intifada, which, they said, had chased out Saudi forces and fortified the town against their return. The nearest checkpoint when I visited was unmanned, and Saudi policemen inspected papers several kilometers away, standing behind large cement barricades. People in the town proudly told me that they had rejected government offers of help in guarding against ISIS militants, who over the past year had blown up seven Shia mosques in the kingdom. Instead, on Fridays, local volunteers patrolled mosques in the town and neighboring villages, guarding against outside attack.

The execution in January of this year of the Shia preacher Nimr al-Nimr, Awamiya’s leading cleric, had revived the protest movement that Shias in the Eastern Province had launched in 2011 as part of the Arab Spring. “After the killing of Nimr, we see ISIS and the security forces as one and the same,” one of his relatives told me. Graffiti pronouncing “Death to the al-Sauds” could be seen throughout the town, including on the cemetery walls, and inside them female relatives tended to the shrines of eighteen local men whom, they said, Saudi forces had shot dead while suppressing unrest. Lampposts were draped in mournful black ribbon, and Nimr’s image hung over the town on posters, billboards, and stencils imprinted on walls. Two armored cars were parked in front of the sole police garrison, their gun turrets pointing into town. The approach road was strewn with barbed wire, rocks, and burned tires. Not a Saudi policeman was to be seen.

Nimr’s brother, Mohammed, guided me around the town, introducing local grocers, peddlers hawking banned Shia liturgies, and women grieving for sons and brothers buried in the cemetery. Almost everyone I spoke to had a close relative in one of the regime’s jails. Mohammed’s son, Ali, had been detained, aged seventeen, for participating in protests and was now on death row. The family had a history of protest dating back four generations, Mohammed explained, after Saudi Arabia conquered the Eastern Province in 1913. A century later, Nimr al-Nimr had revived his grandfather’s cry of resistance, appealing to young Shias to rise against systemic state discrimination.

Though Shias make up over 10 percent of Saudi Arabia’s population, their Saudi rulers had yet to appoint a single Shia minister after a century of rule. Shia community leaders across the Eastern Province told me that the authorities had staffed Shia schools with Wahhabi teachers who taught that Shias were apostates, and they pocketed the oil revenues while leaving Awamiya and Shia villages near the oil wells sunk in poverty. For over a decade, Nimr al-Nimr had championed the cause of equality. If the Saudis opposed it, he warned, Shias might opt for separation. In the heady days of 2011, he roused Shias onto the street. Alone in Saudi Arabia, the Shias of the Eastern Province joined the Arab Spring protests. Five years later, Nimr was beheaded. Outraged by his execution, the town of Awamiya simmered with anticipation of self-rule.

And yet Nimr’s brother, Mohammed, is no revolutionary. He runs a plumbing business selling toilets, drives a Lexus SUV, and relaxes on weekends in his palm groves on the outskirts of town. In better times, he participated in officially sanctioned interfaith meetings with Wahhabi clerics. Even after his brother was executed and his son sentenced to death, Mohammed insists that the rift with the al-Sauds is redeemable. “We tell the government to deal with Sunnis and Shias politically, but they only respond with security.” He told me that if Mohammed bin Salman had only diverted a fraction of the billions spent fighting Shias in Yemen and Syria and maintaining the standoff with Iran to development in Shia towns in the kingdom and around its borders, Shias across the region, including in Awamiya, would be kissing his hands.

Popular sentiment mattered less when Saudi Arabia could distribute payments from its oil revenues with abandon to relieve its citizens’ frustrations. But in an age of low oil prices and bloated budget deficits, the Saudis might have to broaden their popular base if they are to persuade their people to foot the bill. As long as Saudis pay no income tax, they have no right to representation, Mohammed bin Salman insists. But if he is to realize what he says are his two policy objectives—transforming the kingdom from a single-resource state into a productive economy and securing regional support to stymie Iran’s advance west—MbS will need to reach out beyond the Wahhabi core of the hinterland to the country’s many diverse sects on its productive edges.

Few Shias, Sufis, or secular Saudis want the kingdom to collapse, least of all to ISIS zealots. MbS’s vision of a new social contract suggests that he understands the benefits of a more inclusive society, even if he stops short of fully engaging his kingdom’s multiple parts. There is still a chance that future books about the kingdom might not be so dark, but MbS will need more than words if he is to convince his heterogeneous population that the Saudis are rulers for all their people, not just themselves and the Wahhabis.

—September 14, 2016

Auschwitz on Trial: The Bully and the Witness

Timothy Spall as David Irving in Denial, 2016 (Laurie Sparham/Bleecker Street)

In her 1993 book, Denying the Holocaust, the American academic Deborah E. Lipstadt called David Irving, a British amateur historian, “one of the most dangerous spokesmen for Holocaust denial.” I had observed Irving a year or so before Lipstadt’s book came out, addressing a neo-Nazi rally in the dreary east German town of Halle. He cut an odd figure, in his fawn trench coat, bellowing in accented German to a rowdy crowd of skinheads raising their arms in a Nazi salute. In 1996, he decided to sue Lipstadt and her publisher, Penguin Books, for libel, on the grounds that her accusation damaged his career as the serious historian that he claimed to be.

Irving had gone around Europe and the US telling sympathetic audiences that no Jews were gassed at Auschwitz and that Hitler had no genocidal intent. He even disrupted one of Lipstadt’s seminars, offering $1,000 to anyone who could show that Hitler had known about any plan for mass murder. Those who believed in this, he said in one 1991 speech, were “Auschwitz Survivors, Survivors of the Holocaust, and Other Liars—or the ASSHOLs.”

Under British law, the defendant in a libel case has to prove that her assertion is true. So to win, Lipstadt’s lawyers had to prove in court that the mass murder of Jews by gas, and other means, was not just an assertion, but a fact. Not only that, but also that Irving had willfully denied the truth to promote his racist and anti-Semitic arguments.

Directed by Mick Jackson and written by David Hare, the new film Denial is the story of the trial, which took place in London in 2000. I was there on one of the most dramatic days, when the historian Richard Evans, as a witness for the defense, made mincemeat of Irving’s contention that Himmler had wanted to save the German Jews from deportation. This didn’t stop Irving from grandstanding to a claque of leather-clad men and blowsy blonde women who looked on adoringly from the visitors’ benches, as he turned with a wink and a meaty raised thumb whenever he thought he had landed a point in his favor.

Rachel Weisz as Deborah E. Lipstadt in Mick Jackson’s Denial, 2016 (Laurie Sparham/Bleecker Street)

Courtroom drama has a rich cinematic tradition in Britain and the US—not least because the Anglo-American jury system turns the courtroom into a kind of theater, with lawyers having to persuade ordinary citizens by putting on a good show. Irving versus Lipstadt was unusual in that her legal team decided to dispense with such theatricals: no jury, just a judge; no emotional scenes on the witness stand, just dry facts, designed to bury Irving in his own lies.

This might have been a problem for a movie. Arguments are not inherently dramatic. But the title, Denial, is not only about Irving’s views on the Holocaust. Irving, played in the movie by the excellent Timothy Spall, is a showman, who would have humiliated witnesses if they had been called and who would have held Lipstadt up to ridicule. Lipstadt, played by the equally superb Rachel Weisz, is not averse to playing to the gallery herself, albeit mostly in academic fora. In the trial, she had wanted to confront Irving directly and give Holocaust survivors a platform to have their testimonies heard.

It was with a great deal of heartburn, therefore, that she finally agreed to her legal team’s insistence that she keep her mouth shut during the trial and not subject survivors to Irving’s intimidation. This was her form of denial. Her barrister, Richard Rampton (Tom Wilkinson), and solicitor, Anthony Julius (Andrew Scott), would run the show. She would just have to trust them.

Such is the central drama in David Hare’s script. Rampton and Julius saw the trial in legal terms. Their job was to find the best strategy to win the case. This clashed with Lipstadt’s view of herself as a spokeswoman for her people, called to keep the memory of Jewish suffering alive.

The best scenes in the movie focus on this conflict. At the site of Auschwitz-Birkenau, for example, we see Lipstadt praying in front of the ruins of the gas chamber, while Rampton is making careful notes and asking awkward questions about the exact procedures of mass murder. She is paying her respects to the dead. He is doing forensic work. On the “sacred” spot of the killing, cool analysis and a search for legal proof look like disrespect to her.  

When Rampton, a decent, wine-loving, Mozart-adoring Scot, has to break the news to Deborah Lipstadt, a feisty New Yorker, in a Krakow bar, that she will need to remain silent during the trial, she takes it as an insult—to her, and to the Jewish people.

All this is shown with great delicacy. The scenes of the trial itself are equally riveting. Hare was careful to stick to the exact words uttered by Rampton, Irving, the academic witnesses, and the owlish Justice Gray (Alex Jennings). The atmosphere in the courtroom is exactly as I remember it: Irving blustering and bluffing, Rampton cool and deadly, and the onlookers a weird mixture of louche Irving worshippers and anguished Jewish survivors.  

Tom Wilkinson as the lawyer Richard Rampton in Denial, 2016 (Laurie Sparham/Bleecker Street)

The weakness of the film lies in the two protagonists, despite some brilliant acting. No attempt was made to flesh out the character of David Irving. This was apparently deliberate, as Hare himself has stated: “The film is not about Irving’s psychology. He is seen almost exclusively from Deborah’s point of view, so I have no right to speculate or try to explain Irving.”

But this is a limitation as far as the film is concerned. Irving is portrayed by Spall as a somewhat mad English gent, with an ingratiating manner and a fanatical stare. In reality, Irving is more of a bruiser, a vulgar bully who came from what George Orwell called the lower-upper-middle class, that is to say, not petty bourgeois, but not upper-class either. His father was a naval officer. Irving likes to come across as a plummy upper-class toff while being consumed with hatred for people who might be considered to be his social or intellectual betters. At the trial this was especially visible during his exchanges with Richard Evans, not a man from the upper class either, but a genuine intellectual who later became the Regius Professor of History at Cambridge. Neither man could even look the other in the eye.

Lipstadt’s point of view, not just of Irving, but also of herself, is taken too much at face value. She is an admirable scholar and a brave woman. Her victory in the trial was not only richly deserved, but essential to show up Holocaust denial as the anti-Semitic propaganda that it is. But there is something histrionic about the view of her as a fighter for her people. Over and over we see her jogging past the statue of Queen Boadicea on the Embankment in London, and after her victory in court she looks up in a kind of rapture to the bronze impression of this Celtic rebel against the Romans in AD 61 as a fellow leader of resistance. The overt parallel drawn between a history professor at Emory University in Atlanta and the image of a heroine revived in the nineteenth century to glorify Queen Victoria might seem a little strained. And the musical score to accompany this peculiar form of heroine worship is suited better to patriotic schlock like Chariots of Fire than to a courtroom drama about the Holocaust.

There was no need for this. There is more than enough drama in a classic story of hubris and nemesis, of a menacing British racist who tried to stifle his critic and failed. Not that Irving would admit this. In a television interview after the trial, partly reenacted in the film, Irving is asked whether he will now finally stop denying the Holocaust. His answer: “Good Lord, no.”

Mick Jackson’s Denial opens in New York City and Los Angeles on September 30.

Tony Blair’s Eternal Shame: The Report

The Report of the Iraq Inquiry

George W. Bush and Tony Blair at a joint press conference at Hillsborough Castle, near Belfast, Northern Ireland, April 2003 (Charles Ommanney/Contact Press Images)

How did it happen? By now it is effortless to say that the invasion of Iraq in 2003 by American and British forces was the most disastrous—and disgraceful—such intervention of our time. It’s also well-nigh pointless to say so: How many people reading this would disagree? For Americans, Iraq is their worst foreign calamity since Vietnam (although far more citizens of each of those countries were killed than were Americans); for the British, it’s the worst at least since Suez sixty years ago this autumn, though really much worse on every score, from political dishonesty to damage to the national interest to sheer human suffering.

Although skeptics wondered how much more the very-long-awaited Report of the Iraq Inquiry by a committee chaired by Sir John Chilcot could tell us when it appeared at last in July, it proves to contain a wealth of evidence and acute criticism, the more weighty for its sober tone and for having the imprimatur of the official government publisher. In all, it is a further and devastating indictment not only of Tony Blair personally but of a whole apparatus of state and government, Cabinet, Parliament, armed forces, and, far from least, intelligence agencies.

Among its conclusions the report says that there was no imminent threat from Saddam Hussein; that the British “chose to join the invasion of Iraq before the peaceful options for disarmament had been exhausted”; that military action “was not a last resort”; that when the United Nations weapons inspector Hans Blix said weeks before the invasion that he “had not found any weapons of mass destruction and the items that were not accounted for might not exist,” Blair wanted Blix “to harden up his findings.”

The report also found that deep sectarian divisions in Iraq “were exacerbated by…de Ba’athification and…demobilisation of the Iraqi army”; that Blair was warned by his diplomats and ministers of the “inadequacy of U.S. plans” for Iraq after the invasion, and of what they saw as his “inability to exert significant influence on U.S. planning”; and that “there was no collective discussion of the decision by senior Ministers,” who were regularly bypassed and ignored by Blair.

And of course claims about Iraqi WMDs were presented by Downing Street in a way that “conveyed certainty without acknowledging the limitations of the intelligence,” which is putting it generously. Chilcot stops short of saying directly that the invasion was illegal or that Blair lied to Parliament, but he is severe on the shameful collusion of the British intelligence agencies, and on the sinister way in which Blair’s attorney general changed his opinion about the legality of the invasion.

Planning and preparations for Iraq after Saddam “were wholly inadequate,” Chilcot says, and “the people of Iraq have suffered greatly.” Those might seem like statements of the blindingly obvious, as does the solemn verdict that the invasion “failed to achieve the goals it had set for a new Iraq.” It did more than merely fail, and not only was every reason we were given for the war falsified; every one of them has been stood on its head. Extreme violence in Iraq precipitated by the invasion metastasized into the hideous conflict in neighboring Syria and the implosion of the wider region, the exact opposite of that birth of peaceable pro-Western democracy that proponents of the invasion had insisted would come about. While Blair at his most abject still says that all these horrors were unforeseeable, Chilcot makes clear that they were not only foreseeable, but widely foreseen.

Nor are those the only repercussions. Chilcot coyly says that “the widespread perception”—meaning the correct belief—that Downing Street distorted the intelligence about Saddam’s weaponry has left a “damaging legacy,” undermining trust and confidence in politicians. It is not fanciful to see the Brexit vote, the disruption of the Labour Party, and the rise of Donald Trump among those consequences, all part of the revulsion across the Western world against elites and establishments that were so discredited by Iraq. And so how could it have happened?

By now the war has produced an enormous literature, including several official British reports, beginning with the Hutton Report of January 2004 and the Review of Intelligence on Weapons of Mass Destruction the following July, after an inquiry chaired by Lord Butler, a former Cabinet secretary. While the Butler review’s criticism of named individuals was muted, it built up a dismal story of incompetence and official deceit.

One member of Butler’s panel, which took no more than five months to hear evidence and report, was John Chilcot, a retired senior civil servant who had worked in the Home Office and with the intelligence agencies. On June 15, 2009, Gordon Brown, who had succeeded Blair as prime minister two years earlier, told Parliament that “with the last British combat troops about to return home from Iraq, now is the right time to ensure that we have a proper process in place to enable us to learn the lessons of the complex and often controversial events of the last six years,” and he announced a new inquiry, chaired by Chilcot. In those two years, everything had gone wrong for Brown, from continuing violence in Iraq to financial collapse, and his plain purpose was to push the matter aside and distance himself from his predecessor.

One of the comic subplots of this unfunny story is the way that Brown, as throughout his career, always tried to avoid being associated with contentious questions or difficult decisions. “For when they reach the scene of crime—Macavity’s not there!”; nor was James Gordon Brown, if he could help it. Chilcot mentions that Brown would sometimes send Mark Bowman, his private secretary, in his place to meetings concerned with Iraq, in the hope that he could avoid personal responsibility.

So it was characteristic that when Brown first assigned Chilcot to lead the inquiry, it was to be held in camera, with as little publicity as possible. But parliamentary and public outcry put a stop to that, and Chilcot began his hearings in public view. They could all be followed, and then accessed online, and this material has already been put to use by Peter Oborne in Not the Chilcot Report, a concise and carefully sourced assessment that appeared before the report itself, and by Tom Bower, whose Broken Vows is a full-dress assault on every part of Blair’s record. That includes a hair-raising account of his wildly profitable financial career since leaving office, but the book’s most startling contribution to the Iraq debate is the number of attributed quotations from former very senior government officials who belatedly criticize Blair and a war which, it must be remembered, he had begun by ignoring all professional advice from anyone who knew anything at all about the subject. A Foreign Office authority on Iraq who pleaded with him that, from all previous experience, the invasion would likely be fraught and possibly calamitous was dismissed by Blair: “That’s all history, Mike. This is about the future.”

Over seven years, much has been done to obstruct the inquiry. Sir Jeremy Heywood, the present Cabinet secretary, deplorably tried to protect Blair, and although much of what Blair wrote to Bush in the year before the war has been published, Bush’s side of the correspondence has been withheld. In any case there was the ludicrous process of “Maxwellization,” by which anyone adversely criticized in an official report is shown the criticisms before publication and allowed to respond. This dates back nearly fifty years to a legal challenge to such a report by the crooked publisher Robert Maxwell; that such a process should still be named after the greatest scoundrel to disfigure British public life in our time suggests that it could usefully be reexamined.

Scarcely any individual or even institutional buyer is likely to acquire the twelve printed volumes of the report, although every family of the 179 British service personnel who died in Iraq is being presented with a set, for what consolation that may be, while the entire report is freely available online. Nor are many likely to read all 2.6 million words of it, but the 62,000-word executive summary is well worth reading. It illuminates once more, but very clearly, the yawning gulf between what Blair was saying publicly to Parliament, and even to his own Cabinet, in the year before the war, and what he was saying in private to Bush.

Hence the anger with which the press pounced on Blair’s letter to Bush on July 28, 2002: “I will be with you, whatever.” It has taken some people a long time to grasp this. The story falls into place when those words are read in conjunction with the Downing Street Memo written in the greatest secrecy five days before Blair’s promise of fealty, in which Sir Richard Dearlove, the head of MI6, the Secret Intelligence Service, reported on his recent talks in Washington. “Bush wanted to remove Saddam,” the memo said, “through military action, justified by the conjunction of terrorism and WMD. But the intelligence and facts were being fixed around the policy.”

While the spread of nuclear weapons was plainly a problem, Iraq was far from the gravest threat. Sir William Ehrman, Foreign Office director of international security in 2000–2002, told Chilcot that the nuclear programs of Iran, Libya, and North Korea were “maturing” and were “probably of greater concern than Iraq,” not to mention Pakistan, where A.Q. Khan, then nuclear program director, was operating something like a mail-order system in nuclear know-how, and had supplied uranium-enriching equipment to Libya. WMDs might have been a plausible reason for invading Pakistan, just as Islamist terrorism might have been a plausible reason for invading Saudi Arabia, which had fostered al-Qaeda and from which most of the September 11 murderers came, but neither made any sense at all as reasons for invading Iraq.

President Bush and Prime Minister Blair at Hillsborough Castle, April 2003
Nick Danziger/Contact Press Images

At the time the war began, Sir Jeremy Greenstock was British ambassador to the United Nations. He said on BBC radio some weeks ago, “Hans Blix told me privately, ‘I don’t know that they’ve got them and I don’t know they’ve not got them,’” which was the simple truth, and is perfectly congruent with Blix’s saying then that his inspection regime was working and needed more time. But Blair knew that the approaching war was unwanted and unpopular in his country: a poll on January 21 found 30 percent for war, 42 percent against. Aware that he could not take a reluctant Parliament and country to war on the basis that “we don’t know they’ve not got them,” he had little choice but to dissemble and mislead.

“I wouldn’t call it a lie,” says Andrew Turnbull, the Cabinet secretary at the time of the invasion, quoted by Bower. “‘Deception’ is the right word. You can deceive without lying, by leaving a false interpretation uncorrected.” Most of us would call that a distinction without a difference, but few who read Chilcot attentively will doubt that the brew of exaggeration, distortion, misrepresentation, suggestio falsi, and suppressio veri that was Blair’s case for war was anything other than mendacious.

What Blair knew well was that the Bush administration was determined to destroy Saddam, whether he possessed weapons of mass destruction or not. The purpose of the war was regime change for its own sake, even if in defiance of international law and the United Nations. And Blair’s great deception—his true crime—was not his September 2002 “dossier” and all the other claims about WMDs as such, false as those claims proved to be. It was his larger case, kept up for the best part of a year, that he had not committed the country to war, when privately he had.

For the British, this was the end of a long story, from the defeat of a British army by the Turks south of Baghdad in 1916, to the creation after that war—and then pacification by bombing—of a new country called Iraq, supposedly a friendly regime with Sunni Hashemite princes ruling a Shiite majority as well as Kurds, in which respect Saddam was the princes’ heir. After he invaded Kuwait in 1990, the British joined the campaign to expel him, led by President Bush the Elder but crucially with authorization by Security Council resolutions, and supported by Saudi Arabia as well as France among others.

At that time Blair was a rising politician still in his thirties and the Labour spokesman on employment. Until he was elected party leader after the sudden death of John Smith in 1994, Blair had shown no interest at all in international politics, although just before Smith died he saw Schindler’s List. Blair “was spellbound,” he tells us, and his life was changed, though maybe not his alone. There can be no “bystanders,” Blair decided: “You participate, like it or not. You take sides by inaction as much as by action…. Whether such reactions are wise in someone charged with leading a country is another matter.” Yes, it is.

After he became prime minister in May 1997, Blair found new places to take sides. He sent British troops to restore order in Sierra Leone; he urged Western action to drive the Serbs out of Kosovo, where he was welcomed as the liberator he later thought he would be in Iraq; he tried to formulate such actions in a doctrine. One of the Chilcot panel members was Sir Lawrence Freedman, a well-known historian, who contributed to Blair’s famous or notorious Chicago speech of April 1999, a speech inspired by what the jurist Philippe Sands has called “the emotional and ahistorical interventionist instincts that later led directly to the Iraq debacle.”*

Today it’s hard to recapture the mood of less than two decades ago, and the wave of adulation when Blair first entered Downing Street. Soon that adulation had washed across the Atlantic: well before The New York Times was writing about the “Blair Democrats,” Paul Berman had called Blair “the leader of the free world.” It would have gone to the head of a naturally humble man. Both Turnbull and Jonathan Powell, Blair’s erstwhile chief of staff, have spoken of his “Messiah complex,” without irony, alas: he really did come to believe that he was a new redeemer of mankind.

But the crucial events took place far from London or Kosovo, in Washington in November 2000, and in New York the following September. Robin Cook was Blair’s first foreign secretary, and in March 2003, the month of the invasion, the only member of the Cabinet to resign over Iraq. In his resignation speech, he rightly said that the invasion would not be taking place if Al Gore were in the White House, and so if one wanted to say who was ultimately responsible for the war, one answer would be the Supreme Court, when it feebly awarded the 2000 election to Bush the Younger.

We know that the new administration was discussing an invasion of Iraq as soon as Bush was inaugurated, urged on by the neoconservatives who had been publicly advocating a war to destroy Saddam for years past. Just what the neocons’ motives and objectives were, and those of the right-wing nationalists Dick Cheney and Donald Rumsfeld, may be debated. But one thing is certain: those motives and objectives were in no way shared by most Labour MPs and a “progressive” media in London, who were suspicious of American power and critical of Israel, who affected to revere international law, who thought that regime change as such was unlawful, and who made a cult of the virtue of the UN. To enlist their support was no easy task, but Blair was counting on the corrupt servility of his MPs as well as the supine credulity of the media, and he proved to be correct in his estimate of both.

Hence the angry bafflement of those supporters, unable to contemplate the possibility that Blair might actually have had a natural affinity with Bush and the neocons, while failing also to recognize his frantic desire—somewhat at odds with the tough and decisive persona he tried to project—to be the president’s best buddy: “I will be with you, whatever.” And so a false account of events became almost unavoidable for him. In her great biography of her father, Lord Salisbury, Queen Victoria’s last prime minister, Lady Gwendolen Cecil wrote ruthlessly of Disraeli that “he was always making use of convictions that he did not share, pursuing objects which he could not avow, manoeuvring his party into alliances which, though unobjectionable from his own standpoint, were discreditable and indefensible from theirs.” That exactly describes Blair, above all over Iraq.

Yet it will not do to blame Blair alone. Among the effects of the war were a collapse of cabinet government and parliamentary government, along with what might frankly be called the corruption of the intelligence agencies, as Dearlove and Sir John Scarlett, head of the Joint Intelligence Committee, colluded with Downing Street to “fix the facts,” and the corruption, too, of Peter Goldsmith, the attorney general, who just as patently changed under pressure his previous advice that the invasion might be of dubious legality.

Since then Dearlove has been the head of a Cambridge college and is now chairman of an insurance company, Scarlett was knighted and promoted to succeed Dearlove as head of MI6 after the invasion and is now advisor to an investment bank, while Goldsmith works for the American law firm Debevoise and Plimpton. Whatever the fate of the Iraqis, the officials responsible for their plight have not suffered greatly.

Nor was Iraq the finest hour of the media, on either side of the Atlantic. The morning after Chilcot was published, the front pages of London newspapers shouted “Weapons of Mass Deception” (Sun), “Shamed Blair: I’m Sorry But I’d Do It Again” (Daily Express), “Blair Is World’s Worst Terrorist” (Daily Star), “A Monster of Delusion” (Daily Mail), “Blair’s Private War” (The Times). You would never guess from this chorus of outrage that those newspapers all supported the war at the time, as of course did almost all the American media, with the exception of that unlikely pair: the Knight-Ridder chain and The New York Review.

Sorriest of all were the liberal papers, The Guardian and its Sunday counterpart, The Observer. While The Observer fell completely for Blair and his war, The Guardian was more hesitant. And yet a week before the invasion it did say editorially, and lamentably, “But there is one thing Mr. Blair cannot be accused of: he may be wrong on Iraq, badly wrong, but he has never been less than honest.” No hindsight is needed to deplore those words—or to point out that the two papers had the right response ready-made, from what they had said about Suez in November 1956.

“It is wrong on every count—moral, military and political,” said the Manchester Guardian (as it still was). “To recover from the disaster will take years—if indeed it is ever possible.” More eloquent still was the Observer, with what is perhaps the single most famous editorial sentence to appear in a London paper in my lifetime, penned as British troops went ashore at Suez by David Astor himself, the paper’s owner-editor: “We had not realized that our government was capable of such folly and such crookedness.”

Apart from privately sharing the view that a stable democracy could be created in Iraq, Blair thought it was his duty to support Washington in principle, and that he was Bush’s guide as well as his friend. As early as March 2002, he told Labour MPs “very privately” that “my strategy is to get alongside the Americans and try to shape what is to be done.” He endlessly repeated this, to his Cabinet (without telling them that he had committed the country to war) and to favored journalists, some of whom swallowed it. When the invasion began, the commentator and military historian Max Hastings wrote in the Daily Mail that “Tony Blair has taken a brave decision, that the only hope of influencing American behavior is to share in American actions.”

All this displayed the kind of personal and national vanity that afflicts prime ministers, stemming from Churchill’s “special relationship” and Harold Macmillan’s even more pernicious image of “Greeks to their Romans.” These have been the grand illusions of British policy ever since: the belief that the two countries have a “special” affinity, and that the worldly-wise English can tutor and restrain the energetic but backward Americans. Successive prime ministers have failed to grasp the simple truths that the Americans neither want nor need such guidance, that the United States is a sovereign country whose interests and objectives may or may not coincide with British interests and objectives, and that like any other great power in history, it will pursue them with small regard for the interests of its supposed friends as well as its avowed enemies.

Before long Hastings saw the error of his ways, renouncing the war and denouncing Blair. As much to the point he has recently, and very truly, written that “the notion of a ‘special relationship’ was invented for reasons of political expediency by Winston Churchill, who then became the first of many prime ministers to discover it to be a myth.” It was just as much a myth for Blair, who “overestimated his ability to influence US decisions on Iraq,” Chilcot says, adding that Anglo-American friendship can “bear the weight of honest disagreement. It does not require unconditional support where our interest or judgments differ.”

One might add that, among every other perverse consequence, Iraq actually damaged Anglo-American relations, by lowering the British military in American esteem. The US Army were deeply unimpressed by their allies’ performance, with one general saying that the final ignominious British withdrawal from Basra could only be seen as a defeat.

That leaves Blair. His public attempt to answer Chilcot on the day the report appeared was excruciating, haggard, and incoherent; he seems dimly aware that his repute has collapsed and that he is more despised and ill-regarded than any other modern prime minister. Any public intervention by him now can only have the opposite effect, as in the summer of last year when, every time he begged Labour members not to vote for the aging leftist Jeremy Corbyn as party leader, he further ensured Corbyn’s triumph. Not that there is any need to feel pity for him: he feels quite enough himself, bemoaning “the demonic rabble tearing at my limbs,” which words may make others think of those Iraqi men, women, and children who suffered because of him.

His life now is hugely lucrative but hideous to behold, as he roams the world like the Flying Dutchman, with an estimated £25 million’s worth of properties and a large fortune that includes benefits from a Wall Street bank, a Swiss finance company, sundry Gulf sheiks, and the president-for-life of Kazakhstan. He doubtless justifies to himself his work for Kazakhstan’s Nursultan Nazarbayev, whose regime has been strongly condemned by human rights organizations, in the same strange antinomian way he justified the manner in which he took us into the Iraq war: whatever he does must be virtuous because he does it.

Long after those distant years of triumph, the truth about Blair finally becomes clear. He believed himself to be a great leader and redeemer; some of the weirder passages in his memoir—“I felt a growing inner sense of belief, almost of destiny…I was alone”—suggest an almost clinically delusional personality; and of course he did something shameful or even wicked in Iraq. And yet in the end Tony Blair isn’t a messiah or a madman or a monster. He’s a complete and utter mediocrity. He might have made an adequate prime minister in ordinary days, but in our strange and testing times he was hopelessly out of his depth. Now we are left with the consequences.

* See “A Very British Deceit,” The New York Review, September 30, 2010.

Gulping Down Shakespeare

Queen Margaret (Karen Aldridge) grieves a personal loss in the bloodshed of the Wars of the Roses in Barbara Gaines’s Chicago Shakespeare Theater production of Henry VI, Part Two, September 2016
Liz Lauren

Should we take our Shakespeare in a gulp or in separate driblets? There are advantages to either course. His first audience had to take him in single plays, as they were conceived and put on. But we have his large body of work, and some plays are cross-referential, especially the plays of dynastic ups and downs around the British crown. The history plays beg for some consideration as a whole, and so sequences of them are now mounted by troupes in a single season, or in weekly or daily sequence. The disadvantage of this common practice, for the two groups of four most often linked for joint consideration, is that there is no way to guarantee that the same audience will be able to attend all the separate days of performance.

In its year-long, two-part series, the Chicago Shakespeare Theater has tried to solve this problem by showing three plays in a single day (running six hours with a dinner break, the procedure followed for some lengthy Wagner operas). The normal groups of four cannot be crammed into such a program, and heavy cutting must be indulged even to get three in. The first part of the group’s history gulp, called “Foreign Fire,” was in the Spring season, giving us Edward III, Henry V, and Henry VI, Part One (reviewed in these pages last May). The second gulp, “Civil Strife,” comes now to open the Fall season, presenting Henry VI, Parts Two and Three, and the ever-popular Richard III.

Nearly all modern productions of Shakespeare cut the plays, since they are too long for performance with breaks between the acts (there was no such thing in Shakespeare’s day). What is cut reflects the company’s position on what is essential to the play—or, in this case, plays. Barbara Gaines, the founding director of the Chicago company and the primary force behind the series, is a pacifist, so she thinks the deep futility of war is the most important (and relevant) aspect of these plays. She is right to find in Shakespeare an understanding that war poisons all social relationships. The three parts of Henry VI find multiple ways to emphasize this point. These early works are still influenced by the medieval morality plays and by festival pageants as living traditions. They can be as didactic as such ethical allegories.  

There is, for instance, the diptych of war miseries in Henry VI, Part Three, where a son kills his unrecognized father in battle and a father kills his unrecognized son. You could not say more directly that wars kill blindly, and Shakespeare says it over and over. Each killer laments, in traded verses from opposite sides of the stage, how unnatural it is that old men kill youth, and young men kill their elders in what Clausewitz calls the fog of war and Gaines sees as the total eclipse of morality in war.  If she is belaboring a point, it is one Shakespeare himself belabored thoroughly. Her staging of this scene is brilliant. It is preceded by King Henry VI’s famous pastorale, in which he wishes that his cares as a king (cares he has neglected, helping bring on this battle) could be exchanged for the carefree life of his subjects. He delivers this long daydream from a “molehill,” and then looks down on the father and son lamenting their respective parricide and filicide. Gaines has him exit after his speech then come creeping back to the scene from below—in effect, joining the audience to see how little his subjects live the carefree lives of his dreamy speech.

Gaines’s compression of the plays puts even more emphasis on the war theme. The difficulty with this is that the dynastic rivalries and alternations in power involve repeated genealogies of the royal claimants. They are there in the uncut plays, but they do not come up in such rapid succession, creating a fog of story. Besides, cutting things not obviously related to war means omitting events that step away from the story to look at it from a different angle. That is especially true of three things omitted:

      1. The comic “duel” of an armorer and his apprentice. Two commoners are arguing rival claims to the crown (which they are not supposed to do at all). When “the good Duke Humphrey” suggests that they settle the matter with a duel, King Henry allows this to go forward with comic weapons and knock-down farce. The king breaks all the rules of chivalry, which limited dueling to well-born combatants fighting under special rules. In letting this little “war” go forward, the king is signaling his irresponsible responsibility for the deadly war to follow.
      2. Duke Humphrey’s wife, Eleanor, consulting a witch, Margery Jourdain, to foretell the royal succession (a thing against the law). When Eleanor is convicted of witchcraft, this helps Humphrey’s enemies bring him down. This is no more ancillary to the play’s main point than the three witches are to the plot of Macbeth. The witches are infernal signposts to the witch element in Lady Macbeth, and Margery Jourdain is the supernal sign for Queen Margaret, who schemes at murder and lives to curse all the other cursing women in the play.
      3. The rebellion of Jack Cade. Gaines keeps this “Lord of Misrule” event in the play, but cuts it drastically, including the important trial of the virtuous Lord Saye by the raucous mob. Cade puts Saye’s head on a pole so he can make it kiss the heads of other nobles unjustly executed. Gaines lets Cade voice the famous line spoken by the most vicious of the rebels, a butcher: “The first thing we do, let’s kill all the lawyers.” Since Cade is a showman who celebrates ignorance, issues endless boasts and threats, and charms his rabid followers, Gaines gives him an orange wig. This may seem to go too far but for the fact that Cade’s authentic words are so close to Trump’s bluster. And we should remember that Cade was probably played by the famous clown Will Kemp. Despite this comic aspect of the scene, Cade is also an avatar of Richard III, who will lie and scheme and charm in ways that succeed far beyond the prophetic activity of Cade, confirming the idea that if Cade is a Lord of Misrule from medieval carnival, Richard is the Vice of morality plays. Devils can be devilish witty.
A young Richard (Timothy Edward Kane, at right) urges his father the Duke of York (Larry Yando) to violently enforce his claim to the throne, Henry VI, Part Two, September 2016
Liz Lauren

Though much that is close to essential has been omitted here, what is retained is done with great energy and polish, especially the Richard of Timothy Edward Kane. A gain for understanding Richard is that we see him in different contexts in the Henry VI plays—as a son and brother, selectively loyal, not entirely treacherous, surviving and rising above vicious partisan quarrels. A Richard rising from this muck of furious ambitions is more understandable. It may take a monster to survive and (briefly) thrive in such madness. He has a further reason to rage at and through the sick culture of war—resentment at his deformity.

This theme, formulated by Freud, is made visible by the way Richard winces at any touching of his humped back. He does so when his brother Clarence hugs him—showing how affection itself hurts. In the famous seduction of Anne at the burial of her husband (killed by Richard), Kane collapses in sobs at her feet, and she touches the hump tenderly. But he rejects pity as well as scorn. When his own mother reviles him, she pounds on the hump as Richard cowers. Richard is in time isolated from all humanity, which makes him begin to scurry around his delusive throne like a trapped and lonely animal.

After Richard is killed by Richmond at Bosworth Field, we may wonder what can put together again a world so fragmented by war upon war. To address this, Gaines plays a card she takes from Hamlet, who put on his didactic play (“The Mouse Trap”) with a dumb show to begin and a jig to follow. The Chicago production begins with a young commoner, stiff in his new uniform, leaving a girlfriend who tries to hold him back from war. The soldier (named in the credits “Peter,” perhaps as a tribute to the apprentice in the omitted mock duel) shows up throughout the plays to run errands or report offstage events. But at the show’s end he comes back in a wheelchair, both his legs missing. When his girlfriend, in pity, reaches tenderly out to his stumps, he waves her away, then pounds the stumps in agony. But when a fellow veteran approaches him, he is welcomed. Other veterans file out and form a protective ring around the two.

Since Gaines the pacifist does not show weapons or blood in the conflicts on stage, characters are stabbed with invisible knives or blown over by explosions. Rather, as the killings begin, a blood-red trickle runs down the glass back wall of the set. As the killings occur at a quickening pace, multiple rivulets run and ramify. By the end of the plays, the entire wall is a mass of deep red. But when the veterans turn and face the wall, they salute the blood and the back wall is cleared of it. Those who suffered without guilt are innocent of the blood taken from them. Theirs is the only honor left the nation—and this makes up the “jig” of Hamlet’s play.

“Civil Strife,” Barbara Gaines’s day-long production of Shakespeare’s Henry VI, Parts Two and Three, and Richard III, runs at the Chicago Shakespeare Theater through October 9. Her earlier sequence of history plays, “Foreign Fire,” ran last spring.

The Green Universe: A Vision

An illustration of Freeman Dyson’s vision of ‘Noah’s Ark culture’—a space operation in which, ‘sometime in the next few hundred years, biotechnology will have advanced to the point where we can design and breed entire ecologies of living creatures adapted to survive in remote places away from Earth.’ Spacecraft resembling ostrich eggs will bring ‘living seeds with genetic instructions’ to planets, moons, and other ‘suitable places where life could take root.’ A new species of warm-blooded plants, ‘kept warm by sunlight or starlight concentrated onto it by mirrors outside,’ will enable the Noah’s Ark communities to survive.
Ron Miller

Robert Dicke was an experimental physicist at Princeton University. He liked to build things with his own hands. When NASA began making plans for landing astronauts on the moon, he thought of a scheme that would allow the astronauts to make a serious contribution to science. This would be good for science and also good for the astronauts. The scheme was to measure accurately the distance between two objects, one fixed on Earth and the other fixed on the moon. The measurements would give us improved understanding of the dynamics of the Earth-moon system.

The object on Earth would be a laser emitting very short pulses of light. The object on the moon would be a tray holding a hundred corner-cube glass reflectors. A corner cube is a piece of solid glass cut with three mutually perpendicular reflecting faces. The corner cubes would reflect the laser pulses back to the laser. The round-trip timing of the reflected pulses would measure the distance between the laser and the tray. The astronauts would plant the tray on a firm piece of ground on the moon facing Earth. Because the corner cubes reflect light straight back to its source, the small variations in the orientation of the moon as it moves in its orbit do not disturb the measurement.

Dicke was a practical person. He went to the Edmund Scientific Company toy store down the road from Princeton and bought a hundred high-quality glass corner-cube reflectors for $25 each. He asked the machine shop at the Princeton University physics department to attach the cubes to a metal tray with a stand to support it. The complete package, including materials and labor, cost a total of $5,000. Then he got in touch with NASA officials and told them he would be happy to supply the package at this cost for a moon mission. The NASA officials accepted his proposal enthusiastically, but they said, “You do not get to build it. We get to build it.” The proposal to build the package was put through the normal bureaucratic NASA acquisition process. According to Dicke, NASA paid $3 million to an industrial contractor for it. The reflectors were duly installed on the moon and are still reflecting laser pulses as Dicke intended. Doing things the NASA way increased the cost by a factor of six hundred.

The moon missions happened long ago. Now, fifty years later, there is still a clash between two cultures. There is Big Space, with big corporations receiving contracts from NASA to produce custom-built hardware and software following NASA procedures at enormous cost. And there is Little Space, aiming to carry out space operations in the Dicke style, using hardware and software mass-produced for other purposes by companies in a competitive market at vastly lower cost. The Big Space culture is still dominant, carrying out spectacularly successful high-cost missions, such as the Cassini mission that sent back detailed pictures of the satellites of Saturn, and the Kepler mission that discovered thousands of planets orbiting around other stars. But there are now several start-up companies operating independently of NASA in the Little Space culture, hoping to do space missions that will be bolder, quicker, and cheaper.

Will Marshall was a young engineer working in the Big Space culture at the Jet Propulsion Laboratory (JPL), a NASA center that builds big expensive spacecraft such as Cassini. He rebelled against that culture and decided to do things differently. Along with two other NASA alumni, he started his own company and built a satellite that he called Dove in his garage in Cupertino. The company then changed its name to Planet Labs and built 150 Dove satellites in a few years, with 150 more to be launched next year. His satellites are radically smaller and cheaper than anything built at the JPL, but they are equally well engineered and more agile. They belong to the Little Space culture, using modern miniaturized cameras and guidance systems and data processors, like those that are mass-produced for the cell phone and recreational drone industries.

A Dove satellite weighs about ten pounds and costs under a million dollars, including launch and operations and a communication system for distributing large amounts of information to the Planet Labs customers. The information consists of pictures of the ground taken from low earth orbit, with accurate color to show the type and condition of vegetation, with complete coverage of the planet every few days, and with “resolution”—the size of the smallest patches that can be seen in the picture—about ten feet. The customers are farmers looking at crops, foresters looking at trees, fire-control authorities looking at fires, environmentalists looking at pollution and erosion of land, and government officials at all levels looking at ecological problems and environmental disasters.

Marshall likes to describe how he lost twenty-six Dove satellites in 2014. They were sitting together on a big rocket that exploded on the launch-pad. The loss hardly affected his business, since he had had nine successful launches and only one failure. The lost satellites were quickly replaced and the replacements put in orbit. The great advantage of the Little Space culture is that every mission is cheap enough to fail. It makes a huge difference to the running of a business if failures are acceptable. Missions in the Big Space culture are too big to fail. In that culture they typically take a decade to plan and a decade to build. A Dove satellite is planned and built in a few months. Occasional failures in the Little Space culture are a normal part of the cost of doing business. If there are too many failures, the company running the business may collapse, but that is not an unacceptable disaster. Start-up companies evolve in a Darwinian ecology, where the fit survive and the unfit collapse.

Planet Labs and other start-up companies have proved that the Little Space culture is ready to take over a large share of future unmanned activities in space. The question remains open whether the Little Space culture can have a similarly liberating effect on manned missions. Can we expect to see manned missions becoming radically cheaper, so that we can travel with our machines at costs that ordinary people or institutions can afford? Neither Big Space nor Little Space shows us a clear path ahead to the fantasy worlds of science fiction, where bands of brave pioneers build homes and raise children among the stars.

Halfway between Big Space and Little Space, there is a group of companies that have grown rapidly in recent years, led by SpaceX, a company founded in 2002 by Elon Musk. Musk is a young billionaire who has dreams of founding human colonies on Mars. His company builds big spacecraft paid for by big NASA contracts in the Big Space style, but he tries to keep the design and manufacture cheap and simple in the Little Space style. In ten years he has built a launcher, Falcon, and a transfer vehicle, Dragon, which ferry unmanned payloads from the ground to the International Space Station. He intends soon to include astronauts in his payloads. The SpaceX culture is a compromise, using commercial competition to cut costs while relying on the government for steady funding. The twenty-first century is likely to see manned missions exploring planets and moons and asteroids, and possibly making spectacular discoveries. But this century is unlikely to see costs of such missions low enough to open space to migration and settlement by ordinary citizens.

The three books under review describe space activities belonging to the Big Space and Little Space cultures that are now competing for money and public attention. Each book gives a partial view of a small piece of history. Each tells a story within the narrow setting of present-day economics and politics. None of them looks at space as a transforming force in the destiny of our species.

Julian Guthrie’s How to Make a Spaceship describes the life and work of Peter Diamandis, a brilliant Greek-American entrepreneur. Diamandis cofounded the International Space University, bringing together each year an international crowd of students and professors to its campus in Strasbourg, and providing a meeting place where academic thinkers and industrial doers exchange ideas. He founded the ISU when he was twenty-seven years old, less than half the age at which Thomas Jefferson founded the University of Virginia. The ISU has been growing smoothly for twenty-eight years. It is successful not only as an educational institution but as a job market where young people interested in space can find employers.

Diamandis also encourages competitive space projects by offering substantial prizes for clearly specified achievements. The latest and biggest of his prizes was $10 million for a privately funded spacecraft to reach an altitude of one hundred kilometers and land safely on the ground twice with a human pilot. The money came from Anousheh Ansari, a young Iranian-American computer engineer who had founded with her husband and brother-in-law the company Telecom Technologies. They sold the company for $440 million, of which they donated a small piece to Diamandis. The winner of the Ansari Prize was Burt Rutan, a legendary designer of weird-looking airplanes. He designed and built the SpaceShipOne vehicle that won the prize in 2004. Many other competitors made plans and built rocket ships. The total amount of money invested, by the winner and the losers, was many times the value of the prize.

A rendering of the icy surface of Enceladus, one of Saturn’s moons. According to Freeman Dyson, its active geysers ‘must originate in an underground system of channels connected to a warm deep ocean,’ suggesting a ‘promising place for us to look for evidence of life.’
David Seal/NASA

Charles Wohlforth and Amanda Hendrix’s Beyond Earth describes the prospects for future manned space missions conducted within the Big Space culture. The prospects are generally dismal, for two reasons. The authors suppose that a main motivation for such missions is a desire of humans to escape from catastrophic climate change on Earth. They also suppose any serious risks to the life and health of astronauts to be unacceptable. Under these conditions, few missions are feasible, and most of them are unattractive. Their preferred mission is a human settlement on Titan, the moon of Saturn that most resembles Earth, with a dense atmosphere and a landscape of gentle hills, rivers, and lakes.

But the authors would not permit the humans to grow their own food on Titan. Farming is considered to be impossible because an enclosed habitat with the name Biosphere Two was a failure. It was built in Arizona and occupied in 1991 by eight human volunteers who were supposed to be ecologically self-sufficient, recycling air and water and food in a closed system. The experiment failed because of a number of mistakes in the design. The purpose of such an experiment should be to learn from the failure how to avoid such mistakes in the future. The notion that the failure of a single experiment should cause the abandonment of a whole way of life is an extreme example of the risk-averseness that has come to permeate the Big Space culture.

Farming is an art that achieved success after innumerable failures. So it was in the past and so it will be in the future. Any successful human settlement in space will begin as the Polynesian settlements in the Pacific islands began, with people bringing pigs and chickens and edible plants on their canoes, along with the skills to breed them. The authors of Beyond Earth imagine various possible futures for human settlement in various places, but none of their settlers resemble the Polynesians.

Jon Willis’s All These Worlds Are Yours describes the possibilities for alien forms of life to exist in remote places and the practical steps we might take to discover them. The places that are discussed are the planet Mars, the moon Europa of Jupiter, the moons Titan and Enceladus of Saturn, and the newly discovered planets orbiting around other stars. Willis considers Enceladus to be the most promising place for us to look for evidence of life. Enceladus has active geysers spraying jets of salt water and steam into space from hot spots on its surface. The geysers must originate in an underground system of channels connected to a warm deep ocean in which life might be flourishing. To study possible traces of life in microscopic detail, we should send an unmanned spacecraft through the jets to collect samples of droplets and vapor and bring the samples back to Earth to be examined at leisure in a well-equipped laboratory.

Such a proposal would make sense as a first step in a continuing sustained program of exploration of Enceladus. It makes no sense as an isolated one-shot venture. It unfortunately belongs to the NASA Big Space culture, the same culture that gave us the Viking mission to Mars in 1975. Viking was also a one-shot venture, announced with great fanfare as giving a decisive answer to the question whether there is life on Mars. When Viking found no evidence of life, the further exploration of Mars was abandoned for twenty years.

The effect of the Enceladus sample return mission, if it were a one-shot venture like Viking, would probably be the same. Even if kelp is sprouting and sharks are swimming in the Enceladus ocean, the spattered droplets collected from its geysers would probably show no conclusive evidence of life, and the essential question would remain unanswered. The most likely result of a sample return mission would be to raise new questions for following missions to answer. To discover life on an unexplored world will never be a job for a single mission.

All three books look at the future of space as a problem of engineering. That is why their vision of the future is unexciting. They see the future as a continuation of the present-day space cultures. In their view, unmanned missions will continue to explore the universe with orbiters and landers, and manned missions will continue to be sporting events with transient public support. Neither the unmanned nor the manned missions are seen as changing the course of history in any fundamental way.

The authors are blind to the vision of Konstantin Tsiolkovsky, the prophet who started thinking seriously about space 150 years ago. Tsiolkovsky saw the future of space as a problem of biology rather than as a problem of engineering. He worked out the theory of rockets and saw that rockets would solve the problem of space travel, to get from here to there. Getting from here to there is the problem of engineering that Tsiolkovsky knew how to solve. That is the easy part. The hard part is knowing what to do when you have got there. That is the problem of biology, to find ways to survive and build communities in space, to adapt the structures of living creatures, human and nonhuman, so they can take root in strange environments wherever they happen to be. Tsiolkovsky knew nothing of biotechnology, but he understood the problems that biotechnology would enable us to solve.

With Tsiolkovsky, we leave behind the parochial concerns of the twenty-first century and jump ahead to a longer future. In the long run, the technology driving activities in space will be biological. From this point on, everything I say is pure speculation, a sketch of a possible future suggested by Tsiolkovsky’s ideas. Sometime in the next few hundred years, biotechnology will have advanced to the point where we can design and breed entire ecologies of living creatures adapted to survive in remote places away from Earth. I give the name Noah’s Ark culture to this style of space operation. A Noah’s Ark spacecraft is an object about the size and weight of an ostrich egg, containing living seeds with the genetic instructions for growing millions of species of microbes and plants and animals, including males and females of sexual species, adapted to live together and support one another in an alien environment.

After the inevitable mistakes and failures, we will have acquired the knowledge and skill to build such Noah’s Arks and put them gently into suitable places in the sky. Suitable places where life could take root are planets and moons, and also the more numerous cold dark objects far from the sun, where air is absent, water is frozen into ice, and gravity is weak. The purpose is no longer to explore space with unmanned or manned missions, but to expand the domain of life from one small planet to the universe. Each Noah’s Ark will grow into a living world of creatures, as diverse as the creatures of Earth but different. For each world it may be possible to develop genetic and other instructions for growing a protected habitat where humans can live in an Earth-like environment. The expansion of human societies into the universe will be a small part of the expansion of life. After the expansion of life and the expansion of human societies have started, the new ecologies will continue to evolve in ways that we cannot plan or predict. The humans in remote places will then also have the freedom to evolve, so that they can move out of protected habitats and walk freely on the worlds where they have settled.

The essential new species, enabling Noah’s Ark communities to survive in cold places far from the sun, will be warm-blooded plants. A warm-blooded plant is a species with leaves and flowers and roots and shoots in a central structure, kept warm by sunlight or starlight concentrated onto it by mirrors outside. The mirrors are cold, separated from the warm center by a living greenhouse with windows that let the light come in but stop heat radiation from going out. The mirrors are attached to the greenhouse like feathers on a peacock. The mirrors and the greenhouse perform the same functions for a warm-blooded plant that fur and fat perform for a polar bear.

The entire plant, with the warm center and the greenhouse and the mirrors, must grow like a mammal inside its mother before it can be pushed out into the cold world. The new species of plants will be not only warm-blooded but also viviparous, growing the structures required for independent living while still inside the parent plant. To make viviparous plants possible, the basic genetic design of warm-blooded mammals must be understood and transferred to become a new genetic design for plants. Our understanding and mastery of genetic design will probably be driven by the needs of medical research, aimed at the elimination of disease from human, animal, and plant populations. Warm-blooded and viviparous plants will fill empty ecological niches on Earth before they are adapted for life support in Noah’s Arks. They may make Antarctica green before they take root on Mars.

Almost all the current discussion of life in the universe assumes that life can exist only on worlds like our Earth, with air and water and strong gravity. This means that life is confined to planets and their moons. The sun and the planets and moons contain most of the mass of our solar system. But for life, surface area is more important than mass. The room available for life is measured by surface area and not by mass. In our solar system and in the universe, the available area is mostly on small objects, on comets and asteroids and dust grains, not on planets and moons.

When life has reached the small objects, it will have achieved mobility. It is easy then for life to hop from one small world to another and spread all over the universe. Life can survive anywhere in the universe where there is starlight as a source of energy and a solid surface with ice and minerals as a source of food. Planets and moons are the worst places for life from the point of view of mobility. Because Earth’s gravity is strong, it is almost impossible for life to escape from Earth without our help. Life has been stuck here, waiting for our arrival, for three billion years, immobile in its planetary cage.

When humans begin populating the universe with Noah’s Ark seeds, our destiny changes. We are no longer an ordinary group of short-lived individuals struggling to preserve life on a single planet. We are then the midwives who bring life to birth on millions of worlds. We are stewards of life on a grander scale, and our destiny is to be creators of a living universe. We may or may not be sharing this destiny with other midwife species in other parts of the universe. The universe is big enough to find room for all of us. One writer who grasped the universal scale of human destiny was Olaf Stapledon, a professional philosopher who dabbled in science fiction. His books Last and First Men and Star Maker, written in the 1930s, remain as enduring monuments to his insight. Stapledon gave us a larger view of space, teeming with life and action, as the stage of a cosmic human drama.

He Made It American

Stuart Davis: In Full Swing

an exhibition at the Whitney Museum of American Art, New York City, June 10–September 25, 2016; the National Gallery of Art, Washington D.C., November 20, 2016–March 5, 2017; the de Young, San Francisco, April 1–August 6, 2017; and the Crystal Bridges Museum of American Art, Bentonville, Arkansas, September 16, 2017–January 8, 2018
Stuart Davis: Landscape with Garage Lights, 32 x 42 inches, 1931–1932
Memorial Art Gallery of the University of Rochester/© Estate of Stuart Davis/Licensed by VAGA, New York, NY

If there is a message in the Whitney’s large gathering of the work of Stuart Davis, it may be simply that time hasn’t dented the power of the painter’s work. While some of the pictures breathe merely a period air, a great many continue to give pleasure, and, as an added attraction—as the artist with his love for everyday turns of phrase might have said—it isn’t easy to say why.

In his day, and perhaps for viewers first coming to him now, Davis—who died in 1964, at seventy-one—was a Cubist of sorts whose special contribution was to give the style an American look. Into his Cubist-type arrangements of so many flat, interlocking shapes he incorporated details that conjured up an American world at its most generic: gasoline pumps and barbershop poles, New York City subway entrances and street lamps, and the masts of boats visible just beyond the warehouses in New England fishing towns. Bringing into his pictures words and phrases—whether from advertising, or a line from a Duke Ellington hit of 1931, or single words such as “now” and “cat” and “else”—Davis brought to Cubism as well an American sound and voice.

A populist and a man who was much given to propounding theories, Davis saw his Americanisms as part of his plan. He was convinced that a modern painter needed to give a sense of his or her time and place and to convey somehow what was novel and urgent about it. He would probably have liked knowing that the last major show of his art in New York, which was at the Metropolitan Museum twenty-five years ago, was called “Stuart Davis: American Painter.”

Yet what makes a larger impression on viewers now, I believe, is less Cubism, which is in itself a far less vital or pressing style for us, or Davis’s American note, which, certainly in his best pictures, is something we don’t take all that seriously. (It helps that he also had a long-running affair with things French, and mixed Paris in with New York.) What counts more is the way that over four decades he kept reimagining, and making more imposing, his art of form and color. Davis stands to the side of painters of his own era, such as Marsden Hartley and Edward Hopper—and of the next, such as Willem de Kooning and Jackson Pollock—in that his work seems hardly touched by psychology, or by any sense of mysteriousness, poignance, or raw tensions. At the same time, for all his evident gifts as a designer of abstract forms, his painting isn’t analytical or measured in spirit.

His pictures convey, rather, a pulsing, muscular human warmth. In his realm of shapes, lines, words, phrases, and glimpses of fire escapes, say, or brick walls, everything appears casual, nonchalant, and handmade. Yet all the elements are also beckoningly sturdy and firm—practically indomitable—because they are fixed in place in painted surfaces that, even when they are not literally thick, in memory have the slablike but malleable thickness of cake frosting.

Davis’s most striking thought may have been to make words and phrases, and numbers and dates, parts of his images. It is still a little startling to see how, in still lifes from the early 1920s that include newspapers or commercial products, he was meticulously forthright about painting headlines, brand names, and advertising slogans. Whether or not one feels this is momentous, Davis was essentially formulating (though not single-handedly) Pop art three decades before it happened. (It was a movement that, encountering it when he was in his late sixties, he did not embrace.)

But Davis’s use of words and phrases is livelier and more involving when it touches on his excited response to jazz, or when we can’t fully make out the words (or dates or numbers)—or when they are made-up words (such as “eydeas”) or French words (such as fin and tabac). It is as if the tumble of clunky but spry shapes that constitutes a Davis picture represented his mind, and the letters and words set into the tumble were some combination of memories, specific points, and random nothings. Are there top-flight Davises that have no words or other elements standing for the “real” world? Yes. There are also some Davises in which, overloading the words, he seems to be parodying himself.

What most deeply struck this viewer at the Whitney show, however, was the artist who went about bringing often opposing elements into his scenes at the same moment. Davis was a kind of conservative revolutionary. He was as concerned to insist on, and celebrate, the upheaval in twentieth-century art that was abstraction as he was determined to save, in his work, the world that certain pure forms of abstraction seemed to want to leave out. The result, which Barbara Haskell, in her writing in the Whitney’s catalog, helped me see as never before, was a subtle two-sidedness—and a feeling for multiplicity and simultaneity—running through Davis’s art and thinking. It led him, we realize after a bit, to realign the roles and identities of things.

Words, for instance, can be given such prominence—“champion” is a prime example—that the pictures they are in might almost be portraits of them. Davis’s signature is sometimes so large that it has the presence of a subsidiary element in a still life (like a grape wandering off on its own). The 1940 Report from Rockport gives us, anchored among the shapes, the words “garage” and “Seine”—entities that probably only Davis would think of as equals. Here the words become virtually the cast of a two-character play.

Even when handling purely formal aspects of a picture, Davis transforms roles. He will sometimes make lines so widespread and substantial that they could be shapes. Background colors can be so vivid that we lose our sense of what is background and what foreground.

Davis’s individual colors, no matter how much or how little of the canvas they occupy, all somehow come out at us with relatively the same force. Bright, unnuanced, and unencumbered with shadows, his colors are almost the stars of his individual pictures. Yet the titles of Davis’s paintings can be nearly as winning, and his titles, like his words and shapes and signatures, also have the weight of self-contained entities.

Stuart Davis: Report from Rockport, 24 x 30 inches, 1940
Metropolitan Museum of Art/© Estate of Stuart Davis/Licensed by VAGA, New York, NY

His earlier pictures are unremarkable in this regard, but by the 1940s, with Report from Rockport and The Mellow Pad, Davis was seeing titles, it seems, as possessing their own life; and in his last fifteen or so years, his gift for naming took off. Davis titles such as Colonial Cubism, Ready-to-Wear, Cliché, and The Paris Bit have come to seem—to this writer, anyway—like old friends, and I may not be the only viewer who knows these titles but can’t necessarily remember the pictures they go with.

Perhaps other viewers, too, have loved Davis titles such as Tropes de Teens, Blips and Ifs, and the great Owh! in Sao Pão without having any sense of what they might mean (or desire to find out). My favorite probably is Rapt at Rappaport’s, which I confess I long glibly thought might be a Jewish delicatessen version of Report from Rockport. That Davis (both of whose wives were Jewish) was looking for a title with a Jewish ring to it is attested to by the fact that he thought at one point of calling the picture Engrossed at Grossinger’s, after a resort in the Catskills favored by a Jewish clientele. Rappaport’s, however, as Harry Cooper notes in his essay in the show’s catalog, was a toy store at the time that was known for its dotted wrapping paper. Davis recreates a bit of the paper in his image, in itself an abstract conglomeration of forms, and then—he obviously was not the only person to think of this—transposes “wrapped” into “rapt.”

Stuart Davis: Rapt at Rappaport’s, 52 x 40 inches, 1951–1952
Hirshhorn Museum and Sculpture Garden, Smithsonian Institution, Washington, D.C./© Estate of Stuart Davis/Licensed by VAGA, New York, NY

Then he goes further by painting the “T” in “RAPT” in (a rare, receding) blue, while “RAP” is in black. On a quick look, the picture can thus appear to be about a rap, or a talk session, at Rappaport’s as much as it is about both being entranced and wrapping paper. And in having this three-sided life Davis’s title is like his paintings in general, where abstract elements and vestiges of the real world slide in and out of each other.

With its many paintings in combinations of elemental red, yellow, green, black, orange, blue, and white—and pink and lavender—the Whitney’s exhibition is a bit like a toy store itself. The Davis-like subtitle it has been given, “In Full Swing,” lets us know that it is not a retrospective or overall view of the artist. Its organizers have understandably sought to differentiate it from the Met’s 1991 show, which started off with the painter’s early street scenes, landscapes, and self-portraits, done in differing naturalistic styles. The Whitney show begins just after that, with the paintings from the early 1920s of commercial items, made when Davis had first mastered a kind of Cubist language.

To start the show this way certainly gives the assembled works a flowing unity. But there is a slight loss here. Davis actually never had an uncertain, youthful style. His parents were artists, and they somehow went along with his dropping out of school after ninth grade to study art. He had five works in the Armory Show, in 1913, when he was all of twenty, and a few years later, and still a realist of sorts, he made paintings that, like aerial or kaleidoscopic views, present separate, unconnected scenes or settings in one overall scheme. They are his primal works. They let us know that even before he ventured into Cubist waters he was thinking seriously about simultaneity.

At the same time he was also making landscapes and self-portraits that were clearly indebted to Van Gogh. It was the Dutch painter, I believe, with his way of treating the surface of a painting as a matter of so many thrusting, rough-hewn, independent, and handwriting-like brushstrokes, who gave Davis the inspiration for the sort of physical texture, and possibly the emotional texture—as of something earnest and unrelenting—that he wanted for his own canvases. A self-portrait that he did in 1919 that is unmistakably beholden to Van Gogh is a powerhouse work in its own right.

As it is, the first paintings in the current exhibition with the same vibrant strength are not the pictures of tobacco papers and air fresheners that greet you at the beginning. Nor does the second set of pictures in the show—Cubist-type still lifes that have been much admired by commentators and presumably have kitchen utensils in them—have much meaty, contrasty power either. No, it is in a number of Paris street scenes, made during a yearlong stay there beginning in 1928, that Davis finally found a way to have black, white, and a range of colors all play equal parts at once. The Paris pictures, it is true, could be blown up to be wall decorations for a French restaurant or stage backdrops for a musical set in Paris. But with their buildings in buoyant, birthday-party colors, their overall airiness, and their witty use of black lines (what Saul Steinberg would do twenty years later), the pictures hit a note of almost aggressive insouciance that Davis hadn’t revealed before.

He went on to one of his richest paintings: the 1931 House and Street, which many people probably know because it is part of the Whitney’s superlative collection of Davis works. Looking at the picture in light of the importance for him of having different things take place at once, I realized I had not fully seen House and Street before (or fully registered its title). In this urban scene we look at two entirely separate places that have been bluntly conjoined but whose double identity, perhaps because we lose ourselves in the rhythmic placing of blocky forms and popping colors through the two halves, is almost camouflaged over.

But then the Depression, as it did for so many others, took its toll on Davis. There was almost no one buying the progressive art he made, and he was lucky to get commissions for mural projects. Based on examples that are part of the show—including the 1932 mural for the lounge of the men’s room at Radio City—one can feel that these works probably had more life in their original settings than they do here on their own. Most of Davis’s energies in the 1930s, though (as he said himself), went into political activism, chiefly for left-wing groups attempting to safeguard artists’ rights. He was the editor of the magazine Art Front and he early on became president of the American Artists’ Congress.

Participating in these conferences and marches meant that Davis, although he never became a member of the Communist Party, publicly tolerated Stalin’s brutalities at the time. But the Soviet invasion of Finland pushed Davis over the edge. When the Artists’ Congress voted to approve this act, he resigned from it the next day and never returned to organized political causes.

In the 1940s he became, while sticking with the same sense of color and form that he had employed for some time, a transformed figure. As if washing his hands of the political life that had kept him from making many paintings to advance his own art in the previous decade—and seemingly giving up for the moment the idea of creating a public, popular art—he embarked on what would be a small number of not especially large pictures that nominally are of places, whether various parks or lower Seventh Avenue, where his studio was.

As he moved from one picture to the next, however, he put in less sense of the everyday American world. It hardly matters. These mostly horizontal works are packed with exploding nuggets of pointy and curving forms. Looking at the paintings, we are not sure if Davis’s subject is the sheer anarchic profusion of shapes and colors or the opposite: demonstrations of a fanatical control. These are paintings you want to stand before for a long time, maybe none more than Report from Rockport, a vaguely townscape-like scene, crowded with shapes in a space marked by yellow and pink. Feverish and playful, the picture recalls, if anyone, Joan Miró, Davis’s almost exact contemporary and during these years probably the most innovative painter anywhere.

Then in one of the more remarkable shifts in our art, Davis began making a different kind of painting in the beginning of the 1950s, when he was in his late fifties. In works he continued to paint until his death fourteen years later, words became increasingly important, and his canvas sizes, and the very spirit of the pictures, became bigger. The note of monumentality that Davis achieves in works such as the 1951 Visa, where the word “champion” takes up much of the picture and suggests a flag waving in the wind, may be owed to his efforts as a muralist. But the heroic spirit of these later paintings feels truly new for Davis.

The late, large canvases have generally been considered his best work. The organizers of the show would seem to concur, as there are many of them here—too many. After a dozen or so I found them running together (there are some two dozen on hand), and I began to wonder whether the smaller pictures that preceded them, with their abundance of intricate shapes, weren’t stranger and stronger. It is a good zone of uncertainty in which to find oneself. There aren’t many twentieth-century American painters who have had powerful and distinctly contained phases of work within their art as a whole.

The catalog for the exhibition is a quietly luxuriant affair. The pictures stand out in good, big reproductions, each on its own otherwise empty page. Haskell’s essay on all aspects of Davis’s career lucidly picks up the many strands, and Harry Cooper, writing on the painter’s way of continually reworking his earlier pictures—or making essentially an art about art—presents a Davis who is often lost sight of: an aesthete-engineer who could be entirely oblivious of his time and place. The highlight of the catalog, however, is a book-length chronology—at times going by the month, even the week—that Haskell has compiled. Her “A Chronicle” forms the fullest biography we have of the painter.

Like pictures of his that bring together house and street or Paris and New York, Davis’s life had two distinct sides. Until sometime in the late 1940s, his story was one of grueling stretches of near pennilessness and fruitless fights for recognition. Then quite swiftly there grew an awareness that he was a classic figure. Through both parts we follow his many relations with artists, dealers, and critics, his family life and marriages, and his involvement in politics and with the jazz scene that he followed closely from his earliest days. Threaded through all of it are quotes from letters, reviews, and commentary from many sources but mostly from the high school dropout himself, who published a good number of articles and statements in his lifetime and left over 10,000 pages of writing, mostly about art, in his journals.

Inspired no doubt by her subject, with his belief that in a painting no subject is inherently more meaningful than any other—and his affinity for multifariousness in itself—Haskell keeps the many details, incidents, and quotes, of whatever import, coming to the reader in the same lean, direct way. To start in on this chronicle you probably need to have fallen in love at some time with at least one work by Stuart Davis. Once you have settled into the account, though, you may find it hard to leave off. It held the present writer, well, rapt.


How the Financing of Colleges May Lead to Disaster!

Bill Clinton at a summit on youth and productivity organized by the for-profit Laureate International Universities, Mexico City, February 2015
Mario Guzmán/EPA/Redux

When the financial industry—banks, hedge funds, loan companies, private equity—gets too involved in any particular activity of the economy or society, it’s usually time to worry. The financial sector, which represents a mere 4 percent of jobs in this country but takes a quarter of all private sector profits, is like the proverbial Las Vegas casino—it always wins, and usually leaves a trail of losers behind. So perhaps alarms should have been raised among both financial regulators and educational leaders when, two decades ago, for-profit colleges began going public on the NASDAQ and cutting deals by which private equity firms would buy them out. Apollo Group, the parent company of the University of Phoenix, was one of the first, becoming a publicly traded corporation in 1994, at a time when the university had a mere 25,000 students. By 2007 the university had expanded to 125,000 students at 116 locations. This was growth pushed by investors who viewed students as federally subsidized “annuities” that, via their Pell Grants and student loans, would produce a fat and stable return in the form of tuition fees.

It’s an issue that’s been front and center in recent months, not only with the scandal surrounding Trump University and the recent closure of the ITT chain of for-profit colleges, but also with the news that Bill Clinton was paid a total of $17.6 million over five years to serve as an “honorary chancellor” of the for-profit college company Laureate International Universities. The sector has been raking in money for some time now. Throughout the roaring 1990s, for-profit college and university enrollment grew by nearly 60 percent, compared to a mere 7 percent rise in the traditional nonprofit sector.

As one Credit Suisse analyst looking at the $35 billion industry put it, “it’s hard not to make a profit” in the for-profit education sector. The stock prices of for-profit colleges and universities (FPCUs) reflected that; they rose more than 460 percent between 2000 and 2003 with much support from public subsidies. Their promotional budgets rose, too—Apollo recently spent more on marketing than Apple, the world’s richest company.

But education, sadly, did not benefit. As A.J. Angulo outlines in his detailed history of the for-profit sector, Diploma Mills, that’s because such schools spend a large majority of their budgets not on teaching but on raising money and distributing it to investors. In 2009, for example, thirty leading FPCUs spent 17 percent of their budget on instruction and 42 percent on marketing to new students and paying out existing investors. Is it any wonder, then, that investigations into the industry from 2010 to 2012 found that while it represented only 12 percent of the post-secondary student population, it received a quarter of all federal aid disbursements and was responsible for 44 percent of all loan defaults, many of them by working-class students who either couldn’t afford to graduate or, once they did, found their degrees were largely useless in the marketplace? As one critic of the system puts it in the book, “There is no way to escape being a slave to the quarterly report. Quality education and higher earnings are two masters. You can’t serve both.”

All this has huge ramifications not just for the victims of the for-profit sector (many are now waging successful lawsuits for debt relief) but for higher education as a whole. For-profit colleges and universities don’t exist in a vacuum. Their rise has happened in tandem with a fall in state funding for public education, budget squeezes at nonprofit state colleges, rising college fees (according to Bureau of Labor Statistics data the price of college and textbooks has tripled since 1996), a growth in student credit availability and debt, stagnant wages, and a rising sense of hysteria—sometimes justified, other times not—that the system of higher education in America is broken and must be fixed.

Certainly all these factors have been huge issues in the 2016 presidential campaign, propelling the unlikely success of Bernie Sanders during the primaries. One of the most memorable moments of the Democratic National Convention came during Sanders’s speech, when young delegates wept as he endorsed Hillary Clinton. She has, in turn, been under political pressure to take up his banner; her platform now includes a mandate to make in-state tuition free at public colleges and universities for all Americans whose families make up to $125,000 a year.

Thoughtful people can disagree on whether college should be free, and if so for whom, but it’s a timely and important question. As Harvard academics Claudia Goldin and Lawrence F. Katz made so clear in their 2008 book, The Race Between Education and Technology, economic growth and national competitiveness are predicated on education staying ahead of technology, thereby enabling workers with higher and higher skill levels to be more productive. Economic growth basically depends on productivity plus demographics. Since the 1980s this link has been broken, as educational attainment in the US has faltered—over the last thirteen years, the US has ranked third from the bottom among OECD nations in gains in education attainment beyond high school.

One result, according to Goldin and Katz, as well as any number of other experts who study the topic (see William G. Bowen and Michael S. McPherson’s Lesson Plan, for instance), is slower economic growth. That creates a destructive snowball effect—lower growth equals less money in tax coffers and less public funding for educational institutions, which contributes to worse educational outcomes. And in an era in which human talent is a scarcer resource than financial capital, it also means slower economic growth. In the public sector, which educates 80 percent of American students, state funding hit a peak in 1980 and has been falling ever since. Not surprisingly, the decline in funding has hit working-class students the hardest, a point that Sara Goldrick-Rab lays out sharply in Paying the Price. While the average net price of college education as a percentage of family income has risen moderately for the top 75 percent of the socioeconomic spectrum, it has skyrocketed for the bottom quartile, who paid 44.6 percent of their income for a degree in 1990, versus 84 percent today.

All of these changes have their roots in the rise from the late 1970s onward in “neoliberal” economic thinking—which assumes incorrectly that the marketplace is always fair and efficient and better than public institutions at allocating resources—and the subsequent financialization of everything. Neoliberal theory, or at least the twentieth-century, laissez-faire reincarnation of it, assumes that markets empower everyone; in reality powerful institutions, and in particular financial institutions, end up dominating both the economy and society.

“Financialization” is an academic term for the trend by which Wall Street and its methods have come to reign supreme in America, permeating not just the financial industry but also many other parts of both the private and public sectors. It includes such basic matters as the growth in size and scope of finance and financial activity in the economy (the size of the industry as a percentage of GDP has more than doubled over the last forty years); the rise of debt-fueled speculation instead of productive lending; the ascendancy of shareholder value as the sole model for corporate governance; the proliferation of risky, selfish thinking in both the private and public sectors; the increasing political power of financiers and the CEOs they enrich; and the way in which a “markets know best” ideology remains the status quo in many academic and policy circles.

University of Michigan professor Gerald Davis, one of the preeminent scholars of the trend, likens financialization to a “Copernican revolution” in which business and society have reoriented their orbit around the financial sector. This revolution is often blamed on bankers. But it was facilitated by shifts in public policy, from both Republicans and Democrats, and crafted by the government leaders, policymakers, and regulators entrusted with keeping markets operating smoothly. Greta Krippner, another University of Michigan scholar, whose Capitalizing on Crisis is one of the most comprehensive books on the topic, believes this was the case when financialization began its fastest growth, in the decades from the late 1970s onward. According to Krippner, that shift encompasses Reagan-era deregulation, the further deregulation of Wall Street under Bill Clinton’s administration, and the rise of the so-called ownership society under George W. Bush that pushed property ownership rates higher and further tied individual health care and retirement to the stock market.

The financialization of education was part of this fundamental change as well, a point that student debtor turned activist Cryn Johannsen lays out in Solving the Student Loan Crisis. As she puts it, “students…are defined as consumers seeking out personalized education and training that will make them marketable,” a concept that disconnects higher education from its value as a public good. Of course, American higher education was never completely devoid of mercantilism (for-profit business and trade schools have been around since the nineteenth century) and it’s virtually never been free; but payment for it was in the past split more evenly between families, the government, and philanthropy, and the civic benefits were as highly valued as the economic ones (which, crucially, were seen as accruing to the nation, rather than just the individual).

As Bowen and McPherson describe, as far back as the colonial era there were public efforts to help students attend college. The Morrill Act of 1862, for example, which gave land grants for the founding of many of America’s best-known public universities, created a system by which states would subsidize public university tuition, making it affordable for middle-class students to go to college. Private universities did the same via endowments and fees paid by the elites.

The poor were mostly left out of the equation until after World War II, when it became clear that America needed a more highly trained workforce to ensure growth in an increasingly competitive international landscape (and, by the 1960s, a burgeoning information economy). The federal government began by offering World War II veterans grant and loan programs like the GI Bill and later added, as part of President Lyndon B. Johnson’s Great Society program, the Pell Grant and the Guaranteed Student Loan Program, later renamed the Stafford Loan. As Beth Akers and Matthew M. Chingos note in Game of Loans, LBJ had personal reasons to make college more affordable—he had been a student debtor himself who struggled to pay off $220 in loans ($3,100 in today’s dollars) from Southwest Texas State Teachers’ College, as well as a private student loan and an auto loan on which he eventually became delinquent (he had to hide the car so the lender wouldn’t repossess it).

Federal programs like these still provide a huge amount of support, accounting for 67 percent of all student aid in 2014–2015. Why, then, do we have a $1.2 trillion student debt bubble? There are several reasons. First and foremost is that while federal support for higher education has remained relatively steady over the last few decades, individual state support for students and funding for universities has been falling. One of the main reasons for that drop was the tax revolt led by Grover Norquist and supported by the Koch brothers and other rich conservative donors. Particularly in red states like Texas, Virginia, and North Carolina, tax cuts came at a time when state budgets were already taking hits from things like the savings and loan crisis, the dot-com bubble and subsequent recession, and most recently and dramatically, the financial crisis and recession of 2007 and 2008. As Akers and Chingos point out, before 2008, states provided roughly $9,000 per student for higher education. Today, that number has fallen to around $7,000, the lowest level in thirty years. This has resulted in higher announced prices to attend many schools; it’s no accident that the public school with the highest list price (New Hampshire) also has the lowest level of state funding.

Higher attendance over the last two decades has also increased costs (and led to more debt, given the increased number of students trying to complete degrees). So has an open race for richer students—colleges all too often invest in luxury facilities to attract more full-fee-paying students, or, in the case of the for-profit sector, take enormous profit shares (margins of 30 percent mirror those in certain parts of the financial sector itself). But lower state funding is the principal reason that prices have risen at nonprofit public colleges and universities at nearly twice the rate of private four-year institutions since 2000.

That bifurcation, which affects the bottom 80 percent of the socioeconomic spectrum much more than the top 20 percent, has been exacerbated by the growing income and wealth divide over the same period. Tuition costs are rising most for the students who can afford them least, with predictable results. One study in Virginia found that since the 1990s, retention rates for first-year students in the lowest quintile have been 11 percentage points lower than for the top quintile. As Bowen and McPherson put it:

This growing inequality is, in our view, a serious national problem…increases in tuition and fees for all but the most affluent would seem much less onerous if their incomes were increasing as rapidly, or more rapidly, than college costs. We continue to be surprised by how little attention is given to this aspect of the affordability problem—especially by those who choose to assign blame almost exclusively to educational institutions.

Both the wealth divide and the tendency to blame the victim (witness conservatives who use Pell Grant fraud in the for-profit sector as an argument for doing away with public financial aid programs altogether) stem from neoliberal policies and attitudes with their emphasis on market outcomes. Angulo quotes David Salisbury, the head of the Cato Institute’s Center for Education Reform, defending for-profit diploma mills and arguing against more regulation in the for-profit sector:

My gut feeling on diploma mills is the whole idea of having to regulate this is the denial of intelligence of consumer and marketplace. If people want to waste their money buying a diploma from a diploma mill, let them do so.

Yet Adam Smith himself would have said that in order for a market to function fairly and efficiently, all players require equal access to information, a real understanding of market prices, and shared moral values. None of that is true today in the educational sphere. The student loan market, for example, is complex and opaque. Both Pell Grants and the loan system operate as personal vouchers, which puts more responsibility on individuals to track the money needed for payment. Given that most of us don’t have many chances to learn from experience in the education market (we only get a few shots at it in our lifetime, as Bowen and McPherson point out), it’s no surprise that studies show that most students have no idea how the system works. In Game of Loans, we learn that only a quarter of first-year college students can predict their debt load within 10 percent of the correct amount, in large part because students are regularly overpromised financial aid in complex deals that then change year by year, just like the subprime mortgages that blew up in 2008.

Meanwhile, like the risk managers at the too-big-to-fail banks who were oblivious to exploding derivatives on their balance sheets, educational experts don’t have all the information either. In Game of Loans, University of Michigan professor of public policy and student aid expert Susan Dynarski sums up the problem:

Imagine that a big, complicated company holds a huge portfolio of loans, many of which are in default or delinquency. The company’s leadership and some vocal shareholders demand a detailed review but receive a thin and incomplete report from the loan division.

Financial analysts at headquarters want to scrutinize the data. But the loan division doesn’t turn it over. Without better data, the firm can’t move forward.

This dysfunctional enterprise is fictional, but in at least some respects it bears more than a passing resemblance to the United States government, which has a portfolio of roughly $1 trillion in student loans…. The Education Department, which oversees the portfolio, is playing the part of the loan division—neither analyzing the portfolio adequately nor allowing other agencies to do so.

As it is, anyone who wants to understand who’s holding the exploding bag of student debt has to cobble together facts and figures from disparate public and private data.

Donald Trump with Michael Sexton, the president of Trump University, at the announcement of its founding, New York City, May 2005
Bebeto Matthews/AP Images

The similarities between the student loan market and the financial crisis don’t stop there. Aside from the opaque credit market and the increase in asset prices, you’ve got borrowers paying above-market rates (student loan prices, fixed by the government, have not fallen despite near-zero real interest rates), widespread fraud, conflicts of interest between educators and regulators (which are predictably understaffed and underfunded), and a huge industry lobbying on behalf of the sector to keep things as they are. No surprise that the for-profits come off particularly badly in this respect; like the financial industry itself, they have raised a huge war chest to combat legislation, holding back efforts to make the industry more transparent and successfully fighting off numerous lawsuits and regulatory efforts. The money they throw at both marketing and lobbying has prompted a similar increase in spending in those areas within the nonprofit sector.

The student debt crisis is similar to the subprime crisis in another crucial way: predatory practices target the most vulnerable, often using the complex computer models that were employed to make risky mortgages look better on paper. The use of algorithmic models that rank colleges has led to an educational race where schools offer more and more “merit”-based rather than need-based aid to students who’ll make their numbers (and thus rankings on things like the US News and World Report “Best Colleges” list) look better. Profit-making institutions in particular troll for information on economically or socially vulnerable would-be students and find their “pain points,” as a recruiting manual for one for-profit university, Vatterott, describes it. The data can be found in any number of online questionnaires or surveys the students may have unwittingly filled out. As former quantitative trader turned social activist Cathy O’Neil describes in her book Weapons of Math Destruction, the schools can then use this to funnel ads to likely targets, including welfare mothers, recently divorced and out-of-work people, those who’ve been incarcerated, or even those who’ve suffered injury or a death in the family.

Why haven’t educational leaders been more vocal about this crisis? Perhaps because like regulators and politicians involved in the 2008 crisis, they too are victims of neoliberal ways of thinking. Universities have been duped by Wall Street into bad debt deals, just as public municipalities such as Detroit were. New research by the progressive Roosevelt Institute has found that seven of the eight largest universities in the state of Michigan, for example, have gotten involved in risky interest-rate swap deals in recent years, resulting in millions of dollars in unnecessary fees, further raising costs for students. The very idea that a large number of American universities are now involved in swaps that put them far out of their financial depth raises disturbing questions about how their balance sheets are being managed.

But the financialization of education and the debt bubble it has brewed raise a deeper question: Who, exactly, is higher education for? Who is it helping? While a four-year degree does ensure a job paying more than $15 an hour for most graduates, it is no longer a ticket to social mobility for the poorest. Among those who do graduate, debt loads can result in downward mobility. In her Solving the Student Loan Crisis, Johannsen cites a 2013 study by the liberal think tank Demos that found that the average student debt burden for a married couple with two four-year degrees ($53,000) actually led to “a lifetime wealth loss of nearly $208,000.”

Such a burden is a huge economic concern, and not just for millennials. The majority of college graduates in the US now move back home with their parents, often for several years. The class of 2016, the most indebted in history, cannot afford homes, cars, or other trappings of a middle-class life, which is an obvious problem for an economy of which 70 percent is accounted for by consumer spending.

How to fix things? The notion of making four-year college free for everyone is an attractive and politically popular idea (at least on the left), but it would require a debate over competing needs—for example defense—that would likely be stalled in Congress. What’s more, it would disproportionately benefit middle- and upper-class students and their families who actually can afford their debt loads. While a larger proportion of student debt today is being taken on by richer families and those with graduate degrees than, say, ten years ago, it’s important to remember that, as Akers and Chingos put it in Game of Loans, “what matters is not the level of debt, but the borrower’s ability to repay it.” Talking about the poor rarely garners votes, but that’s where the real social and economic benefits of free tuition are to be had. Some 14 million new jobs will be created between 2014 and 2024 in the US, but nearly all of them will require at least a two-year associate’s degree.

We should start by making community college the new high school—a basic necessity for every American—and work our way up the educational and economic food chain from there. We might think of paying for it by cutting billions in taxpayer aid to the for-profit sector. Not only is this sector responsible for 75 percent of the increase in debt defaults over the last decade, but as Angulo wisely points out in Diploma Mills, neoliberal principles should require such schools to compete in the free, rather than publicly subsidized, marketplace. Students who’ve been duped by predatory schools should be given debt relief and/or be allowed to refinance loans at preferential rates. The last thing we need for our economy or our politics is a repeat of the 2008 crisis, during which rich institutions were saved and borrowers got the shaft.

We should also make sure that the degrees being offered actually count for something—too many students are paying far too much for meaningless diplomas in sports marketing or business administration. The ideal of education—that students will be helped to realize their possibilities—is masked by many such courses. It will also require that the government have much better information about the outcomes of education, and much better analyses of them. A national database for higher education that has been proposed by President Obama might be a first step.

Reconsidering and reforming our system of higher education should move beyond debates about whether STEM skills—those promoted by the study of science, technology, engineering, and math—trump liberal arts. We need both, not only because it’s impossible to predict exactly what the jobs of the future will be, but also because critical thinking in any field is the most important measure of economic and civic success. We need a deeper shift in the American system—we must once again start to think about public education as an investment in our future as a nation, the way our leaders did forty years ago. It is, after all, an asset, rather than a cost, on our national balance sheet.



Seven years after the start of the revolution, rebel army leader Camilo Cienfuegos, center, and his fellow “barbudos” approaching Havana, 1959
Lee Lockwood/Taschen

The twentieth century yielded few moments of political euphoria as heady as Fidel Castro’s triumph on January 1, 1959. Against all odds, Castro and his bearded band defeated a US-backed dictator in the name of post-colonial independence, economic equality, free education and health care. It was for some time an eminently salable story, and in certain quarters still is. Schoolboys in Cuba relive the myth every New Year’s. Dressed as revolutionary barbudos (“bearded ones”), they parade in green fatigues and brandish toy guns to re-enact, with wild, fierce glee, a victory that took place long before their parents were born.

Those who aren’t Cuban schoolboys will be grateful for Castro’s Cuba, a compilation of photographs and text from 1959 to 1969 by the late Lee Lockwood, a US photojournalist who specialized in Communist regimes. For some, this sumptuous, large-format art book may be a gateway or flashback, bringing on a renewed rush of feeling. For others, these images of pale, awestruck, overjoyed, or grimly determined faces registering the thrill of Fidel’s proximity will constitute a further inoculation against all such revolutionary romance. Either way, the book is a supreme lesson in the populist power of the Castro cult.

Fidel Castro during “The Year of the Heroic Guerrilla,” dedicated to the memory of Che Guevara, Revolution Square, Havana, 1967
Lee Lockwood/Taschen

During a 1965 stay in Cuba, Lockwood was granted an unprecedented five-day interview with Castro. The tapes were transcribed by Cuban functionaries into 420 pages of Spanish typescript, which were packaged into bound volumes and delivered to Lockwood in his New York apartment by the Cuban mission to the UN. The interview was the core and principal raison d’être of the 1967 Castro’s Cuba, Cuba’s Fidel, the first edition of this text, which was accompanied by a collection of about one hundred small, grainy black-and-white photographs. Half a century later, the photos, many heretofore unpublished and all gorgeously produced (some in color), have expanded to take up most of the space; the interview is now far less relevant than the images.

Fidel Castro holding The Feeding of Cattle in Latin America, by Mexican author Jorge de Alba, 1964
Lee Lockwood/Taschen

Lockwood asks all the right questions, and presses his subject on a number of sore points, but the answers too often serve only to reveal how snugly both subject and reporter—despite his best efforts—are cocooned within a cult of personality. Fidel extols the glowing future of Cuba’s farming sector and makes a display of his own detailed study of the latest advances in agricultural science (a photo shows him chomping a cigar and holding up a book on cattle nutrition in Latin America). He spends many pages claiming that the island is going to become agriculturally self-sufficient. Never mind. In 2015, according to the UN World Food Programme, Cuba imported 70 to 80 percent of its domestic food requirements.

In his original foreword to the book, Lockwood called it a portrait of Cuba “with Fidel superimposed on the foreground”; his awareness that the force field of Fidelmania distorts everything was to his great credit. One of the century’s colossal jokes was playing itself out: a communal ideology that purported to refute the Great Man theory of history was, in practice, applying it as fully as any other political system ever had. “We think of the revolutionary state as an instrument of the power of the workers and peasants,” Fidel recites in response to the charge that he is a dictator with absolute power. But when challenged to cite a single instance of “intuitive communication” with the people that resulted in the leadership’s rectification of a mistake, Fidel’s got nothing. “Doubtless there must have been such mistakes, but offhand I don’t recall any,” he says.

The details Lockwood glimpses are the book’s most enduring and revelatory feature. Here’s Fidel’s personal photographer, the “satanically appealing” Alberto Díaz Gutiérrez, known to all as Korda, who wears tailor-made fatigues, races around Havana in a Porsche, and declares that he has “only two abiding passions in life: making love and photographing Fidel.” Here’s Celia Sánchez, one of the most powerful people in Cuba and a hero of the revolution. When she shows up at the remote hacienda on the Isle of Pines where Lockwood’s interview is being conducted, she polishes Fidel’s boots and takes charge of his personal housekeeping.

Fidel Castro’s photographer Alberto Díaz Gutiérrez, known as Korda, 1965
Lee Lockwood/Taschen

Lockwood photographs Fidel himself, and people who are looking at, touching, and photographing Fidel. He also photographs the photographs and other images of Fidel that proliferate before his lens as the years wear on: a tiny Fidel speaks into a microphone, jabbing a finger into the sky atop a platform papered with a vast picture of his own face. That same face is stamped on a bandanna wrapped around the shoulders of a man who listens as Fidel gives a speech. A torn flyer showing Fidel and Karl Marx is taped to the wall in a café behind an avid group of chess players.

This omnipresent iconography of Fidel had begun to vanish from the Cuban street even while the Maximum Leader’s every action was still news and he was regularly appearing on television, long before his retreat from public life due to ill health and old age. And that’s the most striking change Lockwood’s work reveals to an eye accustomed to what Cuba looks like now. The cars in these photos, already outdated, are still part of every Cuban cityscape, though their engines are now mostly Japanese diesel. The Havana buildings whose decay was already of concern to Lockwood are in most cases far more dilapidated now, though some of the most important have been magnificently restored. But today, the political figures obsessively represented throughout Cuba are the martyrs, those who died young: Che, Camilo Cienfuegos, and the Ur-martyr and founding father of Cuban political consciousness, José Martí. Theirs are the faces on the currency, on posters and flyers, T-shirts, and souvenirs. They are the ones on the billboards the government hagiographically erects in strategic places; they are the ones in the Plaza de la Revolución.

Classic twentieth-century dictators of all ideological stripes left statues of themselves in the central squares, to be gilded, pulled down, or both, by those who came after. Through more than half a century as the nation’s leader, Fidel never did. Lockwood’s photos now remind us of this: though he never learned to relinquish power, Fidel did somehow learn to disappear.

Lee Lockwood’s Castro’s Cuba is published by Taschen.


For an Economic Boycott and Political Nonrecognition of the Israeli Settlements in the Occupied Territories

To the Editors:

We, the undersigned, oppose an economic, political, or cultural boycott of Israel itself as defined by its June 4, 1967, borders. We believe that this 1967 armistice line, the so-called Green Line, should be the starting point for negotiations between the Israeli and Palestinian parties on future boundaries between two states. To promote such negotiations, we call for a targeted boycott of all goods and services from all Israeli settlements in the Occupied Territories, and any investments that promote the Occupation, until such time as a peace settlement is negotiated between the Israeli government and the Palestinian Authority.

We further call upon the US government to exclude settlements from trade benefits accorded to Israeli enterprises, and to strip all such Israeli entities in the West Bank from the tax exemptions that the Internal Revenue Service currently grants to American nonprofit tax-exempt organizations. The objects of our call are all commercial and residential Israeli-sponsored entities located outside the 1967 Green Line. It is our hope that targeted boycotts and changes in American policy, limited to the Israeli settlements in the Occupied Territories, will encourage all parties to negotiate a two-state solution to this long-standing conflict.

David Abraham
Kai Bird
Todd Gitlin
Bernard Avishai
Peter Beinart
Peter Brooks
Adam Hochschild
Arlie Hochschild
Michael Kazin
Deborah Meier
Deborah Dash Moore
Martin J. Sherwin
Michael Walzer
Edward Witten
and seventy others

To sign this letter, e-mail stopsettlements2016@gmail.com.
