Monthly Archives: January 2016

The Clinton System

Hillary Rodham Clinton and Bill Clinton during the seventh annual meeting of the Clinton Global Initiative (CGI), New York City, September 22, 2011
Daniel Berehulak/Getty Images

On January 17, in the final Democratic debate before the primary season begins, Bernie Sanders attacked Hillary Clinton for her close financial ties to Wall Street, something he had avoided in his campaigning up to that moment: “I don’t take money from big banks….You’ve received over $600,000 in speaking fees from Goldman Sachs in one year,” he said. Sanders’s criticisms coincided with recent reports that the FBI might be expanding its inquiry into Hillary Clinton’s emails to include her ties to big donors while serving as secretary of state. But a larger question concerns how Hillary and Bill Clinton have built their powerful donor machine, and what its existence might mean for Hillary Clinton’s future conduct as American president. The following investigation, drawing on many different sources, is intended to give a full sense of the facts about Clinton and not to endorse a particular candidate in the coming election.

It’s an axiom of Washington politics in the age of Citizens United and Super PACs that corporations and the very rich can channel almost unlimited amounts of money to candidates for high office to pave the way for later favors. According to the public service website Open Secrets, in the 2016 campaign, as of October, in addition to direct campaign contributions, Jeb Bush had at his disposal $103 million in “outside money”—funds raised by groups such as PACs, Super PACs, and so-called “dark money” organizations that work on behalf of a particular candidate. Ted Cruz had $38 million in such funds, Marco Rubio $17 million, and Chris Christie $14 million.

Yet few have been as adept at exploiting this big-money politics as Bill and Hillary Clinton. In the 2016 campaign, as of October, Hillary Clinton had raised $20 million in “outside” money, on top of $77 million in direct campaign contributions—the highest in direct contributions of any candidate at the time. But she and her husband have other links to big donors, and they go back much further than the current election cycle. What stands out about what I will call the Clinton System is the scale and complexity of the connections involved, the length of time they have been in operation, the presence of former president Bill Clinton alongside Hillary as an equal partner in the enterprise, and the sheer magnitude of the funds involved.

Scale and complexity arise from the multiple channels that link Clinton donors to the Clintons: there is the stream of six-figure lecture fees paid to Bill and Hillary Clinton, mostly from large corporations and banks, which have earned them more than $125 million in the fifteen years since Bill Clinton left office in 2001. There are the direct payments to Hillary Clinton’s political campaigns, including for the Senate in 2000 and for the presidency in 2008 and now in 2016, which had reached a total of $712.4 million as of September 30, 2015, the most recent figures compiled by Open Secrets. Four of the top five sources of these funds are major banks: Citigroup Inc, Goldman Sachs, JPMorgan Chase & Co, and Morgan Stanley. The Clinton campaign meanwhile has set a goal of raising $1 billion for her Super PAC for the 2016 election.

Finally there is the nearly $2 billion that donors have contributed to the Clinton Foundation and its satellite organizations since Bill Clinton left office. It may seem odd to include donations to the foundation among the chief ways that corporations and the super-rich can gain access to the Clintons, earn their goodwill, and hope for future favors in return. The foundation’s funds are mostly spent on unequivocally good causes—everything from promoting forestation in Africa and helping small farmers in the Caribbean to working with local governments and businesses in the US to promote wellness and physical fitness.

Moreover, not all donors to the Clinton Foundation and its affiliated institutions are corporate. The Bill and Melinda Gates Foundation, for example, ranks among the largest contributors to the Clinton Foundation, having made grants totaling more than $25 million since its inception, and with a special focus, to quote a 2014 Clinton Foundation press release, on a partnership “to gather and analyze data about the status of women and girls’ participation around the world.”

But among the largest contributors to the foundation are many of the same donors that have supported Hillary Clinton’s political campaigns and that have paid the Clintons six-figure lecture fees. For these mainly corporate donors, access to the Clintons may be as important as the purposes for which their donations are used. According to a February 2015 analysis of Clinton Foundation funding by The Washington Post, the financial services industry has accounted for the largest single share of the foundation’s corporate donors. Other major donors to the foundation have included US defense and energy corporations and their overseas government clients.

Former US presidents have long used charitable foundations as a way to perpetuate their influence and to attract speaking fees as a lucrative source of income. But the Clintons are unique in being able to rely on the worldwide drawing power of former president Bill Clinton to help finance the political career of Hillary Clinton—with the expectation among donors that as a senator, secretary of state, and possible future president Hillary Clinton might be well placed to return their favors. The annual meetings of the Clinton Global Initiative have provided a prime setting for transactions between the Clintons and their benefactors. Among the corporate sponsors of the 2014 and 2015 CGI conferences in New York City, for example, were HSBC, Coca-Cola, Monsanto, Procter & Gamble, Cisco, PricewaterhouseCoopers, the Blackstone Group, Goldman Sachs, Exxon Mobil, Microsoft, and Hewlett-Packard. For sponsorship of $250,000 or more, corporate executives attending the CGI meetings can enjoy special privileges up to and including direct access to the Clintons.

Frank Giustra speaks as former President Bill Clinton looks on during a press conference announcing the Clinton Giustra Sustainable Growth Initiative, New York, June 21, 2007
Shannon Stapleton/Reuters/Corbis

In a 2013 investigative article for The New Republic, Alec MacGillis described the annual CGI meeting as a complicated give-and-take in which CEOs provide cash for CGI projects in exchange for access to Bill Clinton. MacGillis focused on the activities of Douglas Band, a former low-level aide in the Clinton White House, who at the CGI meetings arranged favors for selected CEOs such as “getting them on the stage with Clinton, relaxing the background checks for credentials, or providing slots in the photo line.” At the CGI’s 2012 meeting it was Muhtar Kent, then CEO of Coca-Cola, who, The New York Times reported, “won a coveted spot on the dais with Mr. Clinton.”

Along with the Clinton Foundation, lecture fees have offered another way for interested parties such as Citicorp and Goldman Sachs to support the Clintons beyond direct campaign donations. Data drawn from the Clintons’ annual financial statements, the Clinton Foundation, and the banks themselves show that between 2001 and 2014 Bill Clinton earned $1.52 million in fees from UBS, $1.35 million from Goldman Sachs, $900,000 from the Bank of America, $770,000 from Deutsche Bank, and $650,000 from Barclays Capital. Since she stepped down as secretary of state in February 2013, Hillary Clinton has been earning comparable fees from the same sources. Of the nearly $10 million she earned in lecture fees in 2013 alone, nearly $1.6 million came from major Wall Street banks, including $675,000 from Goldman Sachs (the payments referred to by Bernie Sanders in the January 17, 2016 debate), and $225,000 each from UBS, Bank of America, Morgan Stanley, and Deutsche Bank.

Among the most striking and troubling aspects of the Clinton System are the large contributions corporations and foreign governments have made to the Clinton Foundation, along with Bill Clinton’s readiness to accept six-figure speaking fees from some of them, at times when the donors themselves had a potential financial interest in decisions being made at Hillary Clinton’s State Department. An investigation published in April 2015 by Andrew Perez, David Sirota, and Matthew Cunningham-Cook at International Business Times shows that during the three-year period from October 2009 through December 2012, when Hillary Clinton was secretary of state, there were at least thirteen occasions—collectively worth $2.5 million—when Bill Clinton received a six-figure speaking fee from corporations or trade groups that, according to federal government records, were at the time engaged in lobbying at the State Department. These payments to Bill Clinton in 2010 included: $175,000 from VeriSign Corporation, which was engaged in lobbying at the State Department on cybersecurity and Internet taxation; $175,000 from Microsoft, which was lobbying the government on the issuance of immigrant work visas; and $200,000 from Salesforce, a firm that lobbied the government on digital security issues, among other things. In 2011, these payments included: $200,000 from Goldman Sachs, which was lobbying on the Budget Control Act; and $200,000 from PhRMA, the trade association representing drug companies, which was seeking special trade protections for US-innovated drugs in the Trans-Pacific Partnership then being negotiated.

And in 2012, payments included: $200,000 from the National Retail Federation, which was lobbying at the State Department on legislation to fight Chinese currency manipulation; $175,000 from BHP Billiton, which was lobbying the State Department to protect its mining interests in Gabon; $200,000 from Oracle, which, like Microsoft, was lobbying the government over the issuance of work visas as well as over measures dealing with cyber-espionage; and $300,000 from Dell Corporation, which was lobbying the State Department to protest tariffs imposed by European countries on its computers.

During Hillary Clinton’s tenure as secretary of state, US defense corporations and their overseas clients also contributed between $54 million and $141 million to the Clinton Foundation. (Because the foundation discloses a range of values within which the contributions of particular donors might fall, only minimum and maximum estimates can be given.) In the same period, these US defense corporations and their overseas government clients also paid a total of $625,000 to Bill Clinton in speaking fees. In March 2011, for example, Bill Clinton was paid $175,000 by the Kuwait America Foundation to be the guest of honor and keynote speaker at its annual Washington gala. Among the sponsors were Boeing and the government of Kuwait, through its Washington embassy. Shortly before, the State Department, under Hillary Clinton, had authorized a $693 million deal to provide Kuwait with Boeing’s Globemaster military transport aircraft. As secretary of state, Hillary Clinton had the statutory duty to rule on whether proposed arms deals with foreign governments were in the US’s national interest.

Further research done by Sirota and Perez of International Business Times and based on US government and Clinton Foundation data shows that during her term the State Department authorized $165 billion in commercial arms sales to twenty nations that had given money to the Clinton Foundation. These include the governments of Saudi Arabia, Oman, Qatar, Algeria, Kuwait and the United Arab Emirates, all of whose records on human rights had been criticized by the State Department itself. During Hillary Clinton’s years as secretary of state, arms sales to the countries that donated to the Clinton Foundation ran at nearly double the value of sales to the same nations during George W. Bush’s second term. There was also an additional $151 billion worth of armaments sold to sixteen nations that had donated funds to the Clinton Foundation; these were deals organized by the Pentagon but which could only be completed with Hillary Clinton’s authorization as secretary of state. They were worth nearly one and a half times the value of equivalent sales during Bush’s second term.

Among the most important, and lucrative, business friendships the Clintons have formed through the Clinton Foundation and the Clinton Global Initiative has been that with Canadian energy billionaire Frank Giustra. A major donor to the foundation for many years, Giustra became a member of its board and since 2007 has been co-sponsor of the Clinton Giustra Sustainable Growth Initiative, or CGSGI. In turn, Bill Clinton’s political influence and personal contacts with foreign heads of state have been crucial to Giustra’s international business interests.

In September 2005, Bill Clinton and Giustra travelled to Almaty, Kazakhstan’s largest city and former capital, to meet with Kazakh President Nursultan Nazarbayev. At their meeting Clinton told Nazarbayev that he would support Kazakhstan’s bid to become chair of the Organization for Security and Cooperation in Europe (OSCE). The OSCE is a body with the responsibility for verifying, among other things, the fairness of elections among member states. According to multiple sources, including the BBC, The Washington Post, and The New York Times, Nazarbayev coveted this position for Kazakhstan, primarily as a mark of European diplomatic respectability for his country and himself.

Clinton’s endorsement of the Kazakh bid was truly bizarre in view of Kazakhstan’s ranking by Transparency International as among the most corrupt countries in the world—126th, on a par with Pakistan, Belarus, and Honduras. Freedom House in New York judges Kazakhstan to be “not free,” with Nazarbayev clocking up Soviet-era margins of victory of 90 percent or more in Kazakh presidential elections. Yet in a December 2005 letter to Nazarbayev following one of his landslide victories, Bill Clinton wrote: “Recognizing that your work has received an excellent grade is one of the most important rewards in life.” It is unclear what influence, if any, Bill Clinton’s support for Nazarbayev may have had in Kazakhstan’s efforts to lead the OSCE, but in 2007, after the United States gave its backing to the bid, Kazakhstan was chosen as the next chair of the OSCE, a position it assumed in 2010.

Possible reasons for Clinton’s support become clearer when we scrutinize the activities of Frank Giustra. In a January 31, 2008 article in The New York Times, Jo Becker and Don Van Natta, Jr., provided detailed evidence that Nazarbayev brought his influence to bear to enable Giustra to beat out better-qualified competitors for a stake in Kazakhstan’s uranium mines worth $350 million. In an interview with the Times, Moukhtar Dzakishev, then chair of the state-owned nuclear holding company Kazatomprom, confirmed that Giustra had met with Nazarbayev in Almaty, that Giustra had told the dictator he was trying to do business with Kazatomprom, and that he was told in return, “Very good, go to it.”

The deal was closed within forty-eight hours of Clinton’s departure from Almaty. Following this successful visit to Central Asia, Giustra donated $31 million to the Clinton Foundation. He then made a further donation of $100 million to the foundation in June 2008.

In an interview with David Remnick for a September 2006 New Yorker profile on Clinton’s post-presidency, Giustra described how his ties to Clinton could work for him and his interests. With Bill Clinton at that moment riding aboard his private executive jet for a journey across Africa (“complete with leather furniture and a stateroom,” according to The New Yorker), Giustra told Remnick that “all of my chips, almost, are on Bill Clinton. He’s a brand, a worldwide brand, and he can do things and ask for things that no one else can.”

The Clinton-Giustra connection became even more important in Colombia, where from 2005 onward Bill Clinton arranged a series of meetings between Giustra and then-president Álvaro Uribe, during which Clinton was frequently present. Giustra was already known in Colombia as the founder and backer of Pacific Rubiales, a Colombian oil company formed in 2003. In 2007, according to The Wall Street Journal, Bill Clinton invited Uribe and Giustra to meet with him at the Clintons’ home in Chappaqua, New York.

The meetings provided a way for Giustra to lobby Uribe and his administration on behalf of Pacific Rubiales at a time when the Uribe administration was seeking to end the dominance of the national oil company, Ecopetrol, and open up the sector to foreign investors. These contacts appear to have borne fruit for Giustra. In 2007 Pacific Rubiales signed a $300 million deal with Ecopetrol to build a 250 kilometer pipeline between Meta and Casanare provinces in Central Colombia. In the same year, Pacific Rubiales gained control of the Rubiales oilfield, Colombia’s largest.

Former Colombian President Álvaro Uribe and Bill Clinton in Bogota, Colombia, June 22, 2005
Miguel Solano/AFP/Getty Images

Uribe was a singular interlocutor for Clinton and Giustra. The Colombian leader had been viewed by the George W. Bush administration as a crucial ally in the War on Drugs, in which Colombia was often held up as a success story. Yet Uribe and his political allies had longstanding connections to the Colombian drug cartels. A 1991 intelligence report from the US Defense Intelligence Agency (DIA), declassified in August 2004, described Uribe as “a Colombian politician and senator dedicated to collaboration with the Medellin Cartel at high government levels…. Uribe was linked to a business involved in narcotics activities in the United States. [He] has worked for the Medellín cartel” and is “a close personal friend of Pablo Escobar Gaviria,” the longtime drug kingpin.

A 2011 report on events of 2010 by Human Rights Watch provides detailed evidence that Uribe was not free of this poisonous legacy when he was dealing with Clinton and Giustra. The report described President Uribe’s administration as “racked by scandals over extrajudicial killings by the army, a highly questioned paramilitary demobilization process, and abuses by the national intelligence service,” which participated in illegal surveillance of human rights defenders, journalists, opposition politicians, and Supreme Court justices. Hillary Clinton was warned about these human rights violations when, as secretary of state, she met with Bill Clinton, Giustra, and Uribe during a trip to Bogota, the Colombian capital, in June 2010. In an email message relayed to Secretary Clinton by the US Embassy in Bogota, Rep. Jim McGovern of Massachusetts warned that “while in Colombia, the most important thing the Secretary can do is to avoid effusive praise for President Álvaro Uribe.”

Hillary Clinton chose to ignore the warning. Addressing Uribe in the visit’s keynote speech, Clinton described him as an “essential partner to the United States” whose “commitment to building strong democratic institutions here in Colombia” would “leave a legacy of great progress that will be viewed in historic terms.” During her visit Clinton also affirmed her support for a US-Colombia free trade agreement, from which Giustra and other wealthy investors stood to benefit. This reversed her previous opposition to the agreement during her campaign for president in 2008, on grounds of Colombia’s poor human rights record, especially concerning the rights of labor unions.

Since the Giustra deal, there have also been complaints about the treatment of workers at Pacific Rubiales’s Colombian oil fields, which has been the target of numerous strikes and lawsuits by pro-labor groups. In an August 2011 speech to the Colombian Senate, Jorge Robledo, leader of the Polo Democrático Alternativo (Social Democratic) Party in the Colombian Senate, described the living quarters for Pacific Rubiales employees as “concentration camp-like,” with work shifts that sometimes exceeded sixteen hours a day for weeks on end, inadequate sanitary facilities and shared beds, and with the company relying on third-party hiring halls to avoid unionization and the payment of pension and healthcare benefits. (In April 2015, Peter Volk, general counsel for Pacific Rubiales, denied these allegations, saying that the corporation “fully respects the rights of its workers and demands from companies that provide services to it to also do so.”)

The record of the Clinton System raises deep questions about whether a Hillary Clinton presidency would take on the growing political influence of large corporate interests and Wall Street banks. The next president will need to address critical economic and social issues, including the stagnating incomes of the middle class, the tax loopholes that allow hedge-funders and other members of the super-rich to be taxed at lower rates than many average Americans, and the runaway costs of higher education. Above all is the question of further reform of Wall Street and the banking system to prevent a recurrence of the behavior that brought about the Great Recession of 2007-2008.

So far, Hillary Clinton has refused to commit herself to a reintroduction of the Depression-era Glass-Steagall Act, which Bill Clinton allowed to be repealed in 1999 on the advice of Democrats with close ties to Wall Street, including Robert Rubin and Larry Summers. The reintroduction of Glass-Steagall, favored by Bernie Sanders, would prevent banks from speculating in financial derivatives, a leading cause of the 2007-2008 crash. With leading Wall Street banks so prominent in the Clintons’ fundraising streams, can Hillary Clinton be relied upon to reform the banks beyond the modest achievements of the Dodd-Frank bill of 2010?


Revolution from Another Angle

Alexander Rodchenko: Pioneer Playing a Trumpet, 1930
Centre Pompidou, Paris/Estate of Alexander Rodchenko/RAO, Moscow/VAGA, New York

The infatuation of early twentieth-century Russian avant-garde artists with the Bolshevik regime is a well-known story. Like many romances of unequally matched partners, it ended badly. On one side, there was a fantastically talented, motley assortment of artists unattached to the upper echelons of pre-revolutionary art patronage and power; on the other was a revolutionary party that, to the surprise of many, seized power in a vast country. Concurrently, technical advances were making photography and film—relatively new mediums—ever more accessible and easier to produce. As Lenin famously noted, cinema was “the most important of the arts,” but both were of vital importance to a regime that needed to communicate with a largely illiterate population. The serendipitous confluence of technology, art, and politics in these fields is the subject of “The Power of Pictures: Early Soviet Photography, Early Soviet Film,” the current exhibition at the Jewish Museum in New York.

Russia’s new political masters wanted to create a new society and a “new Soviet man.” Many of the best-known avant-garde artists embraced this task with enthusiasm: some felt as though their art was the engine driving history. Artists like El Lissitzky, Rodchenko, Stepanova, Goncharova, Malevich, Mayakovsky, and Tatlin—to varying degrees influenced by Cubism, Futurism, and other western European movements, as well as by Russian folk traditions—had been making work that in different ways sought to redefine the very notion of art. In the cultural domain, part of the greater Bolshevik task following the Revolution was to create a new social infrastructure for producing, displaying, and distributing the visual arts. Private art collections were nationalized. Museums and exhibitions were reorganized; new art schools were formed, and there was much discussion about the very concept of the museum. Artists helped create state propaganda on myriad subjects, from politics to literacy to alcoholism and women’s rights.

A sequence from Victor Turin’s film Turksib (1929)

The blockbuster exhibitions of Russian avant-garde art hosted by the Guggenheim, MoMA, and many European museums over the last twenty to twenty-five years usually covered every possible medium and art form, including architecture and applied arts like furniture and fabric design. In contrast, the curators of “The Power of Pictures” have opted for an in-depth approach to the use of photography—from still-lifes to film—during the first years of the revolution. This allows them to explore an area in which Jewish artists were unusually prominent, and to demonstrate how the avant-garde’s legacy remained visible through photographic works long after all independent artistic organizations were banned in 1932.

Georgy Zimin: Still Life with Light Bulb, 1928-1930
Museum Ludwig, Cologne/Rosphoto/State Russian Centre for Museums and Exhibitions of Photography, St. Petersburg

The result is a rich, tightly curated survey of photo-based works that reveals the incredible stylistic diversity of the period. There are photogram “still-lifes” by El Lissitzky and Georgy Zimin that look like eerie x-rays of everyday objects. Some photographs, such as Boris Ignatovich’s Factory (1929) and Georgy Petrusov’s Dnepr Hydroelectric Dam (1934-1935) capture reality in images with such strong abstract elements as to be initially indecipherable; the curving and rectangular diagonals in Petrusov’s photo could be seen as a stylized hammer and sickle from a distance. Other photographs, like Alexander Grinberg’s soft-focus nude, Sitting Girl (1928), and Georgy Zelma’s Meeting at the Kolkhoz (1929), though quite different in subject matter, are equally atmospheric and verge on the painterly.

The pieces on view include familiar images of military parades, athletes, and sports events, as well as cityscapes taken from every imaginable angle by many different photographers. But there are also photo-reportage sequences from farms, factories, and the colossal Soviet engineering projects of the 1930s, such as the aforementioned Dnepr Hydroelectric Dam and the White Sea-Baltic Sea Canal. There are individual portraits of all sorts: snapshots of photographers at work, lying on the ground or holding cameras high overhead to take pictures of parades from surprising angles, which contrast dramatically with romanticized straight-on studio portraits by Moisei Nappelbaum of figures as incompatible as the poet Anna Akhmatova and Felix Dzerzhinsky, founder of the Soviet secret police.

Georgy Petrusov: Dnepr Hydroelectric Dam, 1934–35
Georgy Petrusov/Alex Lachmann Collection

A large selection of posters, photo magazines, book covers and examples of photomontage demonstrates the wide-ranging uses of photography, as well as cinema’s heavy influence on design principles during the first two decades of the Soviet state. The twelve silent films from the period that the curators included, many of them not widely known, offer viewers a rare, first-hand opportunity to observe the mutual influence these media had on each other. In addition to classics of Soviet cinema, like Sergei Eisenstein’s Battleship Potemkin and October: Ten Days that Shook the World, and Dziga Vertov’s frenetic, wildly innovative Man With a Movie Camera, they range from a comedy about a country girl and class conflict in an apartment building (The House on Trubnaya), to an expressionistic film based on Gogol’s story “The Overcoat.”

Esfir Shub’s ground-breaking documentary chronicle, The Fall of the Romanov Dynasty, uses actual pre-Revolutionary historical and stock footage from many sources to portray the class struggle that led to the Revolution of 1917. (For those unable to spend enough time at the museum to watch them all, most of the films can be streamed online.) As Jens Hoffmann notes in a very informative catalog essay on the film industry after the Revolution, avant-garde-influenced experimental films were not very popular with the public, which vastly preferred more traditional entertainment. But their subsequent influence on filmmaking worldwide is more than evident.

Photography was the perfect medium for promoting the new state order. Its use in newspapers, magazines, posters, journals, and books as something other than portraiture was a new phenomenon. It was by definition “modern” and “forward-looking”—a non-elitist medium for the age of mechanical reproduction. Though there were photo societies in the early years of the century, there was no “Academy of Photography” dictating aesthetic criteria for the medium, and there were no museums, patrons, or collectors of photography, nor much of a market as such. Photography was not generally considered a fine art form at the time (though interestingly, Anatoly Lunacharsky, the Bolsheviks’ commissar of culture, argued that it was). For this reason, in part, it was an open field that offered unprecedented professional opportunities for Jews.

In his catalogue essay, the Russian art historian Alexander Lavrentiev, grandson of the artists Varvara Stepanova and Alexander Rodchenko, gives a nuanced view of the complex situation in which Soviet photography developed, and provides a map of the period’s “photo landscape.” Photography was dominated by three groups or tendencies, whose aesthetics mirrored, to some extent, the spectrum of political factions on the post-revolutionary cultural stage. None of these groups opposed the Revolution, however; initially, in fact, most artists and the intelligentsia supported the regime.

Georgy Zelma: Three Generations in Yakutsk, 1929
Museum Ludwig, Cologne/Rosphoto/State Russian Centre for Museums and Exhibitions of Photography, St. Petersburg

On the aesthetic “right,” the “pictorialists” more or less represented an older generation of traditional studio portraitists and landscape photographers, including Moisei Nappelbaum and Alexander Grinberg, who were often criticized for overly aestheticizing their subjects and approaching photographs like paintings. The “leftists,” or photographic avant-garde, coalesced around Alexander Rodchenko, eventually forming the group “October,” which included Boris Ignatovich, his sister, Olga, and wife, Elizaveta Ignatovich, Eleazar Langman, and Mikhail Kaufman (the brother of filmmaker Dziga Vertov).

In the early 1920s, Rodchenko abandoned painting for Constructivist design and photomontage, much of it done in collaboration with the poet and artist Vladimir Mayakovsky. As he moved increasingly into “straight” photography, he rejected the complacency of what he termed the “belly-button perspective” of traditional images, be they paintings or photographs. Around 1925, influenced by the Bauhaus photographer Moholy-Nagy, Rodchenko began shooting photographs from odd and unorthodox angles in order to disrupt the viewer’s expectations and force a new perception of reality. (He was later attacked in print for imitating Europeans who were interested exclusively in the formal characteristics of images rather than their social utility.) He produced a highly influential series on his apartment building in central Moscow; some pictures look straight down from a high balcony at the heads of pedestrians in the courtyard; one veers up at a direct vertical along the outer wall to show a man standing on a fire escape. He photographed a young Pioneer scout playing a trumpet from an angle directly below the boy’s chin, emphasizing the powerful outward and upward sculptural thrust of the head and instrument. This now iconic image was criticized at the time for distorting reality and the human form.

Finally, ROPF (Russian Society of Proletarian Photographers) constituted the political and aesthetic “center.” It included many young Jewish photographers, most of whom remained prominent through the 1930s, World War II, and into the post-war period—Max Alpert, Arkady Shaikhet, Semyon Fridlyand, Georgy Petrusov, Mikhail Prekhner, and Yakov Khalip, among them. They often had no formal training, but learned on the job; unlike many of the photographers associated with October, they were not artists who had turned to photography. ROPF was dedicated to photojournalism that supported the ideological goals of the state with literal narratives, and its most enduring contribution to Soviet photography was the “photo essay.” The first and best known of these was Alpert and Shaikhet’s “Twenty-Four Hours in the Life of the Filippov Family” (1931), which selected photos taken by three photographers over a period of five days in order to represent the most “typical” aspects of a worker’s everyday family life: the children at nursery school, the elder Filippov reading a newspaper and attending a political education class, his wife learning to read and shopping for food, the family moving into a new apartment. This genre became increasingly popular as a propaganda tool. The masses had no trouble understanding it, and editors could easily control the narrative or message.

Despite the existence of “leftist,” “centrist,” and “right” wings in photography, and the use of aesthetic polemics to gain advantage in political battles, much of the actual work in the show belies the notion that the lines delineating these groups were very clearly drawn. Lavrentiev views the landscape of early Soviet photography as having various peaks and hills, stylistically varied cities, towns, and villages, all connected to one another by well-traveled bridges and walkways, and all fed by the same river running through a valley alongside them: the major photo journals and agencies, in particular Sovetskoe foto (later Proletarskoe foto).

Alexander Rodchenko and Varvara Stepanova: interior spread of USSR in Construction, published in Moscow, 1935 (Museum of Fine Arts, Houston/Estate of Alexander Rodchenko/RAO, Moscow/VAGA, New York)

This rather bucolic description helps explain a striking feature of Soviet photography in the first decade and a half after the Revolution. Perhaps because of the relative newness of the medium, its practitioners were influenced by and borrowed from one another, both in style and content, to a degree unparalleled in the other arts. To give only one example, Arkady Shaikhet’s 1928 photos of the staircase of a new apartment building, seen from directly above and below (the former appeared on the cover of Ogonyok in 1928), are clearly indebted to Rodchenko’s perspectives. And, as Lavrentiev notes, though Rodchenko’s “angled photographs generated opposition…[they] were infectious and provoked a desire to imitate.”

Members of ROPF made free use of the unusual perspectives generally associated with the October group when it suited their purposes, and both the center and the left groups photographed the same subjects. Modernist compositional principles brought photography closer to cinema, with its mobile, constantly changing viewpoints, as Lavrentiev writes, a fact amply illustrated by the multiple perspectives and focal distances used to striking effect in film posters and book design. Echoes of Anton Lavinsky’s photomontage poster for The Battleship Potemkin can be seen in the juxtaposition of sailor and cannon in Yakov Khalip’s 1938 photograph On Guard. Nearly all of these photographers, regardless of their affiliations, worked together at one time or another on publications like The USSR in Construction, much admired for its innovative use of avant-garde photomontage principles well into the late 1930s.

That “desire to imitate” probably extended to painting, as well: though painting is beyond the scope of the exhibition, throughout “The Power of Pictures” one catches tantalizing glimpses of similarities between photographs, cinematic images, and the paintings of the period. It is tempting, for instance, to see Olga Ignatovich’s dynamic 1930s photo of a soccer goalie blocking a ball in mid-air as the inspiration for Alexander Deineka’s stunning, bannerlike (1.2 × 3.5 meters) oil on canvas, Goalkeeper, 1934. The influence of the avant-garde is obvious in the work of almost every member of ROPF, and in a good deal of the cinema of the time. Victor Turin’s film Turksib (1929), for instance, chronicles a real-life, proletarian epic, the construction of the Turkestan-Siberian railway, but does so in a rhythmic, patterned visual language that owes a great deal to Sergei Eisenstein and Rodchenko.

Throughout the 1920s, multiple political and artistic factions scrambled for limited resources and the ideological approval needed to obtain them. For a while, avant-garde artists occupied positions of power. But as the 1920s progressed, less and less aesthetic diversity was tolerated; by the time of the first five-year plan (1928), which marked the end of NEP and the economic experiment with small private businesses, the ideological polemics and name-calling among artists had become quite fierce and had serious consequences in terms of access to commissions, jobs, and patronage. Experimentation of all sorts was being skewered as “bourgeois” and “formalist”—as “art for art’s sake” that was imported from the West and had no place in the new Soviet world. Ironically, many of the terms used in political attacks against avant-garde artists were similar to those that they had once used in their endeavor to overturn traditional aesthetics before the Revolution.

Arkady Shaikhet: Red Army Marching in the Snow, 1927–1928 (Estate of Arkady Shaikhet/Nailya Alexander Gallery)

As Stalin consolidated political power, all of the arts, from painting to poetry, were reined in. The “April Decree” of 1932 put an end to the multiplicity of independent artistic organizations (avant-garde or otherwise) and set the stage for herding all artists, writers, composers, and architects into national “creative unions.” In 1934, Boris Ignatovich photographed a rather glum Pasternak and Korney Chukovsky at the First Congress of the Union of Soviet Writers, where “Socialist Realism” (a term whose meaning could conveniently be stretched or contracted according to political necessity) was declared the official art of the USSR. The “belly-button perspective” had won out—but it was in photography in its many guises that the perceptual innovations of the Russian avant-garde lived on the longest, to leave an indelible and challenging legacy to generations of artists around the world.

“The Power of Pictures: Early Soviet Photography, Early Soviet Film” is on view at the Jewish Museum in New York through February 7. It will travel to the Frist Center for the Visual Arts in Nashville, Tennessee, March 11–July 4, and to the Joods Historisch Museum in Amsterdam, July 24–November 27. The catalog, edited by Susan Tumarkin Goodman and Jens Hoffmann, is published by Yale University Press.


Bizet Wins at the Met

Bizet, who was twenty-four at the time, was handed the libretto on short notice and can hardly be blamed for its thinness, pressured as he was to complete the score within a few months. In the event he turned its shortcomings into virtues. The blank spaces where characterization and dramatic development might have been became a generously open field for self-sufficient lyricism. The plot, such as it is, can be reduced to an overlapping set of mechanically laid-out conflicts: the priestess Leila (sworn to preserve her chastity to secure divine protection for the pearl fishers) is torn between her religious vow and her uncontrollable passion for the hunter Nadir, Nadir is torn between his passion for Leila and his friendship with the fisherman Zurga, and Zurga is torn between his friendship with Nadir and his jealous rage at being rejected by Leila. 

After Leila and Nadir are caught in a forbidden embrace and condemned to death, it will be up to Zurga—in rueful atonement for having stirred up the pearl fishers to demand the execution of the lovers—to help the couple escape and offer himself to the vengeance of the mob. One might say that The Pearl Fishers is in some way “about” passion and friendship and, perhaps, the perils of superstitious belief, but the same could be said about the average Jon Hall vehicle. The opera’s true subject is its own music, which is another way of saying that it is indeed about passion. 

The Ceylon on which The Pearl Fishers is set is as much an alternate fantasy world as any of the exotic locations in Rameau’s Les Indes Galantes (1735), updated with a very slight sprinkling of mid-nineteenth-century ethnographic pretensions. Essentially it’s an empty arena where elemental passions can surge unrestrainedly (until they slap up against the limits of an ominous and intransigent religion), a zone airily unencumbered by the tedium of European details and procedures and oppressive furniture. Penny Woolcock’s production, which originated at the English National Opera, fills out that zone by bringing the opera into something resembling the present. For the ruined pagoda and the scattering of bamboo huts and palm trees indicated in the libretto, she has substituted a populous waterfront shantytown whose pagoda is flanked on the far shore by a billboard advertising pearls. That it’s the modern world can be deduced from the blue jeans and wristwatches and refrigerators in evidence here and there, interspersed among more traditional accoutrements and structures. 

The updating has oddly little effect on the import of the story; the central romantic triangle and the condemnation of the lovers by a punitive religious law are not significantly reshaped by being moved out of their timeless setting. The novel and unavoidable association provoked by this modernizing is the thought of the rising sea levels that in the era of global warming put such island communities in jeopardy. The association is underscored by the projection, later in the opera, of a photographic image of a giant wave, prompting memories of the tsunami of 2004. It is perhaps an attempt to reframe the devastating storm that befalls Zurga and his people—a disaster that for the Paris audience of 1863 was no more than the occasion for a frisson of romantic sublimity—by placing it under the sign of a genuine impending catastrophe. On the other hand, the water ballet with which Woolcock opens her production over the prelude, with its rhythm of slowly pulsing tide, in which through a scrim we are shown a convincing illusion of divers gliding down through the sea’s depths and then shooting up toward the light of the surface, seems to evoke and surpass the spectacular theatrical effects of Bizet’s day. As the curtain rises the orchestra picks up the pace for the opening chorus, a vigorous collective song to drive away evil spirits, whose thundering kettledrum accompaniment and hammerlike refrain—Chassez, chassez les esprits méchants!—tune us instantly into a world of unbridled folk energies. As he would do even more overpoweringly in Carmen, Bizet finds a choral language to express a crowd’s excitement, and already hints at underlying menace.

Pieces of exposition are quickly laid down, as Zurga (Mariusz Kwiecien) is elected village chief by acclamation and his friend Nadir (Matthew Polenzani) returns after a year’s absence boasting in quick and jaunty fashion about his encounters with tigers, jaguars, and panthers; then most of the chorus wanders off so that the two men can launch into their celebrated tenor-baritone duet “Au fond du temple saint,” of an unalloyed gorgeousness bound to melt most resistance. The melting of resistance is indeed the subject at hand, as the two men revive their shared memory of seeing for the first time, thanks to a parted veil, the divinely beautiful priestess Leila (Diana Damrau). The recitative that precedes the duet is almost ominous in its descending lines, as if to signal the entry into a forbidden precinct. Stepping past that moment of transition we are abruptly in the heart of the opera, where constrictions fall away and we are encouraged simply to bathe in the pleasures of tone and its ornamentations, accented by flute and harp, prolonging an emotion in luxuriant indulgence. 

The superb performance by the two singers produced the necessary sense of a suspension of time and of ordinary law. The erotic is fused with the religious aura in a fine intertwining of purity and perverse desire, an intertwining in which the music makes itself thoroughly at home. (The exotic setting conveniently alleviates this mixing of sacred and profane.) External dramas will be little more than distractions in The Pearl Fishers; its deepest pleasures are in the moments when all that narrative can be kept at bay, erased by a lyric outpouring fortified by distinctive lashings of orchestral coloring decisively applied by conductor Gianandrea Noseda. For most of its length—or at least for most of its first two acts, before the demands of plot resolution take over in the last—the opera is not a series of dramatic encounters but rather a dreamy succession of states of being. 

It is music whose ostensible subject is emotion recollected. In their duet Zurga and Nadir anticipate what they will one day remember when everything else has faded. In Nadir’s great aria “Je crois entendre encore”—sung with rare beauty by Polenzani—he relives his memory of what he once heard Leila singing. His vocal lines become in effect a description of hers, and his singing the sound of power held back, self-consumed, the highest note not defiantly projected but tapering off into internalized silence. 

In her prayer to Brahma, just at the moment when she is about to let everything slide into crisis for love of Nadir, Leila sings about her own song, her own voice as it rides a rhythm suggesting some seaborne lullaby. It is the sound of her voice, as the chorus affirms, that will avert danger, and as sung by Damrau her trilling elaborations are beseeching flares launched into darkness. Each of the principal singers seemed perfectly attuned to such implications; the strength and delicacy with which each note was shaped and each line followed, as if some fresh nuance of emotion were just then being uncovered, affirmed the sincerity by which the music triumphs over any insufficiencies of the libretto. 

Utter conviction is the only way to go with such material. There was at least one dodgy moment when Leila’s sudden recognition of Nadir in the welcoming crowd—Ah! c’est lui!—set off a minor wave of giggles. For a second the whole illusion teetered, with the danger that the dramaturgical underpinnings would be seen for what they were, triggering the kind of nervous distancing laughter one sometimes hears at screenings of old movies. The music saved matters, as it does most of the time in The Pearl Fishers, establishing as it does a place of refuge from the drama itself, a paradise of secrecy in which there is unhindered freedom to contemplate the woman behind a veil, the woman seen at night when she thinks herself alone, the woman overheard singing her innermost thoughts. An air of almost trance-like passivity surrounds the lovers drawn helplessly to each other. They are incapable of doing anything at all except sing about what they are feeling. If the words they sing are often a distillation of poetic banalities of the period—au sein de la nuit/transparent et pur/comme dans un rêve (in the heart of the night/transparent and pure/as in a dream)—Bizet finds a way to attach a real value to them. 

In the opera’s last stretch—from the moment Leila and Nadir are apprehended by the outraged high priest Nourabad (Nicolas Testé)—the magic dissipates somewhat as the indignation of the mob and Zurga’s jealous fury take charge of events. Leila and Nadir are taken prisoner, and Leila pleads with Zurga for her lover’s life in a duet that gives them plenty of opportunities for dramatic singing but whose vehement style is at odds with the spell created earlier. Zurga is described in the Met program as a “complex” character, but as one watches him lurch from one emotion to its contrary according to the schematic promptings of the plot, he begins to seem merely erratic, despite the best efforts of Mariusz Kwiecien to create a coherent portrait of a petty local despot recovering his truer self at last. 

In fact the resolution imagined by the librettists—Zurga sets the offstage dwellings of the pearl fishers on fire in order to get them out of the way so he can liberate the captives awaiting execution—is notably hurried and unconvincing. Here the conflagration was visible, as the settlement at the rear of the stage went up in flames, creating a suitable contrast to the watery imagery of the opening. Stagecraft comes to the rescue of a slightly muffled ending, as Zurga is left alone to confront the vengeance of the frustrated mob, while the lovers, vindicated despite having fallen into “the accursed snares of love,” make their getaway to some other island, some other opera.


A Different T.S. Eliot

T. S. Eliot, 1956; photograph by Cecil Beaton from Mark Holborn’s book Beaton: Photographs, just published by Abrams with an introduction by Annie Leibovitz (Cecil Beaton Studio Archive/Sotheby’s)


For much of the twentieth century, T.S. Eliot’s pronouncements on literature and culture had the force of a royal command. “In the seventeenth century,” he wrote, “a dissociation of sensibility set in, from which we have never recovered.” Probably no such separation of thought from feeling ever occurred, but sober historians analyzed it as if it were as real as the Industrial Revolution. “Poetry is not a turning loose of emotion,” Eliot wrote, “but an escape from emotion; it is not the expression of personality, but an escape from personality.” Two generations of critics worked to do his bidding by banishing from the canon poets like Shelley whom Eliot had judged insufficiently impersonal.

Eliot’s prose borrowed its sober and severe authority from the intensity and power of his poetry. His long poems The Waste Land (1922) and Four Quartets (1943), like many of his shorter ones, evoked a synthesizing vision of public and private disorder: the emotional and erotic failures of individual persons and the chaotic anomie of contemporary Europe, individuals and societies both thirsty for life-giving waters, both waiting for the transforming commandments that, in The Waste Land, “the thunder said.” No other modern writer had his power to portray, simultaneously and in sharp focus, the disasters of both the inner world and the outer one.

When Eliot died in 1965 much of his authority died with him. Academic and journalistic opinion agreed that he had hoped public disorder could be resolved by imposing the kind of order favored by authoritarians; that, as a WASP from an old New England family, he felt superior to Jews and other outsiders to the high culture he embodied; that he held repugnant attitudes about women and sex. His detractors wrote entire books setting out the evidence against him, while his defenders replied with books that denied the evidence or explained it away.

Robert Crawford’s Young Eliot, the first volume of a two-part biography, and The Poems of T.S. Eliot, edited and massively annotated by Christopher Ricks and Jim McCue, make it possible to see more deeply than before into Eliot’s inner life, to perceive its order and complexity in new ways, and to recognize that his detractors and his defenders were responding to attitudes that Eliot condemned in himself and to beliefs that his poems simultaneously expressed and rebuked.


The first sixteen years of Eliot’s life, from his birth in St. Louis in 1888 until the year he attended Milton Academy near Boston before entering Harvard, are almost entirely undocumented. All that survive are two letters and a few numbers of a handwritten family magazine he began when he was eleven. More convincingly than earlier biographies, Young Eliot fills in the blanks by identifying books and events from Eliot’s childhood that he later transformed into poetry. The disastrous St. Louis cyclone of 1896, for example, gave him the apocalyptic imagery heralding The Waste Land’s “damp gust/Bringing rain.”

Other phrases in the poem had roots in Eliot’s prep school reading: James Russell Lowell’s “the river’s shroud” became Eliot’s “the river’s tent.” Eliot got his adult reputation for vast learning from the dazzling variety of quotations in The Waste Land. Crawford notes that many of these were remembered from one of his required school texts, Francis Palgrave’s anthology The Golden Treasury.

A voice in The Waste Land greets someone on a London street as “Stetson,” as if identifying him with his hat. Crawford reports that Eliot’s mother belonged to a ladies’ club addressed by a Mrs. Stetson. Eliot printed a poem under the pseudonym Gus Krutzsch, a name that also appears in an early draft of The Waste Land; one of Eliot’s St. Louis schoolmates was named August R. Krutzsch.

Crawford explores Eliot’s ambivalence toward his distinguished Anglo-American family, which had also produced President Charles William Eliot of Harvard, who later kept urging him to take an academic post there. Eliot took pride in his manners and class, but felt alienated from his parents’ earnest nineteenth-century piety. He was nostalgic about his English origins; the “dissociation of sensibility,” some readers observed, coincided with the Eliots’ ancestors’ voluntary uprooting from England to America. But he also felt a lifelong nostalgie de la boue, starting with stories he wrote about hobos in his family magazine, later in his half-appalled fascination with the violent world of Boston Irish boxers and barkeeps in his “Sweeney” poems and the tough-guy milieu of his unfinished play Sweeney Agonistes.

Crawford reports that Eliot was a graceful dancer and expert sailor but was self-conscious about his protuberant ears and a congenital hernia that required him to wear a truss. He asked himself in Ash-Wednesday (1930), “Why should the agèd eagle stretch its wings?” (He was around forty at the time.) The children of a friend had “nicknamed him ‘The Eagle’ because of the size of his nose.” His poetry tended to portray the human body as separate parts, not as a whole. From “Preludes”: “all the hands”; “yellow soles of feet”; “short square fingers”; “eyes/Assured of certain certainties.” From “The Love Song of J. Alfred Prufrock”: “The eyes that fix you in a formulated phrase”; “Arms that are braceleted and white and bare.” From The Waste Land: “Exploring hands encounter no defence”; “My feet are at Moorgate, and my heart/Under my feet.” Even his image of primitive unconsciousness in “Prufrock”—“I should have been a pair of ragged claws/Scuttling across the floors of silent seas”—was an evocation of body parts, not something whole like W.B. Yeats’s chestnut tree that will not divide into “the leaf, the blossom or the bole.” And in The Waste Land his image of wished-for erotic satisfaction was another collage of body parts: “your heart would have responded/Gaily, when invited, beating obedient/To controlling hands.”

The young Eliot concealed his physical anxieties with the obscene heartiness of his comic (or would-be comic) verses about King Bolo and his queen, which he sent first to laddish college friends, later to connoisseurs of scatological bawdry like Ezra Pound. Crawford writes reverently of Eliot’s poetry and critical prose; but he adds critical distancing comments whenever he detects “a hint of misogyny or homophobia,” as if to reassure censorious readers that he shares their sense of the moral urgency of scolding dead people.

At Harvard Eliot loafed through his first year, was placed on academic probation, and only became serious about his classes when he began studying ancient and modern philosophy and languages. Shortly before he graduated, he wrote a two-stanza poem, “Silence,” which he never published, about an experience “for which we waited,” one that overwhelms his consciousness of everything else. The second stanza reads:

This is the ultimate hour
When life is justified.
The seas of experience
That were so broad and deep,
So immediate and steep,
Are suddenly still.
You may say what you will,
At such peace I am terrified.
There is nothing else beside.

Crawford suggests that this was prompted by Eliot’s recent hospitalization for scarlet fever, and describes it merely as a poem that “registers emotional disturbance” about something “fearful.” But the poem describes a moment of religious awe, a terrifying vision of the peace that passeth understanding. Eliot recalled it in the moments of visionary intensity in The Waste Land and Four Quartets:

       my eyes failed, I was neither
Living nor dead, and I knew nothing,
Looking into the heart of light, the silence.
And the lotos rose, quietly, quietly,
The surface glittered out of heart of light.

W.H. Auden, drawing inferences from the poetry, told friends that Eliot had mystical visions of which he never spoke. (W.B. Yeats never had one, Auden added, but talked about them all the time.) Between 1911 and 1914, when Eliot was a graduate student in philosophy at Harvard, reading Buddhist and Hindu scriptures, he focused increasingly on religions more visionary and demanding than his parents’ Unitarianism, more committed to a reality that was otherworldly and absolute.

Crawford records with subtle sympathy Eliot’s failed love for his Boston contemporary Emily Hale, “intelligent, vulnerable, strictly brought up and defensively ‘proper.’” Eliot was devastated when he made his feelings clear and she gave him no possibility of hope—although in fact she was secretly in love with him, and remained so all her life. Eliot seems to have addressed her, also secretly, in lines in The Waste Land that recalled his inner surrender to her: “My friend, blood shaking my heart/The awful daring of a moment’s surrender…” The notes in the new Poems of T.S. Eliot record Eliot’s correction of a French translation from “Mon ami” to “Mon amie,” triple-underlining the feminizing “e.”

Eliot left America for England in 1914, and ignored pleas for his return sent by his family and the Harvard philosophy department. In 1915, in a state of erotic despair, and apparently still a virgin, he impulsively married the flirtatious, neurotic Vivien Haigh-Wood, and descended into a miserably entangling marriage, “sexually awkward” (as Crawford reports) for both, constantly shaken by medical and psychological crises. Eliot seems to have suffered from recurring impotence; Vivien had an affair with Bertrand Russell. The crises culminated in Eliot’s mental breakdown in 1921—“entering the whirlpool,” in The Waste Land’s phrase—followed by a tentative, half-achieved sense of renewal and recovery. He asked near the end of The Waste Land, “Shall I at least set my lands in order?” Eliot spent the next few decades—in Four Quartets and his books The Idea of a Christian Society (1939) and Notes Towards the Definition of Culture (1948)—trying to imagine what that order might be like.


Shortly after the Munich Agreement of September 1938, when Britain and France capitulated to Hitler’s territorial demands in Central Europe, Eliot wrote in The Idea of a Christian Society:

I believe that there must be many persons who, like myself, were deeply shaken by the events of September 1938, in a way from which one does not recover; persons to whom that month brought a profounder realization of a general plight…. The feeling which was new and unexpected was a feeling of humiliation, which seemed to demand an act of personal contrition, of humility, repentance and amendment; what had happened was something in which one was deeply implicated and responsible.

He was repenting personally for the civilization that had given him his early advantages and in which he had now become a literary eminence:

It was not…a criticism of the government, but a doubt of the validity of a civilization. We could not match conviction with conviction, we had no ideas with which we could either meet or oppose the ideas opposed to us. Was our society, which had always been so assured of its superiority and rectitude, so confident of its unexamined premises, assembled round anything more permanent than a congeries of banks, insurance companies and industries, and had it any beliefs more essential than a belief in compound interest and the maintenance of dividends?

This is not the language of a fascist sympathizer. Eliot was mistaken for one because he publicly doubted the value of democracy, but his doubts were focused on its inability to give a moral and intellectual answer to the force-worship of the dictators:

The term “democracy,” as I have said again and again, does not contain enough positive content to stand alone against the forces that you [readers] dislike—it can easily be transformed by them. If you will not have God (and He is a jealous God) you should pay your respects to Hitler or Stalin.

In the world of practical politics, a choice between God and the dictators seems impossibly stark, but Eliot, as always in his political writings, was thinking of the opposed societies of blessed and damned souls in Dante’s Commedia, who made an equally stark choice between an ascent through Purgatory to Paradise and a descent into the prison-state of Hell.

Whatever flaws he found in democracy, Eliot never imagined that any traditional, hierarchical political system knew any better how to “have God.” “To identify any particular form of government with Christianity,” he wrote, “is a dangerous error: for it confounds the permanent with the transitory, the absolute with the contingent.” Some years earlier, Eliot told Bertrand Russell that he wanted to write about “Authority and Reverence,” about some form of religious authority that did not rely on discredited political systems: “There is something beneath Authority in its historical forms which needs to be asserted clearly without reasserting…forms of political and religious organization which have become impossible.” He wrote in an essay: “The ideas of authority, of hierarchy, of discipline and order, applied inappropriately in the temporal sphere, may lead us into some error of absolutism or impossible theocracy.”

Eliot’s detractors cite his praise for Charles Maurras, whose Action Française movement was monarchist, nationalist, and thuggishly anti-Semitic. Crawford quotes Eliot addressing Maurras in a letter as “Cher Maître”; but two hundred pages later, he quotes Eliot warning English readers against Maurras’s “intemperate and fanatical spirit” in his campaign to protect French culture against foreign influences.

Crawford makes no comment on this apparent contradiction, but the solution to it may be found in Eliot’s syllabus for an adult education course he taught on modern French literature. Under Maurras’s name and the name of his early ally Pierre Lasserre, the syllabus briefly characterizes their work: “Their reaction [to democracy] fundamentally sound, but marked by extreme violence and intolerance.” Eliot made an absolute distinction between, on the one hand, the faults and frailties of democracy and, on the other, the “extreme violence” and “fanatical spirit” of every political movement that sought to overturn it. Eliot said almost nothing about the democratic traditions of equality and rights because he thought real equality was possible only in a society built on the conviction that every soul is equal before God, and individual rights could be fulfilled only in a society like Dante’s Paradise where everyone can say, freely and gratefully, “In His will is our peace.”

T. S. Eliot; drawing by David Levine

Eliot made careful use of his patrician manners to advance his career, but his poems kept insisting that his social superiority left him just as distant as anyone else from the remote Absolute that, after his conversion to Anglicanism in 1927, he called God. The section titled “A Game of Chess” in The Waste Land portrays the emotionally sterile upper-class marriage of a scarcely disguised nervous Vivien and silent Eliot in an expensively decorated drawing room, followed by a monologue in a pub about the degraded marriage of a lower-class couple named Albert and Lil. The point is that the two marriages are equally sterile, that the social status and artistic refinement that Eliot tried to value in himself were futile defenses against his humiliating sense of spiritual failure.

In the same way, a poem that almost everyone reads as a statement of anti-Semitic disdain, “Burbank with a Baedeker: Bleistein with a Cigar,” is Eliot’s rebuke against his own pharisaical fantasy that an educated WASP is somehow closer to God than even the coarsest caricature that he could imagine of a Jew. Cigar-smoking Bleistein is a mere congeries of body parts and cultures: “A saggy bending of the knees/And elbows, with the palms turned out,/Chicago Semite Viennese.” Yet the WASP Burbank—Eliot’s self-portrait—has nothing better to claim for himself: he gets culture secondhand from a Baedeker guidebook (Eliot wrote careful notes in his own Baedekers); he is sexually impotent (“the God Hercules/Had left him”) when seduced by the diseased Princess Volupine (Vivien in aristocratic disguise), with her “blue-nailed, phthisic hand”; and he is reduced to passive aesthetic nostalgia at “Time’s ruins.”

The degree to which a writer shares the prejudices of his family, his class, and his culture is less telling than the degree to which he is ashamed of them. Ezra Pound was defiantly unashamed of his prejudices. Eliot was more than ashamed: he was penitential. His poems are elliptical confessions of attitudes that he knew he must reject, although he also knew that, in Montaigne’s words, “we cannot rid ourselves of that which we condemn.” This may help to explain why he continued to reprint “Burbank” and “Gerontion”—another disguised self-portrait of someone spiritually sterile who imagines himself superior to “the Jew”—despite objections from readers and reviewers; he refused to withdraw what was in effect a penitential confession because other people disapproved of the faults he had confessed.

Around 1951, at a London reading with Eliot and many other poets in attendance, one of the writers on the program, Emanuel Litvinoff, recited a poem denouncing Eliot’s anti-Semitism: “I am not one accepted in your parish/Bleistein is my relative.” Other poets shouted in Eliot’s defense. Meanwhile, an observer remembered, “Eliot leaned forward, his head in his hands, muttering over and over, ‘It’s a good poem, it’s a good poem.’”

A rebarbative phrase about Jews in his 1934 book of lectures, After Strange Gods, later became notorious, and had nothing penitential about it. Eliot was imagining what a society committed to tradition might be like, and, as always in his social speculations, made no practical suggestions. “Serious difficulties” faced any effort to revive or establish a tradition: “It does not so much matter at present whether any measures put forward are practical, as whether the aim is a good aim, and the alternatives intolerable.” His imaginary traditional society would be unified in the way that real societies are not, with “homogeneity of race and a fundamental equality.” What is important, he said, “is unity of religious background; and reasons of race and religion combine to make any large number of free-thinking Jews undesirable.”

Eliot wrote After Strange Gods for an American lecture series in May 1933, and later told Isaiah Berlin that he would never have printed the sentence about free-thinking Jews had he “been aware of what was going to happen, indeed had already begun, in Germany…. I still do not understand why the word ‘race’ occurs in the sentence, because my emphasis was on the adjective free-thinking.” Again writing “theoretically” about an imaginary parallel universe shaped only by tradition and theology, he told Berlin:

Theoretically, the only proper consummation is that all Jews should become Catholic Christians [i.e., members of a universal church, not necessarily the Roman one]. The trouble is, that this ought to have happened long ago: partly because of the stiff neckedness of your people; and largely [Eliot’s footnote: Perhaps chiefly! The apportionment is not immediately relevant] because of the misbehaviour of those who called themselves Christians, this did not happen.

When After Strange Gods appeared in 1934, Auden, whose politics were practical, not imaginary, wrote to Eliot: “Some of the general remarks…rather shocked me, because if they are put into practice, and it seems quite likely [they will be], would produce a world in which neither I nor you I think would like to live.” As early as 1940, years before the book became the subject of public controversy, Eliot wrote to a friend that it was “largely drivel,” written to avoid bankruptcy. He never allowed any of it to be reprinted.

Crawford quotes a letter written to Eliot by his mother, Charlotte Eliot: “It is very bad in me, but I have an instinctive antipathy to Jews, just as I have to certain animals.” Crawford plausibly infers that “anti-Semitism was a prejudice substantially unspoken in the Eliots’ St. Louis household, but indisputably present.” Yet the simple statement of Charlotte’s Unitarian conscience, “It is very bad,” was the hidden theme of the poems in which Eliot simultaneously disdained Jews and confessed his own absolute spiritual failure.

In 1934, Eliot separated from Vivien; she had become increasingly unbalanced, and in 1938 was confined by her brother to an asylum where she died in 1947. (Despite rumors to the contrary, Eliot took no part in the commitment procedure.) After the separation, Eliot continued his normal working life as a director at the publishing firm of Faber & Faber while privately withdrawing into penitent asceticism. At 6:30 every morning he knelt on the stone floor of a local church. In the flat he shared with his bibliophile friend John Hayward, the brightly painted rooms at the front were Hayward’s, while Eliot took the dark rooms at the back. His bedroom was lit with one bare bulb, and an ebony crucifix hung on the wall above his bed.

Eliot’s sense of personal implication in the failures of his civilization seems to have arisen from the same deep source that gave him his unique double vision of personal and social disorder in The Waste Land. At the heart of his thought and feeling was an unspoken conviction that he, like the society in which he lived, had failed to become what he ought to be, something cohesive and whole, that with all his authority and fame, he lacked a unified personal self. In the same way that his civilization seemed “a congeries of banks, insurance companies and industries,” he seemed to himself—as he said in the title of a poem about himself that he wrote in French—a “Mélange Adultère de Tout.” His body was a set of disparate parts, his mind a disordered mixture of cultures, eras, classes, and languages, “fragments I have shored against my ruins.” In Four Quartets the soul he meets in a modern version of Purgatory—described in Dantesque stanzas—is not a unique individual soul like everyone in Dante, but a figure “Both one and many” with “The eyes of a familiar compound ghost.” He asked in “Gerontion,” “After such knowledge, what forgiveness?” Without a self that could be forgiven, Eliot could not imagine forgiveness.

All the fragmentary selves—his own and others’—were in desperate need of the purgatorial fire that might anneal them each into something whole. Dante’s last glimpse of Arnaut Daniel in Purgatory recurs in The Waste Land: “Poi s’ascose nel foco che gli affina” (Then he hid himself in the refining fire). Eliot wrote in After Strange Gods:

It is in fact in moments of moral and spiritual struggle depending upon spiritual sanctions…that men and women come nearest to being real. If you do away with this struggle, and maintain that by tolerance, benevolence, inoffensiveness and a redistribution or increase of purchasing power, combined with a devotion, on the part of an élite, to Art, the world will be as good as anyone could require, then you must expect human beings to become more and more vapourous.


In “Tradition and the Individual Talent” Eliot wrote that “the more perfect the artist, the more completely separate in him will be the man who suffers and the mind which creates.” He wrote in the same essay that a poet must have “a feeling that the whole of the literature of Europe from Homer and within it the whole of the literature of his own country has a simultaneous existence and composes a simultaneous order.”

Robert Crawford’s biography honors the Eliot who suffered by showing, contrary to his self-negating wish, how inseparable he was from the mind that created. Christopher Ricks and Jim McCue, in their astonishingly rich notes on Eliot’s sources in English and French poetry and much else, honor the Eliot who, as they implicitly portray him, perceived the whole of European literature in a simultaneous order.

The Poems of T.S. Eliot prints all of Eliot’s published and unpublished verse, including his obscene limericks and the rhymed addresses he wrote on postcards and envelopes, together with a thousand pages of densely printed commentary and four hundred pages of textual apparatus. The text and notes have been beautifully produced by Faber & Faber for the edition published in America by Johns Hopkins, but the edition is awkwardly divided into two volumes instead of taking its logical shape as three volumes, one each for the poems, the commentary, and the lists of textual variants.

An edition like this one, in which one page of verse exfoliates into as many as a dozen pages of commentary, evokes thoughts of extravagant editorial follies like the one parodied by Vladimir Nabokov in Pale Fire. In fact, Ricks and McCue are models of editorial discretion who let Eliot annotate himself. Their notes include, in addition to Eliot’s sources, extensive quotations from his prose and verse. The editors annotate “a moment’s surrender” in The Waste Land with, among other things, a sentence from “Tradition and the Individual Talent”: “The progress of an artist is a continual self-sacrifice, a continual extinction of personality.”

The new edition includes five previously unknown poems that Eliot wrote to his second wife, Valerie Fletcher, whom he married in 1957, when he was sixty-eight and she was thirty. She had been his secretary at Faber & Faber, and, in a near recurrence of his failed relation with Emily Hale, he seems to have been the last person in the firm to realize that she was in love with him. In Eliot’s last play, The Elder Statesman (1959), old Lord Claverton finds in his daughter’s love “the peace that ensues upon contrition.” Her forgiveness has given him reality: “It’s the real you I love,” she says.

Eliot’s poems to Valerie include one in praise of her breasts, celebrating their varying shapes when she stands or lies on her back or side; another in which his fingers move from her nipple to her navel and beyond; a limerick about “a nice girl named Valeria/Who has a delicious posterior”; and a poem about their lovemaking:

I love a tall girl. When we lie in bed
She on her back and I stretched upon her,
And our middle parts are busy with each other,
My toes play with her toes and my tongue with her tongue,
And all the parts are happy. Because she is a tall girl.

He and his wife are still, as he was in earlier years, congeries of body parts, but some of those parts, “busy with each other,” have become the instruments of love.


The Triumph of the Hard Right

Carly Fiorina, Chris Christie, Ted Cruz, Jeb Bush, and Donald Trump at the Republican presidential debate in Las Vegas, December 2015
Robyn Beck/AFP/Getty Images

Everybody told everybody early in this year’s presidential campaign (during what was called Trump Summer) that we had never seen anything so sinisterly or hilariously (take your choice) new. But Trump Summer was supposed to mellow into Sane Autumn, and it failed to—and early winter was no saner. People paid to worry in public tumbled over one another in asking what had gone wrong with our politics. Even the chairman of the Republican National Committee, Reince Priebus, joined the worriers. After Mitt Romney lost in 2012, he set up what he called the Growth and Opportunity Project, to reach those who had not voted Republican—young people, women, Latinos, and African-Americans. But its report, once filed, had no effect on the crowded Republican field of candidates in the 2016 race, who followed Donald Trump’s early lead as he treated women and immigrants as equal-opportunity objects of scorn. Now the public worriers were yearning for the “good old days” when there were such things as moderate Republicans. What happened to them?

The current Republican extremism has been attributed to the rise of Tea Party members or sympathizers. Deadlock in Congress is blamed on Republicans’ fear of being “primaryed” unless they move ever more rightward. Endless and feckless votes to repeal Obamacare were motivated less by any hope of ending the program than by a desire to be on record as opposing it, again and again, to avoid the dreaded label RINO (Republican in Name Only).

E.J. Dionne knows that Republican intransigence was not born yesterday, and he has the credentials for saying it because this dependably intelligent liberal tells us, in his new book, that he began as a young Goldwaterite—like Hillary Clinton (or like me). He knows that his abandoned faith sounded themes that have perdured right down to our day. In the 1950s there were many outlets for right-wing discontent—including H.L. Hunt’s Lifeline, Human Events, The Dan Smoot Report, the Fulton Lewis radio show, Willis Carto’s Liberty Lobby, the Manion Forum. In 1955, William F. Buckley founded National Review to give some order and literary polish to this cacophonous jumble. But his magazine had a small audience at the outset. Its basic message would reach a far wider audience through a widely popular book, The Conscience of a Conservative, ghostwritten for Barry Goldwater by Buckley’s brother-in-law (and his coauthor for McCarthy and His Enemies), L. Brent Bozell.

The idea for the book came from Clarence Manion, the former dean of Notre Dame Law School. He persuaded Goldwater to have Bozell, who had been his speechwriter, put his thoughts together in book form. Then Manion organized his own and other right-wing media to promote and give away thousands of copies of the book. Bozell did his part too—he went to a board meeting of the John Birch Society and persuaded Fred Koch (father of Charles and David Koch) to buy 2,500 copies of Conscience for distribution. The book put Goldwater on the cover of Time three years before he ran for president. A Draft Goldwater Committee was already in existence then (led by William Rusher of National Review, F. Clifton White, and John Ashbrook). Patrick Buchanan spoke for many conservatives when he called The Conscience of a Conservative their “New Testament.”

The Goldwater book, Dionne says, had all the basic elements of the Tea Party movement, fully articulated fifty years before the Koch brothers funded the Tea Party through their organizations Americans for Prosperity and FreedomWorks. The book painted government as the enemy of liberty. Goldwater called for the elimination of Social Security, federal aid to schools, federal welfare and farm programs, and the union shop. He claimed that the Supreme Court’s Brown v. Board decision was unconstitutional, so not the “law of the land.” He said we must bypass and defund the UN and improve tactical nuclear weapons for frequent use.1

It was widely thought, when the book appeared, that its extreme positions would disqualify Goldwater for the presidency, or even for nomination to that office. Yet in 1964 he became the Republican nominee, and though he lost badly, he wrenched from the Democrats their reliably Solid South, giving Nixon a basis for the Southern Strategy that he rode into the White House in the very next election. The Southern Strategy had been elaborated during Nixon’s campaign by Kevin Phillips, a lawyer in John Mitchell’s firm. The plan did not rely merely on Southern racism, but on a deep conviction that, as Phillips put it in a 1968 interview, all politics comes down to “who hates who.”2 In that interview, Phillips laid out an elaborate taxonomy of hostilities to be orchestrated by Republicans—another predictor of the Tea Party. Dionne argues, with ample illustration decade by decade, that this right-wing populism would remain a Republican orthodoxy, latent or salient, throughout the time he covers.

Joe Scarborough, in a recent book, The Right Path: From Ike to Reagan, How Republicans Once Mastered Politics—and Can Again, claims that moderate conservatism is the real Republican orthodoxy, interrupted at times by “extremists” like Goldwater or the Tea Party.3 He suggests Dwight Eisenhower as the best model for Republicans to imitate. Yet Scarborough is also an admirer of Buckley, and his thesis does not explain—as Dionne’s thesis does—why Buckley despised Eisenhower. Eisenhower, as the first Republican elected president after the New Deal era of Roosevelt and Truman, was obliged in Buckley’s eyes to dismantle the New Deal programs, or at least to begin the dismantling. Buckley resembled the people today who think the first task of a Republican president succeeding Obama will be to repeal or take apart the Affordable Care Act.

Eisenhower, instead, adhered to the “Modern Republicanism” expounded by the law professor Arthur Larson, which accepted the New Deal as a part of American life. Eisenhower said, “Should any political party attempt to abolish social security, unemployment insurance, and eliminate labor laws and farm programs, you would not hear of that party again in our political history.” It was to oppose that form of Republicanism that Buckley founded National Review in 1955, with a program statement that declared: “Middle-of-the-Road, qua Middle-of-the-Road is politically, intellectually, and morally repugnant.”

Buckley hated Eisenhower’s foreign policy as much as his domestic one. He said, “Eisenhower was above all a man unguided and hence unhampered by principle. Eisenhower undermines the Western resolution to stand up and defend what is ours.” When Russia put down the 1956 uprising in Hungary and Eisenhower did not intervene, National Review called for people to sign the Hungary Pledge—to have no dealings with iron curtain products or exchanges (Buckley’s wife had to give up Russian caviar).

Admittedly, Buckley did not, like Robert Welch (founder of the John Birch Society), think Eisenhower was a secret Communist (as many Republicans now think Obama is a secret Muslim). Buckley thought that Eisenhower had no greater purpose than his own success: “It has been the dominating ambition of Eisenhower’s Modern Republicanism to govern in such a fashion as to more or less please more or less everybody.”

The sense of betrayal by one’s own is a continuing theme in the Republican Party (a Fox News poll in September 2015 found that 62 percent of Republicans feel “betrayed” by their own party’s officeholders). The charges against Eisenhower were repeated against Nixon, who brought Kissingerian “détente” into his dealing with Russia and renewed diplomatic ties to China. On the domestic front, he imposed wage and price controls and sponsored the welfare schemes of Daniel Patrick Moynihan. Buckley joined the effort to “primary” Nixon in 1972 by running John Ashbrook against him. Buckley campaigned for Ashbrook in New Hampshire, but he succumbed to pleas from Spiro Agnew (before his disgrace) and Henry Kissinger (a new friend of his) that he endorse Nixon for the general election.

The story keeps repeating itself. The right opposed Gerald Ford—not for pardoning Nixon (as the left did) but for continuing Kissinger’s effort at détente (the Helsinki Accords), advancing the Panama Canal treaty, and making the hated Nelson Rockefeller his vice-president. (At the 1976 Republican convention, Ford had to humiliate Kissinger and Rockefeller in order to head off Reagan’s challenge, and made Bob Dole his running mate—all to no avail).

Both Bush presidents were denounced by the Republican right, the first for raising taxes, the second for expanding Medicare’s pharmaceutical support and expanding the government’s role in education—and the two of them for increasing the size and cost of government. Even the sainted Reagan disappointed the hard right with his arms control efforts, his raising (after cutting) taxes, his failure to shrink the government, and his selling of arms to Iran (though that bitterness has been obscured by the clouds of myth and glory surrounding Reagan).

To be on the right is to feel perpetually betrayed. At a time when the right has commanding control of radio and television talk shows, it still feels persecuted by the “mainstream media.” With all the power of the one percent in control of the nation’s wealth, the right feels its influence is being undermined by the academy, where liberals lurk to brainwash conservative parents’ children (the lament of Buckley’s very first book, God and Man at Yale). Dionne shows how the right punishes its own for “selling out” to any moderate departures from its agenda once a person gets into office.

A good example of this was the rejection of one of its own “Young Guns,” House Majority Leader Eric Cantor, in his 2014 bid for reelection. Dionne writes:

Cantor’s district was changed [by gerrymander] to give him more Republican voters—and he lost especially badly in the primary among the new and very conservative voters who had been moved in to strengthen him in the general election.

Even Kevin Phillips proved that he was not only a connoisseur but a practitioner of resentment against leaders when he said that Bill Buckley was selling out when he palled around with liberals on his yacht—he called him the leader of la-di-dah conservatives.4 (Who hates who now?)

E. J. Dionne with President Obama at the Catholic-Evangelical leadership summit on overcoming poverty, Georgetown University, Washington, D.C., May 2015
Nicolas Kamm/AFP/Getty Images

Joe Scarborough claims that the Republicans have continually oscillated between moderates and extremists. But he could find only two stellar moderates in the last half-century, Eisenhower and Reagan. Some oscillation! Dionne comes closer to the facts with his tale of a ground bass of growls against moderation, swelling at times or diminishing, but continuously present and becoming more embittered. It is appropriate that this feeling has been in alliance with the Confederate South, the loser of a war it still thinks it should have won. The rest of the Republicans may not be as racist as the South, but they cannot prevail at the federal level without it. All grievances gravitate toward one another.

Some conservatives rightly say that Bill Buckley was their best advocate—he elevated their vocabulary and taste, and he shuddered away from vulgarians like Robert Welch and Willis Carto. But he knew there could not be a conservative party if the South were not included in it. In 1957 he published an editorial titled “Why the South Must Prevail.” It said:

The central question that emerges…is whether the White community in the South is entitled to take such measures as are necessary to prevail, politically and culturally, in areas where it does not predominate numerically? The sobering answer is Yes—the White community is so entitled because, for the time being, it is the advanced race.

This feeling that superior people have license to circumvent democracy is still with us—when strategic gerrymandering and restrictive voting procedures freeze out minorities, the young, and the elderly, giving Republicans stronger representation in Congress than the popular vote warrants. Chief Justice John Roberts perpetuated this inequity when he voided Section Four of the 1965 Voting Rights Act—a decision followed immediately by a rush to impose new restrictions on who, where, and by what validation people can vote.

The idea that America has somehow outgrown or transcended racism is an ever-renewable delusion. Some hoped that the election of a black president would mark the end of racism. But in fact it blew on the embers of racism we have beneath us all the time. Otherwise how could Americans continue to think against all evidence that the incumbent president is not even a citizen of the country he leads? As a blanket statement, only Republicans think that. In a September 2015 Public Policy Polling survey, only 29 percent of Republicans granted that Obama was born in the US (among Trump supporters, it is 21 percent), and 54 percent think he is a Muslim (it is 66 percent among Trump supporters).

The surge of Trump for half a year at the top of Republican polls was mystifying to many people. They thought the lead would quickly evanesce, since it was based on the man’s celebrity-cum-effrontery, not on any real political support. But the support was entirely political. Remember, Trump first became a political factor by claiming that Obama has no valid birth certificate—a charge he has never abandoned. He was mocked for that, but why should he abandon it? It turns out that a majority of Republicans have held that belief and never renounced it. Trump was giving voice to the growl of Republican orthodoxy that Dionne analyzes.

The truth is that conservatives are right to feel that their own moderates are sell-outs. To be (even moderately) a moderate is to leave the Republican Party—to be what Buckley called an immoral Middle-of-the-Roader. To accept Enlightenment values—reason, facts, science, open-mindedness, tolerance, secularity, modernity—is to lower one’s guard against evils like evolution, concern about global warming, human equality across racial and sexual and religious lines—things Republicans have opposed for years and will not let their own members sell out to. They rightly intuit that there is only one Enlightenment party in America, and the Republicans are not it. That is why they have to oppose in every underhanded way they can the influence of younger people who are open to gays, to same-sex marriage, to feminism.

This is the conclusion I come to from a reading of Dionne’s account of Republicans across the half-century story he tells. But I must admit that Dionne does not come to the same conclusion. He still has hopes that moderate forces can ride to the rescue of the party. He places those hopes in the men he calls Reformicons. He still entertains the dream of Russell Kirk that American Republicans will somehow become Yankee Edmund Burkes, as Thoreau hoped to form Yankee Buddhists. It is not surprising, then, that Dionne calls a Burke scholar, Yuval Levin, the leader of his Reformicons. Membership in this movement seems relatively easy. There is no entry test on Burkean knowledge.

Dionne comes up with a wide scatter of Reformicon members—it contains, besides Levin, Michael Needham, Mike Lee, Michael Gerson, Peter Wehner, David Frum, Michael Strain, Josh Barro, Bruce Bartlett, Ross Douthat, Reihan Salam, Charles Murray, Ramesh Ponnuru, David Brooks, Arthur Brooks, and Henry Olsen. Dionne admits that these are rather “bookish types”—there is only one officeholder among them (Senator Lee of Utah). Some of these “moderates” are at least part-time Tea Partiers. Needham and Lee, for instance, cooked up the 2013 government shutdown for which Senator Ted Cruz took most of the credit or blame.5 Loose as this aggregation is, Dionne even grants John McCain a kind of honorary membership in his maverick days—before he started palling around with Sarah Palin (she may have read something, but I doubt that it was Edmund Burke).

This is not a very disciplined or cohesive cadre. Some would no doubt have trouble identifying themselves with others in the “moderate” movement for which Dionne is recruiting them. They have different institutional loyalties and no shared base for concerted “reformiconning.” Besides, Dionne shows that it is risky for Republicans even to toy with talk of moderation. That is why even “mainstream” Republican candidates steer away from support for evolution, or measures against global warming, or taxes on the rich, or same-sex marriage. None wants to be guilty of compromise or looking soft. The right hated George H.W. Bush’s plea for a “kinder and gentler” party kindling “a thousand points of light.” They had no more kind or gentle feeling about George W. Bush’s “compassionate conservatism” and his promise to “leave no child behind.” They think that is Democrat talk. And they are right.

1. Barry Goldwater, The Conscience of a Conservative, half-century edition, edited by C.C. Goldwater, with a foreword by George Will and an afterword by Robert F. Kennedy Jr. (Princeton University Press, 2007).

2. See Garry Wills, Nixon Agonistes: The Crisis of the Self-Made Man (Houghton Mifflin, 1970), pp. 265–269.

3. See Garry Wills, “Can He Save the GOP from Itself,” The New York Review, January 9, 2014.

4. See John B. Judis, William F. Buckley, Jr.: Patron Saint of the Conservatives (Simon and Schuster, 1988), p. 378.

5. See Stephen Moore, “Michael Needham: The Strategist Behind the Shutdown,” The Wall Street Journal, October 11, 2013; and Alex Rogers, “Utah Senator Mike Lee: The Man Behind the Shutdown Curtain,” Time, October 22, 2013.


China: Surviving the Camps

Provincial Party Secretary Wang Yilun, one of Heilongjiang's most powerful leaders, is criticized by Red Guards from the University of Industry and forced to bear a placard around his neck with the accusation "counterrevolutionary revisionist element," Harbin, northern China, August 23, 1966
Li Zhensheng/Contact Press Images

Nearly forty years have now passed since the Cultural Revolution officially ended, yet in China, considering the magnitude and significance of the event, it remains a poorly examined, under-documented subject. Official archives are off-limits. Serious books on the period, whether comprehensive histories, in-depth analyses, or detailed personal memoirs, are remarkably few. Ji Xianlin’s The Cowshed: Memories of the Chinese Cultural Revolution, which has just been released in English for the first time, is something of an anomaly.

At the center of the book is the cowshed, the popular term for makeshift detention centers that had sprung up in many Chinese cities at the time. This one was set up at the heart of the Peking University campus, where the author was locked up for nine months with throngs of other fallen professors and school officials, doing manual labor and reciting tracts of Mao’s writing. The inferno atmosphere of the place, the chilling variety of physical and psychological violence the guards daily inflicted on the convicts with sadistic pleasure, the starvation and human degeneration—all are vividly described. Indeed, of all the memoirs of the Cultural Revolution, I cannot think of another one that offers such a devastatingly direct and detailed testimony on the physical and mental abuse an entire imprisoned intellectual community suffered. After reading the book, a Chinese intellectual friend summed it up to me: “This is our Auschwitz.”

To mentally relive such darkness and to record it all in such an unswervingly candid manner could not have been easy for an elderly man: Ji was over eighty at the time of writing. In the opening chapter, he confessed to having waited for many years, in vain, for others to come forward with a testimony. Disturbed by the collective silence of the older generation and the growing ignorance of the young people about the Cultural Revolution, he finally decided to take up the pen himself.

Originally published by an official press in Beijing in 1998, during a politically relaxed moment, The Cowshed probably benefited from the author’s eminent status in China. A celebrated Indologist, Ji was also a popular essayist and an avowed patriot who enjoyed good relations with the government. With genial, grandfatherly manners, he had become, in his august age, one of those avuncular figures revered by the public and loved by the media. The book has sold well and stayed in print. But authorities also quietly took steps to restrict public discussion of the memoir, as its subject continues to be treated as sensitive. The present English edition, skillfully translated by Chenxin Jiang, is a welcome, valuable addition to the small body of work in this genre. It makes an important contribution to our understanding of that period.

Reading Ji’s account again, however, has also renewed some of my old questions and frustrations. How much can we really make sense of a bizarre, unwieldy phenomenon like the Great Proletarian Cultural Revolution? Can we truly overcome barriers of limited information, fading historical memory, and persistent ideological biases to have a genuinely meaningful and illuminating conversation about it today? I wonder. The delicate circumstances surrounding Ji’s memoir in China, in a way, demonstrate both the entangled complexity of the events and the precarious state of historical testimony.

Like other ordinary Chinese, Ji had no idea what the Cultural Revolution was all about when Mao Zedong launched it in 1966. Son of an impoverished rural family in Shandong, Ji had managed, through diligence and scholarship, to get a solid, cosmopolitan education in republican China. Having spent a decade in Germany studying Sanskrit and other languages, Ji returned with a Ph.D. to teach at China’s preeminent Peking University, where he soon became the chairman of its Eastern Languages Department. Though disliking the corrupt Chiang Kai-shek regime, he stayed away from politics, a field he’d never had any interest in. But when the Communists came into power in 1949, like most educated Chinese at the time, Ji saw hope for a stronger nation and more just society.

Being a political drifter, however, was no longer an option. Under the rule of the Chinese Communist Party (CCP), mass mobilization and political campaigns became a national way of life and no one was allowed to be a bystander, least of all the intellectuals, a favorite target in Mao's periodic thought-reform campaigns. Feeling guilty about his previous passivity, Ji eagerly reformed himself. He joined the Party in the 1950s and actively participated in the ceaseless campaigns, which had a common trait: conformity and intolerance of dissent. In the 1957 Anti-Rightist Movement, more than half a million intellectuals were denounced and persecuted, even though most of their criticisms were very mild and nearly all were Party loyalists. The fact that Ji was able to stay out of harm's way was probably due to two factors: his poor peasant background and his reputation as one who never stuck his neck out and toed the Party line sincerely.

In fact, he was doing just that in the first year of the Cultural Revolution. Peking University was quickly transformed into a chaotic zoo of factional battles, with frantic mobs rushing about attacking professors and school officials labeled as capitalist-roaders-in-power. A bewildered Ji tried his best to keep a low profile by hiding in the crowds. But he had a vulnerable spot: he abhorred a cadre named Nie Yuanzi, the leader of the dominant Red Guard faction on campus. Although every faction in China claimed loyalty to Chairman Mao, Nie enjoyed a special status: she penned the very first big-character poster of the Cultural Revolution, attacking certain Peking University officials, and received Mao's personal endorsement for it. Disgusted by her bullying style, Ji decided, in an uncharacteristically rash moment, to join her opponents' faction. This was a fatal mistake. Nie's followers took their vengeance immediately: they raided Ji's home one night, smashing furniture and digging up, inevitably, some ridiculous evidence that Ji was a hidden counterrevolutionary.

From that moment onward, Ji’s life became a dizzying descent into hell. The ensuing chapters in the book are the most shocking and painful to read. There are many searing, unforgettable vignettes. Ji’s meticulous preparations for suicide, which was aborted only at the last moment by a knock on the door. The long, screaming rallies where Ji, already in his late fifties, and other victims were savagely beaten, spat on, and tortured. The betrayal by his former students and colleagues. An excruciating episode in the labor camp: Ji’s body collapsed under the strain of continuous struggle sessions; his testicles became so swollen he couldn’t stand up or close his legs. But the guard forced him to continue his labor, so he crawled around all day moving bricks. When he was finally allowed to visit a nearby military clinic, he had to crawl on a road for two hours to reach it, only to be refused treatment the moment the doctor learned he was a black guard. He crawled back to the labor camp.

A noteworthy feature of The Cowshed is its entangled theme of guilt and shame. In memoirs about Maoist persecutions, authors typically portray themselves as either hapless, innocent victims or, occasionally, defiant resisters. The picture is murkier in Ji’s recollection. He writes about Chinese intellectuals’ eager cooperation in ideological campaigns and how, under pressure, they frequently turned on one another. He mocks his own “aptitude in crowd behavior” and admits that, until his own downfall, he had also persecuted others:

Since we had been directed to oppose the rightists, we did. After more than a decade of continuous political struggle, the intellectuals knew the drill. We all took turns persecuting each other. This went on until the Socialist Education Movement, which, in my view, was a precursor to the Cultural Revolution.

And what was his involvement in the Socialist Education Movement? “Without quite knowing what I was doing, I joined the ranks of the persecutors.”

To Ji, this is a forgivable sin because if he and many other Chinese intellectuals have been guilty of persecuting one another, it was largely because the intellectuals as a class had been compelled to feel deep guilt and shame about themselves. Ji described how this was achieved through the fierce criticism and self-criticism sessions, a unique feature of the Maoist thought-reform campaigns. Ji's own ideological conversion was accomplished through such a ritual.

Impressed by the Communist victory and early achievements, he blamed himself fervently for not being sufficiently patriotic and selfless: he was selfish to pursue his own academic studies in Germany while the Communists were fighting the Japanese invaders; he was wrong to avoid politics and to view all politics as a tainted game, because the Communist politics was genuinely idealistic and noble. Only after beating himself up about all his sins did he manage to pass the collective review and gain acceptance as a member of the “people.”

Ji describes the overwhelming sense of guilt as “almost Christian,” which led to a feeling of shame and induced a powerful urge to conform and to worship the new God—the Communist Party and its Great Leader. Afterward, like a sinner given a chance to prove his worthiness, he eagerly abandoned all his previous skepticism—the trademark of a critical faculty—and became a true believer. He embraced the new cult of personality, joining others to shout at the top of his voice “Long Live Chairman Mao!” Through this process, millions of Chinese intellectuals cast off their individuality. For Ji, the feeling of guilt became so deeply ingrained that, even after he was locked up in the cowshed, he racked his brain for his own faults rather than questioning the Party or the system.

Ji was obviously not a shrewd political animal or a deep thinker. Admitting that his eyes were finally opened only after the Cultural Revolution ended, he refrained from analyzing the larger political picture or interpreting the motives of those who launched the chaos. But he clearly felt that the country on the whole had failed to learn a real lesson from what happened. Toward the end of the memoir, he writes:

My final question is: What made the Cultural Revolution possible?

This is a complicated question that I am ill-equipped to answer; the only people in a position to tackle it refuse to do so and do not seem to want anyone else to try.

Ji was of course alluding to the Chinese government’s quiet ban on any deep probing of the subject, a policy still in effect today. First and foremost is the question of Mao. Everyone knows that Mao is the chief culprit of the Cultural Revolution. Well-known historical data points to a tangle of factors behind Mao’s motivation for launching it: subtle tension among the top leadership of the CCP since the Great Leap Forward, which led to a famine with an estimated thirty to forty million deaths; his desire to reassert supremacy and crush any perceived challenge to his personal power by reaching down directly to the masses; his radical, increasingly lunatic vision of permanent revolution; his deep anti-intellectualism and paranoid jealousy. But, from the viewpoint of the Party, allowing a full investigation and exposure of Mao’s manipulations would threaten the Party’s legitimacy. If the great helmsman gets debunked, the whole ship may go down. Mao as a symbol is therefore crucial: it is tied to the survival of the Party state.

Then there is the thorny issue of the people’s participation in the Cultural Revolution. The Red Guards were only the best-known of the radical organizations. At the height of madness, millions of ordinary Chinese took part in various forms of lawless actions and rampant violence. The estimated death toll of those who committed suicide, were tortured to death, were publicly executed, or were killed in armed factional battles runs from hundreds of thousands to millions. This makes it extremely difficult, if not impossible, to bring all of the perpetrators to account.

Consequently, the situation has been handled in a manner that reflects both cynical and pragmatic calculations: After arresting the ultra-leftist Gang of Four and blaming everything on them, the government officially condemned the period as a “ten-year disaster,” tolerated a short period of limited public ventilation, then moved to contain the damage. It’s one of those noiseless bans done through internal control; investigation, discussion, and publication have been variously forbidden, discouraged, or marginalized. Over time, the topic has faded away as though it all happened quite naturally.

This situation is especially unsatisfying and unfair to those who suffered untold atrocities. Most of the teachers who were beaten up by their Red Guard students never received an apology. Most of the scholars who were tortured in the countless cowsheds continued, as Ji did, to live and work among their former persecutors. Some of the former perpetrators thrived in the new era, building successful careers and lives.

Ji himself worried about “stepping on people’s toes.” After writing the first draft of his memoir in 1988, he kept it in a drawer for years, for fear it might be viewed as a personal vendetta. He then revised it heavily, toning down his prose and keeping most of the persecutors unnamed. He said he wanted no revenge, just to write an honest historical document, so that young Chinese would know the past and would not let it happen again. He sounded apologetic about letting his emotions get the better of him in the earlier draft. Still, the reader can probably catch a strange tone of sarcasm and self-mockery in the narrator’s voice.

I found Ji’s tone odd and puzzling at first until it occurred to me that this is not an uncommon rhetorical device in Chinese writing or talking: to control seething anger or to deflect unbearable pain, one often turns to black humor or sarcastic hyperbole. A Chinese elementary school teacher who was tortured and jeered at in public struggle sessions during the Cultural Revolution told me that the sense of physical and psychological violation was so ferocious it felt like being gang-raped. He had nightmares about it for years. Later, a friend pointed out that he would adopt a facetious tone whenever he spoke about the experience. “I hadn’t noticed the tone myself,” he told me. “I think I turned it all into a joke because I can’t bear the pain and the shame with a straight face.”

Ji also seemed to suffer a survivor’s shame. Many scholars and writers committed suicide in the early part of the Cultural Revolution to avoid the indignities they faced, and he repeatedly mentioned his ambivalence about his failed attempt at suicide. This has to do with an ancient code of honor for a Confucian scholar. In the memoir, Ji recalls his first encounter after the Cultural Revolution with the senior apparatchik Zhou Yang. Zhou had supervised the persecution of many intellectuals until he himself was persecuted during the Cultural Revolution. Zhou’s first words to Ji were: “It used to be said that ‘the scholar can be killed, but he cannot be humiliated.’ But the Cultural Revolution proved that not only can the scholar be killed, he can also be humiliated.” Zhou roared with laughter, but Ji knew it was a bitter laugh.

Ji Xianlin died in 2009. Two years after his death, a Peking University alumna named Zhang Manling, who had been close to Ji, published a piece about their friendship and made a few unusual revelations. In 1989, after the students began their hunger strike on Tiananmen Square, Ji and several other Peking University professors decided to publicly show their solidarity with the youngsters by paying them a visit. Ji, the oldest and most famous of the professors, traveled in high style: sitting on a stool on top of a flat-backed tricycle, to which was fastened a tall white banner that said “Rank One Professor Ji Xianlin,” the seventy-eight-year-old Ji was pedaled by a student from the west-side campus across the city. When they finally arrived in Tiananmen Square, the students burst into delighted cheers.

During the post-massacre purge, at all the faculty meetings where everyone was forced to biao tai (declare their position), Ji would only say: “Don’t ask me, or I’ll say it was a patriotic democratic movement.” Then one day, Ji walked off from his campus residence, hailed a taxi, and asked to be taken to the local public security bureau. “I’m Professor Ji Xianlin of Peking University,” Ji said to the police on arrival. “I visited Tiananmen Square twice. I stirred up the students, so please lock me up together with them. I’m over seventy, and I don’t want to live anymore.” The policemen were so startled they called Peking University officials, who rushed over and forcibly brought Ji back to campus.

It was, again, one of those high-pressure, terrifying, and tragic moments in China’s long history. But this time, acting alone, Ji lived up to the honor of a true Confucian scholar.


The Anger of Ta-Nehisi Coates

Ta-Nehisi Coates, New York City, 2012
Ramsay de Give/The New York Times/Redux

In Langston Hughes’s “Let America Be America Again,” a poem published in 1936, a narrator speaks for those who struggle—the poor white, the Negro bearing slavery’s scars, the red man driven from the land, the immigrant clutching hope—and he offers the consolation, the defiance, of the young man, the farmer, the worker, united in demanding that America become “the dream the dreamers dreamed,” “the land that never has been yet.” Hughes addressed rallies of thousands in the Midwest and predicted that because the Depression had been so traumatic, mainstream America would go to the left politically. He got it wrong and spent the next two decades coping with the fallout, professionally, of having been sympathetic to communism.

Hughes was a panelist alongside Richard Wright at the National Negro Congress in Chicago in 1936, but two years later in “Blueprint for Negro Writing,” Wright dismissed the Harlem Renaissance writers as part of the black literary tradition of prim ambassadors who “entered the Court of American Public Opinion dressed in the knee-pants of servility.” Hughes was so identified with the Negro Awakening of the 1920s that he seemed to Wright to belong to an older generation, though there were only six years between them. Wright got his start publishing in leftist magazines and although he toed the Communist line of working-class solidarity that conquered race difference, and could envision in his early poetry black hands raised in fists together with those of white workers, the spirit of his revolt had very little of Hughes’s Popular Front uplift. His feelings were much more violent.

In “Between the World and Me,” a poem that appeared in Partisan Review in 1935, Wright’s narrator imagines the scene of a lynching:

And one morning while in the woods I stumbled suddenly upon the thing,
Stumbled upon it in a grassy clearing guarded by scaly oaks and elms.
And the sooty details of the scene rose, thrusting themselves between the world and me….

There was a design of white bones slumbering forgottenly upon a cushion of ashes.
There was a charred stump of a sapling pointing a blunt finger accusingly at the sky.

Wright’s “I” recalls that the passive scene has woken up. “And a thousand faces swirled around me, clamoring that my life be burned.” “They” had him; his wet body slipped and rolled in their hands as they bound him to the sapling and poured hot tar:

Then my blood was cooled mercifully, cooled by a baptism of gasoline.
And in a blaze of red I leapt to the sky as pain rose like water, boiling my limbs.

The poem’s last line shifts to the present tense. The speaker is now dry bones, his face “a stony skull staring in yellow surprise at the sun.”

Wright was not the first to treat the site of a lynching as a haunted place. Hughes himself wrote more than thirty poems about lynching, investigating the effects on families and communities. But “Between the World and Me” doesn’t draw a moral from having contemplated the grisly scene. There is no promise of either redemption or payback. The poem concentrates on the violence to the black man’s body, on trying to get us to step into the experience of his “icy fear.”

The black struggle in the US has a dualist tradition. It expresses opposing visions of the social destiny of black people. Up, down, all or nothing, in or out, acceptance or repudiation. Do we stay in the US or go someplace else, blacks in the abolitionist societies of the 1830s debated. We spilled our blood here, so we’re staying, most free blacks answered. Some people now say that maybe Booker T. Washington’s urging black people to accommodate segregation saved black lives as he raised money to build black educational institutions. Marcus Garvey recast segregated life as the Back to Africa movement, a voluntary separatism, a black nationalism. W.E.B. Du Bois battled Garvey as he had Washington, but by 1933 Du Bois gave up on his militant integrationist strategies, resigned from the NAACP and The Crisis magazine, embraced black nationalism, and in 1935 published his landmark history, Black Reconstruction in America. Which is better: to believe that blacks will achieve full equality in American society or to realize that white racism is so deep that meaningful integration can never happen, so make other plans?

Ralph Ellison, Harlem, New York, 1947; photograph by Gordon Parks
The Gordon Parks Foundation

Wright was condescending about Hughes’s gentle autobiography, The Big Sea (1940), as was Ralph Ellison, who, then in his Marxist phase, complained that the poet paid too much attention to the aesthetic side of experience. Ellison praised Wright’s autobiography, Black Boy (1945), but the spectacular success of Wright’s novel Native Son (1940) drove him to be as different from Wright as he could in Invisible Man (1952). They both broke with the Communist Party in the early 1940s but saw themselves as opposites. Wright moved to France in 1946 in the mood of an exile, the black intellectual alienated from US society, while Ellison remained at home, the artist sustained by what he saw as a black person’s cultural ability to keep on keeping on.

In later years, Ellison remembered Wright, six years his senior, as a father figure whom he had quickly outgrown. But Wright’s example inspired the young James Baldwin to move to Paris in 1948. Wright was hurt when Baldwin declared his independence from the protest tradition by denouncing Native Son. Baldwin later defended his criticisms, arguing in part that Wright’s concentration on defining his main character by the force of his circumstances sacrificed that character’s humanity. Baldwin’s turn would come in Leroi Jones’s essay collection Home (1965), in which he sneered at Baldwin for being popular on the white liberal cocktail circuit. Worse was in store for Baldwin, the understanding queer in a time of narrow macho militancy.

Jones, on the verge of reinventing himself as Amiri Baraka, fumed about the “agonizing mediocrity” of the black literary tradition. For him, the Harlem Renaissance had been too white, and never mind that Hughes in his manifesto, “The Negro Artist and the Racial Mountain,” published in 1926, had proclaimed the determination of members of his generation of black writers to express their dark-skinned selves without apology. If black American history can be viewed as the troubled but irresistible progression of black people toward liberation, then it would appear that every generation of black writers redefines the black condition for itself, restates the matter in its own language. “There has always been open season on Negroes…. You don’t need a license to kill a Negro,” Malcolm X said.

The fatalism of 1960s black nationalism and the wisdom of not believing America’s promises form part of Ta-Nehisi Coates’s intellectual inheritance from his father. Not only is Coates’s memoir The Beautiful Struggle (2008) a moving father-and-son story, it is an intense portrait of those whom the black revolution left behind, but who never broke faith with its tenets nonetheless:

Even then, in his army days, Dad was more aware than most. Back in training he’d scuffled with a Native American soldier, who tried to better his social standing by airing out the unit’s only black. After they were pulled apart, Dad walked up to his room, calmed down, and then returned to the common area. On a small table, he saw a copy of Black Boy. He just knew someone was fucking with him. But he picked up the book….
In Richard Wright, Dad found a literature of himself. He’d read Manchild in the Promised Land and Another Country, but from Wright he learned that there was an entire shadow canon, a tradition of writers who grabbed the pen, not out of leisure but to break the chain….
Now he began to come to. When on leave, he stopped at book stands in search of anything referencing his own. He read Malcolm’s memoir, and again saw some of his own struggle, and now began to feel things he’d, like us all, long repressed—the subtle, prodding sense that he was seen as less. He went back to Baldwin, who posed the great paradox that would haunt him to the end: Who among us would integrate into a burning house?

Coates’s father was discharged from the military in 1967 when he was twenty-one and went to work as a baggage handler and cabin cleaner at the Baltimore airport. The early civil rights movement had taken place on television, southern and religious, remote from him. But his “new Knowledge” was his line drawn in the sand and to him Gandhi was “absurd” because “America was not a victim of great rot but the rot itself.” Coates tells us that while reading newspapers left behind on planes from the West Coast, his father discovered the Black Panthers. “My father was overcome.” In 1969, he offered himself to the Baltimore chapter, eventually becoming its head after he lost his job because of his arrest for moving guns.

Three years later the Panthers were falling apart, an organization wrecked by the FBI, paranoia, arrests, purges, factional disputes, murder. His father, Coates writes, was not the insurrectionary/suicidal type and his chapter had been more like a commune. “When he woke in the morning he thought not of guns but of oil, electricity, water, rent, and groceries.” Local chapters had financed themselves through the sale of the Panther newspaper and after every Panther chapter except the one in Oakland had been shut, initiatives such as free breakfast for children or clothing distribution programs stopped. Foot soldiers were left to languish in prisons; damaged souls lost the refuge, the fantasy, of hanging out with the revolution. The remaining national leadership harassed Coates’s father when he quit, but he “left the Panthers with a basic belief system, a religion that he would pass on to his kids.”

Coates says that his father, a survivor, was more suited to the real world than he knew and he founded his own propaganda machine, including a bookstore, printer, and publisher, calling it the George Jackson Movement, after the Black Panther who was shot trying to escape from Soledad prison. His father’s storefront was the church that Coates, born in 1975, grew up in, forced to study works of black history known only on the black side of town.

But it was music that set him on the path to consciousness, knowledge. Coates was twelve when he heard Eric B. & Rakim’s “Lyrics of Fury.” From trying to write his own rap, his relationship with and curiosity about words extended to his father’s shelves. “That was how I found myself.” He learned that his “name was a nation, not a target.” “When I was done, I emerged taller, my voice was deeper, my arms were bigger, ancestors walked with me, and there in my hands, behold, Shango’s glowing ax.”

His father met his mother in what they saw as a revolution. They were the kind of parents who found summer programs to put the kids in, college prep classes to enroll them in, and decent high schools outside their school district, and they started practice sessions for the SAT. They not only showed up at PTA meetings, they sat in on Coates’s classes when they felt they had to. And it wasn’t just them. His coming-of-age story includes teachers who also had been changed by the revolution in black consciousness. The school facilities were inadequate, but the teachers pushed students who didn’t understand what they were talking about when they begged them not to waste their chances. All that mattered in Coates’s high school world were girls, clothes, the mall, territory, styling, fights, gangs, homies, reputation, staying alive in West Baltimore, and the music. Black male adolescence had its soundtrack.

When Coates put his hand in his English teacher’s face, Coates’s father came to school and knocked his son down:

My father swung with the power of an army of slaves in revolt. He swung like he was afraid, like the world was closing in and cornering him, like he was trying to save my life. I was upstairs crying myself to sleep, when they held a brief conference. The conference consisted of only one sentence that mattered—Cheryl, who would you rather do this: me or the police?

Coates says that it took him a while to realize how different his family was. They boycotted Thanksgiving, and fasted instead. Most of his friends were fatherless, around him the young were getting locked up, dying of gunshots, and crack brought the end of the world. His father’s Afrocentric publishing business succeeded somewhat, but he also did what he had to, including beekeeping. He held on to jobs as a janitor at Morgan State, a black college, and as a research librarian at Howard University, some ways away in Washington, D.C., just so his children could have free tuition. “What did I know, what did I know/of love’s austere and lonely offices?” Robert Hayden asks himself in his poem about his father, “Those Winter Sundays.” But Coates dedicates The Beautiful Struggle to his mother. His father had a few children by other women. One year he became a father by two women at the same time.

In his writings, Baldwin stressed that the Negro Problem, like whiteness, existed mostly in white minds, and in Between the World and Me, Coates wants his son, to whom he addresses himself, to know this, that white people are a modern invention. “Race is the child of racism, not the father.” He admits that he is haunted by his father’s generation, by a sense if not of failure then of something left unfinished. He wants to go back. He named his son after Samori Touré, the nineteenth-century Islamic ruler who resisted French colonial rule in West Africa, writing, “The Struggle is in your name.”

The struggle is what he has to bequeath to his son and although he tells him that he hasn’t had to live with the fear that Coates himself did at age fifteen, he’s sure his son understands that there is no difference between him and Trayvon Martin as a youth at risk because he is black in America. His body is not his own; it is not secure. He can be destroyed by American society and no one will be held responsible.

In American history Coates finds the answer to why he believes the progress of those who think themselves white was built on violence and looting, on stolen black bodies. People were Jewish or Welsh before they were white. The Irish used to be black socially, meaning at the bottom. The gift of being white helped to subdue class antagonism. Coates wants his son to know that government of the people had not included his family before, that American democracy is self-congratulatory and white people forgive the torture, theft, and enslavement on which the country was founded.

The way Coates himself grew up was the result of policy, of centuries of rule by fear. Death could come out of the afternoon, in the form of a boy who idly pulled a gun on him. Fear and violence were the weaponry of his schools as well as his streets:

I think back on those boys now and all I see is fear, and all I see is them girding themselves against the ghosts of the bad old days when the Mississippi mob gathered ’round their grandfathers so that the branches of the black body might be torched, then cut away.

And maybe it is his understanding of this fear that lets Coates explain in an exculpatory fashion the severe beatings he regularly got from his father. Meanwhile, television sent him dispatches from another world of blueberry pies and immaculate bathrooms. He sensed that “the Dream out there,” the endless suburbia of “unworried boys,” was connected somehow to his fear.

Certain people will do anything to preserve the Dream. They want to believe that the past has little effect on the present. As Coates puts it:

“We would prefer to say such people cannot exist, that there aren’t any,” writes Solzhenitsyn. “To do evil a human being must first of all believe that what he’s doing is good, or else that it’s a well-considered act in conformity with natural law.” This is the foundation of the Dream—its adherents must not just believe in it but believe that it is just, believe that their possession of the Dream is the natural result of grit, honor, and good works…. The mettle that it takes to look away from the horror of our prison system, from police forces transformed into armies, from the long war against the black body, is not forged overnight. This is the practiced habit of jabbing out one’s eyes and forgetting the work of one’s hands.

Coates is glad that his son is black. “The entire narrative of this country argues against the truth of who you are.” The experience of being black gives a deeper understanding of life than that afforded to those stuck in the Dream. “They made us into a race. We made ourselves into a people.” For Coates, black history is “our own Dream.”

James Baldwin; drawing by David Levine

In The Fire Next Time (1963), Coates’s literary model for Between the World and Me, Baldwin addresses his nephew and tells him early on that “you can only be destroyed by believing that you really are what the white world calls a nigger.” Baldwin’s polemic is unforgiving of America. He then goes on to describe the frustration of black people through a visit to the Chicago headquarters of the separatist Nation of Islam. In The Fire This Time (2007), a memoir of being black and gay in the South, Randall Kenan addresses his nephew, telling him that there is much discussion about what it means to be black and that as bad as things still are, a new class of “black folk” has emerged, the “bourgeois bohemian,” “a black intelligentsia given new and larger wings by meritocracy.” Coates, however, is confessing to his son that he, his father, cannot ultimately protect him.

He is aware of the anger in him and recalls that when his son was five they were leaving a movie theater on the Upper West Side and he nearly went off on a white woman who shoved his son because he wasn’t moving fast enough. He got into a shouting match with the white parents around him and then agonized over his uncool behavior. “I have never believed it would be okay.” The future was in our hands, Baldwin warned.

Coates wants his son’s life to be different from his, for him to escape the fear. He is pained by his son’s disappointment when the announcement comes that no charges would be lodged against Michael Brown’s killer in Ferguson. Coates urges his son to struggle, but not for the American Dreamers, their whiteness being “the deathbed of us all.” Coates remembers how “out of sync” he felt with the city on September 11, 2001. Race may be a construct, but his resentment at its damage is deep. He also says that he has never felt comfortable with the rituals of grieving in the black community. His parents weren’t just nonreligious, they were anti-Christian.

Some critics of Between the World and Me have noted that Coates offers no hope, or doesn’t believe that black people can shape their future. “It is the responsibility of free men to trust and to celebrate what is constant—birth, struggle, and death are constant, and so is love,” Baldwin said. Maybe Coates’s lack of belief in “agency,” why he sees us at the mercy of historical forces, is explained by the case of a Howard classmate, Prince Jones, a born-again Christian and the son of physicians, who in 2000 was killed by a police officer who had stopped his jeep in suburban Maryland. The policeman was the only witness to what happened, which was never fully explained. The Prince George’s County cop who shot Jones and the prosecutor who declined to prosecute him were both black. The population in that county is overwhelmingly black. To move to this black suburb represented a step up for blacks in Baltimore.

In the militant writing of the 1960s, on sale in his father’s bookstore and what Coates read in the library he loved at Howard, the aim was to get black and to stay black, to be on your guard against the corruption of assimilation. Rejection of the American dream—middle-class life—was implicit. As a cultural inheritance, authentic blackness became a form of ownership and intellectual capital for Coates’s hip-hop generation. You could get paid and still keep it real. Malcolm X was their hero. They didn’t believe in nonviolence. Telling it like it is, Malcolm X style, was the way to stay sane. Social hope was for clowns. You must not fall for it. Protect yourself. This is more than skepticism. To be resigned means you are not in danger of being anyone’s fool.

Coates writes in an intellectual landscape without the communism or Pan-Africanism that once figured in debate as alternatives to what white America seemed to offer. Hip-hop nationalism—of Coates’s time, say, KRS-One, Public Enemy, or the Wu-Tang Clan—has none of the provincialism of 1960s black nationalism. Coates says that he understands both Frederick Douglass, who advised blacks to remain in the US, and Martin Delany, who led a group of blacks to Liberia. What it means to be black still changes from place to place. “For a young man like me, the invention of the Internet was the invention of space travel.” Coates’s wife fell in love with Paris and the French language and then so did he, he says, and without thinking of Wright or Baldwin. Or Sartre or Camus, he adds. For Coates, writing is his alternative country.

Coates is in a very recognizable tradition, but that tradition is not static. Wright warned the white men of the West not to be too proud of their easy conquest of Africa and Asia. Baldwin invoked retribution of biblical magnitude if America did not end its racial nightmare. For Coates, it’s too late, given the larger picture. He speculates that now that the American Dreamers are plundering “not just the bodies of humans but the body of the Earth itself,” “something more awful than all our African ancestors is rising with the seas.”

He takes away America’s uniqueness. Human history is full of people who oppressed other people. To be white now has no meaning divorced from “the machinery of criminal power.” Is it a problem that Coates comes across as entirely reasonable in his refusal in this book to expect anything anymore, socially or politically? Harold Cruse’s anger against the betrayal of black nationalism in The Crisis of the Negro Intellectual (1967) led him to tell off both the black activist and the white Communist in the strongest language possible. Coates is nearly as fed up as Cruse, but his disillusionment is a provocation: it’s all your fault, Whitey.

This is a rhetorical strategy of the tradition, but to address an audience beyond black people is to be still attempting to communicate and enlighten. No author of a book on this subject can be filled with as much hopelessness as the black writer who no longer sees the point in anyone offering a polemic against racist America.

Du Bois never knew his father. He lived from the year the freedmen were enfranchised to the day before the March on Washington, and died a Communist in African exile. Hughes hated his father, an engineer who lived in Mexico in order to get away from Jim Crow. Wright’s sharecropper father abandoned the family. Ellison was two years old when his father died. Baldwin pitied the preacher who was really his stepfather. Baraka’s father was a postal supervisor, middle-class and in New Jersey.

Baraka gave a eulogy for Baldwin after his death, in part because Baldwin had become unpopular with whites late in his career. Baldwin turned out to have had Wright’s career, that of the engaged black writer. But he admired Ellison, who chose his art over being a spokesman, and never finished his second novel. Baldwin’s biographer, James Campbell, remembered that after Baldwin ran into Ellison at the Newport Jazz Festival, he said, “Ralph Ellison is so angry he can’t live.”


Uganda: When Democracy Doesn’t Count

Riot police dispersing a gathering of opposition supporters in Jinja, eastern Uganda, September 10, 2015
James Akena/Reuters/Corbis

In 1940, Franklin Roosevelt told Americans that, by arming Britain against the Nazis, we’d serve as an “arsenal of democracy.” But during the cold war, the opposite was often true, and apparently still is. According to two recent studies, the United States provides aid and sells weapons far more often to autocratic regimes than to democracies; even China partners with democracies more than America does. This pattern is particularly clear in sub-Saharan Africa. For a brief period after the cold war, America used foreign aid and other measures to pressure many countries to democratize; some, like Ghana, Tanzania, and Zambia, now hold more or less credible elections. But today, our strongest military allies there, especially in eastern Africa, do not.

In Ethiopia, which receives nearly $2 billion from Western donors each year, the ruling party and its allies won every parliamentary seat in last May’s election; during the campaign, opposition supporters were beaten and arrested, and opposition groups said the outcome was rigged. Since mid-December, at least 140 peaceful protesters there have been shot dead by security officers. In Burundi, lavish foreign aid emboldened President Pierre Nkurunziza to run for what many say is an unconstitutional third term last July, sparking a coup attempt and bloody street battles. The opposition boycotted the election, and Nkurunziza won easily, but the UN now fears that what started as a crackdown on street protests may escalate into civil war. Some 100,000 refugees have already fled the country. Rwanda’s President Paul Kagame recently announced that he too intends to scrap term limits and run again in 2017, perpetuating a fear-based regime in which numerous dissidents have been jailed and some killed.

Now Uganda, one of our most important African military allies, will hold presidential and parliamentary elections on February 18. Despite strong opposition, this election may be decided outside the voting booth too. In exchange for putting Ugandan troops at America’s disposal, often without parliamentary approval and other niceties required by more democratic countries, Uganda has received some $15 billion in foreign aid from the West since 1990. But since the country gained independence from Britain in 1962, it has never had a peaceful transfer of power; President Yoweri Museveni, in office since taking over in 1986 after years of civil war, has overseen a feast of corruption remarkable even by African standards.

Change is long overdue. Uganda’s child death rate is higher than that of any of its neighbors, except those at war. Only one fifth of students initially enrolled actually take the exams they need to graduate from primary school, according to unpublished research by lawyer Godber Tumushabe. And although the World Bank has long touted the country as an economic success story, 63 percent of the population lives under the Bank’s own poverty threshold of $3.10 a day. The income of most Ugandans is actually in the form of food they grow themselves, a Uganda Bureau of Statistics official told me, but a spate of land-grabbing cases throughout the country threatens even these meager livelihoods.

Dissidents calling attention to these problems have been subject to arbitrary arrest, seizure of property, detention without trial, and mysterious disappearances and deaths that many believe to be politically motivated. In Uganda, such abuses have tended to peak during campaign seasons, and the current one is no different. Thugs wearing ruling party T-shirts have stolen and defaced opposition campaign materials, and police have tear-gassed and fired on opposition supporters, and arrested and allegedly tortured opposition campaign agents. One was recently found decapitated and another has disappeared.

In November, I followed the campaign of leading opposition candidate Kizza Besigye in Busoga, a particularly destitute rural area not far from the capital. Besigye—a former army colonel and Museveni’s doctor during the war that brought him to power—made eight speeches a day, eschewing many big towns in favor of villages that have been devastated by thirty years of corrupt leadership. This region once had a real economy, with factories and shops, now nearly all derelict. There is water six feet underground but barefoot children still haul it for miles in buckets on their heads. Besigye promised to invest in education and health care, put a stop to rampant land-grabbing, and improve farmers’ access to credit. He says he will pay for this in part by enforcing anti-corruption laws and reducing the size of Uganda’s bloated parliament and civil service, which have become huge patronage machines. If Besigye—or another opposition candidate—were to win the election, it wouldn’t solve all of Uganda’s problems, but it might break the stranglehold of a corrupt system that has put down deep roots over decades.

However, Besigye and other candidates have faced obstacles getting their messages across. In October, police threw tire-cutters in front of Besigye’s convoy without warning. Although no one was hurt, several cars collided and were damaged beyond repair. In early January, a group of displaced people invited Besigye to inspect their miserable living conditions, but police blocked the way and fired into the crowd, seriously injuring several people. Social media videos show police planting mortars and carrying assault rifles at Besigye rallies, presumably to frighten the crowds. Another candidate, former Prime Minister Amama Mbabazi, has claimed that throughout the country, his rallies have been disrupted by pro-Museveni hooligans, while the police look on and do nothing. The police deny the allegation.

Uganda seldom makes headlines, but it’s been crucial to Western security since colonial times.  In 1875, Henry Morton Stanley convinced Queen Victoria that controlling the source of the Nile, then thought to be in Uganda (it’s actually in Rwanda), would give her leverage over the Mahdists and other unruly Muslim groups wreaking havoc on her empire in Sudan and Egypt downstream. Today, Uganda hosts one of the most important sites in the new chain of US military installations along the edge of the Sahara desert, from Senegal to Somalia, once again aimed at containing Islamist militants.

These bases, under construction since 2008, conduct training missions with African armies, airlift troops, and launch airstrikes against local terrorist groups such as Nigeria’s Boko Haram, Central Africa’s Lord’s Resistance Army, and Somalia’s al-Shabaab. Christine Mungai, writing in the Nairobi-based online magazine MGAfrica, has called the new US installations a “hippo trench”: hippos attack some three thousand people a year, and Africans living near lakes sometimes build trenches around their gardens because hippos can’t jump. In this case, the hippos are Islamic fundamentalists threatening to recruit terrorists to strike the US and possibly disrupt world trade routes or gain control of Africa’s uranium mines, oil wells, and other strategic resources.

Uganda’s collaboration in the hippo trench project has earned it a virtually free pass when it comes to human rights violations. Although Obama joined other Western heads of state in strong-arming Museveni into reversing a controversial anti-homosexuality law in 2014, his officials have said little about abuses of democratic rights, including those detailed in the State Department’s own annual human rights reports. Sanctions—usually small aid suspensions over corruption—have been light, temporary, and rare.

What can we expect on election day? According to a recent Royal African Society–sponsored poll conducted by British academics, if the elections had been held in November and December 2015, Museveni would have won 66 percent of the vote while his closest challenger, Besigye, would have received 24 percent. Opposition leaders find this hard to believe. The polls are organized with the assistance of the president’s office, they say, and conducted under the gaze of local officials appointed by Museveni.

I too wondered about this. Besigye’s November nomination brought the capital Kampala to a standstill. Hundreds of thousands of people poured into the streets, and it took his convoy five hours to drive a few miles from the Electoral Commission headquarters to a stadium for a rally. Ugandan politicians typically bribe voters at rallies with wads of shilling notes, bars of soap, or other small gifts. President Museveni has been photographed numerous times handing over canvas sacks of cash. But at Besigye’s rallies, the people have been giving him money, as well as live goats and turkeys, pieces of roast chicken, even furniture—and setting fire to their ruling party membership cards. A woman named Jane interviewed by journalists from Uganda’s Monitor newspaper walked fifteen miles just to give him five thousand shillings, or about $1.50. She had lost her sandals and her feet were bleeding, but she told the reporters she was happy to have given Besigye the money and would now go home. Museveni’s rallies have also attracted large crowds, but many supporters are paid and bussed in. Some have complained to journalists that they’d had no idea where the buses were even going.

The European Union will send a few hundred observers to monitor Uganda’s elections, but they are unlikely to prevent rigging because there are roughly 30,000 polling stations. When Besigye ran against Museveni in 2006, a similarly desultory EU observer mission found electoral “shortcomings,” but they were apparently insufficient to lead to punitive steps such as a reduction in foreign aid or sanctions. After that election, Besigye petitioned the Ugandan Supreme Court to annul the results and call for a re-run. All seven judges acknowledged there had been widespread voter intimidation, fraud, and violence. According to one of the judges, he and four of his colleagues also initially agreed that these abuses justified a re-run. However, the judge, in his recent memoir, writes that two changed their minds when the president privately told them he’d call out the army if the election was annulled.  The final tally was 4-3 in favor of Museveni, but the decisions were published ten months late, suggesting the majority had difficulty writing them.

If the February election outcome isn’t credible, Ugandans may decide to fight it out on their own, just as the people of Ethiopia and Burundi have been trying to do. Perhaps anticipating this, Information Minister Jim Muhwezi has warned that Museveni might let the army take over if he doesn’t win or if the opposition protests. Police Chief Kale Kayihura has meanwhile recruited hundreds of thousands of volunteer “Crime Preventers” from villages all over the country. The Crime Preventer initiative was created without legislation, and the exact number of recruits and their command structure are unknown. Kayihura claims it’s modeled on Britain’s Neighborhood Watch Program, but Neighborhood Watch volunteers merely report suspicious activity in their communities to the local police and are never supposed to intervene. By contrast, many Crime Preventers wear Museveni t-shirts, are paid to attend his rallies, assault opposition supporters, and learn to strip and assemble an AK-47. Human Rights Watch and other groups have called for the Crime Preventers to be disbanded.

In an opinion piece in Uganda’s Observer newspaper, published on Martin Luther King Day, Patricia Mahoney, Chargé d’Affaires at the US Embassy in Kampala, paid tribute to the slain civil rights leader and expressed concern about election violence. But her article, titled “The Path of Nonviolence is More Powerful,” seemed odd to me, given US military activity in the region. Anyway, it was not King’s fine rhetoric and charisma alone that changed America, but also the army that finally went down south and integrated the schools, protected demonstrators, and enforced the law of the land. Neither could have done it without the other. What happens in Uganda on February 18 will similarly depend in part on Uganda’s security forces, which are said to be split between loyalists and those who are as disgruntled as anyone about the problems in their country.


The Real Legacy of Steve Jobs

Steve Jobs: The Man in the Machine

Steve Jobs

Apple founder Steve Jobs as ‘the son of a migrant from Syria’; mural by Banksy, at the ‘Jungle’ migrant camp in Calais, France, December 2015
Philippe Huguen/AFP/Getty Images

Partway through Alex Gibney’s earnest documentary Steve Jobs: The Man in the Machine, an early Apple Computer collaborator named Daniel Kottke asks the question that appears to animate Danny Boyle’s recent film about Jobs: “How much of an asshole do you have to be to be successful?” Boyle’s Steve Jobs is a factious, melodramatic fugue that cycles through the themes and variations of Jobs’s life in three acts—the theatrical, stage-managed product launches of the Macintosh computer (1984), the NeXT computer (1988), and the iMac computer (1998). For Boyle (and his screenwriter Aaron Sorkin) the answer appears to be “a really, really big one.”

Gibney, for his part, has assembled a chorus of former friends, lovers, and employees who back up that assessment, and he is perplexed about it. By the time Jobs died in 2011, his cruelty, arrogance, mercurial temper, bullying, and other childish behavior were well known. So, too, were the inhumane conditions in Apple’s production facilities in China—where there had been dozens of suicides—as well as Jobs’s halfhearted response to them. Apple’s various tax avoidance schemes were also widely known. So why, Gibney wonders as his film opens—with thousands of people all over the world leaving flowers and notes “to Steve” outside Apple Stores the day he died, and fans recording weepy, impassioned webcam eulogies, and mourners holding up images of flickering candles on their iPads as they congregate around makeshift shrines—did Jobs’s death engender such planetary regret?

The simple answer is voiced by one of the bereaved, a young boy who looks to be nine or ten, swiveling back and forth in a desk chair in front of his computer: “The thing I’m using now, an iMac, he made,” the boy says. “He made the iMac. He made the Macbook. He made the Macbook Pro. He made the Macbook Air. He made the iPhone. He made the iPod. He’s made the iPod Touch. He’s made everything.”

Yet if the making of popular consumer goods was driving this outpouring of grief, then why hadn’t it happened before? Why didn’t people sob in the streets when George Eastman or Thomas Edison or Alexander Graham Bell died—especially since these men, unlike Steve Jobs, actually invented the cameras, electric lights, and telephones that became the ubiquitous and essential artifacts of modern life?* The difference, suggests the MIT sociologist Sherry Turkle, is that people’s feelings about Steve Jobs had less to do with the man, and less to do with the products themselves, and everything to do with the relationship between those products and their owners, a relationship so immediate and elemental that it elided the boundaries between them. “Jobs was making the computer an extension of yourself,” Turkle tells Gibney. “It wasn’t just for you, it was you.”

In Gibney’s film, Andy Grignon, the iPhone senior manager from 2005 to 2007, observes that

Apple is a business. And we’ve somehow attached this emotion [of love, devotion, and a sense of higher purpose] to a business which is just there to make money for its shareholders. That’s all it is, nothing more. Creating that association is probably one of Steve’s greatest accomplishments.

Jobs was a consummate showman. It’s no accident that Sorkin tells his story of Jobs through product launches. These were theatrical events—performances—where Jobs made sure to put himself on display as much as he did whatever new thing he was touting. “Steve was P.T. Barnum incarnate,” says Lee Clow, the advertising executive with whom he collaborated closely. “He loved the ta-da! He was always like, ‘I want you to see the Smallest Man in the World!’ He loved pulling the black velvet cloth off a new product, everything about the showbiz, the marketing, the communications.”

People are drawn to magic. Steve Jobs knew this, and it was one reason why he insisted on secrecy until the moment of unveiling. But Jobs’s obsession with secrecy went beyond his desire to preserve the “a-ha!” moment. Is Steve Jobs “the most successful paranoid in business history?” The Economist asked in 2005, a year that saw Apple sue, among others, a Harvard freshman running a site on the Internet that traded in gossip about Apple and other products that might be in the pipeline. Gibney tells the story of Jason Chen, a Silicon Valley journalist whose home was raided in 2010 by the California Rapid Enforcement Allied Computer Team (REACT), a multi-agency SWAT force, after he published details of an iPhone model then in development. A prototype of the phone had been left in a bar by an Apple employee and then sold to Chen’s employer, the website Gizmodo, for $5,000. Chen had returned the phone to Apple four days before REACT broke down his door and seized computers and other property. Though REACT is a public entity, Apple sits on its steering committee, leaving many wondering if law enforcement was doing Apple’s bidding.

Whether to protect trade secrets, or sustain the magic, or both, Jobs was adamant that Apple products be closed systems that discouraged or prevented tinkering. This was the rationale behind Apple’s lawsuit against people who “jail-broke” their devices in order to use non-Apple, third-party apps—a lawsuit Apple eventually lost. And it can be seen in Jobs’s insistence, from the beginning, on making computers that integrated both software and hardware—unlike, for example, Microsoft, whose software can be found on any number of different kinds of PCs; this has kept Apple computer prices high and clones at bay. An early exchange in Boyle’s movie has Steve Wozniak arguing for a personal computer that could be altered by its owner, against Steve Jobs, who believed passionately in end-to-end control. “Computers aren’t paintings,” Wozniak says, but that is exactly what Jobs considered them to be. The inside of the original Macintosh bears the signatures of its creators.

The magic Jobs was selling went beyond the products his company made: it infused the story he told about himself. Even as a multimillionaire, and then a billionaire, even after selling out friends and collaborators, even after being caught back-dating stock options, even after sending most of Apple’s cash offshore to avoid paying taxes, Jobs sold himself as an outsider, a principled rebel who had taken a stand against the dominant (what he saw as mindless, crass, imperfect) culture. You could, too, he suggested, if you allied yourself with Apple. It was this sleight of hand that allowed consumers to believe that to buy a consumer good was to do good—that it was a way to change the world. “The myths surrounding Apple is for a company that makes phones,” the journalist Joe Nocera tells Gibney. “A phone is not a mythical device. It makes you wonder less about Apple than about us.”

To understand this graphically, one need only look at Eric Pickersgill’s photographic series “Removed,” available online, in which the photographer has excised the phones and other electronic devices that had been in the hands of ordinary people going about their everyday lives: sitting at the kitchen table, cuddling on the couch, lying in bed. The result is images of people locked in an intimate gaze with the missing device, a gaze so unwavering it shuts out everything else. As Pickersgill explains it:

The work began as I sat in a café one morning. This is what I wrote about my observation:

Family sitting next to me at Illium café in Troy, NY is so disconnected from one another. Not much talking. Father and two daughters have their own phones out. Mom doesn’t have one or chooses to leave it put away. She stares out the window, sad and alone in the company of her closest family. Dad looks up every so often to announce some obscure piece of info he found online. Twice he goes on about a large fish that was caught. No one replies. I am saddened by the use of technology for interaction in exchange for not interacting. This has never happened before and I doubt we have scratched the surface of the social impact of this new experience. Mom has her phone out now.

One assumes that Steve Jobs would have been heartened by these images, for they validate the perception—promoted by, among others, Jobs himself—that he was a visionary, a man who could show people what they wanted and, even more, a man who could show people what they wanted before they even knew what they wanted themselves. As Gibney puts it, “More than a CEO, he positioned himself as an oracle. A man who could tell the future.”

And he could—some of the time. It’s important to remember, though, that when Jobs was forced out of Apple in 1985, the two computer projects into which he had been pouring company resources, the Apple III and another computer called the Lisa, were abject failures that nearly shut the place down. A recurring scene in Boyle’s fable is Jobs’s unhappy former partner, the actual inventor of the original Apple computer, Steve Wozniak, begging him to publicly recognize the team that made the Apple II, the machine that kept the company afloat while Jobs pursued these misadventures, and Jobs scornfully blowing him off.

Jobs’s next venture after leaving Apple, a workstation computer aimed at researchers and academics, appropriately called the NeXT, was even more disastrous. The computer was so overpriced and underpowered that few were sold. Boyle shows Jobs obsessing over the precise dimensions of the black plastic cube that housed the NeXT, rather than on the computer’s actual deficiencies, just as Jobs had obsessed over the minute gradations of beige for the Apple I. Neither story is apocryphal, and both have been used over the years to illustrate, for better and for worse, Jobs’s preternatural attention to detail. (Jobs also spent $100,000 for the NeXT logo.)

Sorkin’s screenplay claims that the failure of the NeXT computer was calculated—that it was designed to catapult Jobs back into the Apple orbit. Fiction allows such inventions, but as the business journalists Brent Schlender and Rick Tetzeli point out in their semipersonal recounting, Becoming Steve Jobs, “There was no hiding NeXT’s failure, and there was no hiding the fact that NeXT’s failure was primarily Steve’s doing.”

Steve Jobs speaking at a conference in San Francisco in front of a photograph of himself and Apple cofounder Steve Wozniak, 2010
Kimberly White/Corbis

Still, Jobs did use the NeXT’s surviving asset, its software division, as the wedge in the door that enabled him to get back inside his old company a decade after he’d been pushed out. NeXT software, which was developed by Avie Tevanian, a loyal stalwart until Jobs tossed him aside in 2006, became the basis for the intuitive, stable, multitasking operating system used by Mac computers to this day. At the time, though, Apple was in free fall, losing $1 billion a year and on the cusp of bankruptcy. The graphical, icon-based operating system undergirding the Macintosh was no longer powerful or flexible enough to keep up with the demands of its users. Apple needed a new operating system, and Steve Jobs just happened to have one. Or, perhaps more accurately, he had a software engineer—Tevanian—who could rejigger NeXT’s operating system and use it for the Mac, which may have been Jobs’s goal all along. Less than a year after Jobs sold the software to Apple for $429 million and a fuzzily defined advisory position at the company, the Apple CEO was gone, and the board of directors was gone, and Jobs was back in charge.

Jobs’s second act at Apple, which began either in 1996 when he returned to the company as an informal adviser to the CEO or in 1997 when he jockeyed to have the CEO ousted and took the reins himself, propelled him to rock-star status. True, a few years earlier, Inc. magazine named him “Entrepreneur of the Decade,” and despite his failures, he still carried the mantle of prophecy. It was Steve Jobs, after all, who looked at the first home computers, assembled by hobbyists like his buddy Steve Wozniak, and understood the appeal that they would have for people with no interest in building their own, thereby sparking the creation of an entire industry. (Bill Gates saw the same computer kits, realized they would need software to become fully functional, and dropped out of Harvard to write it.) But personal computers, as essential as they had become to just about everyone in the ensuing two decades, were, by the time Jobs returned to Apple, utilitarian appliances. They lacked—to use one of Steve Jobs’s favorite words—“magic.”

Back at his old company, Jobs’s first innovation was to offer an alternative to the rectangular beige box that sat on most desks. This new design, unveiled in 1998, was a translucent blue, oddly shaped chassis through which one could see the guts of the computer. (Other colors were introduced the next year.) It had a recessed handle that suggested portability, despite weighing a solid thirty-eight pounds. This Mac was the first Apple product to be preceded by the letter “i,” signaling that it would not be a solitary one-off, but instead, in a nod to the burgeoning World Wide Web, expected to be networked to the Internet.

And it was a success, with close to two million iMacs sold that first year. As Schlender and Tetzeli tell it, the iMac’s colorful shell was not just meant to challenge the prevailing industry aesthetic but also to emphasize and demonstrate that under Steve Jobs’s leadership, an Apple computer would reflect an owner’s individuality. “The i [in iMac] was personal,” they write, “in that this was ‘my’ computer, and even, perhaps, an expression of who ‘I’ am.”

Jobs was just getting started with the “i” motif. (For a while he even called himself the company’s iCEO.) Apple introduced iMovie in 1999, a clever if clunky video-editing program that enabled users to produce their own films. Then, two years later, after buying a company that made digital jukebox software, Apple launched iTunes, its wildly popular music player. iTunes was cool, but what made it even cooler was the portable music player Apple unveiled that same year, the iPod. There had been portable digital music devices before the iPod, but none of them had its capacity, functionality, or, especially, its masterful design. According to Schlender and Tetzeli,

The breakthrough on the iPod user interface is what ultimately made the product seem so magical and unique. There were plenty of other important software innovations, like the software that enables easy synchronization of the device with a user’s iTunes music collection. But if the team had not cracked the usability problem for navigating a pocket library of hundreds or thousands of tracks, the iPod would never have gotten off the ground.

By 2001, then, Apple’s strategy, which had the company moving beyond the personal computer into personal computing, was underway. Jobs convinced—or, most likely, bullied—music industry executives, who had been spooked by the proliferation of peer-to-peer Internet sites that enabled people to download their products for free, into letting Apple sell individual songs for about a dollar each on iTunes. This, Jobs must have known, set the stage for the dramatic upending of the music business itself. Among other consequences, Apple, and its millions of iTunes users, became the new drivers of taste, influence, and popularity.

Apple’s reach into the music business was fortified two years later when the company began offering a version of iTunes for Microsoft’s Windows operating system, making iTunes (and so the iPod) available to anyone and everyone who owned a personal computer. Providing a unique Apple product to Microsoft, a company Jobs did not respect, and that he had accused in court of stealing key elements of the Mac operating system, only happened, Schlender and Tetzeli suggest, because Jobs’s colleagues persuaded him that once Windows users experienced the elegance of Apple’s software and hardware, they’d see the light and come over from the dark side. In view of Apple’s market valuation, the largest of any company in the world, it looks like they were right.

The iPod, as we all know by now, gave way to the iPhone, the iPod Touch, and the iPad. At the same time, Apple continued to make personal computers, machines that reflected Jobs’s clean, simple aesthetic, brought to fruition by Jony Ive, the company’s head designer. Ive was also responsible for the glass and metal minimalism of Apple’s handheld devices, where form is integral to function. Mobile phones existed before Apple entered the market, and there were even “smart” phones that enabled users to send and receive e-mail and surf the Internet. But there was nothing like the iPhone, with its smooth, bright touch screen, its “apps,” and the multiplicity of things those applications let users do in addition to making phone calls, like listen to music, read books, play chess, and (eventually) take and edit photographs and videos.

Steve Jobs’s hunch that people would want a phone that was actually a powerful pocket computer was heir to his hunch thirty years earlier that individuals would want a computer on their desk. Like that hunch, this one was on the money. And like that hunch, it inspired a new industry—there are now scores of smart phone manufacturers all over the world—and that new industry begot one of the first cottage industries of the twenty-first century: app development. Anyone with a knack for computer programming could build an iPhone game or utility, send it to Apple for vetting, and if it passed muster, sell it in Apple’s app store. These days, the average Apple app developer with four applications in the Apple marketplace earns about $21,000 a year. If someone were writing a history of the “gig economy”—making money by doing a series of small freelance tasks—it might start here.

Alex Gibney begins his movie wondering why Steve Jobs was revered despite being, as Boyle’s hero says of himself, “poorly made.” (In the film, he says this to his first child, whose paternity he denied for many years despite a positive paternity test, and whom he refused to support, even as she and her mother were so poor they had to rely on public assistance.) Gibney pursues the answer vigorously, and while the quest is mostly absorbing, it never gets to where it wants to go because the filmmaker has posed an unanswerable question.

And here is another: With one new book and two new movies about Jobs out this season alone, why this apparently enduring fascination with the man? Even if he is the business genius Schlender and Tetzeli credibly make him out to be, the most telling lesson to be learned from Jobs’s example might be summed up by inverting one of his favorite marketing slogans: Think Indifferent. That is, care only about the product, not the myriad producers, whether factory workers in China or staff members in Cupertino, or colleagues like Wozniak, Kottke, and Tevanian, who had been crucial to Apple’s success.

iPhones and their derivative handheld i-devices have turned Apple from a niche computer manufacturer into a global digital impresario. In the first quarter of 2015, for example, iPhone and iPad sales accounted for 81 percent of the company’s revenue, while computers made up a mere 9 percent. They have also made Apple the richest company in the world. The challenge, now, as the phone and computer markets become saturated, is to come up with must-have products that create demand without the enchantments of Steve Jobs.

This past year saw the launch of the much-anticipated Apple Watch, which failed to generate consumer enthusiasm. Sales dropped 90 percent in the first week and continued to disappoint for the rest of the year. The company also released the iPad Pro, a larger, more powerful iPad, and it, too, did not fare well. There have been rumors of an upcoming Apple car—maybe it’s electric, maybe self-driving, maybe built from the ground up, maybe in partnership with Mercedes-Benz, maybe it will be launched in 2019, maybe that’s too soon—but when the current Apple CEO, Tim Cook, was asked about the car by Stephen Colbert on his show, and again in December by Charlie Rose on 60 Minutes, he was less than forthcoming.

Even so, in the years since Jobs’s death, despite failing to introduce a blockbuster product, and despite its recent drop in share price, the company continues to grow. 2015 was Apple’s most profitable year so far, with revenues of $234 billion. According to financial analysts, this makes Apple stock either a bargain or a bear poised to fall from a tree. So far, no one has created an app that can predict the future.

Apple’s release of Siri, the iPhone’s “virtual assistant,” the day before Jobs’s death, is as good a prognosticator as any that artificial intelligence (AI) and machine learning will be central to Apple’s next generation of products, as it will be for the tech industry more generally. (Artificial intelligence is software that enables a computer to take on human tasks such as responding to spoken language requests or translating from one language to another. Machine learning, which is a kind of AI, entails the use of computer algorithms that learn by doing and rewrite themselves to account for what they’ve learned without human intervention.) A device in which these capabilities are much strengthened would be able to achieve, in real time and in multiple domains, the very thing Steve Jobs sought all along: the ability to give people what they want before they even know they want it.

What this might look like was demonstrated earlier this year, not by Apple but by Google, at its annual developer conference, where it unveiled an early prototype of Now on Tap. What Tap does, essentially, is mine the information on one’s phone and make connections among its contents. For example, an e-mail from a friend suggesting dinner at a particular restaurant might bring up reviews of that restaurant, directions to it, and a check of your calendar to assess if you are free that evening. If this sounds benign, it may be, but these are early days—the appeal to marketers will be enormous.

Google is miles ahead of Apple with respect to AI and machine learning. This stands to reason, in part, because Google’s core business emanates from its search engine, and search engines generate huge amounts of data. But there is another reason, too, and it loops back to Steve Jobs and the culture of secrecy he instilled at Apple, a culture that prevails. As Tim Cook told Charlie Rose during that 60 Minutes interview, “one of the great things about Apple is that [we] probably have more secrecy here than the CIA.”

This institutional ethos appears to have stymied Apple’s artificial intelligence researchers from collaborating or sharing information with others in the field, crimping AI development and discouraging top researchers from working at Apple. “The really strong people don’t want to go into a closed environment where it’s all secret,” Yoshua Bengio, a professor of computer science at the University of Montreal, told Bloomberg Business in October. “The differentiating factors are, ‘Who are you going to be working with?’ ‘Am I going to stay a part of the scientific community?’ ‘How much freedom will I have?’”

Steve Jobs had an abiding belief in freedom—his own. As Gibney’s documentary, Boyle’s film, and even Schlender and Tetzeli’s otherwise friendly assessment make clear, as much as he wanted to be free of the rules that applied to other people, he wanted to make his own rules that allowed him to superintend others. The people around him had a name for this. They called it Jobs’s “reality distortion field.” And so we are left with one more question as Apple goes it alone on artificial intelligence: Will hubris be the final legacy of Steve Jobs?

* When Bell died, every phone exchange in the United States was shut down for a moment of silence. When Edison died, President Hoover turned off the White House lights for a minute and encouraged others to do so as well.


The Collision Sport on Trial


Requiem for a Running Back

Richard Rodgers of the Green Bay Packers catching Aaron Rodgers’s seventy-yard pass to win the game against the Detroit Lions, December 3, 2015
Leon Halip/Getty Images


Of the many sayings attributed to Vince Lombardi, the legendary coach of the Green Bay Packers, the one that seems most relevant to football today is not about winning, the pursuit of excellence, or the importance of will and character, but rather this: “Football is not a contact sport; it is a collision sport.”

Collisions are the essence of football. They are intended to occur on every play in every game. Football, Lombardi would say, comes down to blocking and tackling. Every block and tackle is a collision, and every collision could bring some measure of pain. When Lombardi was a boy in Brooklyn, his father, Harry, a tough little man who ran a butcher shop, pounded into him the notion that pain was all in his mind.

The truth is that Lombardi himself had a low pain threshold. He was often disabled with injuries when he was a member of the Fordham line romanticized in the 1930s as the Seven Blocks of Granite. But like many effective leaders, he drew on an understanding of his own weaknesses as a means of eliminating them in others. When he was a prep coach at St. Cecilia High School in Englewood, New Jersey, he would line up across from his young players and order them to charge at him while he bellowed “Hit me! Hit me!” At Green Bay, Lombardi would cackle with delight during training camp when it came time for the one-on-one collision drill known as the nutcracker.

I grew up in Wisconsin during the 1960s, when Lombardi’s Packers were winning five championships in nine seasons, and later wrote a biography of him, which might explain why, although baseball is my preferred game, the Packers are my favorite team in any sport. The seasonal progression from radiant fall Sundays to frostbite playoff games in the darkening winter and the superstitions that come with watching the Packers are part of my emotional life, bringing joy and anguish, and if that is pathetic, it is a condition I share with millions of National Football League fans. But my attachment to football has been loosened by an increasing sense of guilt about whether I am complicit in supporting an unacceptably debilitating and duplicitous enterprise. America’s superpower game has never been more popular, yet evidence against it is amassing on many fronts, none more troubling than what science now says about the long-term ramifications of those collisions. I’ve wondered whether I could resolve the conflict between my attraction to the game and concern about what it does. On a larger scale, I’ve wondered whether football could repair itself and be made safer.

In a search for answers, I studied a diverse collection of books, articles, transcripts, and films about football. They included three books that considered the physical, sociological, and financial aspects of the sport to support their theses—Steve Almond’s manifesto Against Football, Gregg Easterbrook’s response, The Game’s Not Over: In Defense of Football, and Gilbert Gaul’s Billion-Dollar Ball, a deeply reported look into the corporatization of the college game and how it can take precedence over academic concerns.

The rest dealt mostly with the neurological effects that football collisions can have on the brain. Two were films: the well-publicized Concussion, a Hollywood movie starring Will Smith as the Pittsburgh pathologist who discovered a neurodegenerative disease in the brain tissue of deceased football players that came to be known as CTE (chronic traumatic encephalopathy), and the not-yet-released documentary Requiem for a Running Back, about Rebecca Carpenter’s quest to understand her troubled father, Lew Carpenter, who played in the NFL with Lombardi’s Packers fifty-five years ago and suffered from CTE. I also read transcripts of television interviews on the topic of football brain trauma conducted by Charlie Rose and PBS’s Frontline; and the essential writings of Steve Fainaru and his brother, Mark Fainaru-Wada, especially League of Denial, a book that documented two decades of obfuscation and deceit by the NFL in dealing with brain injuries.


Before all else, football must be identified for what it has become, far beyond the blocking and tackling—a colossal entertainment business that benefits from an economic system tilted in its favor.

The NFL, operating as a monopoly exempt from antitrust legislation, brings in $11 billion a year. The owners have been reported to pay their hand-picked commissioner, Roger Goodell, an annual salary of over $35 million. Most of the money comes from television. Easterbrook notes that on the list of the most-watched television events in American history, the Super Bowl holds the top twenty spots. Sunday Night Football on NBC has been the top-rated show on any channel since 2011, and ESPN’s Monday Night Football has been the number-one cable show since 2006.

The football games of the major college teams and conferences are not far behind as businesses, even as they enjoy the benefits of nonprofit tax status. Several conferences, such as the Big Ten, have their own lucrative television networks and, as Gaul writes, “are operated like entertainment divisions, with CEO-style executives and celebrity coaches collecting Wall Street–level salaries.” At the University of Oregon, known as “Nike U” because of the largesse of one billionaire alumnus, Phil Knight, founder of the shoe company, the equivalent of more than $180,000 was spent on each football player, by Gaul’s estimate. The “student-athletes” are tutored in the “Taj Mahal of academic services” building, a $42 million glass-and-steel modernist structure off-limits to other students, and trained in a Football Performance Center that reminded Gaul of an upscale shopping mall, replete with plush Ferrari leather meeting seats and locker rooms with “floor-to-ceiling glass walls and marble flooring imported from Italy.” The academic honors program at Oregon is housed in a basement.

Just as with movie stars, pop idols, and other big-time entertainers, any comparison of NFL salaries to those of other professions can be depressing. In 2013, according to Almond, the Minnesota Vikings paid a defensive end $1 million per game “to maul opposing quarterbacks.” For that same price, communities in a state then facing a budget deficit could have hired 474 elementary school teachers or 661 police officers. Fans tend to accept the disparity as a fact of life in our market-driven, celebrity-loving society. They fund most of it, not only by writing monthly checks for cable sports channels and buying game tickets and team merchandise, but also indirectly through the tapping of public funds and granting of tax breaks so that wealthy owners can profit from new and better stadiums.

In Seattle, where Seahawks fans are admiringly called the 12th Man because of the decibel level of their relentless cheering, nearly three quarters of the construction funds for CenturyLink Field came from Washington State taxpayers. “The owner, Paul Allen, pays the state $1 million per year in ‘rent’ and collects most of the $200 million generated,” Almond writes. “If you are wondering how to become, like Allen, one of the richest humans on earth, negotiating such a lease would be a good start.”

All of the above is accepted as troublesome by Easterbrook, Almond, and Gaul despite their disparate conclusions about the fate of football. Gaul, with his focus on college football, has the most provocative perspective on what comes next. The monetization of a superficially amateur sport has made poorly funded programs like those of New Mexico State or Alabama-Birmingham poorer and rich programs like Ohio State and Oregon richer. When Gaul asked the commissioners of twelve conferences, including the Big Ten and the Pac-12 in the western states, if they were worried about their bubble bursting, “they only laughed in response.” This was the gilded age of college football, Gaul concluded, but “the thing about gilded ages is they eventually collapse on themselves.”

How might they collapse? Whatever the football community does about making the game safer, Gaul saw signs of another problem looming ahead—the unstoppable force of technology. Schools that now profit from football should

be concerned about the disruptive implications of tablets, cell phones, and other gadgets not yet imagined…[and] the younger generation of fans who aren’t nearly as committed to the live-game experience as their parents and grandparents were…. Even students at mighty Alabama are leaving at halftime and not coming back. It is that little portable screen they keep fiddling with to distract themselves.

In their comments on the merits of the game, Easterbrook and Almond draw on a wide range of arguments, but in the end the crucial divide involves brain trauma. Easterbrook’s assessment is summarized in his title: The Game’s Not Over, even if the NFL is “broken and needs reform.” He calls it “the quintessential American sport, a magnificent incarnation of our national character,” and praises its aesthetic beauty, the way it brings together fans of disparate races and incomes, and how it provides an outlet for emotion and manliness in an artificial universe where, unlike the real world, nothing terrible occurs.

The problem, Easterbrook argues, is not at the pro level but earlier, in “youth football”—played by adolescents—and on high school teams, where the vast majority of concussions occur in brains that are more vulnerable. There are about two million boys playing youth tackle football and another 1.1 million on high school teams, while there are about two thousand in the NFL. Easterbrook would prohibit tackle football for kids below a certain age. He believes that changes in NFL rules and improvements in equipment will diminish the likelihood of long-term brain trauma for the pro players. He advises:

Yes, keep watching the NFL. The games are fabulous; the players know the risks and are well compensated. I watch the NFL on television avidly, and attend many games with enthusiasm. I never feel the slightest compunction. You shouldn’t either.

Almond, a reformed Oakland Raiders fanatic, struggled with conflicted feelings for years, but finally concluded that “our allegiance to football legitimizes and even fosters within us a tolerance for violence, greed, racism, and homophobia.” Where Easterbrook sees the sport as that magnificent incarnation of the American character, Almond asks:

What does it mean that the most popular and unifying form of entertainment in America…features giant muscled men, mostly African-American, engaged in a sport that causes many of them to suffer brain damage?

And he sees no reason to trust that the NFL will clean up the game: “As fans, we want to believe that league officials will choose the righteous path over the profitable one. This is nonsense and always has been.”


In the debate about football and brain trauma, Mike Webster’s dead brain started it all, in a sense, and Chris Borland’s living brain intensified the discussion. Separated by forty years, their stories weave together through the writings and films I examined on the subject.

“Iron Mike” Webster was a Hall of Fame center who played for the Pittsburgh Steelers, won four Super Bowl rings, and died in 2002 at age fifty. By then he was a broken man who lived in a pickup truck, estranged from his family, shocking himself with a Taser and reattaching his teeth with superglue. It was his brain tissue that Dr. Bennet Omalu—the main character in Concussion—examined at the Allegheny County Coroner’s Office in Pittsburgh, leading to the discovery of CTE. Flashbacks depicting Webster’s tormented last days, as portrayed by actor David Morse, are among the movie’s most telling scenes.

As Concussion unfolds, the NFL responds to Dr. Omalu’s findings about Webster’s brain by calling him a quack and claiming that the CTE discovery is bad science. The campaign against him continues even after he documents strikingly similar damage in the brain tissue of other troubled former players who died too young. This reaction became part of a pattern. When Alan Schwarz of The New York Times started writing about traumatic brain injuries and football, Paul Tagliabue, then the commissioner, said dismissively that this “is one of those pack journalism issues, frankly.” The NFL formed its own study committee, stacked with doctors affiliated with the league. As League of Denial documents, the goal was to obfuscate the problem. Theirs was the junk science, not Omalu’s. One scientific paper declared: “Professional football players do not sustain frequent repetitive blows to the brain on a regular basis.” Webster endured more than 70,000 blows during his long career.

Chris Borland was an inside linebacker who played one brilliant season for the San Francisco 49ers, then retired in March 2015 at age twenty-four after studying the potential long-term effects the game might have on his brain. “I want to be seventy-five and healthy if possible,” he told Rebecca Carpenter in her documentary. One magazine labeled him “the most dangerous man in football.”

Lombardi and Wisconsin connect the two. Webster grew up on a potato farm in northern Wisconsin revering the Packers during the 1960s while they were winning five championships. He went on to play center at the University of Wisconsin–Madison before reaching the NFL. League of Denial opens with a scene of Webster at Pittsburgh’s training camp as a slow and undersized fifth-round draft choice in 1974 proving himself by excelling at the brutal nutcracker drill that Lombardi made infamous. Borland, whose family came from Wisconsin and moved to Ohio, grew up idolizing Lombardi. After my book appeared, he sent me a local newspaper photograph showing him in sixth grade dressed as Lombardi in a comically oversized camel-hair coat and stocking cap, presenting a book report on my biography of the coach.

Like Webster, Borland played college football at Wisconsin. His roommate was from Rhinelander, where Webster went to high school. Webster was a hero in the Wisconsin football pantheon when Borland played there. His Hall of Fame plaque hung outside the locker room. Borland took Webster’s determined approach to the game as the model of a way to prove himself, but though football was important to him, it was not the only thing. Reporters in Madison considered Borland an unusually thoughtful and independent athlete. In 2011, he joined the mass protests at the state capitol against Governor Scott Walker’s anti-union agenda.

At training camp during his first season with the 49ers, Borland suffered a concussion, and from then on, even as he excelled in games, he thought about retiring. He made the decision after reading League of Denial and consulting with Robert A. Stern, an expert on brain trauma and a professor of neurology at Boston University. By then, there was no debate about the validity of CTE, though it could only be diagnosed posthumously by examining brain tissue. Without acknowledging guilt, the NFL had settled a class-action lawsuit filed by thousands of former players charging that the league for years had covered up what it knew about traumatic brain injuries. (Although the players were awarded a total allotment of nearly $1 billion spread out over twenty years, the deal was largely considered a win for the owners; to collect the money the players had to waive the right to further litigation.)

Stern’s colleague, Ann McKee, had discovered CTE in the brains of scores of deceased football players of several generations, including Frank Gifford, a member of the Hall of Fame who died in his eighties, and Junior Seau, who killed himself at forty-three. But it was Webster whom Borland could not get off his mind when he decided that the risk of playing was not worth the reward. The 49ers responded by sending him a bill to repay much of his signing bonus. He still owes about $300,000. A few months after he retired, he visited his old high school coach in Ohio, who asked him whether he could teach his young players a safer way to tackle. Borland politely declined, explaining: “I think that’s a really difficult thing to do.”

Rebecca Carpenter, director of Requiem for a Running Back, looking at the brain of her father, football player Lew Carpenter, with neuropathologist Ann McKee
Eric Wycoff/You Gotta Love LLC

The stories of Webster and Borland and Dr. Omalu all appear in the documentary Requiem for a Running Back, along with several powerful encounters that Rebecca Carpenter and her producer, Sara Dee, had with aging players and their families. No scene in the dramatization Concussion can match the agony of watching John Hilton, who played tight end in the NFL from 1964 to 1974, lose his train of thought, his eyes watering, a look of sheer desperation washing over him, as he tries to explain his mental condition; or the pain on the face of the wife of Mike Pyle, a center for the 1963 champion Chicago Bears, as she tells Carpenter, “One day you wake up and think, I don’t have a husband anymore. He’s sitting next to me, but…” The current estimates are that nearly 30 percent of all NFL players will suffer some form of dementia by the time they are sixty-five. Most players, unlike Borland, will still say it is worth the risk. But David Hovda, the head of UCLA’s Brain Injury Research Center, explained to Carpenter, “Brain injury does not happen to one person. It happens to an entire family.”

Rebecca Carpenter had spent years trying to understand her father, Lew, who grew up near the cotton fields of the Arkansas Delta, started as a running back at the University of Arkansas, and became a football lifer, ten years as a player, a coach for thirty-one more. On the field, Rebecca said, “he was beautiful, and I mean really, really beautiful,” but at home his anger and withdrawal had cast a shadow over her childhood and later became so pronounced that his wife, after a long and loving marriage, felt no choice but to leave him.

When he died at age seventy-eight in 2010, his family received an inquiry from Ann McKee, the neuropathologist in Boston. She had read Carpenter’s obituary, saw that he ostensibly had never suffered a concussion during his career, and asked whether his brain could be examined as a control in the CTE studies. The family agreed, and months later Rebecca was in Boston looking through a microscope at the brown strands of tau protein that had riddled her father’s diseased brain tissue. McKee said to her, “On a scale of one to four, four being the worst, your father was a four.”

Since it cannot be diagnosed in living players, CTE is not a fully understood disease. Its symptoms appear to vary widely from severe dementia to depression to bursts of anger. But Lew Carpenter’s brain reinforced what leading neuroscientists now believe—that it is not severe concussions so much as the repetitive subconcussive blows that football players endure over a career that are more often the cause, the toll of thousands of collisions and jarring movements that shake the brain inside the skull. This calls into question whether the NFL’s concussion protocols and changes in rules can fix things. As Susan Margulies, a concussion expert at the University of Pennsylvania, explained to Charlie Rose, no helmet has been devised that can “effectively reduce the rotational acceleration, that sloshing within the head that’s happening in the brain itself.”

In late November, in the middle of my research, I announced to my wife that I was “off football.” The cumulative effect of what I had read and viewed seemed too damning for me to continue as a fan. But this decision also happened to coincide with a Packers loss to the Chicago Bears on Thanksgiving night. A team that had started the season 6–0 was now 7–4, and it was exasperating to watch them struggle. Was I “off football,” or conveniently using my newfound knowledge as a rationalization to avoid the pain? I spent the following days with anything but football: no college games on Saturday, no NFL on Sunday or Monday night. I imagined what it would be like to be Garry Wills, who once told me that he had never watched ESPN.

Green Bay’s next game was the following Thursday night, against the Detroit Lions. I tried to resist watching but gave in, clicking on the television in our hotel room for the start of the game. With the Packers trailing 17–0 before halftime, off went the set. This served me right for being weak, I said to myself. An hour later, unable to sleep, I checked my cell phone and saw that the score was 23–21 Detroit.

No debate now. On went the television, and I watched the final tense minutes—the Lions’ completion on third and long that seemed to clinch the game; the Packers getting the ball back one final time with less than a half-minute remaining; the desperate multilateral play that seemed to end it all as Aaron Rodgers, the Packers’ quarterback, was tackled with the clock reading 00:00; the penalty flag against a Detroit lineman for yanking Rodgers’s face mask, allowing Green Bay one last untimed play; the snap, Rodgers retreating and feinting left, barely avoiding a sack, rolling out to his right, gathering momentum as he approached the line of scrimmage and launched a remarkable gung-ho spiral, the ball arcing as high as a punt, nearly scraping the rafters of indoor Ford Field before descending toward the end zone seventy yards downfield and floating into the hands of the felicitously named Richard Rodgers, who had turned and backpedaled and jumped high to make the catch in front of a scrum of teammates and opposing defensive backs. HE CAUGHT IT! HE CAUGHT IT! WE WON! WE WON! I screamed, waking up my wife if not the entire tenth floor of the hotel. Easterbrook’s The Game’s Not Over took on a new meaning.

I quit smoking cold turkey thirty years ago; this was more difficult. Of all the people I had come across during my research on football, Chris Borland was the one I admired most. I wanted to support him in every way I could, yet he had played and I had only watched, and now I could not bring myself to stop watching, even though that made me feel less than virtuous. I was not alone with these conflicted impulses. Ann McKee, who had studied scores of diseased football brains at Boston University, acknowledged that she remained a fan. “I have, like, these two faces,” she told Almond as he was making his case against football. McKee grew up in Appleton, Wisconsin, a Packers fan like me. Among the artifacts in her office was a bobblehead doll of Aaron Rodgers.

And what of Rebecca Carpenter? Was she off football now? Yes, she wrote to me, and in truth it never gave her much pleasure. This said, she acknowledged how marvelous it could be. “That Aaron Rodgers pass with zero seconds on the clock: Who didn’t think that was beautiful? Holy shit, it’s an amazing game!”
