Monthly archives: August 2017

Kenya: The Election & the Cover-Up

Kenyans waiting to vote in the presidential election, Gatundu, Kenya, August 8, 2017 (Baz Ratner/TPX/Reuters)

On August 8, millions of Kenyans formed long, orderly lines outside polling stations across the country to vote in presidential and local elections. Kenya is notorious for corruption, and virtually all prior elections had been marred by rigging. This time, however, the US and Kenya’s other donors had invested $24 million in an electronic vote-tallying system designed to prevent interference. When Kenya’s electoral commission announced on August 11 that President Uhuru Kenyatta had won another five-year term with over 54 percent of the vote, observer teams from the African Union, the European Union, and the highly respected US-based Carter Center, led by former Secretary of State John Kerry, commended the electoral process and said they’d seen no evidence of significant fraud. Congratulations poured in from around the world, and Donald Trump praised the elections as fair and transparent.

But not everyone was happy. Raila Odinga, leader of the opposition National Super Alliance party, or NASA, declared the election a sham as soon as the results began coming in. On August 18, he submitted a petition asking Kenya’s Supreme Court to annul it and order a re-vote. The petition claims, among other things, that nearly half of all votes cast had been tampered with; that NASA’s agents, who were entitled by law to observe the voting and counting, had been thrown out of polling stations in Kenyatta strongholds; and that secret, unofficial polling stations had transmitted fake votes. The Supreme Court is expected to rule on September 1, but on August 29, the court registrar reported that some 5 million votes, enough to affect the outcome, were not verified.

Signs that something weird was going on emerged well before the election. A month earlier, Kenya’s electoral commission contracted Ghurair, a Dubai publishing firm, to print ballots. Newspaper reports linked the company to Kenyatta’s inner circle, and Kenyan courts ordered the electoral commission to use a different firm. The order was ignored, and the electoral commission issued a single-source contract to Ghurair anyway, citing time pressure. Then the accounting firm KPMG reported that more than a million dead people might still be registered as voters. NASA officials complained that Ghurair could print extra ballots to be used to create pro-Kenyatta ghost votes. Kerry dismissed these concerns, quipping after the election, “The people who voted were alive. I didn’t see any dead people walking around.”

Ten days before the election, the brutally tortured corpse of the electoral commission’s IT manager, Chris Msando, was discovered in some bushes outside Nairobi. CCTV footage shows his car roaming around the city for hours in the middle of the night before he died. Also in the car were two men and a woman, whose dead body was discovered beside Msando’s, suggesting a “love triangle” explanation. Many Kenyans expressed skepticism. Msando managed the electronic system for transmitting results from polling stations, and he’d been complaining to the police of death threats for weeks. Kenya’s donors, including the EU’s ambassador to Kenya, praised the government for its commitment to investigating the murders, though many Kenyans suspected the police of being involved in them. But when the US and UK offered to help with the investigation, the police declined. Kerry warned the opposition not to politicize the killing.

A week before the election, the members of a team of US and Canadian advisers who had been helping Odinga’s campaign set up a parallel system to verify the vote counting were arrested at gunpoint and deported. Then Odinga’s spokesman fled too, citing death threats. Then the NASA vote-counting office was ransacked. The Carter Center noted in its report that the raid had probably been carried out by Kenyan security personnel.

Election day brought more problems. According to Kenya’s electoral laws, representatives from all political parties are permitted to witness the voting and the counting of ballots in polling stations after polls close. Each representative then signs a form known as 34A, certifying the count, and receives a carbon copy. The new $24 million system was supposed to enable scans of the 34A forms to be sent to the electoral commission and posted online immediately, so they could be double checked by all parties and the public. But that system broke down at polling stations all across the country, so only the numbers were sent to Nairobi, often not by the new system but by text message. NASA officials pointed out that these numbers could have been changed en route and noted various suspicious findings in the unofficial early returns, including 100 percent voter turnout at some polling stations—with all votes for Kenyatta; a consistent 11 percent spread between Odinga and Kenyatta during the vote counting—a virtual statistical impossibility; and a phenomenon known as “unvoting,” in which the totals for some candidates actually fell as more votes came in. In his remarks on behalf of the Carter Center, Kerry admitted that there had been some “little aberrations here and there,” but none that “we thus far feel affected the overall integrity of the process.”

Electoral commission officials were supposed to deliver their 34A copies to one of 290 constituency-level centers, where the totals would be recorded on forms known as 34Bs. Copies of all 34As and 34Bs were then supposed to be delivered physically to the national tally center in Nairobi, where they were to be put online—if they had not been already. But almost none were actually online on the day Kenyatta was declared the winner. 

Shortly before departing Kenya, John Kerry praised the electoral commission for having done an “extraordinary job to ensure that Kenya has a free, fair and credible poll.” He then urged the opposition to “get over it and move on.”

The Kenyan Independent Electoral and Boundaries Commission (IEBC) preparing to announce election results, Nairobi, Kenya, August 11, 2017 (Thomas Mukoya/Reuters)

People who have witnessed election fraud in other African countries have told me that it’s normally done by making small changes to large numbers of tallies, and this appears to have happened in Kenya, where there were over 40,000 polling stations. After NASA submitted its petition, a team of American experts led by University of Michigan Professor of Statistics and Political Science Walter Mebane volunteered to conduct a forensic analysis of the results. Tampered results tend to show telltale statistical patterns, and Mebane’s computer program identified over half a million fraudulent votes in this manner—almost certainly an underestimate of the true number.

According to Mebane, the paper forms provide the true test of the integrity of the election. The Supreme Court’s registrar assembled a team of experts to physically examine the 34A and 34B forms that the electoral commission claimed to have used to arrive at the final results. According to their analysis, nearly a third of the forms have irregularities: some are blank, some are signed in the same handwriting, some come from polling stations that didn’t officially exist, some show results that differ from the totals on the copies of the forms in NASA’s possession and from the totals announced by the electoral commission, and thousands lack official stamps, signatures, and watermarks. When the Supreme Court-appointed team examined the logs of the electoral commission’s server, it found that numerous unauthorized users had entered the system before and after the election, that the electoral commission chairman had uploaded and removed 34A forms, and that some polling center results had been added before the election had actually occurred.

Despite the growing evidence that the election was a fraud, Kenya’s notoriously corrupt judiciary may dismiss the case. When Odinga disputed Kenyatta’s victory after a similarly flawed election in 2013, the justices ruled that the election should stand, even though results from much of the country are not available even now, and probably never will be. 

Another rigged election in Africa is not news. But that US election observers were so quick to endorse it is shocking. Perhaps they believed that wrapping the election up quickly would prevent violence. After Kenya’s 2007 election, which most observers have since concluded was rigged against Odinga, some of his supporters went on a looting and killing spree in ruling-party strongholds. Gangs backed by ruling-party officials fought back and the ensuing mayhem left more than a thousand people dead, caused hundreds of thousands to flee their homes, and nearly shut down the economy of much of eastern Africa, which relies on transport from the Kenyan coast. Members of Odinga’s coalition were quoted making ethnically charged statements, but it was Kenyatta and his current deputy, William Ruto—who was then allied with Odinga, but has since switched sides—who were charged by the International Criminal Court with crimes against humanity for organizing and supporting the violent gangs. (The cases against them collapsed after witnesses were intimidated or died under mysterious circumstances.)

If the observers think urging Odinga to “move on” will avoid a rerun of 2007, they are likely mistaken. The Bush White House’s rush to congratulate Odinga’s rival, Mwai Kibaki, after the rigged 2007 election helped fuel the violence that followed.

A far more troubling possibility is that the US wants Kenyatta to remain in power, at the expense of democracy. Kenya lies in one of the most volatile regions of the world. Its neighbor Somalia has been a war zone for a decade; conflict in South Sudan has sent more than two million refugees scrambling to neighboring countries, including Kenya, since 2013. Two of Kenya’s other neighbors, Uganda and Ethiopia, are ruled by US-backed autocrats who have instigated or worsened these conflicts. Ethiopia’s US-assisted invasion of Somalia in 2006 set off the mayhem there, promoting the rise of the Islamist terrorist group Al-Shabaab. In 2014, Uganda entered the South Sudan civil war on the government’s side. Humanitarian organizations called for an arms embargo, which would have made Uganda’s involvement illegal. The UN Security Council, including Russia and China, seemed open to an embargo, but the Obama administration did not pursue it.

Kenyatta, a drowsy-looking bon vivant and the son of Jomo Kenyatta, Kenya’s first post-independence president, is supported by a powerful network of Kenyan politicians and businessmen, mostly of Kikuyu ethnicity, who have been looting the country for decades. He has aligned Kenya with US policy by, for example, deploying Kenyan forces in AMISOM, the US- and UK-supported African Union peacekeeping mission in Somalia.

Odinga, a taciturn, ambitious seventy-two-year-old of Luo ethnicity, whose father was Jomo Kenyatta’s post-independence vice-president and later his rival, has long nursed a grudge against Kenyatta’s Kikuyu elite. He spent ten years in jail for participating in a failed coup against Jomo Kenyatta’s hand-picked successor, Daniel Arap Moi, in 1982, and he fought vigorously for Kenya’s progressive 2010 Constitution, which weakened Kenya’s formerly all-powerful presidency and made local officials more accountable to their people. Odinga has pledged to deliver a plan to withdraw Kenya’s troops from Somalia in the first ninety days of his presidency. NASA officials point out that the AMISOM deployment has provoked terrorist attacks on a Nairobi shopping mall and a university, killing hundreds and devastating Kenya’s tourist industry. Odinga is also close to South Sudan’s beleaguered opposition, and might help force the US-backed government into negotiations. This is something the Obama administration seems not to have wanted, and the Trump administration seems not to either.

Opposition leader Raila Odinga greeting supporters, Nairobi, Kenya, August 13, 2017 (Thomas Mukoya/TPX/Reuters)

When I asked a member of the Carter Center delegation why his team was so confident about Kenyatta’s victory, he sent me a six-page report by a US-funded Kenyan NGO called the Election Observer Group. It describes a “verification” survey of the presidential results from 1,703 randomly selected polling stations around the country. According to the report, the survey predicted the electoral commission’s final results to within 0.3 percentage points for all eight candidates, including very minor ones who’d received only a few thousand votes.

It was obvious at once that something wasn’t right with this report. The NGO’s projected results were suspiciously accurate, and the authors neglected to describe their sampling strategy. The sampling strategy is crucial—after all, voter preferences are not randomly spread around the country but clustered, with Kenyatta’s supporters in some regions and Odinga’s in others. A spokesman for the NGO told me that the survey was carefully stratified, but after carrying out a similar “verification” study during Kenya’s 2013 election, the same NGO declined requests to share its methodology until months after the contested vote, and when it did, several polling stations in the planned sample were reportedly missing.

A statistician friend who looked over the report for me put it this way: “Working backwards, from a known… or desired… election outcome, even I would know how to choose 1,700 polling stations to make results work. You would simply toss into the hopper Kikuyu area polling stations or remove Luo stations as needed.” Kikuyus tend to support Kenyatta; Luos, Odinga.

The Carter Center official was sanguine: “This [report] makes it highly unlikely that a large scale systematic manipulation—digital or manual—occurred during tabulation,” he wrote me. “Any significant discrepancies would have been discovered in the parallel count.”

But the study he was touting seemed to me like a piece of fake news—a flood of which had poured into Kenya around the election, virtually all pro-Kenyatta and/or anti-Odinga. Reports that Odinga had killed white farmers and that American think tanks believed Kenyatta would win appeared on newly created, convincing-looking blogs like “Foreign Policy Journal” and on mock-ups resembling Kenya’s largest daily, The Nation. While cooked-up stories about celebrities and UFOs are common in Africa, partisan fake news like this is not.

Days before the election, an official-looking document—that may or may not be genuine—was leaked to an opposition member of parliament. It described plans to deploy “regime friendly” soldiers to two of Nairobi’s largest slums, both packed with Odinga supporters. In case the people rose up after the results were announced, these men were to cut off the water and electricity supplies and block access to the city center.

A few days after the election, an obviously fake “Embassy cable” began circulating on WhatsApp, complete with US government heading and transmission codes. The unsigned author, addressing himself or herself to the “Secretary of State,” predicted that if Odinga won the election, his tribesmen would be so happy they’d go on a rampage for months, looting and pillaging and destabilizing eastern Africa. While the predictions in the document are absurd, they reflect what many Kenyans probably think Americans think of them, and seemed designed to demoralize those Kenyans who have long suspected a US hand in the rigging of their elections.

Last spring, Kenyatta’s party hired, for a reported $6 million, the data research firm Cambridge Analytica, which helped elect Donald Trump and sway Britain’s Brexit vote. Cambridge Analytica’s parent company is Strategic Communications Limited, which is now working for the State Department. Articles in Slate and Politico suggest that SCL has in the past engaged in disinformation campaigns to sway elections in developing countries. The company denies this.

The most disturbing article concerning the Kenyan election appeared on the New York Times editorial page two days after the results were announced. Entitled “The Real Suspense in Kenya,” the editorial claimed that election observers had “witnessed no foul play,” even though the Carter Center’s report, in contrast to the observers’ public statements, mentions Msando’s killing, the NASA office raid, and the problems with the transmission of results.

The editorial also accused Odinga of “fann[ing] the embers of ethnic strife,” when he’d actually urged his supporters to remain calm. NASA considered organizing a nonviolent protest—permitted under Kenyan law—but deemed it too dangerous. There was spontaneous protesting and some sporadic looting in Odinga strongholds after Kenyatta’s victory was announced, but according to human rights groups, there is no evidence that this was organized, or that Odinga or NASA had anything to do with it. As the Times editors should have known, there was election-related violence, but virtually all of it was carried out by government security forces. For days after the results were announced, special police units cracked down mercilessly, killing at least twenty-four people in Odinga strongholds. The police claimed the victims were criminals or inciting violence, but this is doubtful. In the lakeside city of Kisumu, police went house to house, hurling teargas and beating and shooting people. Some victims were dragged out of bed and killed. At least ten deaths have been so far documented in this city alone, and more than a hundred others were beaten or suffered gunshot wounds. Among the dead are a nine-year-old girl shot by a stray bullet in Nairobi while playing on her balcony and a six-month-old beaten to death, in her own house, while in her mother’s arms. After Kisumu Governor Peter Anyang Nyong’o told reporters that fishermen had discovered five corpses in body bags floating in Lake Victoria, at least one of which had bullet wounds, the police claimed they were all drowning victims.

The Times editorial also failed to mention that reporters covering the police abuses have been beaten and arrested and that two highly respected Kenyan NGOs investigating them were closed down and raided by the police. A similarly misleading editorial appeared in The Washington Post on the day the election results appeared.

The US government has a disturbing history of meddling in the politics of developing countries; during the cold war, it also influenced some of our most prominent editors and journalists to downplay human rights abuses committed by its undemocratic allies. In countries like Kenya, where important US interests are at stake, the onslaught of mass-media distortions, biased international election observers, and Western-backed NGOs suggests the possibility of a concerted strategy. As the Chinese general Sun Tzu put it in his famous book The Art of War, “To subdue the enemy without fighting is the acme of skill.” But to do that, you need to make him feel he has already lost.


Trump’s Hoodlums

A right-wing militia group, Charlottesville, Virginia, August 12, 2017 (Shay Horse/NurPhoto via Getty Images)

Turn on Russian television any day of the week and you are certain to stumble upon a show in which a group of people who appear to be regular citizens (that is, they have no uniforms or government-issued documents) stage a raid of one sort or another. They barge into a store or a restaurant, for example, and demand to see employees’ identity documents, the storage area, or the cooking facilities. Without fail, they find violations of laws or regulations: the staff, natives of Central Asia, don’t have work permits! The store stocks vodka bottles with no alcohol-tax stamps affixed to them! The cook doesn’t cover her hair! At the end of the show, the raiders often pass their tearful, terrified victims to uniformed law enforcement officers, who sometimes appear less than enthusiastic about the task being handed to them.

These raiders have no official titles or legal powers. What directs their actions are the militant rhetoric and the promise of broad impunity that emanate from the Kremlin—and, of course, the glory and recognition of being on television. YouTube and RuTube contain a trove of other vigilante videos, including of self-appointed vice squads who beat up gay men or suspected drug dealers on camera.

Sometimes these vigilantes get in trouble with the law: occasionally a murderer of gay men is caught and jailed, and once in a while a vigilante-gang leader is reined in, though his partners in crime continue to roam free. But in general, the arrangement is low-risk for the perpetrators and convenient for the Kremlin. Vigilantes work fast. Russian law enforcement is not exactly subject to a lot of institutional constraints, but it can be sluggish, and it carries out violence in a dragged-out, bureaucratic way. The vigilantes, on the other hand, make a spectacle of their work, creating the sort of generalized dread on which autocracies thrive. At the same time, vigilantes, who work in small clumps, do not pose the sort of threat to the autocrat that powerful institutions of state sometimes can.

Putin did not invent vigilantes, of course: autocrats frequently rely on delegating violence to extralegal actors or, as in the case of Rodrigo Duterte of the Philippines, on the willingness of law enforcement officers to carry out extralegal violence in exchange for the promise of impunity. Duterte has made this promise explicit; more often, incitement to violence contains a tacit guarantee of protection.

Over the last two weeks, we have seen Donald Trump send out both kinds of signals to the vigilantes of his own choosing. His refusal to condemn the violent marchers in Charlottesville, in a pointed and repeated break with political convention, was rightly interpreted by the white supremacists as a signal of encouragement. And his pardoning of former sheriff Joe Arpaio—before he was even sentenced—protected a law enforcement officer from facing any consequences for a long history of brutal violations of constitutional rights. Trump had encouraged extralegal violence in the past—as when he called on police not to be “too nice” to suspects. But the two weeks bracketed by the violence in Charlottesville and the pardon of Arpaio herald a definite turn away from the institutions of a government he despises.

Unlike an established autocrat like Putin, who delegates violence because he prefers his institutions ineffectual, Trump has been encountering some resistance from within his government. Grownups seem to be taking charge at the White House. Congressional Republicans have become more willing to criticize Trump, and he cannot contain his fury with them. His secretaries of state and defense have distanced themselves from him. In response, Trump now turns toward the gun-toting hoodlums who share his contempt for institutions.

None of this is entirely new, of course. Trump’s presidential campaign was built on disdain for Washington, for the very way American government is constituted. The vitriol he has directed against Mitch McConnell and, before him, the Freedom Caucus and, of course, Congressional Democrats, comes prepackaged. But now Trump appears to be getting hemmed in by the generals at the White House. He has been compelled to give up his most odious advisers, Steve Bannon and Sebastian Gorka. His new press secretary, unlike the old one, is not acting like a delusional attack dog—she is, rather, smoothing corners, projecting normality by framing the president’s tantrums as “policy differences,” as she did when asked about Trump’s fight with McConnell. In other words, the administration is starting to run like a large family-owned business after the patriarch has developed dementia: by creating a parallel, functioning hierarchy and keeping the workings of the place out of sight of the nutjob boss.

It would appear that this is what the institutions of American government do to resist the usurping force of a would-be tyrant: they default to bureaucratic mode. This is not a pretty sight—it’s certainly not what democracy looks like. Increasingly, the public cannot see who is making decisions, except that it’s not the president the public (technically) elected. That positions Trump perfectly to appeal for action on the part of his base.

Trump’s base shares his contempt for the Washington institutions that are once again exposing their duplicitous nature. Some of this base also happens to be armed. This includes American civilians who like guns (and Trump). It also includes the self-arming militia types. It includes Immigration and Customs Enforcement officers who have for months now been encouraged to exert force indiscriminately. And it includes rogue law enforcement officers who see themselves in Arpaio. The elderly Arizona sheriff may be unemployed, but his pardon extends the promise of immunity to any cop who is fed up with being “nice” enough to stay within constitutional constraints.

“Be wary of paramilitaries,” the Yale historian Timothy Snyder warned in his recent book On Tyranny: Twenty Lessons from the Twentieth Century.

When the men with guns who have always claimed to be against the system start wearing uniforms and marching with torches and pictures of a leader, the end is nigh. When the pro-leader paramilitary and the official police and military intermingle, the end has come.

Back in February, when his book came out, this seemed perhaps a little far-fetched. Now that the men with torches have marched and the president has encouraged the police who would intermingle with them, Snyder’s words look prescient. And Trump’s apparent decision to lift Obama-era restrictions on the flow of surplus military equipment to local law enforcement appears like a predictable and easily decipherable signal to the police to seize all the extralegal power they can.


Fukushima from Within

A spread from Ichi-F: A Worker’s Graphic Memoir of the Fukushima Nuclear Power Plant by Kazuto Tatsuta, 2017 (Kazuto Tatsuta/Kodansha, Ltd.)

Kazuto Tatsuta’s Ichi-F: A Worker’s Graphic Memoir of the Fukushima Nuclear Power Plant occupies a unique position in the history of comics. It is probably the first work of journalistic comics in the world to supersede its prose counterparts as the most popular source on its topic. In the case of Ichi-F, that topic is the cleanup and decommissioning work at the crippled Fukushima Daiichi nuclear power plant, the local name of which (“F-1,” flipped to “1-F”) gives the book its title.

The publisher of the English edition, Kodansha Comics, however, has opted to call this 550-page tome of dry, detailed reportage a “graphic memoir,” presumably because autobiography seems the easiest way to sell literary-minded comics outside the young-adult market these days. The original Japanese subtitle describes the manga instead as a “rōdōki,” literally a “record of labor,” putting more emphasis on the work itself than the person doing the work. The difference might seem trivial, but it speaks to many of the things that Ichi-F both succeeds and fails in doing.

Panels from Ichi-F, 2017 (Kazuto Tatsuta/Kodansha, Ltd.)

Based on three stints as a temporary laborer at the Daiichi facility—the first in the second half of 2012 and then two stints over a few months in 2014—Ichi-F is a peculiar kind of exposé. “I told myself, if there really was a ‘hidden truth of Fukushima’ like they said, I’d go there and see what it was for myself,” Tatsuta says near the start of the book, as he begins looking for jobs at the plant through Tokyo-based employment agencies. But from the beginning it is clear that he’s not looking for dirt. As Tatsuta pulls back the dark veil surrounding what was supposedly the most toxic place in the world, the landscape he reveals is unexpectedly and refreshingly bland. With clear, diagrammatic visuals and plenty of worksite chatter, Tatsuta narrates the typically long days of menial janitorial and construction work, as well as the tedious but necessary safety measures—from the different types of protective suits, gloves, and masks that have to be worn depending on where one works, to the constant monitoring of one’s radiation exposure to ensure, not just health, but access to the maximum number of work hours. He also explains the subcontracting system that has efficiently recruited enough men (3,000 to 7,500 were on site on any given day in the years Tatsuta was there, with a high rate of turnover) to stabilize the plant, but has been widely criticized for diverting two-thirds or more of worker pay to middlemen.

True to its “memoir” tag, Ichi-F shows the author growing as an informed and conscientious citizen while working at the facility—though not in the direction we might expect from a book set after a meltdown. Tatsuta starts the book suspicious of antinuclear critics and protests; not a hundred pages in, he’s convinced they spout hogwash, at least on the subject of what is happening at ground zero. He likewise comes to the conclusion that media exposés about exploitative contracts, suicidally radioactive work conditions, and unreported worksite deaths are not only largely unfounded, but also detrimental to the progress of both the cleanup operations and the economic recovery of the surrounding region.

There are also warm episodes featuring him playing guitar at evacuee housing camps in the nearby city of Iwaki (where many of the Daiichi workers also stay) or being interviewed by the press after the first installments of his manga were published in Japan. But these personal detours are never truly confessional or introspective. Like Tatsuta’s repeated images of men furrowing their brow and getting down to work, then smiling when the job is done, they serve to display the narrator’s trustworthiness and approachability.

Panels from Ichi-F, 2017 (Kazuto Tatsuta/Kodansha, Ltd.)

Though Tatsuta’s manga is not the only first-hand worker’s description of what has gone on in Fukushima (there are a handful of prose accounts), it is the one that gets referenced most frequently in Japan as a counterpoint to the many reports of worksite deaths (which are few, and none of which have had to do with radiation), worksite dangers (as Tatsuta shows, safety protocols are stringent and, with some exceptions in the immediate post-meltdown years, have been strictly enforced, such that heatstroke is today the biggest health concern), and worker exploitation through the subcontracting system. But Tatsuta’s nonchalance can be hard to swallow, especially given the long latency periods of radiation illnesses, the scandals involving underreported exposure doses, and a number of documented cases of companies abusing the subcontracting system to steal hazard pay and avoid government meddling in the event of workplace injuries. As important as worker safety and satisfaction are, Tatsuta’s singular focus on them tends to distract from some of the larger issues that surround Fukushima Daiichi.

Paternalism is a serious problem when it comes to nuclear matters in Japan, as in other countries. It was, and continues to be, a central trait of the corporatist state that insisted on nuclear power against strong regional resistance (often led by women), and created the conditions for the meltdowns by cutting corners and ignoring warnings in the first place. You have to look hard for the women in Ichi-F. There’s one, a female reporter, on page 320: “Wow, what a looker!” says Tatsuta’s coworker. Meanwhile, his colleagues circulate unnerving rumors of women in TEPCO (Tokyo Electric Power Company, operator of the Daiichi facility) uniforms, in managerial positions no less—this in a chapter set as late as 2014. Tatsuta criticizes anti-nuclear hysteria, but that is hardly the only ideology that skews perception of what is going on in Fukushima.

For example, the areas near Daiichi are now being developed as centers for a new “decommissioning industry” offering good jobs with stable employment and high pay, with R&D facilities to make Japan a global leader in nuclear decontamination, waste processing, and reactor decommissioning. These initiatives are being touted as both necessary for post-disaster remediation and beneficial to the region’s long-term recovery. High hopes are being placed in robotics, which is being used to measure radiation and remove debris in highly contaminated environments, as illustrated in the last chapters of Ichi-F. News reports suggest, however, that this is not going well, with frequent technical failures caused by frighteningly high radiation near the melted fuel.

Kazuto Tatsuta/Kodansha, Ltd.: A page from Ichi-F, 2017

The irony, which Tatsuta fails to comment on, is that decommissioning is being led by the same companies that built and operated Japan’s nuclear plants: Hitachi GE, Toshiba Westinghouse, and Mitsubishi Heavy Industries, for example, as well as large construction corporations like Shimizu, Kashima, and Takenaka. French nuclear giant Areva is also involved, through a joint venture with the Japanese company Atox, which specializes in nuclear facility maintenance and waste disposal. Tatsuta is happy to share the details of his own paycheck and outline the general workings of the subcontracting system that swallows up most of the money laid out by the government and TEPCO for post-disaster labor costs. But he keeps the names of his employers and the master contractors anonymous. Likewise, when he depicts the billboard affixed to the face of Daiichi’s entry facility, which is emblazoned with the logos of the major companies involved, the text is rendered so indistinctly that only a few names can be made out.

Tatsuta clearly doesn’t care where the money goes (back to the nuclear and construction industries) or where it originally came from (taxpayers and energy consumers). He fails to see that, when Japan signed up for nuclear power in the 1950s, it made a deal with the devil; because of the technical complexity, security issues, political interests, and capital-intensiveness involved in nuclear power, the country now has no choice but to ask its jailor for deliverance. No amount of masculine sweat and good-natured smiles will change that. With decommissioning at Daiichi expected to last until at least 2050 and to cost at least 21.5 trillion yen (189 billion USD), should the radiation exposure doses of individual workers—a subject that takes up a good chunk of Ichi-F—really be the only numbers we’re concerned with? And with fifty-some aging and halted reactors in Japan, Fukushima itself is just the beginning.

Ichi-F has sold hundreds of thousands of copies in Japan and been celebrated extensively in the press. American, French, German, and other foreign reporters interviewed Tatsuta even before translations of his manga appeared. Despite this fame, the public knows little about the artist beyond the restricted window he provides in Ichi-F. “Kazuto Tatsuta” is a pseudonym, and all photographs show him disguised in a Mexican wrestling mask. He claims that he originally hid his identity so that he would be able to work at the plant again. But this shroud of secrecy, along with Tatsuta’s tendency to dismiss antinuclear voices while giving TEPCO and the Japanese government a free pass, has led some to suspect the author of being a lackey of the nuclear industry.

His workaday drawing and layout style does suggest past experience with made-to-order manga from corporate, institutional, or educational clients, but that doesn’t prove anything. Late in the book, he offers a peek into his professional past with a panel showing a sampling of sports, “documentary” (about what?), and trashy “true stories” comics for “cheapo convenience store mags.” We know, through press reports, that Tatsuta was in his late forties when he drew Ichi-F, so one assumes a fairly extensive résumé of past comics work; what would that oeuvre reveal about his politics and associations if we knew his real name and could look it up? Alas, all we are really shown about Tatsuta is that he earnestly believes in what he sees with his own eyes, in the merits of hard work, and in the good intentions and dedication of his workmates and their employers. And he seems to be averse to any of the personal or political reflection that transforms a report or recollection into a worthwhile memoir, or for that matter into a persuasive work of journalism.

Kazuto Tatsuta/Kodansha, Ltd.: Panels from Ichi-F, 2017

Some find Ichi-F insufficiently angry. I certainly do. But it’s worth remembering what the climate was like in Japan when Tatsuta began drawing Ichi-F in 2013. The meltdown was still an ongoing event, even if things were no longer in a state of apocalyptic emergency. The public worried about what was happening and what would happen. They looked to the press for help, only to have to wade through obfuscations from officials and half-truths from muckrakers. Passionate misinformation was still the norm, and people were exhausted by the instability it was causing in their lives. Some citizens had already taken matters into their own hands by creating radiation hot-spot maps and working directly with farmers and organic produce collectives to figure out what was safe to eat, or where to live or let their children play. Radiological dosimetry and nuclear risk assessment became home sciences, and Geiger counters mass consumer goods. While other artists and writers raged about what lurked behind radiation’s cloak of invisibility, Tatsuta worked in parallel with activists and researchers (many of whom would probably disagree with his politics otherwise) who endeavored to find ways to make the threat visible and knowable and, if not controllable, then at least navigable.

Ichi-F may not be beautifully drawn or eloquently written. The perspective may be narrow and at times politically naïve, even infuriating. But it does not succumb to the superficial, fear-mongering nonsense that infects so much post-Fukushima reporting and art, both inside and outside Japan, from bogus computer-generated images showing the Pacific Ocean as a contaminated cesspool to sculptural installations presenting the black sacks used for removing contaminated soil as if they were time bombs or body bags. As a result, Tatsuta has given us a book that actually matters, with information and perspectives that we can actually debate, and that people will be referring to long after the cloud of doom has passed.

Kazuto Tatsuta’s Ichi-F: A Worker’s Graphic Memoir of the Fukushima Nuclear Power Plant, translated into English by Stephen Paul, is published by Kodansha Comics.

Making Memories

Eric Edelman/RetroCollage.com: Collage by Eric Edelman

On September 1, 1953, William Scoville, a neurosurgeon at Hartford Hospital in Connecticut, operated on a twenty-seven-year-old man named Henry Gustav Molaison, who suffered from severe epilepsy. Scoville removed two pieces of tissue—the left and right sides of the hippocampus—from Molaison’s brain. The hippocampus, located near the center of the brain, forms a part of the limbic system that directs many bodily functions, and Scoville thought that epileptic seizures could be controlled by excising much of it. The result, however, as the journalist Philip Hilts wrote in Memory’s Ghost (1995), was that

from H.M.’s moment in surgery onward, every conversation for him was without predecessors, each face vague and new. Names no longer rose to the surface, neither histories nor endearing moments came anymore. Reassurances of welcome had to be sought every moment from every look in every pair of eyes.

H.M., as he came to be known in the medical literature (his real name was not disclosed until his death in 2008), could no longer remember anything he did. He could not remember what he had eaten for breakfast, lunch, or supper, nor could he find his way around the hospital. He failed to recognize hospital staff and physicians whom he had met only minutes earlier, remembering only Scoville, whom he had known since childhood. A scientist from MIT studied him regularly, yet every time they met she had to introduce herself again. He could not even recognize himself in recent photos, thinking that the face in the image was some “old guy.” Yet he was able to carry on a conversation for as long as his attention was not diverted.

H.M.’s condition suggested that the hippocampus was essential for the conversion of short-term memories to long-term memories, and he became the most widely cited example in studies of the distinction between them. Eric Kandel, James Schwartz, and Thomas Jessell drew on his case in 2000:

Brain trauma in humans can produce particularly profound amnesia for events that occur within a few hours or, at most, days before the trauma. In such cases older memories remain relatively undisturbed…. Studies of memory retention and disruption of memory have supported a commonly used model of memory storage by stages. Input to the brain is processed into short-term working memory before it is transformed through one or more stages into a more permanent long-term store.1

Patient H.M., by Scoville’s grandson, Luke Dittrich, is a memoir of his grandfather and H.M. Much of the book describes, with justified quiet indignation, the failures of the neurosurgical procedures that were widely practiced by Scoville and other neurosurgeons in the past century.

The procedures that Dittrich describes have a long history. In the late nineteenth century, for example, Dr. Gottlieb Burckhardt, a Swiss psychiatrist, “performed the first modern neurosurgical attacks on mental illness.” Burckhardt had no experience or training as a neurosurgeon, but one of the first patients he selected for his experiments was a “‘disturbed, unapproachable, noisy, fighting’…fifty-one-year-old, ‘particularly vicious woman,’ who’d been institutionalized for sixteen years.” After five operations, over the course of which he removed eighteen grams of her brain, Burckhardt noted that his patient had become “more tractable.” As Dittrich writes, “Her previous intelligence, he added, ‘did not return.’” Burckhardt concluded that his patient “has changed from a dangerous and excited demented person to a quiet demented one.”

Psychosurgery became increasingly popular in the 1940s, and in 1949, Egas Moniz received the Nobel Prize for inventing the procedure called lobotomy, in which the neural connections to the prefrontal lobe are severed. Dittrich writes:

The Nobel Committee had endowed psychosurgery with a patina of nobility, demonstrating that future breakthroughs in the field might pay great professional, therapeutic, and scientific dividends. For ambitious tinkerers like my grandfather, the lure was irresistible.

He gives a fascinating portrait of Scoville, who sought professional advancement through his experimental operation on H.M., describing him as “a restless explorer in the operating room, never satisfied with existing techniques or methods, even the ones he had invented.” What emerges from Dittrich’s account is a profound sense of the ignorance, the arrogance, and the passion that drove his grandfather and other neurosurgeons to perform operations that often left their patients demented. They had a drive to innovate—to pursue new, untried, experimental procedures with unpredictable consequences—and were untroubled by their harmful outcomes.

Dittrich shows how H.M.’s case pointed the way to a better understanding of some of the more puzzling aspects of how our brains function and the nature of our conscious behavior. After surgery, he notes, H.M. was insensitive to pleasure and pain. When subjected to increasing levels of heat from a dolorimeter, which causes considerable pain in normal subjects, “Henry sat calmly,…even as his skin began to burn and turn red.” He lost “a capacity for desire”: “in the six decades between his operation and his death he never had a girlfriend, or a boyfriend, never had sex, never even masturbated.” H.M.’s insensitivity and his indifference to pleasure and pain seem critical to an understanding of his loss of memory. For all of our memories are subjective. Your memories are in relation to you, your friend’s memories are in relation to him or to her, and so on. The loss of pleasure and pain is a loss of subjectivity, of an ability to relate to objects, to persons, and to oneself—an ability H.M. lost when Dittrich’s grandfather removed his hippocampus.

Dittrich’s book concludes with an interview with Suzanne Corkin, a professor of psychology at MIT. For almost fifty years she studied H.M., and she and her mentor, Brenda Milner, wrote a number of important papers about the hippocampus’s function in establishing long-term memories. They showed that H.M. could no longer form memories of space or time or acquire general knowledge of the world, but he could learn new motor skills. Their work was the basis of the understanding of memory and hippocampal function since the 1960s. When Dittrich interviewed Corkin in 2015, he asked what she was going to do with her notes on H.M.:

Dittrich: Are you aiming to give his files to an archive?

Corkin: Not his files, but I’m giving his memorabilia to my department. And they will be on display on the third floor….

Dittrich: Right. And what’s going to happen to the files themselves?

She paused for several seconds.

Corkin: Shredded.

Dittrich: Shredded? Why would they be shredded?

Corkin: Nobody’s gonna look at them.

Dittrich: Really? I can’t imagine shredding the files of the most important research subject in history. Why did you do that?

Corkin: Well, you can’t just take one test on one day and draw conclusions about it.

Many readers will be shocked by the revelation that Corkin’s notes were shredded. (Whether they were remains a matter of controversy; in 2016 MIT responded to Dittrich with an open letter claiming that nothing was actually destroyed, and Dittrich then posted online a recording of his interview with Corkin telling him the material was gone.) Yet even had they survived, they would not have revealed much of the deeper significance of H.M.’s case, which has become evident only through new neurobiological research.

Recent studies of how the brain organizes space and regulates how one makes sense of one’s environment have shown that the hippocampus is concerned with much more than converting short-term memories into long-term memories. For example, H.M.’s sensations, thoughts, and perceptions after the operation had no continuity at all. “Every day is alone in itself,” Corkin quotes him as saying. Summarizing H.M.’s interview transcripts, Dittrich writes:

The most compelling moments were always the rare ones when Henry would try to explain what it was like to be him…. He never quite succeeded, since his amnesia wouldn’t let him hold on to the ideas long enough to get them out. He’d seem on the verge of a breakthrough, of a definitive statement, and then his train of thought would derail, and he’d start all over again.

These and other observations of scientists who studied H.M. are consistent with the more recent finding that, in the words of the neuroscientists Marc W. Howard and Howard Eichenbaum, “one of the functions of the hippocampus is to enable the learning of relationships between different stimuli experienced in the environment.” The work of Eichenbaum and others has begun to give us not only a new view of the function of the hippocampus, but a new understanding of the nature of memory. It is becoming increasingly clear that human and animal memory depend on the ability of the hippocampus to establish relations between an individual and his or her surroundings.

Laboratory of Comparative Human Cognition/UC San Diego: The Soviet neuropsychologist Alexander Luria, author of The Mind of a Mnemonist (1968), with patients, 1960s

Studies by brain scientists including Eichenbaum and John O’Keefe have shown that the hippocampus is made up of cells with different kinds of functions. Most important are “place” cells, discovered by O’Keefe in research that won him the Nobel Prize, which respond to an animal’s location in space by causing electrical discharges called action potentials, creating mental maps of an animal’s environment. These maps are at various scales, like maps of an entire city as opposed to maps of individual streets. “Place cells,” wrote Howard and Eichenbaum in 2015, “are apparently not coding for a place per se but a spatial relationship relative to a landmark, or set of landmarks.”

There is considerable evidence that the activities of hippocampal neurons also help establish our relationships to many other types of environmental and internal stimuli, such as sounds, odors, pain, pleasure, and fear. Howard and Eichenbaum proposed that “the spatial map in the hippocampus is a special case of a more general function in representing relationships…including both spatial and non-spatial [stimuli].” In each case, the neurons are able to convey a relationship to our consciousness. The hippocampus also organizes temporal stimuli (including when an event took place) and sequential stimuli (indicating the order of a series of events). The hippocampus receives and integrates many other varieties of information to create multisensory relations, which is what memory is all about.2

But in what sense are relationships of this kind involved in remembering other sorts of information that apparently have nothing to do with specific events or our environment, such as random lists of words and numbers? Consider, for example, Alexander Luria’s description in his book The Mind of a Mnemonist (1968) of a patient, S, who could

recall tables of numbers written on a blackboard. S. would study the material on the board, close his eyes, open them again for a moment…and…reproduce one series from the board.

How is this ability to recall random words and numbers, even years later, related to what scientists have recently suggested is the function of the hippocampus, which is apparently essential to our capacity to remember? Luria describes how the mnemonist remembers. He never recalls arbitrary lists of words or numbers without first establishing a setting—a relation—in which he heard the lists:

Experiments indicated that [the mnemonist] had no difficulty reproducing any lengthy series of words whatever, even though these had originally been presented to him a week, a month, or a year, or even many years earlier…. During these test sessions S. would sit with his eyes closed, pause, then comment:… You were sitting at the table and I in the rocking chair… You were wearing a gray suit and you looked at me like this… Now, then, I can see you saying…

In other words, the mnemonist accesses (i.e., recalls) what appear to be imprinted words only by recalling the setting in which they were first “imprinted” in his memory. Once he recalls that setting, S. has a technique that allows him to memorize arbitrary lists of numbers, words, or both. The mnemonist, Luria notes, when given a long series of words to memorize, would

find some way of distributing these images of his in a mental row or sequence. Most often (and this habit persisted throughout his life), he would “distribute” them along some roadway or street he visualized in his mind. Sometimes this was a street in his home town, which would also include the yard attached to the house he had lived in as a child and which he recalled vividly. On the other hand, he might also select a street in Moscow. Frequently he would take a mental walk along that street…and slowly make his way down, “distributing” his images [evoked by the words] at houses, gates, and store windows.

There is no example in Luria’s book suggesting that the mnemonist can recall without establishing a setting. We would suggest that all recollections depend on a setting that the individual may or may not be aware of.

This mnemonic technique has been known since the ancient Greeks. Cicero tells us that an aristocrat named Scopas was giving a banquet, at which the poet Simonides chanted a poem in honor of his host that included “a passage in praise of Castor and Pollux.”3 Subsequently a note was brought to Simonides that two young men were waiting for him outside, but when he went to greet them he did not find them. Meanwhile the banquet hall collapsed during his absence, killing all of the guests. The corpses were badly mangled and could not be identified. Simonides remembered the place where each of the guests was sitting and was therefore able to identify them.

Simonides is generally known as the inventor of the art of memory. Most remarkable is that the art he invented operates not unlike the way the hippocampus creates human and animal memory by means of cells that map location in space, or create temporal markers, or encode sequences of events.

Essential to the brain’s creation of memories is that all of our memories are subjective—they are created from the point of view of the individual who is remembering. We have a sense of self because we have a preexisting sense of our body that contains that self. The basis of our subjectivity is our “body image,” a coherent, highly dynamic (it is constantly changing with our movements), three-dimensional representation of the body in the brain. This body image is an abstraction the brain creates from our movements and from the sensory responses elicited by those movements—using one’s left hand to pick up an apple, for example. “The coherence of consciousness through time and space is again related to the experience of the body by way of the body image,” John Searle wrote in these pages in 1995. “Without memory there is no coherent consciousness.”4

Since our subjectivity depends on our body image, if our body image is altered for neurological reasons, so too are our recollections. After he badly injured his leg on a mountain in Norway, Oliver Sacks described what is known as the “alien limb” phenomenon in his book A Leg to Stand On (1984):

The leg had vanished, taking its “place” with it. Thus there seemed no possibility of recovering it…. Could memory help, where looking forward could not? No! The leg had vanished, taking its “past” away with it! I could no longer remember having a leg. I could no longer remember how I had ever walked and climbed.

Since the nineteenth century it has been known that the brain creates “maps” of the body in the cortex. There is a cortical map of sensations (a sensory map) and a cortical map of movement (a motor map). In the sensory cortical map (also known as the sensory homunculus), the region in the brain that is activated, for example, by touching the hand, fingers, and arm—the cortical area that “represents” the sensations created by a cotton swab moved from the tip of the fingers to the arm—is adjacent to the representation of the face.

A counterpart of the alien limb is the “phantom” limb—a limb perceived by an amputee who feels as if an arm or leg that was lost in surgery is still attached to the body. The phantom limb may be extremely painful. When points remote from the amputation site, such as the amputee’s face, are touched, he or she paradoxically feels the phantom limb. Remarkably, memories related to the original limb may be linked to the phantom limb. The subject may even perceive that the phantom limb is wearing a wedding ring or jewelry; when the weather turns humid, the phantom limb may experience arthritic pain. The patient’s phantom limb is not only a recollection of the lost arm or leg, but one that includes the patient’s experiences related to that limb.

Or take the case of a man whose memories are transformed when he becomes blind, as the theologian John Hull describes in his book Touching the Rock (1990). Hull became increasingly blind between the ages of twenty and forty. When he lost his sight, he noted, “the proportion of people with no faces increased…. I have fairly clear pictures of many people whom I have not met again during these three years, but the pictures of the people I meet every day are becoming blurred. Why should this be?” Hull answers his own question:

In the case of people I meet every day my relationship has continued beyond loss of sight, so my thoughts about these people are full of the latest developments in our relationships. These have partly converted the portrait, which has thus become less important. In the case of somebody I know quite well but have not seen for several years, nothing has happened to take the place of the portrait, and when I think of those people, it is the portrait which comes to mind.

Hull goes on to say that he was deeply distressed that he was losing the visual portraits of his wife and children. Hull’s memories (as is true of all of our memories) were continuously being “updated.” He could still visualize people he had known before he became blind and had not been in contact with since. But now that he was living in a world without any new images, his memories of people with whom he was regularly in touch were being updated into a nonvisual form—the sounds of their voices and the sensations of touching their hands and faces. When one becomes blind, the continuity of visual memory is lost.

When memories are first formed, they are “short-term” and unstable. But with time, the physical representation of the memory in the brain formed by the synaptic junctions between neurons becomes more stable. This process is called consolidation. The stabilized memories then become “long-term” memories. H.M.’s brain was unable to create long-term memories. Recent neurophysiological studies have shown that even long-term memories are very dynamic and that each time the brain tries to activate a “memory trace”—the physical representation of the memory in the brain, also called the “engram”—the nature of that trace changes. In other words, memories are altered every time the brain recalls them. This alteration of an existing memory is called reconsolidation. Because the memory trace changes, you can never remember the same thing twice in exactly the same way.

The process of reconsolidation, scientists have shown, changes the memory—that is, the way the memory is represented at the synaptic junction is altered. The recognition of the malleability of memory is nothing new. What is new is the observation that the connections between neurons that many scientists believe have a central part in generating memories change whenever the brain seeks to recover the information they represent. These changes may be the reason we can generalize. Over time, some memories are assimilated into categorizations or generalizations. When we recall taking the subway, we do not necessarily recall each trip separately but rather taking the subway in general; and such recollection may include an image of the subway. The brain simplifies our understanding of our environment and our relationship to it.

Memory may appear to be a reproduction of images, sounds, and even thoughts that can be stored in the brain in a manner analogous to the way information can be stored on a CD, but it is becoming increasingly evident that this is too limited an understanding. Rather, as Eichenbaum, O’Keefe, and others have shown, memory is the establishment by the hippocampus of complex relations among a variety of sensory stimuli from the point of view of the individual who is remembering. Thus when Scoville removed H.M.’s hippocampus, H.M. lost more than an ability to convert short-term memories to long-term memories; he lost the ability to establish such relations.

Yet scientists still don’t understand the ways that changes in the synaptic junctions between neurons, or changes in the neurons themselves, are related to our memories, thoughts, and actions. Indeed, neurobiology has yet to define the physical nature of the long-lasting changes in neuronal connections that are created as memories are formed. Even a simple memory must involve vast numbers of such changes. Advanced techniques for imaging brain activity, such as fMRI, reveal which brain regions are activated when a memory is recalled, but the resolution is far too low to study individual neurons, let alone individual synapses. As Luke Dittrich has so aptly shown, much of what we know about memory today still comes from studying the irreparable harm done to H.M.

1. Principles of Neural Science, edited by Eric Kandel, James Schwartz, and Thomas Jessell, fourth edition (McGraw-Hill, 2000), p. 1244.

2. The shifting perspectives so characteristic of the artistic imagination in the twentieth century (in music, art, and literature) are probably related to hippocampal function as well. For example, in Plaisir de Jouer, Plaisir de Penser (2016), Charles Rosen and Catherine Temerson write that Proust calls his narrator Marcel, “blurring the distinction between novel and autobiography,” and that Alan Ayckbourn “in one of his plays puts two households simultaneously on stage [having dinner on separate days]…. At the end of each dinner the same person is drenched (a bowl of soup is thrown in his face in one scene; in the other the plumbing has collapsed).”

3. See Frances A. Yates, The Art of Memory (Routledge, 1966), p. 1ff.

4. “The Mystery of Consciousness: Part II,” The New York Review, November 16, 1995.

Cartier-Bresson’s Distant India

Henri Cartier-Bresson/Magnum Photos: Muslim refugees on a train from Delhi to Lahore, in Kuinkshaha, India, 1947

Henri Cartier-Bresson is perhaps the most well-known photographer in India, or rather—an important distinction—the photographer whose work is most well-known. He first visited India in the fall of 1947. One of only two Western photographers granted access to Gandhi, Cartier-Bresson shot a series of portraits of the ailing leader the week before he was killed by Nathuram Godse, a Hindu chauvinist, in January 1948. Cartier-Bresson then covered Gandhi’s funeral and the national mourning that followed.

First published in Life magazine, these photos brought Cartier-Bresson worldwide recognition. They were also widely reproduced in India, and are today so familiar there that his authorship is usually forgotten. The same is true of many quieter, more tableaux-like photos he took on subsequent visits in 1950, 1966, and 1980. In “Henri Cartier-Bresson: India in Full Frame,” the Rubin Museum brings together selections from each of these trips.

Henri Cartier-Bresson/Magnum Photos: The Rangwala retail and wholesale cloth market, Ahmedabad, Gujarat, India, 1966

Cartier-Bresson came to India at a turning point in his career. Before the war, he had been an art photographer influenced by Giorgio de Chirico’s moody geometry and the theories of his friend André Breton. But he now had—as he wrote in a 1947 manifesto for Magnum, the influential photojournalist collective that he helped found—a “curiosity about what is going on in the world, a respect for what is going on and a desire to transcribe it visually.” The curiosity would drive a long career in photojournalism, for which his India trip was an apprenticeship.

On his first visit, Cartier-Bresson shot fluent and respectful portraits of Indian politicians (Sardar Patel, Nehru, Gandhi). His photograph of Nehru and the Mountbattens—Nehru sharing a joke with Edwina, as her husband Louis looks away—is a history textbook favorite. (It’s also insightful: Nehru and Edwina are rumored to have had an affair.) The quiet, almost hushed late portraits of Gandhi owe their success to tact. Unlike Life’s Margaret Bourke-White, who was also present, Cartier-Bresson shot without flash, which gives his prints a softer, more human finish. “We are bound to arrive as intruders,” he later reflected in his essay “The Picture-Story.” “It is essential, therefore, to approach the subject on tiptoe…. It’s no good jostling or elbowing.”

Cartier-Bresson’s coverage of public events was withdrawn, almost retreating. This was perhaps a moral reflex: 1947 and 1948 were the worst years of post-partition communal violence. A shot of Muslim refugees taking a train to Lahore—did they make it past the border?—is haunting precisely for how little it reveals.

His restraint is more puzzling when he turns to less extreme subject matter, such as fisherman at work or women drying their laundry, in part because it’s entirely uncharacteristic of him. Cartier-Bresson’s European street photography was openly virtuosic; he shot with great agility and precise timing (passing reflections in a puddle, light flashing on an eyeglass). In a way, the challenge of composition was his secret subject. “Each of his famous pictures refers internally to the act of shooting it,” Arthur Danto wrote in The Nation in 1987, “and each, for all its laconic title, is eloquent with the implied narrative of the successful kill.”

By contrast, Cartier-Bresson’s Indian photos are quiet, self-effacing, and resolutely static. Even when he shoots in crowds, as he does at a cattle sale, there is little sense of movement or noise. If in Europe he chased the “decisive moment,” there’s something conspicuously timeless about his panoramas of Indian peasants and cowherds. He also uses a different perspective. In Europe, he’s almost indecently close to his subjects; in India, he shoots from afar, with a sort of wide-angle pastoralism or classicism. 

It’s hard not to detect a sense of social estrangement here. In fact, Cartier-Bresson made a style out of his outsider status. Had he suppressed this self-knowledge, his work might have turned sentimental or prying. (A photo of Muslim women framed by a cloud is a rare lapse in this regard.)

Henri Cartier-Bresson/Magnum PhotosPeople and their cattle, Jaipur, India, 1947

Cartier-Bresson’s later India photos feel driven by a sociological impulse. They are simply but carefully framed to convey facts. Much like V.S. Naipaul, he was drawn to the poignant contrasts of post-colonial development. For example, a famous 1966 photo shows India’s first missile being transported on a bicycle. Another more bitter shot shows barefoot workers digging at the site of India’s first nuclear power plant.

Perfume seller, Ahmedabad (1966) is the photograph closest in spirit to Cartier-Bresson’s European work. A nested portrait of a plump street-vendor framed by cheap paintings and wares, it’s a moving, accomplished shot. But even here you sense a wry self-awareness. The frame of paintings is dazzling, but it keeps the photographer out.

“Henri Cartier-Bresson: India in Full Frame” is at the Rubin Museum through January 29, 2018.

Source Article from http://feedproxy.google.com/~r/nybooks/~3/NdnI5qY0rz8/

Cartier-Bresson’s Distant India

Henri Cartier-Bresson/Magnum Photos: Muslim refugees on a train from Delhi to Lahore, in Kuinkshaha, India, 1947

Henri Cartier-Bresson is perhaps the best-known photographer in India, or rather—an important distinction—the photographer whose work is best known there. He first visited India in the fall of 1947. One of only two Western photographers granted access to Gandhi, Cartier-Bresson shot a series of portraits of the ailing leader the week before Gandhi was assassinated by Nathuram Godse, a Hindu chauvinist, in January 1948. Cartier-Bresson then covered Gandhi’s funeral and the national mourning that followed.

First published in Life magazine, these photos brought Cartier-Bresson worldwide recognition. They were also widely reproduced in India, and are today so familiar there that his authorship is usually forgotten. The same is true of many quieter, more tableau-like photos he took on subsequent visits in 1950, 1966, and 1980. In “Henri Cartier-Bresson: India in Full Frame,” the Rubin Museum brings together selections from each of these trips.

Henri Cartier-Bresson/Magnum Photos: The Rangwala retail and wholesale cloth market, Ahmedabad, Gujarat, India, 1966

Cartier-Bresson came to India at a turning point in his career. Before the war, he had been an art photographer influenced by Giorgio de Chirico’s moody geometry and the theories of his friend André Breton. But he now had—as he wrote in a 1947 manifesto for Magnum, the influential photojournalist collective that he helped found—a “curiosity about what is going on in the world, a respect for what is going on and a desire to transcribe it visually.” The curiosity would drive a long career in photojournalism, for which his India trip was an apprenticeship.

On his first visit, Cartier-Bresson shot fluent and respectful portraits of Indian politicians (Sardar Patel, Nehru, Gandhi). His photograph of Nehru and the Mountbattens—Nehru sharing a joke with Edwina, as her husband Louis looks away—is a history-textbook favorite. (It’s also insightful; Nehru and Edwina are rumored to have had an affair.) The quiet, almost hushed late portraits of Gandhi owe their success to tact. Unlike Life’s Margaret Bourke-White, who was also present, Cartier-Bresson shot without flash, which gives his prints a softer, more human finish. “We are bound to arrive as intruders,” he later reflected in his essay “The Picture-Story.” “It is essential, therefore, to approach the subject on tiptoe…. It’s no good jostling or elbowing.”

Cartier-Bresson’s coverage of public events was withdrawn, almost retreating. This was perhaps a moral reflex: 1947 and 1948 were the worst years of post-partition communal violence. A shot of Muslim refugees taking a train to Lahore—did they make it past the border?—is haunting precisely for how little it reveals.

His restraint is more puzzling when he turns to less extreme subject matter, such as fishermen at work or women drying their laundry, in part because it’s entirely uncharacteristic of him. Cartier-Bresson’s European street photography was openly virtuosic; he shot with great agility and precise timing (passing reflections in a puddle, light flashing on an eyeglass). In a way, the challenge of composition was his secret subject. “Each of his famous pictures refers internally to the act of shooting it,” Arthur Danto wrote in The Nation in 1987, “and each, for all its laconic title, is eloquent with the implied narrative of the successful kill.”

By contrast, Cartier-Bresson’s Indian photos are quiet, self-effacing, and resolutely static. Even when he shoots in crowds, as he does at a cattle sale, there is little sense of movement or noise. If in Europe he chased the “decisive moment,” there’s something conspicuously timeless about his panoramas of Indian peasants and cowherds. He also uses a different perspective. In Europe, he’s almost indecently close to his subjects; in India, he shoots from afar, with a sort of wide-angle pastoralism or classicism.

It’s hard not to detect a sense of social estrangement here. In fact, Cartier-Bresson made a style out of his outsider status. Had he suppressed this self-knowledge, his work might have turned sentimental or prying. (A photo of Muslim women framed by a cloud is a rare lapse in this regard.)

Henri Cartier-Bresson/Magnum Photos: People and their cattle, Jaipur, India, 1947

Cartier-Bresson’s later India photos feel driven by a sociological impulse. They are simply but carefully framed to convey facts. Much like V.S. Naipaul, he was drawn to the poignant contrasts of post-colonial development. A famous 1966 photo, for example, shows India’s first missile being transported on a bicycle. Another, more bitter shot shows barefoot workers digging at the site of India’s first nuclear power plant.

Perfume seller, Ahmedabad (1966) is the photograph closest in spirit to Cartier-Bresson’s European work. A nested portrait of a plump street vendor framed by cheap paintings and wares, it’s a moving, accomplished shot. But even here you sense a wry self-awareness. The frame of paintings is dazzling, but it keeps the photographer out.

“Henri Cartier-Bresson: India in Full Frame” is at the Rubin Museum through January 29, 2018.

Source Article from http://feedproxy.google.com/~r/nybooks/~3/NdnI5qY0rz8/

Why We Must Still Defend Free Speech

This article will appear in the next issue of The New York Review.

Evelyn Hockstein/The Washington Post/Getty Images: White nationalists marching on the University of Virginia campus, Charlottesville, August 2017

Does the First Amendment need a rewrite in the era of Donald Trump? Should the rise of white supremacist and neo-Nazi groups lead us to cut back the protection afforded to speech that expresses hatred and advocates violence, or otherwise undermines equality? If free speech exacerbates inequality, why doesn’t equality, also protected by the Constitution, take precedence?

After the tragic violence at a white supremacist rally in Charlottesville, Virginia, on August 12, these questions take on renewed urgency. Many have asked in particular why the ACLU, of which I am national legal director, represented Jason Kessler, the organizer of the rally, in challenging Charlottesville’s last-minute effort to revoke his permit. The city proposed to move his rally a mile from its originally approved site—Emancipation Park, the location of the Robert E. Lee monument whose removal Kessler sought to protest—but offered no reason why the protest would be any easier to manage a mile away. As ACLU offices across the country have done for thousands of marchers for almost a century, the ACLU of Virginia gave Kessler legal help to preserve his permit. Should the fatal violence that followed prompt recalibration of the scope of free speech?

The future of the First Amendment may be at issue. A 2015 Pew Research Center poll reported that 40 percent of millennials think the government should be able to suppress speech deemed offensive to minority groups, as compared to only 12 percent of those born between 1928 and 1945. Young people today voice far less faith in free speech than do their grandparents. And Europe, where racist speech is not protected, has shown that democracies can reasonably differ about this issue.

People who oppose the protection of racist speech make several arguments, all ultimately resting on a claim that speech rights conflict with equality, and that equality should prevail in the balance.* They contend that the “marketplace of ideas” assumes a mythical level playing field. If some speakers drown out or silence others, the marketplace cannot function in the interests of all. They argue that the history of mob and state violence targeting African-Americans makes racist speech directed at them especially indefensible. Tolerating such speech reinforces harms that this nation has done to African-Americans from slavery through Jim Crow to today’s de facto segregation, implicit bias, and structural discrimination. And still others argue that while it might have made sense to tolerate Nazis marching in Skokie in 1978, now, when white supremacists have a friend in the president himself, the power and influence they wield justify a different approach.

There is truth in each of these propositions. The United States is a profoundly unequal society. Our nation’s historical mistreatment of African-Americans has been shameful and the scourge of racism persists to this day. Racist speech causes real harm. It can inspire violence and intimidate people from freely exercising their own rights. There is no doubt that Donald Trump’s appeals to white resentment and his reluctance to condemn white supremacists after Charlottesville have emboldened many racists. But at least in the public arena, none of these unfortunate truths supports authorizing the state to suppress speech that advocates ideas antithetical to egalitarian values.

The argument that free speech should not be protected in conditions of inequality is misguided. The right to free speech does not rest on the presumption of a level playing field. Virtually all rights—speech included—are enjoyed unequally, and can reinforce inequality. The right to property most obviously protects the billionaire more than it does the poor. Homeowners have greater privacy rights than apartment dwellers, who in turn have more privacy than the homeless. The fundamental right to choose how to educate one’s children means little to parents who cannot afford private schools, and contributes to the resilience of segregated schools and the reproduction of privilege. Criminal defendants’ rights are enjoyed much more robustly by those who can afford to hire an expensive lawyer than by those dependent on the meager resources that states dedicate to the defense of the indigent, thereby contributing to the endemic disparities that plague our criminal justice system.

Critics argue that the First Amendment is different, because if the weak are silenced while the strong speak, or if some have more to spend on speech than others, the outcomes of the “marketplace of ideas” will be skewed. But the marketplace is a metaphor; it describes not a scientific method for identifying truth but a choice among realistic options. It maintains only that it is better for the state to remain neutral than to dictate what is true and suppress the rest. One can be justifiably skeptical of a debate in which Charles Koch or George Soros has outsized advantages over everyone else, but still prefer it to one in which the Trump—or indeed Obama—administration can control what can be said. If free speech is critical to democracy and to holding our representatives accountable—and it is—we cannot allow our representatives to suppress views they think are wrong, false, or disruptive.

Should our nation’s shameful history of racism change the equation? There is no doubt that African-Americans have suffered unique mistreatment, and that our country has yet to reckon adequately with that fact. But to treat speech targeting African-Americans differently from speech targeting anyone else cannot be squared with the first principle of free speech: the state must be neutral with regard to speakers’ viewpoints. Moreover, what about other groups? While each group’s experiences are distinct, many have suffered grave discrimination, including Native Americans, Asian-Americans, LGBT people, women, Jews, Latinos, Muslims, and immigrants generally. Should government officials be free to censor speech that offends or targets any of these groups? If not all, which groups get special protection?

And even if we could somehow answer that question, how would we define what speech to suppress? Should the government be able to silence all arguments against affirmative action or about genetic differences between men and women, or just uneducated racist and sexist rants? It is easy to recognize inequality; it is virtually impossible to articulate a standard for suppression of speech that would not afford government officials dangerously broad discretion and invite discrimination against particular viewpoints.

But are these challenges perhaps worth taking on because Donald Trump is president, and his victory has given new voice to white supremacists? That is exactly the wrong conclusion. After all, if we were to authorize government officials to suppress speech they find contrary to American values, it would be Donald Trump—and his allies in state and local governments—who would use that power. Here is the ultimate contradiction in the argument for state suppression of speech in the name of equality: it demands protection of disadvantaged minorities’ interests, but in a democracy, the state acts in the name of the majority, not the minority. Why would disadvantaged minorities trust representatives of the majority to decide whose speech should be censored? At one time, most Americans embraced “separate but equal” for the races and separate spheres for the sexes as defining equality. It was the freedom to contest those views, safeguarded by the principle of free speech, that allowed us to reject them.

As Frederick Douglass reminded us, “Power concedes nothing without a demand. It never did and it never will.” Throughout our history, disadvantaged minority groups have effectively used the First Amendment to speak, associate, and assemble for the purpose of demanding their rights—and the ACLU has defended their right to do so. Where would the movements for racial justice, women’s rights, and LGBT equality be without a muscular First Amendment?

In some limited but important settings, equality norms do trump free speech. At schools and in the workplace, for example, antidiscrimination law forbids harassment and hostile working conditions based on race or sex, and those rules limit what people can say there. The courts have recognized that in situations involving formal hierarchy and captive audiences, speech can be limited to ensure equal access and treatment. But those exceptions do not extend to the public sphere, where ideas must be open to full and free contestation, and those who disagree can turn away or talk back.

The response to Charlottesville showed the power of talking back. When Donald Trump implied a kind of moral equivalence between the white supremacist protesters and their counterprotesters, he quickly found himself isolated. Prominent Republicans, military leaders, business executives, and conservative, moderate, and liberal commentators alike condemned the ideology of white supremacy, Trump himself, or both.

When white supremacists called a rally the following week in Boston, they mustered only a handful of supporters. They were vastly outnumbered by tens of thousands of counterprotesters who peacefully marched through the streets to condemn white supremacy, racism, and hate. Boston proved yet again that the most powerful response to speech that we hate is not suppression but more speech. Even Stephen Bannon, until recently Trump’s chief strategist and now once again executive chairman of Breitbart News, denounced white supremacists as “losers” and “a collection of clowns.” Free speech, in short, is exposing white supremacists’ ideas to the condemnation they deserve. Moral condemnation, not legal suppression, is the appropriate response to these despicable ideas.

Some white supremacists advocate not only hate but violence. They want to purge the country of nonwhites, non-Christians, and other “undesirables,” and return us to a racial caste society—and the only way to do that is through force. The First Amendment protects speech but not violence. So what possible value is there in protecting speech advocating violence? Our history illustrates that unless very narrowly constrained, the power to restrict the advocacy of violence is an invitation to punish political dissent. A. Mitchell Palmer, J. Edgar Hoover, and Joseph McCarthy all used the advocacy of violence as a justification to punish people who associated with Communists, socialists, or civil rights groups.

Those lessons led the Supreme Court, in a 1969 ACLU case involving a Ku Klux Klan rally, to rule that speech advocating violence or other criminal conduct is protected unless it is intended and likely to produce imminent lawless action, a highly speech-protective rule. In addition to incitement, thus narrowly defined, a “true threat” against specific individuals is also not protected. But aside from these instances in which speech and violence are inextricably intertwined, speech advocating violence gets full First Amendment protection.

In Charlottesville, the ACLU’s client swore under oath that he intended only a peaceful protest. The city cited general concerns about managing the crowd in seeking to move the marchers a mile from the originally approved site. But as the district court found, the city offered no reason why there wouldn’t be just as many protesters and counterprotesters at the alternative site. Violence did break out in Charlottesville, but that appears to have been at least in part because the police utterly failed to keep the protesters separated or to break up the fights.

What about speech and weapons? The ACLU’s executive director, Anthony Romero, explained that, in light of Charlottesville and the risk of violence at future protests, the ACLU will not represent marchers who seek to brandish weapons while protesting. (This is not a new position. In a pamphlet signed by Roger Baldwin, Arthur Garfield Hays, Morris Ernst, and others, the ACLU took a similar stance in 1934, explaining that we defended the Nazis’ right to speak, but not to march while armed.) This is a content-neutral policy; it applies to all armed marchers, regardless of their views. And it is driven by the twin concerns of avoiding violence and the impairment of many rights, speech included, that violence so often occasions. Free speech allows us to resolve our differences through public reason; violence is its antithesis. The First Amendment protects the exchange of views, not the exchange of bullets. Just as it is reasonable to exclude weapons from courthouses, airports, schools, and Fourth of July celebrations on the National Mall, so it is reasonable to exclude them from public protests.

Some ACLU staff and supporters have made a more limited argument. They don’t directly question whether the First Amendment should protect white supremacist groups. Instead, they ask why the ACLU as an organization represents them. In most cases, the protesters should be able to find lawyers elsewhere. Many ACLU staff members understandably find representing these groups repugnant; their views are directly contrary to many of the values we fight for. And representing right-wing extremists makes it more difficult for the ACLU to work with its allies on a wide range of issues, from racial justice to LGBT equality to immigrants’ rights. As a matter of resources, the ACLU spends far more on claims to equality by marginalized groups than it does on First Amendment claims. If the First Amendment work is undermining our other efforts, why do it?

These are real costs, and deserve consideration as ACLU lawyers make case-by-case decisions about how to deploy our resources. But they cannot be a bar to doing such work. The truth is that both internally and externally, it would be much easier for the ACLU to represent only those with whom we agree. But the power of our First Amendment advocacy turns on our commitment to a principle of viewpoint neutrality that requires protection for proponents and opponents of our own best view of racial justice. If we defended speech only when we agreed with it, on what ground would we ask others to tolerate speech they oppose?

In a fundamental sense, the First Amendment safeguards not only the American experiment in democratic pluralism, but everything the ACLU does. In the pursuit of liberty and justice, we associate, advocate, and petition the government. We protect the First Amendment not only because it is the lifeblood of democracy and an indispensable element of freedom, but because it is the guarantor of civil society itself. It protects the press, the academy, religion, political parties, and nonprofit associations like ours. In the era of Donald Trump, the importance of preserving these avenues for advancing justice and preserving democracy should be more evident than ever.

—August 24, 2017

* The leading collection of essays advancing this critique is Mari J. Matsuda, Charles R. Lawrence III, Richard Delgado, and Kimberlé Williams Crenshaw, Words That Wound: Critical Race Theory, Assaultive Speech, and the First Amendment (Westview, 1993). For a thoughtful defense of hate speech regulation on liberal premises, see Jeremy Waldron, The Harm in Hate Speech (Harvard University Press, 2012).

Source Article from http://feedproxy.google.com/~r/nybooks/~3/HDf-1i0n6ec/

Alice Coltrane’s Songs of Bliss

J. Emilio Flores/Corbis via Getty Images: Alice Coltrane and her son Ravi with a photograph of John Coltrane, September 4, 2004

When the saxophonist John Coltrane was asked in 1966 what he hoped to be later in life, he replied, “I would like to be a saint.” He would be canonized by the African Orthodox Church in the 1980s, but John wasn’t the only holy person in his family. His widow, Alice Coltrane, who had had spiritual inclinations since childhood, took the Sanskrit name Turiyasangitananda (which she translated as “the Transcendental Lord’s highest song of bliss”) and donned the saffron robes of a Hindu swami in the late 1970s.

An important jazz musician in her own right, Alice Coltrane played piano in her husband’s groups from 1966 until his death the following year. After John passed away, Alice recorded a dozen albums under her own name, ranging from straight-ahead jazz to experimental mixtures of orchestral music and improvisation to Hindu chants performed in gospel arrangements. Her corpus remains one of the most varied and underappreciated in jazz, complicated by her unorthodox religious convictions and the towering legacy of her husband, whose vision she often claimed to be fulfilling.

Alice stopped recording commercial albums in 1978, but she continued creating music with members of the Shanti Anantam Ashram, a religious community in Agoura Hills, California, that she founded in 1983 and would lead until her death in 2007, at the age of sixty-nine. Alice and her congregation produced four albums of their sacred songs, which were released on cassette in limited numbers by Avatar Book Institute, a publishing house associated with the ashram. Selections from these rare and out-of-print recordings were recently released as The Ecstatic Music of Alice Coltrane Turiyasangitananda by the world-music label Luaka Bop, marking the first time these works have been made widely available.

Alice espoused an eclectic set of beliefs drawn largely from Hindu texts such as the Bhagavad Gita and the Vedas, but also from Taoism, Christianity, Zoroastrianism, and ancient Egyptian religion. She claimed to have had mystical experiences since the age of nine and said she was clairvoyant, telepathic, and able to levitate; she told interviewers she could stay awake for days in a meditative trance, and wrote of memories from past lives. Such assertions of supernatural abilities are likely to make many listeners uncomfortable. Yet by all accounts she was beloved by her family, followers, and fellow musicians. It’s worth noting that Coltrane’s ashram was free from the sexual and financial scandals that have surrounded other self-styled spiritual leaders.

Alice Coltrane led services at her ashram every Sunday. Worship was largely centered around a traditional Hindu form called bhajan, or devotional chant, which consists of repetitions of the name of a particular deity and invocations to it. The music the congregation created was a far cry from that of South Asia. Franya Berkman, a musicologist who visited the ashram in the early 2000s, described the weekly services:

[Alice] would make offerings at the altar, take her seat behind the Hammond B3 organ, and begin…. Playing syncopated chords with her left hand and a soaring, pentatonic melody with her right, she would signal the song leader in the men’s section to start the men singing. The women would respond, and blues-inflected devotional music would fill the room…. The congregation would create harmonies and counterpoint, and cry and shout in response to members’ musical and emotional outpourings. They would clap ecstatically, and join in with tambourines and other hand-held percussion instruments.

The result, as the new compilation reveals, was a complex and sometimes befuddling blend of gospel, pop, rock, and Indian religious music. At times the congregation’s music—with its pulsing beats, synthesizers, and indistinct singing—sounds as if it could be field recordings manipulated by a DJ; at other times it seems dangerously close to the New Age fads of the late twentieth century, a hybrid of Eastern and Western spirituality that does little to address the complicated histories of its various influences.

But the recordings on The Ecstatic Music never fall into pastiche or baseless cultural appropriation. The unexpected combination of styles and influences are held together by the passion and devotion of the congregation. As unusual as the ashram recordings might sound to listeners, they contain the music of a religious community that viewed these performances as a sacrament. Coltrane herself was an immensely talented musician who saw music as a way of expressing one’s faith and communicating with the divine, and the songs included on The Ecstatic Music emerged from a lifetime of spiritual and artistic searching.

Born in 1937 in Detroit, Alice McLeod grew up in a family of observant Baptists. Often described as a child prodigy, she began accompanying the choirs at her family’s church at a young age and also studied European piano repertoire. As a teenager, she played at the services of a local Pentecostal congregation. The experience convinced Alice of the religious power of music. “The people in the audience were so overcome with the spirit,” she recalled, “they weren’t singing anymore; some were just walking around the church. Half of the choir had to be carried out.” The belief that music was a means for reaching the divine would stay with her for the rest of her life.

During high school, Alice became involved in the lively Detroit jazz scene, which, in the 1950s, was known for combining bebop with rhythm-and-blues, resulting in a grittier, earthier style than the jazz that came out of the coasts. Prominent musicians Alice played with at this time included the Jones brothers (Elvin, Thad, and Hank), Yusef Lateef, Kenny Burrell, and Bennie Maupin. In 1960, she moved with her first husband, the singer Kenny Hagood, to Paris, where she frequented the apartment of bebop pioneer Bud Powell, whom she considered a mentor.

After her marriage fell apart, Alice lived in New York for a year before returning to Detroit, where she was hired by the vibraphonist Terry Gibbs. In 1963, Gibbs’s band shared a weeklong double-bill at Birdland in New York with John Coltrane’s quartet. Alice had long been fascinated by John’s music, especially his 1961 album Africa/Brass, with its primal-sounding arrangements by the multi-instrumentalist Eric Dolphy over which John played in a restrained but forceful style, evoking the cadences of a preacher. At Birdland, Alice didn’t dare approach the shy and serious Coltrane. He, however, approached her on the third night, walking behind her backstage while playing a melody on his saxophone. When she told him it was beautiful, he replied, “It’s for you.” By the end of the week, she had left Gibbs’s group in order to travel with John as he toured around the world.

Joe Alper Photo Collection LLC: Alice and John Coltrane, circa 1966

John and Alice shared a conviction that music could be a spiritual practice. Following a revelation in 1957, John’s work took on a focus and urgency that seemed to convey the divine force that he felt underlay all musical expression, as exemplified in titles like A Love Supreme, Om, “Peace on Earth,” “Offering,” “Love,” and “Serenity.” He said that he hoped “to be a force for real good” that could “inspire [listeners] to realize more and more of their capacities for living meaningful lives.” John’s influence on Alice was profound. An intensely studious man, he introduced her to the Bhagavad Gita and other Eastern texts that would later form a central part of her religious beliefs. After John died, Alice never spoke of him as being dead—she referred instead to his “transition”—and in subsequent years she would claim to receive visitations from his spirit.

In late 1965, McCoy Tyner, John’s longtime pianist, left to pursue a solo career, and John asked Alice to join his band. In the mid-1960s, John had begun to explore the irregular phrasing, harsh timbres, and loose beats of free jazz artists such as Ornette Coleman and Albert Ayler, largely abandoning standard harmonies and forms in favor of more open structures for improvisation. Although Alice had stopped playing piano in order to raise her and John’s children and had no experience with the new avant-garde music, she was swiftly integrated into the ensemble’s atonal, atemporal style. John encouraged Alice to approach the piano as if it were a harp, and she developed a technique that combined shimmering arpeggios with rapidly shifting harmonies, resembling a lusher, less percussive version of Tyner’s playing. Although the Coltranes’ music could be harsh and frenetic or hymn-like and ethereal, throughout it all there was a meditative intensity that drew impassioned responses from listeners. Reactions to their concerts were strikingly similar to those at the Pentecostal services Alice witnessed as a teenager: “Someone in the audience would stand up, their arms upreaching, and they would be like that for an hour or more,” she recalled. “Their clothing would be soaked with perspiration, and when they finally sat down, they practically fell down.”

John died suddenly from liver cancer in 1967, leaving Alice widowed with four children—a daughter from her first marriage and three sons with John. She began to suffer from insomnia, and her weight fell from 128 to 95 pounds. She later called this period her tapas, a yogic term for spiritual austerity, writing that this “purificatory spiritualization [brought] about the expansion and heightening of my consciousness-awareness level.” During this time she began recording music with her own ensembles and developed an interest in Hinduism and ancient Egyptian religion.

Alice’s solo recordings from the late 1960s and early 1970s—on which she was joined by many who had played with John, as well as other prominent jazz musicians such as bassist Ron Carter and saxophonist Joe Henderson—feature steady rhythms, simple bass lines, instruments from India and the Middle East, and impressionistic, bluesy soloing. Although the music was more groove-based and less dissonant than John’s late work, Alice continued to explore the spiritual concerns that had occupied her husband. She began playing a harp that John had ordered before he died but that only arrived in 1968. Alice’s harp would become a crucial aspect of her music, adding a celestial quality to albums like the majestic Journey in Satchidananda (1971), named after an Indian swami of whom Alice had become a devoted follower. The work from this period is her most celebrated and accessible; in addition to the Eastern influences alluded to in her song and album titles, the music reflects Alice’s education in the church, her experience playing bebop tinged with R&B in Detroit, and her admiration for pop artists like Aretha Franklin and Ray Charles.

Michael Ochs Archives/Getty Images: Alice Coltrane, late 1970s

Alice’s early albums are remarkably confident and daring for someone who had never before led her own ensembles and who had largely given up public performance in order to look after her family. Her surety is all the more impressive considering that the jazz avant-garde of the 1960s and 1970s was almost entirely male. Although some female jazz instrumentalists and composers had achieved renown in more traditional settings, there were virtually no women playing in the avant-garde at the time, and Alice was the first to record under her own name. As Alice herself acknowledged, royalties from John’s music helped her achieve the comfort and financial freedom that other musicians didn’t have; she was also indebted to John for her contract with Impulse! Records, which arose from the label’s desire to gain her permission to issue the vast amount of his unreleased recordings. She would soon become an important artist for the label. Impulse! producer Ed Michel said admiringly that the musicians Alice recorded with “treated her with the respect she deserved and did as they were told. Guys who wouldn’t put up with anything from a lot of people would ‘Yes, ma’am’ her because she earned it.” Using the studio she and John had constructed at their home in Dix Hills, on Long Island, and receiving production credits on most of her albums, she was one of the few major-label jazz artists to have complete autonomy over her work.

Her later recordings for Impulse! were more tumultuous, characterized by rollicking beats, sudden shifts between heavenly consonance and harsh dissonance, and a blistering, bop-inspired technique on electric organ—a striking contrast to her more fluid harp playing. She showed an interest in orchestral music, including arrangements of excerpts from The Firebird on Lord of Lords (1972). (In that album’s liner notes she wrote that she had been visited by the spirit of Igor Stravinsky, who had told her “I wanted you to receive my vote” before offering her an elixir in a glass vial.) She also dubbed string arrangements and organ over recordings of John. The result, released in 1972 as Infinity, was maligned by critics and fans, but the settings Alice created for John’s improvisations are surprisingly cohesive, revealing new aspects of his music and the direction it might have taken had he lived longer.

That same year, Alice moved her family to California, where she opened the Vedantic Center, which soon attracted followers. Four years later, she had a revelation in which, she said, God instructed her to become a swami. Her music changed once again, with the albums Radha-Krsna Nama Sankirtana (1976) and Transcendence (1977) featuring singers performing Hindu chants in bright, joyous renditions that resembled gospel and contemporary pop more than Indian music, avant-garde jazz, or orchestral works. Following her 1978 album Transfiguration, Alice devoted herself entirely to her family and her religious pursuits, and in 1983, she purchased a large parcel of land in Agoura Hills, near Malibu, where she created the ashram.

The songs collected on The Ecstatic Music of Alice Coltrane Turiyasangitananda were recorded between 1982 and 1995. Most of Alice’s arrangements of bhajans, the invocations that serve as the basis for her devotional music, use two-chord vamps and call-and-response between an individual and the chorus or between different groupings of the chorus. Alice primarily plays electric organ and an Oberheim OB8 analog synthesizer, using dissonant chords and occasional blues lines to propel the music forward while creating a dark, ominous texture around the singers; there is little of the sunniness of her earlier renditions of Hindu prayers. One of the most arresting aspects of the ashram recordings is Alice’s use of the synthesizer’s pitch-bending function, with which she makes dramatic sweeps that overpower the singers and resemble air raid sirens. The effect can be startling, suggesting a destructive deity in need of propitiation rather than one offering its divine love. At times the music resembles that of Nusrat Fateh Ali Khan in its trance-inducing mysticism; or of Fela Kuti in its simple melodies and propulsive, disco-like beats; or of Stevie Wonder in its imaginative superimposition of gospel music and synthesizers. Yet on the whole this music is unlike anything else.

Sri Hari Moss/Luaka Bop: Alice Coltrane and members of the Shanti Anantam Ashram, Agoura Hills, California, 2006

The simple forms, recognizable harmonies, and repetitive nature of the chants would have been amenable to a congregation of varying musical abilities, but the singers included on this album, many of whom are African-American and were raised in black churches, are outstanding. When singers break off from the chanting with some melisma or other embellishment, the mixing of the recording often makes them sound distant, and the chorus can form an indistinct, ululating mass, requiring close listening in order to discern individual contributions. This is especially true on “Hari Narayan,” throughout which a woman’s sanctified wailing is partially obscured by Alice’s organ and the chorus’s antiphonal chanting. Soloists are recorded with greater clarity. Panduranga John Henderson, a member of the ashram who had been a singer in Ray Charles’s band, delivers a powerful solo in the middle section of “Om Rama,” and the Indian musician Sairam Iyer sings a Tamil poem on a version of “Journey in Satchidananda,” played here at a dirge-like tempo with lyrics celebrating the song’s namesake. Iyer’s voice, lithe and clear, blends impressively with Alice’s organ, following the bends in her tones and nestling comfortably into the timbre of the instrument.

Four of the eight songs feature the voice of Alice herself, who had never sung on her commercial records. Indeed, no one had heard her sing before 1982, when, according to Radha Botofasina-Reyes, a longtime resident of the ashram, “she said she had meditated and the Lord had told her she must sing.” Alice’s voice has a mysterious, androgynous quality, and although it can be out of tune and unsteady, she sings confidently and movingly. As Botofasina-Reyes remembers, “She said [her voice] sounded that way because it was neither male nor female—it was the voice of the soul.”

“Om Shanti” in particular suggests this otherworldly nature. Alice’s rhythmic but airy articulation of the consonant-heavy Sanskrit text—“Ananta natha parabrahman om”—is echoed by ghostly overdubs of her own voice. The key changes from B-flat major to G minor, percussion enters, and Alice leads the chorus in call-and-response; her voice gradually subsides as the haunting swells of the congregation become more prominent. Another standout track is “Er Ra,” based on an ancient Egyptian text that, according to the liner notes, has no known translation. Alice sings and accompanies herself on harp, her slow, sparse plucking giving way to waves of arpeggios, and her plaintive voice alternating between raspy vulnerability and fearlessness.

Franya Berkman, whose invaluable study Monument Eternal: The Music of Alice Coltrane remains the only full-length treatment of its subject, argued that Alice’s work could be seen as a spiritual autobiography, in the manner of black female preachers such as Rebecca Cox Jackson and Sojourner Truth, who relied on a variety of styles and modes in order to tell an authentic account of their experiences. So too, Berkman argued, did Alice use whatever means were available in order to express herself authentically. One might wonder why, earlier in her career, Alice recorded virtually unaltered transcriptions of works by Stravinsky. It was, in part, an attempt to claim the worldly Russian as a spiritual composer—after all, he was devoted to the Russian Orthodox Church and composed many sacred works—and to place herself as his spiritual, if not stylistic, descendant. But it was also a fearless assertion of her individuality. She didn’t seem to care that a black woman trained in the church, bebop, and the avant-garde wouldn’t be expected to espouse European modernism; she loved his music, and she made it a part of her own.

It is perhaps this assured, deeply felt eclecticism that has gained Alice a following among younger listeners. A performance one Sunday in May by the remaining members of the ashram as part of the Red Bull Music Festival in New York attracted a notably youthful crowd, and the attendees of saxophonist Ravi Coltrane’s recent two-day run at The Jazz Gallery in Manhattan, at which he played his mother’s music, were more diverse in age than those usually seen at jazz concerts. The presence of other members of the Coltrane family, as well as many musicians, in the audience made the concert seem like a long-overdue celebration of a great but neglected artist who has been recognized as a forebear by a later generation. Although The Ecstatic Music contains devotional music, one needn’t subscribe to Alice’s theology or believe in the veracity of her mystical visions to appreciate the emotional honesty of her work. When asked by an interviewer in 2004 what she demanded of her initiates, Alice replied, “I ask that they be sincere in their purpose.”

The Ecstatic Music of Alice Coltrane Turiyasangitananda was recently released by Luaka Bop.


Take a Hike!

Granger: Hikers ascending Tyndall Glacier in Rocky Mountain National Park, Colorado, circa 1920

To the uninitiated it can be hard to understand why anyone would go hiking. Today’s fleece- and Gore-Tex–clad masses may take for granted the attraction of spending weekends doing what, for most of human history, qualified as grunt work: trudging through the wilderness, surrounded by dangerous animals, a heavy pack on your back. Earlier advocates had to be more candid. “This is very hard work for a young man to follow daily for any length of time,” wrote John Meade Gould in a popular guide in 1877. “Although it may sound romantic, yet let no party of young people think they can find pleasure in it for many days.”

Henry David Thoreau offered similar advice. “If you are ready to leave father and mother, and brother and sister, and wife and child and friends,” he wrote in “Walking,” his classic hiking treatise, “and never see them again…then you are ready for a walk.” When I was a child my parents had already been indoctrinated into modern hiking culture; my sister and I knew better. I would only go for a hike if promised M&Ms at every stop. My sister, cannier than I, demanded a new CD before each trip, which she then listened to on headphones while the great outdoors passed by.

Why do people hike? Surprisingly little has been written on the origins of so unnatural an activity. Silas Chamberlin, an official at a Pennsylvania-based hiking advocacy organization and a recent Ph.D. who studies environmental history, has written the first comprehensive account of the pastime, On the Trail: A History of American Hiking. Looking back it can seem easy to draw a direct line from men like Thoreau and John Muir to hikers today. We climb the same mountains: Thoreau, in The Maine Woods, writes about his struggle to ascend Mount Katahdin, the endpoint of the modern Appalachian Trail; Muir, in The Mountains of California, describes much of the landscape passed through by the path that now bears his name, the 211-mile John Muir Trail that runs from Mount Whitney to Yosemite. We also share many of the same goals. Thoreau preferred to hike “absolutely free from all worldly engagements”; Muir spent days by himself in the wilderness, with nothing but the animals in the forest for company.

Chamberlin’s participation in the often ignored club hiking community—34 million Americans go hiking each year, but only two million belong to hiking clubs—leads him to ask how typical Thoreau and Muir really were at the beginning of American hiking. Early hikers shared with these men a love of nature, Chamberlin agrees, and they may have also admired the daring of those who walked in the forest alone. But what most early hikers sought was not solitude; it was fellowship. The decisive moment in the rise of American hiking was thus the formation of groups like the Appalachian Mountain Club and the Sierra Club, in which “meetings, dances, meals, and simple companionship were almost as important as the act of walking itself.”

As one New England woman recounts, the working class felt no need for a club—typically a project of the middle class or wealthy—to authorize their leisure. “It was our custom,” she wrote of her days off, “to wake one another at four o’clock, and start off…together over some retired road whose chief charm was its familiarity, returning to a very late breakfast, with draggled gowns and aprons full of dewy roses.” Chamberlin nonetheless shows that the early clubs were responsible for much of the development of hiking as a discrete activity, distinct from a stroll in the park, or a long journey along roads, or the surprisingly popular nineteenth-century spectator sport of competitive walking.

The social ambitions of the clubs were evident from their memberships. When the first significant hiking association, the Appalachian Mountain Club (AMC), formed in Boston in 1876, the group’s magazine declared that it had been founded on the marriage of “scientific and aesthetic elements,” so that “the former, like a strong husband, would do the laborious honor-bearing work, and the latter as a graceful enthusiastic consort, would win many friends to the association.”

This language was not just figurative: the AMC, like most hiking clubs, recruited men and women alike. Perhaps the founders were thinking of how the Shoshone woman Sacajawea helped guide an earlier trip into the mountains. Meriwether Lewis and William Clark, along with their followers John C. Frémont, Clarence King, and Ferdinand Hayden, were among the most widely read “nature writers” of the day; it’s not too much of a stretch to think that the early clubs saw themselves as recreating in miniature these more famous ventures in the union of romance and science. And unlike Thoreau and Muir, when these explorers recounted their tales of long walks through the woods, they could at most offer only the pretense of facing the wild alone—their government-sponsored expeditions required dozens of participants.

As the members of the AMC marched off into the mountains of New England, botanizing and charting routes up peaks, they too did so in large groups. Out west the Portland Mazamas held their first gathering in 1894 atop nearby snow-capped Mount Hood—155 men and thirty-eight women met at the summit. In California the Sierra Club, founded by Muir in 1892, organized regular trips into the high country. A representative outing included 287 members. Even if early hikers had wanted to travel by themselves, the equipment would have made an expedition difficult: a typical multiday trip required heavy canvas tents, cast-iron Dutch ovens, rubber mattresses, and sheet-iron stoves—much of it surplus equipment from the Civil War. Early hiking looked less like a country idyll, more like an army encampment. So much for the solitude of the wild.

The clubs’ show of scientific credibility could not be sustained for long. The Mazamas promoted one group of trips with the promise of establishing “heliographic communication” along the entire West Coast; in the event, club members used mirrors to signal from Mount Baker in northern Washington to Diamond Peak in central Oregon, an impressive accomplishment, but one that, expedition organizers admitted, had little scientific value. Those with greater interest in the production of knowledge rebelled at their fellows’ devotion to the picturesque. “The wish to enjoy the prospect becomes the pretext for repeated halts,” complained one scientifically inclined hiker; distracted by beauty, “the will acts with less vigor.” By the early twentieth century the marriage of science and romance had ended in divorce.

Without science for cover, the clubs needed some new excuse for their love of long walks in the woods—romance and beauty seemed suspect, especially given that the clubs, while they admitted women, remained dominated by men. The most obvious place to turn was the great new passion of late-nineteenth- and early-twentieth-century America, bodily culture. “The real joy of hiking is that it is highly healthful and at the same time interesting,” a Cleveland club member offered. A hiker from Allentown, Pennsylvania, was more unrestrained: “The next time you climb that mountain, and your chest heaves, and you feel like your lungs will explode, remind yourself,” he wrote, “it’s all for health’s sake.”

For some, hiking offered religious benefits. “Our trips have always embraced…first, the worship of God,” one hiker insisted; his club regularly scheduled religious services at a rock formation they dubbed Dan’s Pulpit. There do not, however, seem to have been many priests or rabbis in the woods.

Indeed, although Chamberlin cherishes the early hiking clubs too much to draw out this point, the evidence he presents suggests they may have been one of the central rallying points—along with the Episcopal Church and Ivy League football—for a new elite culture that for the first time excluded all Jews and Catholics. Basing their identity on the then-novel concepts of “muscular Christianity” and “Anglo-Saxonism,” club hikers desired to present themselves as ancient and rooted in the land. “There’s nothing like a good, honest-to-goodness, upright, God-fearing, one hundred percent American, red-blooded autumn hike,” one member wrote in his club’s log book—perhaps parodically, as Chamberlin insists, but if so the parody also reveals the character of much hiking rhetoric.

The founder of another club felt that his group was reminiscent of a leading WASP imperialist organization, dubbing it “a sort of advanced Boy Scouts.”1 Other clubs promised health of a distinctly racial variety. “All we need are a few more trails…and the color of young Americans will soon turn from putty to bronze,” a Wisconsin club declared, promising its hikers “two rosy cheeks.” At least one southern club explicitly limited its membership to whites. Northern clubs may not have needed to: although Chamberlin notes that some clubs were racially open, he does not discuss a single nonwhite hiker before World War II.

After World War I the clubs’ horizons, which had rarely reached beyond local or regional borders, expanded to include entire mountain ranges. What remain the most prominent symbols of American hiking culture were the result: the Appalachian Trail, 2,160 miles from Georgia to Maine, and the Pacific Crest Trail, 2,659 miles from the Mexican to the Canadian border. Like the great wagon roads of the nineteenth century or the federal highways then being charted across the country—Route 66 was established in 1926—these trails knit together the national landscape.

The lack of utilitarian function also made the trails’ ideological purpose more evident. The chief architect of the Pacific Crest Trail, Clinton Clarke, saw the project in explicitly racial and religious terms. The “negro boys” of America, he complained in 1937, had remained “closer to the soil” and so were taking “all the athletic prizes,” while whites suffered from “too much sitting on soft seats in motors, too much sitting in soft seats in movies, and too much lounging in easy chairs before radios.” Only a long trip in the woods by “clean, strong young Christians,” Clarke’s assistant wrote, could “preserve our Christian civilization,” while eradicating communism as well. The great attraction of the new trail, according to a young man who blazed a section, was “the fact that I was one of the first fellows to participate in such a conquest of this kind.”2

W.R. Ross/National Geographic Creative: The Wanderlusters, a coed hiking club based in Washington, D.C., circa 1915

Back east the founder of the Appalachian Trail, Benton MacKaye, was a rather different figure, a supporter of the Soviet Union and a friend of Sinclair Lewis, John Reed, and Lewis Mumford. MacKaye believed his trail would provide a solution to the labor unrest of the period—much of which was led by Wobbly lumberjacks and miners—by offering land and work in government-owned towns, newly built along the trail in the forest; no less a man of his time than Clarke, MacKaye termed his scheme “colonization.”

Like any activity oriented around that great cipher, Nature, hiking is ideologically flexible. After World War II the culture established by the clubs underwent a radical change. A new breed came to prominence: the “thru-hiker.” The first, Earl Shaffer, had never belonged to a hiking club. He spent the summer of 1947 trying “to walk the army out of my system, both mentally and physically,” by becoming the first person to trek the entire length of the Appalachian Trail. When Shaffer finished, the public guardians of club hiking culture were incredulous. An official questioned him at length, only to relent when Shaffer produced a day-by-day diary and hundreds of photographs documenting the trip.

Thru-hiking was too threatening to the clubs. The goals were different: speed and fame (several clubs banned hiking races) as well as a therapeutic approach to nature that seemed insistently antisocial, a rejection of fellowship. The success of thru-hikers also called into question the need for the material resources the clubs provided. Trips like Shaffer’s “proved that it was possible to hike without a camp cook, heavy equipment, experienced guides, or other benefits of club outings,” Chamberlin writes. This shift came about in large part because of the spread of new technology like lightweight nylon tents and freeze-dried food. Shaffer and his followers—including Martin Papendick and Colin Fletcher—looked less like club members and more like high-tech loners, perhaps new versions of Muir and Thoreau, perhaps a portent of what Robert Putnam diagnosed as the quintessential postwar American social pathology, “bowling alone.”

Chamberlin contends that the hiking culture that followed has been a pale shadow of that produced by the clubs. Robert Moor offers a more promising view. On Trails, his account of his own 2009 thru-hike of the Appalachian Trail and the practice of trail-making more generally, shows how contemporary hikers have moved beyond the sport’s WASP origins and, in part by returning to the thought of Muir and Thoreau, in part through the canonization of writers like Jack Kerouac, Gary Snyder, and Edward Abbey, come to see hiking as a way to create not a club but a kind of utopian community.

A typical parody of the hiking culture that arose after the 1940s might run like this. Hikers are just as white, wealthy, and socially snobby (if not explicitly racist) as ever, but now, because they hike alone or in small groups instead of clubs, they are obsessed with individual speed as well as ever newer and more expensive equipment that, when not in use, piles up in the garage along with all the other detritus of suburban life.

This image, which Chamberlin peddles as he contrasts postwar hiking culture with the positive aspects of the clubs, has some truth to it. But the consumerist hiker, who, however much he enjoys walking, does not so much escape to the wild as use the wild as an excuse to indulge in yet more shopping, is an apt label for only part of the contemporary hiking community. The remainder are more likely to rely on the same piece of gear until it falls apart after decades of use—hiking equipment may be the last bastion against planned obsolescence in the American economy—or else, as in the recent craze for lightweight backpacking, to look for ways to repurpose common items like old plastic water bottles, which are lighter than the most high-tech versions on offer. My favorite book on the subject recommends, “Make your own stuff, and making it out of trash is always best!”

Moor, who hikes with a tarp instead of a tent and dehydrates his own food, is clearly of this latter tribe. His spiritual guide to hiking is not the latest outdoors company catalog but the poet Gary Snyder, at least as channeled by Kerouac in The Dharma Bums: “Walk along looking at the trail at your feet and don’t look about and just fall into a trance as the ground zips by”; only then will you achieve the true “meditation of the trail.” Moor had set out on the Appalachian Trail with no particular goal other than “to live in a prolonged state of freedom.” The first day he realized the hike required a kind of submission. In his journal he wrote:

There are moments when you cannot help but feel that your life is being controlled by some not-entirely-benevolent god. You skirt down a ridge only to climb it again; you climb a steep peak when there is an obvious route around it; you cross the same stream three times in the course of an hour, for no apparent reason, soaking your feet in the process. You do these things because someone, somewhere, decided that that’s where the trail must go.

Because the path had been carved out by trail-builders and past hikers, to follow it, Moor found, was to be a slave to determinism. His sense of mastery, as he finished, was mixed in equal measure with a feeling of humility. “On a trail, to walk is to follow.”

To walk is also to be part of a community, although often an unplanned one. The Appalachian Trail has changed since Shaffer’s lonely expedition in 1947: in 2015 roughly 2,700 hikers set out from Georgia intending to walk the entire length to Maine. A similar number attempted the Pacific Crest Trail, with about fifty hikers departing from the Mexican border on a typical day in April of that year. This may not sound like a crowd, until all those hikers arrive around dinnertime at one of the shelters or campsites along the way. As the weeks go by a free-floating community develops. Moor fell in with a group for the first part of his trip, then outpaced them. “Weeks or months later,” he writes, “whenever I slowed down or they sped up, I would bump into these friends again, as if by some miraculous coincidence.” If not by intention—the original promoters had expected that few people would hike the trail’s entire length—this was by a kind of design. “The miracle,” Moor writes, “was the trail itself, which held us together in space like so many beads on a string.”

Moor set out alone, but he found on his hike a community in which, as the Appalachian Trail’s founder, Benton MacKaye, had hoped, “cooperation replaces antagonism, trust replaces suspicion, emulation replaces competition.” Unlike many other works about hiking—Cheryl Strayed’s best-selling 2012 memoir Wild being the most prominent example—Moor does not take this experience as the occasion for an anguished excavation of his past. Instead his experience becomes the starting point for a series of reflections on the nature of trails themselves, from the earliest surviving traces of animal movement 565 million years ago to the arts of concealment that make possible the well-maintained trails of today. Throughout, Moor returns to the same paradox: the way that the careful planning of trail advocates like Chamberlin can come together with the spontaneous activity of individuals like him and his trail-mates to create a hiking culture that expresses a utopian critique of modern society.

The trail, for Moor, is not separated off from the modern world; rather, the trail becomes that world’s inverted mirror. He is obsessed with the concept of “stigmergy,” a biologist’s term for how creatures like ants and termites self-organize without any central command. Unlike its close cousin the market, stigmergy assumes altruism, not competition. With animal trails much of this altruism is inadvertent: a creature cannot travel to a food source without leaving behind a trace of the way it went—and as other animals follow, that trace turns into a trail.

In humans, a similar logic can be found in the paths of culture. Moor details the practices of tribes like the Western Apache, which see the past itself as a trail that must be carefully attended and preserved. Then “the land grows to contain not just resources,” he writes, “but stories, spirits, sacred nodes, and the bones of ancestors.” Moor tends to ignore the way overuse can cause a pathway to expand until it destroys a landscape, or consensus leads to a monochrome culture stuck in the same old ruts. He takes the perspective of the hiker, trying to account for what made possible the brief utopia he found on the Appalachian Trail: a physical landscape planned by trail builders, a cultural landscape created by hikers—even those in the woods for only a day or two—devoted to not just traveling the trail but building a community and helping one another along the way.

Moor, perhaps without meaning to, also carries out a subtle critique of the hiking culture he inherited from the early-twentieth-century clubs. Against the spirit of conquest that motivated some early hikers, he bases his understanding of trails on a relationship to the land drawn from Native American traditions. He devotes another chapter to the International Appalachian Trail, a project that explodes the nationalist impulse behind the long trails of the 1920s and 1930s by taking seriously the idea of a trail concerned with respect for geology itself, establishing pathways through every remnant of the original Appalachian mountain formation, from Mexico to Canada, Scotland to Morocco.

A still-further revision may be needed. Moor notes the absurd specificity of the word “hike,” which carries with it both a sense of work—the word’s etymology lies somewhere between “to hoist” and “to sneak”—and an assumption of wilderness. Other languages are more capacious. In German, to hike is wandern, to wander; in French, it is randonner, which originally meant to move with impetuosity. Even in English, “hiking” is a peculiarly North American word: for the British and the Irish, a walk can designate any kind of perambulation, from a stroll in the park to a trip through the Alps; New Zealanders go tramping, while Australians prefer bushwalking.

In the United States, it wasn’t until around 1900 that the words “hike” and “hiker” began appearing in the annals of outdoor societies. A young woman on a Sierra Club outing left a portrait of this new specimen, the hiker: “He is harmless, but is not generally loved, for he is a little overbearing and given to much talking of a certain catalogue of hours and distances which he keeps in his mind and calls his record.” A few years later Muir, when asked for his own opinion on hiking, rejected the term, preferring “to saunter”; most others talked of tramping. The former strikes me as too pious—Muir adopts Thoreau’s folk etymology, according to which “saunter” comes from “à la Sainte Terre,” a pilgrimage to the Holy Land—while the latter is too redolent of cultural slumming. I prefer the term taken up by Henry James in The Art of Travel: the next time you see me on the trail, whether in a park, along the street, or in the woods, you’ll find me rambling.

  1. For the imperialist origins of the Boy Scouts, see Ian Buruma, “Boys Will Be Boys,” The New York Review, March 15, 1990.

  2. Chamberlin neglects the racial impulses of Clare and his associates; these details come from Jenn Livermore, “The Pacific Crest Trail: A History of America’s Relationship with Western Wilderness,” senior thesis, Scripps College, 2014; and Glynn Wolar, “The Conceptualization and Development of Pedestrian Recreational Trails in the American West, 1890–1945: A Landscape History,” Ph.D. dissertation, University of Idaho, 1998.


What Makes a Terrorist?

Lorenzo Meloni/Magnum Photos: A suspected member of ISIS being taken into custody, Hamam al-Alil, Iraq, March 2017

In the wake of the terrorist attacks in and around Barcelona, clichés about radicalization are again making the rounds. For some, the twelve young members of the cell behind the Barcelona attacks, all men, were “brainwashed”; for others the blame falls on the town of Ripoll for becoming a “terrorist breeding ground”; for others yet it’s Islam as a whole that must be held accountable. For those who study radicalization and terrorism, all of these explanations fall short.

The greatest obstacle to understanding and responding to terrorism and radicalization is linear thinking. Arguing that radicalization is caused by poverty because most modern jihadists come from marginalized neighborhoods relies on the same flawed logic as arguing that radicalization is caused by Islam because jihadists are all Muslims. Even combining Islam and marginalization as risk factors doesn’t get us far, since only a fraction of one percent of marginalized Muslims join jihadist groups. One can add many more factors and still end up with the same dilemma. Trying to find a root cause of radicalization is doomed from the start because it assumes a single, linear chain of causation.

Instead, it is better to think of radicalization as a phenomenon in which the whole is greater than the sum of its parts. Multiple factors interact in complex ways that cause radicalization to emerge in individual people and groups. As with other complex systems, such as ecosystems, removing one factor does not cause the system to collapse but instead to evolve in ways that may be positive or negative. In the jihadist movement there have been many small tipping points, including the Soviet invasion of Afghanistan in 1979, the 2003 US invasion of Iraq, and the Syrian civil war that began in 2011—each of which mobilized a new generation of fighters.

Profiles of jihadists have evolved over the years. Generally, revolutionary movements attract different kinds of recruits at different stages in their development. Many of the founders and leaders of the modern jihadist movement were educated members of the upper-middle or upper classes. Even many early foot soldiers were of above-average socio-economic status. Research on recruits to jihadist groups using data from the 1970s to 2010 found that members of these groups were six times more likely than the general population to have a bachelor’s degree. In the Middle East, engineering schools are often the most competitive programs and only take the best and brightest students; jihadists were seventeen times more likely to have an engineering degree.

New recruits to al-Qaeda spent months or even years at training camps, where they were vetted by leadership for their mental stability and ideological purity. This vetting even applied to relationships among leaders. When the billionaire Osama bin Laden started to expand his network, he was selective about the social caliber of the people he chose to ally himself with. In 1999, when he met Abu Musab al-Zarqawi, the founder of what would become ISIS, he was suspicious of him not only for his extremist readiness to declare moderate Muslims apostates, but also because of Zarqawi’s criminal past.

But criminal pasts would eventually become a standout feature of European jihadists venturing toward Syria and Iraq. According to one study of a small database of European jihadists, 57 percent of eventual Syria-bound jihadists had a petty or violent criminal past. Studies of Syria-bound foreign fighters from Norway and Germany found that they were overwhelmingly from lower socio-economic backgrounds. Many recent European radicalization “hotspots” are neighborhoods known for their high rates of unemployment and crime. ISIS propaganda geared toward Europeans alluded to these criminal pasts by offering jihad as a form of redemption, claiming that “sometimes people with the worst pasts have the brightest futures.”

The evidence that early al-Qaeda members were more educated, psychologically stable, and ideologically grounded is consistent with a group in the early period of a movement’s development, consisting of self-organizing networks operating clandestinely. Nascent decentralized groups rely on a reputation for success as the prime attractor for new adherents. Failing at an attack would be embarrassing and costly, and so only the best and brightest were entrusted with such a duty.

On the other hand, ISIS operated like a traditional military in waging a local insurgency. It held and governed land in a way that al-Qaeda never did, and this loosened its stringency regarding recruits. The group sucked up fighters from areas under its control with promises of money and power, and appealed to the downtrodden of the Muslim diaspora to join its cause. Ideological purity, education, and law-abiding pasts took a back seat to the need for soldiers. If al-Qaeda, with its careful vetting and training, was the special forces of the jihadist movement, then ISIS was the infantry.

But as ISIS’s goals continued to evolve, so too did its recruits. Few women from Europe ventured to Syria in the early days of the conflict, but by 2014 one in seven European foreign fighters were women, and by 2016 that number had jumped to one in three. Women didn’t become more vulnerable to radicalization over that period—instead, they were targeted for radicalization. Until 2014, ISIS’s local insurgency demanded mostly young men of fighting capacity and thus had little need for women. In June 2014, ISIS declared its so-called Caliphate and shifted its focus to state-building. To legitimize that state, it explicitly sought the immigration of women, children, and families. Once the women arrived they began recruiting female friends, family members, and strangers over the Internet to pull in more “lionesses,” as they were often called, leading to the jump seen in 2016.

Since ISIS’s caliphate began collapsing in early 2016, the group has further expanded the types of recruits it uses. Women have planned to carry out attacks; new converts to Islam with no previous radical ties (known as “clean men”) have allegedly served as go-betweens connecting aspiring attackers with ISIS core members; lone actors (who have a higher incidence of mental illness than group actors) have been inspired or directed to attack; and people both younger and older than the norm have been recruited. The organization is exploiting all the resources at its disposal to maintain its strength in the eyes of its supporters.

These changes in patterns of recruitment show that profiles of recruits reveal more about changes in conflict dynamics than about the psychological vulnerabilities of certain demographics. Disaffected youth or marginalized communities may have been convenient targets for recruitment in recent circumstances, but long-term strategies for the prevention of radicalization must look beyond these current dynamics.

In addition, well-meaning policies that can be perceived as profiling run the risk of alienating the communities involved, as has been seen with the UK’s “Prevent” strategy. But even when we focus on a narrow range of times and locations it is hard to detect a pattern. The core members of the Paris-Brussels terrorist network were mostly petty criminals from a marginalized neighborhood in Brussels. The Barcelona attackers were well-integrated youth from a culturally cohesive rural town. What they do have in common is that they were both groups of siblings and childhood friends.

As the structures of terrorist organizations evolve, so too do their recruitment methods. In failed states, such as Syria, groups take on a hierarchical “command-cadre” structure, which resembles a formal military and allows the group to operate openly while providing security and governance in the area it controls. For some inhabitants of such areas, joining them may be more a matter of practicality than of conviction. In developed nations, such as those of Europe, terrorist groups must operate clandestinely and thus take on a “network” structure. Networks are self-organizing, though they often contain charismatic leaders who pull together disparate individuals and small groups of friends.

Prior to the US invasion in 2001, al-Qaeda had begun to establish a small-scale command-cadre structure in Afghanistan. It had a limited leadership structure and many hundreds of graduates from its training camps. The al-Qaeda leadership was hosted in Afghanistan by the Taliban, and the organization operated more like a venture capital firm, to which members of its various international networks would come to seek training, funds, and contacts.

European recruits of al-Qaeda in the 1990s and 2000s were often small groups of friends who would co-radicalize each other and then seek out opportunities to train in foreign camps. In a 2009 multi-nation study, researchers found that 75 percent of al-Qaeda members were recruited by a friend, 20 percent by a family member, and only 5 percent by a stranger. This recruitment pattern is what would be expected for a funding, plotting, and training structure like al-Qaeda that was waging a global jihad.

By contrast, the jihadist groups in Syria were waging a local insurgency and were setting up multiple command-cadre structures. In addition, by this time a series of prolific recruiters had gained a foothold in Europe. The hierarchical structures in Syria were able to work in tandem with their networks in Europe to create a mix of top-down and horizontal recruitment. For example, by 2015, nearly one in three Belgian foreign fighters in Syria were recruited by just two people: Khalid Zerkani and Fouad Belkacem. Some of those recruits then recruited their friends, which led to a social domino effect of radicalization.

Dirk Waem/AFP/Getty Images: Fouad Belkacem while on trial for “incitement to discrimination, hatred and violence against non-Muslims,” Antwerp, November 30, 2012

Much radicalization is this phenomenon of friends recruiting friends. Preliminary findings on Western ISIS fighters indicate that very few recruits were self-radicalized; for the vast majority, radicalization was facilitated through social interaction. The Internet can facilitate this, but the existence of very specific geographical hotspots that produce the bulk of jihadists indicates that, when it comes to recruitment, offline factors are more important than the Internet. The picture emerging of the Barcelona attackers is more typical of radicalization in Europe. A charismatic leader, in the form of a radical imam, began to groom at least four sets of brothers and close friends, who then further co-radicalized one another.

Anybody can be exposed to new moral beliefs, but when those beliefs become part of the day-to-day conversations of your friends, they have a greater chance of being acted upon. A common belief about those who join violent groups is that they are looking for brotherhood or sisterhood, and those groups certainly do offer that. But often it is in fact a pre-existing sense of belonging that is the risk factor. When radical ideas are introduced into tight-knit networks of friends, these groups act as echo chambers that reinforce those beliefs. The beliefs then act as a social glue that brings the friends closer to one another as a group, and distances the group as a whole from the rest of society.

As this process continues, the values become sacred and the identities of the individuals become fused with the group. Indeed, field studies by Artis International—a consortium of researchers and practitioners studying violent conflict, of which I am a part—of residents in two radicalization hotspots in Morocco show that it is the combination of holding a sacred value and being closely connected with your group of friends that motivates people to fight and die for their values. Strong identification with close comrades was a principal determinant of willingness to sacrifice oneself, a University of Oxford study found, among Libyan revolutionaries fighting the Qaddafi regime in 2011. My own studies on jihadist-group sympathizers in Paris and Barcelona show that, contrary to what many people believe, identification with Islam or the Muslim ummah (worldwide Muslim community) does not strongly predict willingness to fight and die for jihadist ideals. Instead, transcendent beliefs shared with close friends increased willingness to commit violence.

Most prevention policies aim to stop radicalization for every single person. This is a tall order and unlikely to succeed. A more evidence-based approach would be to try to mitigate group radicalization. Values and beliefs are socially embedded. Once the social setting changes, the beliefs may lose their grounding. For this reason, friends are not only crucial for the radicalization process but can be important in the prevention and de-radicalization process as well. Prevention, de-radicalization, and reintegration programs in Germany, Sweden, Denmark, and Sri Lanka have all used moderate friends and family members to pull a person away from violent extremism.

The existence of hotspots of radicalization can perhaps best be understood using epidemiology. When tracing back the origins of local European networks we often find a “patient zero” who is the first person to bring radical ideas into a community. This could be a recruiter, a radical imam as in the case of Barcelona, or any other person with the propensity and skills to spread extremist ideas. The rate of propagation of these ideas may partly be attributable to the sheer number of vulnerable individuals in those areas, though, again, it’s often friends and family members who act as catalysts between the ideas and new adherents. The rate of propagation may also be due to the bystander effect, whereby non-radical individuals do not report suspicious behaviors. This effect can be enhanced by rampant social disorganization in certain neighborhoods. If areas are already heavily afflicted by petty or organized crime, drug-dealing, or vandalism, then residents habituate to a level of nefarious behavior in their midst. This can be seen as a weakening of the community immune system, which in more organized areas would detect and expel the intruding ideas at an early stage.

Reducing social disorganization in certain communities may help increase their resistance to extremism. But bombarding radicalization hotspots with counter-radicalization programs—which often involves getting teachers, social workers, or community leaders to report on those they oversee—can make residents of those areas feel suspect, which may do more harm than good. Economic development may not be effective either. Southern European countries, such as Spain and Italy, have worse economic integration of their immigrant populations than do northern European countries, such as Sweden, Denmark, Germany, or the UK. Yet the northern European countries have higher per capita radicalization rates than the southern countries. Economic development of certain communities should be welcomed but it may not be the most effective strategy for preventing young men like the well-integrated Barcelona attackers from radicalizing.

Working directly with the non-radical friends and family members of those on terrorist watch lists avoids the pitfalls of other approaches. In most cases, non-radical friends and family have no idea their loved ones are on watch lists, and if they do, don’t know how to intervene. Programs that help facilitate this interaction could be successful.

Radicalization is a complex system that cannot be reduced to its individual factors. International conflicts, social networks, community, ideology, and individual vulnerabilities all combine to let radicalization emerge. Some of these factors may be more volatile, such as individual personalities, while others are more stable, such as social networks. But only a holistic view of this phenomenon can provide the understanding needed for designing policies to counter the pull of extremist groups.
