At the end of the eighteenth century, the French naturalist Jean-Baptiste Lamarck noted that life on earth had evolved over long periods of time into a striking variety of organisms. He sought to explain how they had become more and more complex. Living organisms not only evolved, Lamarck argued; they did so very slowly, “little by little and successively.” In Lamarckian theory, animals became more diverse as each creature strove toward its own “perfection,” hence the enormous variety of living things on earth. Man is the most complex life form, therefore the most perfect, and is even now evolving.
In Lamarck’s view, the evolution of life depends on variation and the accumulation of small, gradual changes. These are also at the center of Darwin’s theory of evolution, yet Darwin wrote that Lamarck’s ideas were “veritable rubbish.” Darwinian evolution is driven by genetic variation combined with natural selection—the process whereby some variations give their bearers better reproductive success in a given environment than other organisms have.1 Lamarckian evolution, on the other hand, depends on the inheritance of acquired characteristics. Giraffes, for example, got their long necks by stretching to eat leaves from tall trees, and stretched necks were inherited by their offspring, though Lamarck did not explain how this might be possible.
When the molecular structure of DNA was discovered in 1953, it became dogma in the teaching of biology that DNA and its coded information could not be altered in any way by the environment or a person’s way of life. The environment, it was known, could stimulate the expression of a gene. Having a light shone in one’s eyes or suffering pain, for instance, stimulates the activity of neurons and in doing so changes the activity of genes those neurons contain, producing instructions for making proteins or other molecules that play a central part in our bodies.
The structure of the DNA neighboring the gene provides a list of instructions—a gene program—that determines under what circumstances the gene is expressed. And it was held that these instructions could not be altered by the environment. Only mutations, which are errors introduced at random, could change the instructions or the information encoded in the gene itself and drive evolution through natural selection. Scientists discredited any Lamarckian claims that the environment can make lasting, perhaps heritable alterations in gene structure or function.
But new ideas closely related to Lamarck’s eighteenth-century views have become central to our understanding of genetics. In the past fifteen years these ideas—which belong to a developing field of study called epigenetics—have been discussed in numerous articles and several books, including Nessa Carey’s 2012 study The Epigenetic Revolution2 and The Deepest Well, a recent work on childhood trauma by the physician Nadine Burke Harris.3
The developing literature on epigenetics has forced biologists to consider the possibility that gene expression can be influenced by environmental factors, such as stress or deprivation, that were previously believed to have no effect on it, and that these influences may in some cases be heritable. “The DNA blueprint,” Carey writes,
isn’t a sufficient explanation for all the sometimes wonderful, sometimes awful, complexity of life. If the DNA sequence was all that mattered, identical twins would always be absolutely identical in every way. Babies born to malnourished mothers would gain weight as easily as other babies who had a healthier start in life.
That might seem a commonsensical view. But it runs counter to decades of scientific thought about the independence of the genetic program from environmental influence. What findings have made it possible?
In 1975, two English biologists, Robin Holliday and John Pugh, and an American biologist, Arthur Riggs, independently suggested that methylation, a chemical modification of DNA that is heritable and can be induced by environmental influences, had an important part in controlling gene expression. How it did this was not understood, but the idea that through methylation the environment could, in fact, alter not only gene expression but also the genetic program rapidly took root in the scientific community.
As scientists came to better understand the function of methylation in altering gene expression, they realized that extreme environmental stress—the results of which had earlier seemed self-explanatory—could have additional biological effects on the organisms that suffered it. Experiments with laboratory animals have now shown that these outcomes are based on the transmission of acquired changes in genetic function. Childhood abuse, trauma, famine, and ethnic prejudice may, it turns out, have long-term consequences for the functioning of our genes.
These effects arise from a newly recognized genetic mechanism called epigenesis, which enables the environment to make long-lasting changes in the way genes are expressed. Epigenesis does not change the information coded in the genes or a person’s genetic makeup—the genes themselves are not affected—but instead alters the manner in which they are “read” by blocking access to certain genes and preventing their expression. This mechanism can be the hidden cause of our feelings of depression, anxiety, or paranoia. What is perhaps most surprising of all, this alteration could, in some cases, be passed on to future generations who have never directly experienced the stresses that caused their forebears’ depression or ill health.
Numerous clinical studies have shown that childhood trauma—arising from parental death or divorce, neglect, violence, abuse, lack of nutrition or shelter, or other stressful circumstances—can give rise to a variety of health problems in adults: heart disease, cancer, mood and dietary disorders, alcohol and drug abuse, infertility, suicidal behavior, learning deficits, and sleep disorders. Since the publication in 2003 of an influential paper by Rudolf Jaenisch and Adrian Bird, we have started to understand the genetic mechanisms that explain why this is the case. The body and the brain normally respond to danger and frightening experiences by releasing a hormone—a glucocorticoid—that controls stress. This hormone prepares us for various challenges by adjusting heart rate, energy production, and brain function; it binds to a protein called the glucocorticoid receptor in nerve cells of the brain.
Normally, this binding shuts off further glucocorticoid production, so that when one no longer perceives a danger, the stress response abates. However, as Gustavo Turecki and Michael Meaney note in a 2016 paper surveying more than a decade’s worth of findings about epigenetics, the gene for the receptor is inactive in people who have experienced childhood stress; as a result, they produce few receptors. Without receptors to bind to, glucocorticoids cannot shut off their own production, so the hormone keeps being released and the stress response continues, even after the threat has subsided. “The term for this is disruption of feedback inhibition,” Harris writes. It is as if “the body’s stress thermostat is broken. Instead of shutting off this supply of ‘heat’ when a certain point is reached, it just keeps on blasting cortisol through your system.”
It is now known that childhood stress can deactivate the receptor gene by an epigenetic mechanism—namely, by creating a physical barrier to the information for which the gene codes. What creates this barrier is DNA methylation, by which methyl groups known as methyl marks (composed of one carbon and three hydrogen atoms) are added to DNA. DNA methylation is long-lasting and keeps chromatin—the DNA-protein complex that makes up the chromosomes containing the genes—in a highly folded structure that blocks access to select genes by the gene expression machinery, effectively shutting the genes down. The long-term consequences are chronic inflammation, diabetes, heart disease, obesity, schizophrenia, and major depressive disorder.
Such epigenetic effects have been demonstrated in experiments with laboratory animals. In a typical experiment, rat or mouse pups are subjected to early-life stress, such as repeated maternal separation. Their behavior as adults is then examined for evidence of depression, and their genomes are analyzed for epigenetic modifications. Likewise, pregnant rats or mice can be exposed to stress or nutritional deprivation, and their offspring examined for behavioral and epigenetic consequences.
Experiments like these have shown that even animals not directly exposed to traumatic circumstances—those still in the womb when their parents were put under stress—can have blocked receptor genes. It is probably the transmission of glucocorticoids from mother to fetus via the placenta that alters the fetus in this way. In humans, prenatal stress affects each stage of the child’s maturation: for the fetus, a greater risk of preterm delivery, decreased birth weight, and miscarriage; in infancy, problems of temperament, attention, and mental development; in childhood, hyperactivity and emotional problems; and in adulthood, illnesses such as schizophrenia and depression.
What is the significance of these findings? Until the mid-1970s, no one suspected that the way in which the DNA was “read” could be altered by environmental factors, or that the nervous systems of people who grew up in stress-free environments would develop differently from those of people who did not. One’s development, it was thought, was guided only by one’s genetic makeup. As a result of epigenesis, a child deprived of nourishment may continue to crave and consume large amounts of food as an adult, even when he or she is being properly nourished, leading to obesity and diabetes. A child who loses a parent or is neglected or abused may have a genetic basis for experiencing anxiety and depression and possibly schizophrenia. Formerly, it had been widely believed that Darwinian evolutionary mechanisms—variation and natural selection—were the only means for introducing such long-lasting changes in brain function, a process that took place over generations. We now know that epigenetic mechanisms can do so as well, within the lifetime of a single person.
It is by now well established that people who suffer trauma directly during childhood or who experience their mother’s trauma indirectly as a fetus may have epigenetically based illnesses as adults. More controversial is whether epigenetic changes can be passed on from parent to child. Methyl marks are stable when DNA is not replicating, but when it replicates, the methyl marks must be introduced into the newly replicated DNA strands to be preserved in the new cells. Researchers agree that this takes place when cells of the body divide, a process called mitosis, but it is not yet fully established under which circumstances marks are preserved when cell division yields sperm and egg—a process called meiosis—or when mitotic divisions of the fertilized egg form the embryo. Transmission at these two latter steps would be necessary for epigenetic changes to be transmitted in full across generations.
The most revealing instances for studies of intergenerational transmission have been natural disasters, famines, and atrocities of war, during which large groups have undergone trauma at the same time. These studies have shown that when women are exposed to stress in the early stages of pregnancy, they give birth to children whose stress-response systems malfunction. Among the most widely studied of such traumatic events is the Dutch Hunger Winter. In 1944 the Germans prevented any food from entering the parts of Holland that were still occupied. The Dutch resorted to eating tulip bulbs to ease their hunger. Women who were pregnant during this period, Carey notes, gave birth to a higher proportion of obese and schizophrenic children than one would normally expect. These children also exhibited epigenetic changes not observed in similar children, such as siblings, who had not experienced famine at the prenatal stage.
During the Great Chinese Famine (1958–1961), millions of people died, and children born to young women who experienced the famine were more likely to become schizophrenic, to have impaired cognitive function, and to suffer from diabetes and hypertension as adults. Similar studies of the 1932–1933 Ukrainian famine, in which many millions died, revealed an elevated risk of type II diabetes in people who were in the prenatal stage of development at the time. Although prenatal and early-childhood stress both induce epigenetic effects and adult illnesses, it is not known if the mechanism is the same in both cases.
Whether epigenetic effects of stress can be transmitted over generations needs more research, both in humans and in laboratory animals. But recent comprehensive studies by several groups using advanced genetic techniques have indicated that epigenetic modifications are not restricted to the glucocorticoid receptor gene. They are much more extensive than had been realized, and their consequences for our development, health, and behavior may also be great.
It is as though nature employs epigenesis to make long-lasting adjustments to an individual’s genetic program to suit his or her personal circumstances, much as in Lamarck’s notion of “striving for perfection.” In this view, the ill health arising from famine or other forms of chronic, extreme stress would constitute an epigenetic miscalculation on the part of the nervous system. Because the brain prepares us for adult adversity that matches the level of stress we suffer in early life, psychological disease and ill health persist even when we move to an environment with a lower stress level.
Once we recognize that there is an epigenetic basis for diseases caused by famine, economic deprivation, war-related trauma, and other forms of stress, it might be possible to treat some of them by reversing those epigenetic changes. “When we understand that the source of so many of our society’s problems is exposure to childhood adversity,” Harris writes,
the solutions are as simple as reducing the dose of adversity for kids and enhancing the ability of caregivers to be buffers. From there, we keep working our way up, translating that understanding into the creation of things like more effective educational curricula and the development of blood tests that identify biomarkers for toxic stress—things that will lead to a wide range of solutions and innovations, reducing harm bit by bit, and then leap by leap.
Epigenetics has also made clear that the stress caused by war, prejudice, poverty, and other forms of childhood adversity may have consequences both for the persons affected and for their future—unborn—children, not only for social and economic reasons but also for biological ones.
1. See our essay “Evolving Evolution” in these pages, May 11, 2006.
2. The Epigenetic Revolution: How Modern Biology Is Rewriting Our Understanding of Genetics, Disease, and Inheritance (Columbia University Press, 2012).
3. The Deepest Well: Healing the Long-Term Effects of Childhood Adversity (Houghton Mifflin Harcourt, 2018).
The flag of Malaysia’s Parti Keadilan Rakyat, or People’s Justice Party (PKR), is turquoise-blue with red stripes at both ends. At its center is a stylized white “O.” It symbolizes the black eye of Anwar Ibrahim, Malaysia’s former deputy prime minister, who was a rising political star in the 1990s until he criticized the ruling National Front, a right-wing coalition led by Dr. Mahathir Mohamad, and was shipped off to jail for alleged sodomy. In September 1998, before a show trial, Anwar was beaten up by a police chief. Thereafter, a photo of Anwar’s bruised face became a symbol of opposition to the National Front, which had, in one form or another, been in power since Malaysia achieved full independence in the early 1960s.
Mahathir claimed at the time that Anwar’s black eye was “self-inflicted,” caused by his “pressing a glass over his eyes.” Anwar went to jail for six years, until 2004. Then, incredibly, in 2015 he was jailed again—again for alleged sodomy. Anwar has openly criticized many establishment figures and has long been viewed by those in power as a volatile and threatening figure. He was released from prison, upon receiving a special royal pardon, only this week.
Mahathir had ruled Malaysia with an iron fist, crushing dissenters like Anwar, from 1981 to 2003. In 2018, at the age of ninety-two, he was back, campaigning to become prime minister once more—under the PKR’s turquoise-blue flag. His running-mate was Anwar Ibrahim’s wife, Wan Azizah Wan Ismail, who founded the PKR after her husband went to jail. Mahathir’s campaign promise was to obtain a pardon for Anwar if the coalition won, eventually to step down himself and hand over power to his former deputy. This unlikely-seeming team of former rivals buried their differences in the single-minded hope of ousting the spectacularly corrupt administration of Malaysia’s most recent prime minister, Najib Razak—and they did so knowing that they would need all the star power they could get.
Their convoluted alliance was at the heart of Malaysia’s historic general election last week. For the first time in the country’s post-independence history, an opposition coalition succeeded in unseating the National Front. Mahathir led the Pakatan Harapan, or the “Alliance of Hope,” against his own former party. Uniting this alliance was its animus against Najib, a genteel, foreign-educated former protégé of Mahathir’s, who has reportedly stolen almost $700 million from a state fund named 1Malaysia Development Berhad. That scandal, generally known by the initials “1MDB,” along with Najib’s oversight of an unpopular goods and services tax that aimed to simplify tariffs, also angered millions of voters.
Mahathir has referred to his part in supporting Najib’s rise as “the biggest mistake that I have made in my life,” and in 2016, he quit Najib’s party, the United Malays National Organization (UMNO), one of the main components of the National Front coalition. Mahathir’s show of remorse, backed by his efforts to make reparation, is something rarely seen in Malaysian public life and high office.
Even so, the opposition’s victory surprised everyone. Postmortems were written for Malaysia’s fourteenth general election months before it happened. The country was so gerrymandered, people assumed, and the National Front’s hold on government so strong, that Najib would easily win re-election, despite his unpopularity and scandals. Instead, turnout was high, reflecting popular discontent: over 12 million Malaysians, or 82 percent of those eligible, cast votes. The Alliance won 121 seats (of 222), giving it a decisive majority in parliament.
If Najib had won, the increasingly illiberal trend in Malaysian society toward a strongly conservative form of Islam would undoubtedly have continued. In recent decades, Malaysia has received extensive religious investment from Saudi Arabia, in the form of scholarships, mosques, preachers, universities, schools, and textbooks; the kingdom has systematically propagated its brand of puritanical Salafi Islam across the Muslim world as a matter of basic foreign policy. Under Najib, Salafi preachers and ideas gained mainstream platforms, and many members of the Salafi-identified Association of Malaysian Scholars have joined the UMNO outright.
“Najib needed Islamic legitimacy in order to boost his beleaguered image,” Ahmad Farouk Musa, a liberal Islamic scholar based in Kuala Lumpur, told me. “He got this from Malaysia’s Salafi network, many of whose figures have joined UMNO, increasingly defining its religious stances.” Meanwhile, Saudi leaders have also cultivated close personal ties with Najib; some are implicated in the 1MDB scandal, though they claim that a mysterious $681 million deposit to Najib’s personal account last year was a “genuine donation.”
Najib’s election defeat, however, does not mean that the trend of Islamicization in Malaysian society will be checked, since it predates his time in office. Malaysia has long had, for instance, parallel legal systems, with civil law for all citizens and Islamic law for the Muslims (and sharia courts in every state). Many prominent figures have railed against the Saudi investments and the “Arabization” of Malaysia’s religious traditions, including the Sultan of Johor (the constitutional monarch of that state) and Marina Mahathir, Dr. Mahathir’s eldest daughter and a prominent social activist, who has called it “Arab colonialism.” According to Mohamed Nawab Mohamed Osman, a Singapore-based security scholar, Malaysia may be the “weakest link” in Southeast Asia’s resistance to Islamist radicalization “because of the mainstreaming of puritan ideas.”
Lily Rahim, an associate professor at the University of Sydney, agrees that the popular uptake of Salafi ideas has “certainly increased” in the last decade, “encouraged by the UMNO-led government, opposition party PAS, conservative Islamic NGOs, state ulama [clerics] and the Islamic bureaucracy.” The UMNO may have suffered a heavy setback in this year’s election, but the country’s main Islamist party, the Malaysian Islamic Party (PAS), increased its share of the popular vote from 15 percent to 17 percent. (The president of the PAS, Haji Hadi Awang, is himself a Saudi university alumnus.) The PAS quit the Alliance of Hope last year because it wanted to impose hudud, or strict Islamic law, in Kelantan, a rural state that today stages public floggings of criminals.
“The natural alliance now is between UMNO and PAS [against the Hope Alliance],” Tom Pepinsky, a political scientist and Southeast Asia specialist at Cornell University, told me. This would unite the most vocally Malay party and the most vocally Muslim one. “That’s a large majority, essentially all Malay Muslim… and we know that conversations between their leadership have been happening for a while now.” Pepinsky said they would be a force to be reckoned with at the next election, which must take place by May 2023.
Malaysia has entered a period of uncertainty. No one seemed able to predict what would happen once Najib’s re-election bid was rejected. After all, Malaysia had never before seen a democratic transfer of power.
Thousands gathered outside the State Palace on May 10, a day after the election, waving and wearing PKR flags. Most had had a sleepless night after the polls closed, tracking the results on their screens and watching Mahathir claim victory, but with no official word all night from either Najib’s camp or the Election Commission. Najib finally conceded at around noon, enabling Mahathir to be sworn in by King Muhammad V, the country’s constitutional monarch. Mahathir was driven through the palace gates around 4:30 PM, in formal Malay traditional dress of a stiff black cap and a gold-threaded sarong, for the swearing-in, which was scheduled for 5 PM.
A cloudless afternoon turned to dusk, and then to night. Mahathir had still not been sworn in. Some young Malaysians camped out on a hill overlooking the palace while others stocked up on buttery roti canai flatbreads from a nearby café. No one went home. Mahathir had made Thursday a national holiday, and Friday, too. Marina Mahathir unexpectedly joined the crowd at around 8:30 PM. She had declared on Twitter that she was too exhausted to join her father in person, but when the delay became apparent, she turned her car around on the way to the airport—she had been headed to a speaking engagement in Bangladesh—and came to the palace after all.
“Well, he doesn’t tell us anything,” she told me, talking about her father’s sudden decision to return to political life. “We had a feeling he would run, because a lot of gears were turning these last few months, but he is very much his own man.” She sat on the sidewalk, wearing a loose black vest, slacks, and ballet flats. “Given how long this is going on, I realized I just couldn’t leave the country tonight.”
A minor celebrity in her own right, she was greeted by well-wishers and people asking for photos with her on the lawn of the palace grounds. “Do you think they are discussing something inside?” asked one elderly man, whom she had greeted warmly.
“Discuss what, lah!?” she said, using the common Malay interjection to signal her distaste. The Malay royals are known to dislike Mahathir, and no one expected him to receive an especially warm welcome. Never had an election victory turned out to be so suspenseful. The entire day, with its endless delays, was a puzzle. No one was quite sure of the result, nor could they go to work, so most people just recharged their smartphones, waiting for word in a daze.
Finally, a few minutes after 9:30 PM, Mahathir’s meeting with the king was broadcast.
“It’s on Amani!” Marina told the crowd, and everyone tuned in on their phones to the network that was live-streaming the ceremony. We watched her father meet an incredibly bored-looking king, who slouched through the proceedings, a cell phone visibly bulging in his shirt pocket.
So far, though, Mahathir’s elaborate election strategy has proceeded according to plan. He is firmly in power, and arranged for Anwar to be pardoned right before Ramadan. He also prevented Najib Razak from leaving the country, purportedly so that he can face trial for his alleged corruption.
This is all promising, but there are still several more steps before Anwar can realize Mahathir’s promise of power. To start with, Anwar would need to win a parliamentary seat of his own; the most likely scenario is that his wife will resign from hers, triggering an election in which he will stand in her stead. Mahathir, meanwhile, has given himself a leisurely timeline of two years before he plans to cede his position, by which time he will be ninety-four (he is already the world’s oldest state leader).
Less encouraging is that the stunning electoral upset has done little so far to reverse the erosion of civil liberties in Malaysia—a process that began well before Najib’s administration. During his last term, Mahathir himself behaved like an autocrat, stamping out challenges from opponents through political maneuvering, sacking judges, crushing freedom of assembly, and jailing critics. He also handed out government contracts to cronies and issued antisemitic diatribes. (Although there are hardly any Jews in Malaysia, Mahathir has made ready use of demagogic conspiracy-theory tropes, claiming that Jews “rule the world by proxy.”) This week, in a blow to press freedom, Mahathir announced that he will not repeal Malaysia’s widely criticized “anti-fake news law,” enacted by Najib ahead of the election.
Mahathir has also always aggressively promoted affirmative action for ethnic Malays, who make up more than 50 percent of the population. Back in 1970, he wrote a controversial book, The Malay Dilemma, arguing that the Malay race was naturally nonconfrontational and lazy, and that it was this that led to their subjugation by British colonists and then Chinese businessmen; this racial disadvantage, he proposed, needed the redress of affirmative action in order to preserve Malays’ status as bumiputera, sons of the soil.
Mahathir’s pro-Malay policies have been criticized in the past for causing economic stagnation, encouraging discrimination, and spurring the flight of talented ethnic Chinese and Indian Malaysians. Although Mahathir abandoned the UMNO, he transferred his allegiance to the Malaysian United Indigenous Party, which has an almost indistinguishable focus on ethnic Malays. It seems unlikely that he will abandon Malaysia’s race-based policies. Anwar, however, has said in an interview since his release that he hopes eventually to reform the race-based affirmative action policies in favor of a “more transparent” system based on merit and socioeconomic class.
Many commentators believe that if civil liberties improve at all under the new administration, it won’t be because of Mahathir but thanks to the people around him. “I’m under no illusions that Mahathir is a liberal democrat, but I do think that he allied himself with opposition parties founded on the premise that civil liberties deserve respect under Malaysian law,” Pepinsky told me. “If he doesn’t uphold at least some of them, he will have a tough time serving his coalition.”
The mood in Malaysia remains optimistic, even as the mundane logistics of power transfer and administration-building take the place of initial jubilation. The election result was an extraordinary proof that Malaysian democracy is not simply theoretical.
Outside the State Palace on May 10, I shared my phone screen with Dennis Ignatius, a former ambassador to Chile and Argentina who is now a newspaper columnist. He wears round glasses and speaks in the clipped tone of someone who spent his early years under British colonial rule. “For a long time, it felt like this whole region has gone dark,” he said. “Cambodia, Myanmar, the Philippines, Vietnam.” Today, Southeast Asia’s political establishment is packed with authoritarians. “I still can’t believe we managed to change something here.”
For years, he has been railing against the National Front, corruption, and Saudi investments—albeit with necessarily delicate phrasing—in his columns. Like most other Alliance voters, he never imagined his vote would actually propel them to victory. “I am sixty-seven years old,” he said. “This morning, for the first time ever, I woke up as a man without a mission.”
As for Anwar Ibrahim, he was released from a prison hospital to an ecstatic crowd of admirers on May 16, a day before the first fast of the holy month of Ramadan. He emerged in a natty black suit, and spontaneously canceled the expected news conference because of the boisterous throng of supporters that awaited him.
“I have forgiven him,” he said later, when asked about Najib. “But the issue of injustice toward the people, crimes committed against the people, endemic corruption that has become a culture in this country, that he has to answer for.”
Anwar is seventy years old and has spent the last twenty of them in and out of jail. He has promised to run for parliament within Mahathir’s two-year interregnum, but not immediately. First, he said, he would need some “time and space.”
Al-Britannia, My Country: A Journey Through Muslim Britain
Europe’s Angry Muslims: The Revolt of the Second Generation
One of the consequences of the defeat of ISIS in Iraq and Syria is that many of the estimated five to six thousand Europeans who had gone to Mesopotamia to fight for or live under its so-called caliphate are now coming home. Depending on the policies of their respective countries, they may be jailed, closely watched, or placed in rehabilitation programs (France is the toughest, Denmark among the most understanding). No one knows how much of a threat these returnees pose. They may be repentant and ready to be lawful citizens, or they may be planning acts of revenge. Regardless of the setbacks suffered by ISIS on the battlefield, the list of atrocities committed by Muslim jihadis on European soil continues to grow. From a series of bomb, vehicle, and knife assaults in Britain in the first half of 2017—the worst of which, claiming twenty-two lives, took place during a concert at the Manchester Arena—to more recent ones in Spain and France, it is clear that the appetite of a tiny number of Muslims for killing infidels is undimmed.
Although terrorist attacks in Europe continue to attract much attention, they don’t dominate the news as much as they did when they were a horrendous novelty back in 2014 and 2015. That terrorists can create localized but not widespread panic has been proved time and again; Sir Jeremy Greenstock, the former head of the United Nations’ counterterrorism committee, has aptly described Islamist terrorism as “a lethal nuisance.”
And yet this nuisance has made some progress in achieving what the French social scientist Gilles Kepel, one of Europe’s foremost authorities on Islamist militancy, maintains was the goal behind bringing the jihad home: creating an unbridgeable “fracture” between Europe’s Muslims and non-Muslims. According to Kepel, after engaging in holy war against the Soviets in Afghanistan, the jihadis moved on to the “near abroad” (Bosnia, Algeria, and Egypt), before homing in on the West itself, first—and in most dramatic fashion—the US, and since 2012 Western Europe, in a campaign of attacks by small, often “home-grown,” cells and individuals.1 That longed-for fracture gave Kepel the title of his latest addition to the vast literature on the subject, La Fracture, published a few months after its author was included on a hit list of seven French public figures singled out for execution by the jihadi assassin Larossi Abballa. (Abballa was killed by police in June 2016 after murdering a police officer and his wife.) “If you want to kill me, kill me,” Kepel taunted the jihadis during an interview in April 2017 with The New York Times.
In contrast to attacks committed by non-Muslims such as Stephen Paddock, who massacred fifty-eight people at a concert in Las Vegas on October 1, 2017, jihadi attacks have repercussions on the communities and traditions that are believed to have encouraged them. Each atrocity increases by a fearsome multiple the distrust, surveillance, and interference to which Muslims in the West are subject. In the month following the Arena bombing, the Manchester police logged 224 anti-Muslim incidents, compared to thirty-seven in the same period a year earlier. On June 19, 2017, when a white Briton, Darren Osborne, plowed his van into a group of Muslims in North London, killing one, many Britons, including ones I spoke to, felt that the Muslims had had it coming.2 This is what Kepel means by fracture: jihadism engenders a reaction “against all Muslims,” while populist politicians “point the finger at immigrants or ‘Islam.’”
Many Europeans of Muslim heritage, of course, are contributing to the life of their adoptive nations, very often while retaining elements of their faith and culture.3 But with each jihadi attack, the oft-heard formula that Islam is a religion of peace that has been perverted by an isolated few loses currency.
This was brought home to me a few days after the North London attack as I listened to an exchange on a popular radio show between the host, Nick Ferrari, and Khola Hasan, a prominent female member of one of Britain’s sharia councils. (These bodies spend much of their time dissolving unhappy marriages contracted under Islamic law and have been criticized for arrogating the duties of the state.) It began with Hasan dismissing ISIS as a “death cult that is pretending to have grown from within the Muslim faith.” So opportunistic was the jihadists’ attachment to Islam, she went on, that the mass murderers who rampaged about shouting “Allah” could just as easily be yelling “Buddha” or “Jesus.” “Remind me,” Ferrari interjected glacially, “of the last time a group of Christians…blew up children coming out of a pop concert, because I must have been off that day,” and he went on to say that “something about the faith” had caused the current problems. “That’s the whole point!” Hasan replied, in obvious distress. “It’s not about the faith,” she said, before the telephone line mysteriously—and rather symbolically—went dead.
According to an opinion poll commissioned by Le Figaro in April 2016, 63 percent of French people believe that Islam enjoys too much “influence and visibility” in France, up from 55 percent in 2010, while 47 percent regard the presence of a Muslim community as “a threat,” up from 43 percent. A poll conducted in Britain around the same time found that 43 percent of Britons believe that Islam is “a negative force in the UK.” Many British Muslims, I was told by a Muslim community activist in Leeds, spent the hours after the Las Vegas massacre “praying that the perpetrator wasn’t a Muslim,” for had he been, it would have led to furious responses online, in addition to the usual round of ripped-off hijabs and expletives in the street, if not actual physical threats.
The integration of Muslims became a political issue in Europe in the 1980s. In Britain, Muslim activists began to split from the black community, and tensions developed between the two. In France, years of neglect by the government, combined with the popularization of Salafist ideas through the Afghan jihad, undermined the complacent old assumption that the country’s North African immigrants were inevitably socialists and secularists. French cultural control, or dirigisme, and British multiculturalism—their respective approaches to the integration of immigrants—are logical extensions of their contrasting versions of empire (France’s civilizing mission versus British laissez-faire) and informed in Britain’s case by the principle of diversity inherent in a conglomerate United Kingdom. (Germany has adopted an uneasy mixture of the two.)
When the French banlieues erupted in rioting in 2005, and again when jihadist terror struck France in January 2015, many Britons attributed France’s troubles to its unwise policy of forcing North African immigrants to adopt all aspects of French culture, including a reverence for the French language and a secular and republican ideology. The British, by contrast, allowed different communities to maintain their diverse characteristics while seducing them with symbols such as Parliament and the crown. Even the London bombings of 2005, in which fifty-two people were killed, didn’t do lasting damage to this optimistic approach, in part because the eight-year gap before the next jihadist attack encouraged Britons to think of it as an aberration.
Back in 1975, the authors of a UK government document on education argued that “no child should be expected to cast off the language and culture of the home as he crosses the school threshold.” Here was the essence of multiculturalism, under which the state saw no advantage in weakening the ties of community and tradition that bound together citizens of foreign origin. On the contrary, they should be encouraged—as the Greater London Council put it—to “express their own identities, explore their own histories, formulate their own values, pursue their own lifestyles.”
Education and public housing were two areas in which multiculturalism made substantial inroads. In many cases no efforts were made to dilute a homogeneous immigrant community, which led to monocultural areas such as—in James Fergusson’s phrase in Al-Britannia, My Country, his highly sympathetic new survey of British Muslims—“the Muslim marches of east Birmingham,” while schools were encouraged to acknowledge the histories and cultures of “back home.” As a result of Britain’s multiculturalist program, under which communities were encouraged to represent themselves, somewhat in the manner of the British Raj, Muslim-majority areas gained Muslim councilors, mayors, and members of Parliament in much higher numbers than similar areas did in France.4
Bradford, in West Yorkshire, is one of several former manufacturing towns in England that have given shape to multiculturalism in its most controversial form. Not all of Britain’s Muslims live in enclosed communities, by any means; the Muslims of East London and Leicester are substantially intermixed with other arrivals of various religions. But parts of Bradford—in the tight little streets around Attock Park, for instance, where the women dress in Pakistani jilbab gowns—are defiantly monocultural. Bradford’s Muslim community of at least 130,000 is dominated by families from a single district of Pakistan-administered Kashmir, many of whom migrated in the 1960s. The skyline is punctuated by the minarets of 125 mosques, and the city’s systems of education, philanthropy, and commerce (not to mention marriage and divorce) have been formed by the attitudes of a close-knit, extremely conservative community. Poverty and drug-related crime have afflicted the town, but I was told repeatedly during recent visits that if it weren’t for their cohesion, ethos of self-help, and faith in an exacting but compassionate God, the Muslims of Bradford would be much worse off.
In recent years, England’s encouragement of multiculturalism has weakened in response to terrorist attacks and a rapid increase in the Muslim population, which has doubled since 2000 to more than three million people. By 2020 half the population of Bradford—which, besides being one of the country’s most Muslim cities, has one of its highest birth rates—will be under twenty years old. Responding to this demographic shift and the fear of terrorism, Britain under David Cameron and, more recently, Theresa May has given the policy of multiculturalism a very public burial, a shift that seems entirely in tune with the defensive impulses that led a small majority of voters to opt for Brexit. (I was told in Bradford that many Muslim inhabitants of the city also voted for Brexit, to indicate their displeasure at the recent arrival of Polish and Roma immigrants.) Typical of the panicky abandonment of a venerable article of faith was May’s reaction to the terrorist attack on London Bridge in early June 2017, after which she demanded that people live “not in a series of separated, segregated communities, but as one truly United Kingdom.” A central element of the government’s anti-extremism policy is the promotion of “British” values such as democracy, the rule of law, individual liberty, and tolerance.
One of the observations made in the 1975 government report on education was that the mothers of some pupils in British schools might be living “in purdah, or speak no English.” The report’s tone was neutral—nowhere did it suggest that this represented a threat to the civic state. In today’s environment, however, this statement might seem to provide evidence of communities spurning the British way of life—and along with it the values of emancipation and individual fulfillment that official British culture holds dear. The ghetto, runs this line of thinking, is the first stop on a journey to extremism and terrorism.
That such a direct connection exists between Muslim communities and extremist organizations is of course widely contested. In fact, many Islamist terrorists have left suffocating communities in order to find a new “family” among globalists whose aim, in the words of another French scholar of jihadis, Olivier Roy, is “a new sort of Muslim, one who is completely detached from ethnic, national, tribal, and family bonds.”5 It is most often this new “family,” and not a Muslim-dominated hometown, that inculcates jihadist ideology. The very Muslim societies and schools that Britons criticize for their illiberal attitudes may, because they encourage family and community cohesion, be a more effective obstacle to revolutionary jihadism than any amount of government propaganda. But Britons from the liberal left and the patriotic right both view conservative Muslim communities as patriarchal, chauvinistic, and homophobic—in many cases with good reason. And the old policy of deferring to more “moderate” Islamic groups has been undermined by the perception that they are stooges, or not very moderate.
The end of multiculturalism and its replacement with heightened surveillance and the emphasis on national cultural values are dealt with in detail in Fergusson’s Al-Britannia. Under the government’s anti-radicalization program, called Prevent, teachers, hospital staff, and other public sector workers must report anyone they regard as actual or potential extremists. Everyone agrees on the necessity of identifying potential radicals, whether Islamist or neofascist, but under Prevent many Muslims have been unjustly labeled, bringing shame that lasts long beyond the “voluntary” counseling that follows referral. And the gaffes committed in the name of Prevent are legion. The room of an Oxford student (a Sikh, as it happens) was searched after she was overheard praying, and a fourteen-year-old schoolboy was questioned by the authorities after he used the word “ecoterrorism.”
Britain’s newfound hostility toward illiberal Muslim enclaves was brought to light by a national scandal in 2014, when details were revealed of an alleged plot to Islamize schools in Birmingham, the country’s second-largest city. Over the previous decade there had been a concerted effort to bring more Muslim staff into schools in Muslim-majority parts of Birmingham. Regular prayer and the Ramadan fast were assiduously promoted by school authorities, and some segregation between the sexes was introduced. On occasion teachers expressed rebarbative views on homosexuality and women’s rights. But in reinforcing these Islamist values—and downgrading the “British” values they were supposed to champion—the educators at these schools also created the conditions for a rise in academic and disciplinary standards, with a concomitant effect on exam results and employment opportunities.
After the scandal broke, teachers were suspended at three Birmingham schools, which had an adverse effect on morale and exam results. Just one disciplinary charge was upheld by the subsequent tribunal, and allegations of a plot have been discredited. But the damage to the reputation of the schools and their pupils has been considerable.
For Fergusson, the fracture in today’s Britain isn’t so much between Muslims and non-Muslims as among Muslims: the young are pushing against their parents, and the combination of sexual temptation and stalled incomes (marriage is a luxury many cannot afford) creates frustration that can slide into nihilism. Fergusson has been criticized for being too sympathetic to some Islamists who have been painted as menaces by the media, such as Asim Qureshi, a senior member of the Islamic advocacy group Cage, who in 2015 described Mohammed Emwazi, the British ISIS militant known as “Jihadi John,” as a “beautiful young man.” But Fergusson’s belief that British Muslims should be valued because of their faith, not in spite of it, is a major improvement on the self-interested toleration that has often passed for an enlightened position on the Muslim question.
The late American political scientist Robert S. Leiken, for example, whose book Europe’s Angry Muslims was recently reissued with a preface on the rise of ISIS, argued that one reason Westerners must combat anti-Muslim discrimination is that “such bigotry robs us of allies, including Muslims in the West, just when we have an imperative need for informants.” It is hard to think of a statement against intolerance more hostile to the notion that a Muslim might actually be “one of us.”
While Fergusson believes that recognizing the faith and values of Muslim communities is essential to society’s cohesion, Kepel regards it as a weapon that has been placed in the hands of Islamists by liberal bleeding hearts. Kepel’s journey is an interesting one. In 2004 he was a member of a commission that recommended—and secured—the prohibition of ostentatious religious symbols in French schools (the so-called “headscarf ban”). But he has also spent much of his career as a high-flying academic investigating the combination of poverty, cultural entrapment, and Salafist ideas that has created a sense of alienation in today’s Muslim generation. His previous book, Terror in France, was a meticulous modern history of Muslim France through various causes célèbres, from the publication of cartoons lampooning the Prophet Muhammad in 2005, to Marine Le Pen’s complaint in 2010 that sections of society were under “occupation” (for which she was acquitted on hate crime charges), to the first jihadist attacks on French soil in 2012.
Especially interesting in light of the Muslim generational divide that Fergusson concentrates on is Kepel’s description of an abortive attempt by the Union des organisations islamiques de France to impose order on the riot-hit banlieues in 2005. This body of religious and lay leaders, considered by many to be the most powerful in France, had lost prestige because of its inability to prevent the passing of the headscarf ban, and was dominated by aging Muslim Brothers out of touch with the young who were out trashing cars. The fatwa issued by the union’s theologians declaring vandalism to be haram, or illicit, had the opposite effect of the one intended. The following day more than 1,400 vehicles were destroyed and thirty-five policemen were wounded.
Kepel’s La Fracture is a collection of radio essays and commentaries on events in France and the Islamic world. Its most interesting part is the long epilogue, in which he displays his distrust, common among establishment intellectuals, of communal politics, which are held to be at odds with French republicanism’s emphasis on equal opportunity. The trouble, as Kepel sees it, lies not simply in the violence the jihadis perpetrate, but in the long game of their fellow travelers who aim to foment the fracture while working just within the framework of the law. In a secular republic like France, he writes, “a religious community cannot be represented as such in the political sphere”—but this is exactly the principle that a new generation of Muslim activists is trying to subvert.
Kepel singles out Marwan Muhammad, the energetic and well-educated young leader of a pressure group called the Collective against Islamophobia in France, as perhaps the most dangerous of these fellow travelers. Over the summer of 2016, following a jihadi attack in Nice that killed eighty-six people, several seaside towns issued a ban on burkinis. Muhammad was prominent in orchestrating the subsequent protests against what he called a “hysterical and political Islamophobia,” which gave France much unflattering publicity and ended with a judge declaring the bans illegal. Kepel regards the storm around burkinis—along with another outcry around the same time over the mistreatment of two Muslim women in a restaurant—as populist agitations masquerading as human rights campaigns. He accuses such movements of calling out instances of “Islamophobia” (the quotation marks are his) with the express purpose of making Muslims feel like victims.
There is no room for naiveté in this discussion. Plenty of Westerners were told by Ayatollah Khomeini while he was in his French exile that he had no desire to introduce Islamic rule in Iran. Within a year of his return to Iran in February 1979 the country was an Islamic republic. It is possible that Marwan Muhammad and others dream of a Muslim France. I don’t know if this is the case and neither does Kepel—though he has his suspicions. Certainly, there is much in the doctrinaire French enforcement of laïcisme that reminds me of the despairing measures of Turkey’s secular republic before it was finally overwhelmed by the new Islamists led by Recep Tayyip Erdoğan. But France, despite its large and fast-growing Muslim population—around 8.4 million, or one-eighth of the total population—isn’t about to fall to the new Islamists. And whatever the ultimate goal of activists like Muhammad, their entry into the political mainstream and adept advocacy for increased rights bode well for the goal of integrating Muslims into European institutions.
President Emmanuel Macron has made friendly overtures to France’s Muslims, and during his campaign last year he acknowledged that terrible crimes were committed by the French in Algeria. On November 1 anti-terrorism legislation came into force that transferred some of the most repressive provisions of France’s state of emergency—which ended on the same day—into ordinary law. Prefects will continue to be allowed to restrict the movement of terror suspects and shut down places of worship without a court order, even if raids on people’s homes—a particularly controversial feature of the state of emergency—are now possible only with the permission of a judge. To be Muslim will be to remain under suspicion, to be belittled, profiled, and worse. As in Britain, the short-term imperative of keeping people safe is proving hard to reconcile with the ideal of building a harmonious society.
Europe has become more anti-Muslim as it has become more Muslim. Though it is hard to find many cultural affinities between the Pakistanis of Bradford, the Algerians of Marseille, and the Turks of Berlin, Islam remains the main determinant of identity for millions of people. That this is the case in hitherto multicultural Britain and laïque France suggests that, for all the differences between the two countries’ systems and the relative tolerance of the British one, neither has been able to solve the problem of Muslim integration. As long as this remains the case, and as long as the Muslim population continues to increase so quickly, Islam will continue to cause apprehension among very large numbers of Europeans. They have made their feelings clear by supporting anti-immigration candidates in election after election across the continent, stimulated in part by Angela Merkel’s profoundly unwise decision in 2015 to admit more than one million refugees to Germany.
The panic is caused by rising numbers, sharpened by fears over terrorism. Governments can act to allay both of these things, and they are acting. But they must also recognize Islam for what it is: a European religion. Seldom does mainstream Western discourse make room for the good Islam does—stabilizing communities, minimizing crime and delinquency, and providing succor for millions. Like the Christian temperance movements of the nineteenth century, Islam wards off the alcoholic disarray that descends on many towns on Friday and Saturday nights; one need only visit the emergency room of an urban hospital in “white” Britain to see the wreckage. There are many factors in Europe’s Muslim crisis, but perhaps the most fundamental is that Islam is never part of any general consideration of values in a successful modern society. Its position is at the margins of society, spoken at rather than engaged with.
In support of this thesis Kepel cites an influential online screed by a Syrian-born veteran of the Afghan jihad who goes by the nom de guerre Abu Musab al-Suri. His Call for a Global Islamic Resistance (2005) prophesies a civil war in Europe that will create the conditions necessary for the triumph of the global caliphate.
See my article following that attack, “Britain: When Vengeance Spreads,” NYR Daily, June 24, 2017.
The mayor of London, Sadiq Khan, for instance, enjoys much popularity among liberal-minded Londoners of all backgrounds, while not hiding the fact that his faith is important to him.
Although Muslims are nowadays more represented in French public life than they once were, and an estimated fifteen of them entered the National Assembly in the 2017 elections, in some of the Muslim-majority communes around Paris one still has the impression of being in a French colony. In and out of the regional administrations step men and women “of French stock,” as the somewhat agricultural phrase goes, salaried, suited, and assured of a decent pension. They are administrators of a brown population as surely as their forefathers were in Oran or Rabat.
See Olivier Roy, Jihad and Death: The Global Appeal of Islamic State, translated by Cynthia Schoch (Hurst, 2017).
The Bank of England is one of Britain’s distinct contributions to history. It was chartered in 1694 to lend money to King William for war on France, when a company of London merchants received from Parliament the right to take deposits in coin from the public and to issue receipts or “Bank notes.” The bank financed a summer’s fighting in the Low Countries, gave the business district of London, known as the City, a currency for trade, and lowered the rate of interest for private citizens.
In the next century, the Bank of England developed for Crown and Parliament sources of credit that permitted Britain, during 115 years of intermittent warfare, to contain and then defeat France and to amass, in Bengal and Canada, the makings of an overseas empire. Through “discounts,” or unsecured lending to merchants and bankers, the bank provided the City with cash and influenced rates of lending and profit, and thus the course of trade. In the war that followed the French Revolution of 1789, it was forced to stop paying its banknotes in gold and silver; it had issued more notes than it had gold with which to back them. Across the Channel, for want of a public bank of his own, King Louis XVI of France lost his kingdom and his head.
In the nineteenth century the Bank of England became the fulcrum of a worldwide system based on gold and known as “the bill on London.” Over a succession of City crises at approximately ten-year intervals, it took on the character and functions of a modern central bank. By World War I, it was propping up an empire living beyond its means. In conjunction with the Federal Reserve Bank of New York and the German Reichsbank, the Bank of England’s longest-serving governor, Montagu Norman, sought to develop a club of central banks that would impose on the chaos of international commerce and the caprices of government a pecuniary common law. In reality, the Bank of England had to fight ever-greater runs on sterling until, in 1992, it was routed and sterling was taken out of the European Exchange Rate Mechanism, the forerunner of the euro.
Even in the postwar period, the bank had successes. In 1986 it directed a dismantling of anticompetitive practices in London’s stock and bond markets that heralded a quarter-century of British prosperity. It also invented the phrase (“Big Bang”) by which the reforms came to be known. Nationalized in 1946, the bank recovered some of its independence in 1997. After the collapse of Lehman Brothers in New York in 2008, it flooded London with money. Shorn of many of its ancient functions and traditions, its armies of scriveners long gone to their graves, the bank is now headed by a Canadian, Mark Carney. What would be more shocking to the shades of the strict Protestants who supplied the bank’s first Court of Directors and shareholders: the 120th governor is a Roman Catholic.
Historians such as Sir John Clapham, John Fforde, Richard Sayers, and Forrest Capie have told parts of this story. David Kynaston tells it all, or at least up to 2013. A careful and thorough writer, Kynaston made his name in the 1990s with a history in four volumes of the City of London after 1815, much of it drawn from the Bank of England’s archive. He then turned to general social history in Austerity Britain, 1945–1951; Family Britain, 1951–1957; and Modernity Britain, 1957–1962. The later books brought him readers who would not be interested in discount brokers and commercial paper.
In Till Time’s Last Sand Kynaston devotes four chapters to the lives of the thousands of clerks who powered this engine of credit and the warren in Threadneedle Street where they worked, one chapter each for the eighteenth and nineteenth centuries, and two for the twentieth. He records the arrival of round-sum banknotes, telephones, women, college graduates, computers, economists, and management consultants in an overstaffed and tedious institution. His models are the novelists of London such as Dickens, Trollope, Gissing, Galsworthy, and Wells. His title comes from an elegy for Michael Godfrey, the first deputy governor, blown to bits next to King William on a visit to the siege trenches around Namur in 1695.
What there is not in this book is political economy, which may be a fault from one or more points of view, but suits its subject. As Montagu Norman is supposed to have said, the place is “a bank and not a study group.” The Bank of England never “got” economics, but sometimes in governors’ speeches and in evidence to parliamentary committees it showed an interest in economic theory (bullionist, Keynesian, or monetarist) to please or mislead the west, or political, end of London. Later on, it published a quarterly bulletin that seemed particularly designed to obfuscate. Down the long corridor of the bank’s history, such doctrines are apparitions.
The Bank of England was always less secretive than tongue-tied. “It isn’t so much that they don’t want to tell you what’s going on,” a twentieth-century banker said. “It’s more that they don’t know how to explain it: they’re like a Northumbrian farmer.” David Kynaston explains the Bank of England.
In the 1690s King William’s ministers became intrigued by a City catchphrase, “a fund of Credit.” It meant that a secure parliamentary tax over a number of years could be used to pay the annual interest on a loan to the king by the City’s merchants. Such a tax could, as we would say, be capitalized.
In April 1694 Parliament granted the king duties on ships’ cargoes and beer and spirits up to £100,000 a year, which would be sufficient (after £4,000 in management expenses) to pay an 8 percent interest on a loan of £1.2 million to make war with France. The City merchants who put up the loan were allowed to incorporate as “the Governor and Company of the Banke of England.” As Adam Smith wrote later, the king’s credit must have been bad “to borrow at so high an interest.”
The bank’s charter, which was at first for just eleven years, was prolonged again and again, always at the price of a further payment to the Crown. Parliamentary acts of 1708 and 1742 gave the Bank of England a monopoly on joint-stock (i.e., shareholder-owned) banking in London and its environs. In 1719 the finance minister of France, the Scottish-born John Law of Lauriston, devised a scheme to convert the liabilities of the bankrupt French Crown into shares of a long-term trading company. In a sort of panic, men such as the journalist and novelist Daniel Defoe warned that unless Britain followed, it would forfeit all the country had gained from twenty years of warfare. “Tyranny has the whip-hand of Liberty,” Defoe wrote.
The bank was sucked into an auction against the South Sea Company for the privilege of refunding the British national debt, only to find that South Sea, by bribing members of Parliament with free shares, had rigged the auction. In the spring and summer of 1720, shares in South Sea rose to giddy heights and then collapsed, and the bank was there to rescue the state creditors. The prime minister, Sir Robert Walpole, swept South Sea’s bribes under the parliamentary carpet.
As a consequence, the bank gained a position in the British constitution that it has never entirely lost and a satrapy in the eastern part of London, where king and Parliament must ask leave to enter. From its quarters in Grocers’ Hall, in Poultry, the bank moved in 1732 to its present address in Threadneedle Street, where it expanded to occupy a three-acre city block. Rather against its nature, the Court employed as its architect in the later part of the century a talented bricklayer’s son, John Soane, who did more than anyone to create the neoclassical style in bank architecture that conceals, behind columns and pediments, the rampage that is deposit banking.
In 1745 the bank survived a run when supporters of the former ruling house of Stuart, which had been ousted by King William in 1688, invaded England from the north. The bank paid its notes in silver shillings and sixpences. During the “Gordon Riots” of June 1780, an armed crowd sacked London’s prisons and was repulsed from the main gate of the bank by cavalry and foot guards. The incident gave rise to the Bank Picquet, a night watch of redcoats in bearskin helmets that was only stood down in 1973.
More perilous than either of those setbacks were the demands for cash the bank faced in the 1790s from the wartime prime minister, William Pitt the Younger. Inundated with short-term government bills it was expected to cash on sight, amid a business slump and an invasion scare, the Bank of England lost almost all its reserves and, early in 1797, told Pitt it could no longer pay out in gold. In March of that year, the Irish MP and playwright Richard Brinsley Sheridan created the popular image of the bank as “an elderly lady in the City, of great credit and long standing, who…had contracted too great an intimacy and connexion at the St James’s end of town.” Two months later the cartoonist James Gillray drew a lanky, freckled Pitt stretching to land a kiss on a gaunt dame dressed in banknotes, with notes in her hair for curl papers, sitting on a chest of treasure: “POLITICAL RAVISHMENT, or The Old Lady of Threadneedle-Street in danger!”
During the Restriction—the period, lasting until 1821, when Parliament lifted the requirement that the bank convert banknotes into gold—it printed notes of £1 and £2 for the use of the public at large. Forgery was rife, and the bank sent hundreds of men and women to the gallows or the penal colonies for the crime. For the journalist William Cobbett, who like everybody else in those days thought gold and silver coins were money in a way that banknotes were not, there was no offense in forging a forgery. “With the rope, the prison, the hulk and the transport ship,” Cobbett wrote in 1819, “this Bank has destroyed, perhaps, fifty thousand persons, including the widows and orphans of its victims.” Behind these antique British institutions, there is always the flash of a blade.
Gold was a hard master. The bank all but stopped converting banknotes again in 1825. Having been reluctant for years to discount to Jewish houses, it was saved by a shipment of French bullion from Nathan Rothschild. As the government in Westminster became better organized and the country’s banks consolidated and grew, there were complaints (in the words of a Norfolk banker) that “the pecuniary facilities of the whole realm should thus depend on the management of a small despotic Committee.”
In 1844 the Peel Banking Act (after Sir Robert Peel, the prime minister) gave the bank a monopoly over the issuing of banknotes in England but prescribed that any increase must be matched by an increase in gold reserves. Constrained in its banking business and overtaken by giant joint-stock banks forged by mergers outside London, the bank became a lender of last resort, ready to mobilize its resources and those of the City to keep isolated cases of bad banking practice from paralyzing trade. It became the arbiter of who was fit to do business in the City and who was not.
The bank came to this position not through theory but in the tumult of events, above all the failure of the discount broker Overend Gurney in 1866 and the first rescue of the blue-blooded Baring Brothers & Co in 1890. Though it could not drop Bank rate (its short-term interest rate) too far or fast without losing its gold reserves, the bank could nudge interest rates in the direction it wanted. The central bank familiar from our times took shape.
With the outbreak of war in 1914, the bank again stopped payment in gold. When £350 million in war loans failed to sell in full, the governor, Lord Cunliffe, without telling his colleagues, let alone the public, bought the unsold £113 million for the bank’s account.
Montagu Norman, “Old Pink Whiskers” (as Franklin D. Roosevelt called him), ruled the bank from 1920 until 1944 and made so many enemies in the Labour Party and in British industry that the bank’s nationalization after World War II became inevitable. Devoted to the idea of the British Empire, Norman was too high-strung and unconventional to be part of it, like a character from Rudyard Kipling (whom he adored). Under his advocacy, Britain returned to gold in 1925, but industry could not tolerate the high interest rates needed to sustain the value of sterling on the global market. There was a general strike in 1926, and in 1931 Britain left the gold standard (no doubt for all time). Those events made the reputation of Norman’s principal opponent, John Maynard Keynes, whose doctrine of stimulating demand in bad times became British orthodoxy until it perished amid soaring consumer prices and a 17 percent Bank rate in the later 1970s.
Norman was close to both Benjamin Strong of the Federal Reserve Bank of New York and Hjalmar Schacht of the German Reichsbank. He thought of central bankers as “aristocrats,” as he wrote to Strong in 1922, who would dispel the “castles in the air” built by troubled democracies. Misled by Schacht, who was himself deceived, Norman did not see that for Adolf Hitler a central bank was just another weapon of war. Parliament was outraged in 1939 when, after Hitler’s invasion of Czechoslovakia, Norman transferred to the Reichsbank £6 million in gold deposited in London by the Czechoslovak National Bank. “By and large nothing that I did,” he wrote two years before his death in 1950, “and very little that old Ben did, internationally produced any good effect.”
Norman demolished pretty much all of Soane’s bank but the curtain wall and erected in its place the present building, designed by Sir Herbert Baker. Nikolaus Pevsner, the expert on England’s buildings, called the destruction more terrible than anything wrought by the Luftwaffe in World War II: “the worst individual loss suffered by London architecture in the first half of the 20th century.” Kynaston thinks that judgment unfair.
With the clarity of hindsight, many authors have said that the postwar Bank of England, by 1946 just an arm of the UK government in Whitehall, should never have tried to maintain sterling as a currency in which foreign countries held their reserves. Siegmund Warburg, the outstanding London merchant banker of the postwar era, said that Britain was now the world’s debtor, not creditor. As a reserve currency, he much later recalled having argued, sterling was “a very expensive luxury for us to have…. The Governor of the Bank of England at the time didn’t like this statement at all.” Charles Goodhart, one of a handful of economists who trickled into Threadneedle Street, thought the bank should have sought an alliance with the Continental central banks. “The Bank (and Whitehall) exhibited devastating Euro-blindness,” he wrote.
Kynaston has no time for such brooding. Sterling remained a reserve currency, in part because, as a consequence of the war against the Axis powers, Britain was too poor to redeem the sterling still in use abroad. What follows is a story of Pyrrhic victories and sanguinary defeats, fading British military strength, bad judgment in Whitehall, poor management of industry, and, between 1970 and 1990, a rise in consumer prices greater than in the previous three hundred years combined. In a word, found by Kynaston in the bank's market report for Friday, November 17, 1967, when its dealers spent £1.45 billion to defend sterling's value against the dollar and failed: "Crucifixion."
Yet this was also the period, as Kynaston shows, when the bank encouraged London to capture the dollars that had piled up in Europe as a result of the Marshall Plan and US imports of foreign luxuries, and turn them into loans for European industry. Who needed automakers if you had eurobonds (and the Beatles)? As a market for capital, the City regained almost all the ground it had lost since 1914.
Step by step, so as not to frighten the horses, the bank began to reform. In 1969 the management consulting firm McKinsey & Co was invited in. “I will…tell them,” John Fforde, the chief cashier, said of the two McKinsey partners, “we would be pleased to see them at lunch from time to time and will add, tactfully, that we would not expect them to lunch with us every day.” Kynaston has an ear for this sort of thing.
The highlight of the book is his account of relations between the bank and Margaret Thatcher, who came to power at No. 10 Downing Street in 1979 and at once abolished the paraphernalia of exchange control that had been in place to protect sterling since 1939. Seven hundred and fifty bank employees, housed at an ugly building by St. Paul’s Cathedral, were put to doing something else.
Thatcher and her chancellors were at first devotees of the doctrines of the Chicago economist Milton Friedman, who held that restricting the growth of money (variously defined) would halt the rise in consumer prices, rather as night follows day. As Kynaston writes, “It was monetarism or bust at No. 10.” The bank obliged with an array of monetary measures, set up targets, aimed at them, and missed.
Chancellor Nigel Lawson cooled on monetarism and instead instructed the bank to maintain sterling's exchange rate with the deutsche mark, the West German currency, so as to impose on the UK economy the discipline and order of German industry. That policy, and its successor, membership in the European Exchange Rate Mechanism, came to grief on "Black Wednesday," September 16, 1992, when those who bet against sterling overwhelmed the bank. At 4 PM that day, the bank's dealing room stopped buying pounds. A US banker recalled: "Everyone sat in stunned silence for almost two seconds or three seconds. All of a sudden it erupted and sterling just free-fell. That sense of awe, that the markets could take on a central bank and actually win. I couldn't believe it."
While the Continental countries moved toward the euro, a chastened Britain sought its own solution. The credit of the Bank of England was not what it was, but it was still a great deal more solid than that of any particular UK government. Why not give the bank back its independent power to set interest rates free of the pressures of Parliament and prime minister? That idea, outlined by Lawson back in the 1980s and killed by Thatcher, was executed in 1997 by the Labour chancellor Gordon Brown, who established at the bank an independent Monetary Policy Committee. There followed ten years of prosperity and price stability.
At the same time, the bank lost its regulatory function. The bank had always governed the City not through rules but by small acts of disapproval, often so subtle that they were known as the "Governor's eyebrows." This approach became antiquated after the Big Bang, the 1986 deregulation of London's securities markets, brought to London foreign bankers less attuned to the facial expressions of Englishmen, and in any case the bank had failed to detect fraud and money-laundering at the Pakistani-owned Bank of Credit and Commerce International. Bank regulation was transferred to the Financial Services Authority, which did it no better. Regulation was returned to the bank in 2013.
Kynaston had access to the bank's archives only through 1997, and his long final chapter, covering the twenty-first century, is drawn from newspaper reports, a couple dozen interviews, and the speeches of governors Eddie George (until 2003) and Mervyn King (2003–2013). He makes no great claim for it. There is nothing here of cryptocurrencies like bitcoin, of Mark Carney's polymer banknotes, or of his far-flung speeches on subjects from Scottish independence to global warming, which would have astonished his 119 predecessors.
As it turned out, the calm that followed bank independence was perilous. With its eyes fixed on consumer prices, the bank failed to act when the price of assets such as real estate started going through the roof. As King put it just before being appointed governor, "you'll never know how much you need to raise interest rates in order to reduce asset prices." The banks found themselves with loans secured on hopelessly overvalued collateral. In 2008, three British banks—Royal Bank of Scotland, Bank of Scotland, and Lloyds—and a couple of building societies (savings and loans) lost their capital. It was the greatest bank failure in British history.
Like its counterparts in the US, Japan, and continental Europe, the Bank of England bought securities from the surviving commercial banks in the hope that they would use the cash created to make loans and forestall a slump in business. This program, known as quantitative easing, has added nearly £500 billion to British banks’ reserves. Whether it has stimulated trade, hampered it, or had no effect at all is impossible to say. As Eddie George put it in other circumstances, and very much in the Bank of England style, “It is easy to slip into the position of the man on the train to Brighton who kept snapping his fingers out of the window to keep the elephants away. Since he saw no elephants, his technique was self-evidently effective.”
Brexit presents the Bank of England with all the challenges of the past three hundred years plus a few more. The Tories and the Scottish Nationalists detest Mark Carney, while Labour wants to move the bank from London to Birmingham, an industrial city in the English Midlands with little by way of banking, no stock exchange, and fewer places of recreation. To survive its fourth century, the bank will need all its cunning.
Nobuyoshi Araki and Nan Goldin are good friends. This is not surprising, since the seventy-seven-year-old Japanese photographer and the younger American are engaged in something similar: they make an art out of chronicling their lives in photographs, aiming at a raw kind of authenticity, which often involves sexual practices outside the conventional bounds of respectability. Araki spent a lot of time with the prostitutes, masseuses, and “adult” models of the Tokyo demi-monde, while Goldin made her name with pictures of her gay and trans friends in downtown Manhattan.
Of the two, Goldin exposes herself more boldly (one of her most famous images is of her own battered face after a beating by her lover). Araki did take some very personal photographs, currently displayed in an exhibition at New York City’s Museum of Sex, of his wife Yoko during their honeymoon in 1971, including pictures of them having sex, and of her death from cancer less than twenty years later. But this work is exceptional—perhaps the most moving he ever made—and the kind of self-exposure seen in Goldin’s harrowing self-portrait is largely absent from Araki’s art.
Araki is not, however, shy about his sexual predilections, recorded in thousands of pictures of naked women tied up in ropes, or spreading their legs, or writhing on hotel beds. A number of photographs, also in the current exhibition, of a long-term lover named Kaori show her in various nude poses, over which Araki has splashed paint meant to evoke jets of sperm. Araki proudly talks about having sex with most of his models. He likes breaking down the border between himself and his subjects, and sometimes will hand the camera to one of his models and become a subject himself. But whenever he appears in a photograph, he is posing, mugging, or (in one typical example in the show) holding a beer bottle between his thighs, while pretending to have an orgasm.
If this is authenticity, it is a stylized, theatrical form of authenticity. His mode is not confessional in the way Goldin’s is. Araki is not interested in showing his most intimate feelings. He is a showman as much as a photographer. His round face, fluffy hair, odd spectacles, T-shirts, and colored suspenders, instantly recognizable in Japan, are now part of his brand, which he promotes in published diaries and endless interviews. Since Araki claims to do little else but take pictures, from the moment he gets up in the morning till he falls asleep at night, the pictures are records of his life. But that life looks as staged as many of his photographs.
Although Araki has published hundreds of books, he is not a versatile photographer. He really has only two main subjects: Tokyo, his native city, and sex. Some of his Tokyo pictures are beautiful, even poetic. A few are on display in the Museum of Sex: a moody black-and-white view of the urban landscape at twilight, a few street scenes revealing his nostalgia for the city of his childhood, which has almost entirely disappeared.
But it is his other obsession that is mainly on show in New York. It might seem a bit of a comedown for a world-famous photographer to have an exhibition at a museum that caters mostly to the prurient end of the tourist trade, especially after having had major shows at the Musée Guimet in Paris and the Tokyo Photographic Art Museum. But Araki is the least snobbish of artists. The distinctions between high and low culture, or art and commerce, do not much interest him. In Japan, he has published pictures in some quite raunchy magazines.
The only tacky aspect of an otherwise well-curated, sophisticated exhibition is the entrance: a dark corridor decorated with a kind of spider web of ropes that would be more appropriate in a seedy sex club. I could have done without the piped-in music, too.
Sex is, of course, an endlessly fascinating subject. Araki’s obsessive quest for the mysteries of erotic desire, not only expressed in his photographs of nudes, but also in lubricious close-ups of flowers and fruits, could be seen as a lust for life and defiance of death. Some of these pictures are beautifully composed and printed, and some are rough and scattershot. One wall is covered in Polaroid photos, rather like Andy Warhol’s pictures of his friends and models, except that Araki’s show a compulsive interest in female anatomy.
Genitalia, as the source of life, are literally objects of worship in many Japanese Shinto shrines. But there is something melancholy about a number of Araki’s nudes; something frozen, almost corpse-like about the women trussed up in ropes staring at the camera with expressionless faces. The waxen face of his wife Yoko in her casket comes to mind. Then there are those odd plastic models of lizards and dinosaurs that Araki likes to place on the naked bodies of women in his pictures, adding another touch of morbidity.
Nobuyoshi Araki: Komari from "L'Amant d'août" ("Suicide in Tokyo"), 2002 (Private Collection)
Nobuyoshi Araki: Tokyo Comedy, 1997 (Private Collection)
Nobuyoshi Araki: Winter Journey, 1989–1990/2005 (Taka Ishii Gallery, Tokyo)
But to criticize Araki's photos—naked women pissing into umbrellas at a live sex show, women with flowers stuck into their vaginas, women in schoolgirl uniforms suspended in bondage, and so on—for being pornographic, vulgar, or obscene is rather to miss the point. When the filmmaker Nagisa Oshima was prosecuted in the 1970s for obscenity after stills from his erotic masterpiece, In the Realm of the Senses, were published, his defense was: "So, what's wrong with obscenity?"
The point of Oshima’s movie, and of Araki’s pictures, is a refusal to be constrained by rules of social respectability or good taste when it comes to sexual passion. Oshima’s aim was to see whether he could make an art film out of hardcore porn. Araki doesn’t make such high-flown claims for his work. To him, the photos are an extension of his life. Since much of life is about seeking erotic satisfaction, his pictures reflect that.
Displayed alongside Araki's photographs at the Museum of Sex are a few examples of shunga, the erotic woodcuts popular in Edo-period Japan (1603–1868), pictures of courtesans and their clients having sex in various ways, usually displaying huge penises penetrating capacious vulvas. These, too, have something to do with the fertility cults of Shinto, but they were also an expression of artistic rebellion. Political protest against the highly authoritarian shoguns was far too dangerous. The alternative was to challenge social taboos. Occasionally, government officials would demonstrate their authority by cracking down on erotica. They still do. Araki has been prosecuted for obscenity at least once.
One response to Araki’s work, especially in the West, and especially in our time, is to accuse him of “objectifying” women. In the strict sense that women, often in a state of undress, striking sexual poses, are the focus of his, and our, gaze, this is true. (Lest one assume that the gaze is always male, I was interested to note that most of the viewers at the Museum of Sex during my visit were young women.) But Araki maintains that his photographs are a collaborative project. As in any consensual sado-masochistic game, this requires a great deal of trust.
Some years ago, I saw an Araki show in Tokyo. Susan Sontag, who was also there, expressed her shock that young women would agree to being “degraded” in Araki’s pictures. Whereupon the photographer Annie Leibovitz, who was there too, said that women were probably lining up to be photographed by him. Interviews with Araki’s models, some of which are on view at the Museum of Sex, suggest that this is true. Several women talk about feeling liberated, even loved, by the experience of sitting for Araki. One spoke of his intensity as a divine gift. It is certainly true that when Araki advertised for ordinary housewives to be photographed in his usual fashion, there was no shortage of volunteers. Several books came from these sessions.
But not all his muses have turned out to be so contented, at least in retrospect.
Earlier this year, Kaori wrote in a blog post that she felt exploited by him over the years (separately, in 2017 a model accused Araki of inappropriate contact during a shoot that took place in 1990). Araki exhibited Kaori’s photos without telling her or giving her any credit. On one occasion, she was told to pose naked while he photographed her in front of foreign visitors. When she complained at the time, he told her, probably accurately, that the visitors had not come to see her, but to see him taking pictures. (The Museum of Sex has announced that Kaori’s statement will be incorporated into the exhibition’s wall text.)
These stories ring true. It is the way things frequently are in Japan, not only in relations between artists and models. Contracts are often shunned. Borderlines between personal favors and professional work are blurred. It is entirely possible that Araki behaved badly toward Kaori, and maybe to others, too. Artists often use their muses to excite their imaginations, and treat them shabbily once the erotic rush wears off.
One might wish that Araki had not been the self-absorbed obsessive he probably is. Very often in art and literature, it is best to separate the private person from the work. Even egotistical bastards, after all, can show tenderness in their art. But in Araki’s case, the distinction is harder to maintain—this is an artist who insists on his life being inseparable from his pictures. The life has dark sides, and his lechery might in contemporary terms be deemed inappropriate, but that is precisely what makes his art so interesting. Araki’s erotomania is what drives him: aside from the posing and self-promotion, it is the one thing that can be called absolutely authentic.
Nobuyoshi Araki: Erotos, 2013 (Taka Ishii Gallery, Tokyo)
Nobuyoshi Araki: Flowers, 1985 (Private Collection)
“The Incomplete Araki: Sex, Life, and Death in the Works of Nobuyoshi Araki” is at the Museum of Sex, New York, through August 31.
In the April 5 issue of The New York Review, you published an article entitled “Knifed with a Smile.” I would like to make a few comments regarding that text:
The text states that none of the whistleblowers recall receiving an apology from Karolinska Institutet (KI). That could very well be the case; however, it is an irrefutable fact that on December 20, 2016, KI published a public apology in one of the most widely distributed and read Op-Ed pages of all the Swedish dailies, Dagens Nyheter: “That the whistleblowers who raised the alarm about Paolo Macchiarini’s activities were not listened to is unacceptable, and here, in this article, KI publicly apologizes to the whistleblowers.” (Text in Swedish: “Att de visselblåsare som larmade om Paolo Macchiarinis verksamhet inte blev hörda är oacceptabelt och här ber KI offentligt visselblåsarna om ursäkt för det.”) This information was sent to Professor Elliott on October 31, 2017.
Furthermore, the article implies that KI is actively trying to forget or hide the Paolo Macchiarini case. This is simply not correct. There is an ongoing investigation of scientific misconduct regarding publications in which Paolo Macchiarini is named as the principal investigator. A decision by Karolinska Institutet’s vice-chancellor regarding that investigation will come later this spring. During the past two years, Karolinska Institutet has also implemented new and revised routines and guidelines and reinforced supervision as a direct consequence of this case—and there are more changes yet to come.
Finally, Karolinska Institutet has made a significant effort to create and maintain total transparency throughout the process and case regarding Paolo Macchiarini and his involvement with KI. This includes openly referring to and citing criticism against KI. Please take a look at our website for more details: ki.se/en/news/the-macchiarini-case-timeline.
Peter Andréasson
Chief Press Officer
Karolinska Institutet
Carl Elliott replies:
Peter Andréasson’s letter should give no one any confidence that the Karolinska Institute has learned from the Macchiarini scandal. It is true that Karin Dahlman-Wright, the pro-vice-chancellor of the Karolinska Institute, informed me of an article in Dagens Nyheter on December 20, 2016, which included a brief statement of apology to “the whistleblowers.” However, the supposed apology seemed almost laughably inadequate. It concerned a single finding of research misconduct by Macchiarini in an article published in Nature Communications, and the misconduct had concerned tissue engineering in rats. Yet by the time the article appeared, Macchiarini had been charged with manslaughter; the vice-chancellor of the Karolinska Institute had resigned in disgrace; an external review had identified research misconduct in six published papers by Macchiarini; and at least five patients who had gotten synthetic trachea implants from Macchiarini were dead.
In the article cited, Dahlman-Wright addressed only the single instance of research misconduct, not the suffering inflicted on patients. She did not apologize for the fact that the leaders of the Karolinska Institute and the Karolinska University Hospital had protected Macchiarini. She failed to mention that institutional leaders had reported the whistleblowers to the police and threatened to fire them. In fact, she did not even do the whistleblowers the courtesy of mentioning their names. Neither does Andréasson. For the record, the whistleblowers are Matthias Corbascio, Thomas Fux, Karl-Henrik Grinnemo, and Oscar Simonson.
When I followed up by e-mail to Dahlman-Wright, I questioned the effectiveness of an apology that the whistleblowers themselves did not consider sufficient. I also pointed out that they had been threatened with dismissal and reported to the police. In response, Dahlman-Wright referred me again to the article in Dagens Nyheter and said my questions about the threats should be directed to the Karolinska University Hospital. A spokesperson for the hospital replied that it had no plans to apologize. When I put the same questions to the newly appointed vice-chancellor, Ole Petter Ottersen, he declined to answer and referred me back to the statement by Dahlman-Wright.
For years officials at the Karolinska Institute insisted that the whistleblowers were wrong and that we should believe the administration instead. Many did, and the results were disastrous. Now Karolinska Institute officials are again insisting that the whistleblowers are wrong and that true reform is underway. Who should we believe this time?
In a footnote to his review of Adam Becker’s book What Is Real?: The Unfinished Quest for the Meaning of Quantum Physics [NYR, April 19], David Albert mentions that during the years I worked with David Bohm on my doctoral dissertation I “never heard of Bohm’s theory until long after [my] dissertation had been completed.” This is a bit misleading. I was Bohm’s student at Birkbeck College during the years 1963 to 1965 and had written a paper on Bohm’s “pilot wave” theory as an undergraduate at the University of Cape Town. In fact, the paper was the reason Bohm agreed to take me on as a graduate student. What is true, as far as I recall, is that, apart from our initial meeting, he never talked about the theory during my time at Birkbeck.
Albert’s remark that after facing “a cruel wall of silence” Bohm “seems to have decided not to mention his beautiful theory to anybody ever again,” and that it was only “sometime in the 1980s that a small and embattled community of physicists, mathematicians, and philosophers, who had learned of the theory from Bell, began to take an active interest in what Bohm had done,” is revisionist history. Bohm renewed work on his theory in a series of papers with Basil Hiley in the 1980s after Hiley showed him a numerical simulation of Bohmian particles passing through a two-slit system forming an interference pattern (published as joint work by Christopher Philippides, Christopher Dewdney, and Hiley in the Italian physics journal Il Nuovo Cimento in 1979).
Jeffrey Bub
Distinguished University Professor
University of Maryland
College Park, Maryland
David Z Albert replies:
I am grateful to Professor Bub for correcting my account of his conversations with David Bohm—I must have somehow misunderstood, or misremembered, what he told me about them. And I am sorry if I inadvertently produced the impression that Bohm’s very real and very profound discouragement with the reception of his theory in the 1950s persisted for the rest of his life—Professor Bub is quite right in pointing out that Bohm eventually changed his mind and took his theory up again in the 1980s. I knew that, of course, and did not mean to suggest otherwise.
From my table next to a large window inside a café, I watch the young man. The orange glow of a late afternoon sun drapes him in thick layers, lying across his shoulders and accenting his face. I recognize him for the East African that he is, a young man of Eritrean or Ethiopian origin with a slender frame, delicate features, and large eyes. He has the gaunt look of other recently arrived immigrants whom I have met, a thinness that goes beyond a natural state of the body. He moves differently from one accustomed to the space he inhabits; his gait is a series of cautious, jagged steps forward. He appears frightened, overly sensitive to those who brush past him. He seems as if he is trying to coil inside himself, shrink enough to avoid being touched. Though I can note all of these details, I know there is nothing really special about him, not in Florence, Italy. He is just one of the many refugees or migrants who have made their way here from East Africa, a physical embodiment of those now-familiar reports and photographs of migration.
Pedestrians amble past on the narrow sidewalk, casting long shadows in the golden light of dusk. They are caught up in their private conversations, lost in the steady rhythm of their exchanges. They are unaware of the young man I am observing, staring past my own reflection to get a better look. They do not realize that he is picking up speed behind them, his body stiffening with each passing second. He bends forward at the chest, slightly at first, then as if he might tip over from his own momentum. He moves that way for several paces before he starts to push past pedestrians, oblivious to those he nearly trips. He is a wild, wayward figure careening carelessly through the busy sidewalk, distracted by his own thoughts.
Then, abruptly, he stops. He is so still that curious eyes turn on him, this sunlit figure stepping calmly into the middle of the busy intersection. He stands there, immobile and slightly stunned as cars come to a halt and motorcyclists slow. Traffic waits for him to move. Instead, he begins to gesture, a conductor leading an invisible orchestra. His bony arms bend and extend, propelled by an energy only growing stronger. Each sweep of his hand pulls the rest of him upward then twists him in an awkward circle. He continues as observers pause, then shake their heads and walk on by. Soon, he is working his mouth around words, and even before he starts, I know he is about to shout.
I let everything else disappear so I can focus on the developing scene. People move past him, irritated but still polite. Motorists carefully angle around his intruding figure. Everyone ignores him as best as they can, treating him as no more than a mild disturbance, unremarkable. He continues gesticulating, his head turning one way then the other, his actions getting progressively faster. There is a strange kind of rhythm beginning, an erratic dance that is leaving him desperate to keep pace. While I watch, something squeezes against my chest and makes me take a sudden breath. I don’t understand the ache that fills me. Or maybe I just do not want to recognize it. Maybe I do not want to find the words because to do so would mean to tumble down somewhere dark, far from this bright and busy street.
I have come to the café to escape the day’s barrage of disturbing news. I have come with a notebook and my pen to distance myself from reminders of the turbulence continuing in America, in Ethiopia, in the Mediterranean, in the Middle East, in Europe: everywhere. I have come to find a way out of what I know in order to make my way toward a space where I can imagine, unhindered by unnecessary distractions. I have come to be alone, to write in solitude, free of the noise that has seemed to follow me for months, or perhaps it has been years. It is hard to know how to measure time, how to orient oneself when horror and shock begin to embed themselves into the pulse of daily life. It has become easy to live in the present moment, to spin from one disturbing event to the next, to move so quickly between disasters that entire days are spent in stupefied surprise.
Lazarus, I think, as I keep watching this young man: a defiant body refusing stillness, resisting quietness. A body using noise to stay alive, to move, to be seen. The waitress comes to take my order and smiles down at my notebook. I notice the couple next to me eyeing it warily, as if they are afraid I am taking notes on their conversation. No one seems to be aware of the drama unfolding outside the café where a young black man with unkempt hair is spinning in increasingly wide circles, motioning wildly, shouting incoherently at passersby. He is a spectacle without an audience. He is an actor in Shakespeare’s tale, full of sound and fury.
He spins and flings his arms. He throws up a hand and snaps his wrist. He closes a palm over an ear and listens to his own whispers. He frowns and smiles, laughs alone, then twirls and catches another stranger’s stare. There is anger in his spastic energy. There is sorrow and confusion in his eyes. He is breaking, I say to myself, and doing what he can to keep himself together. My reflection catches my eye and so I put my head down, and in my notebook I write: “You did not leave home like this. This is what the journey does.” It comes again, that ache in the middle of my chest. For a moment, it is so strong that I am sure he can feel it. I am certain it is a tether binding us together and he will turn in just the right way and I will be exposed. If he looks at me, then our lives will unfold and in front of us will be the many roads we have taken to get to this intersection in Florence and we will reveal ourselves for what we are: immigrant, migrant, refugee, African, East African, black, foreigner, stranger, a body rendered disobedient by the very nature of what we are.
When I glance up again, the young man has quieted down. Now, he looks almost bored as he weaves between pedestrians while twisting a lock of hair around a skinny finger. He moves lazily, as if he has accomplished what he set out to do. From where I sit, it looks as though he is walking toward me, but he is simply following the sidewalk, and soon it will force him to proceed directly past the open door of the café where I am. As he saunters past, I notice a small bald patch on the back of his head. It is a perfect circle, as if a round object was placed on his scalp to burn away his hair through to his skin. I tell myself that I cannot possibly know what it is, that it could be an illusion, it could be just a leaf stuck in his hair, but that is not enough to keep myself from flinching.
Stories come back to me, told by a friend who crossed the Sahara to get to Europe by way of North Africa. He spoke of horrifying treatment at the hands of human traffickers and police in detention centers and makeshift prisons. He shared what he could and skipped the rest. In moments when several who made the journey were gathered, I would watch them point to their scars to help fill the lapses in their stories. Sometimes, there was no language capable of adding coherence to what felt impossible to comprehend. Sometimes, it was only the body that bore the evidence, pockmarks and gashes forming their own vocabulary. Staring at the busy intersection, I don’t want to consider what this young man might have gone through to arrive in Italy, to be in the street on this day. That he is alive is a testament to his endurance. What he has been subjected to, what might have caused that scar, what was too much for his mind to accept—these thoughts lead the way to far darker realities than I can possibly know. I look back at the first note I took upon seeing him: “You did not leave home like this. This is what the journey does.”
Lazarus was given the chance to walk again in the land of the living. On one hand, it was a simple proposition: he obeyed the command to stand up and he was able to live. The rest of his days paled in the brilliant light of this astounding miracle. It is easy to imagine that he moved gracefully through his new existence, a man pulsing with this exposure to divine grace and might. We want to think that when he rose from the dead, he did so untainted and unburdened. That it was a rebirth, free of unsettling wisdom. But Lazarus was an ordinary man who opened his eyes to find himself incomprehensible. Somewhere between the end of this life and his second chance, he shifted forms, became a miracle and a stranger, remolded from loved one to aberration.
Medical science understands death to be a process rather than a single event. Though it might seem cataclysmic and sudden, the body passes through several stages before it no longer lives. The various organs that support it collapse one by one, and each must cease all activity for an extended period of time in order for a person to be declared dead. It is not sufficient for the heartbeat and circulation alone to stop, for example; they must cease long enough for the brain to also die. The end of life involves a journey, a series of steps before that ultimate destination. A body requires certain signposts to nudge it in the right direction. An abrupt shift in that progressive movement disrupts the order of things. It deforms a natural process and leaves behind something warped and unrecognizable.
Perhaps this explains Lazarus’s complete silence in John 11 and 12 in the Bible. To give him a voice would mean to grapple with the messiness that his resurrection created. It would be to insert a complex, human component in a direct and potent lesson. Though the Sanhedrin wanted to kill him along with Jesus Christ, though his resurrected life and all that it represented was as much a threat to them as the claims of Jesus, Lazarus is not allowed to speak. He is a muted miracle, still alive today as a metaphor for uncanny second chances. We have found many ways to make use of his example, but we do not know what to do with the living man. In part, it is because the Bible reveals so little about him. His story ends when he is no longer convenient. But to assume that he became worthless once he stepped free from his grave is to shrink his life down to its most significant moment. It is to believe that nothing else can possibly matter after so great a feat. It is to embrace the idea that we are, all of us, simple beings relentlessly pivoting around the same occurrence, trapped by the enormity of an important event, as if it is both the sun that guides us and the darkness that leaves us spinning in uncertain space.
There is a phrase in medieval Chinese literature used to explain the biological phenomenon of an ailing body that revives, suddenly and briefly, only to collapse and die. It is hui guang fan zhao, translated as “last glow before sunset,” that brief shimmer before night. I think of this as the café where I sit begins to empty and a new set of patrons streams in. A DJ near me starts to spin his music against the slowly darkening sky outside. Through the window at my side, I gaze past my own reflection to focus on the unbroken flow of pedestrians and motorists at the intersection. The young man I observed earlier is gone, and in his place, routine and repetition have stepped in. I see him for a moment, though, leaving home, wherever that might have been, and making the tortuous trek through the Sahara. I see him trapped in containers and overloaded trucks and crowded boats. I see him struggle with a deadening stillness, then step onto land to face the boundaries set up in Europe. The journey is designed to test the body’s resilience. Its intent is to break a human being and rearrange him or her inside. Every inch forward is a reminder of one’s frailty. You do not arrive the same as when you left. You will sometimes look at a stranger and recognize yourself reflected in that new life: impossibly alive, walking through the lingering glow of a splendid sun while trying to spin free of a permanent darkness.
This essay is adapted from The Displaced: Refugee Writers on Refugee Lives, edited by Viet Thanh Nguyen, published by Abrams Press.
In 1966, the pianist Cecil Taylor appeared in Les Grandes Répétitions, a series of Nouvelle Vague-influenced documentaries for French television about Olivier Messiaen, Karlheinz Stockhausen, and other modern composers. Taylor, who died at eighty-nine in April, was the only jazz musician featured. The avant-garde jazz movement was young, brash, and commanding increasing respect from a classical establishment that had been, at best, indifferent to black music, and Taylor, a conservatory-trained pianist who was creating a radical synthesis of jazz improvisation and European modernism, had emerged as one of its most militant and sophisticated leaders. That same year, he had ended a four-year recording silence with two extraordinary albums, Unit Structures and Conquistador! He was also profiled in A.B. Spellman’s classic book on the avant-garde, Four Lives in the Bebop Business. After more than a decade working menial jobs to pay his bills, he was finally living off his art, and being noticed. Far from being grateful for the attention, though, he insisted that mere recognition was not enough; he wanted to change the very terms of the discussion about musical creation and musical value.
“It’s all music,” he declares in Les Grandes Répétitions, wandering through a vast and elegant Parisian hôtel particulier in a black turtleneck and sunglasses, cigarette in hand, confidently expounding his aesthetic philosophy as if he were a character in a Godard film. “The way one prepares bread, cooks dishes that we eat, can be something that causes the sense to create that which we color by calling emotion… The instrument is just an object. The music comes from inside.” And what music it is, percussive, jangling, and hypnotic, as Taylor pounds the keys and plucks the strings of his piano, provoking impassioned responses from the alto saxophonist Jimmy Lyons, the drummer Andrew Cyrille, and the bassist Alan Silva—one of the best bands, or “units,” he ever assembled.
Where, the off-camera interviewer asks Taylor, did he study music? “Well, the study would have to be divided into two categories, those of the academy and those of the areas usually located across the railroad tracks. In this case, the railroad tracks were located outside of Boston in a town called West Medford, and there I heard other musics.” What Taylor means is that in the clubs of black West Medford, he was listening to jazz, which had a far deeper impact on him than the classical music he was studying at the New England Conservatory; but the interviewer is puzzled, and asks for clarification.
“Was there a conservatory across the railroad tracks?”

“There are never conservatories across railroad tracks.”
“What was across the railroad tracks?”
“Grass and trees.”
Talking with Cecil Taylor was nearly as memorable as watching him play. We became friendly in 2011, not long after I moved to Fort Greene, where he had lived since the early 1980s. The first time we were supposed to have dinner, he stood me up. (He claimed he couldn’t find the bar, a few blocks from his house.) But a couple months later, I ran into him in front of an Italian restaurant where he often held court. He wore a checkered shirt, big, thick silver bracelets on each wrist, and a black cap that was somewhere between a beret and a yarmulka. “Why, hello, young man,” he said, “would you like to join me for dinner?” We were ushered in by the host, who addressed Taylor as “Maestro” and brought him a glass of Prosecco with a dollop of lemon sorbet, his preferred apéritif.
As we ate, a procession of admirers and hangers-on stopped by our table to pay their respects. One was a tall West Indian man in a homemade white turban who called himself The Captain, and seemed to know Taylor well. I asked him what sort of work he did. “I do a variety of things,” he replied. He and Taylor were meeting up later: I was ending my day, Taylor was just beginning his. When I asked about the bill, he looked at me as if I were insane. The Maestro did not pay for his meals.
We met a few more times, sometimes over dinner, sometimes just to chat on the street. Taylor always seemed eager to talk, but he didn’t like to answer questions, at least not directly. I was initially perplexed by his style of conversation, which struck me as maddeningly digressive and almost impossible to follow. Eventually, I understood that, much like his music, Taylor’s conversation, for all its flights, was intricately patterned, and intensely, even compulsively focused. He invariably talked about the people he loved and the artists he admired: his father, a professional cook from whose kitchen “the most wonderful smells would emanate”; his formidable mother, who spoke French and German and took him to the ballet; Billie Holiday and Lena Horne, both of whom he worshipped; Jimmy Lyons, who had given twenty-six years of saintly devotion to Taylor’s Unit; the architect Santiago Calatrava, whose bridges he adored; and the poet Frank O’Hara, who shared his love of modern art and “was rather pleasant to look at.” (Did he know O’Hara well, I once asked him. “I don’t know anyone well,” he replied.) He railed against the injustices of “so-called American democracy”—he once showed me his heavily underlined copy of Michelle Alexander’s The New Jim Crow—and against the smaller but, to him, no less infuriating injustices of the music establishment, which had granted him less credit for launching the free jazz movement than it had to the alto saxophonist Ornette Coleman, whose bluesier, more melodic style was easier on the ear.
Cecil Taylor was very proud but also very thin-skinned. Black, gay, and artistically unyielding, he had attracted slights and insults throughout his long career, and he remembered them all. He recalled saying to a fellow musician that Lester Young “liked gentlemen,” only to be told: “I’m not interested in that shit.” After Lyons died in 1986, the drummer Elvin Jones told Taylor, “Well, now that Jimmy’s dead, I guess it’s over for you.” (He and Lyons, who was straight, were never involved romantically.) Once, when we were having dinner, a late 1950s recording by Miles Davis came on the stereo. Taylor said that for many years he could barely listen to Davis, who had insulted his music after nearly hiring him for his 1960s quintet (the job went instead to Herbie Hancock). “But I seem to be enjoying Miles tonight,” he said. “Maybe it’s because you’re here. You’re very easy to talk to.”
The wounds remained fresh, but Taylor remained curiously attached to those who had inflicted them. There was, for example, the late Bill Dixon, the brilliant, embittered trumpeter and composer who never forgave Taylor for the fact that his most famous recorded solo had appeared not on one of his own albums but on Taylor’s Conquistador! “Bill was a genius,” remarked Taylor, “but he didn’t realize there were other geniuses. And he was more subtly vindictive than Miles.” Taylor often spoke of his estranged friend the poet and jazz critic Amiri Baraka, whom he insisted on calling by his former name, LeRoi Jones. They had been close in the late 1950s and early 1960s, until Baraka brought Allen Ginsberg over to Taylor’s apartment in the East Village. Ginsberg wanted Taylor to write music for a reading of Howl, but Taylor declined, out of loyalty to the black Beat poet Bob Kaufman, whom Taylor felt Ginsberg had unfairly overshadowed. As they were leaving, Baraka sneered, “the problem with our jazz musicians is that they’re not literate.” Still cut by that remark, Taylor told me, “I took a friend to one of ’Roi’s readings years later, after he’d started calling himself Amiri Baraka. I asked him what he thought. ‘Very impressive,’ he said, ‘but how many times can you hear the word black?’ ’Roi started out as a poet, but became a polemicist,” a word he pronounced with disdain.
Baraka died in 2014, two years after that conversation. A year later, Ornette Coleman, with whom Taylor had an even more fraught relationship, died. He never stopped talking about the two of them, and his tone did not soften. “Success makes people less fiery,” he said. “Somehow they become more amiable.” Taylor never became amiable. After Coleman’s memorial, I phoned Taylor to say how moved I’d been by his performance, a solo étude full of shimmering, pointillistic detail; he called Coleman, who was from rural Texas, a “country boy” who had seduced the New York jazz critics, and mocked his opaque theory of “harmolodics,” according to which harmony, melody, and sonic motion are on equal footing. (In fact, Taylor loved Coleman as much as he resented him, and the two used to practice together in the early 1980s, at Coleman’s loft; whether any recordings exist is a tantalizing question.) Was Taylor settling scores? Certainly. But he was also paying a perverse kind of tribute to a rivalry that had altered the course of musical history. As Taylor put it to me in one of our last conversations: “All of the people who’ve mattered to me, all the people I’ve ever cared for, all the people who’ve put up with me, all those people are gone.”
That Taylor lived as long as he did was not the least of his accomplishments. He had a great hunger for life, and for risk-taking; he told his friend the Village Voice writer Robert Levin that he himself was surprised that he had survived the AIDS era. Unimpressed by status and résumés, Taylor befriended rich and poor alike. The bassist William Parker, who played with him for more than two decades, told me that Taylor would sometimes reserve an entire row of seats at his concerts for a group of homeless friends. (This openness sometimes left him vulnerable: a contractor working on his Brooklyn brownstone swindled him out of the $500,000 he had received for the 2013 Kyoto Prize; the man was later sentenced to prison.) His appetite for after-hours hanging out was insatiable. On the evening of the 2003 New York blackout, Taylor was spotted walking over the Brooklyn Bridge into Manhattan, while everyone else was rushing home in the other direction. Almost until the end, he pursued pleasures physical and chemical with an abandon that made his longevity even more of a miracle, as if his body defied laws that applied to the rest of us.
Of the many stories about Taylor’s adventures, the one I’ve always found most revealing—and I am assured by a close Taylor associate that it is not apocryphal—is his seduction of a man who came to burglarize his home in Brooklyn. The burglar became his lover and moved in for several months. This strikes me as a perfect allegory for Taylor’s music, which dramatizes the conquest of danger, the porous line between power and vulnerability, fear and desire, terror and seduction. (These are, of course, qualities that Romantic philosophers associated with “the sublime,” and Taylor was one of the last Romantics.) Taylor’s music is beautiful, but its beauty is daunting, even frightening, and therefore less assimilable than Coleman’s, or even the late, cacophonous work of John Coltrane.
Taylor, who often thought about his music in relation to architecture, described it as “constructivist.” The most conspicuous building block of his style was his use of tone clusters: shattering cascades of notes, sometimes produced by his fists or forearms. He traced this device, and his powerful touch, to his African ancestral heritage. “In white music,” Taylor told the British journalist Val Wilmer, “the most admired touch is light… We in Black Music think of the piano as a percussive instrument: we beat the keyboard, we get inside the instrument.” (Not that he rejected the Western classical tradition: as he put it in Les Grandes Répétitions, “I don’t divide musics. I feel that one must absorb them—digest them, eat them.”) The uninitiated were startled: the piano wasn’t built for this kind of assault, any more than the canvas was meant to be splattered with paint from a can. Taylor’s tone clusters became his signature, and are as much of an emblem of modernism as Pollock’s drips. Tone clusters were not Taylor’s invention: they had appeared as a flourish in the work of Henry Cowell, Stockhausen, and other modern classical composers. But in his music, where improvisation was an extension of composition, another way of elaborating form, Taylor turned clusters to different ends, using them to create kinetic waves of sound, with elaborate structural patterns. The result was an alternative to conventional swing, a new method of generating momentum that the musicologist Ekkehard Jost called “energy.”
That energy could be exhausting for the listener, since Taylor’s pieces often went on for more than an hour without pause. The pianist Jason Moran told me: “The first thing that comes to mind with Cecil is strength, how physically strong he was, like Olympic athlete strength.” Yet Taylor’s strength never came at the expense of precision, even at extreme levels of velocity and volume. Like Thelonious Monk, Taylor played every note with intention, and scarcely used the pedals, since, with them, “what you hear is a blur.” Taylor’s studies of Bach as a child had taught him that “each note was a continent, a world in itself, and it deserved to be treated as that. When I practice my own technical exercises, each note is struck, and it must be done with the full motion and amplitude of the finger being raised and striking—it must be heard in the most absolute sense.” Thanks to these exercises, Taylor developed an exceptional finger dexterity, and a complete mastery of his attacks and releases. When he hit a cluster, he would raise certain fingers so that some notes would end up short, while others would continue to ring. As the pianist Craig Taborn told me, “fingers don’t do that naturally.”
This combination of strength and precision transformed Taylor concerts into events of rare power. The only piano recital I’ve attended that rivaled them was a performance of Julius Eastman’s Gay Guerrilla, a piece for four pianos. In a lovely remembrance for The New Yorker, Alex Ross portrayed Taylor as an exponent of the “art of noise,” in the tradition of composers like Ligeti and Xenakis, and of punk bands like Sonic Youth. But Taylor was also an exquisitely lyrical pianist whose softer playing was as memorable for its delicacy as his attack was for its ferocity. Gary Giddins, one of his great champions and most insightful interpreters, called him “our Chopin.” Taylor’s rendition of Rodgers and Hammerstein’s “This Nearly Was Mine” on his 1960 album The World of Cecil Taylor—one of the last standards he would ever perform—captured its heartbroken mood with quiet caresses of the keys, punctuated by imaginative, often vertiginous leaps in dynamics. His own compositions—notably, his mysterious, crepuscular 1966 piece “Enter, Evening”—had an alluring streak of sensuality, even eroticism.
For Taylor, sound implied the movement of bodies: his art was always deeply corporeal, and only became more so. Small, graceful, and rather feline, he often came on stage in pajamas or sweatpants that gave him the flexibility he needed, yet made anything he wore seem elegant. Drawing inspiration from flamenco and Kabuki theater, he moved about the piano as if he were dancing with it, and he collaborated with Mikhail Baryshnikov, Dianne McIntyre, and the Butoh artist Min Tanaka. “People used to snigger when Thelonious Monk got up and started moving,” he said, but “I didn’t find it funny; in fact, I was rather mesmerized.” He found it odd, on the contrary, that “the so-called—in quotes—serious fine artists just sit and look glum. There is not much happening with their bodies. But the body is an instrument.”
The first time I saw him perform solo, in the early 1990s, he began by tiptoeing up to, and then around, the piano for a good ten minutes. He read one of his poems, which I found inscrutable, but he delivered it beautifully, in a grave, sonorous, stylized voice. The ritual that preceded the playing felt, at first, like a kind of foreplay: the Maestro was giving us another kind of pleasure, while making us wait. But it was also, I came to realize, a way of throwing himself, and his audience, into the rhythms of his imagination, the dance of his music. “Rhythm,” he wrote in his liner notes to Unit Structures, “is life the space of time danced through.”
Cecil Taylor was as urbane an intellectual as jazz has ever known: reader of Camus, friend of the Beats, student of modernist architecture. But in describing his work, he often invoked metaphors of magic, spirits, or nature, likening himself to “a vehicle for certain ancestral forces”; he evoked his fascination with growth, vegetation, and the environment in such album titles as Air Above Mountains, Garden, In Florescence, and The Tree of Life. The formalist language of musicology reminded him of his unhappy days at the New England Conservatory, where he clashed with professors who belittled his hero Duke Ellington and he sought refuge in the “grass and trees” of West Medford jazz clubs. (“The first thing that I did,” after graduating, he said, “was to walk down 125th Street [in Harlem] and listen to what was happening.”) Although he admired the work of Ligeti and Xenakis (a former architect, he noted approvingly), he also said, “I’ve spent years learning about European music and its traditions, but these cats don’t know a thing about Harlem except it’s there.” His tradition, he emphasized, was an oral, mystical one, and while he developed a peculiar, almost indecipherable form of notation, he mostly frowned on the use of scores in performance. “The problem with written music,” he explained in Les Grandes Répétitions, “is that it divides the energies of creativity… While my mind may be divided looking at a note, my mind is instead involved with hearing and playing that note, making one thing of an action. Hearing is playing. Music does not exist on paper.”
It existed in performance, where Taylor, like Ellington, was both pianist and conductor, leading the members of his unit in improvisatory suites that expanded and contracted in bursts of energy, like living organisms. In rehearsals—Taylor was a prodigious practicer—he drilled his unit in his music’s “cells,” the phrases, riffs, and motifs that supplied cohesion and a sense of direction. But they were never told what to play, and had to find a place for themselves in Taylor’s music. (According to William Parker, this was not always easy, since Taylor “was already playing all the parts.”) The most illuminating account of how this worked in practice was written by Taylor himself, in the liner notes to Unit Structures, a work for sextet. “Form is possibility… The player advances to the area, an unknown totality, made whole through self-analysis (improvisation), the conscious manipulation of known material.” In the “plain” established by “group activity,” he continues, “each instrument has strata: timbre, temperament,” while the piano serves “as catalyst feeding material to soloists in all registers.”
Taylor was the master builder of the free jazz revolution. This was not well understood at the time, in large part because Coleman’s emancipation of jazz improvisation from the chordal structures of bebop fit so naturally into an American mythology of negative liberty, of removing constraints—or, more to the point, of overthrowing one’s masters. In Coleman’s “free jazz,” improvisors could explore their melodic ideas in relation to their fellow musicians, rather than a formal structure; the purpose was to liberate musicians from the rigidities of bop improvisation, and to allow for greater spontaneity in the moment. Taylor was less interested in freedom from inherited forms than in the freedom, or obligation, to create new ones. “The whole question of freedom has been misunderstood,” he said. “If a man plays for a certain amount of time—scales, licks, what have you—eventually a kind of order asserts itself… There is no music without order—if that music comes from a man’s innards… This is not a question, then, of ‘freedom’ as opposed to ‘non-freedom,’ but rather it is a question of recognizing ideas and expressions of order.” If Coleman left the house of bebop in ruins, Taylor showed what might be put in its place. The work of jazz composers like Anthony Braxton, Henry Threadgill, and Wadada Leo Smith—members of the Chicago-based Association for the Advancement of Creative Musicians—is all but inconceivable without Taylor’s example.
In his lifelong revolt against convention, Cecil Taylor’s first act of rebellion was becoming a musician. Born in 1929, he grew up in a middle-class home in Long Island—“a very puritanical home,” he said, where he was taught that “one was going to hell if one masturbated.” Percival and Almeida Taylor, his parents, were both educated professionals with Native American mothers. They were friendly with Sonny Greer, Ellington’s drummer, and took their only child to hear swing bands in Harlem. But they were also determined to raise him to become “a dentist, a doctor, or a lawyer.” (Taylor once jokingly described himself as a “very well brought up… displaced peasant of the Black middle class.”) Although Percival sometimes sang the blues, the “minister of culture,” in Taylor’s words, in the house was Almeida, for whom serious music could only be European classical music. She became Taylor’s first piano teacher when he was five, and had him reading Schopenhauer by the time he was nine.
Almeida Taylor died when her son was fourteen. Cecil credited her with instilling a sense of pride in his Cherokee ancestry, and giving him “the opportunity to be able to transcend cultures,” but she was a volatile, severe, and punitive mother, and theirs was a tormented relationship. “Cecil’s childhood was compromised,” Parker told me. “The piano was like a meditation for Cecil, a safe world, and as long as he was playing it, everything was all right. The piano bench was his throne, a place where the spirits entered and taught him how to live.”
But when Taylor began to play in New York in the mid-1950s, after leaving the New England Conservatory, the piano bench was far from a safe place for him, and no spirits were there to rescue him from the wrath provoked by his music, or from the pervasive homophobia of the jazz world, where same-sex desire was common but very much on the down-low. (Taylor never hid the fact that he was gay, but wondered how a “three-letter word” could “define the complexity of my identity.”) Musicians deserted jam sessions when they saw him; at one point, someone who disliked Taylor’s playing broke his wrists. Club-owners complained that people became so mesmerized by his playing that they forgot to buy drinks. The Argentine writer César Aira’s short story “Cecil Taylor,” which reimagines his early years, concludes with Taylor being escorted out of a club where he’s been gigging, and paid “twenty dollars, on the condition that he would never show his face there again.” The pianist Marilyn Crispell, whose early work owed much to Taylor’s style, told me, “When I think about Cecil, what I think about is the courage it took to be black and gay and to be playing this totally unheard of, weird music back in the 1950s. He really opened the way for all of us who came after.”
The controversy aroused by Taylor’s work never entirely subsided, even as Coleman was gradually integrated into the mainstream. Taylor’s music was “atonal,” or too European, or it didn’t swing, charges that made Taylor, a highly sensitive man, even more defensive. Jazz critics, he remembered, “were prepared to hear Stravinsky and Bartok in my playing, but not Ellington and Horace Silver.” In fact, few musicians played Ellington’s compositions with as much authority or originality as Taylor did, in his 1956 trio version of “Azure,” or in his 1960 octet version of “Things Ain’t What They Used to Be.” But his Ellingtonian affinities went beyond these homages: Taylor drew powerfully on what he called Ellington’s “orchestral approach” to the piano—using the keyboard to conduct and feed ideas to his sidemen with the aim of generating larger structures—as well as his sumptuous writing for horns. Recognizing this, the great arranger Gil Evans hired Taylor to write a group of big band pieces for Evans’s 1961 album Into the Hot.
Taylor readily acknowledged what he had learned from Bartók, who taught him “what you can do with folk material,” and from white jazz pianists like Dave Brubeck, who deepened his understanding of harmony. But Taylor aimed to produce a black vernacular sound, one that reached back from Ellington and Monk to earlier styles like boogie-woogie and stride piano. His favorite piano was the Bösendorfer Imperial grand, precisely because of its nine extra keys at the funkier lower register. He refused to have it tuned, since tuned pianos sounded “European” (that is, classical) to him, and he couldn’t get the quarter-tones he loved. The blackness of Taylor’s sound should have been obvious, but it was lost on those who missed its lower frequencies. Even a critic as perceptive as Gunther Schuller was capable of writing, in a review of Taylor’s first album, Jazz Advance, released in 1956: “One does not feel the burning necessity that what he says has to be said. Especially on the blues, one has the impression that Taylor lets us in on the workings of his mind, but not his soul.”
Taylor recognized no such distinction, but he understood all too well that intellectually minded black musicians were often reproached by white critics for neglecting their souls, the thing they supposedly knew best. “The most terrifying thing in our society is to feel,” he said, and in his music, he gave expression to feeling in radical ways that made new and unusual demands on his audience, as in the most adventurous works of literature and visual art. More than any other jazz musician of his generation, Taylor defined himself as an artist, and therefore, in his view, as a member of an elite: artists, he told his friend Robert Levin, were “the true aristocrats of society.” He didn’t reject the term “jazz”: he was too enamored of jazz musicians like Billie Holiday and Lester Young—and what Taylor loved, he loved fully. But he wondered “if jazz is a noun, an adjective, or a political science term.” Never fully embraced by the jazz world, he was lionized by writers, poets, dancers, and artists who admired his audacity and had as little use for categories as he did. His work demanded what Susan Sontag might have termed an erotics of listening (after her call in Against Interpretation for “an erotics of art”). Those who fretted over the deeper meaning of Taylor’s work, or its relationship to jazz tradition, were cutting themselves off from its distinctively visceral pleasures.
One of his earliest (and loudest) admirers was Norman Mailer, who heard Taylor at the Five Spot, on the Bowery, in the early 1960s, and was so astonished that he stood up on his chair and declared, “This guy Cecil Taylor is so much better than Monk.” Mailer cost Taylor his gig: an influential friend of Monk’s reported the comment to Joe Termini, the Five Spot’s co-owner, who was already looking for a pretext to fire Taylor. “Norman knew about a lot of things, but music was not one of them,” Taylor told me at one of our dinners, adding that “if it weren’t for Monk I could not have existed.”
This was true, and I could see why being compared favorably to a musician he revered might have left him ill at ease. Still, Mailer was right about the impending change of the guard: Taylor was a revolutionary, and his music made all those who came before him sound a little older, even a little dated. On his early recordings (1956–1960), Taylor kept one foot in bop, as if he were still testing the waters, working with the only straight-ahead rhythm section he ever used, the drummer Denis Charles and the bassist Buell Neidlinger. But even on albums like Jazz Advance and The World of Cecil Taylor, you can hear all the elements of his mature style: the percussive tone clusters and radical dissonances, the unusually rich timbral variety, the daring oscillation between explosive fury and lyrical repose, the exuberant use of call-and-response.
What brought these elements into focus and turned them into a bold and coherent style was several years on the couch. “Cecil was the first black guy to have his own white shrink,” the drummer Sunny Murray joked, but Taylor, who went into analysis in the late 1950s, never doubted its value: “I lost perhaps 90 percent of my guilt, and I could go ahead and do what I felt I had to do.” The fruits of this liberation were fully heard for the first time on his 1962 trio date with Murray and Jimmy Lyons, Live at the Café Montmartre, recorded at a club in Copenhagen. Taylor is no longer playing standards or traditional song forms, much less chords. There’s an almost wistful allusion to the recent past in Lyons’s sweet, singing alto, which, in its high notes, evokes Charlie Parker. But Lyons resists any fixed time signature as he weaves in and out of Taylor’s relentless percussive flurries, creating a beautifully improbable synergy that, in spite of the music’s density, allows it to soar. Murray, meanwhile, is everywhere, playing textures against Taylor’s piano without bothering to maintain a steady pulse. For all the memories of bop stirred by Lyons, the performance opened the door to a new and disorienting world, without the metrical compass usually supplied by a drummer, even in early free jazz. “Cecil’s group was the one that broke the time barrier,” Craig Taborn says. Although Murray took great pride in his work with Taylor, he later complained that “working with Cecil Taylor was one of the worst things that ever happened to me,” because he “became stereotyped in that role and no one wanted to hear me play.” Taylor himself would not record a note for the next four years—one of the many gaps in his recording history.
Although less extensively recorded than other artists of his stature, Cecil Taylor still managed to amass one of the most imposing, and varied, bodies of work in postwar music. It included small group masterpieces like Conquistador! and Spring of Two Blue-J’s, with Lyons and the magnificent drummer Andrew Cyrille, who combined Murray’s polyrhythmic energy with a stronger sense of pulse; dense and funky suites like 3 Phasis, full of ornate counterpoint with Lyons’s alto, Raphe Malik’s trumpet, Ramsey Ameen’s violin, and the delightfully bombastic drumming of Ronald Shannon Jackson; and duos with drummers like Max Roach, Tony Williams, and Tony Oxley. Last year, a ravishing duet with the new-music accordionist Pauline Oliveros, filmed in Troy, New York, in 2008, surfaced online. Taylor’s playing here is subdued, intimate, crystalline in its lyricism: he looks over his piano at Oliveros, echoing some of her lines, responding playfully to others, creating a shared isthmus between their very different musical worlds. I was reminded of some lines from one of Taylor’s poems:
We have abilities to
become in otherness’s ourselves
transported beyond pedestrian
In his solo recitals, Taylor pursued a different kind of dialogue—one that was with himself. (“Improvisation,” he noted, “is the ability to talk to oneself.”) Albums such as Indent, Silent Tongues, Air Above Mountains, For Olim, and The Willisau Concert are ambitious, complex, sometimes intimidating works; they advance what the pianist Matthew Shipp calls “a grand gesture of presenting a solo piano cosmos.” Taylor began performing as a soloist in the late 1960s, a little before Keith Jarrett, who is more often credited with pioneering the solo improvised recital. Although Taylor spoke contemptuously of Jarrett (“Keithie-Poo,” he called him), he confessed that his rival “gave me the desire to work very hard on my technique.” The results were staggering. The pianist Fred Hersch, who first heard Taylor’s solo work in the early 1970s, told me he was bowled over by his “ability to jump really wide intervals, to go from low to high almost instantaneously and then work with these shapes. He had this kinetic awareness of all eighty-eight notes of the playground of the piano.” But when Hersch revisited Taylor’s work after his death, he noticed something else: “how incredibly organized his music is, how disciplined.” Although Taylor was less widely imitated than Jarrett, Hancock, or McCoy Tyner, the force, complexity, and intricacy of his playing have made him particularly attractive to pianists working at the outer edges of jazz—a category of three or four generations of musicians, both in America and Europe, that would include Muhal Richard Abrams, Don Pullen, Borah Bergman, Marilyn Crispell, Alexander von Schlippenbach, Irène Schweizer, Matthew Shipp, Craig Taborn, Kris Davis, Jason Moran, Vijay Iyer, and Angelica Sanchez.
Taylor’s biggest audiences were always in Europe, which added to his bitterness about America. “Freedom in America,” he said, “is the freedom of having poison in the air.” In 2016, he finally received a celebration in his hometown, not at Jazz at Lincoln Center, which ignored him, but at the Whitney Museum. In what turned out to be his last concerts, he performed over two nights at the museum, which also devoted an entire floor to Taylor memorabilia. But the most thrilling Taylor retrospective—the high point of the second half of his career—took place in Berlin in 1988, when he spent nearly two months performing with some of Europe’s finest musicians, serving as a one-man bridge between two schools that had drifted apart, the black American avant-garde and European free improvisation.
On Alms/Tiergarten (Spree)—one of the thirteen albums on the FMP boxed set that emerged from his Berlin visit—he led a seventeen-piece orchestra that generated gloriously thick, overpowering slabs of noise beneath which any pianist other than Taylor would have been crushed. (The saxophonist and composer Anthony Braxton said that Taylor’s orchestral style reminded him of Edgard Varèse, or rather, correcting himself, “an African Varèse,” since “if Cecil were to read that he might buy a gun and shoot me!”) Taylor, who considered his music a “celebration of life,” never sounded more joyous than in the music he made in Berlin. It had been two years since the death of Jimmy Lyons, his closest collaborator, and after a period of silence, he was ready to play again.
“It was like the Godfather had arrived,” William Parker said, describing the welcome Taylor received in Berlin. “Cecil was around a positive thing in Europe, and there wasn’t anything to drag him down.” Over those two months, Parker shared an apartment with Taylor, and often cooked for him. It was the closest Taylor ever came to a domestic life. He loved being taken care of, but as Parker recalls, “he was fighting with being normal—like, is everything OK?” When I asked Parker what those concerts in Berlin were like, he remembered something the bassist Sirone (Norris Jones) told him: “The great thing about playing with Cecil is that when you play with him, you know you’re going to go all the way, and you’re not going to stop until the music gets where it’s going.” One night, Taylor went all the way with Parker and the drummer Tony Oxley, the other members of his Feel Trio, and Taylor was exultant. Oxley, who was in charge of collecting the money, asked how he should divide it. “Just keep the money,” Taylor said. “The music was so good, I don’t want the money. I am very happy because we were able to play music tonight, and nothing else counts.”
The battle against infiltration in the border areas at all times of day and night will be carried out mainly by opening fire, without giving warning, on any individual or group that cannot be identified from afar by our troops as Israeli citizens and who are, at the moment they are spotted, [infiltrating] into Israeli territory.
This was the order issued in 1953 by Israel’s Fifth Giv’ati Brigade in response to the hundreds of Palestinian refugees who sought to return to homes and lands from which they had been expelled in 1948. For years after the war, the recently displaced braved mines and bullets from border kibbutzim and risked harsh reprisals from Israel’s army to reclaim their property. The reprisals included raids on refugee camps and villages that often killed civilians, as the Israeli historian Benny Morris and others have laid out. Still, refugees persisted in their attempts to return, and Israel persisted in viewing these attempts as “infiltration.”
Over the past six weeks, Israeli soldiers have killed some forty Palestinians in the Gaza Strip, the majority of them unarmed civilians, and injured more than five thousand protesters. As the US relocated its embassy to Jerusalem on Monday, the violence escalated alarmingly. Israeli forces shot dead at least another fifty Palestinians and injured more than 2,400, making it by far the bloodiest day yet in the current round of protests in Gaza.
Like their grandparents, these Palestinians are seeking justice and redress for their families’ expulsion from their land. Unlike the house-to-house reprisal attacks of the 1950s, however, today’s killings are carried out from a distance. Israeli snipers are positioned on raised berms just beyond the sophisticated fence and expansive buffer zone that separate Gaza from Israel. From this safe perch, soldiers aim their rifles and shoot Palestinian protesters; according to Amnesty International, the targeting includes “what appear to be deliberately inflicted life-changing injuries.” Yesh Gvul, the movement founded in 1982 by Israeli combat veterans who refused to serve in the war in Lebanon, has publicly endorsed the call by the Israeli human rights organization B’Tselem urging these soldiers to disobey a patently illegal order.
Yet the shootings continue. As in the 1950s, Israeli officials justify the army’s use of overwhelming lethal force as a necessary security deterrent. Calling the protests a “March of Terror,” Israel’s Minister of Defense, Avigdor Lieberman, noted that the army will not “hesitate to use everything [it] has” to stop them.
Palestinians in Gaza have preferred to name their demonstration the “Great March of Return.” It began on March 30, when thousands congregated close to crossing-points into Israel. The start date marked the anniversary of Israel’s shooting of six unarmed Palestinian citizens as they participated in strikes and marches in 1976 against the government’s appropriation of their private lands. The March of Return was planned to continue until May 15, the anniversary of the Nakba, the Palestinian “catastrophe” caused by the formation of the state of Israel in 1948. (For Palestinian citizens of Israel, even commemorating the day has been penalized since 2011 by the Nakba Law.)
Seventy years ago this month, more than 700,000 Palestinians fled or were forced to flee homes that fell within the borders of the nascent state of Israel. These refugees, their children and their grandchildren, are the “infiltrators” whom Israel is still bent on deterring from claiming their right of return. This right was first affirmed by the United Nations in 1949 and has been reaffirmed every year since, making it one of the rights most consistently upheld by the General Assembly in the UN’s history.
Of this mass exodus of refugees, close to 300,000 took shelter in the Gaza Strip, overwhelming the coastal enclave’s population, which was then about 80,000. They built temporary residences on the peripheries of Gaza’s cities and waited for a resolution to their dispossession, often only a few miles from their original homes. Today, their grandchildren have proclaimed a “national” and “humanitarian” march at the fence area, in which “Palestinians of all ages and various political and social groups… meet around the universal issue of the return of refugees and their compensation.”
Hamas, the party that has ruled the Gaza Strip since 2007, has jumped on the bandwagon of this popular movement. Its leaders have made speeches encouraging Gazans to join the marches, while its administration has offered services, including bus rides and tents, to support the protests. Facing its own challenges in Gaza, primarily in the form of economic stagnation and humanitarian suffering, Hamas hopes to reap the rewards of this nonviolent protest—though its efforts to do so threaten to hijack the protests and derail what has hitherto been a genuine grassroots mobilization. To underscore its engagement with this movement, Hamas has temporarily embraced a tactical commitment to popular resistance rather than press its official policy of armed struggle. Yahya Sinwar, Hamas’s leader, recently stated that protesters will be unarmed, stressing that Hamas is not seeking a new war with Israel. While there have been isolated instances of armed fighters attempting to breach the fence, Hamas’s military restraint is evident in the fact that, at the time of this writing, no rocket has been fired from Gaza despite Israel’s repeated and reckless use of excessive force.
This March of Return is partly then a story of Gaza, one with deep roots in the Strip’s history and current predicament. The Gaza Strip forms less than 1.3 percent of the land of historic Palestine. But because of its geographic proximity to Israel, refugee restlessness, and population density, Gaza is an exceedingly troublesome sliver of land for Israel. It has been a hotbed of resistance, giving birth to national leaders, armed movements, and popular uprisings. Since 1948, as the French historian of the Middle East Jean-Pierre Filiu has charted, Israel has unleashed no fewer than twelve full-scale wars on this coastal enclave, resulting in the deaths of thousands of Palestinian civilians. In a 1956 letter to Israel’s prime minister, David Ben-Gurion, regarding the ferocity of Israel’s military tactics toward Gaza, UN Secretary General Dag Hammarskjöld wrote, “You believe that reprisals will avoid future incidents. I believe that they will provoke future incidents.”
The trouble is not just with Gaza’s popular defiance. This strip of land presents Israel with another, equally insoluble, challenge: demography. In 1967, the Gaza Strip fell under Israel’s direct control as part of the wider occupation of the West Bank, including East Jerusalem, and the Golan Heights. Israel’s occupation entailed placing what is now 1.8 million Palestinians in Gaza under direct military rule, while it settled a mere 4,000 Jews in the Strip. Integrating such a large non-Jewish population under Israeli jurisdiction, together with that of the West Bank, threatened to make Jews a minority ruling over a majority population of non-Jews.
Shortly after the first Palestinian Intifada erupted in Gaza in 1987, Israel initiated measures to correct this problem and began separating the Gaza Strip from the rest of the territories. As the veteran Israeli journalist Amira Hass has reported in detail, stringent crossing requirements and elaborate permit systems were put in place. These measures were expanded against the backdrop of the peace process that began in 1993 with the signing of the Oslo Accords between Israel and the Palestine Liberation Organization (PLO).
While countless rounds of negotiations wore on throughout the 1990s, Israel gradually reshaped the architecture of its occupation. After three years of the bloody Second Intifada, under the pretext of security, Israel’s prime minister, Ariel Sharon, announced his decision in 2003 to formalize Israel’s separation policies toward the Gaza Strip—to “disengage”—while simultaneously strengthening Israel’s hold on the West Bank. In a rapid transition, Israel’s occupation of Gaza morphed from direct colonization into a system of external control.
The loss in the 2006 election in Gaza by Fatah, the dominant PLO faction, to Hamas, an Islamist organization that refuses to recognize the state of Israel and is committed to armed resistance, appeared to give credence to Sharon’s unilateral security measures. Fighting between the two groups eventually broke out, leading to Hamas’s takeover of the Strip in 2007—though this came only after months of US and Israeli interference, which included supporting Fatah’s efforts to undermine the elected party, arming Fatah, and starving the democratically elected Hamas government of funds. Hamas’s seizure of power provided a perfect alibi for Israel’s policies of separation and enclosure of the Gaza Strip. Israel reacted by tightening its hold over Gaza into a hermetic blockade, conclusively severing the Strip from the outside world and creating, in effect, an isolated Hamas-run territory there.
Sharon’s reconfiguration, however, did not change the essential facts of the occupation. Politicians in Israel may have hoped the new policy would give the impression that Gaza’s two million Palestinian inhabitants no longer fell under Israeli jurisdiction, thereby resolving their demographic quandary. But all international organizations agree that the Gaza Strip remains firmly under Israel’s grip. It is the Israeli government that today controls Gaza’s population registry, a clear sign that the state has never relinquished full control. The Coordination of Government Activities in the Territories (COGAT) is an administrative unit within the Israeli Ministry of Defense that “implements the government’s civilian policy within the territories of Judea and Samaria and toward the Gaza Strip.” Perversely, as its website notes, this includes “the implementation of humanitarian aid programs” in the occupied territories. For Gazans, COGAT is the entity that controls the entry and exit of all goods and people from the enclave with dire precision, including the calculation of the calories required to avert mass starvation.
The Israeli government’s references to “infiltrators” who threaten to swarm into Israel have little basis in the reality of Gaza as an occupied territory, and obscure Israel’s history of harsh reprisals against it. Absent from the official discourse is the fact that, under international law, Israel has a responsibility to protect those civilians living under its occupation. Absent, also, is the acknowledgement that Gaza is not a state bordering Israel. It is an anomalous space in which a non-Jewish population is penned for reasons of demographic engineering—namely, to safeguard the Israeli state’s ethno-nationalist aims. Recent data (from COGAT) suggest that more non-Jews than Jews now live in the land of historic Palestine between the Jordan River and the Mediterranean Sea.
The Great March of Return is thus not just a story of Gaza. Israel’s fear of “infiltration” is not limited to a few Palestinians breaking through the fence. It is a deeper and more existential fear that dates back to 1948—a fear that demands for Palestinian rights, which Israel has long worked to marginalize, might infiltrate the consciousness of a restive population. It is a fear that defiance might seep back into everyday Palestinian life and undermine an occupation that has been designed to ensure permanent subservience. This goal was embedded in the architecture of the Palestinian Authority (PA), the main product of the 1993 Oslo Accords and the entity ruling over the West Bank, which is committed to expansive security coordination with Israel. Palestinians have grown increasingly disillusioned with the PA, viewing it as little more than a subcontractor to the occupation that puts Israeli security interests before Palestinian rights. Although originally planned as a temporary measure lasting five years, the PA has become a permanent institutional fixture of the occupation, subsuming the PLO and forfeiting Palestinian liberation in return for limited powers of local government.
The PA’s drift toward authoritarianism and its crackdown on civil society have prevented protests in the West Bank. Yet the grassroots defiance in Gaza is the latest manifestation in a long history of popular struggle that has recently gathered force throughout the Palestinian population. Arab political parties in Israel have become far more active against the government’s systemic discrimination against non-Jews. In April, thousands of Palestinian citizens of Israel carried out their own March of Return south of Haifa as Israel celebrated its independence. Last year, Palestinians in East Jerusalem successfully led the “prayer intifada” to protest Israeli efforts to alter the status quo around al-Aqsa Mosque. Elsewhere, the Palestinian diaspora is coalescing into a more effective international solidarity movement organized around the goals of freedom, justice, and equality.
Such widespread civic engagement and grassroots mobilization has rarely been seen since 1987, with the eruption of the First Intifada. That popular uprising was first met with force—Yitzhak Rabin as defense minister famously instructed Israeli soldiers to “break the bones” of protesters—and then sidetracked into fruitless diplomatic efforts toward a two-state solution, beginning with the Oslo Accords and ending with President Trump’s Jerusalem declaration in 2017. Palestinians have come to realize that this political process, which held out the promise of Palestinian nationhood, failed them. The endless negotiations merely allowed Israel to entrench its occupation to previously unimaginable levels.
Today, time has run out even for the bad faith that characterized Israel’s approach to the peace process—demanding Palestinian concessions while building more settlements on occupied land. Israeli leaders now openly and brazenly speak of annexation. As comparisons to South Africa’s system of apartheid become more difficult to ignore, nationalist leaders like the minister of education, Naftali Bennett, are pushing Israel toward a one-state reality, seemingly confident that an apartheid-like regime might, in Israel’s case, prove sustainable. Little in the five decades of occupation has challenged Israel’s belief that it can run things in ways that flagrantly violate international law.
Both the Palestinian Authority and Hamas’s Gaza government have become servile proxy authorities, stabilizing a captive population under Israeli hegemony. While the PA is explicitly committed to this supine role, Hamas’s pacification is different—arising from its de facto need to manage and restrain its official policy of armed resistance from Gaza because of the devastating force Israel has used in a series of short wars against the coastal enclave since 2007. In these circumstances, the Palestinian struggle for self-determination has, in effect, dissolved into numerous local battles: equality for Palestinian citizens of Israel, freedom of movement for West Bankers, residency rights for East Jerusalemites, education for refugees, an end to the blockade for Gazans.
This fragmentation is not, however, a given for all time. The dense smoke, the burning tires, the masses of people huddled under gunfire on Friday afternoons: this is what, at this moment, the recalibration of the Palestinian struggle looks like. The images coming out of Gaza are an indication of Palestinian disenchantment with the political process and with their leaders. In a deeper and more significant way, we are also witnessing a revival of the core principles that always animated the Palestinian cause but that were displaced in the tangled maze of political negotiations.
Israel rightly fears the power of such popular mobilization. Movements like the Great March of Return have the potential to transcend the fracturing of Palestinian political aspirations so deftly imposed by the state, by uniting the Palestinian people around a single message of rights. Israel’s response to this message—from the Giv’ati Brigade commands of the 1950s to Rabin’s orders during the First Intifada and Lieberman’s recent statements on the March of Return—is always to resort to overwhelming force. For close to a century, Palestinian popular protests for rights have yielded only bloodshed. But Palestinians also take notice of global precedents.
From Sharpeville to Selma, the history of marches for civil and political rights is long and bloody. This mass mobilization around the core principles of Palestinian liberation—arising from civil society independently of discredited political leaderships—holds immense power to disrupt the status quo. Whether this movement, from East Jerusalem to Gaza, Israel to the West Bank, eventually bends toward justice depends on whether the international community will tolerate Israel’s capacity to deny an entire people their basic rights and rob them of a future because they are not Jewish. The past record is not encouraging, but something new has started.