This month, Poland marks fifty years since the “March events” of 1968, when mass protests erupted in response to the stagnant Communist regime of Władysław Gomułka and its campaign of censorship and chicanery directed at Jews and intellectuals. The anniversary comes at a time when the current government is facing criticism at home and across the world for undermining free speech and the independent judiciary, and for refusing to take in any refugees. Among its crude moves to establish ideological control at home and flout opinion in the West is a recently passed amended law criminalizing claims that the Poles were complicit in or jointly responsible for the Holocaust.
The law certainly invites comparisons between now and 1968. This time around, though, Poland’s government is not backed by Moscow; its current isolation is from both West and East, and the country seems to be caught, not for the first time, in a vise—ideal conditions for a new cycle of xenophobic hysteria and bigotry.
There is nothing so reminiscent of Communist-era censorship culture as the coercive, patronizing ideological commentaries with which cultural officials of the Law and Justice party, also known by its Polish initials as PiS, have in the last few years been responding to books, plays, and films related to the Holocaust. When Paweł Pawlikowski’s film Ida, about a convent novice who discovers she is Jewish, was shown on Polish television in 2016, it was framed by an introduction and “discussion” by state-approved experts, who kindly explained to the unsuspecting viewer how the film “insults the Polish nation.” Last month, the cover of the newsletter of the Institute of National Remembrance (IPN), which is at the center of the latest efforts to defend Poland’s “good name,” showed a photo of concentration camp inmates on the top half of the page and, on the bottom half, a photo of smiling Nazi officers, with the bold title “Germans Dealt This Fate to People.”
The visual message—the Germans are the historical villains—is clear enough for a non-Polish audience, but the title is a more specific reference to, and subversion of, a classic literary text commonly used in Polish high schools to teach the history of the Holocaust. In 1946, the eminent Polish novelist Zofia Nałkowska (1884–1954) published a slim, devastating collection of prose titled Medallions (available in English since 2000 in an excellent translation by Diana Kuprel) and gave it the epigraph Ludzie ludziom zgotowali ten los: “People dealt this fate to people.” In the hands of Poland’s current state arbiters of truth, a statement about human evil (inviting readers to inhabit it and reflect on their own potential for good and evil) must be shrunk and adapted to a narrow partisan view of history. There are grounds for worrying whether the plan is to carry out such a shrink-fit operation on all of Polish culture.
On February 7, Andrzej Melak, a Law and Justice member of parliament, called for Medallions to be provided with editorial commentary. He had discovered that Nałkowska uses phrases the recent Polish legislation was designed to combat. In the last piece in the collection, “The Adults and Children of Auschwitz,” Nałkowska writes: “Not tens of thousands, not hundreds of thousands, but millions of human beings underwent manufacture into raw materials and goods in the Polish death camps.” And a few paragraphs later: “The Germans promised Jews arrested in Italy, Holland, Norway, and Czechoslovakia prime working conditions in Polish camps.” (My italics, in both cases.) There is the phrase, not mentioned in the recent law but reflecting precisely the language against which the amendment is aimed. Nałkowska is beyond the reach of a legal suit, and the wording of the law suggests it will not be applied to historians or artists, but this only raises the question of how and to whom it will be applied. In a statement in English on its website, the IPN points to “the media” as chiefly responsible for what it considers the repeated slander of Poland, but the fact that many scholars and artists involved in the debate on Polish-Jewish history are also regular contributors to media outlets makes it clear that they, too, may be liable for prosecution if their views receive a sufficiently wide airing.
The sociologist and journalist Karolina Wigura wrote in a recent article that the law was drafted as if to shut up a single person: the historian Jan T. Gross, who broke the silence that had formed around the story of the Jedwabne massacre—in which the town’s Jews were killed on the streets and in their homes or rounded up and burned in a barn—and stubbornly defends his thesis that the Poles “actually killed more Jews than Germans during the war.” As Gross has said himself, though, the government did not need the new law to harass him, and it makes more sense to see the legislation as the Law and Justice party’s way of playing to its own far-right nationalist constituency. In doing so, the government has only exposed its provincialism and insecurity.
As soon as the new “Holocaust law” entered into force this month, a non-profit group called Reduta Dobrego Imienia (literally “The Good Name’s Redoubt,” usually translated as “The League Against Defamation”) moved to bring a civil suit against an Argentinian daily, Página 12, for having sullied Poland’s honor. The offense was “mixing two threads”: that of the Jedwabne massacre, and the story of the “doomed soldiers” (żołnierze wyklęci) of the Polish nationalist resistance during and after World War II. Train stations across Poland have been decorated this March with IPN-sponsored posters commemorating these soldiers, who are an important part of the Polish history right-wing commentators feel has been neglected. The fact that the Argentinian editors illustrated an article on the agonizing subject of Jedwabne with a photograph of Polish soldiers was like waving a red rag to a bull.
The eight grim tales that make up Medallions were the result of Zofia Nałkowska’s work as a member of the Central Commission for the Investigation of German Crimes, which was established in the spring of 1945. The Commission (in 1949 renamed “for Nazi Crimes”) was the institutional precursor of the IPN, and its history illustrates the perils of entrusting the legislation of historical truth to a state institution. Launched under the aegis of the postwar Polish government (led by Moscow-trained Communists), the Commission always acted under the influence of political imperatives. Should its members have come across evidence of Soviet war crimes (such as the Katyń massacre of Polish officers), they were obliged to ignore or obfuscate them, just as all Poles were under the strongest possible pressure to refer to the Soviets as “liberators” instead of “occupiers.”
Despite these severe limitations, the Commission did invaluable work in documenting the Nazis’ campaign of mass murder and enslavement in Poland, and Nałkowska’s own prose, exemplary in its purity and sobriety, captures its horrors better than any other work of literature I know, except the writings of Holocaust survivors. The Commission, in which Jewish lawyers, researchers, and a few remaining representatives of Jewish communities participated, collected and preserved German documents, interviewed witnesses and perpetrators, and helped prepare the trials of the main criminals of the Nazi-occupied General-Government: Rudolf Höss, Hans Frank, Amon Goeth, Jürgen Stroop, and their fellows. At this moment in Polish history, when the Institute of National Remembrance is crying out against what it calls the “deficit of historical truth” in the attribution of guilt for the Holocaust, it is important to note that the Commission charged Polish collaborators, too.
But when loyal servants of Stalin consolidated their hold on the Polish state after 1949, the Commission’s activities were curtailed and fully subordinated to Communist-dominated prosecutors’ offices. For over a decade, the Commission was powerless to do much except preserve the documentation that had already been gathered. In the 1960s, party ideologues reactivated the Commission, but this time it was purged and put to use as an instrument for the shameful state-sponsored campaign of antisemitism that led to the emigration of a large part of Poland’s remaining Jews. Under a new director, Czesław Pilichowski, himself a former member of the extreme-right movement ONR (Obóz Narodowo-Radykalny, or National Radical Camp), the Commission approved a new policy “to rebut the slanderous campaign of lies” about antisemitism in Poland. Jewish intellectuals were publicly accused of slandering Poland, supposedly with the aid of both Zionists and neo-Nazis. Many of them, including Jan Gross, spent time in prison.
As the successor to this tainted institution, the current Institute of National Remembrance is just as subject to the changing tides of political power. Under the Law and Justice administration, it was the Institute that drafted this new law; but under another government in 2001–2004, the Institute carried out a thorough and entirely creditable investigation into the Jedwabne massacre, whose “perpetrators sensu stricto,” the prosecutor Radosław Ignatiew found, were local Poles.
Incorporating the earlier commissions, the IPN came into being in 1998 as primarily the keeper of a great hoard of documents, those left by the Communist-era secret service. In the lugubrious game of competitive martyrology that Poland seems to be playing again, millions of documents of Communist crimes are pitted against millions of documents of Nazi crimes. It is a pitifully accountant-like approach to the immeasurable losses and injuries of war, as many have remarked. Missing from the current debate about what can and cannot be said about history is the presence, so strongly evoked in Nałkowska’s brief stories, of the dead themselves.
In Poland, as in virtually every country that has been occupied by hostile powers, the dead are unsettling presences not just because they are dead but also because invoking their real presences means acknowledging the humiliation involved in the kind of choices people were forced to make in wartime. In their statement, the historians at the IPN make the astonishing assertion that “the truth never humiliates.” But as much as we want them to sit still for us as martyrs (or criminals), the dead contain a discomfiting mixture of guilt and innocence. Some of the Polish soldiers who fought the Nazis and the Communists also engaged in murderous actions against Jews. Those whose identity and national pride are bound up with those soldiers may well feel humiliated by that fact. In a democracy, these complicated realities of human history cannot be left for state officials to adjudicate. These moral knots are the stuff from which the greatest Polish literature has sprung. This is why the kind of insidious ideological control that has returned to Poland under the Law and Justice party is so disturbing and bizarre.
In its cultural mission abroad, the Polish government has been industriously promoting awareness and celebration of historical figures that it feels are neglected, including the Polish citizens honored by Yad Vashem as Righteous among the Nations. Through its Book Institute, Poland has also been pushing authors regarded by the present regime as solidly “patriotic,” such as the poet and noted scholar of Romanticism, Jarosław Marek Rymkiewicz. At a state-funded conference on Polish literature in translation held in Kraków last summer and organized by the Book Institute, all conference participants were welcomed with a gift bag including a big new Polish edition of the collected poems of Jan Polkowski, a Catholic poet of undeniable gifts who was active in Solidarity in the 1980s and is now one of the PiS government’s most fervent literary adherents. The bag also included a notebook bearing the handsome countenance of Joseph Conrad, who, trapped in Kraków at the outbreak of World War I, was berated by members of his own family for not being enough of a Polish patriot. Today, Poland needs Conrad, not as the face of national pride, but for his sense of tragic irony, of noble aims subverted by blind forces, his warnings that human knowledge is contingent, provisional.
To try to fix a country’s history and victim status by force of law is both foolish and futile. Like Holocaust denial laws, the Polish law will not stop people from saying things the Poles find offensive. It also leads the government immediately into inconsistencies. The Polish government resents the judgment laid upon it by the European Court of Human Rights for facilitating the CIA’s torture program—both by making available a building in the town of Stare Kiejkuty that was used to interrogate terror suspects, brutally and unlawfully, in 2002 and 2003, and by letting the US use an airport to fly detainees in and out. But when it is the United States that Poland is helping, turns of phrase suggestive of complicity (“Polish black sites”) do not excite the same legislative fervor.
There are still considerable differences in government censorship powers between now and half a century ago: in 1968, you could be sent to jail for ten years for “insulting the Polish state.” The Institute of National Remembrance in its new amendment envisions a maximum sentence of three years. In 1968, there was a rigorous system of censorship that had to be countered by an underground publishing scene; these days, the divide is chiefly between state media and commercial channels. The syllabuses and examination reading lists for Polish schools may now reflect the conservative literary taste of the average Law and Justice voter, but I have seen few signs that Poland’s writers and artists, so well trained in defiance, feel seriously inhibited in their work. And not all tides are flowing in the same direction.
In 2016, a new “uncensored” edition of Miron Białoszewski’s famous Memoir of the Warsaw Uprising appeared in Poland, restoring a number of passages removed when it was originally published, chiefly descriptions of Polish citizens laughing or cheering at Germans shooting into the Warsaw ghetto. (The recent NYRB Classics edition of Madeline Levine’s extraordinary translation includes these passages.) Writing and research on World War II and antisemitism continue unabated, and there is a large section of Polish society that vociferously resists racial and gender prejudice when it gets wheeled out as a political tool. There are masterly new books out in Poland on the Kielce pogrom of 1946 by Joanna Tokarska-Bakir and on Irena Sendler, one of Poland’s most prized “Righteous Gentiles,” by Anna Bikont. There is every reason to hope the country will choose historical complexity over historical cartoon.
At the end of the last chapter of Medallions, Nałkowska mentions a Nazi law that forbade anyone from reproaching members of the Nazi party for past crimes (after all, so many of them had been recruited from the ranks of pimps, thieves, and common murderers). The Law and Justice party is more subtle: it feels entitled to reproach Poles with some crimes, those loosely referred to as “Communist crimes,” but it wishes to protect Poles from being reproached for other crimes in the national past.
Visitors to Budapest may have read somewhere that Hungary has the first autocratic regime in the European Union. The capital on the Danube does not feel like that: the atmosphere is relaxed, not repressive; no paramilitaries are marching; if anything, one might come across a small demonstration against the government, politely escorted by police. The ruling “ism” would appear to be not authoritarianism but hedonism: from the beautifully restored thermal baths to the beer gardens in the old Jewish quarter, affluent natives and an ever-growing number of tourists just seem to be enjoying themselves.
There is no personality cult around Hungary’s leader, Viktor Orbán, who has been prime minister since 2010. Orbán has understood that authoritarian populism must never evoke images familiar from twentieth-century dictatorships: no violence in the streets, no knocks on doors by the secret police late at night, no forcing citizens to profess political loyalty in public. Instead, power is secured through wide-ranging control of the judiciary and the media; behind much talk of protecting hard-pressed families from multinational corporations, there is crony capitalism, in which one has to be on the right side politically to get ahead economically.
Like all populists, Orbán has no difficulty in presenting himself as an underdog fighting “the elites”—preferably “shadowy” ones that threaten the nation with their “globalist” networks. This past fall, the government waged a vicious campaign against the Hungarian-American hedge fund manager and philanthropist George Soros, alleging that his “empire” is bent on striking a “final blow to Christian culture.” It is worth remembering that Orbán was the first major European politician to endorse Trump (whose victory he celebrated as a “return to reality” in the face of political correctness and liberal hypocrisies). Hungary is of course not the US, but the country shows clearly how populists with enough power operate when in government.
Paul Lendvai, a Hungarian-Austrian journalist who spent several decades reporting on Central Europe for the Financial Times, has written a highly illuminating biography of Orbán, whom he calls “the ablest and most controversial politician in modern Hungarian history.” Orbán: Hungary’s Strongman also serves as a useful overview of Hungarian history since the fall of communism—after all, Orbán has been central to the country’s development since at least the late 1990s, when he was first elected prime minister. Lendvai portrays him as ruthless, absolutely relentless in the pursuit of power, and, on many occasions, outright vengeful.
Orbán has long cultivated the image of a man born to fight: his passions are for soccer and spaghetti Westerns. The avenger played by Charles Bronson in Once Upon a Time in the West is a particular favorite; he claims to have seen the movie at least fifteen times. He likes to brandish his “plebeian” origins and values: his family lived without running water; the children had to labor in the fields during school holidays. This picture leaves out the fact that Orbán’s father was the typical Homo Kádáricus, the product of “goulash communism” under János Kádár, who led the country from 1956 to 1988. Kádár had struck a tacit deal with Hungarian society: politics should be left to him, and in return people would not have to pretend to believe in communism; instead, they could find happiness in family life and even run small businesses. Back then, Western accounts of the country invariably contained the cliché of the “happiest barracks in the Eastern Bloc.” Part of an upwardly mobile rural middle class that both despised and served socialism, Orbán’s father became the head of the machinery department in a local farm collective. Orbán was a good student, and in the mid-1980s he joined the István Bibó College in the Buda hills, a kind of intellectual fraternity house for law students from the countryside. The college had been set up by the socialist regime, but some of the tutors teaching there were dissident intellectuals. Soros supported it financially.
In 1988, Orbán and other students set up the Alliance of Young Democrats (Fidesz). They took the word “young” literally: no one above the age of thirty-five was allowed to join. Their program was liberal, anticlerical, and suspicious of nationalism. Eventually, the Fidesz founders were to abandon these ideals for their exact opposites. But they never abandoned one another. Today the country’s president, the speaker of parliament, and the author of Hungary’s 2012 constitution all happen to be Orbán’s friends from university days.
Lendvai emphasizes the particular characteristics of this political brotherhood. They shared relatively humble origins in the countryside and grew resentful of the urbane intellectuals who tutored them. Some of these older liberals had formed a successful party, the Free Democrats, after the Kádár regime and, in the eyes of Orbán and friends, patronized the young firebrands as a not yet fully educated youth branch of their party. Whether the country boys split from the older liberals because they had a chip on their shoulder is debatable—after all, this story is just another version of the populist notion that the country is forever divided between “the real, rural Hungary” and the cosmopolitan (sometimes called “foreign-hearted”—i.e., Jewish) Budapest liberals. What is beyond dispute is that Orbán discovered that resentment could be turned to political advantage. As he put it in an interview, “By origin I am not a sensitive intellectual…there is in me perhaps a roughness brought up from below. That is no disadvantage as we know that the majority of people come from below.”
Orbán took up a Soros-sponsored scholarship to go to Oxford, where he set out to research the idea of civil society in the history of European political thought. But he cut his stay short to enter the fight for the leadership of Fidesz. He managed to purge all his opponents and radically altered the party’s program after Hungary’s major center-right party, which had formed the first government after the fall of communism, dramatically lost support. Orbán, nominally a Protestant, suddenly discovered religion and sought an alliance with the churches. He explained that he could not “talk to the people” if he did not understand the churches’ “important part in Hungarian life.” His party’s image utterly changed: the former long-haired student leaders began to advocate the ideal of the polgári, a civic-minded, patriotic bourgeois akin to the German Bürger, with a strong work ethic and a commitment to traditional family values. Evidently, that vision appealed to voters: in 1998, Orbán, at the age of thirty-five, became Europe’s youngest prime minister.
Hungary was then still seen as a leader in the process of “transition” from state socialism to a market economy and also as a model pupil of the European Union, which the country joined in 2004. It was the shock of Orbán’s political life when he unexpectedly lost the 2002 elections to a technocrat who had been nominated by the Hungarian Socialist Party, the successor party to the Communists. Initially Fidesz alleged election fraud. Orbán exclaimed that the nation simply could not be in opposition (thereby, like all populists, claiming that he and only he represented the people). The surprise was even greater as his government had showered welfare benefits on the electorate before election day—a practice that the new left-wing government would continue. It put Hungary on an unsustainable financial trajectory that nearly led to bankruptcy in 2008.
In 2010, power virtually fell into Orbán’s lap: the left had been discredited by a disastrous economic record and corruption scandals. Lendvai describes the Hungarian Socialist Party as “a disgusting snake pit of old Communists and left-wing careerists posing as Social Democrats.” Hungary’s peculiar electoral system ensured that the 53 percent Fidesz won at the polls translated into a two-thirds majority in parliament. Declaring that this had been no ordinary election but a “revolution at the ballot box,” Orbán proceeded to establish an Orwellian-sounding “System of National Cooperation.” He also reinforced Hungary’s “Trianon Trauma,” the country’s self-image as a great power that had been victimized by the West because of the post–World War I Treaty of Trianon, as a result of which the country lost two thirds of its territory and a third of ethnic Hungarians ended up in neighboring countries. And he had his party pass a new constitution that codified the nation’s Christian character in a preamble beginning with an appeal to God.
Already in 2009 Orbán had announced that the country was in need of a “central political forcefield” that would dominate politics for fifteen to twenty years. The major check on power in the two decades after 1990 had been the constitutional court. After 2010, Fidesz first packed it and then took away most of its powers. From his defeat eight years earlier Orbán had drawn the lesson that his government’s achievements had not been communicated “efficiently enough.” Accordingly, Fidesz now took over the public and most of the private media. The government also started a campaign against foreign banks and supermarkets, levying special taxes on them. This economic nationalism distracted from the fact that Hungary today has both the highest value-added tax in the EU and the lowest corporate tax—hardly policy choices one would associate with “plebeian values.”
Fidesz changed not only the state, the economy, and the culture; it also changed the people themselves. About a million ethnic Hungarians in neighboring states were given citizenship; meanwhile, for a variety of reasons, about 500,000 people left the country. Almost all the new citizens who participated in the 2014 elections voted for Fidesz, while the emigrants found it difficult to register at consulates in New York and London. In 2014, Fidesz received another two-thirds majority in parliament, even though its share of the vote had dropped from 53 to 45 percent. International observers, noting well-executed gerrymandering and the ruthless use of the entire state apparatus for pro-Fidesz propaganda, declared the election free but not fair.
Orbán now proclaimed his aim of creating an “illiberal state” based on the values of work, family, and nation (the very slogan that the wartime French Vichy regime had once adopted). He cleverly ran together the political and economic meanings of “liberalism,” leaving open whether he was propounding economic nationalism or something politically authoritarian. The latter interpretation was ever more plausible, as Budapest sought to strengthen ties with Russia, Turkey, Azerbaijan, Kazakhstan, and other illiberal states. Given memories of the Soviet and Ottoman occupations, Orbán’s “opening to the East” was hardly popular, but it allowed Hungary’s leader to present himself as a cunning underdog who would play East and West off each other, all to the Magyars’ advantage.
Domestically, Orbán’s vision of a dominant central force seemed to have been realized: the major opposition parties were the post-Communists and a new far-right party, Jobbik, the only major political organization not tainted by corruption. It seemed that the two could never unite against the government. Jobbik’s rise enabled Orbán to answer EU criticisms of his undermining the rule of law by warning of a horror scenario: if an overweening Brussels weakened him, he threatened, the EU might one day have to deal with real neo-Nazis in power.
Yet in the months after his triumphant reelection things went awry for the man now often referred to as the “Viktator.” Orbán fell out with one of his oldest friends from school, Lajos Simicska, a powerful oligarch and the brains behind Fidesz’s party finances. Simicska switched his allegiance to Jobbik, explaining that he could not tolerate Orbán’s cozying up to Putin. The two-thirds majority in parliament disappeared after an independent candidate won a by-election. And a new generation of Fidesz leaders, for whom the hard days at the Bibó College were familiar only from history books (which Fidesz now also fully controlled), made their luxury lifestyles all too conspicuous: their expensive watches screamed nouveau riche, as opposed to the discreet charm of the polgári. The party’s popularity plummeted.
Then Orbán hit on the issue that not only saved him from domestic troubles, but also made him a figure of real consequence in Europe. In the spring of 2015, the government decided to build a fence on the border with Serbia to keep out refugees, and it staged a “national consultation” on immigration. An enormous campaign for “yes” accompanied this exercise in fake direct democracy, and Orbán did not hesitate to invoke conspiracy theories to generate fear of people who, according to Fidesz propaganda, had to be either economic migrants or Muslim terrorists. The exact results of the “consultation” have never been revealed, let alone checked by independent observers.
Orbán’s strategy of presenting himself as the last protector of a Europe in which Christianity and the nation-state are sacred succeeded both domestically and internationally. At home, he outflanked Jobbik on the right. In the EU, Orbán managed to turn a conflict that should have been about institutions—could the EU tolerate the abolition of the rule of law in a member country?—into one about ideals: his “Christian national identity” versus what he derided as “liberal babble” from Brussels. Henceforth, critics of his attacks on the basic rules of liberal democratic governance were regularly dismissed as just having different, and subjective, values.
Few politicians outside Hungary were eager to take up Orbán’s call to wage a pan-European Kulturkampf. But plenty on the respectable center-right were happy to use him for their own short-term purposes: Bavarian conservatives celebrated Orbán at a meeting in a monastery in the fall of 2015 to make a show of their opposition to Angela Merkel’s refugee policies. The Christian Democrat Sebastian Kurz, who was sworn in as Austria’s chancellor in December, praised Orbán to prove his own toughness on immigration. Surely they all know that Orbán is in effect leading a far-right government in which religion is never about ethics—what we actually believe or do—but purely about identity: who we think we are.
As with Trump’s victory, Orbán’s success over the years does not demonstrate that right-wing populism is an unstoppable force. Rather, his victories have been enabled by the cynicism of center-right politicians in Europe who refuse to distance themselves from what is in fact a white nationalist government. German Christian Democrats, for instance, are less concerned about the rule of law in Hungary or other supposed “European values” than about major investments by automobile companies, such as Audi, the second-largest employer in Hungary, and Mercedes, both of which receive subsidies from the Hungarian state.
Are there limits to what Orbán can do? For years, there have seemed to be three red lines: conflicts with neighboring countries over their large Hungarian minorities, violence on the streets, and open displays of anti-Semitism. Orbán has by and large eschewed conflict with successor states to the Habsburg Empire. In fact, the more he has been criticized by Brussels, the more he has tried to build up the Visegrád Four (or V-4)—Poland, Hungary, Slovakia, and the Czech Republic—as a bloc that protects the supposedly real European values of Christianity and nationalism by refusing to take in refugees.
At home, Fidesz has been extremely careful to avoid anything that could look like serious human rights violations. When tens of thousands demonstrated in the spring of 2017 against the threatened closure of the Central European University (founded and endowed by Soros), the police were restrained. Free speech is not suppressed in Hungary, at least not openly; bloggers are free to criticize the government, and all kinds of debates can be staged in Budapest coffeehouses. The government seems to use other means to control speech. In 2015, Hungary’s largest left-leaning newspaper was bought by a dubious Austrian investor and, a year later, abruptly closed down, supposedly for financial reasons.
As my colleague Kim Lane Scheppele has emphasized, the very instruments that the West once considered crucial for a transition from socialism to liberal democracy—law and the market—have been used to establish a soft autocracy: after all, the creation of a new Hungarian constitution and Orbán’s capture of the judiciary were done in a procedurally correct manner, as one would expect from a party of clever lawyers. And the closing of the liberal newspaper was, ostensibly, caused by the market, not politics.
Without a functioning media, a government’s missteps, corruption, and embarrassments will not show up at all on screen or paper. Consider, for example, the mayor of Orbán’s hometown and one of his friends from primary school, Lőrinc Mészáros, who was an unemployed pipefitter a decade ago. He is now the fifth-richest man in the country, and his business has grown faster than Mark Zuckerberg’s. With disarming frankness, Mészáros once explained that “the good Lord, good luck and the person of Viktor Orbán have certainly all played a role” in his success. Orbán’s own family is listed in a Forbes report as being worth €23 million.
One reason for Orbán’s opening to the East—and his enthusiasm for strongmen from Azerbaijan to China—is that standards of transparency in business transactions are decidedly lower there than in the West. Construction in the beautiful new Budapest, often funded by the EU, also provides excellent opportunities for submitting inflated bills. But since 90 percent of the media is effectively controlled by Fidesz or its allies, most people will not be aware of these abuses. Local newspapers are now all owned by oligarchs close to the government—a situation that recently prompted the US State Department to make a grant available to support “fact-based” reporting in rural Hungary.
As Lendvai emphasizes, Orbán relishes conflict and positively needs enemies. While populist leaders use the pompous rhetoric of “National Cooperation,” what they really do is relentlessly create and recreate divisions in society. This partly explains the latest campaign against what Fidesz calls the “Soros Plan.” This plan, Fidesz has informed all eight million voters, mandates the transportation of a million migrants into the EU each year and would force states to be soft on crime committed by migrants. In billboards and TV ads, Soros has been portrayed as a grinning puppet master controlling not just left-liberal parties in Hungary but also the major EU institutions. This imagery evokes the worst anti-Semitic stereotypes from European history, drawing from conspiracy theories favored by the Nazis: the Jewish financier as the evil genius behind Bolshevism. Orbán has compared the “Soros Empire” to the Soviet Union and alleges that, together with “Brussels bureaucrats,” this evil empire is forging an alliance “against the European people,” as “Europe is currently being prepared to hand its territory over to a new mixed, Islamized Europe.” A faithful Fidesz deputy felt compelled to attribute the “Soros Plan” to Satan himself.
Once again, the campaign is designed to outflank Jobbik on the right, since it is perceived as the most significant threat to Fidesz in the spring 2018 elections. But it also serves to justify an attack on the remnants of civil society. NGOs that have benefited from grants given by Soros’s foundations have exposed government scandals. A law passed earlier this year forces all NGOs that receive more than €24,000 from abroad to declare themselves as “foreign-supported.” Orbán also ordered the secret services to investigate these NGOs, claiming that they could pose a threat to “national security.” In Orbán’s rhetoric, Hungary is locked in a fight with Soros for nothing less than its national existence.
The EU has reacted helplessly to such Putin-like measures. Orbán has gloated that in response to criticisms from Brussels, he has performed a “peacock dance”: pretending to listen, making cosmetic adjustments to laws, and then proceeding as planned with the consolidation of power. His regime has been possible not despite but because of the EU. When measured in relation to GDP, Hungary is the top recipient of EU funds, which have contributed decisively to the country’s economic growth and which are to the regime what oil money is to Arab despots: a free resource that can be distributed at will to buy political support and strengthen Fidesz oligarchs. In effect, the EU finances its most vocal internal enemy, an enemy who says he feels more at home with politicians in Astana, the capital of Kazakhstan, than in Brussels. Open borders for one’s own citizens and closed borders for refugees are an ideal combination for Orbán: the former ensures that frustrated citizens can just leave (and probably will have no time or energy left to organize political opposition after waitressing in London or Berlin for ten hours a day).
Lendvai is unsure of how to classify Orbán’s regime. Like any successful political movement, Fidesz has produced ideologues. But its right-wing think tanks have contributed little more than statements such as, “If something is done in the national interest, then it is not corruption.” Meanwhile, Orbán has become a hero of the far right all over the world, with fans such as the Republican US congressman Steve King, who tweeted that “Orban has uttered an axiom of history and of humanity. Western Civilization is the target of George Soros and the Left.” A conference about the future of Europe, organized and financed by the Hungarian foreign ministry to mark the country’s presidency of the V-4 in 2017–2018, featured among the invitees Milo Yiannopoulos and Götz Kubitschek, a leading figure of the German far right whom even Bavarian conservatives would not touch with a barge pole. Fidesz’s vision of the V-4 appears to be a kind of Disneyland of the far right: Christianity reigns supreme, no Muslims are allowed, the traditional family triumphs. (Orbán’s cabinet contains exactly zero women; according to the prime minister, females are just not tough enough for politics.)
Like other populist leaders, Orbán presents his government as being based on direct democracy (on account of the frequent, highly manipulative “national consultations”) in contrast to what he dismisses as “liberal nondemocracy.” Some critics have called Hungary fascist, but the system is clearly not—after all, the government does not seek to mobilize people, encourage mass violence, or demand total ideological conformity; in that regard, it actually resembles Kádárism (under which the discontented were also readily given passports).
In the end, Lendvai settles on the term “Führer democracy” to emphasize the extraordinary centralization of power in the Viktator’s hands. And he endorses the idea of the “mafia state,” a term coined by the Hungarian sociologist Bálint Magyar to suggest that the reign of Fidesz has little to do with political ideas, but is simply a means for a “political family” to plunder the country under the protection of its godfather. Lendvai’s characterization of Orbán as capable of adopting any belief according to political expediency chimes with that interpretation.
The last chapter of Lendvai’s book is entitled “The End of the Regime Cannot Be Foreseen.” What seems foreseeable is another victory for Fidesz in 2018. The opposition remains divided and largely demoralized. Apart from Fidesz, the successor party to the Communists still has the best political infrastructure, but not much moral credibility. Lendvai observes that socialists are the only Hungarian politicians who appear in the Panama Papers. This certainly helps to make the common Fidesz claim that “all sides steal” more believable.
Surprisingly, Fidesz lost a by-election in a stronghold in southern Hungary at the end of February, after all opposition parties had effectively backed an independent candidate. If Orbán were to lose a national vote, would he really go quietly, or is he, like Putin, determined to die in office? Many of his cronies, apparently wishing to live like nineteenth-century magnates, have acquired huge landed estates. Hungary now has, according to Lendvai, a concentration of land ownership unprecedented in modern Europe. As far as a peaceful transition to democracy is concerned, this is worse than money stashed away in the Caribbean. But however the story ends, a country that already has a deeply tragic view of its own history will one day have to come to terms with the years lost to a kleptocracy veiled in national colors.
One day around 26,000 years ago, an eight-to-ten-year-old child and a canine walked together into the rear of Chauvet Cave, in what is now France. Judging from their tracks, which can be traced for around 150 feet across the cave floor, their route took them past the magnificent art for which the cave is famous and into the Room of Skulls—a grotto where many cave-bear skulls can still be seen. They walked together companionably and deliberately, the child slipping once or twice, as well as stopping to clean a torch, in the process leaving a smear of charcoal.
It’s nice to imagine that the pair’s Huckleberry Finn–like exploration became the stuff of legend in their clan, for at the time Chauvet Cave’s recesses were abandoned, its art and cave-bear bones were already thousands of years old, and soon thereafter a landslide would seal the cave entrance. Whatever happened, the pair’s adventure certainly became famous in 2016, when a large radiocarbon dating program that included the smear of charcoal discarded by the child confirmed that the tracks constitute the oldest unequivocal evidence of a relationship between humans and canines.*
You might think that fossil bones and ancient DNA would allow scientists to trace our relationship with canines through the transition from wolf to dog, but this is not straightforward. Over thousands of years of domestication, canine DNA has become hopelessly mixed, and even the most complete Ice Age canine skeletons cannot be absolutely identified as wolf or dog. A 36,000-year-old canine skull from Goyet Cave in Belgium illustrates some of the problems confronting researchers. It is the earliest dog-like skull ever found, being relatively small and short-faced, as dogs’ are, but wolves’ generally are not; yet genetic analysis reveals that it is not closely related to any living wolves or dogs.
Genetic studies of wolves and dogs indicate that their lineages split between 30,000 and 40,000 years ago, and the limited archaeological evidence suggests that the split occurred in Europe. Cattle, sheep, goats, and pigs were all first domesticated much later—beginning around 10,500 years ago—in the Near East. The realization that humans and dogs have been companions for at least 30,000 years has prompted a reconsideration not only of the relationship's origins but also of its consequences.
In The First Domestication, Raymond Pierotti and Brandy Fogg argue that important insights into the origins of the canid–human relationship can be gained from studying the relationships between various indigenous peoples and wolves, which they claim are often ones of mutually profitable coexistence. One example concerns the Blackfoot people (NiiTsitapiiksi) of what is now Montana and parts of western Canada. They “were fond of wolves as companions,” sleeping on wolfskins and singing songs to encourage wolves to join them in a hunt. Numerous American indigenous cultures, it seems, have stories about wolves and humans sharing food and cooperating in other ways. The corpus of stories provides Pierotti and Fogg with evidence indicating “solid cooperation and a form of reciprocity…linked to guiding and the sharing of food.” Intriguingly, the stories often involve wolves helping women, and in Pierotti’s experience wolves and wolf/dog hybrids have a natural affinity for women that is only rarely seen when they interact with men.
Pierotti and Fogg believe that humans and wolves, both being large, intelligent social carnivores, were preadapted by evolution to cooperate with each other. As attractive as this idea is, it cannot be the whole story, for humans and social canids have coexisted for hundreds of thousands of years, yet there is no genetic or archaeological evidence of a bond forming until around 30,000 years ago. So we must ask ourselves why, if we are predisposed to bond, our species and African wild dogs, which coexisted for hundreds of thousands of years, did not form a bond; and also why Neanderthals and Denisovans, which coexisted with wolves for half a million years, likewise did not do so. And most intriguingly, why did modern humans, who coexisted with wolves after leaving Africa 60,000 years ago, not form a bond until 30,000–40,000 years ago, after they entered Europe?
An important clue may exist in the nature of those first Europeans, for they were very unusual hybrid beings that resulted from the interbreeding of Neanderthals and humans. Until new migrations arrived in Europe 14,000 years ago, Europeans had a high proportion (around 8 percent) of Neanderthal genes, a figure that in modern populations is reduced to around 2 percent. As the well-documented phenomenon of hybrid vigor illustrates, hybrids can be highly distinctive, exhibiting characteristics seen in neither parent, and it is worth investigating the possibility that the Neanderthal/human hybrids interacted with canids in novel ways that led to domestication.
Conventionally, when we think of domestication, we envisage a process driven by humans and imposed on the creature being domesticated. But Pierotti and Fogg reframe it as an evolutionary process that was driven by cooperation and mutual benefit, and that profoundly shaped both species. They outline their concept in a hypothesis based on “thirty years of research experience watching wolves and their interactions with humans and with one another combined with careful studies of the traditions and attitudes towards nature of Indigenous peoples in North America.” The hypothesis begins with a subordinate, pregnant female wolf that decides to make her den adjacent to a human camp. A human female feeds her, and her cubs grow up with humans nearby, allowing the humans and wolves to learn how to cooperate.
This cooperative model of domestication is new, and it challenges the more conventional view that domestication was initiated by humans who took wolf cubs back to their camp, where they were fed and grew up in a human society. Given the scant archaeological record, either view could be correct, but what is really striking about Pierotti and Fogg’s hypothesis is the idea that once the relationship began, it proceeded as a form of coevolution that had a significant effect on human social structures. They even go so far as to write:
There are…indications that human social and even ethical systems—which many admire and hold, at least in theory, to be the highest achievement of humanity—were invented first by early canids.
In support of this idea they cite researchers arguing that “wolves had a powerful influence on the social ethics of early human groups through a kind of ‘lupification’ of human behavior.” These are startling and novel ideas, but without knowing more about the social structures of humans who lived without dogs (such as Tasmania’s Aborigines), it is difficult to assess how “lupified” we are relative to our ancestors before they associated with wolves.
One of the most persuasive of Pierotti and Fogg's arguments is that the relationship between humans and dogs can be divided into two phases: the period between around 40,000 and 15,000 years ago, which left little trace in dog anatomy, and a subsequent, more intense phase that led to rapid changes in it. Our understanding of the early phase is shadowy because it involved mostly behavioral changes that have left little in the fossil record. The latter phase, however, is marked not only by the appearance of anatomically distinctive dogs but also by burials of dogs with humans, which attest to the growing bond between the species. There is even some genetic evidence that around this time a second, independent domestication of wolves occurred in East Asia. With the domestication of cattle, sheep, goats, and crops, the human–dog relationship changed yet again, for diets became less rich in meat, and there were flocks to guard. Finally, around 4,000 years ago, human selection of dogs for various traits intensified, and the earliest breeds, including greyhound-like types, emerged.
The relationship between canids and humans continues to evolve, and today its most fraught aspect involves interactions between that most independent canid—the wolf—and rural people. Attitudes toward wolves are at their most extreme in the US, where some people of European descent are deeply hostile to wolves in a way that is disproportionate to their potential to cause physical or economic harm. Pierotti and Fogg blame this wolf-hatred on Christianity, which they claim demonized wolves because “the Christian Church decided that many humans were living too close to nonhumans on a respectful basis.” By way of proof, they claim that the greatest haters of wolves in America tend to have “strong religious backgrounds.”
Distinguishing a dog from a wolf is not always straightforward, and because several US states have laws prohibiting the keeping of wolves or wolf/dog hybrids, expert testimony is often called upon to determine whether an animal is a dog or wolf. Pierotti has been called as a witness eighteen times; because he has kept wolf/dog hybrids and studied both wolves and dogs extensively, his testimony is highly valued. In many cases, he has been able to demonstrate that the canid in question has no admixture of wolf genes. But as he points out, there is a larger question here, for the laws assume that wolves are more dangerous than dogs, when in fact the reverse is true. In the US between 1979 and 1996, more than three hundred people were killed by 406 dogs, and only fifteen of these instances involved purported wolf/dog crosses, some of which, according to Pierotti, are "highly questionable." The situation with purebred wolves is even more clear-cut, for there is not a single example of a wolf in nature killing a human in the entire history of North America.
It is alarming, although perhaps predictable, that in the US attitudes toward wolves are becoming divided along political lines, with Republicans continuing to introduce and support laws that restrict or persecute them, while First Nations people and many Democrats are much more tolerant, or even encouraging, of the animals. In Europe, by contrast, a more easygoing relationship is being established, as was demonstrated in 2016 when one of the first wolves to enter the Netherlands in centuries crossed the border from Germany. It wandered the streets of a village for several days, the sanguine Dutch displaying little more than minor concern. Elsewhere in Europe, however, where wolves cause more harm to livestock, schemes are in place whereby farmers are compensated by the EU for losses to their flocks. Conflict between wolves and humans in Europe will doubtless continue, but in Europe wolves have a bright future, in part because most landowners seem willing to abide by laws and regulations formulated to address their interactions with the animals. In the US, however, wolves continue to be shot whenever they encroach onto private land. As a result, today more wolves can be found in Europe than in all the US, including Alaska, despite the fact that Europe’s human population is more than twice as large as that of the US.
Profound insights into how dogs evolved from wolves come from a remarkable, multidecade experiment on foxes that was carried out under the supervision of the Russian geneticist Dmitri Belyaev from the 1950s onward. Because much of the research was published in Russian, How to Tame a Fox, which is cowritten by Lyudmila Trut—a central figure in the project over many decades—will be widely welcomed for the extraordinary detail it contains. Initiated at a time when Soviet science was suffering under the malign influence of Trofim Lysenko, whose false ideas on genetics and how to increase crop production found favor with Stalin, research of this type was extremely dangerous to conduct. Indeed, Dmitri Belyaev's brother Nikolai, an acclaimed plant geneticist who opposed Lysenko's theories, was arrested and executed in 1937.
This seems to have left Dmitri more determined than ever to see Russian genetics regain its international reputation for excellence. At first, he had to cloak his research as an attempt to increase production at fur farms; but after Lysenko fell from grace, and following Stalin’s death in 1953, he could work more freely. By the 1970s genetics was once again a respectable science in the Soviet Union, and geneticists in the West began to hear of Belyaev’s remarkable work. With international collaboration again possible, his astonishing results began to be independently verified.
Belyaev's experimental method was simple in the extreme. Out of the thousands of silver foxes held at a fur farm, he simply selected the ones that were calmer than normal in the presence of humans. After just a few generations of selective breeding, some offspring of these slightly tamer foxes started to seek out human company. Breeding these individuals produced foxes that showed changes in their reproductive systems that are typical of domesticated animals, which often bear more than one litter per year. Astonishingly, a few of the selected foxes even began to wag their tails and bark—characteristics otherwise seen only in dogs. Eventually foxes were produced that had varied color patterns in their fur, curly tails, and floppy ears, all of which are characteristic of domesticated animals, but not wild ones. A few even began vocalizing with a sound that was reminiscent of human laughter. None of these traits had been selected for—Belyaev's team selected only for a fox's degree of comfort around humans.
Extensive checks, including experiments in which embryos from normal foxes were transplanted into the uteruses of selected females, were undertaken, in order to demonstrate that genes rather than learned cues were responsible for the changes. Despite the undoubted genetic basis of the change, the precise mechanisms remain poorly understood, though clearly hormonal regulation—particularly of stress hormones—is important. By demonstrating that a simple selection mechanism could, over an exceedingly brief time, have such a large effect, Belyaev’s experiment had a major impact on our understanding of how the dog–human relationship began.
In a Stone Age camp, humans would have had no control over adult canids. But if a few canids had lower levels of stress hormones and did not run away from humans in fear upon reaching adolescence, they may have formed a self-selecting group that developed a distinctive, human-associated ecology. As long as they remained genetically more or less isolated, domestication should have proceeded rapidly. The conundrum is thus why domestic dogs didn't emerge earlier. Pierotti and Fogg posit one possible answer: they think that wolves, rather than domestic dogs, were better partners for Ice Age humans. With important new theories challenging orthodoxy, the origins and consequences of the dog–human relationship look set to become a major topic of research.
The contemporary relationship between people and their dogs results from the long coevolution traced by Pierotti and Fogg, as well as genetic changes similar to those seen in Belyaev’s foxes. In some instances, the dog–human relationship can be deep—some would argue as deep as that between two humans. But do humans and dogs think in similar ways? Until recently the question seemed unanswerable. The American philosopher Thomas Nagel summed the situation up in his famous paper “What Is It Like to Be a Bat?,” in which he argued that the perceptions and experiences of bats and humans are so different that humans can never know the bat’s perspective, and vice versa. It’s an argument that’s been used to dismiss the idea that humans can know what it is like to be any animal species.
The advent of technologies like functional magnetic resonance imaging (fMRI) has created new ways to approach the question, and Gregory Berns, a neuroscientist with a deep love and respect for dogs, has spent his career using these new tools to understand how animals, and particularly dogs, think. The development of fMRI was a breakthrough because it potentially allows researchers to determine if the same structures are used for the same function in human and animal brains. As Berns puts it:
Where structure-function relationships in an animal’s brain are similar to those in our brains, it is likely that the animal is capable of having a similar subjective experience as we do. This, I believe, is the path toward understanding what it’s like to be a dog, or a cat, or potentially any animal.
The task Berns set himself turns out to be extremely challenging. Just getting dogs to enter and stay still in fMRI machines requires months of training, while fine-tuning the experiments to ensure that the results are genuine takes years. An important advance came when Berns designed and conducted a definitive "marshmallow test" for dogs. The marshmallow test measures an individual's ability to delay gratification. It was first conducted in the early 1970s on children as young as four, who were offered a choice of treats—one preferred and one less so. The experimenter would then leave the room and return in fifteen minutes with the preferred treat. At any time, however, the child could ring a bell, and the experimenter would return with the less preferred treat. In the 2000s brain scientists scanned the brains of the children (by then grown) who had taken the original test, revealing that those who had demonstrated an ability to delay gratification had increased activity in a part of the frontal lobe known as the inferior frontal gyrus.
The need for the dogs to remain motionless in the fMRI machine limits the kinds of experiments that can be conducted, and as Berns began to construct his marshmallow test for dogs he experienced several false starts. He eventually settled on an approach that involved first training the dogs to understand that when they heard a whistle they would get a treat, and then training them to learn that, at a hand signal given by their trainer, they had to lie still in the fMRI even when they heard the whistle, after which they would get the treat. The tests turned out to be far more difficult to conduct than you might think. But after several modifications, the researchers became confident that they were testing the dog’s degree of self-control.
Scans of the brains of the dogs that passed the test showed that activity was focused in a small area of the frontal lobe—a location similar to that activated in humans who had passed the marshmallow test. That this specific behavior is associated with the same general area of the brain in humans and dogs, whose brains differ in many ways, was, Berns concluded, proof that we can know what it is like to be a dog—at least with respect to delayed gratification. Over the years, Berns and his colleagues have investigated many species and brain areas, and it is astonishing how very different species use similar areas of the brain for the same tasks. Nagel, Berns concludes, was wrong: we can know what it is like to be a bat, for “echolocation was just an amplification of perceptual skills that humans also possessed.”
The growing recognition that brain function in humans and animals can be similar is challenging the ways we interact with and use animals, and in order to investigate this Berns takes us back to an incident that occurred while he was in medical school. “Dog lab,” he says, “was supposed to demonstrate how various drugs affected cardiovascular function.” The students were divided into groups of four, each of which was presented with an anesthetized dog whose chest had been opened by a surgical assistant, so the students could observe its heart and lungs as various drugs were administered. At the end of the session, Berns severed the aorta of the dog his group was using, killing it instantly, because the alternative was watching the dog linger while an injection of potassium chloride killed it. Participation in the exercise is, Berns tells us, “one of the deep regrets of my life…. The lab didn’t make me a better doctor, and it diminished me as a human being.”
By way of summary, Berns writes, “With similar brain architectures for the experience of joy, pain, and even social bonds, we can assume that animals experience these things much like we do, albeit without the words for those subjective states.” Modern humans may need Berns’s science to convince us of the similarity between us and other animals, but I suspect that those Neanderthal/human hybrids who entered into a relationship with wolves tens of thousands of years ago already knew it.
Anita Quiles et al., “A High-Precision Chronological Model for the Decorated Upper Paleolithic Cave of Chauvet-Pont d’Arc, Ardèche, France,” Proceedings of the National Academy of Sciences, Vol. 113, No. 17 (April 26, 2016). ↩
Kenneth Clark was once the most celebrated art historian in the world, but his star began to fade in the 1980s when a new generation of scholars rejected the object-based scholarship he epitomized and began to study works of art using Marxist, feminist, and psychoanalytical theory. When Clark placed a painting or a building in its historical setting, it was to understand more fully how and why it was made, and what it meant to those who first saw it.
Theory-based art history takes the opposite approach: broadly speaking, the scholar is interested in the work of art not as an end in itself but for what its making might tell us about the society that created it, particularly its attitudes toward subjects like race, gender, and social inequality. This kind of art history is taught in most universities on both sides of the Atlantic today. The scholarship Clark represented survives mainly in some museums and exhibition catalogs. Whereas his books were once required reading in undergraduate courses, many are now out of print. Civilization, the television show that introduced millions of people around the world to art history and lit the spark that led to the mass popularity museums and galleries enjoy today, is largely forgotten.
A few years ago, it looked as though Clark’s achievement was well on its way to being lost. Then, in 2014, a delightful exhibition at Tate Britain sparked new interest in him by telling the story of his life through hundreds of works of art selected from the thousands he compulsively collected and commissioned, either for his private collection or for public institutions. With paintings, works on paper, sculpture, and ceramics ranging from Bosch to Bloomsbury, Cézanne, Rodin, and Henry Moore, the exhibition was further enlivened by the inclusion of quite a few of the fakes, duds, copies, and misattributions Clark acquired. The result was an unexpected hit with the public at a time when dreary, incoherent exhibitions curated by theory-based art historians were attracting critical opprobrium and public indifference.
James Stourton’s magnificent biography tells the story of Clark’s life in all its complexity and contradiction. It also reminds us that in his time Clark himself developed an innovative method for studying works of art—one that struck a balance between the then-prevailing disciplines of connoisseurship on the one hand and iconography on the other. And just as the Tate Britain exhibition showed the misses as well as the hits, the story Stourton tells makes it clear that Clark’s apparently gilded career was marked by almost as many failures as successes. The time has come to look at the achievements of a man whose vision influenced the art-viewing habits of generations.
Born in 1903 in London, the only child of a colossally rich heir to a textile fortune in Paisley, Scotland, Kenneth Clark was not a member of the upper classes. His parents were, in Stourton’s phrase, “in the mezzanine floor of English society—no longer trade, landed but not gentry.” They spent their time moving from one large ugly house to another in Scotland, Suffolk, and the South of France, and in each residence settled down to do absolutely nothing except kill birds and small animals. “My parents belonged to a section of society known as ‘the idle rich,’” Clark tells us on the first page of his autobiography, “and although, in that golden age, many people were richer, there can have been few who were idler.”
When he was not drinking or gambling, his father shot, sailed, and fished—and expected his son to do the same. But Kenneth was a born aesthete, much happier rearranging pictures of highland cattle by Rosa Bonheur in his father’s collection than blasting pheasants or standing for hours knee-deep in a trout stream. Clark Senior indulged his son’s interests in music and art and even approved of his early ambition to become an artist.
To have philistine parents providing plenty of money but neither culture nor direction must be the ideal background for an aesthete. Like his father, Clark grew up to become a sybarite and bon vivant—but with one important difference: from an early age, he was addicted to analytical work of a sort more usual in the realms of business administration than of art patronage.
Art history was not then taught in British universities. At Oxford Clark therefore read history, but learned about art and architecture by attaching himself to a series of older mentors. One of the first was the keeper of the Ashmolean Museum, Charles Bell, who offered his protégé unrestricted access to its drawings collection, with its incomparable holdings of sheets by Michelangelo and Raphael. In 1925, when Clark was in his final year at Oxford, Bell escorted him to Italy and there introduced him to his next and most important mentor, Bernard Berenson, the most famous connoisseur of Italian Renaissance art in the world.
In the early years of the twentieth century, art history was still in its infancy in Britain. Since relatively little was known about even major painters of the Renaissance, the foundation of art historical knowledge was connoisseurship, and specifically a method of attribution developed in the mid-nineteenth century by an Italian historian, Giovanni Morelli. By looking at thousands of works of art and concentrating on minute details (an earlobe, a big toe, fingernails), the connoisseur trains the memory to recognize an artist’s “hand”—and in doing so builds up a coherent picture of his or her artistic personality.
At their first meeting, Berenson offered the untrained student the opportunity of a lifetime—the job of assisting in the preparation of a new and updated edition of his Drawings of the Florentine Painters, first published in 1903. The volume for which Clark was responsible consisted of alphabetical lists of artists followed by the drawings Berenson attributed to them—but no explanation of the reasoning behind the attributions.
His task was to review the original attributions in light of published and unpublished material that had emerged in the two decades since the first edition. Berenson had of course seen firsthand the works of art he’d attributed. But he had also revolutionized the use of photography for art historical study. This meant that Clark worked mostly in the photo library at Berenson’s villa, I Tatti, near Florence. Though he learned a lot from Berenson, the tedium of his research disillusioned him with the formal analysis of works of art.
Although their collaboration on the Florentine drawings ended in May 1929, the friendship between the two men lasted until Berenson’s death in 1959, largely maintained by letter. Robert Cumming’s impeccable edition of their correspondence amounts to more than just the chronicle of a friendship.1 The thoroughness of his annotations, year-by-year chronologies, and detailed biographical sketches even of the minor characters who flit through this correspondence transform the otherwise disappointing content of the letters into an invaluable work of reference.
In the same year he parted from Berenson, Clark believed he’d found an alternative to the drudgery of connoisseurship when he attended one of the last lectures by the German scholar Aby Warburg. Warburg, after whom the institute in London is named, focused not on the authorship of works of art but on their content. He pioneered the study of the origins and meaning of symbols and how they were used by medieval and Renaissance artists to transmit ideas. Clark was bowled over. “Thenceforward,” he wrote, “my interest in ‘connoisseurship’ became no more than a kind of habit.”2 But soon he came to realize that for Warburg’s followers, a picture’s aesthetic quality was of only passing interest. They were as concerned with crudely drawn, cheaply printed broadsheets as they were with paintings that hung in the Louvre. For them, as for art theorists of our time, a work of art was not something to be understood or enjoyed for its beauty or because it moves or enlightens us, but something that opens a window onto the preoccupations of its time—for example, the study of alchemy in the late Middle Ages or the study of religions in the ancient world. Years later, after spending a day at the Warburg Institute, Clark complained that
all those dim wraith-like figures in corners of the book stores silently turning the pages of books on iconography seem like ghosts in a Hades of futility. I can see no life-principle in their labours, and I cannot even use their conclusions, because if they ever do publish anything they have forgotten what they are looking for.
Clark’s Damascene moment occurred in front of a work of art on his first visit to Giotto’s frescoes in the Arena Chapel in Padua, when he found himself responding not to the artist’s ability to create the illusion of three dimensions on a flat surface (what Berenson termed “tactile values”), or to the paintings’ symbolic content, but to Giotto’s “life-enhancing representations.” By this Clark meant the artist’s genius for building dramatic narrative and expressing emotional nuance through gesture, pose, and facial expression.3
In most of his articles, books, lectures, and broadcasts from the late 1920s onward Clark synthesized formalist and iconographical approaches to the study of art with historical understanding to create a method of inquiry that is uniquely his. He first asks who, what, when, and where the work was made, then questions why and under what circumstances the artist made it—and, crucially, how it was understood by those who first saw it. Clark always relates an artwork to its historical precedents and assesses the degree to which it conforms to or departs from earlier representations of the same subject.
Here he is in chapter six of his best book, The Nude (1956), discussing representations of the dead Christ in late medieval and Renaissance art. After looking at a French fifteenth-century altarpiece in which Christ’s stiffening corpse is shown stretched horizontally across his mother’s lap, he turns to the Italian tradition as exemplified in works by Donatello and Giovanni Bellini. Both show Christ’s dead body in half length, supported in an upright position by weeping angels. Christ’s face and upper torso are shown frontally as in an icon or like a variation on the theme of the Ecce Homo, “behold the man,” Pilate’s words when displaying the bound Christ to the crowd calling for his crucifixion. Then Clark turns to Michelangelo’s Pietà in St. Peter’s:
[Michelangelo] has accepted the touching northern iconography of the subject, the Christ stretched out on His mother’s knees, and yet has given to Our Lord’s body such an extreme refinement of physical beauty that it makes us hold our breath, as though to suspend the action of time. Michelangelo has adapted antique perfection to its northern setting by giving the body a rhythmic structure the reverse of that in the Gothic Pietà. Instead of rising in a series of angular gables, it sags like a garland.
In three sentences Clark combines art history, art criticism, and precise observation to convey the intensity of his emotional response to the statue. Notice his use of the term “Our Lord” to invite the reader to look at the Pietà in the same way those who first knelt in prayer before it did. What distinguishes Clark from Berenson, Warburg, and most of his art historical contemporaries is his respect for the spirituality of Michelangelo’s conception, coupled with his wholly secular delight in the way the sensuous nude carved in marble “sags like a garland.” Clark brought to the discipline of art history not just an encyclopedic knowledge of artists and their works but something less quantifiable—the ability to enter into an artist’s imaginative world.
Early in 1929 Clark returned to London with no idea of what he might do for the rest of his life. Almost at once—and by sheer chance—a career path opened when he was asked to install the ambitious loan exhibition of Italian Renaissance paintings that Mussolini sent to London as a gesture of good will. Clark had the unprecedented experience of hanging on the walls of the Royal Academy Masaccio’s Crucifixion, Botticelli’s Birth of Venus, Giorgione’s Tempesta, and Piero della Francesca’s Duke and Duchess of Urbino.
The success he made of the exhibition kicked off a fifteen-year career as a museum professional for which he had no obvious professional qualifications. In 1931 he succeeded Charles Bell as keeper at the Ashmolean. Three years later came his appointment as director of the National Gallery, aged thirty. And in 1935 he was commissioned to catalog more than six hundred drawings by Leonardo da Vinci in the Royal Collection at Windsor Castle.
The first thing he did at the National Gallery was to install electric lights—forty years after Bond Street picture dealers began the transition from gas to electricity. Visitors could at last see the collection on fog-bound London days, and the building now stayed open three evenings a week. Clark also rehung the entire collection, taking immense care to select sympathetic wall colors, arrange the pictures by school, and declutter galleries that hadn’t changed much since before the Great War.
Until his thirteen-part television series Civilization aired in 1969, Clark was probably best known in Britain as the wartime director of the National Gallery who removed its holdings to a cave in North Wales and then, during the Blitz, invited Dame Myra Hess and other musicians to perform in the empty galleries before daily audiences of a thousand or more.4 In the long term, though, Clark’s most important innovation was his effort to make the National Gallery’s collection more accessible to visitors by extending its hours and publishing inexpensive guidebooks aimed at nonspecialist visitors.5
Like a modern museum director, he built personal relationships with collectors and dealers with the aim of persuading them to give or bequeath their treasures to the gallery. In the case of the Armenian wheeler-dealer Calouste Gulbenkian, Clark came within a whisker of bagging not only the most important private collection of fine and decorative art in Europe, but also an extension to the building in which to show it, together with an endowment that would have made the National Gallery the richest institution of its kind in the world.
But negotiations were suspended during the war, and in 1945 Clark resigned as director. Incredibly, his successor, Philip Hendy, refused even to meet Gulbenkian. Whatever the reason—and snobbish distaste for a foreigner with oil interests in the Middle East is the most likely explanation—a great prize was lost. The Gulbenkian Foundation is today housed in Lisbon.6 It is hard to read anecdotes like this without sympathizing with Clark’s antipathy for most of his curators at the National Gallery and for a fair number of his fellow art historians.
They in turn dismissed him, in Berenson’s words, as “un grand vulgarisateur.” His studies on Leonardo, the nude, landscape painting, and Piero were not based on original research, nor was he interested in the nuts and bolts of cataloguing: measurements, dating, chronology, provenance, and exhibition history. Apart from the respect accorded to his impeccable catalog of the Leonardo drawings at Windsor (1935), curators considered him a slapdash popularizer. When Clark said that his colleagues “loathed the sight of me,” he was certainly right.7
Despite his public image as a man of great learning, influential art historians like Anthony Blunt, John Pope-Hennessy, and Michael Levey dismissed his scholarship as lightweight. Though these mandarins were careful not to say to his face what they thought, their contemptuous views of his work trickled down to university classrooms. I absorbed those views when I first studied art history in the 1960s.
Clark was rich, handsome, and conspicuously clever—not qualities that endeared him to scholars, many of whom lived constricted lives on the margins of society. As a social figure, he reveled in what he later dubbed “the Great Clark Boom” of the 1930s—when as Sir Kenneth and Lady Clark he and his glamorous wife Jane crested the heights of British society and together formed intense friendships with Henry Moore, Graham Sutherland, John Piper, Ben Nicholson, Edith Sitwell, William Walton, and Vivien Leigh—to list only a few of the artists, composers, musicians, poets, dancers, and actors who were guests at their palatial residence in Portland Place near Regent’s Park.
After resigning both from the National Gallery and from the Royal Collection in 1945, Clark never worked in a museum or gallery again. In the decades to come he lectured and wrote about art while serving as chairman of the fledgling Arts Council and of the Independent Television Authority (ITV), the commercial channel set up in 1954 to challenge the monopoly of the BBC. In neither post did he find either success or fulfillment. By nature, Clark was impatient with bureaucracy, accustomed as he was to getting things done over lunch with a friend or with a telephone call to the right person.
Stourton concludes that Clark’s leadership of the Arts Council is “unlikely” to have “made much of a difference,” and after three years at ITV his contract was not renewed. But the contacts he made during his time as a TV executive led directly to what is generally seen as his most enduring achievement, the thirteen-part BBC series Civilization. That, as well as starry appointments such as founding trustee of both the Royal Opera and the Royal Ballet, culminated in his ennoblement as Lord Clark of Civilization in 1969 and the sort of celebrity usually accorded to pop stars. When he died in 1983, he had long been a household name.
Because he was a follower of Berenson and then a museum director, Clark was usually labeled an art historian, although one who wrote popular books for nonspecialists. But his purpose when writing about art was to evoke in words his personal insights and responses to buildings and paintings that brought a lump to his throat. The term for this is not “art historian” but the one he gave as his occupation at the time of his marriage in 1927: “art critic.” Most of his books are best described as critical essays, and Civilization is subtitled “A Personal View.” Try to imagine those words attached to the title of Blunt’s magisterial study of Poussin or Pope-Hennessy’s of Donatello, and you see the difference between a historian and a critic. A fundamental misapprehension has grown up around Clark. If only he’d been recognized as an art critic—or better still as a journalist—I think he’d have been seen differently and let off more lightly.
Stourton reminds us again and again that the chief literary inspiration behind Clark’s work was John Ruskin. Clark’s favorite among his own books was an anthology of Ruskin’s writings entitled Ruskin Today, and the very first words he says in the first episode of Civilization are Ruskin’s:
Great nations write their autobiography in three manuscripts: the book of their deeds, the book of their words and the book of their art. Not one of these books can be understood unless we read the two others, but of the three the only trustworthy one is the last.
In the episodes that follow he makes no claim to objectivity. The historical information he provides is vitally important, of course, to his understanding, but his poetic descriptions, critical insights, and audacious, sweeping generalizations are what held his audiences spellbound. Thus the sculptured figures of kings and queens on the central doorway of the west portal at Chartres show “a new stage in the ascent of Western man” because the “refinement, the look of selfless detachment and the spirituality of these heads is something entirely new in art. Beside them the gods and heroes of ancient Greece look arrogant, soulless and even slightly brutal.” A few episodes later he calls the Reformation an “unmitigated disaster” from the point of view of those who “love what they see” and adds that its effects were “not only bad for art, but bad for life.”
This March the BBC and PBS started a “follow-up” to Clark’s classic series. Entitled Civilizations, it will cover in nine programs the whole of human history from the dawn of time to the present day—and include Asia, Africa, and the Americas as well as Europe. Clark stood in front of the camera to argue that the visual arts over a time span roughly corresponding to the chronological parameters of London’s National Gallery represent a pinnacle of human endeavor. The limitations he placed upon himself were one of the strengths of his series because it was the depth as well as the breadth of his knowledge that held us all spellbound.
In the new series none of the three presenters (Mary Beard, Simon Schama, and David Olusoga) is an art historian, let alone a recognized authority on a branch of the visual arts. The contributions of these seasoned television presenters may well be informative and entertaining, but I’ll be pleasantly surprised if the series is anything like as influential as the original.
Clark adored women—and in much the same way that he was besotted with art. His attraction to both was as impulsive as it was insatiable, and sometimes as unwise. The pages Stourton devotes to what he terms Clark’s “vigorous private life” sometimes read like Leporello’s “Catalog Aria.” Among his conquests, attachments, and lasting love affairs Stourton mentions Jane’s personal secretary, the family’s parlor maid, the sisters and wives of friends, at least one artist, a famous actress, a best-selling author, a distinguished American collector, and a married lady he met on board a ship to Australia—and who to his horror sought to continue their affair after disembarkation.
Clark had the insouciant attitude toward infidelity prevalent among the British upper classes in his time. Though Jane too had lovers, her husband’s affairs caused her distress and may have contributed to her alcoholism and dependence on drugs. Stourton suggests that she was unstable, as evidenced by tantrums directed not only at her husband but also at her children. Yet there was no question of divorce, and Clark nursed his wife tenderly in the years before her death in 1976.
My Dear BB: The Letters of Bernard Berenson and Kenneth Clark, 1925–1959, edited and annotated by Robert Cumming (Yale University Press, 2015). ↩
Clark never developed a reliable “eye,” a deficiency that nearly cut his career short when he acquired misattributed pictures for the National Gallery with public money. ↩
A review of this length can’t do justice to Clark’s intense interest in contemporary British art. Though he became friendly with the Bloomsbury critics Clive Bell and Roger Fry, he could not subscribe to their idea (like Berenson’s) that anything useful can be learned about a work of art through formal theories. Such theories failed to take account of the poetic, emotional, and literary dimensions of art that spoke to him so powerfully. ↩
With the building almost empty (each month a single picture was brought back for display), Clark chaired the War Artists’ Advisory Committee, formed to commission contemporary artists like Henry Moore and Paul Nash to record in paintings and works on paper the country’s lost and threatened architectural heritage as well as the contribution to the war effort of miners, foundry workers, and those in military service. ↩
In tandem with his duties at the National Gallery, Clark also became, at the personal request of George V, Surveyor of the King’s Pictures in 1934. ↩
Clark came to feel that Lisbon was the right place for the collection. To have shown furniture and decorative arts would have fundamentally changed the character of the National Gallery. ↩
There were of course defenders, among them Ernst Gombrich. ↩
In June 2014, after three Israeli teenagers from a settlement went missing, Israel launched Operation Brother’s Keeper, one of the most sweeping manhunts in the nation’s nearly seventy-year history. The dragnet, in which Israel arrested hundreds of Palestinians in the West Bank without charge, ended eighteen days later when soldiers discovered the teenagers’ bodies under a heap of rocks. In keeping with Jewish tradition, the families buried their sons the next day. As the nation mourned, three Israelis seeking revenge abducted a sixteen-year-old Palestinian, Muhammad Abu Khdeir, while he was walking to his local mosque in Jerusalem after a pre-dawn meal during the holy month of Ramadan. He was bludgeoned and then burned alive. A fifty-day armed conflict between Israel and Hamas ensued, in which Israel bombarded the overcrowded Gaza Strip with airstrikes that exacted a high civilian death toll, killing over 500 children.
The climate in Israel was tense and bellicose. As Hamas fighters fired rockets from Gaza at central Tel Aviv, right-wing Israeli nationalists assaulted antiwar protesters while chanting “Death to Arabs!” and “Death to leftists!” Veteran artists and public figures were labeled as traitors and received threats for even expressing regret at the loss of Palestinian children’s lives. One of Israel’s best-known poets, Natan Zach, now eighty-seven, told the Israeli website Walla! at the time: “The reason I no longer write in the papers is that I’m afraid someone will grab me on the street and beat me.” A month after a cease-fire was agreed, Israel’s then foreign minister, Avigdor Lieberman, decided it was time for his ministry to cut off all future support for the Israeli dancer and choreographer Arkadi Zaides for a work that allegedly vilified Israel’s military.
Zaides’s Archive is a solo dance piece set against a backdrop of video footage of Israeli soldiers and settlers in the West Bank. The footage was provided by B’Tselem, an Israeli group that documents human rights abuses in the Occupied Territories and has become anathema to the Israeli political establishment. Last year, Prime Minister Benjamin Netanyahu canceled a meeting with the German foreign minister, Sigmar Gabriel, after Gabriel met with representatives of B’Tselem and Breaking the Silence, an anti-occupation organization of former Israeli soldiers who publish testimonies about their service in the West Bank and Gaza. (Israel’s cabinet is currently promoting one bill that would ban members of Breaking the Silence from speaking in high schools, and another that would effectively outlaw them.)
“The moment I was linked to B’Tselem, that was it,” said Zaides, who was born in Belarus and moved to Israel at the age of eleven. “It’s black and white. They said, ‘He is damaging to soldiers and that’s it.’ None of them had seen the work.” In Archive, Zaides appears on stage in front of a giant screen, looking on with the audience as he projects excerpts from years of material captured by Palestinian civilians armed with cameras. In one scene, a soldier cocks his gun and retreats; in another a settler, his shirt tied around his face, throws stones. Zaides then simulates their movements, gestures and sounds in perfect synchronization. “I look at bodies of Israelis and put them in my own body. By this action, I question what is happening to our collective body,” he told us from Belgium, where he is now based.
Before the Foreign Ministry pulled its funding, the piece had already encountered local resistance. In 2014, after pressure from a few right-wing activists, as well as from a group called Mothers of Soldiers Against B’Tselem, the municipality of Petah Tikva, a working-class city just outside Tel Aviv, called on its local museum to shut down a video installation of Archive. The museum stood its ground, but when Zaides began performing the show in small theaters around Israel, protests were organized. One, in Jerusalem, became violent: audience members were verbally abused and physically assaulted outside, while Zaides himself needed a police escort. When, in 2015, Zaides was due to perform Archive in Paris, Israel’s justice minister, Ayelet Shaked, asked the French ambassador to Israel to shut it down on the grounds that it showed “terrorists as victims and IDF [Israel Defense Forces] fighters defending Israeli citizens with their bodies as criminals.” Later that year, Miri Regev, a newly appointed minister of culture and sport from Netanyahu’s Likud Party who had lambasted Zaides’s show before assuming her post, removed the ministry’s logo from the list of sponsors and promised to review its support.
Zaides went on to raise outside funds and perform Archive abroad without the Foreign Ministry’s money. “My work has become more radical because things have gotten more radical,” he said. “It’s not just politicians; it’s a social phenomenon. A lot of these voices are coming from the bottom up, and the politicians are reacting.”
Netanyahu, who is now Israel’s second-longest-serving prime minister, oversees the country’s most overtly right-wing government in history—one dominated by settlers and nationalists who reject a two-state solution, deny that an occupation exists, and openly seek to bolster Jewish power at the expense of democratic practices. In recent years, Israel’s parliament has passed a series of combative, censorial laws, including legislation limiting the funding of human rights organizations and, most recently, a law barring entry to foreign nationals who actively call for boycotting Israel. In early 2018, Israel published a blacklist of twenty organizations whose leaders it has barred from entering the country because of their advocacy for the boycott, divestment, and sanctions movement.
It is hard to reconcile this political landscape with the breezy, mostly hedonistic atmosphere in the gay-friendly coastal city of Tel Aviv. Often referred to as “the bubble,” Tel Aviv has always been the country’s de facto cultural hub. A clutch of makeshift artist studios and contemporary art galleries, many of which participate in the prestigious Art Basel and Frieze fairs, operate there alongside a number of cinemas playing independent films. Art and the nation have always been inextricably linked; the national theater, Habima, was formed before the state was founded, and the ceremony proclaiming the establishment of the State of Israel took place on May 14, 1948, at the Tel Aviv Museum of Art. But the tacit acceptance of pockets of dissent—once touted as proof of Israel’s vibrant democracy and diversity—is vanishing even from this liberal redoubt.
“I’m used to a life in which there is some invisible wall separating my personal, everyday life from my life as a writer and as a public figure,” said Etgar Keret, one of Israel’s most internationally acclaimed authors, when we met him in a quiet, sun-drenched café in North Tel Aviv. “In the last Gaza War, I felt this wall collapse, and suddenly there was no separation between my two identities. To some, I wasn’t just someone thinking differently but a traitor who had exposed his true face and was trying to destroy his own society and country.”
Keret is married to the actress and filmmaker Shira Geffen, with whom he won the Cannes Film Festival’s Caméra d’Or award in 2007 for their movie Jellyfish. During the 2014 war, Geffen asked the audience at a screening of one of her films at the Jerusalem Film Festival to stand for a minute of silence to mark the killing of four Palestinian children by an Israeli missile on the beach in Gaza that day. Facebook groups and comments soon sprang up that included debates about the best way to attack her. A barrage of personal threats ensued. “The message for artists is that nothing good can come from being outspokenly politically left-wing,” said Keret, “but in many cases, something bad can come out of it.”
The Israeli actress Einat Weizman, an unassuming woman with jet-black hair and fair skin, is a local celebrity for her roles in romcoms and a popular 1990s TV show. Today, Weizman says she can no longer work in Israeli theaters. In the summer of 2014, she says, she became a target of vicious harassment and incitement after a Facebook user posted a 2006 photo of her wearing a T-shirt that read “Free Palestine.” The timing and the language of the post made it seem as if the photo had been taken during the war. “I was Facebook-lynched,” she says. “I got thousands of hate messages, from people who wanted to kill me and rape me. And because they knew me in Israel as an actress for twenty years, it moved from Facebook to the streets.”
Weizman told us that people in her neighborhood had shouted and spat at her, forcing her to ask friends to escort her. “I had to process this experience somehow because my mental state was not good,” she said. The turmoil led to the play Shame, which she wrote and directed in 2015. Since then, she has decided to produce explicitly political work focused on Palestinian stories. “I think the situation in Israel is so dramatic and apartheid especially is so bad that I need to take a clear stand,” she told us, “and I’m doing theater just for this reason.” Last year, a play she wrote called Prisoners of the Occupation was rejected by an annual fringe theater festival in the northern Israeli city of Acre. The play, based on Weizman’s research into the history of Palestinian hunger strikes in Israeli prisons, tells the story of Palestinian prisoners through their letters. “Every fourth Palestinian man was, will be, or is serving a prison sentence,” she said. “For Israelis, the occupation is fences and walls and checkpoints, but for Palestinian society the occupation results in prison.”
In an unprecedented move, the festival’s steering committee, which includes Mayor Shimon Lankri, a Likud member, voted to exclude the play from the program, claiming it could provoke tensions in the mixed Arab-Jewish city. Fellow artists who were to participate in the festival, members of the artistic committee, and, ultimately, the artistic director all resigned from the festival in protest. But Culture Minister Regev applauded the decision, saying, “The Palestinian narrative will not be funded by the Israeli government or public funds,” and suggested Weizman “go to Ramallah” and put on her show there.
Regev, who previously served as IDF spokesperson and chief military censor, is a rising star in the governing Likud Party, and some believe she has ambitions to succeed Netanyahu. Notorious for her incendiary, populist rhetoric, she once called African asylum seekers “a cancer in our body.” In January, she posted a video to Facebook of herself with members of Beitar—a soccer club known for its anti-Arab racism—as fans chanted in the background at the rival Arab team, “May your village burn down.” (The video has recently been removed.)
Like many in the ruling coalition, Regev is openly against the establishment of a Palestinian state and has proposed annexing parts of the West Bank. But unlike a majority of her cabinet colleagues, who are from more privileged Ashkenazi backgrounds (that is, Jews of European descent), she is a Mizrahi woman of Moroccan descent from a housing project in the working-class town of Kiryat Gat. (Mizrahim, or “easterners,” are Jews from countries in the Middle East and North Africa, some of whom trace their origins back to Spain and Portugal.) She curries favor with voters, many of them Mizrahim like herself, in part because she fuses her political agenda with her promise to upend the monopoly that the Ashkenazi elite, historically more identified with the peace camp in Israel, has had on the country’s cultural establishment. “I have never read Chekhov,” Regev has declared, flaunting the fact that her cultural reference points are not those enshrined in the Israeli canon, which has marginalized the history and culture of non-European Jews. (In 2016, the Education Ministry formed a committee to integrate more Mizrahi heritage into Israel’s history textbooks.)
Osnat Trabelsi, a Mizrahi producer and filmmaker who has made groundbreaking documentaries that focus on Palestinian life under occupation—like Arna’s Children, about a theater for kids in the West Bank city of Jenin run by a Jewish woman—is on the opposite end of the political spectrum from Regev. Even so, Trabelsi believes that, as problematic a figure as Regev is, Ashkenazi men in positions of power have often treated her in patronizing and racist ways. “I don’t agree with her on many things,” she said, “but I cannot ignore the fact that she raises issues you could not talk about before.” Trabelsi says that in her thirty-year career the decision-makers she has met on the film and television boards have all been Ashkenazi men. For her, Regev’s ascent has exposed both Ashkenazi-Mizrahi and male-female disparities in Israel’s culture industry—if only on the level of rhetoric. “She hasn’t really made a change,” said Trabelsi. “Where are the film funds in the north or south of the country?” The more significant effects of Regev’s tenure, Trabelsi thinks, have been due to her hard-line rightist politics, since fewer and fewer political documentary films are being made—“there is fear, there is self-censorship.”
In her position as the country’s leading cultural watchdog, Regev has taken every opportunity to cut funds, or threaten to do so, to artists or institutions she deems harmful or offensive to the state. One of her first acts as culture minister was to levy financial penalties on theaters, dance troupes, or orchestras that do not perform in the settlements—and provide bonus payments to those that do. She has also said that she wanted to pull funding from several established national arts festivals because of performances involving nudity and from fringe theaters presenting subversive content. Although successive attorneys general have told her that her interference infringes on freedom of expression, that has not stopped her from crusading on the motto “freedom of expression, not freedom of funding.”
Since 2015, Regev has been a party to measures that effectively closed down the Al-Midan repertory theater, which self-identifies as the only Palestinian-Arabic theater within the 1948 borders. Established in 1994, the small Haifa-based theater runs on an annual budget of about $1 million (of which 65 percent comes from the government, the rest from ticket sales). Much of the theater’s public funding has been withheld since 2015, when the company staged “A Parallel Time,” a play about six political prisoners in an Israeli prison, of whom one, a musician, has been granted permission to marry. The other prisoners try to make him an oud as a wedding gift, and we watch as they struggle to contrive the parts for it. The playwright, Bashar Murkus, created the piece from materials and letters from the prisoners—among them, Walid Daka, a Palestinian serving a life sentence for the kidnapping and murder of an Israeli soldier, Moshe Tamam, in 1984.
The play was performed dozens of times in Arabic, but it was when Hebrew subtitles were added that opposition arose. At first, pressure came from right-wing activists and the Tamam family, who protested outside the theater and called on the city to shut down the play, claiming that it supported terrorism. Then a barrage of attacks from the government followed. The Haifa municipality, which provides 40 percent of the theater’s budget, froze Al-Midan’s funds. A month later, Regev followed suit, cutting her ministry’s 25 percent contribution. The education minister, Naftali Bennett, rescinded the play’s eligibility for subsidized performances in schools. After Al-Midan sued the city, a court eventually determined there was no legal basis for withholding funding, and the money was reinstated.
During that time, according to the theater, a terrorism victims’ association supporting the Tamam family filed some forty-five complaints with Israel’s NGO registrar, alleging mishandling of funds by the theater. Not a single one of these claims was validated, but the comprehensive audit they prompted meant that the theater’s funding was frozen for eighteen months. Even after Al-Midan cleared those bureaucratic hurdles, at the end of 2016, and began to receive its municipal funding again, the Culture Ministry still held out. “Miri Regev works for us; we pay her salary,” said Amer Hlehel, an actor and former artistic director of Al-Midan, who met with us in a café on Haifa’s main boulevard. “We are the people, the citizens, the taxpayers, and she is there to serve us.”
Despite a legal obligation to restore funding, Regev has yet to comply. “Occupation is not always a land grab; it’s also collecting your taxes but not paying your budgets,” remarked Al-Midan’s board chairman, Joseph Atrash. The Culture Ministry’s latest argument is that Al-Midan has lost its funding because it failed to fulfill its quota of performances. “If you’re a theater and you’re not working like a theater,” said Galit Wahba Shasho, the director of the Culture Ministry, who runs day-to-day operations for Regev, “you won’t be granted government money.” This appears to contradict an agreement reached in 2016 that the theater’s reduced operations during the budget freeze would not affect its eligibility for funding. Al-Midan has petitioned Israel’s Supreme Court to resolve the issue.
We met with Wahba Shasho in her spartan Tel Aviv office after learning that Minister Regev herself would not be available for the interview we’d requested. A woman sat in the corner silently monitoring our conversation, and Wahba Shasho kept a ministry press officer named Heli Samama-Fadida on speakerphone from Jerusalem while we were there. In answer to our questions about the ministry’s problems with Al-Midan and with Einat Weizman’s work, Wahba Shasho repeated that artists can do what they want but if they want state support, they must abide by “the letter of the law.” She repeated the phrase nineteen times during our interview. “Regev wants to legislate a more authoritative law that would draw a distinction between freedom of speech, which we all believe in and all hold up to,” said Wahba Shasho. “But if you want to get government money, you cannot go against the existence of the State of Israel.” When the interview concluded, both Samama-Fadida and Wahba Shasho demanded that we supply a copy of our recording.
Regev boasts of raising budgets for marginalized groups outside the center of Israel, including Palestinian citizens, who comprise over 20 percent of the country’s population. “Regev has tripled the budget to Arab culture,” Wahba Shasho said, “because she is motivated by cultural justice and equality.” Regev has indeed raised the ministry’s total annual budget from 650 million shekels to nearly 1 billion. But according to Jafar Farah, head of the Mossawa Center, an organization that promotes civil and equal rights for Israel’s Palestinian citizens, Regev increased the budget for Arab culture from 11 million to 34 million shekels only under the compulsion of a high court order in 2015, resulting from a 2012 petition by Mossawa. Even after the increase, Farah says, the Arab share of arts funding is still just 3 percent of the Ministry’s budget.
For Farah, talk about expanded budgets overlooks the lack of basic infrastructure for Palestinian artists. “The money is meaningless since there is no Arab museum, no school for Arab theater, no Arab fund for cinema. The way the system works, a Palestinian filmmaker cannot dream in his own language,” he said, noting the absence of Palestinians on any of the film committees that allocate funds. “If you have a dream in Arabic, you need to translate it into Hebrew—and this already damages it. The majority of great Palestinian filmmakers left because they couldn’t get the funds.”
Some draw parallels between the cultural climate here and that in the United States in the late 1980s, when conservative politicians used transgressive or provocative work by the photographer-artists Robert Mapplethorpe and Andres Serrano as an excuse to call for ending federal funding of the arts in general. The self-appointed crusader in this cause, Senator Jesse Helms, singled out Serrano, in particular, as a “jerk” who had chosen to “dishonor our Lord” with his image of a crucifix submerged in urine, Piss Christ.
“The crux of contemporary censorship is the funding aspect,” said Chen Tamir, the curator of Tel Aviv’s Center for Contemporary Art. “Just as in the American culture wars, public funding here is being manipulated to become a mechanism of censorship.” Following Helms, Regev has proposed a law—dubbed the Loyalty in Culture Bill—that would give the culture minister the authority to deny funding to arts institutions on broad grounds that involve voicing criticism of the state of Israel. (Similar powers already belong to the Finance Ministry; Regev seeks to arrogate them for herself.) Introducing the bill on the floor of the Knesset in January 2016, Regev declared:
I will not be an ATM. I have a responsibility for public monies, and this law would grant me the authority to exercise my responsibilities and to deny support for institutionalized violations of the law.
Regev’s bill awaits final passage, but she has not hesitated to use the levers of power already at her disposal to influence institutions that were once accustomed to creative independence. Unlike the US, Israel has no real culture of philanthropy. Taxes are high, and, in a vestige of the country’s socialist origins, people expect the government to provide for them, culture and the arts included. “There are only a handful of good contemporary art institutions, commercial galleries, and funders or other partners that can facilitate the creation of artwork,” said Tamir. “So if you’re an artist whose work might jeopardize an institution, you’ll really have to struggle to find ways of funding and showing your work.”
The ubiquity of state funding—and most institutions’ dependency on it—means that it is virtually impossible to create work in Israel without coming into contact with the government. Natasha Dudinski is the artistic director of “48mm Film Festival,” an annual event that curates and screens films about the Nakba (“catastrophe” in Arabic), the word Palestinians use to describe the consequences of the establishment of the State of Israel in 1948, primarily the forcible displacement of 700,000 Palestinians. The social justice organization Zochrot (“remembering” in Hebrew), which founded the festival in 2013, has faced police harassment over its work to raise awareness about the destruction of Palestinian villages during Israel’s War of Independence. To maintain its artistic and political independence, “48mm” has never sought or accepted money from the Israeli government. Although the festival has always been a punching bag for right-wing politicians, Dudinski fears that even the festival’s financial independence may not be enough to protect it from Regev’s efforts at censorship.
In November, just two weeks before the festival was to begin, Regev asked Finance Minister Moshe Kahlon whether the Tel Aviv Cinemateque could be fined for holding screenings as part of “48mm,” asserting that the use of a screening room was an in-kind donation that violated the existing law that prohibits marking Israel’s Independence Day as the Nakba. Although Kahlon demurred on this point, other screening partners of the festival, such as the Arabic-Hebrew Theatre in Jaffa (also known as Al-Saraya), which caters particularly to the country’s Palestinian population, withdrew their participation for fear of government retribution. “It was really hard for me to see Al-Saraya giving up because of fear,” Dudinski said. “And it’s not only them—the situation is weakening everybody and giving the government more power because they see what they can do.”
The photographer Miki Kratsman has covered the Israeli-Palestinian conflict for more than three decades. His body of work includes the 2010 project Targeted Killing, in which he used a long-range camera lens to mimic photographs taken by a drone of unsuspecting Palestinians, and a more recent series, developed with a grant from Harvard University, which consists of thousands of portraits of Palestinians in the Occupied Territories. That project was slated to appear alongside a show on refugee camps by the Chinese dissident artist Ai Weiwei at the prestigious Tel Aviv Museum of Art, but the show never took place. After pushing the opening back several times, the museum cited a scheduling conflict, but most Israeli artists considered it a case of self-censorship. “We can teach about Guernica, but we can’t produce our own Guernica,” Kratsman said over lunch near his studio. For Kratsman, the move foreshadows a more serious shift in Israeli society. In the coming years, he said, “it’s going to be about what you are allowed to say and not allowed to say; where you can be and can’t be; with whom you will be able to talk. We are just at the beginning of it.”
For many Palestinian citizens, like the poet Dareen Tatour, that time has already arrived. In the early hours of October 11, 2015, a large police force descended upon her parents’ home in a suburban hillside town outside Nazareth and hauled Tatour off to the local police station. Before her arrest, Tatour worked as an office manager at a marketing firm. She had felt compelled to write a Facebook post about the violent deaths of several Palestinians at the hands of Israelis. Her text included the phrase, “I am the next shahid.” In Palestinian culture, the word shahid (martyr) means, first and foremost, a victim. But for Israelis, it is code for suicide bomber. She had also posted a poem on Facebook that included the line “Qawem ya shaabi, qawemhum” (“Resist, my people, resist them”), alongside a video showing Palestinians throwing rocks and burning tires. Tatour says her words were an ode to all Palestinians, but for Israelis tasked with surveilling social media for clues to potential attacks, they amounted to incitement to terrorism. (Tatour denies that she either incites or supports terrorism.)
At the time of her arrest, Tatour had no idea what it was about. The questions police put to her did not point to any crime. Instead, she was quizzed about her politics, her religion, her writing, and who her friends were. For three weeks, she says, she was cut off from her parents, forced to wear the same clothes she arrived in, and denied every request for pen, paper, books, and TV access. Only during the fourth round of interrogation did she finally understand the grounds of suspicion: “One poem. One photo. One Facebook status,” Tatour says with both indignation and humor.
After nearly a hundred days of detention, Tatour was indicted on November 2, 2015, for incitement to violence and terrorism, and support for a terrorist organization. She has been under house arrest for two and a half years now, initially obliged to wear electronic cuffs around her ankles, and forbidden from using the Internet or even a computer. We met with Tatour two weeks before her case’s closing remarks. She greeted us with a grin from the edge of her parents’ driveway, the farthest point she can travel outdoors alone, wearing a white hijab.
“Until now, I have been asking myself in many of the poems I wrote in prison, Why is poetry being accused?” Tatour said. “They want to paint a picture without occupation. But I live through it. I can’t write what they want. They want me to write that there is no occupation, no Palestinian people. It’s impossible. I am living it.” Tatour has been charged and now awaits the verdict, but says that whether she goes back to jail or not, she has her next book ready to go to press, and is finishing a memoir about her experience in prison. “If we are silent, we get ourselves in a worse situation. We cannot give them what they want.”
While few in Israel have heard of Tatour, Israel’s war on culture has recently found a much more high-profile target. On March 13, Israel’s ambassador to France, Aliza Bin-Noun, boycotted the opening night of the Israeli Film Festival in Paris because the program was headlined by Foxtrot, a feature film about the trauma two parents suffer after losing their son during his military service. The film—directed by the acclaimed Israeli filmmaker Samuel Maoz, an IDF veteran of the 1982 Lebanon war—won the Grand Jury Prize at the Venice Film Festival, swept the Ophir Awards (Israel’s equivalent of the Oscars), and made the Academy Awards shortlist for best foreign-language film (though it ultimately fell short of a nomination). The story includes a scene in which Israeli soldiers cover up the murder of Palestinians. Although, by her own admission, Regev has not seen the film, she said it “harms the good name of the IDF” and that its selection for leading film festivals is “a disgrace.” And she used the occasion to repeat her threats about funding: “Whoever wants to make a movie like this can do so with their own private money.”
Israel’s Foreign Ministry has supported the Parisian festival since it launched eighteen years ago, and claims never to interfere with artistic content. But this year, the Ministry—headed by Netanyahu, who has subsumed the foreign minister’s portfolio into his own—asked the festival organizers to select a less “controversial” film “more suitable for the festive opening evening, which will include an audience of Jewish donors.” When the organizers refused, the Foreign Ministry instructed its ambassador to boycott the event—although the ministry spokesperson, Emmanuel Nahshon, rejected this characterization, insisting, “It is not a boycott; it is an act of protest.” Several weeks after the decision to boycott opening night was made public, the Foreign Ministry apparently softened its opposition by sending a cultural attaché to attend.
When Foxtrot won best film at the recent Ophir Awards, the actor Lamis Ammar, a Palestinian citizen of Israel who starred in the acclaimed Sand Storm (2016), was presenting the best supporting actress award. On stage, she told the audience of her mixed feelings about participating. “The industry has demanded that I conceal my Palestinian identity and political opinions for the sake of the assumed and unassumed racism of the viewers,” she said, going on to relate how she had withdrawn submission of a play she wrote for the Acre festival, out of solidarity with a “censored Israeli artist”—referring to Einat Weizman. “We will not stop creating. We will create with or without funding. From this stage, and if necessary, from prison, like Dareen Tatour.”
It is no accident, she said, that she has been unable to find funding in Israel for her play because “most Israeli art, at the end of the day, serves to justify Israeli wrongdoing, instead of addressing and eliminating it.” Ammar believes there are only a handful of Israeli artists, such as Einat Weizman, who consistently challenge the system. “Everyone is talking about silencing these days because Regev has begun silencing Jews, and it is new for them,” she said, “but we [Palestinians] have been silenced since 1948.”
Another dissident Israeli artist is Udi Aloni, a filmmaker who splits his time between Tel Aviv, Berlin, and Brooklyn. Aloni is the son of Shulamit Aloni, a revered former culture minister and a pioneer of the nation’s civil rights movement. Before his mother’s death, in 2014, he was regarded as untouchable, and the films he made—many of them focusing on the destructive realities of the occupation—were funded by the government. His most recent film, the award-winning Junction 48 (2016), centered on the aspirations of a Palestinian rapper living in Lod, a mixed Arab-Jewish city with high crime rates where Israeli authorities have demolished Palestinian homes. Today, though, in the Regev era, Aloni believes it’s unlikely that he’ll receive state funding again for work that highlights the plight of Palestinians.
“I can’t stop the flood, but I can create Noah’s ark,” he said. “And when the flood destroys all, we will be there to fill the void.”
While Regev has not been able to make good on many of her threats, her rhetoric has created a sufficiently fraught atmosphere that much of her strategy has taken effect. She has managed to revamp what was once a small, overlooked ministry into a powerful mouthpiece for the Netanyahu government, putting Israel’s cultural elite on the defensive and setting a precedent for her successors. Regev, who has set her sights on becoming minister of defense in the next government (which would make her the first woman in the role), is just one manifestation of the rightward, nationalist, pro-settler drift of Israeli politics. Although Netanyahu is currently mired in several corruption investigations, and there has been talk of possible early elections, polls show that his Likud Party would nevertheless likely win a plurality of seats in the Knesset—making it easy to imagine his re-election. With no political alternative from the center and left to speak of in Israel right now, any resistance to the Likud agenda from Israel’s small community of artists and intellectuals seems like a voice in the wilderness. And Regev has capitalized on this moment masterfully. While her tenure may not represent a permanent, more coercive and censorious change in how the arts are funded in Israel, she embodies the sea change in Israeli society, from a country that downplayed its inequities and declining democratic norms to one that flaunts them.
In 2000, an Indiana politician attacked his opponent, John Frenz, by assailing the Kinsey Institute, founded more than fifty years earlier within Indiana University. “The Kinsey Institute is the largest library of pornography of its kind in the world,” his attack ad proclaimed, elaborating that its “library contains sex-related art, studies on bestiality, obscene photographs of children,” and that the institute supported “homosexuality as an accepted lifestyle.” This politician, a strong-values conservative named Eric Holcomb, is now the governor of Indiana. What terrible infraction had Holcomb’s opponent committed? He’d voted for a state budget, of which a tiny portion funded the Kinsey Institute.
Alfred Kinsey began his career as a biologist, with a focus on entomology and an expertise in the gall wasp. He didn’t switch to studying sex until he taught a course on “Marriage and Family” at Indiana University and found that there was a dearth of scientific literature on sex. He began collecting information about the sexual histories of students in his courses, and amassed a large number of them. When the university’s president, Herman Wells, forced Kinsey to choose between continuing his sex research and teaching the course, he chose the former. In 1947, with the help of a lawyer, Kinsey founded the Institute for Sex Research “to continue research on human sexual behavior,” as well as to secure a library for his growing collection of books and artifacts, according to James Capshew’s Herman B. Wells: The Promise of the American University. The institute was created as a “separate entity that could be firewalled from local, political, and institutional mandates,” Judith Allen and her co-authors write in the recently published The Kinsey Institute: The First Seventy Years.
Since 1947, the institute’s core mission has been to study subjects that no other academic institution would touch: sex in all its permutations. According to its official history, it was founded to encourage “the right to study sex and to proclaim that study openly.” Kinsey published two essential works on human sexuality based on his surveys—Sexual Behavior in the Human Male (1948) and Sexual Behavior in the Human Female (1953)—that both became New York Times bestsellers and scandalized polite society with their revelations. Kinsey discovered that over a third of men had had sexual experiences with other men, a shocking finding at a time when same-sex sexual activity was illegal and widely considered immoral. He also found that women were more likely to have orgasms from masturbation than from heterosexual intercourse, and that half of the female population had had sex before marriage. Over 50 percent of the people he surveyed reported having an erotic response to biting. Kinsey’s groundbreaking work legitimized sex research and paved the way for Masters and Johnson’s studies in the mid-1960s, which proved, through laboratory research on copulating and masturbating subjects, that women could have multiple orgasms, and that orgasms during masturbation were typically more pleasurable than those during intercourse.
During the decades after Kinsey’s death in 1956, the institute expanded his agenda—opening a sex clinic, hiring an art curator, starting public tours, and releasing a sex-reporting app. But over the past three years, it has quietly become a shell of its former self. As its new director Sue Carter told the Indianapolis Star in 2016, “Where once research into sex, gender and reproduction formed the backbone of the institute’s work, now love, sexuality and well-being will take center stage.” Its name has been shortened to “The Kinsey Institute.”
In many ways, Carter was an unusual choice to lead the institute. Although she is the first biologist to head up the institute since Kinsey himself, her career has focused on rodents—in particular, on the prairie vole, one of the few mammals that pair-bonds and is monogamous. Carter previously built a research lab at the University of Illinois-Chicago where she discovered that oxytocin—the so-called love hormone—played an essential part in promoting pair-bonding in prairie voles. Almost none of Carter’s hundreds of published scientific papers relate to human sexuality, and few of her articles have appeared in sex-research journals. This has baffled sex researchers at many other institutions. “The overarching kind of focus behind the institute was to look at sex research, not at research on love and not at research on intimacy,” said Cynthia Graham, a professor in sexual and reproductive health at the University of Southampton, who is also a Kinsey Institute fellow and editor of The Journal of Sex Research.
Perhaps tellingly, Carter’s own research on vole monogamy has been cited by pro-abstinence and anti-pornography organizations to justify their positions. Physicians for Life, a pro-life association based in Alabama that also advocates for abstinence, uses Carter’s research to argue that monogamy is natural and built into our biology. They claim that sex out of wedlock will lead to a release of oxytocin that produces an expectation of commitment which is then disappointed, leading to “depression, dissatisfaction, and the disruption of future bonding potential.” Meanwhile, a website called Your Brain On Porn uses Carter’s oxytocin research to argue that strong pair bonds make us less likely to pursue addictive behaviors like porn-watching. Another anti-porn advocate uses Carter’s research to make a diametrically opposed argument: that oxytocin “bonds” men to porn and away from human partners.
Since Carter took up her position at the institute, she has hired five former students or colleagues from her lab at the University of Illinois-Chicago; none of them do sex research. She’s also hired her husband, the psychologist Stephen Porges, as a “distinguished university scientist.” When he was hired, Porges had written only one article related to sex research, which he co-authored with his wife on sexual receptivity in hamsters. According to the Kinsey Institute’s website, over the three years that Carter has been the director, only about 40 percent of the publications coming out of the institute have been related to human sexuality, compared to about 88 percent under the last two directors. Many of the reports produced under Carter’s leadership seem to be focused on the effects of oxytocin and vasopressin on everything from “aggression in domestic dogs” to schizophrenia in middle-aged women. Other studies are concerned with non-sex-related psychological disorders like depression and anxiety.
Perhaps the most visible change at the institute under Carter has been to its art collection, which had, since the 1990s, been displayed in the affiliated gallery. The collections are extensive and include such rare pieces as Ian Hornak’s painting of nudes adorned with animal heads, Herb Ritts’s photographs of male nudes, and erotic works by Marc Chagall and Picasso. Public shows at the gallery ended in 2015. In addition to shuttering the gallery, Carter terminated the institute’s popular Juried Erotic Art Show, which had run for a decade. This event not only attracted large crowds, but also provided more artwork to the institute, as many of the pieces were donated.
All of the changes at Kinsey have occurred at a time when the political climate in Indiana has become increasingly proscriptive toward any form of sexual expression other than heterosexual sex within marriage. When Carter was hired, the governor of Indiana was Mike Pence (now vice president of the United States), who signed one of the most extreme anti-abortion state laws in US history and cut funding to Planned Parenthood. And about six months before she was hired, conservative critics of Kinsey had been particularly outspoken after the institute sought, and was granted, consultative status as a nongovernmental organization with the United Nations. In May 2014, Kinsey’s greatest nemesis, conservative activist Judith Reisman, founded an organization called Stop the Kinsey Institute, encouraging people to sign a petition that would revoke the institute’s newly won UN accreditation. Since the 1980s, Reisman had been arguing that Kinsey’s research was inaccurate, involved the sexual abuse of children, and was responsible for the “moral decay of millions of people.”
In the years since Carter was hired, the Kinsey Institute has become increasingly dependent on the support of the university and, as a consequence, the conservative state government. Kinsey initially set up the institute as an independent nonprofit organization associated with Indiana University, in an attempt to protect it from political pressure; thanks to its nonprofit status, the management of the institute was answerable only to an independent board of directors. But in December 2016, the institute merged with Indiana University, and now answers only to the university’s administration.
Many of the people I interviewed believe that the university saw the institute as a political liability, with the potential to compromise the university’s own access to state funding, and so, instead of getting rid of the institute, they made its focus less controversial. The fear of defunding has only grown more plausible since Eric Holcomb—whose attack ad in 2000 said that “tax-payers should NOT be forced to support Kinsey”—replaced Pence as governor last year. Holcomb now signs off on the budget that supports the institute.
“There is a reason that I was selected and not someone from the more traditional sex-research field,” Carter told me, somewhat cryptically, after I asked her if she thought the organization of the institute had changed. She didn’t elaborate on what she meant, but instead went on to say that “Kinsey has to have some kinds of priorities to survive in the future. We cannot focus simply on things like sexual pleasure. We can’t because there’s no funding for it.”
Why did the institute merge with the university? Nearly everyone involved to whom I spoke cited a different reason. According to Amy Applegate, the chair of Kinsey’s board at the time, the merger extended protections of academic freedom to Kinsey while also giving the institute access to all the university’s resources. For her part, Carter said that the institute needed more financial support. Fred Cate, Indiana University’s vice president of research, cited both finances and integration with the library. Rick Van Kooten, the vice provost of research, said that the institute could now enjoy stronger protections against legal challenges like the one in the 1950s, when the US Customs Office intercepted a box of erotic goods from overseas bound for Kinsey on the grounds that it was obscene. But even though the institute was an independent nonprofit at the time, it still managed to secure the university’s help and prevail in its ensuing legal battle against the Customs Office. Jorge José, a former university administrator, said the merger was intended to give the university’s administration more “oversight over the whole operation of the Kinsey Institute.”
Whatever the actual reasons behind it—political, practical, or otherwise—the institute’s incorporation into the university has only compounded the effects of Sue Carter’s tenure, making the institute less public, and less about sex. Perhaps the saddest part of the Kinsey Institute’s change in focus is that our country needs it now more than ever. The Kinsey Institute should be participating in the national conversation on sexual assault and harassment. The institute should be leading the way with discussions on the importance of female sexual pleasure, the “orgasm gap” between men and women, and the fact that so many women experience pain during sex. In the words of Edward Laumann, who has researched human sexuality at the University of Chicago for over forty years, the institute “is about human sexuality, so why the hell are we talking about voles, however interesting that may be?”
In his late teens, Federico García Lorca’s main interest was music and song. He was steeped not only in the Andalusian folk tradition but also in the European art song. He loved the work of Schubert and Beethoven.
Lorca’s arrangements for piano and voice of Andalusian folk songs, inspired by Manuel de Falla’s arrangement in 1914 of seven other Spanish songs, combine in the subtlety of their harmonies an attention to the European art tradition and to local expression. Lorca’s own voice, according to his contemporaries, was not very rich, but his playing was sophisticated and skillful, as is attested by the series of recordings he made in 1931 accompanying Encarnación López, known as La Argentinita, the lover of his friend the bullfighter Ignacio Sánchez Mejías, whose death Lorca lamented in one of his most famous poems. His piano accompaniment on these songs can be exuberant, but it can be subdued as well. He believed in the concept of duende, a heightened soulfulness displaying an authentic, deep, and earthy emotion, but he was also a master of restraint.
Lorca’s politics overcame his natural instinct against ideology in the same way that, in his musical consciousness, the highly wrought emotions of a Schubert song opposed the fierce abandon of the traditional vocal style known as the cante jondo. Just as his talent as a lyric poet gave way to Surrealism and experimentation, under the pressure of his times his art became political. As an artist, he was interested in simple freedoms in a period in Spain when nothing was simple, when such interests came face to face with dark and malevolent forces.
He knew with an almost whimsical certainty that in Spain in 1936 the personal was political, and that the body itself, especially the body of a woman or a homosexual man, was as much the territory of conflict and destiny as the ownership of land or factories. In that fateful year he wrote a play, The House of Bernarda Alba, that had all the austerity and artful simplicity of a Schubert song. It was a play for women’s voices full of yearning and plaintive expression, but surrounded by a savage sense of restriction and cruelty that everyone in the audience Lorca imagined for his play would know and recognize.
He was too subtle to make this obvious or programmatic, and too interested in the pure excitement and depth of the conflict between his characters to make them smaller than the world outside. He wrote them with the same strange tenderness that George Eliot used to create her idealistic men such as Will Ladislaw or Daniel Deronda, or that homosexual writers from Henry James to Tennessee Williams used to imagine their women trapped by convention. And he shared with Oscar Wilde that pure confidence in his own brave gifts with no sense of his doom so close, save one that may be beyond us in its depth and its irony.
Indeed, he was working with such ease and speed in the last years of his life that publishers and producers could not keep up with him. At the time of his death in 1936, his long, ambitious poem “Poet in New York” and a number of other sequences, including “The Tamarit Diwan” and the “Dark Love Sonnets,” had yet to appear, and his plays The House of Bernarda Alba and The Public had yet to be staged.
Federico García Lorca was born near Granada in 1898, the eldest son in a prosperous family. When he was growing up, his father purchased an idyllic country house, complete with orchards and gardens, on the outskirts of Granada. The family spent time in the city itself too, so that he could be educated there. In the early 1920s in Madrid, Lorca developed a close friendship with Salvador Dalí and Luis Buñuel. He would also remain close to a group of Spanish poets, known as the Generation of ’27, which included Rafael Alberti, Pedro Salinas, and Vicente Aleixandre.
In essays and interviews, Lorca made clear that his allegiance was not to Spain, or even to Granada, but to the Vega, the rich plain to the west of the city where his father farmed, the land nourished by the rivers Darro and Genil that flow down from the Sierra Nevada. “My whole childhood,” he said,
was centred on the village. Shepherds, fields, sky, solitude. Total simplicity. I’m often surprised when people think that the things in my work are daring improvisations of my own, a poet’s audacities. Not at all. They’re authentic details, and seem strange to a lot of people because it’s not often that we approach life in such a simple, straightforward fashion: looking and listening…. I have a huge storehouse of childhood recollections in which I can hear the people speaking. This is poetic memory, and I trust it implicitly.
This idea that Lorca’s imagery and poetic voice came from the soil unmediated has echoes of the Irish Literary Renaissance and the efforts of writers such as W.B. Yeats, Augusta Gregory, and J.M. Synge to ally themselves with a native culture that was primitive and powerful, simple and untainted. Unlike the Irish writers, however, Lorca had been brought up in the very places he wished to invoke, using the same language as the people about whom he wrote:
I love the countryside. I feel myself linked to it in all my emotions. My oldest childhood memories have the flavour of the earth. The meadows, the fields, have done wonders for me. The wild animals of the countryside, the livestock, the people living on the land, all these are suggestive in a way that very few people understand…. My very earliest emotional experiences are associated with the land and work on the land.
But like the work of the Irish writers, the poems and plays Lorca wrote came from an imagination that was not itself primitive or nourished only by the soil. It was not “total simplicity.” While his poems used ballad forms or took the shape of folk songs, they also took their bearings from ideas of the unconscious and from Surrealism, from his friendships with Dalí and Buñuel and contemporary poets as much as from the people working on the land and living in the villages, even though the images and metaphors he used could have their origins in ordinary local speech.
In an introduction to Lorca’s plays, his brother Francisco showed the relationship between the rich use of metaphor in ordinary speech and his brother’s playing with it in a poem like “The Marked Man,” from his “Gypsy Ballads.” He recalled the family nurse Dolores describing the source of a spring in her picturesque and vivid speech: “‘And imagine, a bull of water rose up.’ I remembered the impression this admirable phrase made on Federico for it appears later, more or less transformed…in these lines:
The heavy water bullocks
charge after the boys
who bathe inside the moons
of their curving horns.”
Lorca here took the idea of a spurt of water, or water coming from the ground, having the power and suddenness and surprise of a bullock’s charge. He played also with the image of the moon’s shape reflected in water as being a curving horn. He was combining bull and water and moon to suggest a sense of elemental danger coming fast, too fast for the imagery to be narrowed down or made too precise, coming as fast as speech or associations in the mind might come.
In Poet in Spain, a volume of new translations of Lorca’s Spanish poems, with the original on the facing page, Sarah Arvio translates the first line of these four as “the heavy water oxen” and replaces “curving” with “rippling.” Will Kirkland and Christopher Maurer in Collected Poems translate the first line as “Dense oxen of water” and also use “rippling.” (The Spanish word is ondulados, which means wavy or rolling or indeed rippling, which is clearly much better than “curving” since it carries the idea of water along and suggests the moon in water rather than just the moon.) Thus Arvio manages to keep the water metaphor going from the first and third lines into the fourth. Her use of “water oxen” is more precise than “oxen of water” (which sounds like a translation even if it opens the metaphor more to imply that there were in fact no oxen at all, there was just spring water, or spurting water that, as Dolores the nurse would have it, was like a bull).
Lorca’s early poems are filled with elemental things, like a Miró painting—night, star, moon, bird—but they come with edges of strangeness and menace, like a Dalí painting—clock, knife, death, dream. He is never interested in just describing a scene. Instead, he begins to work on a set of associations, using echoes in the patterns of sound and sometimes a strict metrical form as undercurrent, thus suggesting a sort of ease or comfort at the root of the poem so that the branches can grow in any direction, with much grafting and sudden shifts, as his mind, in free flow, throws up phrases that, however unlikely, he allows in, thus extending the reach of the poem, or at other times pruning it briskly back.
Often, Lorca works as though making a quick sketch, jotting down a few images. It seems essential to me that any translator follow his punctuation so that the images meant to stand apart are allowed to do so. In her introduction, however, Arvio writes:
I’ve used almost no punctuation; this was my style of composition. I felt that punctuating, as I worked, hindered the flow of the language. When I was done it was too late to go back; the poems had their own integrity and didn’t need commas and periods. So I let them stand. I was fascinated to see, studying the manuscripts, that Lorca often wrote his drafts with little or no punctuation: a stray period, a comma in the middle of a line, an exclamation mark. He added on punctuation later; manuscripts unpublished at the time of his death were punctuated by an editor.
Since her translations are filled with intelligent decisions and a keen sense of the music in the original poems, and since her ear for Lorca’s delicate and difficult tones is sensitive and sharp, indeed often inspired, this decision seems unwise. Hindering “the flow of the language” in the imagistic poems seems to me not only defensible but necessary. Lorca, one presumes, added on the punctuation later because he saw the need for it.
For example, the Spanish original of the early poem “Delirium,” as printed in Arvio’s book, is made up of five discrete statements, in stanzas of two lines each, with a full stop after each stanza. The second and the third are translated as follows:
Bee-eaters
sigh as they fly

The blue and white
distance
Even though Arvio has included line breaks, we still need a full stop after “Bee-eaters/sigh as they fly.” (Lorca has one.) Then we will know not to read on as though they fly in some direction that might be “The blue and white/distance.”
The poems darken as Lorca moves into his late twenties and early thirties. Death is both played with and confronted directly, but it is seldom absent. He said:
Everywhere else, death is an end. Death comes, and they draw the curtains. Not in Spain. In Spain they open them. Many Spaniards live indoors until the day they die and are taken out into the sunlight. A dead man in Spain is more alive as a dead man than anyplace else in the world.
Lorca’s poem “From Here” begins: “Tell my friends/I have died.” In “Another Dream,” the poem asks: “How many children does Death have?” And “Two Evening Moons” opens: “The moon is dead dead.” One of the sections of “Window Nocturnes” begins:
When I stick my head
out the window I see
how the blade of the wind
wants to cut it off
In this unseen
guillotine I have laid
the eyeless heads
of all my desires
In “Horseman’s Song,” he writes: “Death is watching me/from the Cordoba towers.”
In some of the poems, such as “Horseman’s Song,” and many of the later Gypsy Ballads, the death is violent. The sense of fear around violent death is in the very rhythms of the poems:
When stars thrust their spurs
into the gray water
when young bulls dream
of the sweep of a flower
blood cries rang out
near the Guadalquivir
In later poems, poems written close to the time of his own death, the ballad form gives way to a more refined stanza-led form as the sense of fear and foreboding moves close to lament, as in the powerful “Of the Dark Death”:
Wrap me in a veil when the sunrise
pelts me with fistfuls of ants
and soak my shoes in hard water
so that its scorpion claw will slip
In 1971, thirty-five years after Lorca’s death, a book was published in Paris, written in Spanish, by the Irish academic Ian Gibson, who would later write biographies of Lorca, Dalí, and Antonio Machado. Two years later it appeared in English with the title The Death of Lorca. I remember the chill I felt more than forty years ago as Gibson forensically and meticulously recounted the early days of the civil war in Granada, to which Lorca had returned shortly before the war broke out.
Gibson emphasizes in his book that Lorca had taken sides in the argument about liberalism and repression in Europe. In 1933, he writes, the poet signed “a manifesto condemning the Nazi persecution of German writers. And when, in 1935, Mussolini invaded Abyssinia, he cancelled a projected visit to Italy and signed another anti-Fascist manifesto.” Eighteen months before the war, Lorca spoke about conditions in Spain in an interview:
I will always be on the side of those who have nothing, of those to whom the peace of nothingness is denied. We—and by we I mean those of us who are intellectuals, educated in well-off middle-class families—are being called to make sacrifices. Let’s accept the challenge.
In the last interview he gave before coming to Granada, Lorca’s comments on the city would not have won him many friends among conservatives. He said of the fall of Granada and the expulsion of the Moors in 1492:
It was a disastrous event, even though they say the opposite in the schools. An admirable civilization, and a poetry, architecture and delicacy unique in the world—all were lost, to give way to an impoverished, cowed town, a wasteland populated by the worst bourgeoisie in Spain today.
Lorca’s family also had close links to progressive politics in Granada. On July 10, 1936, his brother-in-law, the socialist doctor Manuel Fernández-Montesinos, was elected mayor of Granada. But as Gibson makes clear, lines were not precisely drawn in Granada between left and right. Among Lorca’s closest friends in the city was the Rosales family, who were members of the conservative Falange, which supported General Franco’s uprising.
In the first weeks of that uprising, the repression in Granada was particularly severe, with more than two thousand people taken out and shot. Gibson writes: “The flower of Granada’s intellectuals, lawyers, doctors and teachers died…along with huge numbers of ordinary left-wing supporters.” Among those shot was Fernández-Montesinos.
As the forces of repression in Granada came to look for Lorca, he took refuge in a house owned by the Rosales family. He believed that he would be safe there. However, he was arrested a week later and then taken out and shot. The precise place where his body was buried has never been identified.
One of the most moving moments in Gibson’s book on Lorca’s execution, and also in his biography of the poet, is when the composer Manuel de Falla, who lived below the Alhambra, heard that Lorca had been arrested and made his way into the city to see if he could rescue him:
Falla was a tiny, timid man, and it is difficult to overestimate his courage on this occasion. In the Civil Government he was informed that Lorca was already dead, and it seems that he himself was in danger of being shot, despite his fervid and well-known Catholicism and his fame as a composer.
Lorca had first met Falla around 1920 when the composer, who had come to live in Granada, a city in which he had already set some of his best-known music, became increasingly fascinated by cante jondo, part of the traditional music of Andalusia being kept alive by the Gypsy population; it was “imbued,” Lorca said in a lecture in Granada in 1922, “with the mysterious color of primordial ages.” It is “a stammer, a wavering emission of the voice, a marvelous buccal undulation that smashes the resonant cells of our tempered scale, eludes the cold, rigid staves of modern music, and makes the tightly closed flowers of the semi-tones blossom into a thousand petals.” It “always sings in the night. It knows neither morning nor evening, mountains nor plains. It has only the night, a wide night steeped in stars. Nothing else matters.”
When Lorca met Falla, who was almost a quarter of a century his senior, he had already given up his dream of studying music to study law in order to please his father, but music became the bond between them. Falla was ready to treat the young poet almost as a son, and Lorca was careful to keep the more flamboyant parts of his life secret from the famously conservative composer, who lived with his sister.*
In order to give energy and respectability to the tradition of cante jondo—viewed as too primitive in some cosmopolitan quarters in Spain—Falla and Lorca organized a festival in Granada with some associates to coincide with the feast of Corpus Christi in June 1922. Lorca was already writing poems that mined and excavated and enriched the tones and formal structure of the cante jondo, poems that expressed deep anguish and intense feeling in ways both direct and rich with metaphor, and that connected the natural world or the world of objects with the self or the speaker. He became fascinated by the idea of duende, quoting a Gypsy singer who, while listening to Falla play his Nights in the Gardens of Spain, said: “Whatever has black sound has duende.” Duende was the opposite of mere virtuosity; it was what sent shivers down the spine when someone sang or played music or recited poetry. Speaking of duende as a person, Lorca told a Buenos Aires audience in 1933 that it
is a power, not a work. It is a struggle, not a thought…. The true fight is with the duende…. But there are neither maps nor exercises to help us find the duende. We only know that he burns the blood like a poultice of broken glass, that he exhausts, that he rejects all the sweet geometry we have learned, that he smashes styles, that he leans on human pain with no consolation…. The duende does not come at all unless he sees that death is possible. The duende must know beforehand that he can serenade death’s house…. The duende wounds. In the healing of that wound, which never closes, lie the strange, invented qualities of a man’s work…. The duende loves the rim of the wound.
Singers with duende walked for miles to take part in the festival that Lorca and Falla organized. While the excitement gave Lorca inspiration and nourishment for his work and led to his “Gypsy Ballads,” Falla could not wait to get back to his quiet life. In his work after the festival, Gibson writes, “the Andalusian elements…were drastically reduced.”
Though Lorca and Falla attempted to work together in the few years that followed, the collaboration came to nothing. But to suggest that their paths diverged would be to misunderstand Lorca’s achievement not only in the more experimental and jagged long poem “Poet in New York,” but in the poems written in his late twenties and his thirties that are included in Sarah Arvio’s book. The air of simplicity in these poems is more like an alibi or a mask. The metaphors are darting and daring; while they often seem random, they can form a pattern as deliberate and indicative as gravestones in a cemetery. They make clear that underlying everything is death, that death is approaching, filling the air with violence and menace and fright.
The imagery wavers between the fixed and the fluid, at times wild and unpredictable against the disciplined music of the metrical line. This is close to how Carol A. Hess, in her biography of Falla, describes his Harpsichord Concerto: “clarity of texture, use of preexisting materials and procedures, and heterogeneous timbres” refined “to a new level of distillation.”
In Lorca’s poetry, there is also a sort of clarity of texture that is darkened or even tossed aside by phrases or individual images that are filled with beauty and mystery, but seem also at times jagged or almost private, elusive, abstract, yielding no easy interpretation. Out of these heterogeneous timbres come moments of piercing clarity, when sex and death are at war or at play, when doom and fear are in the air, but at other times a lightness and sense of ease is suggested.
This poses enormous problems for translators. In the first section of “Lament for Ignacio Sánchez Mejías,” a poem written in 1934, for example, Lorca uses the phrase a las cinco de la tarde more than twenty-five times as a refrain. It means what Sarah Arvio says it means—“at five o’clock”—or what Galway Kinnell or Stephen Spender and J.L. Gili in their translations say it means—“at five in the afternoon.” In the Spanish, however, the sound “ah” gets repeated four times in the line, and in between the word “cinco” has a hard “inko” sound that is both plaintive and tough. As it is repeated in the Spanish, the line has an emotional momentum. In the English, no matter what you do, it sounds like the time of a train.
In part three of the poem, Arvio comes up with an interesting solution when she translates the fifth line as “I have seen the gray rain chase the waves” (Kinnell translates this as “I’ve seen gray rains fleeing toward the sea” and Spender and Gili’s version is “I have seen grey showers move towards the waves”). Arvio’s single-syllable words suggest panic; in the meter, it sounds like a good line of English poetry—T.S. Eliot might have been proud of it—rather than a translation. Since both the rain and the waves are in movement, “chase” serves to emphasize this and suggests also that something urgent is at stake, even if in the original Spanish the rain is running away from something as well as running toward the sea.
The preceding two lines of this poem, however, are a perfect illustration of a problem that no translator can solve. In Arvio’s version they read, “The stone is a shoulder for carrying time/and trees of tears and ribbons and planets”; and in Kinnell’s version, “Stone is a shoulder for carrying away time/with its trees made of tears and ribbons and planets”; and in Spender and Gili’s version, “Stone is a shoulder on which to bear Time/with trees formed of tears and ribbons and planets.”
The first line, no matter what you do with it, is beautiful. The second line is the problem. What was Lorca imagining or seeing when he wrote it? In Spanish, the last three nouns each end in the same sound: lágrimas y cintas y planetas. The rhythm almost pulls them along, defying the reader to wonder what they might actually signify, so perhaps it doesn’t matter what Lorca was imagining or seeing. The sound of the words holds such questions at bay.
In English, however, when we see the word “ribbons” here, we are almost tempted to turn the book upside down or at least shake it to see if some logic, or even some seductive lack of logic, might sweetly emerge. And that is even before we come to the planets. Lorca, of course, would be shocked at the mere suggestion that we should want to know something as banal as what these words actually mean here, or what they are asking us to imagine or see. He would invoke the concept of duende and insist on its power to beguile the reader of his poetry, perhaps even in translation:
Before reading poems aloud…the first thing one must do is invoke the duende. That is the only way that everybody will immediately succeed at the hard task of understanding metaphor (without depending on critical apparatus or intelligence) and be able to catch, at the speed of the voice, the rhythmic design of the poem.
Manuel de Falla’s house and the house of the García Lorcas in the outskirts of Granada can both be visited. Falla’s single bed with a crucifix on the wall behind it and his spartan living conditions are in great contrast with the airy domestic beauty and openness of the house where the García Lorcas lived.
Kwame Anthony Appiah is a writer and thinker of remarkable range. He began his academic career as an analytic philosopher of language, but soon branched out to become one of the most prominent and respected philosophical voices addressing a wide public on topics of moral and political importance such as race, cosmopolitanism, multiculturalism, codes of honor, and moral psychology. Two years ago he even took on the “Ethicist” column in The New York Times Magazine, and it is easy to become addicted to his incisive answers to the extraordinary variety of real-life moral questions posed by readers.
Appiah’s latest book, As If: Idealization and Ideals, is in part a return to his earlier, more abstract and technical interests. It is derived from his Carus Lectures to the American Philosophical Association and is addressed first of all to a philosophical audience. Yet Appiah writes very clearly, and much of this original and absorbing book will be of interest to general readers.
Its theme and its title pay tribute to the work of Hans Vaihinger (1852–1933), a currently neglected German philosopher whose masterwork, published in 1911, was called The Philosophy of “As If.”1 Vaihinger contended that much of our most fruitful thought about the world, particularly in the sciences, relies on idealizations, or what he called “fictions”—descriptions or laws or theories that are literally false but that provide an easier and more useful way to think about certain subjects than the truth in all its complexity would. We can often learn a great deal by treating a subject as if it conformed to a certain theory, even though we know that this is a simplification. As Vaihinger says, such fictions “provide an instrument for finding our way about more easily in the world.”
One of the clearest examples Vaihinger offers is Adam Smith’s assumption, for purposes of economic theory, that economic agents are motivated exclusively by self-interest—that they are egoists. Smith knew perfectly well that human motivation was much richer than that, as he demonstrated in his book The Theory of Moral Sentiments, a work less widely known than The Wealth of Nations. But as Vaihinger explains:
For the construction of his system of political economy it was essential for Adam Smith to interpret human activity causally. With unerring instinct he realized that the main cause lay in egoism and he formulated his assumption in such a way that all human actions, and particularly those of a business or politico-economical nature, could be looked upon as if their driving force lay in one factor—egoism. Thus all the subsidiary causes and partially conditional factors, such as good will, habit, and so forth, are here neglected. With the aid of this abstract cause Adam Smith succeeded in bringing the whole of political economy into an ordered system.
Vaihinger explored the phenomenon in a wide range of cases drawn from mathematics, the natural sciences, ethics, law, religion, and philosophy. Appiah’s range is equally wide, but his examples are different; he gives special attention to psychology, ethics, political theory, social thought, and literature. In general he defends the value of idealization, but he is also aware of its intellectual dangers. He emphasizes that it is essential to hold on to the contrasting concept of truth, and to keep in mind both the departures from truth that idealization involves and the specific purposes for which it is useful.
Appiah has packed into this short book an impressive amount of original reflection on a number of topics, so my discussion will have to be selective. He mentions some examples from the natural sciences, but in such abbreviated form that they cannot be understood by readers who are not already familiar with the theories in question.2 I shall discuss some cases where Appiah’s analyses of idealization are more accessible.
The contemporary theory of what is standardly referred to as economic rationality is descended from Adam Smith’s egoistic model of economic behavior; it is based on a much more sophisticated and quantitatively precise but still-idealized model of the psychology of individual choice. The modern discipline of decision theory has permitted a great increase in the exactness of what we can say about this type of human motivation, by introducing quantitative measures of subjective degrees of belief and subjective degrees of preference.
If, for example, on a cloudy day you have to decide whether or not to take an umbrella when you go out, you face four possibilities: (1) rain and umbrella; (2) no rain and umbrella; (3) rain and no umbrella; (4) no rain and no umbrella. Obviously your decision will depend both on your estimate of the likelihood of rain and on how much you mind getting wet, or alternatively how much you mind carrying an umbrella when it isn’t raining, but decision theory makes this more precise. It says your choice is explained by the fact that you assign a probability p between zero and one to the prospect of rain, and (ignoring misty in-between states) a probability of one minus p to the prospect of no rain, and that you assign a desirability, positive or negative, to each of the possibilities (1) to (4). By multiplying the probability and the desirability for each of these outcomes, one can calculate what is called the “expected value” of each of them, and therefore the expected value of taking an umbrella and of not taking an umbrella. The rational choice is to do what has the higher expected value.3
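The calculation just described can be set out as a short sketch in Python. The probability of rain and the four desirability numbers here are invented for illustration; decision theory fixes only the form of the calculation, not these values:

```python
# A minimal sketch of the expected-value calculation for the umbrella
# decision. All numbers are assumptions chosen for the example.

p_rain = 0.6  # subjective probability of rain (assumed)

# Desirabilities (positive or negative) for the four possible outcomes,
# on an arbitrary scale:
desirability = {
    ("rain", "umbrella"): 5,       # dry, at the cost of carrying it
    ("no rain", "umbrella"): -1,   # carried it for nothing
    ("rain", "no umbrella"): -10,  # soaked
    ("no rain", "no umbrella"): 3, # unencumbered and dry
}

def expected_value(action):
    """Probability-weighted sum of desirabilities over rain / no rain."""
    return (p_rain * desirability[("rain", action)]
            + (1 - p_rain) * desirability[("no rain", action)])

ev_umbrella = expected_value("umbrella")        # 0.6*5 + 0.4*(-1) = 2.6
ev_no_umbrella = expected_value("no umbrella")  # 0.6*(-10) + 0.4*3 = -4.8

# The rational choice, on the theory, is the action with the higher
# expected value -- here, taking the umbrella.
choice = "umbrella" if ev_umbrella > ev_no_umbrella else "no umbrella"
```

With these particular numbers the aversion to getting soaked dominates, so the theory recommends the umbrella; shift the probability or the desirabilities and the recommendation can flip, which is exactly the sensitivity the prose describes.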
Decision theory applies this kind of calculus to choices among alternatives of any complexity, with any possible assignment of subjective probabilities and desirabilities. With the help of game theory it can be extended to multiperson interactions, as in a market economy. What interests Appiah is that the theory assigns these supposedly quantifiable psychological states to individuals only on the basis of an idealization. They are not discovered by asking people to report their subjective probabilities and desirabilities: in general, people do not have introspective access to these numbers. Rather, precise psychological states of this type are assigned by the theory itself, on the basis of something to which people do have access, namely their preferences or rankings (better, worse, indifferent) among alternatives.
This by itself does not imply that the states are fictional: real but unobservable underlying causes can often be inferred from observable effects. The fiction comes from the way the inference proceeds in this case. Given a sufficiently extensive set of preferences (rankings of alternatives) by an individual, it is possible, employing relatively simple laws, to assign to that individual a set of subjective probabilities and desirabilities that would account for those preferences, if the individual were rational in the sense of the theory. But since rationality in the sense of the theory involves such superhuman capacities as immunity to logical error, instantaneous calculation of logical consequences, and assigning equal probability and desirability to all possibilities that are logically equivalent, it is clear that no actual humans are rational in this sense. So if we use the theory of economic rationality to think about the behavior of real human beings, we are treating them as if they were superrational (“Cognitive Angels,” in Appiah’s phrase); we are employing a useful fiction, which allows us to bring human action under quantitative laws.
The fiction is useful only for certain purposes. If it is not to lead us astray, we have to recognize the ways in which it deviates from reality, and to correct for those deviations when they make a difference that matters. This is in fact the concern of the recently developed field of behavioral economics, which tries to identify the consequences of systematic deviations of actual human behavior from the standards of classical economic rationality. (For example, people often fail to count logically equivalent possibilities as equally desirable: an outcome framed as a loss will be counted as less desirable than the same outcome framed as the absence of a gain; an outcome described in terms of the probability of death will be evaluated differently from the same outcome described in terms of the probability of survival.) Appiah’s point is more general: if we try to formulate laws of human psychology, we will inevitably have to ignore a great deal of the messy complexity of actual human life. This is sometimes legitimate, provided that we recognize the idealization and are prepared to restore the complexity when necessary—when, for example, assuming the rationality of every free market would send us off an economic cliff.
Consider next a completely nontechnical type of idealization that is omnipresent in contemporary thought and discourse: racial and sexual categories such as “Negro” and “homosexual.” The thought that someone—oneself or another—is a Negro or a homosexual has great personal, social, and political significance in our society. Yet in light of the actual complexity and variety of people’s biological heredity and erotic dispositions these are very crude concepts; they do not correspond to well-defined properties or categories in the real world. Nevertheless, Appiah says, we may find it indispensable to employ them:
In earlier work of my own, for example, I have argued both that races, strictly speaking, don’t exist, and that it is wrong to discriminate on the basis of a person’s race. This can usually be parsed out in a way that is not strictly inconsistent: What is wrong is discrimination against someone because you believe her to be, say, a Negro even though there are, in fact, strictly speaking, no Negroes. But in responding to discrimination with affirmative action, we find ourselves assigning people to racial categories. We think it justified to treat people as if they had races even when we officially believe that they don’t.
These cases do not start out as idealizations. “Negro” and “homosexual” became important social identities because it was widely believed that they were essential properties possessed by some people and not others, and that they had behavioral, social, and moral consequences. Appiah maintains that when someone who does not share these beliefs goes on using the terms, this is not just the verbal acknowledgment of a misguided but tenacious social illusion; it is an example of fictional thinking. We do not truly distance ourselves from these categories and perhaps should not:
Identities, conceived of as stable features of a social ontology grounded in natural facts, are often…assumed in our moral thinking, even though, in our theoretical hearts, we know them not to be real. They are one of our most potent idealizations.
This invites the question: When are these idealizations indispensable, and when on the contrary should we resist them, by appealing to the more complex truth? Appiah addresses this and related questions with great insight in an earlier book, The Ethics of Identity,4 but not here.
Appiah considers another type of idealization that he calls “counter-normative”: thinking or acting as if a moral principle is true although we know it isn’t. He believes we do this when we treat certain prohibitions—against murder or torture, for example—as moral absolutes. His view is that strictly, there are exceptions to any such rule, but it may be better to treat it as exceptionless. In that way we will be sure to avoid unjustified violations, without countervailing risk, since “it is remarkably unlikely that I will ever be in one of those situations where it might be that murder was permissible (and even less likely that I will ever be in one where it is required).” Appiah adds that sometimes the advantage of the fiction will depend on its acceptance not by an individual but by a community. Perhaps the strict rule against making false promises would be an example, since even if it is not universally obeyed, the general belief that it is generally accepted encourages people to trust one another.
Which moral rules one regards as fictions or idealizations will depend on what one believes to be the basis of moral truth. Appiah does not take up this large topic, but his discussion seems most consistent with the view that the ultimate standard of right and wrong is what will produce the best overall outcomes. Counternormative fictions then become useful if we will not achieve the best overall outcomes by aiming in each case at the best overall outcome: it is better to put murder and torture entirely off the table. This is an area of perennial controversy, but those who think the prohibitions on murder, torture, and false promises have a different source, dependent on the intrinsic character of those acts rather than overall outcomes, may be less prone than Appiah to attribute their strictness to idealization.
Appiah concludes with a topic of great philosophical interest, that of idealization in moral theory itself. There is some possibility of confusion here, because he is talking about idealization in a sense somewhat different from that discussed so far.
Every morality is an ideal; it enjoins us to conform to standards of conduct and character that we are often tempted to violate, and it is predictable that ordinary human beings will sometimes fail to conform, even if they accept the morality as correct. This by itself does not involve idealization in Appiah’s sense. The moral principles need depend on no assumptions that are not strictly true. A morality describes not how people do behave but how they should behave; and it has to assume only that they could behave in that way, even if at the moment many of them do not.
The idealization that interests Appiah occurs when political thinkers or philosophers theorize about morality. In developing their accounts, they will often imagine situations or possibilities that differ from what is true in the actual world, as an aid to evaluating moral or political hypotheses. One type of idealization consists in evaluating a moral or political principle by considering what things would be like if everyone complied with it. But as Appiah points out, this is far from decisive:
Consider a familiar kind of dispute. One philosopher—let us call her Dr. Welfare—proposes that we should act in a way that maximizes human well-being. What could be more evident than that this would make for the best world? Another—Prof. Partiality—proposes instead that we should avoid harm to others in general but focus our benevolence on those to whom we have special ties. There is every reason to doubt that this will make a world in which everyone is as well off as could be. But a world in which everyone is succeeding in complying pretty well with Prof. Partiality’s prescription might be better (by standards they share) than a world where most of us are failing pretty miserably to comply with Dr. Welfare’s. And given what people are actually like, one might suppose that these are the likely outcomes.
An ideal that cannot be implemented is futile. The question is, how much of a drag on moral ideals should be exercised by the stubborn facts of human psychology? How far can moral ideals ask us to transcend our self-centered human dispositions without becoming unrealistically utopian? As Appiah says,
Some aspects of human nature have to be taken as given in normative theorizing…, but to take us exactly as we are would involve giving up ideals altogether. So when should we ignore, and when insist on, human nature?
I would suggest that to idealize in this context is not to ignore human nature but to regard it, rightly or wrongly, as capable of change. Only if the change is impossible or undesirable is the idealization utopian.
Appiah illustrates a different kind of reason to avoid excessive idealization with the example of immigration policy. To even pose the problem that faces us we have to take the existence of national boundaries as given, as well as the fact that some states treat their own citizens with flagrant injustice or are beset by chaos and severe deprivation. In thinking about what obligations such a situation places on stable and prosperous states, it is no use imagining a unified world without state boundaries, or a world of uniformly just states in which people are free to move from one to another. Such ideal possibilities do not tell us what we should do now, as things are.
Appiah’s response relies on the idea of fortunate nations each doing their fair share toward alleviating the plight of those seeking asylum, while acknowledging that many nations probably won’t meet this standard. This too is an ideal, but it doesn’t depend on imagining a world very different from the actual one.
Immigration is a special case, but Appiah deploys a more general form of the argument—unsuccessfully, in my view—to criticize the structure of John Rawls’s theory of justice. Rawls presents his most general principles of justice by the device of what he called “ideal theory.” That is, he tries to describe the structure and functioning of a fully just or “well-ordered” society, in which “everyone is presumed to act justly and to do his part in upholding just institutions.” Rawls held that ideal theory was the natural first stage in formulating principles of justice, before proceeding to a systematic treatment of the various forms of injustice and the right ways to deal with them—such as criminal law and principles of rectification. The latter enterprise he described as “nonideal theory,” and he held that it depends on the results of ideal theory.
Appiah objects that the description of a fully just society is no help with the problem we actually face, which is how to make improvements in our actual, seriously unjust society. He adds:
The history of our collective moral learning doesn’t start with the growing acceptance of a picture of an ideal society. It starts with the rejection of some current actual practice or structure, which we come to see as wrong. You learn to be in favor of equality by noticing what is wrong with unequal treatment of blacks, or women, or working-class or lower-caste people. You learn to be in favor of freedom by seeing what is wrong in the life of the enslaved or of women in purdah.
But this is misguided as a response to Rawls, whose method in moral theory is to begin precisely with intuitively obvious examples of injustice like those Appiah cites. Rawls’s philosophical project is to discover general principles that give a morally illuminating account of what is wrong in those cases by showing how they deviate from the standards that we should want to govern our society. Such general principles are needed to help us judge what would be right in less obvious cases. Both levels of inquiry are essential to the systematic pursuit and philosophical understanding of justice, and the whole aim of Rawls’s theory is to unite them. It is highly implausible to claim that an understanding of the general principles that would govern a fully just society will not help us to decide what kinds of social or legal or economic changes to our actual society will make it more just.
There is much more in this rich and illuminating book, including a fine discussion of our emotional response to fiction and drama. Appiah’s insight is that when we feel genuine sadness at the death of Ophelia, it is not because of what Coleridge called the “willing suspension of disbelief,” but because of the suspension of “the normal affective response to disbelief.” We react as if we believe an unhappy young woman has died, although we do not believe it, so this is another case of idealization.
The examples that Appiah discusses are interesting in themselves, but he also thinks they offer a larger lesson:
Once we come to see that many of our best theories are idealizations, we will also see why our best chance of understanding the world must be to have a plurality of ways of thinking about it. This book is about why we need a multitude of pictures of the world. It is a gentle jeremiad against theoretical monism.
It isn’t just that we need different theories for different aspects of the world, but that our best understanding may come from theories or models that are not strictly true, and some of which may contradict one another. This is a liberating outlook, though care must be taken not to let it become too liberating. As Appiah insists, we should not allow the plurality of useful theories to undermine our belief in the existence of the truth, leaving us with nothing but a disparate collection of stories. It is conscious deviation from the truth that makes a theory an idealization, and keeping this in mind is a condition of its value.
The Philosophy of “As If”: A System of the Theoretical, Practical and Religious Fictions of Mankind, translated by C.K. Ogden (Harcourt, Brace, 1924).
At several points he references the philosopher of science Nancy Cartwright, who explored the phenomenon in her book How the Laws of Physics Lie (Oxford University Press, 1983).
For example, if your subjective probability of rain is 0.4 and your subjective desirabilities for the four possibilities are +1, −1, −6, and +2, then the expected values are +0.4, −0.6, −2.4, and +1.2. This makes the expected value for you of taking an umbrella −0.2 and of not taking one −1.2, so it’s rational to take one.
“Men have to toughen up,” Jordan B. Peterson writes in 12 Rules for Life: An Antidote to Chaos. “Men demand it, and women want it.” So, the first rule is, “Stand up straight with your shoulders back” and don’t forget to “clean your room.” By the way, “consciousness is symbolically masculine and has been since the beginning of time.” Oh, and “the soul of the individual eternally hungers for the heroism of genuine Being.” Many such pronouncements—didactic as well as metaphysical, ranging from the absurdity of political correctness to the “burden of Being”—have turned Peterson, a professor of psychology at the University of Toronto, into a YouTube sensation and a bestselling author in several Western countries.
12 Rules for Life is only Peterson’s second book in twenty years. Packaged for people brought up on BuzzFeed listicles, Peterson’s brand of intellectual populism has risen with stunning velocity; and it is boosted, like the political populisms of our time, by predominantly male and frenzied followers, who seem ever-ready to pummel his critics on social media. It is imperative to ask why and how this obscure Canadian academic, who insists that gender and class hierarchies are ordained by nature and validated by science, has suddenly come to be hailed as the West’s most influential public intellectual. For his apotheosis speaks of a crisis that is at least as deep as the one signified by Donald Trump’s unexpected leadership of the free world.
Peterson diagnoses this crisis as a loss of faith in old verities. “In the West,” he writes, “we have been withdrawing from our tradition-, religion- and even nation-centred cultures.” Peterson offers to alleviate the resulting “desperation of meaninglessness,” with a return to “ancient wisdom.” It is possible to avoid “nihilism,” he asserts, and “to find sufficient meaning in individual consciousness and experience” with the help of “the great myths and religious stories of the past.”
Following Carl Jung, Peterson identifies “archetypes” in myths, dreams, and religions, which have apparently defined truths of the human condition since the beginning of time. “Culture,” one of his typical arguments goes, “is symbolically, archetypally, mythically male”—and this is why resistance to male dominance is unnatural. Men represent order, and “Chaos—the unknown—is symbolically associated with the feminine.” In other words, men resisting the perennially fixed archetypes of male and female, and failing to toughen up, are pathetic losers.
Such evidently eternal truths are not on offer anymore at a modern university; Jung’s speculations have been largely discredited. But Peterson, armed with his “maps of meaning” (the title of his previous book), has only contempt for his fellow academics who tend to emphasize the socially constructed and provisional nature of our perceptions. As with Jung, he presents some idiosyncratic quasi-religious opinions as empirical science, frequently appealing to evolutionary psychology to support his ancient wisdom.
Closer examination, however, reveals Peterson’s ageless insights as a typical, if not archetypal, product of our own times: right-wing pieties seductively mythologized for our current lost generations.
Peterson himself credits his intellectual awakening to the Cold War, when he began to ponder deeply such “evils associated with belief” as Hitler, Stalin, and Mao, and became a close reader of Solzhenitsyn’s novel The Gulag Archipelago. This is a common intellectual trajectory among Western right-wingers who swear by Solzhenitsyn and tend to imply that belief in egalitarianism leads straight to the guillotine or the Gulag. A recent example is the English polemicist Douglas Murray, who deplores the attraction of the young to Bernie Sanders and Elizabeth Warren and wishes that the idea of equality were “tainted by an ideological ordure equivalent to that heaped on the concept of borders.” Peterson confirms his membership of this far-right sect by never identifying the evils caused by belief in profit, or Mammon: slavery, genocide, and imperialism.
Reactionary white men will surely be thrilled by Peterson’s loathing for “social justice warriors” and his claim that divorce laws should not have been liberalized in the 1960s. Those embattled against political correctness on university campuses will heartily endorse Peterson’s claim that “there are whole disciplines in universities forthrightly hostile towards men.” Islamophobes will take heart from his speculation that “feminists avoid criticizing Islam because they unconsciously long for masculine dominance.” Libertarians will cheer Peterson’s glorification of the individual striver, and his stern message to the left-behinds (“Maybe it’s not the world that’s at fault. Maybe it’s you. You’ve failed to make the mark.”). The demagogues of our age don’t read much; but, as they ruthlessly crack down on refugees and immigrants, they can derive much philosophical backup from Peterson’s sub-chapter headings: “Compassion as a vice” and “Toughen up, you weasel.”
In all respects, Peterson’s ancient wisdom is unmistakably modern. The “tradition” he promotes stretches no further back than the late nineteenth century, when there first emerged a sinister correlation between intellectual exhortations to toughen up and strongmen politics. This was a period during which intellectual quacks flourished by hawking creeds of redemption and purification while political and economic crises deepened and faith in democracy and capitalism faltered. Many artists and thinkers—ranging from the German philosopher Ludwig Klages, member of the hugely influential Munich Cosmic Circle, to the Russian painter Nicholas Roerich and Indian activist Aurobindo Ghosh—assembled Peterson-style collages of part-occultist, part-psychological, and part-biological notions. These neo-romantics were responding, in the same way as Peterson, to an urgent need, springing from a traumatic experience of social and economic modernity, to believe—in whatever reassures and comforts.
This new object of belief tended to be exotically and esoterically pre-modern. The East, and India in particular, turned into a screen on which needy Westerners projected their fantasies; Jung, among many others, went on tediously about the Indian’s timeless—and feminine—self. In 1910, Romain Rolland summed up the widespread mood in which progress under liberal auspices appeared a sham, and many people appeared eager to replace the Enlightenment ideal of individual reason by such transcendental coordinates as “archetypes.” “The gate of dreams had reopened,” Rolland wrote, and “in the train of religion came little puffs of theosophy, mysticism, esoteric faith, occultism to visit the chambers of the Western mind.”
A range of intellectual entrepreneurs, from Theosophists and vendors of Asian spirituality like Vivekananda and D.T. Suzuki to scholars of Asia like Arthur Waley and fascist ideologues like Julius Evola (Steve Bannon’s guru), set up stalls in the new marketplace of ideas. W.B. Yeats, adjusting Indian philosophy to the needs of the Celtic Revival, pontificated on the “Ancient Self”; Jung spun his own variations on this evidently ancestral unconscious. Such conceptually foggy categories as “spirit” and “intuition” acquired broad currency; Peterson’s favorite words, being and chaos, started to appear in capital letters. Peterson’s own lineage among these healers of modern man’s soul can be traced through his repeatedly invoked influences: not only Carl Jung, but also Mircea Eliade, the Romanian scholar of religion, and Joseph Campbell, a professor at Sarah Lawrence College, who, like Peterson, combined a conventional academic career with mass-market musings on heroic individuals.
The “desperation of meaninglessness” widely felt in the late nineteenth century seemed especially desperate in the years following two world wars and the Holocaust. Jung, Eliade, and Campbell, all credentialed by university education, met a general bewilderment by suggesting the existence of a secret, almost gnostic, knowledge of the world. Claiming to throw light into recessed places in the human unconscious, they acquired immense and fanatically loyal fan clubs. Campbell’s 1988 television interviews with Bill Moyers provoked a particularly extraordinary response. As with Peterson, this popularizer of archaic myths, who believed that “Marxist philosophy had overtaken the university in America,” was remarkably in tune with contemporary prejudices. “Follow your own bliss,” he urged an audience that, during an era of neoconservative upsurge, was ready to be reassured that some profound ancient wisdom lay behind Ayn Rand’s paeans to unfettered individualism.
Peterson, however, seems to have modelled his public persona on Jung rather than Campbell. The Swiss sage sported a ring ornamented with the effigy of a snake—the symbol of light in a pre-Christian Gnostic cult. Peterson claims that he has been inducted into “the coastal Pacific Kwakwaka’wakw tribe”; he is clearly proud of the Native American longhouse he has built in his Toronto home.
Peterson may seem the latest in a long line of eggheads pretentiously but harmlessly romancing the noble savage. But it is worth remembering that Jung recklessly generalized about the superior “Aryan soul” and the inferior “Jewish psyche” and was initially sympathetic to the Nazis. Mircea Eliade was a devotee of Romania’s fascistic Iron Guard. Campbell’s loathing of “Marxist” academics at his college concealed a virulent loathing of Jews and blacks. Solzhenitsyn, Peterson’s revered mentor, was a zealous Russian expansionist, who denounced Ukraine’s independence and hailed Vladimir Putin as the right man to lead Russia’s overdue regeneration.
Nowhere in his published writings does Peterson reckon with the moral fiascos of his gurus and their political ramifications; he seems unbothered by the fact that thinking of human relations in such terms as dominance and hierarchy connects too easily with such nascent viciousness as misogyny, anti-Semitism, and Islamophobia. He might argue that his maps of meaning aim at helping lost individuals rather than racists, ultra-nationalists, or imperialists. But he can’t plausibly claim, given his oft-expressed hostility to the “murderous equity doctrine” of feminists, and other progressive ideas, that he is above the fray of our ideological and culture wars.
Indeed, the modern fascination with myth has never been free from an illiberal and anti-democratic agenda. Richard Wagner, along with many German nationalists, became notorious for using myth to regenerate the volk and stoke hatred of the aliens—largely Jews—who he thought polluted the pure community rooted in blood and soil. By the early twentieth century, ethnic-racial chauvinists everywhere—Hindu supremacists in India as well as Catholic ultra-nationalists in France—were offering visions to uprooted peoples of a rooted organic society in which hierarchies and values had been stable. As Karla Poewe points out in New Religions and the Nazis (2005), political cultists would typically mix “pieces of Yogic and Abrahamic traditions” with “popular notions of science—or rather pseudo-science—such as concepts of ‘race,’ ‘eugenics,’ or ‘evolution.’” It was this opportunistic amalgam of ideas that helped nourish “new mythologies of would-be totalitarian regimes.”
Peterson rails today against “softness,” arguing that men have been “pushed too hard to feminize.” In his bestselling book Degeneration (1892), the Zionist critic Max Nordau amplified, more than a century before Peterson, the fear that the empires and nations of the West were populated by the weak-willed, the effeminate, and the degenerate. The French philosopher Georges Sorel identified myth as the necessary antidote to decadence and spur to rejuvenation. An intellectual inspiration to fascists across Europe, Sorel was particularly nostalgic about the patriarchal systems of ancient Israel and Greece.
Like Peterson, many of these hyper-masculinist thinkers saw compassion as a vice and urged insecure men to harden their hearts against the weak (women and minorities) on the grounds that the latter were biologically and culturally inferior. Hailing myth and dreams as the repository of fundamental human truths, they became popular because they addressed a widely felt spiritual hunger: of men looking desperately for maps of meaning in a world they found opaque and uncontrollable.
It was against this (eerily familiar) background—a “revolt against the modern world,” as the title of Evola’s 1934 book put it—that demagogues emerged so quickly in twentieth-century Europe and managed to exalt national and racial myths as the true source of individual and collective health. The drastic individual makeover demanded by the visionaries turned out to require a mass, coerced retreat from failed liberal modernity into an idealized traditional realm of myth and ritual.
In the end, deskbound pedants and fantasists helped bring about, in Thomas Mann’s words in 1936, an extensive “moral devastation” with their “worship of the unconscious”—that “knows no values, no good or evil, no morality.” Nothing less than the foundations for knowledge and ethics, politics and science, collapsed, ultimately triggering the cataclysms of the twentieth century: two world wars, totalitarian regimes, and the Holocaust. It is no exaggeration to say that we are in the midst of a similar intellectual and moral breakdown, one that seems to presage a great calamity. Peterson calls it, correctly, “psychological and social dissolution.” But he is a disturbing symptom of the malaise to which he promises a cure.
Fifty-five years ago, The New York Review published its first issue. To celebrate the magazine’s emerald anniversary, in 2018 we will be going through the archives year by year, featuring some of the notable, important, and sometimes forgotten pieces that appeared in its pages. You can follow us on social media (Facebook and Twitter) for links to archival highlights along with the newest articles, and you can sign up for our twice-weekly email newsletter for periodic updates.
In September 1966, the twenty-five-year-old civil rights activist and head of the Student Non-Violent Coordinating Committee published this essay defining the goals of the black power movement.
For too many years, black Americans marched and had their heads broken and got shot. They were saying to the country, “Look, you guys are supposed to be nice guys and we are only going to do what we are supposed to do—why do you beat us up, why don’t you give us what we ask, why don’t you straighten yourselves out?” After years of this, we are at almost the same point—because we demonstrated from a position of weakness. We cannot be expected any longer to march and have our heads broken in order to say to whites: come on, you’re nice guys. For you are not nice guys. We have found you out. »
In the Review’s February 23, 1967 issue, Noam Chomsky published a 12,000-word essay on “The Responsibility of Intellectuals.” “We can hardly avoid asking ourselves,” Chomsky wrote, “to what extent the American people bear responsibility for the savage American assault on a largely helpless rural population in Vietnam… As for those of us who stood by in silence and apathy as this catastrophe slowly took shape over the past dozen years—on what page of history do we find our proper place?” In this exchange later that spring, George Steiner pressed Chomsky on the question of what political or personal actions ought to be taken to end the war.
GS: I write to express my admiration for your lucid and compelling essay. But I write also to ask what your next paragraph would be? The mendacities which surround us need exposure. But what then? You rightly say that we are all responsible; you rightly hint that our future status may be no better than that of the acquiescent intellectual under Nazism. But what action do you urge or even suggest?
NC: I do feel that the crucial question, unanswered in the article, is what the next paragraph should say. I’ve thought a good deal about this, without having reached any satisfying conclusions. I’ve tried various things—harassing congressmen, “lobbying” in Washington, lecturing at town forums, working with student groups in preparation of public protests, demonstrations, teach-ins, etc., in all of the ways that many others have adopted as well. The only respect in which I have personally gone any further is in refusal to pay half of my income tax last year, and again, this year. My own feeling is that one should refuse to participate in any activity that implements American aggression—thus tax refusal, draft refusal, avoidance of work that can be used by the agencies of militarism and repression, all seem to me essential. »
“I and my colleagues have been happily torn from a long nap by the energy of rock,” composer Ned Rorem wrote in January 1968, “principally as embodied in the Beatles. Naturally I’ve grown curious about their energy. What are its origins? What need does it fill?”
I never go to classical concerts any more and I don’t know anyone who does. It’s hard still to care whether some virtuoso tonight will perform the Moonlight Sonata a bit better or a bit worse than another virtuoso performed it last night. But I do often attend what used to be called avant-garde recitals, though seldom with delight, and inevitably I look around and wonder: what am I doing here? Where are the poets and painters and even the composers themselves who used to flock to these things? Well, perhaps what I am doing here is a duty, keeping an ear on my profession so as to justify the joys of resentment, to steal an idea or two, or just to show charity toward some friend on the program. But I learn less and less. Meanwhile the absent artists are at home playing records; they are reacting again, finally, to something they no longer find at concerts. Reacting to what? Why, to the Beatles, of course, whose arrival I believe is one of the most healthy events in music since 1950. »