Obama and the Legacy of Africa’s Renaissance Generation


Barack Obama as a child with his father Barack Obama Sr., 1960s (Reuters/Obama For America/Handout)

It came to be a core belief held by the American public and media that Barack Obama was a self-creation who had stepped out of nowhere. In a racially divided society, for some the idea that he belonged to no tribe made it possible to vote for him. For his detractors, of whom Trump and his birther movement were the most visible, the belief provided an opportunity to claim that Obama was not a true American. Indeed, he cut a solitary figure: parents and American grandparents dead, no full siblings; what else there was of his family lived in Kenya, which might as well have been the moon to many Americans. Marriage to Michelle gave Obama what he appeared to lack, a family and a community, though his Kenyan ancestry meant he was a member of the African-American community by adoption rather than birthright.

Against the backdrop of the fantasy of normality to which American (and not just American) popular culture subscribes—that is to say, the insistence that all but a few grow up in the same town and live there all their lives—Obama’s story appeared unusual. The truth is that his grandparents made the move to Hawaii (after several moves around the country), doing what millions of Americans before them have done and continue to do: searching for better opportunities. One result is that families become stretched over distance and time until the links between uncles, aunts, cousins, and generations are broken and reformed with new generations in new places.

Even so, the stand-out fact of Obama’s biography remained and remains that he had been born of a Kenyan father and a white mother. “No life could have been more the product of randomness than that of Barack Obama,” wrote David Maraniss in his 2012 biography of the former president. This, though, is the case only when his life is viewed from an American perspective. From an African perspective, the tradition of sending young men to study overseas, as was the case with Barack Obama Sr., is a familiar and longstanding one. In 1852, William Wells Brown, the American playwright, fugitive slave, and abolitionist, noted that he might meet half a dozen black students in an hour’s walk through central London. Some sixty years before that, in 1791, the Temne King Naimbana (of what became Sierra Leone in West Africa) sent his son John Frederick to England, for reasons of political expediency (he sent another to France, and a third to North Africa to acquire an Islamic education). Tragically, John Frederick never made it home, but died on the return passage.

In the second half of the twentieth century, geopolitical events—the end of empires, the rise of nationalism in African countries, the cold war, communism, and the second “red scare”—would see an exponential rise in the numbers of Africans sent to study overseas. So the meeting of Obama’s parents came about more as the unintended consequence of political policy than by random chance. For me, Obama’s story is remarkably familiar. My parents met under very similar circumstances. My father was born in 1935 in Sierra Leone; Barack Obama Sr. was born in Kenya in 1936. My mother was white and British; Obama’s mother was a white American. Both women met and married the men who would become our respective fathers when those men were selected to study at university abroad—a story Obama relates only briefly in his memoir Dreams from My Father:

My father grew up herding his father’s goats and attending the local school, set up by the British colonial administration, where he had shown great promise. He eventually won a scholarship to study in Nairobi; and then on the eve of Kenyan independence, he had been selected by Kenyan leaders and American sponsors to attend a university in the United States, joining the first wave of Africans to be sent forth to master Western technology and bring it back to forge a new, modern Africa.

Obama was wrong about one thing: his father was not in the first wave of students sent overseas to master Western technology, though he was in the first wave of Kenyans who were sent to America. Up until then, most African students had been destined for Britain and, starting after World War II, for the Soviet Bloc and China. In fact, the adventures of this generation of Africans would one day inspire a genre of literature, collectively known as the “been to” novels, exemplified by Ayi Kwei Armah’s Fragments, No Longer at Ease by Chinua Achebe, and Ama Ata Aidoo’s Dilemma of a Ghost, fictions that told of the challenges both of leaving the motherland for the West and of return.

*

My father’s insistence that only a British boarding school was able to provide an education good enough for his children had me in tears at Freetown’s Lungi Airport three times a year as we waited to board the plane to London. My father was unyielding, reminding us constantly of the value of the enterprise we were undertaking and about which I didn’t care in the slightest. Paying for our education came before buying a house, before foreign travel, before everything. My father’s own story was both extraordinary and yet, in its own way, entirely typical of the changing times in which he was born. The son of a wealthy farmer and a regent chief from the north of Sierra Leone, Mohamed Forna had won a scholarship at an early age to Bo School, “the Eton of the Protectorate,” as it was known, many miles from home in the south of the country.


Mohamed Forna, 1957 (Aminatta Forna)

At the time, Sierra Leone was a British colony, though one that was never settled by whites, who, unable to tolerate the climate, died in such droves from malaria and tropical illnesses that the country was dubbed “the white man’s grave.” British fragility made a crucial difference to the style of governance Britain chose to adopt in West Africa. Instead of a full-fledged colonial government such as existed in Kenya, where the climate of the Highlands was suited to both coffee and Europeans, in Sierra Leone the guardians of empire relied instead on a system of “native administration.” Bo School was founded by the British for the sons of the local aristocracy, who, according to plan, would play a leading role in governing Sierra Leone on behalf of the British.

Generally, the British were cautious about allowing their colonial subjects much in the way of book-learning. The colonial project had begun with a great deal of hubris, talk of a civilizing mission and the belief that Britain could create the world in its own image. Education was a part of that mission. But by the time Lord Lugard, the colonial administrator and architect of native administration, became the governor of Nigeria in 1912, he was sounding warnings against “the Indian disease,” namely the creation, through education, of an intellectual class who would embrace nationalism. Burned by the threat of insurrection elsewhere in the Empire, though still intent on pursuit of an administration staffed by local talent, the British allowed a few Africans just enough education to create a core of black bureaucrats, but no more.

Sierra Leone’s beginnings were a little different from those of Britain’s other African holdings. In the late eighteenth century, British philanthropists had established settlements there of people freed from slavery, many of whom had fled from America to Britain following Lord Mansfield’s 1772 ruling that protected escaped slaves. As part of this social engineering experiment, schools and even a university were established in the capital, Freetown. Fourah Bay College, established in 1827, was the first institute of higher education built in West Africa since the demise of the Islamic universities in Timbuktu. Elsewhere in Britain’s African dominions, and in the early days of empire, most educational establishments were built by evangelically motivated Christian missionaries, and they were tolerated but not encouraged by the colonial administration.

In Kenya in the 1920s, precisely what Lugard feared began to happen: missionary-educated Kenyan men established their own churches and challenged white rule. The locals had a name for Western-educated Kenyans: Asomi. Harry Thuku, the father of Kenyan nationalism (whose story is narrated in Ngũgĩ wa Thiong’o’s tale of the Mau Mau rebellion, A Grain of Wheat), was one such. In their churches, Asomi pastors accused the missionaries of distorting the Bible’s message to their own ends and preached an Africanized version of Christianity, and the Asomi founded associations to represent African interests and built their own schools in which pupils were imbued with a sense of patriotism and pride.

Still, whatever resistance Britain’s Colonial Office offered to the idea of the educated native, by the later days of empire, faced with ever-growing demands for colonial reform, the British began to build a limited number of government institutions, with the intention, in the words of the Conservative minister Oliver Stanley in 1943, of guiding “Colonial people along the road to self-government within the framework of the British Empire.” Any future form of self-governance was intended to create the basis for neocolonialism and a bulwark against the threat of communism.

Shifts in British attitudes, however, were soon outstripped by African ambitions. One million African men had fought on the Allied side during World War II, and those experiences had broadened their worldview. Many had learned to read and write—among them, Obama’s grandfather, Onyango, who, according to Obama family lore, traveled to Burma, Ceylon, the Middle East, and Europe as a British officer’s cook. Whether Onyango knew how to read and write English before he was recruited is unknown; it is possible, though unlikely. By the time he came back, however, he was able to teach his young son his letters before sending him to school. In Dreams from My Father, Barack Obama recounts what his great-aunt Dorsila, Onyango’s surviving sister, remembered of his grandfather: “For to [Onyango] knowledge was the source of all the white man’s power, and he wanted to make sure his son was as educated as any white man.”

Across the continent, emerging nationalist movements were gaining ground. For them, literacy followed by the creation of an elite class of professionals were the necessary first steps toward full independence. The courses on offer at the government colleges were restricted in subject and scope (syllabuses had to be approved by the colonial authorities) and the colleges themselves could admit only limited numbers of students. Energized and impatient, a new generation refused to wait or to play by the Englishman’s rules. With too few opportunities on the continent, they set their sights overseas, on Britain itself.

Few had the means to cover the costs of travel and fees. There were a limited number of scholarships available through the colonial governments, mainly to study subjects the local universities were not equipped to teach, such as medicine. A lucky few found wealthy patrons; others still were sponsored by donations from their extended families, and sometimes from entire villages. The Ghanaian nationalist and politician Joe Appiah, father of the philosopher Kwame Anthony Appiah, ditched his job in Freetown without telling his employers and bought himself a one-way ticket on a ship bound for Liverpool, hoping to get by on his luck and wit.

*


The author’s parents, Mohamed Forna and Maureen Margaret Christison, on their wedding day, 1961 (Aminatta Forna)

My mother Maureen has a particular memory of my father. On April 27, 1961, the day Sierra Leone became a self-governing nation, he got roaring drunk at a sherry party held by African students at the premises of the British Council in Aberdeen. The couple had married at the registry office in Aberdeen one month before, in a ceremony attended by their friends among the West African students. On the way home, on the top deck of the bus, my father lit six cigarettes and puffed on them all at once. “But Mohamed, you don’t even smoke,” my mother had protested. And my father replied: “I’m smoking the smoke of freedom, man. I’m smoking the smoke of freedom.”

In the decades between the two world wars, Britain emerged as “the locus of resistance to empire” where anti-colonial movements were shaped by the growth of Pan-Africanist ideals among artists, intellectuals, students, and activists from the colonies. The Kenyan writer and activist Ngũgĩ wa Thiong’o, recalling his arrival in Leeds in 1964, remarked to me:

For the first time I was able to look back at Kenya and Africa, from outside Kenya. Many of the things that were happening in Africa at that time, independence and all that, were not clear to me when I was in Kenya but made sense when I was in Leeds meeting other students from Africa, Nigeria, Ghana, students from Australia, every part of the Commonwealth, students from Bulgaria, Greece, Iraq, Afghanistan—we all met there in Leeds, we had encounters with Marx with Lenin, and all that began to clarify for me a change of perspective.

Among those elites who gathered there, driven by, and driving, the desire for self-rule, were Jomo Kenyatta, Kwame Nkrumah, Michael Manley, Marcus Garvey, C.L.R. James, Seretse Khama, Julius Nyerere, as well as a number of African Americans, including Paul and Eslanda Goode Robeson. In London, anti-colonial and Pan-Africanist ideas were shared and enlarged, spurred by a shared experience as colonial subjects in their homelands and as the victims of racism and the color bar in Britain. “They were brought together too by the fact that the British—those who helped and those who hindered—saw them all as Africans, first of all,” writes Anthony Appiah. And so those who may previously never have identified themselves as such began to do so and explore the commonalities of race, racism, and nationalism. And out of those conversations arose new political possibilities involving international organizations and the opportunity for cultural exchange.

Arrival in Britain brought with it many shocks for the colonial students. Whereas before they were Sierra Leonean and Temne, Luo and Kenyan, Hausa and Nigerian, suddenly they were simply black, subject to all the attitudes and reactions conferred by their skin color. Signs declaring “No Irish, No Dogs, No Blacks” were still common on rental properties during my father’s time in Scotland. My mother told me of the insults my father endured in the street—directed at her as well, when they were together. Later, my father’s second wife—my stepmother, who also went to university in Aberdeen and vacationed in London, staying in the apartments of other African students—recalled the gangs of racist skinheads who arrived to break up their gatherings. “Somebody would run and call for the West Indians,” she told me, their Caribbean neighbors being more experienced in fending off such attacks. In a reversal of the immigrant dream story, Sam Selvon’s 1956 novel The Lonely Londoners tells the story of black people arriving in the 1950s in search of prosperity and a new life, only to discover cruelty and misery.

In order to confront the challenges of their new lives, as well as to keep abreast of political developments back home, the colonial students organized themselves into societies and associations. One such was the hugely influential West African Students’ Union, or WASU. If London was the heart of resistance, then WASU was its circulatory system. My father and his friends were all WASU members, as was every former student of that time from a West African country to whom I have ever spoken. WASU was the center of their social, cultural, and, especially, political life. It also “functioned as a training ground for leaders of the West African nationalist movement,” wrote the historian Peter Fryer; indeed, both Kwame Nkrumah and Joe Appiah were among the leading names who served on WASU’s executive committee.

Unnerved by the gathering pace of calls for independence, the Colonial Office kept a close eye on the students’ activities. In London, the department funded two student hostels, which aided the many students whom the color bar prevented from finding decent lodging (and also kept the students conveniently in one location). The civil servants also spied on the African students through MI5. A tug-of-war was taking place within the Colonial Office: on one side were the “softly-softlies” who favored an approach designed to promote good relations with the future leaders; on the other were the hardliners concerned that Communist ideas might take root among the rising generation. Such was the fear of Communist-inspired insurrection in West Africa that the colonial authorities there banned Marxist literature and restricted travel to Eastern European countries.

The colonial administrator Lord Milverton once described WASU as “a communist medium for the contact of communists with West Africans” through the Communist Party of Great Britain. Then-parliamentarian David Rees Williams even accused the Communist Party of using prostitutes to spread its message and called for restrictions on the numbers of students entering the country from the colonies. Though MI5 did not go so far as to keep individual files on all the students, they did do so for the most visible leaders like Nkrumah, whose phone they tapped.

Certainly, there were Marxist sympathizers among the WASU leadership and the African student body in general. Ngũgĩ wa Thiong’o talked to me about his road to Marxism, which began during his student years in Leeds, when he saw poor whites for the first time and witnessed, during student demonstrations there, white policemen turning on their own, a “vicious crushing of dissent.” Julius Nyerere turned to socialism during his time in Edinburgh, returning to Tanzania in 1952 to become a union organizer and later the first president of a new, socialist republic.


Members of the West African Students’ Union (WASU), London, 1920s–1930s (wasuproject.org.uk)

By the 1960s, with the colonies gaining independence one by one, and China and the Soviet bloc beginning to offer their own scholarships, the softly-softly approach had prevailed within Britain’s Colonial Office. The administration of the students’ affairs was handed over to the British Council, which began a diplomatic charm offensive. Before they even left home, students on government scholarships were offered induction seminars on what to wear and how to conduct themselves in the homes of British people, and shown films on how to navigate the challenges of daily life. In one of these films, entitled Lost in the Countryside, a pair of Africans abroad (dressed in tweeds, they emerge from behind a haystack) are instructed firmly: “Do not panic! Find a road. Locate a bus-stop. Join the queue [and there in the middle of nowhere is a line of people]. A bus will arrive. Board it and return to town.” Once the students were in the UK, the British Council arranged home-stays for those Africans who wanted an up-close experience of the British (some 9,500 said they did). My stepmother recalls being advised never to sit in the chair of the head of household, a faux pas of which she has retained a dread all her life.

And finally, there were social events at the Council’s premises in various British cities. At a Christmas dance in the winter of 1959, my father, a third-year medical student at Aberdeen University, approached a young woman, a volunteer named Maureen who was helping to pour drinks for the party, put out his hand and said: “I’m Mohamed.”

*

If the attitude of the British authorities toward the West Africans was one of wavering welcome, the attitude toward the East Africans, Kenyans in particular, was even more complicated. In 1945, there were about 1,000 colonial students in Britain, two thirds of whom came from West Africa and only sixty-five of whom came from East Africa. In Kenya, a simmering mood of rebellion had by the 1950s given rise to the Mau Mau, a movement that explicitly rejected white rule and gave voice to the resentment against colonial government taxes, low wages, and the miserable living conditions endured by many Kenyans. The Mau Mau, which found its support mainly among the Kikuyu people who had been displaced from their lands by white farmers, demanded political representation and the return of land. Facing armed insurrection, in 1952 the British declared a state of emergency, and tried and imprisoned the nationalist leader (who would later become the first president of Kenya) Jomo Kenyatta, who had returned to his homeland from London in 1947.

Upon Kenyatta’s imprisonment, Kenyan nationalists turned to the United States for support. The activist Tom Mboya, a rising political star who in 1960 featured on Time magazine’s cover as the face of the new Africa, became the strongest voice calling for independence in Kenyatta’s absence. In 1959, Mboya began working with African-American organizations—in particular, the historically black private and state colleges, as well as civil rights champions such as Harry Belafonte, Sidney Poitier, Jackie Robinson, and Martin Luther King Jr.—and toured the United States talking about black civil rights and African nationalism as two sides of the same coin. His aim was to raise money for a scholarship program to bring Kenyan students to the US. Over two months, Mboya gave a hundred speeches and met with then Vice President Richard Nixon at the White House. By that point, independence for Kenya was a matter of when, not if—after all, Ghana had already attained independence—and it looked very much as though Britain was deliberately refusing Kenyans the help they needed to prepare for self-governance.

So here was Mboya offering the United States a foothold of influence in Africa, which Britain, even against the backdrop of a cold war scramble for the allegiance of African nations, was too churlish or too arrogant to secure. Although Nixon stopped short of agreeing to meet Mboya’s request for help, the Democratic candidate for the 1960 presidential election John F. Kennedy did do so, and his family’s foundation donated $100,000 to what became known as the “African student airlifts,” the first of which had taken place in 1957.

Mboya was a member of the Luo people, a friend of Onyango’s, and sometime mentor to his son, Barack Obama Sr. On his own initiative, Obama Sr. had managed to secure himself an offer from the University of Hawaii, and this won him a place on a later airlift in 1959. Here was a young man with an excellent brain, and here, too, was a new dawn on the horizon bringing with it a new country—Obama Sr. saw himself as part of it all. The writer Wole Soyinka, who himself studied at Leeds, England, in the 1950s, had a name for them, the young men and women who came of age at the same time as their countries; he called them the “Renaissance Generation.”


Jomo Kenyatta, the first president of Kenya, with Ghanaian Prime Minister Kwame Nkrumah at the Commonwealth Prime Ministers’ Conference, Marlborough House, London, 1965 (Evening Standard/Getty Images)

Just as the West African students bound for Britain had been coached in what to expect, so the Kenyans were briefed on arrival in the United States, including about the prevailing racial attitudes they should expect to encounter there. The world-renowned anthropologist and now director of the Makerere Institute of Social Research Mahmood Mamdani, who traveled to the US on a 1963 Kenyan airlift, recalls being told it would be “preferable for us to wear African clothing when going into the surrounding communities because then people would know we were African and we would be dealt with respectfully.” Under colonial rule, Kenyans certainly did not share the privileges of whites; even so, for many African students the daily indignities of racial segregation in America came as a shock. At least one was arrested for trying to buy a sandwich at a whites-only lunch counter, and some of those studying at universities in the South were prompted by their experience of Southern racism to ask to be transferred to Northern colleges. As had been the case for their counterparts in Britain, a close eye was kept on their activities. Returning from a trip to Montgomery, Alabama, Mamdani got a visit from FBI agents; he recalls that they asked if he liked Marx, to which Mamdani replied in perfect innocence that he had never met the man. Informed that Marx was dead, he replied: “Oh no! What happened?” And as he told me in our conversation many years later: “The abiding outcome of that visit was that I went to the library to look up Marx.”

Obama Sr.’s choice of the University of Hawaii was, in many ways, an unfortunate one. Hawaii was more cosmopolitan than other parts of the United States and he did at least escape some of the racist attitudes that confronted other African students, but he was far from all the debates, meetings, lobbying, and activism about independence that were taking place at the universities and historically black colleges on the mainland. When the opportunity arose, he chose to continue his studies at Harvard—and part of the reason was undoubtedly that he wanted to get closer to the action. In 1961, Kenyatta was released from jail; two years later, Kenya declared independence. When all that happened, Obama Sr. was still a long way from home—just as my father was when Sierra Leone won its independence.

In time, Ngũgĩ would return from Leeds, and Mamdani from the United States. Ngũgĩ was by then a published author, having abandoned his studies to write Weep Not, Child. Mamdani went on to teach at Makerere University, which became the venue for the famous 1962 African Writers’ Conference, and he helped to transform it from a colonial technical college into a vibrant university. One of the few women on the airlift, Wangari Maathai, flew back home from Pittsburgh in 1966, later to found the Green Belt Movement, an initiative focusing on environmental conservation that today is credited with planting fifty-one million trees in Kenya and for which Maathai would be awarded a Nobel Peace Prize. Still, for Kenya, as for every one of the new African nations, independence proved a steep and rocky road. Five hundred students who had earned their degrees overseas returned home, a significant proportion of them the American-educated Asomi. They would become the educators, administrators, accountants, lawyers, doctors, judges, and businessmen in the new Kenya. Despite the best efforts of Tom Mboya and his supporters, Kenya had only a fraction of the college-educated young professionals it needed.

*


The author with her father, 1966 (Aminatta Forna)

Eight years after he had left Sierra Leone, my father returned. His elder brother had died and his family wrote that Mohamed was needed at home. By then, he was a qualified medical doctor, with a wife and three children. The year before, Obama Sr. had also returned home after the US government declined to renew his visa. Medical students and those who went on to higher degrees, especially, had found themselves away for long periods, as much as a decade. Unsurprisingly, in that time, many of the men had formed romantic attachments with local women. If those relationships were frowned upon in Britain, they were illegal in much of America. Loving v. Virginia, the case before the Supreme Court that finally overturned the ban on interracial unions, was not decided until 1967. When the Immigration and Naturalization Service declined Obama Sr.’s request to remain in the country, his relations with women were reported to be part of the problem. Already, he had fathered one child with Ann Dunham, a son also named Barack, but that marriage was over, and he had formed a new relationship with another white woman, Ruth Baker.

In Britain, the authorities, though they did not encourage such unions, did not intervene except, notably, in the case of Seretse Khama, heir to the Bangwato chieftaincy in Bechuanaland (now Botswana), and Ruth Williams. This was at the behest of white-ruled South Africa, whose government would not tolerate an interracial marriage on its borders. Jomo Kenyatta had a child, Peter, with his British wife. I used to pass Peter in the corridors of the BBC, where for a time we both worked; he was in management, while I was a junior reporter awed by the prestige of his last name. The marriage of Joe Appiah to Peggy Cripps, the daughter of the Labour politician Sir Stafford Cripps, was one of the most high-profile unions of the day that also happened to be a mixed marriage.

Of Ann Dunham, first wife to Obama Sr. and mother of the future president, a childhood friend would later say: “She just became really, really interested in the world. Not afraid of newness or difference. She was afraid of smallness.” The same could be said of my mother, Maureen Christison. Aberdeen was simply too small for her. The African students represented a world beyond the gray waters of the North Sea. In the Scottish writer Jackie Kay’s Red Dust Road, her 2010 memoir of her search for her Nigerian father who studied in Scotland in the 1950s, her father overturns conventional wisdom in remarking how popular the male African students were with the local girls. The men frequently came from aristocratic families—both Appiah and Khama were royal, and my father was the son of a regent chief and landowner. “You must remember,” a contemporary of my parents observed during the time when I was researching my own memoir of my father, “they were the chosen ones.”

In 2017, in a New York Times op-ed assessing President Obama’s foreign policy legacy, Adam Shatz noted that Obama was “a well-traveled cosmopolitan… seemingly at home wherever he planted his feet. His vision of international diplomacy stressed the virtues of candid dialogue, mutual respect and bridge building.” Obama’s cosmopolitanism was rooted in several places: the fact of his Kenyan father (though not his immediate influence, since Obama Sr. was gone from the family before Obama was old enough to remember him), and later his painstaking search to assemble the pieces of his birthright, would do much to extend his vision. But before all of that, it was his mother, Ann, who instilled in him the foundations of his internationalism. She rehearsed for her son the version of his father’s story that Obama Sr. told of himself: that of the idealist devoted to building a new Kenya—though in reality he was an unreliable husband and father, whose career fell well short of his own expectations. It was Ann who remained true to that vision of a new world, who easily made friends with people of different nationalities, who subsequently married an Indonesian, and took her son to Indonesia to spend a formative period of his childhood, where she spent many years running development projects. My mother Maureen never returned to Scotland after the break-up of her marriage to my father. She married again, to a New Zealander who worked for the United Nations, and spent her life moving around the world, in time building her own international career within the UN.

Both women entered an international professional class, a group that the British historian David Goodhart disparagingly describes as the “anywheres”: people whose sense of self is not rooted in a single place or readymade local identity. If Obama’s search in Dreams from My Father was a quest for his African identity, it was also, and conversely, an attempt to discover whether he could ever be a “somewhere,” whether that somewhere was a place (in time, he would choose Chicago) or a people, part of an African-American community.

His next book, The Audacity of Hope, became, by contrast, a plea for complexity. Of his extended family of Indonesians, white Americans, Africans, and Chinese—in which I find a mirror for mine: African, European, Iranian, New World, and Chinese—Obama writes: “I’ve never had the option of restricting my loyalties on the basis of race or measuring my worth on the basis of tribe.” Obama knew and understood that he had more than one identity, that all of us do. Anthony Appiah credits his own avowed cosmopolitanism to his father Joe’s relaxed way with people from different worlds. I believe my father thought that his children would grow up to be both Sierra Leonean and British, a new kind of citizen, a new African, comfortable with our place in the world.


Peggy and Joe Appiah with their children, Ghana, circa 1972 (Kwame Anthony Appiah)

For all the hope, there were bitter disappointments as well. Shortly after Obama Sr. returned to Kenya, his mentor Tom Mboya was assassinated. Obama Sr. would lose himself to drink and die in a car crash. My father arrived back in Sierra Leone to a government openly talking of introducing a one-party system, a threat to his democratic ideals. As politically opportunistic leaders across the continent quickly realized how easily the newborn institutions of democracy could be subverted for personal gain, the returning graduates would find themselves forced to confront the very governments they had come home to serve. In Ghana, Joe Appiah was jailed by his former good friend Nkrumah; Ngũgĩ wa Thiong’o would be imprisoned for sedition against the Kenyan government and then exiled; in Nigeria, Soyinka encountered a similar fate. My father was jailed and killed. Many would pay a high price for the privilege of having traveled beyond Africa, for coming of age at the same time as their countries, for working and dreaming of a Renaissance yet to come.

How many times in my own travels in this world have I come across one of them, the chosen, of my father’s generation? There’s a quality of character they wear, whose origins I have come to understand. They carry, alongside a worldly ease, a sense of duty, of obligation and responsibility, that imbues all they say and do. Unlike the generations that followed, they never saw their own future beyond Africa. I try to imagine an Africa if they had never been, and I cannot. There are those the world over who decry the failings and weaknesses of the post-independence African states at the same time as many in the West—after Afghanistan, after Iraq, and facing assaults on their own democratic institutions—have slowly come to the realization that nation-building is no simple task, that democracy takes more than a parliament building. The generation of Africans to whom the task fell of creating new countries knew, or came to know, that alongside the desires and dreams, and the promise of a new-found freedom, they had been set up to fail. Their real courage lay in the fact that they did not surrender, that they tried to do what they had promised themselves and their countries they would. They went forward anyway.


Trump & CNN: Case History of an Unhealthy Codependency


People shouting behind CNN reporter Jim Acosta before a Trump rally, West Columbia, South Carolina, June 25, 2018 (Sean Rayford/Getty Images)

At nearly every Trump rally prior to the midterm elections, the chant went up: “CNN sucks.” To journalists, the cry had an ominous ring, amplified as it was by Trump’s repeated references to fake news and his description of journalists as the enemy of the people. The delivery of a pipe bomb in late October to CNN’s headquarters in New York confirmed the sense among journalists that they were under siege.

But is there any truth to the claim of CNN’s failings? Even at a time of such anti-press animus, it’s important to assess the fairness of the network’s coverage. From the moment Trump announced his candidacy in 2015, CNN President Jeff Zucker has made him the centerpiece of the network’s journalism—as well as its business model. On the latter, the strategy has been a grand success; according to a recent article in Vanity Fair (“Inside the Trump Gold Rush at CNN”), the network in 2018 expects to turn a profit of $1.2 billion on $2.5 billion in revenues, making it CNN’s most profitable year ever.

But what about its journalism? Much rides on the answer, for the network has become Exhibit A in the case of Trump supporters that the press is hopelessly biased against them. To assess CNN’s coverage, I regularly tuned into it in the days leading up to the elections. It was not a pretty picture.

Thursday, November 1, was representative of its problems. The day’s big story was Trump’s dark warnings about the migrant caravan making its way through Mexico to the US border. “A new low for Trump,” afternoon anchor Brooke Baldwin said of a new Trump ad that blamed Democrats for allowing an undocumented immigrant who had murdered two police officers to remain in the United States. For commentary, Baldwin turned to Valerie Jarrett, the former adviser to President Obama. Why her, I thought. Wouldn’t she be predictably opposed to Trump? She indeed was, calling the ad “a sad page from an old playbook called fearmongering 101.” Baldwin wondered why Trump’s supporters embraced “his lies.” Jarrett said she could offer no insight on that but did note her belief that it was important for our leaders to be “role models,” because “young people are watching.” What banality.

That evening, on Anderson Cooper’s show, the caravan remained the main focus. Earlier in the day, Trump, in a speech at the White House, had announced new measures aimed at stemming illegal immigration. “As he so often does,” Cooper said, the president “uttered a string of untruths.” For elaboration, he interviewed Ralph Peters, a retired lieutenant colonel. Peters had been an analyst on Fox News for years, routinely denouncing Obama and everyone associated with him. Disgusted by Trump, he left Fox in March 2018 and had since appeared frequently on CNN, directing at the president the same vitriol he had formerly heaped on Obama. The day’s events had been really difficult for him, Peters said, “because I want to take the president of the United States seriously, but he manages to be at once an embarrassing fool and an insidious menace.” He was an “un-American American president” who had made “absolutely repulsive, repugnant attacks on America.” When Cooper asked about Trump’s plan to send troops to the border, Peters dismissed him as a draft-dodger. I was puzzled why CNN was giving this marginal figure so much air time.

Next up was Jorge Ramos. The Univision anchor is a well-known critic of the president—in August 2015, he was ejected from a press conference after engaging in a testy exchange with Trump over his immigration policy, which he called “full of empty promises.” The previous week, Ramos had spent two days reporting on the caravan for CNN. Trump, Cooper said, continued to paint the caravan as an invasion when in fact it was a thousand miles from the border; nonetheless, “the president keeps peddling this lie.” Did Ramos agree? Yes, he said, it was a lie. In his time with the caravan he had seen not terrorists or criminals but young kids fleeing poverty and gangs. For several minutes the interview went on in this vein, with Cooper and Ramos jointly dissecting the president’s claims.

Given how bloated those claims were, it was certainly useful to have them punctured, but the amount of time CNN devoted to them seemed to be serving Trump’s aims by giving him a megaphone, and the zeal with which the network went after him seemed unprofessional. Yet, at 9:00 PM, when Cooper handed the baton to Chris Cuomo, the offensive continued. The president, Cuomo said, “is all in on fear and loathing.” Nothing he had said about the immigrant invasion “has any basis in reality” but was simply “Trumped-up talk.” But would enough people buy into that talk to drive turnout? For an answer, Cuomo spoke with Ohio Governor John Kasich. Kasich is another confirmed antagonist of Trump, having run against him in 2016 for the GOP presidential nomination.

“Do you agree with me on the basic proposition that there is no imminent invasion?” Cuomo asked. Yes, Kasich said, he did agree; it was all about “getting people stirred up.” Were the Republicans becoming “the party of fear and loathing?” Cuomo asked. Kasich hoped not, he said, for he doubted such rhetoric could win elections. As the segment ended, Cuomo thanked the governor “for speaking truth to power on this show as always.” I couldn’t decide which was worse—the cliché or its tendentiousness.

Cuomo was far more combative with his next two guests—former Republican Senator Rick Santorum and Amy Kremer, co-founder of Women for Trump. Both praised the president for keeping his campaign promise to crack down on illegal immigration. Cuomo pressed them on the president’s decision to send troops to the border; they pushed back just as hard. Cuomo deserved credit for giving time to the other side, but the exchange was unedifying; both Santorum and Kremer were professionals with well-rehearsed positions, and their conversation with Cuomo had the feel of a ritualized dance.     

And so it went throughout my time watching CNN. Trump was repeatedly criticized for lying, spreading fear and hate, making racist claims, and being a bigot. Anchors and commentators could not understand why he was making immigration the centerpiece of the campaign when he had a good story to tell about the economy. The interviews with the occasional Trump advocate were far outnumbered by those with people like David Glosser, the uncle of Stephen Miller, the Trump aide who has helped define his immigration policy. Glosser bitterly denounced his nephew, saying that had such a policy been in place a century earlier, his own forebears would not have been allowed into America when fleeing anti-Jewish pogroms in Europe. Given all the talk about Trump’s base and whether his race-baiting demagoguery resonated with it, I wanted to hear more from the base itself, but few of its members appeared.

More generally, the network’s coverage seemed uninformative, repetitive, and nakedly partisan. Apart from some perfunctory I’m-here-in-red-state-America-to-speak-with-the-locals dispatches, it featured few in-depth reports on developments on the ground. Instead, it offered talking heads reciting familiar talking points. With immigration and related questions of national identity having become so salient both in America and throughout the world, I was surprised at how little genuine interest CNN showed in them.

What’s more, while routinely decrying the polarization afoot in the land, CNN hosts and pundits seemed to feed it with their bickering panels and partisan slugfests. On this, MSNBC and Fox News are equally guilty. Alexandra Pelosi, a documentarian whose latest production, Outside the Bubble (airing on HBO), chronicles her travels across the country to talk with ordinary Americans, recently told The New York Times that she blames cable news for the nation’s partisan divide: “There’s too much profit being made right now on the divide. How many people in those cable news studios ever really go spend the night in America, not just in the Four Seasons or wherever Trump is at the moment, but I mean really go to somebody’s house, have dinner and talk to them?”

Pelosi (the daughter of Nancy, the House minority leader) no doubt goes too far in holding the cable networks solely responsible for the nation’s divisions, but her indictment of them for not getting out of their studios more often and engaging with citizens at the grassroots seems not only accurate but applicable to the press as a whole.

To be fair, the nation’s top news organizations—the Times, The Washington Post, The Wall Street Journal, NPR, Politico—do regularly get out into the field. For months before the midterms, their reporters toured the country, filing fact-filled reports on the battle for control of the House and the Senate. Yet even these followed a well-worn template, focusing overwhelmingly on the candidates and their consultants, polls and fundraising, who’s ahead and who behind, with perhaps two or three fleeting quotes from actual voters. Rare were the dispatches that sought to get beneath the surface and report in depth on communities and their residents—the challenges they face, the struggles they undergo, their aspirations, and their setbacks.

For the most part, the national press approaches the electorate much as the Democratic Party does, as an amalgam of distinct demographic groups, some rising, others declining. Michelle Goldberg captured this mindset last fall in a column for the Times: “America is now two countries, eyeing each other across a chasm of distrust and contempt. One is urban, diverse and outward-looking. This is the America that’s growing. The other is white, provincial, and culturally revanchist. This is the America that’s in charge.”

The type of casual condescension toward a large swath of America suggested by this statement is common in big-city newsrooms. The prevailing line is that white people, having long been accustomed to being in the majority, are panicked at the prospect of becoming a minority and so are drawn to Donald Trump and his campaign to Make America Great Again, which is code for keeping America white. Paul Krugman and many other liberal columnists have confidently concluded (on the basis of spotty data from 2016 exit polls and subsequent surveys) that Trump’s appeal to white workers is due exclusively to racism. Racism is surely a factor, but no doubt the travails of many communities in rural and rustbelt America are, too.

In a recent article in Politico magazine, Michael Kruse quoted the Republican consultant and pollster Frank Luntz on the twofold phenomenon of the Trump voter: “Half the people felt forgotten. And half of the people felt fucked.” This “F-squared” portion of the population, Luntz said, was the key to Trump’s victory. They help explain his sway over members of Congress and will help determine his fortunes over the next two years. Trump, Luntz observed, “is seeking to elevate those who feel oppressed by and taken advantage of by the elites, and he seems to raise them up and say, ‘Hey, guys, you’re now in charge.’”

Journalists—heavily concentrated in cities and mixing mostly with other affluent, highly educated urbanites—face a natural barrier in getting to know the F-squared part of America. Since Trump’s victory in 2016, they have spent more time in it, but it remains mostly a foreign land. With the divisions in the country seeming to harden in the wake of the midterms, journalists need to do a better job of overcoming them. This is especially true at CNN and the other cable networks. As Alexandra Pelosi suggests, I’d like to see Anderson Cooper, Chris Cuomo, and Wolf Blitzer get out of the studio more and really spend a night in America, visiting people in their homes and having dinner with them.

Sadly, Jim Acosta’s confrontation with President Trump at the post-election press conference seemed certain to heighten the divisions. For CNN, the encounter added to their star reporter’s visibility and the network’s image as a fighter for press freedom. To Trump and his supporters, Acosta’s grandstanding provided further evidence of the news media’s implacable hostility to them. Each side, in short, seemed to get from the encounter exactly what it wanted.


Writing as Fast as Reality


Ali Smith in her garden, Cambridge, England, 2005 (Antonio Olmos/eyevine/Redux)

I read the first two novels of Ali Smith’s seasonal quartet in Cairo, where long, warm, sunny days make up most of the year. In a city whose pace—a down-tempo lull—gives a sense that time is expanded, Autumn, with its meandering, time-traveling, light-footed story of a friendship between a young girl and an old man, felt exhilarating, deeply touching, even breathtaking. Winter, which is not strictly a sequel except in the seasonal sense and which revolves around a Christmas gathering at a family home in Cornwall, was fraught, overwhelming, dire. Too many people, too many egos, too many ideas, too much tension. “Ghastly” is how I have heard the season, which I have never experienced in its entirety, described—but the word somewhat applies to it and to the temperament of the novel as well.

Winter begins tellingly, like Autumn, with a contemporary take on a Dickensian tale:

God was dead: to begin with.

And romance was dead. Chivalry was dead. Poetry, the novel, painting, they were all dead, and art was dead. Theatre and cinema were both dead. Literature was dead. The book was dead. Modernism, postmodernism, realism and surrealism were all dead. Jazz was dead, pop music, disco, rap, classical music, dead. Culture was dead.

As were history, politics, democracy, political correctness, the media, the Internet, Twitter, religion, marriage, sex lives, Christmas, and both truth and fiction. But “life wasn’t yet dead. Revolution wasn’t dead. Racial equality wasn’t dead. Hatred wasn’t dead.”

Smith, who was born in Scotland in 1962, is as attuned to the current moment as she is to the cycles of history that led us here. Growing up in council housing, Smith held odd jobs including waitressing and cleaning lettuce before pursuing a Ph.D. in American and Irish modernism at Cambridge; she ultimately abandoned academia to write plays.

In Winter, Arthur (Art), who makes a living tracking down copyright-infringing images in music videos and also maintains a blog, Art in Nature, has just broken up with Charlotte, his conspiracy-theorist anticapitalist girlfriend, who has destroyed his laptop by drilling a hole through it and taken over his Twitter account to impersonate and ridicule him. Unable to face Christmas alone with his emotionally withdrawn, hypersensitive, and self-starved mother—Sophia, aka Ms. Cleves—and having promised her that he would bring along his girlfriend, he hires Velux (Lux), a gay Croatian whom he meets at an Internet café, to be a stand-in Charlotte (for £1000). At some point over that Christmas weekend, a long-estranged, politically and technologically aware hippie aunt, Iris, visits too. In their midst, accompanying Sophia, is the floating, disembodied head of a child. Bashful, friendly, nonverbal, it becomes something of a constant, if gradually dying, presence.

Family banter, conflict, political debate, reckonings, and reconciliation ensue. As do dreams, nightmares, hallucinations, and apparitions. Perspectives and narrators constantly change, shift, and collapse; parallel and tangential events are recounted at the same time. (“Let’s see another Christmas. This one is the one that happened in 1991.”) Conversation is structured and guided intuitively:

I cannot be near her fucking chaos a minute longer. (His mother talking to the wall.)

Lucky I’m an optimist regardless. (His aunt speaking to the ceiling.)

It is no wonder my father hated her. (His mother.)

Our father didn’t hate me, he hated what had happened to him. (His aunt.)

And mother hated her, they both did, for what she did to the family. (His mother.)

Our mother hated a regime that put money into weapons of any sort after the war she’d lived through, in fact she hated it so much that she withheld in her tax payments the percentage that’d go to any manufacture of weapons. (His aunt.)

My mother never did any such thing. (His mother.)

The events and the intricacies of the various interactions are both quotidian—“The walk from the gate to the house is unexpectedly far and the path is muddy after the storm. He puts his phone on to light the way. It buzzes with Twitter alerts as soon as he put it on. Oh God. So much for low reception”—and surreal. The plausible and the implausible are interchangeable, coming together in exuberant, tragicomic, and shrewd scenes:

Good morning, Sophia Cleves said. Happy day-before-Christmas.

She was speaking to the disembodied head…. The head was on the windowsill sniffing in what was left of the supermarket thyme. It closed its eyes in what looked like pleasure. It rubbed its forehead against the tiny leaves. The scent of thyme spread through the kitchen and the plant toppled into the sink.

At the dinner table and in allegory, tales are shared in multiple versions, forming a kaleidoscopic worldview and view of the family. An unreported chemical leak at a factory in Italy has killed trees, birds, cats, and rabbits, sent children to the hospital breaking out in boils, and poisoned the air, forcing everyone to leave their houses, which are then bulldozed. Sophia laughs at the vision of a cat with its tail falling off. Nobody else finds this funny. Smith’s characters navigate one another’s twisted humor and multidimensional takes on the world with various modes and tics of survival (the nervous laugh, the emotional withdrawal). Political and class divides mark every interaction, even the most intimate, and Winter brings out the perversity of privilege and choice:

What’ve you really been doing? Sophia said. Or have you taken idealistic retirement now?

I’ve been in Greece, Iris said. I came home three weeks ago. I’m going back in January.

Holiday? Sophia said. Second home?

Yeah, that’s right, Iris said. Tell your friends that. Tell them to come too. We’ll all have a fabulous time. Thousands of holidaymakers arriving every day from Syria, Afghanistan, Iraq, for city-break holidays in Turkey and Greece.

“None of my friends would be in the least interested in any of this,” Sophia responds.

Autumn set the precedent for Winter’s method of moral inquiry as well as for its use of found language and its form, which discards the conformity of sequence and layers fiction with contemporary political facts. This, as the seasons pass, is perhaps the only continuity, both within and between the novels. Characters don’t walk in from one book to the next, though references and preoccupations do. The pace picks up in Winter, possibly as Smith finds her creative stride. Remarkably, out of the abysmal state of world affairs she finds the capacity for inventiveness and play.

A master craftsperson, Smith seems to be completely liberated from ideas of what a novelist should be or do. There is no self-consciousness, no pretension. One has the impression that everything that meets her fancy, amusing or intriguing her, finds its way into her work. Wordplay, ideas on syntax, puns, banter with poetry and neologisms—“(What’s carapace?) It’s a caravan that goes at a great pace”—musings on images and representation, death, myth, painting, appropriation. As her characters turn to Google or the dictionary, one imagines she just did too:

She looked up at the consonants and vowels of what looked like a nonsense Scrabble game the people living here had painted round the room’s cornicing, still quite elegant regardless of the disrepair. i s o p r o p y l m e t h y l p h o s p h o f l u o r i d a t e w i t h d e a t h.

It is not by chance that Smith references art and artists so frequently in her work. (In her luminous 2014 novel, How to Be Both, the narrator of the historical novella that forms part of the narrative is the fifteenth-century Renaissance painter Francesco del Cossa; in Autumn, the iconoclastic 1960s British pop artist Pauline Boty is a shared fascination among the characters, as is the modernist sculptor Barbara Hepworth for Sophia in Winter.) While literary references seep through her novels, she also excavates and references histories of culture, politics, and art to come up with a language entirely their own. Smith’s novels are not so much prescient as they are intuitive and sensitive to nostalgia, the forces of collapse, and the breakneck speed with which we are hurtling toward further disaster.

She long ago abandoned traditional modes of storytelling. How to Be Both was printed in two editions, one with the historical narrator preceding a contemporary one, the other in reverse order. Before that came her fictionalized book of lectures, Artful, which was narrated by a character haunted by a former lover who writes a series of sharp lectures on art and literature, and the Booker finalist Hotel World, narrated in part by a spirit and the women around her affected by her death (it’s surreal, probing, compassionate, and witty all at once). Autumn and Winter, the first two in the quartet, are written in sort-of real time (think reality TV as novel) with stream-of-consciousness and political commentary coming together to form parallel narrative threads that connect the various characters, their actions, and the stories in their heads—past, present, future.

Smith seems to be attempting to write as fast as information and reality change, as fast as truth turns to fiction and fact is annulled. While letting her characters guide her—as well as guide and muse and struggle with themselves—she responds to current events that find their way into the story:

January:

it is a reasonably balmy Monday, 9 degrees, in late winter a couple of days after five million people, mostly women, take part in marches all across the world to protest against misogyny in power.

A man barks at a woman.

I mean barks like a dog. Woof woof.

This happens in the House of Commons.

The woman is speaking. She is asking a question. The man barks at her in the middle of her asking it.

More fully: an opposition Member of Parliament is asking a Foreign Secretary a question in the House of Commons.

She is questioning a British Prime Minister’s show of friendly demeanour and repeated proclamation of special relationship with an American President, who also has a habit of likening women to dogs.


Odilon Redon: Homage to Goya, circa 1895 (Bridgeman Images)

Autumn was as attuned to political forces as Winter, but it seems to have been written in a state of slight shock or dismay—at the refugee crisis, the revolutions and fallen hopes in the Middle East, and the outcome of the Brexit referendum. It is breathless, too, but sadder, slower, and easier to take in. Winter moves with such ferocity that while reading it one is forced to pause, stand back, reread, and take a bird’s-eye view of the absurdity of what our culture has become: we battle to keep people fleeing war-torn countries out of our “homeland” for fear of what they might bring, how they might terrorize our lives, our jobs, our communities: “Ask them what kind of vicar, what kind of church, brings a child up to think that words like very and hostile and environment and refugees can ever go together in any response to what happens to people in the real world.”

There is the absurdity, too, of an age in which we adopt online avatars and take to Facebook, Twitter, and Instagram to share our thoughts, promote our work, curate our identities. Reading a blog post of Art’s, Lux clears her throat:

It doesn’t seem very like you, she says. Not that I know you that well. But from the little I know.

Really? Art says.

They are sitting in front of his mother’s computer in the office.

You don’t seem so ponderous in real life, Lux says.

Ponderous? Art says.

In real life you seem detached, but not impossible, she says.

What the fuck does that mean? he says.

Well. Not like this piece of writing is, Lux says.

Thanks, Art says. I think.

Meanwhile, on Twitter, “Charlotte is demeaning [Art] and simultaneously making it look like he is demeaning his own followers.” This manic unraveling, the pretense of Charlotte-as-Art and Lux-as-Charlotte, isn’t the future—it is our present.

Who are we, the bobbing child’s head begs us to ask, when we have lost who we were? The disembodied head, sometimes sad, sometimes simply looking on, might represent our pasts, or our conscience, or our lack of one:

How could it breathe anyway, the head, with no other breathing apparatus to speak of?

Where were its lungs?

Where was the rest of it? Was there maybe someone else somewhere else with a small torso, a couple of arms, a leg, following him or her about? Was a small torso manoeuvring itself up and down the aisles of a supermarket? Or on a park bench, or on a chair by a radiator in someone’s kitchen? Like the old song, Sophia sings it under her breath so as not to wake it, I’m nobody’s child. I’m no body’s child. Just like a flower. I’m growing wild.

Not that there aren’t glimmers of hope in these books. Political upheaval, and then revolution, change the very nature of our social interactions, splitting society, creating hierarchies, and dividing us into vehement tribal groups (Stay/Leave, pro-coup/pro–Muslim Brotherhood, Trump/anything else). But out of the fractures, the losses, even the mania (at Christmas, or at political breaking points like referendums or coups), we sometimes lose ourselves so completely that we eventually find common ground again. Lux, pretending to be Charlotte but acting with no pretense, disarms Sophia, who warms to her (and begins to eat). Art, skewered for pretension by someone no longer in his life, is forced to reckon with himself. Iris and Sophia, at political odds, so long estranged, reconnect through memories prompted by a song from childhood.

Winter is a novel about being alone, and of becoming more alone in an age of technology and manic ego, on the verge of exploding artificial intelligence. But it is punctuated with reminders of times past and what could still be salvageable. In one such section, Smith imagines “another version of what was happening” on the morning Winter describes:

As if from a novel in which Sophia is the kind of character she’d choose to be, prefer to be, a character in a much more classic sort of story, perfectly honed and comforting, about how sombre yet bright the major-symphony of winter is and how beautiful everything looks under a high frost, how every grassblade is enhanced and silvered into individual beauty by it, how even the dull tarmac of the roads, the paving under our feet, shines when the weather’s been cold enough and how something at the heart of us, at the heart of all our cold and frozen states, melts when we encounter a time of peace on earth.

One can imagine Winter—which is fast-paced and frenetic, sometimes to the point of exhaustion—being read eagerly some hundred years from now, in a future that tries to make sense of an Earth where much has imploded. In that future, it might appeal equally to the literary reader, if there still is one, and to the historian.

Smith’s quartet, so far, is not only an inventive articulation of the forces that have collided to make the present, but also a meditation on—and experiment with—time.* By structuring her books around the changing seasons in an epoch when the seasons themselves are unpredictable, even in question (“November again. It’s more winter than autumn”; “It will be a bit uncanny still to be thinking about winter in April”), she urges us to ask whether we can still save our planet, as well as future generations’ lives. It’s hard to imagine what Spring and Summer might bring—perhaps a complete halt, or inversion, of time awaits us—but the first two novels of the quartet are so free with form, as well as so morally conscious, that they come close to being an antidote to these times.

* Spring will be published by Pantheon in April 2019.


The Sins of Celibacy

Pope Francis; drawing by Siegfried Woldhek

On August 25 Archbishop Carlo Maria Viganò published an eleven-page letter in which he accused Pope Francis of ignoring and covering up evidence of sexual abuse in the Catholic Church and called for his resignation. It was a declaration of civil war by the church’s conservative wing. Viganò is a former apostolic nuncio to the US, a prominent member of the Roman Curia—the central governing body of the Holy See—and one of the most skilled practitioners of brass-knuckle Vatican power politics. He was the central figure in the 2012 scandal that involved documents leaked by Pope Benedict XVI’s personal butler, including letters Viganò wrote about corruption in Vatican finances, and that contributed to Benedict’s startling decision to abdicate the following year. Angry at not having been made a cardinal and alarmed by Francis’s supposedly liberal tendencies, Viganò seems determined to take out the pope.

As a result of Viganò’s latest accusations and the release eleven days earlier of a Pennsylvania grand jury report that outlines in excruciating detail decades of sexual abuse of children by priests, as well as further revelations of sexual misconduct by Cardinal Theodore McCarrick, the former archbishop of Washington, D.C., Francis’s papacy is now in a deep, possibly fatal crisis. After two weeks of silence, Francis announced in mid-September that he would convene a large-scale gathering of the church’s bishops in February to discuss the protection of minors against sexual abuse by priests.

The case of Cardinal McCarrick, which figures heavily in Viganò’s letter, is emblematic of the church’s failure to act on the problem of sexual abuse—and of the tendentiousness of the letter itself. In the 1980s stories began to circulate that McCarrick had invited young seminarians to his beach house and asked them to share his bed. Despite explicit allegations that were relayed to Rome, in 2000 Pope John Paul II appointed him archbishop of Washington, D.C., and made him a cardinal. Viganò speculates that the pope was too ill to know about the allegations, but does not mention that the appointment came five years before John Paul’s death. He also praises Benedict XVI for finally taking action against McCarrick by sentencing him to a life of retirement and penance, and then accuses Francis of revoking the punishment and relying on McCarrick for advice on important church appointments. If Benedict did in fact punish McCarrick, it was a very well kept secret, because he continued to appear at major church events and celebrate mass; he was even photographed with Viganò at a church celebration.

Viganò’s partial account of the way the church handled the allegations about McCarrick is meant to absolve Pope Francis’s predecessors, whose conservative ideology he shares. Viganò lays the principal blame for failing to punish McCarrick on Francis, who does appear to have mishandled the situation—one he largely inherited. He may have decided to ignore the allegations because, while deplorable, they dated back thirty years and involved seminarians, who were adults, not minors. Last June, however, a church commission found credible evidence that McCarrick had behaved inappropriately with a sixteen-year-old altar boy in the early 1970s, and removed him from public ministry; a month later Francis ordered him to observe “a life of prayer and penance in seclusion,” and he resigned from the College of Cardinals. On October 7, Cardinal Marc Ouellet, prefect of the Congregation for Bishops at the Vatican, issued a public letter offering a vigorous defense of Francis and a direct public rebuke of his accuser:

Francis had nothing to do with McCarrick’s promotions to New York, Metuchen, Newark and Washington. He stripped him of his Cardinal’s dignity as soon as there was a credible accusation of abuse of a minor….

Dear Viganò, in response to your unjust and unjustified attack, I can only conclude that the accusation is a political plot that lacks any real basis that could incriminate the Pope and that profoundly harms the communion of the Church.

The greatest responsibility for the problem of sexual abuse in the church clearly lies with Pope John Paul II, who turned a blind eye to it for more than twenty years. From the mid-1980s to 2004, the church spent $2.6 billion settling lawsuits in the US, mostly paying victims to remain silent. Cases in Ireland, Australia, England, Canada, and Mexico followed the same depressing pattern: victims were ignored or bullied, even as offending priests were quietly transferred to new parishes, where they often abused again. “John Paul knew the score: he protected the guilty priests and he protected the bishops who covered for them, he protected the institution from scandal,” I was told in a telephone interview by Father Thomas Doyle, a canon lawyer who was tasked by the papal nuncio to the US with investigating abuse by priests while working at the Vatican embassy in Washington in the mid-1980s, when the first lawsuits began to be filed.

Benedict was somewhat more energetic in dealing with the problem, but his papacy began after a cascade of reporting had appeared on priestly abuse, beginning with an investigation published by the Boston Globe in 2002 (the basis for Spotlight, the Oscar-winning film of 2015). The church was faced with mass defections and the collapse of donations from angry parishioners, which forced Benedict to confront the issue directly.

Francis’s election inspired great hopes for reform. But those who expected him to make a clean break with this history of equivocation and half-measures have been disappointed. He hesitated, for example, to meet with victims of sexual abuse during his visit to Chile in January 2018 and then insulted them by insisting that their claims that the local bishop had covered up the crimes of a notorious abuser were “calumny.” In early October, he expelled from the priesthood two retired Chilean bishops who had been accused of abuse. But when he accepted the resignation of Cardinal Donald Wuerl—who according to the Pennsylvania grand jury report repeatedly mishandled accusations of abuse when he was bishop of Pittsburgh—he praised Wuerl for his “nobility.” Francis seems to take one step forward and then one step backward.

Viganò is correct in writing that one of Francis’s closest advisers, Cardinal Oscar Rodriguez Maradiaga, disregarded a grave case of abuse occurring right under his nose in Honduras. One of Maradiaga’s associates, Auxiliary Bishop Juan José Pineda Fasquelle of Tegucigalpa, was accused of abusing students at the seminary he helped to run. Last June, forty-eight of the 180 seminarians signed a letter denouncing the situation there. “We are living and experiencing a time of tension in our house because of gravely immoral situations, above all of an active homosexuality inside the seminary that has been a taboo all this time,” the seminarians wrote. Maradiaga initially denounced the writers as “gossipers,” but Pineda was forced to resign a month later.

“I feel badly for Francis because he doesn’t know whom to trust,” Father Doyle said. Almost everyone in a senior position in the Catholic Church bears some guilt for covering up abuse, looking the other way, or resisting transparency. The John Jay Report (2004) on sexual abuse of minors by priests, commissioned by the US Conference of Catholic Bishops, indicated that the number of cases increased during the 1950s and 1960s, was highest in the 1970s, peaking in 1980, and has gradually diminished since then. Francis may have hoped that the problem would go away and feared that a true housecleaning would leave him with no allies in the Curia.

Much of the press coverage of the scandal has been of the Watergate variety: what the pope knows, when he found out, and so forth. This ignores a much bigger issue that no one in the church wants to talk about: the sexuality of priests and the failure of priestly celibacy.

Viganò blames the moral crisis of the papacy on the growing “homosexual current” within the church. There is indeed a substantial minority of gay priests. The Reverend Donald B. Cozzens, a Catholic priest and longtime rector of a seminary in Ohio, wrote in his book The Changing Face of the Priesthood (2000) that “the priesthood is, or is becoming, a gay profession.” There have been no large surveys, using scientific methods of random sampling, of the sexual life of Catholic priests. Many people—a priest in South Africa, a journalist in Spain, and others—have done partial studies that would not pass scientific muster. The late Dr. Richard Sipe, a former priest turned psychologist, interviewed 1,500 priests for an ethnographic study.

There is some self-selection by priests who agree to answer questions or fill out questionnaires or seek treatment, which is why the estimates on, say, gay priests vary so widely. But the studies are consistent in showing high percentages of sexually active priests and of gay priests. As Thomas Doyle wrote in 2004, “Knowledgeable observers, including authorities within the Church, estimate that 40–50 percent of all Catholic priests have a homosexual orientation, and that half of these are sexually active.” Sipe came to the conclusion that “50 percent of American clergy were sexually active…and between 20 and 30 percent have a homosexual orientation and yet maintained their celibacy in an equal proportion with heterosexually oriented clergy.”

In his letter Viganò repeats the finding in the John Jay Report that 81 percent of the sexual abuse cases involve men abusing boys. But he ignores its finding that those who actually identify as homosexual are unlikely to engage in abuse and are more likely to seek out adult partners. Priests who abuse boys are often confused about their sexuality; they frequently have a negative view of homosexuality, yet are troubled by their own homoerotic urges.

Viganò approvingly cites Sipe’s work four times. But he ignores Sipe’s larger argument, made on his website in 2005, that “the practice of celibacy is the basic problem for bishops and priests.” Sipe also wrote, “The Vatican focus on homosexual orientation is a smoke screen to cover the pervasive and greater danger of exposing the sexual behavior of clerics generally. Gay priests and bishops practice celibacy (or fail at it) in the same proportions as straight priests and bishops do.” He denounced McCarrick’s misconduct on numerous occasions.

While the number of priests abusing children—boys or girls under the age of sixteen—is comparatively small, many priests have secret sex lives (both homosexual and heterosexual), which does not leave them in the strongest position to discipline those who abuse younger victims. Archbishop Rembert Weakland, for example, the beloved liberal archbishop of Milwaukee from 1977 to 2002, belittled victims who complained of sexual abuse by priests and then quietly transferred predatory priests to other parishes, where they continued their abusive behavior. It was revealed in 2002 that the Milwaukee archdiocese had paid $450,000 in hush money to an adult man with whom Weakland had had a longtime secret sexual relationship, which might have made him more reluctant to act against priests who abused children. But this could be true of heterosexual as well as homosexual priests who are sexually active.

Viganò believes that the church’s moral crisis derives uniquely from its abandonment of clear, unequivocal, strict teaching on moral matters, and from overly permissive attitudes toward homosexuality in particular. He does not want to consider the ways in which its traditional teaching on sexuality—emphasized incessantly by recent popes—has contributed to the present crisis. The modern church has boxed itself into a terrible predicament. Until about half a century ago, it was able to maintain an attitude of wise hypocrisy, accepting that priests were often sexually active but pretending that they weren’t. The randy priests and monks (and nuns) in Chaucer and Boccaccio were not simply literary tropes; they reflected a simple reality: priests often found it impossible to live the celibate life. Many priests had a female “housekeeper” who relieved their loneliness and doubled as a life companion. Priests frequently had affairs with their female parishioners and fathered illegitimate children. The power and prestige of the church helped to keep this sort of thing a matter of local gossip rather than international scandal.

When Pope John XXIII convened the Second Vatican Council in 1962, bishops from many parts of the world hoped that the church would finally change its doctrine and allow priests to marry. But John XXIII died before the council finished its work, which was then overseen by his successor, Paul VI (one of the popes most strongly rumored to have been gay). Paul apparently felt that the sweeping reforms of Vatican II risked going too far, so he rejected the pleas for priestly marriage and issued his famous encyclical Humanae Vitae, which banned contraception, overriding a commission he had convened that concluded that family planning and contraception were not inconsistent with Catholic doctrine.

Opposing priestly marriage and contraception placed the church on the conservative side of the sexual revolution and made adherence to strict sexual norms a litmus test for being a good Catholic, at a time when customs were moving rapidly in the other direction. Only sex between a man and a woman meant for procreation and within the institution of holy matrimony was allowed. That a man and a woman might have sex merely for pleasure was seen as selfish and sinful. Some 125,000 priests, according to Richard Sipe, left the priesthood after Paul VI closed the door on the possibility of priestly marriage. Many, like Sipe, were straight men who left to marry. Priestly vocations plummeted.

Conversely, the proportion of gay priests increased, since it was far easier to hide one’s sex life in an all-male community with a strong culture of secrecy and aversion to scandal. Many devout young Catholic men also entered the priesthood in order to try to escape their unconfessable urges, hoping that a vow of celibacy would help them suppress their homosexual leanings. But they often found themselves in seminaries full of sexual activity. Father Doyle estimates that approximately 10 percent of Catholic seminarians were abused (that is, drawn into nonconsensual sexual relationships) by priests, administrators, or other seminarians.

This problem is nothing new. Homosocial environments—prisons, single-sex schools, armies and navies, convents and monasteries—have always been places of homosexual activity. “Man is a loving animal,” in Sipe’s words. The Benedictines, one of the first monastic orders, created elaborate rules to minimize homosexual activity, insisting that monks sharing a room sleep fully clothed and with the lights on.

The modern Catholic Church has failed to grasp what its founders understood quite well. “It is better to marry than to burn with passion,” Saint Paul wrote when his followers asked him whether “it is good for a man not to touch a woman.” “To the unmarried and the widows I say that it is well for them to remain unmarried as I am. But if they are not practicing self-control, they should marry.” Priestly celibacy was not firmly established until the twelfth century, after which many priests had secret wives or lived in what the church termed “concubinage.”

The obsession with enforcing unenforceable standards of sexual continence that run contrary to human nature (according to one study, 95 percent of priests report that they masturbate) has led to an extremely unhealthy atmosphere within the modern church that contributed greatly to the sexual abuse crisis. A 1971 Loyola Study, which was also commissioned by the US Conference of Catholic Bishops, concluded that a large majority of American priests were psychologically immature, underdeveloped, or maldeveloped. It also found that a solid majority of priests—including those ordained in the 1940s, well before the sexual revolution—described themselves as very or somewhat sexually active.

Sipe, during his decades of work treating priests as a psychotherapist, also concluded that the lack of education about sexuality and the nature of celibate life tended to make priests immature, often more comfortable around teenagers than around other adults. All this, along with a homosocial environment and the church’s culture of secrecy, has made seminaries a breeding ground for sexual abuse.

There are possible ways out of this dilemma for Francis. He could allow priests to marry, declare homosexuality to be not sinful, or even move to reform the patriarchal nature of the church—and to address the collapse in the number of nuns, which has decreased by 30 percent since the 1960s even though the number of the world’s Catholics has nearly doubled in that time—by allowing the ordination of women. But any of those actions would spark a revolt by conservatives in the church who already regard Francis with deep suspicion, if not downright aversion. John Paul II did his best to tie the hands of his successors by declaring the prohibition of female priests to be an “infallible” papal doctrine, and Francis has acknowledged that debate on the issue was “closed.” Even Francis’s rather gentle efforts to raise the possibility of allowing divorced Catholics who have remarried to receive the host at Mass were met with such strong criticism that he dropped the subject.

The sociology of religion offers some valuable insights into the church’s problems. One of the landmark texts in this field is the 1994 essay “Why Strict Churches Are Strong,” by the economist Laurence Iannaccone, who used rational choice theory to show that people tend to value religious denominations that make severe demands on them. The Mormon Church, for example, requires believers to give it a tenth of their income and a substantial amount of their time, abstain from the use of tobacco and alcohol, and practice other austerities. These costly demands create a powerful sense of solidarity. The commitment of time and money means that the church can undertake ambitious projects and take care of those in need, while the distinctive way of life serves to bind members to one another and set them apart from the rest of the world. The price of entry to a strict church is high, but the barrier to exit is even higher: ostracism and the loss of community.

Since the French Revolution and the spread of liberal democracy in the nineteenth century, the Catholic Church has been torn between the urge to adapt to a changing world and the impulse to resist it at all cost. Pope Pius IX, at whose urging the First Vatican Council in 1870 adopted the doctrine of papal infallibility, published in 1864 his “Syllabus of Errors,” which roundly condemned modernity, freedom of the press, and the separation of church and state. Significantly, its final sentence denounced the mistaken belief that “the Roman Pontiff can, and ought to, reconcile himself, and come to terms with progress, liberalism and modern civilization.” Since then the church has been in the difficult position of maintaining this intransigent position—that it stands for a set of unchanging, eternal beliefs—while still in some ways adapting to the times.

John XXIII, who became pope in 1958, saw a profound need for what he called aggiornamento—updating—precisely the kind of reconciling of the church to a changing world that Pius IX considered anathema. John XXIII was one of the high-ranking church leaders who regarded the Nazi genocide of the Jews as a moral crossroads in history. An important part of his reforms at Vatican II was to remove all references to the Jews as a “deicide” people and to adopt an ecumenical spirit that deems other faiths worthy of respect. After Vatican II, the church made optional much of the traditional window-dressing of Catholicism—the Latin Mass, the elaborate habits of nuns, the traditional prohibition against meat on Friday—but John died before the council took up more controversial issues of doctrine. With Vatican II, Iannaccone argued,

the Catholic church may have managed to arrive at a remarkable, “worst of both worlds” position—discarding cherished distinctiveness in the areas of liturgy, theology, and lifestyle, while at the same time maintaining the very demands that its members and clergy are least willing to accept.

Church conservatives are not wrong to worry that eliminating distinctive Catholic teachings may weaken the church’s appeal and authority. Moderate mainstream Protestant denominations have been steadily losing adherents for decades. At the same time, some forms of strictness can be too costly. The prohibitions against priestly marriage and the ordination of women are clearly factors in the decline of priestly vocations, and the even more dramatic decline in the number of nuns.

Both radical change and the failure to change are fraught with danger, making Francis’s path an almost impossible one. He is under great pressure from victims who are demanding that the church conduct an exhaustive investigation into the responsibility of monsignors, bishops, and cardinals who knew of abusing priests but did nothing—something he is likely to resist. Such an accounting might force many of the church’s leaders into retirement and paralyze it for years to come—but his failure to act could paralyze it as well. As for the larger challenges facing the church, Francis’s best option might be to make changes within the narrow limits constraining him, such as expanding the participation of the laity in church deliberations and allowing women to become deacons. But that may be too little, too late.

—October 25, 2018


World War I Relived Day by Day


Gavrilo Princip arrested after his assassination of Archduke Franz Ferdinand of Austria, Sarajevo, June 28, 1914 (Photo12/UIG via Getty Images)

Four years ago, I went to war. Like many of the people whose stories I followed in my daily “live-tweets” on World War I, I had no idea what I was getting myself into. What began as an impulsive decision to commemorate the hundredth anniversary of Austrian Archduke Franz Ferdinand’s death at the hands of a Serbian assassin, in June 1914, snowballed into a blood-soaked odyssey that took me—figuratively and literally—from the rolling hills of northern France, to the desert wastes of Arabia, to the rocky crags of the Italian Alps, to the steel turret of a rebel cruiser moored within range of the czar’s Winter Palace in St. Petersburg, Russia. And like the men and women who actually lived through it, now that the Great War is ending I find myself asking what, if anything, I’ve learned from it all.

In the American mind, World War I typically occupies an unimpressive place as a kind of shambolic preamble to the great good-versus-evil crusade of World War II, a pointless slugfest in muddy trenches for no worthy purpose, and no worthwhile result. Its catchphrases—“The War to End All Wars,” “Make the World Safe for Democracy”—evoke a wry and knowing chuckle. As if. But the war I encountered, as it unfolded day by day, was far more relevant, passionate, and unpredictable.

Posting daily newspaper clippings and photographs, found mainly in books and online archives, I began to see the Great War as a kind of portal between an older, more distant world—of kings with handlebar mustaches, splendid uniforms, and cavalry charges—and the one that we know: of planes and tanks, mass political movements, and camouflage. It snuffed out ancient monarchies in czarist Russia, Habsburg Austria, and Ottoman Turkey, and gave birth to a host of new nations—Poland, Hungary, Czechoslovakia, Syria, Iraq, Jordan, Lebanon, Finland, Estonia, Latvia, Lithuania, Ukraine, Armenia, Azerbaijan—that, in their struggles to survive and carve out an identity, continue to shape our world today. The British declared their intent to create a national homeland in Palestine for the Jews. 


Russian infantry marching to battle, Poland, August 1914 (Daily Mirror/Mirrorpix via Getty Images)

The needs of the war brought women into the workforce, and helped win them the right to vote. The huge privations it inflicted triggered the world’s first (successful) Communist revolution, and the frustrations it unleashed prompted many, afterward, to turn to far-right authoritarians in Italy and then Germany. And finally—though many have forgotten it—the comings and goings of people caused by the war helped spread the deadliest epidemic the world has ever known: the 1918 influenza virus, which quietly killed an estimated 50–100 million human beings in their homes and in hospitals, more than both world wars combined.

I also encountered a cast of characters more varied and amazing than I thought possible. Rasputin, the dissolute Russian mystic who warned Czar Nicholas that going to war would destroy his dynasty, and was murdered in part because he was (falsely) suspected of being a German agent. The Austrian Emperor Karl, who inherited a war he didn’t want, and tried fruitlessly to make peace. T.E. Lawrence, a scholarly young intelligence officer whose affinity for the Arabs helped turn them to the Allied cause, and shaped the modern Middle East. Mata Hari, a Dutch-born exotic dancer who played double agent, seducing high-ranking Allied and German officers for valuable information, until she was caught and shot by the French as a spy.

Some of the names are familiar, and offer hints of future greatness—or infamy. A young anti-war journalist named Benito Mussolini, sensing the way the wind blows, changes his tune and aggressively advocates for Italy to enter the war, before signing up himself. A young Charles de Gaulle is wounded at Verdun and taken prisoner for the rest of the conflict. A relatively young Winston Churchill plans the disastrous Gallipoli Campaign and pays his penance by serving in the trenches, before making a political comeback. A young Harry S. Truman serves as an artillery officer on the Western Front, alongside (and outranked by) a young George C. Marshall (his future Army Chief of Staff and Secretary of State) and Douglas MacArthur (his future general in the Pacific and Korea). A young George S. Patton develops a fascination with tanks. A young Walt Disney doodles cartoons on the side of the ambulances he drives, in the same unit as a young Ray Kroc (the founder of McDonald’s). Another young ambulance driver, Ernest Hemingway, finds inspiration on the Italian Front for his novel A Farewell to Arms. A young Hermann Göring (later head of the Luftwaffe) becomes a dashing flying ace, while a young Erwin Rommel wins renown fighting at Verdun and in the Alps. Meanwhile, an odd young German corporal, who volunteered in the very first days of the war, is blinded by poison gas in its final days, and wakes up in hospital to the bitter news that Germany has lost. His name is Adolf Hitler.


French troops under shellfire during the Battle of Verdun, 1916 (General Photographic Agency/Getty Images)

The dramatic panoply of people, places, and events, however, only occasionally rises to the fore. For the most part, the war is a steady stream of ordinary people doing ordinary things: washing their clothes, attending a concert, tallying supplies, fixing a car. History books give us a distorted sense of time, because they fast forward to major events. A day may take a chapter, a month may be passed over in a sentence. In fact, there were periods where nothing much happened—plans were being made, troops trained, supplies positioned—and when you live-tweet, you experience that waiting. Sometimes, it led to intriguing surprises, like photographs of dragon dances performed by some of the 140,000 Chinese laborers brought over to France to lend muscle to the Allied war effort. Mostly, it was a matter of endurance. Each winter, the fighting came to almost a complete stop as each country hunkered down and hoped its food would last. The “turnip winter” of 1916–1917, when the potato crop failed, nearly broke Germany; the increasingly desperate craving for “bread and peace” did break Russia the following year.  

The future president Herbert Hoover made his reputation by coordinating food relief shipments to German-occupied Belgium, and later as the US “food czar” ensuring Allied armies and populations were fed. The vast mobilization was effective: by 1918, the Allies were able to relax their food rationing, while Germany and its confederates, strangled by an Allied naval blockade, were on the verge of starvation. America’s war effort was accompanied by a vast expansion in the federal government’s power and reach. It nationalized (temporarily) the railroads and the telephone lines. It set prices for everything from sugar to shoes, and told motorists when they could drive, workers when they could strike, and restaurants what they could put on their menus. It seized half a billion dollars of enemy-owned property, including the brand rights to Bayer aspirin, and sold them at auction. The US government also passed espionage and sedition laws that made it illegal to criticize the war effort or the president. Some people were sent to prison for doing so, including the Socialist Party leader Eugene V. Debs, who ran for president for a fifth and final time from a cell.


A woman munitions worker operating a machine in an armaments factory, Britain, circa 1915 (Hulton Archive/Getty Images)

Winning the war, however, was far from a sure thing. For three years, the Allies threw themselves against an evenly matched enemy on the Western Front, without making any breakthroughs, while the Eastern Front gradually crumbled. An early Allied foray to take out Turkey, at Gallipoli in 1915, ended in bloody disappointment. Inducing Italy to enter the war on the Allies’ side, that same year, was supposed to swing the entire conflict in their favor; instead, the catastrophic Italian rout at Caporetto, in the autumn of 1917, put the Allied effort in greater jeopardy. When Lenin seized power in Russia, at the end of 1917, he took it immediately out of the war and ceded immense land and resources to German control. True, the US had by then entered the war, in response to Germany’s submarine campaign against merchant ships and its clumsy diplomatic scheming in Mexico. But with the war in the East essentially won, the Germans saw a window in which they could shift all of their armies to the West and crush the exhausted British and French before enough American troops could arrive to make a difference. Their spring offensive, or “Kaiser’s Battle,” in early 1918 drove deep into Allied lines, prompting the French government to evacuate Paris.

The Germans’ big roll of the dice failed. The Allies held, and the US mobilized much faster than anyone expected. By the summer of 1918, a perceptible change had taken place. Hundreds of thousands of American troops were arriving every month at French ports, and their first units were taking part in battles, piecemeal at first, to push the Germans back. Even in September, however, nearly everyone expected the war to continue into 1919. That was when a huge US army of 3 million men would be ready to take part in a big Allied offensive that would drive all the way to Berlin. It never happened. That fall, the German army—and those of Turkey, Austria, and Bulgaria—first buckled, then collapsed like a rotten log. By November 11, the war was over.

The fact that nobody saw the end coming, the way it did, highlights the value of going back, a hundred years later, and reliving events day by day, as they took place. What may seem obvious now was anything but obvious then, and we do the people who lived through it, and our understanding of them, a real disservice when we assume that it was. “Life can only be understood backwards,” the Danish philosopher Søren Kierkegaard observed, “but it must be lived forwards.” The British historian C.V. Wedgwood elaborated on the same idea: “History is lived forwards but is written in retrospect. We know the end before we consider the beginning and we can never wholly recapture what it was like to know the beginning only.” We can’t entirely forget that we know what happened next, but when we at least try to identify with people who did not know, we shed new light on them, and on what did happen.


Leon Trotsky with the Soviet delegation to negotiate a peace treaty with Germany, Brest-Litovsk, 1918 (Fine Art Images/Heritage Images/Getty Images)

Take the Russian Revolution. We see it as the birth of a Communist superpower, and struggle to make sense of the seemingly half-baked, half-hearted effort by the Allies to intervene by sending troops, including Americans, to Russia’s ports in the far north and far east. People at the time, however, saw it almost entirely through the prism of the Great War. At first, the Allies welcomed the overthrow of the czar, and believed it would rejuvenate the failing Russian war effort. By replacing an infamous autocrat on the Allied roster with a fledgling democracy, it made “making the world safe for democracy” a more credible call to arms, and helped pave the way for the US to enter the war. When Lenin took over and made a ruinous peace with the Central Powers, he was seen as simply a German puppet. And when Bolshevik forces, augmented with released German and Austrian prisoners of war, attacked a unit of Czech soldiers crossing Siberia to rejoin the Allies on the Western Front, those suspicions blossomed into fear of a full-fledged German takeover of Russia. The Allies sent troops to key Russian ports to secure the war supplies stockpiled there and provide an exit route for the loyal Czechs. They considered trying to “reopen” the Eastern Front, but realized it would take far too many men. They assumed that when Germany was defeated, their proxy Lenin would eventually fall, and when the war ended, they naturally lost interest. It all makes sense, but only if you see through the eyes that people saw through at the time.

Did it really matter who won the war? In its aftermath, the Great War came to be seen as a colossal waste, a testament to the vanity of nations, of pompous older men sending foolish younger men into the meat-grinder for no good reason. War poems like “Dulce et Decorum Est” and novels like All Quiet on the Western Front have crystallized this impression. But this was not how people felt at the time. German atrocities in Belgium and on the high seas—some exaggerated, but others quite real—convinced many people that civilization, as they knew it, really was at stake. I was consistently and often surprisingly struck by the sincerity of support, not just on the home front, but among soldiers who had seen the worst of combat, for pursuing the war unto victory. The tone matures, but remains vibrant: these were, for the most part, people who believed in what they were fighting for. At what point the bitter cynicism set in, after the war ended, I cannot say. But at some point, that enthusiasm, and even the memory of it, became buried with the dead.


Boys wearing bags of camphor around their necks to ward off influenza, 1917 (Bettmann/Getty Images)

Though, in fact, in many places the war did not actually end. An armistice was declared on the Western Front, and the armies there were disbanded and sent home. But Germany, Austria, and Hungary all descended into revolution and civil war for a time, with gangs of demobilized soldiers fighting on all sides. In Russia, the Soviet regime and its multiple enemies would battle for several years, while trying to reconquer territory surrendered when it quit the war against Germany. The Greeks tried to reclaim Constantinople from the Turks, and would be massacred when the Turks succeeded in reconsolidating their country. The Poles fought wars with the Ukrainians and the Soviets to define the boundaries of their newly independent country. Jews and Arabs continue to fight over the new lands liberated from the Ottoman Empire to this day.

In the Great War itself, over 16 million people died, including almost 7 million civilians. The US got off relatively lightly, with 117,465 people killed, just 0.13 percent of its population. In Serbia, somewhere between 17 percent and 28 percent of the country’s population was killed. But even numbers like these leave little concrete impression on our minds. Some of the most touching parts of my experience live-tweeting were the times when people would tweet back to me about a grandfather or great-uncle who fought and died in the war, and is forever twenty-four years old in some field in France, or Turkey, or Italy, or at sea. For most people, that absence is what defined the war: someone left and never came home. The world that they shaped, by their presence and their absence, is the one that we live in, whether we realize it or not. And we, like them, can only grope our way forward, day by day, into an unknown future.


British artillery at the Somme, France, 1916 (Historica Graphica Collection/Heritage Images/Getty Images)


The Raunchy Brilliance of Julie Doucet


Detail from “Levitation” by Julie Doucet, first published February 1989, republished by Drawn & Quarterly in the box set Dirty Plotte: The Complete Julie Doucet, 2018 (Julie Doucet/Drawn & Quarterly)

The Canadian artist Julie Doucet began self-publishing the zine Dirty Plotte in 1988, when she was twenty-two. She had been drawing comics since high school, but this was her first sustained project. Working at a feverish pace, she produced fourteen issues of Dirty Plotte in eighteen months before it was picked up by Chris Oliveros as the debut book from his new publishing outfit in Montreal called Drawn & Quarterly, which went on to publish twelve issues by Doucet from 1990 to 1998, incorporating material from the original run with new work. Louche, mordant, funny, and surreal, Dirty Plotte comprises a mix of short and long comics—wordless and with dialogue, narrative and plotless, autobiographical and fictional (and everything between)—in which there are no rules.

Nor are any subjects off-limits. In the first issue, Doucet levitates from the bed to the bathroom to change a tampon (period maintenance as a mind-body problem). In the second, a prostitute undresses and reveals herself to be a man, who, by unzipping his skin, transforms into a wolf; the wolf turns itself inside out to become a snake that coils up the waiting john’s leg and gives him a blowjob. In a dream recorded in issue six, Doucet sees her reflection in a mirror, and her double comes alive. The original Julie wills herself to turn into a man, her reflection breaks free from the mirror, and they have sex.


Panels from Julie Doucet’s “Levitation,” Dirty Plotte, 2018 (Julie Doucet/Drawn & Quarterly)

Twenty years after publishing the last issue of Dirty Plotte, Drawn & Quarterly has gathered the first dozen issues along with Doucet’s early, unpublished, and previously uncollected work, and numerous appreciations, in a two-volume slipcase edition, Dirty Plotte: The Complete Julie Doucet. Such lavish treatment can’t dispel the unruliness of Doucet’s project; these comics are as pertinent and captivating today as when they first made their way into the culture (an occasion marked by “a thrilling mix of recognition and horror,” recalls the cartoonist Laura Park). Doucet’s parodic depictions of intense violence are still unsettling; her elastic treatment of sex and gender is still daring; and her open-ended treatment of female identity is still vital. She has said that from 1988 to 1990, “I was not questioning what I was doing… It was so unconscious, so directly my mind on paper.”


Panels from Julie Doucet’s “A Blow Job,” issue two, first published January 1991, Dirty Plotte, 2018 (Julie Doucet/Drawn & Quarterly)

She gives free rein to complexity and contradiction—and to an athletic id. The unabashed world in her comics isn’t the real one (men in our realm don’t have vaginas surgically implanted into their foreheads, or not yet), but even reality, in Dirty Plotte, is phenomenologically fraught. A five-page story about a disturbing dream in which naked men aggressively invade a picnic and invite Doucet to “taste my croissant” (a literal croissant, but strategically placed) transitions into a waking state in which every inanimate object in her apartment comes alive with murderous rage. “Good ol’ reassuring reality!” she shrugs. But if the environment of Dirty Plotte is acutely Doucet’s own—relying primarily on dreams, fantasies, and imagined scenarios starring a version of herself—it is also freewheeling enough that readers, particularly women, can recognize something of themselves in it. The cover of every issue features possible versions of Doucet: a deranged artist, an old woman in the desert, a small figure lost in the big city. Issue three shows a group of Julies sobbing, laughing, anxious, and aloof—a scene she dubs, on the back cover, “Me, Myself, and I.” That multiplicity appears again in 1995 as a trio of weeping cartoonish Julies on the cover of My Most Secret Desire, which gathers Doucet’s dream comics. As the writer Deb Olin Unferth put it, “All I had to do was see the cover to know this cartoonist had stepped into my subconscious and found me cringing and giggling in a corner.”


Panels from Julie Doucet’s “Dreamt: February 17, 1990,” first published January 1991, Dirty Plotte, 2018 (Julie Doucet/Drawn & Quarterly)

Each issue of Dirty Plotte occupies that peculiar nexus of cringing and giggling. At the moment when a gag comic might end, Doucet pushes further, into uncomfortable territory. The step-by-step instructions in the four-panel “Do It Yourself: Laugh!” conclude not with a lively chuckle but with an unhinged, sputtering roar. But in calling out her fantasies and fears with words and pictures on the page, Doucet uses transgression to carve out a space of power and freedom. She revels in the joy of unfettered exploration, and her enthusiasm buoys otherwise dark subject matter. A trio of strips called “If I Was a Man” begins by conjuring aggressive male sexual behavior (when male Julie muses dumbly on “the great mysteries of nature” after ejaculating on his girlfriend, it’s hard not to read it as a pointed commentary on the outsize male fantasies present in so many comics). But the series ends with idiosyncratic fantasy: the “useful” penis that can store small items like pens and rolled-up magazines and the “romantic” penis that begets flowers.

Vaginas, too, get full treatment. Plotte is Québécois slang that can refer derogatorily to a woman’s vagina and to the woman herself. Co-opting this term is the linchpin of Doucet’s rowdy perusal of femaleness. If plotte refers to a woman’s body, then Doucet refashions that body. In a what-if strip about breast cancer, she chooses a double mastectomy, then adds a pair of gold rings “for a joyous sucking.” And if plotte refers to the objectification of women, Doucet turns the ferocity of male scrutiny back onto men themselves. An audacious example is the four-panel “Self-Portrait in a Possible Situation,” in issue two. In three of the panels, she slices herself with a razor blade while posing suggestively. In the fourth panel, she addresses the strip’s voyeuristic reader: covered in bandages and seated before an assortment of knives, she portentously petitions her male readers to act as models “for some little drawings! Heh heh heh.” The tit for tat comes to fruition an issue later, in “Strip Tease of a Reader,” in which she kills and fastidiously dismembers “Steve,” a reader who has proffered himself.


Panels from Julie Doucet’s “If I Was A Man,” issue six, first published January 1993, Dirty Plotte, 2018 (Julie Doucet/Drawn & Quarterly)

Doucet has said that the idea for “Strip Tease of a Reader” came from a French magazine, L’Écho des Savanes, that asked its male readers to photograph their girlfriends performing a strip tease. “And people did it,” she says. “In every issue, there was a full page with about six Polaroids of girls stripping.” Doucet’s version is a parody—violent, but with a wink. She turns it into a burlesque performance by herself and Steve, implicating him in the farce, even ending the strip by scrawling “Fin” on the wall with the blood of his dismembered member. The viciousness in her reversal is slyly subversive. If the girlfriends in L’Écho consented to participate, so too does Steve. And if the titillation in a woman’s strip tease is the measured revelation of flesh, so too is Steve’s.

The bite and blood in Doucet’s comics were stirred in part by provocative French bande dessinée—for instance, by Claire Bretécher’s playful satires of self-involved French life, Nicole Claveloux’s surreal and erotic subversions, and F’murr’s absurdist parodies, as well as other, more “risqué” comics published in the French magazine Pilote, to which her mother subscribed. Doucet’s distinctiveness is equally due to her highly graphic drawing style: packed, rambunctious black-and-white panels depicting cramped interiors swarming with bric-a-brac and busy street scenes alive with eccentric humanity. Her dense shading and hatching, which produce moody, high-contrast drawings, become more finely rendered and more articulate in later issues. Doucet’s American antecedent is Aline Kominsky-Crumb, whose comics, beginning in the Seventies, are predicated on exploring the raw corporeality of the female body: masturbation, defecation, hunger, pain, and pleasure. But Doucet didn’t discover the American underground—its men or its women—until Dirty Plotte was underway. Still, earlier generations of North American women cartoonists saw in Doucet a kindred spirit. Kominsky-Crumb included Doucet’s Dirty Plotte comic “Heavy Flow,” in which a Godzilla-size Doucet floods a city with her menstrual blood, in issue number twenty-six (1989) of the comics anthology Weirdo, which Robert Crumb had begun in 1981. The cartoonist Phoebe Gloeckner selected three short comics by Doucet that same year for an issue of the long-running all-female anthology Wimmen’s Comix, where they appeared alongside work by trailblazing underground cartoonists Diane Noomin, Lee Marrs, Sharon Rudahl, and Kominsky-Crumb (as well as a young Alison Bechdel).


Panels from Julie Doucet’s “Heavy Flow,” first published 1989, Dirty Plotte, 2018 (Julie Doucet/Drawn & Quarterly)

Remarkably, Doucet’s comics found an enthusiastic, if awe-struck, fan base among men. Dirty Plotte’s letter columns teem with appreciative notes from male readers, and Doucet’s male contemporaries were among her most ardent admirers. The cartoonist John Porcellino discovered Dirty Plotte through the international review directory Factsheet Five in 1989 and launched his own single-author zine, the minimal King-Cat Comics, two months later, inspired by Dirty Plotte’s monographic verve. In 1990, the Canadian cartoonist Chester Brown plugged Dirty Plotte issue one as “the comic book event of the year” and, a year later, moved his own provocative, sometimes taboo-busting comic, Yummy Fur, to Drawn & Quarterly. Seth, another Canadian cartoonist, approached Oliveros about publishing his poignant new autobiographical series Palookaville the month the first issue of Dirty Plotte came out, and it became Drawn & Quarterly’s second series. Adrian Tomine, a high school student in California in the early Nineties, who would go on to become a New York literary darling, saw “infinite possibilities” in the Dirty Plotte comics; Optic Nerve, which Oliveros began publishing in 1995, was his response. The boys’ club that was Drawn & Quarterly’s early stable—Brown, Seth, Tomine, and Joe Matt—largely developed around Doucet’s work, an implicit argument against the persistent “great masters” notion of artistic production. Doucet, though perhaps lesser known today than many of her stablemates, was a central figure in the nascent alternative-comics scene. Oliveros has called her “the foundation of Drawn & Quarterly,” and it was his founding intent to publish comics by and for women.


Panels from Julie Doucet’s “New York Diary,” issue ten, first published 1996, Dirty Plotte, 2018 (Julie Doucet/Drawn & Quarterly)

Most of the final three issues of Dirty Plotte are given over to “My New York Diary,” a self-contained story about Doucet’s year-long stay in New York, where she abruptly moved in 1991 after falling in love there. The story (published as a standalone book, with some additional material, in 1996) charts her time with her lover in his seedy apartment, as the initial bloom of romance is dulled by his resentment at her successes and his overbearing dependence on her, as well as her health problems and the difficulty navigating what she finds to be a “merciless” city. In the end, she leaves the man and the city, with no regrets. “My New York Diary” is the most extended comic in Dirty Plotte; it is one of the very few that is straight autobiography, that is told in hindsight, and that follows a traditional narrative arc. It is a work of realism, yet its nuance and honesty, about female identity, agency, and representation, could not exist without the experimentation that preceded it; “My New York Diary” gathers those earlier ideas in order to spin out its tale of romantic and worldly experience. The comics of Dirty Plotte are indeed complete. Doucet quit making comics altogether a few years after concluding the series (her interest in text-and-image combinations persists in her recent collaged photo comics). Yet they remain, as the fifteen-year-old burgeoning cartoonist Geneviève Castrée once discovered, “a parallel world about home, a world away from home.”


Dirty Plotte: The Complete Julie Doucet is published by Drawn & Quarterly


The Don of Trumpery


President Trump at a rally at the Landers Center, Southaven, Mississippi, October 2, 2018 (Mandel Ngan/AFP/Getty Images)

Trumpery

noun
1. attractive articles of little value or use.

adjective
2. showy but worthless.
“trumpery jewelry”

from the Old French tromper, “to deceive.”

synonyms
• cheesy, crappy, cut-rate, el cheapo, junky, lousy, rotten, schlocky, shoddy, sleazy, trashy

Making fun of other people’s names is one of the lowest forms of humor. But naming can also be an art. Victorian novelists like Charles Dickens named their characters to suggest moral traits: the inflexible pedant Thomas Gradgrind, the slimy Uriah Heep, the miserly Ebenezer Scrooge. In today’s Twitter and reality TV world, where we can name and rename ourselves ad libitum—every James Gatz his own Jay Gatsby—names can outstrip reality, and morality. We are all Reality Winners now. 

We happen to have a president who takes names very seriously, using them for specific purposes and according them strange powers. Having apprenticed himself to mobsters and wrestlers (great adopters of mythic nicknames), he has transformed politics into mass entertainment. He relishes the sound of names, especially his own. He surrounds himself with people whose names seem so appropriate to their roles, so closely aligning form and function—Price, Conway, Pecker, and the rest—that Dickens himself might have named them. Doesn’t Betsy DeVos, for example, have the faux-aristocratic sadism of a villainess from a children’s book, like Cruella de Vil? Let them eat vouchers. 

What gives additional piquancy to the names in Trump’s orbit is the way they seem constantly to be morphing into brands, advertisements for themselves.

For Trump, naming is branding. Extending the Trump brand appears to have been the central driver of his initially only half-serious presidential bid, and it continues to drive Trump’s presidency, as he dreams, no doubt, of a Trump Tower on Mars. His name is German, originally Drumpf. (Since my own name is a deformed German-Jewish name, far be it from me to make fun of Drumpf.) But in Trump: The Art of the Deal, Trump claimed that his grandfather came from Sweden as a boy. The Donald’s father, Fred Trump, apparently didn’t want to upset his Jewish tenants by revealing his German roots.

Donald Trump doesn’t so much name his kids as brand them. Tiffany and Barron are luxury brands. (Trump used to call himself “John Barron” when he pretended to be his own spokesman, giving journalists the inside dope on himself, so Barron Trump might as well be Trump Trump.) Donald Trump Jr. is a vanity brand, and Ivanka has become one, as Kellyanne Conway—that latter-day Becky Sharp who should write a book on alternative facts called The Way of the Con—discovered when she was chided for breaking White House rules by praising Ivanka’s boots on TV (not that the promotion worked: Ivanka recently closed her ailing fashion line). Among Trump’s children, only Eric seems to have escaped branding. He also seems, perhaps not coincidentally, to have escaped notice.

A significant part of Trump’s campaign was trafficking in pejorative nicknames—Crooked Hillary, Little Marco, Lying Ted, Low Energy Jeb. (He borrowed the idea of calling Elizabeth Warren “Pocahontas” from Shameless Scott Brown, who scorned her claim of Native American ancestry during his losing Senate run against her in 2012.) But Trump also believes in the power of positive nicknames. On the campaign trail he loved telling crowds that “Mad Dog” Mattis was his choice for Defense. He evidently wanted a general who acted like a mad dog, a sort of Mad Max on steroids. Among the attack dogs in the White House, led by the aptly named John Bolton—always in danger of bolting—Mattis has mercifully turned out to be the calmest of canines.

The president is said to choose his entourage by looks. Reportedly semi-phobic about facial hair on men, he was turned off by Bolton’s drooping mustache. But he also seems to pick his cohort in part by name. Hope Hicks should be the name of Perry Mason’s assistant, of every trustworthy assistant. And when you need someone to stir up trouble, get yourself a Scaramucci. Literally “little skirmisher,” Scaramouche was a clown in the Italian commedia dell’arte. Part servant and part henchman (Capitano), prone to boasting and cowardice, Scaramucci pretended, on stage, to be a Don—or perhaps a Donald.

Jefferson Beauregard Sessions III is a living, breathing Civil War monument. He was named for Jefferson Davis, president of the Confederate States of America, and for the Confederate General P.G.T. Beauregard, who ordered the first shot on Fort Sumter. In the 1956 film comedy Bus Stop, Beauregard “Bo” Decker tells Marilyn Monroe that his name means “good-lucking.” A cunning, conniving climber of the Snopes variety, Sessions belongs in a Faulkner novel.

Pompous Mike Pompeo thrust out his ample chest for his photo-op with MBS, as they cooked up a “narrative” for the butchering of Jamal Khashoggi. “To make an omelette,” their fatuous expressions seemed to say, “you have to crack some eggs.” As the poet Randall Jarrell quipped, “That’s what they tell the eggs.”

Tom Price won a seat in the Cabinet to bring down prescription drug prices. Instead, he lined his pockets with investments in medical stocks he himself had boosted in value. He charged the public for his luxury travel (a practice known, I believe, as Zinking). Asked about it, he presumably said, “The Price is Right.”

David Pecker has a file on Donald’s pecker. Albeit “not freakishly small,” according to Stormy Daniels (née Stephanie Clifford), the First Pecker inspired her nickname for the president: “Tiny.”

And come to think of it, isn’t it odd to have a man named Mark Judge weighing in on the judgeship of his pal and drinking partner, whom he renames, in his memoir Wasted, Bart O’Kavanagh?

For the midterms, the president has discovered a sonic affinity between the words Kavanaugh and Caravan. He pronounces them like anagrams of each other, and repeats the name Kavanaugh like a secret mantra. Kavanaugh will build that wall. Kavanaugh will separate those immigrant families. For people on the left, however, Kavanaugh seems the ultimate disaster, the gift (German for poison) that keeps on taking. Trump will eventually go, but Kavanaugh will last forever.

In these dark times, we need a word for when things seem as though they can’t possibly get any worse, and then they do anyway. There are plenty of names to choose from.


Concentration Camps for Kids: An Open Letter


Joe Raedle/Getty Images: Children and staff at the Trump administration’s tent facility for detaining migrant children separated from their parents, Tornillo, Texas, June 19, 2018

In Tornillo, Texas, in rows of pale yellow tents, some 1,600 children who were forcibly taken from their families sleep in lined-up bunks, boys separated from girls. The children, who are between the ages of thirteen and seventeen, have limited access to legal services. They are not schooled. They are given workbooks but they are not obliged to complete them. The tent city in Tornillo is unregulated, except for guidelines from the Department of Health and Human Services. Physical conditions seem humane. The children at Tornillo spend most of the day in air-conditioned tents, where they receive their meals and are offered recreational activities. Three workers look after groups of twenty children each. The children are permitted to make two phone calls per week to their family members or sponsors, and are made to wear belts with phone numbers written out for their emergency contacts.

However, the children’s psychological conditions are anything but humane. At least two dozen of the children who arrived in Tornillo were given just a few hours’ notice in their previous detention center before they were taken away—any longer than that, according to one of the workers at Tornillo, and the children may have panicked and tried to escape. Because of these circumstances, the children of Tornillo are inevitably subjected to emotional trauma. After their release (the date of which has not yet been settled), they will certainly be left with emotional scars, and no one can expect these children to ever feel anything but gut hatred for the country that condemned them to this unjust imprisonment.

The workers at the Tornillo camp, which was expanded in September to a capacity of 3,800, say that the longer a child remains in custody, the more likely he or she is to become traumatized or enter a state of depression. There are strict rules at such facilities: “Do not misbehave. Do not sit on the floor. Do not share your food. Do not use nicknames. Do not touch another child, even if that child is your hermanito or hermanita [younger sibling]. Also, it is best not to cry. Doing so might hurt your case.” Can we imagine our own children being forced to go without hugging or being hugged, or even touching or sharing with their little brothers or sisters?

Federal officials will not let reporters interview the children and have tightly controlled access to the camp, but almost daily reports have filtered through to the press. Tornillo, though unique—even among the hundred-plus US detention facilities for migrant children—in its treatment of minors, is part of a general atmosphere of repression and persecution that threatens to get worse. The US government is detaining more than 13,000 migrant children, the highest number ever; as of last month, some 250 “tender age” children aged twelve or under had not yet been reunited with their parents. Recently, the president has vowed to “put tents up all over the place” for migrants.

This generation will be remembered for having allowed for concentration camps for children to be built on “the land of the free and the home of the brave.” This is happening here and now, but not in our names.

Rabih Alameddine
Jon Lee Anderson
Margaret Atwood
Paul Auster
Andrea Bajani
Alessandro Baricco
Elif Batuman
Neil Bissoondath
José Burucúa
Giovanna Calvino
Emmanuel Carrère
Javier Cercas
Christopher Cerf
Roger Chartier
Michael Cunningham
William Dalrymple
Robert Darnton
Deborah Eisenberg
Mona Eltahawy
Álvaro Enrigue
Richard Ford
Edwin Frank
Garth Greenwell
Andrew Sean Greer
Linda Gregerson
Ethel Groffier
Helon Habila
Rawi Hage
Aleksandar Hemon
Edward Hirsch
Siri Hustvedt
Tahar Ben Jelloun
Arthur Japin
Daniel Kehlmann
Etgar Keret
Peter Kimani
Binnie Kirshenbaum
Khaled Al Khamissi
Dany Laferrière
Jhumpa Lahiri
Laila Lalami
Herb Leibowitz
Barry Lopez
Valeria Luiselli
Norman Manea
Alberto Manguel
Yann Martel
Guillermo Martínez
Diana Matar
Hisham Matar
Maaza Mengiste
Rohinton Mistry
Benjamin Moser
José Luis Moure
Azar Nafisi
Guadalupe Nettel
Mukoma Wa Ngugi
Ruth Padel
Rajesh Parameswaran
Dawit L. Petros
Caryl Phillips
Nelida Piñon
Francine Prose
Sergio Ramírez
David Rieff
Salman Rushdie
Alberto Ruy Sánchez
Aurora Juana Schreiber
Wallace Shawn
Sjón
Patti Smith
Susan Swan
Santiago Sylvester
Madeleine Thien 
Colm Tóibín
Kirmen Uribe
Juan Gabriel Vásquez
Juan Villoro
Susan Yankowitz


A Very Grim Forecast

Global Warming of 1.5°C: An IPCC Special Report


Diane Burko: Grinnell Mt. Gould #1, #2, #3, #4, 2009; based on USGS photos of Grinnell Glacier at Glacier National Park, Montana, between 1938 and 2006. Burko’s work is on view in ‘Endangered: From Glaciers to Reefs,’ at the National Academy of Sciences, Washington, D.C., until January 31, 2019. The accompanying book is published by KMW Studio.

Though it was published at the beginning of October, Global Warming of 1.5°C, a report by the Intergovernmental Panel on Climate Change (IPCC), is a document with its origins in another era, one not so distant from ours but politically an age apart. To read it makes you weep not just for our future but for our present.

The report was prepared at the request of the United Nations Framework Convention on Climate Change at the end of the Paris climate talks in December 2015. The agreement reached in Paris pledged the signatories to

holding the increase in the global average temperature to well below 2°C above pre-industrial levels and pursuing efforts to limit the temperature increase to 1.5°C above pre-industrial levels, recognizing that this would significantly reduce the risks and impacts of climate change.

The mention of 1.5 degrees Celsius was unexpected; that number had first surfaced six years earlier at the unsuccessful Copenhagen climate talks, when representatives of low-lying island and coastal nations began using the slogan “1.5 to Stay Alive,” arguing that the long-standing red line of a two-degree increase in temperature likely doomed them to disappear under rising seas. Other highly vulnerable nations made the same case about droughts and floods and storms, because it was becoming clear that scientists had been underestimating how broad and deadly the effects of climate change would be. (So far we’ve raised the global average temperature just one degree, which has already brought about changes now readily observable.)

The pledges made by nations at the Paris conference were not enough to meet even the two-degree target. If every nation fulfills those pledges, the global temperature will still rise by about 3.5 degrees Celsius, which everyone acknowledged goes far beyond any definition of safety. But the hope was that the focus and goodwill resulting from the Paris agreement would help get the transition to alternative energy sources underway, and that once nations began installing solar panels and wind turbines they’d find it easier and cheaper than they had expected. They could then make stronger pledges as the process continued. “Impossible isn’t a fact; it’s an attitude,” said Christiana Figueres, the Costa Rican diplomat who deserves much of the credit for putting together the agreement. “Ideally,” said Philip A. Wallach, a Brookings Institution fellow, the Paris agreement would create “a virtuous cycle of ambitious commitments, honestly reported progress to match, and further commitments following on those successes.”

To some extent this is precisely what has happened. The engineers have continued to make remarkable advances, and the price of a kilowatt generated by the sun or wind has continued to plunge—so much so that these are now the cheapest sources of power across much of the globe. Battery storage technology has progressed too; the fact that the sun goes down at night is no longer the obstacle to solar power many once presumed. And so vast quantities of renewable technology have been deployed, most notably in China and India. Representatives of cities and states from around the world gathered in San Francisco in September for a miniature version of the Paris summit and made their own pledges: California, the planet’s fifth-largest economy, promised to be carbon-neutral by 2045. Electric cars are now being produced in significant numbers, and the Chinese have deployed a vast fleet of electric buses.

But those are bright spots against a very dark background. In retrospect, Paris in December 2015 may represent a high-water mark for the idea of an interconnected human civilization. Within nine weeks of the conference Donald Trump had won his first primary; within seven months the UK had voted for Brexit, both weakening and distracting the EU, which has been the most consistent global champion of climate action. Since then the US, the largest carbon emitter since the start of the Industrial Revolution, has withdrawn from the Paris agreement, and the president’s cabinet members are busy trying to revive the coal industry and eliminate effective oversight and regulation of the oil and gas business. The prime minister of Australia, the world’s biggest coal exporter, is now Scott Morrison, a man famous for bringing a chunk of anthracite into Parliament and passing it around so everyone could marvel at its greatness. Canada—though led by a progressive prime minister, Justin Trudeau, who was crucial in getting the 1.5-degree target included in the Paris agreement—has nationalized a pipeline in an effort to spur more production from its extremely polluting Alberta oil sands. Brazil seems set to elect a man who has promised not only to withdraw from the Paris agreement but to remove protections from the Amazon and open the rainforest to cattle ranchers. It is no wonder that the planet’s carbon emissions, which had seemed to plateau in mid-decade, are again on the rise: preliminary figures indicate that a new record will be set in 2018.

This is the backdrop against which the IPCC report arrives, written by ninety-one scientists from forty countries. It is a long and technical document—five hundred pages, drawing on six thousand studies—and as badly written as all the other IPCC grand summaries over the years, thanks in no small part to the required vetting of each sentence of the executive summary by representatives of the participating countries. (Saudi Arabia apparently tried to block some of the most important passages at the last moment during a review meeting, particularly, according to reports, the statement emphasizing “the need for sharp reductions in the use of fossil fuels.” The rest of the conclave threatened to record the objection in a footnote; “it was a game of chicken, and the Saudis blinked first,” one participant said.) For most readers, the thirty-page “Summary for Policymakers” will be sufficiently dense and informative.

The takeaway messages are simple enough: to keep warming under 1.5 degrees, global carbon dioxide emissions will have to fall by 45 percent by 2030, and reach net zero by 2050. We should do our best to meet this challenge, the report warns, because allowing the temperature to rise two degrees (much less than the 3.5 we’re currently on pace for) would cause far more damage than 1.5. At the lower number, for instance, we’d lose 70 to 90 percent of coral reefs. Half a degree higher and that loss rises to 99 percent. The burden of climate change falls first and heaviest on the poorest nations, who of course have done the least to cause the crisis. At two degrees, the report contends, there will be a “disproportionately rapid evacuation” of people from the tropics. As one of its authors told The New York Times, “in some parts of the world, national borders will become irrelevant. You can set up a wall to try to contain 10,000 and 20,000 and one million people, but not 10 million.”

The report provides few truly new insights for those who have been paying attention to the issue. In fact, because the IPCC is such a slave to consensus, and because its slow process means that the most recent science is never included in its reports, this one almost certainly understates the extent of the problem. Its estimates of sea-level rise are on the low end—researchers are increasingly convinced that melting in Greenland and the Antarctic is proceeding much faster than expected—and it downplays fears, bolstered by recent research, that the system of currents bringing warm water to the North Atlantic has begun to break down.* As the chemist Mario Molina, who shared the Nobel Prize in 1995 for discovering the threat posed by chlorofluorocarbon gases to the ozone layer, put it, “the IPCC understates a key risk: that self-reinforcing feedback loops could push the climate system into chaos before we have time to tame our energy system.”

All in all, though, the world continues to owe the IPCC a great debt: scientists have once again shown that they can agree on a broad and workable summary of our peril and deliver it in language that, while clunky, is clear enough that headline writers can make sense of it. (Those who try, anyway. An analysis of the fifty biggest US newspapers showed that only twenty-two of them bothered to put a story about the report on the homepages of their websites.)

The problem is that action never follows: the scientists do their job, but even the politicians not controlled by the fossil fuel industry tend to punt or to propose small-bore changes too slow and cautious to make much difference. By far the most important change between this and the last big IPCC report, in 2014, is simply that four years have passed, meaning that the curve we’d need to follow to cut our emissions sufficiently has grown considerably steeper. Instead of the relatively gentle trajectory that would have been required if we had paid attention in 1995, the first time the IPCC warned us that global warming was real and dangerous, we’re at the point where even an all-out effort would probably be too slow. As the new report concedes, there is “no documented historical precedent” for change at the speed that the science requires.
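
A rough, back-of-the-envelope way to see how much steeper the curve has become: treating the report’s 45 percent cut as a constant annual decline from today’s emissions (a simplification; the report measures the cut against 2010 levels and pairs it with net zero by 2050), and using illustrative round numbers for the years, the required yearly rate roughly triples when action is delayed from 1995 to now. A minimal sketch:

```python
# Illustrative arithmetic only: the 45 percent figure comes from the IPCC report,
# but the year spans are rough round numbers and the constant-rate assumption is
# a simplification of the report's actual emissions pathways.
def annual_decline(total_cut, years):
    """Constant yearly fractional decline that compounds to the given total cut."""
    return 1 - (1 - total_cut) ** (1 / years)

print(f"{annual_decline(0.45, 12):.1%} per year over the dozen years left to 2030")
print(f"{annual_decline(0.45, 35):.1%} per year had cuts begun after the 1995 warning")
```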

There’s one paramount reason we didn’t heed those earlier warnings, and that’s the power of the fossil fuel industry. Since the last IPCC report, a series of newspaper exposés has made it clear that the big oil companies knew all about climate change even before it became a public issue in the late 1980s, and that, instead of owning up to that knowledge, they sponsored an enormously expensive campaign to obfuscate the science. That campaign is increasingly untenable. In a world where floods, fires, and storms set new records almost weekly, the industry now concentrates on trying to slow the inevitable move to renewable energy and preserve its current business model as long as possible.

After the release of the IPCC report, for instance, Exxon pledged $1 million to work toward a carbon tax. That’s risible—Exxon made $280 billion in the last decade, and it has donated huge sums to elect a Congress that won’t pass a carbon tax anytime soon; oil companies are spending many millions of dollars to defeat a carbon tax on the ballot in Washington State and to beat back bans on fracking in Colorado. Even if a carbon tax somehow made it past the GOP, the amount Exxon says it wants—$40 a ton—is tiny compared to what the IPCC’s analysts say would be required to make a real dent in the problem. And in return the proposed legislation would relieve the oil companies of all liability for the havoc they’ve caused. A bargain that might have made sense a generation ago no longer counts for much.

Given the grim science, it’s a fair question whether anything can be done to slow the planet’s rapid warming. (One Washington Post columnist went further, asking, “Why bother to bear children in a world wracked by climate change?”) The phrase used most since the report’s release was “political will,” usually invoked earnestly as the missing ingredient that must somehow be conjured up. Summoning sufficient political will to blunt the power of Exxon and Shell seems unlikely. As the energy analyst David Roberts predicted recently on Twitter, “the increasing severity of climate impacts will not serve as impetus to international cooperation, but the opposite. It will empower nationalists, isolationists, & reactionaries.” Anyone wondering what he’s talking about need merely look at the Western reaction to the wave of Syrian refugees fleeing a civil war sparked in part by the worst drought ever measured in that region.


Anders Nilsen: Rootball (Last Remnant), 2012

The stakes are so high, though, that we must still try to do what we can to change those odds. And it’s not an entirely impossible task. Nature is a good organizer: the relentless floods and storms and fires have gotten Americans’ attention, the percentage of voters who acknowledge that global warming is a threat is higher than ever before, and support for solutions is remarkably nonpartisan (93 percent of Democrats want more solar farms; so do 84 percent of Republicans). The next Democratic primary season might allow a real climate champion to emerge who would back what the rising progressive star Alexandria Ocasio-Cortez called a “Green New Deal”; in turn a revitalized America could theoretically help lead the planet back to sanity. But for any of that to happen, we need a major shift in our thinking, strong enough to make the climate crisis a center of our political life rather than a peripheral question easily avoided. (There were no questions at all about climate change in the 2016 presidential debates.)

The past year has offered a few signs that such large-scale changes are coming. In October, the attorney general for New York State filed suit against ExxonMobil, claiming the company defrauded shareholders by downplaying the risks of climate change. In January New York City joined the growing fossil fuel divestment campaign, pledging to sell off the oil and gas shares in its huge pension portfolio; Mayor Bill de Blasio is working with London’s mayor, Sadiq Khan, to convince their colleagues around the world to do likewise. In July Ireland became the first nation to join the campaign, helping to take the total funds involved to over $6 trillion. This kind of pressure on investors needs to continue: as the IPCC report says, if the current flows of capital into fossil fuel projects were diverted to solar and wind power, we’d be closing in on the sums required to transform the world’s energy systems.

It’s natural following devastating reports like this one to turn to our political leaders for a response. But in an era when politics seems at least temporarily broken, and with a crisis that has a time limit, civil society may need to pressure the business community at least as heavily to divest their oil company shares, to stop underwriting and insuring new fossil fuel projects, and to dramatically increase the money available for clean energy. We’re running out of options, and we’re running out of decades. Over and over we’ve gotten scientific wake-up calls, and over and over we’ve hit the snooze button. If we keep doing that, climate change will no longer be a problem, because calling something a problem implies there’s still a solution.

—October 25, 2018

* See, for example, L. Caesar et al., “Observed Fingerprint of a Weakening Atlantic Ocean Overturning Circulation,” Nature, April 12, 2018.


Voting Machines: What Could Possibly Go Wrong?


Joe Raedle/Getty Images: Miami-Dade election support specialists checking voting machines, Doral, Florida, August 8, 2018

Since the 2016 election, there has been a good deal of commentary and reporting about the threats to American democracy from, on the one hand, Russian interference through propaganda distributed by Facebook and Twitter bots, and on the other, voter ID laws and other partisan voter suppression measures such as electoral roll purges. Both of these concerns are real and urgent, but there is a third, yet more sinister threat to the integrity of the November 6 elections: the vulnerability of the voting machines themselves. This potential weakness is critical because the entire system of our democracy depends on public trust—the belief that, however divided the country is and however fiercely contested elections are, the result has integrity. Nothing is more insidious and corrosive than the idea that the tally of votes itself could be unreliable and exposed to fraud.

Although election officials often claim our computerized election system is too “decentralized” to allow an outcome-altering cyber-attack, it is, in fact, centralized in one very important way: just two vendors, Election Systems & Software, LLC, and Dominion Voting, account for about 80 percent of US election equipment. A third company, Hart InterCivic, whose eSlate machines have recently been reported to be flipping early votes in the current Senate race in Texas between Beto O’Rourke and Ted Cruz, accounts for another 11 percent. The enormous reach of these three vendors creates an obvious vulnerability and potential target for a corrupt insider or outside hacker intent on wreaking havoc.

These vendors supply three main types of equipment that voters use at the polls: optical or digital scanners for counting hand-marked paper ballots, direct-recording electronic (usually touchscreen) voting machines, and ballot-marking devices that generate computer-marked paper ballots or “summary cards” to be counted on scanners.

Contrary to popular belief, all such equipment can be hacked via the Internet because all such equipment must receive programming before each election from memory cards or USB sticks prepared on the county’s election management system, which connects to the Internet. Thus, if an election management system is infected with malware, the malware can spread from that system to the memory cards and USB sticks, which then would transfer it to all voting machines, scanners, and ballot-marking devices in the county. 

Malicious actors could also attack election management systems via the remote access software that some vendors have installed in these systems. ES&S, which happens to have donated more than $30,000 to the Republican State Leadership Committee since 2014, admitted earlier this year that it has installed remote access software in election management systems in 300 jurisdictions, which it refuses to identify. And in August 2004, as reported by bradblog.com, the United States Computer Emergency Readiness Team released a Cyber Security Bulletin concerning the Diebold GEMS central tabulator, stating that “a vulnerability exists due to an undocumented backdoor account, which could [allow] a local or remote authenticated user [to] modify votes [emphasis added].” This central tabulator was used to count one-third of the votes in 37 states in the 2004 election.

The memory cards or USB sticks used to transfer the pre-election programming from the election management system to the voting machines, scanners, and ballot-marking devices constitute another potential attack vector. In theory, the person who distributes those cards or USB sticks to the precincts could swap them out for cards containing a vote-flipping program. 

Memory cards are also used in the reverse direction—to transfer precinct tallies from the voting machines and scanners to the election management system’s central tabulator, which aggregates those tallies. Problems can occur during this process, too. During the 2000 presidential election between George W. Bush and Al Gore, for example, a Global/Diebold machine in Volusia County, Florida, subtracted 16,000 Gore votes, while adding votes to a third-party candidate. The “Volusia error,” which caused CBS News to call the race prematurely for Bush, was attributed to a faulty memory card, although election logs referenced a second “phantom” card as well. As noted recently in the New York Times Magazine, questions from this disturbing episode remain unanswered, such as “[W]hat kind of faulty card deleted votes only for Gore, while adding votes to other candidates?” The incident, however, slipped from public consciousness amid the hoopla over hanging chads and butterfly ballots.

Further complicating matters, some jurisdictions transfer results from the precincts to the central tabulators via cellular modems. ES&S has recently installed such cellular modems in Wisconsin, Florida, and Rhode Island. Michigan and Illinois transfer results via cellular modem as well. According to computer science Professor Andrew Appel of Princeton University, these cellular modems could enable a malicious actor to intercept and “alter vote totals as they are uploaded” by setting up a nearby cell phone tower (similar to the Stingray system used by many police departments). 

After precinct tallies are sent by memory card or modem to the central tabulators, a memory card or flash drive transfers the aggregated totals from the central tabulators to online reporting systems, creating another hacking opportunity. In Georgia, a flash drive transfers results from the central tabulator to the online election night reporting system, and the same flash drive is then reinserted into the tabulator for the next round of memory cards. As explained by election integrity advocate Marilyn Marks, that is like “sharing needles.”

Central scanners, which are used to count absentee ballots and paper ballots from polling places that lack precinct-based scanners, are also vulnerable. As a video produced by the Emmy award-winning journalist and filmmaker Lulu Friesdat has demonstrated, the ES&S 650 central scanner, which is used in twenty-four states, can be rigged to flip votes within one minute of direct access. 

Just as troubling, voting machines themselves can be compromised within seven minutes of direct access, with little more than a screwdriver and a new ROM chip. According to computer science Professor Richard DeMillo of the Georgia Institute of Technology, voting machines are often left unattended for long periods: “We have pictures of [my colleagues] walking into gymnasiums with access to the [voting machines] that are left unattended overnight.” And as DeMillo explained, if a single voting machine is infected, the virus can spread to the election management system’s central tabulator, which aggregates all precinct tallies in the county, via the magnetic cards that are plugged into every machine to accumulate the results.

Vote flipping aside, malicious or benign actors can also cause electronic failure that prevents the machines from working at all. The potential impact of electronic failure is far greater with touchscreen systems, whether for voting machines or ballot-marking devices, than with hand-marked paper ballots counted on scanners because, when touchscreens fail, voters may have no means of voting whatsoever. In 2008, for example, voters in Horry County, South Carolina, were forced to vote on scraps of paper when touchscreen voting machines malfunctioned in 80 percent of the county’s precincts. A State Election Commission spokesperson was quoted telling people to vote on paper towels if necessary. In 2016, improperly coded memory cards caused most of the machines in Washington County, Utah, to break down. Poll sites offered backup paper ballots until some ran out, and voters were told to return later.

Touchscreen machines are also known to cause long lines because they limit the number of voters who can vote at any one time to the number of touchscreens available at the polling place. Again, this contrasts with hand-marked paper ballots and scanners, where the only limit to the number of people who can fill in their ballots concurrently is the number of pens and paper ballots at the polling station. 

Electronic poll books, the tablets and laptops that many jurisdictions now use to check voter registrations at the polls, are also of grave concern. The journalist and radio show host Brad Friedman, who has investigated and written about our computerized election system for almost two decades, warns that if electronic poll books “go down, and these places don’t have paper backups, it will be a disaster… [In the case of] a denial of service attack meant to knock out the Internet on election day, what do you do? There are no do-overs in elections.” 

We know what this might look like because on election day 2016 in Durham County, North Carolina, problems with the county’s poll books resulted in hundreds of calls from irate voters, many of whom were turned away at the polls, even when they displayed current registration cards. VR Systems, the Florida-based company that manufactured the poll books in Durham County, and which also supplies poll books to California, Florida, Indiana, North Carolina, New York, and Virginia, was hacked in August 2016 in a Russian spear-phishing attack. In 2017, current and former intelligence officials said that hackers had also breached at least two other providers of critical election services before the 2016 election, but would not disclose the names of the two other providers.  

USA Today reported in August last year that ES&S, which by itself accounts for about 44 percent of US election equipment, had left database files online and publicly available on an Amazon AWS cloud server for an “undetermined amount of time,” including “encrypted versions of passwords for ES&S employee accounts.” The database was discovered by a cybersecurity company called Upguard, which advised that “the encryption was strong enough to keep out a casual hacker but by no means impenetrable.” According to USA Today, “configuring the security settings for Amazon’s AWS cloud service is up to the user,” and the “default for all of AWS’ cloud storage is to be secure, so someone within ES&S would have had to choose to configure it as public.”

The most worrisome aspect of all these various vulnerabilities is that—should they be exploited—we will be unable to prove whether and to what extent they have affected the outcome of an election. The effect of even very visible problems, such as long lines, voter registration issues, and electronic failures, is difficult to quantify. Moreover, machine vendors claim proprietary ownership of their software and hardware, precluding forensic analysis. After the 2016 election, the Department of Homeland Security confirmed that it had conducted no such analysis. 

Thus the only way to know if foreign or domestic actors have altered electronic tallies is to conduct what statistics Professor Philip Stark of the University of California at Berkeley calls “evidence-based elections.” This would involve a robust manual audit or manual recount of the paper ballots (or other paper record that the voter has reviewed for accuracy), and a secure chain of custody between the election night count and any audit or recount. 

United States elections are not evidence-based elections. According to computer science Professor Alex Halderman of the University of Michigan, only two states, Colorado and New Mexico, conduct manual audits sufficiently robust to detect vote tally manipulation. More than half of US states do not require manual audits at all, while manual recount laws typically allow automatic state-funded recounts only if the margin of victory is less than 1 percent.
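
The article doesn’t spell out what a “sufficiently robust” manual audit involves. The sketch below is a simplified illustration of the kind of risk-limiting, ballot-polling audit Stark has advocated: a two-candidate race, ballots sampled with replacement, and a hypothetical 5 percent risk limit. It is not a description of Colorado’s or New Mexico’s actual procedures; the function and the numbers are invented for illustration.

```python
import random

def ballot_polling_audit(reported_winner_share, true_winner_share,
                         total_ballots, risk_limit=0.05, seed=1):
    """Simulate a simplified two-candidate ballot-polling audit: a sequential
    likelihood-ratio test of the reported result against a tie, which is the
    basic idea behind risk-limiting audits. Sampling is with replacement."""
    rng = random.Random(seed)
    likelihood_ratio = 1.0
    threshold = 1.0 / risk_limit  # stop once the evidence is this strong
    for n in range(1, total_ballots + 1):
        ballot_for_winner = rng.random() < true_winner_share
        if ballot_for_winner:
            likelihood_ratio *= reported_winner_share / 0.5
        else:
            likelihood_ratio *= (1.0 - reported_winner_share) / 0.5
        if likelihood_ratio >= threshold:
            return True, n  # reported outcome confirmed after examining n ballots
    return False, total_ballots  # never confirmed: escalate to a full hand count

if __name__ == "__main__":
    # Honest result: machines report 55-45 and the paper ballots really are 55-45.
    print(ballot_polling_audit(0.55, 0.55, 100_000))  # usually confirms after a few hundred ballots
    # Manipulated result: machines report 55-45 but the paper ballots are 48-52.
    print(ballot_polling_audit(0.55, 0.48, 100_000))  # almost always forces a full hand count
```

The point of such a test is that a flipped electronic tally cannot survive it: if the paper does not support the reported winner, the audit keeps demanding more ballots until the whole election has been counted by hand.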

Depending on the type of voting system used at the polls, some jurisdictions may have no paper ballots (or other paper records) with which to conduct a manual audit or recount in the first place. As of April 2018, fourteen states still used such “paperless” voting machines.

In the past few years, some jurisdictions have finally dumped their aging voting machines. But an alarming number—including counties in Kentucky, West Virginia, Arkansas, Tennessee, Delaware, Kansas, Michigan, Wisconsin, and Texas—have replaced the machines not with hand-marked paper ballots and scanners, but rather with ballot-marking devices and scanners. Although ballot-marking devices have long been used to serve the disabled community, the new versions are intended for so-called universal use. Like traditional touchscreen voting machines, they put a hackable touchscreen computer between the voter and his or her ballot.

These universal use ballot markers generate a summary card that some officials call a “paper ballot.” The idea is that the voter can review the text on the summary card to confirm that it is accurate, so that the card can provide the basis for a manual audit or recount. But a recent study (awaiting peer review) by computer science Professor Richard DeMillo of the Georgia Institute of Technology and Marilyn Marks of the Coalition for Good Governance suggests that “in actual polling place settings, most voters do not try to verify paper ballot summaries, even when directed to do so,” and that “among those voters who attempt to review their ballots, a statistically significant fraction… fail to recognize errors.” 

Thus, even if we had effective manual audit laws, our use of voting machines and universal-use ballot-marking devices would preclude reliable manual audits. As Friedman laments, “We do not have a system where supporters of the winners and the losers can walk away and know that the election was legitimately won or lost.”

There are still steps, however, that voters and candidates can and should take before and during the midterm elections to protect their votes and voter registrations, many of which I have compiled into a handout. And as the Brennan Center for Justice advises, voters should also seek confirmation from their local election officials that the requisite emergency measures are in place should technical problems arise on election day. 

Beyond the midterms, voters must pressure Congress to pass substantive election security legislation. A good example already before Congress is Senator Ron Wyden’s Protecting American Votes and Elections Act, which would require all states to give voters the option to mark their ballots by hand and to carry out robust audits. The hand-marked ballot option is important because it prevents states from forcing voters to use voting machines or ballot-marking devices. Voters must also pressure their state lawmakers to implement similar election security laws to protect elections.

False assurances about election security will not suffice. If lawmakers expect voters to believe in the integrity of America’s election system, then they must make the system secure and dispense with the complacent notion that the only threat is from a foreign adversary. As Friedman says, “[Y]ou do not need to be a fancy state-sponsored hacking organization to do it. It’s one guy on the inside, whether an election official, or a voting machine company, or contractor, or whatever… It doesn’t take a nation state to flip an election.”
